torch.ops.aten.remainder.Scalar seems to return the fmod result when the input number is large
To Reproduce
Save the script below and run it:
```python
import torch
import torch.nn as nn

a = torch.tensor([[5950571286963681280]]).cuda()
example_args = (a,)

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()

    def forward(self, x):
        return torch.remainder(x, 196613)

model = ToyModel().eval().cuda()

with torch.no_grad():
    ep = torch.export.export(model, args=example_args)

from torch_tensorrt.dynamo._compiler import compile as dynamo_compile
from torch_tensorrt import logging as ts_logging

with ts_logging.debug():
    compiled = dynamo_compile(
        exported_program=ep,
        disable_tf32=True,
        inputs=example_args,
        min_block_size=1,
        debug=True,
    )

with torch.no_grad():
    print(compiled(*example_args))
```
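For context on the two operations being compared: `torch.remainder` follows Python's `%` convention (the result takes the divisor's sign), while `torch.fmod` follows C's `fmod` (the result takes the dividend's sign). A minimal pure-Python sketch of the two conventions, using the stdlib equivalents:

```python
import math

# torch.remainder follows Python's `%`: the result takes the divisor's sign.
assert -7 % 3 == 2
assert 7 % -3 == -2

# torch.fmod follows C's fmod: the result takes the dividend's sign.
assert math.fmod(-7, 3) == -1.0
assert math.fmod(7, -3) == 1.0

# For two positive operands the two conventions coincide.
assert 7 % 3 == math.fmod(7, 3) == 1
```

Note that since both the dividend and the divisor in the repro are positive, the two conventions would agree in exact integer arithmetic, which suggests the mismatch observed below may involve more than a simple remainder-vs-fmod swap.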
Bug Description
torch.ops.aten.remainder.Scalar seems to return the fmod result when the input number is large. See the reproduction script above.
Expected behavior
Expected to return a result like:

However, the printed result is:

My full execution log: remainder_error.log
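One plausible mechanism (an assumption on my part, not confirmed by the log): if the lowered engine computes the modulo in float32, the large int64 input cannot be represented exactly, so any result it produces will differ from exact integer arithmetic. A quick sketch of the round-trip precision loss for the value in the repro:

```python
import struct

x = 5950571286963681280  # the int64 value from the repro script

# Round-trip through IEEE float32: at this magnitude (between 2^62 and 2^63)
# float32 can only represent multiples of 2^39, so the integer is rounded.
f32 = struct.unpack("f", struct.pack("f", float(x)))[0]
assert int(f32) != x

# float64 happens to hold this particular value exactly: it is a multiple
# of 2^11, and float64 spacing at this magnitude is only 2^10.
f64 = struct.unpack("d", struct.pack("d", float(x)))[0]
assert int(f64) == x
```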
Environment
How you installed PyTorch (conda, pip, libtorch, source): pip

Additional context