fix ldexp casting + unary ufunc dtypes #169
Conversation
torch_np/_ufuncs.py
if dtype is not None:
    if isinstance(x1, torch.Tensor):
        x1 = _util.typecast_tensor(x1, dtype, casting)
    else:
        x1 = torch.as_tensor(x1, dtype=dtype)
else:
    if not isinstance(x1, torch.Tensor):
        x1 = torch.as_tensor(x1)
    if _dtypes_impl._category(x1.dtype) < 2:
        # cast integers to float64
        x1 = x1.double()
Could we reuse the unary ufunc machinery for this one?
.double() is actually wrong because we allow for user control of the default dtypes, so it needs to be default_dtypes.float_dtype. Ditto for unary ufuncs; I've extracted the relevant three-liner into _util in the updated PR. Does that count as a reuse in your books? :-).
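A minimal sketch of the fix being discussed. The helper name `cast_int_to_default_float` and the `default_dtypes` stand-in class are hypothetical, invented here for illustration; the real code lives in `_util` / `_dtypes_impl` and uses its own names. The point is that integer inputs get promoted to the *configured* default float dtype rather than hard-coded `float64`:

```python
import torch

# Hypothetical stand-in for torch_np's user-configurable default dtypes
# (the real values come from _dtypes_impl, not this class).
class default_dtypes:
    float_dtype = torch.float32  # user may have set this; not always float64

def cast_int_to_default_float(x):
    """Sketch of the extracted helper: promote bool/integer tensors to
    the configured default float dtype instead of calling .double()."""
    if not isinstance(x, torch.Tensor):
        x = torch.as_tensor(x)
    # approximate _dtypes_impl._category(dtype) < 2, i.e. bool or integer
    if not x.dtype.is_floating_point and not x.dtype.is_complex:
        x = x.to(default_dtypes.float_dtype)
    return x
```

With `float_dtype` set to `torch.float32`, an integer input such as `torch.arange(3)` comes back as `float32`, while an existing `float64` tensor passes through unchanged, which is the behavior `.double()` would have broken.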
Factoring things out helps find subtle bugs!
Merged, thanks for the review @lezcano!
The second commit is just trivial MAINT while I'm looking at this file.