Blogpost for the PyTorch blog #174
Have you compared the performance to other options for compiling NumPy? The `get_labels` example contains functions that are supported by numba; it might be an interesting data point to see how the `torch.compile` speedup compares to `numba.jit`.
Not really, as this would make our announcement post way too long. I expect other people to put up posts comparing this and other approaches (Julia / numba / torch.jit / Mojo), but that's beyond the scope of this post IMO.
I plan to write a follow-up based on #168 or https://github.com/Quansight-Labs/numpy_pytorch_interop/tree/main/e2e/smoke; a comparison to `numba.jit` may fit there.
That'd be super cool!
Weirdly enough, throwing `@numba.njit` in place of `torch.compile` in this example runs into several rough edges of numba:

- `np.linalg.norm(..., axis=2)` does not compile; numba chokes on the `axis` argument.
- Replacing `np.linalg.norm` with a manual `norm` function and `njit`-ing both `norm` and `get_labels` gives a slowdown of about 60%.
- Trying `@njit(parallel=True)` on `get_labels` crashes the compiler, again on the `axis` argument (`TypeError: Failed in nopython mode pipeline (step: Preprocessing for parfors) got an unexpected keyword argument 'axis'`).
- Trying `@njit(parallel=True)` on the `norm` only and `@njit` on `get_labels` yields a slowdown w.r.t. numpy of 'only' 20%.

Got to admit I never had much luck with numba in non-toy situations.
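For context, here is a minimal sketch of the kind of function under discussion (the actual `get_labels` from the post may differ): a k-means-style label assignment whose `np.linalg.norm(..., axis=2)` call is exactly the pattern numba's nopython mode rejects, while `torch.compile` can trace it as-is.

```python
import numpy as np

def get_labels(xs, means):
    """Assign each point to its nearest cluster mean (hypothetical sketch).

    xs: (n_points, n_dims), means: (n_clusters, n_dims).
    Broadcasting builds a (n_points, n_clusters, n_dims) difference tensor;
    the `axis=2` reduction below is the keyword numba's @njit chokes on,
    whereas `torch.compile(get_labels)` handles it unchanged.
    """
    dists = np.linalg.norm(xs[:, None, :] - means[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

xs = np.array([[0.0, 0.0], [10.0, 10.0]])
means = np.array([[0.0, 1.0], [9.0, 9.0]])
print(get_labels(xs, means))  # → [0 1]
```

Wrapping this with `torch.compile` (rather than rewriting the `norm` by hand, as the numba workaround above requires) is the comparison being benchmarked.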
May want to link to #5 (or move the list in that issue somewhere more appropriate).
Yes, that's on my TODO list for next week!
The "other" here is a bit of a surprise perhaps, since Quansight hasn't been merged. Or will it be in the author attribution at the top of the post?
For the first part of the sentence, maybe be more explicit? E.g. "In parallel to this effort of making `torch.compile` understand NumPy code, ..."
Re "other": I agree. I want to start mentioning Quansight here, but it seems a bit out of the blue. Perhaps also mention it in the first paragraph of the blogpost?
Not quite right for SciPy, since it also encompasses compiled code without matching functionality in PyTorch.
This didn't involve any Meta funding actually. Quansight and Intel have been the largest funders, the others are listed on https://data-apis.org/. Meta did fund the work on adding PyTorch support to https://github.com/data-apis/array-api-compat, but that work and funding acknowledgement are covered in Thomas' blog post on the scikit-learn work.