Infinite Recursion in triton.compile() due to flag_gems.use_gems() #111
Comments
Could you please provide demo code to reproduce this problem?
You can add the following code at the beginning of the script:

```python
tmp = torch.rand(256).cuda()
tmp.ne(0)
```

It will recursively compile, or we can confirm that the …
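For completeness, a minimal end-to-end reproduction might look like the sketch below (assuming flag_gems is installed and that `flag_gems.enable()` performs the registration referenced in this issue):

```python
# Minimal reproduction sketch (assumed setup, not taken verbatim from the issue).
import torch
import flag_gems

flag_gems.enable()        # registers lib.impl("ne.Scalar", ne_scalar, "CUDA")

tmp = torch.rand(256).cuda()
# Dispatches to the flag_gems implementation; if torch.ne is invoked again
# inside the triton.compile() pipeline, the same override is re-entered and
# compilation recurses until the stack overflows.
tmp.ne(0)
```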
In Triton at commit fc7a8e35819bda632bdcf1cf75fd9abe4d4e077a, `JITFunction` treats every argument that has a type annotation as a constant, which is not the expected behavior; only arguments annotated with `tl.constexpr` should be treated as constants:

```python
def __init__(self, fn, version=None, do_not_specialize=None):
    ...
    # annotations
    self.annotations = {self.arg_names.index(name): ty for name, ty in fn.__annotations__.items()}
    self.__annotations__ = fn.__annotations__
    # index of constexprs
    self.constexprs = [self.arg_names.index(ann) for ann in self.__annotations__.keys()]
```

So as a workaround, you can remove the type annotations from kernel parameters other than the `tl.constexpr` ones.
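To illustrate the workaround concretely, here is a sketch of a Triton kernel (hypothetical, not FlagGems' actual `ne` kernel) in which only `BLOCK_SIZE` carries the `tl.constexpr` annotation and the remaining parameters are left unannotated, so the affected `JITFunction` version does not misclassify them as constants:

```python
import torch
import triton
import triton.language as tl


@triton.jit
def ne_scalar_kernel(x_ptr, out_ptr, scalar, n_elements, BLOCK_SIZE: tl.constexpr):
    # Only BLOCK_SIZE is annotated, so only it ends up in self.constexprs.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x != scalar, mask=mask)


def ne_scalar(x, scalar):
    # Host-side wrapper: launching the kernel triggers the JIT compilation path.
    out = torch.empty_like(x, dtype=torch.bool)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    ne_scalar_kernel[grid](x, out, scalar, n, BLOCK_SIZE=1024)
    return out
```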
Issue
There is an identified issue in the `triton.compile()` pipeline: `flag_gems.use_gems()` is active all the time, which leads to infinite recursion when certain functions are compiled.

Specifically, if the `torch.ne.Scalar` op is invoked during the `triton.compile()` pipeline, it triggers another call to `triton.compile()` through the override registered by `lib.impl("ne.Scalar", ne_scalar, "CUDA")` in `FlagGems/src/flag_gems/__init__.py::enable()`, causing an infinite loop and eventually a stack overflow.