[REVIEW] Adding RAPIDS <-> DLFrameworks Jupyter Notebook #266
base: branch-0.12
Conversation
The blog for this PR hasn't been written yet, and it's dependent on both tfdlpack and PyTorch 1.4. Do these notebooks assume that the user has configured the libraries appropriately?
@awthomp we should not assume that the user has configured these libraries properly. Also, to pass CI, we need the notebooks to be able to run unattended. I had spent some time trying to get … We can also move this to the advanced notebooks section and ask to delay CI on it. Thoughts?
The `tfdlpack-gpu` install needs to work unattended to pass CI. Want to work together on this?
@taureandyernv I'd be happy to help on the … Let's work on addressing the …
This notebook shows how to pass data between `__cuda_array_interface__`-supporting libraries (CuPy and Numba, for this demonstration) and both PyTorch and TensorFlow. When using PyTorch, we can simply call `torch.as_tensor(foo)` on an existing array that supports the `__cuda_array_interface__`, but for TensorFlow, we leverage the nascent `tfdlpack` package described in this RFC. There is a bug in `tfdlpack.to_dlpack()` that is documented here: VoVAllen/tf-dlpack#12 and also in this notebook.
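For context on what `torch.as_tensor` consumes: `__cuda_array_interface__` is just a property returning a metadata dict. The sketch below is a pure-Python mock (no GPU required) illustrating the fields a producer like CuPy or Numba exposes and how a consumer reads them; the class name, pointer value, and `describe` helper are hypothetical, for illustration only.

```python
# Minimal mock of the __cuda_array_interface__ protocol. A real producer
# (CuPy, Numba) would expose an actual CUDA device pointer in "data".
class FakeDeviceArray:
    def __init__(self, shape, typestr, ptr):
        self._shape = shape
        self._typestr = typestr
        self._ptr = ptr

    @property
    def __cuda_array_interface__(self):
        return {
            "shape": self._shape,        # tuple of ints
            "typestr": self._typestr,    # e.g. "<f4" = little-endian float32
            "data": (self._ptr, False),  # (device pointer, read-only flag)
            "version": 2,
        }


def describe(arr):
    """Read the interface dict the way a consumer (e.g. torch.as_tensor) would."""
    iface = arr.__cuda_array_interface__
    return iface["shape"], iface["typestr"]


fake = FakeDeviceArray((1024,), "<f4", 0xDEADBEEF)
print(describe(fake))  # -> ((1024,), '<f4')
```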
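The TensorFlow side of the handoff rides on DLPack, the same capsule mechanism `tfdlpack` wraps. A minimal sketch of the round trip using PyTorch's own DLPack utilities, with CPU tensors so it runs without a GPU (the real notebook flow moves GPU memory, e.g. CuPy → `tfdlpack` → TensorFlow):

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

# Export a tensor as a DLPack capsule, then re-import it. The import is
# zero-copy: both tensors view the same underlying buffer.
t = torch.arange(4, dtype=torch.float32)  # CPU tensor for illustration
capsule = to_dlpack(t)
u = from_dlpack(capsule)

u[0] = 42.0          # mutate through the imported view...
print(t[0].item())   # ...and the original sees it -> 42.0
```

A DLPack capsule can only be consumed once; re-importing the same capsule raises an error, which is worth keeping in mind when debugging handoff code like the `tfdlpack.to_dlpack()` issue referenced above.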