
The nvcc packages do not honor cuda-version #108

Open
leofang opened this issue Oct 30, 2024 · 6 comments · Fixed by #109
Labels
bug Something isn't working

Comments

leofang (Member) commented Oct 30, 2024

Solution to issue cannot be found in the documentation.

  • I checked the documentation.

Issue

@pentschev reported offline that nvcc_linux-64 11.8 can coexist with a CUDA 12.x environment. It turns out that the recipe here does not make cuda-version a dependency or constraint, so nothing prevents the solver from mixing the two:

channels:
  - rapidsai-nightly
  - dask/label/dev
  - conda-forge
  - nvidia
dependencies:
# Base
  - python=3.10
  # - cudatoolkit=11.8
  - cuda-version=12.5
...
  - nvcc_linux-64=11.8
...

Installed packages

n/a

Environment info

n/a

leofang added the bug label Oct 30, 2024
leofang (Member, Author) commented Oct 30, 2024

Based on the conclusion here, it seems the preference would be to either fix the repodata or add the nvcc wrapper to cuda-version's run_constrained:
https://github.com/conda-forge/cuda-version-feedstock/blob/c8e4d1cc2c11b78ca53cff1cd2978afaab3d3651/recipe/meta.yaml#L14-L18
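
For reference, a hypothetical sketch of the second option, adding the nvcc wrapper to cuda-version's run_constrained; the existing entries in that feedstock are only summarized in a comment here, and the package name and pin syntax of the new line are assumptions rather than the actual change:

requirements:
  run_constrained:
    # existing constraints (e.g. on cudatoolkit) would remain as in the linked meta.yaml
    - nvcc_linux-64 {{ version }}.*  # hypothetical new entry pinning the nvcc wrapper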

jakirkham (Member) commented:

In CUDA 11 (and earlier), constraining the CUDA version and installing the runtime libraries were handled by the same package, cudatoolkit (as outlined in issue conda-forge/conda-forge.github.io#1963).

nvcc does add cudatoolkit to run_exports, but this only influences Conda packages that depend on nvcc during their build

run_exports:
  strong:
    - cudatoolkit >={{ cuda_compiler_version }},<{{ cuda_major + 1 }}
    - sysroot_{{ cross_target_platform }} >={{ c_stdlib_version }}  # [linux]

A natural extension of this for users installing nvcc in their development environments would be to add cudatoolkit as a runtime dependency

However, it appears nvcc currently does not have cudatoolkit in requirements/run, unlike its other run_exports entries such as sysroot, which are listed there:

run:
  - sed  # [linux]
  - sysroot_{{ cross_target_platform }} >={{ c_stdlib_version }}  # [linux]

As noted above, cuda-version already constrains cudatoolkit to match

So a reasonable fix would simply be to add cudatoolkit to requirements/run with an appropriate version constraint (likely just copying what run_exports already has)
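
For illustration, a minimal sketch of what requirements/run could look like with that change, assuming the constraint simply copies the existing run_exports pin shown above (a sketch, not necessarily the final wording of the fix):

run:
  - sed  # [linux]
  - sysroot_{{ cross_target_platform }} >={{ c_stdlib_version }}  # [linux]
  - cudatoolkit >={{ cuda_compiler_version }},<{{ cuda_major + 1 }}  # proposed addition, copied from run_exports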

bdice (Contributor) commented Oct 30, 2024

I considered some other options, but I think @jakirkham's proposal above is the cleanest. It avoids pushing any complexity onto the conda solver for users of CUDA 12, which is ideal. We know the cuda-version solve path is rather difficult for CUDA 12, and we don't want legacy (CUDA 11) constraints to negatively impact the solver going forward.

bdice mentioned this issue Oct 30, 2024
bdice (Contributor) commented Oct 30, 2024

I opened #109 with @jakirkham's proposed fix. We may need a repodata patch for older versions, too?

jakirkham (Member) commented:

Agreed we will want a repodata patch for older versions

jakirkham (Member) commented:

@mtjrider to look into making a repodata patch

Here is a good example. The patch would be applied to nvcc, using the change in PR #109.
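
For illustration only, a rough sketch of the intended effect of such a patch on an existing nvcc_linux-64 11.8 entry; conda-forge repodata patches are maintained in the conda-forge-repodata-patches feedstock, and the exact constraint should mirror the pin from PR #109 (the layout below is a simplified, hypothetical view, not real repodata):

nvcc_linux-64 11.8 (patched):
  depends:
    # existing dependencies unchanged
    - cudatoolkit >=11.8,<12  # constraint added by the patch, matching PR #109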
