[RFC]: Add `os_version` option to use alma8 (#6548)
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
@carterbox, please review.
The CUDA 11.8 images are already based on alma8, last time I checked (as in: they contain glibc 2.28).
I think that for `os_version` (which is what feeds into `DEFAULT_LINUX_VERSION`), we should be using `alma8`. The `cdt_name` will eventually go away, and does not contain the os/image/glibc version.
That was only partially correct (for aarch/ppc); here's the fix for x64: conda-forge/docker-images#291. Also, I think we're ready to do the move outright, see #6626. I also put this on the agenda for Thursday.
I tested these changes by trying to rerender the nvpl-blas feedstock.

Questions:
- This causes the docker image and `cdt_name` to change, but not the `c_stdlib_version`. Should it? Why would you want to build with glibc 2.17 by default in a container that has glibc 2.28?
- How do I switch to the `conda_2_28` CDT without setting an environment variable? Can I do it by adding a static configuration file to my recipe?
xref: #6629
Thanks Axel! 🙏

It sounds like we still want some optionality here.
Thanks for testing Daniel! 🙏 Have pushed some more changes.

Have now added … One case where this might be useful is if we decided to bump all images to Alma 8. We do a pretty good job of siloing things with the compiler's sysroot, CDTs, etc., so this should largely be ok to do. This is part of what we discussed today.
It shouldn't be needed to set the environment variable. All that should be needed is setting …
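(The elided reference is presumably to the `os_version` key in `conda-forge.yml`, which appears later in this thread. A minimal sketch of what that might look like, with the `linux_64` entry being an illustrative assumption rather than something taken from this PR:)

```yaml
# conda-forge.yml (feedstock root) -- sketch, not an authoritative schema
os_version:
  linux_64: alma8        # use the Alma 8 (glibc 2.28) image for linux-64 builds
  linux_aarch64: alma8   # and likewise for linux-aarch64 builds
```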
Marking as ready for review, as this is now functional. Though leaving RFC in, as it would be good to hear from more folks on these changes. cc @conda-forge/core
I'd be tempted to just drop cos7 as a choice and only offer alma8/alma9 going forward (which we can similarly add here after conda-forge/docker-images#290). I strongly doubt that there are many cases where going back to an older image is required for the build, and even then, it remains possible via overriding the `docker_image` (plus other relevant quantities for the zip) in the CBC; see the sketch below.

If we decide to keep the option for cos7, then I'd still drop the choice for the CUDA 11.8 images, where this is a reality already for aarch/ppc (on UBI 8, see here), and so pretending it's cos7 is not helpful (many feedstocks have run into the glibc requirements on aarch especially; here's one example).
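(A hedged sketch of what such a local override might look like in a feedstock's `recipe/conda_build_config.yaml`; the exact image tag and the set of zipped keys shown here are assumptions, not taken from this PR:)

```yaml
# recipe/conda_build_config.yaml -- hypothetical local pin back to the older cos7 image.
# Because docker_image is zipped with other keys in the global pinning, the other
# members of the zip are overridden alongside it here.
docker_image:                                    # [linux64]
  - quay.io/condaforge/linux-anvil-cos7-x86_64   # [linux64]
cdt_name:                                        # [linux64]
  - cos7                                         # [linux64]
c_stdlib_version:                                # [linux64]
  - 2.17                                         # [linux64]
```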
```yaml
cdt_name: # [linux]
- cos7 # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "cos7"]
- conda # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "alma8"]
```
You can just use

```yaml
cdt_name: # [linux]
- conda # [linux]
```

without any further lines/selectors (and remove it from the zips). This is what we discussed today in the core-call; those CDTs will be available after conda-forge/cdt-builds#71 is merged.
```yaml
- 2.17 # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "cos7"]
- 2.28 # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "alma8"]
- 2.17 # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
- 2.17 # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "cos7"]
- 2.28 # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "alma8"]
```
I'm pretty sure we don't want to touch the stdlib version here; for example, the newer image might just be required to satisfy the `__glibc` constraint of a build or test dependency, but doesn't necessarily mean that the package itself needs a newer glibc and run-constraint.

Despite the monster-zip, it remains possible to override one component locally - e.g. `c_stdlib_version` - and then set `os_version` for the image. I'd also prefer the choice of `c_stdlib_version` to be more visible than something implicitly implied by a knob tucked away in `conda-forge.yml`.
> I'm pretty sure we don't want to touch the stdlib version here

OK, I've tested this hypothesis (already before commenting, but) now also using a local checkout of #6626 - here's the resulting branch.

Basically, this works as intended if we also take the `c_stdlib_version` out of the CUDA zip, which I think is possible now. That would be the ideal scenario for me - the `c_stdlib_version` can be independently set (in CBC), from the image (in `conda-forge.yml`, or in CBC if the full zip is overridden).
```yaml
os_version:
  linux_aarch64: alma8
```

I added this section to my conda-forge.yml. This causes the docker image to be alma8 and the stdlib version to become 2.28.

I guess I understand @h-vetinari's point about being able to choose the stdlib separately from `os_version`. You may want to build against an older stdlib version than what is in the container to improve compatibility. However, in that case, the default container glibc should always be strictly >= the latest version of glibc that we offer. In other words, we should have migrated to an OS that ships a sufficiently new glibc before offering that sysroot version. For clarity: if the container glibc is always >= the options available for our sysroot package, then users will mostly never have to worry about changing the `os_version` in `conda-forge.yml`, because any stdlib version that they pick will generate code that will be able to run in the container.
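(As a hedged illustration of that split - the key names follow the ones used elsewhere in this thread, so treat this as a sketch rather than an endorsed configuration - the image choice and the glibc baseline could live in two different files:)

```yaml
# conda-forge.yml -- pick the newer image, e.g. so build/test-time __glibc
# constraints of dependencies can be satisfied
os_version:
  linux_64: alma8

# recipe/conda_build_config.yaml -- keep the package's own glibc baseline at 2.17,
# so the resulting run-requirement stays compatible with older systems
c_stdlib_version:    # [linux]
  - 2.17             # [linux]
```

The container then only needs to be new enough to run the toolchain and test dependencies, while the `c_stdlib_version` pin determines the glibc floor advertised by the built package.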
Provide support for Alma 8 images based on GLIBC 2.28 with the updated CDTs.

- `DEFAULT_LINUX_VERSION` for `cos7` or `alma8`; this covers `docker_image`, `cdt_name`, and `c_stdlib_version`.
- Without setting `os_version`, this is a no-op.

Could be extended to CUDA 11.8 builds as well, but would require an additional Docker image, which Axel kindly put together in PR conda-forge/docker-images#291.
Also, there is an overly strict check in the CUDA cross-compilation support (overly restrictive on CUDA 12). Relaxing that in PR conda-forge/conda-forge-ci-setup-feedstock#363.
Some sample re-renders:

- `alma8`: zlib-feedstock#86
- `alma8` (CUDA 11/12): nccl-feedstock#119

xref: conda-forge/conda-forge.github.io#1941
Checklist

- [ ] Reset the build number to `0` (if the version changed)
- [ ] Re-rendered with the latest `conda-smithy` (Use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)