
[RFC]: Add os_version option to use alma8 #6548

Open · wants to merge 4 commits into base: main
Conversation

@jakirkham (Member) commented Oct 15, 2024

Provide support for Alma 8 images based on GLIBC 2.28 with the updated CDTs.

  • Checks DEFAULT_LINUX_VERSION for cos7 or alma8
  • Extends docker_image, cdt_name, and c_stdlib_version
  • Includes selectors to support the different combinations
  • For feedstocks not using os_version, this is a no-op

Could be extended to CUDA 11.8 builds as well, but this would require an additional Docker image, which Axel kindly put together in conda-forge/docker-images#291.

Also, there is an overly strict check in CUDA cross-compilation support (it is too restrictive on CUDA 12). Relaxing that in conda-forge/conda-forge-ci-setup-feedstock#363.
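For feedstocks opting in, the intended configuration is an os_version entry in conda-forge.yml (a sketch; the alma8 value is from this PR, and the platform key follows the form used later in this thread):

```yaml
# conda-forge.yml (feedstock root) -- opt in to the Alma 8 image
os_version:
  linux_64: alma8
```

Feedstocks that leave os_version unset keep the cos7 default, so this remains a no-op for them.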

Some sample re-renders:

xref: conda-forge/conda-forge.github.io#1941


Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

@conda-forge-admin (Contributor) commented:

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.

@carterbox (Member) commented:

@carterbox, please review

@h-vetinari (Member) commented Oct 16, 2024

Could be extended to CUDA 11.8 builds as well, but would require building additional Docker images first

The CUDA 11.8 images are already based on alma8, last time I checked (as in: they contain glibc 2.28).

@h-vetinari (Member) left a review comment:

I think that for os_version (which is what feeds into DEFAULT_LINUX_VERSION), we should be using alma8. The cdt_name will eventually go away, and does not contain the os/image/glibc version.

Review thread on recipe/conda_build_config.yaml (outdated, resolved)
@carterbox (Member) commented Oct 22, 2024

  • Use conda-smithy -e to test using this pinning on a feedstock.

@h-vetinari (Member) commented:

The CUDA 11.8 images are already based on alma8, last time I checked (as in: they contain glibc 2.28).

That was only partially correct (for aarch/ppc), here's the fix for x64: conda-forge/docker-images#291

Also, I think we're ready to do the move outright, see #6626. I also put this on the agenda for Thursday.

@carterbox (Member) commented:

I tested these changes by trying to rerender the nvpl-blas feedstock.

  1. Set the environment variable DEFAULT_LINUX_VERSION=conda_2_28
  2. Rerender with -e pointing to this config

Questions:

This causes docker_image and cdt_name to change, but not c_stdlib_version. Should it change too? Why would you want to build with glibc 2.17 by default in a container that has glibc 2.28?

How do I switch to the conda_2_28 CDT without setting an environment variable? Can I do it by adding a static configuration file to my recipe?

@carterbox (Member) commented:

xref: #6629

@jakirkham jakirkham changed the title [WIP, RFC]: Add option to use new conda_2_28 [RFC]: Add os_version option to use alma8 Oct 30, 2024
@jakirkham (Member, Author) commented:

The CUDA 11.8 images are already based on alma8, last time I checked (as in: they contain glibc 2.28).

That was only partially correct (for aarch/ppc), here's the fix for x64: conda-forge/docker-images#291

Thanks Axel! 🙏

Also, I think we're ready to do the move outright, see #6626. I also put this on the agenda for Thursday.

It sounds like we still want some optionality here (os_version / DEFAULT_LINUX_VERSION), which this has.

@jakirkham (Member, Author) commented:

Thanks for testing, Daniel! 🙏

Have pushed some more changes.

This causes the docker image and cdt_name to change, but not the c_stdlib_version to change. Should it? Why would you want to build with glibc 2.17 by default in a container that has glibc 2.28?

Have now added c_stdlib_version.

One case where this might be useful is if we decided to bump all images to Alma 8. We do a pretty good job of siloing things with the compiler's sysroot, CDTs, etc., so this should largely be OK to do. This is part of what we discussed today.

How do switch to the conda_2_28 cdt without setting and environment variable? Can I do it by adding a static configuration file to my recipe?

It shouldn't be necessary to set the environment variable. All that should be needed is setting os_version in conda-forge.yml. Have included some examples in the OP.

@jakirkham jakirkham marked this pull request as ready for review October 30, 2024 19:01
@jakirkham jakirkham requested a review from a team as a code owner October 30, 2024 19:01
@jakirkham (Member, Author) commented:

Marking as ready for review as this is now functional, though leaving RFC in the title since it would be good to hear from more folks on these changes.

cc @conda-forge/core

@h-vetinari (Member) left a review comment:

I'd be tempted to just drop cos7 as a choice and only offer alma8/alma9 going forward (which we can similarly add here after conda-forge/docker-images#290). I strongly doubt that there are many cases where going back to an older image is required for the build, and even then, it remains possible via overriding the docker_image (plus other relevant quantities for the zip) in the CBC.

If we decide to keep the option for cos7, then I'd still drop the choice for the CUDA 11.8 images, where this is a reality already for aarch/ppc (on UBI 8, see here), and so pretending it's cos7 is not helpful (many feedstocks have run into the glibc requirements on aarch especially, here's one example)

Review thread on recipe/conda_build_config.yaml (resolved)
Comment on lines +130 to +132
cdt_name: # [linux]
- cos7 # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "cos7"]
- conda # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "alma8"]

you can just use

cdt_name:  # [linux]
  - conda  # [linux]

without any further lines/selectors (and remove it from the zips).

This is what we discussed today in the core-call; those CDTs will be available after conda-forge/cdt-builds#71 is merged.

Comment on lines +18 to +22
- 2.17 # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "cos7"]
- 2.28 # [linux and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "alma8"]
- 2.17 # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
- 2.17 # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "cos7"]
- 2.28 # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("DEFAULT_LINUX_VERSION", "cos7") == "alma8"]
@h-vetinari (Member) commented Oct 31, 2024

I'm pretty sure we don't want to touch the stdlib version here; for example, the newer image might just be required to satisfy the __glibc constraint of a build or test dependency, but doesn't necessarily mean that the package itself needs a newer glibc and run-constraint.

Despite the monster-zip, it remains possible to override one component locally (e.g. c_stdlib_version) and then set os_version for the image. I'd also prefer the choice of c_stdlib_version to be more visible than something implicitly implied by a knob tucked away in conda-forge.yml.
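The override described here could look something like this (a sketch; file locations follow conda-forge conventions, and the values are illustrative rather than taken from this PR):

```yaml
# conda-forge.yml (sets the image used for the build):
os_version:
  linux_64: alma8

# recipe/conda_build_config.yaml (keeps the glibc baseline for the
# package itself at 2.17, independently of the newer image):
c_stdlib_version:    # [linux]
  - "2.17"           # [linux]
```

This way the build runs in the Alma 8 container, while the package's run-constraint stays at the older glibc.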


I'm pretty sure we don't want to touch the stdlib version here

OK, I've tested this hypothesis (already before commenting, but now also using a local checkout of #6626); here's the resulting branch.

Basically, this works as intended if we also take the c_stdlib_version out of the CUDA zip, which I think is possible now. That would be the ideal scenario for me - the c_stdlib_version can be independently set (in CBC), from the image (in conda-forge.yml, or in CBC if the full zip is overridden).
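A sketch of what taking c_stdlib_version out of the CUDA zip could look like in the pinning's zip_keys (abbreviated and hypothetical; the actual zip in conda-forge-pinning contains more keys than shown here):

```yaml
zip_keys:
  # c_stdlib_version no longer listed in this zip, so it can be set
  # independently of the image/CUDA combination
  -
    - cuda_compiler
    - cuda_compiler_version
    - docker_image
    - cdt_name
```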

@carterbox (Member) commented Nov 2, 2024

os_version:
  linux_aarch64: alma8

I added this section to my conda-forge.yml. This causes the docker image to be alma8 and the stdlib version to become 2.28.

I guess I understand @h-vetinari's point about being able to choose the stdlib separately from os_version. You may want to build against an older stdlib version than what is in the container to improve compatibility. However, in that case, the default container glibc should always be >= the latest version of glibc that we offer. In other words, we should have migrated already to an OS that ships glibc >= 2.34, i.e. Alma 9, because that version is available on the channel.

For clarity: if the container glibc is always >= the options available for our sysroot package, then users will mostly never have to worry about changing os_version in conda-forge.yml, because any stdlib version they pick will generate code that can run in the container.
