Status of Intel compiler/MPI on Cheyenne #395
-
I noticed in the Cheyenne site/compilers.yaml file that the default compiler/MPI for spack-stack on Cheyenne is intel@19.1.1.217/intel-mpi@2019.7.217, with the note that "[w]eird MPI issues when running single-task jobs (mpiexec -np 1) with latest Intel compiler/mpi library - code simply hangs forever". There is a similar note in the spack-stack RTD (https://spack-stack.readthedocs.io/en/latest/Platforms.html#ncar-wyoming-cheyenne), with the addition of "for JEDI, job hangs forever in a particular MPI communication call in oops." I was wondering whether this compiler/MPI combination was selected to optimize the build of JEDI environments (and avoid these hangups in OOPS), or whether the same issue was already encountered when building the ufs-wm or ufs-srw stack environments. I would prefer to use a newer compiler/MPI, but don't want to repeat work that has already proven this to be a dead end. Perhaps this needs to be opened as an issue?
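For anyone who wants to sanity-check a newer Intel compiler/MPI pair before committing to it, a minimal single-task smoke test along these lines might be enough to reproduce (or rule out) the hang. This is just a sketch of my own, not the actual call path in oops: build it with the candidate toolchain's MPI wrapper (e.g. mpiicc for Intel MPI) and run it with `mpiexec -np 1`.

```c
/* Minimal single-task MPI smoke test (a sketch, not from this discussion).
 * Build with the candidate Intel compiler/MPI pair and run:
 *   mpiexec -np 1 ./mpi_hang_test
 * If a collective call hangs here, the problem is not specific to JEDI. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Exercise a simple collective, since the reported hang was in an
     * MPI communication call when running with a single task. */
    int local = rank + 1, total = 0;
    MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);

    printf("rank %d of %d: allreduce total = %d\n", rank, size, total);

    MPI_Finalize();
    return 0;
}
```

If that binary runs to completion under the newer compiler/MPI, the hang may be specific to the communication pattern in oops rather than to single-task MPI in general.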
-
The UFS can use newer versions just fine (it always runs with at least 6 MPI tasks for the global configuration, and usually doesn't run with just one task for the regional configuration either). We did discuss this with UCAR IT and MMM folks, and it was decided that we won't pursue this on our end, given the remaining lifetime of the system, and will stick with the older version for now. We can use a separate environment for the UFS on Cheyenne and then unify them on Derecho.