While working on #985 I had to go through our testing scripts to make some simple changes to the cmake configuration of Albany (namely, explicitly passing the C/CXX compilers to cmake).
In doing so, I got very lost in all the bash/cmake scripts in each dashboard folder. It is a bit hard to track down which script calls what, and how they all work together. I know @ikalash and @jewatkins can navigate them, but I feel like Albany should provide something simpler than what we have. Besides, the current system seems to have a lot of duplication across the different dashboard folders. It seems to me that we should be able to provide everything via:
- A bash script with the commands to set up the env (e.g., `module load X`, append to `PATH`, ...)
- A common `CTestConfig.cmake` script, which does the same things on every machine
- A CMake file containing cmake vars for machine-specific library locations
- A bash script that calls the env setup one, and then invokes `ctest -S CTestConfig.cmake -DMACHINE=xyz` (a minimal sketch of the ctest script follows the list)
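For concreteness, here is a minimal sketch of what the common ctest script could look like. This is just an illustration of the idea, not a working proposal: the `machines/` folder layout, the `MACHINE_*` variable names, the directories, and the CDash drop settings are all hypothetical placeholders.

```cmake
# CTestConfig.cmake -- common ctest -S driver (sketch; all names/paths are hypothetical)
cmake_minimum_required(VERSION 3.17)

if(NOT DEFINED MACHINE)
  message(FATAL_ERROR "Pass the machine name: ctest -S CTestConfig.cmake -DMACHINE=xyz")
endif()

# All machine-specific knowledge lives in one small per-machine file
include(${CMAKE_CURRENT_LIST_DIR}/machines/${MACHINE}.cmake)

set(CTEST_SITE             "${MACHINE}")
set(CTEST_BUILD_NAME       "Albany-${MACHINE}-nightly")
set(CTEST_SOURCE_DIRECTORY "$ENV{WORKSPACE}/Albany")
set(CTEST_BINARY_DIRECTORY "$ENV{WORKSPACE}/build")
set(CTEST_CMAKE_GENERATOR  "Unix Makefiles")

# Hypothetical CDash server settings
set(CTEST_DROP_METHOD     "https")
set(CTEST_DROP_SITE       "my.cdash.org")
set(CTEST_DROP_LOCATION   "/submit.php?project=Albany")
set(CTEST_DROP_SITE_CDASH TRUE)

find_program(CTEST_GIT_COMMAND NAMES git)

ctest_start(Nightly)
ctest_update()
ctest_configure(OPTIONS
  "-DCMAKE_C_COMPILER=${MACHINE_C_COMPILER}"
  "-DCMAKE_CXX_COMPILER=${MACHINE_CXX_COMPILER}"
  "-DTrilinos_DIR=${MACHINE_TRILINOS_DIR}"
)
ctest_build()
ctest_test()
ctest_submit()
```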
Less bash scripting and more centralized CMake/CTest scripting makes for an easier-to-understand framework (imho). Manipulating files to extract strings is a bit fragile (and probably a bit outdated in 2023?).
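To give an idea of how little per-machine state we would have to maintain if those strings lived in cmake vars instead, a machine file could be as small as this (again, purely hypothetical names):

```cmake
# machines/mappy.cmake -- per-machine vars consumed by CTestConfig.cmake (hypothetical)
set(MACHINE_C_COMPILER   /usr/bin/gcc)
set(MACHINE_CXX_COMPILER /usr/bin/g++)
set(MACHINE_TRILINOS_DIR /opt/trilinos/lib/cmake/Trilinos)
```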
I would also consider running our testing through jenkins jobs rather than bare cron jobs. Some thoughts on this (a minimal job sketch follows the list):
- Jenkins jobs store console logs of their builds for future inspection, while our current system overwrites last night's logs with today's logs.
- Jenkins jobs can be built in different folders each night, so that we can keep old builds around to inspect differences in things like the cmake cache, Makefiles, ...
- On top of being scheduled, Jenkins jobs can quickly be run on-demand by clicking a button on the build farm.
- We can tune each jenkins job with machine-specific stuff, without having to commit these details to the repo.
- Jenkins provides lots of plugins for changing the behavior of our builds.
- Changing a nightly job's config (like a simple timeout) does not require pushing commits to the repo. What's the point of keeping this kind of info in the repo?
- There is plenty of support online (and at SNL) if a jenkins job is not behaving as expected. If you roll 20+ hand-written bash scripts, you're on your own.
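For illustration, a declarative pipeline for one machine could look roughly like the sketch below. This is only a sketch under assumptions: the node label, schedule, timeout, and script names are made up, and a freestyle job configured in the Jenkins UI would work just as well.

```groovy
// Jenkinsfile (sketch) -- nightly Albany testing on one machine; all names are hypothetical
pipeline {
  agent { label 'mappy' }                      // run on the machine-specific node
  triggers { cron('H 2 * * *') }               // nightly schedule; "Build Now" covers on-demand runs
  options { timeout(time: 8, unit: 'HOURS') }  // tunable in the job config, no repo commit needed
  stages {
    stage('nightly') {
      steps {
        // Console logs are archived per build, so old nights remain inspectable
        sh '''
          . ./env_setup.sh
          ctest -S CTestConfig.cmake -DMACHINE=mappy
        '''
      }
    }
  }
}
```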
I would be willing to modernize our nightly framework. Perhaps I could work on a new machine first (say, mappy), so that I don't disrupt existing builds, while everyone decides whether the new approach is better or worse.
@mperego @ikalash @jewatkins @kliegeois @mcarlson801 Thoughts?
P.S.: I know someone put effort into writing all those scripts, and I don't want to dismiss all that valuable work. I just think that hand-rolled bash scripts with cat/awk/sed/bash magic can be hard to read for those who didn't write them, making it hard for others to contribute to the testing infrastructure. And in the spirit of modern software design, I think we should take advantage of more structured tools/designs.
@bartgol : thank you for looking at the scripts. I agree that all the bash scripts are a mess. I am the one who wrote them, and various others have replicated / extended them. I am all for cleaning them up, except I unfortunately don't have the bandwidth to do this. If you or someone else would like to volunteer to do the cleanup, I am happy to update the scripts used in the nightlies. I can convert to jenkins jobs too, I have used jenkins before.
I feel your pain... but the problem has always been that there's no budget for upgrading our testing framework. I think we've always just taken what others have done and modified it so that there's minimum effort. Honestly, IMO, even groups that do have a budget still struggle to maintain a framework...
We did have jenkins at one point, but then we had both jenkins and cron workflows, so it ended up as additional overhead.
I'd be okay with upgrading everything as long as it's portable and consistent across platforms and it's easy to learn the new system. Maybe now is the right time if you think it's easy to stand up a modern system. I always felt like it would be a large undertaking.
IMO, we should also tie the spack workflow to this upgrade so that we can make that consistent as well.