Add automatic benchmarking to FLORIS #992

Open
wants to merge 5 commits into base: develop

Conversation

paulf81
Collaborator

@paulf81 paulf81 commented Oct 4, 2024

Add automatic benchmarking to FLORIS

This draft PR is meant to add automatic code benchmarking to FLORIS. The proposed solution is to use pytest-benchmark to implement a set of timing tests:

https://pytest-benchmark.readthedocs.io/en/latest/

https://github.com/ionelmc/pytest-benchmark

We would then try to schedule recurring (roughly daily) execution of these tests with logged performance results so we can track changes over time. Here I'm focused on:

https://github.com/benchmark-action/github-action-benchmark
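For the scheduled runs, a workflow along these lines could wire the two together (a sketch only: the cron schedule, install command, and file paths are placeholders, and the github-action-benchmark inputs shown follow that action's README, not anything in this PR):

```yaml
name: benchmarks
on:
  schedule:
    - cron: "0 6 * * *"   # placeholder: daily at 06:00 UTC

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"   # open question 5: pin to latest Python?
      - run: pip install -e . pytest-benchmark
      - run: pytest tests/floris_benchmark_test.py --benchmark-json output.json
      - uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: "pytest"
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true   # publishes the results chart to gh-pages
```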

To this end, I added a first benchmarking test to the tests/ folder and confirmed that the command line:

pytest floris_benchmark_test.py

produces a benchmark result. At this point I'd like to open this up for discussion and further research:

  1. Do these benchmarks belong in tests/? Won't that cause them to be run with the normal CI process?
  2. What should we test?
  3. We should set up a test results page.
  4. Should we benchmark parallel execution?
  5. We want to track changes coming from our own work, as well as those coming from improvements to Python itself. Should we make sure CI checks the most recent version of Python?
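On question 1: pytest-benchmark ships flags that may let the benchmarks live in tests/ without slowing normal CI — the regular test job can skip the timing loops while the scheduled job runs them exclusively (the CI wiring below is a suggestion, not part of this PR):

```shell
# Regular CI job: run the suite, but skip the timing loops
pytest tests/ --benchmark-skip

# Scheduled benchmark job: run only the benchmarks and save results
pytest tests/floris_benchmark_test.py --benchmark-only --benchmark-autosave
```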

To include:

  • AEP for 100-turbine farm
  • Analysis for different wake models
  • Parallel FLORIS

@paulf81 paulf81 added the enhancement An improvement of an existing feature label Oct 4, 2024
@paulf81 paulf81 requested a review from misi9170 October 4, 2024 21:42
@paulf81 paulf81 self-assigned this Oct 4, 2024
@Bartdoekemeijer
Collaborator

This is great!! On my end, I would love to see performance on, let's say, AEP calculation for a 100-turbine farm. Would also be nice to see this for different wake model set-ups, in case modifications are made to specific submodels. Benchmarking parallel floris would be a nice plus.

@paulf81
Collaborator Author

paulf81 commented Oct 7, 2024

> This is great!! On my end, I would love to see performance on, let's say, AEP calculation for a 100-turbine farm. Would also be nice to see this for different wake model set-ups, in case modifications are made to specific submodels. Benchmarking parallel floris would be a nice plus.

Sounds good @Bartdoekemeijer! Added a short to-do list above to track the intention.
