
Execute benchmarks on aws #241

Merged
merged 15 commits into from
Jan 20, 2025

Conversation

greole
Contributor

@greole greole commented Jan 13, 2025

This PR refactors our benchmark setup and aims to be a first step towards a continuous benchmarking approach.

It executes the benchmarks on aws, collects the results, and pushes them as JSON to https://github.com/exasim-project/NeoFOAM-BenchmarkData for storage. All advanced post-processing etc. can be done on the NeoFOAM-BenchmarkData repo.

In order for the current approach to work, size and then executor need to be generated in this particular order, see:

    auto size = GENERATE(1 << 16, 1 << 17, 1 << 18, 1 << 19, 1 << 20);
    NeoFOAM::Executor exec = GENERATE(
        NeoFOAM::Executor(NeoFOAM::SerialExecutor {}),
        NeoFOAM::Executor(NeoFOAM::CPUExecutor {}),
        NeoFOAM::Executor(NeoFOAM::GPUExecutor {})
    );
    std::string execName = std::visit([](auto e) { return e.name(); }, exec);

Additional Changes:

  • The presets are modified slightly, so that benchmarks are built in the profiling preset but not in the debug preset.
  • I removed the plotting script and added a converter from Catch2 XML to JSON.

Limitations:
The benchmark output of Catch2 is very messy IMO and we need a single BENCHMARK statement per TEST_CASE.

Future work:
Once we merge this PR into main we can produce benchmark data on main too. Then we can execute the benchmarks on aws twice, once for main and once for the PR branch, and compare the results. This allows us to show the impact of a particular PR.

@greole greole added the full-ci a label that triggers the full ci pipeline label Jan 13, 2025

Deployed test documentation to https://exasim-project.com/NeoFOAM/Build_PR_241

@greole greole linked an issue Jan 14, 2025 that may be closed by this pull request
@greole greole added ready-for-review Set this label to indicate that the PR is ready for review build Everything related to building NF labels Jan 14, 2025
@greole greole changed the title WIP execute benchmarks on aws Execute benchmarks on aws Jan 15, 2025
@greole greole requested review from MarcelKoch and HenningScheufler and removed request for MarcelKoch January 15, 2025 11:18
    if(WIN32)
      set_target_properties(
        bench_${TEST}
        PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/benchmarks/$<0:>)
    endif()
Collaborator
maybe add a comment why this is needed for windows? (especially the generator stuff).

Contributor Author
I removed the generator expressions, I think they actually don't do anything atm.

@greole greole requested a review from MarcelKoch January 20, 2025 09:17
@greole greole force-pushed the build/benchmarking branch from 648d987 to 4227948 Compare January 20, 2025 09:59
@greole greole merged commit f08bc19 into main Jan 20, 2025
19 of 20 checks passed
Successfully merging this pull request may close these issues.

Fix benchmark runs
2 participants