Enable OpenMP in particle push and coordinate transformation routines. #241
Conversation
Thanks! Do we need to change some defaults, like the dynamic scheduling default we do in WarpX?
Yes, at minimum we should turn tiling on and set a reasonable default tile size when OMP is selected - I will update.
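For illustration, here is a minimal sketch of how such defaults could be read with AMReX's ParmParse, in the spirit of WarpX's do_dynamic_scheduling/tile_size options. The impactx.do_tiling and impactx.tile_size parameter names and the default tile size are assumptions for this sketch, not options defined by this PR:

#include <AMReX_ParmParse.H>
#include <AMReX_IntVect.H>
#include <vector>

// Hypothetical helper: choose tiling defaults and let the inputs file override them.
void readTilingParams (bool& do_tiling, amrex::IntVect& tile_size)
{
#ifdef AMREX_USE_OMP
    do_tiling = true;                       // tile by default when threading with OpenMP
#else
    do_tiling = false;
#endif
    tile_size = amrex::IntVect(AMREX_D_DECL(1024000, 8, 8));  // WarpX-like default

    amrex::ParmParse pp("impactx");         // the "impactx" prefix is an assumption
    pp.query("do_tiling", do_tiling);
    std::vector<int> ts;
    if (pp.queryarr("tile_size", ts) && static_cast<int>(ts.size()) == AMREX_SPACEDIM) {
        for (int d = 0; d < AMREX_SPACEDIM; ++d) { tile_size[d] = ts[d]; }
    }
}

A run could then override these defaults from the inputs file, e.g. impactx.do_tiling = 0 or impactx.tile_size = 128 8 8.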
Here are the current performance results on the expanding beam test:
Force-pushed from fb30f24 to 49d3f6f.
Thank you! Can you perform a little test on a single CPU package comparing MPI w/ 1 OMP thread vs. all OMP threads?
Can you please also update the MFIter loops to be OpenMP accelerated?
Here is a comparison between pure MPI and pure OMP on the expanding beam test, with diags disabled:
@@ -40,6 +40,9 @@ namespace impactx::spacecharge
         space_charge_field.at(lev).at("y").setVal(0.);
         space_charge_field.at(lev).at("z").setVal(0.);

+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
         for (amrex::MFIter mfi(phi.at(lev)); mfi.isValid(); ++mfi) {
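The #pragma omp parallel guarded by amrex::Gpu::notInLaunchRegion() threads the box loop on CPU builds and stays inert inside GPU launch regions. Since this PR also enables OpenMP in the particle push and coordinate transformation routines, here is a hedged, illustrative sketch of the same guard around a particle-iterator loop; MyParIter, myParticleContainer, and the SoA component index are placeholders, not names from this PR:

// Illustrative only; assumes <AMReX_Particles.H> and a ParIter type alias for the container.
#ifdef AMREX_USE_OMP
#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
#endif
for (MyParIter pti(myParticleContainer, lev); pti.isValid(); ++pti)
{
    const long np = pti.numParticles();
    auto& soa = pti.GetStructOfArrays();
    amrex::ParticleReal* const AMREX_RESTRICT part_px = soa.GetRealData(0).data();

    // Each OpenMP thread owns a different tile/box of particles;
    // ParallelFor runs the per-particle kernel (a plain loop on CPU builds).
    amrex::ParallelFor(np, [=] AMREX_GPU_DEVICE (long i) noexcept
    {
        part_px[i] += 0.0;  // placeholder for the actual push / coordinate transform
    });
}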
@WeiqunZhang commented:
we can also do tiling on CPU for MFIter loops (not yet done here).
Let's investigate if this helps us here and potentially make it a user option.
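As a starting point for that investigation, here is a minimal sketch of what a tiled variant of the loop above could look like, using AMReX's TilingIfNotGPU() so tiling applies on CPU only; whether ImpactX adopts this, and behind which user option, is left open here:

#ifdef AMREX_USE_OMP
#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
#endif
for (amrex::MFIter mfi(phi.at(lev), amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
    // Work on the tile box rather than the full valid box,
    // so OpenMP threads also share work within a single large box.
    amrex::Box const& bx = mfi.tilebox();
    auto const& phi_arr = phi.at(lev).array(mfi);
    // ... kernel over bx using phi_arr ...
    amrex::ignore_unused(bx, phi_arr);
}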
Close #195
To Do