Optimizations method weighted #23
base: develop
Conversation
src/mir/method/MethodWeighted.cc
std::vector<size_t> forceMissing;  // reserving size unnecessary (not the general case)

// before: iterator-based scan for empty rows
{
    auto begin = W.begin(0);
    auto end(begin);
    for (size_t r = 0; r < W.rows(); r++) {
        if (begin == (end = W.end(r))) {
            forceMissing.push_back(r);
        }
        begin = end;
    }
}

// after: index-based scan over the CSR row offsets, parallelized with OpenMP;
// row r is empty iff outer[r] == outer[r + 1]
#pragma omp parallel for reduction(vec_merge_sorted:forceMissing)
for (size_t r = 0; r < W.rows(); ++r) {
    if (W.outer()[r] == W.outer()[r + 1]) {
        forceMissing.push_back(r);
    }
}
This part corresponds to Section 3. If I am not mistaken, this calculation could be performed during matrix creation, as the original empty rows in the weight matrix are part of the geometry. A vector containing the indices of these empty rows could be stored as a private member and cached. This approach could save significant time, especially with larger matrices, where traversing all rows is costly. For example, in the base case profiling with eORCA12, each MIR API call takes approximately 40 ms.
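The caching suggestion above could be sketched as follows. This is an illustrative example, not MIR's actual API: the class name, member names, and the bare CSR offset vector are all hypothetical, standing in for the real weight-matrix type.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch: compute the indices of empty rows once, when the
// weight matrix is assembled, instead of re-scanning all rows on every
// interpolation call. "outer_" mimics the CSR row-offset array
// (size = rows + 1) of the sparse weight matrix.
class CachedWeights {
public:
    explicit CachedWeights(std::vector<std::size_t> outer) : outer_(std::move(outer)) {
        // A row r is empty iff its CSR offsets coincide: outer[r] == outer[r+1].
        for (std::size_t r = 0; r + 1 < outer_.size(); ++r) {
            if (outer_[r] == outer_[r + 1]) {
                emptyRows_.push_back(r);
            }
        }
    }

    // Cached once; reused by every subsequent call that needs forceMissing.
    const std::vector<std::size_t>& emptyRows() const { return emptyRows_; }

private:
    std::vector<std::size_t> outer_;      // CSR row offsets (size = rows + 1)
    std::vector<std::size_t> emptyRows_;  // indices of empty (forced-missing) rows
};
```

Since the empty rows are a property of the geometry and do not change between calls, the scan cost is paid once at construction rather than on every MIR API call.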
The OMP pragma should honour the conditional pre-processor define (from #include "mir/api/mir_config.h"): if OpenMP is not available, the pragma, or the symbol wrapping it, should expand to nothing.
In atlas we have something just for that:

#if ATLAS_HAVE_OMP
#define ATLAS_OMP_STR(x) #x
#define ATLAS_OMP_STRINGIFY(x) ATLAS_OMP_STR(x)
#define atlas_omp_pragma(x) _Pragma(ATLAS_OMP_STRINGIFY(x))
#else
#define atlas_omp_pragma(x)
#endif

With something similar you can then use:

mir_omp_pragma( omp parallel for reduction(vec_merge_sorted:forceMissing) )

Or define the reduction:

#define mir_omp_parallel_for_reduction(x) mir_omp_pragma( omp parallel for reduction(x) )

and then use:

mir_omp_parallel_for_reduction(vec_merge_sorted:forceMissing)
for( ... ) {...}
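For reference, `vec_merge_sorted` must be a user-declared OpenMP reduction, since vectors are not covered by the built-in reduction operators. The declaration below is an assumption (the PR does not show MIR's actual one): each thread pushes into a private vector, and the combiner concatenates and restores sorted order with `std::inplace_merge`. Without OpenMP the pragmas are ignored and the loop runs serially with identical results.

```cpp
#include <cstddef>
#include <algorithm>
#include <cassert>
#include <vector>

// Assumed declaration of the "vec_merge_sorted" user-defined reduction:
// concatenate the incoming private vector, then merge the two sorted runs.
#pragma omp declare reduction(vec_merge_sorted : std::vector<std::size_t> :  \
        (omp_out.insert(omp_out.end(), omp_in.begin(), omp_in.end()),        \
         std::inplace_merge(omp_out.begin(),                                 \
                            omp_out.end() - omp_in.size(), omp_out.end())))  \
    initializer(omp_priv = std::vector<std::size_t>())

// Collect indices of empty CSR rows; "outer" is the row-offset array
// (size = rows + 1), standing in for W.outer() in the PR.
std::vector<std::size_t> emptyRows(const std::vector<std::size_t>& outer) {
    std::vector<std::size_t> rows;
    const std::size_t n = outer.empty() ? 0 : outer.size() - 1;
    #pragma omp parallel for reduction(vec_merge_sorted : rows)
    for (std::size_t r = 0; r < n; ++r) {
        if (outer[r] == outer[r + 1]) {
            rows.push_back(r);  // each thread appends to its private copy
        }
    }
    return rows;
}
```

With a static schedule each thread's chunk of row indices is contiguous and ascending, so every private vector is already sorted and the `inplace_merge` combiner keeps the final result sorted regardless of combination order.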
This pull request highlights changes that can improve the performance of the code. The first change involves iterating over sparse matrices using indexes instead of iterators, which allows for better compiler optimizations and is generally more suited for structures with a known size, such as vectors.
The second change introduces OpenMP parallelization for certain independent sections of the code, such as the treatment of rows, to further improve performance.
These changes are applied only to specific parts of the code relevant to our current interests. However, we understand that if these changes are accepted, a major refactor would be required to standardize their usage across all parts of the code that rely on this iterator structure. Such a refactor would represent a long-term development effort.
We also acknowledge that the iterator structure may have a specific purpose and might not be replaced. Our goal is to showcase the observed improvements on MN5 and explain how we achieved them.
We applied these changes to the ForceMissing calculation (Section 3) and the MissingValueTreatment, specifically the MissingIfHeaviestMissing case (Section 9).
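For context, the MissingIfHeaviestMissing rule marks an output value missing when the input carrying the largest interpolation weight is itself missing. The per-row check can be sketched as below; this is an illustrative stand-in, not MIR's code, and the CSR arrays and parameter names are assumptions. Each row's check is independent of the others, which is what makes this section a candidate for OpenMP parallelization.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch of the MissingIfHeaviestMissing check for one row
// of the weight matrix, given its CSR column indices and weights.
// [begin, end) is the row's range in the CSR arrays.
bool heaviestIsMissing(const std::vector<std::size_t>& cols,
                       const std::vector<double>& weights,
                       std::size_t begin, std::size_t end,
                       const std::vector<bool>& inputMissing) {
    if (begin == end) {
        return true;  // empty row: no contributions, force missing
    }
    // Locate the entry with the largest interpolation weight.
    std::size_t heaviest = begin;
    for (std::size_t k = begin + 1; k < end; ++k) {
        if (weights[k] > weights[heaviest]) {
            heaviest = k;
        }
    }
    // The output is missing iff the heaviest contributor is missing.
    return inputMissing[cols[heaviest]];
}
```

Because rows share no state, a `#pragma omp parallel for` over the row index applies directly, in the same way as the forceMissing loop above.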
Other relevant sections include:
This table presents the average burst time for the MIR Method Weighted sections across different servers during an output step of Tco1279-eORCA12 simulations using the DE340 output plans. The data focuses exclusively on the ocean servers, showcasing only eORCA12 data with missing values.
This is the base case.
Using indexes instead of iterators
Using indexes and 16 OMP threads