Add distributed benchmarks for SpMV, Solver, Multi-Vector #1204
Conversation
First pass - a few small issues, but otherwise really nice work!
Are there any notable changes in the operations I should be aware of (except for supporting the generators)? Reviewing moved code is always a bit of a challenge.
@upsj I don't think I've changed anything significant in the moved code, except for using the generator to create matrices and vectors.
I think there might be an issue with the boundary nodes in the stencil matrix setup, otherwise no major concerns.
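For context, a "boundary node" here is a grid point whose stencil would reach outside the domain. Below is a minimal illustrative sketch of such a check (hypothetical helper names, not the PR's actual generator code), assuming a 5-pt stencil on an n x n grid:

```cpp
// Illustrative sketch only (hypothetical helper, not the PR's code): a
// neighbor offset (dx, dy) only contributes a matrix entry if it stays
// inside the domain, so boundary nodes end up with fewer nonzeros per row
// than interior nodes.
#include <cstdint>

using itype = std::int64_t;  // hypothetical index type for this sketch

// true if the neighbor (ix + dx, iy + dy) still lies inside the n x n grid
bool neighbor_in_domain(itype ix, itype iy, itype dx, itype dy, itype n)
{
    const auto jx = ix + dx;
    const auto jy = iy + dy;
    return jx >= 0 && jx < n && jy >= 0 && jy < n;
}
```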
I really like the generalization of the matrix and vector generate functions. LGTM!
benchmark/utils/generator.hpp (outdated)

    static std::shared_ptr<gko::LinOp> generate_matrix(
I think being explicit doesn't hurt.

Suggested change:

    - static std::shared_ptr<gko::LinOp> generate_matrix(
    + static std::shared_ptr<gko::LinOp> generate_matrix_optimal_format(
benchmark/utils/generator.hpp (outdated)

    std::shared_ptr<gko::LinOp> generate_matrix(
Suggested change:

    - std::shared_ptr<gko::LinOp> generate_matrix(
    + std::shared_ptr<gko::LinOp> generate_matrix_with_optimal_format(
    auto part = gko::experimental::distributed::
        Partition<itype, global_itype>::build_from_global_size_uniform(
            exec, comm.size(), static_cast<global_itype>(data.size[0]));
Maybe this Partition should be a parameter as well, allowing users to benchmark different kinds of partitions?
I think instead of a function parameter, a better way to customize the partition would be to derive from the class, and override this function. I'm not sure how else the partition can be passed along.
So I will leave it in its current state.
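To illustrate the "derive and override" customization discussed above, here is a rough sketch. `DistributedGenerator` and `create_partition` are placeholder names, not the actual interface in benchmark/utils/generator.hpp; only the `Partition` factory functions are taken from Ginkgo's public API.

```cpp
// Hypothetical sketch of deriving from a generator type and overriding the
// partition-building hook; names are placeholders, not the benchmark's code.
#include <ginkgo/ginkgo.hpp>

#include <memory>

using itype = gko::int32;
using global_itype = gko::int64;
using part_type =
    gko::experimental::distributed::Partition<itype, global_itype>;

struct DistributedGenerator {
    virtual ~DistributedGenerator() = default;

    // default used by the benchmark: a uniform partition over the global size
    virtual std::shared_ptr<part_type> create_partition(
        std::shared_ptr<const gko::Executor> exec,
        gko::experimental::mpi::communicator comm, global_itype size) const
    {
        return part_type::build_from_global_size_uniform(exec, comm.size(),
                                                          size);
    }
};

struct CustomPartitionGenerator : DistributedGenerator {
    // override to benchmark a different partition, e.g. one built from an
    // explicit row-to-rank mapping
    std::shared_ptr<part_type> create_partition(
        std::shared_ptr<const gko::Executor> exec,
        gko::experimental::mpi::communicator comm,
        global_itype size) const override
    {
        gko::array<gko::experimental::distributed::comm_index_type> mapping(
            exec->get_master(), static_cast<gko::size_type>(size));
        mapping.fill(0);  // placeholder: assign every row to rank 0
        return part_type::build_from_mapping(exec, mapping, comm.size());
    }
};
```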
    auto part = gko::experimental::distributed::
        Partition<itype, global_itype>::build_from_global_size_uniform(
            exec, comm.size(), static_cast<global_itype>(size[0]));
Same here.
- remove dead code - rename strided vector creation function - merge logged/unlogged matrix creation Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
- renaming - documentation Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
- renaming - refactor stencil neighbor check Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
it can be generated from the already existing functions by using a 1D domain decomposition
- renaming - consistent ordering of slow to fast indices Co-authored-by: Pratik Nayak <pratik.nayak@kit.edu>
format!
Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
Codecov Report (Base: 91.49% // Head: 91.49%; project coverage increases slightly).
Additional details and impacted files:
@@           Coverage Diff            @@
## develop #1204 +/- ##
========================================
Coverage 91.49% 91.49%
========================================
Files 556 556
Lines 47465 47467 +2
========================================
+ Hits 43428 43432 +4
+ Misses 4037 4035 -2
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)

### Added Features
+ A distributed Schwarz preconditioner [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::created_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrices generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fillin-reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the mpi cxx type binding by c binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue on single strided vector when cuda >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)
This PR adds distributed benchmarks for SpMV, Solver, and Multi-Vector operations.
To reduce code duplication, the non-distributed versions of these benchmarks have been refactored: in each case a common core has been extracted, and the creation of matrices and vectors has been delegated to helper types for the sequential and the distributed case. The common code uses templates with duck typing so that it generalizes across the sequential and distributed benchmarks.
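As a rough illustration of the duck-typing pattern described above (the names `run_spmv_core`, `generate_matrix` and `generate_vector` are hypothetical placeholders, not the PR's actual generator interface), the shared core can be written as a template that only assumes its generator argument provides the required member functions:

```cpp
// Illustrative sketch, not the PR's code: the same templated core serves the
// sequential and distributed benchmarks with different generator types.
#include <ginkgo/ginkgo.hpp>

#include <memory>

template <typename SystemGenerator>
void run_spmv_core(std::shared_ptr<const gko::Executor> exec,
                   const SystemGenerator& generator)
{
    // duck typing: any generator providing these two members works
    auto mtx = generator.generate_matrix(exec);
    auto x = generator.generate_vector(exec, mtx->get_size()[1]);
    auto y = generator.generate_vector(exec, mtx->get_size()[0]);
    // the timed operation: y = mtx * x
    mtx->apply(x, y);
}
```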
This also enables generated stencil matrices for all benchmarks.
The distributed multi-vector benchmark does not support BLAS level 2 or level 3 operations; the non-distributed benchmark still supports the same operations as before.
The distributed solver benchmark supports only the currently available distributed solvers.
In both the distributed solver and SpMV benchmarks, the local and non-local matrix formats can be specified via the CLI.