Some bug fixes for the next release #1327

Merged: 17 commits merged into develop on May 11, 2023
Conversation

@MarcelKoch (Member) commented Apr 21, 2023

Fixes:

@ginkgo-bot added the reg:testing, type:solver, type:preconditioner, and mod:all labels Apr 21, 2023
@yhmtsai (Member) left a comment

first part of review

@MarcelKoch self-assigned this Apr 24, 2023
@MarcelKoch requested a review from a team April 25, 2023 06:51
upsj previously approved these changes Apr 25, 2023
Comment on lines 215 to 216

double GKO_FACTORY_PARAMETER_SCALAR(excess_solver_reduction, 1e-6);
Member

Because Isai has its own value type, should excess_solver_reduction be remove_complex<ValueType>? The cast would then happen from the input, not during the excess_solver call.

Member Author

If it were remove_complex<ValueType>, you would get compilation errors when doing something like .with_excess_solver_reduction(0.001). I think we should prevent that, so I'm using double here

Member

We could go even further and unpack the macro, replacing the templated parameter by an explicit double, which also makes with_excess_solver_reduction(0.001f) work

Member

we usually use .with_excess_solver_reduction(static_cast<type>(0.01))

Member

It doesn't need to be that verbose; whether double or float, tolerances are cheap to compare. See Ginkgo Roadmap Meeting / Pain Points.

Member

The tolerance will not be the same as the user input. For example, 1e-16 becomes zero in half precision. It is also currently cast to the type implicitly.

Member

We can catch that at runtime; also, C++ has no half-precision literals.

Member Author

I will leave it as it is. That seems to be the easiest solution.
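
To illustrate the trade-off discussed in this thread, here is a standalone sketch; it imitates, but does not quote, the Ginkgo factory-parameter macro, and the class and member names below are invented:

```cpp
#include <complex>
#include <utility>

// Minimal stand-ins for gko::remove_complex (assumption: not the real trait).
template <typename T>
struct remove_complex_impl { using type = T; };
template <typename T>
struct remove_complex_impl<std::complex<T>> { using type = T; };
template <typename T>
using remove_complex = typename remove_complex_impl<T>::type;

// Parameter typed on the solver's value type, set via brace initialization.
template <typename ValueType>
struct typed_params {
    remove_complex<ValueType> excess_solver_reduction{1e-6};
    template <typename Arg>
    typed_params& with_excess_solver_reduction(Arg&& value)
    {
        // brace initialization rejects narrowing conversions
        excess_solver_reduction =
            remove_complex<ValueType>{std::forward<Arg>(value)};
        return *this;
    }
};

// Parameter typed as plain double, as chosen in this PR.
struct double_params {
    double excess_solver_reduction{1e-6};
    double_params& with_excess_solver_reduction(double value)
    {
        excess_solver_reduction = value;
        return *this;
    }
};

int main()
{
    double_params{}.with_excess_solver_reduction(0.001);         // ok for every value type
    typed_params<float>{}.with_excess_solver_reduction(0.001f);  // ok: matching literal
    // typed_params<float>{}.with_excess_solver_reduction(0.001);
    // ^ error: narrowing conversion from 'double' to 'float' in brace initialization
}
```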

@pratikvn added this to the Release 1.6.0 milestone Apr 27, 2023
@MarcelKoch requested a review from yhmtsai April 27, 2023 08:42
@MarcelKoch added the 1:ST:ready-for-review label Apr 27, 2023
@MarcelKoch requested a review from upsj May 3, 2023 15:34
@yhmtsai (Member) left a comment

Type of excess_solver_residual
I did not check the Isai fix in detail, though. The solve should use the inverse sparsity pattern (which might be that of matrix^k), not the matrix itself. Is there any case where the rhs index is not equal to the corresponding row size? The inverse indeed has more nonzeros, but do we only need the entries that overlap with the matrix?

Comment on lines 199 to 200
auto l = std::make_shared<DummyLogger>(
new DummyLogger(false, gko::log::Logger::iteration_complete_mask));
Member

If make_shared is used, the explicit new DummyLogger can be removed as well.
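
A standalone illustration of the suggestion; the Logger type below is a hypothetical stand-in, not the test's DummyLogger:

```cpp
#include <memory>

// Hypothetical stand-in for the test's DummyLogger (assumption).
struct Logger {
    Logger(bool propagate, unsigned event_mask)
        : propagate(propagate), event_mask(event_mask)
    {}
    bool propagate;
    unsigned event_mask;
};

int main()
{
    // Instead of std::make_shared<Logger>(new Logger(false, 0x1u)),
    // forward the constructor arguments and let make_shared allocate and
    // construct the object in a single step:
    auto logger = std::make_shared<Logger>(false, 0x1u);
    return logger->event_mask == 0x1u ? 0 : 1;
}
```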

Comment on lines +452 to +453
"Please use the version with the additional stopping "
"information.")]] virtual void
Member

The trailing "information." looks like it could be merged into the previous line?

Member Author

This is due to the formatting. Should everything be on one line?

Member

No, I just thought it might be a leftover from a previous split.
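
For reference, a standalone check (not part of the PR) that the formatter's line break is harmless, since adjacent string literals are concatenated by the compiler:

```cpp
#include <cassert>
#include <cstring>

int main()
{
    // The formatter-split literal from the diff above...
    const char* split =
        "Please use the version with the additional stopping "
        "information.";
    // ...is identical to the single-line version.
    assert(std::strcmp(
               split,
               "Please use the version with the additional stopping information.") == 0);
}
```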

.on(exec),
check_residual ? gko::stop::ResidualNorm<value_type>::build()
.with_baseline(gko::stop::mode::absolute)
.with_reduction_factor(1e-30)
Member

Why does it require 1e-30 for the residual norm?

Member Author

This is needed to prevent the residual from becoming zero, which leads to NaNs in some solvers. This is especially an issue for the test where the initial guess is already the solution. Without this check, Gmres, for example, will only compute NaNs.

Member

Is this a fix for the lucky breakdown then? I would assume fixing it means preventing the solver from ever underflowing the residual.

Member Author

I don't think we can prevent underflow, because in some cases, e.g. when the initial guess is the solution, that is the expected behavior. So we would need to make sure that we stop in that case. But I think that is difficult to realize without checking the residual twice, because we don't know if a residual criterion is already in use.

Member

We can act as if the stopping criterion fired, setting a flag based on the implicit residual should be cheap. But maybe we should leave that for another PR.

Member Author

I briefly added a criterion to check for breakdown, but since that leads to bigger changes than expected, I moved that into another branch and will create another PR for that.
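
A sketch of how such a guard can be combined with an iteration limit; the executor, solver choice, and iteration count are placeholders, while the ResidualNorm settings are taken from the diff above:

```cpp
#include <ginkgo/ginkgo.hpp>

int main()
{
    auto exec = gko::ReferenceExecutor::create();

    // Absolute residual-norm criterion with a tiny reduction factor: it only
    // fires when the residual (nearly) vanishes, stopping solvers such as
    // Gmres before a zero residual turns into NaNs.
    auto solver_factory =
        gko::solver::Gmres<double>::build()
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(100u).on(exec),
                gko::stop::ResidualNorm<double>::build()
                    .with_baseline(gko::stop::mode::absolute)
                    .with_reduction_factor(1e-30)
                    .on(exec))
            .on(exec);
    return solver_factory != nullptr ? 0 : 1;
}
```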

// incompatible with the mtx-or-solver parameter approach
/*if (Config::is_iterative() && mtx.ref->get_size()) {
if (Config::is_iterative() &&
op.ref->get_size() == mtx.ref->get_size()) {
Member

Suggested change:
- op.ref->get_size() == mtx.ref->get_size()) {
+ op.ref->get_size() == gko::transpose(mtx.ref->get_size())) {

Member

Also, the out and in variables should be reversed, or use gen_in_vec(op):

auto out = gen_in_vec(mtx);
auto in = gen_out_vec(mtx);
mtx->apply(out, in);
guarded_fn(in, out);

Member Author

A solver generated from a matrix has the same size as the matrix, so I won't use the transpose. But I will change the vector creation.

Member

No, the solver acts like the inverse of the matrix, so its size is the transpose of the matrix size:

gko::transpose(system_matrix->get_size())),

Member Author
The reason will be displayed to describe this comment to others. Learn more.

You're right, I already fixed that accordingly.
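
A quick standalone check of the size relation discussed here; the dimensions below are made up:

```cpp
#include <iostream>

#include <ginkgo/ginkgo.hpp>

int main()
{
    // Hypothetical non-square operator size, just to make the swap visible.
    gko::dim<2> mtx_size{5, 3};
    // A solver generated from it acts like the inverse, so its size is the
    // transposed dimension, as used in the suggested change above.
    auto solver_size = gko::transpose(mtx_size);
    std::cout << solver_size[0] << " x " << solver_size[1] << "\n";  // prints "3 x 5"
}
```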

Comment on lines 792 to 795
if (new_stop_factory && new_stop_factory->get_executor() != exec &&
!exec->memory_accessible(new_stop_factory->get_executor())) {
GKO_INVALID_STATE(
"Cross-executor assignment of factories is currently broken.");
Member

Does cloning a residual criterion lead to the issue? Previously, cloning an iteration criterion worked.

Member Author

The clone is not propagated to all internal factories, which leads to the inconsistencies. Consider the combined criterion: its factory stores the parameters, which contain a std::vector of criterion factories. Cloning the factory is implemented via its copy assignment, and that assignment just copy-assigns the stored parameters, so the executor of the internal factories is not changed.
This already affects the iteration criterion, since it will try to set the convergence status using the criterion's executor, which may not be the same executor as the convergence status.
TBH, I have no idea why that previously worked at all. Perhaps because I'm now checking more than one criterion?

Member

I would also like to investigate this further

Member

That's a good point, it most likely has to do with Combined not handling executors correctly, but we can fix that easily :)

Member Author

I manually created a copy assignment for Combined::Factory that fixes it. But I'm not happy with this, since cloning factories seems to be inherently broken.

Member

Cloning factories should be fine, since we ensure everything is on the right executor in the generate step. Combined is the only one I missed in #753

Member Author

Within the context of cloning a solver, it is fine. But it would break again if someone tries to clone a solver factory, or any other factory.

Member

If we copy to the correct executor only at stopping criterion generation, it should be fine, right?

Member Author

I'm talking about cloning factories in general, not just the stopping criteria in the solver. Cloning a factory is only handled through the base factory copy assignment, which will not copy the internal parameters to the correct executor.

Member

We don't require the parameters of a factory to share a common executor.
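
To make the scenario in this thread concrete, a small sketch; the executors are chosen arbitrarily, and the commented behavior is the pre-fix inconsistency described above, not a statement about the current API:

```cpp
#include <iostream>

#include <ginkgo/ginkgo.hpp>

int main()
{
    auto omp = gko::OmpExecutor::create();
    auto ref = gko::ReferenceExecutor::create();

    // A combined criterion factory holding a nested Iteration factory, both on omp.
    auto combined =
        gko::stop::Combined::build()
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(10u).on(omp))
            .on(omp);

    // Cloning to another executor copies the factory via its copy assignment,
    // which copy-assigns the stored parameters. Before the fix in this PR, the
    // nested Iteration factory therefore kept omp as its executor, while the
    // outer factory reported ref.
    auto cloned = gko::clone(ref, combined);
    std::cout << (cloned->get_executor() == ref) << " "
              << (cloned->get_parameters().criteria[0]->get_executor() == omp)
              << "\n";
}
```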

@@ -1020,6 +1028,9 @@ TYPED_TEST(Solver, CrossExecutorGenerateCopiesToFactoryExecutor)
{
using Config = typename TestFixture::Config;
using Mtx = typename TestFixture::Mtx;
if (!this->ref->memory_accessible(this->exec) && Config::is_iterative()) {
GTEST_SKIP();
Member

Suggested change:
- GTEST_SKIP();
+ GTEST_SKIP() << "Some message";

auto beta = std::make_reverse_iterator(alpha.end());

alpha[0] = 1;
alpha[1] = diag;
Member

nit: this would break if size == 0, can we add an early return for that case?
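
A hypothetical standalone version of that guard; the surrounding function and its signature are invented, and only alpha, diag, and the reverse iterator come from the snippet above:

```cpp
#include <iterator>
#include <vector>

// Invented helper just to show the suggested early return; the real code in
// the PR has a different signature and presumably guarantees alpha.size() >= 2
// whenever alpha is non-empty.
void fill_coefficients(std::vector<double>& alpha, double diag)
{
    if (alpha.empty()) {
        return;  // early return for size == 0, as suggested in the review
    }
    auto beta = std::make_reverse_iterator(alpha.end());
    alpha[0] = 1;
    alpha[1] = diag;
    (void)beta;  // beta would be used by the rest of the original routine
}

int main()
{
    std::vector<double> empty;
    fill_coefficients(empty, 2.0);  // safe: returns before touching alpha[0]

    std::vector<double> alpha(4);
    fill_coefficients(alpha, 2.0);
}
```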

@MarcelKoch (Member Author)

@yhmtsai I think there were some tests which failed if I just set the rhs index to the corresponding row. I didn't investigate that further.

@MarcelKoch force-pushed the fixes-collection branch 2 times, most recently from 30aea99 to 91d970d on May 10, 2023 08:21
@upsj (Member) left a comment

LGTM!

return solver::Gmres<value_type>::build()
.with_criteria(
stop::Iteration::build()
.with_max_iters(matrix->get_size()[0])
Member

Did you test the default behavior on a decently-sized matrix? If we have the choice between taking forever and producing a precise result, and returning an imprecise result after a shorter time, I would prefer going for the latter.

@MarcelKoch (Member Author) commented May 10, 2023

Not sure what I did here, but I wanted it to be min(100, matrix->get_size()[0]). I just wanted to make sure that for small matrices, we don't use more Krylov vectors than the matrix size. Wrong place.
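
The stated intent, written out as a hypothetical helper; the function name and placement are invented, and only min(100, matrix->get_size()[0]) comes from this comment:

```cpp
#include <algorithm>

#include <ginkgo/ginkgo.hpp>

// Invented helper capturing the intent: never request more iterations
// (i.e. Krylov basis vectors) than the matrix dimension, capped at 100.
gko::size_type default_max_iters(const gko::LinOp* matrix)
{
    return std::min<gko::size_type>(100, matrix->get_size()[0]);
}

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    auto mtx = gko::matrix::Dense<double>::create(exec, gko::dim<2>{5, 5});
    return default_max_iters(mtx.get()) == 5 ? 0 : 1;
}
```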

Member Author

I think as default, we should solve exactly on the coarsest level. If that can't be achieved in reasonable time, then the problem might be too difficult for the default settings and the user needs to specify the solver anyway.

@MarcelKoch added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label May 10, 2023
@MarcelKoch (Member Author)

format-rebase!

MarcelKoch and others added 16 commits May 10, 2023 10:14
- documentation
- formatting
- fix reduction type issue

Co-authored-by: Yu-Hsiang M. Tsai <yhmtsai@gmail.com>
- refactoring
- test for matrix generator

Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
Co-authored-by: Yu-Hsiang M. Tsai <yhmtsai@gmail.com>
Co-authored-by: Pratik Nayak <pratik.nayak@kit.edu>
Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
- correct value type for reduction factor
- minor refactoring

Co-authored-by: Yu-Hsiang M. Tsai <yhmtsai@gmail.com>
- tests

Co-authored-by: Tobias Ribizel <ribizel@kit.edu>
Co-authored-by: Yu-Hsiang M. Tsai <yhmtsai@gmail.com>
@ginkgo-bot (Member)

Formatting rebase introduced changes, see Artifacts here to review them

@MarcelKoch merged commit b9cda64 into develop May 11, 2023
@MarcelKoch deleted the fixes-collection branch May 11, 2023 06:55
tcojean added a commit that referenced this pull request Jun 16, 2023
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic
  and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to
  provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)

### Added Features
+ A distributed Schwarz preconditioner. [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::created_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrices generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fillin-reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the MPI C++ type binding by using the C binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue on single strided vector when cuda >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)
Successfully merging this pull request may close these issues.

Issue with Isai Preconditioner (Isai Type: General, Sparsity power > 1)