
Commit

update documentation
Co-authored-by: Marcel Koch <marcel.koch@kit.edu>
yhmtsai and MarcelKoch committed Feb 13, 2025
1 parent bac6cad commit e2bab59
Showing 9 changed files with 15 additions and 7 deletions.
2 changes: 1 addition & 1 deletion examples/CMakeLists.txt
@@ -54,7 +54,7 @@ if(GINKGO_HAVE_PAPI_SDE)
endif()

if(GINKGO_BUILD_MPI)
-list(APPEND EXAMPLES_LIST distributed-solver distributed-multigrid-preconditioned-solver-customized)
+list(APPEND EXAMPLES_LIST distributed-solver distributed-multigrid-preconditioned-solver)
endif()

find_package(Kokkos 4.1.00 QUIET)
@@ -1,10 +1,10 @@
cmake_minimum_required(VERSION 3.16)
-project(distributed-multigrid-preconditioned-solver-customized)
+project(distributed-multigrid-preconditioned-solver)

# We only need to find Ginkgo if we build this example stand-alone
if (NOT GINKGO_BUILD_EXAMPLES)
find_package(Ginkgo 1.10.0 REQUIRED)
endif()

-add_executable(distributed-multigrid-preconditioned-solver-customized distributed-multigrid-preconditioned-solver-customized.cpp)
-target_link_libraries(distributed-multigrid-preconditioned-solver-customized Ginkgo::ginkgo)
+add_executable(distributed-multigrid-preconditioned-solver distributed-multigrid-preconditioned-solver.cpp)
+target_link_libraries(distributed-multigrid-preconditioned-solver Ginkgo::ginkgo)
@@ -50,9 +50,16 @@ int main(int argc, char* argv[])
// non-distributed program. Please note that not all solvers support
// distributed systems at the moment.
using solver = gko::solver::Cg<ValueType>;
+// We use the Schwarz preconditioner to extend non-distributed
+// preconditioners, like our Jacobi,
+// to the distributed case. The Schwarz preconditioner wraps another
+// preconditioner, and applies it only to the local part of a distributed
+// matrix. This will be used as our distributed multigrid smoother.
using schwarz = gko::experimental::distributed::preconditioner::Schwarz<
ValueType, LocalIndexType, GlobalIndexType>;
using bj = gko::preconditioner::Jacobi<ValueType, LocalIndexType>;
+// Multigrid and Pgm can accept the distributed matrix, so we still use the
+// same types as in the non-distributed case.
using mg = gko::solver::Multigrid;
using pgm = gko::multigrid::Pgm<ValueType, LocalIndexType>;
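For readers skimming the diff, here is a minimal sketch of the wrapping described in the comments above: a Schwarz preconditioner that applies a local block-Jacobi preconditioner only to the locally owned part of the distributed matrix. It reuses the `schwarz` and `bj` aliases introduced in this hunk; the variable names, the `max_block_size` value, and the `exec` executor handle are illustrative assumptions, not the literal code of this example.

@code{.cpp}
// Sketch: block-Jacobi acting on the local part of the distributed matrix,
// wrapped in a Schwarz preconditioner so it can serve as the distributed
// multigrid smoother. All names here are illustrative assumptions.
auto local_solver = gko::share(bj::build().with_max_block_size(1u).on(exec));
auto smoother_factory = gko::share(
    schwarz::build().with_local_solver(local_solver).on(exec));
@endcode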

@@ -200,7 +207,8 @@ int main(int argc, char* argv[])
solver::build()
.with_criteria(gko::stop::Iteration::build().with_max_iters(4u))
.on(exec));
-// It uses Schwarz Jacobi as smoother and GMRES as coarse solver
+// The multigrid preconditioner uses the Schwarz-wrapped Jacobi as the
+// smoother and CG as the coarse solver
auto mg_factory = gko::share(
mg::build()
.with_mg_level(pgm::build().with_deterministic(true))
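To make the truncated factory chain above easier to follow, here is a hedged sketch of how the coarse solver, the multigrid preconditioner, and the outer CG solver might be tied together. It reuses the `solver`, `mg`, and `pgm` aliases from the diff and the `smoother_factory` from the earlier sketch; the remaining variable names, the outer stopping criteria, and the system matrix `A` are illustrative assumptions rather than the exact code of this example.

@code{.cpp}
// Sketch: CG with a fixed iteration budget as the coarse-grid solver.
auto coarse_solver = gko::share(
    solver::build()
        .with_criteria(gko::stop::Iteration::build().with_max_iters(4u))
        .on(exec));
// Multigrid preconditioner: PGM coarsening, the Schwarz-Jacobi smoother from
// the sketch above, and one cycle per preconditioner application.
auto mg_factory = gko::share(
    mg::build()
        .with_mg_level(pgm::build().with_deterministic(true))
        .with_pre_smoother(smoother_factory)
        .with_coarsest_solver(coarse_solver)
        .with_criteria(gko::stop::Iteration::build().with_max_iters(1u))
        .on(exec));
// Outer distributed CG solver, preconditioned with the multigrid factory.
// The iteration limit and reduction factor are illustrative assumptions;
// A is the distributed system matrix assembled earlier in the example.
auto solver_factory =
    solver::build()
        .with_criteria(
            gko::stop::Iteration::build().with_max_iters(100u),
            gko::stop::ResidualNorm<ValueType>::build()
                .with_reduction_factor(1e-8))
        .with_preconditioner(mg_factory)
        .on(exec);
auto distributed_solver = solver_factory->generate(A);
@endcode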
@@ -2,7 +2,7 @@
<h1>Introduction</h1>
This distributed multigrid preconditioned solver example should help you understand customizing Ginkgo multigrid in a distributed setting.
The example will solve a simple 1D Laplace equation where the system can be distributed row-wise to multiple processes.
-Note. Because the stencil is configured equal weighted, the coarsening method does not perform well on this kind of problem.
+Note: Because the stencil for the discretized Laplacian is configured with equal weights, the coarsening method does not perform well on this kind of problem.
To run the solver with multiple processes, use `mpirun -n NUM_PROCS ./distributed-multigrid-preconditioned-solver [executor] [num_grid_points] [num_iterations]`.

If you are using GPU devices, please make sure that you run this example with at most as many processes as you have GPU
@@ -1,5 +1,5 @@
<h1>Results</h1>
-This is the expected output for `mpirun -n 4 ./distributed-multigrid-preconditioned-solver-customized`:
+This is the expected output for `mpirun -n 4 ./distributed-multigrid-preconditioned-solver`:

@code{.cpp}


