Fix failing matrix tests #978
Conversation
The PR is now ready to review. There is still a bug in the Dense -> Fbcsr conversion, but I'll tackle that tomorrow.
LGTM in general, there are some nits/questions.
    kernel::
        count_nonzero_blocks_per_row<<<num_blocks, default_block_size>>>(
hipLaunchKernelGGL?
See discussion on Slack: It probably makes sense to move to the chevron launch syntax in the future, so I wanted to make a first step in that direction :)
I don't recall if there's a consensus on this. It might make sense to keep hipLaunchKernelGGL. If not, perhaps it makes sense to move this implementation to common/cuda_hip?
I checked the HIP implementation, and the macros have resolved to the chevron syntax inside HIP since 2018: https://github.com/ROCm-Developer-Tools/hipamd/blob/250bd582540143f53dc6d61bca963b71cf5a1087/include/hip/hcc_detail/hip_runtime.h
Is that justification enough? 😄
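For context, a minimal sketch of the two launch spellings under discussion (the kernel signature here is hypothetical, not the actual Ginkgo kernel). With recent HIP the macro expands to the chevron form, so the two calls below are equivalent:

```cpp
#include <hip/hip_runtime.h>

__global__ void count_nonzero_blocks_per_row(const double* values, int* counts);

void launch_both_ways(const double* values, int* counts, int num_blocks,
                      int default_block_size)
{
    // macro form: kernel, grid dim, block dim, dynamic shared memory, stream, args...
    hipLaunchKernelGGL(count_nonzero_blocks_per_row, dim3(num_blocks),
                       dim3(default_block_size), 0, 0, values, counts);
    // chevron form as used in the diff above
    count_nonzero_blocks_per_row<<<num_blocks, default_block_size>>>(values,
                                                                     counts);
}
```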
format-rebase!
Formatting rebase introduced changes, see Artifacts here to review them
Codecov Report
@@            Coverage Diff             @@
##           develop     #978      +/-  ##
===========================================
- Coverage    92.37%   92.17%   -0.20%
===========================================
  Files          495      484      -11
  Lines        41675    40626    -1049
===========================================
- Hits         38496    37447    -1049
  Misses        3179     3179
===========================================
Continue to review the full report at Codecov.
LGTM!
Almost: clone should not change the stride.
            col < num_cols && is_nonzero(source[row * stride + col]);
    }
    auto nonzero_mask =
        warp.ballot(local_nonzero) | (first_block_nonzero ? 1u : 0u);
Suggested change:
-        warp.ballot(local_nonzero) | (first_block_nonzero ? 1u : 0u);
+        warp.ballot(local_nonzero) | static_cast<unsigned>(first_block_nonzero);
I don't really think this improves readability. I want to make clear that I set the lowest bit based on whether the first block already had a nonzero found previously.
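For illustration, a standalone host-side sketch (not the kernel code itself) showing that the two spellings produce the same mask, since a bool converts to exactly 0u or 1u:

```cpp
#include <cassert>

unsigned lowest_bit_ternary(unsigned ballot_result, bool first_block_nonzero)
{
    return ballot_result | (first_block_nonzero ? 1u : 0u);
}

unsigned lowest_bit_cast(unsigned ballot_result, bool first_block_nonzero)
{
    return ballot_result | static_cast<unsigned>(first_block_nonzero);
}

int main()
{
    for (bool flag : {false, true}) {
        assert(lowest_bit_ternary(0b1010u, flag) == lowest_bit_cast(0b1010u, flag));
    }
}
```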
@@ -60,8 +60,7 @@ class Dense : public ::testing::Test {
     static void assert_equal_to_original_mtx(gko::matrix::Dense<value_type>* m)
     {
         ASSERT_EQ(m->get_size(), gko::dim<2>(2, 3));
-        ASSERT_EQ(m->get_stride(), 4);
-        ASSERT_EQ(m->get_num_stored_elements(), 2 * 4);
+        ASSERT_EQ(m->get_num_stored_elements(), 2 * m->get_stride());
I think it still needs to check the stride.
I think it makes sense to not check the stride. If the stride is for performance reasons, it need not be the same as the original stride (the user might choose something non-optimal initially, say). But generally, the stride is to allow the user to access only a part of some original memory allocation. In that case, we don't need to preserve strides while copying or cloning. While moving, we still want to keep the stride, though.
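A minimal sketch of the semantics being argued for here, using the public Dense API (the concrete sizes and the resulting natural stride of 3 are illustrative assumptions, mirroring the test below):

```cpp
#include <ginkgo/ginkgo.hpp>

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    // a 2x3 matrix padded to stride 4, e.g. viewing part of a larger allocation
    auto mtx = gko::matrix::Dense<double>::create(exec, gko::dim<2>{2, 3}, 4);
    // copying/cloning allocates fresh storage, so it may use the natural
    // stride (3) instead of preserving the padding of the source
    auto copy = gko::clone(mtx);
    // moving reuses the existing allocation, so the stride (4) is kept
}
```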
-    this->assert_equal_to_original_mtx(
-        dynamic_cast<decltype(this->mtx.get())>(mtx_clone.get()));
+    this->assert_equal_to_original_mtx(mtx_clone.get());
+    ASSERT_EQ(mtx_clone->get_stride(), 3);
clone should not change the stride
I think this is fine - clone should ignore the original stride.
    // assert(last_thread >= 0);
    // mask off everything below first_thread
    const auto lower_mask =
        first_thread < 0 ? full_mask : ~((one_mask << first_thread) - 1u);
Suggested change:
-        first_thread < 0 ? full_mask : ~((one_mask << first_thread) - 1u);
+        first_thread < 0 ? full_mask : full_mask - ((one_mask << first_thread) - 1u);
or use an xor operation. This is mostly relevant for DPC++, where the subgroup can be smaller than the mask type width.
it should not really matter, since the purpose of the mask is to mask off bits from other threads. The ballot call in dpcpp should not return bits outside its own full mask.
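A standalone sketch of the bit arithmetic behind the suggestion (32-bit masks assumed): for an all-ones full_mask the three spellings agree; for a narrower DPC++ subgroup they differ only in bits above the subgroup width, which, per the reply above, ballot never sets anyway.

```cpp
#include <cassert>
#include <cstdint>

int main()
{
    const std::uint32_t one_mask = 1u;
    const std::uint32_t full_mask = ~std::uint32_t{0};
    for (int first_thread = 0; first_thread < 32; ++first_thread) {
        const std::uint32_t low_bits = (one_mask << first_thread) - 1u;
        // original complement form, suggested subtraction form, and xor form
        assert(~low_bits == full_mask - low_bits);
        assert(~low_bits == (full_mask ^ low_bits));
    }
}
```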
    block_count +=
        (block_local_col == block_size - 1 && local_mask) ? 1 : 0;
Suggested change:
-    block_count +=
-        (block_local_col == block_size - 1 && local_mask) ? 1 : 0;
+    block_count +=
+        static_cast<IndexType>(block_local_col == block_size - 1 && local_mask);
if you want to reduce the branch.
simple ternary operations usually don't get translated into branches, and this one specifically doesn't even need a cmov, because it is just a written-out conversion bool -> int for clarity.
LGTM! Great job completing more of the Fbcsr kernels; I see you had some fun doing that :) I think the stride handling also looks good to me.
            values[i * bs * bs + (in_cols[nz] % bs) * bs +
                   (in_rows[nz] % bs)] = fake_complex_unpack(in_vals[nz]);
        }
    });
Great implementation overall. If this loop could be written over the original index space (over nnz rather than over num_blocks), we could get good performance. But I guess performance is not crucial here, so this will do.
this would require a nonzero -> block index lookup, which we don't have at that point. We can replace this with our own warp-parallel kernel in the future, where we can take care of load-balancing between blocks and do (almost) coalesced reads.
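A serial, hypothetical sketch of the two loop structures (names like block_nz_begin are assumptions, not the actual kernel's variables): the implemented version loops over blocks and writes the nonzeros belonging to each block, while a per-nonzero loop would need the block index of every nonzero, which is not available at this point.

```cpp
#include <vector>

// per-block loop, as in the conversion kernel above (serialized)
void fill_values_per_block(int num_blocks, int bs,
                           const std::vector<int>& block_nz_begin,  // assumed offsets
                           const std::vector<int>& in_rows,
                           const std::vector<int>& in_cols,
                           const std::vector<double>& in_vals,
                           std::vector<double>& values)
{
    for (int i = 0; i < num_blocks; ++i) {
        // nonzeros of block i are assumed to be stored contiguously
        for (int nz = block_nz_begin[i]; nz < block_nz_begin[i + 1]; ++nz) {
            values[i * bs * bs + (in_cols[nz] % bs) * bs + (in_rows[nz] % bs)] =
                in_vals[nz];
        }
    }
}

// a per-nonzero loop would instead need block_of_nz[nz], i.e. a
// nonzero -> block index lookup
```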
-    this->assert_equal_to_original_mtx(
-        dynamic_cast<decltype(this->mtx.get())>(mtx_clone.get()));
+    this->assert_equal_to_original_mtx(mtx_clone.get());
+    ASSERT_EQ(mtx_clone->get_stride(), 3);
I think this is fine - clone should ignore the original stride.
    kernel::
        count_nonzero_blocks_per_row<<<num_blocks, default_block_size>>>(
I don't recall if there's a consensus on this. It might make sense to keep hipLaunchKernelGGL. If not, perhaps it makes sense to move this implementation to common/cuda_hip?
LGTM! Just one concern below.
    result->values_ = this->values_;
    result->stride_ = this->stride_;
    result->set_size(this->get_size());
    if (result->get_size() != this->get_size()) {
If the original size is larger than required, perhaps we should preserve the original allocation. Then the new stride would be the original stride, while the new size is the new smaller size.
Not sure about this. It might encourage users to keep pointers to the original allocation, which might cause issues if we change this behavior in the future.
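As a rough sketch of the policy the PR eventually adopts (see the commit message further down: strides are preserved only if no reallocation is necessary, or on moves), with a hypothetical helper rather than the actual Dense implementation:

```cpp
#include <cstddef>

struct dense_shape {
    std::size_t rows;
    std::size_t cols;
    std::size_t stride;     // row stride of the underlying allocation
    std::size_t allocated;  // number of stored elements
};

// decide the shape of the copy target for a copy of new_rows x new_cols values
dense_shape copy_target_shape(const dense_shape& target, std::size_t new_rows,
                              std::size_t new_cols)
{
    const bool fits = new_cols <= target.stride &&
                      new_rows * target.stride <= target.allocated;
    if (fits) {
        // no reallocation necessary: keep the target's existing stride
        return {new_rows, new_cols, target.stride, target.allocated};
    }
    // reallocation: fall back to the natural, tightly packed stride
    return {new_rows, new_cols, new_cols, new_rows * new_cols};
}
```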
LGTM (except for clone)
    // only consider threads in the current block
    const auto first_thread = block_base_col - base_col;
    const auto last_thread = first_thread + block_size;
    // HIP compiles these assertions in Release, traps unconditionally
This is the issue in v5.0; it is fixed in v5.0.2.
Related doc: https://docs.amd.com/bundle/ROCm-Release-Notes-v5.0.2/page/Fixed_Defects_in_This_Release.html
It's still broken for strided accesses though
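The PR simply comments the device assertions out; an alternative, version-guarded variant could look roughly like the sketch below (DEVICE_ASSERT is a hypothetical macro name, and the check conservatively disables the assertion on all of HIP 5.0.x, since the 5.0.2 patch level is not easily distinguished via these macros):

```cpp
#include <cassert>
#include <hip/hip_version.h>

#if HIP_VERSION_MAJOR > 5 || (HIP_VERSION_MAJOR == 5 && HIP_VERSION_MINOR > 0)
// newer ROCm: device assertions behave as expected
#define DEVICE_ASSERT(condition) assert(condition)
#else
// older HIP compiles device assert() even in Release and traps unconditionally
#define DEVICE_ASSERT(condition)
#endif
```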
Co-authored-by: Yuhsiang Tsai <yhmtsai@gmail.com>
This also changes the Dense convert_to(Dense) behavior so that it preserves strides only if no reallocation is necessary, or on moves.
This means that padding in the target will be copied and written back after the copy has finished. This fixes issues in builds without the OpenMP executor and in reference/test/matrix/dense_kernels.
This is only relevant for really old HIP/CUDA, but still causes test failures.
* use ConstArrayView for Dense::create_real_view
* improve readability of Dense -> Fbcsr conversion
* update documentation
* remove unnecessary template parameters
* more constexpr
* fix generic matrix test formatting

Co-authored-by: Aditya Kashi <aditya.kashi@kit.edu>
Co-authored-by: Yuhsiang M. Tsai <yhmtsai@gmail.com>
Note: This PR changes the Ginkgo ABI.
For details, check the full ABI diff under Artifacts here.
SonarCloud Quality Gate failed.
Advertise release 1.5.0 and last changes
+ Add changelog
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,
and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.

Algorithm and important feature additions:
+ Add MPI-based multi-node for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982), [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))

Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))

Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))

Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))

Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1116))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903), [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
I disabled a few matrix tests on purpose to keep them from failing in #904; I will address them with this PR.