From 0e1df8438d54fd522a5557f62436901cbfa2a28d Mon Sep 17 00:00:00 2001 From: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> Date: Tue, 28 Jan 2025 20:46:28 -0800 Subject: [PATCH 01/58] CDash: fix variables in CTestConfig.cmake (#5611) Try using CTest module variables instead of CTest script variables, given that we are not using a CTest script, to fix the fields displayed on the CDash dashboard. Examples of guidance from the official documentation (https://cmake.org/cmake/help/v3.24/manual/ctest.1.html): ![Screenshot from 2025-01-28 11-04-12](https://github.com/user-attachments/assets/a59e088d-8da6-4e2d-a0f4-d5dd508047a4) ![Screenshot from 2025-01-28 11-08-45](https://github.com/user-attachments/assets/f677da64-dfc4-4b98-85dc-79e360f1f915) However, the same seems to apply to all the other variables in CTestConfig.cmake. Should we try to update them all? --- CTestConfig.cmake | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/CTestConfig.cmake b/CTestConfig.cmake index 938d2a4f518..8af4cb36ac1 100644 --- a/CTestConfig.cmake +++ b/CTestConfig.cmake @@ -13,6 +13,8 @@ set(CTEST_SUBMIT_URL https://my.cdash.org/submit.php?project=WarpX) set(CTEST_DROP_SITE_CDASH TRUE) -# Additional settings -set(CTEST_SITE "Azure-Pipelines") -set(CTEST_BUILD_NAME "CI-Development") +# Set site and build names +# - CTest script variables: CTEST_SITE, CTEST_BUILD_NAME +# - CTest module variables: SITE, BUILDNAME +set(SITE "Azure-Pipelines") +set(BUILDNAME "CI-Development") From 547bfbbce076db2222a91264590cda27d677ff37 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 29 Jan 2025 18:42:27 +0100 Subject: [PATCH 02/58] Clang-tidy CI test: bump version from 15 to 16 (#5592) This PR bumps the version used for `clang-tidy` CI tests from 15 to 16. It also addresses all the issues found with the upgraded tool. Most of the issues are related to this newly introduced check: - [cppcoreguidelines-avoid-const-or-ref-data-members](https://releases.llvm.org/16.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/cppcoreguidelines/avoid-const-or-ref-data-members.html) The check enforces the [CppCoreGuidelines rule about constant and reference data members](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#c12-dont-make-data-members-const-or-references). In general, I understand the argument against using constant or reference data members. ~~There is however one case in which I am not fully convinced by the suggestion of the tool: in [PML.H](https://github.com/ECP-WarpX/WarpX/pull/5592/files#diff-f1e020ebe3cd2222f399d50ff05769d0c70482f0e12bbe29b498e9ab2d0f4a53) `const amrex::BoxArray& m_grids;` becomes `amrex::BoxArray m_grids;` and I am wondering if this can be a performance issue. Maybe we could consider using a (possibly smart) pointer to the `BoxArray`, instead of making a copy.~~ (we are now using a pointer, `const amrex::BoxArray* m_grids;`). A few issues were instead related to these checks: - [modernize-loop-convert](https://releases.llvm.org/16.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/modernize/loop-convert.html) This check was already enabled, but `clang-tidy-16` broadens its scope with respect to previous versions. - [modernize-use-auto](https://releases.llvm.org/16.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/modernize/use-auto.html) Only one case. I am a bit confused, because this should also have been found by version 15 of the tool; see the minimal sketch below for the kind of rewrite this check suggests. 
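  As an illustration, here is a minimal sketch of the rewrite `modernize-use-auto` suggests when the declared type merely repeats the type already spelled out in the cast (a standalone example with a placeholder vector `values`, not the actual WarpX code):

  ```cpp
  #include <vector>

  int main ()
  {
      const std::vector<double> values{1.0, 2.0, 3.0};
      // flagged by modernize-use-auto: `int` duplicates the type named in the cast
      const int n_before = static_cast<int>(values.size());
      // suggested form: let `auto` deduce the type from the static_cast
      const auto n_after = static_cast<int>(values.size());
      return (n_before == n_after) ? 0 : 1;
  }
  ```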
- [misc-use-anonymous-namespace](https://releases.llvm.org/16.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/misc/use-anonymous-namespace.html) This is a new check. Actually, the issues found with this check were false positives, but they disappeared when I properly set `misc-use-anonymous-namespace.HeaderFileExtensions` in the `.clang-tidy` configuration file to recognize `.H` files as headers. - [misc-misplaced-const](https://releases.llvm.org/16.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/misc/misplaced-const.html) Only one case. I am a bit confused, because this should also have been found by version 15 of the tool. - [readability-misleading-indentation](https://releases.llvm.org/16.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/readability/misleading-indentation.html) [**NOW DISABLED DUE TO FALSE POSITIVES**] This check was already enabled. However, with this newer version of the tool, it seems to me that it generates some false positives. Therefore, I propose to **disable** it. We may try to re-enable it when we bump the version from 16 to 17. --- .clang-tidy | 3 ++ .github/workflows/clang_tidy.yml | 8 +++--- Source/BoundaryConditions/PML.H | 14 +++++----- Source/BoundaryConditions/PML.cpp | 6 ++-- Source/BoundaryConditions/PML_RZ.H | 4 +-- Source/Diagnostics/BTDiagnostics.H | 2 +- .../BackTransformFunctor.H | 6 ++-- .../ComputeDiagFunctors/CellCenterFunctor.H | 2 +- .../ComputeDiagFunctors/DivBFunctor.H | 2 +- .../ComputeDiagFunctors/DivEFunctor.H | 2 +- .../ComputeDiagFunctors/JFunctor.H | 2 +- .../ComputeDiagFunctors/PartPerCellFunctor.H | 2 +- .../ComputeDiagFunctors/PartPerGridFunctor.H | 2 +- .../ParticleReductionFunctor.H | 8 +++--- .../ComputeDiagFunctors/RhoFunctor.H | 4 +-- .../ComputeDiagFunctors/TemperatureFunctor.H | 4 +-- .../FlushFormats/FlushFormatInSitu.cpp | 6 ++-- Source/Diagnostics/FullDiagnostics.cpp | 10 +++---- .../ReducedDiags/LoadBalanceCosts.H | 4 +-- Source/Diagnostics/WarpXOpenPMD.cpp | 28 +++++++++---------- .../FieldAccessorFunctors.H | 4 +-- .../HybridPICModel/HybridPICModel.cpp | 2 +- Source/Particles/AddPlasmaUtilities.H | 4 +-- .../ElementaryProcess/QEDPairGeneration.H | 2 +- .../ElementaryProcess/QEDPhotonEmission.H | 6 ++-- .../ElementaryProcess/QEDSchwingerProcess.H | 10 +++---- Source/Particles/Filter/FilterFunctors.H | 16 +++++------ Source/Particles/MultiParticleContainer.cpp | 4 +-- Source/Particles/ParticleBoundaryBuffer.cpp | 12 ++++---- .../Particles/ParticleCreation/SmartCreate.H | 2 +- Source/Particles/Sorting/SortingUtils.H | 4 +-- Source/ablastr/fields/Interpolate.H | 6 ++-- Source/ablastr/utils/SignalHandling.cpp | 2 +- Source/ablastr/utils/msg_logger/MsgLogger.H | 6 ++-- Tools/Linter/runClangTidy.sh | 8 +++--- 35 files changed, 104 insertions(+), 103 deletions(-) diff --git a/.clang-tidy b/.clang-tidy index 04d1419c5c7..efb60a001d0 100644 --- a/.clang-tidy +++ b/.clang-tidy @@ -44,6 +44,7 @@ Checks: ' -readability-implicit-bool-conversion, -readability-isolate-declaration, -readability-magic-numbers, + -readability-misleading-indentation, -readability-named-parameter, -readability-uppercase-literal-suffix ' @@ -55,6 +56,8 @@ CheckOptions: value: "H," - key: modernize-pass-by-value.ValuesOnly value: "true" +- key: misc-use-anonymous-namespace.HeaderFileExtensions + value: "H," HeaderFilterRegex: 'Source[a-z_A-Z0-9\/]+\.H$' diff --git a/.github/workflows/clang_tidy.yml b/.github/workflows/clang_tidy.yml index dda7f2185f5..e6816b1c1a9 100644 --- a/.github/workflows/clang_tidy.yml +++ 
b/.github/workflows/clang_tidy.yml @@ -26,7 +26,7 @@ jobs: - uses: actions/checkout@v4 - name: install dependencies run: | - .github/workflows/dependencies/clang.sh 15 + .github/workflows/dependencies/clang.sh 16 - name: set up cache uses: actions/cache@v4 with: @@ -43,8 +43,8 @@ jobs: export CCACHE_LOGFILE=${{ github.workspace }}/ccache.log.txt ccache -z - export CXX=$(which clang++-15) - export CC=$(which clang-15) + export CXX=$(which clang++-16) + export CC=$(which clang-16) cmake -S . -B build_clang_tidy \ -DCMAKE_VERBOSE_MAKEFILE=ON \ @@ -62,7 +62,7 @@ jobs: ${{github.workspace}}/.github/workflows/source/makeMakefileForClangTidy.py --input ${{github.workspace}}/ccache.log.txt make -j4 --keep-going -f clang-tidy-ccache-misses.mak \ - CLANG_TIDY=clang-tidy-15 \ + CLANG_TIDY=clang-tidy-16 \ CLANG_TIDY_ARGS="--config-file=${{github.workspace}}/.clang-tidy --warnings-as-errors=*" ccache -s diff --git a/Source/BoundaryConditions/PML.H b/Source/BoundaryConditions/PML.H index 9e7dbc0034c..6be9600b9d9 100644 --- a/Source/BoundaryConditions/PML.H +++ b/Source/BoundaryConditions/PML.H @@ -81,10 +81,10 @@ class SigmaBoxFactory : public amrex::FabFactory { public: - SigmaBoxFactory (const amrex::BoxArray& grid_ba, const amrex::Real* dx, + SigmaBoxFactory (const amrex::BoxArray* grid_ba, const amrex::Real* dx, const amrex::IntVect& ncell, const amrex::IntVect& delta, const amrex::Box& regular_domain, const amrex::Real v_sigma_sb) - : m_grids(grid_ba), m_dx(dx), m_ncell(ncell), m_delta(delta), m_regdomain(regular_domain), m_v_sigma_sb(v_sigma_sb) {} + : m_grids{grid_ba}, m_dx(dx), m_ncell(ncell), m_delta(delta), m_regdomain(regular_domain), m_v_sigma_sb(v_sigma_sb) {} ~SigmaBoxFactory () override = default; SigmaBoxFactory (const SigmaBoxFactory&) = default; @@ -97,7 +97,7 @@ public: [[nodiscard]] SigmaBox* create (const amrex::Box& box, int /*ncomps*/, const amrex::FabInfo& /*info*/, int /*box_index*/) const final { - return new SigmaBox(box, m_grids, m_dx, m_ncell, m_delta, m_regdomain, m_v_sigma_sb); + return new SigmaBox(box, *m_grids, m_dx, m_ncell, m_delta, m_regdomain, m_v_sigma_sb); } void destroy (SigmaBox* fab) const final @@ -112,7 +112,7 @@ public: } private: - const amrex::BoxArray& m_grids; + const amrex::BoxArray* m_grids; const amrex::Real* m_dx; amrex::IntVect m_ncell; amrex::IntVect m_delta; @@ -125,7 +125,7 @@ class MultiSigmaBox { public: MultiSigmaBox(const amrex::BoxArray& ba, const amrex::DistributionMapping& dm, - const amrex::BoxArray& grid_ba, const amrex::Real* dx, + const amrex::BoxArray* grid_ba, const amrex::Real* dx, const amrex::IntVect& ncell, const amrex::IntVect& delta, const amrex::Box& regular_domain, amrex::Real v_sigma_sb); void ComputePMLFactorsB (const amrex::Real* dx, amrex::Real dt); @@ -204,8 +204,8 @@ private: bool m_dive_cleaning; bool m_divb_cleaning; - const amrex::IntVect m_fill_guards_fields; - const amrex::IntVect m_fill_guards_current; + amrex::IntVect m_fill_guards_fields; + amrex::IntVect m_fill_guards_current; const amrex::Geometry* m_geom; const amrex::Geometry* m_cgeom; diff --git a/Source/BoundaryConditions/PML.cpp b/Source/BoundaryConditions/PML.cpp index f45ca222e69..390a09a34c3 100644 --- a/Source/BoundaryConditions/PML.cpp +++ b/Source/BoundaryConditions/PML.cpp @@ -506,7 +506,7 @@ SigmaBox::ComputePMLFactorsE (const Real* a_dx, Real dt) } MultiSigmaBox::MultiSigmaBox (const BoxArray& ba, const DistributionMapping& dm, - const BoxArray& grid_ba, const Real* dx, + const BoxArray* grid_ba, const Real* dx, const IntVect& ncell, const 
IntVect& delta, const amrex::Box& regular_domain, const amrex::Real v_sigma_sb) : FabArray(ba,dm,1,0,MFInfo(), @@ -764,7 +764,7 @@ PML::PML (const int lev, const BoxArray& grid_ba, Box single_domain_box = is_single_box_domain ? domain0 : Box(); // Empty box (i.e., Box()) means it's not a single box domain. - sigba_fp = std::make_unique(ba, dm, grid_ba_reduced, geom->CellSize(), + sigba_fp = std::make_unique(ba, dm, &grid_ba_reduced, geom->CellSize(), IntVect(ncell), IntVect(delta), single_domain_box, v_sigma_sb); if (WarpX::electromagnetic_solver_id == ElectromagneticSolverAlgo::PSATD) { @@ -879,7 +879,7 @@ PML::PML (const int lev, const BoxArray& grid_ba, warpx.m_fields.alloc_init(FieldType::pml_j_cp, Direction{2}, lev, cba_jz, cdm, 1, ngb, 0.0_rt, false, false); single_domain_box = is_single_box_domain ? cdomain : Box(); - sigba_cp = std::make_unique(cba, cdm, grid_cba_reduced, cgeom->CellSize(), + sigba_cp = std::make_unique(cba, cdm, &grid_cba_reduced, cgeom->CellSize(), cncells, cdelta, single_domain_box, v_sigma_sb); if (WarpX::electromagnetic_solver_id == ElectromagneticSolverAlgo::PSATD) { diff --git a/Source/BoundaryConditions/PML_RZ.H b/Source/BoundaryConditions/PML_RZ.H index 20c7d360fc7..5508836a171 100644 --- a/Source/BoundaryConditions/PML_RZ.H +++ b/Source/BoundaryConditions/PML_RZ.H @@ -53,8 +53,8 @@ public: private: - const int m_ncell; - const int m_do_pml_in_domain; + int m_ncell; + int m_do_pml_in_domain; const amrex::Geometry* m_geom; // The MultiFabs pml_E_fp and pml_B_fp are setup using the registry. diff --git a/Source/Diagnostics/BTDiagnostics.H b/Source/Diagnostics/BTDiagnostics.H index c7137f45c9d..f4f118892a8 100644 --- a/Source/Diagnostics/BTDiagnostics.H +++ b/Source/Diagnostics/BTDiagnostics.H @@ -242,7 +242,7 @@ private: * will be used by all snapshots to obtain lab-frame data at the respective * z slice location. */ - std::string const m_cell_centered_data_name; + std::string m_cell_centered_data_name; /** Vector of pointers to compute cell-centered data, per level, per component * using the coarsening-ratio provided by the user. 
*/ diff --git a/Source/Diagnostics/ComputeDiagFunctors/BackTransformFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/BackTransformFunctor.H index bef40ae1ce0..c4410b0a722 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/BackTransformFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/BackTransformFunctor.H @@ -100,11 +100,11 @@ public: amrex::Real beta_boost) const; private: /** pointer to source multifab (cell-centered multi-component multifab) */ - amrex::MultiFab const * const m_mf_src = nullptr; + const amrex::MultiFab* m_mf_src = nullptr; /** level at which m_mf_src is defined */ - int const m_lev; + int m_lev; /** Number of buffers or snapshots */ - int const m_num_buffers; + int m_num_buffers; /** Vector of amrex::Box with index-space in the lab-frame */ amrex::Vector m_buffer_box; /** Vector of current z co-ordinate in the boosted-frame for each buffer */ diff --git a/Source/Diagnostics/ComputeDiagFunctors/CellCenterFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/CellCenterFunctor.H index dd5bb239ecf..6f0818b180e 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/CellCenterFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/CellCenterFunctor.H @@ -36,7 +36,7 @@ public: void operator()(amrex::MultiFab& mf_dst, int dcomp, int /*i_buffer=0*/) const override; private: /** pointer to source multifab (can be multi-component) */ - amrex::MultiFab const * const m_mf_src = nullptr; + const amrex::MultiFab* m_mf_src = nullptr; int m_lev; /**< level on which mf_src is defined (used in cylindrical) */ /**< (for cylindrical) whether to average all modes into 1 comp */ bool m_convertRZmodes2cartesian; diff --git a/Source/Diagnostics/ComputeDiagFunctors/DivBFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/DivBFunctor.H index 1d36b434ae2..347c40e0338 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/DivBFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/DivBFunctor.H @@ -42,7 +42,7 @@ public: private: /** Vector of pointer to source multifab Bx, By, Bz */ ablastr::fields::VectorField m_arr_mf_src; - int const m_lev; /**< level on which mf_src is defined (used in cylindrical) */ + int m_lev; /**< level on which mf_src is defined (used in cylindrical) */ /**< (for cylindrical) whether to average all modes into 1 comp */ bool m_convertRZmodes2cartesian; }; diff --git a/Source/Diagnostics/ComputeDiagFunctors/DivEFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/DivEFunctor.H index e7691187f3a..3874ebeb6c6 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/DivEFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/DivEFunctor.H @@ -41,7 +41,7 @@ public: private: /** Vector of pointer to source multifab Bx, By, Bz */ ablastr::fields::VectorField m_arr_mf_src; - int const m_lev; /**< level on which mf_src is defined (used in cylindrical) */ + int m_lev; /**< level on which mf_src is defined (used in cylindrical) */ /**< (for cylindrical) whether to average all modes into 1 comp */ bool m_convertRZmodes2cartesian; }; diff --git a/Source/Diagnostics/ComputeDiagFunctors/JFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/JFunctor.H index d9f9a1e82e0..21e0d3f5034 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/JFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/JFunctor.H @@ -39,7 +39,7 @@ public: void operator()(amrex::MultiFab& mf_dst, int dcomp, int /*i_buffer=0*/) const override; private: /** direction of the current density to save */ - const int m_dir; + int m_dir; /** level on which mf_src is defined */ int m_lev; /** (for cylindrical) whether to average all modes into 
1 comp */ diff --git a/Source/Diagnostics/ComputeDiagFunctors/PartPerCellFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/PartPerCellFunctor.H index 1b8785af7b7..491cd2cfe37 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/PartPerCellFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/PartPerCellFunctor.H @@ -30,7 +30,7 @@ public: */ void operator()(amrex::MultiFab& mf_dst, int dcomp, int /*i_buffer=0*/) const override; private: - int const m_lev; /**< level on which mf_src is defined */ + int m_lev; /**< level on which mf_src is defined */ }; #endif // WARPX_PARTPERCELLFUNCTOR_H_ diff --git a/Source/Diagnostics/ComputeDiagFunctors/PartPerGridFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/PartPerGridFunctor.H index 9718c9c7163..b0c3f28ab90 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/PartPerGridFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/PartPerGridFunctor.H @@ -30,7 +30,7 @@ public: */ void operator()(amrex::MultiFab& mf_dst, int dcomp, int /*i_buffer=0*/) const override; private: - int const m_lev; /**< level on which mf_src is defined */ + int m_lev; /**< level on which mf_src is defined */ }; #endif // WARPX_PARTPERGRIDFUNCTOR_H_ diff --git a/Source/Diagnostics/ComputeDiagFunctors/ParticleReductionFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/ParticleReductionFunctor.H index 7de9844a99e..33211900553 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/ParticleReductionFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/ParticleReductionFunctor.H @@ -43,10 +43,10 @@ public: */ void operator()(amrex::MultiFab& mf_dst, int dcomp, int /*i_buffer=0*/) const override; private: - int const m_lev; /**< level on which mf_src is defined */ - int const m_ispec; /**< index of species to average over */ - bool const m_do_average; /**< Whether to calculate the average of the data */ - bool const m_do_filter; /**< whether to apply #m_filter_fn */ + int m_lev; /**< level on which mf_src is defined */ + int m_ispec; /**< index of species to average over */ + bool m_do_average; /**< Whether to calculate the average of the data */ + bool m_do_filter; /**< whether to apply #m_filter_fn */ /** Parser function to be averaged by the functor. Arguments: x, y, z, ux, uy, uz */ std::unique_ptr m_map_fn_parser; /** Parser function to filter particles before pass to map. 
Arguments: x, y, z, ux, uy, uz */ diff --git a/Source/Diagnostics/ComputeDiagFunctors/RhoFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/RhoFunctor.H index bc0c8b9f270..073e6476110 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/RhoFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/RhoFunctor.H @@ -43,13 +43,13 @@ public: private: // Level on which source MultiFab mf_src is defined in RZ geometry - int const m_lev; + int m_lev; // Whether to apply k-space filtering of charge density in the diagnostics output in RZ PSATD bool m_apply_rz_psatd_filter; // Species index to dump rho per species - const int m_species_index; + int m_species_index; // Whether to average all modes into one component in RZ geometry bool m_convertRZmodes2cartesian; diff --git a/Source/Diagnostics/ComputeDiagFunctors/TemperatureFunctor.H b/Source/Diagnostics/ComputeDiagFunctors/TemperatureFunctor.H index f6c425e74d5..1716ab61652 100644 --- a/Source/Diagnostics/ComputeDiagFunctors/TemperatureFunctor.H +++ b/Source/Diagnostics/ComputeDiagFunctors/TemperatureFunctor.H @@ -28,8 +28,8 @@ public: */ void operator()(amrex::MultiFab& mf_dst, int dcomp, int /*i_buffer=0*/) const override; private: - int const m_lev; /**< level on which mf_src is defined */ - int const m_ispec; /**< index of species to average over */ + int m_lev; /**< level on which mf_src is defined */ + int m_ispec; /**< index of species to average over */ }; #endif // WARPX_TEMPERATUREFUNCTOR_H_ diff --git a/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp b/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp index be3047d7ab6..d5313d71727 100644 --- a/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp +++ b/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp @@ -27,14 +27,14 @@ FlushFormatInSitu::WriteParticles(const amrex::Vector& particle_di // we prefix the fields with "particle_{species_name}" b/c we // want to to uniquely name all the fields that can be plotted - for (unsigned i = 0, n = particle_diags.size(); i < n; ++i) { + for (const auto & particle_diag : particle_diags) { Vector particle_varnames; Vector particle_int_varnames; - std::string prefix = "particle_" + particle_diags[i].getSpeciesName(); + std::string prefix = "particle_" + particle_diag.getSpeciesName(); // Get pc for species // auto& pc = mypc->GetParticleContainer(i); - WarpXParticleContainer* pc = particle_diags[i].getParticleContainer(); + WarpXParticleContainer* pc = particle_diag.getParticleContainer(); // get names of real comps std::map real_comps_map = pc->getParticleComps(); diff --git a/Source/Diagnostics/FullDiagnostics.cpp b/Source/Diagnostics/FullDiagnostics.cpp index 946178fd1b5..8e2ebd3886a 100644 --- a/Source/Diagnostics/FullDiagnostics.cpp +++ b/Source/Diagnostics/FullDiagnostics.cpp @@ -565,12 +565,10 @@ FullDiagnostics::AddRZModesToDiags (int lev) // Check if divE is requested // If so, all components will be written out - bool divE_requested = false; - for (int comp = 0; comp < m_varnames.size(); comp++) { - if ( m_varnames[comp] == "divE" ) { - divE_requested = true; - } - } + const bool divE_requested = std::any_of( + std::begin(m_varnames), + std::end(m_varnames), + [](const auto& varname) { return varname == "divE"; }); // If rho is requested, all components will be written out const bool rho_requested = utils::algorithms::is_in( m_varnames, "rho" ); diff --git a/Source/Diagnostics/ReducedDiags/LoadBalanceCosts.H b/Source/Diagnostics/ReducedDiags/LoadBalanceCosts.H index cfba804d0f0..b30d6b0bb6e 100644 --- 
a/Source/Diagnostics/ReducedDiags/LoadBalanceCosts.H +++ b/Source/Diagnostics/ReducedDiags/LoadBalanceCosts.H @@ -40,9 +40,9 @@ public: * (cost, processor, level, i_low, j_low, k_low, gpu_ID [if GPU run], num_cells, num_macro_particles * note: the hostname per box is stored separately (in m_data_string) */ #ifdef AMREX_USE_GPU - const int m_nDataFields = 9; + static const int m_nDataFields = 9; #else - const int m_nDataFields = 8; + static const int m_nDataFields = 8; #endif /** used to keep track of max number of boxes over all timesteps; this allows diff --git a/Source/Diagnostics/WarpXOpenPMD.cpp b/Source/Diagnostics/WarpXOpenPMD.cpp index e38ae8c8300..2fac8ede452 100644 --- a/Source/Diagnostics/WarpXOpenPMD.cpp +++ b/Source/Diagnostics/WarpXOpenPMD.cpp @@ -531,10 +531,10 @@ WarpXOpenPMDPlot::WriteOpenPMDParticles (const amrex::Vector& part { WARPX_PROFILE("WarpXOpenPMDPlot::WriteOpenPMDParticles()"); -for (unsigned i = 0, n = particle_diags.size(); i < n; ++i) { +for (const auto & particle_diag : particle_diags) { - WarpXParticleContainer* pc = particle_diags[i].getParticleContainer(); - PinnedMemoryParticleContainer* pinned_pc = particle_diags[i].getPinnedParticleContainer(); + WarpXParticleContainer* pc = particle_diag.getParticleContainer(); + PinnedMemoryParticleContainer* pinned_pc = particle_diag.getPinnedParticleContainer(); if (isBTD || use_pinned_pc) { if (!pinned_pc->isDefined()) { continue; // Skip to the next particle container @@ -546,17 +546,17 @@ for (unsigned i = 0, n = particle_diags.size(); i < n; ++i) { pc->make_alike(); const auto mass = pc->AmIA() ? PhysConst::m_e : pc->getMass(); - RandomFilter const random_filter(particle_diags[i].m_do_random_filter, - particle_diags[i].m_random_fraction); - UniformFilter const uniform_filter(particle_diags[i].m_do_uniform_filter, - particle_diags[i].m_uniform_stride); - ParserFilter parser_filter(particle_diags[i].m_do_parser_filter, + RandomFilter const random_filter(particle_diag.m_do_random_filter, + particle_diag.m_random_fraction); + UniformFilter const uniform_filter(particle_diag.m_do_uniform_filter, + particle_diag.m_uniform_stride); + ParserFilter parser_filter(particle_diag.m_do_parser_filter, utils::parser::compileParser - (particle_diags[i].m_particle_filter_parser.get()), + (particle_diag.m_particle_filter_parser.get()), pc->getMass(), time); parser_filter.m_units = InputUnits::SI; - GeometryFilter const geometry_filter(particle_diags[i].m_do_geom_filter, - particle_diags[i].m_diag_domain); + GeometryFilter const geometry_filter(particle_diag.m_do_geom_filter, + particle_diag.m_diag_domain); if (isBTD || use_pinned_pc) { particlesConvertUnits(ConvertDirection::WarpX_to_SI, pinned_pc, mass); @@ -587,7 +587,7 @@ for (unsigned i = 0, n = particle_diags.size(); i < n; ++i) { } // Gather the electrostatic potential (phi) on the macroparticles - if ( particle_diags[i].m_plot_phi ) { + if ( particle_diag.m_plot_phi ) { storePhiOnParticles( tmp, WarpX::electrostatic_solver_id, !use_pinned_pc ); } @@ -619,7 +619,7 @@ for (unsigned i = 0, n = particle_diags.size(); i < n; ++i) { real_names[x.second+PIdx::nattribs] = detail::snakeToCamel(x.first); } // plot any "extra" fields by default - real_flags = particle_diags[i].m_plot_flags; + real_flags = particle_diag.m_plot_flags; real_flags.resize(tmp.NumRealComps(), 1); // and the names int_names.resize(tmp.NumIntComps()); @@ -634,7 +634,7 @@ for (unsigned i = 0, n = particle_diags.size(); i < n; ++i) { // real_names contains a list of all real particle attributes. 
// real_flags is 1 or 0, whether quantity is dumped or not. DumpToFile(&tmp, - particle_diags.at(i).getSpeciesName(), + particle_diag.getSpeciesName(), m_CurrentStep, real_flags, int_flags, diff --git a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceAlgorithms/FieldAccessorFunctors.H b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceAlgorithms/FieldAccessorFunctors.H index 05b1db1fe94..ba94eab0b66 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceAlgorithms/FieldAccessorFunctors.H +++ b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceAlgorithms/FieldAccessorFunctors.H @@ -42,9 +42,9 @@ struct FieldAccessorMacroscopic } private: /** Array4 of the source field to be scaled and returned by the operator() */ - amrex::Array4 const m_field; + amrex::Array4 m_field; /** Array4 of the macroscopic parameter used to divide m_field in the operator() */ - amrex::Array4 const m_parameter; + amrex::Array4 m_parameter; }; diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp index ec3742d1ff8..ba6bb0e042c 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp @@ -339,7 +339,7 @@ void HybridPICModel::HybridPICSolveE ( auto& warpx = WarpX::GetInstance(); ablastr::fields::VectorField current_fp_plasma = warpx.m_fields.get_alldirs(FieldType::hybrid_current_fp_plasma, lev); - const ablastr::fields::ScalarField electron_pressure_fp = warpx.m_fields.get(FieldType::hybrid_electron_pressure_fp, lev); + auto* const electron_pressure_fp = warpx.m_fields.get(FieldType::hybrid_electron_pressure_fp, lev); // Solve E field in regular cells warpx.get_pointer_fdtd_solver_fp(lev)->HybridPICSolveE( diff --git a/Source/Particles/AddPlasmaUtilities.H b/Source/Particles/AddPlasmaUtilities.H index bb05d7be3c8..824e3e10955 100644 --- a/Source/Particles/AddPlasmaUtilities.H +++ b/Source/Particles/AddPlasmaUtilities.H @@ -334,8 +334,8 @@ struct QEDHelper amrex::ParticleReal* p_optical_depth_QSR = nullptr; amrex::ParticleReal* p_optical_depth_BW = nullptr; - const bool has_quantum_sync; - const bool has_breit_wheeler; + bool has_quantum_sync; + bool has_breit_wheeler; QuantumSynchrotronGetOpticalDepth quantum_sync_get_opt; BreitWheelerGetOpticalDepth breit_wheeler_get_opt; diff --git a/Source/Particles/ElementaryProcess/QEDPairGeneration.H b/Source/Particles/ElementaryProcess/QEDPairGeneration.H index 180fdf0fb35..f1beb8363a7 100644 --- a/Source/Particles/ElementaryProcess/QEDPairGeneration.H +++ b/Source/Particles/ElementaryProcess/QEDPairGeneration.H @@ -172,7 +172,7 @@ public: private: - const BreitWheelerGeneratePairs + BreitWheelerGeneratePairs m_generate_functor; /*!< A copy of the functor to generate pairs. It contains only pointers to the lookup tables.*/ GetParticlePosition m_get_position; diff --git a/Source/Particles/ElementaryProcess/QEDPhotonEmission.H b/Source/Particles/ElementaryProcess/QEDPhotonEmission.H index 514526374bd..0b6836a38bc 100644 --- a/Source/Particles/ElementaryProcess/QEDPhotonEmission.H +++ b/Source/Particles/ElementaryProcess/QEDPhotonEmission.H @@ -178,12 +178,12 @@ public: } private: - const QuantumSynchrotronGetOpticalDepth + QuantumSynchrotronGetOpticalDepth m_opt_depth_functor; /*!< A copy of the functor to initialize the optical depth of the source species. 
*/ - const int m_opt_depth_runtime_comp = 0; /*!< Index of the optical depth component of source species*/ + int m_opt_depth_runtime_comp = 0; /*!< Index of the optical depth component of source species*/ - const QuantumSynchrotronPhotonEmission + QuantumSynchrotronPhotonEmission m_emission_functor; /*!< A copy of the functor to generate photons. It contains only pointers to the lookup tables.*/ GetParticlePosition m_get_position; diff --git a/Source/Particles/ElementaryProcess/QEDSchwingerProcess.H b/Source/Particles/ElementaryProcess/QEDSchwingerProcess.H index 32b58dc50dc..e7eb7e8be04 100644 --- a/Source/Particles/ElementaryProcess/QEDSchwingerProcess.H +++ b/Source/Particles/ElementaryProcess/QEDSchwingerProcess.H @@ -17,9 +17,9 @@ */ struct SchwingerFilterFunc { - const int m_threshold_poisson_gaussian; - const amrex::Real m_dV; - const amrex::Real m_dt; + int m_threshold_poisson_gaussian; + amrex::Real m_dV; + amrex::Real m_dt; /** Get the number of created pairs in a given cell at a given timestep. * @@ -59,8 +59,8 @@ struct SchwingerFilterFunc */ struct SchwingerTransformFunc { - const amrex::Real m_y_size; - const int m_weight_index; + amrex::Real m_y_size; + int m_weight_index; /** Assign a weight to particles created via the Schwinger process. * diff --git a/Source/Particles/Filter/FilterFunctors.H b/Source/Particles/Filter/FilterFunctors.H index 982eeb0d23a..9d2b5f67a64 100644 --- a/Source/Particles/Filter/FilterFunctors.H +++ b/Source/Particles/Filter/FilterFunctors.H @@ -50,8 +50,8 @@ struct RandomFilter return ( (!m_is_active) || (amrex::Random(engine) < m_fraction) ); } private: - const bool m_is_active; //! select all particles if false - const amrex::Real m_fraction = 1.0; //! range: [0.0:1.0] where 0 is no & 1 is all particles + bool m_is_active; //! select all particles if false + amrex::Real m_fraction = 1.0; //! range: [0.0:1.0] where 0 is no & 1 is all particles }; /** @@ -77,8 +77,8 @@ struct UniformFilter return ( (!m_is_active) || ( p.id()%m_stride == 0 ) ); } private: - const bool m_is_active; //! select all particles if false - const int m_stride = 0; //! selection of every n-th particle + bool m_is_active; //! select all particles if false + int m_stride = 0; //! selection of every n-th particle }; /** @@ -134,10 +134,10 @@ struct ParserFilter } private: /** Whether this diagnostics is activated. Select all particles if false */ - const bool m_is_active; + bool m_is_active; public: /** Parser function with 7 input variables, t,x,y,z,ux,uy,uz */ - amrex::ParserExecutor<7> const m_function_partparser; + amrex::ParserExecutor<7> m_function_partparser; /** Mass of particle species */ amrex::ParticleReal m_mass; /** Store physical time on the coarsest level. */ @@ -171,9 +171,9 @@ struct GeometryFilter } private: /** Whether this diagnostics is activated. 
Select all particles if false */ - const bool m_is_active; + bool m_is_active; /** Physical extent of the axis-aligned region used for particle check */ - const amrex::RealBox m_domain; + amrex::RealBox m_domain; }; #endif // WARPX_FILTERFUNCTORS_H diff --git a/Source/Particles/MultiParticleContainer.cpp b/Source/Particles/MultiParticleContainer.cpp index 619b54ed7ad..c6724b5185a 100644 --- a/Source/Particles/MultiParticleContainer.cpp +++ b/Source/Particles/MultiParticleContainer.cpp @@ -88,7 +88,7 @@ namespace /** A little collection to transport six Array4 that point to the EM fields */ struct MyFieldList { - Array4< amrex::Real const > const Ex, Ey, Ez, Bx, By, Bz; + Array4< amrex::Real const > Ex, Ey, Ez, Bx, By, Bz; }; } @@ -223,7 +223,7 @@ MultiParticleContainer::ReadParameters () pp_particles, "repeated_plasma_lens_lengths", h_repeated_plasma_lens_lengths); - const int n_lenses = static_cast(h_repeated_plasma_lens_starts.size()); + const auto n_lenses = static_cast(h_repeated_plasma_lens_starts.size()); d_repeated_plasma_lens_starts.resize(n_lenses); d_repeated_plasma_lens_lengths.resize(n_lenses); amrex::Gpu::copyAsync(amrex::Gpu::hostToDevice, diff --git a/Source/Particles/ParticleBoundaryBuffer.cpp b/Source/Particles/ParticleBoundaryBuffer.cpp index a1f1c46d894..dbe5dea7085 100644 --- a/Source/Particles/ParticleBoundaryBuffer.cpp +++ b/Source/Particles/ParticleBoundaryBuffer.cpp @@ -50,11 +50,11 @@ struct IsOutsideDomainBoundary { }; struct FindEmbeddedBoundaryIntersection { - const int m_step_index; - const int m_delta_index; - const int m_normal_index; - const int m_step; - const amrex::Real m_dt; + int m_step_index; + int m_delta_index; + int m_normal_index; + int m_step; + amrex::Real m_dt; amrex::Array4 m_phiarr; amrex::GpuArray m_dxi; amrex::GpuArray m_plo; @@ -173,7 +173,7 @@ struct CopyAndTimestamp { int m_delta_index; int m_normal_index; int m_step; - const amrex::Real m_dt; + amrex::Real m_dt; int m_idim; int m_iside; diff --git a/Source/Particles/ParticleCreation/SmartCreate.H b/Source/Particles/ParticleCreation/SmartCreate.H index fe4cb5929e0..d93624b6433 100644 --- a/Source/Particles/ParticleCreation/SmartCreate.H +++ b/Source/Particles/ParticleCreation/SmartCreate.H @@ -35,7 +35,7 @@ struct SmartCreate { const InitializationPolicy* m_policy_real; const InitializationPolicy* m_policy_int; - const int m_weight_index = 0; + int m_weight_index = 0; template AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE diff --git a/Source/Particles/Sorting/SortingUtils.H b/Source/Particles/Sorting/SortingUtils.H index ba7761bf48a..49366e888ae 100644 --- a/Source/Particles/Sorting/SortingUtils.H +++ b/Source/Particles/Sorting/SortingUtils.H @@ -174,9 +174,9 @@ class fillBufferFlagRemainingParticles amrex::GpuArray m_inv_cell_size; amrex::Box m_domain; int* m_inexflag_ptr; - WarpXParticleContainer::ParticleTileType::ConstParticleTileDataType const m_ptd; + WarpXParticleContainer::ParticleTileType::ConstParticleTileDataType m_ptd; amrex::Array4 m_buffer_mask; - int const m_start_index; + int m_start_index; int const* m_indices_ptr; }; diff --git a/Source/ablastr/fields/Interpolate.H b/Source/ablastr/fields/Interpolate.H index a9f0a7fc75f..e5121215393 100644 --- a/Source/ablastr/fields/Interpolate.H +++ b/Source/ablastr/fields/Interpolate.H @@ -46,9 +46,9 @@ namespace ablastr::fields::details { 0, m_refratio); } - amrex::Array4 const m_phi_fp_arr; - amrex::Array4 const m_phi_cp_arr; - amrex::IntVect const m_refratio; + amrex::Array4 m_phi_fp_arr; + amrex::Array4 m_phi_cp_arr; + amrex::IntVect 
m_refratio; }; } // namespace ablastr::fields::details diff --git a/Source/ablastr/utils/SignalHandling.cpp b/Source/ablastr/utils/SignalHandling.cpp index 5eeaeec259f..bf4874b4536 100644 --- a/Source/ablastr/utils/SignalHandling.cpp +++ b/Source/ablastr/utils/SignalHandling.cpp @@ -37,7 +37,7 @@ SignalHandling::parseSignalNameToNumber (const std::string &str) #if defined(__linux__) || defined(__APPLE__) const struct { const char* abbrev; - const int value; + int value; } signals_to_parse[] = { {"ABRT", SIGABRT}, {"ALRM", SIGALRM}, diff --git a/Source/ablastr/utils/msg_logger/MsgLogger.H b/Source/ablastr/utils/msg_logger/MsgLogger.H index 401432f5dda..2497bdcfae7 100644 --- a/Source/ablastr/utils/msg_logger/MsgLogger.H +++ b/Source/ablastr/utils/msg_logger/MsgLogger.H @@ -280,9 +280,9 @@ namespace ablastr::utils::msg_logger #endif - const int m_rank /*! MPI rank of the current process*/; - const int m_num_procs /*! Number of MPI ranks*/; - const int m_io_rank /*! Rank of the I/O process*/; + int m_rank /*! MPI rank of the current process*/; + int m_num_procs /*! Number of MPI ranks*/; + int m_io_rank /*! Rank of the I/O process*/; std::map m_messages /*! This stores a map to associate warning messages with the corresponding counters*/; }; diff --git a/Tools/Linter/runClangTidy.sh b/Tools/Linter/runClangTidy.sh index 046c72d7b27..262d713cac6 100755 --- a/Tools/Linter/runClangTidy.sh +++ b/Tools/Linter/runClangTidy.sh @@ -55,13 +55,13 @@ ${CTIDY} --version echo echo "This can be overridden by setting the environment" echo "variables CLANG, CLANGXX, and CLANGTIDY e.g.: " -echo "$ export CLANG=clang-15" -echo "$ export CLANGXX=clang++-15" -echo "$ export CTIDCLANGTIDYY=clang-tidy-15" +echo "$ export CLANG=clang-16" +echo "$ export CLANGXX=clang++-16" +echo "$ export CTIDCLANGTIDYY=clang-tidy-16" echo "$ ./Tools/Linter/runClangTidy.sh" echo echo "******************************************************" -echo "* Warning: clang v15 is currently used in CI tests. *" +echo "* Warning: clang v16 is currently used in CI tests. *" echo "* It is therefore recommended to use this version. *" echo "* Otherwise, a newer version may find issues not *" echo "* currently covered by CI tests while older versions *" From 962829d1781f80c05a8a3dc14e8e5ae0ad43da54 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 29 Jan 2025 18:43:00 +0100 Subject: [PATCH 03/58] WarpX class: evolve_scheme no longer a static variable (#5616) This PR contributes to reducing the usage of static variables in the WarpX class. --- Source/WarpX.H | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Source/WarpX.H b/Source/WarpX.H index f500347febc..3d7835de58b 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -197,7 +197,7 @@ public: //! Integer that corresponds to the type of Maxwell solver (Yee, CKC, PSATD, ECT) static inline auto electromagnetic_solver_id = ElectromagneticSolverAlgo::Default; //! Integer that corresponds to the evolve scheme (explicit, semi_implicit_em, theta_implicit_em) - static inline auto evolve_scheme = EvolveScheme::Default; + EvolveScheme evolve_scheme = EvolveScheme::Default; //! Maximum iterations used for self-consistent particle update in implicit particle-suppressed evolve schemes static int max_particle_its_in_implicit_scheme; //! 
Relative tolerance used for self-consistent particle update in implicit particle-suppressed evolve schemes From 9f2d0f94b54835cb1fa88bc6a9ea8ef76899e398 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 29 Jan 2025 18:43:29 +0100 Subject: [PATCH 04/58] WarpX class: ProjectionCleanDivB no longer static (#5615) This PR contributes to reducing the usage of static member functions and static variables in the WarpX class. --- Source/Python/WarpX.cpp | 4 ++-- Source/WarpX.H | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Source/Python/WarpX.cpp b/Source/Python/WarpX.cpp index 01ab2d3e48f..870a3a87c91 100644 --- a/Source/Python/WarpX.cpp +++ b/Source/Python/WarpX.cpp @@ -266,8 +266,8 @@ The physical fields in WarpX have the following naming: py::arg("potential"), "Sets the EB potential string and updates the function parser." ) - .def_static("run_div_cleaner", - [] () { WarpX::ProjectionCleanDivB(); }, + .def("run_div_cleaner", + [] (WarpX& wx) { wx.ProjectionCleanDivB(); }, "Executes projection based divergence cleaner on loaded Bfield_fp_external." ) .def("synchronize", diff --git a/Source/WarpX.H b/Source/WarpX.H index 3d7835de58b..fec12affecd 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -849,7 +849,7 @@ public: void ComputeDivE(amrex::MultiFab& divE, int lev); - static void ProjectionCleanDivB (); + void ProjectionCleanDivB (); [[nodiscard]] amrex::IntVect getngEB() const { return guard_cells.ng_alloc_EB; } [[nodiscard]] amrex::IntVect getngF() const { return guard_cells.ng_alloc_F; } From fc37679567f417b3e53c8d99073ef666867d63ae Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 29 Jan 2025 18:43:56 +0100 Subject: [PATCH 05/58] Remove unused SliceDiagnostic.H/cpp (#5617) The functions defined in `SliceDiagnostic.H` and implemented in ` SliceDiagnostic.cpp` are never used in WarpX. Therefore, this PR removes these two source files (and updates `CMakeLists.txt` and `Make.package` accordingly) --- Source/Diagnostics/CMakeLists.txt | 1 - Source/Diagnostics/Make.package | 1 - Source/Diagnostics/SliceDiagnostic.H | 38 -- Source/Diagnostics/SliceDiagnostic.cpp | 526 ------------------------- 4 files changed, 566 deletions(-) delete mode 100644 Source/Diagnostics/SliceDiagnostic.H delete mode 100644 Source/Diagnostics/SliceDiagnostic.cpp diff --git a/Source/Diagnostics/CMakeLists.txt b/Source/Diagnostics/CMakeLists.txt index 376487dc94a..d899bd5e155 100644 --- a/Source/Diagnostics/CMakeLists.txt +++ b/Source/Diagnostics/CMakeLists.txt @@ -7,7 +7,6 @@ foreach(D IN LISTS WarpX_DIMS) FullDiagnostics.cpp MultiDiagnostics.cpp ParticleIO.cpp - SliceDiagnostic.cpp WarpXIO.cpp WarpXOpenPMD.cpp BTDiagnostics.cpp diff --git a/Source/Diagnostics/Make.package b/Source/Diagnostics/Make.package index 75b41fba5e8..28afdb35290 100644 --- a/Source/Diagnostics/Make.package +++ b/Source/Diagnostics/Make.package @@ -4,7 +4,6 @@ CEXE_sources += FullDiagnostics.cpp CEXE_sources += WarpXIO.cpp CEXE_sources += ParticleIO.cpp CEXE_sources += FieldIO.cpp -CEXE_sources += SliceDiagnostic.cpp CEXE_sources += BTDiagnostics.cpp CEXE_sources += BoundaryScrapingDiagnostics.cpp CEXE_sources += BTD_Plotfile_Header_Impl.cpp diff --git a/Source/Diagnostics/SliceDiagnostic.H b/Source/Diagnostics/SliceDiagnostic.H deleted file mode 100644 index 570f86d5384..00000000000 --- a/Source/Diagnostics/SliceDiagnostic.H +++ /dev/null @@ -1,38 +0,0 @@ -/* Copyright 2019 Revathi Jambunathan - * - * This file is part of WarpX. 
- * - * License: BSD-3-Clause-LBNL - */ -#ifndef WARPX_SliceDiagnostic_H_ -#define WARPX_SliceDiagnostic_H_ - -#include - -#include - -#include - -std::unique_ptr CreateSlice( const amrex::MultiFab& mf, - const amrex::Vector &dom_geom, - amrex::RealBox &slice_realbox, - amrex::IntVect &slice_cr_ratio ); - -void CheckSliceInput( amrex::RealBox real_box, - amrex::RealBox &slice_cc_nd_box, amrex::RealBox &slice_realbox, - amrex::IntVect &slice_cr_ratio, amrex::Vector dom_geom, - amrex::IntVect SliceType, amrex::IntVect &slice_lo, - amrex::IntVect &slice_hi, amrex::IntVect &interp_lo); - -void InterpolateSliceValues( amrex::MultiFab& smf, - amrex::IntVect interp_lo, amrex::RealBox slice_realbox, - const amrex::Vector& geom, int ncomp, int nghost, - amrex::IntVect slice_lo, amrex::IntVect slice_hi, - amrex::IntVect SliceType, amrex::RealBox real_box); - -void InterpolateLo( const amrex::Box& bx, amrex::FArrayBox &fabox, - amrex::IntVect slice_lo, amrex::Vector geom, - int idir, amrex::IntVect IndType, amrex::RealBox slice_realbox, - int srccomp, int ncomp, int nghost, amrex::RealBox real_box); - -#endif diff --git a/Source/Diagnostics/SliceDiagnostic.cpp b/Source/Diagnostics/SliceDiagnostic.cpp deleted file mode 100644 index bcb6070abdf..00000000000 --- a/Source/Diagnostics/SliceDiagnostic.cpp +++ /dev/null @@ -1,526 +0,0 @@ -/* Copyright 2019-2020 Luca Fedeli, Revathi Jambunathan, Weiqun Zhang - * - * - * This file is part of WarpX. - * - * License: BSD-3-Clause-LBNL - */ -#include "SliceDiagnostic.H" - -#include "Fields.H" -#include "Utils/TextMsg.H" -#include "WarpX.H" - -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include - -using namespace amrex; -using warpx::fields::FieldType; - -/* \brief - * The functions creates the slice for diagnostics based on the user-input. - * The slice can be 1D, 2D, or 3D and it inherits the index type of the underlying data. - * The implementation assumes that the slice is aligned with the coordinate axes. - * The input parameters are modified if the user-input does not comply with requirements of coarsenability or if the slice extent is not contained within the simulation domain. - * First a slice multifab (smf) with cell size equal to that of the simulation grid is created such that it extends from slice.dim_lo to slice.dim_hi and shares the same index space as the source multifab (mf) - * The values are copied from src mf to dst smf using amrex::ParallelCopy - * If interpolation is required, then on the smf, using data points stored in the ghost cells, the data in interpolated. - * If coarsening is required, then a coarse slice multifab is generated (cs_mf) and the - * values of the refined slice (smf) is averaged down to obtain the coarse slice. - * \param mf is the source multifab containing the field data - * \param dom_geom is the geometry of the domain and used in the function to obtain the - * CellSize of the underlying grid. 
- * \param slice_realbox defines the extent of the slice - * \param slice_cr_ratio provides the coarsening ratio for diagnostics - */ - -std::unique_ptr -CreateSlice( const MultiFab& mf, const Vector &dom_geom, - RealBox &slice_realbox, IntVect &slice_cr_ratio ) -{ - std::unique_ptr smf; - std::unique_ptr cs_mf; - - Vector slice_ncells(AMREX_SPACEDIM); - const int nghost = 1; - auto nlevels = static_cast(dom_geom.size()); - const int ncomp = (mf).nComp(); - - WARPX_ALWAYS_ASSERT_WITH_MESSAGE( nlevels==1, - "Slice diagnostics does not work with mesh refinement yet (TO DO)."); - - const auto conversionType = (mf).ixType(); - IntVect SliceType(AMREX_D_DECL(0,0,0)); - for (int idim = 0; idim < AMREX_SPACEDIM; ++idim ) - { - SliceType[idim] = conversionType.nodeCentered(idim); - } - - const RealBox& real_box = dom_geom[0].ProbDomain(); - RealBox slice_cc_nd_box; - const int default_grid_size = 32; - int slice_grid_size = default_grid_size; - - bool interpolate = false; - bool coarsen = false; - - // same index space as domain // - IntVect slice_lo(AMREX_D_DECL(0,0,0)); - IntVect slice_hi(AMREX_D_DECL(1,1,1)); - IntVect interp_lo(AMREX_D_DECL(0,0,0)); - - CheckSliceInput(real_box, slice_cc_nd_box, slice_realbox, slice_cr_ratio, - dom_geom, SliceType, slice_lo, - slice_hi, interp_lo); - int configuration_dim = 0; - // Determine if interpolation is required and number of cells in slice // - for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) { - - // Flag for interpolation if required // - if ( interp_lo[idim] == 1) { - interpolate = true; - } - - // For the case when a dimension is reduced // - if ( ( slice_hi[idim] - slice_lo[idim]) == 1) { - slice_ncells[idim] = 1; - } - else { - slice_ncells[idim] = ( slice_hi[idim] - slice_lo[idim] + 1 ) - / slice_cr_ratio[idim]; - - const int refined_ncells = slice_hi[idim] - slice_lo[idim] + 1 ; - if ( slice_cr_ratio[idim] > 1) { - coarsen = true; - - // modify slice_grid_size if >= refines_cells // - if ( slice_grid_size >= refined_ncells ) { - slice_grid_size = refined_ncells - 1; - } - - } - configuration_dim += 1; - } - } - if (configuration_dim==1) { - ablastr::warn_manager::WMRecordWarning("Diagnostics", - "The slice configuration is 1D and cannot be visualized using yt."); - } - - // Slice generation with index type inheritance // - const Box slice(slice_lo, slice_hi); - - Vector sba(1); - sba[0].define(slice); - sba[0].maxSize(slice_grid_size); - - // Distribution mapping for slice can be different from that of domain // - Vector sdmap(1); - sdmap[0] = DistributionMapping{sba[0]}; - - smf = std::make_unique(amrex::convert(sba[0],SliceType), sdmap[0], - ncomp, nghost); - - // Copy data from domain to slice that has same cell size as that of // - // the domain mf. 
src and dst have the same number of ghost cells // - const amrex::IntVect nghost_vect(AMREX_D_DECL(nghost, nghost, nghost)); - ablastr::utils::communication::ParallelCopy(*smf, mf, 0, 0, ncomp, nghost_vect, nghost_vect, WarpX::do_single_precision_comms); - - // interpolate if required on refined slice // - if (interpolate) { - InterpolateSliceValues( *smf, interp_lo, slice_cc_nd_box, dom_geom, - ncomp, nghost, slice_lo, slice_hi, SliceType, real_box); - } - - - if (!coarsen) { - return smf; - } - else { - Vector crse_ba(1); - crse_ba[0] = sba[0]; - crse_ba[0].coarsen(slice_cr_ratio); - - AMREX_ALWAYS_ASSERT(crse_ba[0].size() == sba[0].size()); - - cs_mf = std::make_unique(amrex::convert(crse_ba[0],SliceType), - sdmap[0], ncomp,nghost); - - const MultiFab& mfSrc = *smf; - MultiFab& mfDst = *cs_mf; - - auto & warpx = WarpX::GetInstance(); - - using ablastr::fields::Direction; - - MFIter mfi_dst(mfDst); - for (MFIter mfi(mfSrc); mfi.isValid(); ++mfi) { - - Array4 const& Src_fabox = mfSrc.const_array(mfi); - - const Box& Dst_bx = mfi_dst.validbox(); - Array4 const& Dst_fabox = mfDst.array(mfi_dst); - - const int scomp = 0; - const int dcomp = 0; - - const IntVect cctype(AMREX_D_DECL(0,0,0)); - if( SliceType==cctype ) { - amrex::amrex_avgdown(Dst_bx, Dst_fabox, Src_fabox, dcomp, scomp, - ncomp, slice_cr_ratio); - } - const IntVect ndtype(AMREX_D_DECL(1,1,1)); - if( SliceType == ndtype ) { - amrex::amrex_avgdown_nodes(Dst_bx, Dst_fabox, Src_fabox, dcomp, - scomp, ncomp, slice_cr_ratio); - } - if( SliceType == warpx.m_fields.get(FieldType::Efield_aux, Direction{0}, 0)->ixType().toIntVect() ) { - amrex::amrex_avgdown_edges(Dst_bx, Dst_fabox, Src_fabox, dcomp, - scomp, ncomp, slice_cr_ratio, 0); - } - if( SliceType == warpx.m_fields.get(FieldType::Efield_aux, Direction{1}, 0)->ixType().toIntVect() ) { - amrex::amrex_avgdown_edges(Dst_bx, Dst_fabox, Src_fabox, dcomp, - scomp, ncomp, slice_cr_ratio, 1); - } - if( SliceType == warpx.m_fields.get(FieldType::Efield_aux, Direction{2}, 0)->ixType().toIntVect() ) { - amrex::amrex_avgdown_edges(Dst_bx, Dst_fabox, Src_fabox, dcomp, - scomp, ncomp, slice_cr_ratio, 2); - } - if( SliceType == warpx.m_fields.get(FieldType::Bfield_aux, Direction{0}, 0)->ixType().toIntVect() ) { - amrex::amrex_avgdown_faces(Dst_bx, Dst_fabox, Src_fabox, dcomp, - scomp, ncomp, slice_cr_ratio, 0); - } - if( SliceType == warpx.m_fields.get(FieldType::Bfield_aux, Direction{1}, 0)->ixType().toIntVect() ) { - amrex::amrex_avgdown_faces(Dst_bx, Dst_fabox, Src_fabox, dcomp, - scomp, ncomp, slice_cr_ratio, 1); - } - if( SliceType == warpx.m_fields.get(FieldType::Bfield_aux, Direction{2}, 0)->ixType().toIntVect() ) { - amrex::amrex_avgdown_faces(Dst_bx, Dst_fabox, Src_fabox, dcomp, - scomp, ncomp, slice_cr_ratio, 2); - } - - if ( mfi_dst.isValid() ) { - ++mfi_dst; - } - - } - return cs_mf; - } -} - - -/* \brief - * This function modifies the slice input parameters under certain conditions. - * The coarsening ratio, slice_cr_ratio is modified if the input is not an exponent of 2. - * for example, if the coarsening ratio is 3, 5 or 6, which is not an exponent of 2, - * then the value of coarsening ratio is modified to the nearest exponent of 2. - * The default value for coarsening ratio is 1. - * slice_realbox.lo and slice_realbox.hi are set equal to the simulation domain lo and hi - * if for the user-input for the slice lo and hi coordinates are outside the domain. 
- * If the slice_realbox.lo and slice_realbox.hi coordinates do not align with the data - * points and the number of cells in that dimension is greater than 1, and if the extent of - * the slice in that dimension is not coarsenable, then the value lo and hi coordinates are - * shifted to the nearest coarsenable point to include some extra data points in the slice. - * If slice_realbox.lo==slice_realbox.hi, then that dimension has only one cell and no - * modifications are made to the value. If the lo and hi do not align with a data point, - * then it is flagged for interpolation. - * \param real_box a Real box defined for the underlying domain. - * \param slice_realbox a Real box for defining the slice dimension. - * \param slice_cc_nd_box a Real box for defining the modified lo and hi of the slice - * such that the coordinates align with the underlying data points. - * If the dimension is reduced to have only one cell, the slice_realbox is not modified and * instead the values are interpolated to the coordinate from the nearest data points. - * \param slice_cr_ratio contains values of the coarsening ratio which may be modified - * if the input values do not satisfy coarsenability conditions. - * \param slice_lo and slice_hi are the index values of the slice - * \param interp_lo are set to 0 or 1 if they are flagged for interpolation. - * The slice shares the same index space as that of the simulation domain. - */ - - -void -CheckSliceInput( const RealBox real_box, RealBox &slice_cc_nd_box, - RealBox &slice_realbox, IntVect &slice_cr_ratio, - Vector dom_geom, IntVect const SliceType, - IntVect &slice_lo, IntVect &slice_hi, IntVect &interp_lo) -{ - - IntVect slice_lo2(AMREX_D_DECL(0,0,0)); - for ( int idim = 0; idim < AMREX_SPACEDIM; ++idim) - { - // Modify coarsening ratio if the input value is not an exponent of 2 for AMR // - if ( slice_cr_ratio[idim] > 0 ) { - const int log_cr_ratio = - static_cast(floor ( log2( double(slice_cr_ratio[idim])))); - slice_cr_ratio[idim] = - static_cast (exp2( double(log_cr_ratio) )); - } - - //// Default coarsening ratio is 1 // - // Modify lo if input is out of bounds // - if ( slice_realbox.lo(idim) < real_box.lo(idim) ) { - slice_realbox.setLo( idim, real_box.lo(idim)); - std::stringstream warnMsg; - warnMsg << " slice lo is out of bounds. " << - " Modified it in dimension " << idim << - " to be aligned with the domain box."; - ablastr::warn_manager::WMRecordWarning("Diagnostics", - warnMsg.str(), ablastr::warn_manager::WarnPriority::low); - } - - // Modify hi if input in out od bounds // - if ( slice_realbox.hi(idim) > real_box.hi(idim) ) { - slice_realbox.setHi( idim, real_box.hi(idim)); - std::stringstream warnMsg; - warnMsg << " slice hi is out of bounds. 
" << - " Modified it in dimension " << idim << - " to be aligned with the domain box."; - ablastr::warn_manager::WMRecordWarning("Diagnostics", - warnMsg.str(), ablastr::warn_manager::WarnPriority::low); - } - - const auto very_small_number = 1E-10; - - // Factor to ensure index values computation depending on index type // - const double fac = ( 1.0 - SliceType[idim] )*dom_geom[0].CellSize(idim)*0.5; - // if dimension is reduced to one cell length // - if ( slice_realbox.hi(idim) - slice_realbox.lo(idim) <= 0) - { - slice_cc_nd_box.setLo( idim, slice_realbox.lo(idim) ); - slice_cc_nd_box.setHi( idim, slice_realbox.hi(idim) ); - - if ( slice_cr_ratio[idim] > 1) { slice_cr_ratio[idim] = 1; } - - // check for interpolation -- compute index lo with floor and ceil - if ( slice_cc_nd_box.lo(idim) - real_box.lo(idim) >= fac ) { - slice_lo[idim] = static_cast( - floor( ( (slice_cc_nd_box.lo(idim) - - (real_box.lo(idim) + fac ) ) - / dom_geom[0].CellSize(idim)) + fac * very_small_number) ); - slice_lo2[idim] = static_cast( - ceil( ( (slice_cc_nd_box.lo(idim) - - (real_box.lo(idim) + fac) ) - / dom_geom[0].CellSize(idim)) - fac * very_small_number) ); - } - else { - slice_lo[idim] = static_cast( - std::round( (slice_cc_nd_box.lo(idim) - - (real_box.lo(idim) ) ) - / dom_geom[0].CellSize(idim)) ); - slice_lo2[idim] = static_cast( - std::ceil((slice_cc_nd_box.lo(idim) - - (real_box.lo(idim) ) ) - / dom_geom[0].CellSize(idim) ) ); - } - - // flag for interpolation -- if reduced dimension location // - // does not align with data point // - if ( slice_lo[idim] == slice_lo2[idim]) { - if ( slice_cc_nd_box.lo(idim) - real_box.lo(idim) < fac ) { - interp_lo[idim] = 1; - } - } - else { - interp_lo[idim] = 1; - } - - // ncells = 1 if dimension is reduced // - slice_hi[idim] = slice_lo[idim] + 1; - } - else - { - // moving realbox.lo and realbox.hi to nearest coarsenable grid point // - auto index_lo = static_cast(floor(((slice_realbox.lo(idim) + very_small_number - - (real_box.lo(idim))) / dom_geom[0].CellSize(idim))) ); - auto index_hi = static_cast(ceil(((slice_realbox.hi(idim) - very_small_number - - (real_box.lo(idim))) / dom_geom[0].CellSize(idim))) ); - - bool modify_cr = true; - - while ( modify_cr ) { - int lo_new = index_lo; - int hi_new = index_hi; - const int mod_lo = index_lo % slice_cr_ratio[idim]; - const int mod_hi = index_hi % slice_cr_ratio[idim]; - modify_cr = false; - - // To ensure that the index.lo is coarsenable // - if ( mod_lo > 0) { - lo_new = index_lo - mod_lo; - } - // To ensure that the index.hi is coarsenable // - if ( mod_hi > 0) { - hi_new = index_hi + (slice_cr_ratio[idim] - mod_hi); - } - - // If modified index.hi is > baselinebox.hi, move the point // - // to the previous coarsenable point // - const auto small_number = 0.01; - if ( (hi_new * dom_geom[0].CellSize(idim)) - > real_box.hi(idim) - real_box.lo(idim) + dom_geom[0].CellSize(idim)*small_number) - { - hi_new = index_hi - mod_hi; - } - - if ( (hi_new - lo_new) == 0 ){ - std::stringstream warnMsg; - warnMsg << " Coarsening ratio " << slice_cr_ratio[idim] << " in dim "<< idim << - "is leading to zero cells for slice." 
<< " Thus reducing cr_ratio by half.\n"; - - ablastr::warn_manager::WMRecordWarning("Diagnostics", - warnMsg.str()); - - slice_cr_ratio[idim] = slice_cr_ratio[idim]/2; - modify_cr = true; - } - - if ( !modify_cr ) { - index_lo = lo_new; - index_hi = hi_new; - } - slice_lo[idim] = index_lo; - slice_hi[idim] = index_hi - 1; // since default is cell-centered - } - slice_realbox.setLo( idim, index_lo * dom_geom[0].CellSize(idim) - + real_box.lo(idim) ); - slice_realbox.setHi( idim, index_hi * dom_geom[0].CellSize(idim) - + real_box.lo(idim) ); - slice_cc_nd_box.setLo( idim, slice_realbox.lo(idim) + Real(fac) ); - slice_cc_nd_box.setHi( idim, slice_realbox.hi(idim) - Real(fac) ); - } - } -} - - -/* \brief - * This function is called if the coordinates of the slice do not align with data points - * \param interp_lo is an IntVect which is flagged as 1, if interpolation - is required in the dimension. - */ -void -InterpolateSliceValues(MultiFab& smf, IntVect interp_lo, RealBox slice_realbox, - const Vector& geom, int ncomp, int nghost, - IntVect slice_lo, IntVect /*slice_hi*/, IntVect SliceType, - const RealBox real_box) -{ - for (MFIter mfi(smf); mfi.isValid(); ++mfi) - { - const Box& bx = mfi.tilebox(); - FArrayBox& fabox = smf[mfi]; - - for ( int idim = 0; idim < AMREX_SPACEDIM; ++idim) { - if ( interp_lo[idim] == 1 ) { - InterpolateLo( bx, fabox, slice_lo, geom, idim, SliceType, - slice_realbox, 0, ncomp, nghost, real_box); - } - } - } - -} - -void -InterpolateLo(const Box& bx, FArrayBox &fabox, IntVect slice_lo, - Vector geom, int idir, IntVect IndType, - RealBox slice_realbox, int srccomp, int ncomp, - int /*nghost*/, const RealBox real_box ) -{ - auto fabarr = fabox.array(); - const auto lo = amrex::lbound(bx); - const auto hi = amrex::ubound(bx); - const double fac = ( 1.0-IndType[idir] )*geom[0].CellSize(idir) * 0.5; - const int imin = slice_lo[idir]; - const double minpos = imin*geom[0].CellSize(idir) + fac + real_box.lo(idir); - const double maxpos = (imin+1)*geom[0].CellSize(idir) + fac + real_box.lo(idir); - const double slice_minpos = slice_realbox.lo(idir) ; - - switch (idir) { - case 0: - { - if ( imin >= lo.x && imin <= lo.x) { - for (int n = srccomp; n < srccomp + ncomp; ++n) { - for (int k = lo.z; k <= hi.z; ++k) { - for (int j = lo.y; j <= hi.y; ++j) { - for (int i = lo.x; i <= hi.x; ++i) { - const double minval = fabarr(i,j,k,n); - const double maxval = fabarr(i+1,j,k,n); - const double ratio = (maxval - minval) / (maxpos - minpos); - const double xdiff = slice_minpos - minpos; - const double newval = minval + xdiff * ratio; - fabarr(i,j,k,n) = static_cast(newval); - } - } - } - } - } - break; - } - case 1: - { - if ( imin >= lo.y && imin <= lo.y) { - for (int n = srccomp; n < srccomp+ncomp; ++n) { - for (int k = lo.z; k <= hi.z; ++k) { - for (int j = lo.y; j <= hi.y; ++j) { - for (int i = lo.x; i <= hi.x; ++i) { - const double minval = fabarr(i,j,k,n); - const double maxval = fabarr(i,j+1,k,n); - const double ratio = (maxval - minval) / (maxpos - minpos); - const double xdiff = slice_minpos - minpos; - const double newval = minval + xdiff * ratio; - fabarr(i,j,k,n) = static_cast(newval); - } - } - } - } - } - break; - } - case 2: - { - if ( imin >= lo.z && imin <= lo.z) { - for (int n = srccomp; n < srccomp+ncomp; ++n) { - for (int k = lo.z; k <= hi.z; ++k) { - for (int j = lo.y; j <= hi.y; ++j) { - for (int i = lo.x; i <= hi.x; ++i) { - const double minval = fabarr(i,j,k,n); - const double maxval = fabarr(i,j,k+1,n); - const double ratio = (maxval - minval) / (maxpos - 
minpos);
-                            const double xdiff  = slice_minpos - minpos;
-                            const double newval = minval + xdiff * ratio;
-                            fabarr(i,j,k,n) = static_cast<Real>(newval);
-                        }
-                    }
-                }
-            }
-        }
-        break;
-    }
-
-    }
-
-}

From 554a2fff6ee4ce9793827d07d2f76d98f5f027c4 Mon Sep 17 00:00:00 2001
From: Luca Fedeli
Date: Wed, 29 Jan 2025 18:45:09 +0100
Subject: [PATCH 06/58] WarpX class: AllocInitMultiFab and imultifab_map no
 longer static (#5614)

This PR contributes to reducing the usage of static member functions and
static variables in the WarpX class.

**Note:** I have observed a
[failure](https://dev.azure.com/ECP-WarpX/WarpX/_build/results?buildId=20508&view=logs&jobId=5dcb75fd-7a98-5ebf-88d6-c1115a1d979a&j=5dcb75fd-7a98-5ebf-88d6-c1115a1d979a&t=f00e0ae1-a8d3-5558-a3f3-078bee0de0f0)
of the test `test_2d_embedded_circle`. This failure does not seem to be
related to the PR. I have observed that sometimes `embedded_circle` tests
fail for apparently random reasons. We should look into that, since it might
be a race condition or an undefined behavior issue.
---
 Source/WarpX.H   | 4 ++--
 Source/WarpX.cpp | 2 --
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/Source/WarpX.H b/Source/WarpX.H
index fec12affecd..995d7edd891 100644
--- a/Source/WarpX.H
+++ b/Source/WarpX.H
@@ -404,7 +404,7 @@ public:
      * \param[in] name The name of the iMultiFab to use in the map
      * \param[in] initial_value The optional initial value
      */
-    static void AllocInitMultiFab (
+    void AllocInitMultiFab (
         std::unique_ptr<amrex::iMultiFab>& mf,
         const amrex::BoxArray& ba,
         const amrex::DistributionMapping& dm,
@@ -417,7 +417,7 @@ public:
     // Maps of all of the iMultiFabs used (this can include MFs from other classes)
     // This is a convenience for the Python interface, allowing all iMultiFabs
     // to be easily referenced from Python.
-    static std::map<std::string, amrex::iMultiFab *> imultifab_map;
+    std::map<std::string, amrex::iMultiFab *> imultifab_map;

     /**
      * \brief
diff --git a/Source/WarpX.cpp b/Source/WarpX.cpp
index 9442fed0596..a1eac8d6080 100644
--- a/Source/WarpX.cpp
+++ b/Source/WarpX.cpp
@@ -176,8 +176,6 @@ bool WarpX::do_dynamic_scheduling = true;
 bool WarpX::do_multi_J = false;
 int WarpX::do_multi_J_n_depositions;

-std::map<std::string, amrex::iMultiFab *> WarpX::imultifab_map;
-
 IntVect WarpX::filter_npass_each_dir(1);

 int WarpX::n_field_gather_buffer = -1;

From 4f0bc75fff56c712b4155a6c269d64d786178b5d Mon Sep 17 00:00:00 2001
From: Axel Huebl
Date: Wed, 29 Jan 2025 10:18:43 -0800
Subject: [PATCH 07/58] CMake/CTest: Opt-in Disable Signal Handling (#5550)

In IDEs, we want to attach debuggers to CTest runs. This needs an option to
[disable signal handling from AMReX](https://amrex-codes.github.io/amrex/docs_html/Debugging.html#breaking-into-debuggers)
to work.

---------

Co-authored-by: Edoardo Zoni
---
 .azure-pipelines.yml          |  2 +-
 CMakeLists.txt                | 19 ++++++++++---------
 Docs/source/install/cmake.rst |  4 ++++
 Examples/CMakeLists.txt       |  3 +++
 4 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/.azure-pipelines.yml b/.azure-pipelines.yml
index badedcb994c..77cc75a0264 100644
--- a/.azure-pipelines.yml
+++ b/.azure-pipelines.yml
@@ -141,7 +141,7 @@ jobs:
           df -h
           # configure
           export AMReX_CMAKE_FLAGS="-DAMReX_ASSERTIONS=ON -DAMReX_TESTING=ON"
-          export WARPX_TEST_FLAGS="-DWarpX_TEST_CLEANUP=ON -DWarpX_TEST_FPETRAP=ON -DWarpX_TEST_DEBUG=ON"
+          export WARPX_TEST_FLAGS="-DWarpX_TEST_CLEANUP=ON -DWarpX_TEST_FPETRAP=ON -DWarpX_BACKTRACE_INFO=ON"
          cmake -S . 
-B build \ ${AMReX_CMAKE_FLAGS} \ ${WARPX_CMAKE_FLAGS} \ diff --git a/CMakeLists.txt b/CMakeLists.txt index f1dcece8ce1..24e9338982e 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -81,18 +81,19 @@ option(WarpX_QED_TABLE_GEN "QED table generation (requires PICSAR and Boost)" option(WarpX_QED_TOOLS "Build external tool to generate QED lookup tables (requires PICSAR and Boost)" OFF) -# Advanced option to automatically clean up CI test directories -option(WarpX_TEST_CLEANUP "Clean up CI test directories" OFF) +# Advanced option to run tests +option(WarpX_TEST_CLEANUP "Clean up automated test directories" OFF) +option(WarpX_TEST_DEBUGGER "Run automated tests without AMReX signal handling (to attach debuggers)" OFF) +option(WarpX_TEST_FPETRAP "Run automated tests with FPE-trapping runtime parameters" OFF) mark_as_advanced(WarpX_TEST_CLEANUP) - -# Advanced option to run CI tests with FPE-trapping runtime parameters -option(WarpX_TEST_FPETRAP "Run CI tests with FPE-trapping runtime parameters" OFF) +mark_as_advanced(WarpX_TEST_DEBUGGER) mark_as_advanced(WarpX_TEST_FPETRAP) -# Advanced option to run CI tests with the -g compile option -option(WarpX_TEST_DEBUG "Run CI tests with the -g compile option" OFF) -mark_as_advanced(WarpX_TEST_DEBUG) -if(WarpX_TEST_DEBUG) +# Advanced option to compile with the -g1 option for minimal debug symbols +# (useful to see, e.g., line numbers in backtraces) +option(WarpX_BACKTRACE_INFO "Compile with -g1 for minimal debug symbols (currently used in CI tests)" OFF) +mark_as_advanced(WarpX_BACKTRACE_INFO) +if(WarpX_BACKTRACE_INFO) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g1") endif() diff --git a/Docs/source/install/cmake.rst b/Docs/source/install/cmake.rst index 5c02fb03b9e..fbdc6809853 100644 --- a/Docs/source/install/cmake.rst +++ b/Docs/source/install/cmake.rst @@ -143,6 +143,10 @@ CMake Option Default & Values Des ``WarpX_pybind11_repo`` ``https://github.com/pybind/pybind11.git`` Repository URI to pull and build pybind11 from ``WarpX_pybind11_branch`` *we set and maintain a compatible commit* Repository branch for ``WarpX_pybind11_repo`` ``WarpX_pybind11_internal`` **ON**/OFF Needs a pre-installed pybind11 library if set to ``OFF`` +``WarpX_TEST_CLEANUP`` ON/**OFF** Clean up automated test directories +``WarpX_TEST_DEBUGGER`` ON/**OFF** Run automated tests without AMReX signal handling (to attach debuggers) +``WarpX_TEST_FPETRAP`` ON/**OFF** Run automated tests with FPE-trapping runtime parameters +``WarpX_BACKTRACE_INFO`` ON/**OFF** Compile with -g1 for minimal debug symbols (currently used in CI tests) ============================= ============================================== =========================================================== For example, one can also build against a local AMReX copy. diff --git a/Examples/CMakeLists.txt b/Examples/CMakeLists.txt index c4303aaee0b..b77a3790c36 100644 --- a/Examples/CMakeLists.txt +++ b/Examples/CMakeLists.txt @@ -159,6 +159,9 @@ function(add_warpx_test "amrex.fpe_trap_zero = 1" ) endif() + if(WarpX_TEST_DEBUGGER) + set(runtime_params_fpetrap "amrex.signal_handling = 0") + endif() add_test( NAME ${name}.run COMMAND From ceb172eaf708afe0f6e7c12d833e161756188fb6 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Wed, 29 Jan 2025 10:19:19 -0800 Subject: [PATCH 08/58] Doc: Update Spack Instructions (#5587) Update the Spack instructions to reflect our early 2024 change to include the Python bindings as a variant of the `warpx` package and remove the `py-warpx` package. 
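For users migrating from the removed package, the install command changes roughly as follows (a sketch based on the updated instructions below; exact variant spellings depend on the Spack recipe in use):

```console
# old: spack install py-warpx ^warpx dims=2 compute=cuda
spack install warpx +python compute=cuda
spack load warpx +python
```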
Close #5563
---
 Docs/source/install/users.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/Docs/source/install/users.rst b/Docs/source/install/users.rst
index 47378bbf6d6..650cbacd4d0 100644
--- a/Docs/source/install/users.rst
+++ b/Docs/source/install/users.rst
@@ -79,7 +79,7 @@ Using the Spack Package
 -----------------------

 Packages for WarpX are available via the `Spack <https://spack.io>`__ package manager.
-The package ``warpx`` installs executables and the package ``py-warpx`` includes Python bindings, i.e. `PICMI <https://picmi-standard.github.io>`_.
+The package ``warpx`` installs executables and the variant ``warpx +python`` also includes Python bindings, i.e. `PICMI <https://picmi-standard.github.io>`__.

 .. code-block:: bash

@@ -88,11 +88,11 @@ The package ``warpx`` installs executables and the package ``py-warpx`` includes
    spack buildcache keys --install --trust

    # see `spack info py-warpx` for build options.
-   # optional arguments:  -mpi ^warpx dims=2 compute=cuda
-   spack install py-warpx
-   spack load py-warpx
+   # optional arguments:  -mpi compute=cuda
+   spack install warpx +python
+   spack load warpx +python

-See ``spack info warpx`` or ``spack info py-warpx`` and `the official Spack tutorial <https://spack-tutorial.readthedocs.io>`__ for more information.
+See ``spack info warpx`` and `the official Spack tutorial <https://spack-tutorial.readthedocs.io>`__ for more information.

 .. _install-pypi:

From 98a14f2fda757a8b9f482e532b9a297c9705fee8 Mon Sep 17 00:00:00 2001
From: Weiqun Zhang
Date: Thu, 30 Jan 2025 16:05:59 -0800
Subject: [PATCH 09/58] Flux injection from EB: Pick a random point instead of
 the centroid (#5493)

Co-authored-by: Remi Lehe
---
 Examples/Tests/flux_injection/CMakeLists.txt  | 12 ++---
 .../analysis_flux_injection_from_eb.py        |  3 +-
 .../Tests/flux_injection/inputs_base_from_eb  |  4 +-
 .../inputs_test_2d_flux_injection_from_eb     |  2 +-
 .../inputs_test_3d_flux_injection_from_eb     |  2 +-
 .../inputs_test_rz_flux_injection_from_eb     |  2 +-
 .../test_2d_flux_injection_from_eb.json       | 12 ++---
 .../test_3d_flux_injection_from_eb.json       | 16 +++----
 .../test_rz_flux_injection_from_eb.json       | 16 +++----
 Source/Particles/AddPlasmaUtilities.H         | 39 ++++++++--------
 .../Particles/PhysicalParticleContainer.cpp   | 46 ++++++++-----------
 11 files changed, 72 insertions(+), 82 deletions(-)

diff --git a/Examples/Tests/flux_injection/CMakeLists.txt b/Examples/Tests/flux_injection/CMakeLists.txt
index 000d5c74917..390c76ec58e 100644
--- a/Examples/Tests/flux_injection/CMakeLists.txt
+++ b/Examples/Tests/flux_injection/CMakeLists.txt
@@ -26,8 +26,8 @@ add_warpx_test(
     3 # dims
     2 # nprocs
     inputs_test_3d_flux_injection_from_eb # inputs
-    "analysis_flux_injection_from_eb.py diags/diag1000010" # analysis
-    "analysis_default_regression.py --path diags/diag1000010" # checksum
+    "analysis_flux_injection_from_eb.py diags/diag1000020" # analysis
+    "analysis_default_regression.py --path diags/diag1000020" # checksum
     OFF # dependency
 )

@@ -36,8 +36,8 @@ add_warpx_test(
     RZ # dims
     2 # nprocs
     inputs_test_rz_flux_injection_from_eb # inputs
-    "analysis_flux_injection_from_eb.py diags/diag1000010" # analysis
-    "analysis_default_regression.py --path diags/diag1000010" # checksum
+    "analysis_flux_injection_from_eb.py diags/diag1000020" # analysis
+    "analysis_default_regression.py --path diags/diag1000020" # checksum
     OFF # dependency
 )

@@ -46,7 +46,7 @@ add_warpx_test(
     2 # dims
     2 # nprocs
     inputs_test_2d_flux_injection_from_eb # inputs
-    "analysis_flux_injection_from_eb.py diags/diag1000010" # analysis
-    "analysis_default_regression.py --path diags/diag1000010" # checksum
+    "analysis_flux_injection_from_eb.py diags/diag1000020" # analysis
+    
"analysis_default_regression.py --path diags/diag1000020" # checksum OFF # dependency ) diff --git a/Examples/Tests/flux_injection/analysis_flux_injection_from_eb.py b/Examples/Tests/flux_injection/analysis_flux_injection_from_eb.py index 0f2a37eea71..96488fd7e71 100755 --- a/Examples/Tests/flux_injection/analysis_flux_injection_from_eb.py +++ b/Examples/Tests/flux_injection/analysis_flux_injection_from_eb.py @@ -147,7 +147,8 @@ def compare_gaussian_flux(u, w, u_th, u_m, label=""): wy = nz * vx - nx * vz wz = nx * vy - ny * vx u_perp2 = ux * wx + uy * wy + uz * wz -compare_gaussian(u_perp2, w, u_th=0.01, label="u_perp") +compare_gaussian(u_perp2, w, u_th=0.01, label="u_perp2") +plt.legend() plt.tight_layout() plt.savefig("Distribution.png") diff --git a/Examples/Tests/flux_injection/inputs_base_from_eb b/Examples/Tests/flux_injection/inputs_base_from_eb index 87b9c32592b..618fd1c941a 100644 --- a/Examples/Tests/flux_injection/inputs_base_from_eb +++ b/Examples/Tests/flux_injection/inputs_base_from_eb @@ -1,5 +1,5 @@ # Maximum number of time steps -max_step = 10 +max_step = 20 # The lo and hi ends of grids are multipliers of blocking factor amr.blocking_factor = 8 @@ -13,7 +13,7 @@ amr.max_level = 0 # Deactivate Maxwell solver algo.maxwell_solver = none -warpx.const_dt = 1e-9 +warpx.const_dt = 0.5e-9 # Embedded boundary warpx.eb_implicit_function = "-(x**2+y**2+z**2-2**2)" diff --git a/Examples/Tests/flux_injection/inputs_test_2d_flux_injection_from_eb b/Examples/Tests/flux_injection/inputs_test_2d_flux_injection_from_eb index f2e6f177887..291ef329ad6 100644 --- a/Examples/Tests/flux_injection/inputs_test_2d_flux_injection_from_eb +++ b/Examples/Tests/flux_injection/inputs_test_2d_flux_injection_from_eb @@ -1,7 +1,7 @@ FILE = inputs_base_from_eb # number of grid points -amr.n_cell = 16 16 +amr.n_cell = 32 32 # Geometry geometry.dims = 2 diff --git a/Examples/Tests/flux_injection/inputs_test_3d_flux_injection_from_eb b/Examples/Tests/flux_injection/inputs_test_3d_flux_injection_from_eb index 81ddc039977..59db133e484 100644 --- a/Examples/Tests/flux_injection/inputs_test_3d_flux_injection_from_eb +++ b/Examples/Tests/flux_injection/inputs_test_3d_flux_injection_from_eb @@ -1,7 +1,7 @@ FILE = inputs_base_from_eb # number of grid points -amr.n_cell = 16 16 16 +amr.n_cell = 32 32 32 # Geometry geometry.dims = 3 diff --git a/Examples/Tests/flux_injection/inputs_test_rz_flux_injection_from_eb b/Examples/Tests/flux_injection/inputs_test_rz_flux_injection_from_eb index 4c970257f57..c206a154646 100644 --- a/Examples/Tests/flux_injection/inputs_test_rz_flux_injection_from_eb +++ b/Examples/Tests/flux_injection/inputs_test_rz_flux_injection_from_eb @@ -1,7 +1,7 @@ FILE = inputs_base_from_eb # number of grid points -amr.n_cell = 8 16 +amr.n_cell = 16 32 # Geometry geometry.dims = RZ diff --git a/Regression/Checksum/benchmarks_json/test_2d_flux_injection_from_eb.json b/Regression/Checksum/benchmarks_json/test_2d_flux_injection_from_eb.json index da993c9ef4b..d4fe12f759f 100644 --- a/Regression/Checksum/benchmarks_json/test_2d_flux_injection_from_eb.json +++ b/Regression/Checksum/benchmarks_json/test_2d_flux_injection_from_eb.json @@ -1,11 +1,11 @@ { "lev=0": {}, "electron": { - "particle_momentum_x": 3.4911323396038835e-19, - "particle_momentum_y": 2.680312173420972e-20, - "particle_momentum_z": 3.4918430443688734e-19, - "particle_position_x": 17950.08139982036, - "particle_position_y": 17949.47183079554, - "particle_weight": 6.269e-08 + "particle_momentum_x": 1.4013860393698154e-18, + 
"particle_momentum_y": 1.0934049057929508e-19, + "particle_momentum_z": 1.4066623146535866e-18, + "particle_position_x": 72129.9049362857, + "particle_position_y": 72178.76505490103, + "particle_weight": 6.279375e-08 } } diff --git a/Regression/Checksum/benchmarks_json/test_3d_flux_injection_from_eb.json b/Regression/Checksum/benchmarks_json/test_3d_flux_injection_from_eb.json index 15b6c7b602c..c1c888ff808 100644 --- a/Regression/Checksum/benchmarks_json/test_3d_flux_injection_from_eb.json +++ b/Regression/Checksum/benchmarks_json/test_3d_flux_injection_from_eb.json @@ -1,12 +1,12 @@ { "lev=0": {}, "electron": { - "particle_momentum_x": 2.1855512033870577e-18, - "particle_momentum_y": 2.1826030840183147e-18, - "particle_momentum_z": 2.181852403122796e-18, - "particle_position_x": 111042.81925863726, - "particle_position_y": 111012.52928910403, - "particle_position_z": 111015.90903542604, - "particle_weight": 2.4775750000000003e-07 + "particle_momentum_x": 1.7587772989573373e-17, + "particle_momentum_y": 1.7608560965806728e-17, + "particle_momentum_z": 1.7596701993624562e-17, + "particle_position_x": 902783.9285213391, + "particle_position_y": 902981.7980528818, + "particle_position_z": 902777.1246066706, + "particle_weight": 2.503818749999996e-07 } -} \ No newline at end of file +} diff --git a/Regression/Checksum/benchmarks_json/test_rz_flux_injection_from_eb.json b/Regression/Checksum/benchmarks_json/test_rz_flux_injection_from_eb.json index fb7142afed0..f8043c5c3e2 100644 --- a/Regression/Checksum/benchmarks_json/test_rz_flux_injection_from_eb.json +++ b/Regression/Checksum/benchmarks_json/test_rz_flux_injection_from_eb.json @@ -1,12 +1,12 @@ { "lev=0": {}, "electron": { - "particle_momentum_x": 3.3665608248716305e-19, - "particle_momentum_y": 3.392690322852239e-19, - "particle_momentum_z": 5.254577143779578e-19, - "particle_position_x": 26933.772112044953, - "particle_position_y": 26926.994273876346, - "particle_theta": 29492.77423173835, - "particle_weight": 2.4953304765944705e-07 + "particle_momentum_x": 1.3547613622259754e-18, + "particle_momentum_y": 1.3539614160696825e-18, + "particle_momentum_z": 2.102305484242805e-18, + "particle_position_x": 108281.74349700565, + "particle_position_y": 108222.91506078152, + "particle_theta": 118597.06004310239, + "particle_weight": 2.5087578786544294e-07 } -} \ No newline at end of file +} diff --git a/Source/Particles/AddPlasmaUtilities.H b/Source/Particles/AddPlasmaUtilities.H index 824e3e10955..7b8e4e58105 100644 --- a/Source/Particles/AddPlasmaUtilities.H +++ b/Source/Particles/AddPlasmaUtilities.H @@ -111,28 +111,24 @@ AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE amrex::Real compute_scale_fac_area_eb ( const amrex::GpuArray& dx, const amrex::Real num_ppc_real, - amrex::Array4 const& eb_bnd_normal_arr, - int i, int j, int k ) { + AMREX_D_DECL(const amrex::Real n0, + const amrex::Real n1, + const amrex::Real n2)) { using namespace amrex::literals; // Scale particle weight by the area of the emitting surface, within one cell // By definition, eb_bnd_area_arr is normalized (unitless). // Here we undo the normalization (i.e. 
multiply by the surface used for normalization in amrex:
    // see https://amrex-codes.github.io/amrex/docs_html/EB.html#embedded-boundary-data-structures)
 #if defined(WARPX_DIM_3D)
-    const amrex::Real nx = eb_bnd_normal_arr(i,j,k,0);
-    const amrex::Real ny = eb_bnd_normal_arr(i,j,k,1);
-    const amrex::Real nz = eb_bnd_normal_arr(i,j,k,2);
-    amrex::Real scale_fac = std::sqrt(amrex::Math::powi<2>(nx*dx[1]*dx[2]) +
-                                      amrex::Math::powi<2>(ny*dx[0]*dx[2]) +
-                                      amrex::Math::powi<2>(nz*dx[0]*dx[1]));
+    amrex::Real scale_fac = std::sqrt(amrex::Math::powi<2>(n0*dx[1]*dx[2]) +
+                                      amrex::Math::powi<2>(n1*dx[0]*dx[2]) +
+                                      amrex::Math::powi<2>(n2*dx[0]*dx[1]));
 #elif defined(WARPX_DIM_RZ) || defined(WARPX_DIM_XZ)
-    const amrex::Real nx = eb_bnd_normal_arr(i,j,k,0);
-    const amrex::Real nz = eb_bnd_normal_arr(i,j,k,1);
-    amrex::Real scale_fac = std::sqrt(amrex::Math::powi<2>(nx*dx[1]) +
-                                      amrex::Math::powi<2>(nz*dx[0]));
+    amrex::Real scale_fac = std::sqrt(amrex::Math::powi<2>(n0*dx[1]) +
+                                      amrex::Math::powi<2>(n1*dx[0]));
 #else
-    amrex::ignore_unused(dx, eb_bnd_normal_arr, i, j, k);
+    amrex::ignore_unused(dx, AMREX_D_DECL(n0,n1,n2));
     amrex::Real scale_fac = 0.0_rt;
 #endif
     // Do not multiply by eb_bnd_area_arr(i,j,k) here because this
@@ -159,8 +155,9 @@ amrex::Real compute_scale_fac_area_eb (
 AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE
 void rotate_momentum_eb (
     PDim3 & pu,
-    amrex::Array4<const amrex::Real> const& eb_bnd_normal_arr,
-    int i, int j, int k )
+    AMREX_D_DECL(const amrex::Real n0,
+                 const amrex::Real n1,
+                 const amrex::Real n2))
 {
     using namespace amrex::literals;

@@ -168,9 +165,9 @@ void rotate_momentum_eb (
     // The minus sign below takes into account the fact that eb_bnd_normal_arr
     // points towards the covered region, while particles are to be emitted
     // *away* from the covered region.
-    amrex::Real const nx = -eb_bnd_normal_arr(i,j,k,0);
-    amrex::Real const ny = -eb_bnd_normal_arr(i,j,k,1);
-    amrex::Real const nz = -eb_bnd_normal_arr(i,j,k,2);
+    amrex::Real const nx = -n0;
+    amrex::Real const ny = -n1;
+    amrex::Real const nz = -n2;

     // Rotate the momentum in theta and phi
     amrex::Real const cos_theta = nz;
@@ -194,14 +191,14 @@ void rotate_momentum_eb (
     // The minus sign below takes into account the fact that eb_bnd_normal_arr
     // points towards the covered region, while particles are to be emitted
    // *away* from the covered region. 
-    amrex::Real const sin_theta = -eb_bnd_normal_arr(i,j,k,0);
-    amrex::Real const cos_theta = -eb_bnd_normal_arr(i,j,k,1);
+    amrex::Real const sin_theta = -n0;
+    amrex::Real const cos_theta = -n1;
     amrex::Real const uz = pu.z*cos_theta - pu.x*sin_theta;
     amrex::Real const ux = pu.x*cos_theta + pu.z*sin_theta;
     pu.x = ux;
     pu.z = uz;
 #else
-    amrex::ignore_unused(pu, eb_bnd_normal_arr, i, j, k);
+    amrex::ignore_unused(pu, AMREX_D_DECL(n0,n1,n2));
 #endif
 }
 #endif //AMREX_USE_EB
diff --git a/Source/Particles/PhysicalParticleContainer.cpp b/Source/Particles/PhysicalParticleContainer.cpp
index baac138dd38..9bf24e659e0 100644
--- a/Source/Particles/PhysicalParticleContainer.cpp
+++ b/Source/Particles/PhysicalParticleContainer.cpp
@@ -1351,16 +1351,11 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector,
 #ifdef AMREX_USE_EB
     bool const inject_from_eb = plasma_injector.m_inject_from_eb; // whether to inject from EB or from a plane
     // Extract data structures for embedded boundaries
+    amrex::EBFArrayBoxFactory const* eb_factory = nullptr;
     amrex::FabArray<amrex::EBCellFlagFab> const* eb_flag = nullptr;
-    amrex::MultiCutFab const* eb_bnd_area = nullptr;
-    amrex::MultiCutFab const* eb_bnd_normal = nullptr;
-    amrex::MultiCutFab const* eb_bnd_cent = nullptr;
     if (inject_from_eb) {
-        amrex::EBFArrayBoxFactory const& eb_box_factory = WarpX::GetInstance().fieldEBFactory(0);
-        eb_flag = &eb_box_factory.getMultiEBCellFlagFab();
-        eb_bnd_area = &eb_box_factory.getBndryArea();
-        eb_bnd_normal = &eb_box_factory.getBndryNormal();
-        eb_bnd_cent = &eb_box_factory.getBndryCent();
+        eb_factory = &(WarpX::GetInstance().fieldEBFactory(0));
+        eb_flag = &(eb_factory->getMultiEBCellFlagFab());
     }
 #endif

@@ -1456,17 +1451,8 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector,
         }

 #ifdef AMREX_USE_EB
-        // Extract data structures for embedded boundaries
-        amrex::Array4<const typename amrex::FabArray<amrex::EBCellFlagFab>::value_type> eb_flag_arr;
-        amrex::Array4<const amrex::Real> eb_bnd_area_arr;
-        amrex::Array4<const amrex::Real> eb_bnd_normal_arr;
-        amrex::Array4<const amrex::Real> eb_bnd_cent_arr;
-        if (inject_from_eb) {
-            eb_flag_arr = eb_flag->array(mfi);
-            eb_bnd_area_arr = eb_bnd_area->array(mfi);
-            eb_bnd_normal_arr = eb_bnd_normal->array(mfi);
-            eb_bnd_cent_arr = eb_bnd_cent->array(mfi);
-        }
+        auto eb_flag_arr = eb_flag ? eb_flag->const_array(mfi) : Array4<EBCellFlag const>{};
+        auto eb_data = eb_factory ? 
eb_factory->getEBData(mfi) : EBData{};
 #endif

         amrex::ParallelForRNG(overlap_box,
         [=] AMREX_GPU_DEVICE (int i, int j, int k, amrex::RandomEngine const& engine) noexcept
@@ -1482,7 +1468,7 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector,
                 // Skip cells that are not partially covered by the EB
                 if (eb_flag_arr(i,j,k).isRegular() || eb_flag_arr(i,j,k).isCovered()) { return; }
                 // Scale by the (normalized) area of the EB surface in this cell
-                num_ppc_real_in_this_cell *= eb_bnd_area_arr(i,j,k);
+                num_ppc_real_in_this_cell *= eb_data.get<amrex::EBData_t::bndryarea>(i,j,k);
             }
 #else
             amrex::Real const num_ppc_real_in_this_cell = num_ppc_real; // user input: number of macroparticles per cell
@@ -1574,7 +1560,10 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector,
             Real scale_fac;
 #ifdef AMREX_USE_EB
             if (inject_from_eb) {
-                scale_fac = compute_scale_fac_area_eb(dx, num_ppc_real, eb_bnd_normal_arr, i, j, k );
+                scale_fac = compute_scale_fac_area_eb(dx, num_ppc_real,
+                    AMREX_D_DECL(eb_data.get<amrex::EBData_t::bndrynorm>(i,j,k,0),
+                                 eb_data.get<amrex::EBData_t::bndrynorm>(i,j,k,1),
+                                 eb_data.get<amrex::EBData_t::bndrynorm>(i,j,k,2)));
             } else
 #endif
             {
@@ -1595,14 +1584,15 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector,
             XDim3 r;
 #ifdef AMREX_USE_EB
             if (inject_from_eb) {
+                auto const& pt = eb_data.randomPointOnEB(i,j,k,engine);
 #if defined(WARPX_DIM_3D)
-                pos.x = overlap_corner[0] + (iv[0] + 0.5_rt + eb_bnd_cent_arr(i,j,k,0))*dx[0];
-                pos.y = overlap_corner[1] + (iv[1] + 0.5_rt + eb_bnd_cent_arr(i,j,k,1))*dx[1];
-                pos.z = overlap_corner[2] + (iv[2] + 0.5_rt + eb_bnd_cent_arr(i,j,k,2))*dx[2];
+                pos.x = overlap_corner[0] + (iv[0] + 0.5_rt + pt[0])*dx[0];
+                pos.y = overlap_corner[1] + (iv[1] + 0.5_rt + pt[1])*dx[1];
+                pos.z = overlap_corner[2] + (iv[2] + 0.5_rt + pt[2])*dx[2];
 #elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
-                pos.x = overlap_corner[0] + (iv[0] + 0.5_rt + eb_bnd_cent_arr(i,j,k,0))*dx[0];
+                pos.x = overlap_corner[0] + (iv[0] + 0.5_rt + pt[0])*dx[0];
                 pos.y = 0.0_rt;
-                pos.z = overlap_corner[1] + (iv[1] + 0.5_rt + eb_bnd_cent_arr(i,j,k,1))*dx[1];
+                pos.z = overlap_corner[1] + (iv[1] + 0.5_rt + pt[1])*dx[1];
 #endif
             } else
 #endif
             {
@@ -1661,7 +1651,9 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector,
                 // Injection from EB: rotate momentum according to the normal of the EB surface
                 // (The above code initialized the momentum by assuming that z is the direction
                 // normal to the EB surface. Thus we need to rotate from z to the normal.)
-                rotate_momentum_eb(pu, eb_bnd_normal_arr, i, j , k);
+                rotate_momentum_eb(pu, AMREX_D_DECL(eb_data.get<amrex::EBData_t::bndrynorm>(i,j,k,0),
+                                                    eb_data.get<amrex::EBData_t::bndrynorm>(i,j,k,1),
+                                                    eb_data.get<amrex::EBData_t::bndrynorm>(i,j,k,2)));
             }
 #endif

From 28d8b23ab1e5f31430ba272d9f4ae670af3171cf Mon Sep 17 00:00:00 2001
From: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
Date: Fri, 31 Jan 2025 09:10:16 -0800
Subject: [PATCH 10/58] Add reference for new article using WarpX [Tyushev
 (2025)] (#5627)

Just adding a new article that uses WarpX to the Science Highlights section.

Signed-off-by: roelof-groenewald
---
 Docs/source/highlights.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Docs/source/highlights.rst b/Docs/source/highlights.rst
index 300d94149f8..2e8eeffbef2 100644
--- a/Docs/source/highlights.rst
+++ b/Docs/source/highlights.rst
@@ -199,6 +199,11 @@ Related works using WarpX:
 Nuclear Fusion and Plasma Confinement
 *************************************

+#. Tyushev M., Papahn Zadeh M., Chopra N. S., Raitses Y., Romadanov I., Likhanskii A., Fubiani G., Garrigues L., Groenewald R. 
and Smolyakov A.
+   **Mode transitions and spoke structures in E×B Penning discharge**.
+   Physics of Plasmas **32**, 013511, 2025.
+   `DOI:10.1063/5.0238577 <https://doi.org/10.1063/5.0238577>`__
+
 #. Scheffel J. and Jäderberg J. and Bendtz K. and Holmberg R. and Lindvall K.,
    **Axial Confinement in the Novatron Mirror Machine**.
    arXiv 2410.20134

From 43d5aa6f5b751b8fa2b0272fc5bf789e70e17abb Mon Sep 17 00:00:00 2001
From: Luca Fedeli
Date: Fri, 31 Jan 2025 18:10:51 +0100
Subject: [PATCH 11/58] WarpX class: remove unused static variable (#5626)

`static bool do_device_synchronize;` is unused. Therefore this PR removes it
from the `WarpX.H` header.
---
 Source/WarpX.H | 2 --
 1 file changed, 2 deletions(-)

diff --git a/Source/WarpX.H b/Source/WarpX.H
index 995d7edd891..ee49be787a9 100644
--- a/Source/WarpX.H
+++ b/Source/WarpX.H
@@ -371,8 +371,6 @@ public:
     static bool do_multi_J;
     static int do_multi_J_n_depositions;

-    static bool do_device_synchronize;
-
     //! With mesh refinement, particles located inside a refinement patch, but within
     //! #n_field_gather_buffer cells of the edge of the patch, will gather the fields
     //! from the lower refinement level instead of the refinement patch itself

From 2996dd0fa2bb992199589fcf55c280a56e0b2e6e Mon Sep 17 00:00:00 2001
From: Axel Huebl
Date: Fri, 31 Jan 2025 09:27:30 -0800
Subject: [PATCH 12/58] AMReX/pyAMReX/PICSAR: Weekly Update (#5613)

Weekly update to latest AMReX.
Weekly update to latest pyAMReX.
Weekly update to latest PICSAR.

```console
./Tools/Release/updateAMReX.py
./Tools/Release/updatepyAMReX.py
./Tools/Release/updatePICSAR.py
```

---------

Signed-off-by: Axel Huebl
---
 .github/workflows/cuda.yml       | 2 +-
 cmake/dependencies/AMReX.cmake   | 2 +-
 cmake/dependencies/PICSAR.cmake  | 2 +-
 cmake/dependencies/pyAMReX.cmake | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/.github/workflows/cuda.yml b/.github/workflows/cuda.yml
index 0d8ad0e0566..12a68d327f7 100644
--- a/.github/workflows/cuda.yml
+++ b/.github/workflows/cuda.yml
@@ -127,7 +127,7 @@ jobs:
          which nvcc || echo "nvcc not in PATH!" 
git clone https://github.com/AMReX-Codes/amrex.git ../amrex - cd ../amrex && git checkout --detach 0f46a1615c17f0bbeaedb20c27a97c9f6e439781 && cd - + cd ../amrex && git checkout --detach 69f1ac884c6aba4d9881260819ade3bb25ed8aad && cd - make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_FFT=TRUE USE_CCACHE=TRUE -j 4 ccache -s diff --git a/cmake/dependencies/AMReX.cmake b/cmake/dependencies/AMReX.cmake index d529712534b..9c8907e835b 100644 --- a/cmake/dependencies/AMReX.cmake +++ b/cmake/dependencies/AMReX.cmake @@ -294,7 +294,7 @@ set(WarpX_amrex_src "" set(WarpX_amrex_repo "https://github.com/AMReX-Codes/amrex.git" CACHE STRING "Repository URI to pull and build AMReX from if(WarpX_amrex_internal)") -set(WarpX_amrex_branch "0f46a1615c17f0bbeaedb20c27a97c9f6e439781" +set(WarpX_amrex_branch "69f1ac884c6aba4d9881260819ade3bb25ed8aad" CACHE STRING "Repository branch for WarpX_amrex_repo if(WarpX_amrex_internal)") diff --git a/cmake/dependencies/PICSAR.cmake b/cmake/dependencies/PICSAR.cmake index 067ea464d88..9eb9162238a 100644 --- a/cmake/dependencies/PICSAR.cmake +++ b/cmake/dependencies/PICSAR.cmake @@ -109,7 +109,7 @@ if(WarpX_QED) set(WarpX_picsar_repo "https://github.com/ECP-WarpX/picsar.git" CACHE STRING "Repository URI to pull and build PICSAR from if(WarpX_picsar_internal)") - set(WarpX_picsar_branch "47b393993f860943e387b4b5d79407ee7f52d1ab" + set(WarpX_picsar_branch "24.09" CACHE STRING "Repository branch for WarpX_picsar_repo if(WarpX_picsar_internal)") diff --git a/cmake/dependencies/pyAMReX.cmake b/cmake/dependencies/pyAMReX.cmake index 3cb849587dc..257bc264f21 100644 --- a/cmake/dependencies/pyAMReX.cmake +++ b/cmake/dependencies/pyAMReX.cmake @@ -74,7 +74,7 @@ option(WarpX_pyamrex_internal "Download & build pyAMReX" ON) set(WarpX_pyamrex_repo "https://github.com/AMReX-Codes/pyamrex.git" CACHE STRING "Repository URI to pull and build pyamrex from if(WarpX_pyamrex_internal)") -set(WarpX_pyamrex_branch "6d9b9da849f5941777555ec9c9619be299d04912" +set(WarpX_pyamrex_branch "458c9ae7ab3cd4ca4e4e9736e82c60f9a7e7606c" CACHE STRING "Repository branch for WarpX_pyamrex_repo if(WarpX_pyamrex_internal)") From 958a39463c9e3fea0bbe1da0306104ccf9a2164c Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Fri, 31 Jan 2025 09:27:59 -0800 Subject: [PATCH 13/58] CI: Clang-Tidy 250min RZ runs reached the 220min mark on fresh cache. --- .github/workflows/clang_tidy.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/clang_tidy.yml b/.github/workflows/clang_tidy.yml index e6816b1c1a9..3caa11e1885 100644 --- a/.github/workflows/clang_tidy.yml +++ b/.github/workflows/clang_tidy.yml @@ -20,7 +20,7 @@ jobs: dim: [1, 2, RZ, 3] name: clang-tidy-${{ matrix.dim }}D runs-on: ubuntu-22.04 - timeout-minutes: 220 + timeout-minutes: 250 if: github.event.pull_request.draft == false steps: - uses: actions/checkout@v4 From 69a8a11d3d7c395fe4e8ba650b059f8865ec89b5 Mon Sep 17 00:00:00 2001 From: Olga Shapoval <30510597+oshapoval@users.noreply.github.com> Date: Fri, 31 Jan 2025 09:59:37 -0800 Subject: [PATCH 14/58] Added CI to test secondary ion emission in RZ. (#5576) This PR adds secondary ion emission through a callback function, allowing secondary electrons to be emitted when an ion hits the embedded boundary. In the following CI test, the random seed was fixed to ensure consistent emission of secondary electrons for reproducibility. We used a secondary electron emission yield (SEY) of 0.4. 
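For reference, the yield formula used by the test's `sigma_nascap` helper evaluates to exactly `delta_H` at 1 keV, which is where the expected SEY of 0.4 comes from. A minimal C++ sketch of the same formula (illustrative only, not part of the patch; the function name and parameter values mirror the input script below):

```cpp
#include <cmath>
#include <iostream>

// Nascap-like secondary electron yield; energies in keV.
// delta_H = 0.4 and E_HMax = 250 are the values set in the test input.
double sigma_nascap (double energy_keV, double delta_H, double E_HMax)
{
    if (energy_keV <= 0.0) { return 0.0; }
    return delta_H * (E_HMax + 1.0) / (E_HMax + energy_keV) * std::sqrt(energy_keV);
}

int main ()
{
    // 0.4 * (250 + 1) / (250 + 1) * sqrt(1) = 0.4 secondary electrons per 1 keV ion
    std::cout << sigma_nascap(1.0, 0.4, 250.0) << "\n";
    return 0;
}
```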
--------- Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Remi Lehe --- Examples/Tests/CMakeLists.txt | 1 + .../secondary_ion_emission/CMakeLists.txt | 14 + .../Tests/secondary_ion_emission/analysis.py | 58 ++++ .../analysis_default_regression.py | 1 + ...ts_test_rz_secondary_ion_emission_picmi.py | 269 ++++++++++++++++++ .../test_rz_secondary_ion_emission_picmi.json | 26 ++ 6 files changed, 369 insertions(+) create mode 100644 Examples/Tests/secondary_ion_emission/CMakeLists.txt create mode 100644 Examples/Tests/secondary_ion_emission/analysis.py create mode 120000 Examples/Tests/secondary_ion_emission/analysis_default_regression.py create mode 100644 Examples/Tests/secondary_ion_emission/inputs_test_rz_secondary_ion_emission_picmi.py create mode 100644 Regression/Checksum/benchmarks_json/test_rz_secondary_ion_emission_picmi.json diff --git a/Examples/Tests/CMakeLists.txt b/Examples/Tests/CMakeLists.txt index d9e9404ae3e..5ff1d4a9a70 100644 --- a/Examples/Tests/CMakeLists.txt +++ b/Examples/Tests/CMakeLists.txt @@ -71,6 +71,7 @@ add_subdirectory(resampling) add_subdirectory(restart) add_subdirectory(restart_eb) add_subdirectory(rigid_injection) +add_subdirectory(secondary_ion_emission) add_subdirectory(scraping) add_subdirectory(effective_potential_electrostatic) add_subdirectory(silver_mueller) diff --git a/Examples/Tests/secondary_ion_emission/CMakeLists.txt b/Examples/Tests/secondary_ion_emission/CMakeLists.txt new file mode 100644 index 00000000000..e6e38138a08 --- /dev/null +++ b/Examples/Tests/secondary_ion_emission/CMakeLists.txt @@ -0,0 +1,14 @@ +# Add tests (alphabetical order) ############################################## +# + +if(WarpX_EB) + add_warpx_test( + test_rz_secondary_ion_emission_picmi # name + RZ # dims + 1 # nprocs + inputs_test_rz_secondary_ion_emission_picmi.py # inputs + "analysis.py diags/diag1/" # analysis + "analysis_default_regression.py --path diags/diag1/" # checksum + OFF # dependency + ) +endif() diff --git a/Examples/Tests/secondary_ion_emission/analysis.py b/Examples/Tests/secondary_ion_emission/analysis.py new file mode 100644 index 00000000000..8c2ed5b4af6 --- /dev/null +++ b/Examples/Tests/secondary_ion_emission/analysis.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python +""" +This script checks that electron secondary emission (implemented by a callback function) works as intended. + +In this test, four ions hit a spherical embedded boundary, and produce secondary +electrons with a probability of `0.4`. We thus expect ~2 electrons to be produced. +This script tests the number of electrons emitted and checks that their position is +close to the embedded boundary. 
+""" + +import sys + +import numpy as np +import yt +from openpmd_viewer import OpenPMDTimeSeries + +yt.funcs.mylog.setLevel(0) + +# Open plotfile specified in command line +filename = sys.argv[1] +ts = OpenPMDTimeSeries(filename) + +it = ts.iterations +x, y, z = ts.get_particle(["x", "y", "z"], species="electrons", iteration=it[-1]) + +x_analytic = [-0.091696, 0.011599] +y_analytic = [-0.002282, -0.0111624] +z_analytic = [-0.200242, -0.201728] + +N_sec_e = np.size(z) # number of the secondary electrons + +assert N_sec_e == 2, ( + "Test did not pass: for this set up we expect 2 secondary electrons emitted" +) + +tolerance = 1e-3 + +for i in range(0, N_sec_e): + print("\n") + print(f"Electron # {i}:") + print("NUMERICAL coordinates of the emitted electrons:") + print(f"x={x[i]:5.5f}, y={y[i]:5.5f}, z={z[i]:5.5f}") + print("\n") + print("ANALYTICAL coordinates of the point of contact:") + print(f"x={x_analytic[i]:5.5f}, y={y_analytic[i]:5.5f}, z={z_analytic[i]:5.5f}") + + rel_err_x = np.abs((x[i] - x_analytic[i]) / x_analytic[i]) + rel_err_y = np.abs((y[i] - y_analytic[i]) / y_analytic[i]) + rel_err_z = np.abs((z[i] - z_analytic[i]) / z_analytic[i]) + + print("\n") + print(f"Relative percentage error for x = {rel_err_x * 100:5.4f} %") + print(f"Relative percentage error for y = {rel_err_y * 100:5.4f} %") + print(f"Relative percentage error for z = {rel_err_z * 100:5.4f} %") + + assert ( + (rel_err_x < tolerance) and (rel_err_y < tolerance) and (rel_err_z < tolerance) + ), "Test particle_boundary_interaction did not pass" diff --git a/Examples/Tests/secondary_ion_emission/analysis_default_regression.py b/Examples/Tests/secondary_ion_emission/analysis_default_regression.py new file mode 120000 index 00000000000..d8ce3fca419 --- /dev/null +++ b/Examples/Tests/secondary_ion_emission/analysis_default_regression.py @@ -0,0 +1 @@ +../../analysis_default_regression.py \ No newline at end of file diff --git a/Examples/Tests/secondary_ion_emission/inputs_test_rz_secondary_ion_emission_picmi.py b/Examples/Tests/secondary_ion_emission/inputs_test_rz_secondary_ion_emission_picmi.py new file mode 100644 index 00000000000..5b6248da33c --- /dev/null +++ b/Examples/Tests/secondary_ion_emission/inputs_test_rz_secondary_ion_emission_picmi.py @@ -0,0 +1,269 @@ +#!/usr/bin/env python3 +# This is the script that tests secondary ion emission when ions hit an embedded boundary +# with a specified secondary emission yield of delta_H = 0.4. Specifically, a callback +# function at each time step ensures that the correct number of secondary electrons is +# emitted when ions impact the embedded boundary, following the given secondary emission +# model defined in sigma_nescap function. This distribution depends on the ion's energy and +# suggests that for an ion incident with 1 keV energy, an average of 0.4 secondary +# electrons will be emitted. +# Simulation is initialized with four ions with i_dist distribution and spherical +# embedded boundary given by implicit function. 
+import numpy as np +from scipy.constants import e, elementary_charge, m_e, proton_mass + +from pywarpx import callbacks, particle_containers, picmi + +########################## +# numerics parameters +########################## + +dt = 0.000000075 + +# --- Nb time steps +Te = 0.0259 # in eV +dist_th = np.sqrt(Te * elementary_charge / m_e) + +max_steps = 3 +diagnostic_interval = 1 + +# --- grid +nr = 64 +nz = 64 + +rmin = 0.0 +rmax = 2 +zmin = -2 +zmax = 2 +delta_H = 0.4 +E_HMax = 250 + +np.random.seed(10025015) +########################## +# numerics components +########################## + +grid = picmi.CylindricalGrid( + number_of_cells=[nr, nz], + n_azimuthal_modes=1, + lower_bound=[rmin, zmin], + upper_bound=[rmax, zmax], + lower_boundary_conditions=["none", "dirichlet"], + upper_boundary_conditions=["dirichlet", "dirichlet"], + lower_boundary_conditions_particles=["none", "reflecting"], + upper_boundary_conditions_particles=["absorbing", "reflecting"], +) + +solver = picmi.ElectrostaticSolver( + grid=grid, method="Multigrid", warpx_absolute_tolerance=1e-7 +) + +embedded_boundary = picmi.EmbeddedBoundary( + implicit_function="-(x**2+y**2+z**2-radius**2)", radius=0.2 +) + +########################## +# physics components +########################## +i_dist = picmi.ParticleListDistribution( + x=[ + 0.025, + 0.0, + -0.1, + -0.14, + ], + y=[0.0, 0.0, 0.0, 0], + z=[-0.26, -0.29, -0.25, -0.23], + ux=[0.18e6, 0.1e6, 0.15e6, 0.21e6], + uy=[0.0, 0.0, 0.0, 0.0], + uz=[8.00e5, 7.20e5, 6.40e5, 5.60e5], + weight=[1, 1, 1, 1], +) + +electrons = picmi.Species( + particle_type="electron", # Specify the particle type + name="electrons", # Name of the species +) + +ions = picmi.Species( + name="ions", + particle_type="proton", + charge=e, + initial_distribution=i_dist, + warpx_save_particles_at_eb=1, +) + +########################## +# diagnostics +########################## + +field_diag = picmi.FieldDiagnostic( + name="diag1", + grid=grid, + period=diagnostic_interval, + data_list=["Er", "Ez", "phi", "rho"], + warpx_format="openpmd", +) + +part_diag = picmi.ParticleDiagnostic( + name="diag1", + period=diagnostic_interval, + species=[ions, electrons], + warpx_format="openpmd", +) + +########################## +# simulation setup +########################## + +sim = picmi.Simulation( + solver=solver, + time_step_size=dt, + max_steps=max_steps, + warpx_embedded_boundary=embedded_boundary, + warpx_amrex_the_arena_is_managed=1, +) + +sim.add_species( + electrons, + layout=picmi.GriddedLayout(n_macroparticle_per_cell=[0, 0, 0], grid=grid), +) + +sim.add_species( + ions, + layout=picmi.GriddedLayout(n_macroparticle_per_cell=[10, 1, 1], grid=grid), +) + +sim.add_diagnostic(part_diag) +sim.add_diagnostic(field_diag) + +sim.initialize_inputs() +sim.initialize_warpx() + +########################## +# python particle data access +########################## + + +def concat(list_of_arrays): + if len(list_of_arrays) == 0: + # Return a 1d array of size 0 + return np.empty(0) + else: + return np.concatenate(list_of_arrays) + + +def sigma_nascap(energy_kEv, delta_H, E_HMax): + """ + Compute sigma_nascap for each element in the energy array using a loop. 
+ + Parameters: + - energy: ndarray or list, energy values in KeV + - delta_H: float, parameter for the formula + - E_HMax: float, parameter for the formula in KeV + + Returns: + - numpy array, computed probability sigma_nascap + """ + sigma_nascap = np.array([]) + # Loop through each energy value + for energy in energy_kEv: + if energy > 0.0: + sigma = ( + delta_H + * (E_HMax + 1.0) + / (E_HMax * 1.0 + energy) + * np.sqrt(energy / 1.0) + ) + else: + sigma = 0.0 + sigma_nascap = np.append(sigma_nascap, sigma) + return sigma_nascap + + +def secondary_emission(): + buffer = particle_containers.ParticleBoundaryBufferWrapper() # boundary buffer + # STEP 1: extract the different parameters of the boundary buffer (normal, time, position) + lev = 0 # level 0 (no mesh refinement here) + n = buffer.get_particle_boundary_buffer_size("ions", "eb") + elect_pc = particle_containers.ParticleContainerWrapper("electrons") + + if n != 0: + r = concat(buffer.get_particle_boundary_buffer("ions", "eb", "x", lev)) + theta = concat(buffer.get_particle_boundary_buffer("ions", "eb", "theta", lev)) + z = concat(buffer.get_particle_boundary_buffer("ions", "eb", "z", lev)) + x = r * np.cos(theta) # from RZ coordinates to 3D coordinates + y = r * np.sin(theta) + ux = concat(buffer.get_particle_boundary_buffer("ions", "eb", "ux", lev)) + uy = concat(buffer.get_particle_boundary_buffer("ions", "eb", "uy", lev)) + uz = concat(buffer.get_particle_boundary_buffer("ions", "eb", "uz", lev)) + w = concat(buffer.get_particle_boundary_buffer("ions", "eb", "w", lev)) + nx = concat(buffer.get_particle_boundary_buffer("ions", "eb", "nx", lev)) + ny = concat(buffer.get_particle_boundary_buffer("ions", "eb", "ny", lev)) + nz = concat(buffer.get_particle_boundary_buffer("ions", "eb", "nz", lev)) + delta_t = concat( + buffer.get_particle_boundary_buffer("ions", "eb", "deltaTimeScraped", lev) + ) + energy_ions = 0.5 * proton_mass * w * (ux**2 + uy**2 + uz**2) + energy_ions_in_kEv = energy_ions / (e * 1000) + sigma_nascap_ions = sigma_nascap(energy_ions_in_kEv, delta_H, E_HMax) + # Loop over all ions in the EB buffer + for i in range(0, n): + sigma = sigma_nascap_ions[i] + # Ne_sec is number of the secondary electrons to be emitted + Ne_sec = int(sigma + np.random.uniform()) + for _ in range(Ne_sec): + xe = np.array([]) + ye = np.array([]) + ze = np.array([]) + we = np.array([]) + delta_te = np.array([]) + uxe = np.array([]) + uye = np.array([]) + uze = np.array([]) + + # Random thermal momenta distribution + ux_th = np.random.normal(0, dist_th) + uy_th = np.random.normal(0, dist_th) + uz_th = np.random.normal(0, dist_th) + + un_th = nx[i] * ux_th + ny[i] * uy_th + nz[i] * uz_th + + if un_th < 0: + ux_th_reflect = ( + -2 * un_th * nx[i] + ux_th + ) # for a "mirror reflection" u(sym)=-2(u.n)n+u + uy_th_reflect = -2 * un_th * ny[i] + uy_th + uz_th_reflect = -2 * un_th * nz[i] + uz_th + + uxe = np.append(uxe, ux_th_reflect) + uye = np.append(uye, uy_th_reflect) + uze = np.append(uze, uz_th_reflect) + else: + uxe = np.append(uxe, ux_th) + uye = np.append(uye, uy_th) + uze = np.append(uze, uz_th) + + xe = np.append(xe, x[i]) + ye = np.append(ye, y[i]) + ze = np.append(ze, z[i]) + we = np.append(we, w[i]) + delta_te = np.append(delta_te, delta_t[i]) + + elect_pc.add_particles( + x=xe + (dt - delta_te) * uxe, + y=ye + (dt - delta_te) * uye, + z=ze + (dt - delta_te) * uze, + ux=uxe, + uy=uye, + uz=uze, + w=we, + ) + buffer.clear_buffer() # reinitialise the boundary buffer + + +# using the new particle container modified at the last step 
+callbacks.installafterstep(secondary_emission)
+##########################
+# simulation run
+##########################
+sim.step(max_steps)  # the whole process is done "max_steps" times
diff --git a/Regression/Checksum/benchmarks_json/test_rz_secondary_ion_emission_picmi.json b/Regression/Checksum/benchmarks_json/test_rz_secondary_ion_emission_picmi.json
new file mode 100644
index 00000000000..cfc84819e97
--- /dev/null
+++ b/Regression/Checksum/benchmarks_json/test_rz_secondary_ion_emission_picmi.json
@@ -0,0 +1,26 @@
+{
+    "electrons": {
+        "particle_momentum_x": 5.621885683102775e-26,
+        "particle_momentum_y": 1.2079178196118306e-25,
+        "particle_momentum_z": 1.2496342823828099e-25,
+        "particle_position_x": 0.10329568998704057,
+        "particle_position_y": 0.013444257249267193,
+        "particle_position_z": 0.4019696082583948,
+        "particle_weight": 2.0
+    },
+    "ions": {
+        "particle_momentum_x": 0.0,
+        "particle_momentum_y": 0.0,
+        "particle_momentum_z": 0.0,
+        "particle_position_x": 0.0,
+        "particle_position_y": 0.0,
+        "particle_position_z": 0.0,
+        "particle_weight": 0.0
+    },
+    "lev=0": {
+        "Er": 1.772547702166409e-06,
+        "Ez": 2.2824957684716966e-06,
+        "phi": 4.338168233265556e-07,
+        "rho": 1.933391680367631e-15
+    }
+}
\ No newline at end of file

From a50cc40204a2c0b78c9dce88dc3753f1d2fa8d51 Mon Sep 17 00:00:00 2001
From: Luca Fedeli
Date: Fri, 31 Jan 2025 20:36:42 +0100
Subject: [PATCH 15/58] Embedded Boundary: take some EB-related methods out of
 WarpX class (#5625)

`ComputeEdgeLengths`, `ComputeFaceAreas`, `ScaleAreas`, and `ScaleEdges` are
pure functions that can easily be taken out of the WarpX class, in order to
make it simpler. This PR places these four functions under a newly created
namespace `warpx::embedded_boundary`, inside the files
`EmbeddedBoundary/EmbeddedBoundary.H/cpp`.
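At call sites the change is mechanical; a minimal before/after sketch (mirroring the `PML.cpp` hunk below, and assuming `edge_lengths`, `eb_fact`, and `lev` are in scope):

```cpp
#include "EmbeddedBoundary/EmbeddedBoundary.H"

// Before: static member functions on the WarpX class
// WarpX::ComputeEdgeLengths(edge_lengths, eb_fact);
// WarpX::ScaleEdges(edge_lengths, WarpX::CellSize(lev));

// After: free functions in the new namespace
warpx::embedded_boundary::ComputeEdgeLengths(edge_lengths, eb_fact);
warpx::embedded_boundary::ScaleEdges(edge_lengths, WarpX::CellSize(lev));
```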
---
 Source/BoundaryConditions/PML.cpp            |   7 +-
 Source/EmbeddedBoundary/CMakeLists.txt       |   2 +-
 Source/EmbeddedBoundary/EmbeddedBoundary.H   |  55 +++++
 Source/EmbeddedBoundary/EmbeddedBoundary.cpp | 200 +++++++++++++++++++
 Source/EmbeddedBoundary/Make.package         |   2 +
 Source/EmbeddedBoundary/WarpXInitEB.cpp      | 166 ---------------
 Source/Initialization/WarpXInitData.cpp      |  11 +-
 Source/WarpX.H                               |  25 +--
 8 files changed, 271 insertions(+), 197 deletions(-)
 create mode 100644 Source/EmbeddedBoundary/EmbeddedBoundary.H
 create mode 100644 Source/EmbeddedBoundary/EmbeddedBoundary.cpp

diff --git a/Source/BoundaryConditions/PML.cpp b/Source/BoundaryConditions/PML.cpp
index 390a09a34c3..90518dc432f 100644
--- a/Source/BoundaryConditions/PML.cpp
+++ b/Source/BoundaryConditions/PML.cpp
@@ -11,6 +11,9 @@
 #include "BoundaryConditions/PML.H"
 #include "BoundaryConditions/PMLComponent.H"
 #include "Fields.H"
+#ifdef AMREX_USE_EB
+#  include "EmbeddedBoundary/EmbeddedBoundary.H"
+#endif
 #ifdef WARPX_USE_FFT
 #  include "FieldSolver/SpectralSolver/SpectralFieldData.H"
 #endif
@@ -738,8 +741,8 @@ PML::PML (const int lev, const BoxArray& grid_ba,
         auto const eb_fact = fieldEBFactory();

         ablastr::fields::VectorField t_pml_edge_lengths = warpx.m_fields.get_alldirs(FieldType::pml_edge_lengths, lev);

-        WarpX::ComputeEdgeLengths(t_pml_edge_lengths, eb_fact);
-        WarpX::ScaleEdges(t_pml_edge_lengths, WarpX::CellSize(lev));
+        warpx::embedded_boundary::ComputeEdgeLengths(t_pml_edge_lengths, eb_fact);
+        warpx::embedded_boundary::ScaleEdges(t_pml_edge_lengths, WarpX::CellSize(lev));
     }
 }

diff --git a/Source/EmbeddedBoundary/CMakeLists.txt b/Source/EmbeddedBoundary/CMakeLists.txt
index 2fa5e3e602b..75f9bbdaa04 100644
--- a/Source/EmbeddedBoundary/CMakeLists.txt
+++ b/Source/EmbeddedBoundary/CMakeLists.txt
@@ -2,10 +2,10 @@ foreach(D IN LISTS WarpX_DIMS)
     warpx_set_suffix_dims(SD ${D})
     target_sources(lib_${SD}
       PRIVATE
+        EmbeddedBoundary.cpp
         Enabled.cpp
         WarpXInitEB.cpp
         WarpXFaceExtensions.cpp
         WarpXFaceInfoBox.H
-        Enabled.cpp
     )
 endforeach()

diff --git a/Source/EmbeddedBoundary/EmbeddedBoundary.H b/Source/EmbeddedBoundary/EmbeddedBoundary.H
new file mode 100644
index 00000000000..fc02667246b
--- /dev/null
+++ b/Source/EmbeddedBoundary/EmbeddedBoundary.H
@@ -0,0 +1,55 @@
+/* Copyright 2021-2025 Lorenzo Giacomel, Luca Fedeli
+ *
+ * This file is part of WarpX.
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#ifndef WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_H_
+#define WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_H_
+
+#include "Enabled.H"
+
+#ifdef AMREX_USE_EB
+
+#include <ablastr/fields/MultiFabRegister.H>
+
+#include <AMReX_EBFabFactory.H>
+#include <AMReX_REAL.H>
+
+#include <array>
+
+namespace warpx::embedded_boundary
+{
+    /**
+    * \brief Compute the length of the mesh edges. Here the length is a value in [0, 1].
+    *        An edge of length 0 is fully covered.
+    */
+    void ComputeEdgeLengths (
+        ablastr::fields::VectorField& edge_lengths,
+        const amrex::EBFArrayBoxFactory& eb_fact);
+    /**
+    * \brief Compute the area of the mesh faces. Here the area is a value in [0, 1].
+    *        A face of area 0 is fully covered.
+    */
+    void ComputeFaceAreas (
+        ablastr::fields::VectorField& face_areas,
+        const amrex::EBFArrayBoxFactory& eb_fact);
+
+    /**
+    * \brief Scale the edge lengths by the mesh width to obtain the real lengths.
+    */
+    void ScaleEdges (
+        ablastr::fields::VectorField& edge_lengths,
+        const std::array<amrex::Real, 3>& cell_size);
+    /**
+    * \brief Scale the face areas by the mesh width to obtain the real areas. 
+    */
+    void ScaleAreas (
+        ablastr::fields::VectorField& face_areas,
+        const std::array<amrex::Real, 3>& cell_size);
+}
+
+#endif
+
+#endif //WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_H_
diff --git a/Source/EmbeddedBoundary/EmbeddedBoundary.cpp b/Source/EmbeddedBoundary/EmbeddedBoundary.cpp
new file mode 100644
index 00000000000..9c3d53aefeb
--- /dev/null
+++ b/Source/EmbeddedBoundary/EmbeddedBoundary.cpp
@@ -0,0 +1,200 @@
+/* Copyright 2021-2025 Lorenzo Giacomel, Luca Fedeli
+ *
+ * This file is part of WarpX.
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#include "Enabled.H"
+
+#ifdef AMREX_USE_EB
+
+#include "EmbeddedBoundary.H"
+
+#include "Utils/TextMsg.H"
+
+#include <AMReX_BLProfiler.H>
+#include <AMReX_Box.H>
+#include <AMReX_EBFabFactory.H>
+#include <AMReX_GpuLaunch.H>
+#include <AMReX_MFIter.H>
+#include <AMReX_Math.H>
+
+namespace web = warpx::embedded_boundary;
+
+void
+web::ComputeEdgeLengths (
+    ablastr::fields::VectorField& edge_lengths,
+    const amrex::EBFArrayBoxFactory& eb_fact)
+{
+    BL_PROFILE("ComputeEdgeLengths");
+
+#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ)
+    WARPX_ABORT_WITH_MESSAGE("ComputeEdgeLengths only implemented in 2D and 3D");
+#endif
+
+    auto const &flags = eb_fact.getMultiEBCellFlagFab();
+    auto const &edge_centroid = eb_fact.getEdgeCent();
+    for (int idim = 0; idim < 3; ++idim){
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
+        if (idim == 1) {
+            edge_lengths[1]->setVal(0.);
+            continue;
+        }
+#endif
+        for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi){
+            amrex::Box const box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(),
+                                               edge_lengths[idim]->nGrowVect());
+            amrex::FabType const fab_type = flags[mfi].getType(box);
+            auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi);
+
+            if (fab_type == amrex::FabType::regular) {
+                // every cell in box is all regular
+                amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                    edge_lengths_dim(i, j, k) = 1.;
+                });
+            } else if (fab_type == amrex::FabType::covered) {
+                // every cell in box is all covered
+                amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                    edge_lengths_dim(i, j, k) = 0.;
+                });
+            } else {
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
+                int idim_amrex = idim;
+                if (idim == 2) { idim_amrex = 1; }
+                auto const &edge_cent = edge_centroid[idim_amrex]->const_array(mfi);
+#elif defined(WARPX_DIM_3D)
+                auto const &edge_cent = edge_centroid[idim]->const_array(mfi);
+#endif
+                amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                    if (edge_cent(i, j, k) == amrex::Real(-1.0)) {
+                        // This edge is all covered
+                        edge_lengths_dim(i, j, k) = 0.;
+                    } else if (edge_cent(i, j, k) == amrex::Real(1.0)) {
+                        // This edge is all open
+                        edge_lengths_dim(i, j, k) = 1.;
+                    } else {
+                        // This edge is cut.
+                        edge_lengths_dim(i, j, k) = 1 - amrex::Math::abs(amrex::Real(2.0)
+                                                                        * edge_cent(i, j, k));
+                    }
+
+                });
+            }
+        }
+    }
+}
+
+
+void
+web::ComputeFaceAreas (
+    ablastr::fields::VectorField& face_areas,
+    const amrex::EBFArrayBoxFactory& eb_fact)
+{
+    BL_PROFILE("ComputeFaceAreas");
+
+#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ)
+    WARPX_ABORT_WITH_MESSAGE("ComputeFaceAreas only implemented in 2D and 3D");
+#endif
+
+    auto const &flags = eb_fact.getMultiEBCellFlagFab();
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
+    //In 2D the volume frac is actually the area frac. 
+    auto const &area_frac = eb_fact.getVolFrac();
+#elif defined(WARPX_DIM_3D)
+    auto const &area_frac = eb_fact.getAreaFrac();
+#endif
+
+    for (int idim = 0; idim < 3; ++idim) {
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
+        if (idim == 0 || idim == 2) {
+            face_areas[idim]->setVal(0.);
+            continue;
+        }
+#endif
+        for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi) {
+            amrex::Box const box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(),
+                                               face_areas[idim]->nGrowVect());
+            amrex::FabType const fab_type = flags[mfi].getType(box);
+            auto const &face_areas_dim = face_areas[idim]->array(mfi);
+            if (fab_type == amrex::FabType::regular) {
+                // every cell in box is all regular
+                amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                    face_areas_dim(i, j, k) = amrex::Real(1.);
+                });
+            } else if (fab_type == amrex::FabType::covered) {
+                // every cell in box is all covered
+                amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                    face_areas_dim(i, j, k) = amrex::Real(0.);
+                });
+            } else {
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
+                auto const &face = area_frac.const_array(mfi);
+#elif defined(WARPX_DIM_3D)
+                auto const &face = area_frac[idim]->const_array(mfi);
+#endif
+                amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                    face_areas_dim(i, j, k) = face(i, j, k);
+                });
+            }
+        }
+    }
+}
+
+void
+web::ScaleEdges (
+    ablastr::fields::VectorField& edge_lengths,
+    const std::array<amrex::Real, 3>& cell_size)
+{
+    BL_PROFILE("ScaleEdges");
+
+#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ)
+    WARPX_ABORT_WITH_MESSAGE("ScaleEdges only implemented in 2D and 3D");
+#endif
+
+    for (int idim = 0; idim < 3; ++idim){
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
+        if (idim == 1) { continue; }
+#endif
+        for (amrex::MFIter mfi(*edge_lengths[0]); mfi.isValid(); ++mfi) {
+            const amrex::Box& box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(),
+                                                edge_lengths[idim]->nGrowVect() );
+            auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi);
+            amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                edge_lengths_dim(i, j, k) *= cell_size[idim];
+            });
+        }
+    }
+}
+
+
+void
+web::ScaleAreas (
+    ablastr::fields::VectorField& face_areas,
+    const std::array<amrex::Real, 3>& cell_size)
+{
+    BL_PROFILE("ScaleAreas");
+
+#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ)
+    WARPX_ABORT_WITH_MESSAGE("ScaleAreas only implemented in 2D and 3D");
+#endif
+
+    for (int idim = 0; idim < 3; ++idim) {
+#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ)
+        if (idim == 0 || idim == 2) { continue; }
+#endif
+        for (amrex::MFIter mfi(*face_areas[0]); mfi.isValid(); ++mfi) {
+            const amrex::Box& box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(),
+                                                face_areas[idim]->nGrowVect() );
+            amrex::Real const full_area = cell_size[(idim+1)%3]*cell_size[(idim+2)%3];
+            auto const &face_areas_dim = face_areas[idim]->array(mfi);
+
+            amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
+                face_areas_dim(i, j, k) *= full_area;
+            });
+
+        }
+    }
+}
+
+#endif
diff --git a/Source/EmbeddedBoundary/Make.package b/Source/EmbeddedBoundary/Make.package
index 76a20896f85..e1c6422d99c 100644
--- a/Source/EmbeddedBoundary/Make.package
+++ b/Source/EmbeddedBoundary/Make.package
@@ -1,9 +1,11 @@
+CEXE_headers += EmbeddedBoundary.H
 CEXE_headers += Enabled.H
 CEXE_headers += ParticleScraper.H
 CEXE_headers += ParticleBoundaryProcess.H
 CEXE_headers += DistanceToEB.H
 CEXE_headers += WarpXFaceInfoBox.H

+CEXE_sources += EmbeddedBoundary.cpp 
CEXE_sources += Enabled.cpp CEXE_sources += WarpXInitEB.cpp CEXE_sources += WarpXFaceExtensions.cpp diff --git a/Source/EmbeddedBoundary/WarpXInitEB.cpp b/Source/EmbeddedBoundary/WarpXInitEB.cpp index 271f12231b0..3f33259a313 100644 --- a/Source/EmbeddedBoundary/WarpXInitEB.cpp +++ b/Source/EmbeddedBoundary/WarpXInitEB.cpp @@ -124,172 +124,6 @@ WarpX::InitEB () } #ifdef AMREX_USE_EB -void -WarpX::ComputeEdgeLengths (ablastr::fields::VectorField& edge_lengths, - const amrex::EBFArrayBoxFactory& eb_fact) { - BL_PROFILE("ComputeEdgeLengths"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ComputeEdgeLengths only implemented in 2D and 3D"); -#endif - - auto const &flags = eb_fact.getMultiEBCellFlagFab(); - auto const &edge_centroid = eb_fact.getEdgeCent(); - for (int idim = 0; idim < 3; ++idim){ -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 1) { - edge_lengths[1]->setVal(0.); - continue; - } -#endif - for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi){ - amrex::Box const box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(), - edge_lengths[idim]->nGrowVect()); - amrex::FabType const fab_type = flags[mfi].getType(box); - auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi); - - if (fab_type == amrex::FabType::regular) { - // every cell in box is all regular - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - edge_lengths_dim(i, j, k) = 1.; - }); - } else if (fab_type == amrex::FabType::covered) { - // every cell in box is all covered - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - edge_lengths_dim(i, j, k) = 0.; - }); - } else { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - int idim_amrex = idim; - if (idim == 2) { idim_amrex = 1; } - auto const &edge_cent = edge_centroid[idim_amrex]->const_array(mfi); -#elif defined(WARPX_DIM_3D) - auto const &edge_cent = edge_centroid[idim]->const_array(mfi); -#endif - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - if (edge_cent(i, j, k) == amrex::Real(-1.0)) { - // This edge is all covered - edge_lengths_dim(i, j, k) = 0.; - } else if (edge_cent(i, j, k) == amrex::Real(1.0)) { - // This edge is all open - edge_lengths_dim(i, j, k) = 1.; - } else { - // This edge is cut. - edge_lengths_dim(i, j, k) = 1 - amrex::Math::abs(amrex::Real(2.0) - * edge_cent(i, j, k)); - } - - }); - } - } - } -} - - -void -WarpX::ComputeFaceAreas (VectorField& face_areas, - const amrex::EBFArrayBoxFactory& eb_fact) { - BL_PROFILE("ComputeFaceAreas"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ComputeFaceAreas only implemented in 2D and 3D"); -#endif - - auto const &flags = eb_fact.getMultiEBCellFlagFab(); -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - //In 2D the volume frac is actually the area frac. 
- auto const &area_frac = eb_fact.getVolFrac(); -#elif defined(WARPX_DIM_3D) - auto const &area_frac = eb_fact.getAreaFrac(); -#endif - - for (int idim = 0; idim < 3; ++idim) { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 0 || idim == 2) { - face_areas[idim]->setVal(0.); - continue; - } -#endif - for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi) { - amrex::Box const box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(), - face_areas[idim]->nGrowVect()); - amrex::FabType const fab_type = flags[mfi].getType(box); - auto const &face_areas_dim = face_areas[idim]->array(mfi); - if (fab_type == amrex::FabType::regular) { - // every cell in box is all regular - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) = amrex::Real(1.); - }); - } else if (fab_type == amrex::FabType::covered) { - // every cell in box is all covered - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) = amrex::Real(0.); - }); - } else { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - auto const &face = area_frac.const_array(mfi); -#elif defined(WARPX_DIM_3D) - auto const &face = area_frac[idim]->const_array(mfi); -#endif - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) = face(i, j, k); - }); - } - } - } -} - - -void -WarpX::ScaleEdges (ablastr::fields::VectorField& edge_lengths, - const std::array& cell_size) { - BL_PROFILE("ScaleEdges"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ScaleEdges only implemented in 2D and 3D"); -#endif - - for (int idim = 0; idim < 3; ++idim){ -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 1) { continue; } -#endif - for (amrex::MFIter mfi(*edge_lengths[0]); mfi.isValid(); ++mfi) { - const amrex::Box& box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(), - edge_lengths[idim]->nGrowVect() ); - auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi); - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - edge_lengths_dim(i, j, k) *= cell_size[idim]; - }); - } - } -} - -void -WarpX::ScaleAreas (ablastr::fields::VectorField& face_areas, - const std::array& cell_size) { - BL_PROFILE("ScaleAreas"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ScaleAreas only implemented in 2D and 3D"); -#endif - - for (int idim = 0; idim < 3; ++idim) { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 0 || idim == 2) { continue; } -#endif - for (amrex::MFIter mfi(*face_areas[0]); mfi.isValid(); ++mfi) { - const amrex::Box& box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(), - face_areas[idim]->nGrowVect() ); - amrex::Real const full_area = cell_size[(idim+1)%3]*cell_size[(idim+2)%3]; - auto const &face_areas_dim = face_areas[idim]->array(mfi); - - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) *= full_area; - }); - - } - } -} void WarpX::MarkReducedShapeCells ( diff --git a/Source/Initialization/WarpXInitData.cpp b/Source/Initialization/WarpXInitData.cpp index 3d78615fbc3..cf452df56a2 100644 --- a/Source/Initialization/WarpXInitData.cpp +++ b/Source/Initialization/WarpXInitData.cpp @@ -17,6 +17,9 @@ #include "Diagnostics/MultiDiagnostics.H" #include "Diagnostics/ReducedDiags/MultiReducedDiags.H" #include "EmbeddedBoundary/Enabled.H" +#ifdef AMREX_USE_EB +# include "EmbeddedBoundary/EmbeddedBoundary.H" 
+#endif #include "Fields.H" #include "FieldSolver/ElectrostaticSolvers/ElectrostaticSolver.H" #include "FieldSolver/FiniteDifferenceSolver/MacroscopicProperties/MacroscopicProperties.H" @@ -1236,12 +1239,12 @@ void WarpX::InitializeEBGridData (int lev) if (WarpX::electromagnetic_solver_id == ElectromagneticSolverAlgo::ECT) { auto edge_lengths_lev = m_fields.get_alldirs(FieldType::edge_lengths, lev); - ComputeEdgeLengths(edge_lengths_lev, eb_fact); - ScaleEdges(edge_lengths_lev, CellSize(lev)); + warpx::embedded_boundary::ComputeEdgeLengths(edge_lengths_lev, eb_fact); + warpx::embedded_boundary::ScaleEdges(edge_lengths_lev, CellSize(lev)); auto face_areas_lev = m_fields.get_alldirs(FieldType::face_areas, lev); - ComputeFaceAreas(face_areas_lev, eb_fact); - ScaleAreas(face_areas_lev, CellSize(lev)); + warpx::embedded_boundary::ComputeFaceAreas(face_areas_lev, eb_fact); + warpx::embedded_boundary::ScaleAreas(face_areas_lev, CellSize(lev)); // Compute additional quantities required for the ECT solver MarkExtensionCells(); diff --git a/Source/WarpX.H b/Source/WarpX.H index ee49be787a9..077e8f5d954 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -1047,30 +1047,7 @@ public: ablastr::fields::VectorField const& face_areas, ablastr::fields::VectorField const& edge_lengths ); - /** - * \brief Compute the length of the mesh edges. Here the length is a value in [0, 1]. - * An edge of length 0 is fully covered. - */ - static void ComputeEdgeLengths (ablastr::fields::VectorField& edge_lengths, - const amrex::EBFArrayBoxFactory& eb_fact); - /** - * \brief Compute the area of the mesh faces. Here the area is a value in [0, 1]. - * An edge of area 0 is fully covered. - */ - static void ComputeFaceAreas (ablastr::fields::VectorField& face_areas, - const amrex::EBFArrayBoxFactory& eb_fact); - - /** - * \brief Scale the edges lengths by the mesh width to obtain the real lengths. - */ - static void ScaleEdges (ablastr::fields::VectorField& edge_lengths, - const std::array& cell_size); - /** - * \brief Scale the edges areas by the mesh width to obtain the real areas. - */ - static void ScaleAreas (ablastr::fields::VectorField& face_areas, - const std::array& cell_size); - /** + /** * \brief Initialize information for cell extensions. * The flags convention for m_flag_info_face is as follows * - 0 for unstable cells From 3092d26bbce4613eb4a25abb7c9548490c24a5b2 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Fri, 31 Jan 2025 13:00:11 -0800 Subject: [PATCH 16/58] MultiFabRegister: `throw` in get (#5356) Close #5319 Follow-up to #5230 - [x] Throw a runtime exception instead of returning a `nullptr` if a field is requested via the getter. 
- [x] update logic to ensure this passes all tests --------- Co-authored-by: Edoardo Zoni Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> --- Source/BoundaryConditions/PML.cpp | 8 +- Source/Evolve/WarpXEvolve.cpp | 73 ++++++++++++------- .../EffectivePotentialES.cpp | 4 +- .../LabFrameExplicitES.cpp | 4 +- Source/FieldSolver/WarpXPushFieldsEM.cpp | 10 ++- .../FieldSolver/WarpXPushFieldsHybridPIC.cpp | 4 +- Source/Parallelization/WarpXComm.cpp | 23 ++++-- Source/Utils/WarpXMovingWindow.cpp | 6 +- Source/ablastr/fields/MultiFabRegister.H | 38 +++++++--- Source/ablastr/fields/MultiFabRegister.cpp | 56 ++++++++++---- 10 files changed, 154 insertions(+), 72 deletions(-) diff --git a/Source/BoundaryConditions/PML.cpp b/Source/BoundaryConditions/PML.cpp index 90518dc432f..1b66195d163 100644 --- a/Source/BoundaryConditions/PML.cpp +++ b/Source/BoundaryConditions/PML.cpp @@ -1301,16 +1301,16 @@ PML::PushPSATD (ablastr::fields::MultiFabRegister& fields, const int lev) { ablastr::fields::VectorField pml_E_fp = fields.get_alldirs(FieldType::pml_E_fp, lev); ablastr::fields::VectorField pml_B_fp = fields.get_alldirs(FieldType::pml_B_fp, lev); - ablastr::fields::ScalarField pml_F_fp = fields.get(FieldType::pml_F_fp, lev); - ablastr::fields::ScalarField pml_G_fp = fields.get(FieldType::pml_G_fp, lev); + ablastr::fields::ScalarField pml_F_fp = (fields.has(FieldType::pml_F_fp, lev)) ? fields.get(FieldType::pml_F_fp, lev) : nullptr; + ablastr::fields::ScalarField pml_G_fp = (fields.has(FieldType::pml_G_fp, lev)) ? fields.get(FieldType::pml_G_fp, lev) : nullptr; // Update the fields on the fine and coarse patch PushPMLPSATDSinglePatch(lev, *spectral_solver_fp, pml_E_fp, pml_B_fp, pml_F_fp, pml_G_fp, m_fill_guards_fields); if (spectral_solver_cp) { ablastr::fields::VectorField pml_E_cp = fields.get_alldirs(FieldType::pml_E_cp, lev); ablastr::fields::VectorField pml_B_cp = fields.get_alldirs(FieldType::pml_B_cp, lev); - ablastr::fields::ScalarField pml_F_cp = fields.get(FieldType::pml_F_cp, lev); - ablastr::fields::ScalarField pml_G_cp = fields.get(FieldType::pml_G_cp, lev); + ablastr::fields::ScalarField pml_F_cp = (fields.has(FieldType::pml_F_cp, lev)) ? fields.get(FieldType::pml_F_cp, lev) : nullptr; + ablastr::fields::ScalarField pml_G_cp = (fields.has(FieldType::pml_G_cp, lev)) ? 
fields.get(FieldType::pml_G_cp, lev) : nullptr; PushPMLPSATDSinglePatch(lev, *spectral_solver_cp, pml_E_cp, pml_B_cp, pml_F_cp, pml_G_cp, m_fill_guards_fields); } } diff --git a/Source/Evolve/WarpXEvolve.cpp b/Source/Evolve/WarpXEvolve.cpp index 1b2ff7e34f1..b40503ac1c7 100644 --- a/Source/Evolve/WarpXEvolve.cpp +++ b/Source/Evolve/WarpXEvolve.cpp @@ -671,6 +671,8 @@ WarpX::OneStep_multiJ (const amrex::Real cur_time) using warpx::fields::FieldType; + bool const skip_lev0_coarse_patch = true; + const int rho_mid = spectral_solver_fp[0]->m_spectral_index.rho_mid; const int rho_new = spectral_solver_fp[0]->m_spectral_index.rho_new; @@ -804,8 +806,8 @@ WarpX::OneStep_multiJ (const amrex::Real cur_time) PSATDBackwardTransformEBavg( m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_fp, finest_level), m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_fp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level) + m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level, skip_lev0_coarse_patch), + m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level, skip_lev0_coarse_patch) ); } @@ -876,11 +878,13 @@ WarpX::OneStep_sub1 (Real cur_time) using warpx::fields::FieldType; + bool const skip_lev0_coarse_patch = true; + // i) Push particles and fields on the fine patch (first fine step) PushParticlesandDeposit(fine_lev, cur_time, DtType::FirstHalf); RestrictCurrentFromFineToCoarsePatch( m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level), fine_lev); + m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level, skip_lev0_coarse_patch), fine_lev); RestrictRhoFromFineToCoarsePatch(fine_lev); if (use_filter) { ApplyFilterMF( m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), fine_lev); @@ -889,10 +893,13 @@ WarpX::OneStep_sub1 (Real cur_time) m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), fine_lev, Geom(fine_lev).periodicity()); - ApplyFilterandSumBoundaryRho( - m_fields.get_mr_levels(FieldType::rho_fp, finest_level), - m_fields.get_mr_levels(FieldType::rho_cp, finest_level), - fine_lev, PatchType::fine, 0, 2*ncomps); + if (m_fields.has(FieldType::rho_fp, finest_level) && + m_fields.has(FieldType::rho_cp, finest_level)) { + ApplyFilterandSumBoundaryRho( + m_fields.get_mr_levels(FieldType::rho_fp, finest_level), + m_fields.get_mr_levels(FieldType::rho_cp, finest_level, skip_lev0_coarse_patch), + fine_lev, PatchType::fine, 0, 2*ncomps); + } EvolveB(fine_lev, PatchType::fine, 0.5_rt*dt[fine_lev], DtType::FirstHalf, cur_time); EvolveF(fine_lev, PatchType::fine, 0.5_rt*dt[fine_lev], DtType::FirstHalf); @@ -922,13 +929,18 @@ WarpX::OneStep_sub1 (Real cur_time) StoreCurrent(coarse_lev); AddCurrentFromFineLevelandSumBoundary( m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::current_buf, finest_level), coarse_lev); - AddRhoFromFineLevelandSumBoundary( - m_fields.get_mr_levels(FieldType::rho_fp, finest_level), - m_fields.get_mr_levels(FieldType::rho_cp, finest_level), - m_fields.get_mr_levels(FieldType::rho_buf, finest_level), - coarse_lev, 0, ncomps); + m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level, skip_lev0_coarse_patch), + m_fields.get_mr_levels_alldirs(FieldType::current_buf, finest_level, 
skip_lev0_coarse_patch), coarse_lev); + + if (m_fields.has(FieldType::rho_fp, finest_level) && + m_fields.has(FieldType::rho_cp, finest_level) && + m_fields.has(FieldType::rho_buf, finest_level)) { + AddRhoFromFineLevelandSumBoundary( + m_fields.get_mr_levels(FieldType::rho_fp, finest_level), + m_fields.get_mr_levels(FieldType::rho_cp, finest_level, skip_lev0_coarse_patch), + m_fields.get_mr_levels(FieldType::rho_buf, finest_level, skip_lev0_coarse_patch), + coarse_lev, 0, ncomps); + } EvolveB(fine_lev, PatchType::coarse, dt[fine_lev], DtType::FirstHalf, cur_time); EvolveF(fine_lev, PatchType::coarse, dt[fine_lev], DtType::FirstHalf); @@ -958,16 +970,20 @@ WarpX::OneStep_sub1 (Real cur_time) PushParticlesandDeposit(fine_lev, cur_time + dt[fine_lev], DtType::SecondHalf); RestrictCurrentFromFineToCoarsePatch( m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level), fine_lev); + m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level, skip_lev0_coarse_patch), fine_lev); RestrictRhoFromFineToCoarsePatch(fine_lev); if (use_filter) { ApplyFilterMF( m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), fine_lev); } SumBoundaryJ( m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), fine_lev, Geom(fine_lev).periodicity()); - ApplyFilterandSumBoundaryRho( - m_fields.get_mr_levels(FieldType::rho_fp, finest_level), - m_fields.get_mr_levels(FieldType::rho_cp, finest_level), - fine_lev, PatchType::fine, 0, ncomps); + + if (m_fields.has(FieldType::rho_fp, finest_level) && + m_fields.has(FieldType::rho_cp, finest_level)) { + ApplyFilterandSumBoundaryRho( + m_fields.get_mr_levels(FieldType::rho_fp, finest_level), + m_fields.get_mr_levels(FieldType::rho_cp, finest_level, skip_lev0_coarse_patch), + fine_lev, PatchType::fine, 0, ncomps); + } EvolveB(fine_lev, PatchType::fine, 0.5_rt*dt[fine_lev], DtType::FirstHalf, cur_time + dt[fine_lev]); EvolveF(fine_lev, PatchType::fine, 0.5_rt*dt[fine_lev], DtType::FirstHalf); @@ -996,14 +1012,19 @@ WarpX::OneStep_sub1 (Real cur_time) RestoreCurrent(coarse_lev); AddCurrentFromFineLevelandSumBoundary( m_fields.get_mr_levels_alldirs(FieldType::current_fp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level), - m_fields.get_mr_levels_alldirs(FieldType::current_buf, finest_level), + m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level, skip_lev0_coarse_patch), + m_fields.get_mr_levels_alldirs(FieldType::current_buf, finest_level, skip_lev0_coarse_patch), coarse_lev); - AddRhoFromFineLevelandSumBoundary( - m_fields.get_mr_levels(FieldType::rho_fp, finest_level), - m_fields.get_mr_levels(FieldType::rho_cp, finest_level), - m_fields.get_mr_levels(FieldType::rho_buf, finest_level), - coarse_lev, ncomps, ncomps); + + if (m_fields.has(FieldType::rho_fp, finest_level) && + m_fields.has(FieldType::rho_cp, finest_level) && + m_fields.has(FieldType::rho_buf, finest_level)) { + AddRhoFromFineLevelandSumBoundary( + m_fields.get_mr_levels(FieldType::rho_fp, finest_level), + m_fields.get_mr_levels(FieldType::rho_cp, finest_level, skip_lev0_coarse_patch), + m_fields.get_mr_levels(FieldType::rho_buf, finest_level, skip_lev0_coarse_patch), + coarse_lev, ncomps, ncomps); + } EvolveE(fine_lev, PatchType::coarse, dt[fine_lev], cur_time + 0.5_rt * dt[fine_lev]); FillBoundaryE(fine_lev, PatchType::coarse, guard_cells.ng_FieldSolver, diff --git a/Source/FieldSolver/ElectrostaticSolvers/EffectivePotentialES.cpp 
b/Source/FieldSolver/ElectrostaticSolvers/EffectivePotentialES.cpp index 0a5330b049d..b2f93f7e2b3 100644 --- a/Source/FieldSolver/ElectrostaticSolvers/EffectivePotentialES.cpp +++ b/Source/FieldSolver/ElectrostaticSolvers/EffectivePotentialES.cpp @@ -34,9 +34,11 @@ void EffectivePotentialES::ComputeSpaceChargeField ( using ablastr::fields::MultiLevelVectorField; using warpx::fields::FieldType; + bool const skip_lev0_coarse_patch = true; + // grab the simulation fields const MultiLevelScalarField rho_fp = fields.get_mr_levels(FieldType::rho_fp, max_level); - const MultiLevelScalarField rho_cp = fields.get_mr_levels(FieldType::rho_cp, max_level); + const MultiLevelScalarField rho_cp = fields.get_mr_levels(FieldType::rho_cp, max_level, skip_lev0_coarse_patch); const MultiLevelScalarField phi_fp = fields.get_mr_levels(FieldType::phi_fp, max_level); const MultiLevelVectorField Efield_fp = fields.get_mr_levels_alldirs(FieldType::Efield_fp, max_level); diff --git a/Source/FieldSolver/ElectrostaticSolvers/LabFrameExplicitES.cpp b/Source/FieldSolver/ElectrostaticSolvers/LabFrameExplicitES.cpp index 643efefb2f3..88a0899a7cb 100755 --- a/Source/FieldSolver/ElectrostaticSolvers/LabFrameExplicitES.cpp +++ b/Source/FieldSolver/ElectrostaticSolvers/LabFrameExplicitES.cpp @@ -31,8 +31,10 @@ void LabFrameExplicitES::ComputeSpaceChargeField ( using ablastr::fields::MultiLevelVectorField; using warpx::fields::FieldType; + bool const skip_lev0_coarse_patch = true; + const MultiLevelScalarField rho_fp = fields.get_mr_levels(FieldType::rho_fp, max_level); - const MultiLevelScalarField rho_cp = fields.get_mr_levels(FieldType::rho_cp, max_level); + const MultiLevelScalarField rho_cp = fields.get_mr_levels(FieldType::rho_cp, max_level, skip_lev0_coarse_patch); const MultiLevelScalarField phi_fp = fields.get_mr_levels(FieldType::phi_fp, max_level); const MultiLevelVectorField Efield_fp = fields.get_mr_levels_alldirs(FieldType::Efield_fp, max_level); diff --git a/Source/FieldSolver/WarpXPushFieldsEM.cpp b/Source/FieldSolver/WarpXPushFieldsEM.cpp index 7e04f1c2b15..0163d158dd0 100644 --- a/Source/FieldSolver/WarpXPushFieldsEM.cpp +++ b/Source/FieldSolver/WarpXPushFieldsEM.cpp @@ -722,6 +722,8 @@ WarpX::PushPSATD (amrex::Real start_time) "PushFieldsEM: PSATD solver selected but not built"); #else + bool const skip_lev0_coarse_patch = true; + const int rho_old = spectral_solver_fp[0]->m_spectral_index.rho_old; const int rho_new = spectral_solver_fp[0]->m_spectral_index.rho_new; @@ -853,8 +855,8 @@ WarpX::PushPSATD (amrex::Real start_time) if (WarpX::fft_do_time_averaging) { auto Efield_avg_fp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_fp, finest_level); auto Bfield_avg_fp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_fp, finest_level); - auto Efield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level); - auto Bfield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level); + auto Efield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level, skip_lev0_coarse_patch); + auto Bfield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level, skip_lev0_coarse_patch); PSATDBackwardTransformEBavg(Efield_avg_fp, Bfield_avg_fp, Efield_avg_cp, Bfield_avg_cp); } if (WarpX::do_dive_cleaning) { PSATDBackwardTransformF(); } @@ -1105,6 +1107,8 @@ WarpX::EvolveG (int lev, PatchType patch_type, amrex::Real a_dt, DtType /*a_dt_t WARPX_PROFILE("WarpX::EvolveG()"); + bool const skip_lev0_coarse_patch = true; + // Evolve G 
field in regular cells if (patch_type == PatchType::fine) { @@ -1115,7 +1119,7 @@ WarpX::EvolveG (int lev, PatchType patch_type, amrex::Real a_dt, DtType /*a_dt_t } else // coarse patch { - ablastr::fields::MultiLevelVectorField const& Bfield_cp_new = m_fields.get_mr_levels_alldirs(FieldType::Bfield_cp, finest_level); + ablastr::fields::MultiLevelVectorField const& Bfield_cp_new = m_fields.get_mr_levels_alldirs(FieldType::Bfield_cp, finest_level, skip_lev0_coarse_patch); m_fdtd_solver_cp[lev]->EvolveG( m_fields.get(FieldType::G_cp, lev), Bfield_cp_new[lev], a_dt); diff --git a/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp b/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp index 46950030322..18efba3f445 100644 --- a/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp +++ b/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp @@ -206,11 +206,13 @@ void WarpX::HybridPICDepositInitialRhoAndJ () { using warpx::fields::FieldType; + bool const skip_lev0_coarse_patch = true; + ablastr::fields::MultiLevelScalarField rho_fp_temp = m_fields.get_mr_levels(FieldType::hybrid_rho_fp_temp, finest_level); ablastr::fields::MultiLevelVectorField current_fp_temp = m_fields.get_mr_levels_alldirs(FieldType::hybrid_current_fp_temp, finest_level); mypc->DepositCharge(rho_fp_temp, 0._rt); mypc->DepositCurrent(current_fp_temp, dt[0], 0._rt); - SyncRho(rho_fp_temp, m_fields.get_mr_levels(FieldType::rho_cp, finest_level), m_fields.get_mr_levels(FieldType::rho_buf, finest_level)); + SyncRho(rho_fp_temp, m_fields.get_mr_levels(FieldType::rho_cp, finest_level, skip_lev0_coarse_patch), m_fields.get_mr_levels(FieldType::rho_buf, finest_level, skip_lev0_coarse_patch)); SyncCurrent("hybrid_current_fp_temp"); for (int lev=0; lev <= finest_level; ++lev) { // SyncCurrent does not include a call to FillBoundary, but it is needed diff --git a/Source/Parallelization/WarpXComm.cpp b/Source/Parallelization/WarpXComm.cpp index b82e4d687a4..d5c36084467 100644 --- a/Source/Parallelization/WarpXComm.cpp +++ b/Source/Parallelization/WarpXComm.cpp @@ -836,6 +836,8 @@ WarpX::FillBoundaryE_avg(int lev, IntVect ng) void WarpX::FillBoundaryE_avg (int lev, PatchType patch_type, IntVect ng) { + bool const skip_lev0_coarse_patch = true; + if (patch_type == PatchType::fine) { if (do_pml && pml[lev]->ok()) @@ -865,7 +867,7 @@ WarpX::FillBoundaryE_avg (int lev, PatchType patch_type, IntVect ng) WARPX_ABORT_WITH_MESSAGE("Averaged Galilean PSATD with PML is not yet implemented"); } - ablastr::fields::MultiLevelVectorField Efield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level); + ablastr::fields::MultiLevelVectorField Efield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level, skip_lev0_coarse_patch); const amrex::Periodicity& cperiod = Geom(lev-1).periodicity(); if ( m_safe_guard_cells ) { @@ -896,6 +898,8 @@ WarpX::FillBoundaryB_avg (int lev, PatchType patch_type, IntVect ng) { using ablastr::fields::Direction; + bool const skip_lev0_coarse_patch = true; + if (patch_type == PatchType::fine) { if (do_pml && pml[lev]->ok()) @@ -925,7 +929,7 @@ WarpX::FillBoundaryB_avg (int lev, PatchType patch_type, IntVect ng) WARPX_ABORT_WITH_MESSAGE("Averaged Galilean PSATD with PML is not yet implemented"); } - ablastr::fields::MultiLevelVectorField Bfield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level); + ablastr::fields::MultiLevelVectorField Bfield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level, skip_lev0_coarse_patch); const amrex::Periodicity& 
cperiod = Geom(lev-1).periodicity(); if ( m_safe_guard_cells ){ @@ -1077,12 +1081,14 @@ WarpX::SyncCurrent (const std::string& current_fp_string) WARPX_PROFILE("WarpX::SyncCurrent()"); + bool const skip_lev0_coarse_patch = true; + ablastr::fields::MultiLevelVectorField const& J_fp = m_fields.get_mr_levels_alldirs(current_fp_string, finest_level); // If warpx.do_current_centering = 1, center currents from nodal grid to staggered grid if (do_current_centering) { - ablastr::fields::MultiLevelVectorField const& J_fp_nodal = m_fields.get_mr_levels_alldirs(FieldType::current_fp_nodal, finest_level+1); + ablastr::fields::MultiLevelVectorField const& J_fp_nodal = m_fields.get_mr_levels_alldirs(FieldType::current_fp_nodal, finest_level); AMREX_ALWAYS_ASSERT_WITH_MESSAGE(finest_level <= 1, "warpx.do_current_centering=1 not supported with more than one fine levels"); @@ -1192,7 +1198,7 @@ WarpX::SyncCurrent (const std::string& current_fp_string) } }); // Now it's safe to apply filter and sumboundary on J_cp - ablastr::fields::MultiLevelVectorField const& J_cp = m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level); + ablastr::fields::MultiLevelVectorField const& J_cp = m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level, skip_lev0_coarse_patch); if (use_filter) { ApplyFilterMF(J_cp, lev+1, idim); @@ -1207,14 +1213,14 @@ WarpX::SyncCurrent (const std::string& current_fp_string) // filtering depends on the level. This is also done before any // same-level communication because it's easier this way to // avoid double counting. - ablastr::fields::MultiLevelVectorField const& J_cp = m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level); + ablastr::fields::MultiLevelVectorField const& J_cp = m_fields.get_mr_levels_alldirs(FieldType::current_cp, finest_level, skip_lev0_coarse_patch); J_cp[lev][Direction{idim}]->setVal(0.0); ablastr::coarsen::average::Coarsen(*J_cp[lev][Direction{idim}], *J_fp[lev][Direction{idim}], refRatio(lev-1)); if (m_fields.has(FieldType::current_buf, Direction{idim}, lev)) { - ablastr::fields::MultiLevelVectorField const& J_buffer = m_fields.get_mr_levels_alldirs(FieldType::current_buf, finest_level); + ablastr::fields::MultiLevelVectorField const& J_buffer = m_fields.get_mr_levels_alldirs(FieldType::current_buf, finest_level, skip_lev0_coarse_patch); IntVect const& ng = J_cp[lev][Direction{idim}]->nGrowVect(); AMREX_ASSERT(ng.allLE(J_buffer[lev][Direction{idim}]->nGrowVect())); @@ -1241,14 +1247,15 @@ WarpX::SyncCurrent (const std::string& current_fp_string) void WarpX::SyncRho () { + bool const skip_lev0_coarse_patch = true; const ablastr::fields::MultiLevelScalarField rho_fp = m_fields.has(FieldType::rho_fp, 0) ? m_fields.get_mr_levels(FieldType::rho_fp, finest_level) : ablastr::fields::MultiLevelScalarField{static_cast(finest_level+1)}; const ablastr::fields::MultiLevelScalarField rho_cp = m_fields.has(FieldType::rho_cp, 1) ? - m_fields.get_mr_levels(FieldType::rho_cp, finest_level) : + m_fields.get_mr_levels(FieldType::rho_cp, finest_level, skip_lev0_coarse_patch) : ablastr::fields::MultiLevelScalarField{static_cast(finest_level+1)}; const ablastr::fields::MultiLevelScalarField rho_buf = m_fields.has(FieldType::rho_buf, 1) ? 
- m_fields.get_mr_levels(FieldType::rho_buf, finest_level) : + m_fields.get_mr_levels(FieldType::rho_buf, finest_level, skip_lev0_coarse_patch) : ablastr::fields::MultiLevelScalarField{static_cast(finest_level+1)}; SyncRho(rho_fp, rho_cp, rho_buf); diff --git a/Source/Utils/WarpXMovingWindow.cpp b/Source/Utils/WarpXMovingWindow.cpp index cc8886fc67f..b37aa41e28a 100644 --- a/Source/Utils/WarpXMovingWindow.cpp +++ b/Source/Utils/WarpXMovingWindow.cpp @@ -143,6 +143,8 @@ WarpX::MoveWindow (const int step, bool move_j) using ablastr::fields::Direction; using warpx::fields::FieldType; + bool const skip_lev0_coarse_patch = true; + if (step == start_moving_window_step) { amrex::Print() << Utils::TextMsg::Info("Starting moving window"); } @@ -276,8 +278,8 @@ WarpX::MoveWindow (const int step, bool move_j) shiftMF(*m_fields.get(FieldType::Bfield_aux, Direction{dim}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); shiftMF(*m_fields.get(FieldType::Efield_aux, Direction{dim}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); if (fft_do_time_averaging) { - ablastr::fields::MultiLevelVectorField Efield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level); - ablastr::fields::MultiLevelVectorField Bfield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level); + ablastr::fields::MultiLevelVectorField Efield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level, skip_lev0_coarse_patch); + ablastr::fields::MultiLevelVectorField Bfield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level, skip_lev0_coarse_patch); shiftMF(*Bfield_avg_cp[lev][dim], geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells, m_p_ext_field_params->B_external_grid[dim], use_Bparser, Bfield_parser); shiftMF(*Efield_avg_cp[lev][dim], geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells, diff --git a/Source/ablastr/fields/MultiFabRegister.H b/Source/ablastr/fields/MultiFabRegister.H index 21df20c1678..11cf932c12c 100644 --- a/Source/ablastr/fields/MultiFabRegister.H +++ b/Source/ablastr/fields/MultiFabRegister.H @@ -472,6 +472,7 @@ namespace ablastr::fields * * @param name the name of the field * @param finest_level the highest MR level to return + * @param skip_level_0 return a nullptr for level 0 * @return non-owning pointers to the MultiFab (field) on all levels */ //@{ @@ -479,24 +480,28 @@ namespace ablastr::fields [[nodiscard]] MultiLevelScalarField get_mr_levels ( T name, - int finest_level + int finest_level, + bool skip_level_0=false ) { return internal_get_mr_levels( getExtractedName(name), - finest_level + finest_level, + skip_level_0 ); } template [[nodiscard]] ConstMultiLevelScalarField get_mr_levels ( T name, - int finest_level + int finest_level, + bool skip_level_0=false ) const { return internal_get_mr_levels( getExtractedName(name), - finest_level + finest_level, + skip_level_0 ); } //@} @@ -543,6 +548,7 @@ namespace ablastr::fields * * @param name the name of the field * @param finest_level the highest MR level to return + * @param skip_level_0 return a nullptr for level 0 * @return non-owning pointers to all components of a vector field on all MR levels */ //@{ @@ -550,24 +556,28 @@ namespace ablastr::fields [[nodiscard]] MultiLevelVectorField get_mr_levels_alldirs ( T name, - int finest_level + int finest_level, + bool skip_level_0=false ) { return internal_get_mr_levels_alldirs( getExtractedName(name), - finest_level + 
finest_level, + skip_level_0 ); } template [[nodiscard]] ConstMultiLevelVectorField get_mr_levels_alldirs ( T name, - int finest_level + int finest_level, + bool skip_level_0=false ) const { return internal_get_mr_levels_alldirs( getExtractedName(name), - finest_level + finest_level, + skip_level_0 ); } //@} @@ -762,12 +772,14 @@ namespace ablastr::fields [[nodiscard]] MultiLevelScalarField internal_get_mr_levels ( std::string const & name, - int finest_level + int finest_level, + bool skip_level_0 ); [[nodiscard]] ConstMultiLevelScalarField internal_get_mr_levels ( std::string const & name, - int finest_level + int finest_level, + bool skip_level_0 ) const; [[nodiscard]] VectorField internal_get_alldirs ( @@ -782,12 +794,14 @@ namespace ablastr::fields [[nodiscard]] MultiLevelVectorField internal_get_mr_levels_alldirs ( std::string const & name, - int finest_level + int finest_level, + bool skip_level_0 ); [[nodiscard]] ConstMultiLevelVectorField internal_get_mr_levels_alldirs ( std::string const & name, - int finest_level + int finest_level, + bool skip_level_0 ) const; void diff --git a/Source/ablastr/fields/MultiFabRegister.cpp b/Source/ablastr/fields/MultiFabRegister.cpp index 2c384a90089..a1266deeab0 100644 --- a/Source/ablastr/fields/MultiFabRegister.cpp +++ b/Source/ablastr/fields/MultiFabRegister.cpp @@ -350,9 +350,7 @@ namespace ablastr::fields ) { if (m_mf_register.count(internal_name) == 0) { - // FIXME: temporary, throw a std::runtime_error - // throw std::runtime_error("MultiFabRegister::get name does not exist in register: " + key); - return nullptr; + throw std::runtime_error("MultiFabRegister::get name does not exist in register: " + internal_name); } amrex::MultiFab & mf = m_mf_register.at(internal_name).m_mf; @@ -365,9 +363,7 @@ namespace ablastr::fields ) const { if (m_mf_register.count(internal_name) == 0) { - // FIXME: temporary, throw a std::runtime_error - // throw std::runtime_error("MultiFabRegister::get name does not exist in register: " + internal_name); - return nullptr; + throw std::runtime_error("MultiFabRegister::get name does not exist in register: " + internal_name); } amrex::MultiFab const & mf = m_mf_register.at(internal_name).m_mf; @@ -419,14 +415,22 @@ namespace ablastr::fields MultiLevelScalarField MultiFabRegister::internal_get_mr_levels ( std::string const & name, - int finest_level + int finest_level, + bool skip_level_0 ) { MultiLevelScalarField field_on_level; field_on_level.reserve(finest_level+1); for (int lvl = 0; lvl <= finest_level; lvl++) { - field_on_level.push_back(internal_get(name, lvl)); + if (lvl == 0 && skip_level_0) + { + field_on_level.push_back(nullptr); + } + else + { + field_on_level.push_back(internal_get(name, lvl)); + } } return field_on_level; } @@ -434,14 +438,22 @@ namespace ablastr::fields ConstMultiLevelScalarField MultiFabRegister::internal_get_mr_levels ( std::string const & name, - int finest_level + int finest_level, + bool skip_level_0 ) const { ConstMultiLevelScalarField field_on_level; field_on_level.reserve(finest_level+1); for (int lvl = 0; lvl <= finest_level; lvl++) { - field_on_level.push_back(internal_get(name, lvl)); + if (lvl == 0 && skip_level_0) + { + field_on_level.push_back(nullptr); + } + else + { + field_on_level.push_back(internal_get(name, lvl)); + } } return field_on_level; } @@ -483,7 +495,8 @@ namespace ablastr::fields MultiLevelVectorField MultiFabRegister::internal_get_mr_levels_alldirs ( std::string const & name, - int finest_level + int finest_level, + bool skip_level_0 ) { 
            MultiLevelVectorField field_on_level;
@@ -497,7 +510,14 @@ namespace ablastr::fields
             // insert components
             for (Direction const & dir : m_all_dirs)
             {
-                field_on_level[lvl][dir] = internal_get(name, dir, lvl);
+                if (lvl == 0 && skip_level_0)
+                {
+                    field_on_level[lvl][dir] = nullptr;
+                }
+                else
+                {
+                    field_on_level[lvl][dir] = internal_get(name, dir, lvl);
+                }
             }
         }
         return field_on_level;
@@ -506,7 +526,8 @@ namespace ablastr::fields
     ConstMultiLevelVectorField
     MultiFabRegister::internal_get_mr_levels_alldirs (
         std::string const & name,
-        int finest_level
+        int finest_level,
+        bool skip_level_0
     ) const
     {
         ConstMultiLevelVectorField field_on_level;
@@ -520,7 +541,14 @@ namespace ablastr::fields
             // insert components
             for (Direction const & dir : m_all_dirs)
             {
-                field_on_level[lvl][dir] = internal_get(name, dir, lvl);
+                if (lvl == 0 && skip_level_0)
+                {
+                    field_on_level[lvl][dir] = nullptr;
+                }
+                else
+                {
+                    field_on_level[lvl][dir] = internal_get(name, dir, lvl);
+                }
             }
         }
         return field_on_level;

From ca9b8f6d48105e398adb672e46df132b6cf5798c Mon Sep 17 00:00:00 2001
From: Axel Huebl
Date: Mon, 3 Feb 2025 11:20:34 -0800
Subject: [PATCH 17/58] Doc: Frontier OpenMP Load (#5631)

Work-around for the ROCm module that does not add the `llvm/lib`
sub-directory to the `LD_LIBRARY_PATH`. Only an issue on `install`,
if runpath is stripped (default).
---
 Tools/machines/frontier-olcf/frontier_warpx.profile.example | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Tools/machines/frontier-olcf/frontier_warpx.profile.example b/Tools/machines/frontier-olcf/frontier_warpx.profile.example
index ad78ab1acaf..b51946ce832 100644
--- a/Tools/machines/frontier-olcf/frontier_warpx.profile.example
+++ b/Tools/machines/frontier-olcf/frontier_warpx.profile.example
@@ -13,6 +13,9 @@ module load cray-mpich/8.1.28
 module load cce/17.0.0  # must be loaded after rocm
 # https://docs.olcf.ornl.gov/systems/frontier_user_guide.html#compatible-compiler-rocm-toolchain-versions

+# Fix for OpenMP Runtime (OLCFHELP-21543)
+export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${ROCM_PATH}/llvm/lib
+
 # optional: faster builds
 module load ccache
 module load ninja

From cb30300cbba174b321bc244a55f07a4e8583f2aa Mon Sep 17 00:00:00 2001
From: David Grote
Date: Mon, 3 Feb 2025 11:27:54 -0800
Subject: [PATCH 18/58] Add FieldPoyntingFlux reduced diagnostic (#5475)

This adds a reduced diagnostic that calculates the Poynting flux on the
surfaces of the domain, providing the power flow into and out of the
domain. This also includes the time-integrated data.

When using the implicit evolve scheme, to get the energy accounting
correct, the flux needs to be calculated at the mid step. For this
reason, the `ComputeDiagsMidStep` method was added; it is called
directly at the appropriate times.

Because of the time integration, there are two main differences between
this reduced diagnostic and the others. The first is that it is
calculated every time step in order to get the full resolution in time.
The intervals parameter still controls how often the diagnostic data is
written out. The second is that a facility is added to write out the
values of the time integration to a file when a checkpoint is made, so
that on a restart the integration can continue with the previous values.
The facility was written in a general way so that other reduced
diagnostics can also do this.

The CI test using the implicit solver is dependent on PR #5498 and
PR #5489.
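For orientation, here is a minimal sketch (plain NumPy, not WarpX code) of the
quantity the diagnostic accumulates per boundary face: the outward normal
component of the Poynting vector S = (E x B) / mu0, integrated over the face
area, with the resulting power then accumulated in time. The field values,
face area, and step size below are placeholder numbers, not values from the
test.

```python
import numpy as np

mu0 = 4.0e-7 * np.pi  # vacuum permeability (SI)

def outward_power(E, B, normal, area):
    """Power through one boundary face: ((E x B)/mu0) . n, times the face area."""
    S = np.cross(E, B) / mu0                # Poynting vector (W/m^2)
    return float(np.dot(S, normal)) * area

# Placeholder face-averaged fields on an upper-x face
E = np.array([0.0, 1.0e3, 0.0])             # V/m
B = np.array([0.0, 0.0, 1.0e-5])            # T
power = outward_power(E, B, normal=np.array([1.0, 0.0, 0.0]), area=1.0e-4)

dt = 1.0e-12                                # placeholder step size (s)
energy_loss = power * dt                    # contribution to an integrated column
print(power, energy_loss)
```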
---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
 Docs/source/usage/parameters.rst              |   6 +
 Examples/Tests/pec/CMakeLists.txt             |  20 ++
 .../pec/analysis_pec_insulator_implicit.py    |  57 +++
 ...nputs_test_2d_pec_field_insulator_implicit |  73 ++++
 ...st_2d_pec_field_insulator_implicit_restart |   5 +
 .../inputs_test_3d_reduced_diags              |   4 +-
 Python/pywarpx/picmi.py                       |   1 +
 .../test_2d_pec_field_insulator_implicit.json |  14 +
 ..._pec_field_insulator_implicit_restart.json |  14 +
 .../FlushFormats/FlushFormatCheckpoint.H      |   2 +
 .../FlushFormats/FlushFormatCheckpoint.cpp    |  12 +
 .../Diagnostics/ReducedDiags/CMakeLists.txt   |   1 +
 .../ReducedDiags/FieldPoyntingFlux.H          |  63 ++++
 .../ReducedDiags/FieldPoyntingFlux.cpp        | 333 ++++++++++++++++++
 Source/Diagnostics/ReducedDiags/Make.package  |   1 +
 .../ReducedDiags/MultiReducedDiags.H          |  11 +
 .../ReducedDiags/MultiReducedDiags.cpp        |  39 ++
 .../Diagnostics/ReducedDiags/ReducedDiags.H   |  21 ++
 .../Diagnostics/ReducedDiags/ReducedDiags.cpp |  21 ++
 Source/Diagnostics/WarpXIO.cpp                |   3 +
 .../ImplicitSolvers/SemiImplicitEM.cpp        |   2 +
 .../StrangImplicitSpectralEM.cpp              |   2 +
 .../ImplicitSolvers/ThetaImplicitEM.cpp       |   2 +
 23 files changed, 706 insertions(+), 1 deletion(-)
 create mode 100755 Examples/Tests/pec/analysis_pec_insulator_implicit.py
 create mode 100644 Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit
 create mode 100644 Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit_restart
 create mode 100644 Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit.json
 create mode 100644 Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit_restart.json
 create mode 100644 Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.H
 create mode 100644 Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.cpp

diff --git a/Docs/source/usage/parameters.rst b/Docs/source/usage/parameters.rst
index 7c92b5cf9e7..aaba7130b87 100644
--- a/Docs/source/usage/parameters.rst
+++ b/Docs/source/usage/parameters.rst
@@ -3182,6 +3182,12 @@ This shifts analysis from post-processing to runtime calculation of reduction op
     Note that the fields are averaged on the cell centers before their maximum values are computed.

+    * ``FieldPoyntingFlux``
+        Integrates the normal Poynting flux over each domain boundary surface and also integrates the flux over time.
+        This provides the power and total energy loss into or out of the simulation domain.
+        The output columns are the flux for each dimension on the lower boundaries, then the higher boundaries,
+        then the integrated energy loss for each dimension on the lower and higher boundaries.
+
     * ``FieldProbe``
         This type computes the value of each component of the electric and magnetic fields
         and of the Poynting vector (a measure of electromagnetic flux) at points in the domain.
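To make the `FieldPoyntingFlux` column layout described above concrete, here
is a short post-processing sketch. It assumes the diagnostic was named
`poyntingflux`, that its output sits in the default `diags/reducedfiles/`
directory, and that the run is 2D (so there are four power columns followed
by four energy columns); adjust these assumptions to your setup.

```python
import numpy as np

# Columns: [0] step, [1] time (s), then the outward power (W) for the low
# faces followed by the high faces, then the time-integrated energy loss (J)
# in the same order.
data = np.loadtxt("diags/reducedfiles/poyntingflux.txt", skiprows=1)

ndim = 2                              # 2D run -> 2*ndim boundary faces
power = data[:, 2 : 2 + 2 * ndim]     # instantaneous outward power
energy = data[:, 2 + 2 * ndim :]      # integrated energy loss
print("net outward power at the last step:", power[-1].sum())
print("total energy lost through the boundaries:", energy[-1].sum())
```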
diff --git a/Examples/Tests/pec/CMakeLists.txt b/Examples/Tests/pec/CMakeLists.txt
index f331249ded0..66d9dd1c13e 100644
--- a/Examples/Tests/pec/CMakeLists.txt
+++ b/Examples/Tests/pec/CMakeLists.txt
@@ -40,3 +40,23 @@ add_warpx_test(
     "analysis_default_regression.py --path diags/diag1000010"  # checksum
     OFF  # dependency
 )
+
+add_warpx_test(
+    test_2d_pec_field_insulator_implicit  # name
+    2  # dims
+    2  # nprocs
+    inputs_test_2d_pec_field_insulator_implicit  # inputs
+    "analysis_pec_insulator_implicit.py diags/diag1000020"  # analysis
+    "analysis_default_regression.py --path diags/diag1000020"  # checksum
+    OFF  # dependency
+)
+
+add_warpx_test(
+    test_2d_pec_field_insulator_implicit_restart  # name
+    2  # dims
+    2  # nprocs
+    inputs_test_2d_pec_field_insulator_implicit_restart  # inputs
+    "analysis_pec_insulator_implicit.py diags/diag1000020"  # analysis
+    "analysis_default_regression.py --path diags/diag1000020"  # checksum
+    test_2d_pec_field_insulator_implicit  # dependency
+)
diff --git a/Examples/Tests/pec/analysis_pec_insulator_implicit.py b/Examples/Tests/pec/analysis_pec_insulator_implicit.py
new file mode 100755
index 00000000000..1fdbc2261a8
--- /dev/null
+++ b/Examples/Tests/pec/analysis_pec_insulator_implicit.py
@@ -0,0 +1,57 @@
+#!/usr/bin/env python3
+
+#
+#
+# This file is part of WarpX.
+#
+# License: BSD-3-Clause-LBNL
+#
+# This is a script that analyses the simulation results from
+# the scripts `inputs_test_2d_pec_field_insulator_implicit` and
+# `inputs_test_2d_pec_field_insulator_implicit_restart`.
+# The scripts model an insulator boundary condition on part of the
+# upper x boundary that pushes B field into the domain. The implicit
+# solver is used, converging to machine tolerance. The energy accounting
+# should be exact to machine precision, so that the energy in the system
+# should be the same as the amount of energy pushed in from the boundary.
+# This is checked using the FieldEnergy and FieldPoyntingFlux reduced
+# diagnostics.
+import sys + +import matplotlib + +matplotlib.use("Agg") +import matplotlib.pyplot as plt +import numpy as np + +# this will be the name of the plot file +fn = sys.argv[1] + +EE = np.loadtxt(f"{fn}/../reducedfiles/fieldenergy.txt", skiprows=1) +SS = np.loadtxt(f"{fn}/../reducedfiles/poyntingflux.txt", skiprows=1) +SSsum = SS[:, 2:6].sum(1) +EEloss = SS[:, 7:].sum(1) + +dt = EE[1, 1] + +fig, ax = plt.subplots() +ax.plot(EE[:, 0], EE[:, 2], label="field energy") +ax.plot(SS[:, 0], -EEloss, label="-flux*dt") +ax.legend() +ax.set_xlabel("time (s)") +ax.set_ylabel("energy (J)") +fig.savefig("energy_history.png") + +fig, ax = plt.subplots() +ax.plot(EE[:, 0], (EE[:, 2] + EEloss) / EE[:, 2].max()) +ax.set_xlabel("time (s)") +ax.set_ylabel("energy difference/max energy (1)") +fig.savefig("energy_difference.png") + +tolerance_rel = 1.0e-13 + +energy_difference_fraction = np.abs((EE[:, 2] + EEloss) / EE[:, 2].max()).max() +print(f"energy accounting error = {energy_difference_fraction}") +print(f"tolerance_rel = {tolerance_rel}") + +assert energy_difference_fraction < tolerance_rel diff --git a/Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit b/Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit new file mode 100644 index 00000000000..ec61e3f8605 --- /dev/null +++ b/Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit @@ -0,0 +1,73 @@ +# Maximum number of time steps +max_step = 20 + +# number of grid points +amr.n_cell = 32 32 +amr.blocking_factor = 16 + +# Maximum level in hierarchy (for now must be 0, i.e., one level in total) +amr.max_level = 0 + +# Geometry +geometry.dims = 2 +geometry.prob_lo = 0. 2.e-2 # physical domain +geometry.prob_hi = 1.e-2 3.e-2 + +# Boundary condition +boundary.field_lo = neumann periodic +boundary.field_hi = pec_insulator periodic + +insulator.area_x_hi(y,z) = (2.25e-2 <= z and z <= 2.75e-2) +insulator.By_x_hi(y,z,t) = min(t/1.0e-12,1)*1.e1*3.3e-4 + +warpx.serialize_initial_conditions = 1 + +# Implicit setup +# Note that this is the CFL step size for the explicit simulation, over 2. +# This value allows quick convergence of the Picard solver. +warpx.const_dt = 7.37079480234276e-13/2. 
+ +algo.maxwell_solver = Yee +algo.evolve_scheme = "theta_implicit_em" +#algo.evolve_scheme = "semi_implicit_em" + +implicit_evolve.theta = 0.5 +#implicit_evolve.max_particle_iterations = 21 +#implicit_evolve.particle_tolerance = 1.0e-12 + +implicit_evolve.nonlinear_solver = "picard" +picard.verbose = true +picard.max_iterations = 25 +picard.relative_tolerance = 0.0 +picard.absolute_tolerance = 0.0 +picard.require_convergence = false + +#implicit_evolve.nonlinear_solver = "newton" +#newton.verbose = true +#newton.max_iterations = 20 +#newton.relative_tolerance = 1.0e-20 +#newton.absolute_tolerance = 0.0 +#newton.require_convergence = false + +#gmres.verbose_int = 2 +#gmres.max_iterations = 1000 +#gmres.relative_tolerance = 1.0e-20 +#gmres.absolute_tolerance = 0.0 + +# Verbosity +warpx.verbose = 1 + +# Diagnostics +diagnostics.diags_names = diag1 chk +diag1.intervals = 20 +diag1.diag_type = Full + +chk.intervals = 10 +chk.diag_type = Full +chk.format = checkpoint + +warpx.reduced_diags_names = fieldenergy poyntingflux +poyntingflux.type = FieldPoyntingFlux +poyntingflux.intervals = 1 +fieldenergy.type = FieldEnergy +fieldenergy.intervals = 1 diff --git a/Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit_restart b/Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit_restart new file mode 100644 index 00000000000..35b78d01acd --- /dev/null +++ b/Examples/Tests/pec/inputs_test_2d_pec_field_insulator_implicit_restart @@ -0,0 +1,5 @@ +# base input parameters +FILE = inputs_test_2d_pec_field_insulator_implicit + +# test input parameters +amr.restart = "../test_2d_pec_field_insulator_implicit/diags/chk000010" diff --git a/Examples/Tests/reduced_diags/inputs_test_3d_reduced_diags b/Examples/Tests/reduced_diags/inputs_test_3d_reduced_diags index dc0c57264ba..cc1b658c27f 100644 --- a/Examples/Tests/reduced_diags/inputs_test_3d_reduced_diags +++ b/Examples/Tests/reduced_diags/inputs_test_3d_reduced_diags @@ -68,7 +68,7 @@ photons.uz_th = 0.2 ################################# ###### REDUCED DIAGS ############ ################################# -warpx.reduced_diags_names = EP NP EF PP PF MF MR FP FP_integrate FP_line FP_plane FR_Max FR_Min FR_Integral Edotj +warpx.reduced_diags_names = EP NP EF PP PF MF PX MR FP FP_integrate FP_line FP_plane FR_Max FR_Min FR_Integral Edotj EP.type = ParticleEnergy EP.intervals = 200 EF.type = FieldEnergy @@ -79,6 +79,8 @@ PF.type = FieldMomentum PF.intervals = 200 MF.type = FieldMaximum MF.intervals = 200 +PX.type = FieldPoyntingFlux +PX.intervals = 200 FP.type = FieldProbe FP.intervals = 200 #The probe is placed at a cell center to match the value in the plotfile diff --git a/Python/pywarpx/picmi.py b/Python/pywarpx/picmi.py index f8261cd7847..da673671953 100644 --- a/Python/pywarpx/picmi.py +++ b/Python/pywarpx/picmi.py @@ -4074,6 +4074,7 @@ def __init__( "FieldEnergy", "FieldMomentum", "FieldMaximum", + "FieldPoyntingFlux", "RhoMaximum", "ParticleNumber", "LoadBalanceCosts", diff --git a/Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit.json b/Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit.json new file mode 100644 index 00000000000..fcb3081f6ae --- /dev/null +++ b/Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit.json @@ -0,0 +1,14 @@ +{ + "lev=0": { + "Bx": 0.0, + "By": 0.35907571934346943, + "Bz": 0.0, + "Ex": 36840284.366667606, + "Ey": 0.0, + "Ez": 107777138.0847348, + "jx": 0.0, + "jy": 0.0, + "jz": 0.0 + } +} + diff --git 
a/Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit_restart.json b/Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit_restart.json
new file mode 100644
index 00000000000..fcb3081f6ae
--- /dev/null
+++ b/Regression/Checksum/benchmarks_json/test_2d_pec_field_insulator_implicit_restart.json
@@ -0,0 +1,14 @@
+{
+  "lev=0": {
+    "Bx": 0.0,
+    "By": 0.35907571934346943,
+    "Bz": 0.0,
+    "Ex": 36840284.366667606,
+    "Ey": 0.0,
+    "Ez": 107777138.0847348,
+    "jx": 0.0,
+    "jy": 0.0,
+    "jz": 0.0
+  }
+}
+
diff --git a/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.H b/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.H
index cb0a6c4b6c7..e2cd28f9e1c 100644
--- a/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.H
+++ b/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.H
@@ -35,6 +35,8 @@ class FlushFormatCheckpoint final : public FlushFormatPlotfile
         const amrex::Vector& particle_diags) const;

     void WriteDMaps (const std::string& dir, int nlev) const;
+
+    void WriteReducedDiagsData (std::string const & dir) const;
 };

 #endif // WARPX_FLUSHFORMATCHECKPOINT_H_
diff --git a/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp b/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp
index a3a348d90ee..fc308dee936 100644
--- a/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp
+++ b/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp
@@ -5,6 +5,7 @@
 # include "BoundaryConditions/PML_RZ.H"
 #endif
 #include "Diagnostics/ParticleDiag/ParticleDiag.H"
+#include "Diagnostics/ReducedDiags/MultiReducedDiags.H"
 #include "Fields.H"
 #include "Particles/WarpXParticleContainer.H"
 #include "Utils/TextMsg.H"
@@ -174,6 +175,8 @@ FlushFormatCheckpoint::WriteToFile (

     WriteDMaps(checkpointname, nlev);

+    WriteReducedDiagsData(checkpointname);
+
     VisMF::SetHeaderVersion(current_version);
 }

@@ -263,3 +266,12 @@ FlushFormatCheckpoint::WriteDMaps (const std::string& dir, int nlev) const
         }
     }
 }
+
+void
+FlushFormatCheckpoint::WriteReducedDiagsData (std::string const & dir) const
+{
+    if (ParallelDescriptor::IOProcessor()) {
+        auto & warpx = WarpX::GetInstance();
+        warpx.reduced_diags->WriteCheckpointData(dir);
+    }
+}
diff --git a/Source/Diagnostics/ReducedDiags/CMakeLists.txt b/Source/Diagnostics/ReducedDiags/CMakeLists.txt
index bbf1b6b65b0..4fbfc489aba 100644
--- a/Source/Diagnostics/ReducedDiags/CMakeLists.txt
+++ b/Source/Diagnostics/ReducedDiags/CMakeLists.txt
@@ -9,6 +9,7 @@ foreach(D IN LISTS WarpX_DIMS)
         FieldEnergy.cpp
         FieldMaximum.cpp
         FieldMomentum.cpp
+        FieldPoyntingFlux.cpp
         FieldProbe.cpp
         FieldProbeParticleContainer.cpp
         FieldReduction.cpp
diff --git a/Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.H b/Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.H
new file mode 100644
index 00000000000..3a45bd6c789
--- /dev/null
+++ b/Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.H
@@ -0,0 +1,63 @@
+/* Copyright 2019-2020
+ *
+ * This file is part of WarpX.
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#ifndef WARPX_DIAGNOSTICS_REDUCEDDIAGS_FIELDPOYNTINGFLUX_H_
+#define WARPX_DIAGNOSTICS_REDUCEDDIAGS_FIELDPOYNTINGFLUX_H_
+
+#include "ReducedDiags.H"
+
+#include
+
+/**
+ * \brief This class mainly contains a function that computes the field Poynting flux,
+ * S = (E x B) / mu0, integrated over each face of the domain.
+ */
+class FieldPoyntingFlux : public ReducedDiags
+{
+public:
+
+    /**
+     * \brief Constructor
+     *
+     * \param[in] rd_name reduced diags name
+     */
+    FieldPoyntingFlux (const std::string& rd_name);
+
+    /**
+     * \brief Call the routine to compute the Poynting flux if needed
+     *
+     * \param[in] step current time step
+     */
+    void ComputeDiags (int step) final;
+
+    /**
+     * \brief Call the routine to compute the Poynting flux at the mid step time level
+     *
+     * \param[in] step current time step
+     */
+    void ComputeDiagsMidStep (int step) final;
+
+    /**
+     * \brief This function computes the electromagnetic Poynting flux,
+     * obtained by integrating the Poynting flux density S = (E x B) / mu0
+     * on the surface of the domain.
+     */
+    void ComputePoyntingFlux ();
+
+    void WriteCheckpointData (std::string const & dir) final;
+
+    void ReadCheckpointData (std::string const & dir) final;
+
+private:
+
+    bool use_mid_step_value = false;
+
+};
+
+#endif
diff --git a/Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.cpp b/Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.cpp
new file mode 100644
index 00000000000..f760516f2b9
--- /dev/null
+++ b/Source/Diagnostics/ReducedDiags/FieldPoyntingFlux.cpp
@@ -0,0 +1,333 @@
+/* Copyright 2019-2020
+ *
+ * This file is part of WarpX.
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#include "FieldPoyntingFlux.H"
+
+#include "Fields.H"
+#include "Utils/TextMsg.H"
+#include "Utils/WarpXConst.H"
+#include "WarpX.H"
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+using namespace amrex::literals;
+
+FieldPoyntingFlux::FieldPoyntingFlux (const std::string& rd_name)
+    : ReducedDiags{rd_name}
+{
+    // Resize data array
+    // lo and hi is 2
+    // space dims is AMREX_SPACEDIM
+    // instantaneous and integrated is 2
+    // The order will be outward flux for low faces, then high faces,
+    // energy loss for low faces, then high faces
+    m_data.resize(2*AMREX_SPACEDIM*2, 0.0_rt);
+
+    if (amrex::ParallelDescriptor::IOProcessor())
+    {
+        if (m_write_header)
+        {
+            // Open file
+            std::ofstream ofs{m_path + m_rd_name + "." + m_extension, std::ofstream::out};
+
+            int c = 0;
+
+            // Write header row
+            ofs << "#";
+            ofs << "[" << c++ << "]step()";
+            ofs << m_sep;
+            ofs << "[" << c++ << "]time(s)";
+
+            std::vector sides = {"lo", "hi"};
+
+#if defined(WARPX_DIM_3D)
+            std::vector space_coords = {"x", "y", "z"};
+#elif defined(WARPX_DIM_XZ)
+            std::vector space_coords = {"x", "z"};
+#elif defined(WARPX_DIM_1D_Z)
+            std::vector space_coords = {"z"};
+#elif defined(WARPX_DIM_RZ)
+            std::vector space_coords = {"r", "z"};
+#endif
+
+            // Only on level 0
+            for (int iside = 0; iside < 2; iside++) {
+                for (int ic = 0; ic < AMREX_SPACEDIM; ic++) {
+                    ofs << m_sep;
+                    ofs << "[" << c++ << "]outward_power_" + sides[iside] + "_" + space_coords[ic] +"(W)";
+            }}
+            for (int iside = 0; iside < 2; iside++) {
+                for (int ic = 0; ic < AMREX_SPACEDIM; ic++) {
+                    ofs << m_sep;
+                    ofs << "[" << c++ << "]integrated_energy_loss_" + sides[iside] + "_" + space_coords[ic] +"(J)";
+            }}
+
+            ofs << "\n";
+            ofs.close();
+        }
+    }
+}
+
+void FieldPoyntingFlux::ComputeDiags (int /*step*/)
+{
+    // This will be called at the end of the time step. Only calculate the
+    // flux if it had not already been calculated mid step.
+ if (!use_mid_step_value) { + ComputePoyntingFlux(); + } +} + +void FieldPoyntingFlux::ComputeDiagsMidStep (int /*step*/) +{ + // If this is called, always use the value calculated here. + use_mid_step_value = true; + ComputePoyntingFlux(); +} + +void FieldPoyntingFlux::ComputePoyntingFlux () +{ + using warpx::fields::FieldType; + using ablastr::fields::Direction; + + // Note that this is calculated every step to get the + // full resolution on the integrated data + + int const lev = 0; + + // Get a reference to WarpX instance + auto & warpx = WarpX::GetInstance(); + + // RZ coordinate only working with one mode +#if defined(WARPX_DIM_RZ) + WARPX_ALWAYS_ASSERT_WITH_MESSAGE(warpx.n_rz_azimuthal_modes == 1, + "FieldPoyntingFlux reduced diagnostics only implemented in RZ geometry for one mode"); +#endif + + amrex::Box domain_box = warpx.Geom(lev).Domain(); + domain_box.surroundingNodes(); + + // Get MultiFab data at given refinement level + amrex::MultiFab const & Ex = *warpx.m_fields.get(FieldType::Efield_fp, Direction{0}, lev); + amrex::MultiFab const & Ey = *warpx.m_fields.get(FieldType::Efield_fp, Direction{1}, lev); + amrex::MultiFab const & Ez = *warpx.m_fields.get(FieldType::Efield_fp, Direction{2}, lev); + amrex::MultiFab const & Bx = *warpx.m_fields.get(FieldType::Bfield_fp, Direction{0}, lev); + amrex::MultiFab const & By = *warpx.m_fields.get(FieldType::Bfield_fp, Direction{1}, lev); + amrex::MultiFab const & Bz = *warpx.m_fields.get(FieldType::Bfield_fp, Direction{2}, lev); + + // Coarsening ratio (no coarsening) + amrex::GpuArray const cr{1,1,1}; + + // Reduction component (fourth component in Array4) + constexpr int comp = 0; + + // Index type (staggering) of each MultiFab + // (with third component set to zero in 2D) + amrex::GpuArray Ex_stag{0,0,0}; + amrex::GpuArray Ey_stag{0,0,0}; + amrex::GpuArray Ez_stag{0,0,0}; + amrex::GpuArray Bx_stag{0,0,0}; + amrex::GpuArray By_stag{0,0,0}; + amrex::GpuArray Bz_stag{0,0,0}; + for (int i = 0; i < AMREX_SPACEDIM; ++i) + { + Ex_stag[i] = Ex.ixType()[i]; + Ey_stag[i] = Ey.ixType()[i]; + Ez_stag[i] = Ez.ixType()[i]; + Bx_stag[i] = Bx.ixType()[i]; + By_stag[i] = By.ixType()[i]; + Bz_stag[i] = Bz.ixType()[i]; + } + + for (amrex::OrientationIter face; face; ++face) { + + int const face_dir = face().coordDir(); + + if (face().isHigh() && WarpX::field_boundary_hi[face_dir] == FieldBoundaryType::Periodic) { + // For upper periodic boundaries, copy the lower value instead of regenerating it. + int const iu = int(face()); + int const il = int(face().flip()); + m_data[iu] = -m_data[il]; + m_data[iu + 2*AMREX_SPACEDIM] = -m_data[il + 2*AMREX_SPACEDIM]; + continue; + } + + amrex::Box const boundary = amrex::bdryNode(domain_box, face()); + + // Get cell area + amrex::Real const *dx = warpx.Geom(lev).CellSize(); + std::array dxtemp = {AMREX_D_DECL(dx[0], dx[1], dx[2])}; + dxtemp[face_dir] = 1._rt; + amrex::Real const dA = AMREX_D_TERM(dxtemp[0], *dxtemp[1], *dxtemp[2]); + + // Node-centered in the face direction, Cell-centered in other directions + amrex::GpuArray cc{0,0,0}; + cc[face_dir] = 1; + + // Only calculate the ExB term that is normal to the surface. 
+ // normal_dir is the normal direction relative to the WarpX coordinates +#if (defined WARPX_DIM_XZ) || (defined WARPX_DIM_RZ) + // For 2D : it is either 0, or 2 + int const normal_dir = 2*face_dir; +#elif (defined WARPX_DIM_1D_Z) + // For 1D : it is always 2 + int const normal_dir = 2; +#else + // For 3D : it is the same as the face direction + int const normal_dir = face_dir; +#endif + + amrex::ReduceOps<amrex::ReduceOpSum> reduce_ops; + amrex::ReduceData<amrex::Real> reduce_data(reduce_ops); + +#ifdef AMREX_USE_OMP +#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) +#endif + // Loop over boxes, interpolate E,B data to cell face centers + // and compute sum over cells of (E x B) components + for (amrex::MFIter mfi(Ex, amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi) + { + amrex::Array4<amrex::Real const> const & Ex_arr = Ex[mfi].array(); + amrex::Array4<amrex::Real const> const & Ey_arr = Ey[mfi].array(); + amrex::Array4<amrex::Real const> const & Ez_arr = Ez[mfi].array(); + amrex::Array4<amrex::Real const> const & Bx_arr = Bx[mfi].array(); + amrex::Array4<amrex::Real const> const & By_arr = By[mfi].array(); + amrex::Array4<amrex::Real const> const & Bz_arr = Bz[mfi].array(); + + amrex::Box box = enclosedCells(mfi.nodaltilebox()); + box.surroundingNodes(face_dir); + + // Find the intersection with the boundary + // boundary needs to have the same type as box + amrex::Box const boundary_matched = amrex::convert(boundary, box.ixType()); + box &= boundary_matched; + +#if defined(WARPX_DIM_RZ) + // Lower corner of box physical domain + amrex::XDim3 const xyzmin = WarpX::LowerCorner(box, lev, 0._rt); + amrex::Dim3 const lo = amrex::lbound(box); + amrex::Real const dr = warpx.Geom(lev).CellSize(0); + amrex::Real const rmin = xyzmin.x; + int const irmin = lo.x; +#endif + + auto area_factor = [=] AMREX_GPU_DEVICE(int i, int j, int k) noexcept { + amrex::ignore_unused(i,j,k); +#if defined WARPX_DIM_RZ + amrex::Real r; + if (normal_dir == 0) { + r = rmin + (i - irmin)*dr; + } else { + r = rmin + (i + 0.5_rt - irmin)*dr; + } + return 2._rt*MathConst::pi*r; +#else + return 1._rt; +#endif + }; + + // Compute E x B + reduce_ops.eval(box, reduce_data, + [=] AMREX_GPU_DEVICE (int i, int j, int k) -> amrex::GpuTuple<amrex::Real> + { + amrex::Real Ex_cc = 0._rt, Ey_cc = 0._rt, Ez_cc = 0._rt; + amrex::Real Bx_cc = 0._rt, By_cc = 0._rt, Bz_cc = 0._rt; + + if (normal_dir == 1 || normal_dir == 2) { + Ex_cc = ablastr::coarsen::sample::Interp(Ex_arr, Ex_stag, cc, cr, i, j, k, comp); + Bx_cc = ablastr::coarsen::sample::Interp(Bx_arr, Bx_stag, cc, cr, i, j, k, comp); + } + + if (normal_dir == 0 || normal_dir == 2) { + Ey_cc = ablastr::coarsen::sample::Interp(Ey_arr, Ey_stag, cc, cr, i, j, k, comp); + By_cc = ablastr::coarsen::sample::Interp(By_arr, By_stag, cc, cr, i, j, k, comp); + } + if (normal_dir == 0 || normal_dir == 1) { + Ez_cc = ablastr::coarsen::sample::Interp(Ez_arr, Ez_stag, cc, cr, i, j, k, comp); + Bz_cc = ablastr::coarsen::sample::Interp(Bz_arr, Bz_stag, cc, cr, i, j, k, comp); + } + + amrex::Real const af = area_factor(i,j,k); + if (normal_dir == 0) { return af*(Ey_cc * Bz_cc - Ez_cc * By_cc); } + else if (normal_dir == 1) { return af*(Ez_cc * Bx_cc - Ex_cc * Bz_cc); } + else { return af*(Ex_cc * By_cc - Ey_cc * Bx_cc); } + }); + } + + int const sign = (face().isLow() ?
-1 : 1); + auto r = reduce_data.value(); + int const ii = int(face()); + m_data[ii] = sign*amrex::get<0>(r)/PhysConst::mu0*dA; + + } + + amrex::ParallelDescriptor::ReduceRealSum(m_data.data(), 2*AMREX_SPACEDIM); + + amrex::Real const dt = warpx.getdt(lev); + for (int ii=0 ; ii < 2*AMREX_SPACEDIM ; ii++) { + m_data[ii + 2*AMREX_SPACEDIM] += m_data[ii]*dt; + } + +} + +void +FieldPoyntingFlux::WriteCheckpointData (std::string const & dir) +{ + // Write out the current values of the time integrated data + std::ofstream chkfile{dir + "/FieldPoyntingFlux_data.txt", std::ofstream::out}; + if (!chkfile.good()) { + WARPX_ABORT_WITH_MESSAGE("FieldPoyntingFlux::WriteCheckpointData: could not open file for writing checkpoint data"); + } + + chkfile.precision(17); + + for (int i=0; i < 2*AMREX_SPACEDIM; i++) { + chkfile << m_data[2*AMREX_SPACEDIM + i] << "\n"; + } +} + +void +FieldPoyntingFlux::ReadCheckpointData (std::string const & dir) +{ + // Read in the current values of the time integrated data + std::ifstream chkfile{dir + "/FieldPoyntingFlux_data.txt", std::ifstream::in}; + if (!chkfile.good()) { + WARPX_ABORT_WITH_MESSAGE("FieldPoyntingFlux::ReadCheckpointData: could not open file for reading checkpoint data"); + } + + for (int i=0; i < 2*AMREX_SPACEDIM; i++) { + amrex::Real data; + if (chkfile >> data) { + m_data[2*AMREX_SPACEDIM + i] = data; + } else { + WARPX_ABORT_WITH_MESSAGE("FieldPoyntingFlux::ReadCheckpointData: could not read in time integrated data"); + } + } +} diff --git a/Source/Diagnostics/ReducedDiags/Make.package b/Source/Diagnostics/ReducedDiags/Make.package index 2611831a3dd..4d2e4d7def9 100644 --- a/Source/Diagnostics/ReducedDiags/Make.package +++ b/Source/Diagnostics/ReducedDiags/Make.package @@ -7,6 +7,7 @@ CEXE_sources += DifferentialLuminosity.cpp CEXE_sources += FieldEnergy.cpp CEXE_sources += FieldMaximum.cpp CEXE_sources += FieldMomentum.cpp +CEXE_sources += FieldPoyntingFlux.cpp CEXE_sources += FieldProbe.cpp CEXE_sources += FieldProbeParticleContainer.cpp CEXE_sources += FieldReduction.cpp diff --git a/Source/Diagnostics/ReducedDiags/MultiReducedDiags.H b/Source/Diagnostics/ReducedDiags/MultiReducedDiags.H index 1a2f51794c6..5a782db7118 100644 --- a/Source/Diagnostics/ReducedDiags/MultiReducedDiags.H +++ b/Source/Diagnostics/ReducedDiags/MultiReducedDiags.H @@ -49,10 +49,21 @@ public: * @param[in] step current iteration time */ void ComputeDiags (int step); + /** Loop over all ReducedDiags and call their ComputeDiagsMidStep + * @param[in] step current iteration time */ + void ComputeDiagsMidStep (int step); + /** Loop over all ReducedDiags and call their WriteToFile * @param[in] step current iteration time */ void WriteToFile (int step); + /** \brief Loop over all ReducedDiags and call their WriteCheckpointData + * @param[in] dir checkpoint directory */ + void WriteCheckpointData (std::string const & dir); + + /** \brief Loop over all ReducedDiags and call their ReadCheckpointData + * @param[in] dir checkpoint directory */ + void ReadCheckpointData (std::string const & dir); }; #endif diff --git a/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp b/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp index 5035eac58a8..0ce18174111 100644 --- a/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp +++ b/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp @@ -13,6 +13,7 @@ #include "FieldEnergy.H" #include "FieldMaximum.H" #include "FieldMomentum.H" +#include "FieldPoyntingFlux.H" #include "FieldProbe.H" #include "FieldReduction.H" #include "LoadBalanceCosts.H" 
@@ -66,6 +67,7 @@ MultiReducedDiags::MultiReducedDiags () {"FieldEnergy", [](CS s){return std::make_unique<FieldEnergy>(s);}}, {"FieldMaximum", [](CS s){return std::make_unique<FieldMaximum>(s);}}, {"FieldMomentum", [](CS s){return std::make_unique<FieldMomentum>(s);}}, + {"FieldPoyntingFlux", [](CS s){return std::make_unique<FieldPoyntingFlux>(s);}}, {"FieldProbe", [](CS s){return std::make_unique<FieldProbe>(s);}}, {"FieldReduction", [](CS s){return std::make_unique<FieldReduction>(s);}}, {"LoadBalanceCosts", [](CS s){return std::make_unique<LoadBalanceCosts>(s);}}, @@ -124,6 +126,20 @@ void MultiReducedDiags::ComputeDiags (int step) } // end void MultiReducedDiags::ComputeDiags +// call functions to compute diags at the mid step time level +void MultiReducedDiags::ComputeDiagsMidStep (int step) +{ + WARPX_PROFILE("MultiReducedDiags::ComputeDiagsMidStep()"); + + // loop over all reduced diags + for (int i_rd = 0; i_rd < static_cast<int>(m_rd_names.size()); ++i_rd) + { + m_multi_rd[i_rd] -> ComputeDiagsMidStep(step); + } + // end loop over all reduced diags +} +// end void MultiReducedDiags::ComputeDiagsMidStep + // function to write data void MultiReducedDiags::WriteToFile (int step) { @@ -142,3 +158,26 @@ void MultiReducedDiags::WriteToFile (int step) // end loop over all reduced diags } // end void MultiReducedDiags::WriteToFile + +void MultiReducedDiags::WriteCheckpointData (std::string const & dir) +{ + // Only the I/O rank writes the checkpoint data + if ( !ParallelDescriptor::IOProcessor() ) { return; } + + // loop over all reduced diags + for (int i_rd = 0; i_rd < static_cast<int>(m_rd_names.size()); ++i_rd) + { + m_multi_rd[i_rd]->WriteCheckpointData(dir); + } + // end loop over all reduced diags +} + +void MultiReducedDiags::ReadCheckpointData (std::string const & dir) +{ + // loop over all reduced diags + for (int i_rd = 0; i_rd < static_cast<int>(m_rd_names.size()); ++i_rd) + { + m_multi_rd[i_rd]->ReadCheckpointData(dir); + } + // end loop over all reduced diags +} diff --git a/Source/Diagnostics/ReducedDiags/ReducedDiags.H b/Source/Diagnostics/ReducedDiags/ReducedDiags.H index 2c942e1df6d..a32de30cc6f 100644 --- a/Source/Diagnostics/ReducedDiags/ReducedDiags.H +++ b/Source/Diagnostics/ReducedDiags/ReducedDiags.H @@ -83,6 +83,13 @@ public: */ virtual void ComputeDiags (int step) = 0; + /** + * function to compute diags at the mid step time level + * + * @param[in] step current time step + */ + virtual void ComputeDiagsMidStep (int step); + /** * write to file function * @@ -90,6 +97,20 @@ public: */ virtual void WriteToFile (int step) const; + /** + * \brief Write out checkpoint related data + * + * \param[in] dir Directory where checkpoint data is written + */ + virtual void WriteCheckpointData (std::string const & dir); + + /** + * \brief Read in checkpoint related data + * + * \param[in] dir Directory where checkpoint data was written + */ + virtual void ReadCheckpointData (std::string const & dir); + /** * This function queries deprecated input parameters and aborts * the run if one of them is specified. diff --git a/Source/Diagnostics/ReducedDiags/ReducedDiags.cpp b/Source/Diagnostics/ReducedDiags/ReducedDiags.cpp index a3529cd305d..b0e20584a12 100644 --- a/Source/Diagnostics/ReducedDiags/ReducedDiags.cpp +++ b/Source/Diagnostics/ReducedDiags/ReducedDiags.cpp @@ -92,6 +92,27 @@ void ReducedDiags::LoadBalance () // load balancing operations } +void ReducedDiags::ComputeDiagsMidStep (int /*step*/) +{ + // Defines an empty function ComputeDiagsMidStep() to be overwritten if needed. + // Function used to calculate the diagnostic at the mid step time level + // (instead of at the end of the step).
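+ // (FieldPoyntingFlux is an example of a diagnostic that overrides it.)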
+} + +void ReducedDiags::WriteCheckpointData (std::string const & /*dir*/) +{ + // Defines an empty function WriteCheckpointData() to be overwritten if needed. + // Function used to write out any data needed by the diagnostic in + // the checkpoint. +} + +void ReducedDiags::ReadCheckpointData (std::string const & /*dir*/) +{ + // Defines an empty function ReadCheckpointData() to be overwritten if needed. + // Function used to read in any data that was written out in the checkpoint + // when doing a restart. +} + void ReducedDiags::BackwardCompatibility () const { const amrex::ParmParse pp_rd_name(m_rd_name); diff --git a/Source/Diagnostics/WarpXIO.cpp b/Source/Diagnostics/WarpXIO.cpp index f2921f820fd..e90ae98eb17 100644 --- a/Source/Diagnostics/WarpXIO.cpp +++ b/Source/Diagnostics/WarpXIO.cpp @@ -19,6 +19,7 @@ #include "Utils/WarpXProfilerWrapper.H" #include "WarpX.H" #include "Diagnostics/MultiDiagnostics.H" +#include "Diagnostics/ReducedDiags/MultiReducedDiags.H" #include #include @@ -400,6 +401,8 @@ WarpX::InitFromCheckpoint () if (EB::enabled()) { InitializeEBGridData(maxLevel()); } + reduced_diags->ReadCheckpointData(restart_chkfile); + // Initialize particles mypc->AllocData(); mypc->Restart(restart_chkfile); diff --git a/Source/FieldSolver/ImplicitSolvers/SemiImplicitEM.cpp b/Source/FieldSolver/ImplicitSolvers/SemiImplicitEM.cpp index 41fdf515581..bf8441e1992 100644 --- a/Source/FieldSolver/ImplicitSolvers/SemiImplicitEM.cpp +++ b/Source/FieldSolver/ImplicitSolvers/SemiImplicitEM.cpp @@ -5,6 +5,7 @@ * License: BSD-3-Clause-LBNL */ #include "SemiImplicitEM.H" +#include "Diagnostics/ReducedDiags/MultiReducedDiags.H" #include "WarpX.H" using warpx::fields::FieldType; @@ -83,6 +84,7 @@ void SemiImplicitEM::OneStep ( amrex::Real start_time, // Update WarpX owned Efield_fp to t_{n+1/2} m_WarpX->SetElectricFieldAndApplyBCs( m_E, half_time ); + m_WarpX->reduced_diags->ComputeDiagsMidStep(a_step); // Advance particles from time n+1/2 to time n+1 m_WarpX->FinishImplicitParticleUpdate(); diff --git a/Source/FieldSolver/ImplicitSolvers/StrangImplicitSpectralEM.cpp b/Source/FieldSolver/ImplicitSolvers/StrangImplicitSpectralEM.cpp index cd672f18f98..b8be6b93c63 100644 --- a/Source/FieldSolver/ImplicitSolvers/StrangImplicitSpectralEM.cpp +++ b/Source/FieldSolver/ImplicitSolvers/StrangImplicitSpectralEM.cpp @@ -6,6 +6,7 @@ */ #include "Fields.H" #include "StrangImplicitSpectralEM.H" +#include "Diagnostics/ReducedDiags/MultiReducedDiags.H" #include "WarpX.H" using namespace warpx::fields; @@ -84,6 +85,7 @@ void StrangImplicitSpectralEM::OneStep ( amrex::Real start_time, // Update WarpX owned Efield_fp and Bfield_fp to t_{n+1/2} UpdateWarpXFields( m_E, half_time ); + m_WarpX->reduced_diags->ComputeDiagsMidStep(a_step); // Advance particles from time n+1/2 to time n+1 m_WarpX->FinishImplicitParticleUpdate(); diff --git a/Source/FieldSolver/ImplicitSolvers/ThetaImplicitEM.cpp b/Source/FieldSolver/ImplicitSolvers/ThetaImplicitEM.cpp index aa6ee63f7df..1e6596f5eaa 100644 --- a/Source/FieldSolver/ImplicitSolvers/ThetaImplicitEM.cpp +++ b/Source/FieldSolver/ImplicitSolvers/ThetaImplicitEM.cpp @@ -6,6 +6,7 @@ */ #include "Fields.H" #include "ThetaImplicitEM.H" +#include "Diagnostics/ReducedDiags/MultiReducedDiags.H" #include "WarpX.H" using warpx::fields::FieldType; @@ -109,6 +110,7 @@ void ThetaImplicitEM::OneStep ( const amrex::Real start_time, // Update WarpX owned Efield_fp and Bfield_fp to t_{n+theta} UpdateWarpXFields( m_E, start_time ); + m_WarpX->reduced_diags->ComputeDiagsMidStep(a_step); + //
Advance particles from time n+1/2 to time n+1 m_WarpX->FinishImplicitParticleUpdate(); From 409d346b60dd5bc16ab4aa332a1c1e7d2a551119 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Mon, 3 Feb 2025 15:37:38 -0800 Subject: [PATCH 19/58] Doc Lassen: Pip Cache Disabled (#5632) Script aborted on `python3 -m pip cache purge`. No extra `--no-cache-dir` suffixes needed to compensate since, as the error says, system disabled pip caches. --- Tools/machines/lassen-llnl/install_v100_dependencies_toss3.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Tools/machines/lassen-llnl/install_v100_dependencies_toss3.sh b/Tools/machines/lassen-llnl/install_v100_dependencies_toss3.sh index 1b14159cd22..86f330060f6 100644 --- a/Tools/machines/lassen-llnl/install_v100_dependencies_toss3.sh +++ b/Tools/machines/lassen-llnl/install_v100_dependencies_toss3.sh @@ -114,7 +114,7 @@ rm -rf ${SW_DIR}/venvs/warpx-lassen-toss3 python3 -m venv ${SW_DIR}/venvs/warpx-lassen-toss3 source ${SW_DIR}/venvs/warpx-lassen-toss3/bin/activate python3 -m pip install --upgrade pip -python3 -m pip cache purge +# python3 -m pip cache purge # error: pip cache commands can not function since cache is disabled python3 -m pip install --upgrade build python3 -m pip install --upgrade packaging python3 -m pip install --upgrade wheel From 8bc62d8cd77336e248bccc30ddb39164e2988fb1 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Mon, 3 Feb 2025 15:40:22 -0800 Subject: [PATCH 20/58] [pre-commit.ci] pre-commit autoupdate (#5633) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit updates: - [github.com/astral-sh/ruff-pre-commit: v0.9.3 → v0.9.4](https://github.com/astral-sh/ruff-pre-commit/compare/v0.9.3...v0.9.4) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> --- .pre-commit-config.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index bb03acf77ca..577f0ffc1f0 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -69,7 +69,7 @@ repos: # Python: Ruff linter & formatter # https://docs.astral.sh/ruff/ - repo: https://github.com/astral-sh/ruff-pre-commit - rev: v0.9.3 + rev: v0.9.4 hooks: # Run the linter - id: ruff From 57931b81ea0efdbfec4bc1f84f789b5188b036ed Mon Sep 17 00:00:00 2001 From: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com> Date: Mon, 3 Feb 2025 16:02:49 -0800 Subject: [PATCH 21/58] Add execution of `afterEpush` callback in hybrid solver (#5629) Signed-off-by: roelof-groenewald --- .../FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp | 3 +++ 1 file changed, 3 insertions(+) diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp index ba6bb0e042c..64ee83b10e0 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp @@ -10,6 +10,7 @@ #include "HybridPICModel.H" #include "EmbeddedBoundary/Enabled.H" +#include "Python/callbacks.H" #include "Fields.H" #include "WarpX.H" @@ -304,6 +305,8 @@ void HybridPICModel::HybridPICSolveE ( eb_update_E[lev], lev, solve_for_Faraday ); } + // Allow execution of Python callback after E-field push + ExecutePythonCallback("afterEpush"); } void HybridPICModel::HybridPICSolveE ( From 
57f6317adbec30eb0314c94592b975d42fc217c5 Mon Sep 17 00:00:00 2001 From: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com> Date: Mon, 3 Feb 2025 16:45:30 -0800 Subject: [PATCH 22/58] Fix bug with DSMC collisions in RZ (#5622) I realized there is a bug in the DSMC module for RZ geometry where the velocity vectors for the colliding pair were not rotated so that a proper center-of-momentum calculation could be done. This PR fixes the bug. To check that the fix in this PR works, I compared the azimuthal velocity distribution for energetic ions created from NBI with finite impact parameter (such that a net rotation should be imparted on the ions), when simulated in 3d versus RZ: 3d result: ![image](https://github.com/user-attachments/assets/458f7e9c-b7b2-46ec-b456-8733ce959f94) RZ result: ![image](https://github.com/user-attachments/assets/ce37a51c-f127-44be-b9b9-4f9a1d7d2cbb) --- .../Collision/BinaryCollision/DSMC/DSMCFunc.H | 1 + .../DSMC/SplitAndScatterFunc.H | 26 +++++++++++++++++++ 2 files changed, 27 insertions(+) diff --git a/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.H b/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.H index 5a3c925e9bd..6051aab1b59 100644 --- a/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.H +++ b/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.H @@ -176,6 +176,7 @@ public: m_process_count, m_scattering_processes_data, engine); #if (defined WARPX_DIM_RZ) + /* Undo the earlier velocity rotation. */ amrex::ParticleReal const u1xbuf_new = u1x[I1[i1]]; u1x[I1[i1]] = u1xbuf_new*std::cos(-theta) - u1y[I1[i1]]*std::sin(-theta); u1y[I1[i1]] = u1xbuf_new*std::sin(-theta) + u1y[I1[i1]]*std::cos(-theta); diff --git a/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H b/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H index 473199a6b21..239a76c50c7 100644 --- a/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H +++ b/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H @@ -154,6 +154,25 @@ public: auto& uy2 = soa_products_data[1].m_rdata[PIdx::uy][product2_index]; auto& uz2 = soa_products_data[1].m_rdata[PIdx::uz][product2_index]; +#if (defined WARPX_DIM_RZ) + /* In RZ geometry, macroparticles can collide with other macroparticles + * in the same *cylindrical* cell. For this reason, collisions between macroparticles + * are actually not local in space. In this case, the underlying assumption is that + * particles within the same cylindrical cell represent a cylindrically-symmetric + * momentum distribution function. Therefore, here, we temporarily rotate the + * momentum of one of the macroparticles in agreement with this cylindrical symmetry. + * (This is technically only valid if we use only the m=0 azimuthal mode in the simulation; + * there is a corresponding assert statement at initialization.) + */ + amrex::ParticleReal const theta = ( + soa_products_data[1].m_rdata[PIdx::theta][product2_index] + - soa_products_data[0].m_rdata[PIdx::theta][product1_index] + ); + amrex::ParticleReal const ux1buf = ux1; + ux1 = ux1buf*std::cos(theta) - uy1*std::sin(theta); + uy1 = ux1buf*std::sin(theta) + uy1*std::cos(theta); +#endif + + // for simplicity (for now) we assume non-relativistic particles // and simply calculate the center-of-momentum velocity from the // rest masses @@ -213,6 +232,13 @@ public: ux2 += uCOM_x; uy2 += uCOM_y; uz2 += uCOM_z; + +#if (defined WARPX_DIM_RZ) + /* Undo the earlier velocity rotation.
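(Rotating back by -theta restores the original azimuthal orientation of the momentum of particle 1.)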
*/ + amrex::ParticleReal const ux1buf_new = ux1; + ux1 = ux1buf_new*std::cos(-theta) - uy1*std::sin(-theta); + uy1 = ux1buf_new*std::sin(-theta) + uy1*std::cos(-theta); +#endif } }); From 93466dd9065f3849997e85baa35b8d1ed95a2ff5 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Mon, 3 Feb 2025 21:48:12 -0800 Subject: [PATCH 23/58] Fix Dangling Ref in EB Init (#5635) Follow-up to #5209: My compiler says those locations would reference temporary objects that were destroyed at the end of the line. That seems to be the case indeed. Copy instead to make the temporary a named and thus persistent variable. ![Screenshot from 2025-02-03 16-59-22](https://github.com/user-attachments/assets/8259f6d7-099b-4d09-8382-f24baefb5793) --- Source/EmbeddedBoundary/WarpXInitEB.cpp | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/Source/EmbeddedBoundary/WarpXInitEB.cpp b/Source/EmbeddedBoundary/WarpXInitEB.cpp index 3f33259a313..371bd6a0570 100644 --- a/Source/EmbeddedBoundary/WarpXInitEB.cpp +++ b/Source/EmbeddedBoundary/WarpXInitEB.cpp @@ -147,7 +147,7 @@ WarpX::MarkReducedShapeCells ( amrex::Array4<int> const & eb_reduce_particle_shape_arr = eb_reduce_particle_shape->array(mfi); // Check if the box (including one layer of guard cells) contains a mix of covered and regular cells - const amrex::Box& eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); + const amrex::Box eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); amrex::FabType const fab_type = eb_flag[mfi].getType( eb_info_box ); if (fab_type == amrex::FabType::regular) { // All cells in the box are regular @@ -240,7 +240,7 @@ WarpX::MarkUpdateCellsStairCase ( amrex::Array4<int> const & eb_update_arr = eb_update[idim]->array(mfi); // Check if the box (including one layer of guard cells) contains a mix of covered and regular cells - const amrex::Box& eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); + const amrex::Box eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); amrex::FabType const fab_type = eb_flag[mfi].getType( eb_info_box ); if (fab_type == amrex::FabType::regular) { // All cells in the box are regular From 12269a0ee7622f73326be2577f8458f0e935b465 Mon Sep 17 00:00:00 2001 From: "S. Eric Clark" <25495882+clarkse@users.noreply.github.com> Date: Tue, 4 Feb 2025 15:28:36 -0800 Subject: [PATCH 24/58] =?UTF-8?q?Fixing=20bug=20in=20hyper-resistivity=20c?= =?UTF-8?q?alculation=20which=20had=20missing=20terms=20i=E2=80=A6=20(#563?= =?UTF-8?q?8)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit …n vector Laplacian evaluation. Additionally fixing a staggering bug for density calculation in RZ. --------- Signed-off-by: S.
Eric Clark <25495882+clarkse@users.noreply.github.com> Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com> --- .../Tests/ohm_solver_em_modes/analysis_rz.py | 2 +- .../test_rz_ohm_solver_em_modes_picmi.json | 14 ++++---- .../HybridPICSolveE.cpp | 36 ++++++++++++++----- 3 files changed, 36 insertions(+), 16 deletions(-) diff --git a/Examples/Tests/ohm_solver_em_modes/analysis_rz.py b/Examples/Tests/ohm_solver_em_modes/analysis_rz.py index 841e1177630..7cd5086c408 100755 --- a/Examples/Tests/ohm_solver_em_modes/analysis_rz.py +++ b/Examples/Tests/ohm_solver_em_modes/analysis_rz.py @@ -179,5 +179,5 @@ def process(it): amps = np.abs(F_kw[2, 1, len(kz) // 2 - 2 : len(kz) // 2 + 2]) print("Amplitude sample: ", amps) assert np.allclose( - amps, np.array([61.02377286, 19.80026021, 100.47687017, 10.83331295]) + amps, np.array([59.23850009, 19.26746169, 92.65794174, 10.83627164]) ) diff --git a/Regression/Checksum/benchmarks_json/test_rz_ohm_solver_em_modes_picmi.json b/Regression/Checksum/benchmarks_json/test_rz_ohm_solver_em_modes_picmi.json index ec1b6272092..feca88922e2 100644 --- a/Regression/Checksum/benchmarks_json/test_rz_ohm_solver_em_modes_picmi.json +++ b/Regression/Checksum/benchmarks_json/test_rz_ohm_solver_em_modes_picmi.json @@ -1,12 +1,12 @@ { "lev=0": {}, "ions": { - "particle_momentum_x": 5.0438993756415296e-17, - "particle_momentum_y": 5.0444406612873916e-17, - "particle_momentum_z": 5.0519292431385393e-17, - "particle_position_x": 143164.41713467025, - "particle_position_y": 143166.51845281923, - "particle_theta": 2573261.8729711357, - "particle_weight": 8.128680645366887e+18 + "particle_momentum_x": 5.043784704795177e-17, + "particle_momentum_y": 5.0444695502620235e-17, + "particle_momentum_z": 5.05193106847111e-17, + "particle_position_x": 143164.53685279266, + "particle_position_y": 143166.5185853012, + "particle_theta": 2573262.446840369, + "particle_weight": 8.128680645366886e+18 } } diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp index 47e45bbe753..2047e87b696 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp @@ -611,9 +611,10 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( if (include_hyper_resistivity_term) { // r on cell-centered point (Jr is cell-centered in r) - Real const r = rmin + (i + 0.5_rt)*dr; - - auto nabla2Jr = T_Algo::Dr_rDr_over_r(Jr, r, dr, coefs_r, n_coefs_r, i, j, 0, 0); + const Real r = rmin + (i + 0.5_rt)*dr; + const Real jr_val = Interp(Jr, Jr_stag, Er_stag, coarsen, i, j, 0, 0); + auto nabla2Jr = T_Algo::Dr_rDr_over_r(Jr, r, dr, coefs_r, n_coefs_r, i, j, 0, 0) + + T_Algo::Dzz(Jr, coefs_z, n_coefs_z, i, j, 0, 0) - jr_val/(r*r); Er(i, j, 0) -= eta_h * nabla2Jr; } }, @@ -633,7 +634,7 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( } // Interpolate to get the appropriate charge density in space - Real rho_val = Interp(rho, nodal, Er_stag, coarsen, i, j, 0, 0); + Real rho_val = Interp(rho, nodal, Et_stag, coarsen, i, j, 0, 0); // Interpolate current to appropriate staggering to match E field Real jtot_val = 0._rt; @@ -659,7 +660,13 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( // Add resistivity only if E field value is used to update B if (solve_for_Faraday) { Et(i, j, 0) += eta(rho_val, jtot_val) * Jt(i, j, 0); } - // Note: Hyper-resisitivity should be revisited here when modal decomposition is 
implemented + if (include_hyper_resistivity_term) { + const Real jt_val = Interp(Jt, Jt_stag, Et_stag, coarsen, i, j, 0, 0); + auto nabla2Jt = T_Algo::Dr_rDr_over_r(Jt, r, dr, coefs_r, n_coefs_r, i, j, 0, 0) + + T_Algo::Dzz(Jt, coefs_z, n_coefs_z, i, j, 0, 0) - jt_val/(r*r); + + Et(i, j, 0) -= eta_h * nabla2Jt; + } }, // Ez calculation @@ -697,7 +704,14 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( if (solve_for_Faraday) { Ez(i, j, 0) += eta(rho_val, jtot_val) * Jz(i, j, 0); } if (include_hyper_resistivity_term) { + // r on nodal point (Jz is nodal in r) + Real const r = rmin + i*dr; + auto nabla2Jz = T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, 0, 0); + if (r > 0.5_rt*dr) { + nabla2Jz += T_Algo::Dr_rDr_over_r(Jz, r, dr, coefs_r, n_coefs_r, i, j, 0, 0); + } + Ez(i, j, 0) -= eta_h * nabla2Jz; } } @@ -918,7 +932,9 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( if (solve_for_Faraday) { Ex(i, j, k) += eta(rho_val, jtot_val) * Jx(i, j, k); } if (include_hyper_resistivity_term) { - auto nabla2Jx = T_Algo::Dxx(Jx, coefs_x, n_coefs_x, i, j, k); + auto nabla2Jx = T_Algo::Dxx(Jx, coefs_x, n_coefs_x, i, j, k) + + T_Algo::Dyy(Jx, coefs_y, n_coefs_y, i, j, k) + + T_Algo::Dzz(Jx, coefs_z, n_coefs_z, i, j, k); Ex(i, j, k) -= eta_h * nabla2Jx; } }, @@ -958,7 +974,9 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( if (solve_for_Faraday) { Ey(i, j, k) += eta(rho_val, jtot_val) * Jy(i, j, k); } if (include_hyper_resistivity_term) { - auto nabla2Jy = T_Algo::Dyy(Jy, coefs_y, n_coefs_y, i, j, k); + auto nabla2Jy = T_Algo::Dxx(Jy, coefs_x, n_coefs_x, i, j, k) + + T_Algo::Dyy(Jy, coefs_y, n_coefs_y, i, j, k) + + T_Algo::Dzz(Jy, coefs_z, n_coefs_z, i, j, k); Ey(i, j, k) -= eta_h * nabla2Jy; } }, @@ -998,7 +1016,9 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( if (solve_for_Faraday) { Ez(i, j, k) += eta(rho_val, jtot_val) * Jz(i, j, k); } if (include_hyper_resistivity_term) { - auto nabla2Jz = T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, k); + auto nabla2Jz = T_Algo::Dxx(Jz, coefs_x, n_coefs_x, i, j, k) + + T_Algo::Dyy(Jz, coefs_y, n_coefs_y, i, j, k) + + T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, k); Ez(i, j, k) -= eta_h * nabla2Jz; } } From cdb9e279ef2c385f447c175509df549ec1456e42 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Tue, 4 Feb 2025 15:29:17 -0800 Subject: [PATCH 25/58] Release 25.02 (#5639) Prepare the February release of WarpX: ```bash # update dependencies ./Tools/Release/updateAMReX.py ./Tools/Release/updatePICSAR.py ./Tools/Release/updatepyAMReX.py # bump version number ./Tools/Release/newVersion.sh ``` Following this workflow: https://warpx.readthedocs.io/en/latest/maintenance/release.html --------- Signed-off-by: Axel Huebl --- .github/workflows/cuda.yml | 2 +- CMakeLists.txt | 2 +- Docs/source/conf.py | 4 ++-- Python/setup.py | 2 +- Tools/Release/releasePR.py | 2 +- Tools/Release/weeklyUpdate.py | 2 +- cmake/dependencies/AMReX.cmake | 4 ++-- cmake/dependencies/PICSAR.cmake | 4 ++-- cmake/dependencies/pyAMReX.cmake | 4 ++-- setup.py | 2 +- 10 files changed, 14 insertions(+), 14 deletions(-) diff --git a/.github/workflows/cuda.yml b/.github/workflows/cuda.yml index 12a68d327f7..21f762f4819 100644 --- a/.github/workflows/cuda.yml +++ b/.github/workflows/cuda.yml @@ -127,7 +127,7 @@ jobs: which nvcc || echo "nvcc not in PATH!" 
git clone https://github.com/AMReX-Codes/amrex.git ../amrex - cd ../amrex && git checkout --detach 69f1ac884c6aba4d9881260819ade3bb25ed8aad && cd - + cd ../amrex && git checkout --detach 25.02 && cd - make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_FFT=TRUE USE_CCACHE=TRUE -j 4 ccache -s diff --git a/CMakeLists.txt b/CMakeLists.txt index 24e9338982e..bb3ee66f786 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -1,7 +1,7 @@ # Preamble #################################################################### # cmake_minimum_required(VERSION 3.24.0) -project(WarpX VERSION 25.01) +project(WarpX VERSION 25.02) include(${WarpX_SOURCE_DIR}/cmake/WarpXFunctions.cmake) diff --git a/Docs/source/conf.py b/Docs/source/conf.py index 247e11faa4f..666aaf858fa 100644 --- a/Docs/source/conf.py +++ b/Docs/source/conf.py @@ -107,9 +107,9 @@ def __init__(self, *args, **kwargs): # built documents. # # The short X.Y version. -version = "25.01" +version = "25.02" # The full version, including alpha/beta/rc tags. -release = "25.01" +release = "25.02" # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/Python/setup.py b/Python/setup.py index a50b467c070..e0ec6c98a7d 100644 --- a/Python/setup.py +++ b/Python/setup.py @@ -65,7 +65,7 @@ setup( name="pywarpx", - version="25.01", + version="25.02", packages=["pywarpx"], package_dir={"pywarpx": "pywarpx"}, description="""Wrapper of WarpX""", diff --git a/Tools/Release/releasePR.py b/Tools/Release/releasePR.py index 9dfa178e5b4..47a380901b1 100755 --- a/Tools/Release/releasePR.py +++ b/Tools/Release/releasePR.py @@ -93,7 +93,7 @@ def concat_answers(answers): # PICSAR New Version ########################################################## -PICSAR_version = "24.09" +PICSAR_version = "25.01" answers = concat_answers(["y", PICSAR_version, PICSAR_version, "y"]) process = subprocess.Popen( diff --git a/Tools/Release/weeklyUpdate.py b/Tools/Release/weeklyUpdate.py index 005c8c5d373..6c32993f79e 100755 --- a/Tools/Release/weeklyUpdate.py +++ b/Tools/Release/weeklyUpdate.py @@ -88,7 +88,7 @@ def concat_answers(answers): # PICSAR New Version ########################################################## -PICSAR_version = "24.09" +PICSAR_version = "25.01" answers = concat_answers(["y", PICSAR_version, PICSAR_version, "y"]) process = subprocess.Popen( diff --git a/cmake/dependencies/AMReX.cmake b/cmake/dependencies/AMReX.cmake index 9c8907e835b..83feb0ff1db 100644 --- a/cmake/dependencies/AMReX.cmake +++ b/cmake/dependencies/AMReX.cmake @@ -271,7 +271,7 @@ macro(find_amrex) endif() set(COMPONENT_PRECISION ${WarpX_PRECISION} P${WarpX_PARTICLE_PRECISION}) - find_package(AMReX 25.01 CONFIG REQUIRED COMPONENTS ${COMPONENT_ASCENT} ${COMPONENT_CATALYST} ${COMPONENT_DIMS} ${COMPONENT_EB} ${COMPONENT_FFT} PARTICLES ${COMPONENT_PIC} ${COMPONENT_PRECISION} ${COMPONENT_SENSEI} LSOLVERS) + find_package(AMReX 25.02 CONFIG REQUIRED COMPONENTS ${COMPONENT_ASCENT} ${COMPONENT_CATALYST} ${COMPONENT_DIMS} ${COMPONENT_EB} ${COMPONENT_FFT} PARTICLES ${COMPONENT_PIC} ${COMPONENT_PRECISION} ${COMPONENT_SENSEI} LSOLVERS) # note: TINYP skipped because user-configured and optional # AMReX CMake helper scripts @@ -294,7 +294,7 @@ set(WarpX_amrex_src "" set(WarpX_amrex_repo "https://github.com/AMReX-Codes/amrex.git" CACHE STRING "Repository URI to pull and build AMReX from if(WarpX_amrex_internal)") -set(WarpX_amrex_branch "69f1ac884c6aba4d9881260819ade3bb25ed8aad" +set(WarpX_amrex_branch "25.02" CACHE STRING 
"Repository branch for WarpX_amrex_repo if(WarpX_amrex_internal)") diff --git a/cmake/dependencies/PICSAR.cmake b/cmake/dependencies/PICSAR.cmake index 9eb9162238a..d5249b61641 100644 --- a/cmake/dependencies/PICSAR.cmake +++ b/cmake/dependencies/PICSAR.cmake @@ -88,7 +88,7 @@ function(find_picsar) #message(STATUS "PICSAR: Using version '${PICSAR_VERSION}'") else() # not supported by PICSAR (yet) - #find_package(PICSAR 24.09 CONFIG REQUIRED QED) + #find_package(PICSAR 25.01 CONFIG REQUIRED QED) #message(STATUS "PICSAR: Found version '${PICSAR_VERSION}'") message(FATAL_ERROR "PICSAR: Cannot be used as externally installed " "library yet. " @@ -109,7 +109,7 @@ if(WarpX_QED) set(WarpX_picsar_repo "https://github.com/ECP-WarpX/picsar.git" CACHE STRING "Repository URI to pull and build PICSAR from if(WarpX_picsar_internal)") - set(WarpX_picsar_branch "24.09" + set(WarpX_picsar_branch "25.01" CACHE STRING "Repository branch for WarpX_picsar_repo if(WarpX_picsar_internal)") diff --git a/cmake/dependencies/pyAMReX.cmake b/cmake/dependencies/pyAMReX.cmake index 257bc264f21..975644ebf2b 100644 --- a/cmake/dependencies/pyAMReX.cmake +++ b/cmake/dependencies/pyAMReX.cmake @@ -59,7 +59,7 @@ function(find_pyamrex) endif() elseif(NOT WarpX_pyamrex_internal) # TODO: MPI control - find_package(pyAMReX 25.01 CONFIG REQUIRED) + find_package(pyAMReX 25.02 CONFIG REQUIRED) message(STATUS "pyAMReX: Found version '${pyAMReX_VERSION}'") endif() endfunction() @@ -74,7 +74,7 @@ option(WarpX_pyamrex_internal "Download & build pyAMReX" ON) set(WarpX_pyamrex_repo "https://github.com/AMReX-Codes/pyamrex.git" CACHE STRING "Repository URI to pull and build pyamrex from if(WarpX_pyamrex_internal)") -set(WarpX_pyamrex_branch "458c9ae7ab3cd4ca4e4e9736e82c60f9a7e7606c" +set(WarpX_pyamrex_branch "25.02" CACHE STRING "Repository branch for WarpX_pyamrex_repo if(WarpX_pyamrex_internal)") diff --git a/setup.py b/setup.py index 9538adcb106..fae11aa0654 100644 --- a/setup.py +++ b/setup.py @@ -280,7 +280,7 @@ def build_extension(self, ext): setup( name="pywarpx", # note PEP-440 syntax: x.y.zaN but x.y.z.devN - version="25.01", + version="25.02", packages=["pywarpx"], package_dir={"pywarpx": "Python/pywarpx"}, author="Jean-Luc Vay, David P. Grote, Maxence Thévenet, Rémi Lehe, Andrew Myers, Weiqun Zhang, Axel Huebl, et al.", From 10af74faacdc0c34c3648b780f052f6e9e32394a Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Wed, 5 Feb 2025 15:04:31 -0800 Subject: [PATCH 26/58] AMReX/pyAMReX/PICSAR: Weekly Update (#5643) Weekly update to latest AMReX. Weekly update to latest pyAMReX. Weekly update to latest PICSAR (no changes). ```console ./Tools/Release/updateAMReX.py ./Tools/Release/updatepyAMReX.py ./Tools/Release/updatePICSAR.py ``` --------- Signed-off-by: Axel Huebl --- .github/workflows/cuda.yml | 2 +- cmake/dependencies/AMReX.cmake | 2 +- cmake/dependencies/pyAMReX.cmake | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/cuda.yml b/.github/workflows/cuda.yml index 21f762f4819..0943de41e55 100644 --- a/.github/workflows/cuda.yml +++ b/.github/workflows/cuda.yml @@ -127,7 +127,7 @@ jobs: which nvcc || echo "nvcc not in PATH!" 
git clone https://github.com/AMReX-Codes/amrex.git ../amrex - cd ../amrex && git checkout --detach 25.02 && cd - + cd ../amrex && git checkout --detach 78bdf0faabc4101d5333ebb421e553efcc7ec04e && cd - make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_FFT=TRUE USE_CCACHE=TRUE -j 4 ccache -s diff --git a/cmake/dependencies/AMReX.cmake b/cmake/dependencies/AMReX.cmake index 83feb0ff1db..5136cb8f2f4 100644 --- a/cmake/dependencies/AMReX.cmake +++ b/cmake/dependencies/AMReX.cmake @@ -294,7 +294,7 @@ set(WarpX_amrex_src "" set(WarpX_amrex_repo "https://github.com/AMReX-Codes/amrex.git" CACHE STRING "Repository URI to pull and build AMReX from if(WarpX_amrex_internal)") -set(WarpX_amrex_branch "25.02" +set(WarpX_amrex_branch "78bdf0faabc4101d5333ebb421e553efcc7ec04e" CACHE STRING "Repository branch for WarpX_amrex_repo if(WarpX_amrex_internal)") diff --git a/cmake/dependencies/pyAMReX.cmake b/cmake/dependencies/pyAMReX.cmake index 975644ebf2b..b716e883be9 100644 --- a/cmake/dependencies/pyAMReX.cmake +++ b/cmake/dependencies/pyAMReX.cmake @@ -74,7 +74,7 @@ option(WarpX_pyamrex_internal "Download & build pyAMReX" ON) set(WarpX_pyamrex_repo "https://github.com/AMReX-Codes/pyamrex.git" CACHE STRING "Repository URI to pull and build pyamrex from if(WarpX_pyamrex_internal)") -set(WarpX_pyamrex_branch "25.02" +set(WarpX_pyamrex_branch "006bf94a4c68466fac8a1281750391b5a6083d82" CACHE STRING "Repository branch for WarpX_pyamrex_repo if(WarpX_pyamrex_internal)") From 609b163bb731b269fd1ce415431a492773ab04b7 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Thu, 6 Feb 2025 10:57:25 -0800 Subject: [PATCH 27/58] RTD: Fix GA Integration (#5645) GA was dropped from RTD in early Oct, 2024. This adds it again. --- Docs/requirements.txt | 1 + Docs/source/conf.py | 5 +++++ 2 files changed, 6 insertions(+) diff --git a/Docs/requirements.txt b/Docs/requirements.txt index 14fafe406fb..14d07e29f6e 100644 --- a/Docs/requirements.txt +++ b/Docs/requirements.txt @@ -27,5 +27,6 @@ sphinx-copybutton sphinx-design sphinx_rtd_theme>=1.1.1 sphinxcontrib-bibtex +sphinxcontrib-googleanalytics sphinxcontrib-napoleon yt # for checksumAPI diff --git a/Docs/source/conf.py b/Docs/source/conf.py index 666aaf858fa..a5fed3a4614 100644 --- a/Docs/source/conf.py +++ b/Docs/source/conf.py @@ -56,8 +56,13 @@ "sphinx_design", "breathe", "sphinxcontrib.bibtex", + "sphinxcontrib.googleanalytics", ] +# Google Analytics +googleanalytics_id = "G-QZGY5060MZ" +googleanalytics_enabled = True + # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] From 4f0f16302dbb3db346d371bbcf1a636685dab76f Mon Sep 17 00:00:00 2001 From: Brian Jensen <127121969+budjensen@users.noreply.github.com> Date: Thu, 6 Feb 2025 20:29:45 -0500 Subject: [PATCH 28/58] Add MCC forward scattering (#5621) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Added a version of forward scattering suggested in [J F J Janssen et al (2016)](https://doi.org/10.1088/0963-0252/25/5/055026). This process decreases total particle energy by the process' energy threshold. If no energy threshold is given in the input file, this process is equivalent to no collision being carried out (no scattering and no energy change). Adjusted documentation appropriately and fixed a pre-existing typo. 
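As an illustration only (the collision name, species, cross-section files, and energy value below are hypothetical, not taken from this PR), a minimal input fragment exercising the new process could look like:

```
collisions.collision_names = coll_elec
coll_elec.type = background_mcc
coll_elec.species = electrons
coll_elec.scattering_processes = elastic forward
coll_elec.elastic_cross_section = elastic_xsec.dat
coll_elec.forward_cross_section = forward_xsec.dat
coll_elec.forward_energy = 11.5
```

Leaving out `coll_elec.forward_energy` selects the second mode of operation, i.e. no scattering and no energy change.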
Feature was tested on my own machine by confirming with python callbacks that the pre-collision and post-collision velocities were equal in the case of no energy cost threshold being supplied, and that the velocities were scaled down by the appropriate amount when a threshold was supplied (it was also checked that particle direction was the same before and after collision). No formal test was added since no current MCC tests exist and adding a framework to access cross sections while executing tests was prohibitive. --- Docs/source/refs.bib | 11 +++++++++++ Docs/source/theory/multiphysics/collisions.rst | 12 +++++++++++- Docs/source/usage/parameters.rst | 4 ++-- .../BackgroundMCC/BackgroundMCCCollision.cpp | 8 ++++++++ .../Collision/BinaryCollision/DSMC/DSMCFunc.cpp | 8 ++++++++ .../BinaryCollision/DSMC/SplitAndScatterFunc.H | 2 ++ Source/Particles/Collision/ScatteringProcess.H | 1 + Source/Particles/Collision/ScatteringProcess.cpp | 2 ++ 8 files changed, 45 insertions(+), 3 deletions(-) diff --git a/Docs/source/refs.bib b/Docs/source/refs.bib index d6c81c34404..6623bacd452 100644 --- a/Docs/source/refs.bib +++ b/Docs/source/refs.bib @@ -35,6 +35,17 @@ @ARTICLE{Birdsall1991 year = {1991} } +@misc{Janssen2016, +author = {Janssen, J. F. J. and Pitchford, L. C. and Hagelaar, G. J. M. and van Dijk, J.}, +doi = {10.1088/0963-0252/25/5/055026}, +journal = {Plasma Sources Science and Technology}, +number = {5}, +pages = {055026}, +title = {{Evaluation of angular scattering models for electron-neutral collisions in Monte Carlo simulations}}, +volume = {25}, +year = {2016} +} + @misc{Lim2007, author = {Lim, Chul-Hyun}, issn = {0419-4217}, diff --git a/Docs/source/theory/multiphysics/collisions.rst b/Docs/source/theory/multiphysics/collisions.rst index a2b11bf42a2..1c7593a0e4e 100644 --- a/Docs/source/theory/multiphysics/collisions.rst +++ b/Docs/source/theory/multiphysics/collisions.rst @@ -121,13 +121,23 @@ The particle velocity in the COM frame is then isotropically scattered using the Back scattering ^^^^^^^^^^^^^^^ -The process is the same as for elastic scattering above expect the scattering angle is fixed at :math:`\pi`, meaning the particle velocity in the COM frame is updated to :math:`-\vec{u}_c`. +The process is the same as for elastic scattering above except the scattering angle is fixed at :math:`\pi`, meaning the particle velocity in the COM frame is updated to :math:`-\vec{u}_c`. Excitation ^^^^^^^^^^ The process is also the same as for elastic scattering except the excitation energy cost is subtracted from the particle energy. This is done by reducing the velocity before a scattering angle is chosen. +Forward scattering +^^^^^^^^^^^^^^^^^^ + +This process operates in two ways: + +1. If an excitation energy cost is provided, the energy cost is subtracted from the particle energy and no scattering is performed. +2. If an excitation energy cost is not provided, the particle is not scattered and the velocity is unchanged (corresponding to a scattering angle of :math:`0` in the elastic scattering process above). + +See :cite:t:`b-Janssen2016` for a recommended use of this process. + Benchmarks ---------- diff --git a/Docs/source/usage/parameters.rst b/Docs/source/usage/parameters.rst index aaba7130b87..2de029127fa 100644 --- a/Docs/source/usage/parameters.rst +++ b/Docs/source/usage/parameters.rst @@ -2167,8 +2167,8 @@ Details about the collision models can be found in the :ref:`theory section <multiphysics-collisions>`. * ``<collision name>.scattering_processes`` (`strings` separated by spaces) Only for ``dsmc`` and ``background_mcc``.
The scattering processes that should be - included. Available options are ``elastic``, ``back`` & ``charge_exchange`` - for ions and ``elastic``, ``excitationX`` & ``ionization`` for electrons. + included. Available options are ``elastic``, ``excitationX``, ``forward``, ``back``, and ``charge_exchange`` + for ions and ``elastic``, ``excitationX``, ``ionization`` & ``forward`` for electrons. Multiple excitation events can be included for electrons corresponding to excitation to different levels, the ``X`` above can be changed to a unique identifier for each excitation process. For each scattering process specified diff --git a/Source/Particles/Collision/BackgroundMCC/BackgroundMCCCollision.cpp b/Source/Particles/Collision/BackgroundMCC/BackgroundMCCCollision.cpp index 80ce13744fd..8becd7d231a 100644 --- a/Source/Particles/Collision/BackgroundMCC/BackgroundMCCCollision.cpp +++ b/Source/Particles/Collision/BackgroundMCC/BackgroundMCCCollision.cpp @@ -106,6 +106,14 @@ BackgroundMCCCollision::BackgroundMCCCollision (std::string const& collision_nam utils::parser::getWithParser( pp_collision_name, kw_energy.c_str(), energy); } + // if the scattering process is forward scattering get the energy + // associated with the process if it is given (this allows forward + // scattering to be used both with and without a fixed energy loss) + else if (scattering_process.find("forward") != std::string::npos) { + const std::string kw_energy = scattering_process + "_energy"; + utils::parser::queryWithParser( + pp_collision_name, kw_energy.c_str(), energy); + } ScatteringProcess process(scattering_process, cross_section_file, energy); diff --git a/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.cpp b/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.cpp index e40a4e9822c..cf5f8de8d3c 100644 --- a/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.cpp +++ b/Source/Particles/Collision/BinaryCollision/DSMC/DSMCFunc.cpp @@ -46,6 +46,14 @@ DSMCFunc::DSMCFunc ( utils::parser::getWithParser( pp_collision_name, kw_energy.c_str(), energy); } + // if the scattering process is forward scattering get the energy + // associated with the process if it is given (this allows forward + // scattering to be used both with and without a fixed energy loss) + else if (scattering_process.find("forward") != std::string::npos) { + const std::string kw_energy = scattering_process + "_energy"; + utils::parser::queryWithParser( + pp_collision_name, kw_energy.c_str(), energy); + } ScatteringProcess process(scattering_process, cross_section_file, energy); diff --git a/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H b/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H index 239a76c50c7..db04dbc7f32 100644 --- a/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H +++ b/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H @@ -221,6 +221,8 @@ public: else { amrex::Abort("Uneven mass charge-exchange not implemented yet."); } + } else if (mask[i] == int(ScatteringProcessType::FORWARD)) { + amrex::Abort("Forward scattering with DSMC not implemented yet."); } else { amrex::Abort("Unknown scattering process."); diff --git a/Source/Particles/Collision/ScatteringProcess.H b/Source/Particles/Collision/ScatteringProcess.H index 59ef7a02afb..0c3f2daf8c1 100644 --- a/Source/Particles/Collision/ScatteringProcess.H +++ b/Source/Particles/Collision/ScatteringProcess.H @@ -21,6 +21,7 @@ enum class ScatteringProcessType { CHARGE_EXCHANGE, EXCITATION, IONIZATION, 
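+ // forward: no deflection; optionally subtracts a fixed energy cost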
+ FORWARD, }; class ScatteringProcess diff --git a/Source/Particles/Collision/ScatteringProcess.cpp b/Source/Particles/Collision/ScatteringProcess.cpp index ea1b4b40f54..ad3f179fa18 100644 --- a/Source/Particles/Collision/ScatteringProcess.cpp +++ b/Source/Particles/Collision/ScatteringProcess.cpp @@ -87,6 +87,8 @@ ScatteringProcess::parseProcessType(const std::string& scattering_process) return ScatteringProcessType::IONIZATION; } else if (scattering_process.find("excitation") != std::string::npos) { return ScatteringProcessType::EXCITATION; + } else if (scattering_process.find("forward") != std::string::npos) { + return ScatteringProcessType::FORWARD; } else { return ScatteringProcessType::INVALID; } From 86806f9cb777c55f5caffadd24c47f7efc8fb752 Mon Sep 17 00:00:00 2001 From: David Grote Date: Thu, 6 Feb 2025 22:36:59 -0800 Subject: [PATCH 29/58] Add PMC boundary conditions (#5628) --- Docs/source/theory/boundary_conditions.rst | 20 ++++ Docs/source/usage/parameters.rst | 2 + Examples/Tests/pec/CMakeLists.txt | 10 ++ Examples/Tests/pec/inputs_test_3d_pmc_field | 54 +++++++++++ .../test_3d_magnetostatic_eb.json | 30 +++--- .../test_3d_magnetostatic_eb_picmi.json | 46 ++++----- .../benchmarks_json/test_3d_pmc_field.json | 6 ++ .../WarpXFieldBoundaries.cpp | 94 ++++++++++++++----- Source/BoundaryConditions/WarpX_PEC.H | 5 +- Source/BoundaryConditions/WarpX_PEC.cpp | 82 ++++++++++------ .../ImplicitSolvers/ImplicitSolver.cpp | 3 +- .../DivCleaner/ProjectionDivCleaner.cpp | 4 +- Source/Utils/WarpXAlgorithmSelection.H | 2 +- Source/Utils/WarpXUtil.cpp | 9 ++ Source/WarpX.H | 10 +- 15 files changed, 272 insertions(+), 105 deletions(-) create mode 100644 Examples/Tests/pec/inputs_test_3d_pmc_field create mode 100644 Regression/Checksum/benchmarks_json/test_3d_pmc_field.json diff --git a/Docs/source/theory/boundary_conditions.rst b/Docs/source/theory/boundary_conditions.rst index 395b072ccbe..d8b3de40c11 100644 --- a/Docs/source/theory/boundary_conditions.rst +++ b/Docs/source/theory/boundary_conditions.rst @@ -301,3 +301,23 @@ the right boundary is reflecting. .. bibliography:: :keyprefix: bc- + +.. _theory-bc-pmc: + +Perfect Magnetic Conductor +---------------------------- + +This boundary can be used to model a symmetric surface, where charges and current are +symmetric across the boundary. +This is equivalent to the Neumann (zero-derivative) boundary condition. +For the electromagnetic solve, at PMC, the tangential magnetic field and the normal electric +field are odd across the boundary and set to 0 on the boundary. +In the guard-cell region, those fields are set equal and +opposite to the respective field component in the mirror location across the PMC boundary. +The other components, the normal magnetic field and tangential electric field, are even +and set equal to the field component in the mirror location in the domain across the PMC boundary. + +The PMC boundary condition also impacts the deposition of charge and current density. +The charge and current densities deposited into the guard cells are reflected back into +the domain, adding them to the mirror cells in the domain. +This represents the charge and current from the virtual symmetric particles in the guard cells. 
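+ +Schematically, for a PMC on a lower domain boundary, the guard-cell values are set as +(a sketch that ignores the staggering of the individual field components): + +.. math:: + + E_t(-i) = E_t(+i), \quad E_n(-i) = -E_n(+i), \quad B_n(-i) = B_n(+i), \quad B_t(-i) = -B_t(+i), + +where :math:`t` and :math:`n` denote the tangential and normal components and :math:`\pm i` +are mirror cells across the boundary.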
diff --git a/Docs/source/usage/parameters.rst b/Docs/source/usage/parameters.rst index 2de029127fa..253f9ca0071 100644 --- a/Docs/source/usage/parameters.rst +++ b/Docs/source/usage/parameters.rst @@ -533,6 +533,8 @@ Domain Boundary Conditions * ``pec``: This option can be used to set a Perfect Electric Conductor at the simulation boundary. Please see the :ref:`PEC theory section <theory-bc-pec>` for more details. Note that PEC boundary is invalid at `r=0` for the RZ solver. Please use ``none`` option. This boundary condition does not work with the spectral solver. + * ``pmc``: This option can be used to set a Perfect Magnetic Conductor at the simulation boundary. Please see the :ref:`PEC theory section <theory-bc-pmc>` for more details. This is equivalent to ``Neumann``. This boundary condition does not work with the spectral solver. + * ``pec_insulator``: This option specifies a mixed perfect electric conductor and insulator boundary, where some part of the boundary is PEC and some is insulator. In the insulator portion, the normal fields are extrapolated and the tangential fields are either set to the specified value or extrapolated. The region that is insulator is specified using a spatially dependent expression with the insulator being in the area where the value of the expression is greater than zero. diff --git a/Examples/Tests/pec/CMakeLists.txt b/Examples/Tests/pec/CMakeLists.txt index 66d9dd1c13e..15aa17c2d5f 100644 --- a/Examples/Tests/pec/CMakeLists.txt +++ b/Examples/Tests/pec/CMakeLists.txt @@ -41,6 +41,16 @@ add_warpx_test( OFF # dependency ) +add_warpx_test( + test_3d_pmc_field # name + 3 # dims + 2 # nprocs + inputs_test_3d_pmc_field # inputs + "analysis_pec.py diags/diag1000134" # analysis + "analysis_default_regression.py --path diags/diag1000134" # checksum + OFF # dependency +) + add_warpx_test( test_2d_pec_field_insulator_implicit # name 2 # dims diff --git a/Examples/Tests/pec/inputs_test_3d_pmc_field b/Examples/Tests/pec/inputs_test_3d_pmc_field new file mode 100644 index 00000000000..2fc1cb9e5ab --- /dev/null +++ b/Examples/Tests/pec/inputs_test_3d_pmc_field @@ -0,0 +1,54 @@ +# Set-up to test the PMC Boundary condition for the fields +# Constructive interference between the incident and reflected wave results in a +# standing wave. + +# max step +max_step = 134 + +# number of grid points +amr.n_cell = 32 32 256 + +# Maximum allowable size of each subdomain +amr.max_grid_size = 1024 +amr.blocking_factor = 32 + +amr.max_level = 0 + +# Geometry +geometry.dims = 3 +geometry.prob_lo = -8.e-6 -8.e-6 -4.e-6 +geometry.prob_hi = 8.e-6 8.e-6 4.e-6 + +# Boundary condition +boundary.field_lo = periodic periodic pmc +boundary.field_hi = periodic periodic pmc + +warpx.serialize_initial_conditions = 1 + +# Verbosity +warpx.verbose = 1 + +# Algorithms +algo.current_deposition = esirkepov +# CFL +warpx.cfl = 0.9 + + +my_constants.z1 = -2.e-6 +my_constants.z2 = 2.e-6 +my_constants.wavelength = 1.e-6 +warpx.E_ext_grid_init_style = parse_E_ext_grid_function +warpx.Ez_external_grid_function(x,y,z) = "0." +warpx.Ex_external_grid_function(x,y,z) = "0." +warpx.Ey_external_grid_function(x,y,z) = "((1.e5*sin(2*pi*(z)/wavelength)) * (z<z2)*(z>z1))" + +warpx.B_ext_grid_init_style = parse_B_ext_grid_function +warpx.Bx_external_grid_function(x,y,z)= "(((-1.e5*sin(2*pi*(z)/wavelength))/clight))*(z<z2)*(z>z1) " +warpx.By_external_grid_function(x,y,z)= "0." +warpx.Bz_external_grid_function(x,y,z) = "0."
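+ +# Note: the (z<z2)*(z>z1) factor above restricts the initial wave to z1 < z < z2, +# so that the waves reflected off the PMC z boundaries interfere with the incident wave.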
+ +# Diagnostics +diagnostics.diags_names = diag1 +diag1.intervals = 134 +diag1.diag_type = Full +diag1.fields_to_plot = Ey Bx diff --git a/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb.json b/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb.json index a1ec0b4c831..6415fc3e930 100644 --- a/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb.json +++ b/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb.json @@ -1,21 +1,21 @@ { "lev=0": { - "Az": 11.358663326449284, - "Bx": 111.55929407644248, - "By": 111.55929407644244, - "Ex": 31257180402.55472, - "Ey": 31257180402.55473, - "jz": 1034841325.9848926, - "phi": 3143521213.0157924, - "rho": 3.449203918900721 + "Az": 11.358663299932457, + "Bx": 111.55929615203162, + "By": 111.55929615203165, + "Ex": 31463410849.74626, + "Ey": 31463410849.746258, + "jz": 1034841323.6861029, + "phi": 3164328318.15416, + "rho": 3.4565836983918676 }, "beam": { - "particle_momentum_x": 1.3604657334742729e-21, - "particle_momentum_y": 1.3604657334742772e-21, - "particle_momentum_z": 7.150873450281544e-16, - "particle_position_x": 11163.99997371537, - "particle_position_y": 11163.999973715368, - "particle_position_z": 131662.50031035842, + "particle_momentum_x": 1.3829464728617761e-21, + "particle_momentum_y": 1.3829464728617792e-21, + "particle_momentum_z": 7.150871807235339e-16, + "particle_position_x": 11163.99997715059, + "particle_position_y": 11163.999977150592, + "particle_position_z": 131662.5003102683, "particle_weight": 20895107655113.465 } -} \ No newline at end of file +} diff --git a/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb_picmi.json b/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb_picmi.json index abe91ac9e9d..2c99a4218c2 100644 --- a/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb_picmi.json +++ b/Regression/Checksum/benchmarks_json/test_3d_magnetostatic_eb_picmi.json @@ -1,27 +1,27 @@ { + "lev=0": { + "Ax": 1.40889223759456e-05, + "Ay": 1.4088922375945606e-05, + "Az": 11.423480450267745, + "Bx": 112.23826705481486, + "By": 112.23826705481484, + "Bz": 0.00019199345672949735, + "Ex": 31557746267.686367, + "Ey": 31557746267.686363, + "Ez": 3339526660.3539834, + "jx": 1980.6549408566577, + "jy": 1980.6549408566577, + "jz": 1038931605.1197203, + "phi": 3171976204.251914, + "rho": 3.4840085919357926 + }, "beam": { - "particle_momentum_x": 1.3878812158350944e-21, - "particle_momentum_y": 1.387881215835094e-21, - "particle_momentum_z": 7.150872953138685e-16, - "particle_position_x": 11163.999973134894, - "particle_position_y": 11163.999973134896, - "particle_position_z": 131662.5003103311, + "particle_momentum_x": 1.4011190163358655e-21, + "particle_momentum_y": 1.401119016335865e-21, + "particle_momentum_z": 7.15087179293042e-16, + "particle_position_x": 11163.99997543546, + "particle_position_y": 11163.999975435456, + "particle_position_z": 131662.50031026747, "particle_weight": 20895107655113.465 - }, - "lev=0": { - "Ax": 1.408892468360627e-05, - "Ay": 1.4088924683606269e-05, - "Az": 11.423480469161868, - "Bx": 112.23826555908032, - "By": 112.2382655590803, - "Bz": 0.00019186770330025167, - "Ex": 31418238386.183773, - "Ey": 31418238386.183773, - "Ez": 3461330433.5026026, - "jx": 1961.0003914783667, - "jy": 1961.0003914783663, - "jz": 1038931606.7573991, - "phi": 3157908107.1102533, - "rho": 3.46977258905983 } -} \ No newline at end of file +} diff --git a/Regression/Checksum/benchmarks_json/test_3d_pmc_field.json 
b/Regression/Checksum/benchmarks_json/test_3d_pmc_field.json new file mode 100644 index 00000000000..486f8bb965d --- /dev/null +++ b/Regression/Checksum/benchmarks_json/test_3d_pmc_field.json @@ -0,0 +1,6 @@ +{ + "lev=0": { + "Bx": 4.1354151621557795, + "Ey": 8373879983.480644 + } +} diff --git a/Source/BoundaryConditions/WarpXFieldBoundaries.cpp b/Source/BoundaryConditions/WarpXFieldBoundaries.cpp index 692c9938e86..6217eb04a33 100644 --- a/Source/BoundaryConditions/WarpXFieldBoundaries.cpp +++ b/Source/BoundaryConditions/WarpXFieldBoundaries.cpp @@ -56,10 +56,8 @@ void WarpX::ApplyEfieldBoundary(const int lev, PatchType patch_type, amrex::Real if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { if (patch_type == PatchType::fine) { PEC::ApplyPECtoEfield( - {m_fields.get(FieldType::Efield_fp, Direction{0}, lev), - m_fields.get(FieldType::Efield_fp, Direction{1}, lev), - m_fields.get(FieldType::Efield_fp, Direction{2}, lev)}, - field_boundary_lo, field_boundary_hi, + m_fields.get_alldirs(FieldType::Efield_fp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PEC, get_ng_fieldgather(), Geom(lev), lev, patch_type, ref_ratio); if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { @@ -67,25 +65,59 @@ void WarpX::ApplyEfieldBoundary(const int lev, PatchType patch_type, amrex::Real const bool split_pml_field = true; PEC::ApplyPECtoEfield( m_fields.get_alldirs(FieldType::pml_E_fp, lev), - field_boundary_lo, field_boundary_hi, + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PEC, get_ng_fieldgather(), Geom(lev), lev, patch_type, ref_ratio, split_pml_field); } } else { PEC::ApplyPECtoEfield( - {m_fields.get(FieldType::Efield_cp,Direction{0},lev), - m_fields.get(FieldType::Efield_cp,Direction{1},lev), - m_fields.get(FieldType::Efield_cp,Direction{2},lev)}, - field_boundary_lo, field_boundary_hi, - get_ng_fieldgather(), Geom(lev), - lev, patch_type, ref_ratio); + m_fields.get_alldirs(FieldType::Efield_cp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PEC, + get_ng_fieldgather(), Geom(lev), + lev, patch_type, ref_ratio); if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { // apply pec on split E-fields in PML region const bool split_pml_field = true; PEC::ApplyPECtoEfield( m_fields.get_alldirs(FieldType::pml_E_cp, lev), - field_boundary_lo, field_boundary_hi, + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PEC, + get_ng_fieldgather(), Geom(lev), + lev, patch_type, ref_ratio, + split_pml_field); + } + } + } + + if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { + if (patch_type == PatchType::fine) { + PEC::ApplyPECtoBfield( + m_fields.get_alldirs(FieldType::Efield_fp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PMC, + get_ng_fieldgather(), Geom(lev), + lev, patch_type, ref_ratio); + if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { + // apply pec on split E-fields in PML region + const bool split_pml_field = true; + PEC::ApplyPECtoBfield( + m_fields.get_alldirs(FieldType::pml_E_fp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PMC, + get_ng_fieldgather(), Geom(lev), + lev, patch_type, ref_ratio, + split_pml_field); + } + } else { + PEC::ApplyPECtoBfield( + m_fields.get_alldirs(FieldType::Efield_cp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PMC, + get_ng_fieldgather(), Geom(lev), + lev, patch_type, ref_ratio); + if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { + // apply pec on split E-fields in PML region + const bool 
split_pml_field = true; + PEC::ApplyPECtoBfield( + m_fields.get_alldirs(FieldType::pml_E_cp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PMC, get_ng_fieldgather(), Geom(lev), lev, patch_type, ref_ratio, split_pml_field); @@ -152,19 +184,31 @@ void WarpX::ApplyBfieldBoundary (const int lev, PatchType patch_type, DtType a_d if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { if (patch_type == PatchType::fine) { - PEC::ApplyPECtoBfield( { - m_fields.get(FieldType::Bfield_fp,Direction{0},lev), - m_fields.get(FieldType::Bfield_fp,Direction{1},lev), - m_fields.get(FieldType::Bfield_fp,Direction{2},lev) }, - field_boundary_lo, field_boundary_hi, + PEC::ApplyPECtoBfield( + m_fields.get_alldirs(FieldType::Bfield_fp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PEC, get_ng_fieldgather(), Geom(lev), lev, patch_type, ref_ratio); } else { - PEC::ApplyPECtoBfield( { - m_fields.get(FieldType::Bfield_cp,Direction{0},lev), - m_fields.get(FieldType::Bfield_cp,Direction{1},lev), - m_fields.get(FieldType::Bfield_cp,Direction{2},lev) }, - field_boundary_lo, field_boundary_hi, + PEC::ApplyPECtoBfield( + m_fields.get_alldirs(FieldType::Bfield_cp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PEC, + get_ng_fieldgather(), Geom(lev), + lev, patch_type, ref_ratio); + } + } + + if (::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { + if (patch_type == PatchType::fine) { + PEC::ApplyPECtoEfield( + m_fields.get_alldirs(FieldType::Bfield_fp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PMC, + get_ng_fieldgather(), Geom(lev), + lev, patch_type, ref_ratio); + } else { + PEC::ApplyPECtoEfield( + m_fields.get_alldirs(FieldType::Bfield_cp, lev), + field_boundary_lo, field_boundary_hi, FieldBoundaryType::PMC, get_ng_fieldgather(), Geom(lev), lev, patch_type, ref_ratio); } @@ -224,7 +268,8 @@ void WarpX::ApplyRhofieldBoundary (const int lev, MultiFab* rho, { if (::isAnyBoundary(particle_boundary_lo, particle_boundary_hi) || ::isAnyBoundary(particle_boundary_lo, particle_boundary_hi) || - ::isAnyBoundary(field_boundary_lo, field_boundary_hi)) + ::isAnyBoundary(field_boundary_lo, field_boundary_hi) || + ::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { PEC::ApplyReflectiveBoundarytoRhofield(rho, field_boundary_lo, field_boundary_hi, @@ -239,7 +284,8 @@ void WarpX::ApplyJfieldBoundary (const int lev, amrex::MultiFab* Jx, { if (::isAnyBoundary(particle_boundary_lo, particle_boundary_hi) || ::isAnyBoundary(particle_boundary_lo, particle_boundary_hi) || - ::isAnyBoundary(field_boundary_lo, field_boundary_hi)) + ::isAnyBoundary(field_boundary_lo, field_boundary_hi) || + ::isAnyBoundary(field_boundary_lo, field_boundary_hi)) { PEC::ApplyReflectiveBoundarytoJfield(Jx, Jy, Jz, field_boundary_lo, field_boundary_hi, diff --git a/Source/BoundaryConditions/WarpX_PEC.H b/Source/BoundaryConditions/WarpX_PEC.H index c387d8c1793..e3fd804b62c 100644 --- a/Source/BoundaryConditions/WarpX_PEC.H +++ b/Source/BoundaryConditions/WarpX_PEC.H @@ -33,6 +33,7 @@ namespace PEC { std::array Efield, const amrex::Array& field_boundary_lo, const amrex::Array& field_boundary_hi, + FieldBoundaryType bc_type, const amrex::IntVect& ng_fieldgather, const amrex::Geometry& geom, int lev, PatchType patch_type, const amrex::Vector& ref_ratios, bool split_pml_field = false); @@ -54,8 +55,10 @@ namespace PEC { std::array Bfield, const amrex::Array& field_boundary_lo, const amrex::Array& field_boundary_hi, + FieldBoundaryType bc_type, const amrex::IntVect& 
ng_fieldgather, const amrex::Geometry& geom, - int lev, PatchType patch_type, const amrex::Vector& ref_ratios); + int lev, PatchType patch_type, const amrex::Vector& ref_ratios, + bool split_pml_field = false); /** * \brief Reflects charge density deposited over the PEC boundary back into diff --git a/Source/BoundaryConditions/WarpX_PEC.cpp b/Source/BoundaryConditions/WarpX_PEC.cpp index bedc5b264b7..a3b75791582 100644 --- a/Source/BoundaryConditions/WarpX_PEC.cpp +++ b/Source/BoundaryConditions/WarpX_PEC.cpp @@ -121,7 +121,8 @@ namespace amrex::Array4 const& Efield, const amrex::IntVect& is_nodal, amrex::GpuArray const& fbndry_lo, - amrex::GpuArray const& fbndry_hi ) + amrex::GpuArray const& fbndry_hi, + FieldBoundaryType bc_type) { // Tangential Efield components in guard cells set equal and opposite to cells // in the mirror locations across the PEC boundary, whereas normal E-field @@ -136,8 +137,8 @@ namespace // Loop over sides, iside = 0 (lo), iside = 1 (hi) for (int iside = 0; iside < 2; ++iside) { const bool isPECBoundary = ( (iside == 0) - ? fbndry_lo[idim] == FieldBoundaryType::PEC - : fbndry_hi[idim] == FieldBoundaryType::PEC ); + ? fbndry_lo[idim] == bc_type + : fbndry_hi[idim] == bc_type ); #if (defined WARPX_DIM_XZ) || (defined WARPX_DIM_RZ) // For 2D : for icomp==1, (Ey in XZ, Etheta in RZ), // icomp=1 is tangential to both x and z boundaries @@ -260,7 +261,8 @@ namespace amrex::Array4 const& Bfield, const amrex::IntVect & is_nodal, amrex::GpuArray const& fbndry_lo, - amrex::GpuArray const& fbndry_hi ) + amrex::GpuArray const& fbndry_hi, + FieldBoundaryType bc_type) { amrex::IntVect ijk_mirror = ijk_vec; bool OnPECBoundary = false; @@ -271,8 +273,8 @@ namespace // Loop over sides, iside = 0 (lo), iside = 1 (hi) for (int iside = 0; iside < 2; ++iside) { const bool isPECBoundary = ( (iside == 0) - ? fbndry_lo[idim] == FieldBoundaryType::PEC - : fbndry_hi[idim] == FieldBoundaryType::PEC ); + ? 
fbndry_lo[idim] == bc_type + : fbndry_hi[idim] == bc_type ); if (isPECBoundary) { #if (defined WARPX_DIM_XZ) || (defined WARPX_DIM_RZ) // For 2D : for icomp==1, (By in XZ, Btheta in RZ), @@ -357,7 +359,7 @@ namespace amrex::Array4 const& field, amrex::GpuArray, AMREX_SPACEDIM> const& mirrorfac, amrex::GpuArray, AMREX_SPACEDIM> const& psign, - amrex::GpuArray, AMREX_SPACEDIM> const& is_reflective, + amrex::GpuArray, AMREX_SPACEDIM> const& is_reflective, amrex::GpuArray const& tangent_to_bndy, amrex::Box const& fabbox) { @@ -374,11 +376,11 @@ namespace amrex::IntVect iv_mirror = ijk_vec; iv_mirror[idim] = mirrorfac[idim][iside] - ijk_vec[idim]; - // On the PEC boundary the charge/current density is set to 0 - if (ijk_vec == iv_mirror) { - field(ijk_vec, n) = 0._rt; - // otherwise update the internal cell if the mirror guard cell exists + // Update the cell if the mirror guard cell exists + if (ijk_vec == iv_mirror && is_reflective[idim][iside] == 1) { + field(ijk_vec,n) = 0._rt; } else if (fabbox.contains(iv_mirror)) { + // Note that this includes the cells on the boundary for PMC field(ijk_vec,n) += psign[idim][iside] * field(iv_mirror,n); } } @@ -459,6 +461,7 @@ PEC::ApplyPECtoEfield ( std::array Efield, const amrex::Array& field_boundary_lo, const amrex::Array& field_boundary_hi, + FieldBoundaryType bc_type, const amrex::IntVect& ng_fieldgather, const amrex::Geometry& geom, const int lev, PatchType patch_type, const amrex::Vector& ref_ratios, const bool split_pml_field) @@ -514,7 +517,7 @@ PEC::ApplyPECtoEfield ( const amrex::IntVect iv(AMREX_D_DECL(i,j,k)); const int icomp = 0; ::SetEfieldOnPEC(icomp, domain_lo, domain_hi, iv, n, - Ex, Ex_nodal, fbndry_lo, fbndry_hi); + Ex, Ex_nodal, fbndry_lo, fbndry_hi, bc_type); }, tey, nComp_y, [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) { @@ -522,7 +525,7 @@ PEC::ApplyPECtoEfield ( const amrex::IntVect iv(AMREX_D_DECL(i,j,k)); const int icomp = 1; ::SetEfieldOnPEC(icomp, domain_lo, domain_hi, iv, n, - Ey, Ey_nodal, fbndry_lo, fbndry_hi); + Ey, Ey_nodal, fbndry_lo, fbndry_hi, bc_type); }, tez, nComp_z, [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) { @@ -530,7 +533,7 @@ PEC::ApplyPECtoEfield ( const amrex::IntVect iv(AMREX_D_DECL(i,j,k)); const int icomp = 2; ::SetEfieldOnPEC(icomp, domain_lo, domain_hi, iv, n, - Ez, Ez_nodal, fbndry_lo, fbndry_hi); + Ez, Ez_nodal, fbndry_lo, fbndry_hi, bc_type); } ); } @@ -542,8 +545,10 @@ PEC::ApplyPECtoBfield ( std::array Bfield, const amrex::Array& field_boundary_lo, const amrex::Array& field_boundary_hi, + FieldBoundaryType bc_type, const amrex::IntVect& ng_fieldgather, const amrex::Geometry& geom, - const int lev, PatchType patch_type, const amrex::Vector& ref_ratios) + const int lev, PatchType patch_type, const amrex::Vector& ref_ratios, + const bool split_pml_field) { amrex::Box domain_box = geom.Domain(); if (patch_type == PatchType::coarse && (lev > 0)) { @@ -579,9 +584,12 @@ PEC::ApplyPECtoBfield ( // gather fields from in the guard-cell region are included. // Note that for simulations without particles or laser, ng_field_gather is 0 // and the guard-cell values of the B-field multifab will not be modified. - amrex::Box const& tbx = mfi.tilebox(Bfield[0]->ixType().toIntVect(), ng_fieldgather); - amrex::Box const& tby = mfi.tilebox(Bfield[1]->ixType().toIntVect(), ng_fieldgather); - amrex::Box const& tbz = mfi.tilebox(Bfield[2]->ixType().toIntVect(), ng_fieldgather); + amrex::Box const& tbx = (split_pml_field) ? 
mfi.tilebox(Bfield[0]->ixType().toIntVect()) + : mfi.tilebox(Bfield[0]->ixType().toIntVect(), ng_fieldgather); + amrex::Box const& tby = (split_pml_field) ? mfi.tilebox(Bfield[1]->ixType().toIntVect()) + : mfi.tilebox(Bfield[1]->ixType().toIntVect(), ng_fieldgather); + amrex::Box const& tbz = (split_pml_field) ? mfi.tilebox(Bfield[2]->ixType().toIntVect()) + : mfi.tilebox(Bfield[2]->ixType().toIntVect(), ng_fieldgather); // loop over cells and update fields amrex::ParallelFor( @@ -591,7 +599,7 @@ PEC::ApplyPECtoBfield ( const amrex::IntVect iv(AMREX_D_DECL(i,j,k)); const int icomp = 0; ::SetBfieldOnPEC(icomp, domain_lo, domain_hi, iv, n, - Bx, Bx_nodal, fbndry_lo, fbndry_hi); + Bx, Bx_nodal, fbndry_lo, fbndry_hi, bc_type); }, tby, nComp_y, [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) { @@ -599,7 +607,7 @@ PEC::ApplyPECtoBfield ( const amrex::IntVect iv(AMREX_D_DECL(i,j,k)); const int icomp = 1; ::SetBfieldOnPEC(icomp, domain_lo, domain_hi, iv, n, - By, By_nodal, fbndry_lo, fbndry_hi); + By, By_nodal, fbndry_lo, fbndry_hi, bc_type); }, tbz, nComp_z, [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) { @@ -607,7 +615,7 @@ PEC::ApplyPECtoBfield ( const amrex::IntVect iv(AMREX_D_DECL(i,j,k)); const int icomp = 2; ::SetBfieldOnPEC(icomp, domain_lo, domain_hi, iv, n, - Bz, Bz_nodal, fbndry_lo, fbndry_hi); + Bz, Bz_nodal, fbndry_lo, fbndry_hi, bc_type); } ); } @@ -650,7 +658,7 @@ PEC::ApplyReflectiveBoundarytoRhofield ( // cells for boundaries that are NOT PEC amrex::Box grown_domain_box = domain_box; - amrex::GpuArray, AMREX_SPACEDIM> is_reflective; + amrex::GpuArray, AMREX_SPACEDIM> is_reflective; amrex::GpuArray is_tangent_to_bndy; amrex::GpuArray, AMREX_SPACEDIM> psign; amrex::GpuArray, AMREX_SPACEDIM> mirrorfac; @@ -658,9 +666,11 @@ PEC::ApplyReflectiveBoundarytoRhofield ( is_reflective[idim][0] = ( particle_boundary_lo[idim] == ParticleBoundaryType::Reflecting) || ( particle_boundary_lo[idim] == ParticleBoundaryType::Thermal) || ( field_boundary_lo[idim] == FieldBoundaryType::PEC); + if (field_boundary_lo[idim] == FieldBoundaryType::PMC) { is_reflective[idim][0] = 2; } is_reflective[idim][1] = ( particle_boundary_hi[idim] == ParticleBoundaryType::Reflecting) || ( particle_boundary_hi[idim] == ParticleBoundaryType::Thermal) || ( field_boundary_hi[idim] == FieldBoundaryType::PEC); + if (field_boundary_hi[idim] == FieldBoundaryType::PMC) { is_reflective[idim][1] = 2; } if (!is_reflective[idim][0]) { grown_domain_box.growLo(idim, ng_fieldgather[idim]); } if (!is_reflective[idim][1]) { grown_domain_box.growHi(idim, ng_fieldgather[idim]); } @@ -669,10 +679,12 @@ PEC::ApplyReflectiveBoundarytoRhofield ( is_tangent_to_bndy[idim] = true; psign[idim][0] = ((particle_boundary_lo[idim] == ParticleBoundaryType::Reflecting) - ||(particle_boundary_lo[idim] == ParticleBoundaryType::Thermal)) + ||(particle_boundary_lo[idim] == ParticleBoundaryType::Thermal) + ||(field_boundary_lo[idim] == FieldBoundaryType::PMC)) ? 1._rt : -1._rt; psign[idim][1] = ((particle_boundary_hi[idim] == ParticleBoundaryType::Reflecting) - ||(particle_boundary_hi[idim] == ParticleBoundaryType::Thermal)) + ||(particle_boundary_hi[idim] == ParticleBoundaryType::Thermal) + ||(field_boundary_hi[idim] == FieldBoundaryType::PMC)) ? 
1._rt : -1._rt; mirrorfac[idim][0] = 2*domain_lo[idim] - (1 - rho_nodal[idim]); mirrorfac[idim][1] = 2*domain_hi[idim] + (1 - rho_nodal[idim]); @@ -746,17 +758,21 @@ PEC::ApplyReflectiveBoundarytoJfield( // directions of the current density multifab const amrex::IntVect ng_fieldgather = Jx->nGrowVect(); - amrex::GpuArray, AMREX_SPACEDIM> is_reflective; + amrex::GpuArray, AMREX_SPACEDIM> is_reflective; amrex::GpuArray, 3> is_tangent_to_bndy; amrex::GpuArray, AMREX_SPACEDIM>, 3> psign; amrex::GpuArray, AMREX_SPACEDIM>, 3> mirrorfac; for (int idim=0; idim < AMREX_SPACEDIM; ++idim) { is_reflective[idim][0] = ( particle_boundary_lo[idim] == ParticleBoundaryType::Reflecting) || ( particle_boundary_lo[idim] == ParticleBoundaryType::Thermal) - || ( field_boundary_lo[idim] == FieldBoundaryType::PEC); + || ( field_boundary_lo[idim] == FieldBoundaryType::PEC) + || ( field_boundary_lo[idim] == FieldBoundaryType::PMC); + if (field_boundary_lo[idim] == FieldBoundaryType::PMC) { is_reflective[idim][0] = 2; } is_reflective[idim][1] = ( particle_boundary_hi[idim] == ParticleBoundaryType::Reflecting) || ( particle_boundary_hi[idim] == ParticleBoundaryType::Thermal) - || ( field_boundary_hi[idim] == FieldBoundaryType::PEC); + || ( field_boundary_hi[idim] == FieldBoundaryType::PEC) + || ( field_boundary_hi[idim] == FieldBoundaryType::PMC); + if (field_boundary_hi[idim] == FieldBoundaryType::PMC) { is_reflective[idim][1] = 2; } if (!is_reflective[idim][0]) { grown_domain_box.growLo(idim, ng_fieldgather[idim]); } if (!is_reflective[idim][1]) { grown_domain_box.growHi(idim, ng_fieldgather[idim]); } @@ -778,18 +794,22 @@ PEC::ApplyReflectiveBoundarytoJfield( if (is_tangent_to_bndy[icomp][idim]){ psign[icomp][idim][0] = ( (particle_boundary_lo[idim] == ParticleBoundaryType::Reflecting) - ||(particle_boundary_lo[idim] == ParticleBoundaryType::Thermal)) + ||(particle_boundary_lo[idim] == ParticleBoundaryType::Thermal) + ||(field_boundary_lo[idim] == FieldBoundaryType::PMC)) ? 1._rt : -1._rt; psign[icomp][idim][1] = ( (particle_boundary_hi[idim] == ParticleBoundaryType::Reflecting) - ||(particle_boundary_hi[idim] == ParticleBoundaryType::Thermal)) + ||(particle_boundary_hi[idim] == ParticleBoundaryType::Thermal) + ||(field_boundary_hi[idim] == FieldBoundaryType::PMC)) ? 1._rt : -1._rt; } else { psign[icomp][idim][0] = ( (particle_boundary_lo[idim] == ParticleBoundaryType::Reflecting) - ||(particle_boundary_lo[idim] == ParticleBoundaryType::Thermal)) + ||(particle_boundary_lo[idim] == ParticleBoundaryType::Thermal) + ||(field_boundary_lo[idim] == FieldBoundaryType::PMC)) ? -1._rt : 1._rt; psign[icomp][idim][1] = ( (particle_boundary_hi[idim] == ParticleBoundaryType::Reflecting) - ||(particle_boundary_hi[idim] == ParticleBoundaryType::Thermal)) + ||(particle_boundary_hi[idim] == ParticleBoundaryType::Thermal) + ||(field_boundary_hi[idim] == FieldBoundaryType::PMC)) ? 
-1._rt : 1._rt; } } diff --git a/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp b/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp index da60bc62c46..d06e84859d8 100644 --- a/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp +++ b/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp @@ -62,13 +62,12 @@ Array ImplicitSolver::convertFieldBCToLinOpBC (const lbc[i] = LinOpBCType::Periodic; } else if (a_fbc[i] == FieldBoundaryType::PEC) { WARPX_ABORT_WITH_MESSAGE("LinOpBCType not set for this FieldBoundaryType"); - } else if (a_fbc[i] == FieldBoundaryType::PMC) { - WARPX_ABORT_WITH_MESSAGE("LinOpBCType not set for this FieldBoundaryType"); } else if (a_fbc[i] == FieldBoundaryType::Damped) { WARPX_ABORT_WITH_MESSAGE("LinOpBCType not set for this FieldBoundaryType"); } else if (a_fbc[i] == FieldBoundaryType::Absorbing_SilverMueller) { WARPX_ABORT_WITH_MESSAGE("LinOpBCType not set for this FieldBoundaryType"); } else if (a_fbc[i] == FieldBoundaryType::Neumann) { + // Also for FieldBoundaryType::PMC lbc[i] = LinOpBCType::Neumann; } else if (a_fbc[i] == FieldBoundaryType::None) { WARPX_ABORT_WITH_MESSAGE("LinOpBCType not set for this FieldBoundaryType"); diff --git a/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp b/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp index 670f962f7c3..1209f621e31 100644 --- a/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp +++ b/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp @@ -141,7 +141,7 @@ ProjectionDivCleaner::solve () std::map bcmap{ {FieldBoundaryType::PEC, LinOpBCType::Dirichlet}, - {FieldBoundaryType::Neumann, LinOpBCType::Neumann}, + {FieldBoundaryType::Neumann, LinOpBCType::Neumann}, // Note that PMC is the same as Neumann {FieldBoundaryType::Periodic, LinOpBCType::Periodic}, {FieldBoundaryType::None, LinOpBCType::Neumann} }; @@ -151,7 +151,7 @@ ProjectionDivCleaner::solve () auto ithi = bcmap.find(WarpX::field_boundary_hi[idim]); if (itlo == bcmap.end() || ithi == bcmap.end()) { WARPX_ABORT_WITH_MESSAGE( - "Field boundary conditions have to be either periodic, PEC or neumann " + "Field boundary conditions have to be either periodic, PEC, PMC, or neumann " "when using the MLMG projection based divergence cleaner solver." ); } diff --git a/Source/Utils/WarpXAlgorithmSelection.H b/Source/Utils/WarpXAlgorithmSelection.H index 187be924666..278088e16b6 100644 --- a/Source/Utils/WarpXAlgorithmSelection.H +++ b/Source/Utils/WarpXAlgorithmSelection.H @@ -124,11 +124,11 @@ AMREX_ENUM(FieldBoundaryType, Periodic, PEC, //!< perfect electric conductor (PEC) with E_tangential=0 PMC, //!< perfect magnetic conductor (PMC) with B_tangential=0 + Neumann = PMC, // For electrostatic, the normal E is set to zero Damped, // Fields in the guard cells are damped for PSATD //in the moving window direction Absorbing_SilverMueller, // Silver-Mueller boundary condition absorbingsilvermueller = Absorbing_SilverMueller, - Neumann, // For electrostatic, the normal E is set to zero None, // The fields values at the boundary are not updated. This is // useful for RZ simulations, at r=0. Open, // Used in the Integrated Green Function Poisson solver diff --git a/Source/Utils/WarpXUtil.cpp b/Source/Utils/WarpXUtil.cpp index dcaa3118ab4..ae2adfac043 100644 --- a/Source/Utils/WarpXUtil.cpp +++ b/Source/Utils/WarpXUtil.cpp @@ -443,6 +443,15 @@ void ReadBCParams () "PEC boundary not implemented for PSATD, yet!" 
); + WARPX_ALWAYS_ASSERT_WITH_MESSAGE( + (electromagnetic_solver_id != ElectromagneticSolverAlgo::PSATD) || + ( + WarpX::field_boundary_lo[idim] != FieldBoundaryType::PMC && + WarpX::field_boundary_hi[idim] != FieldBoundaryType::PMC + ), + "PMC boundary not implemented for PSATD, yet!" + ); + if(WarpX::field_boundary_lo[idim] == FieldBoundaryType::Open && WarpX::field_boundary_hi[idim] == FieldBoundaryType::Open){ WARPX_ALWAYS_ASSERT_WITH_MESSAGE( diff --git a/Source/WarpX.H b/Source/WarpX.H index 077e8f5d954..729e6f7d126 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -212,17 +212,15 @@ public: * (BackwardEuler - 0, Lax-Wendroff - 1) */ static inline auto macroscopic_solver_algo = MacroscopicSolverAlgo::Default; - /** Integers that correspond to boundary condition applied to fields at the - * lower domain boundaries - * (0 to 6 correspond to PML, Periodic, PEC, PMC, Damped, Absorbing Silver-Mueller, None) + /** Boundary conditions applied to fields at the lower domain boundaries + * (Possible values PML, Periodic, PEC, PMC, Neumann, Damped, Absorbing Silver-Mueller, None) */ static inline amrex::Array field_boundary_lo {AMREX_D_DECL(FieldBoundaryType::Default, FieldBoundaryType::Default, FieldBoundaryType::Default)}; - /** Integers that correspond to boundary condition applied to fields at the - * upper domain boundaries - * (0 to 6 correspond to PML, Periodic, PEC, PMC, Damped, Absorbing Silver-Mueller, None) + /** Boundary conditions applied to fields at the upper domain boundaries + * (Possible values PML, Periodic, PEC, PMC, Neumann, Damped, Absorbing Silver-Mueller, None) */ static inline amrex::Array field_boundary_hi {AMREX_D_DECL(FieldBoundaryType::Default, From c0eacd9225b7ed0b54ba637a3974c8c1758023db Mon Sep 17 00:00:00 2001 From: Andrew Myers Date: Sat, 8 Feb 2025 10:49:28 -0800 Subject: [PATCH 30/58] Remove NamedComponentParticleContainer (Use from AMReX) (#5481) This capability has been upstreamed to AMReX. 
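For downstream code, the change is mostly a rename of the component-management API. A minimal migration sketch, assuming a `WarpXParticleContainer* pc` and a hypothetical runtime attribute `"myAttr"` (the method names are the ones used in the diffs of this patch; this is an illustration, not part of the patch itself):

    pc->AddRealComp("myAttr");                          // was: pc->NewRealComp("myAttr")
    if (pc->HasRealComp("myAttr")) {                    // was: searching pc->getParticleComps()
        int const idx = pc->GetRealCompIndex("myAttr"); // was: pc->getParticleComps().at("myAttr")
        amrex::ignore_unused(idx);
    }
    auto const& names = pc->GetRealSoANames();          // was: the name-to-index map itself

Integer components follow the same pattern (`AddIntComp`, `HasIntComp`, `GetIntCompIndex`, `GetIntSoANames`).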
Co-authored-by: Axel Huebl --- Docs/source/developers/particles.rst | 2 +- ...puts_test_2d_particle_attr_access_picmi.py | 4 +- .../inputs_test_2d_prev_positions_picmi.py | 4 +- ...inputs_test_2d_runtime_components_picmi.py | 7 +- Python/pywarpx/particle_containers.py | 16 +- Source/Diagnostics/BTDiagnostics.cpp | 11 + .../FlushFormats/FlushFormatCheckpoint.cpp | 26 +- .../FlushFormats/FlushFormatInSitu.cpp | 11 +- .../FlushFormats/FlushFormatPlotfile.cpp | 18 +- .../Diagnostics/ParticleDiag/ParticleDiag.cpp | 17 +- Source/Diagnostics/ParticleIO.cpp | 38 +-- Source/Diagnostics/WarpXOpenPMD.cpp | 61 ++--- .../ImplicitSolvers/ImplicitSolver.cpp | 12 +- .../ImplicitSolvers/WarpXImplicitOps.cpp | 28 +-- Source/Particles/AddPlasmaUtilities.H | 13 +- .../DSMC/SplitAndScatterFunc.H | 2 +- .../BinaryCollision/ParticleCreationFunc.H | 2 +- .../ElementaryProcess/QEDPairGeneration.H | 4 +- .../ElementaryProcess/QEDPhotonEmission.H | 4 +- Source/Particles/LaserParticleContainer.cpp | 6 +- Source/Particles/MultiParticleContainer.cpp | 3 +- .../NamedComponentParticleContainer.H | 222 ------------------ Source/Particles/ParticleBoundaryBuffer.H | 4 +- Source/Particles/ParticleBoundaryBuffer.cpp | 38 ++- .../ParticleCreation/DefaultInitialization.H | 35 +-- .../ParticleCreation/FilterCopyTransform.H | 6 +- .../FilterCreateTransformFromFAB.H | 4 +- Source/Particles/ParticleCreation/SmartCopy.H | 8 +- .../Particles/ParticleCreation/SmartCreate.H | 4 +- .../Particles/ParticleCreation/SmartUtils.H | 4 +- .../Particles/ParticleCreation/SmartUtils.cpp | 13 +- Source/Particles/PhotonParticleContainer.cpp | 2 +- .../Particles/PhysicalParticleContainer.cpp | 70 +++--- .../Particles/PinnedMemoryParticleContainer.H | 4 +- Source/Particles/Pusher/GetAndSetPosition.H | 1 - .../RigidInjectedParticleContainer.cpp | 2 +- Source/Particles/WarpXParticleContainer.H | 77 +++++- Source/Particles/WarpXParticleContainer.cpp | 54 +++-- Source/Python/Particles/CMakeLists.txt | 1 - .../PinnedMemoryParticleContainer.cpp | 31 --- .../Particles/WarpXParticleContainer.cpp | 14 +- Source/Python/pyWarpX.cpp | 2 - 42 files changed, 368 insertions(+), 517 deletions(-) delete mode 100644 Source/Particles/NamedComponentParticleContainer.H delete mode 100644 Source/Python/Particles/PinnedMemoryParticleContainer.cpp diff --git a/Docs/source/developers/particles.rst b/Docs/source/developers/particles.rst index 45a92107ae9..9f199bdbb91 100644 --- a/Docs/source/developers/particles.rst +++ b/Docs/source/developers/particles.rst @@ -141,7 +141,7 @@ Attribute name ``int``/``real`` Description Wher Wheeler process physics is used. ==================== ================ ================================== ===== ==== ====================== -WarpX allows extra runtime attributes to be added to particle containers (through ``NewRealComp("attrname")`` or ``NewIntComp("attrname")``). +WarpX allows extra runtime attributes to be added to particle containers (through ``AddRealComp("attrname")`` or ``AddIntComp("attrname")``). The attribute name can then be used to access the values of that attribute. For example, using a particle iterator, ``pti``, to loop over the particles the command ``pti.GetAttribs(particle_comps["attrname"]).dataPtr();`` will return the values of the ``"attrname"`` attribute. 
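To make the documented pattern concrete, here is a small sketch (not part of the patch; `pc` and the attribute name `"prev_x"` are hypothetical, and the calls mirror those used in the source changes below):

    pc.AddRealComp("prev_x");  // register the runtime attribute once, before use
    for (WarpXParIter pti(pc, lev); pti.isValid(); ++pti) {
        // string-based lookup; the old particle_comps index map is gone
        amrex::ParticleReal* prev_x = pti.GetAttribs("prev_x").dataPtr();
        amrex::ignore_unused(prev_x);
    }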
diff --git a/Examples/Tests/particle_data_python/inputs_test_2d_particle_attr_access_picmi.py b/Examples/Tests/particle_data_python/inputs_test_2d_particle_attr_access_picmi.py index dbd29a43bc7..0d8c2ac209b 100755 --- a/Examples/Tests/particle_data_python/inputs_test_2d_particle_attr_access_picmi.py +++ b/Examples/Tests/particle_data_python/inputs_test_2d_particle_attr_access_picmi.py @@ -150,8 +150,8 @@ def add_particles(): ########################## assert elec_wrapper.nps == 270 / (2 - args.unique) -assert elec_wrapper.particle_container.get_comp_index("w") == 2 -assert elec_wrapper.particle_container.get_comp_index("newPid") == 6 +assert elec_wrapper.particle_container.get_real_comp_index("w") == 2 +assert elec_wrapper.particle_container.get_real_comp_index("newPid") == 6 new_pid_vals = elec_wrapper.get_particle_real_arrays("newPid", 0) for vals in new_pid_vals: diff --git a/Examples/Tests/particle_data_python/inputs_test_2d_prev_positions_picmi.py b/Examples/Tests/particle_data_python/inputs_test_2d_prev_positions_picmi.py index 2ad86ecea95..c15409edb0c 100755 --- a/Examples/Tests/particle_data_python/inputs_test_2d_prev_positions_picmi.py +++ b/Examples/Tests/particle_data_python/inputs_test_2d_prev_positions_picmi.py @@ -111,8 +111,8 @@ elec_count = elec_wrapper.nps # check that the runtime attributes have the right indices -assert elec_wrapper.particle_container.get_comp_index("prev_x") == 6 -assert elec_wrapper.particle_container.get_comp_index("prev_z") == 7 +assert elec_wrapper.particle_container.get_real_comp_index("prev_x") == 6 +assert elec_wrapper.particle_container.get_real_comp_index("prev_z") == 7 # sanity check that the prev_z values are reasonable and # that the correct number of values are returned diff --git a/Examples/Tests/restart/inputs_test_2d_runtime_components_picmi.py b/Examples/Tests/restart/inputs_test_2d_runtime_components_picmi.py index e90bfd266a7..746dff27a42 100755 --- a/Examples/Tests/restart/inputs_test_2d_runtime_components_picmi.py +++ b/Examples/Tests/restart/inputs_test_2d_runtime_components_picmi.py @@ -107,7 +107,8 @@ np.random.seed(30025025) electron_wrapper = particle_containers.ParticleContainerWrapper("electrons") -electron_wrapper.add_real_comp("newPid") +if not sim.amr_restart: + electron_wrapper.add_real_comp("newPid") def add_particles(): @@ -140,8 +141,8 @@ def add_particles(): ########################## assert electron_wrapper.nps == 90 -assert electron_wrapper.particle_container.get_comp_index("w") == 2 -assert electron_wrapper.particle_container.get_comp_index("newPid") == 6 +assert electron_wrapper.particle_container.get_real_comp_index("w") == 2 +assert electron_wrapper.particle_container.get_real_comp_index("newPid") == 6 new_pid_vals = electron_wrapper.get_particle_real_arrays("newPid", 0) for vals in new_pid_vals: diff --git a/Python/pywarpx/particle_containers.py b/Python/pywarpx/particle_containers.py index 3d77a61cb07..a66fd131aed 100644 --- a/Python/pywarpx/particle_containers.py +++ b/Python/pywarpx/particle_containers.py @@ -170,7 +170,9 @@ def add_particles( # --- Note that the velocities are handled separately and not included in attr # --- (even though they are stored as attributes in the C++) for key, vals in kwargs.items(): - attr[:, self.particle_container.get_comp_index(key) - built_in_attrs] = vals + attr[ + :, self.particle_container.get_real_comp_index(key) - built_in_attrs + ] = vals nattr_int = 0 attr_int = np.empty([0], dtype=np.int32) @@ -264,7 +266,7 @@ def get_particle_real_arrays(self, comp_name, level, 
copy_to_host=False):
         List of arrays
             The requested particle array data
         """
-        comp_idx = self.particle_container.get_comp_index(comp_name)
+        comp_idx = self.particle_container.get_real_comp_index(comp_name)
 
         data_array = []
         for pti in libwarpx.libwarpx_so.WarpXParIter(self.particle_container, level):
@@ -309,7 +311,7 @@ def get_particle_int_arrays(self, comp_name, level, copy_to_host=False):
         List of arrays
             The requested particle array data
         """
-        comp_idx = self.particle_container.get_icomp_index(comp_name)
+        comp_idx = self.particle_container.get_int_comp_index(comp_name)
 
         data_array = []
         for pti in libwarpx.libwarpx_so.WarpXParIter(self.particle_container, level):
@@ -842,16 +844,16 @@ def get_particle_boundary_buffer(self, species_name, boundary, comp_name, level)
         )
         data_array = []
         # loop over the real attributes
-        if comp_name in part_container.real_comp_names:
-            comp_idx = part_container.real_comp_names[comp_name]
+        if comp_name in part_container.real_soa_names:
+            comp_idx = part_container.get_real_comp_index(comp_name)
             for ii, pti in enumerate(
                 libwarpx.libwarpx_so.BoundaryBufferParIter(part_container, level)
             ):
                 soa = pti.soa()
                 data_array.append(xp.array(soa.get_real_data(comp_idx), copy=False))
         # loop over the integer attributes
-        elif comp_name in part_container.int_comp_names:
-            comp_idx = part_container.int_comp_names[comp_name]
+        elif comp_name in part_container.int_soa_names:
+            comp_idx = part_container.get_int_comp_index(comp_name)
             for ii, pti in enumerate(
                 libwarpx.libwarpx_so.BoundaryBufferParIter(part_container, level)
             ):
diff --git a/Source/Diagnostics/BTDiagnostics.cpp b/Source/Diagnostics/BTDiagnostics.cpp
index 09167452c1a..cae2d2bbc03 100644
--- a/Source/Diagnostics/BTDiagnostics.cpp
+++ b/Source/Diagnostics/BTDiagnostics.cpp
@@ -1462,6 +1462,17 @@ BTDiagnostics::InitializeParticleBuffer ()
         m_totalParticles_in_buffer[i][isp] = 0;
         m_particles_buffer[i][isp] = std::make_unique<PinnedMemoryParticleContainer>(WarpX::GetInstance().GetParGDB());
         const int idx = mpc.getSpeciesID(m_output_species_names[isp]);
+
+        // SoA component names
+        {
+            auto &pc = mpc.GetParticleContainer(idx);
+            auto rn = pc.GetRealSoANames();
+            rn.resize(WarpXParticleContainer::NArrayReal);  // strip runtime comps
+            auto in = pc.GetIntSoANames();
+            in.resize(WarpXParticleContainer::NArrayInt);  // strip runtime comps
+            m_particles_buffer[i][isp]->SetSoACompileTimeNames(rn, in);
+        }
+
         m_output_species[i].push_back(ParticleDiag(m_diag_name,
                                                    m_output_species_names[isp],
                                                    mpc.GetParticleContainerPtr(idx),
diff --git a/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp b/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp
index fc308dee936..ba371464782 100644
--- a/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp
+++ b/Source/Diagnostics/FlushFormats/FlushFormatCheckpoint.cpp
@@ -209,27 +209,25 @@ FlushFormatCheckpoint::CheckpointParticles (
             write_real_comps.push_back(1);
         }
 
-        int const compile_time_comps = static_cast<int>(real_names.size());
-
-        // get the names of the real comps
-        // note: skips the mandatory AMREX_SPACEDIM positions for pure SoA
+        // get the names of the extra real comps
         real_names.resize(pc->NumRealComps() - AMREX_SPACEDIM);
         write_real_comps.resize(pc->NumRealComps() - AMREX_SPACEDIM);
-        auto runtime_rnames = pc->getParticleRuntimeComps();
-        for (auto const& x : runtime_rnames) {
-            int const i = x.second + PIdx::nattribs - AMREX_SPACEDIM;
-            real_names[i] = x.first;
-            write_real_comps[i] = pc->h_redistribute_real_comp[i + compile_time_comps];
+
+        // note, skip the required component names here
+        auto rnames = pc->GetRealSoANames();
+        for (std::size_t index = PIdx::nattribs; index < rnames.size(); ++index) {
+            std::size_t const i = index - AMREX_SPACEDIM;
+            real_names[i] = rnames[index];
+            write_real_comps[i] = pc->h_redistribute_real_comp[index];
         }
 
         // and the int comps
         int_names.resize(pc->NumIntComps());
         write_int_comps.resize(pc->NumIntComps());
-        auto runtime_inames = pc->getParticleRuntimeiComps();
-        for (auto const& x : runtime_inames) {
-            int const i = x.second + 0;
-            int_names[i] = x.first;
-            write_int_comps[i] = pc->h_redistribute_int_comp[i+AMREX_SPACEDIM];
+        auto inames = pc->GetIntSoANames();
+        for (std::size_t index = 0; index < inames.size(); ++index) {
+            int_names[index] = inames[index];
+            write_int_comps[index] = pc->h_redistribute_int_comp[index];
         }
 
         pc->Checkpoint(dir, part_diag.getSpeciesName(),
diff --git a/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp b/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp
index d5313d71727..af8f53df9b9 100644
--- a/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp
+++ b/Source/Diagnostics/FlushFormats/FlushFormatInSitu.cpp
@@ -37,7 +37,7 @@ FlushFormatInSitu::WriteParticles(const amrex::Vector<ParticleDiag>& particle_di
     WarpXParticleContainer* pc = particle_diag.getParticleContainer();
 
     // get names of real comps
-    std::map<std::string, int> real_comps_map = pc->getParticleComps();
+    std::vector<std::string> real_comps_map = pc->GetRealSoANames();
 
     // WarpXParticleContainer compile-time extra AoS attributes (Real): 0
     // WarpXParticleContainer compile-time extra AoS attributes (int): 0
@@ -46,14 +46,7 @@ FlushFormatInSitu::WriteParticles(const amrex::Vector<ParticleDiag>& particle_di
     // not an efficient search, but N is small...
     for(int j = 0; j < PIdx::nattribs; ++j)
     {
-        auto rvn_it = real_comps_map.begin();
-        for (; rvn_it != real_comps_map.end(); ++rvn_it)
-            if (rvn_it->second == j)
-                break;
-        WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
-            rvn_it != real_comps_map.end(),
-            "WarpX In Situ: SoA real attribute not found");
-        std::string varname = rvn_it->first;
+        std::string varname = real_comps_map.at(j);
         particle_varnames.push_back(prefix + "_" + varname);
     }
     // WarpXParticleContainer compile-time extra SoA attributes (int): 0
diff --git a/Source/Diagnostics/FlushFormats/FlushFormatPlotfile.cpp b/Source/Diagnostics/FlushFormats/FlushFormatPlotfile.cpp
index 879a5986434..13117bad105 100644
--- a/Source/Diagnostics/FlushFormats/FlushFormatPlotfile.cpp
+++ b/Source/Diagnostics/FlushFormats/FlushFormatPlotfile.cpp
@@ -372,13 +372,13 @@ FlushFormatPlotfile::WriteParticles(const std::string& dir,
         real_names.push_back("theta");
 #endif
 
-    // get the names of the real comps
-
-    // note: skips the mandatory AMREX_SPACEDIM positions for pure SoA
+    // get the names of the extra real comps
     real_names.resize(tmp.NumRealComps() - AMREX_SPACEDIM);
-    auto runtime_rnames = tmp.getParticleRuntimeComps();
-    for (auto const& x : runtime_rnames) {
-        real_names[x.second + PIdx::nattribs - AMREX_SPACEDIM] = x.first;
+
+    // note, skip the required component names here
+    auto rnames = tmp.GetRealSoANames();
+    for (std::size_t index = PIdx::nattribs; index < rnames.size(); ++index) {
+        real_names[index - AMREX_SPACEDIM] = rnames[index];
     }
 
     // plot any "extra" fields by default
@@ -390,8 +390,10 @@ FlushFormatPlotfile::WriteParticles(const std::string& dir,
 
     // and the names
     int_names.resize(tmp.NumIntComps());
-    auto runtime_inames = tmp.getParticleRuntimeiComps();
-    for (auto const& x : runtime_inames) { int_names[x.second+0] = x.first; }
+    auto inames = tmp.GetIntSoANames();
+    for (std::size_t index = 0; index < inames.size(); ++index) {
+        int_names[index] = inames[index];
+    }
 
     // plot by default
     int_flags.resize(tmp.NumIntComps(), 1);
diff --git a/Source/Diagnostics/ParticleDiag/ParticleDiag.cpp b/Source/Diagnostics/ParticleDiag/ParticleDiag.cpp
index 1a64ae20f0e..8e61e7464ad 100644
--- a/Source/Diagnostics/ParticleDiag/ParticleDiag.cpp
+++ b/Source/Diagnostics/ParticleDiag/ParticleDiag.cpp
@@ -36,26 +36,23 @@ ParticleDiag::ParticleDiag (
     std::fill(m_plot_flags.begin(), m_plot_flags.end(), 0);
     bool contains_positions = false;
     if (variables[0] != "none"){
-        std::map<std::string, int> existing_variable_names = pc->getParticleComps();
+        for (auto& var : variables){
 #ifdef WARPX_DIM_RZ
-        // we reconstruct to Cartesian x,y,z for RZ particle output
-        existing_variable_names["y"] = PIdx::theta;
+            // we reconstruct to Cartesian x,y,z for RZ particle output
+            if (var == "y") { var = "theta"; }
 #endif
-        for (const auto& var : variables){
             if (var == "phi") {
                 // User requests phi on particle. This is *not* part of the variables that
                 // the particle container carries, and is only added to particles during output.
                 // Therefore, this case needs to be treated specifically.
                 m_plot_phi = true;
             } else {
-                const auto search = existing_variable_names.find(var);
-                WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
-                    search != existing_variable_names.end(),
+                WARPX_ALWAYS_ASSERT_WITH_MESSAGE(pc->HasRealComp(var),
                     "variables argument '" + var
                     +"' is not an existing attribute for this species");
-                m_plot_flags[existing_variable_names.at(var)] = 1;
+                m_plot_flags[pc->GetRealCompIndex(var)] = 1;
 
-                if (var == "x" || var == "y" || var == "z") {
+                if (var == "x" || var == "y" || var == "z" || var == "theta") {
                     contains_positions = true;
                 }
             }
@@ -75,7 +72,7 @@ ParticleDiag::ParticleDiag (
     // Always write out theta, whether or not it's requested,
     // to be consistent with always writing out r and z.
     // TODO: openPMD does a reconstruction to Cartesian, so we can now skip force-writing this
-    m_plot_flags[pc->getParticleComps().at("theta")] = 1;
+    m_plot_flags[pc->GetRealCompIndex("theta")] = 1;
 #endif
 
     // build filter functors
diff --git a/Source/Diagnostics/ParticleIO.cpp b/Source/Diagnostics/ParticleIO.cpp
index d7a26326e52..62a5e126558 100644
--- a/Source/Diagnostics/ParticleIO.cpp
+++ b/Source/Diagnostics/ParticleIO.cpp
@@ -153,27 +153,30 @@ MultiParticleContainer::Restart (const std::string& dir)
                 real_comp_names.push_back(comp_name);
             }
 
-            for (auto const& comp : pc->getParticleRuntimeComps()) {
-                auto search = std::find(real_comp_names.begin(), real_comp_names.end(), comp.first);
+            int n_rc = 0;
+            for (auto const& comp : pc->GetRealSoANames()) {
+                // skip compile-time components
+                n_rc++;
+                if (n_rc <= WarpXParticleContainer::NArrayReal) { continue; }
+
+                auto search = std::find(real_comp_names.begin(), real_comp_names.end(), comp);
                 WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
                     search != real_comp_names.end(),
                     "Species " + species_names[i]
-                    + "needs runtime real component " + comp.first
+                    + " needs runtime real component " + comp
                     + ", but it was not found in the checkpoint file."
                 );
             }
 
             for (int j = PIdx::nattribs-AMREX_SPACEDIM; j < nr; ++j) {
                 const auto& comp_name = real_comp_names[j];
-                auto current_comp_names = pc->getParticleComps();
-                auto search = current_comp_names.find(comp_name);
-                if (search == current_comp_names.end()) {
+                if (!pc->HasRealComp(comp_name)) {
                     amrex::Print() << Utils::TextMsg::Info(
                         "Runtime real component " + comp_name
                         + " was found in the checkpoint file, but it has not been added yet. "
                         + " Adding it now."
                     );
-                    pc->NewRealComp(comp_name);
+                    pc->AddRealComp(comp_name);
                 }
             }
 
@@ -187,26 +190,29 @@ MultiParticleContainer::Restart (const std::string& dir)
                 int_comp_names.push_back(comp_name);
             }
 
-            for (auto const& comp : pc->getParticleRuntimeiComps()) {
-                auto search = std::find(int_comp_names.begin(), int_comp_names.end(), comp.first);
+            int n_ic = 0;
+            for (auto const& comp : pc->GetIntSoANames()) {
+                // skip compile-time components
+                n_ic++;
+                if (n_ic <= WarpXParticleContainer::NArrayInt) { continue; }
+
+                auto search = std::find(int_comp_names.begin(), int_comp_names.end(), comp);
                 WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
                     search != int_comp_names.end(),
-                    "Species " + species_names[i] + "needs runtime int component " + comp.first
+                    "Species " + species_names[i] + " needs runtime int component " + comp
                     + ", but it was not found in the checkpoint file."
                 );
             }
 
             for (int j = 0; j < ni; ++j) {
                 const auto& comp_name = int_comp_names[j];
-                auto current_comp_names = pc->getParticleiComps();
-                auto search = current_comp_names.find(comp_name);
-                if (search == current_comp_names.end()) {
+                if (!pc->HasIntComp(comp_name)) {
                     amrex::Print()<< Utils::TextMsg::Info(
                         "Runtime int component " + comp_name
                         + " was found in the checkpoint file, but it has not been added yet. "
                         + " Adding it now."
                     );
-                    pc->NewIntComp(comp_name);
+                    pc->AddIntComp(comp_name);
                 }
             }
 
@@ -258,8 +264,8 @@ storePhiOnParticles ( PinnedMemoryParticleContainer& tmp,
         is_full_diagnostic,
         "Output of the electrostatic potential (phi) on the particles was requested, "
         "but this is only available with `diag_type = Full`.");
-    tmp.NewRealComp("phi");
-    int const phi_index = tmp.getParticleComps().at("phi");
+    tmp.AddRealComp("phi");
+    int const phi_index = tmp.GetRealCompIndex("phi");
     auto& warpx = WarpX::GetInstance();
     for (int lev=0; lev<=warpx.finestLevel(); lev++) {
         const amrex::Geometry& geom = warpx.Geom(lev);
diff --git a/Source/Diagnostics/WarpXOpenPMD.cpp b/Source/Diagnostics/WarpXOpenPMD.cpp
index 2fac8ede452..96e8bb846bb 100644
--- a/Source/Diagnostics/WarpXOpenPMD.cpp
+++ b/Source/Diagnostics/WarpXOpenPMD.cpp
@@ -10,7 +10,6 @@
 #include "Diagnostics/ParticleDiag/ParticleDiag.H"
 #include "FieldIO.H"
 #include "Particles/Filter/FilterFunctors.H"
-#include "Particles/NamedComponentParticleContainer.H"
 #include "Utils/TextMsg.H"
 #include "Utils/Parser/ParserUtils.H"
 #include "Utils/RelativeCellPosition.H"
@@ -591,44 +590,52 @@ for (const auto & particle_diag : particle_diags) {
         storePhiOnParticles( tmp, WarpX::electrostatic_solver_id,
                              !use_pinned_pc );
     }
 
-    // names of amrex::Real and int particle attributes in SoA data
+    // names of amrex::ParticleReal and int particle attributes in SoA data
+    auto const rn = tmp.GetRealSoANames();
+    auto const in = tmp.GetIntSoANames();
     amrex::Vector<std::string> real_names;
-    amrex::Vector<std::string> int_names;
-    amrex::Vector<int> int_flags;
-    amrex::Vector<int> real_flags;
-    // see openPMD ED-PIC extension for namings
-    // note: an underscore separates the record name from its component
-    //       for non-scalar records
-    // note: in RZ, we reconstruct x,y,z positions from r,z,theta in WarpX
+    amrex::Vector<std::string> int_names(in.begin(), in.end());
+
+    // transform names to openPMD, separated by underscores
+    {
+        // see openPMD ED-PIC extension for namings
+        // note: an underscore separates the record name from its component
+        //       for non-scalar records
+        // note: in RZ, we reconstruct x,y,z positions from r,z,theta in WarpX
 #if !defined (WARPX_DIM_1D_Z)
-    real_names.push_back("position_x");
+        real_names.push_back("position_x");
 #endif
 #if defined (WARPX_DIM_3D) ||
defined(WARPX_DIM_RZ) - real_names.push_back("position_y"); + real_names.push_back("position_y"); #endif - real_names.push_back("position_z"); - real_names.push_back("weighting"); - real_names.push_back("momentum_x"); - real_names.push_back("momentum_y"); - real_names.push_back("momentum_z"); - // get the names of the real comps - real_names.resize(tmp.NumRealComps()); - auto runtime_rnames = tmp.getParticleRuntimeComps(); - for (auto const& x : runtime_rnames) + real_names.push_back("position_z"); + real_names.push_back("weighting"); + real_names.push_back("momentum_x"); + real_names.push_back("momentum_y"); + real_names.push_back("momentum_z"); + } + for (size_t i = real_names.size(); i < rn.size(); ++i) { - real_names[x.second+PIdx::nattribs] = detail::snakeToCamel(x.first); + real_names.push_back(rn[i]); } + + for (size_t i = PIdx::nattribs; i < rn.size(); ++i) + { + real_names[i] = detail::snakeToCamel(rn[i]); + } + // plot any "extra" fields by default - real_flags = particle_diag.m_plot_flags; + amrex::Vector real_flags = particle_diag.m_plot_flags; real_flags.resize(tmp.NumRealComps(), 1); - // and the names - int_names.resize(tmp.NumIntComps()); - auto runtime_inames = tmp.getParticleRuntimeiComps(); - for (auto const& x : runtime_inames) + + // and the int names + for (size_t i = 0; i < in.size(); ++i) { - int_names[x.second+0] = detail::snakeToCamel(x.first); + int_names[i] = detail::snakeToCamel(in[i]); } + // plot by default + amrex::Vector int_flags; int_flags.resize(tmp.NumIntComps(), 1); // real_names contains a list of all real particle attributes. diff --git a/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp b/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp index d06e84859d8..ab064772922 100644 --- a/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp +++ b/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp @@ -13,15 +13,15 @@ void ImplicitSolver::CreateParticleAttributes () const // Add space to save the positions and velocities at the start of the time steps for (auto const& pc : m_WarpX->GetPartContainer()) { #if (AMREX_SPACEDIM >= 2) - pc->NewRealComp("x_n", comm); + pc->AddRealComp("x_n", comm); #endif #if defined(WARPX_DIM_3D) || defined(WARPX_DIM_RZ) - pc->NewRealComp("y_n", comm); + pc->AddRealComp("y_n", comm); #endif - pc->NewRealComp("z_n", comm); - pc->NewRealComp("ux_n", comm); - pc->NewRealComp("uy_n", comm); - pc->NewRealComp("uz_n", comm); + pc->AddRealComp("z_n", comm); + pc->AddRealComp("ux_n", comm); + pc->AddRealComp("uy_n", comm); + pc->AddRealComp("uz_n", comm); } } diff --git a/Source/FieldSolver/ImplicitSolvers/WarpXImplicitOps.cpp b/Source/FieldSolver/ImplicitSolvers/WarpXImplicitOps.cpp index 9b62bd91b0c..06e1820854c 100644 --- a/Source/FieldSolver/ImplicitSolvers/WarpXImplicitOps.cpp +++ b/Source/FieldSolver/ImplicitSolvers/WarpXImplicitOps.cpp @@ -169,7 +169,7 @@ WarpX::SaveParticlesAtImplicitStepStart ( ) #endif { - auto particle_comps = pc->getParticleComps(); + auto particle_comps = pc->GetRealSoANames(); for (WarpXParIter pti(*pc, lev); pti.isValid(); ++pti) { @@ -181,15 +181,15 @@ WarpX::SaveParticlesAtImplicitStepStart ( ) amrex::ParticleReal* const AMREX_RESTRICT uz = attribs[PIdx::uz].dataPtr(); #if (AMREX_SPACEDIM >= 2) - amrex::ParticleReal* x_n = pti.GetAttribs(particle_comps["x_n"]).dataPtr(); + amrex::ParticleReal* x_n = pti.GetAttribs("x_n").dataPtr(); #endif #if defined(WARPX_DIM_3D) || defined(WARPX_DIM_RZ) - amrex::ParticleReal* y_n = pti.GetAttribs(particle_comps["y_n"]).dataPtr(); + amrex::ParticleReal* y_n = 
pti.GetAttribs("y_n").dataPtr(); #endif - amrex::ParticleReal* z_n = pti.GetAttribs(particle_comps["z_n"]).dataPtr(); - amrex::ParticleReal* ux_n = pti.GetAttribs(particle_comps["ux_n"]).dataPtr(); - amrex::ParticleReal* uy_n = pti.GetAttribs(particle_comps["uy_n"]).dataPtr(); - amrex::ParticleReal* uz_n = pti.GetAttribs(particle_comps["uz_n"]).dataPtr(); + amrex::ParticleReal* z_n = pti.GetAttribs("z_n").dataPtr(); + amrex::ParticleReal* ux_n = pti.GetAttribs("ux_n").dataPtr(); + amrex::ParticleReal* uy_n = pti.GetAttribs("uy_n").dataPtr(); + amrex::ParticleReal* uz_n = pti.GetAttribs("uz_n").dataPtr(); const long np = pti.numParticles(); @@ -239,7 +239,7 @@ WarpX::FinishImplicitParticleUpdate () #endif { - auto particle_comps = pc->getParticleComps(); + auto particle_comps = pc->GetRealSoANames(); for (WarpXParIter pti(*pc, lev); pti.isValid(); ++pti) { @@ -252,15 +252,15 @@ WarpX::FinishImplicitParticleUpdate () amrex::ParticleReal* const AMREX_RESTRICT uz = attribs[PIdx::uz].dataPtr(); #if (AMREX_SPACEDIM >= 2) - amrex::ParticleReal* x_n = pti.GetAttribs(particle_comps["x_n"]).dataPtr(); + amrex::ParticleReal* x_n = pti.GetAttribs("x_n").dataPtr(); #endif #if defined(WARPX_DIM_3D) || defined(WARPX_DIM_RZ) - amrex::ParticleReal* y_n = pti.GetAttribs(particle_comps["y_n"]).dataPtr(); + amrex::ParticleReal* y_n = pti.GetAttribs("y_n").dataPtr(); #endif - amrex::ParticleReal* z_n = pti.GetAttribs(particle_comps["z_n"]).dataPtr(); - amrex::ParticleReal* ux_n = pti.GetAttribs(particle_comps["ux_n"]).dataPtr(); - amrex::ParticleReal* uy_n = pti.GetAttribs(particle_comps["uy_n"]).dataPtr(); - amrex::ParticleReal* uz_n = pti.GetAttribs(particle_comps["uz_n"]).dataPtr(); + amrex::ParticleReal* z_n = pti.GetAttribs("z_n").dataPtr(); + amrex::ParticleReal* ux_n = pti.GetAttribs("ux_n").dataPtr(); + amrex::ParticleReal* uy_n = pti.GetAttribs("uy_n").dataPtr(); + amrex::ParticleReal* uz_n = pti.GetAttribs("uz_n").dataPtr(); const long np = pti.numParticles(); diff --git a/Source/Particles/AddPlasmaUtilities.H b/Source/Particles/AddPlasmaUtilities.H index 7b8e4e58105..12d964adf64 100644 --- a/Source/Particles/AddPlasmaUtilities.H +++ b/Source/Particles/AddPlasmaUtilities.H @@ -251,8 +251,6 @@ struct PlasmaParserHelper PlasmaParserHelper (SoAType& a_soa, std::size_t old_size, const std::vector& a_user_int_attribs, const std::vector& a_user_real_attribs, - std::map& a_particle_icomps, - std::map& a_particle_comps, const PlasmaParserWrapper& wrapper) : m_wrapper_ptr(&wrapper) { m_pa_user_int_pinned.resize(a_user_int_attribs.size()); @@ -266,10 +264,10 @@ struct PlasmaParserHelper #endif for (std::size_t ia = 0; ia < a_user_int_attribs.size(); ++ia) { - m_pa_user_int_pinned[ia] = a_soa.GetIntData(a_particle_icomps[a_user_int_attribs[ia]]).data() + old_size; + m_pa_user_int_pinned[ia] = a_soa.GetIntData(a_user_int_attribs[ia]).data() + old_size; } for (std::size_t ia = 0; ia < a_user_real_attribs.size(); ++ia) { - m_pa_user_real_pinned[ia] = a_soa.GetRealData(a_particle_comps[a_user_real_attribs[ia]]).data() + old_size; + m_pa_user_real_pinned[ia] = a_soa.GetRealData(a_user_real_attribs[ia]).data() + old_size; } #ifdef AMREX_USE_GPU @@ -308,7 +306,6 @@ struct QEDHelper { template QEDHelper (SoAType& a_soa, std::size_t old_size, - std::map& a_particle_comps, bool a_has_quantum_sync, bool a_has_breit_wheeler, const std::shared_ptr& a_shr_p_qs_engine, const std::shared_ptr& a_shr_p_bw_engine) @@ -317,14 +314,12 @@ struct QEDHelper if(has_quantum_sync){ quantum_sync_get_opt = 
a_shr_p_qs_engine->build_optical_depth_functor(); - p_optical_depth_QSR = a_soa.GetRealData( - a_particle_comps["opticalDepthQSR"]).data() + old_size; + p_optical_depth_QSR = a_soa.GetRealData("opticalDepthQSR").data() + old_size; } if(has_breit_wheeler){ breit_wheeler_get_opt = a_shr_p_bw_engine->build_optical_depth_functor(); - p_optical_depth_BW = a_soa.GetRealData( - a_particle_comps["opticalDepthBW"]).data() + old_size; + p_optical_depth_BW = a_soa.GetRealData("opticalDepthBW").data() + old_size; } } diff --git a/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H b/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H index db04dbc7f32..e4b4d8a6a3a 100644 --- a/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H +++ b/Source/Particles/Collision/BinaryCollision/DSMC/SplitAndScatterFunc.H @@ -252,7 +252,7 @@ public: ParticleCreation::DefaultInitializeRuntimeAttributes(*tile_products[i], 0, 0, pc_products[i]->getUserRealAttribs(), pc_products[i]->getUserIntAttribs(), - pc_products[i]->getParticleComps(), pc_products[i]->getParticleiComps(), + pc_products[i]->GetRealSoANames(), pc_products[i]->GetIntSoANames(), pc_products[i]->getUserRealAttribParser(), pc_products[i]->getUserIntAttribParser(), #ifdef WARPX_QED diff --git a/Source/Particles/Collision/BinaryCollision/ParticleCreationFunc.H b/Source/Particles/Collision/BinaryCollision/ParticleCreationFunc.H index e4772aab7c9..59565c92516 100644 --- a/Source/Particles/Collision/BinaryCollision/ParticleCreationFunc.H +++ b/Source/Particles/Collision/BinaryCollision/ParticleCreationFunc.H @@ -235,7 +235,7 @@ public: ParticleCreation::DefaultInitializeRuntimeAttributes(*tile_products[i], 0, 0, pc_products[i]->getUserRealAttribs(), pc_products[i]->getUserIntAttribs(), - pc_products[i]->getParticleComps(), pc_products[i]->getParticleiComps(), + pc_products[i]->GetRealSoANames(), pc_products[i]->GetIntSoANames(), pc_products[i]->getUserRealAttribParser(), pc_products[i]->getUserIntAttribParser(), #ifdef WARPX_QED diff --git a/Source/Particles/ElementaryProcess/QEDPairGeneration.H b/Source/Particles/ElementaryProcess/QEDPairGeneration.H index f1beb8363a7..99e87b5c796 100644 --- a/Source/Particles/ElementaryProcess/QEDPairGeneration.H +++ b/Source/Particles/ElementaryProcess/QEDPairGeneration.H @@ -41,7 +41,7 @@ public: /** * \brief Constructor of the PairGenerationFilterFunc functor. * - * @param[in] opt_depth_runtime_comp index of the optical depth component + * @param[in] opt_depth_runtime_comp index of the optical depth runtime component */ PairGenerationFilterFunc(int const opt_depth_runtime_comp) : m_opt_depth_runtime_comp(opt_depth_runtime_comp) @@ -67,7 +67,7 @@ public: } private: - int m_opt_depth_runtime_comp = 0; /*!< Index of the optical depth component of the species.*/ + int m_opt_depth_runtime_comp = 0; /*!< Index of the optical depth runtime component of the species. */ }; /** diff --git a/Source/Particles/ElementaryProcess/QEDPhotonEmission.H b/Source/Particles/ElementaryProcess/QEDPhotonEmission.H index 0b6836a38bc..f509f884c48 100644 --- a/Source/Particles/ElementaryProcess/QEDPhotonEmission.H +++ b/Source/Particles/ElementaryProcess/QEDPhotonEmission.H @@ -47,7 +47,7 @@ public: /** * \brief Constructor of the PhotonEmissionFilterFunc functor. 
* - * @param[in] opt_depth_runtime_comp Index of the optical depth component + * @param[in] opt_depth_runtime_comp Index of the optical depth component in the runtime real data */ PhotonEmissionFilterFunc(int const opt_depth_runtime_comp) : m_opt_depth_runtime_comp(opt_depth_runtime_comp) @@ -73,7 +73,7 @@ public: } private: - int m_opt_depth_runtime_comp; /*!< Index of the optical depth component of the source species*/ + int m_opt_depth_runtime_comp; /*!< Index of the optical depth runtime component of the source species */ }; /** diff --git a/Source/Particles/LaserParticleContainer.cpp b/Source/Particles/LaserParticleContainer.cpp index 1954b822084..c79d1f675b5 100644 --- a/Source/Particles/LaserParticleContainer.cpp +++ b/Source/Particles/LaserParticleContainer.cpp @@ -873,18 +873,18 @@ LaserParticleContainer::update_laser_particle (WarpXParIter& pti, #if (AMREX_SPACEDIM >= 2) ParticleReal* x_n = nullptr; if (push_type == PushType::Implicit) { - x_n = pti.GetAttribs(particle_comps["x_n"]).dataPtr(); + x_n = pti.GetAttribs("x_n").dataPtr(); } #endif #if defined(WARPX_DIM_3D) || defined(WARPX_DIM_RZ) ParticleReal* y_n = nullptr; if (push_type == PushType::Implicit) { - y_n = pti.GetAttribs(particle_comps["y_n"]).dataPtr(); + y_n = pti.GetAttribs("y_n").dataPtr(); } #endif ParticleReal* z_n = nullptr; if (push_type == PushType::Implicit) { - z_n = pti.GetAttribs(particle_comps["z_n"]).dataPtr(); + z_n = pti.GetAttribs("z_n").dataPtr(); } // Copy member variables to tmp copies for GPU runs. diff --git a/Source/Particles/MultiParticleContainer.cpp b/Source/Particles/MultiParticleContainer.cpp index c6724b5185a..6c08dc6aa8d 100644 --- a/Source/Particles/MultiParticleContainer.cpp +++ b/Source/Particles/MultiParticleContainer.cpp @@ -21,7 +21,6 @@ # include "Particles/ElementaryProcess/QEDPhotonEmission.H" #endif #include "Particles/LaserParticleContainer.H" -#include "Particles/NamedComponentParticleContainer.H" #include "Particles/ParticleCreation/FilterCopyTransform.H" #ifdef WARPX_QED # include "Particles/ParticleCreation/FilterCreateTransformFromFAB.H" @@ -1622,7 +1621,7 @@ void MultiParticleContainer::doQedQuantumSync (int lev, auto Transform = PhotonEmissionTransformFunc( m_shr_p_qs_engine->build_optical_depth_functor(), - pc_source->particle_runtime_comps["opticalDepthQSR"], + pc_source->GetRealCompIndex("opticalDepthQSR") - pc_source->NArrayReal, m_shr_p_qs_engine->build_phot_em_functor(), pti, lev, Ex.nGrowVect(), Ex[pti], Ey[pti], Ez[pti], diff --git a/Source/Particles/NamedComponentParticleContainer.H b/Source/Particles/NamedComponentParticleContainer.H deleted file mode 100644 index 57c65746d18..00000000000 --- a/Source/Particles/NamedComponentParticleContainer.H +++ /dev/null @@ -1,222 +0,0 @@ -/* Copyright 2022 Remi Lehe - * - * This file is part of WarpX. 
- * - * License: BSD-3-Clause-LBNL - */ -#ifndef WARPX_NamedComponentParticleContainer_H_ -#define WARPX_NamedComponentParticleContainer_H_ - -#include "Utils/TextMsg.H" - -#include -#include -#include - -#include -#include -#include - - -/** Real Particle Attributes stored in amrex::ParticleContainer's struct of array - */ -struct PIdx -{ - enum { -#if !defined (WARPX_DIM_1D_Z) - x, -#endif -#if defined (WARPX_DIM_3D) - y, -#endif - z, - w, ///< weight - ux, uy, uz, -#ifdef WARPX_DIM_RZ - theta, ///< RZ needs all three position components -#endif - nattribs ///< number of compile-time attributes - }; -}; - -/** Integer Particle Attributes stored in amrex::ParticleContainer's struct of array - */ -struct PIdxInt -{ - enum { - nattribs ///< number of compile-time attributes - }; -}; - -/** Particle Container class that allows to add/access particle components - * with a name (string) instead of doing so with an integer index. - * (The "components" are all the particle amrex::Real quantities.) - * - * This is done by storing maps that give the index of the component - * that corresponds to a given string. - * - * @tparam T_Allocator Mainly controls in which type of memory (e.g. device - * arena, pinned memory arena, etc.) the particle data will be stored - */ -template class T_Allocator=amrex::DefaultAllocator> -class NamedComponentParticleContainer : -public amrex::ParticleContainerPureSoA -{ -public: - /** Construct an empty NamedComponentParticleContainer **/ - NamedComponentParticleContainer () : amrex::ParticleContainerPureSoA() {} - - /** Construct a NamedComponentParticleContainer from an AmrParGDB object - * - * In this case, the only components are the default ones: - * weight, momentum and (in RZ geometry) theta. - * - * @param amr_pgdb A pointer to a ParGDBBase, which contains pointers to - * the Geometry, DistributionMapping, and BoxArray objects that define the - * AMR hierarchy. Usually, this is generated by an AmrCore or AmrLevel object. 
- */ - NamedComponentParticleContainer (amrex::AmrParGDB* amr_pgdb) - : amrex::ParticleContainerPureSoA(amr_pgdb) { - // build up the map of string names to particle component numbers -#if !defined (WARPX_DIM_1D_Z) - particle_comps["x"] = PIdx::x; -#endif -#if defined (WARPX_DIM_3D) - particle_comps["y"] = PIdx::y; -#endif - particle_comps["z"] = PIdx::z; - particle_comps["w"] = PIdx::w; - particle_comps["ux"] = PIdx::ux; - particle_comps["uy"] = PIdx::uy; - particle_comps["uz"] = PIdx::uz; -#ifdef WARPX_DIM_RZ - particle_comps["theta"] = PIdx::theta; -#endif - } - - /** Destructor for NamedComponentParticleContainer */ - ~NamedComponentParticleContainer() override = default; - - /** Construct a NamedComponentParticleContainer from a regular - * amrex::ParticleContainer, and additional name-to-index maps - * - * @param pc regular particle container, where components are not named (only indexed) - * @param p_comps name-to-index map for compile-time and run-time real components - * @param p_icomps name-to-index map for compile-time and run-time integer components - * @param p_rcomps name-to-index map for run-time real components - * @param p_ricomps name-to-index map for run-time integer components - */ - NamedComponentParticleContainer( - amrex::ParticleContainerPureSoA && pc, - std::map p_comps, - std::map p_icomps, - std::map p_rcomps, - std::map p_ricomps) - : amrex::ParticleContainerPureSoA(std::move(pc)), - particle_comps(std::move(p_comps)), - particle_icomps(std::move(p_icomps)), - particle_runtime_comps(std::move(p_rcomps)), - particle_runtime_icomps(std::move(p_ricomps)) {} - - /** Copy constructor for NamedComponentParticleContainer */ - NamedComponentParticleContainer ( const NamedComponentParticleContainer &) = delete; - /** Copy operator for NamedComponentParticleContainer */ - NamedComponentParticleContainer& operator= ( const NamedComponentParticleContainer & ) = delete; - - /** Move constructor for NamedComponentParticleContainer */ - NamedComponentParticleContainer ( NamedComponentParticleContainer && ) noexcept = default; - /** Move operator for NamedComponentParticleContainer */ - NamedComponentParticleContainer& operator= ( NamedComponentParticleContainer && ) noexcept = default; - - /** Create an empty particle container - * - * This creates a new NamedComponentParticleContainer with same compile-time - * and run-time attributes. But it can change its allocator. 
- * - * This function overloads the corresponding function from the parent - * class (amrex::ParticleContainer) - */ - template class NewAllocator=amrex::DefaultAllocator> - NamedComponentParticleContainer - make_alike () const { - auto tmp = NamedComponentParticleContainer( - amrex::ParticleContainerPureSoA::template make_alike(), - particle_comps, - particle_icomps, - particle_runtime_comps, - particle_runtime_icomps); - - return tmp; - } - - using amrex::ParticleContainerPureSoA::NumRealComps; - using amrex::ParticleContainerPureSoA::NumIntComps; - using amrex::ParticleContainerPureSoA::AddRealComp; - using amrex::ParticleContainerPureSoA::AddIntComp; - - /** Allocate a new run-time real component - * - * @param name Name of the new component - * @param comm Whether to communicate this component, in the particle Redistribute - */ - void NewRealComp (const std::string& name, bool comm=true) - { - auto search = particle_comps.find(name); - if (search == particle_comps.end()) { - particle_comps[name] = NumRealComps(); - particle_runtime_comps[name] = NumRealComps() - PIdx::nattribs; - AddRealComp(comm); - } else { - amrex::Print() << Utils::TextMsg::Info( - name + " already exists in particle_comps, not adding."); - } - } - - /** Allocate a new run-time integer component - * - * @param name Name of the new component - * @param comm Whether to communicate this component, in the particle Redistribute - */ - void NewIntComp (const std::string& name, bool comm=true) - { - auto search = particle_icomps.find(name); - if (search == particle_icomps.end()) { - particle_icomps[name] = NumIntComps(); - particle_runtime_icomps[name] = NumIntComps() - 0; - AddIntComp(comm); - } else { - amrex::Print() << Utils::TextMsg::Info( - name + " already exists in particle_icomps, not adding."); - } - } - - void defineAllParticleTiles () noexcept - { - for (int lev = 0; lev <= amrex::ParticleContainerPureSoA::finestLevel(); ++lev) - { - for (auto mfi = amrex::ParticleContainerPureSoA::MakeMFIter(lev); mfi.isValid(); ++mfi) - { - const int grid_id = mfi.index(); - const int tile_id = mfi.LocalTileIndex(); - amrex::ParticleContainerPureSoA::DefineAndReturnParticleTile(lev, grid_id, tile_id); - } - } - } - - /** Return the name-to-index map for the compile-time and runtime-time real components */ - [[nodiscard]] std::map getParticleComps () const noexcept { return particle_comps;} - /** Return the name-to-index map for the compile-time and runtime-time integer components */ - [[nodiscard]] std::map getParticleiComps () const noexcept { return particle_icomps;} - /** Return the name-to-index map for the runtime-time real components */ - [[nodiscard]] std::map getParticleRuntimeComps () const noexcept { return particle_runtime_comps;} - /** Return the name-to-index map for the runtime-time integer components */ - [[nodiscard]] std::map getParticleRuntimeiComps () const noexcept { return particle_runtime_icomps;} - -protected: - std::map particle_comps; - std::map particle_icomps; - std::map particle_runtime_comps; - std::map particle_runtime_icomps; -}; - -#endif //WARPX_NamedComponentParticleContainer_H_ diff --git a/Source/Particles/ParticleBoundaryBuffer.H b/Source/Particles/ParticleBoundaryBuffer.H index 24b388be00e..c9589ac0c75 100644 --- a/Source/Particles/ParticleBoundaryBuffer.H +++ b/Source/Particles/ParticleBoundaryBuffer.H @@ -32,9 +32,9 @@ public: /** Copy operator for ParticleBoundaryBuffer */ ParticleBoundaryBuffer& operator= ( const ParticleBoundaryBuffer & ) = delete; - /** Move constructor for 
NamedComponentParticleContainer */ + /** Move constructor for ParticleBoundaryBuffer */ ParticleBoundaryBuffer ( ParticleBoundaryBuffer && ) = default; - /** Move operator for NamedComponentParticleContainer */ + /** Move operator for ParticleBoundaryBuffer */ ParticleBoundaryBuffer& operator= ( ParticleBoundaryBuffer && ) = default; int numSpecies() const { return static_cast(getSpeciesNames().size()); } diff --git a/Source/Particles/ParticleBoundaryBuffer.cpp b/Source/Particles/ParticleBoundaryBuffer.cpp index dbe5dea7085..048534bff6a 100644 --- a/Source/Particles/ParticleBoundaryBuffer.cpp +++ b/Source/Particles/ParticleBoundaryBuffer.cpp @@ -384,11 +384,11 @@ void ParticleBoundaryBuffer::gatherParticlesFromDomainBoundaries (MultiParticleC if (!buffer[i].isDefined()) { buffer[i] = pc.make_alike(); - buffer[i].NewIntComp("stepScraped", false); - buffer[i].NewRealComp("deltaTimeScraped", false); - buffer[i].NewRealComp("nx", false); - buffer[i].NewRealComp("ny", false); - buffer[i].NewRealComp("nz", false); + buffer[i].AddIntComp("stepScraped", false); + buffer[i].AddRealComp("deltaTimeScraped", false); + buffer[i].AddRealComp("nx", false); + buffer[i].AddRealComp("ny", false); + buffer[i].AddRealComp("nz", false); } auto& species_buffer = buffer[i]; @@ -443,11 +443,10 @@ void ParticleBoundaryBuffer::gatherParticlesFromDomainBoundaries (MultiParticleC WARPX_PROFILE("ParticleBoundaryBuffer::gatherParticles::filterAndTransform"); auto& warpx = WarpX::GetInstance(); const auto dt = warpx.getdt(pti.GetLevel()); - auto string_to_index_intcomp = buffer[i].getParticleRuntimeiComps(); - const int step_scraped_index = string_to_index_intcomp.at("stepScraped"); - auto string_to_index_realcomp = buffer[i].getParticleRuntimeComps(); - const int delta_index = string_to_index_realcomp.at("deltaTimeScraped"); - const int normal_index = string_to_index_realcomp.at("nx"); + auto & buf = buffer[i]; + const int step_scraped_index = buf.GetIntCompIndex("stepScraped") - PinnedMemoryParticleContainer::NArrayInt; + const int delta_index = buf.GetRealCompIndex("deltaTimeScraped") - PinnedMemoryParticleContainer::NArrayReal; + const int normal_index = buf.GetRealCompIndex("nx") - PinnedMemoryParticleContainer::NArrayReal; const int step = warpx_instance.getistep(0); amrex::filterAndTransformParticles(ptile_buffer, ptile, predicate, @@ -481,11 +480,11 @@ void ParticleBoundaryBuffer::gatherParticlesFromEmbeddedBoundaries ( if (!buffer[i].isDefined()) { buffer[i] = pc.make_alike(); - buffer[i].NewIntComp("stepScraped", false); - buffer[i].NewRealComp("deltaTimeScraped", false); - buffer[i].NewRealComp("nx", false); - buffer[i].NewRealComp("ny", false); - buffer[i].NewRealComp("nz", false); + buffer[i].AddIntComp("stepScraped", false); + buffer[i].AddRealComp("deltaTimeScraped", false); + buffer[i].AddRealComp("nx", false); + buffer[i].AddRealComp("ny", false); + buffer[i].AddRealComp("nz", false); } @@ -546,11 +545,10 @@ void ParticleBoundaryBuffer::gatherParticlesFromEmbeddedBoundaries ( } auto &warpx = WarpX::GetInstance(); const auto dt = warpx.getdt(pti.GetLevel()); - auto string_to_index_intcomp = buffer[i].getParticleRuntimeiComps(); - const int step_scraped_index = string_to_index_intcomp.at("stepScraped"); - auto string_to_index_realcomp = buffer[i].getParticleRuntimeComps(); - const int delta_index = string_to_index_realcomp.at("deltaTimeScraped"); - const int normal_index = string_to_index_realcomp.at("nx"); + auto & buf = buffer[i]; + const int step_scraped_index = buf.GetIntCompIndex("stepScraped") - 
PinnedMemoryParticleContainer::NArrayInt; + const int delta_index = buf.GetRealCompIndex("deltaTimeScraped") - PinnedMemoryParticleContainer::NArrayReal; + const int normal_index = buf.GetRealCompIndex("nx") - PinnedMemoryParticleContainer::NArrayReal; const int step = warpx_instance.getistep(0); { diff --git a/Source/Particles/ParticleCreation/DefaultInitialization.H b/Source/Particles/ParticleCreation/DefaultInitialization.H index 88b23905481..1922c829379 100644 --- a/Source/Particles/ParticleCreation/DefaultInitialization.H +++ b/Source/Particles/ParticleCreation/DefaultInitialization.H @@ -102,8 +102,8 @@ namespace ParticleCreation { * These are NOT initialized by this function. * @param[in] user_real_attribs The names of the real components for this particle tile * @param[in] user_int_attribs The names of the int components for this particle tile - * @param[in] particle_comps map between particle component index and component name for real comps - * @param[in] particle_icomps map between particle component index and component name for int comps + * @param[in] particle_comps particle component names for real comps + * @param[in] particle_icomps particle component names for int comps * @param[in] user_real_attrib_parser the parser functions used to initialize the user real components * @param[in] user_int_attrib_parser the parser functions used to initialize the user int components * @param[in] do_qed_comps whether to initialize the qed components (these are usually handled by @@ -120,8 +120,8 @@ void DefaultInitializeRuntimeAttributes (PTile& ptile, const int n_external_attr_int, const std::vector& user_real_attribs, const std::vector& user_int_attribs, - const std::map& particle_comps, - const std::map& particle_icomps, + const std::vector& particle_comps, + const std::vector& particle_icomps, const std::vector& user_real_attrib_parser, const std::vector& user_int_attrib_parser, #ifdef WARPX_QED @@ -151,8 +151,9 @@ void DefaultInitializeRuntimeAttributes (PTile& ptile, auto attr_ptr = ptile.GetStructOfArrays().GetRealData(j).data(); #ifdef WARPX_QED // Current runtime comp is quantum synchrotron optical depth - if (particle_comps.find("opticalDepthQSR") != particle_comps.end() && - particle_comps.at("opticalDepthQSR") == j) + auto const it_qsr = std::find(particle_comps.begin(), particle_comps.end(), "opticalDepthQSR"); + if (it_qsr != particle_comps.end() && + std::distance(particle_comps.begin(), it_qsr) == j) { if (!do_qed_comps) { continue; } const QuantumSynchrotronGetOpticalDepth quantum_sync_get_opt = @@ -172,9 +173,10 @@ void DefaultInitializeRuntimeAttributes (PTile& ptile, } } - // Current runtime comp is Breit-Wheeler optical depth - if (particle_comps.find("opticalDepthBW") != particle_comps.end() && - particle_comps.at("opticalDepthBW") == j) + // Current runtime comp is Breit-Wheeler optical depth + auto const it_bw = std::find(particle_comps.begin(), particle_comps.end(), "opticalDepthBW"); + if (it_bw != particle_comps.end() && + std::distance(particle_comps.begin(), it_bw) == j) { if (!do_qed_comps) { continue; } const BreitWheelerGetOpticalDepth breit_wheeler_get_opt = @@ -198,8 +200,9 @@ void DefaultInitializeRuntimeAttributes (PTile& ptile, for (int ia = 0; ia < n_user_real_attribs; ++ia) { // Current runtime comp is ia-th user defined attribute - if (particle_comps.find(user_real_attribs[ia]) != particle_comps.end() && - particle_comps.at(user_real_attribs[ia]) == j) + auto const it_ura = std::find(particle_comps.begin(), particle_comps.end(), 
user_real_attribs[ia]); + if (it_ura != particle_comps.end() && + std::distance(particle_comps.begin(), it_ura) == j) { const amrex::ParserExecutor<7> user_real_attrib_parserexec = user_real_attrib_parser[ia]->compile<7>(); @@ -232,8 +235,9 @@ void DefaultInitializeRuntimeAttributes (PTile& ptile, auto attr_ptr = ptile.GetStructOfArrays().GetIntData(j).data(); // Current runtime comp is ionization level - if (particle_icomps.find("ionizationLevel") != particle_icomps.end() && - particle_icomps.at("ionizationLevel") == j) + auto const it_ioniz = std::find(particle_icomps.begin(), particle_icomps.end(), "ionizationLevel"); + if (it_ioniz != particle_icomps.end() && + std::distance(particle_icomps.begin(), it_ioniz) == j) { if constexpr (amrex::RunOnGpu>::value) { amrex::ParallelFor(stop - start, @@ -251,8 +255,9 @@ void DefaultInitializeRuntimeAttributes (PTile& ptile, for (int ia = 0; ia < n_user_int_attribs; ++ia) { // Current runtime comp is ia-th user defined attribute - if (particle_icomps.find(user_int_attribs[ia]) != particle_icomps.end() && - particle_icomps.at(user_int_attribs[ia]) == j) + auto const it_uia = std::find(particle_icomps.begin(), particle_icomps.end(), user_int_attribs[ia]); + if (it_uia != particle_icomps.end() && + std::distance(particle_icomps.begin(), it_uia) == j) { const amrex::ParserExecutor<7> user_int_attrib_parserexec = user_int_attrib_parser[ia]->compile<7>(); diff --git a/Source/Particles/ParticleCreation/FilterCopyTransform.H b/Source/Particles/ParticleCreation/FilterCopyTransform.H index c6ca69d5e89..c05038fae2f 100644 --- a/Source/Particles/ParticleCreation/FilterCopyTransform.H +++ b/Source/Particles/ParticleCreation/FilterCopyTransform.H @@ -88,7 +88,7 @@ Index filterCopyTransformParticles (DstPC& pc, DstTile& dst, SrcTile& src, ParticleCreation::DefaultInitializeRuntimeAttributes(dst, 0, 0, pc.getUserRealAttribs(), pc.getUserIntAttribs(), - pc.getParticleComps(), pc.getParticleiComps(), + pc.GetRealSoANames(), pc.GetIntSoANames(), pc.getUserRealAttribParser(), pc.getUserIntAttribParser(), #ifdef WARPX_QED @@ -258,7 +258,7 @@ Index filterCopyTransformParticles (DstPC& pc1, DstPC& pc2, DstTile& dst1, DstTi ParticleCreation::DefaultInitializeRuntimeAttributes(dst1, 0, 0, pc1.getUserRealAttribs(), pc1.getUserIntAttribs(), - pc1.getParticleComps(), pc1.getParticleiComps(), + pc1.GetRealSoANames(), pc1.GetIntSoANames(), pc1.getUserRealAttribParser(), pc1.getUserIntAttribParser(), #ifdef WARPX_QED @@ -272,7 +272,7 @@ Index filterCopyTransformParticles (DstPC& pc1, DstPC& pc2, DstTile& dst1, DstTi ParticleCreation::DefaultInitializeRuntimeAttributes(dst2, 0, 0, pc2.getUserRealAttribs(), pc2.getUserIntAttribs(), - pc2.getParticleComps(), pc2.getParticleiComps(), + pc2.GetRealSoANames(), pc2.GetIntSoANames(), pc2.getUserRealAttribParser(), pc2.getUserIntAttribParser(), #ifdef WARPX_QED diff --git a/Source/Particles/ParticleCreation/FilterCreateTransformFromFAB.H b/Source/Particles/ParticleCreation/FilterCreateTransformFromFAB.H index 424008e18a6..266faae6322 100644 --- a/Source/Particles/ParticleCreation/FilterCreateTransformFromFAB.H +++ b/Source/Particles/ParticleCreation/FilterCreateTransformFromFAB.H @@ -136,7 +136,7 @@ Index filterCreateTransformFromFAB (DstPC& pc1, DstPC& pc2, ParticleCreation::DefaultInitializeRuntimeAttributes(dst1, 0, 0, pc1.getUserRealAttribs(), pc1.getUserIntAttribs(), - pc1.getParticleComps(), pc1.getParticleiComps(), + pc1.GetRealSoANames(), pc1.GetIntSoANames(), pc1.getUserRealAttribParser(), pc1.getUserIntAttribParser(), #ifdef 
WARPX_QED @@ -150,7 +150,7 @@ Index filterCreateTransformFromFAB (DstPC& pc1, DstPC& pc2, ParticleCreation::DefaultInitializeRuntimeAttributes(dst2, 0, 0, pc2.getUserRealAttribs(), pc2.getUserIntAttribs(), - pc2.getParticleComps(), pc2.getParticleiComps(), + pc2.GetRealSoANames(), pc2.GetIntSoANames(), pc2.getUserRealAttribParser(), pc2.getUserIntAttribParser(), #ifdef WARPX_QED diff --git a/Source/Particles/ParticleCreation/SmartCopy.H b/Source/Particles/ParticleCreation/SmartCopy.H index e1d944e9c30..6be363e6337 100644 --- a/Source/Particles/ParticleCreation/SmartCopy.H +++ b/Source/Particles/ParticleCreation/SmartCopy.H @@ -140,10 +140,10 @@ class SmartCopyFactory public: template SmartCopyFactory (const SrcPC& src, const DstPC& dst) noexcept : - m_tag_real{getSmartCopyTag(src.getParticleComps(), dst.getParticleComps())}, - m_tag_int{getSmartCopyTag(src.getParticleiComps(), dst.getParticleiComps())}, - m_policy_real{getPolicies(dst.getParticleComps())}, - m_policy_int{getPolicies(dst.getParticleiComps())}, + m_tag_real{getSmartCopyTag(src.GetRealSoANames(), dst.GetRealSoANames())}, + m_tag_int{getSmartCopyTag(src.GetIntSoANames(), dst.GetIntSoANames())}, + m_policy_real{getPolicies(dst.GetRealSoANames())}, + m_policy_int{getPolicies(dst.GetIntSoANames())}, m_defined{true} {} diff --git a/Source/Particles/ParticleCreation/SmartCreate.H b/Source/Particles/ParticleCreation/SmartCreate.H index d93624b6433..688f1c3701f 100644 --- a/Source/Particles/ParticleCreation/SmartCreate.H +++ b/Source/Particles/ParticleCreation/SmartCreate.H @@ -97,8 +97,8 @@ class SmartCreateFactory public: template SmartCreateFactory (const PartTileData& part) noexcept: - m_policy_real{getPolicies(part.getParticleComps())}, - m_policy_int{getPolicies(part.getParticleiComps())}, + m_policy_real{getPolicies(part.GetRealSoANames())}, + m_policy_int{getPolicies(part.GetIntSoANames())}, m_defined{true} {} diff --git a/Source/Particles/ParticleCreation/SmartUtils.H b/Source/Particles/ParticleCreation/SmartUtils.H index 652a3aecd17..358c2b1a7a9 100644 --- a/Source/Particles/ParticleCreation/SmartUtils.H +++ b/Source/Particles/ParticleCreation/SmartUtils.H @@ -35,9 +35,9 @@ struct SmartCopyTag [[nodiscard]] int size () const noexcept { return static_cast(common_names.size()); } }; -PolicyVec getPolicies (const NameMap& names) noexcept; +PolicyVec getPolicies (std::vector const & names) noexcept; -SmartCopyTag getSmartCopyTag (const NameMap& src, const NameMap& dst) noexcept; +SmartCopyTag getSmartCopyTag (std::vector const & src, std::vector const & dst) noexcept; /** * \brief Sets the ids of newly created particles to the next values. 
diff --git a/Source/Particles/ParticleCreation/SmartUtils.cpp b/Source/Particles/ParticleCreation/SmartUtils.cpp index 7e79f58c59e..19e5bee8b97 100644 --- a/Source/Particles/ParticleCreation/SmartUtils.cpp +++ b/Source/Particles/ParticleCreation/SmartUtils.cpp @@ -13,8 +13,11 @@ #include #include -PolicyVec getPolicies (const NameMap& names) noexcept +PolicyVec getPolicies (std::vector const & names_vec) noexcept { + NameMap names; + for (auto i = 0u; i < names_vec.size(); ++i) { names.emplace(names_vec[i], i); } + std::vector h_policies; h_policies.resize(names.size()); for (const auto& kv : names) @@ -31,10 +34,16 @@ PolicyVec getPolicies (const NameMap& names) noexcept return policies; } -SmartCopyTag getSmartCopyTag (const NameMap& src, const NameMap& dst) noexcept +SmartCopyTag getSmartCopyTag (std::vector const & src_names, std::vector const & dst_names) noexcept { SmartCopyTag tag; + // We want to avoid running an NxM algorithm to find pairs, so sort the components first. + NameMap src; + NameMap dst; + for (auto i = 0u; i < src_names.size(); ++i) { src.emplace(src_names[i], i); } + for (auto i = 0u; i < dst_names.size(); ++i) { dst.emplace(dst_names[i], i); } + std::vector h_src_comps; std::vector h_dst_comps; diff --git a/Source/Particles/PhotonParticleContainer.cpp b/Source/Particles/PhotonParticleContainer.cpp index 47c426cd6ff..ad0b3364eea 100644 --- a/Source/Particles/PhotonParticleContainer.cpp +++ b/Source/Particles/PhotonParticleContainer.cpp @@ -122,7 +122,7 @@ PhotonParticleContainer::PushPX (WarpXParIter& pti, const bool local_has_breit_wheeler = has_breit_wheeler(); if (local_has_breit_wheeler) { evolve_opt = m_shr_p_bw_engine->build_evolve_functor(); - p_optical_depth_BW = pti.GetAttribs(particle_comps["opticalDepthBW"]).dataPtr() + offset; + p_optical_depth_BW = pti.GetAttribs("opticalDepthBW").dataPtr() + offset; } #endif diff --git a/Source/Particles/PhysicalParticleContainer.cpp b/Source/Particles/PhysicalParticleContainer.cpp index 9bf24e659e0..88c9a2273fd 100644 --- a/Source/Particles/PhysicalParticleContainer.cpp +++ b/Source/Particles/PhysicalParticleContainer.cpp @@ -342,12 +342,12 @@ PhysicalParticleContainer::PhysicalParticleContainer (AmrCore* amr_core, int isp #ifdef WARPX_QED pp_species_name.query("do_qed_quantum_sync", m_do_qed_quantum_sync); if (m_do_qed_quantum_sync) { - NewRealComp("opticalDepthQSR"); + AddRealComp("opticalDepthQSR"); } pp_species_name.query("do_qed_breit_wheeler", m_do_qed_breit_wheeler); if (m_do_qed_breit_wheeler) { - NewRealComp("opticalDepthBW"); + AddRealComp("opticalDepthBW"); } if(m_do_qed_quantum_sync){ @@ -368,7 +368,7 @@ PhysicalParticleContainer::PhysicalParticleContainer (AmrCore* amr_core, int isp str_int_attrib_function.at(i)); m_user_int_attrib_parser.at(i) = std::make_unique( utils::parser::makeParser(str_int_attrib_function.at(i),{"x","y","z","ux","uy","uz","t"})); - NewIntComp(m_user_int_attribs.at(i)); + AddIntComp(m_user_int_attribs.at(i)); } // User-defined real attributes @@ -383,19 +383,19 @@ PhysicalParticleContainer::PhysicalParticleContainer (AmrCore* amr_core, int isp str_real_attrib_function.at(i)); m_user_real_attrib_parser.at(i) = std::make_unique( utils::parser::makeParser(str_real_attrib_function.at(i),{"x","y","z","ux","uy","uz","t"})); - NewRealComp(m_user_real_attribs.at(i)); + AddRealComp(m_user_real_attribs.at(i)); } // If old particle positions should be saved add the needed components pp_species_name.query("save_previous_position", m_save_previous_position); if (m_save_previous_position) { #if 
(AMREX_SPACEDIM >= 2) - NewRealComp("prev_x"); + AddRealComp("prev_x"); #endif #if defined(WARPX_DIM_3D) - NewRealComp("prev_y"); + AddRealComp("prev_y"); #endif - NewRealComp("prev_z"); + AddRealComp("prev_z"); #ifdef WARPX_DIM_RZ amrex::Abort("Saving previous particle positions not yet implemented in RZ"); #endif @@ -813,7 +813,7 @@ PhysicalParticleContainer::DefaultInitializeRuntimeAttributes ( ParticleCreation::DefaultInitializeRuntimeAttributes(pinned_tile, n_external_attr_real, n_external_attr_int, m_user_real_attribs, m_user_int_attribs, - particle_comps, particle_icomps, + GetRealSoANames(), GetIntSoANames(), amrex::GetVecOfPtrs(m_user_real_attrib_parser), amrex::GetVecOfPtrs(m_user_int_attrib_parser), #ifdef WARPX_QED @@ -1086,7 +1086,7 @@ PhysicalParticleContainer::AddPlasma (PlasmaInjector const& plasma_injector, int } uint64_t * AMREX_RESTRICT pa_idcpu = soa.GetIdCPUData().data() + old_size; - PlasmaParserHelper plasma_parser_helper (soa, old_size, m_user_int_attribs, m_user_real_attribs, particle_icomps, particle_comps, plasma_parser_wrapper); + PlasmaParserHelper plasma_parser_helper(soa, old_size, m_user_int_attribs, m_user_real_attribs, plasma_parser_wrapper); int** pa_user_int_data = plasma_parser_helper.getUserIntDataPtrs(); ParticleReal** pa_user_real_data = plasma_parser_helper.getUserRealDataPtrs(); amrex::ParserExecutor<7> const* user_int_parserexec_data = plasma_parser_helper.getUserIntParserExecData(); @@ -1094,11 +1094,11 @@ PhysicalParticleContainer::AddPlasma (PlasmaInjector const& plasma_injector, int int* pi = nullptr; if (do_field_ionization) { - pi = soa.GetIntData(particle_icomps["ionizationLevel"]).data() + old_size; + pi = soa.GetIntData("ionizationLevel").data() + old_size; } #ifdef WARPX_QED - const QEDHelper qed_helper(soa, old_size, particle_comps, + const QEDHelper qed_helper(soa, old_size, has_quantum_sync(), has_breit_wheeler(), m_shr_p_qs_engine, m_shr_p_bw_engine); #endif @@ -1522,7 +1522,7 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector, } uint64_t * AMREX_RESTRICT pa_idcpu = soa.GetIdCPUData().data() + old_size; - PlasmaParserHelper plasma_parser_helper (soa, old_size, m_user_int_attribs, m_user_real_attribs, particle_icomps, particle_comps, plasma_parser_wrapper); + PlasmaParserHelper plasma_parser_helper(soa, old_size, m_user_int_attribs, m_user_real_attribs, plasma_parser_wrapper); int** pa_user_int_data = plasma_parser_helper.getUserIntDataPtrs(); ParticleReal** pa_user_real_data = plasma_parser_helper.getUserRealDataPtrs(); amrex::ParserExecutor<7> const* user_int_parserexec_data = plasma_parser_helper.getUserIntParserExecData(); @@ -1530,11 +1530,11 @@ PhysicalParticleContainer::AddPlasmaFlux (PlasmaInjector const& plasma_injector, int* p_ion_level = nullptr; if (do_field_ionization) { - p_ion_level = soa.GetIntData(particle_icomps["ionizationLevel"]).data() + old_size; + p_ion_level = soa.GetIntData("ionizationLevel").data() + old_size; } #ifdef WARPX_QED - const QEDHelper qed_helper(soa, old_size, particle_comps, + const QEDHelper qed_helper(soa, old_size, has_quantum_sync(), has_breit_wheeler(), m_shr_p_qs_engine, m_shr_p_bw_engine); #endif @@ -1922,7 +1922,7 @@ PhysicalParticleContainer::Evolve (ablastr::fields::MultiFabRegister& fields, // Deposit charge before particle push, in component 0 of MultiFab rho. const int* const AMREX_RESTRICT ion_lev = (do_field_ionization)? 
- pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr():nullptr; + pti.GetiAttribs("ionizationLevel").dataPtr():nullptr; amrex::MultiFab* rho = fields.get(FieldType::rho_fp, lev); DepositCharge(pti, wp, ion_lev, rho, 0, 0, @@ -2018,7 +2018,7 @@ PhysicalParticleContainer::Evolve (ablastr::fields::MultiFabRegister& fields, const amrex::Real relative_time = (push_type == PushType::Explicit ? -0.5_rt * dt : 0.0_rt); const int* const AMREX_RESTRICT ion_lev = (do_field_ionization)? - pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr():nullptr; + pti.GetiAttribs("ionizationLevel").dataPtr():nullptr; // Deposit inside domains amrex::MultiFab * jx = fields.get(current_fp_string, Direction{0}, lev); @@ -2050,7 +2050,7 @@ PhysicalParticleContainer::Evolve (ablastr::fields::MultiFabRegister& fields, "Cannot deposit charge in rho component 1: only component 0 is allocated!"); const int* const AMREX_RESTRICT ion_lev = (do_field_ionization)? - pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr():nullptr; + pti.GetiAttribs("ionizationLevel").dataPtr():nullptr; DepositCharge(pti, wp, ion_lev, rho, 1, 0, np_current, thread_num, lev, lev); @@ -2424,7 +2424,7 @@ PhysicalParticleContainer::PushP (int lev, Real dt, int* AMREX_RESTRICT ion_lev = nullptr; if (do_field_ionization) { - ion_lev = pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr(); + ion_lev = pti.GetiAttribs("ionizationLevel").dataPtr(); } // Loop over the particles and update their momentum @@ -2620,7 +2620,7 @@ PhysicalParticleContainer::PushPX (WarpXParIter& pti, int* AMREX_RESTRICT ion_lev = nullptr; if (do_field_ionization) { - ion_lev = pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr() + offset; + ion_lev = pti.GetiAttribs("ionizationLevel").dataPtr() + offset; } const bool save_previous_position = m_save_previous_position; @@ -2629,12 +2629,12 @@ PhysicalParticleContainer::PushPX (WarpXParIter& pti, ParticleReal* z_old = nullptr; if (save_previous_position) { #if (AMREX_SPACEDIM >= 2) - x_old = pti.GetAttribs(particle_comps["prev_x"]).dataPtr() + offset; + x_old = pti.GetAttribs("prev_x").dataPtr() + offset; #endif #if defined(WARPX_DIM_3D) - y_old = pti.GetAttribs(particle_comps["prev_y"]).dataPtr() + offset; + y_old = pti.GetAttribs("prev_y").dataPtr() + offset; #endif - z_old = pti.GetAttribs(particle_comps["prev_z"]).dataPtr() + offset; + z_old = pti.GetAttribs("prev_z").dataPtr() + offset; amrex::ignore_unused(x_old, y_old); } @@ -2654,7 +2654,7 @@ PhysicalParticleContainer::PushPX (WarpXParIter& pti, const bool local_has_quantum_sync = has_quantum_sync(); if (local_has_quantum_sync) { evolve_opt = m_shr_p_qs_engine->build_evolve_functor(); - p_optical_depth_QSR = pti.GetAttribs(particle_comps["opticalDepthQSR"]).dataPtr() + offset; + p_optical_depth_QSR = pti.GetAttribs("opticalDepthQSR").dataPtr() + offset; } #endif @@ -2859,15 +2859,15 @@ PhysicalParticleContainer::ImplicitPushXP (WarpXParIter& pti, ParticleReal* const AMREX_RESTRICT uz = attribs[PIdx::uz].dataPtr() + offset; #if (AMREX_SPACEDIM >= 2) - ParticleReal* x_n = pti.GetAttribs(particle_comps["x_n"]).dataPtr(); + ParticleReal* x_n = pti.GetAttribs("x_n").dataPtr(); #endif #if defined(WARPX_DIM_3D) || defined(WARPX_DIM_RZ) - ParticleReal* y_n = pti.GetAttribs(particle_comps["y_n"]).dataPtr(); + ParticleReal* y_n = pti.GetAttribs("y_n").dataPtr(); #endif - ParticleReal* z_n = pti.GetAttribs(particle_comps["z_n"]).dataPtr(); - ParticleReal* ux_n = pti.GetAttribs(particle_comps["ux_n"]).dataPtr(); - ParticleReal* uy_n = 
pti.GetAttribs(particle_comps["uy_n"]).dataPtr(); - ParticleReal* uz_n = pti.GetAttribs(particle_comps["uz_n"]).dataPtr(); + ParticleReal* z_n = pti.GetAttribs("z_n").dataPtr(); + ParticleReal* ux_n = pti.GetAttribs("ux_n").dataPtr(); + ParticleReal* uy_n = pti.GetAttribs("uy_n").dataPtr(); + ParticleReal* uz_n = pti.GetAttribs("uz_n").dataPtr(); const int do_copy = (m_do_back_transformed_particles && (a_dt_type!=DtType::SecondHalf) ); CopyParticleAttribs copyAttribs; @@ -2877,7 +2877,7 @@ PhysicalParticleContainer::ImplicitPushXP (WarpXParIter& pti, int* AMREX_RESTRICT ion_lev = nullptr; if (do_field_ionization) { - ion_lev = pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr() + offset; + ion_lev = pti.GetiAttribs("ionizationLevel").dataPtr() + offset; } // Loop over the particles and update their momentum @@ -2896,7 +2896,7 @@ PhysicalParticleContainer::ImplicitPushXP (WarpXParIter& pti, const bool local_has_quantum_sync = has_quantum_sync(); if (local_has_quantum_sync) { evolve_opt = m_shr_p_qs_engine->build_evolve_functor(); - p_optical_depth_QSR = pti.GetAttribs(particle_comps["opticalDepthQSR"]).dataPtr() + offset; + p_optical_depth_QSR = pti.GetAttribs("opticalDepthQSR").dataPtr() + offset; } #endif @@ -3110,7 +3110,7 @@ PhysicalParticleContainer::InitIonizationModule () physical_element == "H" || !do_adk_correction, "Correction to ADK by Zhang et al., PRA 90, 043410 (2014) only works with Hydrogen"); // Add runtime integer component for ionization level - NewIntComp("ionizationLevel"); + AddIntComp("ionizationLevel"); // Get atomic number and ionization energies from file const int ion_element_id = utils::physics::ion_map_ids.at(physical_element); ion_atomic_number = utils::physics::ion_atomic_numbers[ion_element_id]; @@ -3193,7 +3193,7 @@ PhysicalParticleContainer::getIonizationFunc (const WarpXParIter& pti, adk_exp_prefactor.dataPtr(), adk_power.dataPtr(), adk_correction_factors.dataPtr(), - particle_icomps["ionizationLevel"], + GetIntCompIndex("ionizationLevel"), ion_atomic_number, do_adk_correction}; } @@ -3299,14 +3299,14 @@ PhotonEmissionFilterFunc PhysicalParticleContainer::getPhotonEmissionFilterFunc () { WARPX_PROFILE("PhysicalParticleContainer::getPhotonEmissionFunc()"); - return PhotonEmissionFilterFunc{particle_runtime_comps["opticalDepthQSR"]}; + return PhotonEmissionFilterFunc{GetRealCompIndex("opticalDepthQSR") - NArrayReal}; } PairGenerationFilterFunc PhysicalParticleContainer::getPairGenerationFilterFunc () { WARPX_PROFILE("PhysicalParticleContainer::getPairGenerationFunc()"); - return PairGenerationFilterFunc{particle_runtime_comps["opticalDepthBW"]}; + return PairGenerationFilterFunc{GetRealCompIndex("opticalDepthBW") - NArrayReal}; } #endif diff --git a/Source/Particles/PinnedMemoryParticleContainer.H b/Source/Particles/PinnedMemoryParticleContainer.H index 402c621eb9a..b9fc4bbe79e 100644 --- a/Source/Particles/PinnedMemoryParticleContainer.H +++ b/Source/Particles/PinnedMemoryParticleContainer.H @@ -1,8 +1,8 @@ #ifndef WARPX_PinnedMemoryParticleContainer_H_ #define WARPX_PinnedMemoryParticleContainer_H_ -#include "NamedComponentParticleContainer.H" +#include "WarpXParticleContainer.H" -using PinnedMemoryParticleContainer = NamedComponentParticleContainer; +using PinnedMemoryParticleContainer = amrex::ParticleContainerPureSoA; #endif //WARPX_PinnedMemoryParticleContainer_H_ diff --git a/Source/Particles/Pusher/GetAndSetPosition.H b/Source/Particles/Pusher/GetAndSetPosition.H index ab06fe3d6cd..d2a223c57d8 100644 --- 
a/Source/Particles/Pusher/GetAndSetPosition.H +++ b/Source/Particles/Pusher/GetAndSetPosition.H @@ -9,7 +9,6 @@ #define WARPX_PARTICLES_PUSHER_GETANDSETPOSITION_H_ #include "Particles/WarpXParticleContainer.H" -#include "Particles/NamedComponentParticleContainer.H" #include #include diff --git a/Source/Particles/RigidInjectedParticleContainer.cpp b/Source/Particles/RigidInjectedParticleContainer.cpp index 5d8b0111825..420d7599ecb 100644 --- a/Source/Particles/RigidInjectedParticleContainer.cpp +++ b/Source/Particles/RigidInjectedParticleContainer.cpp @@ -345,7 +345,7 @@ RigidInjectedParticleContainer::PushP (int lev, Real dt, int* AMREX_RESTRICT ion_lev = nullptr; if (do_field_ionization) { - ion_lev = pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr(); + ion_lev = pti.GetiAttribs("ionizationLevel").dataPtr(); } // Save the position and momenta, making copies diff --git a/Source/Particles/WarpXParticleContainer.H b/Source/Particles/WarpXParticleContainer.H index 9c316b110ee..a4581d4415d 100644 --- a/Source/Particles/WarpXParticleContainer.H +++ b/Source/Particles/WarpXParticleContainer.H @@ -23,7 +23,6 @@ # include "ElementaryProcess/QEDInternals/QuantumSyncEngineWrapper_fwd.H" #endif #include "MultiParticleContainer_fwd.H" -#include "NamedComponentParticleContainer.H" #include @@ -49,6 +48,55 @@ #include #include +/** Real Particle Attributes stored in amrex::ParticleContainer's struct of array + */ +struct PIdx +{ + enum { +#if !defined (WARPX_DIM_1D_Z) + x, +#endif +#if defined (WARPX_DIM_3D) + y, +#endif + z, + w, ///< weight + ux, uy, uz, +#ifdef WARPX_DIM_RZ + theta, ///< RZ needs all three position components +#endif + nattribs ///< number of compile-time attributes + }; + + //! component names + static constexpr auto names = { +#if !defined (WARPX_DIM_1D_Z) + "x", +#endif +#if defined (WARPX_DIM_3D) + "y", +#endif + "z", + "w", + "ux", + "uy", + "uz", +#ifdef WARPX_DIM_RZ + "theta" +#endif + }; + + static_assert(names.size() == nattribs); +}; + +struct IntIdx { + enum + { + nattribs ///< the number of attributes above (always last) + }; + + static constexpr std::initializer_list names = {}; +}; class WarpXParIter : public amrex::ParIterSoA @@ -80,10 +128,35 @@ public: return GetStructOfArrays().GetRealData(comp); } + [[nodiscard]] const IntVector& GetiAttribs (int comp) const + { + return GetStructOfArrays().GetIntData(comp); + } + [[nodiscard]] IntVector& GetiAttribs (int comp) { return GetStructOfArrays().GetIntData(comp); } + + [[nodiscard]] const RealVector& GetAttribs (const std::string& name) const + { + return GetStructOfArrays().GetRealData(name); + } + + [[nodiscard]] RealVector& GetAttribs (const std::string& name) + { + return GetStructOfArrays().GetRealData(name); + } + + [[nodiscard]] const IntVector& GetiAttribs (const std::string& name) const + { + return GetStructOfArrays().GetIntData(name); + } + + [[nodiscard]] IntVector& GetiAttribs (const std::string& name) + { + return GetStructOfArrays().GetIntData(name); + } }; /** @@ -109,7 +182,7 @@ public: * derived classes, e.g., Evolve) or actual functions (e.g. CurrentDeposition). 
*/ class WarpXParticleContainer - : public NamedComponentParticleContainer + : public amrex::ParticleContainerPureSoA { public: friend MultiParticleContainer; diff --git a/Source/Particles/WarpXParticleContainer.cpp b/Source/Particles/WarpXParticleContainer.cpp index 21b76485907..8e91093d95b 100644 --- a/Source/Particles/WarpXParticleContainer.cpp +++ b/Source/Particles/WarpXParticleContainer.cpp @@ -89,10 +89,14 @@ WarpXParIter::WarpXParIter (ContainerType& pc, int level, MFItInfo& info) } WarpXParticleContainer::WarpXParticleContainer (AmrCore* amr_core, int ispecies) - : NamedComponentParticleContainer(amr_core->GetParGDB()) + : amrex::ParticleContainerPureSoA(amr_core->GetParGDB()) , species_id(ispecies) { SetParticleSize(); + SetSoACompileTimeNames( + {PIdx::names.begin(), PIdx::names.end()}, + {IntIdx::names.begin(), IntIdx::names.end()} + ); ReadParameters(); // Reading the external fields needs to be here since ReadParameters @@ -627,22 +631,22 @@ WarpXParticleContainer::DepositCurrent (WarpXParIter& pti, } else if (push_type == PushType::Implicit) { #if (AMREX_SPACEDIM >= 2) - auto& xp_n = pti.GetAttribs(particle_comps["x_n"]); + auto& xp_n = pti.GetAttribs("x_n"); const ParticleReal* xp_n_data = xp_n.dataPtr() + offset; #else const ParticleReal* xp_n_data = nullptr; #endif #if defined(WARPX_DIM_3D) || defined(WARPX_DIM_RZ) - auto& yp_n = pti.GetAttribs(particle_comps["y_n"]); + auto& yp_n = pti.GetAttribs("y_n"); const ParticleReal* yp_n_data = yp_n.dataPtr() + offset; #else const ParticleReal* yp_n_data = nullptr; #endif - auto& zp_n = pti.GetAttribs(particle_comps["z_n"]); + auto& zp_n = pti.GetAttribs("z_n"); const ParticleReal* zp_n_data = zp_n.dataPtr() + offset; - auto& uxp_n = pti.GetAttribs(particle_comps["ux_n"]); - auto& uyp_n = pti.GetAttribs(particle_comps["uy_n"]); - auto& uzp_n = pti.GetAttribs(particle_comps["uz_n"]); + auto& uxp_n = pti.GetAttribs("ux_n"); + auto& uyp_n = pti.GetAttribs("uy_n"); + auto& uzp_n = pti.GetAttribs("uz_n"); if (WarpX::nox == 1){ doChargeConservingDepositionShapeNImplicit<1>( xp_n_data, yp_n_data, zp_n_data, @@ -680,22 +684,22 @@ WarpXParticleContainer::DepositCurrent (WarpXParIter& pti, } else if (WarpX::current_deposition_algo == CurrentDepositionAlgo::Villasenor) { if (push_type == PushType::Implicit) { #if (AMREX_SPACEDIM >= 2) - auto& xp_n = pti.GetAttribs(particle_comps["x_n"]); + auto& xp_n = pti.GetAttribs("x_n"); const ParticleReal* xp_n_data = xp_n.dataPtr() + offset; #else const ParticleReal* xp_n_data = nullptr; #endif #if defined(WARPX_DIM_3D) || defined(WARPX_DIM_RZ) - auto& yp_n = pti.GetAttribs(particle_comps["y_n"]); + auto& yp_n = pti.GetAttribs("y_n"); const ParticleReal* yp_n_data = yp_n.dataPtr() + offset; #else const ParticleReal* yp_n_data = nullptr; #endif - auto& zp_n = pti.GetAttribs(particle_comps["z_n"]); + auto& zp_n = pti.GetAttribs("z_n"); const ParticleReal* zp_n_data = zp_n.dataPtr() + offset; - auto& uxp_n = pti.GetAttribs(particle_comps["ux_n"]); - auto& uyp_n = pti.GetAttribs(particle_comps["uy_n"]); - auto& uzp_n = pti.GetAttribs(particle_comps["uz_n"]); + auto& uxp_n = pti.GetAttribs("ux_n"); + auto& uyp_n = pti.GetAttribs("uy_n"); + auto& uzp_n = pti.GetAttribs("uz_n"); if (WarpX::nox == 1){ doVillasenorDepositionShapeNImplicit<1>( xp_n_data, yp_n_data, zp_n_data, @@ -790,9 +794,9 @@ WarpXParticleContainer::DepositCurrent (WarpXParIter& pti, xyzmin, lo, q, WarpX::n_rz_azimuthal_modes); } } else if (push_type == PushType::Implicit) { - auto& uxp_n = pti.GetAttribs(particle_comps["ux_n"]); - auto& 
uyp_n = pti.GetAttribs(particle_comps["uy_n"]); - auto& uzp_n = pti.GetAttribs(particle_comps["uz_n"]); + auto& uxp_n = pti.GetAttribs("ux_n"); + auto& uyp_n = pti.GetAttribs("uy_n"); + auto& uzp_n = pti.GetAttribs("uz_n"); if (WarpX::nox == 1){ doDepositionShapeNImplicit<1>( GetPosition, wp.dataPtr() + offset, @@ -869,7 +873,7 @@ WarpXParticleContainer::DepositCurrent ( int* AMREX_RESTRICT ion_lev = nullptr; if (do_field_ionization) { - ion_lev = pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr(); + ion_lev = pti.GetiAttribs("ionizationLevel").dataPtr(); } DepositCurrent(pti, wp, uxp, uyp, uzp, ion_lev, @@ -1262,7 +1266,7 @@ WarpXParticleContainer::DepositCharge (amrex::MultiFab* rho, int* AMREX_RESTRICT ion_lev = nullptr; if (do_field_ionization) { - ion_lev = pti.GetiAttribs(particle_icomps["ionizationLevel"]).dataPtr(); + ion_lev = pti.GetiAttribs("ionizationLevel").dataPtr(); } DepositCharge(pti, wp, ion_lev, rho, icomp, 0, np, thread_num, lev, lev); @@ -1546,8 +1550,16 @@ WarpXParticleContainer::PushX (int lev, amrex::Real dt) // without runtime component). void WarpXParticleContainer::defineAllParticleTiles () noexcept { - // Call the parent class's method - NamedComponentParticleContainer::defineAllParticleTiles(); + for (int lev = 0; lev <= finestLevel(); ++lev) + { + for (auto mfi = MakeMFIter(lev); mfi.isValid(); ++mfi) + { + const int grid_id = mfi.index(); + const int tile_id = mfi.LocalTileIndex(); + DefineAndReturnParticleTile(lev, grid_id, tile_id); + } + } + // Resize the tmp_particle_data (not present in parent class) tmp_particle_data.resize(finestLevel()+1); @@ -1570,7 +1582,7 @@ WarpXParticleContainer::particlePostLocate(ParticleType& p, { if (not do_splitting) { return; } - // Tag particle if goes to higher level. + // Tag particle if it goes to a higher level.
// It will be split later in the loop if (pld.m_lev == lev+1 and p.id() != amrex::LongParticleIds::NoSplitParticleID diff --git a/Source/Python/Particles/CMakeLists.txt b/Source/Python/Particles/CMakeLists.txt index eed1bb07c74..6b7754fdf2d 100644 --- a/Source/Python/Particles/CMakeLists.txt +++ b/Source/Python/Particles/CMakeLists.txt @@ -10,7 +10,6 @@ foreach(D IN LISTS WarpX_DIMS) # pybind11 ParticleBoundaryBuffer.cpp MultiParticleContainer.cpp - PinnedMemoryParticleContainer.cpp WarpXParticleContainer.cpp ) endif() diff --git a/Source/Python/Particles/PinnedMemoryParticleContainer.cpp b/Source/Python/Particles/PinnedMemoryParticleContainer.cpp deleted file mode 100644 index 21dd6a9d364..00000000000 --- a/Source/Python/Particles/PinnedMemoryParticleContainer.cpp +++ /dev/null @@ -1,31 +0,0 @@ -/* Copyright 2021-2023 The WarpX Community - * - * Authors: Axel Huebl, Remi Lehe, Roelof Groenewald - * License: BSD-3-Clause-LBNL - */ - -#include "Python/pyWarpX.H" - -#include - - -void init_PinnedMemoryParticleContainer (py::module& m) -{ - py::class_< - PinnedMemoryParticleContainer, - amrex::ParticleContainerPureSoA - > pmpc (m, "PinnedMemoryParticleContainer"); - pmpc - .def_property_readonly("real_comp_names", - [](PinnedMemoryParticleContainer& pc) - { - return pc.getParticleComps(); - } - ) - .def_property_readonly("int_comp_names", - [](PinnedMemoryParticleContainer& pc) - { - return pc.getParticleiComps(); - } - ); -} diff --git a/Source/Python/Particles/WarpXParticleContainer.cpp b/Source/Python/Particles/WarpXParticleContainer.cpp index 7bf02aab62b..73e0a8b0db0 100644 --- a/Source/Python/Particles/WarpXParticleContainer.cpp +++ b/Source/Python/Particles/WarpXParticleContainer.cpp @@ -30,7 +30,7 @@ void init_WarpXParticleContainer (py::module& m) > wpc (m, "WarpXParticleContainer"); wpc .def("add_real_comp", - [](WarpXParticleContainer& pc, const std::string& name, bool comm) { pc.NewRealComp(name, comm); }, + [](WarpXParticleContainer& pc, const std::string& name, bool comm) { pc.AddRealComp(name, comm); }, py::arg("name"), py::arg("comm") ) .def("add_n_particles", @@ -85,19 +85,19 @@ void init_WarpXParticleContainer (py::module& m) py::arg("nattr_int"), py::arg("attr_int"), py::arg("uniqueparticles"), py::arg("id")=-1 ) - .def("get_comp_index", + .def("get_comp_index", // deprecated: use pyAMReX get_real_comp_index [](WarpXParticleContainer& pc, std::string comp_name) { - auto particle_comps = pc.getParticleComps(); - return particle_comps.at(comp_name); + py::print("get_comp_index is deprecated. Use get_real_comp_index instead."); + return pc.GetRealCompIndex(comp_name); }, py::arg("comp_name") ) - .def("get_icomp_index", + .def("get_icomp_index", // deprecated: use pyAMReX get_int_comp_index [](WarpXParticleContainer& pc, std::string comp_name) { - auto particle_comps = pc.getParticleiComps(); - return particle_comps.at(comp_name); + py::print("get_icomp_index is deprecated. 
Use get_int_comp_index instead."); + return pc.GetIntCompIndex(comp_name); }, py::arg("comp_name") ) diff --git a/Source/Python/pyWarpX.cpp b/Source/Python/pyWarpX.cpp index e128599abd0..45c4b48614b 100644 --- a/Source/Python/pyWarpX.cpp +++ b/Source/Python/pyWarpX.cpp @@ -34,7 +34,6 @@ void init_BoundaryBufferParIter (py::module&); void init_MultiParticleContainer (py::module&); void init_MultiFabRegister (py::module&); void init_ParticleBoundaryBuffer (py::module&); -void init_PinnedMemoryParticleContainer (py::module&); void init_WarpXParIter (py::module&); void init_WarpXParticleContainer (py::module&); void init_WarpX(py::module&); @@ -61,7 +60,6 @@ PYBIND11_MODULE(PYWARPX_MODULE_NAME, m) { // note: order from parent to child classes init_MultiFabRegister(m); - init_PinnedMemoryParticleContainer(m); init_WarpXParticleContainer(m); init_WarpXParIter(m); init_BoundaryBufferParIter(m); From a995f77c60c8e2fc61d21ecd0ad897e28f7c720d Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Mon, 10 Feb 2025 16:12:30 -0800 Subject: [PATCH 31/58] Doc: New APL on Magnetic Reconnection (#5646) **Magnetic Reconnection: An Alternative Explanation of Radio Emission in Galaxy Clusters** by Subham Ghosh and Pallavi Bhat was just published. https://10.3847/2041-8213/ad9f2d --------- Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> --- Docs/source/highlights.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/Docs/source/highlights.rst b/Docs/source/highlights.rst index 2e8eeffbef2..53a176f35f2 100644 --- a/Docs/source/highlights.rst +++ b/Docs/source/highlights.rst @@ -154,6 +154,11 @@ High Energy Astrophysical Plasma Physics Scientific works in astrophysical plasma modeling. +#. Ghosh S, Bhat P. + **Magnetic Reconnection: An Alternative Explanation of Radio Emission in Galaxy Clusters**. + The Astrophysical Journal Letters **979** 1, 2025. + `DOI:10.3847/2041-8213/ad9f2d `__ + #. Klion H, Jambunathan R, Rowan ME, Yang E, Willcox D, Vay J-L, Lehe R, Myers A, Huebl A, Zhang W. **Particle-in-Cell simulations of relativistic magnetic reconnection with advanced Maxwell solver algorithms**. The Astrophysical Journal **952** 8, 2023. From ee15a972438c6e1ea8ec236f8e289ec6ca248415 Mon Sep 17 00:00:00 2001 From: Marco Garten Date: Mon, 10 Feb 2025 16:12:58 -0800 Subject: [PATCH 32/58] Update highlights for Ma et al. PRAB oblique laser in RZ (#5653) Updated highlights in WarpX docs for 20205 PRAB article describing how to simulate oblique laser pulses in quasicylindrical geometry using WarpX. --- Docs/source/highlights.rst | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/Docs/source/highlights.rst b/Docs/source/highlights.rst index 53a176f35f2..c7baca48f76 100644 --- a/Docs/source/highlights.rst +++ b/Docs/source/highlights.rst @@ -14,7 +14,12 @@ Plasma-Based Acceleration Scientific works in laser-plasma and beam-plasma acceleration. -#. Shrock JE, Rockafellow E, Miao B, Le M, Hollinger RC, Wang S, Gonsalves AJ, Picksley A, Rocca JJ, and Milchberg HM +#. Ma M, Zeng M, Wang J, Lu G, Yan W, Chen L, and Li D. + **Particle-in-cell simulation of laser wakefield accelerators with oblique lasers in quasicylindrical geometry**. + Phys. Rev. Accel. Beams **28**, 021301, 2025 + `DOI:10.1103/PhysRevAccelBeams.28.021301 `__ + +#. Shrock JE, Rockafellow E, Miao B, Le M, Hollinger RC, Wang S, Gonsalves AJ, Picksley A, Rocca JJ, and Milchberg HM. **Guided Mode Evolution and Ionization Injection in Meter-Scale Multi-GeV Laser Wakefield Accelerators**. Phys. Rev. Lett. 
**133**, 045002, 2024 `DOI:10.1103/PhysRevLett.133.045002 `__ From e0421a1cebbd69be6593145cd1e713000ec5ae46 Mon Sep 17 00:00:00 2001 From: Revathi Jambunathan <41089244+RevathiJambunathan@users.noreply.github.com> Date: Mon, 10 Feb 2025 16:13:49 -0800 Subject: [PATCH 33/58] Doc: MR paper highlight (#5651) Co-authored-by: Remi Lehe --- Docs/source/highlights.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/Docs/source/highlights.rst b/Docs/source/highlights.rst index c7baca48f76..b40ed16e945 100644 --- a/Docs/source/highlights.rst +++ b/Docs/source/highlights.rst @@ -159,6 +159,11 @@ High Energy Astrophysical Plasma Physics Scientific works in astrophysical plasma modeling. +#. Jambunathan R, Jones H, Corrales L, Klion H, Rowan ME, Myers A, Zhang W, Vay J-L. + **Application of mesh refinement to relativistic magnetic reconnection**. + Physics of Plasmas **32** 1, 2025 + `DOI:10.1063/5.0233583 `__ + #. Ghosh S, Bhat P. **Magnetic Reconnection: An Alternative Explanation of Radio Emission in Galaxy Clusters**. The Astrophysical Journal Letters **979** 1, 2025. From 8eab0c9c227a4b7f0cd0f1fde2a2246c6b5f03c5 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Tue, 11 Feb 2025 01:31:56 +0100 Subject: [PATCH 34/58] Move Fornberg coefficients calculations from WarpX to ablastr (#5619) The calculation of Fornberg stencil coefficients is rather general, and it can be shared with other projects of the BLAST family. Therefore, this PR moves the responsible functions into `ablastr`. Specifically, the PR does the following: - 2 new files (`FiniteDifference.H` and `FiniteDifference.cpp`) are created under `ablastr/math` (`CMakeLists.txt` and `Make.package` are updated accordingly) - the static method of the WarpX class `getFornbergStencilCoefficients` and the `ReorderFornbergCoefficients` function (originally defined in an anonymous namespace in `WarpX.cpp`) are moved to these new files, inside the namespace `ablastr::math` - the two methods are minimally adapted (e.g., `AMREX_ALWAYS_ASSERT_WITH_MESSAGE` becomes `ABLASTR_ALWAYS_ASSERT_WITH_MESSAGE`) - `WarpX.cpp` and `SpectralKSpace.cpp` (where the aforementioned functions were called) are updated Note that with this PR `SpectralKSpace.cpp` no longer needs to include the heavy `WarpX.H` header.
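To make the relocated interface concrete, here is a minimal usage sketch (illustration only, not part of this patch: the helper name `MakeCenteringStencil` and the exact include paths are assumptions; the two `ablastr::math` calls and their signatures come from the files added below):

```cpp
#include <ablastr/math/FiniteDifference.H>  // assumed include path for the new header
#include <ablastr/utils/Enums.H>

#include <AMReX_REAL.H>
#include <AMReX_Vector.H>

// Hypothetical helper: build the symmetric, re-ordered stencil used for
// finite-order centering along one axis.
amrex::Vector<amrex::Real>
MakeCenteringStencil (int n_order, ablastr::utils::enums::GridType grid_type)
{
    // Half-stencil Fornberg coefficients (c_0, ..., c_{m-1} with m = n_order/2),
    // computed by recurrence to avoid overflow of the closed-form formulas.
    const auto half_stencil =
        ablastr::math::getFornbergStencilCoefficients(n_order, grid_type);

    // Re-order to the symmetric layout, e.g. for order 6:
    // (c_0,c_1,c_2) -> (c_2,c_1,c_0,c_0,c_1,c_2).
    amrex::Vector<amrex::Real> ordered(n_order);
    ablastr::math::ReorderFornbergCoefficients(ordered, half_stencil, n_order);
    return ordered;
}
```

The re-ordered layout is the same one built per axis by `AllocateCenteringCoefficients` in the `WarpX.cpp` hunk below.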
--- .../SpectralSolver/SpectralKSpace.cpp | 11 ++- Source/WarpX.H | 10 --- Source/WarpX.cpp | 79 ++----------------- Source/ablastr/math/CMakeLists.txt | 8 ++ Source/ablastr/math/FiniteDifference.H | 44 +++++++++++ Source/ablastr/math/FiniteDifference.cpp | 77 ++++++++++++++++++ Source/ablastr/math/Make.package | 6 +- 7 files changed, 147 insertions(+), 88 deletions(-) create mode 100644 Source/ablastr/math/FiniteDifference.H create mode 100644 Source/ablastr/math/FiniteDifference.cpp diff --git a/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp b/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp index 94bd384f265..adf7fff775d 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp +++ b/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp @@ -7,10 +7,12 @@ */ #include "SpectralKSpace.H" -#include "WarpX.H" #include "Utils/TextMsg.H" #include "Utils/WarpXConst.H" +#include +#include + #include #include #include @@ -211,7 +213,8 @@ SpectralKSpace::getModifiedKComponent (const DistributionMapping& dm, } else { // Compute real-space stencil coefficients - Vector h_stencil_coef = WarpX::getFornbergStencilCoefficients(n_order, grid_type); + Vector h_stencil_coef = + ablastr::math::getFornbergStencilCoefficients(n_order, grid_type); Gpu::DeviceVector d_stencil_coef(h_stencil_coef.size()); Gpu::copyAsync(Gpu::hostToDevice, h_stencil_coef.begin(), h_stencil_coef.end(), d_stencil_coef.begin()); @@ -237,7 +240,7 @@ SpectralKSpace::getModifiedKComponent (const DistributionMapping& dm, { p_modified_k[i] = 0; for (int n=0; n getFornbergStencilCoefficients (int n_order, ablastr::utils::enums::GridType a_grid_type); - // Device vectors of stencil coefficients used for finite-order centering of fields amrex::Gpu::DeviceVector device_field_centering_stencil_coeffs_x; amrex::Gpu::DeviceVector device_field_centering_stencil_coeffs_y; diff --git a/Source/WarpX.cpp b/Source/WarpX.cpp index a1eac8d6080..128e22e2fe3 100644 --- a/Source/WarpX.cpp +++ b/Source/WarpX.cpp @@ -50,6 +50,7 @@ #include "FieldSolver/ImplicitSolvers/ImplicitSolverLibrary.H" +#include #include #include @@ -199,29 +200,6 @@ namespace std::any_of(field_boundary_hi.begin(), field_boundary_hi.end(), is_pml); return is_any_pml; } - - /** - * \brief Re-orders the Fornberg coefficients so that they can be used more conveniently for - * finite-order centering operations. For example, for finite-order centering of order 6, - * the Fornberg coefficients \c (c_0,c_1,c_2) are re-ordered as \c (c_2,c_1,c_0,c_0,c_1,c_2). 
- * - * \param[in,out] ordered_coeffs host vector where the re-ordered Fornberg coefficients will be stored - * \param[in] unordered_coeffs host vector storing the original sequence of Fornberg coefficients - * \param[in] order order of the finite-order centering along a given direction - */ - void ReorderFornbergCoefficients ( - amrex::Vector& ordered_coeffs, - const amrex::Vector& unordered_coeffs, - const int order) - { - const int n = order / 2; - for (int i = 0; i < n; i++) { - ordered_coeffs[i] = unordered_coeffs[n-1-i]; - } - for (int i = n; i < order; i++) { - ordered_coeffs[i] = unordered_coeffs[i-n]; - } - } } void WarpX::MakeWarpX () @@ -3196,49 +3174,6 @@ WarpX::BuildBufferMasksInBox ( const amrex::Box tbx, amrex::IArrayBox &buffer_ma }); } -amrex::Vector WarpX::getFornbergStencilCoefficients (const int n_order, ablastr::utils::enums::GridType a_grid_type) -{ - AMREX_ALWAYS_ASSERT_WITH_MESSAGE(n_order % 2 == 0, "n_order must be even"); - - const int m = n_order / 2; - amrex::Vector coeffs; - coeffs.resize(m); - - // There are closed-form formula for these coefficients, but they result in - // an overflow when evaluated numerically. One way to avoid the overflow is - // to calculate the coefficients by recurrence. - - // Coefficients for collocated (nodal) finite-difference approximation - if (a_grid_type == GridType::Collocated) - { - // First coefficient - coeffs.at(0) = m * 2._rt / (m+1); - // Other coefficients by recurrence - for (int n = 1; n < m; n++) - { - coeffs.at(n) = - (m-n) * 1._rt / (m+n+1) * coeffs.at(n-1); - } - } - // Coefficients for staggered finite-difference approximation - else - { - Real prod = 1.; - for (int k = 1; k < m+1; k++) - { - prod *= (m + k) / (4._rt * k); - } - // First coefficient - coeffs.at(0) = 4_rt * m * prod * prod; - // Other coefficients by recurrence - for (int n = 1; n < m; n++) - { - coeffs.at(n) = - ((2_rt*n-1) * (m-n)) * 1._rt / ((2_rt*n+1) * (m+n)) * coeffs.at(n-1); - } - } - - return coeffs; -} - void WarpX::AllocateCenteringCoefficients (amrex::Gpu::DeviceVector& device_centering_stencil_coeffs_x, amrex::Gpu::DeviceVector& device_centering_stencil_coeffs_y, amrex::Gpu::DeviceVector& device_centering_stencil_coeffs_z, @@ -3257,9 +3192,9 @@ void WarpX::AllocateCenteringCoefficients (amrex::Gpu::DeviceVector amrex::Vector host_centering_stencil_coeffs_y; amrex::Vector host_centering_stencil_coeffs_z; - Fornberg_stencil_coeffs_x = getFornbergStencilCoefficients(centering_nox, a_grid_type); - Fornberg_stencil_coeffs_y = getFornbergStencilCoefficients(centering_noy, a_grid_type); - Fornberg_stencil_coeffs_z = getFornbergStencilCoefficients(centering_noz, a_grid_type); + Fornberg_stencil_coeffs_x = ablastr::math::getFornbergStencilCoefficients(centering_nox, a_grid_type); + Fornberg_stencil_coeffs_y = ablastr::math::getFornbergStencilCoefficients(centering_noy, a_grid_type); + Fornberg_stencil_coeffs_z = ablastr::math::getFornbergStencilCoefficients(centering_noz, a_grid_type); host_centering_stencil_coeffs_x.resize(centering_nox); host_centering_stencil_coeffs_y.resize(centering_noy); @@ -3267,17 +3202,17 @@ void WarpX::AllocateCenteringCoefficients (amrex::Gpu::DeviceVector // Re-order Fornberg stencil coefficients: // example for order 6: (c_0,c_1,c_2) becomes (c_2,c_1,c_0,c_0,c_1,c_2) - ::ReorderFornbergCoefficients( + ablastr::math::ReorderFornbergCoefficients( host_centering_stencil_coeffs_x, Fornberg_stencil_coeffs_x, centering_nox ); - ::ReorderFornbergCoefficients( + ablastr::math::ReorderFornbergCoefficients( 
host_centering_stencil_coeffs_y, Fornberg_stencil_coeffs_y, centering_noy ); - ::ReorderFornbergCoefficients( + ablastr::math::ReorderFornbergCoefficients( host_centering_stencil_coeffs_z, Fornberg_stencil_coeffs_z, centering_noz diff --git a/Source/ablastr/math/CMakeLists.txt b/Source/ablastr/math/CMakeLists.txt index 9093da83ae1..0ad3fe80b87 100644 --- a/Source/ablastr/math/CMakeLists.txt +++ b/Source/ablastr/math/CMakeLists.txt @@ -1 +1,9 @@ +foreach(D IN LISTS WarpX_DIMS) + warpx_set_suffix_dims(SD ${D}) + target_sources(ablastr_${SD} + PRIVATE + FiniteDifference.cpp + ) +endforeach() + add_subdirectory(fft) diff --git a/Source/ablastr/math/FiniteDifference.H b/Source/ablastr/math/FiniteDifference.H new file mode 100644 index 00000000000..8761318eb81 --- /dev/null +++ b/Source/ablastr/math/FiniteDifference.H @@ -0,0 +1,44 @@ +/* Copyright 2021-2025 Edoardo Zoni, Luca Fedeli + * + * This file is part of WarpX. + * + * License: BSD-3-Clause-LBNL + */ +#ifndef ABLASTR_MATH_FINITE_DIFFERENCE_H_ +#define ABLASTR_MATH_FINITE_DIFFERENCE_H_ + +#include "ablastr/utils/Enums.H" + +#include +#include + +namespace ablastr::math +{ + /** + * \brief Returns an array of coefficients (Fornberg coefficients), corresponding + * to the weight of each point in a finite-difference approximation of a derivative + * (up to order \c n_order). + * + * \param[in] n_order order of the finite-difference approximation + * \param[in] a_grid_type type of grid (collocated or not) + */ + [[nodiscard]] amrex::Vector + getFornbergStencilCoefficients ( + int n_order, ablastr::utils::enums::GridType a_grid_type); + + /** + * \brief Re-orders the Fornberg coefficients so that they can be used more conveniently for + * finite-order centering operations. For example, for finite-order centering of order 6, + * the Fornberg coefficients \c (c_0,c_1,c_2) are re-ordered as \c (c_2,c_1,c_0,c_0,c_1,c_2). + * + * \param[in,out] ordered_coeffs host vector where the re-ordered Fornberg coefficients will be stored + * \param[in] unordered_coeffs host vector storing the original sequence of Fornberg coefficients + * \param[in] order order of the finite-order centering along a given direction + */ + void + ReorderFornbergCoefficients ( + amrex::Vector& ordered_coeffs, + const amrex::Vector& unordered_coeffs, int order); +} + +#endif //ABLASTR_MATH_FINITE_DIFFERENCE_H_ diff --git a/Source/ablastr/math/FiniteDifference.cpp b/Source/ablastr/math/FiniteDifference.cpp new file mode 100644 index 00000000000..85d0b332131 --- /dev/null +++ b/Source/ablastr/math/FiniteDifference.cpp @@ -0,0 +1,77 @@ +/* Copyright 2021-2025 Edoardo Zoni, Luca Fedeli + * + * This file is part of WarpX. + * + * License: BSD-3-Clause-LBNL + */ + +#include "FiniteDifference.H" + +#include "ablastr/utils/TextMsg.H" + +using namespace ablastr::utils::enums; +using namespace amrex; + +namespace ablastr::math +{ + + amrex::Vector + getFornbergStencilCoefficients (const int n_order, GridType a_grid_type) + { + ABLASTR_ALWAYS_ASSERT_WITH_MESSAGE(n_order % 2 == 0, "n_order must be even"); + + const int m = n_order / 2; + amrex::Vector coeffs; + coeffs.resize(m); + + // There are closed-form formula for these coefficients, but they result in + // an overflow when evaluated numerically. One way to avoid the overflow is + // to calculate the coefficients by recurrence. 
+ + // Coefficients for collocated (nodal) finite-difference approximation + if (a_grid_type == GridType::Collocated) + { + // First coefficient + coeffs.at(0) = m * 2._rt / (m+1); + // Other coefficients by recurrence + for (int n = 1; n < m; n++) + { + coeffs.at(n) = - (m-n) * 1._rt / (m+n+1) * coeffs.at(n-1); + } + } + // Coefficients for staggered finite-difference approximation + else + { + amrex::Real prod = 1.; + for (int k = 1; k < m+1; k++) + { + prod *= (m + k) / (4._rt * k); + } + // First coefficient + coeffs.at(0) = 4_rt * m * prod * prod; + // Other coefficients by recurrence + for (int n = 1; n < m; n++) + { + coeffs.at(n) = - ((2_rt*n-1) * (m-n)) * 1._rt / ((2_rt*n+1) * (m+n)) * coeffs.at(n-1); + } + } + + return coeffs; + } + + void + ReorderFornbergCoefficients ( + amrex::Vector& ordered_coeffs, + const amrex::Vector& unordered_coeffs, + const int order) + { + const int n = order / 2; + for (int i = 0; i < n; i++) { + ordered_coeffs[i] = unordered_coeffs[n-1-i]; + } + for (int i = n; i < order; i++) { + ordered_coeffs[i] = unordered_coeffs[i-n]; + } + } + +} diff --git a/Source/ablastr/math/Make.package b/Source/ablastr/math/Make.package index a0e95b11225..5e3fd22dc81 100644 --- a/Source/ablastr/math/Make.package +++ b/Source/ablastr/math/Make.package @@ -1,3 +1,5 @@ -include $(WARPX_HOME)/Source/ablastr/math/fft/Make.package +CEXE_sources += FiniteDifference.cpp + +VPATH_LOCATIONS += $(WARPX_HOME)/Source/ablastr/math -VPATH_LOCATIONS += $(WARPX_HOME)/Source/ablastr +include $(WARPX_HOME)/Source/ablastr/math/fft/Make.package From 5f32399069f2acc3c528983e237b87537844e4e2 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Tue, 11 Feb 2025 02:21:49 +0000 Subject: [PATCH 35/58] [pre-commit.ci] pre-commit autoupdate (#5652) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit updates: - [github.com/astral-sh/ruff-pre-commit: v0.9.4 → v0.9.6](https://github.com/astral-sh/ruff-pre-commit/compare/v0.9.4...v0.9.6) Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> --- .pre-commit-config.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 577f0ffc1f0..e113fa4c8e5 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -69,7 +69,7 @@ repos: # Python: Ruff linter & formatter # https://docs.astral.sh/ruff/ - repo: https://github.com/astral-sh/ruff-pre-commit - rev: v0.9.4 + rev: v0.9.6 hooks: # Run the linter - id: ruff From 879caeca10d6105d515cb45be604ed035de71ae1 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Tue, 11 Feb 2025 05:56:59 +0100 Subject: [PATCH 36/58] WarpX class : em_solver_medium no longer a static variable (#5642) This PR turns the static variable `em_solver_medium` of the WarpX class into a private non-static member variable : `m_em_solver_medium` . This is done with the aim of reducing the usage of static variables in WarpX. 
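For illustration, a minimal standalone sketch of the static-to-member pattern applied here (toy `Solver` class and names, not the actual WarpX code):

```
#include <iostream>

enum class MediumForEM { Vacuum, Macroscopic };

// Before: a static inline member is a single process-wide value that every
// instance shares, which couples instances together and complicates testing.
// struct Solver { static inline auto em_solver_medium = MediumForEM::Vacuum; };

// After: a private non-static member, owned and read per instance.
class Solver
{
public:
    explicit Solver (MediumForEM medium) : m_em_solver_medium{medium} {}

    [[nodiscard]] bool is_macroscopic () const
    {
        return m_em_solver_medium == MediumForEM::Macroscopic;
    }

private:
    MediumForEM m_em_solver_medium = MediumForEM::Vacuum;
};

int main ()
{
    const Solver vacuum_solver{MediumForEM::Vacuum};
    const Solver macro_solver{MediumForEM::Macroscopic};
    std::cout << vacuum_solver.is_macroscopic() << " "
              << macro_solver.is_macroscopic() << "\n"; // prints: 0 1
    return 0;
}
```

With the member variable, two solver instances can be configured differently, which was impossible with the process-wide static.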
--- Source/Evolve/WarpXEvolve.cpp | 4 ++-- Source/Initialization/WarpXInitData.cpp | 10 +++++----- Source/Utils/WarpXMovingWindow.cpp | 2 +- Source/WarpX.H | 5 +++-- Source/WarpX.cpp | 8 ++++---- 5 files changed, 15 insertions(+), 14 deletions(-) diff --git a/Source/Evolve/WarpXEvolve.cpp b/Source/Evolve/WarpXEvolve.cpp index b40503ac1c7..a5ad9d4034e 100644 --- a/Source/Evolve/WarpXEvolve.cpp +++ b/Source/Evolve/WarpXEvolve.cpp @@ -438,10 +438,10 @@ WarpX::OneStep_nosub (Real cur_time) EvolveB(0.5_rt * dt[0], DtType::FirstHalf, cur_time); // We now have B^{n+1/2} FillBoundaryB(guard_cells.ng_FieldSolver, WarpX::sync_nodal_points); - if (WarpX::em_solver_medium == MediumForEM::Vacuum) { + if (m_em_solver_medium == MediumForEM::Vacuum) { // vacuum medium EvolveE(dt[0], cur_time); // We now have E^{n+1} - } else if (WarpX::em_solver_medium == MediumForEM::Macroscopic) { + } else if (m_em_solver_medium == MediumForEM::Macroscopic) { // macroscopic medium MacroscopicEvolveE(dt[0], cur_time); // We now have E^{n+1} } else { diff --git a/Source/Initialization/WarpXInitData.cpp b/Source/Initialization/WarpXInitData.cpp index cf452df56a2..b2885f8ca6a 100644 --- a/Source/Initialization/WarpXInitData.cpp +++ b/Source/Initialization/WarpXInitData.cpp @@ -288,17 +288,17 @@ WarpX::PrintMainPICparameters () else{ amrex::Print() << "Operation mode: | Electromagnetic" << "\n"; } - if (em_solver_medium == MediumForEM::Vacuum ){ + if (m_em_solver_medium == MediumForEM::Vacuum ){ amrex::Print() << " | - vacuum" << "\n"; } - else if (em_solver_medium == MediumForEM::Macroscopic ){ + else if (m_em_solver_medium == MediumForEM::Macroscopic ){ amrex::Print() << " | - macroscopic" << "\n"; } - if ( (em_solver_medium == MediumForEM::Macroscopic) && + if ( (m_em_solver_medium == MediumForEM::Macroscopic) && (WarpX::macroscopic_solver_algo == MacroscopicSolverAlgo::LaxWendroff)){ amrex::Print() << " | - Lax-Wendroff algorithm\n"; } - else if ((em_solver_medium == MediumForEM::Macroscopic) && + else if ((m_em_solver_medium == MediumForEM::Macroscopic) && (WarpX::macroscopic_solver_algo == MacroscopicSolverAlgo::BackwardEuler)){ amrex::Print() << " | - Backward Euler algorithm\n"; } @@ -561,7 +561,7 @@ WarpX::InitData () BuildBufferMasks(); - if (WarpX::em_solver_medium == MediumForEM::Macroscopic) { + if (m_em_solver_medium == MediumForEM::Macroscopic) { const int lev_zero = 0; m_macroscopic_properties->InitData( Geom(lev_zero), diff --git a/Source/Utils/WarpXMovingWindow.cpp b/Source/Utils/WarpXMovingWindow.cpp index b37aa41e28a..0cea2709312 100644 --- a/Source/Utils/WarpXMovingWindow.cpp +++ b/Source/Utils/WarpXMovingWindow.cpp @@ -464,7 +464,7 @@ WarpX::MoveWindow (const int step, bool move_j) } // Recompute macroscopic properties of the medium - if (WarpX::em_solver_medium == MediumForEM::Macroscopic) { + if (m_em_solver_medium == MediumForEM::Macroscopic) { const int lev_zero = 0; m_macroscopic_properties->InitData( Geom(lev_zero), diff --git a/Source/WarpX.H b/Source/WarpX.H index 27b02021678..b12cb1ab7f0 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -206,8 +206,6 @@ public: * being used (0 or 1 corresponding to timers or heuristic). */ static inline auto load_balance_costs_update_algo = LoadBalanceCostsUpdateAlgo::Default; - //! 
Integer that corresponds to electromagnetic Maxwell solver (vacuum - 0, macroscopic - 1) - static inline auto em_solver_medium = MediumForEM::Default; /** Integer that correspond to macroscopic Maxwell solver algorithm * (BackwardEuler - 0, Lax-Wendroff - 1) */ @@ -1371,6 +1369,9 @@ private: bool do_fluid_species = false; std::unique_ptr myfl; + //! Integer that corresponds to electromagnetic Maxwell solver (vacuum - 0, macroscopic - 1) + MediumForEM m_em_solver_medium = MediumForEM::Default; + // // Fields: First array for level, second for direction // diff --git a/Source/WarpX.cpp b/Source/WarpX.cpp index 128e22e2fe3..1e8e121dd5c 100644 --- a/Source/WarpX.cpp +++ b/Source/WarpX.cpp @@ -352,7 +352,7 @@ WarpX::WarpX () m_field_factory.resize(nlevs_max); - if (em_solver_medium == MediumForEM::Macroscopic) { + if (m_em_solver_medium == MediumForEM::Macroscopic) { // create object for macroscopic solver m_macroscopic_properties = std::make_unique(); } @@ -1248,8 +1248,8 @@ WarpX::ReadParameters () " combined with mesh refinement is currently not implemented"); } - pp_algo.query_enum_sloppy("em_solver_medium", em_solver_medium, "-_"); - if (em_solver_medium == MediumForEM::Macroscopic ) { + pp_algo.query_enum_sloppy("em_solver_medium", m_em_solver_medium, "-_"); + if (m_em_solver_medium == MediumForEM::Macroscopic ) { pp_algo.query_enum_sloppy("macroscopic_sigma_method", macroscopic_solver_algo, "-_"); } @@ -2274,7 +2274,7 @@ WarpX::AllocLevelMFs (int lev, const BoxArray& ba, const DistributionMapping& dm } // Allocate extra multifabs for macroscopic properties of the medium - if (em_solver_medium == MediumForEM::Macroscopic) { + if (m_em_solver_medium == MediumForEM::Macroscopic) { WARPX_ALWAYS_ASSERT_WITH_MESSAGE( lev==0, "Macroscopic properties are not supported with mesh refinement."); m_macroscopic_properties->AllocateLevelMFs(ba, dm, ngEB); From daabdd69a03fa18fe3d03f4f92b78e93919daec1 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Tue, 11 Feb 2025 06:12:22 +0100 Subject: [PATCH 37/58] Clang-tidy CI test: bump version from 16 to 17 (#5600) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR bumps the version used for `clang-tidy` CI tests from 16 to 17. It also addresses all the issues found with the upgraded tool. To be merged **after** https://github.com/ECP-WarpX/WarpX/pull/5592 ✅ ### The issues found 🧐 and fixed 🛠️ with the upgraded tool are the following : - [bugprone-switch-missing-default-case](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/bugprone/switch-missing-default-case.html) A newly introduced check to flag `switch` statements without a `default` case (unless the argument is an `enum`) - [cppcoreguidelines-rvalue-reference-param-not-moved](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/cppcoreguidelines/rvalue-reference-param-not-moved.html) A newly introduced check to flag when an rvalue reference argument of a function is never moved inside the function body. ⚠️ **Warning**: in order to have this check compatible with [performance-move-const-arg](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/performance/move-const-arg.html) I had to set `performance-move-const-arg.CheckTriviallyCopyableMove` to `false` (specifically for the three methods in `ablastr::utils::msg_logger` accepting `std::vector::const_iterator&& rit` arguments). 
- [misc-header-include-cycle](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/misc/header-include-cycle.html) A newly introduced check to prevent cyclic header inclusions. - [modernize-type-traits](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/modernize/type-traits.html) A newly introduced check. The idea is to replace occurrences of, e.g., `std::is_integral<T>::value`, with the less verbose alternative `std::is_integral_v<T>` (a standalone sketch follows after this message). - [performance-avoid-endl](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/performance/avoid-endl.html) A newly introduced check. The idea is to replace `<< std::endl` with `"\n"`, since `endl` also forces a flush of the stream. In a few cases flushing the buffer is actually the desired behavior. Typically, this happens when we want to write to `std::cerr`, which is however automatically flushed after each write operation. In cases where actually flushing to `std::cout` is the desired behavior, one can do `<< "\n" << std::flush`, which is arguably more transparent than `<< std::endl`. - [performance-noexcept-swap](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/performance/noexcept-swap.html) For performance reasons it is better if `swap` functions are declared as `noexcept`, in order to allow the compiler to perform more aggressive optimizations. In any case, we can use the AMReX function `amrex::Swap`, which is `noexcept`. ### 🔄 Re-enabled checks: - [readability-misleading-indentation](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/readability/misleading-indentation.html) This check was already available in v16, but a bug led to false positives. The bug has been corrected in v17 of the tool, so we can re-enable the check. ### ⛔ The PR excludes the following checks: - [cppcoreguidelines-missing-std-forward](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/cppcoreguidelines/missing-std-forward.html) A newly introduced check that warns when a forwarding reference parameter is not forwarded. In order to comply with this check, I think that I would have to pass some parameters by reference to lambda functions inside `ParallelFor` constructs. However, this leads to issues when we compile for GPUs. Therefore, I think that the best solution is to exclude this check. See an example below (for `PredFunc&& filter`): ``` amrex::ParallelForRNG(np, [=,&filter] AMREX_GPU_DEVICE (int i, amrex::RandomEngine const& engine) { p_mask[i] = filter(src_data, i, engine); }); ``` - [misc-include-cleaner](https://releases.llvm.org/17.0.1/tools/clang/tools/extra/docs/clang-tidy/checks/misc/include-cleaner.html) It would be awesome to include this check. However, as it is now implemented, it has no notion of "associated headers". For instance, let's suppose that the header `MyClass.H` has `#include <string>` and that `MyClass.cpp` has `#include "MyClass.H"` and uses `std::string` somewhere. In this case, the check raises a warning stating that you should include `<string>` in `MyClass.cpp`, even if it is transitively included via the associated header `MyClass.H`. For this reason, for the moment, it is better to periodically check headers with the `IWYU` tool. 
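For the record, a compact standalone illustration of the `modernize-type-traits` and `performance-avoid-endl` rewrites described above (toy code, not taken from the WarpX sources):

```
#include <iostream>
#include <type_traits>

template <typename T>
void print_tolerance ()
{
    // modernize-type-traits: prefer the _v variable template over ::value.
    // Before: if constexpr (std::is_same<T, double>::value) { ... }
    if constexpr (std::is_same_v<T, double>) {
        std::cout << "tolerance = 1e-12" << "\n"; // performance-avoid-endl: "\n" does not flush
    } else {
        std::cout << "tolerance = 1e-5" << "\n";
    }
}

int main ()
{
    print_tolerance<double>();
    print_tolerance<float>();
    // When a flush really is wanted, request it explicitly instead of using std::endl:
    std::cout << "done" << "\n" << std::flush;
    return 0;
}
```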
--- .clang-tidy | 6 ++++-- .github/workflows/clang_tidy.yml | 8 ++++---- .../FlushFormats/FlushFormatCatalyst.cpp | 6 +++--- Source/Diagnostics/FullDiagnostics.cpp | 2 +- Source/Diagnostics/ReducedDiags/Timestep.cpp | 2 +- .../MagnetostaticSolver/MagnetostaticSolver.cpp | 2 +- .../FieldSolver/SpectralSolver/SpectralKSpace.H | 9 +++++---- .../SpectralSolver/SpectralKSpace.cpp | 2 +- .../SpectralSolver/SpectralKSpace_fwd.H | 2 +- .../DivCleaner/ProjectionDivCleaner.cpp | 4 ++-- Source/NonlinearSolvers/NewtonSolver.H | 2 +- Source/NonlinearSolvers/PicardSolver.H | 2 +- .../Resampling/VelocityCoincidenceThinning.H | 16 +++++----------- Source/Python/callbacks.cpp | 4 ++-- Source/Python/pyWarpX.cpp | 2 +- .../fields/EffectivePotentialPoissonSolver.H | 2 +- Source/ablastr/fields/Interpolate.H | 3 --- Source/ablastr/utils/msg_logger/MsgLogger.H | 12 ++++++------ Source/ablastr/utils/msg_logger/MsgLogger.cpp | 15 +++++++++------ Tools/Linter/runClangTidy.sh | 8 ++++---- Tools/QedTablesUtils/Source/QedTableCommons.H | 4 ++-- 21 files changed, 55 insertions(+), 58 deletions(-) diff --git a/.clang-tidy b/.clang-tidy index efb60a001d0..8111fc2fc25 100644 --- a/.clang-tidy +++ b/.clang-tidy @@ -19,6 +19,7 @@ Checks: ' -cppcoreguidelines-avoid-non-const-global-variables, -cppcoreguidelines-init-variables, -cppcoreguidelines-macro-usage, + -cppcoreguidelines-missing-std-forward, -cppcoreguidelines-narrowing-conversions, -cppcoreguidelines-non-private-member-variables-in-classes, -cppcoreguidelines-owning-memory, @@ -29,6 +30,7 @@ Checks: ' misc-*, -misc-no-recursion, -misc-non-private-member-variables-in-classes, + -misc-include-cleaner, modernize-*, -modernize-avoid-c-arrays, -modernize-return-braced-init-list, @@ -44,7 +46,6 @@ Checks: ' -readability-implicit-bool-conversion, -readability-isolate-declaration, -readability-magic-numbers, - -readability-misleading-indentation, -readability-named-parameter, -readability-uppercase-literal-suffix ' @@ -58,6 +59,7 @@ CheckOptions: value: "true" - key: misc-use-anonymous-namespace.HeaderFileExtensions value: "H," - +- key: performance-move-const-arg.CheckTriviallyCopyableMove + value: "false" HeaderFilterRegex: 'Source[a-z_A-Z0-9\/]+\.H$' diff --git a/.github/workflows/clang_tidy.yml b/.github/workflows/clang_tidy.yml index 3caa11e1885..49f2a5b6e25 100644 --- a/.github/workflows/clang_tidy.yml +++ b/.github/workflows/clang_tidy.yml @@ -26,7 +26,7 @@ jobs: - uses: actions/checkout@v4 - name: install dependencies run: | - .github/workflows/dependencies/clang.sh 16 + .github/workflows/dependencies/clang.sh 17 - name: set up cache uses: actions/cache@v4 with: @@ -43,8 +43,8 @@ jobs: export CCACHE_LOGFILE=${{ github.workspace }}/ccache.log.txt ccache -z - export CXX=$(which clang++-16) - export CC=$(which clang-16) + export CXX=$(which clang++-17) + export CC=$(which clang-17) cmake -S . 
-B build_clang_tidy \ -DCMAKE_VERBOSE_MAKEFILE=ON \ @@ -62,7 +62,7 @@ jobs: ${{github.workspace}}/.github/workflows/source/makeMakefileForClangTidy.py --input ${{github.workspace}}/ccache.log.txt make -j4 --keep-going -f clang-tidy-ccache-misses.mak \ - CLANG_TIDY=clang-tidy-16 \ + CLANG_TIDY=clang-tidy-17 \ CLANG_TIDY_ARGS="--config-file=${{github.workspace}}/.clang-tidy --warnings-as-errors=*" ccache -s diff --git a/Source/Diagnostics/FlushFormats/FlushFormatCatalyst.cpp b/Source/Diagnostics/FlushFormats/FlushFormatCatalyst.cpp index 3e542f9f871..5e5f3634e8f 100644 --- a/Source/Diagnostics/FlushFormats/FlushFormatCatalyst.cpp +++ b/Source/Diagnostics/FlushFormats/FlushFormatCatalyst.cpp @@ -110,7 +110,7 @@ FlushFormatCatalyst::FlushFormatCatalyst() { if (err != catalyst_status_ok) { std::string message = " Error: Failed to initialize Catalyst!\n"; - std::cerr << message << err << std::endl; + std::cerr << message << err << "\n"; amrex::Print() << message; amrex::Abort(message); } @@ -180,7 +180,7 @@ FlushFormatCatalyst::WriteToFile ( if (err != catalyst_status_ok) { std::string message = " Error: Failed to execute Catalyst!\n"; - std::cerr << message << err << std::endl; + std::cerr << message << err << "\n"; amrex::Print() << message; } WARPX_PROFILE_VAR_STOP(prof_catalyst_execute); @@ -200,7 +200,7 @@ FlushFormatCatalyst::~FlushFormatCatalyst() { if (err != catalyst_status_ok) { std::string message = " Error: Failed to finalize Catalyst!\n"; - std::cerr << message << err << std::endl; + std::cerr << message << err << "\n"; amrex::Print() << message; amrex::Abort(message); } else { diff --git a/Source/Diagnostics/FullDiagnostics.cpp b/Source/Diagnostics/FullDiagnostics.cpp index 8e2ebd3886a..5e8cede12ea 100644 --- a/Source/Diagnostics/FullDiagnostics.cpp +++ b/Source/Diagnostics/FullDiagnostics.cpp @@ -873,7 +873,7 @@ FullDiagnostics::InitializeFieldFunctors (int lev) } else if ( m_varnames[comp] == "divE" ){ m_all_field_functors[lev][comp] = std::make_unique(warpx.m_fields.get_alldirs(FieldType::Efield_aux, lev), lev, m_crse_ratio); } else { - std::cout << "Error on component " << m_varnames[comp] << std::endl; + std::cout << "Error on component " << m_varnames[comp] << "\n"; WARPX_ABORT_WITH_MESSAGE(m_varnames[comp] + " is not a known field output type for this geometry"); } } diff --git a/Source/Diagnostics/ReducedDiags/Timestep.cpp b/Source/Diagnostics/ReducedDiags/Timestep.cpp index 3474121db91..e74f22c27ec 100644 --- a/Source/Diagnostics/ReducedDiags/Timestep.cpp +++ b/Source/Diagnostics/ReducedDiags/Timestep.cpp @@ -50,7 +50,7 @@ Timestep::Timestep (const std::string& rd_name) } // close file - ofs << std::endl; + ofs << "\n"; ofs.close(); } } diff --git a/Source/FieldSolver/MagnetostaticSolver/MagnetostaticSolver.cpp b/Source/FieldSolver/MagnetostaticSolver/MagnetostaticSolver.cpp index fb93342ed08..2a744f3f902 100644 --- a/Source/FieldSolver/MagnetostaticSolver/MagnetostaticSolver.cpp +++ b/Source/FieldSolver/MagnetostaticSolver/MagnetostaticSolver.cpp @@ -130,7 +130,7 @@ WarpX::AddMagnetostaticFieldLabFrame() // temporary fix!!! 
const amrex::Real absolute_tolerance = 0.0; amrex::Real required_precision; - if constexpr (std::is_same::value) { + if constexpr (std::is_same_v) { required_precision = 1e-5; } else { diff --git a/Source/FieldSolver/SpectralSolver/SpectralKSpace.H b/Source/FieldSolver/SpectralSolver/SpectralKSpace.H index 16f93d8292a..fcf1a2ccd02 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralKSpace.H +++ b/Source/FieldSolver/SpectralSolver/SpectralKSpace.H @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -35,9 +36,9 @@ using SpectralShiftFactor = amrex::LayoutData< // Indicate the type of correction "shift" factor to apply // when the FFT is performed from/to a cell-centered grid in real space. -struct ShiftType { - enum{ TransformFromCellCentered=0, TransformToCellCentered=1 }; -}; +AMREX_ENUM(ShiftType, + TransformFromCellCentered, + TransformToCellCentered); /** * \brief Class that represents the spectral space. @@ -69,7 +70,7 @@ class SpectralKSpace SpectralShiftFactor getSpectralShiftFactor( const amrex::DistributionMapping& dm, int i_dim, - int shift_type ) const; + ShiftType shift_type ) const; protected: amrex::Array k_vec; diff --git a/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp b/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp index adf7fff775d..5313409553f 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp +++ b/Source/FieldSolver/SpectralSolver/SpectralKSpace.cpp @@ -145,7 +145,7 @@ SpectralKSpace::getKComponent( const DistributionMapping& dm, SpectralShiftFactor SpectralKSpace::getSpectralShiftFactor( const DistributionMapping& dm, const int i_dim, - const int shift_type ) const + const ShiftType shift_type ) const { // Initialize an empty DeviceVector in each box SpectralShiftFactor shift_factor( spectralspace_ba, dm ); diff --git a/Source/FieldSolver/SpectralSolver/SpectralKSpace_fwd.H b/Source/FieldSolver/SpectralSolver/SpectralKSpace_fwd.H index a256767d5bc..3b93622ae0b 100644 --- a/Source/FieldSolver/SpectralSolver/SpectralKSpace_fwd.H +++ b/Source/FieldSolver/SpectralSolver/SpectralKSpace_fwd.H @@ -8,7 +8,7 @@ #ifndef WARPX_SPECTRALKSPACE_FWD_H #define WARPX_SPECTRALKSPACE_FWD_H -struct ShiftType; +enum class ShiftType; class SpectralKSpace; diff --git a/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp b/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp index 1209f621e31..d7a3bb3ac92 100644 --- a/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp +++ b/Source/Initialization/DivCleaner/ProjectionDivCleaner.cpp @@ -104,7 +104,7 @@ void ProjectionDivCleaner::ReadParameters () { // Initialize tolerance based on field precision - if constexpr (std::is_same::value) { + if constexpr (std::is_same_v) { m_rtol = 5e-5; m_atol = 0.0; } @@ -337,7 +337,7 @@ WarpX::ProjectionCleanDivB() { && WarpX::poisson_solver_id == PoissonSolverAlgo::Multigrid)) { amrex::Print() << Utils::TextMsg::Info( "Starting Projection B-Field divergence cleaner."); - if constexpr (!std::is_same::value) { + if constexpr (!std::is_same_v) { ablastr::warn_manager::WMRecordWarning("Projection Div Cleaner", "WarpX is running with a field precision of SINGLE." 
"Convergence of projection based div cleaner is not optimal and may fail.", diff --git a/Source/NonlinearSolvers/NewtonSolver.H b/Source/NonlinearSolvers/NewtonSolver.H index f5147b2e4c0..f92687d6b34 100644 --- a/Source/NonlinearSolvers/NewtonSolver.H +++ b/Source/NonlinearSolvers/NewtonSolver.H @@ -313,7 +313,7 @@ void NewtonSolver::Solve ( Vec& a_U, " and the relative tolerance is " << m_rtol << ". Absolute norm is " << norm_abs << " and the absolute tolerance is " << m_atol; - if (this->m_verbose) { amrex::Print() << convergenceMsg.str() << std::endl; } + if (this->m_verbose) { amrex::Print() << convergenceMsg.str() << "\n"; } if (m_require_convergence) { WARPX_ABORT_WITH_MESSAGE(convergenceMsg.str()); } else { diff --git a/Source/NonlinearSolvers/PicardSolver.H b/Source/NonlinearSolvers/PicardSolver.H index 6fe941cd48f..62323b64a23 100644 --- a/Source/NonlinearSolvers/PicardSolver.H +++ b/Source/NonlinearSolvers/PicardSolver.H @@ -205,7 +205,7 @@ void PicardSolver::Solve ( Vec& a_U, " and the relative tolerance is " << m_rtol << ". Absolute norm is " << norm_abs << " and the absolute tolerance is " << m_atol; - if (this->m_verbose) { amrex::Print() << convergenceMsg.str() << std::endl; } + if (this->m_verbose) { amrex::Print() << convergenceMsg.str() << "\n"; } if (m_require_convergence) { WARPX_ABORT_WITH_MESSAGE(convergenceMsg.str()); } else { diff --git a/Source/Particles/Resampling/VelocityCoincidenceThinning.H b/Source/Particles/Resampling/VelocityCoincidenceThinning.H index a815092e03e..d55aed99bcd 100644 --- a/Source/Particles/Resampling/VelocityCoincidenceThinning.H +++ b/Source/Particles/Resampling/VelocityCoincidenceThinning.H @@ -14,6 +14,8 @@ #include "Utils/Parser/ParserUtils.H" #include "Utils/ParticleUtils.H" +#include + /** * \brief This class implements a particle merging scheme wherein particles * are clustered in phase space and particles in the same cluster is merged @@ -66,14 +68,6 @@ public: */ struct HeapSort { - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE - void swap(int &a, int &b) const - { - const auto temp = b; - b = a; - a = temp; - } - AMREX_GPU_HOST_DEVICE AMREX_FORCE_INLINE void operator() (int index_array[], const int bin_array[], const int start, const int n) const { @@ -84,7 +78,7 @@ public: // move child through heap if it is bigger than its parent while (j > 0 && bin_array[index_array[j+start]] > bin_array[index_array[(j - 1)/2 + start]]) { // swap child and parent until branch is properly ordered - swap(index_array[j+start], index_array[(j - 1)/2 + start]); + amrex::Swap(index_array[j+start], index_array[(j - 1)/2 + start]); j = (j - 1) / 2; } } @@ -92,7 +86,7 @@ public: for (int i = n - 1; i > 0; i--) { // swap value of first (now the largest value) to the new end point - swap(index_array[start], index_array[i+start]); + amrex::Swap(index_array[start], index_array[i+start]); // remake the max heap int j = 0, index; @@ -105,7 +99,7 @@ public: } // if parent is smaller than child, swap parent with child having higher value if (index < i && bin_array[index_array[j+start]] < bin_array[index_array[index+start]]) { - swap(index_array[j+start], index_array[index+start]); + amrex::Swap(index_array[j+start], index_array[index+start]); } j = index; } diff --git a/Source/Python/callbacks.cpp b/Source/Python/callbacks.cpp index 79f15c62835..81d379b189a 100644 --- a/Source/Python/callbacks.cpp +++ b/Source/Python/callbacks.cpp @@ -33,8 +33,8 @@ void ExecutePythonCallback ( const std::string& name ) try { warpx_callback_py_map[name](); } catch (std::exception &e) 
{ - std::cerr << "Python callback '" << name << "' failed!" << std::endl; - std::cerr << e.what() << std::endl; + std::cerr << "Python callback '" << name << "' failed!" << "\n"; + std::cerr << e.what() << "\n"; std::exit(3); // note: NOT amrex::Abort(), to avoid hangs with MPI // future note: diff --git a/Source/Python/pyWarpX.cpp b/Source/Python/pyWarpX.cpp index 45c4b48614b..8ae174b4d3e 100644 --- a/Source/Python/pyWarpX.cpp +++ b/Source/Python/pyWarpX.cpp @@ -93,7 +93,7 @@ PYBIND11_MODULE(PYWARPX_MODULE_NAME, m) { // TODO broken numpy if not at least v1.15.0: raise warning // auto numpy = py::module::import("numpy"); // auto npversion = numpy.attr("__version__"); - // std::cout << "numpy version: " << py::str(npversion) << std::endl; + // std::cout << "numpy version: " << py::str(npversion) << "\n"; m.def("amrex_init", [](const py::list args) { diff --git a/Source/ablastr/fields/EffectivePotentialPoissonSolver.H b/Source/ablastr/fields/EffectivePotentialPoissonSolver.H index c6b5d2c5bcc..80e899df027 100644 --- a/Source/ablastr/fields/EffectivePotentialPoissonSolver.H +++ b/Source/ablastr/fields/EffectivePotentialPoissonSolver.H @@ -260,7 +260,7 @@ computeEffectivePotentialPhi ( } // Run additional operations, such as calculation of the E field for embedded boundaries - if constexpr (!std::is_same::value) { + if constexpr (!std::is_same_v) { if (post_phi_calculation.has_value()) { post_phi_calculation.value()(mlmg, lev); } diff --git a/Source/ablastr/fields/Interpolate.H b/Source/ablastr/fields/Interpolate.H index e5121215393..dc4ad47df94 100644 --- a/Source/ablastr/fields/Interpolate.H +++ b/Source/ablastr/fields/Interpolate.H @@ -11,12 +11,9 @@ #include #include -#include - #include #include - namespace ablastr::fields::details { /** Local interpolation from phi_cp to phi[lev+1] diff --git a/Source/ablastr/utils/msg_logger/MsgLogger.H b/Source/ablastr/utils/msg_logger/MsgLogger.H index 2497bdcfae7..088a613bc87 100644 --- a/Source/ablastr/utils/msg_logger/MsgLogger.H +++ b/Source/ablastr/utils/msg_logger/MsgLogger.H @@ -79,10 +79,10 @@ namespace ablastr::utils::msg_logger * \brief Same as static Msg deserialize(std::vector::const_iterator& it) * but accepting an rvalue as an argument * - * @param[in] it iterator of a byte array + * @param[in] rit iterator of a byte array * @return a Msg struct */ - static Msg deserialize(std::vector::const_iterator&& it); + static Msg deserialize(std::vector::const_iterator&& rit); }; /** @@ -115,10 +115,10 @@ namespace ablastr::utils::msg_logger * \brief Same as static Msg MsgWithCounter(std::vector::const_iterator& it) * but accepting an rvalue as an argument * - * @param[in] it iterator of a byte array + * @param[in] rit iterator of a byte array * @return a MsgWithCounter struct */ - static MsgWithCounter deserialize(std::vector::const_iterator&& it); + static MsgWithCounter deserialize(std::vector::const_iterator&& rit); }; /** @@ -154,10 +154,10 @@ namespace ablastr::utils::msg_logger * \brief Same as static Msg MsgWithCounterAndRanks(std::vector::const_iterator& it) * but accepting an rvalue as an argument * - * @param[in] it iterator of a byte array + * @param[in] rit iterator of a byte array * @return a MsgWithCounterAndRanks struct */ - static MsgWithCounterAndRanks deserialize(std::vector::const_iterator&& it); + static MsgWithCounterAndRanks deserialize(std::vector::const_iterator&& rit); }; /** diff --git a/Source/ablastr/utils/msg_logger/MsgLogger.cpp b/Source/ablastr/utils/msg_logger/MsgLogger.cpp index 6537a8f61e5..6597588d085 100644 --- 
a/Source/ablastr/utils/msg_logger/MsgLogger.cpp +++ b/Source/ablastr/utils/msg_logger/MsgLogger.cpp @@ -147,9 +147,10 @@ Msg Msg::deserialize (std::vector::const_iterator& it) return msg; } -Msg Msg::deserialize (std::vector::const_iterator&& it) +Msg Msg::deserialize (std::vector::const_iterator&& rit) { - return Msg::deserialize(it); + auto lit = std::vector::const_iterator{std::move(rit)}; + return Msg::deserialize(lit); } std::vector MsgWithCounter::serialize() const @@ -174,9 +175,10 @@ MsgWithCounter MsgWithCounter::deserialize (std::vector::const_iterator& i return msg_with_counter; } -MsgWithCounter MsgWithCounter::deserialize (std::vector::const_iterator&& it) +MsgWithCounter MsgWithCounter::deserialize (std::vector::const_iterator&& rit) { - return MsgWithCounter::deserialize(it); + auto lit = std::vector::const_iterator{std::move(rit)}; + return MsgWithCounter::deserialize(lit); } std::vector MsgWithCounterAndRanks::serialize() const @@ -205,9 +207,10 @@ MsgWithCounterAndRanks::deserialize (std::vector::const_iterator& it) } MsgWithCounterAndRanks -MsgWithCounterAndRanks::deserialize (std::vector::const_iterator&& it) +MsgWithCounterAndRanks::deserialize (std::vector::const_iterator&& rit) { - return MsgWithCounterAndRanks::deserialize(it); + auto lit = std::vector::const_iterator{std::move(rit)}; + return MsgWithCounterAndRanks::deserialize(lit); } Logger::Logger() : diff --git a/Tools/Linter/runClangTidy.sh b/Tools/Linter/runClangTidy.sh index 262d713cac6..4c1948cf372 100755 --- a/Tools/Linter/runClangTidy.sh +++ b/Tools/Linter/runClangTidy.sh @@ -55,13 +55,13 @@ ${CTIDY} --version echo echo "This can be overridden by setting the environment" echo "variables CLANG, CLANGXX, and CLANGTIDY e.g.: " -echo "$ export CLANG=clang-16" -echo "$ export CLANGXX=clang++-16" -echo "$ export CTIDCLANGTIDYY=clang-tidy-16" +echo "$ export CLANG=clang-17" +echo "$ export CLANGXX=clang++-17" +echo "$ export CTIDCLANGTIDYY=clang-tidy-17" echo "$ ./Tools/Linter/runClangTidy.sh" echo echo "******************************************************" -echo "* Warning: clang v16 is currently used in CI tests. *" +echo "* Warning: clang v17 is currently used in CI tests. *" echo "* It is therefore recommended to use this version. *" echo "* Otherwise, a newer version may find issues not *" echo "* currently covered by CI tests while older versions *" diff --git a/Tools/QedTablesUtils/Source/QedTableCommons.H b/Tools/QedTablesUtils/Source/QedTableCommons.H index 2233513bc97..903ba4623a8 100644 --- a/Tools/QedTablesUtils/Source/QedTableCommons.H +++ b/Tools/QedTablesUtils/Source/QedTableCommons.H @@ -12,8 +12,8 @@ bool Contains (const ContainerType& container, const ElementType& el) void AbortWithMessage(const std::string& msg) { - std::cout << "### ABORT : " << msg << std::endl; - std::cout << "___________________________" << std::endl; + std::cerr << "### ABORT : " << msg << "\n"; + std::cerr << "___________________________\n"; exit(1); } From bc936fece76333f27db2e3e478a5a475658d3775 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Tue, 11 Feb 2025 06:15:56 +0100 Subject: [PATCH 38/58] WarpX class: moving initialization of warning manager to WarpXInit (#5579) This PR moves the initialization of the warning manager from the very large `ReadParameters` function of the WarpX class to a free function inside `WarpXInit.H/cpp` . This function is now called by the constructor of the WarpX class. The final goal is to simplify the WarpX class. 
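As a sketch of the underlying pattern (hypothetical `demo` namespace and names, not the actual WarpX/ablastr API): a well-scoped piece of option parsing moves out of the large class and into a free function in a dedicated initialization namespace, which the constructor then calls:

```
#include <optional>
#include <stdexcept>
#include <string>

namespace demo::initialization
{
    // Free function owning one well-scoped piece of setup logic, so the
    // class constructor shrinks to a sequence of readable calls.
    std::optional<int> parse_abort_threshold (const std::string& value)
    {
        if (value == "high")   { return 2; }
        if (value == "medium") { return 1; }
        if (value == "low")    { return 0; }
        throw std::runtime_error(
            value + " is not a valid option (use: low, medium or high)");
    }
}

class Simulation
{
public:
    explicit Simulation (const std::string& threshold)
        : m_abort_threshold{demo::initialization::parse_abort_threshold(threshold)}
    {}

    [[nodiscard]] std::optional<int> abort_threshold () const { return m_abort_threshold; }

private:
    std::optional<int> m_abort_threshold;
};

int main ()
{
    const Simulation sim{"medium"};
    return sim.abort_threshold().value_or(0) == 1 ? 0 : 1;
}
```

The class keeps only the state; the parsing logic becomes independently testable without constructing the full object.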
--- Source/Initialization/WarpXInit.H | 9 ++++++-- Source/Initialization/WarpXInit.cpp | 35 ++++++++++++++++++++++++++++- Source/WarpX.cpp | 28 ++--------------------- 3 files changed, 43 insertions(+), 29 deletions(-) diff --git a/Source/Initialization/WarpXInit.H b/Source/Initialization/WarpXInit.H index cb9de99c3bc..85e3b8d068e 100644 --- a/Source/Initialization/WarpXInit.H +++ b/Source/Initialization/WarpXInit.H @@ -17,14 +17,19 @@ namespace warpx::initialization * @param[in] argc number of arguments from main() * @param[in] argv argument strings from main() */ - void initialize_external_libraries(int argc, char* argv[]); + void initialize_external_libraries (int argc, char* argv[]); /** Initializes, in the following order: * - the FFT library through the anyfft::cleanup() function in ablastr * - the AMReX library * - the MPI library through the mpi_finalize helper function in ablastr */ - void finalize_external_libraries(); + void finalize_external_libraries (); + + /** + * Initializes the Warning manager in ablastr + */ + void initialize_warning_manager (); /** Check that warpx.dims matches the binary name */ diff --git a/Source/Initialization/WarpXInit.cpp b/Source/Initialization/WarpXInit.cpp index e9f3dc95a59..555bea52a7f 100644 --- a/Source/Initialization/WarpXInit.cpp +++ b/Source/Initialization/WarpXInit.cpp @@ -15,7 +15,9 @@ #include #include +#include +#include #include void warpx::initialization::initialize_external_libraries(int argc, char* argv[]) @@ -25,13 +27,44 @@ void warpx::initialization::initialize_external_libraries(int argc, char* argv[] ablastr::math::anyfft::setup(); } -void warpx::initialization::finalize_external_libraries() +void warpx::initialization::finalize_external_libraries () { ablastr::math::anyfft::cleanup(); amrex::Finalize(); ablastr::parallelization::mpi_finalize(); } +void warpx::initialization::initialize_warning_manager () +{ + const auto pp_warpx = amrex::ParmParse{"warpx"}; + + //"Synthetic" warning messages may be injected in the Warning Manager via + // inputfile for debug&testing purposes. + ablastr::warn_manager::GetWMInstance().debug_read_warnings_from_input(pp_warpx); + + // Set the flag to control if WarpX has to emit a warning message as soon as a warning is recorded + bool always_warn_immediately = false; + pp_warpx.query("always_warn_immediately", always_warn_immediately); + ablastr::warn_manager::GetWMInstance().SetAlwaysWarnImmediately(always_warn_immediately); + + // Set the WarnPriority threshold to decide if WarpX has to abort when a warning is recorded + if(std::string str_abort_on_warning_threshold; + pp_warpx.query("abort_on_warning_threshold", str_abort_on_warning_threshold)){ + std::optional abort_on_warning_threshold = std::nullopt; + if (str_abort_on_warning_threshold == "high") { + abort_on_warning_threshold = ablastr::warn_manager::WarnPriority::high; + } else if (str_abort_on_warning_threshold == "medium" ) { + abort_on_warning_threshold = ablastr::warn_manager::WarnPriority::medium; + } else if (str_abort_on_warning_threshold == "low") { + abort_on_warning_threshold = ablastr::warn_manager::WarnPriority::low; + } else { + WARPX_ABORT_WITH_MESSAGE(str_abort_on_warning_threshold + +"is not a valid option for warpx.abort_on_warning_threshold (use: low, medium or high)"); + } + ablastr::warn_manager::GetWMInstance().SetAbortThreshold(abort_on_warning_threshold); + } +} + void warpx::initialization::check_dims() { // Ensure that geometry.dims is set properly. 
diff --git a/Source/WarpX.cpp b/Source/WarpX.cpp index 1e8e121dd5c..a17c7ff432e 100644 --- a/Source/WarpX.cpp +++ b/Source/WarpX.cpp @@ -246,6 +246,8 @@ WarpX::Finalize() WarpX::WarpX () { + warpx::initialization::initialize_warning_manager(); + ReadParameters(); BackwardCompatibility(); @@ -497,32 +499,6 @@ WarpX::ReadParameters () { ParmParse const pp_warpx("warpx"); - //"Synthetic" warning messages may be injected in the Warning Manager via - // inputfile for debug&testing purposes. - ablastr::warn_manager::GetWMInstance().debug_read_warnings_from_input(pp_warpx); - - // Set the flag to control if WarpX has to emit a warning message as soon as a warning is recorded - bool always_warn_immediately = false; - pp_warpx.query("always_warn_immediately", always_warn_immediately); - ablastr::warn_manager::GetWMInstance().SetAlwaysWarnImmediately(always_warn_immediately); - - // Set the WarnPriority threshold to decide if WarpX has to abort when a warning is recorded - if(std::string str_abort_on_warning_threshold; - pp_warpx.query("abort_on_warning_threshold", str_abort_on_warning_threshold)){ - std::optional abort_on_warning_threshold = std::nullopt; - if (str_abort_on_warning_threshold == "high") { - abort_on_warning_threshold = ablastr::warn_manager::WarnPriority::high; - } else if (str_abort_on_warning_threshold == "medium" ) { - abort_on_warning_threshold = ablastr::warn_manager::WarnPriority::medium; - } else if (str_abort_on_warning_threshold == "low") { - abort_on_warning_threshold = ablastr::warn_manager::WarnPriority::low; - } else { - WARPX_ABORT_WITH_MESSAGE(str_abort_on_warning_threshold - +"is not a valid option for warpx.abort_on_warning_threshold (use: low, medium or high)"); - } - ablastr::warn_manager::GetWMInstance().SetAbortThreshold(abort_on_warning_threshold); - } - std::vector numprocs_in; utils::parser::queryArrWithParser( pp_warpx, "numprocs", numprocs_in, 0, AMREX_SPACEDIM); From 6dfa3ba5e74edc7ef80ac6bc6aa88b42b926f46f Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Tue, 11 Feb 2025 09:43:04 +0100 Subject: [PATCH 39/58] WarpX class: move shiftMF to anonymous namespace in WarpXMovingWindow.cpp (#5609) This PR moves the static function `shiftMF` from the WarpX class to an anonymous namespace in `WarpXMovingWindow.cpp`, where it is actually used. This is done to simplify the WarpX class. 
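To make the moving-window operation concrete, a toy 1D sketch of what `shiftMF` does (plain C++, not the actual AMReX-based routine, which works per box and per component, with guard-cell exchanges and an optional parser-based fill): data are copied from `num_shift` cells away, and the cells the window just uncovered are backfilled with an external-field value.

```
#include <iostream>
#include <vector>

// Toy 1D analogue of the shift: field[i] takes the value previously held
// num_shift cells ahead; cells with no source (the region the window moved
// into) are filled with external_field.
void shift_1d (std::vector<double>& field, int num_shift, double external_field)
{
    const auto n = static_cast<int>(field.size());
    const std::vector<double> tmp = field; // stands in for the tmpmf copy
    for (int i = 0; i < n; ++i) {
        const int src = i + num_shift;
        field[i] = (src >= 0 && src < n) ? tmp[src] : external_field;
    }
}

int main ()
{
    std::vector<double> field = {1., 2., 3., 4., 5.};
    shift_1d(field, 2, 0.); // window moves by two cells
    for (const double v : field) { std::cout << v << " "; } // prints: 3 4 5 0 0
    std::cout << "\n";
    return 0;
}
```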
--- Source/Utils/WarpXMovingWindow.cpp | 439 +++++++++++++++-------------- Source/WarpX.H | 6 - 2 files changed, 231 insertions(+), 214 deletions(-) diff --git a/Source/Utils/WarpXMovingWindow.cpp b/Source/Utils/WarpXMovingWindow.cpp index 0cea2709312..281aa5e75ba 100644 --- a/Source/Utils/WarpXMovingWindow.cpp +++ b/Source/Utils/WarpXMovingWindow.cpp @@ -57,6 +57,199 @@ using namespace amrex; +namespace +{ + + /** This function shifts a MultiFab in a given direction + * + * \param[in,out] mf the MultiFab to be shifted + * \param[in] geom the Geometry object associated to the level of the MultiFab mf + * \param[in] num_shift magnitude of the shift (cell number) + * \param[in] dir direction of the shift + * \param[in] safe_guard_cells flag to enable "safe mode" data exchanges with more guard cells + * \param[in] do_single_precision_comms flag to enable single precision communications + * \param[in,out] cost the pointer to the data structure holding costs for timer-based load-balance + * \param[in] external_field the external field (used to initialize EM fields) + * \param[in] useparser flag to enable the use of a field parser to initialize EM fields + * \param[in] field_parser the field parser + * \param[in] PMLRZ_flag flag to enable a special treatment for PML in RZ simulations + */ + void shiftMF ( + amrex::MultiFab& mf, const amrex::Geometry& geom, + int num_shift, int dir, + bool safe_guard_cells, bool do_single_precision_comms, + amrex::LayoutData* cost, + amrex::Real external_field=0.0, bool useparser = false, + amrex::ParserExecutor<3> const& field_parser={}, + const bool PMLRZ_flag = false) + { + using namespace amrex::literals; + WARPX_PROFILE("warpx::shiftMF()"); + const amrex::BoxArray& ba = mf.boxArray(); + const amrex::DistributionMapping& dm = mf.DistributionMap(); + const int nc = mf.nComp(); + const amrex::IntVect& ng = mf.nGrowVect(); + + AMREX_ALWAYS_ASSERT(ng[dir] >= std::abs(num_shift)); + + amrex::MultiFab tmpmf(ba, dm, nc, ng); + amrex::MultiFab::Copy(tmpmf, mf, 0, 0, nc, ng); + + if ( safe_guard_cells ) { + // Fill guard cells. + ablastr::utils::communication::FillBoundary(tmpmf, do_single_precision_comms, geom.periodicity()); + } else { + amrex::IntVect ng_mw = amrex::IntVect::TheUnitVector(); + // Enough guard cells in the MW direction + ng_mw[dir] = std::abs(num_shift); + // Make sure we don't exceed number of guard cells allocated + ng_mw = ng_mw.min(ng); + // Fill guard cells. 
+ ablastr::utils::communication::FillBoundary(tmpmf, ng_mw, do_single_precision_comms, geom.periodicity()); + } + + // Make a box that covers the region that the window moved into + const amrex::IndexType& typ = ba.ixType(); + const amrex::Box& domainBox = geom.Domain(); + amrex::Box adjBox; + if (num_shift > 0) { + adjBox = adjCellHi(domainBox, dir, ng[dir]); + } else { + adjBox = adjCellLo(domainBox, dir, ng[dir]); + } + adjBox = amrex::convert(adjBox, typ); + + for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) { + if (idim == dir and typ.nodeCentered(dir)) { + if (num_shift > 0) { + adjBox.growLo(idim, -1); + } else { + adjBox.growHi(idim, -1); + } + } else if (idim != dir) { + adjBox.growLo(idim, ng[idim]); + adjBox.growHi(idim, ng[idim]); + } + } + + amrex::IntVect shiftiv(0); + shiftiv[dir] = num_shift; + const amrex::Dim3 shift = shiftiv.dim3(); + + const amrex::RealBox& real_box = geom.ProbDomain(); + const auto dx = geom.CellSizeArray(); + +#ifdef AMREX_USE_OMP + #pragma omp parallel if (Gpu::notInLaunchRegion()) +#endif + for (amrex::MFIter mfi(tmpmf, TilingIfNotGPU()); mfi.isValid(); ++mfi ) + { + if (cost) + { + amrex::Gpu::synchronize(); + } + auto wt = static_cast(amrex::second()); + + auto const& dstfab = mf.array(mfi); + auto const& srcfab = tmpmf.array(mfi); + + const amrex::Box& outbox = mfi.growntilebox() & adjBox; + + if (outbox.ok()) { + if (!useparser) { + AMREX_PARALLEL_FOR_4D ( outbox, nc, i, j, k, n, + { + srcfab(i,j,k,n) = external_field; + }) + } else { + // index type of the src mf + auto const& mf_IndexType = (tmpmf).ixType(); + amrex::IntVect mf_type(AMREX_D_DECL(0,0,0)); + for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) { + mf_type[idim] = mf_IndexType.nodeCentered(idim); + } + + amrex::ParallelFor (outbox, nc, + [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) noexcept + { + // Compute x,y,z co-ordinates based on index type of mf +#if defined(WARPX_DIM_1D_Z) + const amrex::Real x = 0.0_rt; + const amrex::Real y = 0.0_rt; + const amrex::Real fac_z = (1.0_rt - mf_type[0]) * dx[0]*0.5_rt; + const amrex::Real z = i*dx[0] + real_box.lo(0) + fac_z; +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + const amrex::Real fac_x = (1.0_rt - mf_type[0]) * dx[0]*0.5_rt; + const amrex::Real x = i*dx[0] + real_box.lo(0) + fac_x; + const amrex::Real y = 0.0; + const amrex::Real fac_z = (1.0_rt - mf_type[1]) * dx[1]*0.5_rt; + const amrex::Real z = j*dx[1] + real_box.lo(1) + fac_z; +#else + const amrex::Real fac_x = (1.0_rt - mf_type[0]) * dx[0]*0.5_rt; + const amrex::Real x = i*dx[0] + real_box.lo(0) + fac_x; + const amrex::Real fac_y = (1.0_rt - mf_type[1]) * dx[1]*0.5_rt; + const amrex::Real y = j*dx[1] + real_box.lo(1) + fac_y; + const amrex::Real fac_z = (1.0_rt - mf_type[2]) * dx[2]*0.5_rt; + const amrex::Real z = k*dx[2] + real_box.lo(2) + fac_z; +#endif + srcfab(i,j,k,n) = field_parser(x,y,z); + }); + } + + } + + amrex::Box dstBox = mf[mfi].box(); + if (num_shift > 0) { + dstBox.growHi(dir, -num_shift); + } else { + dstBox.growLo(dir, num_shift); + } + AMREX_PARALLEL_FOR_4D ( dstBox, nc, i, j, k, n, + { + dstfab(i,j,k,n) = srcfab(i+shift.x,j+shift.y,k+shift.z,n); + }) + + if (cost) + { + amrex::Gpu::synchronize(); + wt = static_cast(amrex::second()) - wt; + amrex::HostDevice::Atomic::Add( &(*cost)[mfi.index()], wt); + } + } + +#if (defined WARPX_DIM_RZ) && (defined WARPX_USE_FFT) + if (PMLRZ_flag) { + // This does the exchange of data in the corner guard cells, the cells that are in the + // guard region both radially and longitudinally. 
These are the PML cells in the overlapping + // longitudinal region. FillBoundary normally does not update these cells. + // This update is needed so that the cells at the end of the FABs are updated appropriately + // with the data shifted from the neighboring FAB. Without this update, the RZ PML becomes + // unstable with the moving grid. + // This code creates a temporary MultiFab using a BoxList where the radial size of all of + // its boxes is increased so that the radial guard cells are included in the boxes valid domain. + // The temporary MultiFab is setup to refer to the data of the original Multifab (this can + // be done since the shape of the data is all the same, just the indexing is different). + amrex::BoxList bl; + const auto ba_size = static_cast(ba.size()); + for (int i = 0; i < ba_size; ++i) { + bl.push_back(amrex::grow(ba[i], 0, mf.nGrowVect()[0])); + } + const amrex::BoxArray rba(std::move(bl)); + amrex::MultiFab rmf(rba, dm, mf.nComp(), IntVect(0,mf.nGrowVect()[1]), MFInfo().SetAlloc(false)); + + for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) { + rmf.setFab(mfi, FArrayBox(mf[mfi], amrex::make_alias, 0, mf.nComp())); + } + rmf.FillBoundary(false); + } +#else + amrex::ignore_unused(PMLRZ_flag); +#endif + + } + +} + void WarpX::UpdateInjectionPosition (const amrex::Real a_dt) { @@ -208,9 +401,6 @@ WarpX::MoveWindow (const int step, bool move_j) int num_shift = num_shift_base; int num_shift_crse = num_shift; - constexpr auto do_update_cost = true; - constexpr auto dont_update_cost = false; //We can't update cost for PML - // Shift the mesh fields for (int lev = 0; lev <= finest_level; ++lev) { @@ -219,6 +409,11 @@ WarpX::MoveWindow (const int step, bool move_j) num_shift *= refRatio(lev-1)[dir]; } + auto* cost_lev = + (WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers) ? 
getCosts(lev) : nullptr; + + amrex::LayoutData* no_cost = nullptr ; //We can't update cost for PML + // Shift each component of vector fields (E, B, j) for (int dim = 0; dim < 3; ++dim) { // Fine grid @@ -240,59 +435,60 @@ WarpX::MoveWindow (const int step, bool move_j) if (dim == 1) { Efield_parser = m_p_ext_field_params->Eyfield_parser->compile<3>(); } if (dim == 2) { Efield_parser = m_p_ext_field_params->Ezfield_parser->compile<3>(); } } - shiftMF(*m_fields.get(FieldType::Bfield_fp, Direction{dim}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*m_fields.get(FieldType::Bfield_fp, Direction{dim}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params->B_external_grid[dim], use_Bparser, Bfield_parser); - shiftMF(*m_fields.get(FieldType::Efield_fp, Direction{dim}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*m_fields.get(FieldType::Efield_fp, Direction{dim}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params->E_external_grid[dim], use_Eparser, Efield_parser); if (fft_do_time_averaging) { ablastr::fields::MultiLevelVectorField Efield_avg_fp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_fp, finest_level); ablastr::fields::MultiLevelVectorField Bfield_avg_fp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_fp, finest_level); - shiftMF(*Bfield_avg_fp[lev][dim], geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*Bfield_avg_fp[lev][dim], geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params->B_external_grid[dim], use_Bparser, Bfield_parser); - shiftMF(*Efield_avg_fp[lev][dim], geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*Efield_avg_fp[lev][dim], geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params-> E_external_grid[dim], use_Eparser, Efield_parser); } if (move_j) { - shiftMF(*m_fields.get(FieldType::current_fp, Direction{dim}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::current_fp, Direction{dim}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); } if (pml[lev] && pml[lev]->ok()) { amrex::MultiFab* pml_B = m_fields.get(FieldType::pml_B_fp, Direction{dim}, lev); amrex::MultiFab* pml_E = m_fields.get(FieldType::pml_E_fp, Direction{dim}, lev); - shiftMF(*pml_B, geom[lev], num_shift, dir, lev, dont_update_cost, m_safe_guard_cells); - shiftMF(*pml_E, geom[lev], num_shift, dir, lev, dont_update_cost, m_safe_guard_cells); + ::shiftMF(*pml_B, geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); + ::shiftMF(*pml_E, geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); } #if (defined WARPX_DIM_RZ) && (defined WARPX_USE_FFT) + const bool PMLRZ_flag = getPMLRZ(); if (pml_rz[lev] && dim < 2) { amrex::MultiFab* pml_rz_B = m_fields.get(FieldType::pml_B_fp, Direction{dim}, lev); amrex::MultiFab* pml_rz_E = m_fields.get(FieldType::pml_E_fp, Direction{dim}, lev); - shiftMF(*pml_rz_B, geom[lev], num_shift, dir, lev, dont_update_cost, m_safe_guard_cells); - shiftMF(*pml_rz_E, geom[lev], num_shift, dir, lev, dont_update_cost, m_safe_guard_cells); + ::shiftMF(*pml_rz_B, geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, no_cost, 0.0_rt, false, 
amrex::ParserExecutor<3>{}, PMLRZ_flag); + ::shiftMF(*pml_rz_E, geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, no_cost, 0.0_rt, false, amrex::ParserExecutor<3>{}, PMLRZ_flag); } #endif if (lev > 0) { // coarse grid - shiftMF(*m_fields.get(FieldType::Bfield_cp, Direction{dim}, lev), geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*m_fields.get(FieldType::Bfield_cp, Direction{dim}, lev), geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params->B_external_grid[dim], use_Bparser, Bfield_parser); - shiftMF(*m_fields.get(FieldType::Efield_cp, Direction{dim}, lev), geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*m_fields.get(FieldType::Efield_cp, Direction{dim}, lev), geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params->E_external_grid[dim], use_Eparser, Efield_parser); - shiftMF(*m_fields.get(FieldType::Bfield_aux, Direction{dim}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); - shiftMF(*m_fields.get(FieldType::Efield_aux, Direction{dim}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::Bfield_aux, Direction{dim}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); + ::shiftMF(*m_fields.get(FieldType::Efield_aux, Direction{dim}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); if (fft_do_time_averaging) { ablastr::fields::MultiLevelVectorField Efield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Efield_avg_cp, finest_level, skip_lev0_coarse_patch); ablastr::fields::MultiLevelVectorField Bfield_avg_cp = m_fields.get_mr_levels_alldirs(FieldType::Bfield_avg_cp, finest_level, skip_lev0_coarse_patch); - shiftMF(*Bfield_avg_cp[lev][dim], geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*Bfield_avg_cp[lev][dim], geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params->B_external_grid[dim], use_Bparser, Bfield_parser); - shiftMF(*Efield_avg_cp[lev][dim], geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells, + ::shiftMF(*Efield_avg_cp[lev][dim], geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev, m_p_ext_field_params->E_external_grid[dim], use_Eparser, Efield_parser); } if (move_j) { - shiftMF(*m_fields.get(FieldType::current_cp, Direction{dim}, lev), geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::current_cp, Direction{dim}, lev), geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); } if (do_pml && pml[lev]->ok()) { amrex::MultiFab* pml_B_cp = m_fields.get(FieldType::pml_B_cp, Direction{dim}, lev); amrex::MultiFab* pml_E_cp = m_fields.get(FieldType::pml_E_cp, Direction{dim}, lev); - shiftMF(*pml_B_cp, geom[lev-1], num_shift_crse, dir, lev, dont_update_cost, m_safe_guard_cells); - shiftMF(*pml_E_cp, geom[lev-1], num_shift_crse, dir, lev, dont_update_cost, m_safe_guard_cells); + ::shiftMF(*pml_B_cp, geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); + ::shiftMF(*pml_E_cp, geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); } } } @@ -302,11 +498,11 @@ WarpX::MoveWindow (const int 
step, bool move_j) if (m_fields.has(FieldType::F_fp, lev)) { // Fine grid - shiftMF(*m_fields.get(FieldType::F_fp, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::F_fp, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); if (lev > 0) { // Coarse grid - shiftMF(*m_fields.get(FieldType::F_cp, lev), geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::F_cp, lev), geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); } } @@ -317,7 +513,7 @@ WarpX::MoveWindow (const int step, bool move_j) if (do_pml && pml[lev]->ok()) { amrex::MultiFab* pml_F = m_fields.get(FieldType::pml_F_fp, lev); - shiftMF(*pml_F, geom[lev], num_shift, dir, lev, dont_update_cost, m_safe_guard_cells); + ::shiftMF(*pml_F, geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); } if (lev > 0) { @@ -325,7 +521,7 @@ WarpX::MoveWindow (const int step, bool move_j) if (do_pml && pml[lev]->ok()) { amrex::MultiFab* pml_F = m_fields.get(FieldType::pml_F_cp, lev); - shiftMF(*pml_F, geom[lev-1], num_shift_crse, dir, lev, dont_update_cost, m_safe_guard_cells); + ::shiftMF(*pml_F, geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); } } } @@ -335,11 +531,11 @@ WarpX::MoveWindow (const int step, bool move_j) if (m_fields.has(FieldType::G_fp, lev)) { // Fine grid - shiftMF(*m_fields.get(FieldType::G_fp, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::G_fp, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); if (lev > 0) { // Coarse grid - shiftMF(*m_fields.get(FieldType::G_cp, lev), geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::G_cp, lev), geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); } } @@ -350,7 +546,7 @@ WarpX::MoveWindow (const int step, bool move_j) if (do_pml && pml[lev]->ok()) { amrex::MultiFab* pml_G = m_fields.get(FieldType::pml_G_fp, lev); - shiftMF(*pml_G, geom[lev], num_shift, dir, lev, dont_update_cost, m_safe_guard_cells); + ::shiftMF(*pml_G, geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); } if (lev > 0) { @@ -358,7 +554,7 @@ WarpX::MoveWindow (const int step, bool move_j) if (do_pml && pml[lev]->ok()) { amrex::MultiFab* pml_G = m_fields.get(FieldType::pml_G_cp, lev); - shiftMF(*pml_G, geom[lev-1], num_shift_crse, dir, lev, dont_update_cost, m_safe_guard_cells); + ::shiftMF(*pml_G, geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, no_cost); } } } @@ -367,10 +563,10 @@ WarpX::MoveWindow (const int step, bool move_j) if (move_j) { if (m_fields.has(FieldType::rho_fp, lev)) { // Fine grid - shiftMF(*m_fields.get(FieldType::rho_fp,lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::rho_fp,lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); if (lev > 0){ // Coarse grid - shiftMF(*m_fields.get(FieldType::rho_cp,lev), geom[lev-1], num_shift_crse, dir, lev, do_update_cost, m_safe_guard_cells); + ::shiftMF(*m_fields.get(FieldType::rho_cp,lev), geom[lev-1], num_shift_crse, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev); } } } @@ -380,10 +576,10 @@ WarpX::MoveWindow (const int step, 
bool move_j) const int n_fluid_species = myfl->nSpecies(); for (int i=0; i<n_fluid_species; i++) { WarpXFluidContainer& fl = myfl->GetFluidContainer(i); - shiftMF( *m_fields.get(fl.name_mf_N, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells ); - shiftMF( *m_fields.get(fl.name_mf_NU, Direction{0}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells ); - shiftMF( *m_fields.get(fl.name_mf_NU, Direction{1}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells ); - shiftMF( *m_fields.get(fl.name_mf_NU, Direction{2}, lev), geom[lev], num_shift, dir, lev, do_update_cost, m_safe_guard_cells ); + ::shiftMF( *m_fields.get(fl.name_mf_N, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev ); + ::shiftMF( *m_fields.get(fl.name_mf_NU, Direction{0}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev ); + ::shiftMF( *m_fields.get(fl.name_mf_NU, Direction{1}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev ); + ::shiftMF( *m_fields.get(fl.name_mf_NU, Direction{2}, lev), geom[lev], num_shift, dir, m_safe_guard_cells, do_single_precision_comms, cost_lev ); } } } @@ -477,179 +673,6 @@ WarpX::MoveWindow (const int step, bool move_j) return num_shift_base; } -void -WarpX::shiftMF (amrex::MultiFab& mf, const amrex::Geometry& geom, - int num_shift, int dir, const int lev, bool update_cost_flag, - const bool safe_guard_cells, - amrex::Real external_field, bool useparser, - amrex::ParserExecutor<3> const& field_parser) -{ - using namespace amrex::literals; - WARPX_PROFILE("WarpX::shiftMF()"); - const amrex::BoxArray& ba = mf.boxArray(); - const amrex::DistributionMapping& dm = mf.DistributionMap(); - const int nc = mf.nComp(); - const amrex::IntVect& ng = mf.nGrowVect(); - - AMREX_ALWAYS_ASSERT(ng[dir] >= num_shift); - - amrex::MultiFab tmpmf(ba, dm, nc, ng); - amrex::MultiFab::Copy(tmpmf, mf, 0, 0, nc, ng); - - if ( safe_guard_cells ) { - // Fill guard cells. - ablastr::utils::communication::FillBoundary(tmpmf, WarpX::do_single_precision_comms, geom.periodicity()); - } else { - amrex::IntVect ng_mw = amrex::IntVect::TheUnitVector(); - // Enough guard cells in the MW direction - ng_mw[dir] = num_shift; - // Make sure we don't exceed number of guard cells allocated - ng_mw = ng_mw.min(ng); - // Fill guard cells. 
- ablastr::utils::communication::FillBoundary(tmpmf, ng_mw, WarpX::do_single_precision_comms, geom.periodicity()); - } - - // Make a box that covers the region that the window moved into - const amrex::IndexType& typ = ba.ixType(); - const amrex::Box& domainBox = geom.Domain(); - amrex::Box adjBox; - if (num_shift > 0) { - adjBox = adjCellHi(domainBox, dir, ng[dir]); - } else { - adjBox = adjCellLo(domainBox, dir, ng[dir]); - } - adjBox = amrex::convert(adjBox, typ); - - for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) { - if (idim == dir and typ.nodeCentered(dir)) { - if (num_shift > 0) { - adjBox.growLo(idim, -1); - } else { - adjBox.growHi(idim, -1); - } - } else if (idim != dir) { - adjBox.growLo(idim, ng[idim]); - adjBox.growHi(idim, ng[idim]); - } - } - - amrex::IntVect shiftiv(0); - shiftiv[dir] = num_shift; - const amrex::Dim3 shift = shiftiv.dim3(); - - const amrex::RealBox& real_box = geom.ProbDomain(); - const auto dx = geom.CellSizeArray(); - - amrex::LayoutData* cost = WarpX::getCosts(lev); -#ifdef AMREX_USE_OMP -#pragma omp parallel if (Gpu::notInLaunchRegion()) -#endif - - for (amrex::MFIter mfi(tmpmf, TilingIfNotGPU()); mfi.isValid(); ++mfi ) - { - if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers) - { - amrex::Gpu::synchronize(); - } - auto wt = static_cast(amrex::second()); - - auto const& dstfab = mf.array(mfi); - auto const& srcfab = tmpmf.array(mfi); - - const amrex::Box& outbox = mfi.growntilebox() & adjBox; - - if (outbox.ok()) { - if (!useparser) { - AMREX_PARALLEL_FOR_4D ( outbox, nc, i, j, k, n, - { - srcfab(i,j,k,n) = external_field; - }) - } else { - // index type of the src mf - auto const& mf_IndexType = (tmpmf).ixType(); - amrex::IntVect mf_type(AMREX_D_DECL(0,0,0)); - for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) { - mf_type[idim] = mf_IndexType.nodeCentered(idim); - } - - amrex::ParallelFor (outbox, nc, - [=] AMREX_GPU_DEVICE (int i, int j, int k, int n) noexcept - { - // Compute x,y,z co-ordinates based on index type of mf -#if defined(WARPX_DIM_1D_Z) - const amrex::Real x = 0.0_rt; - const amrex::Real y = 0.0_rt; - const amrex::Real fac_z = (1.0_rt - mf_type[0]) * dx[0]*0.5_rt; - const amrex::Real z = i*dx[0] + real_box.lo(0) + fac_z; -#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - const amrex::Real fac_x = (1.0_rt - mf_type[0]) * dx[0]*0.5_rt; - const amrex::Real x = i*dx[0] + real_box.lo(0) + fac_x; - const amrex::Real y = 0.0; - const amrex::Real fac_z = (1.0_rt - mf_type[1]) * dx[1]*0.5_rt; - const amrex::Real z = j*dx[1] + real_box.lo(1) + fac_z; -#else - const amrex::Real fac_x = (1.0_rt - mf_type[0]) * dx[0]*0.5_rt; - const amrex::Real x = i*dx[0] + real_box.lo(0) + fac_x; - const amrex::Real fac_y = (1.0_rt - mf_type[1]) * dx[1]*0.5_rt; - const amrex::Real y = j*dx[1] + real_box.lo(1) + fac_y; - const amrex::Real fac_z = (1.0_rt - mf_type[2]) * dx[2]*0.5_rt; - const amrex::Real z = k*dx[2] + real_box.lo(2) + fac_z; -#endif - srcfab(i,j,k,n) = field_parser(x,y,z); - }); - } - - } - - amrex::Box dstBox = mf[mfi].box(); - if (num_shift > 0) { - dstBox.growHi(dir, -num_shift); - } else { - dstBox.growLo(dir, num_shift); - } - AMREX_PARALLEL_FOR_4D ( dstBox, nc, i, j, k, n, - { - dstfab(i,j,k,n) = srcfab(i+shift.x,j+shift.y,k+shift.z,n); - }) - - if (cost && update_cost_flag && - WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers) - { - amrex::Gpu::synchronize(); - wt = static_cast(amrex::second()) - wt; - amrex::HostDevice::Atomic::Add( &(*cost)[mfi.index()], wt); - } - } - 
-#if (defined WARPX_DIM_RZ) && (defined WARPX_USE_FFT) - if (WarpX::GetInstance().getPMLRZ()) { - // This does the exchange of data in the corner guard cells, the cells that are in the - // guard region both radially and longitudinally. These are the PML cells in the overlapping - // longitudinal region. FillBoundary normally does not update these cells. - // This update is needed so that the cells at the end of the FABs are updated appropriately - // with the data shifted from the neighboring FAB. Without this update, the RZ PML becomes - // unstable with the moving grid. - // This code creates a temporary MultiFab using a BoxList where the radial size of all of - // its boxes is increased so that the radial guard cells are included in the boxes valid domain. - // The temporary MultiFab is setup to refer to the data of the original Multifab (this can - // be done since the shape of the data is all the same, just the indexing is different). - amrex::BoxList bl; - const auto ba_size = static_cast(ba.size()); - for (int i = 0; i < ba_size; ++i) { - bl.push_back(amrex::grow(ba[i], 0, mf.nGrowVect()[0])); - } - const amrex::BoxArray rba(std::move(bl)); - amrex::MultiFab rmf(rba, dm, mf.nComp(), IntVect(0,mf.nGrowVect()[1]), MFInfo().SetAlloc(false)); - - for (amrex::MFIter mfi(mf); mfi.isValid(); ++mfi) { - rmf.setFab(mfi, FArrayBox(mf[mfi], amrex::make_alias, 0, mf.nComp())); - } - rmf.FillBoundary(false); - } -#endif - -} - void WarpX::ShiftGalileanBoundary () { diff --git a/Source/WarpX.H b/Source/WarpX.H index b12cb1ab7f0..7d164a9e685 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -166,12 +166,6 @@ public: amrex::Vector,3 > >& GetEBUpdateEFlag() { return m_eb_update_E; } amrex::Vector< std::unique_ptr > const & GetEBReduceParticleShapeFlag() const { return m_eb_reduce_particle_shape; } - static void shiftMF (amrex::MultiFab& mf, const amrex::Geometry& geom, - int num_shift, int dir, int lev, bool update_cost_flag, - bool safe_guard_cells, - amrex::Real external_field=0.0, bool useparser = false, - amrex::ParserExecutor<3> const& field_parser={}); - /** * \brief * If an authors' string is specified in the inputfile, this method returns that string. From 2cc4fd2c3ad5be96e1aa5811d72a8e1018d925c4 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Tue, 11 Feb 2025 00:50:59 -0800 Subject: [PATCH 40/58] AMReX/pyAMReX/PICSAR: Weekly Update (#5655) Weekly update to latest AMReX. Weekly update to latest pyAMReX. Weekly update to latest PICSAR (no changes). ```console ./Tools/Release/updateAMReX.py ./Tools/Release/updatepyAMReX.py ./Tools/Release/updatePICSAR.py ``` --------- Signed-off-by: Axel Huebl Co-authored-by: Weiqun Zhang --- .github/workflows/cuda.yml | 2 +- Source/ablastr/fields/IntegratedGreenFunctionSolver.cpp | 2 +- cmake/dependencies/AMReX.cmake | 2 +- cmake/dependencies/pyAMReX.cmake | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/.github/workflows/cuda.yml b/.github/workflows/cuda.yml index 0943de41e55..6e87134904f 100644 --- a/.github/workflows/cuda.yml +++ b/.github/workflows/cuda.yml @@ -127,7 +127,7 @@ jobs: which nvcc || echo "nvcc not in PATH!" 
git clone https://github.com/AMReX-Codes/amrex.git ../amrex - cd ../amrex && git checkout --detach 78bdf0faabc4101d5333ebb421e553efcc7ec04e && cd - + cd ../amrex && git checkout --detach 198da4879a63f1bc8c4e8d674bf9185525318f61 && cd - make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_FFT=TRUE USE_CCACHE=TRUE -j 4 ccache -s diff --git a/Source/ablastr/fields/IntegratedGreenFunctionSolver.cpp b/Source/ablastr/fields/IntegratedGreenFunctionSolver.cpp index 74f9b308acd..31d8136e175 100755 --- a/Source/ablastr/fields/IntegratedGreenFunctionSolver.cpp +++ b/Source/ablastr/fields/IntegratedGreenFunctionSolver.cpp @@ -59,7 +59,7 @@ computePhiIGF ( amrex::MultiFab const & rho, } if (!obc_solver || obc_solver->Domain() != domain) { amrex::FFT::Info info{}; - if (is_igf_2d_slices) { info.setBatchMode(true); } // do 2D FFTs + if (is_igf_2d_slices) { info.setTwoDMode(true); } // do 2D FFTs info.setNumProcs(nprocs); obc_solver = std::make_unique<amrex::FFT::OpenBCSolver<amrex::Real>>(domain, info); } diff --git a/cmake/dependencies/AMReX.cmake b/cmake/dependencies/AMReX.cmake index 5136cb8f2f4..7f5546a931b 100644 --- a/cmake/dependencies/AMReX.cmake +++ b/cmake/dependencies/AMReX.cmake @@ -294,7 +294,7 @@ set(WarpX_amrex_src "" set(WarpX_amrex_repo "https://github.com/AMReX-Codes/amrex.git" CACHE STRING "Repository URI to pull and build AMReX from if(WarpX_amrex_internal)") -set(WarpX_amrex_branch "78bdf0faabc4101d5333ebb421e553efcc7ec04e" +set(WarpX_amrex_branch "198da4879a63f1bc8c4e8d674bf9185525318f61" CACHE STRING "Repository branch for WarpX_amrex_repo if(WarpX_amrex_internal)") diff --git a/cmake/dependencies/pyAMReX.cmake b/cmake/dependencies/pyAMReX.cmake index b716e883be9..be7c64acd69 100644 --- a/cmake/dependencies/pyAMReX.cmake +++ b/cmake/dependencies/pyAMReX.cmake @@ -74,7 +74,7 @@ option(WarpX_pyamrex_internal "Download & build pyAMReX" ON) set(WarpX_pyamrex_repo "https://github.com/AMReX-Codes/pyamrex.git" CACHE STRING "Repository URI to pull and build pyamrex from if(WarpX_pyamrex_internal)") -set(WarpX_pyamrex_branch "006bf94a4c68466fac8a1281750391b5a6083d82" +set(WarpX_pyamrex_branch "3088ea12a1a6287246bf027c4235f10e92472450" CACHE STRING "Repository branch for WarpX_pyamrex_repo if(WarpX_pyamrex_internal)") From 7c9f8f2e0c401e61b91842832319553015a1d7fc Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Fri, 14 Feb 2025 01:17:27 +0100 Subject: [PATCH 41/58] Move several EB-related methods out of the WarpX class (#5630) This PR transforms the WarpX member functions `MarkReducedShapeCells`, `MarkUpdateCellsStairCase`, `MarkUpdateECellsECT`, `MarkUpdateBCellsECT`, and `MarkExtensionCells` into pure functions inside the namespace `warpx::embedded_boundary`, together with `ComputeEdgeLengths`, `ComputeFaceAreas`, `ScaleEdges`, and `ScaleAreas`. The source files containing these functions are renamed to `EmbeddedBoundaryInit.H/cpp`, since these functions are called only during initialization. 
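For readers less familiar with this kind of refactoring, here is a minimal sketch of the pattern being applied, using hypothetical names (`Solver`, `MarkCells`, `sketch::embedded_boundary`) rather than the actual WarpX signatures, which are listed in `EmbeddedBoundaryInit.H` below:

```cpp
#include <iostream>
#include <vector>

// Before: the marking logic is a member function and implicitly
// depends on the internal state of the enclosing class.
class Solver
{
public:
    void MarkCells () { for (int& c : m_cells) { c = 1; } }
    std::vector<int> m_cells = std::vector<int>(4, 0);
};

// After: a pure function in a dedicated namespace. Everything it
// touches is an explicit argument, so it can be tested and reused
// without constructing a Solver object.
namespace sketch::embedded_boundary
{
    void MarkCells (std::vector<int>& cells) { for (int& c : cells) { c = 1; } }
}

int main ()
{
    Solver solver;
    sketch::embedded_boundary::MarkCells(solver.m_cells); // same effect as solver.MarkCells()
    for (int const c : solver.m_cells) { std::cout << c << ' '; }
    std::cout << '\n';
    return 0;
}
```

In the diff below, the same idea is applied with the real functions taking the EB factory, the fields, and the flag arrays as explicit parameters instead of reaching for WarpX data members.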
--- Source/BoundaryConditions/PML.cpp | 2 +- Source/EmbeddedBoundary/CMakeLists.txt | 2 +- Source/EmbeddedBoundary/EmbeddedBoundary.H | 55 -- Source/EmbeddedBoundary/EmbeddedBoundary.cpp | 200 ------ .../EmbeddedBoundary/EmbeddedBoundaryInit.H | 141 ++++ .../EmbeddedBoundary/EmbeddedBoundaryInit.cpp | 614 ++++++++++++++++++ Source/EmbeddedBoundary/Make.package | 4 +- Source/EmbeddedBoundary/WarpXInitEB.cpp | 434 +------------ Source/Initialization/WarpXInitData.cpp | 19 +- Source/WarpX.H | 79 --- 10 files changed, 782 insertions(+), 768 deletions(-) delete mode 100644 Source/EmbeddedBoundary/EmbeddedBoundary.H delete mode 100644 Source/EmbeddedBoundary/EmbeddedBoundary.cpp create mode 100644 Source/EmbeddedBoundary/EmbeddedBoundaryInit.H create mode 100644 Source/EmbeddedBoundary/EmbeddedBoundaryInit.cpp diff --git a/Source/BoundaryConditions/PML.cpp b/Source/BoundaryConditions/PML.cpp index 1b66195d163..195642ade2c 100644 --- a/Source/BoundaryConditions/PML.cpp +++ b/Source/BoundaryConditions/PML.cpp @@ -12,7 +12,7 @@ #include "BoundaryConditions/PMLComponent.H" #include "Fields.H" #ifdef AMREX_USE_EB -# include "EmbeddedBoundary/EmbeddedBoundary.H" +# include "EmbeddedBoundary/EmbeddedBoundaryInit.H" #endif #ifdef WARPX_USE_FFT # include "FieldSolver/SpectralSolver/SpectralFieldData.H" diff --git a/Source/EmbeddedBoundary/CMakeLists.txt b/Source/EmbeddedBoundary/CMakeLists.txt index 75f9bbdaa04..909886bbad6 100644 --- a/Source/EmbeddedBoundary/CMakeLists.txt +++ b/Source/EmbeddedBoundary/CMakeLists.txt @@ -2,7 +2,7 @@ foreach(D IN LISTS WarpX_DIMS) warpx_set_suffix_dims(SD ${D}) target_sources(lib_${SD} PRIVATE - EmbeddedBoundary.cpp + EmbeddedBoundaryInit.cpp Enabled.cpp WarpXInitEB.cpp WarpXFaceExtensions.cpp diff --git a/Source/EmbeddedBoundary/EmbeddedBoundary.H b/Source/EmbeddedBoundary/EmbeddedBoundary.H deleted file mode 100644 index fc02667246b..00000000000 --- a/Source/EmbeddedBoundary/EmbeddedBoundary.H +++ /dev/null @@ -1,55 +0,0 @@ -/* Copyright 2021-2025 Lorenzo Giacomel, Luca Fedeli - * - * This file is part of WarpX. - * - * License: BSD-3-Clause-LBNL - */ - -#ifndef WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_H_ -#define WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_H_ - -#include "Enabled.H" - -#ifdef AMREX_USE_EB - -#include - -#include -#include - -#include - -namespace warpx::embedded_boundary -{ - /** - * \brief Compute the length of the mesh edges. Here the length is a value in [0, 1]. - * An edge of length 0 is fully covered. - */ - void ComputeEdgeLengths ( - ablastr::fields::VectorField& edge_lengths, - const amrex::EBFArrayBoxFactory& eb_fact); - /** - * \brief Compute the area of the mesh faces. Here the area is a value in [0, 1]. - * An edge of area 0 is fully covered. - */ - void ComputeFaceAreas ( - ablastr::fields::VectorField& face_areas, - const amrex::EBFArrayBoxFactory& eb_fact); - - /** - * \brief Scale the edges lengths by the mesh width to obtain the real lengths. - */ - void ScaleEdges ( - ablastr::fields::VectorField& edge_lengths, - const std::array& cell_size); - /** - * \brief Scale the edges areas by the mesh width to obtain the real areas. 
- */ - void ScaleAreas ( - ablastr::fields::VectorField& face_areas, - const std::array& cell_size); -} - -#endif - -#endif //WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_H_ diff --git a/Source/EmbeddedBoundary/EmbeddedBoundary.cpp b/Source/EmbeddedBoundary/EmbeddedBoundary.cpp deleted file mode 100644 index 9c3d53aefeb..00000000000 --- a/Source/EmbeddedBoundary/EmbeddedBoundary.cpp +++ /dev/null @@ -1,200 +0,0 @@ -/* Copyright 2021-2025 Lorenzo Giacomel, Luca Fedeli - * - * This file is part of WarpX. - * - * License: BSD-3-Clause-LBNL - */ - -#include "Enabled.H" - -#ifdef AMREX_USE_EB - -#include "EmbeddedBoundary.H" - -#include "Utils/TextMsg.H" - -#include -#include -#include -#include -#include -#include - -namespace web = warpx::embedded_boundary; - -void -web::ComputeEdgeLengths ( - ablastr::fields::VectorField& edge_lengths, - const amrex::EBFArrayBoxFactory& eb_fact) -{ - BL_PROFILE("ComputeEdgeLengths"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ComputeEdgeLengths only implemented in 2D and 3D"); -#endif - - auto const &flags = eb_fact.getMultiEBCellFlagFab(); - auto const &edge_centroid = eb_fact.getEdgeCent(); - for (int idim = 0; idim < 3; ++idim){ -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 1) { - edge_lengths[1]->setVal(0.); - continue; - } -#endif - for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi){ - amrex::Box const box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(), - edge_lengths[idim]->nGrowVect()); - amrex::FabType const fab_type = flags[mfi].getType(box); - auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi); - - if (fab_type == amrex::FabType::regular) { - // every cell in box is all regular - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - edge_lengths_dim(i, j, k) = 1.; - }); - } else if (fab_type == amrex::FabType::covered) { - // every cell in box is all covered - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - edge_lengths_dim(i, j, k) = 0.; - }); - } else { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - int idim_amrex = idim; - if (idim == 2) { idim_amrex = 1; } - auto const &edge_cent = edge_centroid[idim_amrex]->const_array(mfi); -#elif defined(WARPX_DIM_3D) - auto const &edge_cent = edge_centroid[idim]->const_array(mfi); -#endif - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - if (edge_cent(i, j, k) == amrex::Real(-1.0)) { - // This edge is all covered - edge_lengths_dim(i, j, k) = 0.; - } else if (edge_cent(i, j, k) == amrex::Real(1.0)) { - // This edge is all open - edge_lengths_dim(i, j, k) = 1.; - } else { - // This edge is cut. - edge_lengths_dim(i, j, k) = 1 - amrex::Math::abs(amrex::Real(2.0) - * edge_cent(i, j, k)); - } - - }); - } - } - } -} - - -void -web::ComputeFaceAreas ( - ablastr::fields::VectorField& face_areas, - const amrex::EBFArrayBoxFactory& eb_fact) -{ - BL_PROFILE("ComputeFaceAreas"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ComputeFaceAreas only implemented in 2D and 3D"); -#endif - - auto const &flags = eb_fact.getMultiEBCellFlagFab(); -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - //In 2D the volume frac is actually the area frac. 
- auto const &area_frac = eb_fact.getVolFrac(); -#elif defined(WARPX_DIM_3D) - auto const &area_frac = eb_fact.getAreaFrac(); -#endif - - for (int idim = 0; idim < 3; ++idim) { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 0 || idim == 2) { - face_areas[idim]->setVal(0.); - continue; - } -#endif - for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi) { - amrex::Box const box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(), - face_areas[idim]->nGrowVect()); - amrex::FabType const fab_type = flags[mfi].getType(box); - auto const &face_areas_dim = face_areas[idim]->array(mfi); - if (fab_type == amrex::FabType::regular) { - // every cell in box is all regular - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) = amrex::Real(1.); - }); - } else if (fab_type == amrex::FabType::covered) { - // every cell in box is all covered - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) = amrex::Real(0.); - }); - } else { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - auto const &face = area_frac.const_array(mfi); -#elif defined(WARPX_DIM_3D) - auto const &face = area_frac[idim]->const_array(mfi); -#endif - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) = face(i, j, k); - }); - } - } - } -} - -void -web::ScaleEdges ( - ablastr::fields::VectorField& edge_lengths, - const std::array& cell_size) -{ - BL_PROFILE("ScaleEdges"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ScaleEdges only implemented in 2D and 3D"); -#endif - - for (int idim = 0; idim < 3; ++idim){ -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 1) { continue; } -#endif - for (amrex::MFIter mfi(*edge_lengths[0]); mfi.isValid(); ++mfi) { - const amrex::Box& box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(), - edge_lengths[idim]->nGrowVect() ); - auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi); - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - edge_lengths_dim(i, j, k) *= cell_size[idim]; - }); - } - } -} - - -void -web::ScaleAreas ( - ablastr::fields::VectorField& face_areas, - const std::array& cell_size) -{ - BL_PROFILE("ScaleAreas"); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) - WARPX_ABORT_WITH_MESSAGE("ScaleAreas only implemented in 2D and 3D"); -#endif - - for (int idim = 0; idim < 3; ++idim) { -#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - if (idim == 0 || idim == 2) { continue; } -#endif - for (amrex::MFIter mfi(*face_areas[0]); mfi.isValid(); ++mfi) { - const amrex::Box& box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(), - face_areas[idim]->nGrowVect() ); - amrex::Real const full_area = cell_size[(idim+1)%3]*cell_size[(idim+2)%3]; - auto const &face_areas_dim = face_areas[idim]->array(mfi); - - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - face_areas_dim(i, j, k) *= full_area; - }); - - } - } -} - -#endif diff --git a/Source/EmbeddedBoundary/EmbeddedBoundaryInit.H b/Source/EmbeddedBoundary/EmbeddedBoundaryInit.H new file mode 100644 index 00000000000..ed29fe5b688 --- /dev/null +++ b/Source/EmbeddedBoundary/EmbeddedBoundaryInit.H @@ -0,0 +1,141 @@ +/* Copyright 2021-2025 Lorenzo Giacomel, Luca Fedeli + * + * This file is part of WarpX. 
+ * + * License: BSD-3-Clause-LBNL + */ + +#ifndef WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_INIT_H_ +#define WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_INIT_H_ + +#include "Enabled.H" + +#ifdef AMREX_USE_EB + +#include + +#include +#include +#include +#include + +#include + +namespace warpx::embedded_boundary +{ + + /** \brief Set a flag to indicate in which cells a particle should deposit charge/current + * with a reduced, order 1 shape. + * + * More specifically, the flag is set to 1 if any of the neighboring cells over which the + * particle shape might extend are either partially or fully covered by an embedded boundary. + * This ensures that a particle in this cell deposits with an order 1 shape, which in turn + * makes sure that the particle never deposits any charge in a partially or fully covered cell. + * + * \param[in] eb_reduce_particle_shape multifab to be filled with 1s and 0s + * \param[in] eb_fact EB factory + * \param[in] particle_shape_order order of the particle shape function + * \param[in] periodicity TODO Geom(0).periodicity() + */ + void MarkReducedShapeCells ( + std::unique_ptr & eb_reduce_particle_shape, + amrex::EBFArrayBoxFactory const & eb_fact, + int particle_shape_order, + const amrex::Periodicity& periodicity); + + /** \brief Set a flag to indicate on which grid points the field `field` + * should be updated, depending on their position relative to the embedded boundary. + * + * This function is used by all finite-difference solvers, except the + * ECT solver, which instead uses `MarkUpdateECellsECT` and `MarkUpdateBCellsECT`. + * It uses a stair-case approximation of the embedded boundary: + * If a grid point touches cells that are either partially or fully covered + * by the embedded boundary: the corresponding field is not updated. + * + * More specifically, this function fills the iMultiFabs in `eb_update` + * (which have the same indexType as the MultiFabs in `field`) with 1 + * or 0, depending on whether the grid point should be updated or not. + */ + void MarkUpdateCellsStairCase ( + std::array< std::unique_ptr,3> & eb_update, + ablastr::fields::VectorField const & field, + amrex::EBFArrayBoxFactory const & eb_fact ); + + /** \brief Set a flag to indicate on which grid points the E field + * should be updated, depending on their position relative to the embedded boundary. + * + * This function is used by ECT solver. The E field is not updated if + * the edge on which it is defined is fully covered by the embedded boundary. + * + * More specifically, this function fills the iMultiFabs in `eb_update_E` + * (which have the same indexType as the E field) with 1 or 0, depending + * on whether the grid point should be updated or not. + */ + void MarkUpdateECellsECT ( + std::array< std::unique_ptr,3> & eb_update_E, + ablastr::fields::VectorField const& edge_lengths ); + + /** \brief Set a flag to indicate on which grid points the B field + * should be updated, depending on their position relative to the embedded boundary. + * + * This function is used by ECT solver. The B field is not updated if + * the face on which it is defined is fully covered by the embedded boundary. + * + * More specifically, this function fills the iMultiFabs in `eb_update_B` + * (which have the same indexType as the B field) with 1 or 0, depending + * on whether the grid point should be updated or not. 
+ */ + void MarkUpdateBCellsECT ( + std::array< std::unique_ptr,3> & eb_update_B, + ablastr::fields::VectorField const& face_areas, + ablastr::fields::VectorField const& edge_lengths ); + + /** + * \brief Initialize information for cell extensions. + * The flags convention for m_flag_info_face is as follows + * - 0 for unstable cells + * - 1 for stable cells which have not been intruded + * - 2 for stable cells which have been intruded + * Here we cannot know if a cell is intruded or not so we initialize all stable cells with 1 + */ + void MarkExtensionCells( + const std::array& cell_size, + std::array< std::unique_ptr, 3 > & flag_info_face, + std::array< std::unique_ptr, 3 > & flag_ext_face, + const ablastr::fields::VectorField& b_field, + const ablastr::fields::VectorField& face_areas, + const ablastr::fields::VectorField& edge_lengths, + const ablastr::fields::VectorField& area_mod); + + /** + * \brief Compute the length of the mesh edges. Here the length is a value in [0, 1]. + * An edge of length 0 is fully covered. + */ + void ComputeEdgeLengths ( + ablastr::fields::VectorField& edge_lengths, + const amrex::EBFArrayBoxFactory& eb_fact); + /** + * \brief Compute the area of the mesh faces. Here the area is a value in [0, 1]. + * An edge of area 0 is fully covered. + */ + void ComputeFaceAreas ( + ablastr::fields::VectorField& face_areas, + const amrex::EBFArrayBoxFactory& eb_fact); + + /** + * \brief Scale the edges lengths by the mesh width to obtain the real lengths. + */ + void ScaleEdges ( + ablastr::fields::VectorField& edge_lengths, + const std::array& cell_size); + /** + * \brief Scale the edges areas by the mesh width to obtain the real areas. + */ + void ScaleAreas ( + ablastr::fields::VectorField& face_areas, + const std::array& cell_size); +} + +#endif + +#endif //WARPX_EMBEDDED_BOUNDARY_EMBEDDED_BOUNDARY_H_ diff --git a/Source/EmbeddedBoundary/EmbeddedBoundaryInit.cpp b/Source/EmbeddedBoundary/EmbeddedBoundaryInit.cpp new file mode 100644 index 00000000000..6a4caec2e99 --- /dev/null +++ b/Source/EmbeddedBoundary/EmbeddedBoundaryInit.cpp @@ -0,0 +1,614 @@ +/* Copyright 2021-2025 Lorenzo Giacomel, Luca Fedeli + * + * This file is part of WarpX. + * + * License: BSD-3-Clause-LBNL + */ + +#include "Enabled.H" + +#ifdef AMREX_USE_EB + +#include "EmbeddedBoundaryInit.H" + +#include "Fields.H" +#include "Utils/TextMsg.H" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +namespace web = warpx::embedded_boundary; + +void +web::MarkReducedShapeCells ( + std::unique_ptr & eb_reduce_particle_shape, + amrex::EBFArrayBoxFactory const & eb_fact, + int const particle_shape_order, + const amrex::Periodicity& periodicity) +{ + // Pre-fill array with 0, including in the ghost cells outside of the domain. + // (The guard cells in the domain will be updated by `FillBoundary` at the end of this function.) 
+ eb_reduce_particle_shape->setVal(0, eb_reduce_particle_shape->nGrow()); + + // Extract structures for embedded boundaries + amrex::FabArray const& eb_flag = eb_fact.getMultiEBCellFlagFab(); + +#ifdef AMREX_USE_OMP +#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) +#endif + for (amrex::MFIter mfi(*eb_reduce_particle_shape); mfi.isValid(); ++mfi) { + + const amrex::Box& box = mfi.tilebox(); + amrex::Array4 const & eb_reduce_particle_shape_arr = eb_reduce_particle_shape->array(mfi); + + // Check if the box (including one layer of guard cells) contains a mix of covered and regular cells + const amrex::Box eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); + amrex::FabType const fab_type = eb_flag[mfi].getType( eb_info_box ); + + if (fab_type == amrex::FabType::regular) { // All cells in the box are regular + + // Every cell in box is regular: do not reduce particle shape in any cell + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + eb_reduce_particle_shape_arr(i, j, k) = 0; + }); + + } else if (fab_type == amrex::FabType::covered) { // All cells in the box are covered + + // Every cell in box is fully covered: reduce particle shape + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + eb_reduce_particle_shape_arr(i, j, k) = 1; + }); + + } else { // The box contains a mix of covered and regular cells + + auto const & flag = eb_flag[mfi].array(); + + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + + // Check if any of the neighboring cells over which the particle shape might extend + // are either partially or fully covered. In this case, set eb_reduce_particle_shape_arr + // to one for this cell, to indicate that the particle should use an order 1 shape + // (This ensures that the particle never deposits any charge in a partially or + // fully covered cell, even with higher-order shapes) + // Note: in the code below `particle_shape_order/2` corresponds to the number of neighboring cells + // over which the shape factor could extend, in each direction. + int const i_start = i-particle_shape_order/2; + int const i_end = i+particle_shape_order/2; +#if AMREX_SPACEDIM > 1 + int const j_start = j-particle_shape_order/2; + int const j_end = j+particle_shape_order/2; +#else + int const j_start = j; + int const j_end = j; +#endif +#if AMREX_SPACEDIM > 2 + int const k_start = k-particle_shape_order/2; + int const k_end = k+particle_shape_order/2; +#else + int const k_start = k; + int const k_end = k; +#endif + int reduce_shape = 0; + for (int i_cell = i_start; i_cell <= i_end; ++i_cell) { + for (int j_cell = j_start; j_cell <= j_end; ++j_cell) { + for (int k_cell = k_start; k_cell <= k_end; ++k_cell) { + // `isRegular` returns `false` if the cell is either partially or fully covered. 
+ if ( !flag(i_cell, j_cell, k_cell).isRegular() ) { + reduce_shape = 1; + } + } + } + } + eb_reduce_particle_shape_arr(i, j, k) = reduce_shape; + }); + + } + + } + // FillBoundary to set the values in the guard cells + eb_reduce_particle_shape->FillBoundary(periodicity); +} + +void +web::MarkUpdateCellsStairCase ( + std::array< std::unique_ptr,3> & eb_update, + ablastr::fields::VectorField const& field, + amrex::EBFArrayBoxFactory const & eb_fact ) +{ + + using ablastr::fields::Direction; + using warpx::fields::FieldType; + + // Extract structures for embedded boundaries + amrex::FabArray const& eb_flag = eb_fact.getMultiEBCellFlagFab(); + + for (int idim = 0; idim < 3; ++idim) { + +#ifdef AMREX_USE_OMP +#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) +#endif + for (amrex::MFIter mfi(*field[idim]); mfi.isValid(); ++mfi) { + + const amrex::Box& box = mfi.tilebox(); + amrex::Array4 const & eb_update_arr = eb_update[idim]->array(mfi); + + // Check if the box (including one layer of guard cells) contains a mix of covered and regular cells + const amrex::Box eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); + amrex::FabType const fab_type = eb_flag[mfi].getType( eb_info_box ); + + if (fab_type == amrex::FabType::regular) { // All cells in the box are regular + + // Every cell in box is regular: update field in every cell + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + eb_update_arr(i, j, k) = 1; + }); + + } else if (fab_type == amrex::FabType::covered) { // All cells in the box are covered + + // Every cell in box is fully covered: do not update field + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + eb_update_arr(i, j, k) = 0; + }); + + } else { // The box contains a mix of covered and regular cells + + auto const & flag = eb_flag[mfi].array(); + auto index_type = field[idim]->ixType(); + + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + + // Stair-case approximation: If neighboring cells of this gridpoint + // are either partially or fully covered: do not update field + + // The number of cells that we need to check depend on the index type + // of the `eb_update_arr` in each direction. + // If `eb_update_arr` is nodal in a given direction, we need to check the cells + // to the left and right of this nodal gridpoint. + // For instance, if `eb_update_arr` is nodal in the first dimension, we need + // to check the cells at index i-1 and at index i, since, with AMReX indexing conventions, + // these are the neighboring cells for the nodal gripoint at index i. + // If `eb_update_arr` is cell-centerd in a given direction, we only need to check + // the cell at the same position (e.g., in the first dimension: the cell at index i). + int const i_start = ( index_type.nodeCentered(0) )? i-1 : i; +#if AMREX_SPACEDIM > 1 + int const j_start = ( index_type.nodeCentered(1) )? j-1 : j; +#else + int const j_start = j; +#endif +#if AMREX_SPACEDIM > 2 + int const k_start = ( index_type.nodeCentered(2) )? k-1 : k; +#else + int const k_start = k; +#endif + // Loop over neighboring cells + int eb_update_flag = 1; + for (int i_cell = i_start; i_cell <= i; ++i_cell) { + for (int j_cell = j_start; j_cell <= j; ++j_cell) { + for (int k_cell = k_start; k_cell <= k; ++k_cell) { + // If one of the neighboring is either partially or fully covered + // (i.e. if they are not regular cells), do not update field + // (`isRegular` returns `false` if the cell is either partially or fully covered.) 
+ if ( !flag(i_cell, j_cell, k_cell).isRegular() ) { + eb_update_flag = 0; + } + } + } + } + eb_update_arr(i, j, k) = eb_update_flag; + }); + + } + + } + + } + +} + +void +web::MarkUpdateECellsECT ( + std::array< std::unique_ptr,3> & eb_update_E, + ablastr::fields::VectorField const& edge_lengths ) +{ + +#ifdef AMREX_USE_OMP +#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) +#endif + for ( amrex::MFIter mfi(*eb_update_E[0], amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi) { + + const amrex::Box& tbx = mfi.tilebox( eb_update_E[0]->ixType().toIntVect(), eb_update_E[0]->nGrowVect() ); + const amrex::Box& tby = mfi.tilebox( eb_update_E[1]->ixType().toIntVect(), eb_update_E[1]->nGrowVect() ); + const amrex::Box& tbz = mfi.tilebox( eb_update_E[2]->ixType().toIntVect(), eb_update_E[2]->nGrowVect() ); + + amrex::Array4 const & eb_update_Ex_arr = eb_update_E[0]->array(mfi); + amrex::Array4 const & eb_update_Ey_arr = eb_update_E[1]->array(mfi); + amrex::Array4 const & eb_update_Ez_arr = eb_update_E[2]->array(mfi); + + amrex::Array4 const & lx_arr = edge_lengths[0]->array(mfi); + amrex::Array4 const & lz_arr = edge_lengths[2]->array(mfi); +#if defined(WARPX_DIM_3D) + amrex::Array4 const & ly_arr = edge_lengths[1]->array(mfi); +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + amrex::Dim3 const lx_lo = amrex::lbound(lx_arr); + amrex::Dim3 const lx_hi = amrex::ubound(lx_arr); + amrex::Dim3 const lz_lo = amrex::lbound(lz_arr); + amrex::Dim3 const lz_hi = amrex::ubound(lz_arr); +#endif + + amrex::ParallelFor (tbx, tby, tbz, + [=] AMREX_GPU_DEVICE (int i, int j, int k) { + // Do not update Ex if the edge on which it lives is fully covered + eb_update_Ex_arr(i, j, k) = (lx_arr(i, j, k) == 0)? 0 : 1; + }, + [=] AMREX_GPU_DEVICE (int i, int j, int k) { +#ifdef WARPX_DIM_3D + // In 3D: Do not update Ey if the edge on which it lives is fully covered + eb_update_Ey_arr(i, j, k) = (ly_arr(i, j, k) == 0)? 0 : 1; +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + // In XZ and RZ: Ey is associated with a mesh node, + // so we need to check if the mesh node is covered + if((lx_arr(std::min(i , lx_hi.x), std::min(j , lx_hi.y), k)==0) + ||(lx_arr(std::max(i-1, lx_lo.x), std::min(j , lx_hi.y), k)==0) + ||(lz_arr(std::min(i , lz_hi.x), std::min(j , lz_hi.y), k)==0) + ||(lz_arr(std::min(i , lz_hi.x), std::max(j-1, lz_lo.y), k)==0)) { + eb_update_Ey_arr(i, j, k) = 0; + } else { + eb_update_Ey_arr(i, j, k) = 1; + } +#endif + }, + [=] AMREX_GPU_DEVICE (int i, int j, int k) { + // Do not update Ez if the edge on which it lives is fully covered + eb_update_Ez_arr(i, j, k) = (lz_arr(i, j, k) == 0)? 
0 : 1; + } + ); + + } +} + +void +web::MarkUpdateBCellsECT ( + std::array< std::unique_ptr,3> & eb_update_B, + ablastr::fields::VectorField const& face_areas, + ablastr::fields::VectorField const& edge_lengths ) +{ + +#ifdef AMREX_USE_OMP +#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) +#endif + for ( amrex::MFIter mfi(*eb_update_B[0], amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi) { + + const amrex::Box& tbx = mfi.tilebox( eb_update_B[0]->ixType().toIntVect(), eb_update_B[0]->nGrowVect() ); + const amrex::Box& tby = mfi.tilebox( eb_update_B[1]->ixType().toIntVect(), eb_update_B[1]->nGrowVect() ); + const amrex::Box& tbz = mfi.tilebox( eb_update_B[2]->ixType().toIntVect(), eb_update_B[2]->nGrowVect() ); + + amrex::Array4 const & eb_update_Bx_arr = eb_update_B[0]->array(mfi); + amrex::Array4 const & eb_update_By_arr = eb_update_B[1]->array(mfi); + amrex::Array4 const & eb_update_Bz_arr = eb_update_B[2]->array(mfi); + +#ifdef WARPX_DIM_3D + amrex::Array4 const & Sx_arr = face_areas[0]->array(mfi); + amrex::Array4 const & Sy_arr = face_areas[1]->array(mfi); + amrex::Array4 const & Sz_arr = face_areas[2]->array(mfi); + amrex::ignore_unused(edge_lengths); +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + amrex::Array4 const & Sy_arr = face_areas[1]->array(mfi); + amrex::Array4 const & lx_arr = edge_lengths[0]->array(mfi); + amrex::Array4 const & lz_arr = edge_lengths[2]->array(mfi); +#endif + amrex::ParallelFor (tbx, tby, tbz, + [=] AMREX_GPU_DEVICE (int i, int j, int k) { +#ifdef WARPX_DIM_3D + // In 3D: do not update Bx if the face on which it lives is fully covered + eb_update_Bx_arr(i, j, k) = (Sx_arr(i, j, k) == 0)? 0 : 1; +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + //In XZ and RZ, Bx lives on a z-edge ; do not update if fully covered + eb_update_Bx_arr(i, j, k) = (lz_arr(i, j, k) == 0)? 0 : 1; +#endif + }, + [=] AMREX_GPU_DEVICE (int i, int j, int k) { + // Do not update By if the face on which it lives is fully covered + eb_update_By_arr(i, j, k) = (Sy_arr(i, j, k) == 0)? 0 : 1; + }, + [=] AMREX_GPU_DEVICE (int i, int j, int k) { +#ifdef WARPX_DIM_3D + // In 3D: do not update Bz if the face on which it lives is fully covered + eb_update_Bz_arr(i, j, k) = (Sz_arr(i, j, k) == 0)? 0 : 1; +#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + //In XZ and RZ, Bz lives on a x-edge ; do not update if fully covered + eb_update_Bz_arr(i, j, k) = (lx_arr(i, j, k) == 0)? 
0 : 1; +#endif + } + ); + + } +} + +void +web::MarkExtensionCells ( + const std::array& cell_size, + std::array< std::unique_ptr, 3 > & flag_info_face, + std::array< std::unique_ptr, 3 > & flag_ext_face, + const ablastr::fields::VectorField& b_field, + const ablastr::fields::VectorField& face_areas, + const ablastr::fields::VectorField& edge_lengths, + const ablastr::fields::VectorField& area_mod) +{ + using ablastr::fields::Direction; + using warpx::fields::FieldType; + +#ifdef WARPX_DIM_RZ + amrex::ignore_unused(cell_size, flag_info_face, flag_ext_face, b_field, + face_areas, edge_lengths, area_mod); + +#elif !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) + + WARPX_ABORT_WITH_MESSAGE("MarkExtensionCells only implemented in 2D and 3D"); + +#else + + for (int idim = 0; idim < 3; ++idim) { + +# if defined(WARPX_DIM_XZ) + if (idim == 0 || idim == 2) { + flag_info_face[idim]->setVal(0.); + flag_ext_face[idim]->setVal(0.); + continue; + } +# endif + for (amrex::MFIter mfi(*b_field[idim]); mfi.isValid(); ++mfi) { + auto* face_areas_idim_max_lev = face_areas[idim]; + + const amrex::Box& box = mfi.tilebox(face_areas_idim_max_lev->ixType().toIntVect(), + face_areas_idim_max_lev->nGrowVect() ); + + auto const& S = face_areas_idim_max_lev->array(mfi); + auto const& flag_info_face_data = flag_info_face[idim]->array(mfi); + auto const& flag_ext_face_data = flag_ext_face[idim]->array(mfi); + auto const& lx = edge_lengths[0]->array(mfi); + auto const& ly = edge_lengths[1]->array(mfi); + auto const& lz = edge_lengths[2]->array(mfi); + auto const& mod_areas_dim_data = area_mod[idim]->array(mfi); + + const amrex::Real dx = cell_size[0]; + const amrex::Real dy = cell_size[1]; + const amrex::Real dz = cell_size[2]; + + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + // Minimal area for this cell to be stable + mod_areas_dim_data(i, j, k) = S(i, j, k); + double S_stab; + if (idim == 0){ + S_stab = 0.5 * std::max({ly(i, j, k) * dz, ly(i, j, k + 1) * dz, + lz(i, j, k) * dy, lz(i, j + 1, k) * dy}); + }else if (idim == 1){ + +# if defined(WARPX_DIM_XZ) + S_stab = 0.5 * std::max({lx(i, j, k) * dz, lx(i, j + 1, k) * dz, + lz(i, j, k) * dx, lz(i + 1, j, k) * dx}); +# else + S_stab = 0.5 * std::max({lx(i, j, k) * dz, lx(i, j, k + 1) * dz, + lz(i, j, k) * dx, lz(i + 1, j, k) * dx}); +# endif + }else { + S_stab = 0.5 * std::max({lx(i, j, k) * dy, lx(i, j + 1, k) * dy, + ly(i, j, k) * dx, ly(i + 1, j, k) * dx}); + } + + // Does this face need to be extended? + // The difference between flag_info_face and flag_ext_face is that: + // - for every face flag_info_face contains a: + // * 0 if the face needs to be extended + // * 1 if the face is large enough to lend area to other faces + // * 2 if the face is actually intruded by other face + // Here we only take care of the first two cases. The entries corresponding + // to the intruded faces are going to be set in the function ComputeFaceExtensions + // - for every face flag_ext_face contains a: + // * 1 if the face needs to be extended + // * 0 otherwise + // In the function ComputeFaceExtensions, after the cells are extended, the + // corresponding entries in flag_ext_face are set to zero. This helps to keep + // track of which cells could not be extended + flag_ext_face_data(i, j, k) = int(S(i, j, k) < S_stab && S(i, j, k) > 0); + if(flag_ext_face_data(i, j, k)){ + flag_info_face_data(i, j, k) = 0; + } + // Is this face available to lend area to other faces? 
+ // The criterion is that the face has to be interior and not already unstable itself + if(int(S(i, j, k) > 0 && !flag_ext_face_data(i, j, k))) { + flag_info_face_data(i, j, k) = 1; + } + }); + } + } +#endif +} + +void +web::ComputeEdgeLengths ( + ablastr::fields::VectorField& edge_lengths, + const amrex::EBFArrayBoxFactory& eb_fact) +{ + BL_PROFILE("ComputeEdgeLengths"); + +#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) + WARPX_ABORT_WITH_MESSAGE("ComputeEdgeLengths only implemented in 2D and 3D"); +#endif + + auto const &flags = eb_fact.getMultiEBCellFlagFab(); + auto const &edge_centroid = eb_fact.getEdgeCent(); + for (int idim = 0; idim < 3; ++idim){ +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + if (idim == 1) { + edge_lengths[1]->setVal(0.); + continue; + } +#endif + for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi){ + amrex::Box const box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(), + edge_lengths[idim]->nGrowVect()); + amrex::FabType const fab_type = flags[mfi].getType(box); + auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi); + + if (fab_type == amrex::FabType::regular) { + // every cell in box is all regular + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + edge_lengths_dim(i, j, k) = 1.; + }); + } else if (fab_type == amrex::FabType::covered) { + // every cell in box is all covered + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + edge_lengths_dim(i, j, k) = 0.; + }); + } else { +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + int idim_amrex = idim; + if (idim == 2) { idim_amrex = 1; } + auto const &edge_cent = edge_centroid[idim_amrex]->const_array(mfi); +#elif defined(WARPX_DIM_3D) + auto const &edge_cent = edge_centroid[idim]->const_array(mfi); +#endif + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + if (edge_cent(i, j, k) == amrex::Real(-1.0)) { + // This edge is all covered + edge_lengths_dim(i, j, k) = 0.; + } else if (edge_cent(i, j, k) == amrex::Real(1.0)) { + // This edge is all open + edge_lengths_dim(i, j, k) = 1.; + } else { + // This edge is cut. + edge_lengths_dim(i, j, k) = 1 - amrex::Math::abs(amrex::Real(2.0) + * edge_cent(i, j, k)); + } + + }); + } + } + } +} + + +void +web::ComputeFaceAreas ( + ablastr::fields::VectorField& face_areas, + const amrex::EBFArrayBoxFactory& eb_fact) +{ + BL_PROFILE("ComputeFaceAreas"); + +#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) + WARPX_ABORT_WITH_MESSAGE("ComputeFaceAreas only implemented in 2D and 3D"); +#endif + + auto const &flags = eb_fact.getMultiEBCellFlagFab(); +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + //In 2D the volume frac is actually the area frac. 
+ auto const &area_frac = eb_fact.getVolFrac(); +#elif defined(WARPX_DIM_3D) + auto const &area_frac = eb_fact.getAreaFrac(); +#endif + + for (int idim = 0; idim < 3; ++idim) { +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + if (idim == 0 || idim == 2) { + face_areas[idim]->setVal(0.); + continue; + } +#endif + for (amrex::MFIter mfi(flags); mfi.isValid(); ++mfi) { + amrex::Box const box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(), + face_areas[idim]->nGrowVect()); + amrex::FabType const fab_type = flags[mfi].getType(box); + auto const &face_areas_dim = face_areas[idim]->array(mfi); + if (fab_type == amrex::FabType::regular) { + // every cell in box is all regular + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + face_areas_dim(i, j, k) = amrex::Real(1.); + }); + } else if (fab_type == amrex::FabType::covered) { + // every cell in box is all covered + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + face_areas_dim(i, j, k) = amrex::Real(0.); + }); + } else { +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + auto const &face = area_frac.const_array(mfi); +#elif defined(WARPX_DIM_3D) + auto const &face = area_frac[idim]->const_array(mfi); +#endif + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + face_areas_dim(i, j, k) = face(i, j, k); + }); + } + } + } +} + +void +web::ScaleEdges ( + ablastr::fields::VectorField& edge_lengths, + const std::array& cell_size) +{ + BL_PROFILE("ScaleEdges"); + +#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) + WARPX_ABORT_WITH_MESSAGE("ScaleEdges only implemented in 2D and 3D"); +#endif + + for (int idim = 0; idim < 3; ++idim){ +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + if (idim == 1) { continue; } +#endif + for (amrex::MFIter mfi(*edge_lengths[0]); mfi.isValid(); ++mfi) { + const amrex::Box& box = mfi.tilebox(edge_lengths[idim]->ixType().toIntVect(), + edge_lengths[idim]->nGrowVect() ); + auto const &edge_lengths_dim = edge_lengths[idim]->array(mfi); + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + edge_lengths_dim(i, j, k) *= cell_size[idim]; + }); + } + } +} + + +void +web::ScaleAreas ( + ablastr::fields::VectorField& face_areas, + const std::array& cell_size) +{ + BL_PROFILE("ScaleAreas"); + +#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) && !defined(WARPX_DIM_RZ) + WARPX_ABORT_WITH_MESSAGE("ScaleAreas only implemented in 2D and 3D"); +#endif + + for (int idim = 0; idim < 3; ++idim) { +#if defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + if (idim == 0 || idim == 2) { continue; } +#endif + for (amrex::MFIter mfi(*face_areas[0]); mfi.isValid(); ++mfi) { + const amrex::Box& box = mfi.tilebox(face_areas[idim]->ixType().toIntVect(), + face_areas[idim]->nGrowVect() ); + amrex::Real const full_area = cell_size[(idim+1)%3]*cell_size[(idim+2)%3]; + auto const &face_areas_dim = face_areas[idim]->array(mfi); + + amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { + face_areas_dim(i, j, k) *= full_area; + }); + + } + } +} + +#endif diff --git a/Source/EmbeddedBoundary/Make.package b/Source/EmbeddedBoundary/Make.package index e1c6422d99c..fd46932827d 100644 --- a/Source/EmbeddedBoundary/Make.package +++ b/Source/EmbeddedBoundary/Make.package @@ -1,11 +1,11 @@ -CEXE_headers += EmbeddedBoundary.H +CEXE_headers += EmbeddedBoundaryInit.H CEXE_headers += Enabled.H CEXE_headers += ParticleScraper.H CEXE_headers += ParticleBoundaryProcess.H CEXE_headers += DistanceToEB.H CEXE_headers += 
WarpXFaceInfoBox.H -CEXE_sources += EmbeddedBoundary.cpp +CEXE_sources += EmbeddedBoundaryInit.cpp CEXE_sources += Enabled.cpp CEXE_sources += WarpXInitEB.cpp CEXE_sources += WarpXFaceExtensions.cpp diff --git a/Source/EmbeddedBoundary/WarpXInitEB.cpp b/Source/EmbeddedBoundary/WarpXInitEB.cpp index 371bd6a0570..8b7ad7b9d64 100644 --- a/Source/EmbeddedBoundary/WarpXInitEB.cpp +++ b/Source/EmbeddedBoundary/WarpXInitEB.cpp @@ -13,31 +13,17 @@ # include "Utils/Parser/ParserUtils.H" # include "Utils/TextMsg.H" -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include +# include +# include +# include +# include +# include +# include +# include +# include +# include +# include +# include # include # include @@ -123,404 +109,6 @@ WarpX::InitEB () #endif } -#ifdef AMREX_USE_EB - -void -WarpX::MarkReducedShapeCells ( - std::unique_ptr & eb_reduce_particle_shape, - amrex::EBFArrayBoxFactory const & eb_fact, - int const particle_shape_order ) -{ - // Pre-fill array with 0, including in the ghost cells outside of the domain. - // (The guard cells in the domain will be updated by `FillBoundary` at the end of this function.) - eb_reduce_particle_shape->setVal(0, eb_reduce_particle_shape->nGrow()); - - // Extract structures for embedded boundaries - amrex::FabArray const& eb_flag = eb_fact.getMultiEBCellFlagFab(); - -#ifdef AMREX_USE_OMP -#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) -#endif - for (amrex::MFIter mfi(*eb_reduce_particle_shape); mfi.isValid(); ++mfi) { - - const amrex::Box& box = mfi.tilebox(); - amrex::Array4 const & eb_reduce_particle_shape_arr = eb_reduce_particle_shape->array(mfi); - - // Check if the box (including one layer of guard cells) contains a mix of covered and regular cells - const amrex::Box eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); - amrex::FabType const fab_type = eb_flag[mfi].getType( eb_info_box ); - - if (fab_type == amrex::FabType::regular) { // All cells in the box are regular - - // Every cell in box is regular: do not reduce particle shape in any cell - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - eb_reduce_particle_shape_arr(i, j, k) = 0; - }); - - } else if (fab_type == amrex::FabType::covered) { // All cells in the box are covered - - // Every cell in box is fully covered: reduce particle shape - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - eb_reduce_particle_shape_arr(i, j, k) = 1; - }); - - } else { // The box contains a mix of covered and regular cells - - auto const & flag = eb_flag[mfi].array(); - - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - - // Check if any of the neighboring cells over which the particle shape might extend - // are either partially or fully covered. In this case, set eb_reduce_particle_shape_arr - // to one for this cell, to indicate that the particle should use an order 1 shape - // (This ensures that the particle never deposits any charge in a partially or - // fully covered cell, even with higher-order shapes) - // Note: in the code below `particle_shape_order/2` corresponds to the number of neighboring cells - // over which the shape factor could extend, in each direction. 
- int const i_start = i-particle_shape_order/2; - int const i_end = i+particle_shape_order/2; -#if AMREX_SPACEDIM > 1 - int const j_start = j-particle_shape_order/2; - int const j_end = j+particle_shape_order/2; -#else - int const j_start = j; - int const j_end = j; -#endif -#if AMREX_SPACEDIM > 2 - int const k_start = k-particle_shape_order/2; - int const k_end = k+particle_shape_order/2; -#else - int const k_start = k; - int const k_end = k; -#endif - int reduce_shape = 0; - for (int i_cell = i_start; i_cell <= i_end; ++i_cell) { - for (int j_cell = j_start; j_cell <= j_end; ++j_cell) { - for (int k_cell = k_start; k_cell <= k_end; ++k_cell) { - // `isRegular` returns `false` if the cell is either partially or fully covered. - if ( !flag(i_cell, j_cell, k_cell).isRegular() ) { - reduce_shape = 1; - } - } - } - } - eb_reduce_particle_shape_arr(i, j, k) = reduce_shape; - }); - - } - - } - - // FillBoundary to set the values in the guard cells - eb_reduce_particle_shape->FillBoundary(Geom(0).periodicity()); - -} - -void -WarpX::MarkUpdateCellsStairCase ( - std::array< std::unique_ptr,3> & eb_update, - ablastr::fields::VectorField const& field, - amrex::EBFArrayBoxFactory const & eb_fact ) -{ - - using ablastr::fields::Direction; - using warpx::fields::FieldType; - - // Extract structures for embedded boundaries - amrex::FabArray const& eb_flag = eb_fact.getMultiEBCellFlagFab(); - - for (int idim = 0; idim < 3; ++idim) { - -#ifdef AMREX_USE_OMP -#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) -#endif - for (amrex::MFIter mfi(*field[idim]); mfi.isValid(); ++mfi) { - - const amrex::Box& box = mfi.tilebox(); - amrex::Array4 const & eb_update_arr = eb_update[idim]->array(mfi); - - // Check if the box (including one layer of guard cells) contains a mix of covered and regular cells - const amrex::Box eb_info_box = mfi.tilebox(amrex::IntVect::TheCellVector()).grow(1); - amrex::FabType const fab_type = eb_flag[mfi].getType( eb_info_box ); - - if (fab_type == amrex::FabType::regular) { // All cells in the box are regular - - // Every cell in box is regular: update field in every cell - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - eb_update_arr(i, j, k) = 1; - }); - - } else if (fab_type == amrex::FabType::covered) { // All cells in the box are covered - - // Every cell in box is fully covered: do not update field - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - eb_update_arr(i, j, k) = 0; - }); - - } else { // The box contains a mix of covered and regular cells - - auto const & flag = eb_flag[mfi].array(); - auto index_type = field[idim]->ixType(); - - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - - // Stair-case approximation: If neighboring cells of this gridpoint - // are either partially or fully covered: do not update field - - // The number of cells that we need to check depend on the index type - // of the `eb_update_arr` in each direction. - // If `eb_update_arr` is nodal in a given direction, we need to check the cells - // to the left and right of this nodal gridpoint. - // For instance, if `eb_update_arr` is nodal in the first dimension, we need - // to check the cells at index i-1 and at index i, since, with AMReX indexing conventions, - // these are the neighboring cells for the nodal gripoint at index i. - // If `eb_update_arr` is cell-centerd in a given direction, we only need to check - // the cell at the same position (e.g., in the first dimension: the cell at index i). 
- int const i_start = ( index_type.nodeCentered(0) )? i-1 : i; -#if AMREX_SPACEDIM > 1 - int const j_start = ( index_type.nodeCentered(1) )? j-1 : j; -#else - int const j_start = j; -#endif -#if AMREX_SPACEDIM > 2 - int const k_start = ( index_type.nodeCentered(2) )? k-1 : k; -#else - int const k_start = k; -#endif - // Loop over neighboring cells - int eb_update_flag = 1; - for (int i_cell = i_start; i_cell <= i; ++i_cell) { - for (int j_cell = j_start; j_cell <= j; ++j_cell) { - for (int k_cell = k_start; k_cell <= k; ++k_cell) { - // If one of the neighboring is either partially or fully covered - // (i.e. if they are not regular cells), do not update field - // (`isRegular` returns `false` if the cell is either partially or fully covered.) - if ( !flag(i_cell, j_cell, k_cell).isRegular() ) { - eb_update_flag = 0; - } - } - } - } - eb_update_arr(i, j, k) = eb_update_flag; - }); - - } - - } - - } - -} - -void -WarpX::MarkUpdateECellsECT ( - std::array< std::unique_ptr,3> & eb_update_E, - ablastr::fields::VectorField const& edge_lengths ) -{ - -#ifdef AMREX_USE_OMP -#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) -#endif - for ( amrex::MFIter mfi(*eb_update_E[0], amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi) { - - const amrex::Box& tbx = mfi.tilebox( eb_update_E[0]->ixType().toIntVect(), eb_update_E[0]->nGrowVect() ); - const amrex::Box& tby = mfi.tilebox( eb_update_E[1]->ixType().toIntVect(), eb_update_E[1]->nGrowVect() ); - const amrex::Box& tbz = mfi.tilebox( eb_update_E[2]->ixType().toIntVect(), eb_update_E[2]->nGrowVect() ); - - amrex::Array4 const & eb_update_Ex_arr = eb_update_E[0]->array(mfi); - amrex::Array4 const & eb_update_Ey_arr = eb_update_E[1]->array(mfi); - amrex::Array4 const & eb_update_Ez_arr = eb_update_E[2]->array(mfi); - - amrex::Array4 const & lx_arr = edge_lengths[0]->array(mfi); - amrex::Array4 const & lz_arr = edge_lengths[2]->array(mfi); -#if defined(WARPX_DIM_3D) - amrex::Array4 const & ly_arr = edge_lengths[1]->array(mfi); -#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - amrex::Dim3 const lx_lo = amrex::lbound(lx_arr); - amrex::Dim3 const lx_hi = amrex::ubound(lx_arr); - amrex::Dim3 const lz_lo = amrex::lbound(lz_arr); - amrex::Dim3 const lz_hi = amrex::ubound(lz_arr); -#endif - - amrex::ParallelFor (tbx, tby, tbz, - [=] AMREX_GPU_DEVICE (int i, int j, int k) { - // Do not update Ex if the edge on which it lives is fully covered - eb_update_Ex_arr(i, j, k) = (lx_arr(i, j, k) == 0)? 0 : 1; - }, - [=] AMREX_GPU_DEVICE (int i, int j, int k) { -#ifdef WARPX_DIM_3D - // In 3D: Do not update Ey if the edge on which it lives is fully covered - eb_update_Ey_arr(i, j, k) = (ly_arr(i, j, k) == 0)? 0 : 1; -#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - // In XZ and RZ: Ey is associated with a mesh node, - // so we need to check if the mesh node is covered - if((lx_arr(std::min(i , lx_hi.x), std::min(j , lx_hi.y), k)==0) - ||(lx_arr(std::max(i-1, lx_lo.x), std::min(j , lx_hi.y), k)==0) - ||(lz_arr(std::min(i , lz_hi.x), std::min(j , lz_hi.y), k)==0) - ||(lz_arr(std::min(i , lz_hi.x), std::max(j-1, lz_lo.y), k)==0)) { - eb_update_Ey_arr(i, j, k) = 0; - } else { - eb_update_Ey_arr(i, j, k) = 1; - } -#endif - }, - [=] AMREX_GPU_DEVICE (int i, int j, int k) { - // Do not update Ez if the edge on which it lives is fully covered - eb_update_Ez_arr(i, j, k) = (lz_arr(i, j, k) == 0)? 
0 : 1; - } - ); - - } -} - -void -WarpX::MarkUpdateBCellsECT ( - std::array< std::unique_ptr,3> & eb_update_B, - ablastr::fields::VectorField const& face_areas, - ablastr::fields::VectorField const& edge_lengths ) -{ - -#ifdef AMREX_USE_OMP -#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) -#endif - for ( amrex::MFIter mfi(*eb_update_B[0], amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi) { - - const amrex::Box& tbx = mfi.tilebox( eb_update_B[0]->ixType().toIntVect(), eb_update_B[0]->nGrowVect() ); - const amrex::Box& tby = mfi.tilebox( eb_update_B[1]->ixType().toIntVect(), eb_update_B[1]->nGrowVect() ); - const amrex::Box& tbz = mfi.tilebox( eb_update_B[2]->ixType().toIntVect(), eb_update_B[2]->nGrowVect() ); - - amrex::Array4 const & eb_update_Bx_arr = eb_update_B[0]->array(mfi); - amrex::Array4 const & eb_update_By_arr = eb_update_B[1]->array(mfi); - amrex::Array4 const & eb_update_Bz_arr = eb_update_B[2]->array(mfi); - -#ifdef WARPX_DIM_3D - amrex::Array4 const & Sx_arr = face_areas[0]->array(mfi); - amrex::Array4 const & Sy_arr = face_areas[1]->array(mfi); - amrex::Array4 const & Sz_arr = face_areas[2]->array(mfi); - amrex::ignore_unused(edge_lengths); -#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - amrex::Array4 const & Sy_arr = face_areas[1]->array(mfi); - amrex::Array4 const & lx_arr = edge_lengths[0]->array(mfi); - amrex::Array4 const & lz_arr = edge_lengths[2]->array(mfi); -#endif - amrex::ParallelFor (tbx, tby, tbz, - [=] AMREX_GPU_DEVICE (int i, int j, int k) { -#ifdef WARPX_DIM_3D - // In 3D: do not update Bx if the face on which it lives is fully covered - eb_update_Bx_arr(i, j, k) = (Sx_arr(i, j, k) == 0)? 0 : 1; -#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - //In XZ and RZ, Bx lives on a z-edge ; do not update if fully covered - eb_update_Bx_arr(i, j, k) = (lz_arr(i, j, k) == 0)? 0 : 1; -#endif - }, - [=] AMREX_GPU_DEVICE (int i, int j, int k) { - // Do not update By if the face on which it lives is fully covered - eb_update_By_arr(i, j, k) = (Sy_arr(i, j, k) == 0)? 0 : 1; - }, - [=] AMREX_GPU_DEVICE (int i, int j, int k) { -#ifdef WARPX_DIM_3D - // In 3D: do not update Bz if the face on which it lives is fully covered - eb_update_Bz_arr(i, j, k) = (Sz_arr(i, j, k) == 0)? 0 : 1; -#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - //In XZ and RZ, Bz lives on a x-edge ; do not update if fully covered - eb_update_Bz_arr(i, j, k) = (lx_arr(i, j, k) == 0)? 
0 : 1; -#endif - } - ); - - } -} - -void -WarpX::MarkExtensionCells () -{ - using ablastr::fields::Direction; - using warpx::fields::FieldType; - -#ifndef WARPX_DIM_RZ - auto const &cell_size = CellSize(maxLevel()); - -#if !defined(WARPX_DIM_3D) && !defined(WARPX_DIM_XZ) - WARPX_ABORT_WITH_MESSAGE("MarkExtensionCells only implemented in 2D and 3D"); -#endif - - for (int idim = 0; idim < 3; ++idim) { -#if defined(WARPX_DIM_XZ) - if (idim == 0 || idim == 2) { - m_flag_info_face[maxLevel()][idim]->setVal(0.); - m_flag_ext_face[maxLevel()][idim]->setVal(0.); - continue; - } -#endif - for (amrex::MFIter mfi(*m_fields.get(FieldType::Bfield_fp, Direction{idim}, maxLevel())); mfi.isValid(); ++mfi) { - auto* face_areas_idim_max_lev = - m_fields.get(FieldType::face_areas, Direction{idim}, maxLevel()); - - const amrex::Box& box = mfi.tilebox(face_areas_idim_max_lev->ixType().toIntVect(), - face_areas_idim_max_lev->nGrowVect() ); - - auto const &S = face_areas_idim_max_lev->array(mfi); - auto const &flag_info_face = m_flag_info_face[maxLevel()][idim]->array(mfi); - auto const &flag_ext_face = m_flag_ext_face[maxLevel()][idim]->array(mfi); - const auto &lx = m_fields.get(FieldType::edge_lengths, Direction{0}, maxLevel())->array(mfi); - const auto &ly = m_fields.get(FieldType::edge_lengths, Direction{1}, maxLevel())->array(mfi); - const auto &lz = m_fields.get(FieldType::edge_lengths, Direction{2}, maxLevel())->array(mfi); - auto const &mod_areas_dim = m_fields.get(FieldType::area_mod, Direction{idim}, maxLevel())->array(mfi); - - const amrex::Real dx = cell_size[0]; - const amrex::Real dy = cell_size[1]; - const amrex::Real dz = cell_size[2]; - - amrex::ParallelFor(box, [=] AMREX_GPU_DEVICE (int i, int j, int k) { - // Minimal area for this cell to be stable - mod_areas_dim(i, j, k) = S(i, j, k); - double S_stab; - if (idim == 0){ - S_stab = 0.5 * std::max({ly(i, j, k) * dz, ly(i, j, k + 1) * dz, - lz(i, j, k) * dy, lz(i, j + 1, k) * dy}); - }else if (idim == 1){ -#ifdef WARPX_DIM_XZ - S_stab = 0.5 * std::max({lx(i, j, k) * dz, lx(i, j + 1, k) * dz, - lz(i, j, k) * dx, lz(i + 1, j, k) * dx}); -#elif defined(WARPX_DIM_3D) - S_stab = 0.5 * std::max({lx(i, j, k) * dz, lx(i, j, k + 1) * dz, - lz(i, j, k) * dx, lz(i + 1, j, k) * dx}); -#endif - }else { - S_stab = 0.5 * std::max({lx(i, j, k) * dy, lx(i, j + 1, k) * dy, - ly(i, j, k) * dx, ly(i + 1, j, k) * dx}); - } - - // Does this face need to be extended? - // The difference between flag_info_face and flag_ext_face is that: - // - for every face flag_info_face contains a: - // * 0 if the face needs to be extended - // * 1 if the face is large enough to lend area to other faces - // * 2 if the face is actually intruded by other face - // Here we only take care of the first two cases. The entries corresponding - // to the intruded faces are going to be set in the function ComputeFaceExtensions - // - for every face flag_ext_face contains a: - // * 1 if the face needs to be extended - // * 0 otherwise - // In the function ComputeFaceExtensions, after the cells are extended, the - // corresponding entries in flag_ext_face are set to zero. This helps to keep - // track of which cells could not be extended - flag_ext_face(i, j, k) = int(S(i, j, k) < S_stab && S(i, j, k) > 0); - if(flag_ext_face(i, j, k)){ - flag_info_face(i, j, k) = 0; - } - // Is this face available to lend area to other faces? 
- // The criterion is that the face has to be interior and not already unstable itself - if(int(S(i, j, k) > 0 && !flag_ext_face(i, j, k))) { - flag_info_face(i, j, k) = 1; - } - }); - } - } -#endif -} -#endif - void WarpX::ComputeDistanceToEB () { diff --git a/Source/Initialization/WarpXInitData.cpp b/Source/Initialization/WarpXInitData.cpp index b2885f8ca6a..9c2784fe867 100644 --- a/Source/Initialization/WarpXInitData.cpp +++ b/Source/Initialization/WarpXInitData.cpp @@ -18,7 +18,7 @@ #include "Diagnostics/ReducedDiags/MultiReducedDiags.H" #include "EmbeddedBoundary/Enabled.H" #ifdef AMREX_USE_EB -# include "EmbeddedBoundary/EmbeddedBoundary.H" +# include "EmbeddedBoundary/EmbeddedBoundaryInit.H" #endif #include "Fields.H" #include "FieldSolver/ElectrostaticSolvers/ElectrostaticSolver.H" @@ -1247,22 +1247,27 @@ void WarpX::InitializeEBGridData (int lev) warpx::embedded_boundary::ScaleAreas(face_areas_lev, CellSize(lev)); // Compute additional quantities required for the ECT solver - MarkExtensionCells(); + const auto& area_mod = m_fields.get_alldirs(FieldType::area_mod, maxLevel()); + warpx::embedded_boundary::MarkExtensionCells( + CellSize(maxLevel()), m_flag_info_face[maxLevel()], m_flag_ext_face[maxLevel()], + m_fields.get_alldirs(FieldType::Bfield_fp, maxLevel()), + face_areas_lev, + edge_lengths_lev, area_mod); ComputeFaceExtensions(); // Mark on which grid points E should be updated - MarkUpdateECellsECT( m_eb_update_E[lev], edge_lengths_lev ); + warpx::embedded_boundary::MarkUpdateECellsECT( m_eb_update_E[lev], edge_lengths_lev ); // Mark on which grid points B should be updated - MarkUpdateBCellsECT( m_eb_update_B[lev], face_areas_lev, edge_lengths_lev); + warpx::embedded_boundary::MarkUpdateBCellsECT( m_eb_update_B[lev], face_areas_lev, edge_lengths_lev); } else { // Mark on which grid points E should be updated (stair-case approximation) - MarkUpdateCellsStairCase( + warpx::embedded_boundary::MarkUpdateCellsStairCase( m_eb_update_E[lev], m_fields.get_alldirs(FieldType::Efield_fp, lev), eb_fact ); // Mark on which grid points B should be updated (stair-case approximation) - MarkUpdateCellsStairCase( + warpx::embedded_boundary::MarkUpdateCellsStairCase( m_eb_update_B[lev], m_fields.get_alldirs(FieldType::Bfield_fp, lev), eb_fact ); @@ -1271,7 +1276,7 @@ void WarpX::InitializeEBGridData (int lev) } ComputeDistanceToEB(); - MarkReducedShapeCells( m_eb_reduce_particle_shape[lev], eb_fact, WarpX::nox ); + warpx::embedded_boundary::MarkReducedShapeCells( m_eb_reduce_particle_shape[lev], eb_fact, nox, Geom(0).periodicity()); } #else diff --git a/Source/WarpX.H b/Source/WarpX.H index 7d164a9e685..a1595210389 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -959,85 +959,6 @@ public: void InitEB (); -#ifdef AMREX_USE_EB - - /** \brief Set a flag to indicate in which cells a particle should deposit charge/current - * with a reduced, order 1 shape. - * - * More specifically, the flag is set to 1 if any of the neighboring cells over which the - * particle shape might extend are either partially or fully covered by an embedded boundary. - * This ensures that a particle in this cell deposits with an order 1 shape, which in turn - * makes sure that the particle never deposits any charge in a partially or fully covered cell. 
- * - * \param[in] eb_reduce_particle_shape multifab to be filled with 1s and 0s - * \param[in] eb_fact EB factory - * \param[in] particle_shape_order order of the particle shape function - */ - - - void MarkReducedShapeCells ( - std::unique_ptr & eb_reduce_particle_shape, - amrex::EBFArrayBoxFactory const & eb_fact, - int particle_shape_order ); - - /** \brief Set a flag to indicate on which grid points the field `field` - * should be updated, depending on their position relative to the embedded boundary. - * - * This function is used by all finite-difference solvers, except the - * ECT solver, which instead uses `MarkUpdateECellsECT` and `MarkUpdateBCellsECT`. - * It uses a stair-case approximation of the embedded boundary: - * If a grid point touches cells that are either partially or fully covered - * by the embedded boundary: the corresponding field is not updated. - * - * More specifically, this function fills the iMultiFabs in `eb_update` - * (which have the same indexType as the MultiFabs in `field`) with 1 - * or 0, depending on whether the grid point should be updated or not. - */ - void MarkUpdateCellsStairCase ( - std::array< std::unique_ptr,3> & eb_update, - ablastr::fields::VectorField const & field, - amrex::EBFArrayBoxFactory const & eb_fact ); - - /** \brief Set a flag to indicate on which grid points the E field - * should be updated, depending on their position relative to the embedded boundary. - * - * This function is used by ECT solver. The E field is not updated if - * the edge on which it is defined is fully covered by the embedded boundary. - * - * More specifically, this function fills the iMultiFabs in `eb_update_E` - * (which have the same indexType as the E field) with 1 or 0, depending - * on whether the grid point should be updated or not. - */ - void MarkUpdateECellsECT ( - std::array< std::unique_ptr,3> & eb_update_E, - ablastr::fields::VectorField const& edge_lengths ); - - /** \brief Set a flag to indicate on which grid points the B field - * should be updated, depending on their position relative to the embedded boundary. - * - * This function is used by ECT solver. The B field is not updated if - * the face on which it is defined is fully covered by the embedded boundary. - * - * More specifically, this function fills the iMultiFabs in `eb_update_B` - * (which have the same indexType as the B field) with 1 or 0, depending - * on whether the grid point should be updated or not. - */ - void MarkUpdateBCellsECT ( - std::array< std::unique_ptr,3> & eb_update_B, - ablastr::fields::VectorField const& face_areas, - ablastr::fields::VectorField const& edge_lengths ); - - /** - * \brief Initialize information for cell extensions. - * The flags convention for m_flag_info_face is as follows - * - 0 for unstable cells - * - 1 for stable cells which have not been intruded - * - 2 for stable cells which have been intruded - * Here we cannot know if a cell is intruded or not so we initialize all stable cells with 1 - */ - void MarkExtensionCells(); -#endif - /** * \brief Compute the level set function used for particle-boundary interaction. 
*/ From 7e339a02d3b3bf9c7b43ce32fae0880ebd080604 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Fri, 14 Feb 2025 04:23:14 +0100 Subject: [PATCH 42/58] WarpX class: simplify return type of get_spectral_solver_fp using `auto&` (#5656) This PR simplifies the return type of a method of the WarpX class by replacing: ``` # ifdef WARPX_DIM_RZ SpectralSolverRZ& # else SpectralSolver& # endif ``` with ``` auto& ``` --- Source/WarpX.H | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/Source/WarpX.H b/Source/WarpX.H index a1595210389..ce4a846eace 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -1005,13 +1005,7 @@ public: void PSATDSubtractCurrentPartialSumsAvg (); #ifdef WARPX_USE_FFT - -# ifdef WARPX_DIM_RZ - SpectralSolverRZ& -# else - SpectralSolver& -# endif - get_spectral_solver_fp (int lev) {return *spectral_solver_fp[lev];} + auto& get_spectral_solver_fp (int lev) {return *spectral_solver_fp[lev];} #endif FiniteDifferenceSolver * get_pointer_fdtd_solver_fp (int lev) { return m_fdtd_solver_fp[lev].get(); } From eb2627703166d1f437d25711e2bc8bc059ed7c0b Mon Sep 17 00:00:00 2001 From: Arianna Formenti Date: Fri, 14 Feb 2025 07:55:24 -0800 Subject: [PATCH 43/58] Add reduced diagnostic: 2d differential luminosity (#5545) Adds a luminosity diagnostic differentiated in the energies of two colliding species, called `DifferentialLuminosity2D`. It is defined as follows: ```math \begin{align*} \frac{d^2\mathcal{L}}{dE_1 dE_2}(E_1, E_2, t) = \int_0^t dt'\int d\boldsymbol{x}\, & \int d\boldsymbol{p}_1 \int d\boldsymbol{p}_2\; \sqrt{ |\boldsymbol{v}_1 - \boldsymbol{v}_2|^2 - |\boldsymbol{v}_1\times\boldsymbol{v}_2|^2/c^2} \\ & f_1(\boldsymbol{x}, \boldsymbol{p}_1, t')f_2(\boldsymbol{x}, \boldsymbol{p}_2, t') \delta(E_1 - E_1(\boldsymbol{p}_1)) \delta(E_2 - E_2(\boldsymbol{p}_2)) \end{align*} ``` where: * $\boldsymbol{p}_i$ is the momentum of a particle of species $i$ * $E_i$ is the energy of a particle of species $i$, $E_i (\boldsymbol{p}_i) = \sqrt{m_i^2c^4 + c^2 |\boldsymbol{p}_i|^2}$ * $f_i$ is the distribution function of species $i$, normalized such that $\int \int f(\boldsymbol{x}, \boldsymbol{p}, t)\, d\boldsymbol{x} d\boldsymbol{p} = N$, the number of particles in species $i$ at time $t$ The 2D differential luminosity is given in units of $\text{m}^{-2} \ \text{eV}^{-2}$. The user must specify the minimum, maximum, and number of bins to discretize the $E_1$ and $E_2$ axes. The computation of this diagnostic is similar to that of `ParticleHistogram2D`. The output is a folder containing a set of openPMD files. The values of the diagnostic are stored in a record labeled `d2L_dE1_dE2`, with axes `E1` and `E2`.
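For reference, the accumulated record can be read back with `openpmd-viewer`, as done in the analysis script updated by this PR. A minimal sketch (the diagnostic name and the default `diags/reducedfiles/` output path are assumptions taken from the example input files, not fixed by the diagnostic itself):

```python
from openpmd_viewer import OpenPMDTimeSeries

# Open the folder of openPMD files written by the reduced diagnostic
# (diagnostic name and path below are assumptions matching the example inputs)
ts = OpenPMDTimeSeries("./diags/reducedfiles/DifferentialLuminosity2d_beam1_beam2/")

# The diagnostic accumulates across timesteps, so the last output iteration
# holds the luminosity integrated over the whole run, in m^-2 eV^-2
d2L_dE1_dE2, info = ts.get_field("d2L_dE1_dE2", iteration=ts.iterations[-1])

# The first array axis is E2 and the second is E1 (both in eV)
assert info.axes[0] == "E2" and info.axes[1] == "E1"

# Integrating over both energy axes recovers the total luminosity, in m^-2
dE1 = info.E1[1] - info.E1[0]
dE2 = info.E2[1] - info.E2[0]
print(f"Integrated luminosity: {d2L_dE1_dE2.sum() * dE1 * dE2:.3e} m^-2")
```

Since the table is accumulated at every timestep and only reduced and written at the user-specified intervals, each output iteration contains the luminosity integrated from the start of the run up to that iteration.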
--------- Co-authored-by: Remi Lehe --- Docs/source/usage/parameters.rst | 46 ++ Examples/Tests/diff_lumi_diag/CMakeLists.txt | 5 +- Examples/Tests/diff_lumi_diag/analysis.py | 57 ++- Examples/Tests/diff_lumi_diag/inputs_base_3d | 17 +- .../Diagnostics/ReducedDiags/CMakeLists.txt | 1 + .../ReducedDiags/DifferentialLuminosity2D.H | 70 +++ .../ReducedDiags/DifferentialLuminosity2D.cpp | 401 ++++++++++++++++++ Source/Diagnostics/ReducedDiags/Make.package | 1 + .../ReducedDiags/MultiReducedDiags.cpp | 2 + 9 files changed, 583 insertions(+), 17 deletions(-) create mode 100644 Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.H create mode 100644 Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.cpp diff --git a/Docs/source/usage/parameters.rst b/Docs/source/usage/parameters.rst index 253f9ca0071..dc53ae5295f 100644 --- a/Docs/source/usage/parameters.rst +++ b/Docs/source/usage/parameters.rst @@ -3626,6 +3626,52 @@ This shifts analysis from post-processing to runtime calculation of reduction op * ``.bin_min`` (`float`, in eV) The minimum value of :math:`\mathcal{E}^*` for which the differential luminosity is computed. + * ``DifferentialLuminosity2D`` + This type computes the two-dimensional differential luminosity between two species, defined as: + + .. math:: + + \frac{d^2\mathcal{L}}{dE_1 dE_2}(E_1, E_2, t) = \int_0^t dt'\int d\boldsymbol{x}\, \int d\boldsymbol{p}_1 \int d\boldsymbol{p}_2\; + \sqrt{ |\boldsymbol{v}_1 - \boldsymbol{v}_2|^2 - |\boldsymbol{v}_1\times\boldsymbol{v}_2|^2/c^2} \\ + f_1(\boldsymbol{x}, \boldsymbol{p}_1, t')f_2(\boldsymbol{x}, \boldsymbol{p}_2, t') \delta(E_1 - E_1(\boldsymbol{p}_1)) \delta(E_2 - E_2(\boldsymbol{p}_2)) + + where :math:`f_i` is the distribution function of species :math:`i` + (normalized such that :math:`\int \int f(\boldsymbol{x}, \boldsymbol{p}, t)\, d\boldsymbol{x} d\boldsymbol{p} = N` + is the number of particles in species :math:`i` at time :math:`t`), + :math:`\boldsymbol{p}_i` and :math:`E_i (\boldsymbol{p}_i) = \sqrt{m_i^2c^4 + c^2 |\boldsymbol{p}_i|^2}` + are, respectively, the momentum and the energy of a particle of the :math:`i`-th species. + The 2D differential luminosity is given in units of :math:`\text{m}^{-2}.\text{eV}^{-2}`. + + * ``.species`` (`list of two strings`) + The names of the two species for which the differential luminosity is computed. + + * ``.bin_number_1`` (`int` > 0) + The number of bins in energy :math:`E_1` + + * ``.bin_max_1`` (`float`, in eV) + The maximum value of :math:`E_1` for which the 2D differential luminosity is computed. + + * ``.bin_min_1`` (`float`, in eV) + The minimum value of :math:`E_1` for which the 2D differential luminosity is computed. + + * ``.bin_number_2`` (`int` > 0) + The number of bins in energy :math:`E_2` + + * ``.bin_max_2`` (`float`, in eV) + The maximum value of :math:`E_2` for which the 2D differential luminosity is computed. + + * ``.bin_min_2`` (`float`, in eV) + The minimum value of :math:`E_2` for which the 2D differential luminosity is computed. + + * ``.file_min_digits`` (`int`) optional (default `6`) + The minimum number of digits used for the iteration number appended to the diagnostic file names. + + The output is a ```` folder containing a set of openPMD files. + The values of the diagnostic are stored in a record labeled `d2L_dE1_dE2`. + An example input file and a Python script for loading the output of + the DifferentialLuminosity2D reduced diagnostic + are given in ``Examples/Tests/diff_lumi_diag/``.
+ * ``Timestep`` This type outputs the simulation's physical timestep (in seconds) at each mesh refinement level. diff --git a/Examples/Tests/diff_lumi_diag/CMakeLists.txt b/Examples/Tests/diff_lumi_diag/CMakeLists.txt index f16449a976c..9a4e58d0e62 100644 --- a/Examples/Tests/diff_lumi_diag/CMakeLists.txt +++ b/Examples/Tests/diff_lumi_diag/CMakeLists.txt @@ -1,6 +1,6 @@ # Add tests (alphabetical order) ############################################## # - +if(WarpX_FFT) add_warpx_test( test_3d_diff_lumi_diag_leptons # name 3 # dims @@ -10,7 +10,9 @@ add_warpx_test( "analysis_default_regression.py --path diags/diag1000080 --rtol 1e-2" # checksum OFF # dependency ) +endif() +if(WarpX_FFT) add_warpx_test( test_3d_diff_lumi_diag_photons # name 3 # dims @@ -20,3 +22,4 @@ add_warpx_test( "analysis_default_regression.py --path diags/diag1000080 --rtol 1e-2" # checksum OFF # dependency ) +endif() diff --git a/Examples/Tests/diff_lumi_diag/analysis.py b/Examples/Tests/diff_lumi_diag/analysis.py index cadb21023ab..f8ed5f79779 100755 --- a/Examples/Tests/diff_lumi_diag/analysis.py +++ b/Examples/Tests/diff_lumi_diag/analysis.py @@ -5,15 +5,20 @@ # In that case, the differential luminosity can be calculated analytically. import os +import re import numpy as np -from read_raw_data import read_reduced_diags_histogram +from openpmd_viewer import OpenPMDTimeSeries -# Extract the differential luminosity from the file -_, _, E_bin, bin_data = read_reduced_diags_histogram( - "./diags/reducedfiles/DifferentialLuminosity_beam1_beam2.txt" -) -dL_dE_sim = bin_data[-1] # Differential luminosity at the end of the simulation +# Extract the 1D differential luminosity from the file +filename = "./diags/reducedfiles/DifferentialLuminosity_beam1_beam2.txt" +with open(filename) as f: + # First line: header, contains the energies + line = f.readline() + E_bin = np.array(list(map(float, re.findall("=(.*?)\(", line)))) +data = np.loadtxt(filename) +dE_bin = E_bin[1] - E_bin[0] +dL_dE_sim = data[-1, 2:] # Differential luminosity at the end of the simulation # Beam parameters N = 1.2e10 @@ -33,21 +38,47 @@ * np.exp(-((E_bin - 2 * E_beam) ** 2) / (2 * sigma_E**2)) ) +# Extract the 2D differential luminosity from the file +series = OpenPMDTimeSeries("./diags/reducedfiles/DifferentialLuminosity2d_beam1_beam2/") +d2L_dE1_dE2_sim, info = series.get_field("d2L_dE1_dE2", iteration=80) + +# Compute the analytical 2D differential luminosity for 2 Gaussian beams +assert info.axes[0] == "E2" +assert info.axes[1] == "E1" +E2, E1 = np.meshgrid(info.E2, info.E1, indexing="ij") +d2L_dE1_dE2_th = ( + N**2 + / (2 * (2 * np.pi) ** 2 * sigma_x * sigma_y * sigma_E1 * sigma_E2) + * np.exp( + -((E1 - E_beam) ** 2) / (2 * sigma_E1**2) + - (E2 - E_beam) ** 2 / (2 * sigma_E2**2) + ) +) + # Extract test name from path test_name = os.path.split(os.getcwd())[1] print("test_name", test_name) # Pick tolerance if "leptons" in test_name: - tol = 1e-2 + tol1 = 0.02 + tol2 = 0.04 elif "photons" in test_name: # In the photons case, the particles are # initialized from a density distribution ; # tolerance is larger due to lower particle statistics - tol = 6e-2 + tol1 = 0.021 + tol2 = 0.06 + +# Check that the 1D diagnostic and analytical result match +error1 = abs(dL_dE_sim - dL_dE_th).max() / abs(dL_dE_th).max() +print("Relative error: ", error1) +print("Tolerance: ", tol1) + +# Check that the 2D and 1D diagnostics match +error2 = abs(d2L_dE1_dE2_sim - d2L_dE1_dE2_th).max() / abs(d2L_dE1_dE2_th).max() +print("Relative error: ", error2) +print("Tolerance: ", 
tol2) -# Check that the simulation result and analytical result match -error = abs(dL_dE_sim - dL_dE_th).max() / abs(dL_dE_th).max() -print("Relative error: ", error) -print("Tolerance: ", tol) -assert error < tol +assert error1 < tol1 +assert error2 < tol2 diff --git a/Examples/Tests/diff_lumi_diag/inputs_base_3d b/Examples/Tests/diff_lumi_diag/inputs_base_3d index ba3c823b52b..0c65850e82b 100644 --- a/Examples/Tests/diff_lumi_diag/inputs_base_3d +++ b/Examples/Tests/diff_lumi_diag/inputs_base_3d @@ -28,6 +28,7 @@ my_constants.dt = sigmaz/clight/10. ################################# ####### GENERAL PARAMETERS ###### ################################# + stop_time = T amr.n_cell = nx ny nz amr.max_grid_size = 128 @@ -93,11 +94,21 @@ diag1.dump_last_timestep = 1 diag1.species = beam1 beam2 # REDUCED -warpx.reduced_diags_names = DifferentialLuminosity_beam1_beam2 +warpx.reduced_diags_names = DifferentialLuminosity_beam1_beam2 DifferentialLuminosity2d_beam1_beam2 DifferentialLuminosity_beam1_beam2.type = DifferentialLuminosity -DifferentialLuminosity_beam1_beam2.intervals = 5 +DifferentialLuminosity_beam1_beam2.intervals = 80 DifferentialLuminosity_beam1_beam2.species = beam1 beam2 DifferentialLuminosity_beam1_beam2.bin_number = 128 DifferentialLuminosity_beam1_beam2.bin_max = 2.1*beam_energy_eV -DifferentialLuminosity_beam1_beam2.bin_min = 1.9*beam_energy_eV +DifferentialLuminosity_beam1_beam2.bin_min = 0 + +DifferentialLuminosity2d_beam1_beam2.type = DifferentialLuminosity2D +DifferentialLuminosity2d_beam1_beam2.intervals = 80 +DifferentialLuminosity2d_beam1_beam2.species = beam1 beam2 +DifferentialLuminosity2d_beam1_beam2.bin_number_1 = 128 +DifferentialLuminosity2d_beam1_beam2.bin_max_1 = 1.45*beam_energy_eV +DifferentialLuminosity2d_beam1_beam2.bin_min_1 = 0 +DifferentialLuminosity2d_beam1_beam2.bin_number_2 = 128 +DifferentialLuminosity2d_beam1_beam2.bin_max_2 = 1.45*beam_energy_eV +DifferentialLuminosity2d_beam1_beam2.bin_min_2 = 0 diff --git a/Source/Diagnostics/ReducedDiags/CMakeLists.txt b/Source/Diagnostics/ReducedDiags/CMakeLists.txt index 4fbfc489aba..c548553b875 100644 --- a/Source/Diagnostics/ReducedDiags/CMakeLists.txt +++ b/Source/Diagnostics/ReducedDiags/CMakeLists.txt @@ -6,6 +6,7 @@ foreach(D IN LISTS WarpX_DIMS) ChargeOnEB.cpp ColliderRelevant.cpp DifferentialLuminosity.cpp + DifferentialLuminosity2D.cpp FieldEnergy.cpp FieldMaximum.cpp FieldMomentum.cpp diff --git a/Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.H b/Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.H new file mode 100644 index 00000000000..7ffefec324e --- /dev/null +++ b/Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.H @@ -0,0 +1,70 @@ +/* Copyright 2023 The WarpX Community + * + * This file is part of WarpX. + * + * Authors: Arianna Formenti, Remi Lehe + * License: BSD-3-Clause-LBNL + */ + +#ifndef WARPX_DIAGNOSTICS_REDUCEDDIAGS_DIFFERENTIALLUMINOSITY2D_H_ +#define WARPX_DIAGNOSTICS_REDUCEDDIAGS_DIFFERENTIALLUMINOSITY2D_H_ + +#include "ReducedDiags.H" +#include +#include + +#include +#include +#include + +/** + * This class contains the differential luminosity diagnostics. 
+ */ +class DifferentialLuminosity2D : public ReducedDiags +{ +public: + + /** + * constructor + * @param[in] rd_name reduced diags names + */ + DifferentialLuminosity2D(const std::string& rd_name); + + /// File type + std::string m_openpmd_backend {"default"}; + + /// minimum number of digits for file suffix (file-based only supported for now) */ + int m_file_min_digits = 6; + + /// name of the two colliding species + std::vector m_beam_name; + + /// number of bins for the c.o.m. energy of the 2 species + int m_bin_num_1; + int m_bin_num_2; + + /// max and min bin values + amrex::Real m_bin_max_1; + amrex::Real m_bin_min_1; + amrex::Real m_bin_max_2; + amrex::Real m_bin_min_2; + + /// bin size + amrex::Real m_bin_size_1; + amrex::Real m_bin_size_2; + + /// output data + amrex::TableData m_h_data_2D; + + void ComputeDiags(int step) final; + + void WriteToFile (int step) const final; + +private: + + /// output table in which to accumulate the luminosity across timesteps + amrex::TableData m_d_data_2D; + +}; + +#endif // WARPX_DIAGNOSTICS_REDUCEDDIAGS_DIFFERENTIALLUMINOSITY2D_H_ diff --git a/Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.cpp b/Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.cpp new file mode 100644 index 00000000000..b3968b9fb02 --- /dev/null +++ b/Source/Diagnostics/ReducedDiags/DifferentialLuminosity2D.cpp @@ -0,0 +1,401 @@ +/* Copyright 2023 The WarpX Community + * + * This file is part of WarpX. + * + * Authors: Arianna Formenti, Yinjian Zhao, Remi Lehe + * License: BSD-3-Clause-LBNL + */ +#include "DifferentialLuminosity2D.H" + +#include "Diagnostics/ReducedDiags/ReducedDiags.H" +#include "Diagnostics/OpenPMDHelpFunction.H" +#include "Particles/MultiParticleContainer.H" +#include "Particles/Pusher/GetAndSetPosition.H" +#include "Particles/SpeciesPhysicalProperties.H" +#include "Particles/WarpXParticleContainer.H" +#include "Utils/ParticleUtils.H" +#include "Utils/Parser/ParserUtils.H" +#include "Utils/WarpXConst.H" +#include "Utils/TextMsg.H" +#include "Utils/WarpXProfilerWrapper.H" +#include "WarpX.H" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef WARPX_USE_OPENPMD +# include +#endif + +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +using ParticleType = WarpXParticleContainer::ParticleType; +using ParticleTileType = WarpXParticleContainer::ParticleTileType; +using ParticleTileDataType = ParticleTileType::ParticleTileDataType; +using ParticleBins = amrex::DenseBins; +using index_type = ParticleBins::index_type; + +#ifdef WARPX_USE_OPENPMD +namespace io = openPMD; +#endif + +using namespace amrex; + +DifferentialLuminosity2D::DifferentialLuminosity2D (const std::string& rd_name) +: ReducedDiags{rd_name} +{ + // RZ coordinate is not supported +#if (defined WARPX_DIM_RZ) + WARPX_ABORT_WITH_MESSAGE( + "DifferentialLuminosity2D diagnostics does not work in RZ geometry."); +#endif + + // read colliding species names - must be 2 + amrex::ParmParse pp_rd_name(m_rd_name); + pp_rd_name.getarr("species", m_beam_name); + + WARPX_ALWAYS_ASSERT_WITH_MESSAGE( + m_beam_name.size() == 2u, + "DifferentialLuminosity2D diagnostics must involve exactly two species"); + + pp_rd_name.query("openpmd_backend", m_openpmd_backend); + pp_rd_name.query("file_min_digits", m_file_min_digits); + // pick first available backend if 
default is chosen + if( m_openpmd_backend == "default" ) { + m_openpmd_backend = WarpXOpenPMDFileType(); + } + pp_rd_name.add("openpmd_backend", m_openpmd_backend); + + // read bin parameters for species 1 + int bin_num_1 = 0; + amrex::Real bin_max_1 = 0.0_rt, bin_min_1 = 0.0_rt; + utils::parser::getWithParser(pp_rd_name, "bin_number_1", bin_num_1); + utils::parser::getWithParser(pp_rd_name, "bin_max_1", bin_max_1); + utils::parser::getWithParser(pp_rd_name, "bin_min_1", bin_min_1); + m_bin_num_1 = bin_num_1; + m_bin_max_1 = bin_max_1; + m_bin_min_1 = bin_min_1; + m_bin_size_1 = (bin_max_1 - bin_min_1) / bin_num_1; + + // read bin parameters for species 2 + int bin_num_2 = 0; + amrex::Real bin_max_2 = 0.0_rt, bin_min_2 = 0.0_rt; + utils::parser::getWithParser(pp_rd_name, "bin_number_2", bin_num_2); + utils::parser::getWithParser(pp_rd_name, "bin_max_2", bin_max_2); + utils::parser::getWithParser(pp_rd_name, "bin_min_2", bin_min_2); + m_bin_num_2 = bin_num_2; + m_bin_max_2 = bin_max_2; + m_bin_min_2 = bin_min_2; + m_bin_size_2 = (bin_max_2 - bin_min_2) / bin_num_2; + + // resize data array on the host + Array tlo{0,0}; // lower bounds + Array thi{m_bin_num_1-1, m_bin_num_2-1}; // inclusive upper bounds + m_h_data_2D.resize(tlo, thi, The_Pinned_Arena()); + + auto const& h_table_data = m_h_data_2D.table(); + // initialize data on the host + for (int i = tlo[0]; i <= thi[0]; ++i) { + for (int j = tlo[1]; j <= thi[1]; ++j) { + h_table_data(i,j) = 0.0_rt; + } + } + + // resize data on the host + m_d_data_2D.resize(tlo, thi); + // copy data from host to device + m_d_data_2D.copy(m_h_data_2D); + Gpu::streamSynchronize(); +} // end constructor + +void DifferentialLuminosity2D::ComputeDiags (int step) +{ +#if defined(WARPX_DIM_RZ) + amrex::ignore_unused(step); +#else + + WARPX_PROFILE("DifferentialLuminosity2D::ComputeDiags"); + + // Since this diagnostic *accumulates* the luminosity in the + // table m_d_data_2D, we add contributions at *each timestep*, but + // we only write the data to file at intervals specified by the user. + const Real c_sq = PhysConst::c*PhysConst::c; + const Real c_over_qe = PhysConst::c/PhysConst::q_e; + + // output table data + auto d_table = m_d_data_2D.table(); + + // get a reference to WarpX instance + auto& warpx = WarpX::GetInstance(); + const Real dt = warpx.getdt(0); + // get cell volume + Geometry const & geom = warpx.Geom(0); + const Real dV = AMREX_D_TERM(geom.CellSize(0), *geom.CellSize(1), *geom.CellSize(2)); + + // declare local variables + auto const num_bins_1 = m_bin_num_1; + Real const bin_min_1 = m_bin_min_1; + Real const bin_size_1 = m_bin_size_1; + auto const num_bins_2 = m_bin_num_2; + Real const bin_min_2 = m_bin_min_2; + Real const bin_size_2 = m_bin_size_2; + + // get MultiParticleContainer class object + const MultiParticleContainer& mypc = warpx.GetPartContainer(); + + auto& species_1 = mypc.GetParticleContainerFromName(m_beam_name[0]); + auto& species_2 = mypc.GetParticleContainerFromName(m_beam_name[1]); + + const ParticleReal m1 = species_1.getMass(); + const ParticleReal m2 = species_2.getMass(); + + // Enable tiling + amrex::MFItInfo info; + if (amrex::Gpu::notInLaunchRegion()) { info.EnableTiling(WarpXParticleContainer::tile_size); } + + int const nlevs = std::max(0, species_1.finestLevel()+1); // species_1 ? 
+ for (int lev = 0; lev < nlevs; ++lev) { +#ifdef AMREX_USE_OMP +#pragma omp parallel if (amrex::Gpu::notInLaunchRegion()) +#endif + + for (amrex::MFIter mfi = species_1.MakeMFIter(lev, info); mfi.isValid(); ++mfi){ + + ParticleTileType& ptile_1 = species_1.ParticlesAt(lev, mfi); + ParticleTileType& ptile_2 = species_2.ParticlesAt(lev, mfi); + + ParticleBins bins_1 = ParticleUtils::findParticlesInEachCell( lev, mfi, ptile_1 ); + ParticleBins bins_2 = ParticleUtils::findParticlesInEachCell( lev, mfi, ptile_2 ); + + // species 1 + const auto soa_1 = ptile_1.getParticleTileData(); + index_type* AMREX_RESTRICT indices_1 = bins_1.permutationPtr(); + index_type const* AMREX_RESTRICT cell_offsets_1 = bins_1.offsetsPtr(); + + // extract particle data of species 1 in the current tile/box + amrex::ParticleReal * const AMREX_RESTRICT w1 = soa_1.m_rdata[PIdx::w]; + amrex::ParticleReal * const AMREX_RESTRICT u1x = soa_1.m_rdata[PIdx::ux]; // u=v*gamma=p/m + amrex::ParticleReal * const AMREX_RESTRICT u1y = soa_1.m_rdata[PIdx::uy]; + amrex::ParticleReal * const AMREX_RESTRICT u1z = soa_1.m_rdata[PIdx::uz]; + bool const species1_is_photon = species_1.AmIA(); + + // same for species 2 + const auto soa_2 = ptile_2.getParticleTileData(); + index_type* AMREX_RESTRICT indices_2 = bins_2.permutationPtr(); + index_type const* AMREX_RESTRICT cell_offsets_2 = bins_2.offsetsPtr(); + + amrex::ParticleReal * const AMREX_RESTRICT w2 = soa_2.m_rdata[PIdx::w]; + amrex::ParticleReal * const AMREX_RESTRICT u2x = soa_2.m_rdata[PIdx::ux]; + amrex::ParticleReal * const AMREX_RESTRICT u2y = soa_2.m_rdata[PIdx::uy]; + amrex::ParticleReal * const AMREX_RESTRICT u2z = soa_2.m_rdata[PIdx::uz]; + bool const species2_is_photon = species_2.AmIA(); + + // Extract low-level (cell-level) data + auto const n_cells = static_cast(bins_1.numBins()); + + // Loop over cells + amrex::ParallelFor( n_cells, + [=] AMREX_GPU_DEVICE (int i_cell) noexcept + { + + // The particles from species1 that are in the cell `i_cell` are + // given by the `indices_1[cell_start_1:cell_stop_1]` + index_type const cell_start_1 = cell_offsets_1[i_cell]; + index_type const cell_stop_1 = cell_offsets_1[i_cell+1]; + // Same for species 2 + index_type const cell_start_2 = cell_offsets_2[i_cell]; + index_type const cell_stop_2 = cell_offsets_2[i_cell+1]; + + for(index_type i_1=cell_start_1; i_1=num_bins_1 ) { continue; } // discard if out-of-range + + // determine energy bin of particle 2 + int const bin_2 = int(Math::floor((E_2-bin_min_2)/bin_size_2)); + if ( bin_2<0 || bin_2>=num_bins_2 ) { continue; } // discard if out-of-range + + Real const inv_p1t = 1.0_rt/p1t; + Real const inv_p2t = 1.0_rt/p2t; + + Real const beta1_sq = (p1x*p1x + p1y*p1y + p1z*p1z) * inv_p1t*inv_p1t; + Real const beta2_sq = (p2x*p2x + p2y*p2y + p2z*p2z) * inv_p2t*inv_p2t; + Real const beta1_dot_beta2 = (p1x*p2x + p1y*p2y + p1z*p2z) * inv_p1t*inv_p2t; + + // Here we use the fact that: + // (v1 - v2)^2 = v1^2 + v2^2 - 2 v1.v2 + // and (v1 x v2)^2 = v1^2 v2^2 - (v1.v2)^2 + // we also use beta=v/c instead of v + Real const radicand = beta1_sq + beta2_sq - 2*beta1_dot_beta2 - beta1_sq*beta2_sq + beta1_dot_beta2*beta1_dot_beta2; + + Real const d2L_dE1_dE2 = PhysConst::c * std::sqrt( radicand ) * w1[j_1] * w2[j_2] / (dV * bin_size_1 * bin_size_2) * dt; // m^-2 eV^-2 + + amrex::Real &data = d_table(bin_1, bin_2); + amrex::HostDevice::Atomic::Add(&data, d2L_dE1_dE2); + + } // particles species 2 + } // particles species 1 + }); // cells + } // boxes + } // levels + + // Only write to file at intervals 
specified by the user. + // At these intervals, the data needs to ready on the CPU, + // so we copy it from the GPU to the CPU and reduce across MPI ranks. + if (m_intervals.contains(step+1)) { + + // Copy data from GPU memory + m_h_data_2D.copy(m_d_data_2D); + + // reduced sum over mpi ranks + const int size = static_cast (m_d_data_2D.size()); + ParallelDescriptor::ReduceRealSum + (m_h_data_2D.table().p, size, ParallelDescriptor::IOProcessorNumber()); + } + + // Return for all that are not IO processor + if ( !ParallelDescriptor::IOProcessor() ) { return; } + +#endif // not RZ +} // end void DifferentialLuminosity2D::ComputeDiags + +void DifferentialLuminosity2D::WriteToFile (int step) const +{ + // Judge if the diags should be done at this step + if (!m_intervals.contains(step+1)) { return; } + +#ifdef WARPX_USE_OPENPMD + // only IO processor writes + if ( !ParallelDescriptor::IOProcessor() ) { return; } + + // TODO: support different filename templates + std::string filename = "openpmd"; + // TODO: support also group-based encoding + const std::string fileSuffix = std::string("_%0") + std::to_string(m_file_min_digits) + std::string("T"); + filename = filename.append(fileSuffix).append(".").append(m_openpmd_backend); + + // transform paths for Windows + #ifdef _WIN32 + const std::string filepath = openPMD::auxiliary::replace_all( + m_path + m_rd_name + "/" + filename, "/", "\\"); + #else + const std::string filepath = m_path + m_rd_name + "/" + filename; + #endif + + // Create the OpenPMD series + auto series = io::Series( + filepath, + io::Access::CREATE); + auto i = series.iterations[step + 1]; + // record + auto f_mesh = i.meshes["d2L_dE1_dE2"]; // m^-2 eV^-2 + f_mesh.setUnitDimension({ + {io::UnitDimension::L, -6}, + {io::UnitDimension::M, -2}, + {io::UnitDimension::T, 4} + }); + + // record components + auto data = f_mesh[io::RecordComponent::SCALAR]; + + // meta data + f_mesh.setAxisLabels({"E2", "E1"}); // eV, eV + std::vector< double > const& gridGlobalOffset = {m_bin_min_2, m_bin_min_1}; + f_mesh.setGridGlobalOffset(gridGlobalOffset); + f_mesh.setGridSpacing({m_bin_size_2, m_bin_size_1}); + + data.setPosition({0.5, 0.5}); + + auto dataset = io::Dataset( + io::determineDatatype(), + {static_cast(m_bin_num_2), static_cast(m_bin_num_1)}); + data.resetDataset(dataset); + + // Get time at level 0 + auto & warpx = WarpX::GetInstance(); + auto const time = warpx.gett_new(0); + i.setTime(time); + + auto const& h_table_data = m_h_data_2D.table(); + data.storeChunkRaw( + h_table_data.p, + {0, 0}, + {static_cast(m_bin_num_2), static_cast(m_bin_num_1)}); + + series.flush(); + i.close(); + series.close(); +#else + amrex::ignore_unused(step); + WARPX_ABORT_WITH_MESSAGE("DifferentialLuminosity2D: Needs openPMD-api compiled into WarpX, but was not found!"); +#endif +} diff --git a/Source/Diagnostics/ReducedDiags/Make.package b/Source/Diagnostics/ReducedDiags/Make.package index 4d2e4d7def9..98fa093e2df 100644 --- a/Source/Diagnostics/ReducedDiags/Make.package +++ b/Source/Diagnostics/ReducedDiags/Make.package @@ -4,6 +4,7 @@ CEXE_sources += BeamRelevant.cpp CEXE_sources += ChargeOnEB.cpp CEXE_sources += ColliderRelevant.cpp CEXE_sources += DifferentialLuminosity.cpp +CEXE_sources += DifferentialLuminosity2D.cpp CEXE_sources += FieldEnergy.cpp CEXE_sources += FieldMaximum.cpp CEXE_sources += FieldMomentum.cpp diff --git a/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp b/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp index 0ce18174111..e4c982f7323 100644 --- 
a/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp +++ b/Source/Diagnostics/ReducedDiags/MultiReducedDiags.cpp @@ -10,6 +10,7 @@ #include "ChargeOnEB.H" #include "ColliderRelevant.H" #include "DifferentialLuminosity.H" +#include "DifferentialLuminosity2D.H" #include "FieldEnergy.H" #include "FieldMaximum.H" #include "FieldMomentum.H" @@ -58,6 +59,7 @@ MultiReducedDiags::MultiReducedDiags () {"ChargeOnEB", [](CS s){return std::make_unique(s);}}, {"ColliderRelevant", [](CS s){return std::make_unique(s);}}, {"DifferentialLuminosity",[](CS s){return std::make_unique(s);}}, + {"DifferentialLuminosity2D",[](CS s){return std::make_unique(s);}}, {"ParticleEnergy", [](CS s){return std::make_unique(s);}}, {"ParticleExtrema", [](CS s){return std::make_unique(s);}}, {"ParticleHistogram", [](CS s){return std::make_unique(s);}}, From f4ece6e746f1d97b7b5f2599fc6ecfd0d68f556f Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Fri, 14 Feb 2025 18:40:44 +0100 Subject: [PATCH 44/58] WarpX class: move SetDotMask to anonymous namespace in WarpX.cpp (#5644) `SetDotMask`, a member function of the WarpX class, is only used inside the member function `getFieldDotMaskPointer` . This PR turns it into a pure function and moves it into an anonymous namespace inside `WarpX.cpp`. This (slightly) simplifies the WarpX class header. --- .../ImplicitSolvers/WarpXSolverVec.cpp | 4 +- Source/WarpX.H | 9 +--- Source/WarpX.cpp | 48 +++++++++++-------- 3 files changed, 30 insertions(+), 31 deletions(-) diff --git a/Source/FieldSolver/ImplicitSolvers/WarpXSolverVec.cpp b/Source/FieldSolver/ImplicitSolvers/WarpXSolverVec.cpp index f091353a4df..05b5f1caa0c 100644 --- a/Source/FieldSolver/ImplicitSolvers/WarpXSolverVec.cpp +++ b/Source/FieldSolver/ImplicitSolvers/WarpXSolverVec.cpp @@ -149,7 +149,7 @@ void WarpXSolverVec::Copy ( FieldType a_array_type, for (int lev = 0; lev < m_num_amr_levels; ++lev) { if (m_array_type != FieldType::None) { for (int n = 0; n < 3; ++n) { - const amrex::iMultiFab* dotMask = m_WarpX->getFieldDotMaskPointer(m_array_type,lev,n); + const amrex::iMultiFab* dotMask = m_WarpX->getFieldDotMaskPointer(m_array_type, lev, ablastr::fields::Direction{n}); auto rtmp = amrex::MultiFab::Dot( *dotMask, *m_array_vec[lev][n], 0, *a_X.getArrayVec()[lev][n], 0, 1, 0, local); @@ -157,7 +157,7 @@ void WarpXSolverVec::Copy ( FieldType a_array_type, } } if (m_scalar_type != FieldType::None) { - const amrex::iMultiFab* dotMask = m_WarpX->getFieldDotMaskPointer(m_scalar_type,lev,0); + const amrex::iMultiFab* dotMask = m_WarpX->getFieldDotMaskPointer(m_scalar_type,lev, ablastr::fields::Direction{0}); auto rtmp = amrex::MultiFab::Dot( *dotMask, *m_scalar_vec[lev], 0, *a_X.getScalarVec()[lev], 0, 1, 0, local); diff --git a/Source/WarpX.H b/Source/WarpX.H index ce4a846eace..ddfd545db74 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -412,14 +412,7 @@ public: * Get pointer to the amrex::MultiFab containing the dotMask for the specified field */ [[nodiscard]] const amrex::iMultiFab* - getFieldDotMaskPointer (warpx::fields::FieldType field_type, int lev, int dir) const; - - /** - * \brief - * Set the dotMask container - */ - void SetDotMask( std::unique_ptr& field_dotMask, - std::string const & field_name, int lev, int dir ) const; + getFieldDotMaskPointer (warpx::fields::FieldType field_type, int lev, ablastr::fields::Direction dir) const; [[nodiscard]] bool DoPML () const {return do_pml;} [[nodiscard]] bool DoFluidSpecies () const {return do_fluid_species;} diff --git a/Source/WarpX.cpp b/Source/WarpX.cpp index 
a17c7ff432e..4a0633369ce 100644 --- a/Source/WarpX.cpp +++ b/Source/WarpX.cpp @@ -200,6 +200,27 @@ namespace std::any_of(field_boundary_hi.begin(), field_boundary_hi.end(), is_pml); return is_any_pml; } + + /** + * \brief + * Set the dotMask container + */ + void SetDotMask( std::unique_ptr& field_dotMask, + ablastr::fields::ConstScalarField const& field, + amrex::Periodicity const& periodicity) + + { + // Define the dot mask for this field_type needed to properly compute dotProduct() + // for field values that have shared locations on different MPI ranks + if (field_dotMask != nullptr) { return; } + + const auto& this_ba = field->boxArray(); + const auto tmp = amrex::MultiFab{ + this_ba, field->DistributionMap(), + 1, 0, amrex::MFInfo().SetAlloc(false)}; + + field_dotMask = tmp.OwnerMask(periodicity); + } } void WarpX::MakeWarpX () @@ -3316,40 +3337,25 @@ WarpX::MakeDistributionMap (int lev, amrex::BoxArray const& ba) } const amrex::iMultiFab* -WarpX::getFieldDotMaskPointer ( FieldType field_type, int lev, int dir ) const +WarpX::getFieldDotMaskPointer ( FieldType field_type, int lev, ablastr::fields::Direction dir ) const { + const auto periodicity = Geom(lev).periodicity(); switch(field_type) { case FieldType::Efield_fp : - SetDotMask( Efield_dotMask[lev][dir], "Efield_fp", lev, dir ); + ::SetDotMask( Efield_dotMask[lev][dir], m_fields.get("Efield_fp", dir, lev), periodicity); return Efield_dotMask[lev][dir].get(); case FieldType::Bfield_fp : - SetDotMask( Bfield_dotMask[lev][dir], "Bfield_fp", lev, dir ); + ::SetDotMask( Bfield_dotMask[lev][dir], m_fields.get("Bfield_fp", dir, lev), periodicity); return Bfield_dotMask[lev][dir].get(); case FieldType::vector_potential_fp : - SetDotMask( Afield_dotMask[lev][dir], "vector_potential_fp", lev, dir ); + ::SetDotMask( Afield_dotMask[lev][dir], m_fields.get("vector_potential_fp", dir, lev), periodicity); return Afield_dotMask[lev][dir].get(); case FieldType::phi_fp : - SetDotMask( phi_dotMask[lev], "phi_fp", lev, 0 ); + ::SetDotMask( phi_dotMask[lev], m_fields.get("phi_fp", dir, lev), periodicity); return phi_dotMask[lev].get(); default: WARPX_ABORT_WITH_MESSAGE("Invalid field type for dotMask"); return Efield_dotMask[lev][dir].get(); } } - -void WarpX::SetDotMask( std::unique_ptr& field_dotMask, - std::string const & field_name, int lev, int dir ) const -{ - // Define the dot mask for this field_type needed to properly compute dotProduct() - // for field values that have shared locations on different MPI ranks - if (field_dotMask != nullptr) { return; } - - ablastr::fields::ConstVectorField const& this_field = m_fields.get_alldirs(field_name,lev); - const amrex::BoxArray& this_ba = this_field[dir]->boxArray(); - const amrex::MultiFab tmp( this_ba, this_field[dir]->DistributionMap(), - 1, 0, amrex::MFInfo().SetAlloc(false) ); - const amrex::Periodicity& period = Geom(lev).periodicity(); - field_dotMask = tmp.OwnerMask(period); - -} From 17692a04f5e4f24c4feb85013ff5da25523ee713 Mon Sep 17 00:00:00 2001 From: Edoardo Zoni <59625522+EZoni@users.noreply.github.com> Date: Fri, 14 Feb 2025 09:45:19 -0800 Subject: [PATCH 45/58] Update to latest AMReX version (#5669) Update to latest AMReX version to pull the latest bug fix in https://github.com/AMReX-Codes/amrex/pull/4333. 
--- .github/workflows/cuda.yml | 2 +- cmake/dependencies/AMReX.cmake | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/cuda.yml b/.github/workflows/cuda.yml index 6e87134904f..3b65f406728 100644 --- a/.github/workflows/cuda.yml +++ b/.github/workflows/cuda.yml @@ -127,7 +127,7 @@ jobs: which nvcc || echo "nvcc not in PATH!" git clone https://github.com/AMReX-Codes/amrex.git ../amrex - cd ../amrex && git checkout --detach 198da4879a63f1bc8c4e8d674bf9185525318f61 && cd - + cd ../amrex && git checkout --detach 275f55f25fec350dfedb54f75a19200b52ced93f && cd - make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_FFT=TRUE USE_CCACHE=TRUE -j 4 ccache -s diff --git a/cmake/dependencies/AMReX.cmake b/cmake/dependencies/AMReX.cmake index 7f5546a931b..813734282c7 100644 --- a/cmake/dependencies/AMReX.cmake +++ b/cmake/dependencies/AMReX.cmake @@ -294,7 +294,7 @@ set(WarpX_amrex_src "" set(WarpX_amrex_repo "https://github.com/AMReX-Codes/amrex.git" CACHE STRING "Repository URI to pull and build AMReX from if(WarpX_amrex_internal)") -set(WarpX_amrex_branch "198da4879a63f1bc8c4e8d674bf9185525318f61" +set(WarpX_amrex_branch "275f55f25fec350dfedb54f75a19200b52ced93f" CACHE STRING "Repository branch for WarpX_amrex_repo if(WarpX_amrex_internal)") From 18578b963b7c2250201ce6d3984aff3185dc54e3 Mon Sep 17 00:00:00 2001 From: "S. Eric Clark" <25495882+clarkse@users.noreply.github.com> Date: Fri, 14 Feb 2025 16:36:59 -0800 Subject: [PATCH 46/58] Add external particle fields ohms law hybrid (#5275) This PR allows for the addition of external fields through the particle fields analytical interface. This is useful for splitting the fields into externally applied and self-generated components in the hybrid Ohm's law solver. --------- Signed-off-by: S.
Eric Clark <25495882+clarkse@users.noreply.github.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com> --- Docs/source/refs.bib | 10 + Docs/source/usage/parameters.rst | 21 + Examples/Tests/CMakeLists.txt | 1 + .../CMakeLists.txt | 24 ++ .../analysis_default_regression.py | 1 + ...d_ohm_solver_cylinder_compression_picmi.py | 393 ++++++++++++++++++ ...z_ohm_solver_cylinder_compression_picmi.py | 383 +++++++++++++++++ Python/pywarpx/HybridPICModel.py | 1 + Python/pywarpx/WarpX.py | 3 +- Python/pywarpx/__init__.py | 2 +- Python/pywarpx/fields.py | 99 +++++ Python/pywarpx/picmi.py | 77 ++++ ...ohm_solver_cylinder_compression_picmi.json | 20 + ...ohm_solver_cylinder_compression_picmi.json | 20 + .../FiniteDifferenceSolver/CMakeLists.txt | 1 + .../FiniteDifferenceSolver/ComputeCurlA.cpp | 306 ++++++++++++++ .../FiniteDifferenceSolver.H | 44 +- .../HybridPICModel/CMakeLists.txt | 1 + .../HybridPICModel/ExternalVectorPotential.H | 101 +++++ .../ExternalVectorPotential.cpp | 376 +++++++++++++++++ .../HybridPICModel/HybridPICModel.H | 44 +- .../HybridPICModel/HybridPICModel.cpp | 60 ++- .../HybridPICModel/Make.package | 1 + .../HybridPICSolveE.cpp | 169 ++++++-- .../FiniteDifferenceSolver/Make.package | 1 + .../FieldSolver/WarpXPushFieldsHybridPIC.cpp | 75 +++- Source/Fields.H | 4 + Source/Initialization/WarpXInitData.cpp | 89 +++- Source/Particles/Gather/GetExternalFields.H | 6 +- Source/Python/WarpX.cpp | 4 + Source/WarpX.H | 52 ++- Source/WarpX.cpp | 25 +- 32 files changed, 2336 insertions(+), 78 deletions(-) create mode 100644 Examples/Tests/ohm_solver_cylinder_compression/CMakeLists.txt create mode 120000 Examples/Tests/ohm_solver_cylinder_compression/analysis_default_regression.py create mode 100644 Examples/Tests/ohm_solver_cylinder_compression/inputs_test_3d_ohm_solver_cylinder_compression_picmi.py create mode 100644 Examples/Tests/ohm_solver_cylinder_compression/inputs_test_rz_ohm_solver_cylinder_compression_picmi.py create mode 100644 Regression/Checksum/benchmarks_json/test_3d_ohm_solver_cylinder_compression_picmi.json create mode 100644 Regression/Checksum/benchmarks_json/test_rz_ohm_solver_cylinder_compression_picmi.json create mode 100644 Source/FieldSolver/FiniteDifferenceSolver/ComputeCurlA.cpp create mode 100644 Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.H create mode 100644 Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.cpp diff --git a/Docs/source/refs.bib b/Docs/source/refs.bib index 6623bacd452..49f4658af4c 100644 --- a/Docs/source/refs.bib +++ b/Docs/source/refs.bib @@ -518,3 +518,13 @@ @article{Rhee1987 url = {https://doi.org/10.1063/1.1139314}, eprint = {https://pubs.aip.org/aip/rsi/article-pdf/58/2/240/19154912/240\_1\_online.pdf}, } + +@misc{holmstrom2013handlingvacuumregionshybrid, + title={Handling vacuum regions in a hybrid plasma solver}, + author={M. 
Holmstrom}, year={2013}, eprint={1301.0272}, archivePrefix={arXiv}, primaryClass={physics.space-ph}, url={https://arxiv.org/abs/1301.0272}, +} diff --git a/Docs/source/usage/parameters.rst b/Docs/source/usage/parameters.rst index dc53ae5295f..77f99044448 100644 --- a/Docs/source/usage/parameters.rst +++ b/Docs/source/usage/parameters.rst @@ -2537,6 +2537,27 @@ Maxwell solver: kinetic-fluid hybrid * ``hybrid_pic_model.substeps`` (`int`) optional (default ``10``) If ``algo.maxwell_solver`` is set to ``hybrid``, this sets the number of sub-steps to take during the B-field update. +* ``hybrid_pic_model.holmstrom_vacuum_region`` (`bool`) optional (default ``false``) + If ``algo.maxwell_solver`` is set to ``hybrid``, this sets the vacuum region handling of the generalized Ohm's Law to suppress vacuum fluctuations, following :cite:t:`param-holmstrom2013handlingvacuumregionshybrid`. + +* ``hybrid_pic_model.add_external_fields`` (`bool`) optional (default ``false``) + If ``algo.maxwell_solver`` is set to ``hybrid``, this sets the hybrid solver to use split external fields defined in the ``external_vector_potential`` inputs. + +* ``external_vector_potential.fields`` (list of `str`) optional (default ``empty``) + If ``hybrid_pic_model.add_external_fields`` is set to ``true``, this adds a list of names for external time-varying vector potentials to be added to the hybrid solver. + +* ``external_vector_potential..read_from_file`` (`bool`) optional (default ``false``) + If ``hybrid_pic_model.add_external_fields`` is set to ``true``, this flag determines whether to load an external field or use an implicit function to evaluate the time-varying field. + +* ``external_vector_potential..path`` (`str`) optional (default ``""``) + If ``external_vector_potential..read_from_file`` is set to ``true``, this sets the path to an openPMD file from which the external vector potential is loaded, in :math:`weber/m`. + +* ``external_vector_potential..A[x,y,z]_external_grid_function(x,y,z)`` (`str`) optional (default ``"0"``) + If ``external_vector_potential..read_from_file`` is set to ``false``, this sets the external vector potential to be populated by an implicit function (on the grid) in :math:`weber/m`. + +* ``external_vector_potential..A_time_external_grid_function(t)`` (`str`) optional (default ``"1"``) + This sets the relative strength of the external vector potential by a dimensionless implicit time function, which can compute the external B fields and E fields based on the value and first time derivative of the function. ..
note:: Based on results from :cite:t:`param-Stanier2020` it is recommended to use diff --git a/Examples/Tests/CMakeLists.txt b/Examples/Tests/CMakeLists.txt index 5ff1d4a9a70..b80e6158f49 100644 --- a/Examples/Tests/CMakeLists.txt +++ b/Examples/Tests/CMakeLists.txt @@ -41,6 +41,7 @@ add_subdirectory(nci_fdtd_stability) add_subdirectory(nci_psatd_stability) add_subdirectory(nodal_electrostatic) add_subdirectory(nuclear_fusion) +add_subdirectory(ohm_solver_cylinder_compression) add_subdirectory(ohm_solver_em_modes) add_subdirectory(ohm_solver_ion_beam_instability) add_subdirectory(ohm_solver_ion_Landau_damping) diff --git a/Examples/Tests/ohm_solver_cylinder_compression/CMakeLists.txt b/Examples/Tests/ohm_solver_cylinder_compression/CMakeLists.txt new file mode 100644 index 00000000000..c813d669fa6 --- /dev/null +++ b/Examples/Tests/ohm_solver_cylinder_compression/CMakeLists.txt @@ -0,0 +1,24 @@ +# Add tests (alphabetical order) ############################################## +# + +add_warpx_test( + test_3d_ohm_solver_cylinder_compression_picmi # name + 3 # dims + 2 # nprocs + "inputs_test_3d_ohm_solver_cylinder_compression_picmi.py --test" # inputs + OFF # analysis + "analysis_default_regression.py --path diags/diag1000020 --rtol 1e-6" # checksum + OFF # dependency +) +label_warpx_test(test_3d_ohm_solver_cylinder_compression_picmi slow) + +add_warpx_test( + test_rz_ohm_solver_cylinder_compression_picmi # name + RZ # dims + 2 # nprocs + "inputs_test_rz_ohm_solver_cylinder_compression_picmi.py --test" # inputs + OFF # analysis + "analysis_default_regression.py --path diags/diag1000020 --rtol 1e-6" # output + OFF # dependency +) +label_warpx_test(test_rz_ohm_solver_cylinder_compression_picmi slow) diff --git a/Examples/Tests/ohm_solver_cylinder_compression/analysis_default_regression.py b/Examples/Tests/ohm_solver_cylinder_compression/analysis_default_regression.py new file mode 120000 index 00000000000..d8ce3fca419 --- /dev/null +++ b/Examples/Tests/ohm_solver_cylinder_compression/analysis_default_regression.py @@ -0,0 +1 @@ +../../analysis_default_regression.py \ No newline at end of file diff --git a/Examples/Tests/ohm_solver_cylinder_compression/inputs_test_3d_ohm_solver_cylinder_compression_picmi.py b/Examples/Tests/ohm_solver_cylinder_compression/inputs_test_3d_ohm_solver_cylinder_compression_picmi.py new file mode 100644 index 00000000000..4f05fd15d83 --- /dev/null +++ b/Examples/Tests/ohm_solver_cylinder_compression/inputs_test_3d_ohm_solver_cylinder_compression_picmi.py @@ -0,0 +1,393 @@ +#!/usr/bin/env python3 +# +# --- Test script for the kinetic-fluid hybrid model in WarpX wherein ions are +# --- treated as kinetic particles and electrons as an isothermal, inertialess +# --- background fluid. The script demonstrates the use of this model to +# --- simulate adiabatic compression of a plasma cylinder initialized from an +# --- analytical Grad-Shafranov solution. 
+ +import argparse +import shutil +import sys +from pathlib import Path + +import numpy as np +import openpmd_api as io +from mpi4py import MPI as mpi + +from pywarpx import fields, picmi + +constants = picmi.constants + +comm = mpi.COMM_WORLD + +simulation = picmi.Simulation(warpx_serialize_initial_conditions=True, verbose=False) + + +class PlasmaCylinderCompression(object): + # B0 is chosen with all other quantities scaled by it + n0 = 1e20 + T_i = 10 # eV + T_e = 0 + p0 = n0 * constants.q_e * T_i + + B0 = np.sqrt(2 * constants.mu0 * p0) # Initial magnetic field strength (T) + + # Do a 2x uniform B-field compression + dB = B0 + + # Flux Conserver radius + R_c = 0.5 + + # Plasma Radius (These values control the analytical GS solution) + R_p = 0.25 + delta_p = 0.025 + + # Domain parameters + LX = 2.0 * R_c * 1.05 # m + LY = 2.0 * R_c * 1.05 + LZ = 0.5 # m + + LT = 10 # ion cyclotron periods + DT = 1e-3 # ion cyclotron periods + + # Resolution parameters + NX = 256 + NY = 256 + NZ = 128 + + # Starting number of particles per cell + NPPC = 100 + + # Number of substeps used to update B + substeps = 20 + + def Bz(self, r): + return np.sqrt( + self.B0**2 + - 2.0 + * constants.mu0 + * self.n0 + * constants.q_e + * self.T_i + / (1.0 + np.exp((r - self.R_p) / self.delta_p)) + ) + + def __init__(self, test, verbose): + self.test = test + self.verbose = verbose or self.test + + self.Lx = self.LX + self.Ly = self.LY + self.Lz = self.LZ + + self.DX = self.LX / self.NX + self.DY = self.LY / self.NY + self.DZ = self.LZ / self.NZ + + if comm.rank == 0: + # Write uniform compression dataset to OpenPMD to exercise reading openPMD data + # for the time varying external fields + xvec = np.linspace(-self.LX, self.LX, num=2 * self.NX) + yvec = np.linspace(-self.LY, self.LY, num=2 * self.NY) + zvec = np.linspace(-self.LZ, self.LZ, num=2 * self.NZ) + XM, YM, ZM = np.meshgrid(xvec, yvec, zvec, indexing="ij") + + RM = np.sqrt(XM**2 + YM**2) + + Ax_data = -0.5 * YM * self.dB + Ay_data = 0.5 * XM * self.dB + Az_data = np.zeros_like(RM) + + # Write vector potential to file to exercise field loading via OpenPMD + series = io.Series("Afield.h5", io.Access.create) + + it = series.iterations[0] + + A = it.meshes["A"] + A.grid_spacing = [self.DX, self.DY, self.DZ] + A.grid_global_offset = [-self.LX, -self.LY, -self.LZ] + A.grid_unit_SI = 1.0 + A.axis_labels = ["x", "y", "z"] + A.data_order = "C" + A.unit_dimension = { + io.Unit_Dimension.M: 1.0, + io.Unit_Dimension.T: -2.0, + io.Unit_Dimension.I: -1.0, + io.Unit_Dimension.L: -1.0, + } + + Ax = A["x"] + Ay = A["y"] + Az = A["z"] + + Ax.position = [0.0, 0.0] + Ay.position = [0.0, 0.0] + Az.position = [0.0, 0.0] + + Ax_dataset = io.Dataset(Ax_data.dtype, Ax_data.shape) + + Ay_dataset = io.Dataset(Ay_data.dtype, Ay_data.shape) + + Az_dataset = io.Dataset(Az_data.dtype, Az_data.shape) + + Ax.reset_dataset(Ax_dataset) + Ay.reset_dataset(Ay_dataset) + Az.reset_dataset(Az_dataset) + + Ax.store_chunk(Ax_data) + Ay.store_chunk(Ay_data) + Az.store_chunk(Az_data) + + series.flush() + series.close() + + comm.Barrier() + + # calculate various plasma parameters based on the simulation input + self.get_plasma_quantities() + + self.dt = self.DT * self.t_ci + + # run very low resolution as a CI test + if self.test: + self.total_steps = 20 + self.diag_steps = self.total_steps // 5 + self.NX = 64 + self.NY = 64 + self.NZ = 32 + else: + self.total_steps = int(self.LT / self.DT) + self.diag_steps = 100 + + # print out plasma parameters + if comm.rank == 0: + print( + f"Initializing simulation 
with input parameters:\n" + f"\tTi = {self.T_i:.1f} eV\n" + f"\tn0 = {self.n0:.1e} m^-3\n" + f"\tB0 = {self.B0:.2f} T\n", + f"\tDX/DY = {self.DX / self.l_i:.3f} c/w_pi\n" + f"\tDZ = {self.DZ / self.l_i:.3f} c/w_pi\n", + ) + print( + f"Plasma parameters:\n" + f"\tl_i = {self.l_i:.1e} m\n" + f"\tt_ci = {self.t_ci:.1e} s\n" + f"\tv_ti = {self.vi_th:.1e} m/s\n" + f"\tvA = {self.vA:.1e} m/s\n" + ) + print( + f"Numerical parameters:\n" + f"\tdz = {self.Lz / self.NZ:.1e} m\n" + f"\tdt = {self.dt:.1e} s\n" + f"\tdiag steps = {self.diag_steps:d}\n" + f"\ttotal steps = {self.total_steps:d}\n" + ) + + self.setup_run() + + def get_plasma_quantities(self): + """Calculate various plasma parameters based on the simulation input.""" + + # Ion mass (kg) + self.M = constants.m_p + + # Cyclotron angular frequency (rad/s) and period (s) + self.w_ci = constants.q_e * abs(self.B0) / self.M + self.t_ci = 2.0 * np.pi / self.w_ci + + # Ion plasma frequency (Hz) + self.w_pi = np.sqrt(constants.q_e**2 * self.n0 / (self.M * constants.ep0)) + + # Ion skin depth (m) + self.l_i = constants.c / self.w_pi + + # # Alfven speed (m/s): vA = B / sqrt(mu0 * n * (M + m)) = c * omega_ci / w_pi + self.vA = abs(self.B0) / np.sqrt( + constants.mu0 * self.n0 * (constants.m_e + self.M) + ) + + # calculate thermal speeds + self.vi_th = np.sqrt(self.T_i * constants.q_e / self.M) + + # Ion Larmor radius (m) + self.rho_i = self.vi_th / self.w_ci + + def load_fields(self): + Bx = fields.BxFPExternalWrapper(include_ghosts=False) + By = fields.ByFPExternalWrapper(include_ghosts=False) + Bz = fields.BzFPExternalWrapper(include_ghosts=False) + + Bx[:, :] = 0.0 + By[:, :] = 0.0 + + XM, YM, ZM = np.meshgrid( + Bz.mesh("x"), Bz.mesh("y"), Bz.mesh("z"), indexing="ij" + ) + + RM = np.sqrt(XM**2 + YM**2) + + Bz[:, :] = self.Bz(RM) + comm.Barrier() + + def setup_run(self): + """Setup simulation components.""" + + ####################################################################### + # Set geometry and boundary conditions # + ####################################################################### + + # Create grid + self.grid = picmi.Cartesian3DGrid( + number_of_cells=[self.NX, self.NY, self.NZ], + lower_bound=[-0.5 * self.Lx, -0.5 * self.Ly, -0.5 * self.Lz], + upper_bound=[0.5 * self.Lx, 0.5 * self.Ly, 0.5 * self.Lz], + lower_boundary_conditions=["dirichlet", "dirichlet", "periodic"], + upper_boundary_conditions=["dirichlet", "dirichlet", "periodic"], + lower_boundary_conditions_particles=["absorbing", "absorbing", "periodic"], + upper_boundary_conditions_particles=["absorbing", "absorbing", "periodic"], + warpx_max_grid_size=self.NZ, + ) + simulation.time_step_size = self.dt + simulation.max_steps = self.total_steps + simulation.current_deposition_algo = "direct" + simulation.particle_shape = 1 + simulation.use_filter = True + simulation.verbose = self.verbose + + ####################################################################### + # Field solver and external field # + ####################################################################### + # External Field definition. 
Sigmoid starting around 2.5 us + A_ext = { + "uniform": { + "read_from_file": True, + "path": "Afield.h5", + "A_time_external_function": "1/(1+exp(5*(1-(t-t0_ramp)*sqrt(2)/tau_ramp)))", + } + } + + self.solver = picmi.HybridPICSolver( + grid=self.grid, + gamma=1.0, + Te=self.T_e, + n0=self.n0, + n_floor=0.05 * self.n0, + plasma_resistivity="if(rho<=rho_floor,eta_v,eta_p)", + plasma_hyper_resistivity=1e-8, + substeps=self.substeps, + A_external=A_ext, + tau_ramp=20e-6, + t0_ramp=5e-6, + rho_floor=0.05 * self.n0 * constants.q_e, + eta_p=1e-8, + eta_v=1e-3, + ) + simulation.solver = self.solver + + simulation.embedded_boundary = picmi.EmbeddedBoundary( + implicit_function="(x**2+y**2-R_w**2)", R_w=self.R_c + ) + + # Add field loader callback + B_ext = picmi.LoadInitialFieldFromPython( + load_from_python=self.load_fields, + warpx_do_divb_cleaning_external=True, + load_B=True, + load_E=False, + ) + simulation.add_applied_field(B_ext) + + ####################################################################### + # Particle types setup # + ####################################################################### + r_omega = "(sqrt(x*x+y*y)*q_e*B0/m_p)" + dlnndr = "((-1/delta_p)/(1+exp(-(sqrt(x*x+y*y)-R_p)/delta_p)))" + vth = f"0.5*(-{r_omega}+sqrt({r_omega}*{r_omega}+4*q_e*T_i*{dlnndr}/m_p))" + + momentum_expr = [f"y*{vth}", f"-x*{vth}", "0"] + + self.ions = picmi.Species( + name="ions", + charge="q_e", + mass=self.M, + initial_distribution=picmi.AnalyticDistribution( + density_expression="n0_p/(1+exp((sqrt(x*x+y*y)-R_p)/delta_p))", + momentum_expressions=momentum_expr, + warpx_momentum_spread_expressions=[f"{str(self.vi_th)}"] * 3, + warpx_density_min=0.01 * self.n0, + R_p=self.R_p, + delta_p=self.delta_p, + n0_p=self.n0, + B0=self.B0, + T_i=self.T_i, + ), + ) + simulation.add_species( + self.ions, + layout=picmi.PseudoRandomLayout( + grid=self.grid, n_macroparticles_per_cell=self.NPPC + ), + ) + + ####################################################################### + # Add diagnostics # + ####################################################################### + + if self.test: + particle_diag = picmi.ParticleDiagnostic( + name="diag1", + period=self.diag_steps, + species=[self.ions], + data_list=["ux", "uy", "uz", "x", "z", "weighting"], + write_dir="diags", + warpx_format="plotfile", + ) + simulation.add_diagnostic(particle_diag) + field_diag = picmi.FieldDiagnostic( + name="diag1", + grid=self.grid, + period=self.diag_steps, + data_list=["B", "E", "rho"], + write_dir="diags", + warpx_format="plotfile", + ) + simulation.add_diagnostic(field_diag) + + ####################################################################### + # Initialize # + ####################################################################### + + if comm.rank == 0: + if Path.exists(Path("diags")): + shutil.rmtree("diags") + Path("diags").mkdir(parents=True, exist_ok=True) + + # Initialize inputs and WarpX instance + simulation.initialize_inputs() + simulation.initialize_warpx() + + +########################## +# parse input parameters +########################## + +parser = argparse.ArgumentParser() +parser.add_argument( + "-t", + "--test", + help="toggle whether this script is run as a short CI test", + action="store_true", +) +parser.add_argument( + "-v", + "--verbose", + help="Verbose output", + action="store_true", +) +args, left = parser.parse_known_args() +sys.argv = sys.argv[:1] + left + +run = PlasmaCylinderCompression(test=args.test, verbose=args.verbose) +simulation.step() diff --git 
a/Examples/Tests/ohm_solver_cylinder_compression/inputs_test_rz_ohm_solver_cylinder_compression_picmi.py b/Examples/Tests/ohm_solver_cylinder_compression/inputs_test_rz_ohm_solver_cylinder_compression_picmi.py
new file mode 100644
index 00000000000..8c65f88ae79
--- /dev/null
+++ b/Examples/Tests/ohm_solver_cylinder_compression/inputs_test_rz_ohm_solver_cylinder_compression_picmi.py
@@ -0,0 +1,383 @@
+#!/usr/bin/env python3
+#
+# --- Test script for the kinetic-fluid hybrid model in WarpX wherein ions are
+# --- treated as kinetic particles and electrons as an isothermal, inertialess
+# --- background fluid. The script demonstrates the use of this model to
+# --- simulate adiabatic compression of a plasma cylinder initialized from an
+# --- analytical Grad-Shafranov solution.
+
+import argparse
+import shutil
+import sys
+from pathlib import Path
+
+import numpy as np
+import openpmd_api as io
+from mpi4py import MPI as mpi
+
+from pywarpx import fields, picmi
+
+constants = picmi.constants
+
+comm = mpi.COMM_WORLD
+
+simulation = picmi.Simulation(warpx_serialize_initial_conditions=True, verbose=False)
+
+
+class PlasmaCylinderCompression(object):
+    # B0 is chosen with all other quantities scaled by it
+    n0 = 1e20
+    T_i = 10  # eV
+    T_e = 0
+    p0 = n0 * constants.q_e * T_i
+
+    B0 = np.sqrt(2 * constants.mu0 * p0)  # Initial magnetic field strength (T)
+
+    # Do a 2x uniform B-field compression
+    dB = B0
+
+    # Flux Conserver radius
+    R_c = 0.5
+
+    # Plasma Radius (These values control the analytical GS solution)
+    R_p = 0.25
+    delta_p = 0.025
+
+    # Domain parameters
+    LR = R_c  # m
+    LZ = 0.25 * R_c  # m
+
+    LT = 10  # ion cyclotron periods
+    DT = 1e-3  # ion cyclotron periods
+
+    # Resolution parameters
+    NR = 128
+    NZ = 32
+
+    # Starting number of particles per cell
+    NPPC = 100
+
+    # Number of substeps used to update B
+    substeps = 20
+
+    def Bz(self, r):
+        return np.sqrt(
+            self.B0**2
+            - 2.0
+            * constants.mu0
+            * self.n0
+            * constants.q_e
+            * self.T_i
+            / (1.0 + np.exp((r - self.R_p) / self.delta_p))
+        )
+
+    def __init__(self, test, verbose):
+        self.test = test
+        self.verbose = verbose or self.test
+
+        self.Lr = self.LR
+        self.Lz = self.LZ
+
+        self.DR = self.LR / self.NR
+        self.DZ = self.LZ / self.NZ
+
+        # Write A to OpenPMD for a uniform B field to exercise the file-based loader
+        if comm.rank == 0:
+            mvec = np.array([0])
+            rvec = np.linspace(0, 2 * self.LR, num=2 * self.NR)
+            zvec = np.linspace(-self.LZ, self.LZ, num=2 * self.NZ)
+            MM, RM, ZM = np.meshgrid(mvec, rvec, zvec, indexing="ij")
+
+            # Write uniform compression dataset to OpenPMD to exercise reading openPMD data
+            # for the time varying external fields
+            Ar_data = np.zeros_like(RM)
+            Az_data = np.zeros_like(RM)
+
+            # Zero padded outside of domain
+            At_data = 0.5 * RM * self.dB
+
+            # Write vector potential to file to exercise field loading via OpenPMD
+            series = io.Series("Afield.h5", io.Access.create)
+
+            it = series.iterations[0]
+
+            A = it.meshes["A"]
+            A.geometry = io.Geometry.thetaMode
+            A.geometry_parameters = "m=0"
+            A.grid_spacing = [self.DR, self.DZ]
+            A.grid_global_offset = [0.0, -self.LZ]
+            A.grid_unit_SI = 1.0
+            A.axis_labels = ["r", "z"]
+            A.data_order = "C"
+            A.unit_dimension = {
+                io.Unit_Dimension.M: 1.0,
+                io.Unit_Dimension.T: -2.0,
+                io.Unit_Dimension.I: -1.0,
+                io.Unit_Dimension.L: -1.0,
+            }
+
+            Ar = A["r"]
+            At = A["t"]
+            Az = A["z"]
+
+            Ar.position = [0.0, 0.0]
+            At.position = [0.0, 0.0]
+            Az.position = [0.0, 0.0]
+
+            Ar_dataset = io.Dataset(Ar_data.dtype, Ar_data.shape)
+
+            At_dataset = io.Dataset(At_data.dtype,
At_data.shape) + + Az_dataset = io.Dataset(Az_data.dtype, Az_data.shape) + + Ar.reset_dataset(Ar_dataset) + At.reset_dataset(At_dataset) + Az.reset_dataset(Az_dataset) + + Ar.store_chunk(Ar_data) + At.store_chunk(At_data) + Az.store_chunk(Az_data) + + series.flush() + series.close() + + comm.Barrier() + + # calculate various plasma parameters based on the simulation input + self.get_plasma_quantities() + + self.dt = self.DT * self.t_ci + + # run very low resolution as a CI test + if self.test: + self.total_steps = 20 + self.diag_steps = self.total_steps // 5 + self.NR = 64 + self.NZ = 16 + else: + self.total_steps = int(self.LT / self.DT) + self.diag_steps = 100 + + # print out plasma parameters + if comm.rank == 0: + print( + f"Initializing simulation with input parameters:\n" + f"\tTi = {self.T_i:.1f} eV\n" + f"\tn0 = {self.n0:.1e} m^-3\n" + f"\tB0 = {self.B0:.2f} T\n", + f"\tDR = {self.DR / self.l_i:.3f} c/w_pi\n" + f"\tDZ = {self.DZ / self.l_i:.3f} c/w_pi\n", + ) + print( + f"Plasma parameters:\n" + f"\tl_i = {self.l_i:.1e} m\n" + f"\tt_ci = {self.t_ci:.1e} s\n" + f"\tv_ti = {self.vi_th:.1e} m/s\n" + f"\tvA = {self.vA:.1e} m/s\n" + ) + print( + f"Numerical parameters:\n" + f"\tdz = {self.Lz / self.NZ:.1e} m\n" + f"\tdt = {self.dt:.1e} s\n" + f"\tdiag steps = {self.diag_steps:d}\n" + f"\ttotal steps = {self.total_steps:d}\n" + ) + + self.setup_run() + + def get_plasma_quantities(self): + """Calculate various plasma parameters based on the simulation input.""" + + # Ion mass (kg) + self.M = constants.m_p + + # Cyclotron angular frequency (rad/s) and period (s) + self.w_ci = constants.q_e * abs(self.B0) / self.M + self.t_ci = 2.0 * np.pi / self.w_ci + + # Ion plasma frequency (Hz) + self.w_pi = np.sqrt(constants.q_e**2 * self.n0 / (self.M * constants.ep0)) + + # Ion skin depth (m) + self.l_i = constants.c / self.w_pi + + # # Alfven speed (m/s): vA = B / sqrt(mu0 * n * (M + m)) = c * omega_ci / w_pi + self.vA = abs(self.B0) / np.sqrt( + constants.mu0 * self.n0 * (constants.m_e + self.M) + ) + + # calculate thermal speeds + self.vi_th = np.sqrt(self.T_i * constants.q_e / self.M) + + # Ion Larmor radius (m) + self.rho_i = self.vi_th / self.w_ci + + def load_fields(self): + Br = fields.BxFPExternalWrapper(include_ghosts=False) + Bt = fields.ByFPExternalWrapper(include_ghosts=False) + Bz = fields.BzFPExternalWrapper(include_ghosts=False) + + Br[:, :] = 0.0 + Bt[:, :] = 0.0 + + RM, ZM = np.meshgrid(Bz.mesh("r"), Bz.mesh("z"), indexing="ij") + + Bz[:, :] = self.Bz(RM) * (RM <= self.R_c) + comm.Barrier() + + def setup_run(self): + """Setup simulation components.""" + + ####################################################################### + # Set geometry and boundary conditions # + ####################################################################### + + # Create grid + self.grid = picmi.CylindricalGrid( + number_of_cells=[self.NR, self.NZ], + lower_bound=[0.0, -self.Lz / 2.0], + upper_bound=[self.Lr, self.Lz / 2.0], + lower_boundary_conditions=["none", "periodic"], + upper_boundary_conditions=["dirichlet", "periodic"], + lower_boundary_conditions_particles=["none", "periodic"], + upper_boundary_conditions_particles=["absorbing", "periodic"], + warpx_max_grid_size=self.NZ, + ) + simulation.time_step_size = self.dt + simulation.max_steps = self.total_steps + simulation.current_deposition_algo = "direct" + simulation.particle_shape = 1 + simulation.use_filter = True + simulation.verbose = self.verbose + + ####################################################################### + # Field solver 
and external field # + ####################################################################### + # External Field definition. Sigmoid starting around 2.5 us + A_ext = { + "uniform": { + "read_from_file": True, + "path": "Afield.h5", + "A_time_external_function": "1/(1+exp(5*(1-(t-t0_ramp)*sqrt(2)/tau_ramp)))", + } + } + + self.solver = picmi.HybridPICSolver( + grid=self.grid, + gamma=1.0, + Te=self.T_e, + n0=self.n0, + n_floor=0.05 * self.n0, + plasma_resistivity="if(rho<=rho_floor,eta_v,eta_p)", + plasma_hyper_resistivity=1e-8, + substeps=self.substeps, + A_external=A_ext, + tau_ramp=20e-6, + t0_ramp=5e-6, + rho_floor=0.05 * self.n0 * constants.q_e, + eta_p=1e-8, + eta_v=1e-3, + ) + simulation.solver = self.solver + + # Add field loader callback + B_ext = picmi.LoadInitialFieldFromPython( + load_from_python=self.load_fields, + warpx_do_divb_cleaning_external=True, + load_B=True, + load_E=False, + ) + simulation.add_applied_field(B_ext) + + ####################################################################### + # Particle types setup # + ####################################################################### + r_omega = "(sqrt(x*x+y*y)*q_e*B0/m_p)" + dlnndr = "((-1/delta_p)/(1+exp(-(sqrt(x*x+y*y)-R_p)/delta_p)))" + vth = f"0.5*(-{r_omega}+sqrt({r_omega}*{r_omega}+4*q_e*T_i*{dlnndr}/m_p))" + + momentum_expr = [f"y*{vth}", f"-x*{vth}", "0"] + + self.ions = picmi.Species( + name="ions", + charge="q_e", + mass=self.M, + initial_distribution=picmi.AnalyticDistribution( + density_expression="n0_p/(1+exp((sqrt(x*x+y*y)-R_p)/delta_p))", + momentum_expressions=momentum_expr, + warpx_momentum_spread_expressions=[f"{str(self.vi_th)}"] * 3, + warpx_density_min=0.01 * self.n0, + R_p=self.R_p, + delta_p=self.delta_p, + n0_p=self.n0, + B0=self.B0, + T_i=self.T_i, + ), + ) + simulation.add_species( + self.ions, + layout=picmi.PseudoRandomLayout( + grid=self.grid, n_macroparticles_per_cell=self.NPPC + ), + ) + + ####################################################################### + # Add diagnostics # + ####################################################################### + + if self.test: + particle_diag = picmi.ParticleDiagnostic( + name="diag1", + period=self.diag_steps, + species=[self.ions], + data_list=["ux", "uy", "uz", "x", "z", "weighting"], + write_dir="diags", + warpx_format="plotfile", + ) + simulation.add_diagnostic(particle_diag) + field_diag = picmi.FieldDiagnostic( + name="diag1", + grid=self.grid, + period=self.diag_steps, + data_list=["B", "E", "rho"], + write_dir="diags", + warpx_format="plotfile", + ) + simulation.add_diagnostic(field_diag) + + ####################################################################### + # Initialize # + ####################################################################### + + if comm.rank == 0: + if Path.exists(Path("diags")): + shutil.rmtree("diags") + Path("diags").mkdir(parents=True, exist_ok=True) + + # Initialize inputs and WarpX instance + simulation.initialize_inputs() + simulation.initialize_warpx() + + +########################## +# parse input parameters +########################## + +parser = argparse.ArgumentParser() +parser.add_argument( + "-t", + "--test", + help="toggle whether this script is run as a short CI test", + action="store_true", +) +parser.add_argument( + "-v", + "--verbose", + help="Verbose output", + action="store_true", +) +args, left = parser.parse_known_args() +sys.argv = sys.argv[:1] + left + +run = PlasmaCylinderCompression(test=args.test, verbose=args.verbose) +simulation.step() diff --git 
a/Python/pywarpx/HybridPICModel.py b/Python/pywarpx/HybridPICModel.py index 7bd8c961950..f94f44ce931 100644 --- a/Python/pywarpx/HybridPICModel.py +++ b/Python/pywarpx/HybridPICModel.py @@ -9,3 +9,4 @@ from .Bucket import Bucket hybridpicmodel = Bucket("hybrid_pic_model") +external_vector_potential = Bucket("external_vector_potential") diff --git a/Python/pywarpx/WarpX.py b/Python/pywarpx/WarpX.py index 9ef7019cda9..9b0446bcc79 100644 --- a/Python/pywarpx/WarpX.py +++ b/Python/pywarpx/WarpX.py @@ -20,7 +20,7 @@ from .Diagnostics import diagnostics, reduced_diagnostics from .EB2 import eb2 from .Geometry import geometry -from .HybridPICModel import hybridpicmodel +from .HybridPICModel import external_vector_potential, hybridpicmodel from .Interpolation import interpolation from .Lasers import lasers, lasers_list from .Particles import particles, particles_list @@ -46,6 +46,7 @@ def create_argv_list(self, **kw): argv += amrex.attrlist() argv += geometry.attrlist() argv += hybridpicmodel.attrlist() + argv += external_vector_potential.attrlist() argv += boundary.attrlist() argv += algo.attrlist() argv += interpolation.attrlist() diff --git a/Python/pywarpx/__init__.py b/Python/pywarpx/__init__.py index 054ca451756..b8e025342dd 100644 --- a/Python/pywarpx/__init__.py +++ b/Python/pywarpx/__init__.py @@ -33,7 +33,7 @@ from .Diagnostics import diagnostics, reduced_diagnostics # noqa from .EB2 import eb2 # noqa from .Geometry import geometry # noqa -from .HybridPICModel import hybridpicmodel # noqa +from .HybridPICModel import hybridpicmodel, external_vector_potential # noqa from .Interpolation import interpolation # noqa from .Lasers import lasers # noqa from .LoadThirdParty import load_cupy # noqa diff --git a/Python/pywarpx/fields.py b/Python/pywarpx/fields.py index 9beef1de5c8..a81999103d9 100644 --- a/Python/pywarpx/fields.py +++ b/Python/pywarpx/fields.py @@ -578,6 +578,24 @@ def norm0(self, *args): return self.mf.norm0(*args) +def CustomNamedxWrapper(mf_name, level=0, include_ghosts=False): + return _MultiFABWrapper( + mf_name=mf_name, idir=0, level=level, include_ghosts=include_ghosts + ) + + +def CustomNamedyWrapper(mf_name, level=0, include_ghosts=False): + return _MultiFABWrapper( + mf_name=mf_name, idir=1, level=level, include_ghosts=include_ghosts + ) + + +def CustomNamedzWrapper(mf_name, level=0, include_ghosts=False): + return _MultiFABWrapper( + mf_name=mf_name, idir=2, level=level, include_ghosts=include_ghosts + ) + + def ExWrapper(level=0, include_ghosts=False): return _MultiFABWrapper( mf_name="Efield_aux", idir=0, level=level, include_ghosts=include_ghosts @@ -704,6 +722,87 @@ def BzFPExternalWrapper(level=0, include_ghosts=False): ) +def AxHybridExternalWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper( + mf_name="hybrid_A_fp_external", + idir=0, + level=level, + include_ghosts=include_ghosts, + ) + + +def AyHybridExternalWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper( + mf_name="hybrid_A_fp_external", + idir=1, + level=level, + include_ghosts=include_ghosts, + ) + + +def AzHybridExternalWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper( + mf_name="hybrid_A_fp_external", + idir=2, + level=level, + include_ghosts=include_ghosts, + ) + + +def ExHybridExternalWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper( + mf_name="hybrid_E_fp_external", + idir=0, + level=level, + include_ghosts=include_ghosts, + ) + + +def EyHybridExternalWrapper(level=0, include_ghosts=False): + return _MultiFABWrapper( + 
        mf_name="hybrid_E_fp_external",
+        idir=1,
+        level=level,
+        include_ghosts=include_ghosts,
+    )
+
+
+def EzHybridExternalWrapper(level=0, include_ghosts=False):
+    return _MultiFABWrapper(
+        mf_name="hybrid_E_fp_external",
+        idir=2,
+        level=level,
+        include_ghosts=include_ghosts,
+    )
+
+
+def BxHybridExternalWrapper(level=0, include_ghosts=False):
+    return _MultiFABWrapper(
+        mf_name="hybrid_B_fp_external",
+        idir=0,
+        level=level,
+        include_ghosts=include_ghosts,
+    )
+
+
+def ByHybridExternalWrapper(level=0, include_ghosts=False):
+    return _MultiFABWrapper(
+        mf_name="hybrid_B_fp_external",
+        idir=1,
+        level=level,
+        include_ghosts=include_ghosts,
+    )
+
+
+def BzHybridExternalWrapper(level=0, include_ghosts=False):
+    return _MultiFABWrapper(
+        mf_name="hybrid_B_fp_external",
+        idir=2,
+        level=level,
+        include_ghosts=include_ghosts,
+    )
+
+
 def JxFPWrapper(level=0, include_ghosts=False):
     return _MultiFABWrapper(
         mf_name="current_fp", idir=0, level=level, include_ghosts=include_ghosts
diff --git a/Python/pywarpx/picmi.py b/Python/pywarpx/picmi.py
index da673671953..f660570ca7c 100644
--- a/Python/pywarpx/picmi.py
+++ b/Python/pywarpx/picmi.py
@@ -1853,8 +1853,37 @@ class HybridPICSolver(picmistandard.base._ClassWithInit):
     substeps: int, default=10
         Number of substeps to take when updating the B-field.

+    holmstrom_vacuum_region: bool, default=False
+        Flag to determine handling of the vacuum region. Setting to True will solve the simplified generalized Ohm's law, dropping the Hall and electron pressure terms in the vacuum region.
+        This flag is useful for suppressing vacuum-region fluctuations. A large resistivity value must be used when rho <= rho_floor.
+
     Jx/y/z_external_function: str
         Function of space and time specifying external (non-plasma) currents.
+
+    A_external: dict
+        Function of space and time specifying external (non-plasma) vector potential fields.
+        It is expected that a nested dictionary will be passed
+        into picmi, one entry per field, each with its own time dependence,
+        e.g.
+
+        A_external = {
+            '<field name>': {
+                'Ax_external_function': <function of x,y,z>,
+                'Ay_external_function': <function of x,y,z>,
+                'Az_external_function': <function of x,y,z>,
+                'A_time_external_function': <function of t>
+            },
+            '<field name 2>': {...}
+        }
+
+        or if fields are to be loaded from an OpenPMD file
+
+        A_external = {
+            '<field name>': {
+                'read_from_file': True,
+                'path': <path to OpenPMD file>,
+                'A_time_external_function': <function of t>
+            },
+            '<field name 2>': {...}
+        }
     """

     def __init__(
@@ -1867,9 +1896,11 @@ def __init__(
         plasma_resistivity=None,
         plasma_hyper_resistivity=None,
         substeps=None,
+        holmstrom_vacuum_region=None,
         Jx_external_function=None,
         Jy_external_function=None,
         Jz_external_function=None,
+        A_external=None,
         **kw,
     ):
         self.grid = grid
@@ -1884,10 +1915,14 @@ def __init__(

         self.substeps = substeps

+        self.holmstrom_vacuum_region = holmstrom_vacuum_region
+
         self.Jx_external_function = Jx_external_function
         self.Jy_external_function = Jy_external_function
         self.Jz_external_function = Jz_external_function

+        self.A_external = A_external
+
         # Handle keyword arguments used in expressions
         self.user_defined_kw = {}
         for k in list(kw.keys()):
@@ -1918,6 +1953,7 @@ def solver_initialize_inputs(self):
         )
         pywarpx.hybridpicmodel.plasma_hyper_resistivity = self.plasma_hyper_resistivity
         pywarpx.hybridpicmodel.substeps = self.substeps
+        pywarpx.hybridpicmodel.holmstrom_vacuum_region = self.holmstrom_vacuum_region
         pywarpx.hybridpicmodel.__setattr__(
             "Jx_external_grid_function(x,y,z,t)",
             pywarpx.my_constants.mangle_expression(
@@ -1936,6 +1972,47 @@ def solver_initialize_inputs(self):
                 self.Jz_external_function, self.mangle_dict
             ),
         )
+        if self.A_external is not None:
+            pywarpx.hybridpicmodel.add_external_fields = True
+            pywarpx.external_vector_potential.__setattr__(
+                "fields",
+                pywarpx.my_constants.mangle_expression(
+                    list(self.A_external.keys()), self.mangle_dict
+                ),
+            )
+            for field_name, field_dict in self.A_external.items():
+                if field_dict.get("read_from_file", False):
+                    pywarpx.external_vector_potential.__setattr__(
+                        f"{field_name}.read_from_file", field_dict["read_from_file"]
+                    )
+                    pywarpx.external_vector_potential.__setattr__(
+                        f"{field_name}.path", field_dict["path"]
+                    )
+                else:
+                    pywarpx.external_vector_potential.__setattr__(
+                        f"{field_name}.Ax_external_grid_function(x,y,z)",
+                        pywarpx.my_constants.mangle_expression(
+                            field_dict["Ax_external_function"], self.mangle_dict
+                        ),
+                    )
+                    pywarpx.external_vector_potential.__setattr__(
+                        f"{field_name}.Ay_external_grid_function(x,y,z)",
+                        pywarpx.my_constants.mangle_expression(
+                            field_dict["Ay_external_function"], self.mangle_dict
+                        ),
+                    )
+                    pywarpx.external_vector_potential.__setattr__(
+                        f"{field_name}.Az_external_grid_function(x,y,z)",
+                        pywarpx.my_constants.mangle_expression(
+                            field_dict["Az_external_function"], self.mangle_dict
+                        ),
+                    )
+                pywarpx.external_vector_potential.__setattr__(
+                    f"{field_name}.A_time_external_function(t)",
+                    pywarpx.my_constants.mangle_expression(
+                        field_dict["A_time_external_function"], self.mangle_dict
+                    ),
+                )


 class ElectrostaticSolver(picmistandard.PICMI_ElectrostaticSolver):
diff --git a/Regression/Checksum/benchmarks_json/test_3d_ohm_solver_cylinder_compression_picmi.json b/Regression/Checksum/benchmarks_json/test_3d_ohm_solver_cylinder_compression_picmi.json
new file mode 100644
index 00000000000..6cde3a9450e
--- /dev/null
+++ b/Regression/Checksum/benchmarks_json/test_3d_ohm_solver_cylinder_compression_picmi.json
@@ -0,0 +1,20 @@
+{
+    "lev=0": {
+        "Bx": 0.5334253070691776,
+        "By": 0.5318560243634998,
+        "Bz": 2252.108905639938,
+        "Ex": 10509838.331420777,
+        "Ey": 10512676.798857061,
+        "Ez": 8848.113963901804,
+        "rho": 384112.2912140536
+    },
"ions": { + "particle_momentum_x": 2.161294367543349e-16, + "particle_momentum_y": 2.161870747294985e-16, + "particle_momentum_z": 2.0513400435256855e-16, + "particle_position_x": 769864.202585846, + "particle_position_y": 769908.6569812088, + "particle_position_z": 620721.1900338201, + "particle_weight": 1.008292384042714e+19 + } +} \ No newline at end of file diff --git a/Regression/Checksum/benchmarks_json/test_rz_ohm_solver_cylinder_compression_picmi.json b/Regression/Checksum/benchmarks_json/test_rz_ohm_solver_cylinder_compression_picmi.json new file mode 100644 index 00000000000..6fd2ca04fce --- /dev/null +++ b/Regression/Checksum/benchmarks_json/test_rz_ohm_solver_cylinder_compression_picmi.json @@ -0,0 +1,20 @@ +{ + "lev=0": { + "Br": 0.01190012639573578, + "Bt": 0.011313481779415917, + "Bz": 11.684908684984164, + "Er": 154581.58512851578, + "Et": 4798.276941148807, + "Ez": 193.22344271401872, + "rho": 7968.182346905438 + }, + "ions": { + "particle_momentum_x": 3.1125151786241107e-18, + "particle_momentum_y": 3.119385993047207e-18, + "particle_momentum_z": 3.0289560038617916e-18, + "particle_position_x": 13628.662686419664, + "particle_position_y": 2285.6952310457755, + "particle_theta": 115055.48935725243, + "particle_weight": 2.525423582445981e+18 + } +} \ No newline at end of file diff --git a/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt b/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt index 19c2092d1a6..7539d706632 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt +++ b/Source/FieldSolver/FiniteDifferenceSolver/CMakeLists.txt @@ -3,6 +3,7 @@ foreach(D IN LISTS WarpX_DIMS) target_sources(lib_${SD} PRIVATE ComputeDivE.cpp + ComputeCurlA.cpp EvolveB.cpp EvolveBPML.cpp EvolveE.cpp diff --git a/Source/FieldSolver/FiniteDifferenceSolver/ComputeCurlA.cpp b/Source/FieldSolver/FiniteDifferenceSolver/ComputeCurlA.cpp new file mode 100644 index 00000000000..30cbdb60508 --- /dev/null +++ b/Source/FieldSolver/FiniteDifferenceSolver/ComputeCurlA.cpp @@ -0,0 +1,306 @@ +/* Copyright 2024 The WarpX Community + * + * This file is part of WarpX. + * + * Authors: S. Eric Clark (Helion Energy) + * + * License: BSD-3-Clause-LBNL + */ + +#include "FiniteDifferenceSolver.H" + +#include "EmbeddedBoundary/Enabled.H" +#ifdef WARPX_DIM_RZ +# include "FiniteDifferenceAlgorithms/CylindricalYeeAlgorithm.H" +#else +# include "FiniteDifferenceAlgorithms/CartesianYeeAlgorithm.H" +#endif + +#include "Utils/TextMsg.H" +#include "WarpX.H" + +using namespace amrex; + +void FiniteDifferenceSolver::ComputeCurlA ( + ablastr::fields::VectorField& Bfield, + ablastr::fields::VectorField const& Afield, + std::array< std::unique_ptr,3> const& eb_update_B, + int lev ) +{ + // Select algorithm (The choice of algorithm is a runtime option, + // but we compile code for each algorithm, using templates) + if (m_fdtd_algo == ElectromagneticSolverAlgo::HybridPIC) { +#ifdef WARPX_DIM_RZ + ComputeCurlACylindrical ( + Bfield, Afield, eb_update_B, lev + ); + +#else + ComputeCurlACartesian ( + Bfield, Afield, eb_update_B, lev + ); + +#endif + } else { + amrex::Abort(Utils::TextMsg::Err( + "ComputeCurl: Unknown algorithm choice.")); + } +} + +// /** +// * \brief Calculate B from the curl of A +// * i.e. 
+//   * i.e. B = curl(A), output on the B-field mesh staggering
+//   *
+//   * \param[out] Bfield  output of the curl operation, on the B-field mesh staggering
+//   * \param[in] Afield  input staggered field, should be on the E/J/A mesh staggering
+//   * \param[in] eb_update_B  flags indicating where B should be updated, given the embedded boundaries
+//   * \param[in] lev  level number for the calculation
+//   */
+#ifdef WARPX_DIM_RZ
+template <typename T_Algo>
+void FiniteDifferenceSolver::ComputeCurlACylindrical (
+    ablastr::fields::VectorField& Bfield,
+    ablastr::fields::VectorField const& Afield,
+    std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_B,
+    int lev
+)
+{
+    // for the profiler
+    amrex::LayoutData<amrex::Real>* cost = WarpX::getCosts(lev);
+
+    // reset Bfield
+    Bfield[0]->setVal(0);
+    Bfield[1]->setVal(0);
+    Bfield[2]->setVal(0);
+
+    // Loop through the grids, and over the tiles within each grid
+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
+    for ( MFIter mfi(*Afield[0], TilingIfNotGPU()); mfi.isValid(); ++mfi ) {
+        if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+        {
+            amrex::Gpu::synchronize();
+        }
+        Real wt = static_cast<Real>(amrex::second());
+
+        // Extract field data for this grid/tile
+        Array4<Real const> const& Ar = Afield[0]->const_array(mfi);
+        Array4<Real const> const& At = Afield[1]->const_array(mfi);
+        Array4<Real const> const& Az = Afield[2]->const_array(mfi);
+        Array4<Real> const& Br = Bfield[0]->array(mfi);
+        Array4<Real> const& Bt = Bfield[1]->array(mfi);
+        Array4<Real> const& Bz = Bfield[2]->array(mfi);
+
+        // Extract structures indicating where the fields
+        // should be updated, given the position of the embedded boundaries.
+        amrex::Array4<int> update_Br_arr, update_Bt_arr, update_Bz_arr;
+        if (EB::enabled()) {
+            update_Br_arr = eb_update_B[0]->array(mfi);
+            update_Bt_arr = eb_update_B[1]->array(mfi);
+            update_Bz_arr = eb_update_B[2]->array(mfi);
+        }
+
+        // Extract stencil coefficients
+        Real const * const AMREX_RESTRICT coefs_r = m_stencil_coefs_r.dataPtr();
+        int const n_coefs_r = static_cast<int>(m_stencil_coefs_r.size());
+        Real const * const AMREX_RESTRICT coefs_z = m_stencil_coefs_z.dataPtr();
+        int const n_coefs_z = static_cast<int>(m_stencil_coefs_z.size());
+
+        // Extract cylindrical specific parameters
+        Real const dr = m_dr;
+        int const nmodes = m_nmodes;
+        Real const rmin = m_rmin;
+
+        // Extract tileboxes for which to loop over
+        Box const& tbr = mfi.tilebox(Bfield[0]->ixType().toIntVect());
+        Box const& tbt = mfi.tilebox(Bfield[1]->ixType().toIntVect());
+        Box const& tbz = mfi.tilebox(Bfield[2]->ixType().toIntVect());
+
+        // Calculate the B-field from the A-field
+        amrex::ParallelFor(tbr, tbt, tbz,
+
+            // Br calculation
+            [=] AMREX_GPU_DEVICE (int i, int j, int /*k*/){
+                // Skip field update in the embedded boundaries
+                if (update_Br_arr && update_Br_arr(i, j, 0) == 0) { return; }
+
+                Real const r = rmin + i*dr; // r on nodal point (Br is nodal in r)
+                if (r != 0) { // Off-axis, regular Maxwell equations
+                    Br(i, j, 0, 0) = - T_Algo::UpwardDz(At, coefs_z, n_coefs_z, i, j, 0, 0); // Mode m=0
+                    for (int m=1; m(amrex::second()) - wt;
+            amrex::HostDevice::Atomic::Add( &(*cost)[mfi.index()], wt);
+        }
+    }
+}
+
+#else
+
+template <typename T_Algo>
+void FiniteDifferenceSolver::ComputeCurlACartesian (
+    ablastr::fields::VectorField & Bfield,
+    ablastr::fields::VectorField const& Afield,
+    std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_B,
+    int lev
+)
+{
+    using ablastr::fields::Direction;
+
+    // for the profiler
+    amrex::LayoutData<amrex::Real>* cost = WarpX::getCosts(lev);
+
+    // reset Bfield
+    Bfield[0]->setVal(0);
+    Bfield[1]->setVal(0);
+    Bfield[2]->setVal(0);
+
+    // Loop through the grids, and over the tiles within each grid
+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
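+    // (In the Yee layout the components of A share the edge-centered E/J
+    // staggering while B is face-centered, so each Upward* finite difference
+    // below lands exactly on the corresponding B face.)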
+    for ( MFIter mfi(*Afield[0], TilingIfNotGPU()); mfi.isValid(); ++mfi ) {
+        if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers) {
+            amrex::Gpu::synchronize();
+        }
+        auto wt = static_cast<amrex::Real>(amrex::second());
+
+        // Extract field data for this grid/tile
+        Array4<Real> const &Bx = Bfield[0]->array(mfi);
+        Array4<Real> const &By = Bfield[1]->array(mfi);
+        Array4<Real> const &Bz = Bfield[2]->array(mfi);
+        Array4<Real const> const &Ax = Afield[0]->const_array(mfi);
+        Array4<Real const> const &Ay = Afield[1]->const_array(mfi);
+        Array4<Real const> const &Az = Afield[2]->const_array(mfi);
+
+        // Extract structures indicating where the fields
+        // should be updated, given the position of the embedded boundaries.
+        amrex::Array4<int> update_Bx_arr, update_By_arr, update_Bz_arr;
+        if (EB::enabled()) {
+            update_Bx_arr = eb_update_B[0]->array(mfi);
+            update_By_arr = eb_update_B[1]->array(mfi);
+            update_Bz_arr = eb_update_B[2]->array(mfi);
+        }
+
+        // Extract stencil coefficients
+        Real const * const AMREX_RESTRICT coefs_x = m_stencil_coefs_x.dataPtr();
+        auto const n_coefs_x = static_cast<int>(m_stencil_coefs_x.size());
+        Real const * const AMREX_RESTRICT coefs_y = m_stencil_coefs_y.dataPtr();
+        auto const n_coefs_y = static_cast<int>(m_stencil_coefs_y.size());
+        Real const * const AMREX_RESTRICT coefs_z = m_stencil_coefs_z.dataPtr();
+        auto const n_coefs_z = static_cast<int>(m_stencil_coefs_z.size());
+
+        // Extract tileboxes for which to loop
+        Box const& tbx = mfi.tilebox(Bfield[0]->ixType().toIntVect());
+        Box const& tby = mfi.tilebox(Bfield[1]->ixType().toIntVect());
+        Box const& tbz = mfi.tilebox(Bfield[2]->ixType().toIntVect());
+
+        // Calculate the curl of A
+        amrex::ParallelFor(tbx, tby, tbz,
+
+            // Bx calculation
+            [=] AMREX_GPU_DEVICE (int i, int j, int k){
+                // Skip field update in the embedded boundaries
+                if (update_Bx_arr && update_Bx_arr(i, j, k) == 0) { return; }
+
+                Bx(i, j, k) = (
+                    - T_Algo::UpwardDz(Ay, coefs_z, n_coefs_z, i, j, k)
+                    + T_Algo::UpwardDy(Az, coefs_y, n_coefs_y, i, j, k)
+                );
+            },
+
+            // By calculation
+            [=] AMREX_GPU_DEVICE (int i, int j, int k){
+                // Skip field update in the embedded boundaries
+                if (update_By_arr && update_By_arr(i, j, k) == 0) { return; }
+
+                By(i, j, k) = (
+                    - T_Algo::UpwardDx(Az, coefs_x, n_coefs_x, i, j, k)
+                    + T_Algo::UpwardDz(Ax, coefs_z, n_coefs_z, i, j, k)
+                );
+            },
+
+            // Bz calculation
+            [=] AMREX_GPU_DEVICE (int i, int j, int k){
+                // Skip field update in the embedded boundaries
+                if (update_Bz_arr && update_Bz_arr(i, j, k) == 0) { return; }
+
+                Bz(i, j, k) = (
+                    - T_Algo::UpwardDy(Ax, coefs_y, n_coefs_y, i, j, k)
+                    + T_Algo::UpwardDx(Ay, coefs_x, n_coefs_x, i, j, k)
+                );
+            }
+        );
+
+        if (cost && WarpX::load_balance_costs_update_algo == LoadBalanceCostsUpdateAlgo::Timers)
+        {
+            amrex::Gpu::synchronize();
+            wt = static_cast<amrex::Real>(amrex::second()) - wt;
+            amrex::HostDevice::Atomic::Add( &(*cost)[mfi.index()], wt);
+        }
+    }
+}
+#endif
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H
index 19b822e3628..0d12d104436 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H
+++ b/Source/FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H
@@ -1,7 +1,10 @@
-/* Copyright 2020 Remi Lehe
+/* Copyright 2020-2024 The WarpX Community
  *
  * This file is part of WarpX.
  *
+ * Authors: Remi Lehe (LBNL)
+ *          S. Eric Clark (Helion Energy)
+ *
  * License: BSD-3-Clause-LBNL
  */
@@ -172,10 +175,25 @@ class FiniteDifferenceSolver
      * \param[in] lev  level number for the calculation
      */
     void CalculateCurrentAmpere (
-        ablastr::fields::VectorField& Jfield,
-        ablastr::fields::VectorField const& Bfield,
-        std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_E,
-        int lev );
+        ablastr::fields::VectorField& Jfield,
+        ablastr::fields::VectorField const& Bfield,
+        std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_E,
+        int lev );
+
+    /**
+      * \brief Calculation of the B field from the vector potential A,
+      * i.e. B = curl(A).
+      *
+      * \param[out] Bfield  vector of magnetic field MultiFabs at a given level
+      * \param[in] Afield  vector of vector potential MultiFabs at a given level
+      * \param[in] eb_update_B  flags indicating where B should be updated, given the embedded boundaries
+      * \param[in] lev  level number for the calculation
+      */
+    void ComputeCurlA (
+        ablastr::fields::VectorField& Bfield,
+        ablastr::fields::VectorField const& Afield,
+        std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_B,
+        int lev );

 private:

@@ -255,6 +273,14 @@ class FiniteDifferenceSolver
             int lev );

+        template<typename T_Algo>
+        void ComputeCurlACylindrical (
+            ablastr::fields::VectorField& Bfield,
+            ablastr::fields::VectorField const& Afield,
+            std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_B,
+            int lev
+        );
+
 #else
         template< typename T_Algo >
         void EvolveBCartesian (
@@ -358,6 +384,14 @@ class FiniteDifferenceSolver
             std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_E,
             int lev );
+
+        template<typename T_Algo>
+        void ComputeCurlACartesian (
+            ablastr::fields::VectorField & Bfield,
+            ablastr::fields::VectorField const& Afield,
+            std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update_B,
+            int lev
+        );
 #endif
 };

diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt
index 1367578b0aa..bb29baefcb9 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/CMakeLists.txt
@@ -3,5 +3,6 @@ foreach(D IN LISTS WarpX_DIMS)
     target_sources(lib_${SD}
       PRIVATE
         HybridPICModel.cpp
+        ExternalVectorPotential.cpp
     )
 endforeach()
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.H b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.H
new file mode 100644
index 00000000000..632ff2bd785
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.H
@@ -0,0 +1,101 @@
+/* Copyright 2024 The WarpX Community
+ *
+ * This file is part of WarpX.
+ *
+ * Authors: S. Eric Clark (Helion Energy)
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#ifndef WARPX_EXTERNAL_VECTOR_POTENTIAL_H_
+#define WARPX_EXTERNAL_VECTOR_POTENTIAL_H_
+
+#include "Fields.H"
+
+#include "Utils/WarpXAlgorithmSelection.H"
+
+#include "EmbeddedBoundary/Enabled.H"
+#include "FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H"
+#include "Utils/Parser/ParserUtils.H"
+#include "Utils/WarpXConst.H"
+#include "Utils/WarpXProfilerWrapper.H"
+
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+/**
+ * \brief This class contains the parameters needed to evaluate a
+ * time varying external vector potential, leading to external E/B
+ * fields to be applied in the hybrid solver. This class is used to break up
+ * the passed-in fields into a spatial and a time dependent solution.
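+ *
+ * A minimal inputs-file sketch, matching the parameters parsed in
+ * ReadParameters() below (the names `coil`, `dB`, and `t_ramp` are
+ * illustrative user constants, not defaults); here curl(A) gives a uniform
+ * Bz = dB ramping linearly in time:
+ *
+ *   hybrid_pic_model.add_external_fields = 1
+ *   external_vector_potential.fields = coil
+ *   external_vector_potential.coil.Ax_external_grid_function(x,y,z) = "-0.5*y*dB"
+ *   external_vector_potential.coil.Ay_external_grid_function(x,y,z) = "0.5*x*dB"
+ *   external_vector_potential.coil.A_time_external_function(t) = "t/t_ramp"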
+ *
+ * Eventually this can be used in a list to control independent external
+ * fields with different time profiles.
+ *
+ */
+class ExternalVectorPotential
+{
+protected:
+    int m_nFields;
+
+    std::vector<std::string> m_field_names;
+
+    std::vector<std::string> m_Ax_ext_grid_function;
+    std::vector<std::string> m_Ay_ext_grid_function;
+    std::vector<std::string> m_Az_ext_grid_function;
+    std::vector<std::array< std::unique_ptr<amrex::Parser>, 3>> m_A_external_parser;
+    std::vector<std::array< amrex::ParserExecutor<4>, 3>> m_A_external;
+
+    std::vector<std::string> m_A_ext_time_function;
+    std::vector<std::unique_ptr<amrex::Parser>> m_A_external_time_parser;
+    std::vector<amrex::ParserExecutor<1>> m_A_time_scale;
+
+    std::vector<bool> m_read_A_from_file;
+    std::vector<std::string> m_external_file_path;
+
+public:
+
+    // Default Constructor
+    ExternalVectorPotential ();
+
+    void ReadParameters ();
+
+    void AllocateLevelMFs (
+        ablastr::fields::MultiFabRegister & fields,
+        int lev, const amrex::BoxArray& ba, const amrex::DistributionMapping& dm,
+        int ncomps,
+        const amrex::IntVect& ngEB,
+        const amrex::IntVect& Ex_nodal_flag,
+        const amrex::IntVect& Ey_nodal_flag,
+        const amrex::IntVect& Ez_nodal_flag,
+        const amrex::IntVect& Bx_nodal_flag,
+        const amrex::IntVect& By_nodal_flag,
+        const amrex::IntVect& Bz_nodal_flag
+    );
+
+    void InitData ();
+
+    void CalculateExternalCurlA ();
+    void CalculateExternalCurlA (std::string& coil_name);
+
+    AMREX_FORCE_INLINE
+    void PopulateExternalFieldFromVectorPotential (
+        ablastr::fields::VectorField const& dstField,
+        amrex::Real scale_factor,
+        ablastr::fields::VectorField const& srcField,
+        std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update);
+
+    void UpdateHybridExternalFields (
+        amrex::Real t,
+        amrex::Real dt
+    );
+};
+
+#endif //WARPX_EXTERNAL_VECTOR_POTENTIAL_H_
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.cpp
new file mode 100644
index 00000000000..50a62335b57
--- /dev/null
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/ExternalVectorPotential.cpp
@@ -0,0 +1,376 @@
+/* Copyright 2024 The WarpX Community
+ *
+ * This file is part of WarpX.
+ *
+ * Authors: S. Eric Clark (Helion Energy)
+ *
+ * License: BSD-3-Clause-LBNL
+ */
+
+#include "ExternalVectorPotential.H"
+#include "FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H"
+#include "Fields.H"
+#include "WarpX.H"
+
+#include
+
+using namespace amrex;
+using namespace warpx::fields;
+
+ExternalVectorPotential::ExternalVectorPotential ()
+{
+    ReadParameters();
+}
+
+void
+ExternalVectorPotential::ReadParameters ()
+{
+    const ParmParse pp_ext_A("external_vector_potential");
+
+    pp_ext_A.queryarr("fields", m_field_names);
+
+    WARPX_ALWAYS_ASSERT_WITH_MESSAGE(!m_field_names.empty(),
+        "No external field names defined in external_vector_potential.fields");
+
+    m_nFields = static_cast<int>(m_field_names.size());
+
+    // Resize vectors and set defaults
+    m_Ax_ext_grid_function.resize(m_nFields);
+    m_Ay_ext_grid_function.resize(m_nFields);
+    m_Az_ext_grid_function.resize(m_nFields);
+    for (std::string & field : m_Ax_ext_grid_function) { field = "0.0"; }
+    for (std::string & field : m_Ay_ext_grid_function) { field = "0.0"; }
+    for (std::string & field : m_Az_ext_grid_function) { field = "0.0"; }
+
+    m_A_external_parser.resize(m_nFields);
+    m_A_external.resize(m_nFields);
+
+    m_A_ext_time_function.resize(m_nFields);
+    for (std::string & field_time : m_A_ext_time_function) {field_time = "1.0"; }
+
+    m_A_external_time_parser.resize(m_nFields);
+    m_A_time_scale.resize(m_nFields);
+
+    m_read_A_from_file.resize(m_nFields);
+    m_external_file_path.resize(m_nFields);
+    for (std::string & file_name : m_external_file_path) { file_name = ""; }
+
+    for (int i = 0; i < m_nFields; ++i) {
+        bool read_from_file = false;
+        utils::parser::queryWithParser(pp_ext_A,
+            (m_field_names[i]+".read_from_file").c_str(), read_from_file);
+        m_read_A_from_file[i] = read_from_file;
+
+        if (m_read_A_from_file[i]) {
+            pp_ext_A.query((m_field_names[i]+".path").c_str(), m_external_file_path[i]);
+        } else {
+            pp_ext_A.query((m_field_names[i]+".Ax_external_grid_function(x,y,z)").c_str(),
+                m_Ax_ext_grid_function[i]);
+            pp_ext_A.query((m_field_names[i]+".Ay_external_grid_function(x,y,z)").c_str(),
+                m_Ay_ext_grid_function[i]);
+            pp_ext_A.query((m_field_names[i]+".Az_external_grid_function(x,y,z)").c_str(),
+                m_Az_ext_grid_function[i]);
+        }
+
+        pp_ext_A.query((m_field_names[i]+".A_time_external_function(t)").c_str(),
+            m_A_ext_time_function[i]);
+    }
+}
+
+void
+ExternalVectorPotential::AllocateLevelMFs (
+    ablastr::fields::MultiFabRegister & fields,
+    int lev, const BoxArray& ba, const DistributionMapping& dm,
+    const int ncomps,
+    const IntVect& ngEB,
+    const IntVect& Ex_nodal_flag,
+    const IntVect& Ey_nodal_flag,
+    const IntVect& Ez_nodal_flag,
+    const IntVect& Bx_nodal_flag,
+    const IntVect& By_nodal_flag,
+    const IntVect& Bz_nodal_flag)
+{
+    using ablastr::fields::Direction;
+    for (std::string const & field_name : m_field_names) {
+        const std::string Aext_field = field_name + std::string{"_Aext"};
+        fields.alloc_init(Aext_field, Direction{0},
+            lev, amrex::convert(ba, Ex_nodal_flag),
+            dm, ncomps, ngEB, 0.0_rt);
+        fields.alloc_init(Aext_field, Direction{1},
+            lev, amrex::convert(ba, Ey_nodal_flag),
+            dm, ncomps, ngEB, 0.0_rt);
+        fields.alloc_init(Aext_field, Direction{2},
+            lev, amrex::convert(ba, Ez_nodal_flag),
+            dm, ncomps, ngEB, 0.0_rt);
+
+        const std::string curlAext_field = field_name + std::string{"_curlAext"};
+        fields.alloc_init(curlAext_field, Direction{0},
+            lev, amrex::convert(ba, Bx_nodal_flag),
+            dm, ncomps, ngEB, 0.0_rt);
+        fields.alloc_init(curlAext_field, Direction{1},
+            lev, amrex::convert(ba, By_nodal_flag),
+            dm, ncomps, ngEB, 0.0_rt);
+        fields.alloc_init(curlAext_field, Direction{2},
+            lev, amrex::convert(ba, Bz_nodal_flag),
+            dm, ncomps, ngEB, 0.0_rt);
+    }
+    fields.alloc_init(FieldType::hybrid_E_fp_external, Direction{0},
+        lev, amrex::convert(ba, Ex_nodal_flag),
+        dm, ncomps, ngEB, 0.0_rt);
+    fields.alloc_init(FieldType::hybrid_E_fp_external, Direction{1},
+        lev, amrex::convert(ba, Ey_nodal_flag),
+        dm, ncomps, ngEB, 0.0_rt);
+    fields.alloc_init(FieldType::hybrid_E_fp_external, Direction{2},
+        lev, amrex::convert(ba, Ez_nodal_flag),
+        dm, ncomps, ngEB, 0.0_rt);
+    fields.alloc_init(FieldType::hybrid_B_fp_external, Direction{0},
+        lev, amrex::convert(ba, Bx_nodal_flag),
+        dm, ncomps, ngEB, 0.0_rt);
+    fields.alloc_init(FieldType::hybrid_B_fp_external, Direction{1},
+        lev, amrex::convert(ba, By_nodal_flag),
+        dm, ncomps, ngEB, 0.0_rt);
+    fields.alloc_init(FieldType::hybrid_B_fp_external, Direction{2},
+        lev, amrex::convert(ba, Bz_nodal_flag),
+        dm, ncomps, ngEB, 0.0_rt);
+}
+
+void
+ExternalVectorPotential::InitData ()
+{
+    using ablastr::fields::Direction;
+    auto& warpx = WarpX::GetInstance();
+
+    int A_time_dep_count = 0;
+
+    for (int i = 0; i < m_nFields; ++i) {
+
+        const std::string Aext_field = m_field_names[i] + std::string{"_Aext"};
+
+        if (m_read_A_from_file[i]) {
+            // Read A fields from file
+            for (auto lev = 0; lev <= warpx.finestLevel(); ++lev) {
+#if defined(WARPX_DIM_RZ)
+                warpx.ReadExternalFieldFromFile(m_external_file_path[i],
+                    warpx.m_fields.get(Aext_field, Direction{0}, lev),
+                    "A", "r");
+                warpx.ReadExternalFieldFromFile(m_external_file_path[i],
+                    warpx.m_fields.get(Aext_field, Direction{1}, lev),
+                    "A", "t");
+                warpx.ReadExternalFieldFromFile(m_external_file_path[i],
+                    warpx.m_fields.get(Aext_field, Direction{2}, lev),
+                    "A", "z");
+#else
+                warpx.ReadExternalFieldFromFile(m_external_file_path[i],
+                    warpx.m_fields.get(Aext_field, Direction{0}, lev),
+                    "A", "x");
+                warpx.ReadExternalFieldFromFile(m_external_file_path[i],
+                    warpx.m_fields.get(Aext_field, Direction{1}, lev),
+                    "A", "y");
+                warpx.ReadExternalFieldFromFile(m_external_file_path[i],
+                    warpx.m_fields.get(Aext_field, Direction{2}, lev),
+                    "A", "z");
+#endif
+            }
+        } else {
+            // Initialize the A fields from expression
+            m_A_external_parser[i][0] = std::make_unique<amrex::Parser>(
+                utils::parser::makeParser(m_Ax_ext_grid_function[i],{"x","y","z","t"}));
+            m_A_external_parser[i][1] = std::make_unique<amrex::Parser>(
+                utils::parser::makeParser(m_Ay_ext_grid_function[i],{"x","y","z","t"}));
+            m_A_external_parser[i][2] = std::make_unique<amrex::Parser>(
+                utils::parser::makeParser(m_Az_ext_grid_function[i],{"x","y","z","t"}));
+            m_A_external[i][0] = m_A_external_parser[i][0]->compile<4>();
+            m_A_external[i][1] = m_A_external_parser[i][1]->compile<4>();
+            m_A_external[i][2] = m_A_external_parser[i][2]->compile<4>();
+
+            // check if the external current parsers depend on time
+            for (int idim=0; idim<3; idim++) {
+                const std::set<std::string> A_ext_symbols = m_A_external_parser[i][idim]->symbols();
+                WARPX_ALWAYS_ASSERT_WITH_MESSAGE(A_ext_symbols.count("t") == 0,
+                    "Externally applied vector potential time variation must be set with A_time_external_function(t)");
+            }
+
+            // Initialize data onto grid
+            for (auto lev = 0; lev <= warpx.finestLevel(); ++lev) {
+                warpx.ComputeExternalFieldOnGridUsingParser(
+                    Aext_field,
+                    m_A_external[i][0],
+                    m_A_external[i][1],
+                    m_A_external[i][2],
+                    lev, PatchType::fine,
+                    warpx.GetEBUpdateEFlag(),
+                    false);
+
+                for (int idir = 0; idir < 3; ++idir) {
+                    warpx.m_fields.get(Aext_field, Direction{idir}, lev)->
+                        FillBoundary(warpx.Geom(lev).periodicity());
+                }
+            }
+        }
+
+        amrex::Gpu::streamSynchronize();
+
+        CalculateExternalCurlA(m_field_names[i]);
+
+        // Generate parser for time function
+        m_A_external_time_parser[i] = std::make_unique<amrex::Parser>(
+            utils::parser::makeParser(m_A_ext_time_function[i],{"t",}));
+        m_A_time_scale[i] = m_A_external_time_parser[i]->compile<1>();
+
+        const std::set<std::string> A_time_ext_symbols = m_A_external_time_parser[i]->symbols();
+        A_time_dep_count += static_cast<int>(A_time_ext_symbols.count("t"));
+    }
+
+    if (A_time_dep_count > 0) {
+        ablastr::warn_manager::WMRecordWarning(
+            "HybridPIC ExternalVectorPotential",
+            "The Coulomb gauge is expected; please make sure A is divergence free. Divergence cleaning of A is to be implemented soon.",
+            ablastr::warn_manager::WarnPriority::low
+        );
+    }
+
+    UpdateHybridExternalFields(warpx.gett_new(0), warpx.getdt(0));
+}
+
+
+void
+ExternalVectorPotential::CalculateExternalCurlA ()
+{
+    for (auto fname : m_field_names) {
+        CalculateExternalCurlA(fname);
+    }
+}
+
+void
+ExternalVectorPotential::CalculateExternalCurlA (std::string& coil_name)
+{
+    using ablastr::fields::Direction;
+    auto & warpx = WarpX::GetInstance();
+
+    // Compute the curl of the reference A field (unscaled by time function)
+    const std::string Aext_field = coil_name + std::string{"_Aext"};
+    const std::string curlAext_field = coil_name + std::string{"_curlAext"};
+
+    ablastr::fields::MultiLevelVectorField A_ext =
+        warpx.m_fields.get_mr_levels_alldirs(Aext_field, warpx.finestLevel());
+    ablastr::fields::MultiLevelVectorField curlA_ext =
+        warpx.m_fields.get_mr_levels_alldirs(curlAext_field, warpx.finestLevel());
+
+    for (int lev = 0; lev <= warpx.finestLevel(); ++lev) {
+        warpx.get_pointer_fdtd_solver_fp(lev)->ComputeCurlA(
+            curlA_ext[lev],
+            A_ext[lev],
+            warpx.GetEBUpdateBFlag()[lev],
+            lev);
+
+        for (int idir = 0; idir < 3; ++idir) {
+            warpx.m_fields.get(curlAext_field, Direction{idir}, lev)->
+                FillBoundary(warpx.Geom(lev).periodicity());
+        }
+    }
+}
+
+AMREX_FORCE_INLINE
+void
+ExternalVectorPotential::PopulateExternalFieldFromVectorPotential (
+    ablastr::fields::VectorField const& dstField,
+    amrex::Real scale_factor,
+    ablastr::fields::VectorField const& srcField,
+    std::array< std::unique_ptr<amrex::iMultiFab>,3> const& eb_update)
+{
+    // Loop through the grids, and over the tiles within each grid
+#ifdef AMREX_USE_OMP
+#pragma omp parallel if (amrex::Gpu::notInLaunchRegion())
+#endif
+    for ( MFIter mfi(*dstField[0], TilingIfNotGPU()); mfi.isValid(); ++mfi ) {
+        // Extract field data for this grid/tile
+        Array4<Real> const& Fx = dstField[0]->array(mfi);
+        Array4<Real> const& Fy = dstField[1]->array(mfi);
+        Array4<Real> const& Fz = dstField[2]->array(mfi);
+
+        Array4<Real const> const& Sx = srcField[0]->const_array(mfi);
+        Array4<Real const> const& Sy = srcField[1]->const_array(mfi);
+        Array4<Real const> const& Sz = srcField[2]->const_array(mfi);
+
+        // Extract structures indicating where the fields
+        // should be updated, given the position of the embedded boundaries.
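+        // (These flags are 1 where a component should be updated and 0 on
+        // cells excluded by the embedded boundary, so the kernels below
+        // simply return early for excluded cells.)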
+        amrex::Array4<int> update_Fx_arr, update_Fy_arr, update_Fz_arr;
+        if (EB::enabled()) {
+            update_Fx_arr = eb_update[0]->array(mfi);
+            update_Fy_arr = eb_update[1]->array(mfi);
+            update_Fz_arr = eb_update[2]->array(mfi);
+        }
+
+        // Extract tileboxes for which to loop
+        Box const& tbx = mfi.tilebox(dstField[0]->ixType().toIntVect());
+        Box const& tby = mfi.tilebox(dstField[1]->ixType().toIntVect());
+        Box const& tbz = mfi.tilebox(dstField[2]->ixType().toIntVect());
+
+        // Loop over the cells and update the fields
+        amrex::ParallelFor(tbx, tby, tbz,
+
+            [=] AMREX_GPU_DEVICE (int i, int j, int k){
+                // Skip field update in the embedded boundaries
+                if (update_Fx_arr && update_Fx_arr(i, j, k) == 0) { return; }
+
+                Fx(i,j,k) = scale_factor * Sx(i,j,k);
+            },
+
+            [=] AMREX_GPU_DEVICE (int i, int j, int k){
+                // Skip field update in the embedded boundaries
+                if (update_Fy_arr && update_Fy_arr(i, j, k) == 0) { return; }
+
+                Fy(i,j,k) = scale_factor * Sy(i,j,k);
+            },
+
+            [=] AMREX_GPU_DEVICE (int i, int j, int k){
+                // Skip field update in the embedded boundaries
+                if (update_Fz_arr && update_Fz_arr(i, j, k) == 0) { return; }
+
+                Fz(i,j,k) = scale_factor * Sz(i,j,k);
+            }
+        );
+    }
+}
+
+void
+ExternalVectorPotential::UpdateHybridExternalFields (const amrex::Real t, const amrex::Real dt)
+{
+    using ablastr::fields::Direction;
+    auto& warpx = WarpX::GetInstance();
+
+    ablastr::fields::MultiLevelVectorField B_ext =
+        warpx.m_fields.get_mr_levels_alldirs(FieldType::hybrid_B_fp_external, warpx.finestLevel());
+    ablastr::fields::MultiLevelVectorField E_ext =
+        warpx.m_fields.get_mr_levels_alldirs(FieldType::hybrid_E_fp_external, warpx.finestLevel());
+
+    for (int i = 0; i < m_nFields; ++i) {
+        const std::string Aext_field = m_field_names[i] + std::string{"_Aext"};
+        const std::string curlAext_field = m_field_names[i] + std::string{"_curlAext"};
+
+        // Get B-field scaling factor
+        const amrex::Real scale_factor_B = m_A_time_scale[i](t);
+
+        // Get dA/dt scaling factor based on a time-centered finite difference around t
+        const amrex::Real sf_l = m_A_time_scale[i](t-0.5_rt*dt);
+        const amrex::Real sf_r = m_A_time_scale[i](t+0.5_rt*dt);
+        const amrex::Real scale_factor_E = -(sf_r - sf_l)/dt;
+
+        ablastr::fields::MultiLevelVectorField A_ext =
+            warpx.m_fields.get_mr_levels_alldirs(Aext_field, warpx.finestLevel());
+        ablastr::fields::MultiLevelVectorField curlA_ext =
+            warpx.m_fields.get_mr_levels_alldirs(curlAext_field, warpx.finestLevel());
+
+        for (int lev = 0; lev <= warpx.finestLevel(); ++lev) {
+            PopulateExternalFieldFromVectorPotential(E_ext[lev], scale_factor_E, A_ext[lev], warpx.GetEBUpdateEFlag()[lev]);
+            PopulateExternalFieldFromVectorPotential(B_ext[lev], scale_factor_B, curlA_ext[lev], warpx.GetEBUpdateBFlag()[lev]);
+
+            for (int idir = 0; idir < 3; ++idir) {
+                E_ext[lev][Direction{idir}]->FillBoundary(warpx.Geom(lev).periodicity());
+                B_ext[lev][Direction{idir}]->FillBoundary(warpx.Geom(lev).periodicity());
+            }
+        }
+    }
+    amrex::Gpu::streamSynchronize();
+}
diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H
index 4b50c16a0c8..2a489e1c806 100644
--- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H
+++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.H
@@ -1,8 +1,9 @@
-/* Copyright 2023 The WarpX Community
+/* Copyright 2023-2024 The WarpX Community
  *
  * This file is part of WarpX.
  *
  * Authors: Roelof Groenewald (TAE Technologies)
Eric Clark (Helion Energy) * * License: BSD-3-Clause-LBNL */ @@ -12,6 +13,9 @@ #include "HybridPICModel_fwd.H" +#include "Fields.H" + +#include "ExternalVectorPotential.H" #include "Utils/WarpXAlgorithmSelection.H" #include "FieldSolver/FiniteDifferenceSolver/FiniteDifferenceSolver.H" @@ -23,6 +27,9 @@ #include #include +#include +#include +#include #include @@ -39,11 +46,26 @@ public: void ReadParameters (); /** Allocate hybrid-PIC specific multifabs. Called in constructor. */ - void AllocateLevelMFs (ablastr::fields::MultiFabRegister & fields, - int lev, const amrex::BoxArray& ba, const amrex::DistributionMapping& dm, - int ncomps, const amrex::IntVect& ngJ, const amrex::IntVect& ngRho, - const amrex::IntVect& jx_nodal_flag, const amrex::IntVect& jy_nodal_flag, - const amrex::IntVect& jz_nodal_flag, const amrex::IntVect& rho_nodal_flag); + void AllocateLevelMFs ( + ablastr::fields::MultiFabRegister & fields, + int lev, + const amrex::BoxArray& ba, + const amrex::DistributionMapping& dm, + int ncomps, + const amrex::IntVect& ngJ, + const amrex::IntVect& ngRho, + const amrex::IntVect& ngEB, + const amrex::IntVect& jx_nodal_flag, + const amrex::IntVect& jy_nodal_flag, + const amrex::IntVect& jz_nodal_flag, + const amrex::IntVect& rho_nodal_flag, + const amrex::IntVect& Ex_nodal_flag, + const amrex::IntVect& Ey_nodal_flag, + const amrex::IntVect& Ez_nodal_flag, + const amrex::IntVect& Bx_nodal_flag, + const amrex::IntVect& By_nodal_flag, + const amrex::IntVect& Bz_nodal_flag + ) const; void InitData (); @@ -142,7 +164,7 @@ public: * charge density (and assumption of quasi-neutrality) using the user * specified electron equation of state. * - * \param[out] Pe_filed scalar electron pressure MultiFab at a given level + * \param[out] Pe_field scalar electron pressure MultiFab at a given level * \param[in] rho_field scalar ion charge density Multifab at a given level */ void FillElectronPressureMF ( @@ -153,6 +175,8 @@ public: /** Number of substeps to take when evolving B */ int m_substeps = 10; + bool m_holmstrom_vacuum_region = false; + /** Electron temperature in eV */ amrex::Real m_elec_temp; /** Reference electron density */ @@ -178,7 +202,11 @@ public: std::string m_Jz_ext_grid_function = "0.0"; std::array< std::unique_ptr<amrex::Parser>, 3> m_J_external_parser; std::array< amrex::ParserExecutor<4>, 3> m_J_external; - bool m_external_field_has_time_dependence = false; + bool m_external_current_has_time_dependence = false; + + /** External E/B fields */ + bool m_add_external_fields = false; + std::unique_ptr<ExternalVectorPotential> m_external_vector_potential; /** Gpu Vector with index type of the Jx multifab */ amrex::GpuArray<int, 3> Jx_IndexType; diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp index 64ee83b10e0..3e5c04e9794 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/HybridPICModel.cpp @@ -1,8 +1,9 @@ -/* Copyright 2023 The WarpX Community +/* Copyright 2023-2024 The WarpX Community * * This file is part of WarpX. * * Authors: Roelof Groenewald (TAE Technologies) + * S.
Eric Clark (Helion Energy) * * License: BSD-3-Clause-LBNL */ @@ -12,6 +13,8 @@ #include "EmbeddedBoundary/Enabled.H" #include "Python/callbacks.H" #include "Fields.H" +#include "Particles/MultiParticleContainer.H" +#include "ExternalVectorPotential.H" #include "WarpX.H" using namespace amrex; @@ -30,6 +33,8 @@ void HybridPICModel::ReadParameters () // of sub steps can be specified by the user (defaults to 50). utils::parser::queryWithParser(pp_hybrid, "substeps", m_substeps); + utils::parser::queryWithParser(pp_hybrid, "holmstrom_vacuum_region", m_holmstrom_vacuum_region); + // The hybrid model requires an electron temperature, reference density // and exponent to be given. These values will be used to calculate the // electron pressure according to p = n0 * Te * (n/n0)^gamma @@ -54,15 +59,31 @@ void HybridPICModel::ReadParameters () pp_hybrid.query("Jx_external_grid_function(x,y,z,t)", m_Jx_ext_grid_function); pp_hybrid.query("Jy_external_grid_function(x,y,z,t)", m_Jy_ext_grid_function); pp_hybrid.query("Jz_external_grid_function(x,y,z,t)", m_Jz_ext_grid_function); + + // external fields + pp_hybrid.query("add_external_fields", m_add_external_fields); + + if (m_add_external_fields) { + m_external_vector_potential = std::make_unique<ExternalVectorPotential>(); + } } -void HybridPICModel::AllocateLevelMFs (ablastr::fields::MultiFabRegister & fields, - int lev, const BoxArray& ba, const DistributionMapping& dm, - const int ncomps, const IntVect& ngJ, const IntVect& ngRho, - const IntVect& jx_nodal_flag, - const IntVect& jy_nodal_flag, - const IntVect& jz_nodal_flag, - const IntVect& rho_nodal_flag) +void HybridPICModel::AllocateLevelMFs ( + ablastr::fields::MultiFabRegister & fields, + int lev, const BoxArray& ba, const DistributionMapping& dm, + const int ncomps, + const IntVect& ngJ, const IntVect& ngRho, + const IntVect& ngEB, + const IntVect& jx_nodal_flag, + const IntVect& jy_nodal_flag, + const IntVect& jz_nodal_flag, + const IntVect& rho_nodal_flag, + const IntVect& Ex_nodal_flag, + const IntVect& Ey_nodal_flag, + const IntVect& Ez_nodal_flag, + const IntVect& Bx_nodal_flag, + const IntVect& By_nodal_flag, + const IntVect& Bz_nodal_flag) const { using ablastr::fields::Direction; @@ -114,6 +135,16 @@ void HybridPICModel::AllocateLevelMFs (ablastr::fields::MultiFabRegister & field lev, amrex::convert(ba, jz_nodal_flag), dm, ncomps, IntVect(1), 0.0_rt); + if (m_add_external_fields) { + m_external_vector_potential->AllocateLevelMFs( + fields, + lev, ba, dm, + ncomps, ngEB, + Ex_nodal_flag, Ey_nodal_flag, Ez_nodal_flag, + Bx_nodal_flag, By_nodal_flag, Bz_nodal_flag + ); + } + #ifdef WARPX_DIM_RZ WARPX_ALWAYS_ASSERT_WITH_MESSAGE( (ncomps == 1), @@ -142,7 +173,7 @@ void HybridPICModel::InitData () // check if the external current parsers depend on time for (int i=0; i<3; i++) { const std::set<std::string> J_ext_symbols = m_J_external_parser[i]->symbols(); - m_external_field_has_time_dependence += J_ext_symbols.count("t"); + m_external_current_has_time_dependence += J_ext_symbols.count("t"); } auto & warpx = WarpX::GetInstance(); @@ -230,11 +261,15 @@ void HybridPICModel::InitData () lev, PatchType::fine, warpx.GetEBUpdateEFlag()); } + + if (m_add_external_fields) { + m_external_vector_potential->InitData(); + } } void HybridPICModel::GetCurrentExternal () { - if (!m_external_field_has_time_dependence) { return; } + if (!m_external_current_has_time_dependence) { return; } auto& warpx = WarpX::GetInstance(); for (int lev = 0; lev <= warpx.finestLevel(); ++lev) @@ -541,6 +576,7 @@ void HybridPICModel::BfieldEvolveRK ( } } + void
HybridPICModel::FieldPush ( ablastr::fields::MultiLevelVectorField const& Bfield, ablastr::fields::MultiLevelVectorField const& Efield, @@ -552,13 +588,15 @@ void HybridPICModel::FieldPush ( { auto& warpx = WarpX::GetInstance(); + amrex::Real const t_old = warpx.gett_old(0); + // Calculate J = curl x B / mu0 - J_ext CalculatePlasmaCurrent(Bfield, eb_update_E); // Calculate the E-field from Ohm's law HybridPICSolveE(Efield, Jfield, Bfield, rhofield, eb_update_E, true); warpx.FillBoundaryE(ng, nodal_sync); + // Push forward the B-field using Faraday's law - amrex::Real const t_old = warpx.gett_old(0); warpx.EvolveB(dt, dt_type, t_old); warpx.FillBoundaryB(ng, nodal_sync); } diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package index 8145cfcef2f..d4fa9bfc390 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package +++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel/Make.package @@ -1,3 +1,4 @@ CEXE_sources += HybridPICModel.cpp +CEXE_sources += ExternalVectorPotential.cpp VPATH_LOCATIONS += $(WARPX_HOME)/Source/FieldSolver/FiniteDifferenceSolver/HybridPICModel diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp index 2047e87b696..b750a7e4f20 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp @@ -1,8 +1,9 @@ -/* Copyright 2023 The WarpX Community +/* Copyright 2023-2024 The WarpX Community * * This file is part of WarpX. * * Authors: Roelof Groenewald (TAE Technologies) + * S. Eric Clark (Helion Energy) * * License: BSD-3-Clause-LBNL */ @@ -22,6 +23,7 @@ #include using namespace amrex; +using warpx::fields::FieldType; void FiniteDifferenceSolver::CalculateCurrentAmpere ( ablastr::fields::VectorField & Jfield, @@ -429,6 +431,17 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( const bool include_hyper_resistivity_term = (eta_h > 0.0) && solve_for_Faraday; + const bool include_external_fields = hybrid_model->m_add_external_fields; + + const bool holmstrom_vacuum_region = hybrid_model->m_holmstrom_vacuum_region; + + auto & warpx = WarpX::GetInstance(); + ablastr::fields::VectorField Bfield_external, Efield_external; + if (include_external_fields) { + Bfield_external = warpx.m_fields.get_alldirs(FieldType::hybrid_B_fp_external, 0); // lev=0 + Efield_external = warpx.m_fields.get_alldirs(FieldType::hybrid_E_fp_external, 0); // lev=0 + } + // Index type required for interpolating fields from their respective // staggering to the Ex, Ey, Ez locations amrex::GpuArray<int, 3> const& Er_stag = hybrid_model->Ex_IndexType; @@ -485,6 +498,13 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( Array4<Real const> const& Bt = Bfield[1]->const_array(mfi); Array4<Real const> const& Bz = Bfield[2]->const_array(mfi); + Array4<Real> Br_ext, Bt_ext, Bz_ext; + if (include_external_fields) { + Br_ext = Bfield_external[0]->array(mfi); + Bt_ext = Bfield_external[1]->array(mfi); + Bz_ext = Bfield_external[2]->array(mfi); + } + // Loop over the cells and update the nodal E field amrex::ParallelFor(mfi.tilebox(), [=] AMREX_GPU_DEVICE (int i, int j, int /*k*/){ @@ -499,9 +519,15 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( auto const jiz_interp = Interp(Jiz, Jz_stag, nodal, coarsen, i, j, 0, 0); // interpolate the B field to a nodal grid - auto const Br_interp = Interp(Br, Br_stag, nodal, coarsen, i, j, 0, 0); -
auto const Bt_interp = Interp(Bt, Bt_stag, nodal, coarsen, i, j, 0, 0); - auto const Bz_interp = Interp(Bz, Bz_stag, nodal, coarsen, i, j, 0, 0); + auto Br_interp = Interp(Br, Br_stag, nodal, coarsen, i, j, 0, 0); + auto Bt_interp = Interp(Bt, Bt_stag, nodal, coarsen, i, j, 0, 0); + auto Bz_interp = Interp(Bz, Bz_stag, nodal, coarsen, i, j, 0, 0); + + if (include_external_fields) { + Br_interp += Interp(Br_ext, Br_stag, nodal, coarsen, i, j, 0, 0); + Bt_interp += Interp(Bt_ext, Bt_stag, nodal, coarsen, i, j, 0, 0); + Bz_interp += Interp(Bz_ext, Bz_stag, nodal, coarsen, i, j, 0, 0); + } // calculate enE = (J - Ji) x B enE_nodal(i, j, 0, 0) = ( @@ -558,6 +584,13 @@ update_Ez_arr = eb_update_E[2]->array(mfi); } + Array4<Real> Er_ext, Et_ext, Ez_ext; + if (include_external_fields) { + Er_ext = Efield_external[0]->array(mfi); + Et_ext = Efield_external[1]->array(mfi); + Ez_ext = Efield_external[2]->array(mfi); + } + // Extract stencil coefficients Real const * const AMREX_RESTRICT coefs_r = m_stencil_coefs_r.dataPtr(); int const n_coefs_r = static_cast<int>(m_stencil_coefs_r.size()); @@ -582,7 +615,8 @@ if (update_Er_arr && update_Er_arr(i, j, 0) == 0) { return; } // Interpolate to get the appropriate charge density in space - Real rho_val = Interp(rho, nodal, Er_stag, coarsen, i, j, 0, 0); + const Real rho_val = Interp(rho, nodal, Er_stag, coarsen, i, j, 0, 0); + Real rho_val_limited = rho_val; // Interpolate current to appropriate staggering to match E field Real jtot_val = 0._rt; @@ -594,7 +628,7 @@ } // safety condition since we divide by rho_val later - if (rho_val < rho_floor) { rho_val = rho_floor; } + if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } // Get the gradient of the electron pressure if the longitudinal part of // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 @@ -604,7 +638,11 @@ // interpolate the nodal neE values to the Yee grid auto enE_r = Interp(enE, nodal, Er_stag, coarsen, i, j, 0, 0); - Er(i, j, 0) = (enE_r - grad_Pe) / rho_val; + if (rho_val < rho_floor && holmstrom_vacuum_region) { + Er(i, j, 0) = 0._rt; + } else { + Er(i, j, 0) = (enE_r - grad_Pe) / rho_val_limited; + } // Add resistivity only if E field value is used to update B if (solve_for_Faraday) { Er(i, j, 0) += eta(rho_val, jtot_val) * Jr(i, j, 0); } @@ -617,6 +655,10 @@ + T_Algo::Dzz(Jr, coefs_z, n_coefs_z, i, j, 0, 0) - jr_val/(r*r); Er(i, j, 0) -= eta_h * nabla2Jr; } + + if (include_external_fields && (rho_val >= rho_floor)) { + Er(i, j, 0) -= Er_ext(i, j, 0); + } }, // Et calculation @@ -634,7 +676,8 @@ } // Interpolate to get the appropriate charge density in space - Real rho_val = Interp(rho, nodal, Et_stag, coarsen, i, j, 0, 0); + const Real rho_val = Interp(rho, nodal, Et_stag, coarsen, i, j, 0, 0); + Real rho_val_limited = rho_val; // Interpolate current to appropriate staggering to match E field Real jtot_val = 0._rt; @@ -646,7 +689,7 @@ } // safety condition since we divide by rho_val later - if (rho_val < rho_floor) { rho_val = rho_floor; } + if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } // Get the gradient of the electron pressure // ->
d/dt = 0 for m = 0 @@ -655,7 +698,11 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( // interpolate the nodal neE values to the Yee grid auto enE_t = Interp(enE, nodal, Et_stag, coarsen, i, j, 0, 1); - Et(i, j, 0) = (enE_t - grad_Pe) / rho_val; + if (rho_val < rho_floor && holmstrom_vacuum_region) { + Et(i, j, 0) = 0._rt; + } else { + Et(i, j, 0) = (enE_t - grad_Pe) / rho_val_limited; + } // Add resistivity only if E field value is used to update B if (solve_for_Faraday) { Et(i, j, 0) += eta(rho_val, jtot_val) * Jt(i, j, 0); } @@ -664,9 +711,12 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( const Real jt_val = Interp(Jt, Jt_stag, Et_stag, coarsen, i, j, 0, 0); auto nabla2Jt = T_Algo::Dr_rDr_over_r(Jt, r, dr, coefs_r, n_coefs_r, i, j, 0, 0) + T_Algo::Dzz(Jt, coefs_z, n_coefs_z, i, j, 0, 0) - jt_val/(r*r); - Et(i, j, 0) -= eta_h * nabla2Jt; } + + if (include_external_fields && (rho_val >= rho_floor)) { + Et(i, j, 0) -= Et_ext(i, j, 0); + } }, // Ez calculation @@ -676,7 +726,8 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( if (update_Ez_arr && update_Ez_arr(i, j, 0) == 0) { return; } // Interpolate to get the appropriate charge density in space - Real rho_val = Interp(rho, nodal, Ez_stag, coarsen, i, j, 0, 0); + const Real rho_val = Interp(rho, nodal, Ez_stag, coarsen, i, j, 0, 0); + Real rho_val_limited = rho_val; // Interpolate current to appropriate staggering to match E field Real jtot_val = 0._rt; @@ -688,7 +739,7 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( } // safety condition since we divide by rho_val later - if (rho_val < rho_floor) { rho_val = rho_floor; } + if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } // Get the gradient of the electron pressure if the longitudinal part of // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 @@ -698,7 +749,11 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( // interpolate the nodal neE values to the Yee grid auto enE_z = Interp(enE, nodal, Ez_stag, coarsen, i, j, 0, 2); - Ez(i, j, 0) = (enE_z - grad_Pe) / rho_val; + if (rho_val < rho_floor && holmstrom_vacuum_region) { + Ez(i, j, 0) = 0._rt; + } else { + Ez(i, j, 0) = (enE_z - grad_Pe) / rho_val_limited; + } // Add resistivity only if E field value is used to update B if (solve_for_Faraday) { Ez(i, j, 0) += eta(rho_val, jtot_val) * Jz(i, j, 0); } @@ -714,6 +769,10 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( Ez(i, j, 0) -= eta_h * nabla2Jz; } + + if (include_external_fields && (rho_val >= rho_floor)) { + Ez(i, j, 0) -= Ez_ext(i, j, 0); + } } ); @@ -753,6 +812,17 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( const bool include_hyper_resistivity_term = (eta_h > 0.) 
&& solve_for_Faraday; + const bool include_external_fields = hybrid_model->m_add_external_fields; + + const bool holmstrom_vacuum_region = hybrid_model->m_holmstrom_vacuum_region; + + auto & warpx = WarpX::GetInstance(); + ablastr::fields::VectorField Bfield_external, Efield_external; + if (include_external_fields) { + Bfield_external = warpx.m_fields.get_alldirs(FieldType::hybrid_B_fp_external, 0); // lev=0 + Efield_external = warpx.m_fields.get_alldirs(FieldType::hybrid_E_fp_external, 0); // lev=0 + } + // Index type required for interpolating fields from their respective // staggering to the Ex, Ey, Ez locations amrex::GpuArray<int, 3> const& Ex_stag = hybrid_model->Ex_IndexType; @@ -809,6 +879,13 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( Array4<Real const> const& By = Bfield[1]->const_array(mfi); Array4<Real const> const& Bz = Bfield[2]->const_array(mfi); + Array4<Real> Bx_ext, By_ext, Bz_ext; + if (include_external_fields) { + Bx_ext = Bfield_external[0]->array(mfi); + By_ext = Bfield_external[1]->array(mfi); + Bz_ext = Bfield_external[2]->array(mfi); + } + // Loop over the cells and update the nodal E field amrex::ParallelFor(mfi.tilebox(), [=] AMREX_GPU_DEVICE (int i, int j, int k){ @@ -823,9 +900,15 @@ auto const jiz_interp = Interp(Jiz, Jz_stag, nodal, coarsen, i, j, k, 0); // interpolate the B field to a nodal grid - auto const Bx_interp = Interp(Bx, Bx_stag, nodal, coarsen, i, j, k, 0); - auto const By_interp = Interp(By, By_stag, nodal, coarsen, i, j, k, 0); - auto const Bz_interp = Interp(Bz, Bz_stag, nodal, coarsen, i, j, k, 0); + auto Bx_interp = Interp(Bx, Bx_stag, nodal, coarsen, i, j, k, 0); + auto By_interp = Interp(By, By_stag, nodal, coarsen, i, j, k, 0); + auto Bz_interp = Interp(Bz, Bz_stag, nodal, coarsen, i, j, k, 0); + + if (include_external_fields) { + Bx_interp += Interp(Bx_ext, Bx_stag, nodal, coarsen, i, j, k, 0); + By_interp += Interp(By_ext, By_stag, nodal, coarsen, i, j, k, 0); + Bz_interp += Interp(Bz_ext, Bz_stag, nodal, coarsen, i, j, k, 0); + } // calculate enE = (J - Ji) x B enE_nodal(i, j, k, 0) = ( @@ -882,6 +965,13 @@ update_Ez_arr = eb_update_E[2]->array(mfi); } + Array4<Real> Ex_ext, Ey_ext, Ez_ext; + if (include_external_fields) { + Ex_ext = Efield_external[0]->array(mfi); + Ey_ext = Efield_external[1]->array(mfi); + Ez_ext = Efield_external[2]->array(mfi); + } + // Extract stencil coefficients Real const * const AMREX_RESTRICT coefs_x = m_stencil_coefs_x.dataPtr(); auto const n_coefs_x = static_cast<int>(m_stencil_coefs_x.size()); @@ -904,7 +994,8 @@ if (update_Ex_arr && update_Ex_arr(i, j, k) == 0) { return; } // Interpolate to get the appropriate charge density in space - Real rho_val = Interp(rho, nodal, Ex_stag, coarsen, i, j, k, 0); + const Real rho_val = Interp(rho, nodal, Ex_stag, coarsen, i, j, k, 0); + Real rho_val_limited = rho_val; // Interpolate current to appropriate staggering to match E field Real jtot_val = 0._rt; @@ -916,7 +1007,7 @@ } // safety condition since we divide by rho_val later - if (rho_val < rho_floor) { rho_val = rho_floor; } + if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } // Get the gradient of the electron pressure if the longitudinal part of // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 @@ -926,7 +1017,11 @@ //
interpolate the nodal neE values to the Yee grid auto enE_x = Interp(enE, nodal, Ex_stag, coarsen, i, j, k, 0); - Ex(i, j, k) = (enE_x - grad_Pe) / rho_val; + if (rho_val < rho_floor && holmstrom_vacuum_region) { + Ex(i, j, k) = 0._rt; + } else { + Ex(i, j, k) = (enE_x - grad_Pe) / rho_val_limited; + } // Add resistivity only if E field value is used to update B if (solve_for_Faraday) { Ex(i, j, k) += eta(rho_val, jtot_val) * Jx(i, j, k); } @@ -937,6 +1032,10 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( + T_Algo::Dzz(Jx, coefs_z, n_coefs_z, i, j, k); Ex(i, j, k) -= eta_h * nabla2Jx; } + + if (include_external_fields && (rho_val >= rho_floor)) { + Ex(i, j, k) -= Ex_ext(i, j, k); + } }, // Ey calculation @@ -946,7 +1045,8 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( if (update_Ey_arr && update_Ey_arr(i, j, k) == 0) { return; } // Interpolate to get the appropriate charge density in space - Real rho_val = Interp(rho, nodal, Ey_stag, coarsen, i, j, k, 0); + const Real rho_val = Interp(rho, nodal, Ey_stag, coarsen, i, j, k, 0); + Real rho_val_limited = rho_val; // Interpolate current to appropriate staggering to match E field Real jtot_val = 0._rt; @@ -958,7 +1058,7 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( } // safety condition since we divide by rho_val later - if (rho_val < rho_floor) { rho_val = rho_floor; } + if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } // Get the gradient of the electron pressure if the longitudinal part of // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 @@ -968,7 +1068,11 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( // interpolate the nodal neE values to the Yee grid auto enE_y = Interp(enE, nodal, Ey_stag, coarsen, i, j, k, 1); - Ey(i, j, k) = (enE_y - grad_Pe) / rho_val; + if (rho_val < rho_floor && holmstrom_vacuum_region) { + Ey(i, j, k) = 0._rt; + } else { + Ey(i, j, k) = (enE_y - grad_Pe) / rho_val_limited; + } // Add resistivity only if E field value is used to update B if (solve_for_Faraday) { Ey(i, j, k) += eta(rho_val, jtot_val) * Jy(i, j, k); } @@ -979,6 +1083,10 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( + T_Algo::Dzz(Jy, coefs_z, n_coefs_z, i, j, k); Ey(i, j, k) -= eta_h * nabla2Jy; } + + if (include_external_fields && (rho_val >= rho_floor)) { + Ey(i, j, k) -= Ey_ext(i, j, k); + } }, // Ez calculation @@ -988,7 +1096,8 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( if (update_Ez_arr && update_Ez_arr(i, j, k) == 0) { return; } // Interpolate to get the appropriate charge density in space - Real rho_val = Interp(rho, nodal, Ez_stag, coarsen, i, j, k, 0); + const Real rho_val = Interp(rho, nodal, Ez_stag, coarsen, i, j, k, 0); + Real rho_val_limited = rho_val; // Interpolate current to appropriate staggering to match E field Real jtot_val = 0._rt; @@ -1000,7 +1109,7 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( } // safety condition since we divide by rho_val later - if (rho_val < rho_floor) { rho_val = rho_floor; } + if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } // Get the gradient of the electron pressure if the longitudinal part of // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 @@ -1010,7 +1119,11 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( // interpolate the nodal neE values to the Yee grid auto enE_z = Interp(enE, nodal, Ez_stag, coarsen, i, j, k, 2); - Ez(i, j, k) = (enE_z - grad_Pe) / rho_val; + if (rho_val < rho_floor && 
holmstrom_vacuum_region) { + Ez(i, j, k) = 0._rt; + } else { + Ez(i, j, k) = (enE_z - grad_Pe) / rho_val_limited; + } // Add resistivity only if E field value is used to update B if (solve_for_Faraday) { Ez(i, j, k) += eta(rho_val, jtot_val) * Jz(i, j, k); } @@ -1021,6 +1134,10 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( + T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, k); Ez(i, j, k) -= eta_h * nabla2Jz; } + + if (include_external_fields && (rho_val >= rho_floor)) { + Ez(i, j, k) -= Ez_ext(i, j, k); + } } ); diff --git a/Source/FieldSolver/FiniteDifferenceSolver/Make.package b/Source/FieldSolver/FiniteDifferenceSolver/Make.package index b3708c411fa..bc71b9b51a2 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/Make.package +++ b/Source/FieldSolver/FiniteDifferenceSolver/Make.package @@ -5,6 +5,7 @@ CEXE_sources += EvolveF.cpp CEXE_sources += EvolveG.cpp CEXE_sources += EvolveECTRho.cpp CEXE_sources += ComputeDivE.cpp +CEXE_sources += ComputeCurlA.cpp CEXE_sources += MacroscopicEvolveE.cpp CEXE_sources += EvolveBPML.cpp CEXE_sources += EvolveEPML.cpp diff --git a/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp b/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp index 18efba3f445..b57def5c4fe 100644 --- a/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp +++ b/Source/FieldSolver/WarpXPushFieldsHybridPIC.cpp @@ -1,8 +1,9 @@ -/* Copyright 2023 The WarpX Community +/* Copyright 2023-2024 The WarpX Community * * This file is part of WarpX. * * Authors: Roelof Groenewald (TAE Technologies) + * S. Eric Clark (Helion Energy) * * License: BSD-3-Clause-LBNL */ @@ -33,6 +34,31 @@ void WarpX::HybridPICEvolveFields () finest_level == 0, "Ohm's law E-solve only works with a single level."); + // Get requested number of substeps to use + const int sub_steps = m_hybrid_pic_model->m_substeps; + + // Get flag to include external fields. + const bool add_external_fields = m_hybrid_pic_model->m_add_external_fields; + + // Handle field splitting for Hybrid field push + if (add_external_fields) { + // Get the external fields + m_hybrid_pic_model->m_external_vector_potential->UpdateHybridExternalFields( + gett_old(0), + 0.5_rt*dt[0]); + + // If using split fields, subtract the external field at the old time + for (int lev = 0; lev <= finest_level; ++lev) { + for (int idim = 0; idim < 3; ++idim) { + MultiFab::Subtract( + *m_fields.get(FieldType::Bfield_fp, Direction{idim}, lev), + *m_fields.get(FieldType::hybrid_B_fp_external, Direction{idim}, lev), + 0, 0, 1, + m_fields.get(FieldType::Bfield_fp, Direction{idim}, lev)->nGrowVect()); + } + } + } + // The particles have now been pushed to their t_{n+1} positions. // Perform charge deposition in component 0 of rho_fp at t_{n+1}. 
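// [Editor's note, not part of the original patch: a sketch of the field-splitting bookkeeping used here. When add_external_fields is set, the MultiFab::Subtract calls above strip the external contribution out of Bfield_fp at t_n, so the Ohm's-law solve below only evolves the plasma-generated part of the field; the matching MultiFab::Add calls near the end of this function restore B_ext and E_ext at t_{n+1} before the total fields are used elsewhere.]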
mypc->DepositCharge(m_fields.get_mr_levels(FieldType::rho_fp, finest_level), 0._rt); @@ -64,9 +90,6 @@ void WarpX::HybridPICEvolveFields () } } - // Get requested number of substeps to use - const int sub_steps = m_hybrid_pic_model->m_substeps; - // Get the external current m_hybrid_pic_model->GetCurrentExternal(); @@ -127,6 +150,13 @@ void WarpX::HybridPICEvolveFields () ); } + if (add_external_fields) { + // Get the external fields at E^{n+1/2} + m_hybrid_pic_model->m_external_vector_potential->UpdateHybridExternalFields( + gett_old(0) + 0.5_rt*dt[0], + 0.5_rt*dt[0]); + } + // Now push the B field from t=n+1/2 to t=n+1 using the n+1/2 quantities for (int sub_step = 0; sub_step < sub_steps; sub_step++) { @@ -160,6 +190,12 @@ void WarpX::HybridPICEvolveFields () } } + if (add_external_fields) { + m_hybrid_pic_model->m_external_vector_potential->UpdateHybridExternalFields( + gett_new(0), + 0.5_rt*dt[0]); + } + // Calculate the electron pressure at t=n+1 m_hybrid_pic_model->CalculateElectronPressure(); @@ -175,6 +211,25 @@ void WarpX::HybridPICEvolveFields () m_eb_update_E, false); FillBoundaryE(guard_cells.ng_FieldSolver, WarpX::sync_nodal_points); + // Handle field splitting for Hybrid field push + if (add_external_fields) { + // If using split fields, add the external field at the new time + for (int lev = 0; lev <= finest_level; ++lev) { + for (int idim = 0; idim < 3; ++idim) { + MultiFab::Add( + *m_fields.get(FieldType::Bfield_fp, Direction{idim}, lev), + *m_fields.get(FieldType::hybrid_B_fp_external, Direction{idim}, lev), + 0, 0, 1, + m_fields.get(FieldType::Bfield_fp, Direction{idim}, lev)->nGrowVect()); + MultiFab::Add( + *m_fields.get(FieldType::Efield_fp, Direction{idim}, lev), + *m_fields.get(FieldType::hybrid_E_fp_external, Direction{idim}, lev), + 0, 0, 1, + m_fields.get(FieldType::Efield_fp, Direction{idim}, lev)->nGrowVect()); + } + } + } + // Copy the rho^{n+1} values to rho_fp_temp and the J_i^{n+1/2} values to // current_fp_temp since at the next step those values will be needed as // rho^{n} and J_i^{n-1/2}. @@ -232,3 +287,15 @@ void WarpX::HybridPICDepositInitialRhoAndJ () ); } } + +void +WarpX::CalculateExternalCurlA() { + WARPX_PROFILE("WarpX::CalculateExternalCurlA()"); + + auto & warpx = WarpX::GetInstance(); + + // Get reference to External Field Object + auto* ext_vector = warpx.m_hybrid_pic_model->m_external_vector_potential.get(); + ext_vector->CalculateExternalCurlA(); + +} diff --git a/Source/Fields.H b/Source/Fields.H index 77589c4675e..271d5a835a3 100644 --- a/Source/Fields.H +++ b/Source/Fields.H @@ -50,6 +50,8 @@ namespace warpx::fields hybrid_current_fp_temp, /**< Used with Ohm's law solver. Stores the time interpolated/extrapolated current density */ hybrid_current_fp_plasma, /**< Used with Ohm's law solver. Stores plasma current calculated as J_plasma = curl x B / mu0 - J_ext */ hybrid_current_fp_external, /**< Used with Ohm's law solver. Stores external current */ + hybrid_B_fp_external, /**< Used with Ohm's law solver. Stores external B field */ + hybrid_E_fp_external, /**< Used with Ohm's law solver. Stores external E field */ Efield_cp, /**< Only used with MR. The field that is updated by the field solver at each timestep, on the coarse patch of each level */ Bfield_cp, /**< Only used with MR. The field that is updated by the field solver at each timestep, on the coarse patch of each level */ current_cp, /**< Only used with MR. 
The current that is used as a source for the field solver, on the coarse patch of each level */ @@ -102,6 +104,8 @@ namespace warpx::fields FieldType::hybrid_current_fp_temp, FieldType::hybrid_current_fp_plasma, FieldType::hybrid_current_fp_external, + FieldType::hybrid_B_fp_external, + FieldType::hybrid_E_fp_external, FieldType::Efield_cp, FieldType::Bfield_cp, FieldType::current_cp, diff --git a/Source/Initialization/WarpXInitData.cpp b/Source/Initialization/WarpXInitData.cpp index 9c2784fe867..90b8d613898 100644 --- a/Source/Initialization/WarpXInitData.cpp +++ b/Source/Initialization/WarpXInitData.cpp @@ -1048,20 +1048,25 @@ WarpX::InitLevelData (int lev, Real /*time*/) } } -void WarpX::ComputeExternalFieldOnGridUsingParser ( - warpx::fields::FieldType field, +template <typename T> +void ComputeExternalFieldOnGridUsingParser_template ( + T field, amrex::ParserExecutor<4> const& fx_parser, amrex::ParserExecutor<4> const& fy_parser, amrex::ParserExecutor<4> const& fz_parser, int lev, PatchType patch_type, - amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field) + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field, + bool use_eb_flags) { - auto t = gett_new(lev); + auto &warpx = WarpX::GetInstance(); + auto const &geom = warpx.Geom(lev); - auto dx_lev = geom[lev].CellSizeArray(); - const RealBox& real_box = geom[lev].ProbDomain(); + auto t = warpx.gett_new(lev); - amrex::IntVect refratio = (lev > 0 ) ? RefRatio(lev-1) : amrex::IntVect(1); + auto dx_lev = geom.CellSizeArray(); + const RealBox& real_box = geom.ProbDomain(); + + amrex::IntVect refratio = (lev > 0 ) ? WarpX::RefRatio(lev-1) : amrex::IntVect(1); if (patch_type == PatchType::coarse) { for (int idim = 0; idim < AMREX_SPACEDIM; ++idim) { dx_lev[idim] = dx_lev[idim] * refratio[idim]; @@ -1069,9 +1074,9 @@ void WarpX::ComputeExternalFieldOnGridUsingParser ( } using ablastr::fields::Direction; - amrex::MultiFab* mfx = m_fields.get(field, Direction{0}, lev); - amrex::MultiFab* mfy = m_fields.get(field, Direction{1}, lev); - amrex::MultiFab* mfz = m_fields.get(field, Direction{2}, lev); + amrex::MultiFab* mfx = warpx.m_fields.get(field, Direction{0}, lev); + amrex::MultiFab* mfy = warpx.m_fields.get(field, Direction{1}, lev); + amrex::MultiFab* mfz = warpx.m_fields.get(field, Direction{2}, lev); const amrex::IntVect x_nodal_flag = mfx->ixType().toIntVect(); const amrex::IntVect y_nodal_flag = mfy->ixType().toIntVect(); @@ -1087,7 +1092,7 @@ auto const& mfzfab = mfz->array(mfi); amrex::Array4<int> update_fx_arr, update_fy_arr, update_fz_arr; - if (EB::enabled()) { + if (use_eb_flags && EB::enabled()) { update_fx_arr = eb_update_field[lev][0]->array(mfi); update_fy_arr = eb_update_field[lev][1]->array(mfi); update_fz_arr = eb_update_field[lev][2]->array(mfi); @@ -1181,6 +1186,68 @@ } } +void WarpX::ComputeExternalFieldOnGridUsingParser ( + warpx::fields::FieldType field, + amrex::ParserExecutor<4> const& fx_parser, + amrex::ParserExecutor<4> const& fy_parser, + amrex::ParserExecutor<4> const& fz_parser, + int lev, PatchType patch_type, + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field, + bool use_eb_flags) +{ + ComputeExternalFieldOnGridUsingParser_template ( + field, + fx_parser, fy_parser, fz_parser, + lev, patch_type, eb_update_field, + use_eb_flags); +} + +void WarpX::ComputeExternalFieldOnGridUsingParser ( + std::string const& field, + amrex::ParserExecutor<4> const& fx_parser, + amrex::ParserExecutor<4> const& fy_parser, + amrex::ParserExecutor<4> const& fz_parser, +
int lev, PatchType patch_type, + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field, + bool use_eb_flags) +{ + ComputeExternalFieldOnGridUsingParser_template ( + field, + fx_parser, fy_parser, fz_parser, + lev, patch_type, eb_update_field, + use_eb_flags); +} + +void WarpX::ComputeExternalFieldOnGridUsingParser ( + warpx::fields::FieldType field, + amrex::ParserExecutor<4> const& fx_parser, + amrex::ParserExecutor<4> const& fy_parser, + amrex::ParserExecutor<4> const& fz_parser, + int lev, PatchType patch_type, + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field) +{ + ComputeExternalFieldOnGridUsingParser_template ( + field, + fx_parser, fy_parser, fz_parser, + lev, patch_type, eb_update_field, + true); +} + +void WarpX::ComputeExternalFieldOnGridUsingParser ( + std::string const& field, + amrex::ParserExecutor<4> const& fx_parser, + amrex::ParserExecutor<4> const& fy_parser, + amrex::ParserExecutor<4> const& fz_parser, + int lev, PatchType patch_type, + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field) +{ + ComputeExternalFieldOnGridUsingParser_template ( + field, + fx_parser, fy_parser, fz_parser, + lev, patch_type, eb_update_field, + true); +} + void WarpX::CheckGuardCells() { for (int lev = 0; lev <= max_level; ++lev) diff --git a/Source/Particles/Gather/GetExternalFields.H b/Source/Particles/Gather/GetExternalFields.H index 7000d6d7c26..90a61bd25db 100644 --- a/Source/Particles/Gather/GetExternalFields.H +++ b/Source/Particles/Gather/GetExternalFields.H @@ -112,9 +112,9 @@ struct GetExternalEBField lab_time = m_gamma_boost*m_time + m_uz_boost*z*inv_c2; z = m_gamma_boost*z + m_uz_boost*m_time; } - Bx = m_Bxfield_partparser(x, y, z, lab_time); - By = m_Byfield_partparser(x, y, z, lab_time); - Bz = m_Bzfield_partparser(x, y, z, lab_time); + Bx = m_Bxfield_partparser((amrex::ParticleReal) x, (amrex::ParticleReal) y, (amrex::ParticleReal) z, lab_time); + By = m_Byfield_partparser((amrex::ParticleReal) x, (amrex::ParticleReal) y, (amrex::ParticleReal) z, lab_time); + Bz = m_Bzfield_partparser((amrex::ParticleReal) x, (amrex::ParticleReal) y, (amrex::ParticleReal) z, lab_time); } if (m_Etype == RepeatedPlasmaLens || diff --git a/Source/Python/WarpX.cpp b/Source/Python/WarpX.cpp index 870a3a87c91..5b4b07af07b 100644 --- a/Source/Python/WarpX.cpp +++ b/Source/Python/WarpX.cpp @@ -270,6 +270,10 @@ The physical fields in WarpX have the following naming: [] (WarpX& wx) { wx.ProjectionCleanDivB(); }, "Executes projection based divergence cleaner on loaded Bfield_fp_external." ) + .def_static("calculate_hybrid_external_curlA", + [] (WarpX& wx) { wx.CalculateExternalCurlA(); }, + "Executes calculation of the curl of the external A in the hybrid solver." + ) .def("synchronize", [] (WarpX& wx) { wx.Synchronize(); }, "Synchronize particle velocities and positions."
diff --git a/Source/WarpX.H b/Source/WarpX.H index ddfd545db74..29439002a3a 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -164,6 +164,7 @@ public: MultiDiagnostics& GetMultiDiags () {return *multi_diags;} ParticleBoundaryBuffer& GetParticleBoundaryBuffer () { return *m_particle_boundary_buffer; } amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > >& GetEBUpdateEFlag() { return m_eb_update_E; } + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > >& GetEBUpdateBFlag() { return m_eb_update_B; } amrex::Vector< std::unique_ptr<amrex::iMultiFab> > const & GetEBReduceParticleShapeFlag() const { return m_eb_reduce_particle_shape; } /** @@ -831,6 +832,7 @@ public: void ComputeDivE(amrex::MultiFab& divE, int lev); void ProjectionCleanDivB (); + void CalculateExternalCurlA (); [[nodiscard]] amrex::IntVect getngEB() const { return guard_cells.ng_alloc_EB; } [[nodiscard]] amrex::IntVect getngF() const { return guard_cells.ng_alloc_F; } @@ -875,14 +877,24 @@ public: * on the staggered yee-grid or cell-centered grid, in the interior cells * and guard cells. * - * \param[in] field FieldType + * \param[in] field FieldType to grab from register to write into * \param[in] fx_parser parser function to initialize x-field * \param[in] fy_parser parser function to initialize y-field * \param[in] fz_parser parser function to initialize z-field * \param[in] lev level of the Multifabs that is initialized * \param[in] patch_type PatchType on which the field is initialized (fine or coarse) * \param[in] eb_update_field flag indicating which gridpoints should be modified by this functions + * \param[in] use_eb_flags (default:true) flag indicating if eb points should be excluded or not */ + void ComputeExternalFieldOnGridUsingParser ( + warpx::fields::FieldType field, + amrex::ParserExecutor<4> const& fx_parser, + amrex::ParserExecutor<4> const& fy_parser, + amrex::ParserExecutor<4> const& fz_parser, + int lev, PatchType patch_type, + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field, + bool use_eb_flags); + void ComputeExternalFieldOnGridUsingParser ( warpx::fields::FieldType field, amrex::ParserExecutor<4> const& fx_parser, @@ -891,6 +903,44 @@ public: int lev, PatchType patch_type, amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3 > > const& eb_update_field); + /** + * \brief + * This function computes the E, B, and J fields on each level + * using the parser and the user-defined function for the external fields. + * The subroutine will parse the x_/y_z_external_grid_function and + * then, the field multifab is initialized based on the (x,y,z) position + * on the staggered yee-grid or cell-centered grid, in the interior cells + * and guard cells.
+ * + * \param[in] field string containing field name to grab from register + * \param[in] fx_parser parser function to initialize x-field + * \param[in] fy_parser parser function to initialize y-field + * \param[in] fz_parser parser function to initialize z-field + * \param[in] edge_lengths edge lengths information + * \param[in] face_areas face areas information + * \param[in] topology flag indicating if field is edge-based or face-based + * \param[in] lev level of the Multifabs that is initialized + * \param[in] patch_type PatchType on which the field is initialized (fine or coarse) + * \param[in] eb_update_field flag indicating which gridpoints should be modified by this functions + * \param[in] use_eb_flags (default:true) flag indicating if eb points should be excluded or not + */ + void ComputeExternalFieldOnGridUsingParser ( + std::string const& field, + amrex::ParserExecutor<4> const& fx_parser, + amrex::ParserExecutor<4> const& fy_parser, + amrex::ParserExecutor<4> const& fz_parser, + int lev, PatchType patch_type, + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3> > const& eb_update_field, + bool use_eb_flags); + + void ComputeExternalFieldOnGridUsingParser ( + std::string const& field, + amrex::ParserExecutor<4> const& fx_parser, + amrex::ParserExecutor<4> const& fy_parser, + amrex::ParserExecutor<4> const& fz_parser, + int lev, PatchType patch_type, + amrex::Vector< std::array< std::unique_ptr<amrex::iMultiFab>,3> > const& eb_update_field); + /** * \brief Load field values from a user-specified openPMD file, * for the fields Ex, Ey, Ez, Bx, By, Bz diff --git a/Source/WarpX.cpp b/Source/WarpX.cpp index 4a0633369ce..c9e90850ee1 100644 --- a/Source/WarpX.cpp +++ b/Source/WarpX.cpp @@ -743,12 +743,22 @@ WarpX::ReadParameters () use_kspace_filter = use_filter; use_filter = false; } - else // FDTD + else { - // Filter currently not working with FDTD solver in RZ geometry along R - // (see https://github.com/ECP-WarpX/WarpX/issues/1943) - WARPX_ALWAYS_ASSERT_WITH_MESSAGE(!use_filter || filter_npass_each_dir[0] == 0, - "In RZ geometry with FDTD, filtering can only be apply along z. This can be controlled by setting warpx.filter_npass_each_dir"); + if (WarpX::electromagnetic_solver_id != ElectromagneticSolverAlgo::HybridPIC) { + // Filter currently not working with FDTD solver in RZ geometry along R + // (see https://github.com/ECP-WarpX/WarpX/issues/1943) + WARPX_ALWAYS_ASSERT_WITH_MESSAGE(!use_filter || filter_npass_each_dir[0] == 0, + "In RZ geometry with FDTD, filtering can only be applied along z. This can be controlled by setting warpx.filter_npass_each_dir"); + } else { + if (use_filter && filter_npass_each_dir[0] > 0) { + ablastr::warn_manager::WMRecordWarning( + "HybridPIC ElectromagneticSolver", + "Radial Filtering in RZ is not currently using radial geometric weighting to conserve charge.
Use at your own risk.", + ablastr::warn_manager::WarnPriority::low + ); + } + } } #endif @@ -2257,8 +2267,9 @@ WarpX::AllocLevelMFs (int lev, const BoxArray& ba, const DistributionMapping& dm { m_hybrid_pic_model->AllocateLevelMFs( m_fields, - lev, ba, dm, ncomps, ngJ, ngRho, jx_nodal_flag, jy_nodal_flag, - jz_nodal_flag, rho_nodal_flag + lev, ba, dm, ncomps, ngJ, ngRho, ngEB, jx_nodal_flag, jy_nodal_flag, + jz_nodal_flag, rho_nodal_flag, Ex_nodal_flag, Ey_nodal_flag, Ez_nodal_flag, + Bx_nodal_flag, By_nodal_flag, Bz_nodal_flag ); } From 072341c6b02c833e43e12c096bfeba462afd1fbf Mon Sep 17 00:00:00 2001 From: David Grote Date: Tue, 18 Feb 2025 05:44:46 -0800 Subject: [PATCH 47/58] Add PECInsulator to Curl-Curl BC (#5667) This is a temporary fix, setting a boundary condition for the Curl-Curl preconditioner for the implicit solver. This now sets the BC to Dirichlet for the PEC regions. A correct solution would have to be implemented in the curl-curl solver because of the split between the PEC and insulator sections. --- Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp b/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp index ab064772922..0a934693710 100644 --- a/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp +++ b/Source/FieldSolver/ImplicitSolvers/ImplicitSolver.cpp @@ -68,7 +68,12 @@ Array ImplicitSolver::convertFieldBCToLinOpBC (const WARPX_ABORT_WITH_MESSAGE("LinOpBCType not set for this FieldBoundaryType"); } else if (a_fbc[i] == FieldBoundaryType::Neumann) { // Also for FieldBoundaryType::PMC - lbc[i] = LinOpBCType::Neumann; + lbc[i] = LinOpBCType::symmetry; + } else if (a_fbc[i] == FieldBoundaryType::PECInsulator) { + ablastr::warn_manager::WMRecordWarning("Implicit solver", + "With PECInsulator, in the Curl-Curl preconditioner Neumann boundary will be used since the full boundary is not yet implemented.", + ablastr::warn_manager::WarnPriority::medium); + lbc[i] = LinOpBCType::symmetry; } else if (a_fbc[i] == FieldBoundaryType::None) { WARPX_ABORT_WITH_MESSAGE("LinOpBCType not set for this FieldBoundaryType"); } else if (a_fbc[i] == FieldBoundaryType::Open) { From 0659286045241691b03bf58d9832aadd6fc73d8d Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Tue, 18 Feb 2025 14:06:51 -0800 Subject: [PATCH 48/58] Perlmutter: SW Install Updates (#5648) - [x] profile: avoid repetition, use `SW_DIR` variable as in `install_...` scripts - [x] move from CFS to PSCRATCH (more stable, faster, where the binary lives); uses an undocumented, purge-exempt location for container images/software - [x] build our own boost (SW stack consistency) - [x] get our own CCache (prior one is gone) - [x] RT tested --- Docs/source/install/hpc/perlmutter.rst | 4 +-- .../install_cpu_dependencies.sh | 30 +++++++++++++++---- .../install_gpu_dependencies.sh | 30 +++++++++++++++---- .../perlmutter_cpu_warpx.profile.example | 30 +++++++++++-------- .../perlmutter_gpu_warpx.profile.example | 30 +++++++++++-------- 5 files changed, 86 insertions(+), 38 deletions(-) diff --git a/Docs/source/install/hpc/perlmutter.rst b/Docs/source/install/hpc/perlmutter.rst index 9612b64476d..7e2ae31630e 100644 --- a/Docs/source/install/hpc/perlmutter.rst +++ b/Docs/source/install/hpc/perlmutter.rst @@ -76,7 +76,7 @@ On Perlmutter, you can run either on GPU nodes with fast A100 GPUs (recommended) .. 
code-block:: bash bash $HOME/src/warpx/Tools/machines/perlmutter-nersc/install_gpu_dependencies.sh - source ${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/venvs/warpx-gpu/bin/activate + source ${PSCRATCH}/storage/sw/warpx/perlmutter/gpu/venvs/warpx-gpu/bin/activate .. dropdown:: Script Details :color: light @@ -126,7 +126,7 @@ On Perlmutter, you can run either on GPU nodes with fast A100 GPUs (recommended) .. code-block:: bash bash $HOME/src/warpx/Tools/machines/perlmutter-nersc/install_cpu_dependencies.sh - source ${CFS}/${proj}/${USER}/sw/perlmutter/cpu/venvs/warpx-cpu/bin/activate + source ${PSCRATCH}/storage/sw/warpx/perlmutter/cpu/venvs/warpx-cpu/bin/activate .. dropdown:: Script Details :color: light diff --git a/Tools/machines/perlmutter-nersc/install_cpu_dependencies.sh b/Tools/machines/perlmutter-nersc/install_cpu_dependencies.sh index 7608cb3f666..0ef14844493 100755 --- a/Tools/machines/perlmutter-nersc/install_cpu_dependencies.sh +++ b/Tools/machines/perlmutter-nersc/install_cpu_dependencies.sh @@ -31,7 +31,7 @@ fi # Remove old dependencies ##################################################### # -SW_DIR="${CFS}/${proj}/${USER}/sw/perlmutter/cpu" +SW_DIR="${PSCRATCH}/storage/sw/warpx/perlmutter/cpu" rm -rf ${SW_DIR} mkdir -p ${SW_DIR} @@ -44,9 +44,29 @@ python3 -m pip uninstall -qqq -y mpi4py 2>/dev/null || true # General extra dependencies ################################################## # +# build parallelism +PARALLEL=16 + # tmpfs build directory: avoids issues often seen with $HOME and is faster build_dir=$(mktemp -d) +# CCache +curl -Lo ccache.tar.xz https://github.com/ccache/ccache/releases/download/v4.10.2/ccache-4.10.2-linux-x86_64.tar.xz +tar -xf ccache.tar.xz +mv ccache-4.10.2-linux-x86_64 ${SW_DIR}/ccache-4.10.2 +rm -rf ccache.tar.xz + +# Boost (QED tables) +rm -rf $HOME/src/boost-temp +mkdir -p $HOME/src/boost-temp +curl -Lo $HOME/src/boost-temp/boost.tar.gz https://archives.boost.io/release/1.82.0/source/boost_1_82_0.tar.gz +tar -xzf $HOME/src/boost-temp/boost.tar.gz -C $HOME/src/boost-temp +cd $HOME/src/boost-temp/boost_1_82_0 +./bootstrap.sh --with-libraries=math --prefix=${SW_DIR}/boost-1.82.0 +./b2 cxxflags="-std=c++17" install -j ${PARALLEL} +cd - +rm -rf $HOME/src/boost-temp + # c-blosc (I/O compression) if [ -d $HOME/src/c-blosc ] then @@ -59,7 +79,7 @@ else fi rm -rf $HOME/src/c-blosc-pm-cpu-build cmake -S $HOME/src/c-blosc -B ${build_dir}/c-blosc-pm-cpu-build -DBUILD_TESTS=OFF -DBUILD_BENCHMARKS=OFF -DDEACTIVATE_AVX2=OFF -DCMAKE_INSTALL_PREFIX=${SW_DIR}/c-blosc-1.21.1 -cmake --build ${build_dir}/c-blosc-pm-cpu-build --target install --parallel 16 +cmake --build ${build_dir}/c-blosc-pm-cpu-build --target install --parallel ${PARALLEL} rm -rf ${build_dir}/c-blosc-pm-cpu-build # ADIOS2 @@ -74,7 +94,7 @@ else fi rm -rf $HOME/src/adios2-pm-cpu-build cmake -S $HOME/src/adios2 -B ${build_dir}/adios2-pm-cpu-build -DADIOS2_USE_Blosc=ON -DADIOS2_USE_CUDA=OFF -DADIOS2_USE_Fortran=OFF -DADIOS2_USE_Python=OFF -DADIOS2_USE_ZeroMQ=OFF -DCMAKE_INSTALL_PREFIX=${SW_DIR}/adios2-2.8.3 -cmake --build ${build_dir}/adios2-pm-cpu-build --target install -j 16 +cmake --build ${build_dir}/adios2-pm-cpu-build --target install -j ${PARALLEL} rm -rf ${build_dir}/adios2-pm-cpu-build # BLAS++ (for PSATD+RZ) @@ -89,7 +109,7 @@ else fi rm -rf $HOME/src/blaspp-pm-cpu-build CXX=$(which CC) cmake -S $HOME/src/blaspp -B ${build_dir}/blaspp-pm-cpu-build -Duse_openmp=ON -Dgpu_backend=OFF -DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=${SW_DIR}/blaspp-2024.05.31 -cmake --build 
${build_dir}/blaspp-pm-cpu-build --target install --parallel 16 +cmake --build ${build_dir}/blaspp-pm-cpu-build --target install --parallel ${PARALLEL} rm -rf ${build_dir}/blaspp-pm-cpu-build # LAPACK++ (for PSATD+RZ) @@ -104,7 +124,7 @@ else fi rm -rf $HOME/src/lapackpp-pm-cpu-build CXX=$(which CC) CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S $HOME/src/lapackpp -B ${build_dir}/lapackpp-pm-cpu-build -DCMAKE_CXX_STANDARD=17 -Dbuild_tests=OFF -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON -DCMAKE_INSTALL_PREFIX=${SW_DIR}/lapackpp-2024.05.31 -cmake --build ${build_dir}/lapackpp-pm-cpu-build --target install --parallel 16 +cmake --build ${build_dir}/lapackpp-pm-cpu-build --target install --parallel ${PARALLEL} rm -rf ${build_dir}/lapackpp-pm-cpu-build # Python ###################################################################### diff --git a/Tools/machines/perlmutter-nersc/install_gpu_dependencies.sh b/Tools/machines/perlmutter-nersc/install_gpu_dependencies.sh index d08ca7457d4..ffa3d0f0714 100755 --- a/Tools/machines/perlmutter-nersc/install_gpu_dependencies.sh +++ b/Tools/machines/perlmutter-nersc/install_gpu_dependencies.sh @@ -31,7 +31,7 @@ fi # Remove old dependencies ##################################################### # -SW_DIR="${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu" +SW_DIR="${PSCRATCH}/storage/sw/warpx/perlmutter/gpu" rm -rf ${SW_DIR} mkdir -p ${SW_DIR} @@ -44,9 +44,29 @@ python3 -m pip uninstall -qqq -y mpi4py 2>/dev/null || true # General extra dependencies ################################################## # +# build parallelism +PARALLEL=16 + # tmpfs build directory: avoids issues often seen with $HOME and is faster build_dir=$(mktemp -d) +# CCache +curl -Lo ccache.tar.xz https://github.com/ccache/ccache/releases/download/v4.10.2/ccache-4.10.2-linux-x86_64.tar.xz +tar -xf ccache.tar.xz +mv ccache-4.10.2-linux-x86_64 ${SW_DIR}/ccache-4.10.2 +rm -rf ccache.tar.xz + +# Boost (QED tables) +rm -rf $HOME/src/boost-temp +mkdir -p $HOME/src/boost-temp +curl -Lo $HOME/src/boost-temp/boost.tar.gz https://archives.boost.io/release/1.82.0/source/boost_1_82_0.tar.gz +tar -xzf $HOME/src/boost-temp/boost.tar.gz -C $HOME/src/boost-temp +cd $HOME/src/boost-temp/boost_1_82_0 +./bootstrap.sh --with-libraries=math --prefix=${SW_DIR}/boost-1.82.0 +./b2 cxxflags="-std=c++17" install -j ${PARALLEL} +cd - +rm -rf $HOME/src/boost-temp + # c-blosc (I/O compression) if [ -d $HOME/src/c-blosc ] then @@ -59,7 +79,7 @@ else fi rm -rf $HOME/src/c-blosc-pm-gpu-build cmake -S $HOME/src/c-blosc -B ${build_dir}/c-blosc-pm-gpu-build -DBUILD_TESTS=OFF -DBUILD_BENCHMARKS=OFF -DDEACTIVATE_AVX2=OFF -DCMAKE_INSTALL_PREFIX=${SW_DIR}/c-blosc-1.21.1 -cmake --build ${build_dir}/c-blosc-pm-gpu-build --target install --parallel 16 +cmake --build ${build_dir}/c-blosc-pm-gpu-build --target install --parallel ${PARALLEL} rm -rf ${build_dir}/c-blosc-pm-gpu-build # ADIOS2 @@ -74,7 +94,7 @@ else fi rm -rf $HOME/src/adios2-pm-gpu-build cmake -S $HOME/src/adios2 -B ${build_dir}/adios2-pm-gpu-build -DADIOS2_USE_Blosc=ON -DADIOS2_USE_Fortran=OFF -DADIOS2_USE_Python=OFF -DADIOS2_USE_ZeroMQ=OFF -DCMAKE_INSTALL_PREFIX=${SW_DIR}/adios2-2.8.3 -cmake --build ${build_dir}/adios2-pm-gpu-build --target install -j 16 +cmake --build ${build_dir}/adios2-pm-gpu-build --target install -j ${PARALLEL} rm -rf ${build_dir}/adios2-pm-gpu-build # BLAS++ (for PSATD+RZ) @@ -89,7 +109,7 @@ else fi rm -rf $HOME/src/blaspp-pm-gpu-build CXX=$(which CC) cmake -S $HOME/src/blaspp -B ${build_dir}/blaspp-pm-gpu-build -Duse_openmp=OFF -Dgpu_backend=cuda 
-DCMAKE_CXX_STANDARD=17 -DCMAKE_INSTALL_PREFIX=${SW_DIR}/blaspp-2024.05.31 -cmake --build ${build_dir}/blaspp-pm-gpu-build --target install --parallel 16 +cmake --build ${build_dir}/blaspp-pm-gpu-build --target install --parallel ${PARALLEL} rm -rf ${build_dir}/blaspp-pm-gpu-build # LAPACK++ (for PSATD+RZ) @@ -104,7 +124,7 @@ else fi rm -rf $HOME/src/lapackpp-pm-gpu-build CXX=$(which CC) CXXFLAGS="-DLAPACK_FORTRAN_ADD_" cmake -S $HOME/src/lapackpp -B ${build_dir}/lapackpp-pm-gpu-build -DCMAKE_CXX_STANDARD=17 -Dbuild_tests=OFF -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON -DCMAKE_INSTALL_PREFIX=${SW_DIR}/lapackpp-2024.05.31 -cmake --build ${build_dir}/lapackpp-pm-gpu-build --target install --parallel 16 +cmake --build ${build_dir}/lapackpp-pm-gpu-build --target install --parallel ${PARALLEL} rm -rf ${build_dir}/lapackpp-pm-gpu-build # Python ###################################################################### diff --git a/Tools/machines/perlmutter-nersc/perlmutter_cpu_warpx.profile.example b/Tools/machines/perlmutter-nersc/perlmutter_cpu_warpx.profile.example index a7493ecd4bc..fe665e87130 100644 --- a/Tools/machines/perlmutter-nersc/perlmutter_cpu_warpx.profile.example +++ b/Tools/machines/perlmutter-nersc/perlmutter_cpu_warpx.profile.example @@ -10,32 +10,36 @@ module load cpu module load cmake/3.30.2 module load cray-fftw/3.3.10.6 +# missing modules installed here +export SW_DIR=${PSCRATCH}/storage/sw/warpx/perlmutter/cpu + # optional: for QED support with detailed tables -export BOOST_ROOT=/global/common/software/spackecp/perlmutter/e4s-23.08/default/spack/opt/spack/linux-sles15-zen3/gcc-12.3.0/boost-1.83.0-nxqk3hnci5g3wqv75wvsmuke3w74mzxi +export CMAKE_PREFIX_PATH=${SW_DIR}/boost-1.82.0:${CMAKE_PREFIX_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/boost-1.82.0/lib:${LD_LIBRARY_PATH} # optional: for openPMD and PSATD+RZ support module load cray-hdf5-parallel/1.12.2.9 -export CMAKE_PREFIX_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/c-blosc-1.21.1:$CMAKE_PREFIX_PATH -export CMAKE_PREFIX_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/adios2-2.8.3:$CMAKE_PREFIX_PATH -export CMAKE_PREFIX_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/blaspp-2024.05.31:$CMAKE_PREFIX_PATH -export CMAKE_PREFIX_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/lapackpp-2024.05.31:$CMAKE_PREFIX_PATH +export CMAKE_PREFIX_PATH=${SW_DIR}/c-blosc-1.21.1:${CMAKE_PREFIX_PATH} +export CMAKE_PREFIX_PATH=${SW_DIR}/adios2-2.8.3:${CMAKE_PREFIX_PATH} +export CMAKE_PREFIX_PATH=${SW_DIR}/blaspp-2024.05.31:${CMAKE_PREFIX_PATH} +export CMAKE_PREFIX_PATH=${SW_DIR}/lapackpp-2024.05.31:${CMAKE_PREFIX_PATH} -export LD_LIBRARY_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/c-blosc-1.21.1/lib64:$LD_LIBRARY_PATH -export LD_LIBRARY_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/adios2-2.8.3/lib64:$LD_LIBRARY_PATH -export LD_LIBRARY_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/blaspp-2024.05.31/lib64:$LD_LIBRARY_PATH -export LD_LIBRARY_PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/lapackpp-2024.05.31/lib64:$LD_LIBRARY_PATH +export LD_LIBRARY_PATH=${SW_DIR}/c-blosc-1.21.1/lib64:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/adios2-2.8.3/lib64:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/blaspp-2024.05.31/lib64:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/lapackpp-2024.05.31/lib64:${LD_LIBRARY_PATH} -export PATH=${CFS}/${proj}/${USER}/sw/perlmutter/cpu/adios2-2.8.3/bin:${PATH} +export PATH=${SW_DIR}/adios2-2.8.3/bin:${PATH} # optional: CCache -export 
PATH=/global/common/software/spackecp/perlmutter/e4s-23.08/default/spack/opt/spack/linux-sles15-zen3/gcc-11.2.0/ccache-4.8.2-cvooxdw5wgvv2g3vjxjkrpv6dopginv6/bin:$PATH +export PATH=${SW_DIR}/ccache-4.10.2:$PATH # optional: for Python bindings or libEnsemble module load cray-python/3.11.5 -if [ -d "${CFS}/${proj}/${USER}/sw/perlmutter/cpu/venvs/warpx-cpu" ] +if [ -d "${SW_DIR}/venvs/warpx-cpu" ] then - source ${CFS}/${proj}/${USER}/sw/perlmutter/cpu/venvs/warpx-cpu/bin/activate + source ${SW_DIR}/venvs/warpx-cpu/bin/activate fi # an alias to request an interactive batch node for one hour diff --git a/Tools/machines/perlmutter-nersc/perlmutter_gpu_warpx.profile.example b/Tools/machines/perlmutter-nersc/perlmutter_gpu_warpx.profile.example index 5d413db71e1..dd78bc8ecf3 100644 --- a/Tools/machines/perlmutter-nersc/perlmutter_gpu_warpx.profile.example +++ b/Tools/machines/perlmutter-nersc/perlmutter_gpu_warpx.profile.example @@ -14,32 +14,36 @@ module load craype-accel-nvidia80 module load cudatoolkit module load cmake/3.30.2 +# missing modules installed here +export SW_DIR=${PSCRATCH}/storage/sw/warpx/perlmutter/gpu + # optional: for QED support with detailed tables -export BOOST_ROOT=/global/common/software/spackecp/perlmutter/e4s-23.08/default/spack/opt/spack/linux-sles15-zen3/gcc-12.3.0/boost-1.83.0-nxqk3hnci5g3wqv75wvsmuke3w74mzxi +export CMAKE_PREFIX_PATH=${SW_DIR}/boost-1.82.0:${CMAKE_PREFIX_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/boost-1.82.0/lib:${LD_LIBRARY_PATH} # optional: for openPMD and PSATD+RZ support module load cray-hdf5-parallel/1.12.2.9 -export CMAKE_PREFIX_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/c-blosc-1.21.1:$CMAKE_PREFIX_PATH -export CMAKE_PREFIX_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/adios2-2.8.3:$CMAKE_PREFIX_PATH -export CMAKE_PREFIX_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/blaspp-2024.05.31:$CMAKE_PREFIX_PATH -export CMAKE_PREFIX_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/lapackpp-2024.05.31:$CMAKE_PREFIX_PATH +export CMAKE_PREFIX_PATH=${SW_DIR}/c-blosc-1.21.1:${CMAKE_PREFIX_PATH} +export CMAKE_PREFIX_PATH=${SW_DIR}/adios2-2.8.3:${CMAKE_PREFIX_PATH} +export CMAKE_PREFIX_PATH=${SW_DIR}/blaspp-2024.05.31:${CMAKE_PREFIX_PATH} +export CMAKE_PREFIX_PATH=${SW_DIR}/lapackpp-2024.05.31:${CMAKE_PREFIX_PATH} -export LD_LIBRARY_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/c-blosc-1.21.1/lib64:$LD_LIBRARY_PATH -export LD_LIBRARY_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/adios2-2.8.3/lib64:$LD_LIBRARY_PATH -export LD_LIBRARY_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/blaspp-2024.05.31/lib64:$LD_LIBRARY_PATH -export LD_LIBRARY_PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/lapackpp-2024.05.31/lib64:$LD_LIBRARY_PATH +export LD_LIBRARY_PATH=${SW_DIR}/c-blosc-1.21.1/lib64:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/adios2-2.8.3/lib64:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/blaspp-2024.05.31/lib64:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${SW_DIR}/lapackpp-2024.05.31/lib64:${LD_LIBRARY_PATH} -export PATH=${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/adios2-2.8.3/bin:${PATH} +export PATH=${SW_DIR}/adios2-2.8.3/bin:${PATH} # optional: CCache -export PATH=/global/common/software/spackecp/perlmutter/e4s-23.08/default/spack/opt/spack/linux-sles15-zen3/gcc-11.2.0/ccache-4.8.2-cvooxdw5wgvv2g3vjxjkrpv6dopginv6/bin:$PATH +export PATH=${SW_DIR}/ccache-4.10.2:$PATH # optional: for Python bindings or libEnsemble module load cray-python/3.11.5 -if [ -d "${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/venvs/warpx-gpu" ] +if [ -d 
"${SW_DIR}/venvs/warpx-gpu" ] then - source ${CFS}/${proj%_g}/${USER}/sw/perlmutter/gpu/venvs/warpx-gpu/bin/activate + source ${SW_DIR}/venvs/warpx-gpu/bin/activate fi # an alias to request an interactive batch node for one hour From e627b9cb66f4bd55017f431a2f9ab6500bfb5423 Mon Sep 17 00:00:00 2001 From: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com> Date: Tue, 18 Feb 2025 16:42:43 -0800 Subject: [PATCH 49/58] mini-PR: Cleanup in Ohm solver for readability (#5675) As said in the title, this is just a small PR to make the `HybridPICSolveE` kernels more readable. --------- Signed-off-by: roelof-groenewald --- .../HybridPICSolveE.cpp | 342 +++++++++--------- 1 file changed, 175 insertions(+), 167 deletions(-) diff --git a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp index b750a7e4f20..f46b2f73e41 100644 --- a/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp +++ b/Source/FieldSolver/FiniteDifferenceSolver/HybridPICSolveE.cpp @@ -616,44 +616,45 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( // Interpolate to get the appropriate charge density in space const Real rho_val = Interp(rho, nodal, Er_stag, coarsen, i, j, 0, 0); - Real rho_val_limited = rho_val; - - // Interpolate current to appropriate staggering to match E field - Real jtot_val = 0._rt; - if (solve_for_Faraday && resistivity_has_J_dependence) { - const Real jr_val = Interp(Jr, Jr_stag, Er_stag, coarsen, i, j, 0, 0); - const Real jt_val = Interp(Jt, Jt_stag, Er_stag, coarsen, i, j, 0, 0); - const Real jz_val = Interp(Jz, Jz_stag, Er_stag, coarsen, i, j, 0, 0); - jtot_val = std::sqrt(jr_val*jr_val + jt_val*jt_val + jz_val*jz_val); - } - - // safety condition since we divide by rho_val later - if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } - - // Get the gradient of the electron pressure if the longitudinal part of - // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 - Real grad_Pe = 0._rt; - if (!solve_for_Faraday) { grad_Pe = T_Algo::UpwardDr(Pe, coefs_r, n_coefs_r, i, j, 0, 0); } - - // interpolate the nodal neE values to the Yee grid - auto enE_r = Interp(enE, nodal, Er_stag, coarsen, i, j, 0, 0); if (rho_val < rho_floor && holmstrom_vacuum_region) { Er(i, j, 0) = 0._rt; } else { + // Get the gradient of the electron pressure if the longitudinal part of + // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 + const Real grad_Pe = (!solve_for_Faraday) ? 
+ T_Algo::UpwardDr(Pe, coefs_r, n_coefs_r, i, j, 0, 0) + : 0._rt; + + // interpolate the nodal neE values to the Yee grid + const auto enE_r = Interp(enE, nodal, Er_stag, coarsen, i, j, 0, 0); + + // safety condition since we divide by rho + const auto rho_val_limited = std::max(rho_val, rho_floor); + Er(i, j, 0) = (enE_r - grad_Pe) / rho_val_limited; } // Add resistivity only if E field value is used to update B - if (solve_for_Faraday) { Er(i, j, 0) += eta(rho_val, jtot_val) * Jr(i, j, 0); } - - if (include_hyper_resistivity_term) { - // r on cell-centered point (Jr is cell-centered in r) - const Real r = rmin + (i + 0.5_rt)*dr; - const Real jr_val = Interp(Jr, Jr_stag, Er_stag, coarsen, i, j, 0, 0); - auto nabla2Jr = T_Algo::Dr_rDr_over_r(Jr, r, dr, coefs_r, n_coefs_r, i, j, 0, 0) - + T_Algo::Dzz(Jr, coefs_z, n_coefs_z, i, j, 0, 0) - jr_val/(r*r); - Er(i, j, 0) -= eta_h * nabla2Jr; + if (solve_for_Faraday) { + Real jtot_val = 0._rt; + if (resistivity_has_J_dependence) { + // Interpolate current to appropriate staggering to match E field + const Real jr_val = Jr(i, j, 0); + const Real jt_val = Interp(Jt, Jt_stag, Er_stag, coarsen, i, j, 0, 0); + const Real jz_val = Interp(Jz, Jz_stag, Er_stag, coarsen, i, j, 0, 0); + jtot_val = std::sqrt(jr_val*jr_val + jt_val*jt_val + jz_val*jz_val); + } + + Er(i, j, 0) += eta(rho_val, jtot_val) * Jr(i, j, 0); + + if (include_hyper_resistivity_term) { + // r on cell-centered point (Jr is cell-centered in r) + const Real r = rmin + (i + 0.5_rt)*dr; + auto nabla2Jr = T_Algo::Dr_rDr_over_r(Jr, r, dr, coefs_r, n_coefs_r, i, j, 0, 0) + + T_Algo::Dzz(Jr, coefs_z, n_coefs_z, i, j, 0, 0) - Jr(i, j, 0)/(r*r); + Er(i, j, 0) -= eta_h * nabla2Jr; + } } if (include_external_fields && (rho_val >= rho_floor)) { @@ -677,41 +678,41 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( // Interpolate to get the appropriate charge density in space const Real rho_val = Interp(rho, nodal, Et_stag, coarsen, i, j, 0, 0); - Real rho_val_limited = rho_val; - - // Interpolate current to appropriate staggering to match E field - Real jtot_val = 0._rt; - if (solve_for_Faraday && resistivity_has_J_dependence) { - const Real jr_val = Interp(Jr, Jr_stag, Et_stag, coarsen, i, j, 0, 0); - const Real jt_val = Interp(Jt, Jt_stag, Et_stag, coarsen, i, j, 0, 0); - const Real jz_val = Interp(Jz, Jz_stag, Et_stag, coarsen, i, j, 0, 0); - jtot_val = std::sqrt(jr_val*jr_val + jt_val*jt_val + jz_val*jz_val); - } - - // safety condition since we divide by rho_val later - if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } - - // Get the gradient of the electron pressure - // -> d/dt = 0 for m = 0 - auto grad_Pe = 0.0_rt; - - // interpolate the nodal neE values to the Yee grid - auto enE_t = Interp(enE, nodal, Et_stag, coarsen, i, j, 0, 1); if (rho_val < rho_floor && holmstrom_vacuum_region) { Et(i, j, 0) = 0._rt; } else { + // Get the gradient of the electron pressure + // -> d/dt = 0 for m = 0 + const auto grad_Pe = 0.0_rt; + + // interpolate the nodal neE values to the Yee grid + const auto enE_t = Interp(enE, nodal, Et_stag, coarsen, i, j, 0, 1); + + // safety condition since we divide by rho + const auto rho_val_limited = std::max(rho_val, rho_floor); + Et(i, j, 0) = (enE_t - grad_Pe) / rho_val_limited; } // Add resistivity only if E field value is used to update B - if (solve_for_Faraday) { Et(i, j, 0) += eta(rho_val, jtot_val) * Jt(i, j, 0); } + if (solve_for_Faraday) { + Real jtot_val = 0._rt; + if(resistivity_has_J_dependence) { + // Interpolate current to 
appropriate staggering to match E field + const Real jr_val = Interp(Jr, Jr_stag, Et_stag, coarsen, i, j, 0, 0); + const Real jt_val = Jt(i, j, 0); + const Real jz_val = Interp(Jz, Jz_stag, Et_stag, coarsen, i, j, 0, 0); + jtot_val = std::sqrt(jr_val*jr_val + jt_val*jt_val + jz_val*jz_val); + } + + Et(i, j, 0) += eta(rho_val, jtot_val) * Jt(i, j, 0); - if (include_hyper_resistivity_term) { - const Real jt_val = Interp(Jt, Jt_stag, Et_stag, coarsen, i, j, 0, 0); - auto nabla2Jt = T_Algo::Dr_rDr_over_r(Jt, r, dr, coefs_r, n_coefs_r, i, j, 0, 0) - + T_Algo::Dzz(Jt, coefs_z, n_coefs_z, i, j, 0, 0) - jt_val/(r*r); - Et(i, j, 0) -= eta_h * nabla2Jt; + if (include_hyper_resistivity_term) { + auto nabla2Jt = T_Algo::Dr_rDr_over_r(Jt, r, dr, coefs_r, n_coefs_r, i, j, 0, 0) + + T_Algo::Dzz(Jt, coefs_z, n_coefs_z, i, j, 0, 0) - Jt(i, j, 0)/(r*r); + Et(i, j, 0) -= eta_h * nabla2Jt; + } } if (include_external_fields && (rho_val >= rho_floor)) { @@ -727,47 +728,48 @@ void FiniteDifferenceSolver::HybridPICSolveECylindrical ( // Interpolate to get the appropriate charge density in space const Real rho_val = Interp(rho, nodal, Ez_stag, coarsen, i, j, 0, 0); - Real rho_val_limited = rho_val; - - // Interpolate current to appropriate staggering to match E field - Real jtot_val = 0._rt; - if (solve_for_Faraday && resistivity_has_J_dependence) { - const Real jr_val = Interp(Jr, Jr_stag, Ez_stag, coarsen, i, j, 0, 0); - const Real jt_val = Interp(Jt, Jt_stag, Ez_stag, coarsen, i, j, 0, 0); - const Real jz_val = Interp(Jz, Jz_stag, Ez_stag, coarsen, i, j, 0, 0); - jtot_val = std::sqrt(jr_val*jr_val + jt_val*jt_val + jz_val*jz_val); - } - - // safety condition since we divide by rho_val later - if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } - - // Get the gradient of the electron pressure if the longitudinal part of - // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 - Real grad_Pe = 0._rt; - if (!solve_for_Faraday) { grad_Pe = T_Algo::UpwardDz(Pe, coefs_z, n_coefs_z, i, j, 0, 0); } - - // interpolate the nodal neE values to the Yee grid - auto enE_z = Interp(enE, nodal, Ez_stag, coarsen, i, j, 0, 2); if (rho_val < rho_floor && holmstrom_vacuum_region) { Ez(i, j, 0) = 0._rt; } else { + // Get the gradient of the electron pressure if the longitudinal part of + // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 + const Real grad_Pe = (!solve_for_Faraday) ? 
+ T_Algo::UpwardDz(Pe, coefs_z, n_coefs_z, i, j, 0, 0) + : 0._rt; + + // interpolate the nodal neE values to the Yee grid + const auto enE_z = Interp(enE, nodal, Ez_stag, coarsen, i, j, 0, 2); + + // safety condition since we divide by rho + const auto rho_val_limited = std::max(rho_val, rho_floor); + Ez(i, j, 0) = (enE_z - grad_Pe) / rho_val_limited; } // Add resistivity only if E field value is used to update B - if (solve_for_Faraday) { Ez(i, j, 0) += eta(rho_val, jtot_val) * Jz(i, j, 0); } + if (solve_for_Faraday) { + Real jtot_val = 0._rt; + if (resistivity_has_J_dependence) { + // Interpolate current to appropriate staggering to match E field + const Real jr_val = Interp(Jr, Jr_stag, Ez_stag, coarsen, i, j, 0, 0); + const Real jt_val = Interp(Jt, Jt_stag, Ez_stag, coarsen, i, j, 0, 0); + const Real jz_val = Jz(i, j, 0); + jtot_val = std::sqrt(jr_val*jr_val + jt_val*jt_val + jz_val*jz_val); + } - if (include_hyper_resistivity_term) { - // r on nodal point (Jz is nodal in r) - Real const r = rmin + i*dr; + Ez(i, j, 0) += eta(rho_val, jtot_val) * Jz(i, j, 0); - auto nabla2Jz = T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, 0, 0); - if (r > 0.5_rt*dr) { - nabla2Jz += T_Algo::Dr_rDr_over_r(Jz, r, dr, coefs_r, n_coefs_r, i, j, 0, 0); - } + if (include_hyper_resistivity_term) { + // r on nodal point (Jz is nodal in r) + const Real r = rmin + i*dr; - Ez(i, j, 0) -= eta_h * nabla2Jz; + auto nabla2Jz = T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, 0, 0); + if (r > 0.5_rt*dr) { + nabla2Jz += T_Algo::Dr_rDr_over_r(Jz, r, dr, coefs_r, n_coefs_r, i, j, 0, 0); + } + Ez(i, j, 0) -= eta_h * nabla2Jz; + } } if (include_external_fields && (rho_val >= rho_floor)) { @@ -995,42 +997,44 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( // Interpolate to get the appropriate charge density in space const Real rho_val = Interp(rho, nodal, Ex_stag, coarsen, i, j, k, 0); - Real rho_val_limited = rho_val; - - // Interpolate current to appropriate staggering to match E field - Real jtot_val = 0._rt; - if (solve_for_Faraday && resistivity_has_J_dependence) { - const Real jx_val = Interp(Jx, Jx_stag, Ex_stag, coarsen, i, j, k, 0); - const Real jy_val = Interp(Jy, Jy_stag, Ex_stag, coarsen, i, j, k, 0); - const Real jz_val = Interp(Jz, Jz_stag, Ex_stag, coarsen, i, j, k, 0); - jtot_val = std::sqrt(jx_val*jx_val + jy_val*jy_val + jz_val*jz_val); - } - - // safety condition since we divide by rho_val later - if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } - - // Get the gradient of the electron pressure if the longitudinal part of - // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 - Real grad_Pe = 0._rt; - if (!solve_for_Faraday) { grad_Pe = T_Algo::UpwardDx(Pe, coefs_x, n_coefs_x, i, j, k); } - - // interpolate the nodal neE values to the Yee grid - auto enE_x = Interp(enE, nodal, Ex_stag, coarsen, i, j, k, 0); if (rho_val < rho_floor && holmstrom_vacuum_region) { Ex(i, j, k) = 0._rt; } else { + // Get the gradient of the electron pressure if the longitudinal part of + // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 + const Real grad_Pe = (!solve_for_Faraday) ? 
+ T_Algo::UpwardDx(Pe, coefs_x, n_coefs_x, i, j, k) + : 0._rt; + + // interpolate the nodal neE values to the Yee grid + const auto enE_x = Interp(enE, nodal, Ex_stag, coarsen, i, j, k, 0); + + // safety condition since we divide by rho + const auto rho_val_limited = std::max(rho_val, rho_floor); + Ex(i, j, k) = (enE_x - grad_Pe) / rho_val_limited; } // Add resistivity only if E field value is used to update B - if (solve_for_Faraday) { Ex(i, j, k) += eta(rho_val, jtot_val) * Jx(i, j, k); } + if (solve_for_Faraday) { + Real jtot_val = 0._rt; + if (resistivity_has_J_dependence) { + // Interpolate current to appropriate staggering to match E field + const Real jx_val = Jx(i, j, k); + const Real jy_val = Interp(Jy, Jy_stag, Ex_stag, coarsen, i, j, k, 0); + const Real jz_val = Interp(Jz, Jz_stag, Ex_stag, coarsen, i, j, k, 0); + jtot_val = std::sqrt(jx_val*jx_val + jy_val*jy_val + jz_val*jz_val); + } + + Ex(i, j, k) += eta(rho_val, jtot_val) * Jx(i, j, k); - if (include_hyper_resistivity_term) { - auto nabla2Jx = T_Algo::Dxx(Jx, coefs_x, n_coefs_x, i, j, k) - + T_Algo::Dyy(Jx, coefs_y, n_coefs_y, i, j, k) - + T_Algo::Dzz(Jx, coefs_z, n_coefs_z, i, j, k); - Ex(i, j, k) -= eta_h * nabla2Jx; + if (include_hyper_resistivity_term) { + auto nabla2Jx = T_Algo::Dxx(Jx, coefs_x, n_coefs_x, i, j, k) + + T_Algo::Dyy(Jx, coefs_y, n_coefs_y, i, j, k) + + T_Algo::Dzz(Jx, coefs_z, n_coefs_z, i, j, k); + Ex(i, j, k) -= eta_h * nabla2Jx; + } } if (include_external_fields && (rho_val >= rho_floor)) { @@ -1046,42 +1050,44 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( // Interpolate to get the appropriate charge density in space const Real rho_val = Interp(rho, nodal, Ey_stag, coarsen, i, j, k, 0); - Real rho_val_limited = rho_val; - - // Interpolate current to appropriate staggering to match E field - Real jtot_val = 0._rt; - if (solve_for_Faraday && resistivity_has_J_dependence) { - const Real jx_val = Interp(Jx, Jx_stag, Ey_stag, coarsen, i, j, k, 0); - const Real jy_val = Interp(Jy, Jy_stag, Ey_stag, coarsen, i, j, k, 0); - const Real jz_val = Interp(Jz, Jz_stag, Ey_stag, coarsen, i, j, k, 0); - jtot_val = std::sqrt(jx_val*jx_val + jy_val*jy_val + jz_val*jz_val); - } - - // safety condition since we divide by rho_val later - if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } - - // Get the gradient of the electron pressure if the longitudinal part of - // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 - Real grad_Pe = 0._rt; - if (!solve_for_Faraday) { grad_Pe = T_Algo::UpwardDy(Pe, coefs_y, n_coefs_y, i, j, k); } - - // interpolate the nodal neE values to the Yee grid - auto enE_y = Interp(enE, nodal, Ey_stag, coarsen, i, j, k, 1); if (rho_val < rho_floor && holmstrom_vacuum_region) { Ey(i, j, k) = 0._rt; } else { + // Get the gradient of the electron pressure if the longitudinal part of + // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 + const Real grad_Pe = (!solve_for_Faraday) ? 
+ T_Algo::UpwardDy(Pe, coefs_y, n_coefs_y, i, j, k) + : 0._rt; + + // interpolate the nodal neE values to the Yee grid + const auto enE_y = Interp(enE, nodal, Ey_stag, coarsen, i, j, k, 1); + + // safety condition since we divide by rho + const auto rho_val_limited = std::max(rho_val, rho_floor); + Ey(i, j, k) = (enE_y - grad_Pe) / rho_val_limited; } // Add resistivity only if E field value is used to update B - if (solve_for_Faraday) { Ey(i, j, k) += eta(rho_val, jtot_val) * Jy(i, j, k); } + if (solve_for_Faraday) { + Real jtot_val = 0._rt; + if (resistivity_has_J_dependence) { + // Interpolate current to appropriate staggering to match E field + const Real jx_val = Interp(Jx, Jx_stag, Ey_stag, coarsen, i, j, k, 0); + const Real jy_val = Jy(i, j, k); + const Real jz_val = Interp(Jz, Jz_stag, Ey_stag, coarsen, i, j, k, 0); + jtot_val = std::sqrt(jx_val*jx_val + jy_val*jy_val + jz_val*jz_val); + } + + Ey(i, j, k) += eta(rho_val, jtot_val) * Jy(i, j, k); - if (include_hyper_resistivity_term) { - auto nabla2Jy = T_Algo::Dxx(Jy, coefs_x, n_coefs_x, i, j, k) - + T_Algo::Dyy(Jy, coefs_y, n_coefs_y, i, j, k) - + T_Algo::Dzz(Jy, coefs_z, n_coefs_z, i, j, k); - Ey(i, j, k) -= eta_h * nabla2Jy; + if (include_hyper_resistivity_term) { + auto nabla2Jy = T_Algo::Dxx(Jy, coefs_x, n_coefs_x, i, j, k) + + T_Algo::Dyy(Jy, coefs_y, n_coefs_y, i, j, k) + + T_Algo::Dzz(Jy, coefs_z, n_coefs_z, i, j, k); + Ey(i, j, k) -= eta_h * nabla2Jy; + } } if (include_external_fields && (rho_val >= rho_floor)) { @@ -1097,42 +1103,44 @@ void FiniteDifferenceSolver::HybridPICSolveECartesian ( // Interpolate to get the appropriate charge density in space const Real rho_val = Interp(rho, nodal, Ez_stag, coarsen, i, j, k, 0); - Real rho_val_limited = rho_val; - - // Interpolate current to appropriate staggering to match E field - Real jtot_val = 0._rt; - if (solve_for_Faraday && resistivity_has_J_dependence) { - const Real jx_val = Interp(Jx, Jx_stag, Ez_stag, coarsen, i, j, k, 0); - const Real jy_val = Interp(Jy, Jy_stag, Ez_stag, coarsen, i, j, k, 0); - const Real jz_val = Interp(Jz, Jz_stag, Ez_stag, coarsen, i, j, k, 0); - jtot_val = std::sqrt(jx_val*jx_val + jy_val*jy_val + jz_val*jz_val); - } - - // safety condition since we divide by rho_val later - if (rho_val_limited < rho_floor) { rho_val_limited = rho_floor; } - - // Get the gradient of the electron pressure if the longitudinal part of - // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 - Real grad_Pe = 0._rt; - if (!solve_for_Faraday) { grad_Pe = T_Algo::UpwardDz(Pe, coefs_z, n_coefs_z, i, j, k); } - - // interpolate the nodal neE values to the Yee grid - auto enE_z = Interp(enE, nodal, Ez_stag, coarsen, i, j, k, 2); if (rho_val < rho_floor && holmstrom_vacuum_region) { Ez(i, j, k) = 0._rt; } else { + // Get the gradient of the electron pressure if the longitudinal part of + // the E-field should be included, otherwise ignore it since curl x (grad Pe) = 0 + const Real grad_Pe = (!solve_for_Faraday) ? 
+ T_Algo::UpwardDz(Pe, coefs_z, n_coefs_z, i, j, k) + : 0._rt; + + // interpolate the nodal neE values to the Yee grid + const auto enE_z = Interp(enE, nodal, Ez_stag, coarsen, i, j, k, 2); + + // safety condition since we divide by rho + const auto rho_val_limited = std::max(rho_val, rho_floor); + Ez(i, j, k) = (enE_z - grad_Pe) / rho_val_limited; } // Add resistivity only if E field value is used to update B - if (solve_for_Faraday) { Ez(i, j, k) += eta(rho_val, jtot_val) * Jz(i, j, k); } + if (solve_for_Faraday) { + Real jtot_val = 0._rt; + if (resistivity_has_J_dependence) { + // Interpolate current to appropriate staggering to match E field + const Real jx_val = Interp(Jx, Jx_stag, Ez_stag, coarsen, i, j, k, 0); + const Real jy_val = Interp(Jy, Jy_stag, Ez_stag, coarsen, i, j, k, 0); + const Real jz_val = Jz(i, j, k); + jtot_val = std::sqrt(jx_val*jx_val + jy_val*jy_val + jz_val*jz_val); + } + + Ez(i, j, k) += eta(rho_val, jtot_val) * Jz(i, j, k); - if (include_hyper_resistivity_term) { - auto nabla2Jz = T_Algo::Dxx(Jz, coefs_x, n_coefs_x, i, j, k) - + T_Algo::Dyy(Jz, coefs_y, n_coefs_y, i, j, k) - + T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, k); - Ez(i, j, k) -= eta_h * nabla2Jz; + if (include_hyper_resistivity_term) { + auto nabla2Jz = T_Algo::Dxx(Jz, coefs_x, n_coefs_x, i, j, k) + + T_Algo::Dyy(Jz, coefs_y, n_coefs_y, i, j, k) + + T_Algo::Dzz(Jz, coefs_z, n_coefs_z, i, j, k); + Ez(i, j, k) -= eta_h * nabla2Jz; + } } if (include_external_fields && (rho_val >= rho_floor)) { From 8d285a8ada0e20d495a89c83927461773cd6a94c Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 19 Feb 2025 01:47:37 +0100 Subject: [PATCH 50/58] WarpX class: fuse together doFieldIonization with doFieldIonization(lev) and doQEDEvents with doQEDEvents(lev) (#5671) `doFieldIonization(lev) ` is called only once, inside `doFieldIonization` , which is simply a loop over the levels calling for each level `doFieldIonization(lev) `. The same happens for `doQEDEvents`. In order to simplify the interface of the WarpX class, I would like to propose to drop `doFieldIonization(lev) ` and `doQEDEvents(lev) `, and to integrate their code respectively in `doFieldIonization` and `doQEDEvents`. 
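
To illustrate the idea with a self-contained sketch (hypothetical names, not WarpX code): when a per-level overload has exactly one caller, and that caller is just a loop over the levels, the overload can be folded into the loop without changing behavior, shrinking the public interface.

```cpp
#include <iostream>

// Illustrative sketch only: `Worker` and `doWork` are hypothetical stand-ins.
// Before: doWork() looped over levels and called a public doWork(int lev).
// After: the per-level body is inlined into the loop, as done in this PR.
struct Worker
{
    int finest_level = 2;

    void doWork ()
    {
        for (int lev = 0; lev <= finest_level; ++lev) {
            // the per-level body lives here now; no separate overload is needed
            std::cout << "processing level " << lev << '\n';
        }
    }
};

int main ()
{
    Worker w;
    w.doWork();
    return 0;
}
```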
--- Source/Evolve/WarpXEvolve.cpp | 56 ++++++++++++++--------------------- Source/WarpX.H | 8 ----- 2 files changed, 22 insertions(+), 42 deletions(-) diff --git a/Source/Evolve/WarpXEvolve.cpp b/Source/Evolve/WarpXEvolve.cpp index a5ad9d4034e..5593642a944 100644 --- a/Source/Evolve/WarpXEvolve.cpp +++ b/Source/Evolve/WarpXEvolve.cpp @@ -1076,53 +1076,41 @@ WarpX::OneStep_sub1 (Real cur_time) void WarpX::doFieldIonization () -{ - for (int lev = 0; lev <= finest_level; ++lev) { - doFieldIonization(lev); - } -} - -void -WarpX::doFieldIonization (int lev) { using ablastr::fields::Direction; using warpx::fields::FieldType; - mypc->doFieldIonization( - lev, - *m_fields.get(FieldType::Efield_aux, Direction{0}, lev), - *m_fields.get(FieldType::Efield_aux, Direction{1}, lev), - *m_fields.get(FieldType::Efield_aux, Direction{2}, lev), - *m_fields.get(FieldType::Bfield_aux, Direction{0}, lev), - *m_fields.get(FieldType::Bfield_aux, Direction{1}, lev), - *m_fields.get(FieldType::Bfield_aux, Direction{2}, lev) - ); -} - -#ifdef WARPX_QED -void -WarpX::doQEDEvents () -{ for (int lev = 0; lev <= finest_level; ++lev) { - doQEDEvents(lev); + mypc->doFieldIonization( + lev, + *m_fields.get(FieldType::Efield_aux, Direction{0}, lev), + *m_fields.get(FieldType::Efield_aux, Direction{1}, lev), + *m_fields.get(FieldType::Efield_aux, Direction{2}, lev), + *m_fields.get(FieldType::Bfield_aux, Direction{0}, lev), + *m_fields.get(FieldType::Bfield_aux, Direction{1}, lev), + *m_fields.get(FieldType::Bfield_aux, Direction{2}, lev) + ); } } +#ifdef WARPX_QED void -WarpX::doQEDEvents (int lev) +WarpX::doQEDEvents () { using ablastr::fields::Direction; using warpx::fields::FieldType; - mypc->doQedEvents( - lev, - *m_fields.get(FieldType::Efield_aux, Direction{0}, lev), - *m_fields.get(FieldType::Efield_aux, Direction{1}, lev), - *m_fields.get(FieldType::Efield_aux, Direction{2}, lev), - *m_fields.get(FieldType::Bfield_aux, Direction{0}, lev), - *m_fields.get(FieldType::Bfield_aux, Direction{1}, lev), - *m_fields.get(FieldType::Bfield_aux, Direction{2}, lev) - ); + for (int lev = 0; lev <= finest_level; ++lev) { + mypc->doQedEvents( + lev, + *m_fields.get(FieldType::Efield_aux, Direction{0}, lev), + *m_fields.get(FieldType::Efield_aux, Direction{1}, lev), + *m_fields.get(FieldType::Efield_aux, Direction{2}, lev), + *m_fields.get(FieldType::Bfield_aux, Direction{0}, lev), + *m_fields.get(FieldType::Bfield_aux, Direction{1}, lev), + *m_fields.get(FieldType::Bfield_aux, Direction{2}, lev) + ); + } } #endif diff --git a/Source/WarpX.H b/Source/WarpX.H index 29439002a3a..f039a636498 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -663,18 +663,10 @@ public: /** Run the ionization module on all species */ void doFieldIonization (); - /** Run the ionization module on all species at level lev - * \param lev level - */ - void doFieldIonization (int lev); #ifdef WARPX_QED /** Run the QED module on all species */ void doQEDEvents (); - /** Run the QED module on all species at level lev - * \param lev level - */ - void doQEDEvents (int lev); #endif void PushParticlesandDeposit (int lev, amrex::Real cur_time, DtType a_dt_type=DtType::Full, bool skip_current=false, From bf4bd4a22d4669b94bb24c7dcc16c9a0d0fab244 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 19 Feb 2025 01:48:23 +0100 Subject: [PATCH 51/58] WarpX class: remove declaration of two unimplemented functions (#5670) `AverageAndPackFields` and `prepareFields` are not implemented. Therefore, this PR removes their declaration from the WarpX header. 
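
For future cleanups of this kind, a quick way to spot declared-but-never-defined methods (a sketch, assuming a Unix shell and the usual out-of-class definition style) is to grep for a definition matching the declaration:

```console
# declaration present in the header...
grep -n "AverageAndPackFields" Source/WarpX.H
# ...but no out-of-class definition anywhere in the sources
grep -rn "WarpX::AverageAndPackFields" Source/
```

If the second command prints nothing, the declaration is dead and can be dropped.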
--- Source/WarpX.H | 8 -------- 1 file changed, 8 deletions(-) diff --git a/Source/WarpX.H b/Source/WarpX.H index f039a636498..638b6403cae 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -756,14 +756,6 @@ public: [[nodiscard]] amrex::Real stopTime () const {return stop_time;} void updateStopTime (const amrex::Real new_stop_time) {stop_time = new_stop_time;} - void AverageAndPackFields( amrex::Vector& varnames, - amrex::Vector& mf_avg, amrex::IntVect ngrow) const; - - void prepareFields( int step, amrex::Vector& varnames, - amrex::Vector& mf_avg, - amrex::Vector& output_mf, - amrex::Vector& output_geom ) const; - static std::array CellSize (int lev); static amrex::XDim3 InvCellSize (int lev); static amrex::RealBox getRealBox(const amrex::Box& bx, int lev); From 804a27340adddcd1163f58fdedf1d00a998a4fc8 Mon Sep 17 00:00:00 2001 From: Andrew Myers Date: Tue, 18 Feb 2025 16:52:56 -0800 Subject: [PATCH 52/58] Fix plot_distribution_mapping.py for 2D (#5660) The box metadata used in this script follows the AMReX conventions. We want "zyx" in 3D and "yx" in 2D. --- Tools/PostProcessing/plot_distribution_mapping.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/Tools/PostProcessing/plot_distribution_mapping.py b/Tools/PostProcessing/plot_distribution_mapping.py index 899ea4678c4..07a353cdc3d 100644 --- a/Tools/PostProcessing/plot_distribution_mapping.py +++ b/Tools/PostProcessing/plot_distribution_mapping.py @@ -119,9 +119,9 @@ def _get_costs_reduced_diagnostics(self, directory, prange): kcoords = k.astype(int) // k_blocking_factor # Fill in cost array - shape = (kmax + 1, jmax + 1, imax + 1)[: 2 + self.is_3D] + shape = (kmax + 1, jmax + 1, imax + 1)[1 - self.is_3D :] coords = [ - coord[: 2 + self.is_3D] for coord in zip(kcoords, jcoords, icoords) + coord[1 - self.is_3D :] for coord in zip(kcoords, jcoords, icoords) ] cost_arr = np.full(shape, 0.0) From 216847203740dae3a2a3cf2577165c1d4218fcc2 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 19 Feb 2025 01:55:35 +0100 Subject: [PATCH 53/58] WarpX class: move PrintDtDxDyDz to an anonymous namespace in WarpXInitData.cpp (#5658) `PrintDtDxDyDz` is used only twice in `WarpXInitData.cpp`. 
Therefore, this PR turns it from a method of the WarpX class to a simple function inside an anonymous namespace in `WarpXInitData.cpp` --- Source/Evolve/WarpXComputeDt.cpp | 19 ------------------- Source/Initialization/WarpXInitData.cpp | 25 +++++++++++++++++++++++-- Source/WarpX.H | 3 --- 3 files changed, 23 insertions(+), 24 deletions(-) diff --git a/Source/Evolve/WarpXComputeDt.cpp b/Source/Evolve/WarpXComputeDt.cpp index 2b4db960ed6..f88b2044927 100644 --- a/Source/Evolve/WarpXComputeDt.cpp +++ b/Source/Evolve/WarpXComputeDt.cpp @@ -134,22 +134,3 @@ WarpX::UpdateDtFromParticleSpeeds () dt[lev] = dt[lev+1] * refRatio(lev)[0]; } } - -void -WarpX::PrintDtDxDyDz () -{ - for (int lev=0; lev <= max_level; lev++) { - const amrex::Real* dx_lev = geom[lev].CellSize(); - amrex::Print() << "Level " << lev << ": dt = " << dt[lev] -#if defined(WARPX_DIM_1D_Z) - << " ; dz = " << dx_lev[0] << '\n'; -#elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) - << " ; dx = " << dx_lev[0] - << " ; dz = " << dx_lev[1] << '\n'; -#elif defined(WARPX_DIM_3D) - << " ; dx = " << dx_lev[0] - << " ; dy = " << dx_lev[1] - << " ; dz = " << dx_lev[2] << '\n'; -#endif - } -} diff --git a/Source/Initialization/WarpXInitData.cpp b/Source/Initialization/WarpXInitData.cpp index 90b8d613898..c70188f07bc 100644 --- a/Source/Initialization/WarpXInitData.cpp +++ b/Source/Initialization/WarpXInitData.cpp @@ -93,6 +93,27 @@ using namespace amrex; namespace { + + /** Print dt and dx,dy,dz */ + void PrintDtDxDyDz ( + int max_level, const amrex::Vector& geom, const amrex::Vector& dt) + { + for (int lev=0; lev <= max_level; lev++) { + const amrex::Real* dx_lev = geom[lev].CellSize(); + amrex::Print() << "Level " << lev << ": dt = " << dt[lev] + #if defined(WARPX_DIM_1D_Z) + << " ; dz = " << dx_lev[0] << '\n'; + #elif defined(WARPX_DIM_XZ) || defined(WARPX_DIM_RZ) + << " ; dx = " << dx_lev[0] + << " ; dz = " << dx_lev[1] << '\n'; + #elif defined(WARPX_DIM_3D) + << " ; dx = " << dx_lev[0] + << " ; dy = " << dx_lev[1] + << " ; dz = " << dx_lev[2] << '\n'; + #endif + } + } + /** * \brief Check that the number of guard cells is smaller than the number of valid cells, * for a given MultiFab, and abort otherwise. @@ -539,14 +560,14 @@ WarpX::InitData () if (restart_chkfile.empty()) { ComputeDt(); - WarpX::PrintDtDxDyDz(); + ::PrintDtDxDyDz(max_level, geom, dt); InitFromScratch(); InitDiagnostics(); } else { InitFromCheckpoint(); - WarpX::PrintDtDxDyDz(); + ::PrintDtDxDyDz(max_level, geom, dt); PostRestart(); reduced_diags->InitData(); } diff --git a/Source/WarpX.H b/Source/WarpX.H index 638b6403cae..44423edb4bb 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -462,9 +462,6 @@ public: /** Write a file that record all inputs: inputs file + command line options */ void WriteUsedInputsFile () const; - /** Print dt and dx,dy,dz */ - void PrintDtDxDyDz (); - /** * \brief * Compute the last time step of the simulation From 04b0cb1177a9558abe451169406abc40bcd36c79 Mon Sep 17 00:00:00 2001 From: Axel Huebl Date: Tue, 18 Feb 2025 18:20:05 -0800 Subject: [PATCH 54/58] AMReX/pyAMReX/PICSAR: Weekly Update (#5680) Weekly update to latest AMReX. Weekly update to latest pyAMReX (no changes). Weekly update to latest PICSAR (no changes). 
```console ./Tools/Release/updateAMReX.py ./Tools/Release/updatepyAMReX.py ./Tools/Release/updatePICSAR.py ``` This pulls in https://github.com/AMReX-Codes/amrex/pull/4337, fixing regressions from #5669 (GPU segfaults on particle redistribute) Signed-off-by: Axel Huebl --- .azure-pipelines.yml | 16 ++++++---------- .github/workflows/cuda.yml | 2 +- cmake/dependencies/AMReX.cmake | 2 +- 3 files changed, 8 insertions(+), 12 deletions(-) diff --git a/.azure-pipelines.yml b/.azure-pipelines.yml index 77cc75a0264..427cf21600b 100644 --- a/.azure-pipelines.yml +++ b/.azure-pipelines.yml @@ -66,16 +66,6 @@ jobs: cacheHitVar: CCACHE_CACHE_RESTORED displayName: Cache Ccache Objects - - task: Cache@2 - continueOnError: true - inputs: - key: 'Python3 | "$(System.JobName)" | .azure-pipelines.yml' - restoreKeys: | - Python3 | "$(System.JobName)" | .azure-pipelines.yml - path: /home/vsts/.local/lib/python3.8 - cacheHitVar: PYTHON38_CACHE_RESTORED - displayName: Cache Python Libraries - - bash: | set -o nounset errexit pipefail cat /proc/cpuinfo | grep "model name" | sort -u @@ -176,3 +166,9 @@ jobs: -exec cat {} \; displayName: 'Logs' condition: always() + + - bash: | + # clean out so the Post-job Cache "tar" command has more disk space available + rm -rf build + displayName: 'Clean Build Directory' + condition: always() diff --git a/.github/workflows/cuda.yml b/.github/workflows/cuda.yml index 3b65f406728..029d1e4db89 100644 --- a/.github/workflows/cuda.yml +++ b/.github/workflows/cuda.yml @@ -127,7 +127,7 @@ jobs: which nvcc || echo "nvcc not in PATH!" git clone https://github.com/AMReX-Codes/amrex.git ../amrex - cd ../amrex && git checkout --detach 275f55f25fec350dfedb54f75a19200b52ced93f && cd - + cd ../amrex && git checkout --detach b364becad939a490bca4e7f8b23f7392c558a311 && cd - make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_FFT=TRUE USE_CCACHE=TRUE -j 4 ccache -s diff --git a/cmake/dependencies/AMReX.cmake b/cmake/dependencies/AMReX.cmake index 813734282c7..7a249cd6c5b 100644 --- a/cmake/dependencies/AMReX.cmake +++ b/cmake/dependencies/AMReX.cmake @@ -294,7 +294,7 @@ set(WarpX_amrex_src "" set(WarpX_amrex_repo "https://github.com/AMReX-Codes/amrex.git" CACHE STRING "Repository URI to pull and build AMReX from if(WarpX_amrex_internal)") -set(WarpX_amrex_branch "275f55f25fec350dfedb54f75a19200b52ced93f" +set(WarpX_amrex_branch "b364becad939a490bca4e7f8b23f7392c558a311" CACHE STRING "Repository branch for WarpX_amrex_repo if(WarpX_amrex_internal)") From d38ebc75568234d1f603db23d61e4735d430cc47 Mon Sep 17 00:00:00 2001 From: Luca Fedeli Date: Wed, 19 Feb 2025 18:41:05 +0100 Subject: [PATCH 55/58] WarpX class: remove unused functions NodalSyncJ and NodalSyncRho (#5685) `NodalSyncJ` and `NodalSyncRho` are member functions of the WarpX class, but they are never used. Therefore, this PR removes them. 
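
A simple way to double-check that no call sites remain (a sketch, assuming a Unix shell) is to search the sources and, if needed, ask git when the symbols were last touched:

```console
# any remaining uses outside the (now removed) declaration and definition?
grep -rn "NodalSyncJ\|NodalSyncRho" Source/
# pickaxe search: commits that added or removed occurrences of the symbol
git log --oneline -S NodalSyncJ
```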
--- Source/Parallelization/WarpXComm.cpp | 48 ---------------------------- Source/WarpX.H | 12 ------- 2 files changed, 60 deletions(-) diff --git a/Source/Parallelization/WarpXComm.cpp b/Source/Parallelization/WarpXComm.cpp index d5c36084467..3adf4389a46 100644 --- a/Source/Parallelization/WarpXComm.cpp +++ b/Source/Parallelization/WarpXComm.cpp @@ -1667,51 +1667,3 @@ void WarpX::AddRhoFromFineLevelandSumBoundary ( MultiFab::Add(*charge_fp[lev], mf, 0, icomp, ncomp, 0); } } - -void WarpX::NodalSyncJ ( - const ablastr::fields::MultiLevelVectorField& J_fp, - const ablastr::fields::MultiLevelVectorField& J_cp, - const int lev, - PatchType patch_type) -{ - if (!override_sync_intervals.contains(istep[0])) { return; } - - if (patch_type == PatchType::fine) - { - const amrex::Periodicity& period = Geom(lev).periodicity(); - ablastr::utils::communication::OverrideSync(*J_fp[lev][0], WarpX::do_single_precision_comms, period); - ablastr::utils::communication::OverrideSync(*J_fp[lev][1], WarpX::do_single_precision_comms, period); - ablastr::utils::communication::OverrideSync(*J_fp[lev][2], WarpX::do_single_precision_comms, period); - } - else if (patch_type == PatchType::coarse) - { - const amrex::Periodicity& cperiod = Geom(lev-1).periodicity(); - ablastr::utils::communication::OverrideSync(*J_cp[lev][0], WarpX::do_single_precision_comms, cperiod); - ablastr::utils::communication::OverrideSync(*J_cp[lev][1], WarpX::do_single_precision_comms, cperiod); - ablastr::utils::communication::OverrideSync(*J_cp[lev][2], WarpX::do_single_precision_comms, cperiod); - } -} - -void WarpX::NodalSyncRho ( - const amrex::Vector>& charge_fp, - const amrex::Vector>& charge_cp, - const int lev, - PatchType patch_type, - const int icomp, - const int ncomp) -{ - if (!override_sync_intervals.contains(istep[0])) { return; } - - if (patch_type == PatchType::fine && charge_fp[lev]) - { - const amrex::Periodicity& period = Geom(lev).periodicity(); - MultiFab rhof(*charge_fp[lev], amrex::make_alias, icomp, ncomp); - ablastr::utils::communication::OverrideSync(rhof, WarpX::do_single_precision_comms, period); - } - else if (patch_type == PatchType::coarse && charge_cp[lev]) - { - const amrex::Periodicity& cperiod = Geom(lev-1).periodicity(); - MultiFab rhoc(*charge_cp[lev], amrex::make_alias, icomp, ncomp); - ablastr::utils::communication::OverrideSync(rhoc, WarpX::do_single_precision_comms, cperiod); - } -} diff --git a/Source/WarpX.H b/Source/WarpX.H index 44423edb4bb..00ab9080751 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -1162,11 +1162,6 @@ private: const ablastr::fields::MultiLevelVectorField& current, int lev, const amrex::Periodicity& period); - void NodalSyncJ ( - const ablastr::fields::MultiLevelVectorField& J_fp, - const ablastr::fields::MultiLevelVectorField& J_cp, - int lev, - PatchType patch_type); void RestrictRhoFromFineToCoarsePatch (int lev ); void ApplyFilterandSumBoundaryRho ( @@ -1183,13 +1178,6 @@ private: int lev, int icomp, int ncomp); - void NodalSyncRho ( - const amrex::Vector>& charge_fp, - const amrex::Vector>& charge_cp, - int lev, - PatchType patch_type, - int icomp, - int ncomp); void ReadParameters (); From 686ef38c16f77c1bfec3a153cc598c663e2046df Mon Sep 17 00:00:00 2001 From: Arianna Formenti Date: Wed, 19 Feb 2025 10:10:21 -0800 Subject: [PATCH 56/58] Small fix in Perlmutter GPU sbatch script (#5683) Changes in Perlmutter GPU job script: from `#SBATCH --cpus-per-task=16` to `#SBATCH --cpus-per-task=32`. This is to request (v)cores in consecutive blocks. 
GPU 3 is closest to CPU cores 0-15, 64-79, GPU 2 to CPU cores 16-31, 80-95, ...
If `--cpus-per-task=16`, MPI ranks 0 and 1 are mapped to cores 0 and 8.
If `--cpus-per-task=32`, MPI ranks 0 and 1 are mapped to cores 0 and 16.

Visual representation
![pm_gpu_vcores_mpi](https://github.com/user-attachments/assets/edf0721f-7321-49ab-bf37-4b55a7c422cc)

---------

Co-authored-by: Axel Huebl
---
 Tools/machines/perlmutter-nersc/perlmutter_gpu.sbatch | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Tools/machines/perlmutter-nersc/perlmutter_gpu.sbatch b/Tools/machines/perlmutter-nersc/perlmutter_gpu.sbatch
index 37bd5d60c54..bd47fa3bd2a 100644
--- a/Tools/machines/perlmutter-nersc/perlmutter_gpu.sbatch
+++ b/Tools/machines/perlmutter-nersc/perlmutter_gpu.sbatch
@@ -17,7 +17,7 @@
 # A100 80GB (256 nodes)
 #S BATCH -C gpu&hbm80g
 #SBATCH --exclusive
-#SBATCH --cpus-per-task=16
+#SBATCH --cpus-per-task=32
 # ideally single:1, but NERSC cgroups issue
 #SBATCH --gpu-bind=none
 #SBATCH --ntasks-per-node=4
@@ -34,7 +34,7 @@ export MPICH_OFI_NIC_POLICY=GPU

 # threads for OpenMP and threaded compressors per MPI rank
 # note: 16 avoids hyperthreading (32 virtual cores, 16 physical)
-export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
+export OMP_NUM_THREADS=16

 # GPU-aware MPI optimizations
 GPU_AWARE_MPI="amrex.use_gpu_aware_mpi=1"

From deef43533b9ccbee355327e9f023947dfd5ef909 Mon Sep 17 00:00:00 2001
From: Axel Huebl
Date: Wed, 19 Feb 2025 11:22:57 -0800
Subject: [PATCH 57/58] Doc: PoP on Ion-Acoustic Solitons (#5686)

New PoP by Ashwyn Sam et al.:
https://doi.org/10.1063/5.0249525
---
 Docs/source/highlights.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Docs/source/highlights.rst b/Docs/source/highlights.rst
index b40ed16e945..81cc53c3eab 100644
--- a/Docs/source/highlights.rst
+++ b/Docs/source/highlights.rst
@@ -159,6 +159,11 @@ High Energy Astrophysical Plasma Physics

 Scientific works in astrophysical plasma modeling.

+#. Sam A, Kumar P, Fletcher AC, Crabtree C, Lee N, Elschot S.
+   **Nonlinear evolution, propagation, electron-trapping, and damping effects of ion-acoustic solitons using fully kinetic PIC simulations**.
+   Phys. Plasmas **32** 022103, 2025
+   `DOI:10.1063/5.0249525 <https://doi.org/10.1063/5.0249525>`__
+
 #. Jambunathan R, Jones H, Corrales L, Klion H, Roward ME, Myers A, Zhang W, Vay J-L.
    **Application of mesh refinement to relativistic magnetic reconnection**.
    Physics of Plasmas **32** 1, 2025

From 346bebdb27928c1acad892c4ffee5251f3c9d5f5 Mon Sep 17 00:00:00 2001
From: Luca Fedeli
Date: Fri, 21 Feb 2025 00:57:50 +0100
Subject: [PATCH 58/58] WarpX class: remove unused methods GetMacroscopicProperties and GetHybridPICModel (#5640)

The methods `GetMacroscopicProperties` and `GetHybridPICModel` of the WarpX class are currently unused. Therefore, this PR removes them.
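
Note that the pointer-style accessor `get_pointer_HybridPICModel` is kept, so call sites can still reach the hybrid-PIC model through a null-checkable pointer. A minimal mock of that surviving pattern (hypothetical types, not WarpX code):

```cpp
#include <iostream>

// Hypothetical mock of the retained accessor pattern: callers receive a
// pointer that may be null when the hybrid-PIC model is not enabled.
struct HybridPICModel {};

struct WarpXLike
{
    HybridPICModel* m_hybrid_pic_model = nullptr; // not enabled in this sketch

    [[nodiscard]] HybridPICModel* get_pointer_HybridPICModel () const
    {
        return m_hybrid_pic_model;
    }
};

int main ()
{
    WarpXLike warpx;
    if (HybridPICModel* hybrid = warpx.get_pointer_HybridPICModel()) {
        std::cout << "hybrid-PIC model available at " << hybrid << '\n';
    } else {
        std::cout << "hybrid-PIC model not enabled\n";
    }
    return 0;
}
```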
--- Source/WarpX.H | 2 -- 1 file changed, 2 deletions(-) diff --git a/Source/WarpX.H b/Source/WarpX.H index 00ab9080751..4f6024d426d 100644 --- a/Source/WarpX.H +++ b/Source/WarpX.H @@ -157,9 +157,7 @@ public: MultiParticleContainer& GetPartContainer () { return *mypc; } MultiFluidContainer& GetFluidContainer () { return *myfl; } - MacroscopicProperties& GetMacroscopicProperties () { return *m_macroscopic_properties; } ElectrostaticSolver& GetElectrostaticSolver () {return *m_electrostatic_solver;} - HybridPICModel& GetHybridPICModel () { return *m_hybrid_pic_model; } [[nodiscard]] HybridPICModel * get_pointer_HybridPICModel () const { return m_hybrid_pic_model.get(); } MultiDiagnostics& GetMultiDiags () {return *multi_diags;} ParticleBoundaryBuffer& GetParticleBoundaryBuffer () { return *m_particle_boundary_buffer; }