Fix out-of-bound in the initialization of EB #2607
Conversation
I don't think I fully understand why this happens, but your solution looks great to me. We need the same changes in …
I don't think we need the same change for … The problem is really with …
Now I see, thanks a lot for the explanation.
@lgiacome Hi Lorenzo, in the table with all the checks on this page you can click the "Details" link next to the red ones (the ones that failed), then click the link that says "1 errors / 0 warnings" (or something similar), then click on the failing job (e.g. "embedded boundary" in this case), then click on "Build & test" and you'll see on your right the raw log with the backtraces printed out for the individual tests that failed (you can also view/download this raw log in a separate window, if needed). Keep in mind that those backtraces are not generated in DEBUG mode (we don't run in DEBUG mode on Azure), so the information might not be as clear and useful as what you get in DEBUG mode. Anyway, let me know if you find those backtraces by following these steps! Update: Note that I do see some tests crashing even locally. For instance, if I run locally …
@EZoni thanks a lot for both points! I'll look into running this locally then and see what's going on.
FillBoundary
Looks great thanks Remi 👍
- Fix Init of Vector Members (ECP-WarpX#2595): Fix default init of `Vector` member variables. The old construct is not valid C++. https://stackoverflow.com/a/11491003/2719194
- C++17: Work-Around NVCC gatherParticles (ECP-WarpX#2596): The `noexcept` lambda does not compile in C++17 mode due to an NVCC compiler bug, at least in NVCC 11.3.109. Compiles in C++14 mode with the same compiler.
- requirements.txt - PICMI development version (ECP-WarpX#2588): Document in `requirements.txt` how to install a pre-release version of PICMI.
- CONTRIBUTING: Update/Modernize (ECP-WarpX#2600)
  - Add GitHub account setup
  - Add local git setup
  - Modernize for CMake workflows
  - Apply suggestions by Edoardo. Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
- Replaced duplicated current deposition documentation (ECP-WarpX#2604)
- Throwing a warning if particle_shape>1 with EB (ECP-WarpX#2592)
  - Aborting if particle_shape!=1 with EB
  - Throw warning instead of aborting
  - Checking at runtime if EB is initialized
  - Added missing preprocessor directive
  - Ignoring an unused variable
  - Fix typo
  - Improve style. Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
- Fix segfault when importing _libwarpx without initializing WarpX (ECP-WarpX#2580)
  - Added check for if warpx was initialized when calling finalize
  - Renamed to be warpx_initialized
  - Fixed reference to global variable. Co-authored-by: Peter Scherpelz <31747262+peterscherpelz@users.noreply.github.com>
  - Changed global variable to member of libwarpx
  - Fixed syntax errors
  - Remove custom arg from argv to avoid parmparse error. Co-authored-by: Peter Scherpelz <31747262+peterscherpelz@users.noreply.github.com>
- Added parallel pragma to ApplyBoundaryConditions (ECP-WarpX#2612)
- Note that CCache 4.2 introduced large CUDA improvements (ECP-WarpX#2606)
- Dimensionality Docs: Default (ECP-WarpX#2609): Just adds the note that 3D is the default geometry.
- AMReX: Weekly Update (ECP-WarpX#2613)
- MergeBuffersForPlotfile: Barrier (ECP-WarpX#2608): Make sure that all MPI ranks are in sync, i.e., have closed the files that they wrote, before trying to merge them.
- Fix installation location for libraries (ECP-WarpX#2583): During configuration the installation location for libraries is given by dumping the cmake variable `CMAKE_INSTALL_LIBDIR`. This commit adjusts the installation of WarpX libraries (WarpX_LIB=ON) to respect this setting. Co-authored-by: Rolf Pfeffertal <tropf@users.noreply.github.com>
- Release 21.12 (ECP-WarpX#2614)
  - AMReX: 21.12
  - PICSAR: 21.12
  - WarpX: 21.12
- div(E,B) Cleaning Options for PSATD (ECP-WarpX#2403)
  - Implement div(E)/div(B) Cleaning with Standard PSATD
  - Cleaning
  - Update Benchmark
  - Add Nodal Synchronization of F,G
  - OneStep_multiJ: Nodal Syncs, Damp PML
  - OneStep_multiJ: Push PSATD Fields in PML
  - div Cleaning Defaults (Domain v. PML)
  - Include Fix of ECP-WarpX#2429 until Merged
  - Reset Benchmark of Langmuir_multi_psatd_div_cleaning
  - Multi-J: Remove PML Support
  - Include Fix of ECP-WarpX#2474 Until Merged
  - Exchange All Guard Cells for F,G
  - Fix Defaults
  - Update Test, Reset Benchmark
  - Fix Defaults
  - Cleaning
  - Default update_with_rho=1 if do_dive_cleaning=1
  - Update CI Test pml_psatd_dive_divb_cleaning
  - Replace Warning with Abort
- Add 2D Langmuir Test w/ MR & PSATD (ECP-WarpX#2605)
  - Add 2D Langmuir Test w/ MR & PSATD
  - Add Missing Compile String
- Fix out-of-bound in Inverse FFT of F,G (ECP-WarpX#2619)
- Mention that the potentail should be constant inside EB (ECP-WarpX#2618)
  - Mention that the potentail should be constant inside EB
  - Update text
- Replace AMREX_SPACEDIM: Boundary & Parallelization (ECP-WarpX#2620)
  - AMREX_SPACEDIM: Boundary Conditions
  - AMREX_SPACEDIM: Parallelization
  - Fix compilation
  - Update Source/Parallelization/WarpXComm_K.H
- Fix out-of-bound in the initialization of EB (ECP-WarpX#2607)
  - Call FillBoundary when initializing EB
  - Avoid out-of-bound
  - Bug fix
  - Apply suggestions from code review
  - update version number

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Lorenzo Giacomel <47607756+lgiacome@users.noreply.github.com>
Co-authored-by: Kevin Z. Zhu <86268612+KZhu-ME@users.noreply.github.com>
Co-authored-by: Peter Scherpelz <31747262+peterscherpelz@users.noreply.github.com>
Co-authored-by: David Grote <grote1@llnl.gov>
Co-authored-by: Phil Miller <phil@intensecomputing.com>
Co-authored-by: s9105947 <80697868+s9105947@users.noreply.github.com>
Co-authored-by: Rolf Pfeffertal <tropf@users.noreply.github.com>
Co-authored-by: Prabhat Kumar <89051199+prkkumar@users.noreply.github.com>
- Using ng_FieldSolver ghost cells in the EB data
- Removed an unused variable
- Fixed makeEBFabFactory also in WarpXRgrid.cpp
- Fixed end of line whitespace
- Undoing #2607
- C++17, CMake 3.17+ (ECP-WarpX#2300)
  - C++17, CMake 3.17+: Update C++ requirements to compile with C++17 or newer.
  - Superbuild: C++17 in AMReX/PICSAR/openPMD-api
  - Summit: `cuda/11.0.3` -> `cuda/11.3.1`: When compiling AMReX in C++17 on Summit, the `cuda/11.0.3` module (`nvcc 11.0.2211`) dies with:
    ```
    ... Base/AMReX_TinyProfiler.cpp
    nvcc error : 'cicc' died due to signal 11 (Invalid memory reference)
    nvcc error : 'cicc' core dumped
    ```
    Although this usually is a memory issue, it also appears in `-j1` compiles.
- Replace AMREX_SPACEDIM: Evolve & FieldSolver (ECP-WarpX#2642)
  - AMREX_SPACEDIM: Boundary Conditions
  - AMREX_SPACEDIM: Parallelization
  - Fix compilation
  - AMREX_SPACEDIM: Initialization
  - Fix Typo
  - space
  - AMREX_SPACEDIM: Particles
  - AMREX_SPACEDIM: Evolve and FieldSolver
- C++17: structured bindings to replace "std::tie(x,y,z) = f()" (ECP-WarpX#2644)
  - use structured bindings
  - std::ignore equivalent in structured bindings. Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
- Perlmutter: December Update (ECP-WarpX#2645): Update the Perlmutter instructions for the major update from December 8th, 2021.
- 1D tests for plasma acceleration (ECP-WarpX#2593)
  - modify requirements.txt and add input file for 1D Python pwfa
  - add 1D Python plasma acceleration test to CI
  - picmi version
  - USE_PSATD=OFF for 1D
  - Update Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py
  - Update Regression/WarpX-tests.ini
  - Cartesian1D class in pywarpx/picmi.py
  - requirements.txt: update picmistandard
  - update picmi version
  - requirements.txt: revert unintended changes
  - 1D Laser Acceleration Test
  - Update Examples/Physics_applications/laser_acceleration/inputs_1d
  - Update Examples/Physics_applications/plasma_acceleration/PICMI_inputs_plasma_acceleration_1d.py
  - add data_list to PICMI laser_acceleration test
  - increase max steps and fix bug in pywarpx/picmi.py 1DCartesian moving window direction
  - add data_lust to Python laser acceleration test
  - picmistandard update. Co-authored-by: Prabhat Kumar <prabhatkumar@kraken.dhcp.lbl.gov> Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
- CMake 3.22+: Policy CMP0127 (ECP-WarpX#2648): Fix a warning with CMake 3.22+. We use simple syntax in cmake_dependent_option, so we are compatible with the extended syntax in CMake 3.22+: https://cmake.org/cmake/help/v3.22/policy/CMP0127.html
- run_test.sh: Own virtual env (ECP-WarpX#2653): Isolate builds locally, so we don't overwrite a developer's setup anymore. This also avoids a couple of nifty problems that can occur by mixing those envs. Originally part of ECP-WarpX#2556
- GNUmake: Fix Python Install (force) (ECP-WarpX#2655): Local developers and cached CI installs did never install `pywarpx` if an old version existed. The `--force` must be with us.
- Add: Regression/requirements.txt: Forgotten in ECP-WarpX#2653
- Azure: `set -eu -o pipefail`: Lol, that's not the default. We previously had `script` where it was the default. Introduced in ECP-WarpX#2615
- GNUmake & `WarpX-test.ini`: `python` -> `python3`: Consistent with all other calls to Python in tests.
- Fix missing checksums1d (ECP-WarpX#2657)
  - Docs: Fix missing Checksum Ref
  - Checksum: LaserAcceleration_1d
  - Checksum: Python_PlasmaAcceleration_1d
- Regression/requirements.txt: openpmd-api: Follow-up to 8f93e01
- Azure: pre-install `setuptools` upgrade: Might fix:
  ```
  - installing setuptools_scm using the system package manager to ensure consistency
  - migrating from the deprecated setup_requires mechanism to pep517/518
    and using a pyproject.toml to declare build dependencies
    which are reliably pre-installed before running the build tools
  warnings.warn(
  TEST FAILED: /home/vsts/.local/lib/python3.8/site-packages/ does NOT support .pth files
  You are attempting to install a package to a directory that is not on PYTHONPATH
  and which Python does not read ".pth" files from. The installation directory you
  specified (via --install-dir, --prefix, or the distutils default setting) was:
      /home/vsts/.local/lib/python3.8/site-packages/
  and your PYTHONPATH environment variable currently contains:
      ''
  Here are some of your options for correcting the problem:
  * You can choose a different installation directory, i.e., one that is on PYTHONPATH
    or supports .pth files
  * You can add the installation directory to the PYTHONPATH environment variable.
    (It must then also be on PYTHONPATH whenever you run Python and want to use the
    package(s) you are installing.)
  * You can set up the installation directory to support ".pth" files by using one of
    the approaches described here:
    https://setuptools.readthedocs.io/en/latest/easy_install.html#custom-installation-locations
  Please make the appropriate changes for your system and try again.
  ```
- GNUmake `installwarpx`: `mv` -> `cp`: No reason to rebuild. Make will detect dependency when needed.
- Python GNUmake: Remove Prefix Hacks: FREEEEDOM. venv power.
- Azure: Ensure latest venv installed
- Python/setup.py: picmistandard==0.0.18: Forgotten in ECP-WarpX#2593
- Fix: analysis_default_regression.py: Mismatched checksum file due to crude hard-coding.
- PWFA 1D: Fix output name: Hard-coded, undocumented convention: turns out this must be the name of the test that we define in the ini file. Logical, isn't it. Not. Follow-up to ECP-WarpX#2593
- Docs: `python3 -m pip` & Virtual Env (ECP-WarpX#2656)
  - Docs: `python3 -m pip`: Use `python3 -m pip`: works independent of PATH, always uses the right Python, and is the recommended way to use `pip`.
  - Dependencies: Python incl. venv: Backported from ECP-WarpX#2556. Follow-up to ECP-WarpX#2653
- CMake: 3.18+ (ECP-WarpX#2651): With the C++17 switch, we required CMake 3.17+ since that one introduced the `cuda_std_17` target compile feature. It turns out that one of the many CUDA improvements in CMake 3.18+ is also to fix that feature for good, so we bump our requirement in CMake. Since CMake is easy to install, it's easier to require a clean newer version than working around a broken old one. Spotted first by Phil on AWS instances, thx!
- fix check for absolute library install path (ECP-WarpX#2646). Co-authored-by: Hannes T <s9105947@users.noreply.github.com>
- use if constexpr to replace template specialization (ECP-WarpX#2660)
- fix for setting the boundary condition potentials in 1D ES simulations (ECP-WarpX#2649)
- `use_default_v_<galilean,comoving>` Only w/ Boosted Frame (ECP-WarpX#2654)
- ICC CI: Unbound Vars (`setvars.sh`) (ECP-WarpX#2663): Ignore:
  ```
  /opt/intel/oneapi/compiler/latest/env/vars.sh: line 236: OCL_ICD_FILENAMES: unbound variable
  ```
- QED openPMD Tests: Specify H5 Backend (ECP-WarpX#2661): We default to ADIOS `.bp` if available. Thus, specify HDF5 assumption.
- C++17: if constexpr for templates in ShapeFactors (ECP-WarpX#2659)
  - use if constexpr to replace template specialization
  - Rmove Interface Annotations
  - Replace static_assert with amrex::Abort
  - Add includes & authors. Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
- ABLASTR Library (ECP-WarpX#2263)
  - [Draft] ABLASTR Library: CMake object library; include FFTW wrappers to start with
  - Add: MPIInitHelpers
  - Enable ABLASTR-only builds
  - Add alias WarpX::ablastr
  - ABLASTR: openPMD forwarding
  - make_third_party_includes_system: Avoid Collision
  - WarpX: depend on `ablastr`
  - Definitions: WarpX -> ablastr
  - CMake: Reduce build objects for ABLASTR: Skip all object files that we do not use in builds.
  - CMake: app/shared links all object targets: Our `PRIVATE` source/objects are not PUBLICly propagated themselves.
- Docs: Fix Warning Logger Typo (ECP-WarpX#2667)
- Python: Add 3.10, Relax upper bound (ECP-WarpX#2664): There are no breaking changes in Python 3.10 that affect us. Given the version compatibility of Python and its ABI stability, there is no need at the moment to provide an upper limit. Thus, relaxed now in general.
- Fixing the initialization of the EB data in ghost cells (ECP-WarpX#2635)
  - Using ng_FieldSolver ghost cells in the EB data
  - Removed an unused variable
  - Fixed makeEBFabFactory also in WarpXRgrid.cpp
  - Fixed end of line whitespace
  - Undoing ECP-WarpX#2607
- Add PML Support for multi-J Algorithm (ECP-WarpX#2603)
  - Add PML Support for multi-J Algorithm
  - Add CI Test
- Fix the scope of profiler for SYCL (ECP-WarpX#2668): In main.cpp, the destructor of the profiler was called after amrex::Finalize. This caused an error in SYCL due to a device synchronization call in the dtor, because the SYCL queues in amrex had been deleted. In this commit, we limit the scope of the profiler so that its destructor is called before the queues are deleted. Note that it was never an issue for CUDA/HIP, because the device synchronization calls in those backends do not need any amrex objects.
- Add high energy asymptotic fit for Proton-Boron total cross section (ECP-WarpX#2408)
  - Add high energy asymptotic fit for Proton Boron total cross section
  - Write keV and MeV instead of kev and mev
  - Add @return doxystrings
- Add anisotropic mesh refinement example (ECP-WarpX#2650)
  - Add anisotropic mesh refinement example
  - Update benchmark
- AMReX/PICSAR: Weekly Update (ECP-WarpX#2666)
  - AMReX: Weekly Update
  - Reset: PEC_particle, RepellingParticles, subcyclingMR: New AMReX grid layout routines split grids until they truly reach the number of MPI ranks, if blocking factor allows. This changes some of our particle orders slightly.
- Add load balancing test (ECP-WarpX#2561)
  - Added embedded_circle test
  - Add embedded_circle test files
  - Removed diag files
  - removed PICMI input file
  - Update to use default regression analysis
  - Added line breaks for spacing
  - Added description
  - Fixed benchmark file
  - Added load balancing to test
  - Commented out load_balancing portion of test. This will be added back in once load balancing is fixed.
  - Add load balancing to embedded_boundary test
  - Updated checksum
  - Added embedded_circle test
  - Add embedded_circle test files
  - removed PICMI input file
  - Update to use default regression analysis
  - Added load balancing to test
  - Commented out load_balancing portion of test. This will be added back in once load balancing is fixed.
  - Add load balancing to embedded_boundary test
  - added analysis.py file in order to relax tolerance on test
  - Ensure that timers are used to update load balancing algorithm
  - Updated test name retrieval. Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja> Co-authored-by: Roelof <roelof.groenewald@modernelectron.com> Co-authored-by: Roelof Groenewald <40245517+roelof-groenewald@users.noreply.github.com>
- Adding EB multifabs to the Python interface (ECP-WarpX#2647)
  - Adding edge_lengths and face_areas to the Python interface
  - Added wrappers for the two new arrays of data
  - Adding a CI test
  - Fixed test name
  - Added customRunCmd
  - Added mpi in test
- Refactor DepositCharge so it can be called from ImpactX (ECP-WarpX#2652)
  - Refactor DepositCharge so it can be called from ImpactX.
  - change thread_num
  - Fix namespace
  - remove all static WarpX:: members and methods from DepositChargeDoIt.
  - fix unused
  - Don't access ref_ratio unless lev != depos_lev
  - more unused
  - remove function to its own file / namespace
  - don't need a CMakeLists.txt for this
  - lower case namespace, rename file
  - Refactor: Profiler Wrapper: Explicit control for synchronization instead of global state. Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
- ABLASTR: Fix Doxygen in `DepositCharge`
- update version number and changelog

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Prabhat Kumar <89051199+prkkumar@users.noreply.github.com>
Co-authored-by: Luca Fedeli <luca.fedeli@cea.fr>
Co-authored-by: Prabhat Kumar <prabhatkumar@kraken.dhcp.lbl.gov>
Co-authored-by: s9105947 <80697868+s9105947@users.noreply.github.com>
Co-authored-by: Hannes T <s9105947@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <59625522+EZoni@users.noreply.github.com>
Co-authored-by: Phil Miller <phil.miller@intensecomputing.com>
Co-authored-by: Lorenzo Giacomel <47607756+lgiacome@users.noreply.github.com>
Co-authored-by: Weiqun Zhang <WeiqunZhang@lbl.gov>
Co-authored-by: Neïl Zaim <49716072+NeilZaim@users.noreply.github.com>
Co-authored-by: Remi Lehe <remi.lehe@normalesup.org>
Co-authored-by: Kevin Z. Zhu <86268612+KZhu-ME@users.noreply.github.com>
Co-authored-by: Andrew Myers <atmyers@lbl.gov>
When compiling with EB in debug mode, I got an out-of-bound message at this line:
https://github.com/ECP-WarpX/WarpX/blob/development/Source/EmbeddedBoundary/WarpXInitEB.cpp#L140

It seems that some of the internal structures for EB in amrex (used e.g. in `getType`) have only 1 guard cell, whereas we were passing a box that has more guard cells.

Instead, this PR calls `getType` with a box that does not have guard cells (but adds the guard cells when actually touching the `edge_length` and `face_area` arrays), and the guard cells are then filled by an additional `FillBoundary`.