Building on Orion
The JCSDA maintains JEDI modules on Orion for compiling/running JEDI executables.
First, you must set the JEDI_OPT environment variable. It must be set both at compile time and at runtime.
If using bash:
export JEDI_OPT=/work/noaa/da/jedipara/opt/modules
If using csh:
setenv JEDI_OPT /work/noaa/da/jedipara/opt/modules
Next, add the JEDI modules to your module path:
module use $JEDI_OPT/modulefiles/core
It is recommended that you run module purge before loading the JEDI environment modules, to prevent conflicts.
To load the complete set of libraries needed to build and run JEDI executables, choose the Intel or GNU option below.
For the Intel compiler suite:
module load jedi/intel-impi
and for the GNU compilers:
module load jedi/gnu-openmpi
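Putting these steps together, a typical bash session for the Intel suite looks like this (substitute jedi/gnu-openmpi for GNU):
# make the JEDI modulefiles findable
export JEDI_OPT=/work/noaa/da/jedipara/opt/modules
# start from a clean module environment, then load the stack
module purge
module use $JEDI_OPT/modulefiles/core
module load jedi/intel-impi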
Many of the ctests require MPI. Orion's administrators are not as stringent as Hera's, but it is still bad practice to run MPI programs on the login nodes. Fortunately, ctest can submit each test as a batch job if the correct environment variables are set.
For example:
export SLURM_ACCOUNT=da-cpu
export SALLOC_ACCOUNT=$SLURM_ACCOUNT
export SBATCH_ACCOUNT=$SLURM_ACCOUNT
export SLURM_QOS=debug
Adjust the syntax as appropriate if you are using csh, and ensure that SLURM_ACCOUNT is set to an HPC project whose queue you have access to.
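For reference, the equivalent csh settings (with da-cpu again as the placeholder account) would be:
setenv SLURM_ACCOUNT da-cpu
setenv SALLOC_ACCOUNT $SLURM_ACCOUNT
setenv SBATCH_ACCOUNT $SLURM_ACCOUNT
setenv SLURM_QOS debug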
Some tests (particularly FV3-JEDI ones) will fail unless the stack size is increased, so also add this command:
ulimit -s unlimited
Some tests (particularly FV3-JEDI ones) may also fail if the submitted job is assigned to nodes shared with other users' running jobs and the test tries to access shared memory. If your tests fail with an error message that reads "unable to allocate shared memory", add this command:
export SLURM_EXCLUSIVE=user
You may also want to explicitly limit OpenMP to a single thread:
export OMP_NUM_THREADS=1
The ecbuild command that you use will depend on what you are attempting to build.
In even the most basic case, your command should include at least the following:
ecbuild -DMPIEXEC_EXECUTABLE=/opt/slurm/bin/srun -DMPIEXEC_NUMPROC_FLAG="-n" /path/to/bundle
The first option:
-DMPIEXEC_EXECUTABLE=/opt/slurm/bin/srun
tells ecbuild that ctests requiring MPI should be launched with srun rather than mpirun, and gives the full path to srun.
The second option:
-DMPIEXEC_NUMPROC_FLAG="-n"
tells ecbuild how to specify the number of processors required by each test.
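Together, these two options mean that for an N-process test, ctest composes a launch line of the form below; the executable name and process count here are purely illustrative:
# sketch of the launch command ctest builds for a 4-process test
/opt/slurm/bin/srun -n 4 ./some_test_executable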
For some other cases you may need to add extra options before the /path/to/bundle argument. These include:
- To build the IODA Python API:
-DBUILD_PYTHON_BINDINGS=ON
- To build the IODA-Converters in the IODA-bundle:
-DBUILD_IODA_CONVERTERS=ON
For example, to build the ioda-bundle with the Python API and the ioda-converters:
ecbuild -DMPIEXEC_EXECUTABLE=/opt/slurm/bin/srun -DMPIEXEC_NUMPROC_FLAG="-n" -DBUILD_PYTHON_BINDINGS=ON -DBUILD_IODA_CONVERTERS=ON /path/to/bundle
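As a sketch of the full cycle, a configure/build/test session from a separate build directory might look like the following (the build directory name and make job count are illustrative):
# configure with ecbuild, compile, then run the test suite
mkdir -p build && cd build
ecbuild -DMPIEXEC_EXECUTABLE=/opt/slurm/bin/srun -DMPIEXEC_NUMPROC_FLAG="-n" /path/to/bundle
make -j4
ctest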