Merge pull request #168 from xylar/add-frontier
Add Frontier as a supported machine
xylar authored Jan 31, 2024
2 parents b910de8 + 3aa19b3 commit a46277c
Showing 12 changed files with 235 additions and 105 deletions.
4 changes: 4 additions & 0 deletions deploy/albany_supported.txt
@@ -4,5 +4,9 @@ anvil, gnu, openmpi
chicoma-cpu, gnu, mpich
chrysalis, gnu, openmpi
compy, gnu, openmpi
frontier, gnu, mpich
frontier, gnugpu, mpich
frontier, crayclang, mpich
frontier, crayclanggpu, mpich
pm-cpu, gnu, mpich
morpheus, gnu, openmpi
1 change: 1 addition & 0 deletions deploy/petsc_supported.txt
@@ -7,4 +7,5 @@ chicoma-cpu, gnu, mpich
chrysalis, intel, openmpi
chrysalis, gnu, openmpi
compy, intel, impi
frontier, gnu, mpich
pm-cpu, gnu, mpich
31 changes: 31 additions & 0 deletions docs/developers_guide/machines/frontier.md
@@ -0,0 +1,31 @@
# Frontier

## frontier, gnu

If you've set things up for this compiler, you should be able to source a load
script similar to:

```bash
source load_dev_polaris_0.3.0-alpha.1_frontier_gnu_mpich.sh
```

Then, you can build the MPAS model with:

```bash
make [DEBUG=true] gnu-cray
```

## frontier, crayclang

As with `gnu`, if you've set things up for `crayclang`, sourcing the load
script will look something like:

```bash
source load_dev_polaris_0.3.0-alpha.1_frontier_crayclang_mpich.sh
```

To build MPAS components, use:

```bash
make [DEBUG=true] cray-cray
```
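
After building, a natural next step is to set up a polaris task to run with
the compiled model. Here is a minimal sketch, assuming the `polaris` command
from the activated load script; the task number, work directory and build
path are placeholders:

```bash
# list the available tasks and their numbers
polaris list

# set up a task by number into a work directory on Lustre, pointing polaris
# at the directory containing the compiled MPAS model
polaris setup -n 0 -p <path_to_mpas_build> \
    -w /lustre/orion/cli115/scratch/$USER/polaris_work
```
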
9 changes: 7 additions & 2 deletions docs/developers_guide/machines/index.md
@@ -56,6 +56,10 @@ supported for those configurations with `gnu` compilers.
+--------------+------------+-----------+-------------------+
| compy | intel | impi | intel-mpi |
+--------------+------------+-----------+-------------------+
| frontier | gnu | mpich | gnu-cray |
| +------------+-----------+-------------------+
| | crayclang | mpich | cray-cray |
+--------------+------------+-----------+-------------------+
| pm-cpu | gnu | mpich | gnu-cray |
| +------------+-----------+-------------------+
| | intel | mpich | intel-cray |
@@ -71,6 +75,7 @@ anvil
chicoma
chrysalis
compy
frontier
perlmutter
```

@@ -85,13 +90,13 @@ rather than system compilers. To create a development conda environment and
an activation script for it, on Linux, run:

```bash
./conda/configure_polaris_envs.py --conda <conda_path> -c gnu -i mpich
./configure_polaris_envs.py --conda <conda_path> -c gnu -i mpich
```

and on OSX run:

```bash
./conda/configure_polaris_envs.py --conda <conda_path> -c clang -i mpich
./configure_polaris_envs.py --conda <conda_path> -c clang -i mpich
```

You may use `openmpi` instead of `mpich`, but we have had better experiences
with `mpich`.
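
For instance, based on the commands above, the Linux configuration with
`openmpi` would presumably be:

```bash
./configure_polaris_envs.py --conda <conda_path> -c gnu -i openmpi
```
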
38 changes: 4 additions & 34 deletions docs/users_guide/machines/anvil.md
@@ -70,38 +70,8 @@ partitions = acme-small, acme-medium, acme-large
qos = regular, acme_high
```

## Intel on Anvil
## Loading and running Polaris on Anvil

To load the polaris environment and modules, and set appropriate environment
variables:

```bash
source /lcrc/soft/climate/polaris/anvil/load_latest_polaris_intel_impi.sh
```

To build the MPAS model, run:

```bash
make [DEBUG=true] [OPENMP=true] intel-mpi
```

For other MPI libraries (`openmpi` or `mvapich` instead of `impi`), use:

```bash
make [DEBUG=true] [OPENMP=true] ifort
```

## Gnu on Anvil

To load the polaris environment and modules, and set appropriate environment
variables:

```bash
source /lcrc/soft/climate/polaris/anvil/load_latest_polaris_gnu_openmpi.sh
```

To build the MPAS model, run:

```bash
make [DEBUG=true] [OPENMP=true] [ALBANY=true] gfortran
```
Follow the developer's guide at {ref}`dev-machines` to get set up. There are
currently no plans to support a different deployment strategy (e.g. a shared
environment) for users.
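
As a sketch, that workflow on Anvil might look like the following (the
compiler, MPI library and polaris version shown are illustrative, not
prescriptive):

```bash
# create a development conda environment and an activation script for it
./configure_polaris_envs.py --conda <conda_path> -c intel -i impi

# source the generated load script (the exact name depends on the polaris
# version, compiler and MPI library)
source load_dev_polaris_0.3.0-alpha.1_anvil_intel_impi.sh
```
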
17 changes: 4 additions & 13 deletions docs/users_guide/machines/chicoma.md
@@ -153,17 +153,8 @@ modules_before = False
modules_after = False
```

### Gnu on Chicoma-CPU
## Loading and running Polaris on Chicoma

To load the polaris environment and modules, and set appropriate environment
variables:

```bash
source /usr/projects/climate/SHARED_CLIMATE/polaris/chicoma-cpu/load_latest_polaris_gnu_mpich.sh
```

To build the MPAS model, run:

```bash
make [DEBUG=true] [OPENMP=true] [ALBANY=true] gnu-cray
```
Follow the developer's guide at {ref}`dev-machines` to get set up. There are
currently no plans to support a different deployment strategy (e.g. a shared
environment) for users.
32 changes: 4 additions & 28 deletions docs/users_guide/machines/chrysalis.md
@@ -60,32 +60,8 @@ cores_per_node = 128
partitions = debug, compute, high
```

## Intel on Chrysalis
## Loading and running Polaris on Chrysalis

To load the polaris environment and modules, and set appropriate environment
variables:

```bash
source /lcrc/soft/climate/polaris/chrysalis/load_latest_polaris_intel_openmpi.sh
```

To build the MPAS model, run:

```bash
make [DEBUG=true] [OPENMP=true] ifort
```

## Gnu on Chrysalis

To load the polaris environment and modules, and set appropriate environment
variables:

```bash
source /lcrc/soft/climate/polaris/chrysalis/load_latest_polaris_gnu_openmpi.sh
```

To build the MPAS model, run:

```bash
make [DEBUG=true] [OPENMP=true] [ALBANY=true] gfortran
```
Follow the developer's guide at {ref}`dev-machines` to get set up. There are
currently no plans to support a different deployment strategy (e.g. a shared
environment) for users.
17 changes: 4 additions & 13 deletions docs/users_guide/machines/compy.md
@@ -68,17 +68,8 @@ partitions = slurm
qos = regular
```

## Intel on CompyMcNodeFace
## Loading and running Polaris on CompyMcNodeFace

To load the polaris environment and modules, and set appropriate environment
variables:

```bash
source /share/apps/E3SM/conda_envs/polaris/load_latest_polaris_intel_impi.sh
```

To build the MPAS model, run:

```bash
make [DEBUG=true] [OPENMP=true] intel-mpi
```
Follow the developer's guide at {ref}`dev-machines` to get set up. There are
currently no plans to support a different deployment strategy (e.g. a shared
environment) for users.
119 changes: 119 additions & 0 deletions docs/users_guide/machines/frontier.md
@@ -0,0 +1,119 @@
# Frontier

login: `ssh <username>@frontier.olcf.ornl.gov`

interactive login:

```bash
# for CPU:
salloc -A cli115 --partition=batch --nodes=1 --time=30:00 -C cpu

# for GPU:
salloc -A cli115 --partition=batch --nodes=1 --time=30:00 -C gpu
```

Here is a link to the
[Frontier User Guide](https://docs.olcf.ornl.gov/systems/frontier_user_guide.html)

## config options

Here are the default config options added when you have configured Polaris on
a Frontier login node (or specified `./configure_polaris_envs.py -m frontier`):

```cfg
# The paths section describes paths for data and environments
[paths]
# A shared root directory where polaris data can be found
database_root = /lustre/orion/cli115/world-shared/polaris
# the path to the base conda environment where polaris environments have
# been created
polaris_envs = /ccs/proj/cli115/software/polaris/frontier/conda/base

# Options related to deploying polaris conda and spack environments
[deploy]
# the compiler set to use for system libraries and MPAS builds
compiler = gnu
# the compiler to use to build software (e.g. ESMF and MOAB) with spack
software_compiler = gnu
# the system MPI library to use for gnu compiler
mpi_gnu = mpich
# the system MPI library to use for gnugpu compiler
mpi_gnugpu = mpich
# the system MPI library to use for crayclang compiler
mpi_crayclang = mpich
# the system MPI library to use for crayclanggpu compiler
mpi_crayclanggpu = mpich
# the base path for spack environments used by polaris
spack = /ccs/proj/cli115/software/polaris/frontier/spack
# whether to use the same modules for hdf5, netcdf-c, netcdf-fortran and
# pnetcdf as E3SM (spack modules are used otherwise)
use_e3sm_hdf5_netcdf = True

# The parallel section describes options related to running jobs in parallel.
# Most options in this section come from mache so here we just add or override
# some defaults
[parallel]
# cores per node on the machine
cores_per_node = 64
# threads per core (set to 1 because hyperthreading requires extra sbatch
# flag --threads-per-core that polaris doesn't yet support)
threads_per_core = 1
```
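
For reference, the `sbatch` flag mentioned in the `threads_per_core` comment
above, which polaris does not yet pass through, would look something like
this on a manually submitted job (the job script name is a placeholder):

```bash
# illustrative only: request both hardware threads per core; polaris does
# not yet add this flag on its own
sbatch --threads-per-core=2 --nodes=1 --time=30:00 my_job_script.sh
```
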

Additionally, some relevant config options come from the
[mache](https://github.com/E3SM-Project/mache/) package:

```cfg
# The parallel section describes options related to running jobs in parallel
[parallel]
# parallel system of execution: slurm, cobalt or single_node
system = slurm
# whether to use mpirun or srun to run a task
parallel_executable = srun
# cores per node on the machine
cores_per_node = 64
# account for running diagnostics jobs
account = cli115
# available partition(s) (default is the first)
partitions = batch

# Config options related to spack environments
[spack]
# whether to load modules from the spack yaml file before loading the spack
# environment
modules_before = False
# whether to load modules from the spack yaml file after loading the spack
# environment
modules_after = False
# whether the machine uses cray compilers
cray_compilers = True
```
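
Putting these options together, a parallel launch under the settings above
would look roughly like this (the executable name is a placeholder):

```bash
# srun is the parallel executable; one full 64-core node in the batch
# partition, charged to the cli115 account
srun -A cli115 -p batch -N 1 -n 64 ./ocean_model
```
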

## Loading and running Polaris on Frontier

Follow the developer's guide at {ref}`dev-machines` to get set up. There are
currently no plans to support a different deployment strategy (e.g. a shared
environment) for users.

1 change: 1 addition & 0 deletions docs/users_guide/machines/index.md
@@ -130,6 +130,7 @@ anvil
chicoma
chrysalis
compy
frontier
perlmutter
```

21 changes: 6 additions & 15 deletions docs/users_guide/machines/perlmutter.md
@@ -6,10 +6,10 @@ interactive login:

```bash
# for CPU:
salloc --partition=debug --nodes=1 --time=30:00 -C cpu
salloc --qos=debug --nodes=1 --time=30:00 -C cpu

# for GPU:
salloc --partition=debug --nodes=1 --time=30:00 -C gpu
salloc --qos=debug --nodes=1 --time=30:00 -C gpu
```

Compute time:
@@ -123,20 +123,11 @@ modules_after = False
cray_compilers = True
```

### Gnu on Perlmutter-CPU
## Loading and running Polaris on Perlmutter

To load the polaris environment and modules, and set appropriate environment
variables:

```bash
source /global/cfs/cdirs/e3sm/software/polaris/pm-cpu/load_latest_polaris_gnu_mpich.sh
```

To build the MPAS model, run:

```bash
make [DEBUG=true] [OPENMP=true] [ALBANY=true] gnu-cray
```
Follow the developer's guide at {ref}`dev-machines` to get set up. There are
currently no plans to support a different deployment strategy (e.g. a shared
environment) for users.

## Jupyter notebook on remote data
