Add data interpolation to Antarctica mesh generation case #408
Conversation
Add renaming, scaling, and extrapolation of variables from the gridded dataset within mesh.py.
Use the convention that negative mass balance indicates melting.
Instead of using bilinear interpolation from the not-so-good composite gridded dataset for all fields, use conservative remapping for the bed, thickness, and velocity fields directly from the source datasets. This requires multiple nodes (more than 2 Badger nodes; 10 works well) because ESMF_RegridWeightGen otherwise runs out of memory on those datasets.
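As a rough illustration of the conservative remapping step, the sketch below builds (but does not execute) an `ESMF_RegridWeightGen` command launched across multiple MPI tasks. The SCRIP filenames, the `srun` launcher, and the task count are illustrative assumptions, not the actual mesh.py code.

```python
# Sketch: assemble an ESMF_RegridWeightGen command for first-order
# conservative remapping. Filenames and the task count are
# hypothetical placeholders; the flags follow standard ESMF usage.

def regrid_weight_gen_cmd(source_scrip, dest_scrip, weight_file,
                          ntasks=180):
    """Return a command list for a parallel ESMF_RegridWeightGen run."""
    return [
        'srun', '-n', str(ntasks),
        'ESMF_RegridWeightGen',
        '--source', source_scrip,
        '--destination', dest_scrip,
        '--weight', weight_file,
        '--method', 'conserve',
        '--netcdf4',
        '--ignore_unmapped',
    ]

cmd = regrid_weight_gen_cmd('bedmachine.scrip.nc', 'mpas_mesh.scrip.nc',
                            'bedmachine_to_mpas.weights.nc')
```

In practice this command would be passed to something like `subprocess.run`; it is shown unexecuted here because the binary and input files are only available on the cluster.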
A previous commit inadvertently removed interpolation of fields from the gridded dataset, such as sfcMassBal, floatingBasalMassBal, and basalHeatFlux. Also set observedSurfaceVelocityUncertainty to 1.0 wherever thickness < 1.0.
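The uncertainty masking described above amounts to a simple conditional assignment, presumably so that thin or ice-free cells carry little weight in any later velocity optimization. A minimal sketch with toy arrays (the real fields come from the interpolated dataset):

```python
import numpy as np

# Wherever ice thickness is below 1.0, set the observed-velocity
# uncertainty to 1.0. Variable names follow the MPAS convention;
# the values here are toy data for illustration only.
thickness = np.array([0.0, 0.5, 10.0, 250.0])
observedSurfaceVelocityUncertainty = np.array([0.01, 0.02, 0.03, 0.04])

observedSurfaceVelocityUncertainty = np.where(
    thickness < 1.0, 1.0, observedSurfaceVelocityUncertainty)
```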
Use numCells instead of distance when culling from ice margin.
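Culling by a number of cells rather than a physical distance can be pictured as growing a mask outward from the ice margin one cell neighborhood at a time. This is an illustrative sketch, not the actual mesh.py implementation; `neighbors` is a toy adjacency list standing in for the mesh's cellsOnCell connectivity.

```python
# Sketch: find all cells within `num_layers` cell-layers of a set of
# margin cells by repeated neighborhood expansion.

def cells_within_layers(margin_cells, neighbors, num_layers):
    current = set(margin_cells)
    keep = set(margin_cells)
    for _ in range(num_layers):
        current = {n for c in current for n in neighbors[c]} - keep
        keep |= current
    return keep

# Toy 1-D chain of 6 cells (0-1-2-3-4-5) with the margin at cell 0.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
```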
Use extrapolated thickness and velocity fields from BedMachine and MEaSUREs, and then trim back based on the iceMask field. Also distinguish between the cull_cell and cull_distance options in the .cfg file.
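Trimming the extrapolated fields back to the observed ice extent is again a masking operation. A toy sketch (in mesh.py these would be the fields interpolated from BedMachine/MEaSUREs):

```python
import numpy as np

# Keep the extrapolated thickness only where iceMask indicates ice;
# zero it elsewhere. Toy arrays for illustration.
iceMask = np.array([1, 1, 0, 0])
thickness = np.array([300.0, 150.0, 80.0, 20.0])  # extrapolated values

thickness = np.where(iceMask == 1, thickness, 0.0)
```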
Set the data path in the .cfg file instead of hard-coding it in mesh.py.
Read SCRIP files for the gridded datasets from the data directory instead of creating them every time, because that takes a while. The creation code is left commented out in case it is needed again.
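The reuse logic described above can be sketched as a simple existence check: only fall back to the slow SCRIP-creation path when the file is not already in the data directory. The function name, filename pattern, and `create_fn` callback are hypothetical, not the actual COMPASS code.

```python
import os

# Sketch: reuse a pre-generated SCRIP file if present; otherwise
# create it (slow path, normally avoided).

def get_scrip_file(data_path, name, create_fn):
    scrip_file = os.path.join(data_path, f'{name}.scrip.nc')
    if not os.path.exists(scrip_file):
        create_fn(scrip_file)  # only runs when the file is missing
    return scrip_file
```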
Set the default number of processors to 180 instead of 360. Multiple nodes are only used for the ESMF_RegridWeightGen step, which finishes quickly as long as it does not run out of memory. Two nodes on Badger is too few, but I have not tested anything less than 5.
Hello @trhille! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2022-08-05 15:51:39 UTC
By default, calculate window_size, used for computing distance to the grounding line and ice edge, from the config option high_dist. There is also an option to set window_size manually if desired, but it must not be smaller than high_dist; if it is, a warning is printed to the log file and window_size is calculated instead of using the manually set value.
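The fallback logic above can be sketched as follows. The factor relating the computed window_size to high_dist is an illustrative assumption; only the default/override/warn-and-fall-back behavior comes from the description.

```python
import logging

# Sketch: derive window_size from high_dist by default; if a manually
# set value is smaller than high_dist, warn and use the computed
# value instead. The 2x relationship is an assumed placeholder.

def resolve_window_size(high_dist, window_size=None, logger=None):
    logger = logger or logging.getLogger(__name__)
    computed = 2.0 * high_dist  # assumed default relationship
    if window_size is None:
        return computed
    if window_size < high_dist:
        logger.warning(
            'window_size %s is smaller than high_dist %s; '
            'using computed value %s instead',
            window_size, high_dist, computed)
        return computed
    return window_size
```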
Replaced by #750
Interpolate data from gridded datasets to the Antarctic mesh within COMPASS. This takes care of the peculiarities of the current gridded compilation dataset (antarctica_8km_2020_10_20.nc), as well as using conservative remapping directly from the high-resolution BedMachineAntarctica and MEaSUREs velocity datasets. A fairly heavy degree of pre-processing is needed to get the BedMachine and MEaSUREs datasets ready for use here. The pre-processing includes renaming variables, setting reasonable _FillValue and missing_value attributes, extrapolating fields to avoid interpolation ramps at ice margins, updating mask values, and raising the bed topography at Lake Vostok to ensure a flat ice surface. Those data files and processing scripts currently live on Badger in /usr/projects/climate/trhille/data. Eventually that pre-processing could be integrated into a new step in COMPASS, or the processed data files could be added to the server on Anvil and downloaded as needed.

The conservative remapping step using ESMF_RegridWeightGen requires multiple nodes in order to not run out of memory. This is a major shortcoming of the current setup for this test case, which runs the entire case on five Badger nodes, even though four of those nodes are only used for a small fraction of the total run time. This should probably be split into a separate step in the future to save on cost.
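The margin extrapolation mentioned in the pre-processing list can be pictured as iteratively filling cells outside the valid-data mask with the mean of their valid neighbors, so interpolation near the ice margin does not ramp down into fill values. This is a toy 1-D sketch of the idea, not the actual pre-processing script, which operates on 2-D gridded fields.

```python
import numpy as np

# Sketch: iterative nearest-neighbor fill. Each pass fills invalid
# cells that have at least one valid neighbor with the neighbor mean,
# then marks them valid. Assumes at least one valid cell exists.

def extrapolate_1d(field, valid):
    field = field.astype(float).copy()
    valid = valid.copy()
    while not valid.all():
        new_valid = valid.copy()
        for i in np.where(~valid)[0]:
            nbrs = [j for j in (i - 1, i + 1)
                    if 0 <= j < field.size and valid[j]]
            if nbrs:
                field[i] = np.mean([field[j] for j in nbrs])
                new_valid[i] = True
        valid = new_valid
    return field

# Two ice-covered cells followed by two fill-value cells at the margin.
thickness = np.array([300.0, 200.0, 0.0, 0.0])
valid = np.array([True, True, False, False])
```

Updating the mask only between passes keeps the result independent of cell traversal order within a pass.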
This currently requires a branch of MPAS-Tools to interpolate the `iceMask` variable: https://github.com/trhille/MPAS-Tools/tree/add_iceMask_to_interp_script. The steps to use it are:
1. Create your compass environment as normal and source the load script.
2. Go to your MPAS-Tools directory and check out https://github.com/trhille/MPAS-Tools/tree/add_iceMask_to_interp_script.
3. Still from the MPAS-Tools directory, run `python -m pip install -e conda_package`.
4. Proceed with setting up the test case as normal.
This can take a long time to run because the conservative remapping is slow. I recommend at least a 4-hour job on 5 nodes for medium resolution (~6–30 km).