diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 3ab91f6be2..769994adfc 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -34,5 +34,5 @@ If your code changes are available on GitHub, please provide the repository. ### Build information Please describe: - 1. The machine you are running on (e.g. windows laptop, NCAR supercomputer Cheyenne). + 1. The machine you are running on (e.g. Windows laptop, NSF NCAR supercomputer Derecho). 2. The compiler you are using (e.g. gnu, intel). diff --git a/.github/workflows/action_on_pull_request.yml b/.github/workflows/action_on_pull_request.yml index 4426c1cdbd..61c390d9a7 100644 --- a/.github/workflows/action_on_pull_request.yml +++ b/.github/workflows/action_on_pull_request.yml @@ -16,7 +16,7 @@ jobs: options: "--cap-add=SYS_PTRACE" steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Set checked out repo as a safe git directory run: git config --global --add safe.directory /__w/${{ github.event.repository.name }}/${{ github.event.repository.name }} - name: Build and run lorenz_96 with mpi @@ -35,7 +35,7 @@ jobs: options: '--cap-add=SYS_PTRACE' steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Set checked out repo as a safe git directory run: git config --global --add safe.directory /__w/${{ github.event.repository.name }}/${{ github.event.repository.name }} - name: Build and run lorenz_63 with no mpi @@ -53,7 +53,7 @@ jobs: options: '--cap-add=SYS_PTRACE' steps: - name: Checkout repo - uses: actions/checkout@v3 + uses: actions/checkout@v4 - name: Set checked out repo as a safe git directory run: git config --global --add safe.directory /__w/${{ github.event.repository.name }}/${{ github.event.repository.name }} - name: Build and run lorenz_96 with mpif08 diff --git a/.gitignore b/.gitignore index 94d5bd9376..5d1f27c778 100644 --- a/.gitignore +++ b/.gitignore @@ -90,6 +90,8 @@ gitm_to_netcdf netcdf_to_gitm_blocks streamflow_obs_diag cam_dart_obs_preprocessor +aether_to_dart +dart_to_aether # Observation converter exectutables convert_aviso @@ -158,6 +160,10 @@ ssec_satwnd gts_to_dart littler_tf_dart rad_3dvar_to_dart +L1_AMSUA_to_netcdf +convert_airs_L2 +convert_amsu_L1 +convert_L2b # Test programs built by developer_tests rttov_test diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 1b2f991b34..e1dc25ad49 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -22,6 +22,93 @@ individual files. The changes are now listed with the most recent at the top. +**May 16 2024 :: WRF v4. Tag v11.5.0** + +- WRF-DART and WRF-DART Tutorial updated to WRFv4. Note: this is not backwards compatible with WRFv3.9. +- The local particle filter default value for pf_enkf_hybrid is now .false. *contributed by Jon Poterjoy* + +**April 23 2024 :: Bug-fix: WRF hybrid vertical coordinate. Tag v11.4.1** + +- DART now detects whether WRF is using the Hybrid Vertical Coordinate (HVC) system introduced in WRFv3.9 or the terrain-following (TF) system. + This fix is also compatible with pre-WRFv3.9 versions, which did not include explicit attribute information for the vertical coordinate system. +- Improved obs_impact_tool documentation. + +**March 27 2024 :: WRF-Hydro Developments; AIRS converter documentation update; Add citation.cff file.
Tag v11.4.0** + +- WRF-Hydro: + + - Added a new experimental perfect model obs capability to HydroDART + - Modified the Streamflow obs converter to allow for better diagnostics: allows DART to + compute obs space diagnostics on all gauges from the Routelink + - Enhanced performance in the model_mod and noah_hydro_mod when running a full CONUS domain + - Improved HydroDART Diagnostics with new capabilities (saves the hydrographs in a high-resolution + pdf, handles hybrid DA components, creates separate plots for the hybrid statistics, allows the open loop + to have a different ensemble size and gauges than the DA runs) + +- AIRS and AMSU-A observation converters: + + - Updated the documentation to use up-to-date build suggestions for the HDFEOS library + - Updated the AIRS converter code to be able to use version 7 of the AIRS data formats + - Removed unused and non-functional code: AIRS/BUILD_HDF-EOS.sh, AIRS/L1_AMSUA_to_netcdf.f90, + AIRS/shell_scripts/Build_HDF_to_netCDF.sh, AIRS/shell_scripts/Convert_HDF_to_netCDF.csh + - Removed the unnecessary entries from obs_def_rttov_nml in the input.nml + +- Added a citation.cff file to help users correctly cite DART software; this creates a link to cite + the repository on the landing page sidebar on GitHub. + +**March 13 2024 :: Update WRF-DART scripts and bug template to Derecho; remove no-op routines in ensemble manager. Tag v11.3.1** + +- Updated the csh scripting templates used to run WRF-DART and the WRF-DART tutorial from Cheyenne to Derecho +- Updated the bug report template to use Derecho instead of Cheyenne +- Removed the following no-op routines from ensemble manager: prepare_to_write_to_vars, prepare_to_write_to_copies, + prepare_to_read_from_vars, prepare_to_read_from_copies, prepare_to_update_vars, prepare_to_update_copies + +**March 12 2024 :: MITgcm/N-BLING with Compressed Staggered Grids. Tag v11.3.0** + +- The DART-MITgcm code now supports compressed grids, especially suited for areas like + the Red Sea where land occupies more than 90% of the domain. + Built upon work *contributed by Jiachen Liu*. +- Allows writing the BGC fields into MITgcm's pickup files. +- Allows different compression for the regular and staggered grids. + +**March 12 2024 :: Aether lat-lon. Tag v11.2.0** + +- Aether lat-lon interface added to DART. + +**March 11 2024 :: SEIR model for infectious diseases. Tag v11.1.0** + +- Added the SEIR model, which simulates the spread of infectious diseases, for example COVID-19. + +**February 13 2024 :: Fortran Standards. Tag v11.0.3** + +- Replace f2kcli with Fortran intrinsics for command line arguments. +- AIRS and quikscat mkmf.templates with appropriate HDF, HDFEOS, RTTOV library flags. +- Simplified the counting of non-zero elements in noah_hydro_mod.f90. +- WRF pert_sounding_module random iseed is now an integer. + +**February 1 2024 :: RTTOV13 cloud bug-fix. Tag v11.0.2** + +- Initialize RTTOV13 profile cloud arrays to zero for profiles. +- Updated docs with RTTOV13 namelist info. +- New obs_def_rttov13_mod.f90 namelist option wfetch_value. +- Updated mkmf.templates for RTTOV on Derecho: HDF5 library flags. + +GitHub actions changes: + + - checkout action updated to v4. + +**January 17 2024 :: CLM bug-fixes.
Tag v11.0.1** + +- CLM5-DART SourceMods path variable correction + +- dart_to_clm: + + - Resolved compiler error by changing the arrays for number of snow layers (snlsno and clm_SNLSNO) to integer types + + - Forced h2oliq_po to be slightly larger than zero to be consistent with h2oice_po and dzsno_po + + - Added checks to ensure that the values for h2oliq_po, h2oice_po, dzsno_po, and snowdp_po are never negative + **January 11 2024 :: QCEFF. Tag v11.0.0** Nonlinear and Non-Gaussian Data Assimilation Capabilities in DART diff --git a/CITATION.cff b/CITATION.cff new file mode 100644 index 0000000000..3561dc2bf0 --- /dev/null +++ b/CITATION.cff @@ -0,0 +1,9 @@ +cff-version: 1.2.0 +message: "To cite DART, please use the following metadata. Update the DART version and year as appropriate." +title: "The Data Assimilation Research Testbed" +version: "X.Y.Z" +date-released: "2024-03-13" +doi: "10.5065/D6WQ0202" +authors: + - name: "UCAR/NSF NCAR/CISL/DAReS" + city: "Boulder, Colorado" diff --git a/README.md b/README.md index 2cbcd209f6..7dc94b41c6 100644 --- a/README.md +++ b/README.md @@ -28,7 +28,7 @@ git clone https://github.com/NCAR/DART.git To cite DART, please use the following text: -> The Data Assimilation Research Testbed (Version X.Y.Z) [Software]. (2021). Boulder, Colorado: UCAR/NCAR/CISL/DAReS. +> The Data Assimilation Research Testbed (Version X.Y.Z) [Software]. (2021). Boulder, Colorado: UCAR/NSF NCAR/CISL/DAReS. > http://doi.org/10.5065/D6WQ0202 and update the DART version and year as appropriate. diff --git a/assimilation_code/modules/assimilation/assim_tools_mod.pf.f90 b/assimilation_code/modules/assimilation/assim_tools_mod.pf.f90 index 5892aa139c..af72a46ffa 100644 --- a/assimilation_code/modules/assimilation/assim_tools_mod.pf.f90 +++ b/assimilation_code/modules/assimilation/assim_tools_mod.pf.f90 @@ -62,7 +62,7 @@ module assim_tools_mod use ensemble_manager_mod, only : ensemble_type, get_my_num_vars, get_my_vars, & compute_copy_mean_var, get_var_owner_index, & - prepare_to_update_copies, map_pe_to_task + map_pe_to_task use mpi_utilities_mod, only : my_task_id, broadcast_send, broadcast_recv, & sum_across_tasks, task_count, start_mpi_timer, & @@ -165,7 +165,7 @@ module assim_tools_mod real(r8) :: pf_alpha = 0.30_r8 integer :: pf_kddm = 0 logical :: sampling_weighted_prior = .true. -logical :: pf_enkf_hybrid = .true. +logical :: pf_enkf_hybrid = .false. real(r8) :: min_residual = 0.5_r8 integer :: pf_maxiter = 3 real(r8) :: pf_kf_rtps_coeff = 0.0_r8 @@ -496,10 +496,6 @@ subroutine filter_assim(ens_handle, obs_ens_handle, obs_seq, keys, & my_state_loc( ens_handle%my_num_vars)) ! end alloc -! we are going to read/write the copies array -call prepare_to_update_copies(ens_handle) -call prepare_to_update_copies(obs_ens_handle) - ! Initialize assim_tools_module if needed if (.not. module_initialized) call assim_tools_init() diff --git a/assimilation_code/modules/assimilation/assim_tools_mod.rst b/assimilation_code/modules/assimilation/assim_tools_mod.rst index 502e2419de..daac0cac12 100644 --- a/assimilation_code/modules/assimilation/assim_tools_mod.rst +++ b/assimilation_code/modules/assimilation/assim_tools_mod.rst @@ -1,3 +1,5 @@ +..
_assim_tools: + MODULE assim_tools_mod ====================== @@ -289,7 +291,7 @@ Description of each namelist entry *type:* character(len=256) If adjust_obs_impact is true, the name of the file with the observation types and quantities and state quantities - that should have have an additional factor applied to the correlations during assimilation. + that should have an additional factor applied to the correlations during assimilation. ``allow_any_impact_values`` *type:* logical diff --git a/assimilation_code/modules/assimilation/filter_mod.dopplerfold.f90 b/assimilation_code/modules/assimilation/filter_mod.dopplerfold.f90 index 9041bfc221..e9cf2146df 100644 --- a/assimilation_code/modules/assimilation/filter_mod.dopplerfold.f90 +++ b/assimilation_code/modules/assimilation/filter_mod.dopplerfold.f90 @@ -46,11 +46,10 @@ module filter_mod compute_copy_mean, compute_copy_mean_sd, & compute_copy_mean_var, duplicate_ens, get_copy_owner_index, & get_ensemble_time, set_ensemble_time, broadcast_copy, & - map_pe_to_task, prepare_to_update_copies, & - copies_in_window, set_num_extra_copies, get_allow_transpose, & - all_copies_to_all_vars, allocate_single_copy, allocate_vars, & - get_single_copy, put_single_copy, deallocate_single_copy, & - print_ens_handle + map_pe_to_task, copies_in_window, set_num_extra_copies, & + get_allow_transpose, all_copies_to_all_vars, & + allocate_single_copy, allocate_vars, get_single_copy, & + put_single_copy, deallocate_single_copy, print_ens_handle use adaptive_inflate_mod, only : do_ss_inflate, mean_from_restart, sd_from_restart, & inflate_ens, adaptive_inflate_init, & @@ -806,7 +805,6 @@ subroutine filter_main() call trace_message('Before prior inflation damping and prep') if (inf_damping(PRIOR_INF) /= 1.0_r8) then - call prepare_to_update_copies(state_ens_handle) state_ens_handle%copies(PRIOR_INF_COPY, :) = 1.0_r8 + & inf_damping(PRIOR_INF) * (state_ens_handle%copies(PRIOR_INF_COPY, :) - 1.0_r8) endif @@ -907,7 +905,6 @@ subroutine filter_main() call trace_message('Before posterior inflation damping') if (inf_damping(POSTERIOR_INF) /= 1.0_r8) then - call prepare_to_update_copies(state_ens_handle) state_ens_handle%copies(POST_INF_COPY, :) = 1.0_r8 + & inf_damping(POSTERIOR_INF) * (state_ens_handle%copies(POST_INF_COPY, :) - 1.0_r8) endif @@ -1549,9 +1546,6 @@ subroutine filter_ensemble_inflate(ens_handle, inflate_copy, inflate, ENS_MEAN_C integer :: j, group, grp_bot, grp_top, grp_size -! Assumes that the ensemble is copy complete -call prepare_to_update_copies(ens_handle) - ! Inflate each group separately; Divide ensemble into num_groups groups grp_size = ens_size / num_groups @@ -2827,8 +2821,6 @@ subroutine update_observations_radar(obs_ens_handle, ens_size, seq, keys, prior_ ! for quiet execution, set it to false. verbose = .true. -call prepare_to_update_copies(obs_ens_handle) - do j = 1, obs_ens_handle%my_num_vars ! get the key number associated with each of my subset of obs ! then get the obs and extract info from it. 
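For readers skimming the filter_mod hunks above and below: the deleted lines are calls to the no-op "prepare" routines, while the inflation damping updates around them are left untouched. That damping step simply pulls each inflation value part of the way back toward 1.0. The following is a minimal, self-contained Fortran sketch of that arithmetic, with hypothetical values; only the inf_damping name and the update expression mirror filter_mod.

program damping_sketch
! Standalone sketch of the inflation damping expression retained in
! filter_main(); the values below are hypothetical, not from DART.
implicit none
integer, parameter :: r8 = selected_real_kind(12)
real(r8) :: inflate(4)
real(r8) :: inf_damping
inflate = (/ 1.0_r8, 1.2_r8, 1.5_r8, 2.0_r8 /)
inf_damping = 0.9_r8   ! a value of 1.0 would leave inflation unchanged
! Pull each inflation value toward 1.0, as in the hunks above and below:
inflate = 1.0_r8 + inf_damping * (inflate - 1.0_r8)
print *, inflate       ! expect 1.00, 1.18, 1.45, 1.90
end program damping_sketch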
diff --git a/assimilation_code/modules/assimilation/filter_mod.f90 b/assimilation_code/modules/assimilation/filter_mod.f90 index 75d0560de6..0971d7821e 100644 --- a/assimilation_code/modules/assimilation/filter_mod.f90 +++ b/assimilation_code/modules/assimilation/filter_mod.f90 @@ -46,11 +46,10 @@ module filter_mod compute_copy_mean, compute_copy_mean_sd, & compute_copy_mean_var, duplicate_ens, get_copy_owner_index, & get_ensemble_time, set_ensemble_time, broadcast_copy, & - map_pe_to_task, prepare_to_update_copies, & - copies_in_window, set_num_extra_copies, get_allow_transpose, & - all_copies_to_all_vars, allocate_single_copy, allocate_vars, & - get_single_copy, put_single_copy, deallocate_single_copy, & - print_ens_handle + map_pe_to_task, copies_in_window, set_num_extra_copies, & + get_allow_transpose, all_copies_to_all_vars, & + allocate_single_copy, allocate_vars, get_single_copy, & + put_single_copy, deallocate_single_copy, print_ens_handle use adaptive_inflate_mod, only : do_ss_inflate, mean_from_restart, sd_from_restart, & inflate_ens, adaptive_inflate_init, & @@ -809,7 +808,6 @@ subroutine filter_main() call trace_message('Before prior inflation damping and prep') if (inf_damping(PRIOR_INF) /= 1.0_r8) then - call prepare_to_update_copies(state_ens_handle) state_ens_handle%copies(PRIOR_INF_COPY, :) = 1.0_r8 + & inf_damping(PRIOR_INF) * (state_ens_handle%copies(PRIOR_INF_COPY, :) - 1.0_r8) endif @@ -910,7 +908,6 @@ subroutine filter_main() call trace_message('Before posterior inflation damping') if (inf_damping(POSTERIOR_INF) /= 1.0_r8) then - call prepare_to_update_copies(state_ens_handle) state_ens_handle%copies(POST_INF_COPY, :) = 1.0_r8 + & inf_damping(POSTERIOR_INF) * (state_ens_handle%copies(POST_INF_COPY, :) - 1.0_r8) endif @@ -1566,9 +1563,6 @@ subroutine filter_ensemble_inflate(ens_handle, inflate_copy, inflate, ENS_MEAN_C real(r8) :: lower_bound, upper_bound integer :: dist_type -! Assumes that the ensemble is copy complete -call prepare_to_update_copies(ens_handle) - ! Inflate each group separately; Divide ensemble into num_groups groups grp_size = ens_size / num_groups diff --git a/assimilation_code/modules/assimilation/obs_model_mod.f90 b/assimilation_code/modules/assimilation/obs_model_mod.f90 index 9cb8618995..0f04dd1db2 100644 --- a/assimilation_code/modules/assimilation/obs_model_mod.f90 +++ b/assimilation_code/modules/assimilation/obs_model_mod.f90 @@ -20,8 +20,8 @@ module obs_model_mod operator(/=), operator(>), operator(-), & operator(/), operator(+), operator(<), operator(==), & operator(<=), operator(>=) -use ensemble_manager_mod, only : get_ensemble_time, ensemble_type, map_task_to_pe, & - prepare_to_update_vars +use ensemble_manager_mod, only : get_ensemble_time, ensemble_type, map_task_to_pe + use mpi_utilities_mod, only : my_task_id, task_sync, block_task, & sum_across_tasks, shell_execute, my_task_id use io_filenames_mod, only : file_info_type @@ -348,8 +348,6 @@ subroutine advance_state(ens_handle, ens_size, target_time, async, adv_ens_comma ! Ok, this task does need to advance something. need_advance = 1 - call prepare_to_update_vars(ens_handle) - ! Increment number of ensemble member copies I have. 
my_num_state_copies = my_num_state_copies + 1 diff --git a/assimilation_code/modules/observations/forward_operator_mod.f90 b/assimilation_code/modules/observations/forward_operator_mod.f90 index a12e45cb1d..52e07ec444 100644 --- a/assimilation_code/modules/observations/forward_operator_mod.f90 +++ b/assimilation_code/modules/observations/forward_operator_mod.f90 @@ -30,8 +30,6 @@ module forward_operator_mod use obs_kind_mod, only : assimilate_this_type_of_obs, evaluate_this_type_of_obs use ensemble_manager_mod, only : ensemble_type, compute_copy_mean_var, & - prepare_to_read_from_vars, & - prepare_to_write_to_vars, & get_my_num_copies, copies_in_window, & get_allow_transpose, all_vars_to_all_copies, & all_copies_to_all_vars, allocate_single_copy, & @@ -127,11 +125,6 @@ subroutine get_obs_ens_distrib_state(ens_handle, obs_fwd_op_ens_handle, & istatus = 999123 expected_obs = MISSING_R8 -! FIXME: these no longer do anything? -! call prepare_to_write_to_vars(obs_fwd_op_ens_handle) -! call prepare_to_write_to_vars(qc_ens_handle) -! call prepare_to_read_from_vars(ens_handle) - ! Set up access to the state call create_state_window(ens_handle, obs_fwd_op_ens_handle, qc_ens_handle) diff --git a/assimilation_code/modules/utilities/ensemble_manager_mod.f90 b/assimilation_code/modules/utilities/ensemble_manager_mod.f90 index df5726ec22..11e1258e88 100644 --- a/assimilation_code/modules/utilities/ensemble_manager_mod.f90 +++ b/assimilation_code/modules/utilities/ensemble_manager_mod.f90 @@ -37,9 +37,7 @@ module ensemble_manager_mod get_copy, put_copy, all_vars_to_all_copies, & all_copies_to_all_vars, allocate_vars, deallocate_vars, & compute_copy_mean_var, get_copy_owner_index, set_ensemble_time, & - broadcast_copy, prepare_to_write_to_vars, prepare_to_write_to_copies, & - prepare_to_read_from_vars, prepare_to_read_from_copies, prepare_to_update_vars, & - prepare_to_update_copies, print_ens_handle, set_current_time, & + broadcast_copy, print_ens_handle, set_current_time, & map_task_to_pe, map_pe_to_task, get_current_time, & allocate_single_copy, put_single_copy, get_single_copy, & deallocate_single_copy @@ -66,7 +64,6 @@ module ensemble_manager_mod ! Time is only related to var complete type(time_type), allocatable :: time(:) integer :: distribution_type - integer :: valid ! copies modified last, vars modified last, both same integer :: id_num integer, allocatable :: task_to_pe_list(:), pe_to_task_list(:) ! List of tasks ! Flexible my_pe, layout_type which allows different task layouts for different ensemble handles @@ -83,13 +80,6 @@ module ensemble_manager_mod !PAR some way, either allocating or multiple addressing, to use same chunk of storage !PAR for both copy and var complete representations. -! track if copies modified last, vars modified last, both are in sync -! (and therefore both valid to be used r/o), or unknown. -integer, parameter :: VALID_UNKNOWN = -1 -integer, parameter :: VALID_BOTH = 0 ! vars & copies have same data -integer, parameter :: VALID_VARS = 1 ! vars(:,:) modified last -integer, parameter :: VALID_COPIES = 2 ! copies(:,:) modified last - ! unique counter per ensemble handle integer :: global_counter = 1 @@ -237,9 +227,6 @@ subroutine init_ensemble_manager(ens_handle, num_copies, & source, text2=msgstring) endif -! initially no data -ens_handle%valid = VALID_BOTH - if(debug .and. 
my_task_id()==0) then print*, 'pe_to_task_list', ens_handle%pe_to_task_list print*, 'task_to_pe_list', ens_handle%task_to_pe_list @@ -278,11 +265,6 @@ subroutine get_copy(receiving_pe, ens_handle, copy, vars, mtime) integer :: owner, owners_index -! Error checking -if (ens_handle%valid /= VALID_VARS .and. ens_handle%valid /= VALID_BOTH) then - call error_handler(E_ERR, 'get_copy', 'last access not var-complete', source) -endif - ! Verify that requested copy exists if(copy < 1 .or. copy > ens_handle%num_copies) then write(msgstring, *) 'Requested copy: ', copy, ' is > maximum copy: ', ens_handle%num_copies @@ -348,11 +330,6 @@ subroutine put_copy(sending_pe, ens_handle, copy, vars, mtime) integer :: owner, owners_index -! Error checking -if (ens_handle%valid /= VALID_VARS .and. ens_handle%valid /= VALID_BOTH) then - call error_handler(E_ERR, 'put_copy', 'last access not var-complete', source) -endif - if(copy < 1 .or. copy > ens_handle%num_copies) then write(msgstring, *) 'Requested copy: ', copy, ' is > maximum copy: ', ens_handle%num_copies call error_handler(E_ERR,'put_copy', msgstring, source) @@ -392,8 +369,6 @@ subroutine put_copy(sending_pe, ens_handle, copy, vars, mtime) endif endif -ens_handle%valid = VALID_VARS - end subroutine put_copy !----------------------------------------------------------------- @@ -411,11 +386,6 @@ subroutine broadcast_copy(ens_handle, copy, arraydata) integer :: owner, owners_index -! Error checking -if (ens_handle%valid /= VALID_VARS .and. ens_handle%valid /= VALID_BOTH) then - call error_handler(E_ERR, 'broadcast_copy', 'last access not var-complete', source) -endif - if(copy < 1 .or. copy > ens_handle%num_copies) then write(msgstring, *) 'Requested copy: ', copy, ' is > maximum copy: ', ens_handle%num_copies call error_handler(E_ERR,'broadcast_copy', msgstring, source) @@ -442,94 +412,6 @@ end subroutine broadcast_copy !----------------------------------------------------------------- -subroutine prepare_to_write_to_vars(ens_handle) - -! Warn ens manager that we're going to directly update the %vars array - -type(ensemble_type), intent(inout) :: ens_handle - -!ens_handle%valid = VALID_VARS - -end subroutine prepare_to_write_to_vars - -!----------------------------------------------------------------- - -subroutine prepare_to_write_to_copies(ens_handle) - -! Warn ens manager that we're going to directly update the %copies array - -type(ensemble_type), intent(inout) :: ens_handle - -!ens_handle%valid = VALID_COPIES - -end subroutine prepare_to_write_to_copies - -!----------------------------------------------------------------- - -subroutine prepare_to_read_from_vars(ens_handle) - -! Check to be sure that the vars array is current - -type(ensemble_type), intent(in) :: ens_handle - -!if (ens_handle%valid /= VALID_VARS .and. ens_handle%valid /= VALID_BOTH) then -! call error_handler(E_ERR, 'prepare_to_read_from_vars', & - ! 'last access not var-complete', source) -!endif - -end subroutine prepare_to_read_from_vars - -!----------------------------------------------------------------- - -subroutine prepare_to_read_from_copies(ens_handle) - -! Check to be sure that the copies array is current - -type(ensemble_type), intent(in) :: ens_handle - -!if (ens_handle%valid /= VALID_COPIES .and. ens_handle%valid /= VALID_BOTH) then -! call error_handler(E_ERR, 'prepare_to_read_from_copies', & -! 
'last access not copy-complete', source) -!endif - -end subroutine prepare_to_read_from_copies - -!----------------------------------------------------------------- - -subroutine prepare_to_update_vars(ens_handle) - -! We need read/write access, so it has to start valid for vars or both, -! and then is going to be vars only going out. - -type(ensemble_type), intent(inout) :: ens_handle - -!if (ens_handle%valid /= VALID_VARS .and. ens_handle%valid /= VALID_BOTH) then -! call error_handler(E_ERR, 'prepare_to_update_vars', & - ! 'last access not var-complete', source) -!endif -!ens_handle%valid = VALID_VARS - -end subroutine prepare_to_update_vars - -!----------------------------------------------------------------- - -subroutine prepare_to_update_copies(ens_handle) - -! We need read/write access, so it has to start valid for copies or both, -! and then is going to be copies only going out. - -type(ensemble_type), intent(inout) :: ens_handle - -!if (ens_handle%valid /= VALID_COPIES .and. ens_handle%valid /= VALID_BOTH) then -! call error_handler(E_ERR, 'prepare_to_update_copies', & -! 'last access not copy-complete', source) -!endif -!ens_handle%valid = VALID_COPIES - -end subroutine prepare_to_update_copies - -!----------------------------------------------------------------- - subroutine set_ensemble_time(ens_handle, indx, mtime) ! Sets the time of an ensemble member indexed by local storage on this pe. @@ -596,12 +478,6 @@ subroutine duplicate_ens(ens1, ens2, duplicate_time) ! If duplicate_time is true, also copies the time information from ens1 to ens2. ! If duplicate_time is false, the times in ens2 are left unchanged. -! Error checking -if (ens1%valid /= VALID_VARS .and. ens1%valid /= VALID_BOTH) then - call error_handler(E_ERR, 'duplicate_ens', & - 'last access not var-complete for source ensemble', source) -endif - ! Check to make sure that the ensembles are compatible if(ens1%num_copies /= ens2%num_copies) then write(msgstring, *) 'num_copies ', ens1%num_copies, ' and ', ens2%num_copies, & @@ -622,8 +498,6 @@ subroutine duplicate_ens(ens1, ens2, duplicate_time) ! Duplicate each copy that is stored locally on this process. ens2%vars = ens1%vars -ens2%valid = VALID_VARS - ! Duplicate time if requested if(duplicate_time) ens2%time = ens1%time @@ -1056,25 +930,6 @@ subroutine all_vars_to_all_copies(ens_handle, label) call timestamp_message('vars_to_copies start: '//label, alltasks=.true.) endif -! Error checking, but can't return early in case only some of the -! MPI tasks need to transpose. Only if all N tasks say this is an -! unneeded transpose can we skip it. -!if (ens_handle%valid == VALID_BOTH) then -! if (flag_unneeded_transposes) then -! write(msgstring, *) 'task ', my_task_id(), ' ens_handle ', ens_handle%id_num -! call error_handler(E_MSG, 'all_vars_to_all_copies', & -! 'vars & copies both valid, transpose not needed for this task', & -! source, text2=msgstring) -! endif -!else if (ens_handle%valid /= VALID_VARS) then -! write(msgstring, *) 'ens_handle ', ens_handle%id_num -! call error_handler(E_ERR, 'all_vars_to_all_copies', & -! 'last access not var-complete', source, & -! text2=msgstring) -!endif - -ens_handle%valid = VALID_BOTH - ! Accelerated version for single process if(num_pes == 1) then ens_handle%copies = transpose(ens_handle%vars) @@ -1232,25 +1087,6 @@ subroutine all_copies_to_all_vars(ens_handle, label) call timestamp_message('copies_to_vars start: '//label, alltasks=.true.) endif -! Error checking, but can't return early in case only some of the -! 
MPI tasks need to transpose. Only if all N tasks say this is an -! unneeded transpose can we skip it. -!if (ens_handle%valid == VALID_BOTH) then -! if (flag_unneeded_transposes) then -! write(msgstring, *) 'task ', my_task_id(), ' ens_handle ', ens_handle%id_num -! call error_handler(E_MSG, 'all_copies_to_all_vars', & -! 'vars & copies both valid, transpose not needed for this task', & -! source, text2=msgstring) -! endif -!else if (ens_handle%valid /= VALID_COPIES) then -! write(msgstring, *) 'ens_handle ', ens_handle%id_num -! call error_handler(E_ERR, 'all_copies_to_all_vars', & -! 'last access not copy-complete', source, & -! text2=msgstring) -!endif - -ens_handle%valid = VALID_BOTH - ! Accelerated version for single process if(num_pes == 1) then ens_handle%vars = transpose(ens_handle%copies) @@ -1416,12 +1252,6 @@ subroutine compute_copy_mean(ens_handle, start_copy, end_copy, mean_copy) ! Should check to make sure that start, end and mean are all legal -! Error checking -if (ens_handle%valid /= VALID_COPIES .and. ens_handle%valid /= VALID_BOTH) then - call error_handler(E_ERR, 'compute_copy_mean', & - 'last access not copy-complete', source) -endif - num_copies = end_copy - start_copy + 1 MYLOOP : do i = 1, ens_handle%my_num_vars @@ -1432,8 +1262,6 @@ subroutine compute_copy_mean(ens_handle, start_copy, end_copy, mean_copy) endif end do MYLOOP -ens_handle%valid = VALID_COPIES - end subroutine compute_copy_mean !-------------------------------------------------------------------------------- @@ -1450,12 +1278,6 @@ subroutine compute_copy_mean_sd(ens_handle, start_copy, end_copy, mean_copy, sd_ ! Should check to make sure that start, end, mean and sd are all legal copies -! Error checking -!if (ens_handle%valid /= VALID_COPIES .and. ens_handle%valid /= VALID_BOTH) then -! call error_handler(E_ERR, 'compute_copy_mean_sd', & -! 'last access not copy-complete', source) -!endif - num_copies = end_copy - start_copy + 1 MYLOOP : do i = 1, ens_handle%my_num_vars @@ -1475,8 +1297,6 @@ subroutine compute_copy_mean_sd(ens_handle, start_copy, end_copy, mean_copy, sd_ end do MYLOOP -ens_handle%valid = VALID_COPIES - end subroutine compute_copy_mean_sd !-------------------------------------------------------------------------------- @@ -1494,12 +1314,6 @@ subroutine compute_copy_mean_var(ens_handle, start_copy, end_copy, mean_copy, va ! Should check to make sure that start, end, mean and var are all legal copies -! Error checking -if (ens_handle%valid /= VALID_COPIES .and. 
ens_handle%valid /= VALID_BOTH) then - call error_handler(E_ERR, 'compute_copy_mean_var', & - 'last access not copy-complete', source) -endif - num_copies = end_copy - start_copy + 1 MYLOOP : do i = 1, ens_handle%my_num_vars @@ -1517,8 +1331,6 @@ subroutine compute_copy_mean_var(ens_handle, start_copy, end_copy, mean_copy, va endif end do MYLOOP -ens_handle%valid = VALID_COPIES - end subroutine compute_copy_mean_var !-------------------------------------------------------------------------------- @@ -1612,8 +1424,6 @@ subroutine print_ens_handle(ens_handle, force, label, contents, limit) call error_handler(E_MSG, 'ensemble handle: ', msgstring, source) write(msgstring, *) 'number of my_vars : ', ens_handle%my_num_vars call error_handler(E_MSG, 'ensemble handle: ', msgstring, source) -write(msgstring, *) 'valid : ', ens_handle%valid -call error_handler(E_MSG, 'ensemble handle: ', msgstring, source) write(msgstring, *) 'distribution_type : ', ens_handle%distribution_type call error_handler(E_MSG, 'ensemble handle: ', msgstring, source) write(msgstring, *) 'my_pe number : ', ens_handle%my_pe diff --git a/assimilation_code/modules/utilities/ensemble_manager_mod.rst b/assimilation_code/modules/utilities/ensemble_manager_mod.rst index f33d7b357c..1e31b109b8 100644 --- a/assimilation_code/modules/utilities/ensemble_manager_mod.rst +++ b/assimilation_code/modules/utilities/ensemble_manager_mod.rst @@ -140,12 +140,6 @@ Public interfaces \ compute_copy_mean \ compute_copy_mean_sd \ compute_copy_mean_var -\ prepare_to_write_to_vars -\ prepare_to_write_to_copies -\ prepare_to_read_from_vars -\ prepare_to_read_from_copies -\ prepare_to_update_vars -\ prepare_to_update_copies \ print_ens_handle \ map_pe_to_task \ map_task_to_pe @@ -174,7 +168,6 @@ A note about documentation style. Optional arguments are enclosed in brackets *[ ! Time is only related to var complete type(time_type), pointer :: time(:) integer :: distribution_type - integer :: valid ! copies modified last, vars modified last, both same integer :: id_num integer, allocatable :: task_to_pe_list(:) ! List of tasks integer, allocatable :: pe_to_task_list(:) ! List of tasks @@ -796,182 +789,6 @@ A note about documentation style. Optional arguments are enclosed in brackets *[ | -.. container:: routine - - *call prepare_to_update_vars(ens_handle)* - :: - - type(ensemble_type), intent(inout) :: ens_handle - -.. container:: indent1 - - Call this routine before directly accessing the ``ens_handle%vars`` array when the data is going to be updated, and - the incoming vars array should have the most current data representation. - - Internally the ensemble manager tracks which of the copies or vars arrays, or both, have the most recently updated - representation of the data. For example, before a transpose (``all_vars_to_all_copies()`` or - ``all_copies_to_all_vars()``) the code checks to be sure the source array has the most recently updated - representation before it does the operation. After a transpose both representations have the same update time and are - both valid. - - For efficiency reasons we allow the copies and vars arrays to be accessed directly from other code without going - through a routine in the ensemble manager. The "prepare" routines verify that the desired array has the most recently - updated representation of the data, and if needed marks which one has been updated so the internal consistency checks - have an accurate accounting of the representations. 
- - ============== ================================================ - ``ens_handle`` Handle for the ensemble being accessed directly. - ============== ================================================ - -| - -.. container:: routine - - *call prepare_to_update_copies(ens_handle)* - :: - - type(ensemble_type), intent(inout) :: ens_handle - -.. container:: indent1 - - Call this routine before directly accessing the ``ens_handle%copies`` array when the data is going to be updated, and - the incoming copies array should have the most current data representation. - - Internally the ensemble manager tracks which of the copies or vars arrays, or both, have the most recently updated - representation of the data. For example, before a transpose (``all_vars_to_all_copies()`` or - ``all_copies_to_all_vars()``) the code checks to be sure the source array has the most recently updated - representation before it does the operation. After a transpose both representations have the same update time and are - both valid. - - For efficiency reasons we allow the copies and vars arrays to be accessed directly from other code without going - through a routine in the ensemble manager. The "prepare" routines verify that the desired array has the most recently - updated representation of the data, and if needed marks which one has been updated so the internal consistency checks - have an accurate accounting of the representations. - - ============== ================================================ - ``ens_handle`` Handle for the ensemble being accessed directly. - ============== ================================================ - -| - -.. container:: routine - - *call prepare_to_read_from_vars(ens_handle)* - :: - - type(ensemble_type), intent(inout) :: ens_handle - -.. container:: indent1 - - Call this routine before directly accessing the ``ens_handle%vars`` array for reading only, when the incoming vars - array should have the most current data representation. - - Internally the ensemble manager tracks which of the copies or vars arrays, or both, have the most recently updated - representation of the data. For example, before a transpose (``all_vars_to_all_copies()`` or - ``all_copies_to_all_vars()``) the code checks to be sure the source array has the most recently updated - representation before it does the operation. After a transpose both representations have the same update time and are - both valid. - - For efficiency reasons we allow the copies and vars arrays to be accessed directly from other code without going - through a routine in the ensemble manager. The "prepare" routines verify that the desired array has the most recently - updated representation of the data, and if needed marks which one has been updated so the internal consistency checks - have an accurate accounting of the representations. - - ============== ================================================ - ``ens_handle`` Handle for the ensemble being accessed directly. - ============== ================================================ - -| - -.. container:: routine - - *call prepare_to_read_from_copies(ens_handle)* - :: - - type(ensemble_type), intent(inout) :: ens_handle - -.. container:: indent1 - - Call this routine before directly accessing the ``ens_handle%copies`` array for reading only, when the incoming - copies array should have the most current data representation. - - Internally the ensemble manager tracks which of the copies or vars arrays, or both, have the most recently updated - representation of the data. 
For example, before a transpose (``all_vars_to_all_copies()`` or - ``all_copies_to_all_vars()``) the code checks to be sure the source array has the most recently updated - representation before it does the operation. After a transpose both representations have the same update time and are - both valid. - - For efficiency reasons we allow the copies and vars arrays to be accessed directly from other code without going - through a routine in the ensemble manager. The "prepare" routines verify that the desired array has the most recently - updated representation of the data, and if needed marks which one has been updated so the internal consistency checks - have an accurate accounting of the representations. - - ============== ================================================ - ``ens_handle`` Handle for the ensemble being accessed directly. - ============== ================================================ - -| - -.. container:: routine - - *call prepare_to_write_to_vars(ens_handle)* - :: - - type(ensemble_type), intent(inout) :: ens_handle - -.. container:: indent1 - - Call this routine before directly accessing the ``ens_handle%vars`` array for writing. This routine differs from the - 'update' version in that it doesn't care what the original data state is. This routine might be used in the case - where an array is being filled for the first time and consistency with the data in the copies array is not an issue. - - Internally the ensemble manager tracks which of the copies or vars arrays, or both, have the most recently updated - representation of the data. For example, before a transpose (``all_vars_to_all_copies()`` or - ``all_copies_to_all_vars()``) the code checks to be sure the source array has the most recently updated - representation before it does the operation. After a transpose both representations have the same update time and are - both valid. - - For efficiency reasons we allow the copies and vars arrays to be accessed directly from other code without going - through a routine in the ensemble manager. The "prepare" routines verify that the desired array has the most recently - updated representation of the data, and if needed marks which one has been updated so the internal consistency checks - have an accurate accounting of the representations. - - ============== ================================================ - ``ens_handle`` Handle for the ensemble being accessed directly. - ============== ================================================ - -| - -.. container:: routine - - *call prepare_to_write_to_copies(ens_handle)* - :: - - type(ensemble_type), intent(inout) :: ens_handle - -.. container:: indent1 - - Call this routine before directly accessing the ``ens_handle%copies`` array for writing. This routine differs from - the 'update' version in that it doesn't care what the original data state is. This routine might be used in the case - where an array is being filled for the first time and consistency with the data in the vars array is not an issue. - - Internally the ensemble manager tracks which of the copies or vars arrays, or both, have the most recently updated - representation of the data. For example, before a transpose (``all_vars_to_all_copies()`` or - ``all_copies_to_all_vars()``) the code checks to be sure the source array has the most recently updated - representation before it does the operation. After a transpose both representations have the same update time and are - both valid. 
- - For efficiency reasons we allow the copies and vars arrays to be accessed directly from other code without going - through a routine in the ensemble manager. The "prepare" routines verify that the desired array has the most recently - updated representation of the data, and if needed marks which one has been updated so the internal consistency checks - have an accurate accounting of the representations. - - ============== ================================================ - ``ens_handle`` Handle for the ensemble being accessed directly. - ============== ================================================ - -| - Private interfaces ------------------ diff --git a/assimilation_code/modules/utilities/netcdf_utilities_mod.f90 b/assimilation_code/modules/utilities/netcdf_utilities_mod.f90 index 74301bd6f4..667138cfe5 100644 --- a/assimilation_code/modules/utilities/netcdf_utilities_mod.f90 +++ b/assimilation_code/modules/utilities/netcdf_utilities_mod.f90 @@ -63,7 +63,7 @@ module netcdf_utilities_mod nc_begin_define_mode, & nc_end_define_mode, & nc_synchronize_file, & - NF90_MAX_NAME, NF90_MAX_VAR_DIMS + NF90_MAX_NAME, NF90_MAX_VAR_DIMS, NF90_FILL_REAL ! note here that you only need to distinguish between diff --git a/assimilation_code/programs/integrate_model/integrate_model.f90 b/assimilation_code/programs/integrate_model/integrate_model.f90 index 587dd4e1ee..fc385aa49c 100644 --- a/assimilation_code/programs/integrate_model/integrate_model.f90 +++ b/assimilation_code/programs/integrate_model/integrate_model.f90 @@ -20,8 +20,8 @@ program integrate_model use assim_model_mod, only : static_init_assim_model, get_model_size use obs_model_mod, only : advance_state -use ensemble_manager_mod, only : init_ensemble_manager, ensemble_type, & - prepare_to_write_to_vars +use ensemble_manager_mod, only : init_ensemble_manager, ensemble_type + use mpi_utilities_mod, only : initialize_mpi_utilities, finalize_mpi_utilities, & task_count, iam_task0 @@ -119,7 +119,6 @@ program integrate_model ! Initialize an ensemble manager type with a single copy call init_ensemble_manager(ens_handle, num_copies=1, num_vars=model_size, transpose_type_in = 2) -call prepare_to_write_to_vars(ens_handle) !------------------- Read restart from file ---------------------- diff --git a/assimilation_code/programs/model_mod_check/model_mod_check.f90 b/assimilation_code/programs/model_mod_check/model_mod_check.f90 index ee4eea51a0..f56c7617a1 100644 --- a/assimilation_code/programs/model_mod_check/model_mod_check.f90 +++ b/assimilation_code/programs/model_mod_check/model_mod_check.f90 @@ -416,7 +416,7 @@ subroutine check_meta_data( iloc ) kind_index=qty_index, & kind_string=qty_string) -write(string1,'("index ",i11," is i,j,k",3(1x,i4)," and is in domain ",i2)') & +write(string1,'("index ",i11," is i,j,k",3(1x,i10)," and is in domain ",i2)') & iloc, ix, iy, iz, dom_id write(string2,'("is quantity ", I4,", ",A)') var_type, trim(qty_string)//' at location' call write_location(0,loc,charstring=string3) @@ -556,9 +556,12 @@ subroutine check_all_meta_data() kind_string=qty_string) ! CLM has (potentially many) columns and needs i7 ish precision - write(string1,'(i11,1x,''i,j,k'',3(1x,i7),'' domain '',i2)') & +! write(string1,'(i11,1x,''i,j,k'',3(1x,i7),'' domain '',i2)') & +! iloc, ix, iy, iz, dom_id + ! EL: integer too short for the new I/O method + !
Change to long int to avoid problems + write(string1,'(i21,1x,''i,j,k'',3(1x,i21),'' domain '',i2)') & iloc, ix, iy, iz, dom_id - call get_state_meta_data(iloc, loc, var_type) metadata_qty_string = trim(get_name_for_quantity(var_type)) diff --git a/assimilation_code/programs/obs_impact_tool/obs_impact_tool.rst b/assimilation_code/programs/obs_impact_tool/obs_impact_tool.rst index 5299da8916..d84cd53f14 100644 --- a/assimilation_code/programs/obs_impact_tool/obs_impact_tool.rst +++ b/assimilation_code/programs/obs_impact_tool/obs_impact_tool.rst @@ -4,47 +4,33 @@ PROGRAM ``obs_impact_tool`` Overview -------- -The standard DART algorithms compute increments for an observation and then compute corresponding increments for each -model state variable due to that observation. To do this, DART computes a sample regression coefficient using the prior -ensemble distributions of a state variable and the observation. The increments for each member of the observation are -multiplied by this regression coefficient and then added to the corresponding prior ensemble member for the state -variable. However, in many cases, it is appropriate to reduce the impact of an observation on a state variable; this is -called localization. The standard DART algorithms allow users to specify a localization that is a function of the -horizontal (and optionally vertical) distance between the observation and the state variable. The localization is a -value between 0 and 1 and multiplies the regression coefficient when updating state ensemble members. - -Sometimes, it may be desirable to do an additional localization that is a function of the -type of observation and the -state vector quantity. This program allows users to construct a table that is read by -filter at run-time to localize the -impact of sets of observation types on sets of state vector quantities. Users can create -named sets of observation types -and sets of state vector quantities and specify a localization for the impact of the -specified observation types on the state vector quantities. - -An example would be to create a subset of observations of tracer concentration for a variety of tracers, and a subset of -dynamic state variable quantities like temperatures and wind components. It has been common to set this localization -value to 0 so that tracer observations have no impact on dynamic state quantities, however, the tool allows values -between 0 and 1 to be specified. - -This tool allows related collections of observation types and state vector quantities to be named and then express the -relationship of the named groups to each other in a concise way. It can also define relationships by exceptions. - -All the listed observation types and state vector quantities must be known by the system. -If they are not, look at the -&preprocess_nml :: input_items namelist which specifies which *obs_def_xxx_mod.f90* files -are included, which is where observation types are defined. -Quantities for different regimes (atmosphere, ocean, land, etc.) are defined in -``assimilation_code/modules/observations/xxx_quantities_mod.f90`` and explained in -:doc:`../../modules/observations/obs_kind_mod` - -Format of the input file can be any combination of these types of sections: +The standard DART algorithms work by calculating increments for an observation and then determining corresponding +increments for each variable in the state due to that observation. 
This is done by computing a sample regression +coefficient using the prior ensemble distributions of a state variable and the observation. The increments for each member +of the ensemble are multiplied by this coefficient and then added to the corresponding prior ensemble member for the variable. -.. container:: +However, in many cases it is necessary to limit the influence of an observation on a variable; this is known as localization. +DART provides a way to specify a localization, known as cutoff, based on the horizontal and vertical distance between the observation +and the state variable. + +In some situations, you may want additional localization based on the type of observation and the state quantity. +``obs_impact_tool`` allows you to create a table that filter reads at runtime to localize the impact of certain types of +observations on specific state vector quantities. You can define sets of observation types and state vector quantities, and +specify localization for the impact of those observation types on the state vector quantities. + +For example, you can create a subset of observations related to tracer concentration for various tracers, and a subset of +dynamic state variables like temperatures and wind components. It is common practice to set this localization value +to 0 to prevent tracer observations from affecting dynamic state quantities. However, ``obs_impact_tool`` allows you to specify values +between 0 and 1. - :: +#. Build ``obs_impact_tool`` by adding it to the list of serial_programs in the quickbuild.sh script for the model you are using. + Run ./quickbuild.sh to build all the DART programs. +#. Create an input file for ``obs_impact_tool`` to define the impacts of observations. In the examples on this page, the input file + is called `cross_correlations.txt`. + The format of the input file can be any combination of the following types of sections: + .. code:: bash # hash mark starts a comment. @@ -99,10 +85,48 @@ groupname1 groupname1 0.0 END IMPACT -Namelist interface ``&obs_impact_tool_nml`` must be read from file ``input.nml``. + The following is an example of an input file to prevent chemistry species from impacting the meteorological variables in the model state, and vice versa: -Namelist -------- + .. code:: bash + + GROUP chem + QTY_CO QTY_NO QTY_C2H4 + END GROUP + + GROUP met + ALLQTYS EXCEPT chem + END GROUP + + IMPACT + chem met 0.0 + met chem 0.0 + END IMPACT + + +#. Run ``obs_impact_tool`` using your `cross_correlations.txt` as input. ``obs_impact_tool`` will create an output file, + named `control_impact_runtime.txt` in this example. + + .. code:: text + + &obs_impact_tool_nml + input_filename = 'cross_correlations.txt' + output_filename = 'control_impact_runtime.txt' + / + + +#. Set the following namelist options in :ref:`&assim_tools_nml` to use `control_impact_runtime.txt` in filter. + Filter will apply your selected observation impacts during assimilation. + + .. code:: text + + &assim_tools_nml + adjust_obs_impact = .true. + obs_impact_filename = 'control_impact_runtime.txt' + / + + +obs_impact_tool Namelist ------------------------ This namelist is read from the file ``input.nml``. Namelists start with an ampersand '&' and terminate with a slash '/'. Character strings that contain a '/' must be enclosed in quotes to prevent them from prematurely terminating the
+-----------------+--------------------+-----------------------------------------------------------------------------+ | debug | logical | If true print out debugging info. | +-----------------+--------------------+-----------------------------------------------------------------------------+ - -| - -Examples --------- - -To prevent chemistry species from impacting the meterological variables in the model state, and vice versa: - -.. container:: - - :: - - GROUP chem - QTY_CO QTY_NO QTY_C2H4 - END GROUP - - GROUP met - ALLQTYS EXCEPT chem - END GROUP - - IMPACT - chem met 0.0 - met chem 0.0 - END IMPACT - -Modules used ------------- - -:: - - types_mod - utilities_mod - parse_args_mod - -Files ------ - -- two text files, one input and one output. -- obs_impact_tool.nml - -References ----------- - -- none diff --git a/assimilation_code/programs/perfect_model_obs/perfect_model_obs.f90 b/assimilation_code/programs/perfect_model_obs/perfect_model_obs.f90 index 6417785e2c..7013fe2774 100644 --- a/assimilation_code/programs/perfect_model_obs/perfect_model_obs.f90 +++ b/assimilation_code/programs/perfect_model_obs/perfect_model_obs.f90 @@ -36,9 +36,8 @@ program perfect_model_obs use random_seq_mod, only : random_seq_type, init_random_seq, random_gaussian use ensemble_manager_mod, only : init_ensemble_manager, & end_ensemble_manager, ensemble_type, & - get_my_num_copies, get_ensemble_time, prepare_to_write_to_vars, & - prepare_to_read_from_vars, allocate_vars, & - all_vars_to_all_copies, & + get_my_num_copies, get_ensemble_time, & + allocate_vars, all_vars_to_all_copies, & all_copies_to_all_vars use filter_mod, only : filter_set_initial_time, filter_sync_keys_time @@ -465,8 +464,6 @@ subroutine perfect_main() call trace_message('After setup for next group of observations') - call prepare_to_read_from_vars(ens_handle) - ! Output the true state to the netcdf file if((output_interval > 0) .and. & (time_step_number / output_interval * output_interval == time_step_number)) then diff --git a/build_templates/mkmf.template.AIRS.gfortran b/build_templates/mkmf.template.AIRS.gfortran new file mode 100644 index 0000000000..70ea54667d --- /dev/null +++ b/build_templates/mkmf.template.AIRS.gfortran @@ -0,0 +1,41 @@ +# Template for AIRS observation converter with GNU gfortran on Linux or OSX +# +# DART software - Copyright UCAR. This open source software is provided +# by UCAR, "as is", without charge, subject to all terms of use at +# http://www.image.ucar.edu/DAReS/DART/DART_download + + +MPIFC = mpif90 +MPILD = mpif90 +FC = gfortran +LD = h4fc + +# MODIFY THE FOLLOWING VARIABLES FOR YOUR SYSTEM: +# If your NETCDF, HDFEOS, or RTTOV environment variables are not set, +# uncomment the following line and set value to where lib and include +# are found for the netcdf files that match this compiler. 
+# +# NETCDF = /opt/local +HDFEOS = /glade/campaign/cisl/dares/libraries/hdf-eos_gfortran +RTTOV = /glade/campaign/cisl/dares/libraries/rttov123_gfortran + +RTTOV_VERSION = 12 + +RTLIBS = -lrttov$(RTTOV_VERSION)_wrapper -lrttov$(RTTOV_VERSION)_mw_scatt -lrttov$(RTTOV_VERSION)_brdf_atlas \ + -lrttov$(RTTOV_VERSION)_emis_atlas -lrttov$(RTTOV_VERSION)_other -lrttov$(RTTOV_VERSION)_parallel \ + -lrttov$(RTTOV_VERSION)_coef_io -lrttov$(RTTOV_VERSION)_hdf -lrttov$(RTTOV_VERSION)_main + +INCS = -I$(NETCDF)/include -I$(HDFEOS)/include -I$(RTTOV)/include -I$(RTTOV)/mod + +LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf \ + -L$(HDFEOS)/lib -lhdfeos -lmfhdf -ldf -ljpeg -lz -lm -lsz \ + -L$(RTTOV)/lib -lhdf5_hl_fortran -lhdf5_hl -lhdf5_fortran -lhdf5 $(RTLIBS) + +FFLAGS = -O2 -ffree-line-length-none -fallow-argument-mismatch $(INCS) +LDFLAGS = $(FFLAGS) $(LIBS) + +# Debug settings (preferably also use a RTTOV compiled with debug settings): +# +# FFLAGS = -g -Wuninitialized -Wunused -ffree-line-length-none -fbounds-check \ +# -fbacktrace -ffpe-trap=invalid,zero,overflow -fallow-argument-mismatch $(INCS) + diff --git a/build_templates/mkmf.template.AIRS.intel b/build_templates/mkmf.template.AIRS.intel new file mode 100644 index 0000000000..1ddab0b23e --- /dev/null +++ b/build_templates/mkmf.template.AIRS.intel @@ -0,0 +1,42 @@ +# Template for AIRS observation converter with Intel Fortran Compiler on Linux or OSX +# +# DART software - Copyright UCAR. This open source software is provided +# by UCAR, "as is", without charge, subject to all terms of use at +# http://www.image.ucar.edu/DAReS/DART/DART_download + +MPIFC = mpif90 +MPILD = mpif90 +FC = h4fc +LD = h4fc + +# MODIFY THE FOLLOWING VARIABLES FOR YOUR SYSTEM: +# If your NETCDF, HDFEOS, or RTTOV environment variables are not set, +# uncomment the following line and set value to where lib and include +# are found for the netcdf files that match this compiler. +# +# NETCDF = /opt/local +HDFEOS = /glade/campaign/cisl/dares/libraries/hdf-eos_intel/ +RTTOV = /glade/campaign/cisl/dares/libraries/rttov123_intel/ + +RTTOV_VERSION = 12 + +RTLIBS = -lrttov$(RTTOV_VERSION)_wrapper -lrttov$(RTTOV_VERSION)_mw_scatt -lrttov$(RTTOV_VERSION)_brdf_atlas \ + -lrttov$(RTTOV_VERSION)_emis_atlas -lrttov$(RTTOV_VERSION)_other -lrttov$(RTTOV_VERSION)_parallel \ + -lrttov$(RTTOV_VERSION)_coef_io -lrttov$(RTTOV_VERSION)_hdf -lrttov$(RTTOV_VERSION)_main + +INCS = -I$(NETCDF)/include -I$(HDFEOS)/include -I$(RTTOV)/include -I$(RTTOV)/mod + +LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf \ + -L$(HDFEOS)/lib -lhdfeos -lmfhdf -ldf -ljpeg -lz -lm -lsz -lGctp \ + -L$(RTTOV)/lib -lhdf5_hl_fortran -lhdf5_hl -lhdf5_fortran -lhdf5 $(RTLIBS) + +FFLAGS = -O2 -assume buffered_io $(INCS) +LDFLAGS = $(FFLAGS) $(LIBS) + +# for development or debugging, use this instead: +# FFLAGS = -g -C -check noarg_temp_created -fpe0 \ +# -fp-model precise -ftrapuv -traceback \ +# -warn declarations,uncalled,unused $(INCS) + +# Optimized (BLAS, LAPACK) libraries are available from the Intel Math Kernel Libraries: +# -lmkl -lmkl_lapack -lguide -lpthread diff --git a/build_templates/mkmf.template.quikscat.gfortran b/build_templates/mkmf.template.quikscat.gfortran new file mode 100644 index 0000000000..10e14e55bb --- /dev/null +++ b/build_templates/mkmf.template.quikscat.gfortran @@ -0,0 +1,31 @@ +# Template for quikscat converter with GNU gfortran on Linux or Mac OSX +# +# DART software - Copyright UCAR. 
This open source software is provided +# by UCAR, "as is", without charge, subject to all terms of use at +# http://www.image.ucar.edu/DAReS/DART/DART_download +# + + +MPIFC = mpif90 +MPILD = mpif90 +FC = gfortran +LD = h4fc + +# If you get an error "ld: library not found for -lnetcdff" (note 2 f's), +# remove it from the LIBS line. The same is true for any library. If 'ld' +# does not complain - it worked. + +# If your NETCDF environment variable is not set correctly, +# uncomment the following line and set value to where lib and include +# are found for the netcdf files that match this compiler. +# +# NETCDF = /opt/local + +INCS = -I$(NETCDF)/include +LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf +FFLAGS = -O2 -ffree-line-length-none -fallow-argument-mismatch $(INCS) +LDFLAGS = $(FFLAGS) $(LIBS) + +# for development or debugging, use this instead: +# FFLAGS = -g -Wuninitialized -Wunused -ffree-line-length-none -fbounds-check \ +# -fbacktrace -ffpe-trap=invalid,zero,overflow -fallow-argument-mismatch $(INCS) diff --git a/build_templates/mkmf.template.quikscat.intel b/build_templates/mkmf.template.quikscat.intel new file mode 100644 index 0000000000..84e32674f4 --- /dev/null +++ b/build_templates/mkmf.template.quikscat.intel @@ -0,0 +1,37 @@ +# Template for quikscat converter with Intel Fortran Compiler on Linux or OSX +# +# DART software - Copyright UCAR. This open source software is provided +# by UCAR, "as is", without charge, subject to all terms of use at +# http://www.image.ucar.edu/DAReS/DART/DART_download +# + +MPIFC = mpif90 +MPILD = mpif90 +FC = h4fc +LD = h4fc + +# If you get an error "ld: library not found for -lnetcdff" (note 2 f's), +# remove it from the LIBS line. The same is true for any library. If 'ld' +# does not complain - it worked. + +# If your NETCDF environment variable is not set correctly, +# uncomment the following line and set value to where lib and include +# are found for the netcdf files that match this compiler. +# +# NETCDF = /opt/local + +INCS = -I$(NETCDF)/include +LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf +FFLAGS = -O -assume buffered_io $(INCS) +LDFLAGS = $(FFLAGS) $(LIBS) + +# for development or debugging, use this instead: +# FFLAGS = -g -C -check noarg_temp_created -fpe0 \ +# -fp-model precise -ftrapuv -traceback \ +# -warn declarations,uncalled,unused $(INCS) + +# Some optimized (BLAS, LAPACK) libraries may be available with: +# LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf -lmkl -lmkl_lapack -lguide -lpthread +# +# If you get this error: libimf.so: warning: warning: feupdateenv is not implemented +# try adding: -limf -lm to your LIBS line. diff --git a/build_templates/mkmf.template.rttov.gfortran b/build_templates/mkmf.template.rttov.gfortran index f6f3b3721e..ecb914a273 100644 --- a/build_templates/mkmf.template.rttov.gfortran +++ b/build_templates/mkmf.template.rttov.gfortran @@ -59,39 +59,28 @@ MPILD = mpif90 FC = gfortran LD = gfortran -# DISCUSSION ABOUT RTTOV. DART is designed to work with RTTOV v12.3. -# There have been non-backwards compatible changes throughout the -# life-cycle of RTTOV (and more to be expected), so changing the -# RTTOV version will required additional effort to get running. -# You should install RTTOV 12.3 with HDF support. See -# https://www.nwpsaf.eu/site/software/rttov/ -# for more information on installing RTTOV. 
- # MODIFY THE FOLLOWING VARIABLES FOR YOUR SYSTEM: -# If your NETCDF, HDFEOS5, or RTTOV environment variables are not set, +# If your NETCDF, HDF5, or RTTOV environment variables are not set, # uncomment the following line and set value to where lib and include # are found for the netcdf files that match this compiler. # Since netCDF can be built with HDF5, many systems have the HDF5 # installation in the same place as netCDF. # # NETCDF = /usr/lib/x86_64-linux-gnu -# HDFEOS5 = /usr/include/hdf5/serial -# RTTOV = ~/research/satellite/rttov - -RTTOV_VERSION = 12 - -HDFEOS5 = /Users/thoar/gnu/gnu-9.2.0/ +# HDF5 = $(NETCDF) +# RTTOV = /glade/campaign/cisl/dares/libraries/rttov132_gfortran/ -# You will likely not need to modify below this line +RTTOV_VERSION = 13 RTLIBS = -lrttov$(RTTOV_VERSION)_wrapper -lrttov$(RTTOV_VERSION)_mw_scatt -lrttov$(RTTOV_VERSION)_brdf_atlas \ -lrttov$(RTTOV_VERSION)_emis_atlas -lrttov$(RTTOV_VERSION)_other -lrttov$(RTTOV_VERSION)_parallel \ -lrttov$(RTTOV_VERSION)_coef_io -lrttov$(RTTOV_VERSION)_hdf -lrttov$(RTTOV_VERSION)_main -INCS = -I$(NETCDF)/include -I$(HDFEOS5)/include -I$(RTTOV)/include -I$(RTTOV)/mod +INCS = -I$(NETCDF)/include -I$(HDF5)/include -I$(RTTOV)/include -I$(RTTOV)/mod +# Note some versions of hdf5 need -lhdf5hl_fortran instead of -lhdf5_hl_fortran LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf \ - -L$(HDFEOS5)/lib -lhdf5hl_fortran -lhdf5_hl -lhdf5_fortran -lhdf5 \ + -L$(HDF5)/lib -lhdf5_hl_fortran -lhdf5_hl -lhdf5_fortran -lhdf5 \ -L$(RTTOV)/lib $(RTLIBS) FFLAGS = -O2 -ffree-line-length-none $(INCS) diff --git a/build_templates/mkmf.template.rttov.intel b/build_templates/mkmf.template.rttov.intel index 4910abf33b..10d4185f16 100644 --- a/build_templates/mkmf.template.rttov.intel +++ b/build_templates/mkmf.template.rttov.intel @@ -4,143 +4,33 @@ # by UCAR, "as is", without charge, subject to all terms of use at # http://www.image.ucar.edu/DAReS/DART/DART_download -# typical use with mkmf -# mkmf -t mkmf.template.xxxx ... -# -# FFLAGS useful for DEBUGGING. NOTE: The intel compiler can provide a lot more -# information if you LEAVE the object and module files intact. -# Do not remove the *.o and *.mod files when debugging code. -# -# -g include debugging information. these are all synonyms. -# -debug full -# -debug all -# -O0 setting -g will make this the default (no optimization). -# it is possible to set -g and then explicitly set -O2 if -# the behavior being debugged depends on optimization changes. -# -ftrapuv traps if a local variable is used before being set -# -C enables all runtime checks. -C and -check all are synonyms. -# -check all -# -check enables/disables more specific runtime checks. -# keywords: [arg_temp_created,bounds,overflow,format,pointers,uninit] -# -warn the level of warning messages issued. -# keywords: [alignments, argument_checking, declarations, -# errors, fileopt, general, ignore_loc, -# stderrors, truncated_source, uncalled, -# uninitialized, unused, usage, all] -# -fp-stack-check catches conditions where the FP stack is not correct. -# Typically this is when a real function is called as if it were a -# subroutine, OR a subroutine is called as if it were a function (return -# values left of FP stack OR too much data is taken off the FP stack) -# -vec-reportN controls how much diagnostic output is printed about -# loops vectorized by the compiler. N = 0 is silent, -# N can have values up to 5. 
-# -traceback tells the compiler to generate extra information in the -# object file to provide source file traceback information -# when a severe error occurs at run time -# -# FFLAGS useful for bitwise reproducibility and accuracy control -# (these will slow down performance to various degrees) -# -fp-model precise control how floating point roundoff is done so it is -# reproducible from run to run. in simple tests this -# flag alone was enough to create bitwise reproducible -# code but slowed execution significantly. -# -ftz 'flush to zero' underflows result in zero. set by default if -# any -O other than -O0 set, or if -fpe0 or -fpe1 set. -# -fpeN controls floating point exception handling. -fpe0 rounds underflow -# to zero and traps on any other exception type. -# -pc80 set internal FPU precision to 64 bit significand -# (default is -pc64 with 53 internal bits) -# -# FFLAGS useful for production -# -O2 default. optimize without too much unrepeatable numerical games -# -O3 more aggressive optimizations. check numerical differences -# before using this indiscriminately. -# -O1 if you get compile-time errors about out of memory or unable to -# complete compilation because of complexity, try lowering the -# optimization level on the offending source files. -# -ipo enable optimizations between routines in separate source files -# -heap-arrays 10 allocate large arrays from the heap instead of putting them -# on the stack. the number is the limit in KB for when arrays -# move from the stack to the heap. this can help if you get stack -# overflow errors and cannot increase the stack size more. -# allocating from the stack is faster, but it's usually a smaller -# size than the heap. -# -x, -m, -ax, -mcode, -march all these flags tell the compiler to generate -# processor-specific or vector instructions. either 'man ifort' or -# ifort --help to see what the current list of options are and -# which have priority over the others. -# (for those running on yellowstone, -axavx will enable the advanced -# vector instructions available on the sandy bridge processors.) -# -assume buffered_io allows the runtime library to buffer up individual -# writes before calling the operating system. in particular, we -# write our observation sequence files as a series of many individual -# calls to the write() routine. when debugging you do not want to -# buffer so you can see the last output before the program dies. -# for production, however, you want to batch up writes into larger -# blocks before stopping to do i/o to disk. an alternative at -# runtime is to set FORT_BUFFERED to 'true' in your environment. -# (e.g. csh family: setenv FORT_BUFFERED true or -# ksh family: export FORT_BUFFERED=true). -# -# FFLAGS possibly useful, not normally used by DART -# -fpp run Fortran preprocessor on source files prior to compilation -# -free interpret source as free-format, regardless of file extension -# -r8 specify default real size. note that for DART we use explicit -# types on all our real values so this will not change anything -# inside DART. see DART/common/types_mod.f90 if you must run -# with single precision reals. -# -convert big_endian useful if you're on the wrong architecture. -# however this controls both reading and writing so you can't -# use it as a conversion mechanism unless you write files out -# in ascii format. applies to all unformatted fortran i/o. -# -assume byterecl ... more 'industry-standard' direct-access behavior -# controls what units the RECL (record length) specifier returns. 
-#
-# Runtime environment variables that influence the compiler behavior:
-#
-# Make output lines for fortran write statements longer without wrapping:
-#    setenv FORT_FMT_RECL 512  (or any length)
-#
-# IF YOU HAVE MORE CURRENT COMPILER INFORMATION, PLEASE SHARE IT WITH US.
-
 MPIFC = mpif90
 MPILD = mpif90
 FC = ifort
 LD = ifort
 
-# DISCUSSION ABOUT RTTOV. DART is designed to work with RTTOV v12.3.
-# There have been non-backwards compatible changes throughout the
-# life-cycle of RTTOV (and more to be expected), so changing the
-# RTTOV version will required additional effort to get running.
-# You should install RTTOV 12.3 with HDF support. See
-# https://www.nwpsaf.eu/site/software/rttov/
-# for more information on installing RTTOV.
-
 # MODIFY THE FOLLOWING VARIABLES FOR YOUR SYSTEM:
-# If your NETCDF, HDFEOS5, or RTTOV environment variables are not set,
+# If your NETCDF, HDF5, or RTTOV environment variables are not set,
 # uncomment the following line and set value to where lib and include
 # are found for the netcdf files that match this compiler.
 # Since netCDF can be built with HDF5, many systems have the HDF5
 # installation in the same place as netCDF.
 #
 # NETCDF = /opt/local
-# HDFEOS5 = $(NETCDF)
-# RTTOV = ~/research/satellite/rttov
-
-RTTOV_VERSION = 12
-
-HDFEOS5 = /glade/u/apps/ch/opt/hdf-eos5/5.1.16/intel/19.0.5
+# HDF5 = $(NETCDF)
+# RTTOV = /glade/campaign/cisl/dares/libraries/rttov132_intel/
 
-# You will likely not need to modify below this line
+RTTOV_VERSION = 13
 
 RTLIBS = -lrttov$(RTTOV_VERSION)_wrapper -lrttov$(RTTOV_VERSION)_mw_scatt -lrttov$(RTTOV_VERSION)_brdf_atlas \
          -lrttov$(RTTOV_VERSION)_emis_atlas -lrttov$(RTTOV_VERSION)_other -lrttov$(RTTOV_VERSION)_parallel \
          -lrttov$(RTTOV_VERSION)_coef_io -lrttov$(RTTOV_VERSION)_hdf -lrttov$(RTTOV_VERSION)_main
 
-INCS = -I$(NETCDF)/include -I$(HDFEOS5)/include -I$(RTTOV)/include -I$(RTTOV)/mod
+INCS = -I$(NETCDF)/include -I$(HDF5)/include -I$(RTTOV)/include -I$(RTTOV)/mod
 
+# Note some versions of hdf5 need -lhdf5hl_fortran instead of -lhdf5_hl_fortran
 LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf \
-       -L$(HDFEOS5)/lib -lhdf5hl_fortran -lhdf5_hl -lhdf5_fortran -lhdf5 \
+       -L$(HDF5)/lib -lhdf5_hl_fortran -lhdf5_hl -lhdf5_fortran -lhdf5 \
        -L$(RTTOV)/lib $(RTLIBS)
 
 FFLAGS = -O2 -assume buffered_io $(INCS)
diff --git a/conf.py b/conf.py
index cf4bf8f786..6b2fdb2ab3 100644
--- a/conf.py
+++ b/conf.py
@@ -21,7 +21,7 @@ author = 'Data Assimilation Research Section'
 
 # The full version, including alpha/beta/rc tags
-release = '11.0.0'
+release = '11.5.0'
 
 root_doc = 'index'
 
 # -- General configuration ---------------------------------------------------
diff --git a/guide/Dockerfile b/guide/Dockerfile
deleted file mode 100644
index 648bb14282..0000000000
--- a/guide/Dockerfile
+++ /dev/null
@@ -1,5 +0,0 @@
-FROM jekyll/jekyll:3.8.5
-
-WORKDIR /project
-
-CMD jekyll serve
diff --git a/guide/Gemfile b/guide/Gemfile
deleted file mode 100644
index 2373bec332..0000000000
--- a/guide/Gemfile
+++ /dev/null
@@ -1,28 +0,0 @@
-source "https://rubygems.org"
-
-# Hello! This is where you manage which Jekyll version is used to run.
-# When you want to use a different version, change it below, save the
-# file and run `bundle install`. Run Jekyll with `bundle exec`, like so:
-#
-#     bundle exec jekyll serve
-#
-# This will help ensure the proper Jekyll version is running.
-# Happy Jekylling!
-gem "jekyll", "~> 3.8.5"
-
-# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
-# uncomment the line below. To upgrade, run `bundle update github-pages`.
-# gem "github-pages", group: :jekyll_plugins - -# If you have any plugins, put them here! -group :jekyll_plugins do - gem "jekyll-feed", "~> 0.6" - gem "jekyll-remote-theme", "~> 0.3" -end - -# Windows does not include zoneinfo files, so bundle the tzinfo-data gem -gem "tzinfo-data", platforms: [:mingw, :mswin, :x64_mingw, :jruby] - -# Performance-booster for watching directories on Windows -gem "wdm", "~> 0.1.0" if Gem.win_platform? - diff --git a/guide/_api/ncar.css b/guide/_api/ncar.css deleted file mode 100644 index 6e225d11bc..0000000000 --- a/guide/_api/ncar.css +++ /dev/null @@ -1,39 +0,0 @@ -.navbar-inverse { - background-color: whitesmoke; -} - -.navbar-inverse .navbar-brand { - color: black; -} -.navbar-inverse .navbar-brand:hover { - color: grey; -} -.navbar-inverse .navbar-nav>li>a { - color: black; -} - -.navbar-inverse .navbar-nav > li > a:hover, -.navbar-inverse .navbar-nav > li > a:focus { - color: grey; - background-color: transparent; -} - -body { - background-image: url(https://ncar.ucar.edu/profiles/custom/ncar_ucar_umbrella/themes/custom/koru/libraries/koru-base/img/bg-body-ncar-v2.png); - background-position: top right; - background-repeat: repeat-y; - background-color: #dbe2e9; -} - -footer { - background-color: #414143; - color: #fff; -} - -img { - height: 100px; -} - -.navbar-form .form-control { - width: 100px; -} diff --git a/guide/_config.yml b/guide/_config.yml deleted file mode 100644 index 2ce3266b64..0000000000 --- a/guide/_config.yml +++ /dev/null @@ -1,46 +0,0 @@ -# Welcome to Jekyll! -# -# This config file is meant for settings that affect your whole blog, values -# which you are expected to set up once and rarely edit after that. If you find -# yourself editing this file very often, consider using Jekyll's data files -# feature for the data you need to update frequently. -# -# For technical reasons, this file is *NOT* reloaded automatically when you use -# 'bundle exec jekyll serve'. If you change this file, please restart the server process. - -# Site settings -# These are used to personalize your new site. If you look in the HTML files, -# you will see them accessed via {{ site.title }}, {{ site.email }}, and so on. -# You can create any custom variable you would like, and they will be accessible -# in the templates via {{ site.myvariable }}. -title: DART -email: dart@ucar.edu -description: >- # this means to ignore newlines until "baseurl:" - Write an awesome description for your new site here. You can edit this - line in _config.yml. It will appear in your document head meta (for - Google search results) and in your feed.xml site description. -baseurl: "" # the subpath of your site, e.g. /blog -url: "" # the base hostname & protocol for your site, e.g. http://example.com -github_username: - -# Build settings -markdown: kramdown -remote_theme: ncar/koru-jekyll@1.0.33 -plugins: - - jekyll-feed - - jekyll-remote-theme - - jemoji -collections: - api: - output: true -# Exclude from processing. -# The following items will not be processed, by default. Create a custom list -# to override the default setting. -# exclude: -# - Gemfile -# - Gemfile.lock -# - node_modules -# - vendor/bundle/ -# - vendor/cache/ -# - vendor/gems/ -# - vendor/ruby/ diff --git a/guide/_data/mainmenu.yml b/guide/_data/mainmenu.yml deleted file mode 100644 index 1d2ea7b684..0000000000 --- a/guide/_data/mainmenu.yml +++ /dev/null @@ -1,65 +0,0 @@ -# For this data file to work on GitHub pages you need to include the name of the repository in the url -# e.g. 
//pages/Getting_Started.html -# excluding the repository name makes it easier to navigate when building locally -menu: - - title: Documentation - submenu: - - title: Models - url: /pages/Models.html - - title: Observations - url: /pages/Observations.html - - title: Diagnostics - url: /pages/Diagnostics.html - - title: Radiance Support - url: /pages/Radiance_support.html - - title: Learning - submenu: - - title: Getting Started - url: /pages/Getting_Started.html - - title: "DART_LAB: explore ensemble DA concepts with MATLAB" - url: /pages/dart_lab.html - - title: "Tutorial: learn how to use DART" - url: /pages/Tutorial.html - - title: Research - url: /pages/Research.html - submenu: - - title: Publications - url: /pages/Publications.html - - title: Presentations - url: /pages/Presentations.html - - title: CESM and DART - url: /pages/CESM_DART_guidelines.html - - title: WRF and DART - url: /pages/WRF_DART_guidelines.html - - title: Radiance Support - url: /pages/Radiance_support.html - - title: About Us - url: /pages/About_Us.html - - title: Support - url: /pages/support.html - submenu: - - title: DART Parallelism - url: /pages/dart_mpi.html - - title: Frequently Asked Questions - url: /pages/Miscellany.html - - title: Releases - url: /pages/release.html - submenu: - - title: Manhattan (newest) - url: https://github.com/NCAR/DART/releases/tag/v9.9.0 - - title: Lanai - url: https://github.com/NCAR/DART/releases/tag/v8.4.1 - - title: Kodiak - url: https://github.com/NCAR/DART/releases/tag/v7.2.2 - - title: Jamaica - url: https://github.com/NCAR/DART/releases/tag/v6.0.1 - - title: Iceland - url: https://github.com/NCAR/DART/releases/tag/v5.1.0 - - title: Hawaii - url: https://github.com/NCAR/DART/releases/tag/v4.1.0 - - title: Guam - url: https://github.com/NCAR/DART/releases/tag/v3.0.0 - - title: Fiji - url: https://github.com/NCAR/DART/releases/tag/v2.0.0 - - title: Easter (oldest) - url: https://github.com/NCAR/DART/releases/tag/v1.0.0 diff --git a/guide/docker-compose.yml b/guide/docker-compose.yml deleted file mode 100644 index 441bef0859..0000000000 --- a/guide/docker-compose.yml +++ /dev/null @@ -1,13 +0,0 @@ -version: '3.6' - -services: - web: - container_name: koru-jekyll-site - image: koru-jekyll-site:latest - build: - context: . - dockerfile: Dockerfile - volumes: - - ./:/project - ports: - - "4000:4000" diff --git a/guide/ford_config.md b/guide/ford_config.md deleted file mode 100644 index 9e7da02794..0000000000 --- a/guide/ford_config.md +++ /dev/null @@ -1,27 +0,0 @@ -src_dir: ../. 
-exclude_dir: ../docs/_api - ../docs/_site/api - ../models/bgrid_solo/fms_src - ../observations/obs_converters/NCEP - ../assimilation_code/programs/system_simulation -exclude: mpisetup.f90 - obs_seq.F -output_dir: ./ford_output -page_dir: ./pages -project: DART -project_github: https://github.com/NCAR/DART -project_website: https://ncar.github.io/DART -summary: **D**ata **A**ssimilation **R**esearch **T**estbed -author: DAReS -author_description: The NCAR Data Assimilation Research Section -display: public -source: false -graph: true -graph_maxdepth: 3 -coloured_edges: true -search: true -md_extensions: markdown.extensions.toc -preprocessor: gfortran -css: ./_api/ncar.css - -Click Documentation for release notes, getting started, and other documentation diff --git a/guide/index.md b/guide/index.md deleted file mode 100644 index 5b8e3c7cea..0000000000 --- a/guide/index.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -layout: frontpage -title: Home -banner-title: Welcome to DART -banner-description: "DART has been reformulated to better support the ensemble data -assimilation needs of researchers who are interested in native netCDF support, -less filesystem I/O, better computational performance, good scaling for large -processor counts, and support for the memory requirements of very large models. -Manhattan has support for many of our larger models -(WRF, POP, CAM, CICE, CLM, ROMS, MPAS_ATM, ...) -with many more being added as time permits." -banner-button-text: Download -banner-button-url: https://github.com/NCAR/DART ---- - -# The Data Assimilation Research Testbed (DART) - -DART is a community facility for ensemble DA developed and maintained by -the Data Assimilation Research Section (DAReS) at the National Center for -Atmospheric Research (NCAR). DART provides modelers, observational scientists, -and geophysicists with powerful, flexible DA tools that are easy to implement -and use and can be customized to support efficient operational DA applications. -DART is a software environment that makes it easy to explore a variety of -data assimilation methods and observations with different numerical models -and is designed to facilitate the combination of assimilation algorithms, -models, and real (as well as synthetic) observations to allow increased -understanding of all three. DART includes extensive documentation, a -comprehensive tutorial, and a variety of models and observation sets that -can be used to introduce new users or graduate students to ensemble DA. -DART also provides a framework for developing, testing, and distributing -advances in ensemble DA to a broad community of users by removing the -implementation-specific peculiarities of one-off DA systems. - -
-assim graphic  -manhattan image  -spaghetti -
- -DART is a software environment for making it easy to match a variety of data -assimiliation methods to different numerical models and different kinds of -observations. DART has been through the crucible of many compilers and platforms. -It is ready for friendly use and has been used in several field programs -requiring real-time forecasting. - - - - - - - - - - -
workflow1 -DART employs a modular programming approach to apply an Ensemble Kalman Filter -which modifies the underlying models toward a state that is more consistent with -information from a set of observations. Models may be swapped in and out, as can -different algorithms in the Ensemble Kalman Filter. The method requires running -multiple instances of a model to generate an ensemble of states. A forward -operator appropriate for the type of observation being assimilated is applied -to each of the states to generate the model's estimate of the observation. -workflow2
- -The DART algorithms are designed so that incorporating new models and new -observation types requires minimal coding of a small set of interface -routines, and does not require modification of the existing model code. -Several comprehensive atmosphere and ocean general circulation models (GCMs) -have been added to DART by modelers from outside of NCAR, in some cases with -less than one person-month of development effort. Forward operators for new -observation types can be created in a fashion that is nearly independent of -the forecast model, many of the standard operators are available -'out of the box' and will work with no additional coding. DART has been -through the crucible of many compilers and platforms. It is ready for -friendly use and has been used in several field programs requiring -real-time forecasting. The DART programs have been compiled with many -Fortran 90 compilers and have run on linux compute-servers, linux clusters, -OSX laptops/desktops, SGI Altix clusters, IBM supercomputers based on both -Power and Intel CPUs, and Cray supercomputers. diff --git a/guide/index.shtml b/guide/index.shtml deleted file mode 100644 index 4051e0f983..0000000000 --- a/guide/index.shtml +++ /dev/null @@ -1,258 +0,0 @@ -+---------------------------------------+---------------------------------------+---------------------------------------+ -| | | .. rubric:: The Data Assimilation | -| | | Research Testbed -- DART | -| | | :name: the-data- | -| | | assimilation-research-testbed----dart | -| | | | -| | | |cool spaghetti plot of North America | -| | | demonstrating the uncertainty among | -| | | the ensemble members| | -| | | The 500hPa geopotential height from | -| | | 20 ensemble members of an 80 member | -| | | experiment with a T85 resolution of | -| | | CAM (00Z 01 Feb 2003). The contour | -| | | sets are from 5320 to 5800 by 80. | -+---------------------------------------+---------------------------------------+---------------------------------------+ - -| - -Quick guide topics ------------------- - -- You are looking for some `introductory materials `__ on the general - concepts of Data Assimilation. -- You want to explore the example models and observations which are included with the DART software distribution. -- You have an existing model and/or data observations and would like to experiment with assimilating them with the DART - software. -- You are a current DART user and want to download the latest updates. -- You are interested in using the DART software and tutorial materials to teach a class using Data Assimilation (or - would like to use the DART tutorial material to teach yourself). -- You are interested in learning how DART :doc:`./mpi_intro`. -- You want to contact our group for more information. - -Getting started ---------------- - -The DART software provides a flexible, extensible framework for conducting data assimilation research on a wide variety -of models and observations. In order to facilitate the incorporation of new models, which in the Geoscience community -are frequently written in Fortran 90, the DART software is written primarily in Fortran 90. Control scripts are -primarily in C Shell, and the distribution includes Matlab® diagnostic scripts. - -The DART system comes with many models -- ranging from 1-dimensional Lorenz systems to full global atmospheric and ocean -models. DART also has extensive tutorial materials that explain typical DART experiments and explores many aspects of -ensemble data assimilation. 
Download the DART source code and see the :doc:`./Lanai_release` for instructions on how to -build an executable, run the "workshop" experiments, and look at the results. The ``DART_LAB`` directory contains -presentation slides and interactive MATLAB demonstrations which illustrate and explain the fundamentals of the ensemble -data assimilation algorithms. The ``tutorial`` directory contains a series of PDF files which go into more mathematical -detail on various ensemble data assimilation topics, and specifics on the DART implementation. - -DART requirements ------------------ - -| DART is intended to be highly portable but has a strong Unix/Linux preference. DART has been run successfully on - Windows machines under the cygwin environment. Those instructions are under development - if you would like to be a - friendly beta-tester please send me (Tim Hoar) an email and I'll send you the instructions, as long as you promise to - provide feedback (good or bad!) so I can improve them. My email is thoar @ ucar . edu - minus the spaces, naturally. -| Minimally, you will need a Fortran90 compiler and the netCDF libraries built with the F90 interface. History has shown - that it is a very good idea to make sure your run-time environment has the following: - -.. container:: unix - - limit stacksize unlimited - limit datasize unlimited - -| If you want to run your own model, all you need is an executable and some scripts to interface with DART - we have - templates and examples. If your model can be called as a subroutine, *life is good*, and the hardest part is usually a - routine to parse the model state vector into one whopping array - and back. Again - we have templates, examples, and a - document describing the required interfaces. That document exists in the DART code - ``DART/models/model_mod.html`` - - as does all the most current documentation. Almost every DART program/module has a matching piece of documentation. -| Starting with the Jamaica release there is an option to compile with the MPI (Message Passing Interface) libraries in - order to run the assimilation step in parallel on hardware with multiple CPUs. Note that this is optional; MPI is not - required to run DART. If you do want to run in parallel then we also require a working MPI library and appropriate - cluster or SMP hardware. See the :doc:`./mpi_intro` for more information on running with the MPI option. -| One of the beauties of ensemble data assimilation is that even if (particularly if) your model is single-threaded, you - can still run efficiently on parallel machines by dealing out each ensemble member (an unique instance of the model) - to a separate processor. If your model cannot run single-threaded, fear not, DART can do that too. - -DART platforms/compilers/batch systems --------------------------------------- - -We work to keep the DART code highly portable. We avoid compiler-specific constructs, require no system-specific -functions, and try as much as possible to be easy to build on new platforms. - -DART has been compiled and run on Apple laptops and workstations, Linux clusters small and large, SGI Altix systems, IBM -Power systems, IBM Intel systems, Cray systems. - -DART has been compiled with compilers from Intel, PGI, Cray, GNU, IBM, Pathscale. - -MPI versions of DART have run under batch systems including LSF, PBS, Moab/Torque, and Sun Grid Engine. - -We have run successfully on a Windows machine under the ``cygwin`` environment. If you are interested in this, please -`contact us `__. 
- -DART code distributions -======================= - -The DART code is distributed via a Subversion (**SVN**) repository. Anonymous access is allowed, and the repository code -is read-only for everyone except the DART development team. DART is distributed as source code, so you must be prepared -to build the parts of the system you need before you can run it. - -Using subversion makes it easy to update and compare your checked-out version of the code with the latest repository -version of the code. If you are not familiar with the ``svn`` command (the client application of subversion), you should -take a stroll through Tim's `svn primer `__. Or there are several GUI -programs that help you manage, check out, and merge subversion distributions. If you cannot use ``svn`` (e.g. because -you are behind a firewall that does not permit subversion access), please `email the DART team `__ -and we may be able to send you a tar file as a last resort. - -The DART development team keeps released versions of the code which are stable (don't change) except for bug fixes. -Generally we recommend users check out one of these versions. The DART development team makes frequent updates to the -trunk version of the code as new features are developed. Those users who want to use recently added features are welcome -to check out the trunk, but they should be prepared to work around possible non-backwards compatible changes and more -lightly tested code than the released versions. - -DART continues to evolve. We request that you register using `this web page `__ and -afterwards you will be redirected to instructions on how to download a version of the DART code. Registration helps us -track how many people are using our code, and allows us to contact current users in case of bugs or major updates. The -DART mailing list is a **very low-traffic** list -- perhaps 4 emails per year -- so PLEASE use a real email address when -signing up. We solemnly swear to protect your email address like it is our own! Even local NCAR users or users who have -registered in the past are encouraged to reregister when downloading new versions. Thank you for your understanding. - -| - -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| version | date | instructions | most noteable change(s) | -+=============================+=============================+=============================+=============================+ -| `lanai `__ | | | climate components under | -| | | | the CESM framework; the | -| | | | MPAS models, the NOAH land | -| | | | model, the GITM ionosphere | -| | | | model, the NOGAPS | -| | | | atmosphere model, the NAAPS | -| | | | aerosol model, and the SQG | -| | | | surface quasi-geostrophic | -| | | | model. Support for many new | -| | | | chemistry and aerosol | -| | | | observation types, support | -| | | | for many new observations | -| | | | sources, many new | -| | | | diagnostic routines, and | -| | | | new utilities. | -| | | | `change | -| | | | log `__ | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| `trunk `__ | | | --revision ####:HEAD* to | -| | | | see log messages about | -| | | | changes since revision | -| | | | ####. 
| -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| `kodiak `__ | | :`./history/Kodiak_release` | Error Correction, Boxcar | -| | | | Kernel Filter option, | -| | | | support for new models, new | -| | | | observation types, new | -| | | | diagnostics, new utilities. | -| | | | `change | -| | | | log `__ | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| jamaica | 12 Apr 2007 | :doc: | vertical localization, | -| | | `./history/Jamaica_release` | extensive testing of MPI | -| | | | implementation, full | -| | | | documentation for new | -| | | | algorithms, new tutorial | -| | | | sections | -| | | | :doc:`./hi | -| | | | story/Jamaica_diffs_from_I` | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| pre_j | 02 Oct 2006 | :do | contains an updated | -| | | c:`./history/pre_j_release` | scalable filter algorithm | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| post_iceland | 20 Jun 2006 | :doc:`./hi | observation-space adaptive | -| | | story/Post_Iceland_release` | inflation, bug fixes, | -| | | | obs_sequence_tool support | -| | | | ... | -| | | | :doc:`./ | -| | | | history/PostI_diffs_from_I` | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| iceland | 23 Nov 2005 | :doc: | huge expansion of real | -| | | `./history/Iceland_release` | observation capability | -| | | | better namelist processing, | -| | | | PBL_1d available. | -| | | | :doc:`./his | -| | | | tory/I_diffs_from_workshop` | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| pre_iceland | 20 Oct 2005 | for developers only | huge expansion of real | -| | | | observation capability | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| DA workshop 2005 | 13 Jun 2005 | docs included in distrib. | tutorial directory in | -| | | | distribution, observation | -| | | | preprocessing | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| hawaii | 28 Feb 2005 | :doc | new filtering algorithms | -| | | :`./history/hawaii_release` | | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| pre-hawaii | 20 Dec 2004 | :doc:`./ | new filtering algorithms | -| | | history/pre_hawaii_release` | | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| guam | 12 Aug 2004 | :d | new observation modules, | -| | | oc:`./history/Guam_release` | removing autopromotion | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| fiji | 29 Apr 2004 | :d | enhanced portability, CAM, | -| | | oc:`./history/Fiji_release` | WRF | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| | | | | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ -| easter | 8 March 2004 | :doc:`. 
| initial release | -| | | /history/ASP_DART_exercise` | | -+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ - -| - -DART tutorial materials and presentations ------------------------------------------ - -| The DART system comes with an extensive set of tutorial materials, working models with several different levels of - complexity, and data to be assimilated. It has been used in several multi-day workshops and can be used as the basis - to teach a section on Data Assimilation. Download the DART software distribution and look in the ``DART_LAB`` - subdirectory for pdf and powerpoint presentations, and MATLAB GUI point-and-click examples and hands-on - demonstrations. Also look in the ``tutorial`` subdirectory for pdf files for each of the 22 tutorial sections. -| **Browsing the DART_LAB and tutorial directories in the distribution is worth the effort. Doing the tutorials is even - better!** - -Presentations about DART ------------------------- - -The full list of presentations (as well as some of the presentations themselves) and publications is available on our -`Publications `__ page. - -Related links -------------- - -- `FMS group of GFDL `__ -- `ESMF -- Earth System Modeling Framework `__ -- `UK Met Office Fortran 90 Standards `__ -- `WEG -- NCAR's Web Engineering Group `__ -- `ncview -- a visual browser for netCDF files `__ - -DART contact list ------------------ - -We're a small group, so the contact list is pretty short. Our central email contact is dart@ucar.edu. Or if you want to -contact us individually, here is our information: - -+------------------+------------------+------------------+------------------+------------------+------------------+ -| lead scientist | general / | general / | large processor | CAM | WRF | -| and manager | diagnostics | platforms / mpi | count systems | | | -+==================+==================+==================+==================+==================+==================+ -| Jeff Anderson | Tim Hoar | Nancy Collins | Helen Kershaw | Kevin Raeder | Glen Romine | -+------------------+------------------+------------------+------------------+------------------+------------------+ -| jla @ ucar . edu | thoar @ ucar . | nancy @ ucar . | hkershaw @ ucar | raeder @ ucar . | romine @ ucar . | -| | edu | edu | . edu | edu | edu | -+------------------+------------------+------------------+------------------+------------------+------------------+ - -| - -.. |cool spaghetti plot of North America demonstrating the uncertainty among the ensemble members| image:: ../images/DARTspaghettiSquare.gif diff --git a/guide/matlab-observation-space.rst b/guide/matlab-observation-space.rst index e3f255dc4c..853fc535d7 100644 --- a/guide/matlab-observation-space.rst +++ b/guide/matlab-observation-space.rst @@ -2,6 +2,8 @@ MATLAB observation space diagnostics #################################### +.. _configMatlab: + Configuring MATLAB ================== diff --git a/guide/preprocess-program.rst b/guide/preprocess-program.rst index 7a4f5e9349..9381c528cb 100644 --- a/guide/preprocess-program.rst +++ b/guide/preprocess-program.rst @@ -1,3 +1,5 @@ +.. 
_preprocess:
+
 How DART supports different types of observations: the preprocess program
 =========================================================================
 
diff --git a/index.rst b/index.rst
index 417ee8bee3..a3dcbd6d5c 100644
--- a/index.rst
+++ b/index.rst
@@ -246,8 +246,8 @@ Citing DART
 
 Cite DART using the following text:
 
-  The Data Assimilation Research Testbed (Version X.Y.Z) [Software]. (2019).
-  Boulder, Colorado: UCAR/NCAR/CISL/DAReS. http://doi.org/10.5065/D6WQ0202
+  The Data Assimilation Research Testbed (Version X.Y.Z) [Software]. (2024).
+  Boulder, Colorado: UCAR/NSF NCAR/CISL/DAReS. http://doi.org/10.5065/D6WQ0202
 
 Update the DART version and year as appropriate.
 
@@ -439,6 +439,7 @@ References
    :hidden:
 
    models/9var/readme
+   models/aether_lat-lon/readme
    models/am2/readme
    models/bgrid_solo/readme
    models/cam-fv/readme
@@ -480,6 +481,7 @@ References
    models/POP/dart_pop_mod
    models/ROMS/readme
    models/rose/readme
+   models/seir/readme
    models/simple_advection/readme
    models/sqg/readme
    models/template/new_model
diff --git a/models/MITgcm_ocean/model_mod.f90 b/models/MITgcm_ocean/model_mod.f90
index cc331593f5..85d635e520 100644
--- a/models/MITgcm_ocean/model_mod.f90
+++ b/models/MITgcm_ocean/model_mod.f90
@@ -20,13 +20,15 @@ module model_mod
                           get_close_state, get_close_obs, set_location, &
                           VERTISHEIGHT, get_location, is_vertical, &
                           convert_vertical_obs, convert_vertical_state
-
+! EL use only nc_check was here, deleted for now for testing
 use utilities_mod, only : error_handler, E_ERR, E_WARN, E_MSG, &
-                          logfileunit, get_unit, nc_check, do_output, to_upper, &
+                          logfileunit, get_unit, do_output, to_upper, &
                           find_namelist_in_file, check_namelist_read, &
                           open_file, file_exist, find_textfile_dims, file_to_text, &
                           string_to_real, string_to_logical
 
+use netcdf_utilities_mod, only : nc_check
+
 use obs_kind_mod, only : QTY_TEMPERATURE, QTY_SALINITY, QTY_U_CURRENT_COMPONENT, &
                          QTY_V_CURRENT_COMPONENT, QTY_SEA_SURFACE_HEIGHT, &
                          QTY_NITRATE_CONCENTRATION, QTY_SURFACE_CHLOROPHYLL, &
@@ -54,7 +56,10 @@ module model_mod
                             get_index_start, get_index_end, &
                             get_dart_vector_index, get_num_variables, &
                             get_domain_size, &
-                            get_io_clamping_minval
+                            get_io_clamping_minval, get_kind_index
+
+use netcdf_utilities_mod, only : nc_open_file_readonly, nc_get_variable, &
+                                 nc_get_dimension_size, nc_close_file
 
 use netcdf
 
@@ -253,9 +258,16 @@ module model_mod
 ! standard MITgcm namelist and filled in here.
 integer :: Nx=-1, Ny=-1, Nz=-1 ! grid counts for each field
+integer :: comp2d = -1, comp3d = -1, comp3dU = -1, comp3dV = -1 ! size of compressed variables
 
 ! locations of cell centers (C) and edges (G) for each axis.
 real(r8), allocatable :: XC(:), XG(:), YC(:), YG(:), ZC(:), ZG(:)
+real(r4), allocatable :: XC_sq(:), YC_sq(:), XG_sq(:), YG_sq(:)
+real(r8), allocatable :: ZC_sq(:)
+
+integer, allocatable :: Xc_Ti(:), Yc_Ti(:), Zc_Ti(:)
+integer, allocatable :: Xc_Ui(:), Yc_Ui(:), Zc_Ui(:)
+integer, allocatable :: Xc_Vi(:), Yc_Vi(:), Zc_Vi(:)
 
 real(r8) :: ocean_dynamics_timestep = 900.0_r4
 integer :: timestepcount = 0
@@ -275,7 +287,6 @@ module model_mod
 integer, parameter :: NUM_STATE_TABLE_COLUMNS = 5
 character(len=vtablenamelength) :: mitgcm_variables(NUM_STATE_TABLE_COLUMNS, MAX_STATE_VARIABLES ) = ' '
-
 character(len=256) :: model_shape_file = ' '
 integer :: assimilation_period_days = 7
 integer :: assimilation_period_seconds = 0
@@ -290,8 +301,9 @@ module model_mod
 logical :: go_to_dart = .false.
 logical :: do_bgc = .false.
 logical :: log_transform = .false.
+logical :: compress = .false.
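A note on the compressed-grid machinery introduced in the hunk above: with compress enabled, the DART state vector keeps only ocean (wet) points, and the integer index arrays declared there (Xc_Ti, Yc_Ti, Zc_Ti, plus their U- and V-grid counterparts) record the full-grid (i,j,k) location of each compressed position. The following is an illustrative sketch of that mapping only, assuming a hypothetical logical land/sea mask named wet; the shipped code does not build such a mask in model_mod, it reads the precomputed index arrays from model_shape_file instead:

   ! Sketch only: how a wet-point mask would induce the compressed index
   ! arrays. 'wet' is a hypothetical mask; in DART these arrays are read
   ! from the model_shape_file written by the converter.
   integer :: i, j, k, n
   logical :: wet(Nx,Ny,Nz)
   n = 0
   do k = 1, Nz
   do j = 1, Ny
   do i = 1, Nx
      if (wet(i,j,k)) then
         n = n + 1        ! position in the compressed state vector
         Xc_Ti(n) = i     ! full-grid indices of compressed point n
         Yc_Ti(n) = j
         Zc_Ti(n) = k
      endif
   enddo
   enddo
   enddo
   comp3d = n             ! length of the compressed 3D dimension

The model_mod.f90 hunks continue below, starting with the namelist that exposes the new flag.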
-namelist /trans_mitdart_nml/ go_to_dart, do_bgc, log_transform +namelist /trans_mitdart_nml/ go_to_dart, do_bgc, log_transform, compress ! /pkg/mdsio/mdsio_write_meta.F writes the .meta files type MIT_meta_type @@ -325,6 +337,7 @@ subroutine static_init_model() integer :: i, iunit, io integer :: ss, dd +integer :: ncid ! for reading compressed coordinates ! The Plan: ! @@ -526,13 +539,62 @@ subroutine static_init_model() domain_id = add_domain(model_shape_file, nvars, & var_names, quantity_list, clamp_vals, update_list ) +if (compress) then ! read in compressed coordinates + + ncid = nc_open_file_readonly(model_shape_file) + comp2d = nc_get_dimension_size(ncid, 'comp2d' , 'static_init_model', model_shape_file) + comp3d = nc_get_dimension_size(ncid, 'comp3d' , 'static_init_model', model_shape_file) + comp3dU = nc_get_dimension_size(ncid, 'comp3dU', 'static_init_model', model_shape_file) + comp3dV = nc_get_dimension_size(ncid, 'comp3dV', 'static_init_model', model_shape_file) + + allocate(XC_sq(comp3d)) + allocate(YC_sq(comp3d)) + allocate(ZC_sq(comp3d)) ! ZC is r8 + + allocate(XG_sq(comp3d)) + allocate(YG_sq(comp3d)) + + allocate(Xc_Ti(comp3d)) + allocate(Yc_Ti(comp3d)) + allocate(Zc_Ti(comp3d)) + + allocate(Xc_Ui(comp3dU)) + allocate(Yc_Ui(comp3dU)) + allocate(Zc_Ui(comp3dU)) + + allocate(Xc_Vi(comp3dV)) + allocate(Yc_Vi(comp3dV)) + allocate(Zc_Vi(comp3dV)) + + call nc_get_variable(ncid, 'XCcomp', XC_sq) + call nc_get_variable(ncid, 'YCcomp', YC_sq) + call nc_get_variable(ncid, 'ZCcomp', ZC_sq) + + call nc_get_variable(ncid, 'XGcomp', XG_sq) + call nc_get_variable(ncid, 'YGcomp', YG_sq) + + call nc_get_variable(ncid, 'Xcomp_ind', Xc_Ti) + call nc_get_variable(ncid, 'Ycomp_ind', Yc_Ti) + call nc_get_variable(ncid, 'Zcomp_ind', Zc_Ti) + + call nc_get_variable(ncid, 'Xcomp_indU', Xc_Ui) + call nc_get_variable(ncid, 'Ycomp_indU', Yc_Ui) + call nc_get_variable(ncid, 'Zcomp_indU', Zc_Ui) + + call nc_get_variable(ncid, 'Xcomp_indV', Xc_Vi) + call nc_get_variable(ncid, 'Ycomp_indV', Yc_Vi) + call nc_get_variable(ncid, 'Zcomp_indV', Zc_Vi) + + call nc_close_file(ncid) + +endif + model_size = get_domain_size(domain_id) if (do_output()) write(*,*) 'model_size = ', model_size end subroutine static_init_model - function get_model_size() !------------------------------------------------------------------ ! @@ -952,6 +1014,63 @@ function lon_dist(lon1, lon2) end function lon_dist +function get_compressed_dart_vector_index(iloc, jloc, kloc, dom_id, var_id) +!======================================================================= +! + +! returns the dart vector index for the compressed state + +integer, intent(in) :: iloc, jloc, kloc +integer, intent(in) :: dom_id, var_id +integer(i8) :: get_compressed_dart_vector_index + +integer :: i ! loop counter +integer :: qty +integer(i8) :: offset + +offset = get_index_start(dom_id, var_id) + +qty = get_kind_index(dom_id, var_id) + +get_compressed_dart_vector_index = -1 + +! MEG: Using the already established compressed indices +! +! 2D compressed variables +if (qty == QTY_SEA_SURFACE_HEIGHT .or. qty == QTY_SURFACE_CHLOROPHYLL ) then + do i = 1, comp2d + if (Xc_Ti(i) == iloc .and. Yc_Ti(i) == jloc .and. Zc_Ti(i) == 1) then + get_compressed_dart_vector_index = offset + i - 1 + endif + enddo + return +endif + +! 3D compressed variables +if (qty == QTY_U_CURRENT_COMPONENT) then + do i = 1, comp3dU + if (Xc_Ui(i) == iloc .and. Yc_Ui(i) == jloc .and. 
Zc_Ui(i) == kloc) then
+         get_compressed_dart_vector_index = offset + i - 1
+      endif
+   enddo
+elseif (qty == QTY_V_CURRENT_COMPONENT) then
+   do i = 1, comp3dV
+      if (Xc_Vi(i) == iloc .and. Yc_Vi(i) == jloc .and. Zc_Vi(i) == kloc) then
+         get_compressed_dart_vector_index = offset + i - 1
+      endif
+   enddo
+else
+   do i = 1, comp3d
+      if (Xc_Ti(i) == iloc .and. Yc_Ti(i) == jloc .and. Zc_Ti(i) == kloc) then
+         get_compressed_dart_vector_index = offset + i - 1
+      endif
+   enddo
+endif
+
+
+end function get_compressed_dart_vector_index
+
+
 function get_val(lon_index, lat_index, level, var_id, state_handle,ens_size, masked)
 !=======================================================================
 !
@@ -969,24 +1088,28 @@ function get_val(lon_index, lat_index, level, var_id, state_handle,ens_size, mas
 
 if ( .not. module_initialized ) call static_init_model
 
-state_index = get_dart_vector_index(lon_index, lat_index, level, domain_id, var_id)
-get_val = get_state(state_index,state_handle)
+masked = .false.
 
-! Masked returns false if the value is masked
-! A grid variable is assumed to be masked if its value is FVAL.
-! Just to maintain legacy, we also assume that A grid variable is assumed
-! to be masked if its value is exactly 0.
-! See discussion in lat_lon_interpolate.
+if (compress) then
 
-! MEG CAUTION: THE ABOVE STATEMENT IS INCORRECT
-! trans_mitdart already looks for 0.0 and makes them FVAL
-! So, in the condition below we don't need to check for zeros
-! The only mask is FVAL
-masked = .false.
-do i=1,ens_size
-!  if(get_val(i) == FVAL .or. get_val(i) == 0.0_r8 ) masked = .true.
-   if(get_val(i) == FVAL) masked = .true.
-enddo
+   state_index = get_compressed_dart_vector_index(lon_index, lat_index, level, domain_id, var_id)
+
+   if (state_index .ne. -1) then
+      get_val = get_state(state_index,state_handle)
+   else
+      masked = .true.
+   endif
+
+else
+
+   state_index = get_dart_vector_index(lon_index, lat_index, level, domain_id, var_id)
+   get_val = get_state(state_index,state_handle)
+
+   do i=1,ens_size ! HK this is checking the whole ensemble, can you have different masks for each ensemble member?
+      if(get_val(i) == FVAL) masked = .true.
+   enddo
+
+endif
 
 end function get_val
 
@@ -1077,16 +1200,28 @@ subroutine get_state_meta_data(index_in, location, qty)
 
 call get_model_variable_indices(index_in, iloc, jloc, kloc, kind_index = qty)
 
-lon = XC(iloc)
-lat = YC(jloc)
-depth = ZC(kloc)
+if (compress) then ! all variables are 1D
+   lon = XC_sq(iloc)
+   lat = YC_sq(iloc)
+   depth = ZC_sq(iloc)
+   ! Accounting for variables on staggered grids
+   if (qty == QTY_U_CURRENT_COMPONENT) lon = XG_sq(iloc)
+   if (qty == QTY_V_CURRENT_COMPONENT) lat = YG_sq(iloc)
+else
+
+   lon = XC(iloc)
+   lat = YC(jloc)
+   depth = ZC(kloc)
+
+   ! Accounting for variables on staggered grids
+   if (qty == QTY_U_CURRENT_COMPONENT) lon = XG(iloc)
+   if (qty == QTY_V_CURRENT_COMPONENT) lat = YG(jloc)
+
+endif
 
-! Acounting for surface variables and those on staggered grids
 ! MEG: check chl's depth here
 if (qty == QTY_SEA_SURFACE_HEIGHT .or. &
     qty == QTY_SURFACE_CHLOROPHYLL) depth = 0.0_r8
-if (qty == QTY_U_CURRENT_COMPONENT) lon = XG(iloc)
-if (qty == QTY_V_CURRENT_COMPONENT) lat = YG(jloc)
 
 location = set_location(lon, lat, depth, VERTISHEIGHT)
 
@@ -1295,6 +1430,8 @@ end subroutine nc_write_model_atts
 !------------------------------------------------------------------
 ! Create an ensemble of states from a single state.
+! Note if you perturb a compressed state, this will not be bitwise
+! identical to perturbing a non-compressed state.
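Why the note above holds: perturbation schemes of this kind typically draw one random number per state element in sequence, so the same seed applied to a shorter, wet-points-only vector lands on different elements than it does on the full grid. An illustrative sketch of the kind of loop involved, with hypothetical names (seed, num_my_elements, x); init_random_seq and random_gaussian are from DART's random_seq_mod:

   ! Sketch only: the same seed applied to state vectors of different
   ! length and ordering gives different member perturbations, so
   ! compressed and uncompressed runs cannot agree bitwise.
   type(random_seq_type) :: r
   integer :: i
   call init_random_seq(r, seed)
   do i = 1, num_my_elements   ! shorter when only wet points are kept
      x(i) = random_gaussian(r, x(i), pert_amp)
   enddo

The subroutine itself follows.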
subroutine pert_model_copies(state_ens_handle, ens_size, pert_amp, interf_provided)
 
 type(ensemble_type), intent(inout) :: state_ens_handle
diff --git a/models/MITgcm_ocean/model_mod.nml b/models/MITgcm_ocean/model_mod.nml
index ef64b88caa..03b81505ae 100644
--- a/models/MITgcm_ocean/model_mod.nml
+++ b/models/MITgcm_ocean/model_mod.nml
@@ -2,5 +2,6 @@
    assimilation_period_days = 7
    assimilation_period_seconds = 0
    model_perturbation_amplitude = 0.2
+   model_shape_file = 'mem01_reduced.nc'
 /
diff --git a/models/MITgcm_ocean/readme.rst b/models/MITgcm_ocean/readme.rst
index d63e56fea0..c6b7dc3dfd 100644
--- a/models/MITgcm_ocean/readme.rst
+++ b/models/MITgcm_ocean/readme.rst
@@ -34,8 +34,14 @@ can be set in the ``&trans_mitdart_nml`` namelist in ``input.nml``.
 
    &trans_mitdart_nml
      do_bgc = .false.         ! change to .true. if doing bio-geo-chemistry
     log_transform = .false.   ! change to .true. if using log_transform
+    compress = .false.        ! change to .true. to compress the state vector
    /
 
+``compress = .true.`` can be used to generate netcdf files for use with DART which have missing values (land) removed.
+For some datasets this reduces the state vector size significantly. For example, the state vector size is
+reduced by approximately 90% for the Red Sea. The program ``expand_netcdf`` can be used to uncompress the netcdf
+file to view the data in a convenient form.
+
 .. Warning::
 
diff --git a/models/MITgcm_ocean/trans_mitdart_mod.f90 b/models/MITgcm_ocean/trans_mitdart_mod.f90
index 15861b8164..8ce45307df 100644
--- a/models/MITgcm_ocean/trans_mitdart_mod.f90
+++ b/models/MITgcm_ocean/trans_mitdart_mod.f90
@@ -9,6 +9,7 @@ module trans_mitdart_mod
 use utilities_mod, only: initialize_utilities, register_module, &
                          get_unit, find_namelist_in_file, file_exist, &
                          check_namelist_read
+use netcdf_utilities_mod, only : nc_get_variable, nc_get_dimension_size
 use netcdf
 
 implicit none
@@ -20,9 +21,16 @@ module trans_mitdart_mod
 integer :: io, iunit
 
 logical :: do_bgc = .false.
-logical :: log_transform = .false.
+logical :: log_transform = .false.
+logical :: compress = .false.
+! set compress = .true. to remove missing values from the state
+logical :: output_chl_data = .false.
+! CHL.data is not written to mit .data files by default
 
-namelist /trans_mitdart_nml/ do_bgc, log_transform
+namelist /trans_mitdart_nml/ do_bgc, log_transform, compress
+
+real(r4), parameter :: FVAL=-999.0_r4 ! may put this as a namelist option
+real(r4), parameter :: binary_fill=0.0_r4
 
 !------------------------------------------------------------------
 !
@@ -43,7 +51,7 @@ module trans_mitdart_mod
 integer :: recl3d
 integer :: recl2d
 
-!-- Gridding parameters variable declarations
+!-- Gridding parameters variable declarations
 logical :: usingCartesianGrid, usingCylindricalGrid, &
            usingSphericalPolarGrid, usingCurvilinearGrid, &
           deepAtmosphere
@@ -71,14 +79,49 @@ module trans_mitdart_mod
 ! standard MITgcm namelist and filled in here.
 integer :: Nx=-1, Ny=-1, Nz=-1 ! grid counts for each field
+integer :: ncomp2 = -1 ! length of 2D compressed dim
+integer :: ncomp3 = -1, ncomp3U = -1, ncomp3V = -1 ! length of 3D compressed dim
+
+integer, parameter :: MITgcm_3D_FIELD = 1
+integer, parameter :: MITgcm_3D_FIELD_U = 2
+integer, parameter :: MITgcm_3D_FIELD_V = 3
 
 ! locations of cell centers (C) and edges (G) for each axis.
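Before the coordinate declarations continue below, a usage sketch tying together the three converter options documented in the readme.rst hunk above. The values here are illustrative; the option names are exactly those in trans_mitdart_nml:

   &trans_mitdart_nml
      do_bgc        = .false.   ! no bio-geo-chemistry fields
      log_transform = .false.   ! leave fields untransformed
      compress      = .true.    ! strip land points from the state
   /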
real(r8), allocatable :: XC(:), XG(:), YC(:), YG(:), ZC(:), ZG(:) +real(r8), allocatable :: XCcomp(:), XGcomp(:), YCcomp(:), YGcomp(:), ZCcomp(:), ZGcomp(:) + +integer, allocatable :: Xcomp_ind(:), Ycomp_ind(:), Zcomp_ind(:) !HK are the staggered grids compressed the same? +!MEG: For staggered grids +integer, allocatable :: Xcomp_indU(:), Ycomp_indU(:), Zcomp_indU(:) +integer, allocatable :: Xcomp_indV(:), Ycomp_indV(:), Zcomp_indV(:) + +! 3D variables, 3 grids: +! +! XC, YC, ZC 1 PSAL, PTMP, NO3, PO4, O2, PHY, ALK, DIC, DOP, DON, FET +! XC, YC, ZG 2 UVEL +! XC, YG, ZC 3 VVEL + +! MEG: For compression, especially if we're doing Arakawa C-grid, +! we will need 3 different compressions for the above variables + +! 2D variables, 1 grid: +! +! YC, XC ETA, CHL private public :: static_init_trans, mit2dart, dart2mit +interface write_compressed + module procedure write_compressed_2d + module procedure write_compressed_3d +end interface write_compressed + +interface read_compressed + module procedure read_compressed_2d + module procedure read_compressed_3d +end interface read_compressed + contains !================================================================== @@ -100,7 +143,6 @@ subroutine static_init_trans() read(iunit, nml = trans_mitdart_nml, iostat = io) call check_namelist_read(iunit, io, 'trans_mitdart_nml') - ! Grid-related variables are in PARM04 delX(:) = 0.0_r4 delY(:) = 0.0_r4 @@ -196,13 +238,6 @@ subroutine static_init_trans() recl3d = Nx*Ny*Nz*4 recl2d = Nx*Ny*4 -! MEG Better have that as inout namelist parameter -! Are we also doing bgc on top of physics? -! If we found nitrate then the rest of the binaries (for the -! remaining 9 variables) should be also there. -! TODO may also enhance this functionality -! if (file_exist('NO3.data')) do_bgc = .true. - end subroutine static_init_trans !------------------------------------------------------------------ @@ -210,11 +245,17 @@ end subroutine static_init_trans subroutine mit2dart() -integer :: ncid, iunit +integer :: ncid ! for the dimensions and coordinate variables integer :: XGDimID, XCDimID, YGDimID, YCDimID, ZGDimID, ZCDimID integer :: XGVarID, XCVarID, YGVarID, YCVarID, ZGVarID, ZCVarID +integer :: comp2ID, comp3ID, comp3UD, comp3VD ! compressed dim +integer :: XGcompVarID, XCcompVarID, YGcompVarID, YCcompVarID, ZGcompVarID, ZCcompVarID +integer :: XindID, YindID, ZindID +integer :: XindUD, YindUD, ZindUD +integer :: XindVD, YindVD, ZindVD +integer :: all_dimids(9) ! store the 9 dimension ids that are used ! for the prognostic variables integer :: SVarID, TVarID, UVarID, VVarID, EtaVarID @@ -224,27 +265,47 @@ subroutine mit2dart() ! diagnostic variable integer :: chl_varid -real(r4), allocatable :: data_3d(:,:,:), data_2d(:,:) - -real(r4) :: FVAL - if (.not. module_initialized) call static_init_trans -FVAL=-999.0_r4 - -allocate(data_3d(Nx,Ny,Nz)) -allocate(data_2d(Nx,Ny)) - call check(nf90_create(path="OUTPUT.nc",cmode=or(nf90_clobber,nf90_64bit_offset),ncid=ncid)) ! 
Define the new dimensions IDs - -call check(nf90_def_dim(ncid=ncid, name="XG", len = Nx, dimid = XGDimID)) + call check(nf90_def_dim(ncid=ncid, name="XC", len = Nx, dimid = XCDimID)) -call check(nf90_def_dim(ncid=ncid, name="YG", len = Ny, dimid = YGDimID)) call check(nf90_def_dim(ncid=ncid, name="YC", len = Ny, dimid = YCDimID)) call check(nf90_def_dim(ncid=ncid, name="ZC", len = Nz, dimid = ZCDimID)) - + +call check(nf90_def_dim(ncid=ncid, name="XG", len = Nx, dimid = XGDimID)) +call check(nf90_def_dim(ncid=ncid, name="YG", len = Ny, dimid = YGDimID)) + +print *, '' + +if (compress) then + ncomp2 = get_compressed_size_2d() + + write(*, '(A, I12, A, I8)') '2D: ', Nx*Ny, ', COMP2D: ', ncomp2 + + ncomp3 = get_compressed_size_3d(MITgcm_3D_FIELD) + ncomp3U = get_compressed_size_3d(MITgcm_3D_FIELD_U) + ncomp3V = get_compressed_size_3d(MITgcm_3D_FIELD_V) + + write(*, '(A, I12, A, 3I8)') '3D: ', Nx*Ny*Nz, ', COMP3D [T-S, U, V]: ', ncomp3, ncomp3U, ncomp3V + + ! Put the compressed dimensions in the restart file + call check(nf90_def_dim(ncid=ncid, name="comp2d", len = ncomp2, dimid = comp2ID)) + call check(nf90_def_dim(ncid=ncid, name="comp3d", len = ncomp3, dimid = comp3ID)) + call check(nf90_def_dim(ncid=ncid, name="comp3dU", len = ncomp3U, dimid = comp3UD)) + call check(nf90_def_dim(ncid=ncid, name="comp3dV", len = ncomp3V, dimid = comp3VD)) +else + comp2ID = -1 + comp3ID = -1 +endif + +all_dimids = (/XCDimID, YCDimID, ZCDimID, XGDimID, YGDimID, & + comp2ID, comp3ID, comp3UD, comp3VD/) + +print *, '' + ! Create the (empty) Coordinate Variables and the Attributes ! U Grid Longitudes @@ -290,142 +351,89 @@ subroutine mit2dart() call check(nf90_put_att(ncid, ZCVarID, "axis", "Z")) call check(nf90_put_att(ncid, ZCVarID, "standard_name", "depth")) +! Compressed grid variables +if (compress) then + call check(nf90_def_var(ncid,name="XGcomp",xtype=nf90_real,dimids=comp3ID,varid=XGcompVarID)) + call check(nf90_def_var(ncid,name="XCcomp",xtype=nf90_real,dimids=comp3ID,varid=XCcompVarID)) + call check(nf90_def_var(ncid,name="YGcomp",xtype=nf90_real,dimids=comp3ID,varid=YGcompVarID)) + call check(nf90_def_var(ncid,name="YCcomp",xtype=nf90_real,dimids=comp3ID,varid=YCcompVarID)) + call check(nf90_def_var(ncid,name="ZCcomp",xtype=nf90_double,dimids=comp3ID,varid=ZCcompVarID)) + + call check(nf90_def_var(ncid,name="Xcomp_ind",xtype=nf90_int,dimids=comp3ID,varid=XindID)) + call check(nf90_def_var(ncid,name="Ycomp_ind",xtype=nf90_int,dimids=comp3ID,varid=YindID)) + call check(nf90_def_var(ncid,name="Zcomp_ind",xtype=nf90_int,dimids=comp3ID,varid=ZindID)) + + call check(nf90_def_var(ncid,name="Xcomp_indU",xtype=nf90_int,dimids=comp3UD,varid=XindUD)) + call check(nf90_def_var(ncid,name="Ycomp_indU",xtype=nf90_int,dimids=comp3UD,varid=YindUD)) + call check(nf90_def_var(ncid,name="Zcomp_indU",xtype=nf90_int,dimids=comp3UD,varid=ZindUD)) + + call check(nf90_def_var(ncid,name="Xcomp_indV",xtype=nf90_int,dimids=comp3VD,varid=XindVD)) + call check(nf90_def_var(ncid,name="Ycomp_indV",xtype=nf90_int,dimids=comp3VD,varid=YindVD)) + call check(nf90_def_var(ncid,name="Zcomp_indV",xtype=nf90_int,dimids=comp3VD,varid=ZindVD)) +endif + +! The size of these variables will depend on the compression ! 
Create the (empty) Prognostic Variables and the Attributes -call check(nf90_def_var(ncid=ncid, name="PSAL", xtype=nf90_real, & - dimids = (/XCDimID,YCDimID,ZCDimID/),varid=SVarID)) -call check(nf90_put_att(ncid, SVarID, "long_name", "potential salinity")) -call check(nf90_put_att(ncid, SVarID, "missing_value", FVAL)) -call check(nf90_put_att(ncid, SVarID, "_FillValue", FVAL)) -call check(nf90_put_att(ncid, SVarID, "units", "psu")) -call check(nf90_put_att(ncid, SVarID, "units_long_name", "practical salinity units")) - -call check(nf90_def_var(ncid=ncid, name="PTMP", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=TVarID)) -call check(nf90_put_att(ncid, TVarID, "long_name", "Potential Temperature")) -call check(nf90_put_att(ncid, TVarID, "missing_value", FVAL)) -call check(nf90_put_att(ncid, TVarID, "_FillValue", FVAL)) -call check(nf90_put_att(ncid, TVarID, "units", "C")) -call check(nf90_put_att(ncid, TVarID, "units_long_name", "degrees celsius")) - -call check(nf90_def_var(ncid=ncid, name="UVEL", xtype=nf90_real, & - dimids=(/XGDimID,YCDimID,ZCDimID/),varid=UVarID)) -call check(nf90_put_att(ncid, UVarID, "long_name", "Zonal Velocity")) -call check(nf90_put_att(ncid, UVarID, "mssing_value", FVAL)) -call check(nf90_put_att(ncid, UVarID, "_FillValue", FVAL)) -call check(nf90_put_att(ncid, UVarID, "units", "m/s")) -call check(nf90_put_att(ncid, UVarID, "units_long_name", "meters per second")) - -call check(nf90_def_var(ncid=ncid, name="VVEL", xtype=nf90_real, & - dimids=(/XCDimID,YGDimID,ZCDimID/),varid=VVarID)) -call check(nf90_put_att(ncid, VVarID, "long_name", "Meridional Velocity")) -call check(nf90_put_att(ncid, VVarID, "missing_value", FVAL)) -call check(nf90_put_att(ncid, VVarID, "_FillValue", FVAL)) -call check(nf90_put_att(ncid, VVarID, "units", "m/s")) -call check(nf90_put_att(ncid, VVarID, "units_long_name", "meters per second")) - -call check(nf90_def_var(ncid=ncid, name="ETA", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID/),varid=EtaVarID)) -call check(nf90_put_att(ncid, EtaVarID, "long_name", "sea surface height")) -call check(nf90_put_att(ncid, EtaVarID, "missing_value", FVAL)) -call check(nf90_put_att(ncid, EtaVarID, "_FillValue", FVAL)) -call check(nf90_put_att(ncid, EtaVarID, "units", "m")) -call check(nf90_put_att(ncid, EtaVarID, "units_long_name", "meters")) - -!> Add BLING data: +SVarID = define_variable(ncid,"PSAL", nf90_real, all_dimids, MITgcm_3D_FIELD) +call add_attributes_to_variable(ncid, SVarID, "potential salinity", "psu", "practical salinity units") + +TVarID = define_variable(ncid,"PTMP", nf90_real, all_dimids, MITgcm_3D_FIELD) +call add_attributes_to_variable(ncid, TVarID, "Potential Temperature", "C", "degrees celsius") + +UVarID = define_variable(ncid,"UVEL", nf90_real, all_dimids, MITgcm_3D_FIELD_U) +call add_attributes_to_variable(ncid, UVarID, "Zonal Velocity", "m/s", "meters per second") + +VVarID = define_variable(ncid,"VVEL", nf90_real, all_dimids, MITgcm_3D_FIELD_V) +call add_attributes_to_variable(ncid, VVarID, "Meridional Velocity", "m/s", "meters per second") + +EtaVarID = define_variable_2d(ncid,"ETA", nf90_real, all_dimids) +call add_attributes_to_variable(ncid, EtaVarID, "sea surface height", "m", "meters") + +! Create the BLING netcdf variables: if (do_bgc) then ! 1. 
BLING tracer: nitrate NO3 - call check(nf90_def_var(ncid=ncid, name="NO3", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=no3_varid)) - call check(nf90_put_att(ncid, no3_varid, "long_name" , "Nitrate")) - call check(nf90_put_att(ncid, no3_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, no3_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, no3_varid, "units" , "mol N/m3")) - call check(nf90_put_att(ncid, no3_varid, "units_long_name", "moles Nitrogen per cubic meters")) - + no3_varid = define_variable(ncid,"NO3", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, no3_varid, "Nitrate", "mol N/m3", "moles Nitrogen per cubic meters") + ! 2. BLING tracer: phosphate PO4 - call check(nf90_def_var(ncid=ncid, name="PO4", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=po4_varid)) - call check(nf90_put_att(ncid, po4_varid, "long_name" , "Phosphate")) - call check(nf90_put_att(ncid, po4_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, po4_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, po4_varid, "units" , "mol P/m3")) - call check(nf90_put_att(ncid, po4_varid, "units_long_name", "moles Phosphorus per cubic meters")) - + po4_varid = define_variable(ncid,"PO4", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, po4_varid, "Phosphate", "mol P/m3", "moles Phosphorus per cubic meters") + ! 3. BLING tracer: oxygen O2 - call check(nf90_def_var(ncid=ncid, name="O2", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=o2_varid)) - call check(nf90_put_att(ncid, o2_varid, "long_name" , "Dissolved Oxygen")) - call check(nf90_put_att(ncid, o2_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, o2_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, o2_varid, "units" , "mol O/m3")) - call check(nf90_put_att(ncid, o2_varid, "units_long_name", "moles Oxygen per cubic meters")) - + o2_varid = define_variable(ncid,"O2", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, o2_varid, "Dissolved Oxygen", "mol O/m3", "moles Oxygen per cubic meters") + ! 4. BLING tracer: phytoplankton PHY - call check(nf90_def_var(ncid=ncid, name="PHY", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=phy_varid)) - call check(nf90_put_att(ncid, phy_varid, "long_name" , "Phytoplankton Biomass")) - call check(nf90_put_att(ncid, phy_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, phy_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, phy_varid, "units" , "mol C/m3")) - call check(nf90_put_att(ncid, phy_varid, "units_long_name", "moles Carbon per cubic meters")) + phy_varid = define_variable(ncid,"PHY", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, phy_varid, "Phytoplankton Biomass", "mol C/m3", "moles Carbon per cubic meters") ! 5. 
BLING tracer: alkalinity ALK - call check(nf90_def_var(ncid=ncid, name="ALK", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=alk_varid)) - call check(nf90_put_att(ncid, alk_varid, "long_name" , "Alkalinity")) - call check(nf90_put_att(ncid, alk_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, alk_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, alk_varid, "units" , "mol eq/m3")) - call check(nf90_put_att(ncid, alk_varid, "units_long_name", "moles equivalent per cubic meters")) - + alk_varid = define_variable(ncid,"ALK", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, alk_varid, "Alkalinity", "mol eq/m3", "moles equivalent per cubic meters") + ! 6. BLING tracer: dissolved inorganic carbon DIC - call check(nf90_def_var(ncid=ncid, name="DIC", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=dic_varid)) - call check(nf90_put_att(ncid, dic_varid, "long_name" , "Dissolved Inorganic Carbon")) - call check(nf90_put_att(ncid, dic_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, dic_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, dic_varid, "units" , "mol C/m3")) - call check(nf90_put_att(ncid, dic_varid, "units_long_name", "moles Carbon per cubic meters")) - - ! 7. BLING tracer: dissolved organic phosphorus DOP - call check(nf90_def_var(ncid=ncid, name="DOP", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=dop_varid)) - call check(nf90_put_att(ncid, dop_varid, "long_name" , "Dissolved Organic Phosphorus")) - call check(nf90_put_att(ncid, dop_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, dop_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, dop_varid, "units" , "mol P/m3")) - call check(nf90_put_att(ncid, dop_varid, "units_long_name", "moles Phosphorus per cubic meters")) + dic_varid = define_variable(ncid,"DIC", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, dic_varid, "Dissolved Inorganic Carbon", "mol C/m3", "moles Carbon per cubic meters") + + ! 7. BLING tracer: dissolved organic phosphorus DOP + dop_varid = define_variable(ncid,"DOP", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, dop_varid, "Dissolved Organic Phosphorus", "mol P/m3", "moles Phosphorus per cubic meters") ! 8. BLING tracer: dissolved organic nitrogen DON - call check(nf90_def_var(ncid=ncid, name="DON", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=don_varid)) - call check(nf90_put_att(ncid, don_varid, "long_name" , "Dissolved Organic Nitrogen")) - call check(nf90_put_att(ncid, don_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, don_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, don_varid, "units" , "mol N/m3")) - call check(nf90_put_att(ncid, don_varid, "units_long_name", "moles Nitrogen per cubic meters")) + don_varid = define_variable(ncid,"DON", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, don_varid, "Dissolved Organic Nitrogen", "mol N/m3", "moles Nitrogen per cubic meters") ! 9. 
BLING tracer: dissolved inorganic iron FET - call check(nf90_def_var(ncid=ncid, name="FET", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID,ZCDimID/),varid=fet_varid)) - call check(nf90_put_att(ncid, fet_varid, "long_name" , "Dissolved Inorganic Iron")) - call check(nf90_put_att(ncid, fet_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, fet_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, fet_varid, "units" , "mol Fe/m3")) - call check(nf90_put_att(ncid, fet_varid, "units_long_name", "moles Iron per cubic meters")) - + fet_varid = define_variable(ncid,"FET", nf90_real, all_dimids, MITgcm_3D_FIELD) + call add_attributes_to_variable(ncid, fet_varid, "Dissolved Inorganic Iron", "mol Fe/m3", "moles Iron per cubic meters") + ! 10. BLING tracer: Surface Chlorophyl CHL - call check(nf90_def_var(ncid=ncid, name="CHL", xtype=nf90_real, & - dimids=(/XCDimID,YCDimID/),varid=chl_varid)) - call check(nf90_put_att(ncid, chl_varid, "long_name" , "Surface Chlorophyll")) - call check(nf90_put_att(ncid, chl_varid, "missing_value" , FVAL)) - call check(nf90_put_att(ncid, chl_varid, "_FillValue" , FVAL)) - call check(nf90_put_att(ncid, chl_varid, "units" , "mg/m3")) - call check(nf90_put_att(ncid, chl_varid, "units_long_name", "milligram per cubic meters")) -endif + chl_varid = define_variable_2d(ncid,"CHL", nf90_real, all_dimids) + call add_attributes_to_variable(ncid, chl_varid, "Surface Chlorophyll", "mg/m3", "milligram per cubic meters" ) +endif ! Finished with dimension/variable definitions, must end 'define' mode to fill. @@ -439,125 +447,72 @@ subroutine mit2dart() call check(nf90_put_var(ncid, YCVarID, YC )) call check(nf90_put_var(ncid, ZCVarID, ZC )) -! Fill the data - -iunit = get_unit() -open(iunit, file='PSAL.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') -read(iunit,rec=1)data_3d -close(iunit) -where (data_3d == 0.0_r4) data_3d = FVAL -call check(nf90_put_var(ncid,SVarID,data_3d,start=(/1,1,1/))) - -open(iunit, file='PTMP.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') -read(iunit,rec=1)data_3d -close(iunit) -where (data_3d == 0.0_r4) data_3d = FVAL -call check(nf90_put_var(ncid,TVarID,data_3d,start=(/1,1,1/))) - -open(iunit, file='UVEL.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') -read(iunit,rec=1)data_3d -close(iunit) -where (data_3d == 0.0_r4) data_3d = FVAL -call check(nf90_put_var(ncid,UVarID,data_3d,start=(/1,1,1/))) - -open(iunit, file='VVEL.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') -read(iunit,rec=1)data_3d -close(iunit) -where (data_3d == 0.0_r4) data_3d = FVAL -call check(nf90_put_var(ncid,VVarID,data_3d,start=(/1,1,1/))) - -open(iunit, file='ETA.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl2d, convert='BIG_ENDIAN') -read(iunit,rec=1)data_2d -close(iunit) -where (data_2d == 0.0_r4) data_2d = FVAL -call check(nf90_put_var(ncid,EtaVarID,data_2d,start=(/1,1/))) +if (compress) then + allocate(XCcomp(ncomp3)) + allocate(XGcomp(ncomp3)) + allocate(YCcomp(ncomp3)) + allocate(YGcomp(ncomp3)) + allocate(ZCcomp(ncomp3)) + allocate(ZGcomp(ncomp3)) + allocate(Xcomp_ind(ncomp3)) + allocate(Ycomp_ind(ncomp3)) + allocate(Zcomp_ind(ncomp3)) + + allocate(Xcomp_indU(ncomp3U)) + allocate(Ycomp_indU(ncomp3U)) + allocate(Zcomp_indU(ncomp3U)) + + allocate(Xcomp_indV(ncomp3V)) + allocate(Ycomp_indV(ncomp3V)) + allocate(Zcomp_indV(ncomp3V)) + + call 
fill_compressed_coords() + + call check(nf90_put_var(ncid, XGcompVarID, XGcomp )) + call check(nf90_put_var(ncid, XCcompVarID, XCcomp )) + call check(nf90_put_var(ncid, YGcompVarID, YGcomp )) + call check(nf90_put_var(ncid, YCcompVarID, YCcomp )) + call check(nf90_put_var(ncid, ZCcompVarID, ZCcomp )) + + call check(nf90_put_var(ncid, XindID, Xcomp_ind )) + call check(nf90_put_var(ncid, YindID, Ycomp_ind )) + call check(nf90_put_var(ncid, ZindID, Zcomp_ind )) + + call check(nf90_put_var(ncid, XindUD, Xcomp_indU )) + call check(nf90_put_var(ncid, YindUD, Ycomp_indU )) + call check(nf90_put_var(ncid, ZindUD, Zcomp_indU )) + + call check(nf90_put_var(ncid, XindVD, Xcomp_indV )) + call check(nf90_put_var(ncid, YindVD, Ycomp_indV )) + call check(nf90_put_var(ncid, ZindVD, Zcomp_indV )) +endif -if (do_bgc) then - open(iunit, file='NO3.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,no3_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='PO4.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,po4_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='O2.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,o2_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='PHY.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,phy_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='ALK.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,alk_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='DIC.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,dic_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='DOP.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,dop_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='DON.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,don_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='FET.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_3d - close(iunit) - call fill_var_md(data_3d, FVAL) - call check(nf90_put_var(ncid,fet_varid,data_3d,start=(/1,1,1/))) - - open(iunit, file='CHL.data', form='UNFORMATTED', status='OLD', & - access='DIRECT', recl=recl2d, convert='BIG_ENDIAN') - read(iunit,rec=1)data_2d - close(iunit) - where (data_2d == 0.0_r4) - data_2d = FVAL - elsewhere - data_2d = log10(data_2d) - endwhere - call check(nf90_put_var(ncid,chl_varid,data_2d,start=(/1,1/))) +! 
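! Editor's sketch (not part of the committed patch): the index triplets
! written above are what read_compressed uses later to scatter a 1D
! compressed vector back onto the (Nx,Ny,Nz) grid:
!
!    do n = 1, ncomp3
!       var(Xcomp_ind(n), Ycomp_ind(n), Zcomp_ind(n)) = comp_var(n)
!    enddo
!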
+! Fill the netcdf variables
+call from_mit_to_netcdf_3d('PSAL.data', ncid, SVarID, MITgcm_3D_FIELD)
+call from_mit_to_netcdf_3d('PTMP.data', ncid, TVarID, MITgcm_3D_FIELD)
+call from_mit_to_netcdf_3d('UVEL.data', ncid, UVarID, MITgcm_3D_FIELD_U)
+call from_mit_to_netcdf_3d('VVEL.data', ncid, VVarID, MITgcm_3D_FIELD_V)
+call from_mit_to_netcdf_2d('ETA.data' , ncid, EtaVarID)
+
+print *, 'Done writing physical variables'
+
+if (do_bgc) then
+   call from_mit_to_netcdf_tracer_3d('NO3.data', ncid, no3_varid)
+   call from_mit_to_netcdf_tracer_3d('PO4.data', ncid, po4_varid)
+   call from_mit_to_netcdf_tracer_3d('O2.data' , ncid, o2_varid)
+   call from_mit_to_netcdf_tracer_3d('PHY.data', ncid, phy_varid)
+   call from_mit_to_netcdf_tracer_3d('ALK.data', ncid, alk_varid)
+   call from_mit_to_netcdf_tracer_3d('DIC.data', ncid, dic_varid)
+   call from_mit_to_netcdf_tracer_3d('DOP.data', ncid, dop_varid)
+   call from_mit_to_netcdf_tracer_3d('DON.data', ncid, don_varid)
+   call from_mit_to_netcdf_tracer_3d('FET.data', ncid, fet_varid)
+   call from_mit_to_netcdf_tracer_2d('CHL.data', ncid, chl_varid)
+
+   print *, 'Done writing biogeochemical variables'
 endif
 
 call check(nf90_close(ncid))
 
-deallocate(data_3d)
-deallocate(data_2d)
-
 end subroutine mit2dart
 
 !------------------------------------------------------------------
 
@@ -565,165 +520,74 @@ end subroutine mit2dart
 
 subroutine dart2mit()
 
-integer :: ncid, varid, iunit
-real(r4), allocatable :: data_3d(:,:,:),data_2d(:,:)
-real(r4) :: FVAL
-
-allocate(data_3d(Nx,Ny,Nz))
-allocate(data_2d(Nx,Ny))
+integer :: ncid
 
 if (.not. module_initialized) call static_init_trans
 
+! The pickup files written below hold r8 fields, so the 2D record length
+! differs from the r4 value set in static_init_trans. This assignment must
+! come after static_init_trans so that Nx and Ny are defined and the value
+! is not immediately overwritten on a first call.
+recl2d = Nx*Ny*8
+
-iunit = get_unit()
 call check(nf90_open("INPUT.nc",NF90_NOWRITE,ncid))
 
-!Fill the data
-call check( NF90_INQ_VARID(ncid,'PSAL',varid) )
-call check( NF90_GET_VAR(ncid,varid,data_3d))
-call check( nf90_get_att(ncid,varid,"_FillValue",FVAL))
-where (data_3d == FVAL) data_3d = 0.0_r4
-
-open(iunit, file='PSAL.data', form="UNFORMATTED", status='UNKNOWN', &
-     access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
-write(iunit,rec=1)data_3d
-close(iunit)
-
-call check( NF90_INQ_VARID(ncid,'PTMP',varid) )
-call check( NF90_GET_VAR(ncid,varid,data_3d))
-call check( nf90_get_att(ncid,varid,"_FillValue",FVAL))
-where (data_3d == FVAL) data_3d = 0.0_r4
+if (compress) then
+   ncomp2 = nc_get_dimension_size(ncid,'comp2d')
 
-open(iunit, file='PTMP.data', form="UNFORMATTED", status='UNKNOWN', &
-     access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
-write(iunit,rec=1)data_3d
-close(iunit)
-
-call check( NF90_INQ_VARID(ncid,'UVEL',varid) )
-call check( NF90_GET_VAR(ncid,varid,data_3d))
-call check( nf90_get_att(ncid,varid,"_FillValue",FVAL))
-where (data_3d == FVAL) data_3d = 0.0_r4
+   ncomp3  = nc_get_dimension_size(ncid,'comp3d')
+   ncomp3U = nc_get_dimension_size(ncid,'comp3dU')
+   ncomp3V = nc_get_dimension_size(ncid,'comp3dV')
 
-open(iunit, file='UVEL.data', form="UNFORMATTED", status='UNKNOWN', &
-     access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
-write(iunit,rec=1)data_3d
-close(iunit)
+   allocate(Xcomp_ind(ncomp3))
+   allocate(Ycomp_ind(ncomp3))
+   allocate(Zcomp_ind(ncomp3))
+
+   allocate(Xcomp_indU(ncomp3U))
+   allocate(Ycomp_indU(ncomp3U))
+   allocate(Zcomp_indU(ncomp3U))
 
-call check( NF90_INQ_VARID(ncid,'VVEL',varid) )
-call check( NF90_GET_VAR(ncid,varid,data_3d))
-call check( nf90_get_att(ncid,varid,"_FillValue",FVAL))
-where (data_3d == FVAL) data_3d = 0.0_r4
+   allocate(Xcomp_indV(ncomp3V))
+   allocate(Ycomp_indV(ncomp3V))
+   allocate(Zcomp_indV(ncomp3V))
+
+   call nc_get_variable(ncid, 'Xcomp_ind', 
Xcomp_ind) + call nc_get_variable(ncid, 'Ycomp_ind', Ycomp_ind) + call nc_get_variable(ncid, 'Zcomp_ind', Zcomp_ind) -open(iunit, file='VVEL.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') -write(iunit,rec=1)data_3d -close(iunit) + call nc_get_variable(ncid, 'Xcomp_indU', Xcomp_indU) + call nc_get_variable(ncid, 'Ycomp_indU', Ycomp_indU) + call nc_get_variable(ncid, 'Zcomp_indU', Zcomp_indU) -call check( NF90_INQ_VARID(ncid,'ETA',varid) ) -call check( NF90_GET_VAR(ncid,varid,data_2d)) -call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) -where (data_2d == FVAL) data_2d = 0.0_r4 + call nc_get_variable(ncid, 'Xcomp_indV', Xcomp_indV) + call nc_get_variable(ncid, 'Ycomp_indV', Ycomp_indV) + call nc_get_variable(ncid, 'Zcomp_indV', Zcomp_indV) +endif -open(iunit, file='ETA.data', form="UNFORMATTED", status='UNKNOWN', & +!Fill the data +iunit = get_unit() +open(iunit, file='PICKUP.OUTPUT', form="UNFORMATTED", status='UNKNOWN', & access='DIRECT', recl=recl2d, convert='BIG_ENDIAN') -write(iunit,rec=1)data_2d -close(iunit) -if (do_bgc) then - call check( NF90_INQ_VARID(ncid,'NO3',varid) ) - call check( NF90_GET_VAR(ncid,varid,data_3d)) - call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) - call fill_var_dm(data_3d, FVAL) - - open(iunit, file='NO3.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - write(iunit,rec=1)data_3d - close(iunit) - - call check( NF90_INQ_VARID(ncid,'PO4',varid) ) - call check( NF90_GET_VAR(ncid,varid,data_3d)) - call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) - call fill_var_dm(data_3d, FVAL) - - open(iunit, file='PO4.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - write(iunit,rec=1)data_3d - close(iunit) - - call check( NF90_INQ_VARID(ncid,'O2',varid) ) - call check( NF90_GET_VAR(ncid,varid,data_3d)) - call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) - call fill_var_dm(data_3d, FVAL) - - open(iunit, file='O2.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - write(iunit,rec=1)data_3d - close(iunit) - - call check( NF90_INQ_VARID(ncid,'PHY',varid) ) - call check( NF90_GET_VAR(ncid,varid,data_3d)) - call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) - call fill_var_dm(data_3d, FVAL) - - open(iunit, file='PHY.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - write(iunit,rec=1)data_3d - close(iunit) - - call check( NF90_INQ_VARID(ncid,'ALK',varid) ) - call check( NF90_GET_VAR(ncid,varid,data_3d)) - call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) - call fill_var_dm(data_3d, FVAL) - - open(iunit, file='ALK.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - write(iunit,rec=1)data_3d - close(iunit) - - call check( NF90_INQ_VARID(ncid,'DIC',varid) ) - call check( NF90_GET_VAR(ncid,varid,data_3d)) - call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) - call fill_var_dm(data_3d, FVAL) - - open(iunit, file='DIC.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') - write(iunit,rec=1)data_3d - close(iunit) - - call check( NF90_INQ_VARID(ncid,'DOP',varid) ) - call check( NF90_GET_VAR(ncid,varid,data_3d)) - call check( nf90_get_att(ncid,varid,"_FillValue",FVAL)) - call fill_var_dm(data_3d, FVAL) - - open(iunit, file='DOP.data', form="UNFORMATTED", status='UNKNOWN', & - access='DIRECT', 
recl=recl3d, convert='BIG_ENDIAN')
-   write(iunit,rec=1)data_3d
-   close(iunit)
-
-   call check( NF90_INQ_VARID(ncid,'DON',varid) )
-   call check( NF90_GET_VAR(ncid,varid,data_3d))
-   call check( nf90_get_att(ncid,varid,"_FillValue",FVAL))
-   call fill_var_dm(data_3d, FVAL)
-
-   open(iunit, file='DON.data', form="UNFORMATTED", status='UNKNOWN', &
-        access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
-   write(iunit,rec=1)data_3d
-   close(iunit)
-
-   call check( NF90_INQ_VARID(ncid,'FET',varid) )
-   call check( NF90_GET_VAR(ncid,varid,data_3d))
-   call check( nf90_get_att(ncid,varid,"_FillValue",FVAL))
-   call fill_var_dm(data_3d, FVAL)
-
-   open(iunit, file='FET.data', form="UNFORMATTED", status='UNKNOWN', &
-        access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
-   write(iunit,rec=1)data_3d
-   close(iunit)
+call from_netcdf_to_mit_3d_pickup(ncid, 'UVEL', 1, MITgcm_3D_FIELD_U)
+call from_netcdf_to_mit_3d_pickup(ncid, 'VVEL', 2, MITgcm_3D_FIELD_V)
+call from_netcdf_to_mit_3d_pickup(ncid, 'PTMP', 3, MITgcm_3D_FIELD)
+call from_netcdf_to_mit_3d_pickup(ncid, 'PSAL', 4, MITgcm_3D_FIELD)
+call from_netcdf_to_mit_2d_pickup(ncid, 'ETA')
+
+print *, 'Done writing physical variables into model binary files'
+
+if (do_bgc) then
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'DIC', 1)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'ALK', 2)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'O2' , 3)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'NO3', 4)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'PO4', 5)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'FET', 6)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'DON', 7)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'DOP', 8)
+   call from_netcdf_to_mit_tracer_pickup(ncid, 'PHY', 9)
+   print *, 'Done writing biogeochemical variables into model binary files'
 endif
 
 call check( NF90_CLOSE(ncid) )
 
-deallocate(data_3d)
-deallocate(data_2d)
+if (compress) deallocate(Xcomp_ind, Ycomp_ind, Zcomp_ind)
 
 end subroutine dart2mit
 
@@ -742,61 +606,820 @@ subroutine check(status)
 
 end subroutine check
 
-
 !===============================================================================
-!> Check the tracer variables after reading from the binaries
-!> Make sure they are non-negative
-!> Do the transform if requested
-!> md: mit2dart; dm: dart2mit
+! 3D variable
+function define_variable(ncid, VARname, nc_type, all_dimids, field) result(varid)
+
+integer,          intent(in) :: ncid
+character(len=*), intent(in) :: VARname ! variable name
+integer,          intent(in) :: nc_type
+integer,          intent(in) :: all_dimids(9) ! possible dimension ids
+integer,          intent(in) :: field
+integer :: varid ! netcdf variable id
+
+integer :: dimids(3)
+
+if (compress) then
+   if (field == MITgcm_3D_FIELD) then
+      call check(nf90_def_var(ncid=ncid, name=VARname, xtype=nc_type, &
+                 dimids=all_dimids(7),varid=varid))
+   elseif (field == MITgcm_3D_FIELD_U) then
+      call check(nf90_def_var(ncid=ncid, name=VARname, xtype=nc_type, &
+                 dimids=all_dimids(8),varid=varid))
+   elseif (field == MITgcm_3D_FIELD_V) then
+      call check(nf90_def_var(ncid=ncid, name=VARname, xtype=nc_type, &
+                 dimids=all_dimids(9),varid=varid))
+   endif
+else
+   dimids = which_dims(VARname, all_dimids)
+   call check(nf90_def_var(ncid=ncid, name=VARname, xtype=nc_type, &
+              dimids=dimids, varid=varid))
+endif
+
+end function define_variable
+
+!------------------------------------------------------------------
+! For the non-compressed variables, X,Y,Z dimensions vary
+! 
depending on the variable +function which_dims(VARname, all_dimids) result(dimids) + +character(len=*), intent(in) :: VARname ! variable name +integer, intent(in) :: all_dimids(9) +integer :: dimids(3) +! 3D variables, 3 grids: +! XC, YC, ZC 1 PSAL, PTMP, NO3, PO4, O2, PHY, ALK, DIC, DOP, DON, FET +! XG, YC, ZC 2 UVEL +! XC, YG, ZC 3 VVEL + +if (VARname == 'UVEL') then + dimids = (/all_dimids(4),all_dimids(2),all_dimids(3)/) + return +endif +if (VARname == 'VVEL') then + dimids = (/all_dimids(1),all_dimids(5),all_dimids(3)/) + return +endif + +dimids = (/all_dimids(1),all_dimids(2),all_dimids(3)/) + +end function + +!------------------------------------------------------------------ +! 2D variable +function define_variable_2d(ncid, name, nc_type, all_dimids) result(varid) + +integer, intent(in) :: ncid +character(len=*), intent(in) :: name ! variable name +integer, intent(in) :: nc_type +integer, intent(in) :: all_dimids(9) +integer :: varid ! netcdf variable id -subroutine fill_var_md(var, fillval) +! 2D variables, 1 grid: +! YC, XC 1 ETA, CHL + +if (compress) then + call check(nf90_def_var(ncid=ncid, name=name, xtype=nc_type, & + dimids = (/all_dimids(6)/),varid=varid)) +else + call check(nf90_def_var(ncid=ncid, name=name, xtype=nc_type, & + dimids = (/all_dimids(1),all_dimids(2)/),varid=varid)) +endif + +end function define_variable_2d + +!------------------------------------------------------------------ +subroutine add_attributes_to_variable(ncid, varid, long_name, units, units_long_name) + +integer, intent(in) :: ncid, varid ! which file, which variable +character(len=*), intent(in) :: long_name, units, units_long_name + +call check(nf90_put_att(ncid, varid, "long_name" , long_name)) +call check(nf90_put_att(ncid, varid, "missing_value" , FVAL)) +call check(nf90_put_att(ncid, varid, "_FillValue" , FVAL)) +call check(nf90_put_att(ncid, varid, "units" , units)) +call check(nf90_put_att(ncid, varid, "units_long_name", units_long_name)) + +end subroutine + +!------------------------------------------------------------------ +subroutine from_mit_to_netcdf_3d(mitfile, ncid, varid, field) + +character(len=*), intent(in) :: mitfile +integer, intent(in) :: ncid, varid, field ! which file, which variable, grid type + +integer :: iunit +real(r4) :: var_data(Nx,Ny,Nz) + +iunit = get_unit() +! HK are the mit files big endian by default? +open(iunit, file=mitfile, form='UNFORMATTED', status='OLD', & + access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') +read(iunit,rec=1) var_data +close(iunit) -real(r4), intent(inout) :: var(:, :, :) -real(r4), intent(in) :: fillval +where (var_data == binary_fill) var_data = FVAL !HK do we also need a check for nans here? +if (compress) then + call write_compressed(ncid, varid, var_data, field) +else + call check(nf90_put_var(ncid,varid,var_data)) +endif + +end subroutine from_mit_to_netcdf_3d + +!------------------------------------------------------------------ +subroutine from_mit_to_netcdf_2d(mitfile, ncid, varid) + +character(len=*), intent(in) :: mitfile +integer, intent(in) :: ncid, varid ! which file, which variable + +integer :: iunit +real(r4) :: var_data(Nx,Ny), var_T_data(Nx,Ny,Nz) + +iunit = get_unit() +! HK are the mit files big endian by default? +open(iunit, file=mitfile, form='UNFORMATTED', status='OLD', & + access='DIRECT', recl=recl2d, convert='BIG_ENDIAN') +read(iunit,rec=1) var_data +close(iunit) + +! 
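! Editor's note (illustrative, not from the patch): recl2d/recl3d are
! direct-access record lengths in bytes, set in static_init_trans for the
! r4 MITgcm binaries:
!
!    recl2d = Nx*Ny*4      ! one r4 horizontal slab
!    recl3d = Nx*Ny*Nz*4   ! one full r4 3D field
!
! MITgcm .data files are conventionally big-endian, hence convert='BIG_ENDIAN'
! on every open in this module.
!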
+! Manually get PTMP surface layer
+open(iunit, file='PTMP.data', form='UNFORMATTED', status='OLD', &
+     access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
+read(iunit,rec=1) var_T_data
+close(iunit)
+
+! Use the PTMP surface layer to locate land: an ETA value of 0.0 can be
+! valid ocean, so ETA cannot be masked on its own zeros.
+where (var_T_data(:,:,1) == binary_fill) var_data = FVAL !HK do we also need a check for nans here?
+
+if (compress) then
+   call write_compressed(ncid, varid, var_data)
+else
+   call check(nf90_put_var(ncid,varid,var_data))
+endif
+
+end subroutine from_mit_to_netcdf_2d
+
+!------------------------------------------------------------------
+subroutine from_mit_to_netcdf_tracer_3d(mitfile, ncid, varid)
+
+character(len=*), intent(in) :: mitfile
+integer,          intent(in) :: ncid, varid ! which file, which variable
+
+integer  :: iunit
+real(r4) :: var_data(Nx,Ny,Nz)
 real(r4) :: low_conc
 
-if (.not. module_initialized) call static_init_trans
+low_conc = 1.0e-12
+
+iunit = get_unit()
+! HK are the mit files big endian by default?
+open(iunit, file=mitfile, form='UNFORMATTED', status='OLD', &
+     access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
+read(iunit,rec=1) var_data
+close(iunit)
+
+! CHL is treated differently - HK CHL is 2d so you will not enter this
+if (mitfile=='CHL.data') then
+   where (var_data == binary_fill)
+      var_data = FVAL
+   elsewhere
+      var_data = log10(var_data)
+   endwhere
+else
+   ! Make sure the tracer concentration is positive
+   where(var_data < binary_fill) var_data = low_conc
+
+   if (log_transform) then
+      where (var_data == binary_fill)
+         var_data = FVAL
+      elsewhere
+         var_data = log(var_data)
+      endwhere
+   else
+      where (var_data == binary_fill) var_data = FVAL
+   endif
+endif
+
+if (compress) then
+   call write_compressed(ncid, varid, var_data, MITgcm_3D_FIELD)
+else
+   call check(nf90_put_var(ncid,varid,var_data))
+endif
+
+end subroutine from_mit_to_netcdf_tracer_3d
+
+!------------------------------------------------------------------
+subroutine from_mit_to_netcdf_tracer_2d(mitfile, ncid, varid)
+
+character(len=*), intent(in) :: mitfile
+integer,          intent(in) :: ncid, varid ! which file, which variable
+
+integer  :: iunit
+real(r4) :: var_data(Nx,Ny)
+real(r4) :: low_conc
 
 low_conc = 1.0e-12
 
-! Make sure the tracer concentration is positive
-where(var < 0.0_r4) var = low_conc
+iunit = get_unit()
+! HK are the mit files big endian by default?
+! This is a 2D field, so one record is recl2d (not recl3d) bytes.
+open(iunit, file=mitfile, form='UNFORMATTED', status='OLD', &
+     access='DIRECT', recl=recl2d, convert='BIG_ENDIAN')
+read(iunit,rec=1) var_data
+close(iunit)
+
+! CHL is treated differently
+if (mitfile=='CHL.data') then
+   where (var_data == binary_fill)
+      var_data = FVAL
+   elsewhere
+      var_data = log10(var_data)
+   endwhere
+else
+   ! Make sure the tracer concentration is positive
+   where(var_data < binary_fill) var_data = low_conc
+
+   if (log_transform) then
+      where (var_data == binary_fill)
+         var_data = FVAL
+      elsewhere
+         var_data = log(var_data)
+      endwhere
+   else
+      where (var_data == binary_fill) var_data = FVAL
+   endif
+endif
+
+if (compress) then
+   call write_compressed(ncid, varid, var_data)
+else
+   call check(nf90_put_var(ncid,varid,var_data))
+endif
+
+end subroutine from_mit_to_netcdf_tracer_2d
+
+!------------------------------------------------------------------
+subroutine from_netcdf_to_mit_2d(ncid, name)
+
+integer,          intent(in) :: ncid ! which file,
+character(len=*), intent(in) :: name ! which variable
+
+integer  :: iunit
+real(r4) :: var(Nx,Ny)
+integer  :: varid
+real(r4) :: local_fval
+
+call check( NF90_INQ_VARID(ncid,name,varid) )
+call check( nf90_get_att(ncid,varid,"_FillValue",local_fval))
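! Editor's sketch of the fill-value round trip (not part of the patch):
! mit2dart maps land points to the netCDF fill value and dart2mit maps
! them back before writing the binary:
!
!    where (var_data == binary_fill) var_data = FVAL    ! mit2dart direction
!    where (var == local_fval)       var = binary_fill  ! dart2mit direction
!
! In the pre-refactor code these values were 0.0_r4 in the binaries and
! -999.0_r4 (_FillValue) in the netCDF file.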
+! initialize var to netcdf fill value
+var(:,:) = local_fval
+
+if (compress) then
+   call read_compressed(ncid, varid, var)
+else
+   call check(nf90_get_var(ncid,varid,var))
+endif
+
+where (var == local_fval) var = binary_fill
+
+iunit = get_unit()
+open(iunit, file=trim(name)//'.data', form="UNFORMATTED", status='UNKNOWN', &
+     access='DIRECT', recl=recl2d, convert='BIG_ENDIAN')
+write(iunit,rec=1)var
+close(iunit)
+
+end subroutine from_netcdf_to_mit_2d
+
+!------------------------------------------------------------------
+subroutine from_netcdf_to_mit_3d(ncid, name, field)
+
+integer,          intent(in) :: ncid  ! which file,
+character(len=*), intent(in) :: name  ! which variable
+integer,          intent(in) :: field ! which grid the variable is on
+
+integer  :: iunit
+real(r4) :: var(Nx,Ny,Nz)
+integer  :: varid
+real(r4) :: local_fval
+
+call check( NF90_INQ_VARID(ncid,name,varid) )
+call check( nf90_get_att(ncid,varid,"_FillValue",local_fval))
+! initialize var to netcdf fill value
+var(:,:,:) = local_fval
+
+if (compress) then
+   call read_compressed(ncid, varid, var, field)
+else
+   call check(nf90_get_var(ncid,varid,var))
+endif
+
+where (var == local_fval) var = binary_fill
+
+iunit = get_unit()
+open(iunit, file=trim(name)//'.data', form="UNFORMATTED", status='UNKNOWN', &
+     access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
+write(iunit,rec=1)var
+close(iunit)
+
+end subroutine from_netcdf_to_mit_3d
+
+!------------------------------------------------------------------
+subroutine from_netcdf_to_mit_2d_pickup(ncid, name)
+
+integer,          intent(in) :: ncid ! which file,
+character(len=*), intent(in) :: name ! which variable
+
+! Note: iunit is the module-level unit that dart2mit opened on
+! PICKUP.OUTPUT; it must not be redeclared locally, or the writes
+! below would go to an unconnected unit.
+real(r4) :: var(Nx,Ny)
+real(r8) :: var8(Nx,Ny)
+integer  :: varid
+real(r4) :: local_fval
+
+call check( NF90_INQ_VARID(ncid,name,varid) )
+call check( nf90_get_att(ncid,varid,"_FillValue",local_fval))
+
+! initialize var to netcdf fill value
+var(:,:) = local_fval
+
+if (compress) then
+   call read_compressed(ncid, varid, var)
+else
+   call check(nf90_get_var(ncid,varid,var))
+endif
+
+where (var == local_fval) var = binary_fill
+var8 = var
+
+if (do_bgc) then
+   write(iunit,rec=401) var8
+else
+   write(iunit,rec=481) var8
+endif
+! ETA is the last field written to PICKUP.OUTPUT, so close the unit here.
+close(iunit)
+
+end subroutine from_netcdf_to_mit_2d_pickup
+
+!------------------------------------------------------------------
+subroutine from_netcdf_to_mit_3d_pickup(ncid, name, lev, field)
+
+integer,          intent(in) :: ncid  ! which file,
+character(len=*), intent(in) :: name  ! which variable
+integer,          intent(in) :: lev   ! position of this field in the pickup file
+integer,          intent(in) :: field ! which grid the variable is on
+
+! Note: iunit is the module-level unit that dart2mit opened on PICKUP.OUTPUT.
+real(r4) :: var(Nx,Ny,Nz)
+real(r8) :: var8(Nx,Ny,Nz)
+integer  :: varid, i
+real(r4) :: local_fval
+integer  :: LB, RB, RF
+
+call check( NF90_INQ_VARID(ncid,name,varid) )
+call check( nf90_get_att(ncid,varid,"_FillValue",local_fval))
+
+! initialize var to netcdf fill value
+var(:,:,:) = local_fval
+
+if (compress) then
+   call read_compressed(ncid, varid, var, field)
+else
+   call check(nf90_get_var(ncid,varid,var))
+endif
+
+where (var == local_fval) var = binary_fill
+var8 = var
+
+! one Nx*Ny r8 record per level, offset by the fields written before this one
+LB = Nz * (lev-1) + 1
+RB = Nz * lev
+RF = Nz * (lev-1)
+do i = LB, RB
+   write(iunit,rec=i) var8(:, :, i - RF)
+enddo
+! Keep the unit open: more fields are written to PICKUP.OUTPUT after this
+! one, and from_netcdf_to_mit_2d_pickup closes it after the last (ETA).
+
+end subroutine from_netcdf_to_mit_3d_pickup
+
+!------------------------------------------------------------------
+subroutine from_netcdf_to_mit_tracer(ncid, name)
+
+integer,          intent(in) :: ncid ! which file
+character(len=*), intent(in) :: name ! which variable
+
+integer  :: iunit
+real(r4) :: var(Nx,Ny,Nz)
+integer  :: varid
+real(r4) :: local_fval
+
+call check( NF90_INQ_VARID(ncid,name,varid) )
+call check( nf90_get_att(ncid,varid,"_FillValue",local_fval))
+! initialize var to netcdf fill value
+var(:,:,:) = local_fval
+
+if (compress) then
+   call read_compressed(ncid, varid, var, MITgcm_3D_FIELD)
+else
+   call check(nf90_get_var(ncid,varid,var))
+endif
 
 if (log_transform) then
-   where (var == fillval)
-      var = 0.0_r4
+   where (var == local_fval)
+      var = binary_fill
    elsewhere
       var = exp(var)
    endwhere
 else
-   where (var == fillval) var = 0.0_r4
+   where (var == local_fval) var = binary_fill
 endif
 
-end subroutine
+iunit = get_unit()
+open(iunit, file=trim(name)//'.data', form="UNFORMATTED", status='UNKNOWN', &
+     access='DIRECT', recl=recl3d, convert='BIG_ENDIAN')
+write(iunit,rec=1)var
+close(iunit)
+
+end subroutine from_netcdf_to_mit_tracer
 
 !------------------------------------------------------------------
+subroutine from_netcdf_to_mit_tracer_pickup(ncid, name, lev)
 
-subroutine fill_var_dm(var, fillval)
-
-real(r4), intent(inout) :: var(:, :, :)
-real(r4), intent(in) :: fillval
+integer,          intent(in) :: ncid ! which file
+character(len=*), intent(in) :: name ! which variable
+integer,          intent(in) :: lev  ! record number in the tracer pickup file
 
-if (.not. module_initialized) call static_init_trans
+integer  :: iunit
+real(r4) :: var(Nx,Ny,Nz)
+real(r8) :: var8(Nx,Ny,Nz)
+integer  :: varid
+real(r4) :: local_fval
+real(r4) :: low_conc, large_conc = 5.0 ! From Siva's old code
+
+low_conc = 1.0e-12
+
+call check( NF90_INQ_VARID(ncid,name,varid) )
+call check( nf90_get_att(ncid,varid,"_FillValue",local_fval))
+
+! initialize var to netcdf fill value
+var(:,:,:) = local_fval
+
+if (compress) then
+   call read_compressed(ncid, varid, var, MITgcm_3D_FIELD)
+else
+   call check(nf90_get_var(ncid,varid,var))
+endif
 
 if (log_transform) then
+   where (var == local_fval)
+      var = binary_fill
    elsewhere
       var = exp(var)
    endwhere
 else
+   where (var == local_fval) var = binary_fill
+   where (var > large_conc) var = low_conc
 endif
 
+var8 = var
+
+iunit = get_unit()
+! var8 holds r8 data, so each record is twice the r4 3D record length
+open(iunit, file='PICKUP_PTRACERS.OUTPUT', form="UNFORMATTED", status='UNKNOWN', &
+     access='DIRECT', recl=2*recl3d, convert='BIG_ENDIAN')
+write(iunit,rec=lev) var8
+close(iunit)
+
+end subroutine from_netcdf_to_mit_tracer_pickup
+
+!------------------------------------------------------------------
+subroutine from_netcdf_to_mit_tracer_chl(ncid, name)
+
+integer,          intent(in) :: ncid ! which file
+character(len=*), intent(in) :: name ! which variable
+
+integer  :: iunit
+real(r4) :: var(Nx,Ny)
+integer  :: varid
+real(r4) :: local_fval
+
+call check( NF90_INQ_VARID(ncid,name,varid) )
+call check( nf90_get_att(ncid,varid,"_FillValue",local_fval))
+! initialize var to netcdf fill value
+var(:,:) = local_fval
+
+if (compress) then
+   call read_compressed(ncid, varid, var)
+else
+   call check(nf90_get_var(ncid,varid,var))
+endif
+
+where (var == local_fval)
+   var = binary_fill
+elsewhere
+   var = 10**(var)
+endwhere
+
+iunit = get_unit()
+open(iunit, file=trim(name)//'.data', form="UNFORMATTED", status='UNKNOWN', &
+     access='DIRECT', recl=recl2d, convert='BIG_ENDIAN')
+write(iunit,rec=1)var
+close(iunit)
+
+end subroutine from_netcdf_to_mit_tracer_chl
+
+!------------------------------------------------------------------
+! Assumes all 3D variables are masked in the
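! Editor's sketch of the tracer transform round trip (not from the patch):
!
!    x_nc  = log(x_bin)    ! mit2dart: from_mit_to_netcdf_tracer_3d
!    x_bin = exp(x_nc)     ! dart2mit: from_netcdf_to_mit_tracer
!
!    chl_nc  = log10(chl_bin)   ! CHL uses base 10
!    chl_bin = 10**chl_nc       ! from_netcdf_to_mit_tracer_chl
!
! so concentrations stay positive through the assimilation update.
! 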
same location +function get_compressed_size_3d(field) result(n3) + +integer :: n3, field +integer :: iunit +real(r4) :: var3d(NX,NY,NZ) +integer :: i, j, k +character(len=MAX_LEN_FNAM) :: source + +if (field == MITgcm_3D_FIELD) source = 'PSAL.data' +if (field == MITgcm_3D_FIELD_U) source = 'UVEL.data' +if (field == MITgcm_3D_FIELD_V) source = 'VVEL.data' + +iunit = get_unit() +open(iunit, file=trim(source), form='UNFORMATTED', status='OLD', & + access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') +read(iunit,rec=1) var3d +close(iunit) + +n3 = 0 + +! Get compressed size +do i=1,NX + do j=1,NY + do k=1,NZ + if (var3d(i,j,k) /= binary_fill) then !HK also NaN? + n3 = n3 + 1 + endif + enddo + enddo +enddo + +end function get_compressed_size_3d !------------------------------------------------------------------ +! Assumes all 2D variables are masked in the +! same location +function get_compressed_size_2d() result(n2) + +integer :: n2 +integer :: iunit +real(r4) :: var3d(NX,NY,NZ) +integer :: i,j + +iunit = get_unit() +open(iunit, file='PTMP.data', form='UNFORMATTED', status='OLD', & + access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') +read(iunit,rec=1) var3d +close(iunit) + +n2 = 0 + +! Get compressed size +do i=1,NX + do j=1,NY + if (var3d(i,j,1) /= binary_fill) then !HK also NaN? + n2 = n2 + 1 + endif + enddo +enddo + +end function get_compressed_size_2d + +!------------------------------------------------------------------ +subroutine fill_compressed_coords() + +!XG,etc read from PARAM04 in static_init_trans +real(r4) :: var3d(NX,NY,NZ) +integer :: n, i, j, k + +iunit = get_unit() +open(iunit, file='PSAL.data', form='UNFORMATTED', status='OLD', & + access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') +read(iunit,rec=1) var3d +close(iunit) + +n = 1 + +do k=1,NZ ! k first so 2d is first + do i=1,NX + do j=1,NY + if (var3d(i,j,k) /= binary_fill) then !HK also NaN? + XCcomp(n) = XC(i) + YCcomp(n) = YC(j) + ZCcomp(n) = ZC(k) + XGcomp(n) = XG(i) + YGcomp(n) = YG(j) + ZGcomp(n) = ZG(k) + + Xcomp_ind(n) = i ! Assuming grids are compressed the same + Ycomp_ind(n) = j + Zcomp_ind(n) = k + + n = n + 1 + endif + enddo + enddo +enddo + +! UVEL: +iunit = get_unit() +open(iunit, file='UVEL.data', form='UNFORMATTED', status='OLD', & + access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') +read(iunit,rec=1) var3d +close(iunit) + +n = 1 + +do k=1,NZ ! k first so 2d is first + do i=1,NX + do j=1,NY + if (var3d(i,j,k) /= binary_fill) then !HK also NaN? + Xcomp_indU(n) = i + Ycomp_indU(n) = j + Zcomp_indU(n) = k + + n = n + 1 + endif + enddo + enddo +enddo + +! VVEL: +iunit = get_unit() +open(iunit, file='VVEL.data', form='UNFORMATTED', status='OLD', & + access='DIRECT', recl=recl3d, convert='BIG_ENDIAN') +read(iunit,rec=1) var3d +close(iunit) + +n = 1 + +do k=1,NZ ! k first so 2d is first + do i=1,NX + do j=1,NY + if (var3d(i,j,k) /= binary_fill) then !HK also NaN? + Xcomp_indV(n) = i + Ycomp_indV(n) = j + Zcomp_indV(n) = k + + n = n + 1 + endif + enddo + enddo +enddo + +end subroutine fill_compressed_coords + +!------------------------------------------------------------------ +subroutine write_compressed_2d(ncid, varid, var_data) + +integer, intent(in) :: ncid, varid +real(r4), intent(in) :: var_data(Nx,Ny) + +real(r4) :: comp_var(ncomp2) +integer :: n +integer :: i,j ! 
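! Editor's note (illustrative, not from the patch): fill_compressed_coords
! above orders wet points with the depth loop (k) outermost, so all k = 1
! surface points occupy the first entries of the 3D index arrays. That is
! what lets the 2D routines reuse the 3D indices by testing k == 1, as in
! read_compressed_2d:
!
!    c = 1
!    do n = 1, ncomp3
!       if (Zcomp_ind(n) == 1) then             ! surface point -> 2D element
!          var(Xcomp_ind(n), Ycomp_ind(n)) = comp_var(c)
!          c = c + 1
!       endif
!    enddo
! 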
loop variables + +n = 1 +do i = 1, NX + do j = 1, NY + if (var_data(i,j) /= FVAL) then + comp_var(n) = var_data(i,j) + n = n + 1 + endif + enddo +enddo + +call check(nf90_put_var(ncid,varid,comp_var)) + +end subroutine write_compressed_2d + +!------------------------------------------------------------------ +subroutine write_compressed_3d(ncid, varid, var_data, field) + +integer, intent(in) :: ncid, varid, field +real(r4), intent(in) :: var_data(Nx,Ny,Nz) + +real(r4), allocatable :: comp_var(:) +integer :: n +integer :: i,j,k ! loop variables + +if (field == MITgcm_3D_FIELD_U) then + allocate(comp_var(ncomp3U)) + do i = 1,ncomp3U + comp_var(i) = var_data(Xcomp_indU(i), Ycomp_indU(i), Zcomp_indU(i)) + enddo + +elseif (field == MITgcm_3D_FIELD_V) then + allocate(comp_var(ncomp3V)) + do i = 1,ncomp3V + comp_var(i) = var_data(Xcomp_indV(i), Ycomp_indV(i), Zcomp_indV(i)) + enddo + +else + allocate(comp_var(ncomp3)) + do i = 1,ncomp3 + comp_var(i) = var_data(Xcomp_ind(i), Ycomp_ind(i), Zcomp_ind(i)) + enddo +endif + +!n = 1 +!do k = 1, NZ !k first so 2d is first +! do i = 1, NX +! do j = 1, NY +! if (var_data(i,j,k) /= FVAL) then +! print *, 'n: ', n, ', var_data(i,j,k): ', var_data(i,j,k) +! comp_var(n) = var_data(i,j,k) +! n = n + 1 +! endif +! enddo +! enddo +!enddo + +call check(nf90_put_var(ncid,varid,comp_var)) + +deallocate(comp_var) + +end subroutine write_compressed_3d + +!------------------------------------------------------------------ +subroutine read_compressed_2d(ncid, varid, var) + +integer, intent(in) :: ncid, varid +real(r4), intent(inout) :: var(NX,NY) + +real(r4) :: comp_var(ncomp2) +integer :: n ! loop variable +integer :: i,j,k ! x,y,z +integer :: c + +c = 1 + +call check(nf90_get_var(ncid,varid,comp_var)) + +do n = 1, ncomp3 + i = Xcomp_ind(n) + j = Ycomp_ind(n) + k = Zcomp_ind(n) + if (k == 1) then + var(i,j) = comp_var(c) + c = c + 1 + endif +enddo + +end subroutine read_compressed_2d + +!------------------------------------------------------------------ +subroutine read_compressed_3d(ncid, varid, var, field) + +integer, intent(in) :: ncid, varid, field +real(r4), intent(inout) :: var(NX,NY,NZ) + +real(r4), allocatable :: comp_var(:) +integer :: n ! loop variable +integer :: i,j,k ! x,y,k + +if (field == MITgcm_3D_FIELD_U) then + allocate(comp_var(ncomp3U)) + call check(nf90_get_var(ncid,varid,comp_var)) + do n = 1, ncomp3U + i = Xcomp_indU(n) + j = Ycomp_indU(n) + k = Zcomp_indU(n) + var(i,j,k) = comp_var(n) + enddo + +elseif (field == MITgcm_3D_FIELD_V) then + allocate(comp_var(ncomp3V)) + call check(nf90_get_var(ncid,varid,comp_var)) + do n = 1, ncomp3V + i = Xcomp_indV(n) + j = Ycomp_indV(n) + k = Zcomp_indV(n) + var(i,j,k) = comp_var(n) + enddo + +else + allocate(comp_var(ncomp3)) + call check(nf90_get_var(ncid,varid,comp_var)) + do n = 1, ncomp3 + i = Xcomp_ind(n) + j = Ycomp_ind(n) + k = Zcomp_ind(n) + var(i,j,k) = comp_var(n) + enddo +endif + +deallocate(comp_var) + +end subroutine read_compressed_3d end module trans_mitdart_mod diff --git a/models/MITgcm_ocean/work/input.nml b/models/MITgcm_ocean/work/input.nml index f92ab19c39..0c462c2266 100644 --- a/models/MITgcm_ocean/work/input.nml +++ b/models/MITgcm_ocean/work/input.nml @@ -457,7 +457,7 @@ # quantity_of_interest = 'QTY_DENSITY' &model_mod_check_nml - input_state_files = 'OUTPUT.nc' + input_state_files = 'mem01_reduced.nc' output_state_files = 'check_me' verbose = .TRUE. 
test1thru = 0 diff --git a/models/README.rst b/models/README.rst index 09eb9bf5db..73b4dd87bd 100644 --- a/models/README.rst +++ b/models/README.rst @@ -39,6 +39,7 @@ DART supported models: - :doc:`POP/readme` - :doc:`ROMS/readme` - :doc:`rose/readme` +- :doc:`seir/readme` - :doc:`simple_advection/readme` - :doc:`sqg/readme` - :doc:`tiegcm/readme` diff --git a/models/aether_lat-lon/aether_to_dart.f90 b/models/aether_lat-lon/aether_to_dart.f90 new file mode 100644 index 0000000000..dca1ef1c6d --- /dev/null +++ b/models/aether_lat-lon/aether_to_dart.f90 @@ -0,0 +1,472 @@ +! DART software - Copyright UCAR. This open source software is provided +! by UCAR, "as is", without charge, subject to all terms of use at +! http://www.image.ucar.edu/DAReS/DART/DART_download +! + +program aether_to_dart + +!---------------------------------------------------------------------- +! purpose: Transform the Aether model restarts into a DART filter_input.nc. +! +! method: Read aether "restart" files of model state (multiple files, +! one block per aether mpi task) +! Reform fields into a DART netcdf file +! +! USAGE: The aether restart dirname and output filename are read from +! the aether_to_dart_nml namelist. +! +!---------------------------------------------------------------------- +! Converts Aether restart files to a netCDF file + +use types_mod, only : r4, MISSING_I, vtablenamelength + +use time_manager_mod, only: time_type + +use utilities_mod, only : & + finalize_utilities, error_handler, E_ERR, E_MSG, E_WARN, & + initialize_utilities, do_output + +use default_model_mod, only : write_model_time + +use transform_state_mod, only : & + static_init_blocks, aether_name_to_dart, & + nghost, open_block_file, aether_restart_dirname, & + VT_ORIGININDX, VT_VARNAMEINDX, nvar_neutral, nvar_ion, & + nx_per_block, ny_per_block, nz_per_block, & + nblocks_lon, nblocks_lat, variables, & + lats, levs, lons, debug, state_time, & + block_file_name, nlat, nlon, nlev, purge_chars + +use netcdf_utilities_mod, only : & + nc_create_file, nc_close_file, & + nc_begin_define_mode, nc_end_define_mode, & + nc_define_dimension, & + nc_add_global_attribute, nc_add_global_creation_time, & + nc_get_attribute_from_variable, nc_add_attribute_to_variable, & + nc_define_real_variable, nc_define_real_scalar, & + nc_get_variable, nc_put_variable, & + nc_synchronize_file + +implicit none + +!---------------------------------------------------------------------- +! global storage +!---------------------------------------------------------------------- + +integer :: member = MISSING_I, & + num_args, ncid +character(len=3) :: char_mem +character(len=31) :: filter_io_root = 'filter_input' +character(len=64) :: filter_io_file = '' +character(len=512) :: error_string_1, error_string_2 +character(len=31), parameter :: progname = 'aether_to_dart' +character(len=256), parameter :: source = 'aether_lat-lon/aether_to_dart.f90' + +character(len=4), parameter :: LEV_DIM_NAME = 'alt' +character(len=4), parameter :: LAT_DIM_NAME = 'lat' +character(len=4), parameter :: LON_DIM_NAME = 'lon' +character(len=4), parameter :: TIME_DIM_NAME = 'time' + +character(len=4), parameter :: LEV_VAR_NAME = 'alt' +character(len=4), parameter :: LAT_VAR_NAME = 'lat' +character(len=4), parameter :: LON_VAR_NAME = 'lon' +character(len=4), parameter :: TIME_VAR_NAME = 'time' + +!====================================================================== + +call initialize_utilities(progname) + +!---------------------------------------------------------------------- +! 
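! Editor's usage sketch (illustrative, given filter_io_root = 'filter_input'
! above): running
!
!    ./aether_to_dart 0
!
! converts Aether member 0 and, because DART numbers members from 1, writes
! filter_input_0001.nc into aether_restart_dirname (see the filter_io_file
! write below).
!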
+! Get the ensemble member
+!----------------------------------------------------------------------
+num_args = command_argument_count()
+if (num_args == 0) then
+   write(error_string_1,*) 'Usage: ./aether_to_dart member_number (0-based)'
+   call error_handler(E_ERR, progname, error_string_1)
+endif
+
+call get_command_argument(1,char_mem)
+read(char_mem,'(I3)') member
+
+!----------------------------------------------------------------------
+! Convert the files
+!----------------------------------------------------------------------
+
+call static_init_blocks(member)
+
+! Must be after static_init_blocks, which provides filter_io_root from the namelist.
+write(filter_io_file,'(2A, I0.4, A3)') trim(filter_io_root),'_', member + 1,'.nc'
+call error_handler(E_MSG, '', '')
+write(error_string_1,'(A,I3,2A)') 'Converting Aether member ',member, &
+     ' restart files to the NetCDF file ', trim(filter_io_file)
+write(error_string_2,'(2A)') ' in directory ', trim(aether_restart_dirname)
+call error_handler(E_MSG, progname, error_string_1, text2=error_string_2)
+call error_handler(E_MSG, '', '')
+
+! nc_create_file does not leave define mode.
+ncid = nc_create_file(trim(aether_restart_dirname)//'/'//trim(filter_io_file))
+! def_fill_dimvars does leave define mode.
+call def_fill_dimvars(ncid)
+
+! Write_model_time will make a time variable, if needed, which it is not.
+! state_time is read in transform_state_mod and is available by the use statement.
+call write_model_time(ncid, state_time)
+
+! Define (non-time) variables
+call restarts_to_filter(ncid, member, define=.true.)
+
+! Read and convert (non-time) variables
+call restarts_to_filter(ncid, member, define=.false.)
+! The subroutine called by this routine closes the file only if define = .true.
+call nc_close_file(ncid)
+
+call error_handler(E_MSG, '', '')
+write(error_string_1,'(3A)') 'Successfully converted the Aether restart files to ', &
+     "'"//trim(filter_io_file)//"'"
+call error_handler(E_MSG, progname, error_string_1)
+call error_handler(E_MSG, '', '')
+
+! end - close the log, etc
+call finalize_utilities()
+
+!-----------------------------------------------------------------------
+contains
+
+!-----------------------------------------------------------------------
+! Open all restart files (blocks x {neutrals,ions}) for 1 member
+! and transfer the requested variable contents to the filter input file.
+! This is called with 'define' =
+!    .true.  define variables in the file or
+!    .false. transfer the data from restart files to a filter_input.nc file.
+
+subroutine restarts_to_filter(ncid_output, member, define)
+
+integer, intent(in) :: ncid_output, member
+logical, intent(in) :: define
+
+integer :: ib, jb, ib_loop, jb_loop
+
+if (define) then
+   ! if define, run one block.
+   ! the block_to_filter_io call defines the variables in the whole domain netCDF file.
+   ib_loop = 1
+   jb_loop = 1
+   call nc_begin_define_mode(ncid_output)
+else
+   ! if not define, run all blocks.
+   ! the block_to_filter_io call adds the (ib,jb) block to the netCDF variables
+   ! in order to make a file containing the data for all the blocks.
+   ib_loop = nblocks_lon
+   jb_loop = nblocks_lat
+end if
+
+do jb = 1, jb_loop
+   do ib = 1, ib_loop
+      call block_to_filter_io(ncid_output, ib, jb, member, define)
+   enddo
+enddo
+
+if (define) then
+   call nc_end_define_mode(ncid_output)
+endif
+
+end subroutine restarts_to_filter
+
+!-----------------------------------------------------------------------
+! Transfer variable data from a block restart file to the filter_input.nc file.
+! 
It's called with 2 modes: +! define = .true. define the NC variables in the filter_input.nc +! define = .false. write the data from a block to the NC file using write_filter_io. + +subroutine block_to_filter_io(ncid_output, ib, jb, member, define) + +integer, intent(in) :: ncid_output +integer, intent(in) :: ib, jb +integer, intent(in) :: member +logical, intent(in) :: define + +real(r4), allocatable :: temp1d(:), temp2d(:,:), temp3d(:,:,:) +! real(r4), allocatable :: alt1d(:), density_ion_e(:,:,:) +integer :: ivar, nb, ncid_input +! TEC? integer :: maxsize +! logical :: no_idensity +! real(r4) :: temp0d +character(len=32) :: att_val +character(len=128) :: file_root +character(len=256) :: filename +character(len=vtablenamelength) :: varname, dart_varname + +character(len=*), parameter :: routine = 'block_to_filter_io' + +! The block number, as counted in Aether. +! Lower left is 0, increase to the East, then 1 row farther north, West to East. +nb = (jb - 1) * nblocks_lon + ib - 1 + +! a temp array large enough to hold any of the +! Lon,Lat or Alt array from a block plus ghost cells +allocate(temp1d(1-nghost:max(nx_per_block, ny_per_block, nz_per_block) + nghost)) + +! treat alt specially since we want to derive TEC here +! TODO: See density_ion_e too. +! allocate( alt1d(1-nghost:max(nx_per_block, ny_per_block, nz_per_block) + nghost)) + +! temp array large enough to hold any 2D field +allocate(temp2d(1-nghost:ny_per_block+nghost, & + 1-nghost:nx_per_block+nghost)) + +! TODO: We need all altitudes, but there might be vertical blocks in the future. +! But there would be no vertical halos. +! Make transform_state_mod: zcount adapt to whether there are blocks. +! Temp needs to have C-ordering, which is what the restart files have. +! temp array large enough to hold 1 species, temperature, etc +allocate(temp3d(1:nz_per_block, & + 1-nghost:ny_per_block+nghost, & + 1-nghost:nx_per_block+nghost)) + +! TODO: Waiting for e- guidance from Aaron. +! save density_ion_e to compute TEC +! allocate(density_ion_e(1:nz_per_block, & +! 1-nghost:ny_per_block+nghost, & +! 1-nghost:nx_per_block+nghost)) + +! TODO: Aether gives a unique name to each (of 6) velocity components. +! Do we want to use a temp4d array to handle them? +! They are independent variables in the block files (and state). +! ! temp array large enough to hold velocity vect, etc +! maxsize = max(3, nvar_ion) +! allocate(temp4d(1-nghost:nx_per_block+nghost, & +! 1-nghost:ny_per_block+nghost, & +! 1-nghost:nz_per_block+nghost, maxsize)) + + +! TODO; Does Aether need a replacement for these Density fields? Yes. +! But they are probably read by the loops below. +! Don't need to fetch index because Aether has NetCDF restarts, +! so just loop over the field names to read. +! +! ! assume we could not find the electron density for VTEC calculations +! no_idensity = .true. +! +! if (inum > 0) then +! ! one or more items in the state vector need to replace the +! ! data in the output file. loop over the index list in order. +! j = 1 +! ! TODO: electron density is not in the restart files, but it's needed for TEC +! In Aether they will be from an ions file, but now only from an output file (2023-10-30). +! Can that be handled like the neutrals and ions files, using variables(VT_ORIGININDX,:) +! to build an output file name? Are outputs in block form? +! ! save the electron density for TEC computation +! density_ion_e(:,:,:) = temp3d(:,:,:) + +! Handle the 2 restart file types (ions and neutrals). +! 
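! Editor's sketch with illustrative values (not from the patch): for a
! 2 x 2 block layout (nblocks_lon = nblocks_lat = 2) the Aether block
! numbers produced by the nb formula above are
!
!    jb = 1:   nb = 0 (ib=1)   nb = 1 (ib=2)
!    jb = 2:   nb = 2 (ib=1)   nb = 3 (ib=2)
!
! i.e. lower-left is 0, increasing eastward, then row by row northward.
!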
Each field has a file type associated with it: variables(VT_ORIGININDX,f_index)
+
+file_root = variables(VT_ORIGININDX,1)
+write(filename,'(A,"/",A)') trim(aether_restart_dirname), &
+     trim(block_file_name(trim(file_root), member, nb))
+ncid_input = open_block_file(filename, 'read')
+
+do ivar = 1, nvar_neutral
+   ! The nf90 functions cannot read the variable names with the '\'s in them.
+   varname = purge_chars(trim(variables(VT_VARNAMEINDX,ivar)), '\', plus_minus=.false.)
+   if (debug >= 100 .and. do_output()) print*, routine,'varname = ', varname
+   ! Translate the Aether field name into a CF-compliant DART field name.
+   dart_varname = aether_name_to_dart(varname)
+
+   ! TODO: Given the subroutine name, perhaps these definition sections should be
+   !       one call higher up, with the same loop around it.
+   if (define) then
+      ! Define the variable in the filter_input.nc file (the output from this program).
+      ! The calling routine entered define mode.
+
+      if (debug > 10 .and. do_output()) then
+         write(error_string_1,'(A,I0,2A)') 'Defining ivar = ', ivar,':', dart_varname
+         call error_handler(E_MSG, routine, error_string_1, source)
+      end if
+
+      call nc_define_real_variable(ncid_output, dart_varname, &
+           (/ LEV_DIM_NAME, LAT_DIM_NAME, LON_DIM_NAME/) )
+      call nc_get_attribute_from_variable(ncid_input, varname, 'units', att_val, routine)
+      call nc_add_attribute_to_variable(ncid_output, dart_varname, 'units', att_val, routine)
+
+   else if (file_root == 'neutrals') then
+      ! Read the 3D array and extract the non-halo data of this block.
+      call nc_get_variable(ncid_input, varname, temp3d, context=routine)
+      call write_filter_io(temp3d, dart_varname, ib, jb, ncid_output)
+   else
+      write(error_string_1,'(A,I2,A,I3,A)') 'Trying to read neutrals, but variables(', &
+           VT_ORIGININDX, ',', ivar, ') /= "neutrals"'
+      call error_handler(E_ERR, routine, error_string_1, source)
+   endif
+
+enddo
+call nc_close_file(ncid_input)
+
+file_root = variables(VT_ORIGININDX,nvar_neutral+1)
+write(filename,'(A,"/",A)') trim(aether_restart_dirname), &
+     trim(block_file_name(trim(file_root), member, nb))
+ncid_input = open_block_file(filename, 'read')
+
+do ivar = nvar_neutral + 1, nvar_neutral + nvar_ion
+   ! Purge '\'s from the Aether name.
+   varname = purge_chars(trim(variables(VT_VARNAMEINDX,ivar)), '\', plus_minus=.false.)
+   dart_varname = aether_name_to_dart(varname)
+
+   if (define) then
+
+      if (debug > 10 .and. do_output()) then
+         write(error_string_1,'(A,I0,2A)') 'Defining ivar = ', ivar,':', dart_varname
+         call error_handler(E_MSG, routine, error_string_1, source)
+      end if
+
+      call nc_define_real_variable(ncid_output, dart_varname, &
+           (/ LEV_DIM_NAME, LAT_DIM_NAME, LON_DIM_NAME/) )
+      call nc_get_attribute_from_variable(ncid_input, varname, 'units', att_val, routine)
+      call nc_add_attribute_to_variable(ncid_output, dart_varname, 'units', att_val, routine)
+      if (debug > 10 .and. do_output()) &
+         print*, routine,': defined ivar, dart_varname, att = ', &
+                 ivar, trim(dart_varname), trim(att_val)
+
+   else if (file_root == 'ions') then
+      call nc_get_variable(ncid_input, varname, temp3d, context=routine)
+      call write_filter_io(temp3d, dart_varname, ib, jb, ncid_output)
+   else
+      write(error_string_1,'(A,I2,A,I3,A)') 'Trying to read ions, but variables(', &
+           VT_ORIGININDX, ',', ivar, ') /= "ions"'
+      call error_handler(E_ERR, routine, error_string_1, source)
+   endif
+
+enddo
+
+! When define = .false. the input file is left open so that time can be added;
+! it is closed here after the define pass.
+if (define) call nc_close_file(ncid_input)
+
+! TODO: Does Aether need TEC to be calculated? Yes
+! ! add the VTEC as an extended-state variable
+! ! NOTE: This variable will *not* be written out to the Aether restart files.
+!
+! if (no_idensity) then
+!    write(error_string_1,*) 'Cannot compute the VTEC without the electron density'
+!    call error_handler(E_ERR, routine, error_string_1, source)
+! end if
+!
+! temp2d = 0._r8
+! ! compute the TEC integral
+! do i = 1, nz_per_block-1   ! approximate the integral over the altitude as a sum of trapezoids
+!    ! area of a trapezoid: A = (h2-h1) * (f2+f1)/2
+!    temp2d(:,:) = temp2d(:,:) + ( alt1d(i+1)-alt1d(i) ) * &
+!                  ( density_ion_e(:,:,i+1)+density_ion_e(:,:,i) ) /2.0_r8
+! end do
+! ! convert temp2d to TEC units
+! temp2d = temp2d/1e16_r8
+! call write_block_to_filter2d(temp2d, ivals(1), block, ncid, define)
+
+! TODO: Does Aether need f10_7 to be calculated or processed? Yes
+! !gitm_index = get_index_start(domain_id, 'VerticalVelocity')
+! call get_index_from_gitm_varname('f107', inum, ivals)
+! if (inum > 0) then
+!    call write_block_to_filter0d(temp0d, ivals(1), ncid, define) !see comments in the body of the subroutine
+! endif
+!
+
+deallocate(temp1d, temp2d, temp3d)
+! deallocate(alt1d, density_ion_e)
+
+end subroutine block_to_filter_io
+
+!-----------------------------------------------------------------------
+! Write one block's (non-halo) data into the corresponding section of a
+! whole-domain variable in the filter_input.nc file.
+
+subroutine write_filter_io(data3d, varname, ib, jb, ncid)
+
+real(r4), intent(in) :: data3d(1:nz_per_block, &
+                               1-nghost:ny_per_block+nghost, &
+                               1-nghost:nx_per_block+nghost)
+
+character(len=vtablenamelength), intent(in) :: varname
+integer, intent(in) :: ib, jb
+integer, intent(in) :: ncid
+
+integer :: starts(3)
+
+character(len=*), parameter :: routine = 'write_filter_io'
+
+! write(varname,'(A)') trim(variables(VT_VARNAMEINDX,ivar))
+
+! Compute the whole-domain hyperslab starts for this block (no halos),
+! e.g. starts(3) = (ib-1)*nx_per_block + 1.
+starts(1) = 1
+starts(2) = (jb-1) * ny_per_block + 1
+starts(3) = (ib-1) * nx_per_block + 1
+
+call nc_put_variable(ncid, varname, &
+     data3d(1:nz_per_block,1:ny_per_block,1:nx_per_block), &
+     context=routine, nc_start=starts, &
+     nc_count=(/nz_per_block,ny_per_block,nx_per_block/))
+
+end subroutine write_filter_io
+
+!-----------------------------------------------------------------------
+! Define the dimensions and dimension variables, then fill the
+! dimension variable (lev, lat, lon) contents in the file.
+
+subroutine def_fill_dimvars(ncid)
+
+integer, intent(in) :: ncid
+
+character(len=*), parameter :: routine = 'def_fill_dimvars'
+
+! File is still in define mode from nc_create_file
+! call nc_begin_define_mode(ncid)
+
+! Global atts for aether_to_dart and dart_to_aether.
+call nc_add_global_creation_time(ncid, routine)
+call nc_add_global_attribute(ncid, "model_source", source, routine)
+call nc_add_global_attribute(ncid, "model", "aether", routine)
+
+! define grid dimensions
+call nc_define_dimension(ncid, trim(LEV_DIM_NAME), nlev, routine)
+call nc_define_dimension(ncid, trim(LAT_DIM_NAME), nlat, routine)
+call nc_define_dimension(ncid, trim(LON_DIM_NAME), nlon, routine)
+
+! define grid variables
+! z
+call nc_define_real_variable(     ncid, trim(LEV_VAR_NAME), (/ trim(LEV_DIM_NAME) /), routine)
+call nc_add_attribute_to_variable(ncid, trim(LEV_VAR_NAME), 'units', 'm', routine)
+call nc_add_attribute_to_variable &
+     (ncid, trim(LEV_VAR_NAME), 'long_name', 'height above mean sea level', routine)
+
+!
+! latitude
+call nc_define_real_variable(     ncid, trim(LAT_VAR_NAME), (/ trim(LAT_DIM_NAME) /), routine)
+call nc_add_attribute_to_variable(ncid, trim(LAT_VAR_NAME), 'units', 'degrees_north', routine)
+call nc_add_attribute_to_variable(ncid, trim(LAT_VAR_NAME), 'long_name', 'latitude', routine)
+
+! longitude
+call nc_define_real_variable(     ncid, trim(LON_VAR_NAME), (/ trim(LON_DIM_NAME) /), routine)
+call nc_add_attribute_to_variable(ncid, trim(LON_VAR_NAME), 'units', 'degrees_east', routine)
+call nc_add_attribute_to_variable(ncid, trim(LON_VAR_NAME), 'long_name', 'longitude', routine)
+
+! Dimension 'time' will no longer be created by write_model_time,
+! or by nc_define_unlimited_dimension.  It will be a scalar variable.
+! time
+call nc_define_real_scalar(       ncid, trim(TIME_VAR_NAME), routine)
+call nc_add_attribute_to_variable(ncid, trim(TIME_VAR_NAME), 'calendar', 'gregorian', routine)
+call nc_add_attribute_to_variable &
+     (ncid, trim(TIME_VAR_NAME), 'units', 'days since 1601-01-01 00:00:00', routine)
+call nc_add_attribute_to_variable &
+     (ncid, trim(TIME_VAR_NAME), 'long_name', 'gregorian_days', routine)
+
+call nc_end_define_mode(ncid)
+
+call nc_put_variable(ncid, trim(LEV_VAR_NAME), levs, routine)
+call nc_put_variable(ncid, trim(LAT_VAR_NAME), lats, routine)
+call nc_put_variable(ncid, trim(LON_VAR_NAME), lons, routine)
+! time will be written elsewhere.
+
+! Flush the buffer and leave netCDF file open
+call nc_synchronize_file(ncid)
+
+end subroutine def_fill_dimvars
+
+!-----------------------------------------------------------------------
+end program aether_to_dart
+
diff --git a/models/aether_lat-lon/dart_to_aether.f90 b/models/aether_lat-lon/dart_to_aether.f90
new file mode 100644
index 0000000000..c8aab30dcd
--- /dev/null
+++ b/models/aether_lat-lon/dart_to_aether.f90
@@ -0,0 +1,395 @@
+! DART software - Copyright UCAR. This open source software is provided
+! by UCAR, "as is", without charge, subject to all terms of use at
+! http://www.image.ucar.edu/DAReS/DART/DART_download
+!
+
+program dart_to_aether
+
+!----------------------------------------------------------------------
+! purpose: Transform a DART filter_output.nc into the Aether model restarts.
+!
+! method: Read the DART state netCDF file and overwrite values in the Aether restart files.
+!
+! This version assumes that the DART grid is global and that the data needs to be
+! blocked into one block per Aether MPI task.  There is a different converter
+! for when Aether needs only a single input/output file.
+!
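+! The restart block files are named {neutrals,ions}_mMMMM_gBBBB.nc, where MMMM
+! is the 0-based ensemble member and BBBB is the 0-based block number (see the
+! readme); this program overwrites the state fields in those files in place.
+!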
+!---------------------------------------------------------------------- + +use types_mod, only : r4, MISSING_I, MISSING_R4, vtablenamelength + +use utilities_mod, only : & + finalize_utilities, error_handler, E_ERR, E_MSG, E_WARN, & + initialize_utilities, do_output + +use default_model_mod, only : write_model_time + +use transform_state_mod, only : & + debug, aether_restart_dirname, nblocks_lat, & + nblocks_lon, nghost, nlat, nlon, nlev, & + nx_per_block, ny_per_block, nz_per_block, & + nvar_ion, nvar_neutral, VT_ORIGININDX, VT_VARNAMEINDX, & + block_file_name, open_block_file, aether_name_to_dart, & + variables, purge_chars, static_init_blocks + +use netcdf_utilities_mod, only : & + nc_open_file_readonly, nc_close_file, & + nc_begin_define_mode, nc_end_define_mode, & + nc_define_dimension, & + nc_add_global_attribute, nc_add_global_creation_time, & + nc_get_attribute_from_variable, nc_add_attribute_to_variable, & + nc_define_real_variable, nc_define_real_scalar, & + nc_get_variable, nc_put_variable, nc_variable_exists, & + nc_synchronize_file, NF90_FILL_REAL + +implicit none + +!---------------------------------------------------------------------- +! global storage +!---------------------------------------------------------------------- + +integer :: member = MISSING_I, & + num_args, ncid +character(len=3) :: char_mem +character(len=31) :: filter_io_root = 'filter_input' +character(len=64) :: filter_io_file = '' +character(len=512) :: error_string_1, error_string_2 +character(len=31), parameter :: progname = 'dart_to_aether' +character(len=256), parameter :: source = 'aether_lat-lon/dart_to_aether.f90' + +!====================================================================== + +call initialize_utilities(progname) + +!---------------------------------------------------------------------- +! Get the ensemble member +!---------------------------------------------------------------------- +num_args = command_argument_count() +if (num_args == 0) then + write(error_string_1,*) 'Usage: ./dart_to_aether member_number (0-based)' + call error_handler(E_ERR, progname, error_string_1) +endif + +call get_command_argument(1,char_mem) +read(char_mem,'(I3)') member + +!---------------------------------------------------------------------- +! Convert the files +!---------------------------------------------------------------------- + +call static_init_blocks(member) + +write(filter_io_file,'(2A,I0.4,A3)') trim(filter_io_root),'_',member + 1,'.nc' + +call error_handler(E_MSG, source, '', '') +write(error_string_1,'(3A)') 'Extracting fields from DART file ',trim(filter_io_file) +write(error_string_2,'(A,I3,2A)') 'into Aether restart member ',member, & + ' in directory ', trim(aether_restart_dirname) +call error_handler(E_MSG, progname, error_string_1, text2=error_string_2) +call error_handler(E_MSG, '', '') + +ncid = nc_open_file_readonly(trim(aether_restart_dirname)//'/'//trim(filter_io_file), source) + +call filter_to_restarts(ncid, member) + +!---------------------------------------------------------------------- +! Log what we think we're doing, and exit. +!---------------------------------------------------------------------- +call error_handler(E_MSG, source,'','') +write(error_string_1,'(3A)') 'Successfully converted to the Aether restart files in directory' +write(error_string_2,'(3A)') "'"//trim(aether_restart_dirname)//"'" +call error_handler(E_MSG, source, error_string_1, source, text2=error_string_2) + +call nc_close_file(ncid) + +! 
end - close the log, etc.
+call finalize_utilities()
+
+!-----------------------------------------------------------------------
+contains
+!-----------------------------------------------------------------------
+! Extract (updated) variables from a filter_output.nc file
+! and write to existing block restart files.
+
+subroutine filter_to_restarts(ncid, member)
+
+integer, intent(in) :: member, ncid
+
+real(r4), allocatable :: fulldom3d(:,:,:)
+character(len=64) :: file_root
+integer :: ivar
+character(len=vtablenamelength) :: varname, dart_varname
+
+character(len=*), parameter :: routine = 'filter_to_restarts'
+
+! Space for a full domain field (read from filter_output.nc)
+! and the halo around the full domain
+allocate(fulldom3d(1:nlev, &
+                   1-nghost:nlat+nghost, &
+                   1-nghost:nlon+nghost))
+
+! get the dirname, construct the filenames inside open_block_file
+
+! Not all fields have halos suitable for calculating gradients.
+! These do (2023-11-8): neutrals: temperature, O, O2, N2, and the horizontal winds;
+!                       ions: none.
+! The current transform_state will fill all neutral halos anyway,
+! since that's simpler and won't break the model.
+! TODO: add an attribute to the variables (?) to denote whether a field
+!       should have its halo filled?
+do ivar = 1, nvar_neutral
+   varname = purge_chars(trim(variables(VT_VARNAMEINDX,ivar)), '\', plus_minus=.false.)
+   if (debug >= 0 .and. do_output()) then
+      write(error_string_1,'("varname = ",A)') trim(varname)
+      call error_handler(E_MSG, routine, error_string_1, source)
+   endif
+   dart_varname = aether_name_to_dart(varname)
+
+   file_root = trim(variables(VT_ORIGININDX,ivar))
+   if (trim(file_root) == 'neutrals') then
+      ! NF90_FILL_REAL is made available through netcdf_utilities_mod.
+      fulldom3d = NF90_FILL_REAL
+
+      call nc_get_variable(ncid, dart_varname, fulldom3d(1:nlev,1:nlat,1:nlon), &
+           context=routine)
+      ! Copy updated field values to the full domain halo.
+      ! Block domains+halos will be easily read from this.
+      call add_halo_fulldom3d(fulldom3d)
+
+      call filter_io_to_blocks(fulldom3d, varname, file_root, member)
+   else
+      write(error_string_1,'(3A)') "file_root of varname = ",trim(varname), &
+           ' expected to be "neutrals"'
+      call error_handler(E_ERR, routine, error_string_1, source)
+   endif
+
+enddo
+
+do ivar = nvar_neutral + 1, nvar_neutral + nvar_ion
+   varname = purge_chars(trim(variables(VT_VARNAMEINDX,ivar)), '\', plus_minus=.false.)
+   dart_varname = aether_name_to_dart(varname)
+
+   file_root = trim(variables(VT_ORIGININDX,ivar))
+   if (debug >= 0 .and. do_output()) then
+      write(error_string_1,'("varname, dart_varname, file_root = ",3(2x,A))') &
+           trim(varname), trim(dart_varname), trim(file_root)
+      call error_handler(E_MSG, routine, error_string_1, source)
+   endif
+
+   if (trim(file_root) == 'ions') then
+      fulldom3d = NF90_FILL_REAL
+      call nc_get_variable(ncid, dart_varname, fulldom3d(1:nlev,1:nlat,1:nlon), &
+           context=routine)
+      ! 2023-11: ions do not have real or used data in their halos.
+      ! Make this clear by leaving the halos filled with NF90_FILL_REAL.
+      ! TODO: Will this be translated into the NetCDF missing_value?
+      ! call add_halo_fulldom3d(fulldom3d)
+
+      call filter_io_to_blocks(fulldom3d, varname, file_root, member)
+
+   else
+      write(error_string_1,'(3A)') "file_root of varname = ",trim(varname), &
+           ' expected to be "ions"'
+      call error_handler(E_ERR, routine, error_string_1, source)
+   endif
+enddo
+
+deallocate(fulldom3d)
+
+end subroutine filter_to_restarts
+
+
+!-----------------------------------------------------------------------
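+! A worked example of the wrapping done below (illustrative numbers, not
+! taken from a real grid): with nlon = 90, nghost = 2, haflon = 45, g = 1:
+!    western halo:  fulldom3d(:, 1:nlat,  0) = fulldom3d(:, 1:nlat, 90)
+!    eastern halo:  fulldom3d(:, 1:nlat, 91) = fulldom3d(:, 1:nlat,  1)
+!    south halo:    row 0 is row 1 shifted by haflon longitudes (over the pole)
+!    north halo:    row nlat+1 is row nlat shifted by haflon longitudes (over the pole)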
+! Copy updated data from the full domain into the halo regions,
+! in preparation for extracting haloed blocks into the block restart files.
+! First, the halos past the East and West edges are taken from the wrap-around points.
+! Then, the halos beyond the edge latitudes in the North and South
+! are taken by reaching over the pole to a longitude that's halfway around the globe.
+! This is independent of the number of blocks.
+
+subroutine add_halo_fulldom3d(fulldom3d)
+
+! Space for a full domain field (read from filter_output.nc)
+! and the halo around the full domain
+real(r4), intent(inout) :: fulldom3d(1:nz_per_block, &
+                                     1-nghost:nlat+nghost, &
+                                     1-nghost:nlon+nghost)
+
+integer :: g, i, j, haflat, haflon
+real(r4), allocatable :: normed(:,:)
+character(len=16) :: debug_format
+
+character(len=*), parameter :: routine = 'add_halo_fulldom3d'
+
+! An array for debugging by renormalizing an altitude of fulldom3d.
+allocate(normed(1-nghost:nlat+nghost, &
+                1-nghost:nlon+nghost))
+
+haflat = nlat / 2
+haflon = nlon / 2
+
+do g = 1,nghost
+   ! left; reach around the date line.
+   ! There's no data at the ends of the halos for this copy.
+   fulldom3d (:,1:nlat,      1-g) &
+   = fulldom3d(:,1:nlat, nlon+1-g)
+
+   ! right
+   fulldom3d (:,1:nlat, nlon+g) &
+   = fulldom3d(:,1:nlat,      g)
+
+   ! bottom; reach over the S Pole for halo values.
+   ! (There is data at the ends of the halos for these.)
+
+   fulldom3d (:, 1-g , 1-nghost       :haflon) &
+   = fulldom3d(:,  g  , 1-nghost+haflon:nlon)
+   fulldom3d (:, 1-g , haflon+1:nlon) &
+   = fulldom3d(:,  g  ,        1:haflon)
+   ! Last 2 (halo) points on the right edge (at the bottom)
+   fulldom3d (:, 1-g ,   nlon+1:nlon+nghost) &
+   = fulldom3d(:,  g  , haflon+1:haflon+nghost)
+
+   ! top
+   fulldom3d (:, nlat  +g, 1-nghost       :haflon) &
+   = fulldom3d(:, nlat+1-g, 1-nghost+haflon:nlon)
+   fulldom3d (:, nlat  +g, haflon+1:nlon) &
+   = fulldom3d(:, nlat+1-g,        1:haflon)
+   ! Last 2 (halo) points on the right edge (at the top)
+   fulldom3d (:, nlat  +g,   nlon+1:nlon+nghost) &
+   = fulldom3d(:, nlat+1-g, haflon+1:haflon+nghost)
+enddo
+
+if (any(fulldom3d == NF90_FILL_REAL)) then
+   error_string_1 = 'ERROR: some fulldom3d values are still NF90_FILL_REAL after the halo copies'
+   call error_handler(E_ERR, routine, error_string_1, source)
+endif
+
+! TODO: Keep halo corners check for future use?
+!       Add more robust rescaling.
+! Print the 4x4 arrays (corners & middle) to see whether values are copied correctly.
+! Level 44 values range from 800-eps to 805.  I don't want to see the 80.
+! For O+ they range from 0 to 7e+11, but are close to 1.1082e+10 near the corners.
+! 2023-12-20; Aaron sent new files with 54 levels.
+if (debug >= 100 .and. do_output()) then
+   if (fulldom3d(54,10,10) > 1.e+10_r4) then
+      normed = fulldom3d(54,:,:) - 1.1092e+10_r4
+      debug_format = '(3(4E10.4,2X))'
+   else if (fulldom3d(54,10,10) < 1000._r4) then
+      normed = fulldom3d(54,:,:) - 800._r4
+      debug_format = '(3(4F10.5,2X))'
+   endif
+
+   ! Debug HDF5
+   write(error_string_1,'("normed(nlat+1,nlon+2) = ",es12.4)') normed(nlat+1,nlon+2)
+   call error_handler(E_MSG, routine, error_string_1, source)
+
+   ! 
17 format debug_format + print*,'top' + do j = nlat+2, nlat-1, -1 + write(*,debug_format) (normed(j,i), i= -1, 2), & + (normed(j,i), i=haflon-1,haflon+2), & + (normed(j,i), i= nlon-1, nlon+2) + enddo + print*,'middle' + do j = haflat+2, haflat-1 , -1 + write(*,debug_format) (normed(j,i), i= -1, 2), & + (normed(j,i), i=haflon-1,haflon+2), & + (normed(j,i), i= nlon-1, nlon+2) + enddo + print*,'bottom' + do j = 2,-1, -1 + write(*,debug_format) (normed(j,i), i= -1, 2), & + (normed(j,i), i=haflon-1,haflon+2), & + (normed(j,i), i= nlon-1, nlon+2) + enddo +endif + +deallocate(normed) + +end subroutine add_halo_fulldom3d + +!----------------------------------------------------------------------- +! Transfer part of the full field into a block restart file. + +subroutine filter_io_to_blocks(fulldom3d, varname, file_root, member) + +real(r4), intent(in) :: fulldom3d(1:nz_per_block, & + 1-nghost:nlat+nghost, & + 1-nghost:nlon+nghost ) +character(len=*), intent(in) :: varname +character(len=*), intent(in) :: file_root +integer, intent(in) :: member + +! Don't collect velocity components (6 of them) +! real(r4) :: temp0d +! , temp1d(:) ? +integer :: ncid_output +integer :: ib, jb, nb +integer :: starts(3), ends(3), xcount, ycount, zcount +character(len=256) :: block_file + +character(len=*), parameter :: routine = 'filter_io_to_blocks' + +! a temp array large enough to hold any of the +! Lon,Lat or Alt array from a block plus ghost cells +! allocate(temp1d(1-nghost:max(nx_per_block,ny_per_block,nz_per_block)+nghost)) + +zcount = nz_per_block +ycount = ny_per_block + (2 * nghost) +xcount = nx_per_block + (2 * nghost) + +if (debug > 0 .and. do_output()) then + write(error_string_1,'(A,I0,A,I0,A)') 'Now putting the data for ', nblocks_lon, & + ' blocks lon by ',nblocks_lat,' blocks lat' + call error_handler(E_MSG, routine, error_string_1, source) +end if + +starts(1) = 1 +ends(1) = nz_per_block + +do jb = 1, nblocks_lat + starts(2) = (jb - 1) * ny_per_block - nghost + 1 + ends(2) = jb * ny_per_block + nghost + + do ib = 1, nblocks_lon + starts(3) = (ib - 1) * nx_per_block - nghost + 1 + ends(3) = ib * nx_per_block + nghost + + nb = (jb - 1) * nblocks_lon + ib - 1 + + write(block_file,'(A,"/",A)') trim(aether_restart_dirname), & + trim(block_file_name(trim(file_root), member, nb)) + ncid_output = open_block_file(block_file, 'readwrite') + if (.not.nc_variable_exists(ncid_output,varname)) then + write(error_string_1,'(4A)') 'variable ', varname, ' does not exist in ',block_file + call error_handler(E_ERR, routine, error_string_1, source) + endif + + if ( debug > 0 .and. do_output()) then + write(error_string_1,'(A,3(2X,i5))') "block, ib, jb = ", nb, ib, jb + call error_handler(E_MSG, routine, error_string_1, source) + write(error_string_1,'(3(A,3i5))') & + 'starts = ',starts, 'ends = ',ends, '[xyz]counts = ',xcount,ycount,zcount + call error_handler(E_MSG, routine, error_string_1, source) + endif + + call nc_put_variable(ncid_output, trim(varname), & + fulldom3d(starts(1):ends(1), starts(2):ends(2), starts(3):ends(3)), & + context=routine, nc_count=(/ zcount,ycount,xcount /) ) + + call nc_close_file(ncid_output) + + enddo +enddo + +! +! TODO: ? Add f107 and Rho to the restart files +! call read_filter_io_block0d(ncid, ivals(1), data0d) +! if (data0d < 0.0_r8) data0d = 60.0_r8 !alex +! 
write(ounit) data0d
+
+end subroutine filter_io_to_blocks
+
+!-----------------------------------------------------------------------
+end program dart_to_aether
+
diff --git a/models/aether_lat-lon/model_mod.f90 b/models/aether_lat-lon/model_mod.f90
new file mode 100644
index 0000000000..87ac6c0e9c
--- /dev/null
+++ b/models/aether_lat-lon/model_mod.f90
@@ -0,0 +1,622 @@
+! DART software - Copyright UCAR. This open source software is provided
+! by UCAR, "as is", without charge, subject to all terms of use at
+! http://www.image.ucar.edu/DAReS/DART/DART_download
+!
+
+module model_mod
+
+!-----------------------------------------------------------------------
+!
+! Interface for Aether
+!
+!-----------------------------------------------------------------------
+
+use types_mod, only : &
+    r8, i8, MISSING_R8, vtablenamelength
+
+use time_manager_mod, only : &
+    time_type, set_time, set_calendar_type
+
+use location_mod, only : &
+    location_type, get_close_type, &
+    get_close_obs, get_close_state, &
+    is_vertical, set_location, &
+    VERTISHEIGHT, query_location, get_location
+
+use utilities_mod, only : &
+    open_file, close_file, &
+    error_handler, E_ERR, E_MSG, E_WARN, &
+    nmlfileunit, do_nml_file, do_nml_term, &
+    find_namelist_in_file, check_namelist_read, to_upper, &
+    find_enclosing_indices
+
+use obs_kind_mod, only : get_index_for_quantity
+
+use netcdf_utilities_mod, only : &
+    nc_add_global_attribute, nc_synchronize_file, &
+    nc_add_global_creation_time, &
+    nc_begin_define_mode, nc_end_define_mode, &
+    nc_open_file_readonly, nc_get_dimension_size, nc_create_file, &
+    nc_get_variable
+
+use quad_utils_mod, only : &
+    quad_interp_handle, init_quad_interp, set_quad_coords, &
+    quad_lon_lat_locate, quad_lon_lat_evaluate, &
+    GRID_QUAD_FULLY_REGULAR, QUAD_LOCATED_CELL_CENTERS
+
+use state_structure_mod, only : &
+    add_domain, get_dart_vector_index, get_domain_size, &
+    get_model_variable_indices, get_varid_from_kind, &
+    state_structure_info
+
+use distributed_state_mod, only : get_state
+
+use ensemble_manager_mod, only : ensemble_type
+
+! These routines are passed through from default_model_mod.
+! To write model-specific versions of these routines,
+! remove the routine from this use statement and add your code to
+! this file.
+use default_model_mod, only : &
+    pert_model_copies, read_model_time, write_model_time, &
+    init_time => fail_init_time, &
+    init_conditions => fail_init_conditions, &
+    convert_vertical_obs, convert_vertical_state, adv_1step
+
+implicit none
+private
+
+! routines required by DART code - will be called from filter and other DART executables.
+public :: get_model_size, &
+          get_state_meta_data, &
+          model_interpolate, &
+          end_model, &
+          static_init_model, &
+          nc_write_model_atts, &
+          get_close_obs, &
+          get_close_state, &
+          pert_model_copies, &
+          convert_vertical_obs, &
+          convert_vertical_state, &
+          read_model_time, &
+          adv_1step, &
+          init_time, &
+          init_conditions, &
+          shortest_time_between_assimilations, &
+          write_model_time
+
+character(len=256), parameter :: source = 'aether_lat-lon/model_mod.f90'
+
+logical :: module_initialized = .false.
+integer :: dom_id ! used to access the state structure
+type(time_type) :: assimilation_time_step
+
+!-----------------------------------------------------------------------
+! 
Default values for namelist +character(len=256) :: template_file = 'filter_input_0001.nc' +integer :: time_step_days = 0 +integer :: time_step_seconds = 3600 + +integer, parameter :: MAX_STATE_VARIABLES = 100 +integer, parameter :: NUM_STATE_TABLE_COLUMNS = 5 +character(len=vtablenamelength) :: variables(NUM_STATE_TABLE_COLUMNS,MAX_STATE_VARIABLES) = '' + +type :: var_type + integer :: count + character(len=64), allocatable :: names(:) + integer, allocatable :: qtys(:) + real(r8), allocatable :: clamp_values(:, :) + logical, allocatable :: updates(:) +end type var_type + +namelist /model_nml/ template_file, time_step_days, time_step_seconds, variables + +!----------------------------------------------------------------------- +! Dimensions + +character(len=4), parameter :: LEV_DIM_NAME = 'alt' +character(len=4), parameter :: LAT_DIM_NAME = 'lat' +character(len=4), parameter :: LON_DIM_NAME = 'lon' +character(len=4), parameter :: TIME_DIM_NAME = 'time' + +character(len=4), parameter :: LEV_VAR_NAME = 'alt' +character(len=4), parameter :: LAT_VAR_NAME = 'lat' +character(len=4), parameter :: LON_VAR_NAME = 'lon' +character(len=4), parameter :: TIME_VAR_NAME = 'time' + +! Filter +! To be assigned in assign_dimensions (for filter) +! or get_grid_from_blocks (aether_to_dart, dart_to_aether). +real(r8), allocatable :: levs(:), lats(:), lons(:) + +integer :: nlev, nlat, nlon +real(r8) :: lon_start, lon_delta, lat_start, lat_delta, lat_end + +!----------------------------------------------------------------------- +! to be assigned in the verify_variables subroutine + +type(quad_interp_handle) :: quad_interp + +integer, parameter :: GENERAL_ERROR_CODE = 99 +integer, parameter :: INVALID_VERT_COORD_ERROR_CODE = 15 +integer, parameter :: INVALID_LATLON_VAL_ERROR_CODE = 16 +integer, parameter :: INVALID_ALTITUDE_VAL_ERROR_CODE = 17 +integer, parameter :: UNKNOWN_OBS_QTY_ERROR_CODE = 20 + +type(time_type) :: state_time ! module-storage declaration of current model time +character(len=512) :: error_string_1, error_string_2 + +contains + +!----------------------------------------------------------------------- +! Called to do one time initialization of the model. As examples, +! might define information about the model size or model timestep. +! In models that require pre-computed static data, for instance +! spherical harmonic weights, these would also be computed here. + +subroutine static_init_model() + +integer :: iunit, io +type(var_type) :: var + +module_initialized = .true. + +call find_namelist_in_file("input.nml", "model_nml", iunit) +read(iunit, nml = model_nml, iostat = io) +call check_namelist_read(iunit, io, "model_nml") + +call set_calendar_type('GREGORIAN') + +! Record the namelist values used for the run +if (do_nml_file()) write(nmlfileunit, nml=model_nml) +if (do_nml_term()) write( * , nml=model_nml) + +call assign_dimensions() + +! Dimension start and deltas needed for set_quad_coords +lon_start = lons(1) +lon_delta = lons(2) - lons(1) +lat_start = lats(1) +lat_delta = lats(2) - lats(1) + +var = assign_var(variables, MAX_STATE_VARIABLES) + +! This time is both the minimum time you can ask the model to advance +! (for models that can be advanced by filter) and it sets the assimilation +! window. All observations within +/- 1/2 this interval from the current +! model time will be assimilated. If this is not settable at runtime +! feel free to hardcode it and remove from the namelist. +assimilation_time_step = set_time(time_step_seconds, time_step_days) + +! 
Define which variables are in the model state.
+! This is using add_domain_from_file (the arg list matches).
+dom_id = add_domain(template_file, var%count, var%names, var%qtys, &
+                    var%clamp_values, var%updates)
+
+call state_structure_info(dom_id)
+
+
+call init_quad_interp(GRID_QUAD_FULLY_REGULAR, nlon, nlat, &
+     QUAD_LOCATED_CELL_CENTERS, &
+     global=.true., spans_lon_zero=.true., pole_wrap=.true., &
+     interp_handle=quad_interp)
+
+call set_quad_coords(quad_interp, lon_start, lon_delta, lat_start, lat_delta)
+
+end subroutine static_init_model
+
+!-----------------------------------------------------------------------
+! Returns the number of items in the state vector as an integer.
+
+function get_model_size()
+
+integer(i8) :: get_model_size
+
+if ( .not. module_initialized ) call static_init_model
+
+get_model_size = get_domain_size(dom_id)
+
+end function get_model_size
+
+!-----------------------------------------------------------------------
+! Use quad_utils_mod to interpolate the ensemble to the obs location.
+
+subroutine model_interpolate(state_handle, ens_size, location, qty, expected_obs, istatus)
+
+type(ensemble_type), intent(in)  :: state_handle
+integer,             intent(in)  :: ens_size
+type(location_type), intent(in)  :: location
+integer,             intent(in)  :: qty
+real(r8),            intent(out) :: expected_obs(ens_size)
+integer,             intent(out) :: istatus(ens_size)
+
+! Local storage
+
+character(len=*), parameter :: routine = 'model_interpolate'
+
+real(r8) :: loc_array(3), llon, llat, lvert, lon_fract, lat_fract
+integer  :: four_lons(4), four_lats(4)
+integer  :: status1, which_vert, varid
+real(r8) :: quad_vals(4, ens_size)
+
+if ( .not. module_initialized ) call static_init_model
+
+! Assume failure.  Set the return val to missing, then the code can
+! just set istatus to something indicating why it failed, and return.
+! If the interpolation is good, expected_obs will be set to the
+! good values, and the last line here sets istatus to 0.
+! make any error codes set here be in the 10s
+
+expected_obs = MISSING_R8          ! the DART bad value flag
+istatus = GENERAL_ERROR_CODE       ! unknown error
+
+! Get the individual location values.
+
+loc_array  = get_location(location)
+llon       = loc_array(1)
+llat       = loc_array(2)
+lvert      = loc_array(3)
+which_vert = nint(query_location(location))
+
+! Only the height and level vertical location types are supported at this point.
+if (.not. is_vertical(location, "HEIGHT") .and. .not. is_vertical(location, "LEVEL")) then
+   istatus = INVALID_VERT_COORD_ERROR_CODE
+   return
+endif
+
+! See if the state contains the obs quantity.
+varid = get_varid_from_kind(dom_id, qty)
+
+if (varid > 0) then
+   istatus = 0
+else
+   istatus = UNKNOWN_OBS_QTY_ERROR_CODE
+   return
+endif
+
+! get the indices for the 4 corners of the quad in the horizontal, plus
+! the fraction across the quad for the obs location
+call quad_lon_lat_locate(quad_interp, llon, llat, &
+     four_lons, four_lats, lon_fract, lat_fract, status1)
+if (status1 /= 0) then
+   istatus(:) = INVALID_LATLON_VAL_ERROR_CODE  ! cannot locate enclosing horizontal quad
+   return
+endif
+
+call get_quad_vals(state_handle, ens_size, varid, four_lons, four_lats, &
+     loc_array, which_vert, quad_vals, istatus)
+if (any(istatus /= 0)) return
+
+! do the horizontal interpolation for each ensemble member
+call quad_lon_lat_evaluate(quad_interp, lon_fract, lat_fract, ens_size, &
+     quad_vals, expected_obs, istatus)
+
+! All good.
+istatus(:) = 0
+
+end subroutine model_interpolate
+
+!-----------------------------------------------------------------------
+! Returns the smallest increment in time that the model is capable
+! of advancing the state in a given implementation, or the shortest
+! time you want the model to advance between assimilations.
+
+function shortest_time_between_assimilations()
+
+type(time_type) :: shortest_time_between_assimilations
+
+if ( .not. module_initialized ) call static_init_model
+
+shortest_time_between_assimilations = assimilation_time_step
+
+end function shortest_time_between_assimilations
+
+
+
+!-----------------------------------------------------------------------
+! Given an integer index into the state vector, returns the
+! associated location and optionally the physical quantity.
+
+subroutine get_state_meta_data(index_in, location, qty)
+
+integer(i8),         intent(in)  :: index_in
+type(location_type), intent(out) :: location
+integer, optional,   intent(out) :: qty
+
+! Local variables
+
+integer :: lat_index, lon_index, lev_index
+integer :: my_var_id, my_qty
+
+if ( .not. module_initialized ) call static_init_model
+
+! Restart data is ordered (lev,lat,lon) (translated from C to fortran).
+call get_model_variable_indices(index_in, lev_index, lat_index, lon_index, &
+     var_id=my_var_id, kind_index=my_qty)
+
+! should be set to the actual location using set_location()
+location = set_location(lons(lon_index), lats(lat_index), levs(lev_index), VERTISHEIGHT)
+
+! should be set to the physical quantity, e.g. QTY_TEMPERATURE
+if (present(qty)) qty = my_qty
+
+end subroutine get_state_meta_data
+
+!-----------------------------------------------------------------------
+! Does any shutdown and clean-up needed for the model.  Can be a NULL
+! INTERFACE if the model has no need to clean up storage, etc.
+
+subroutine end_model()
+
+end subroutine end_model
+
+!-----------------------------------------------------------------------
+! write any additional attributes to the output and diagnostic files
+
+subroutine nc_write_model_atts(ncid, domain_id)
+
+integer, intent(in) :: ncid      ! netCDF file identifier
+integer, intent(in) :: domain_id
+
+character(len=*), parameter :: routine = 'nc_write_model_atts'
+
+if ( .not. module_initialized ) call static_init_model
+
+! The file may or may not already be in define mode, depending on the caller:
+! nc_create_file leaves it in define mode, but create_and_open_state_output
+! calls nf90_enddef before calling this routine, so enter define mode explicitly.
+call nc_begin_define_mode(ncid)
+
+call nc_add_global_creation_time(ncid, routine)
+
+call nc_add_global_attribute(ncid, "model_source", source, routine)
+call nc_add_global_attribute(ncid, "model", "aether", routine)
+
+call nc_end_define_mode(ncid)
+
+! Flush the buffer and leave the netCDF file open
+call nc_synchronize_file(ncid)
+
+end subroutine nc_write_model_atts
+
+!-----------------------------------------------------------------------
+! Read dimension information from the template file and use
+! it to assign values to variables.
+
+subroutine assign_dimensions()
+
+integer :: ncid
+character(len=24), parameter :: routine = 'assign_dimensions'
+
+call error_handler(E_MSG, routine, 'reading filter input ['//trim(template_file)//']')
+
+ncid = nc_open_file_readonly(template_file, routine)
+
+! levels
+nlev = nc_get_dimension_size(ncid, trim(LEV_DIM_NAME), routine)
+allocate(levs(nlev))
+call nc_get_variable(ncid, trim(LEV_VAR_NAME), levs, routine)
+
+! latitude
+nlat = nc_get_dimension_size(ncid, trim(LAT_DIM_NAME), routine)
+allocate(lats(nlat))
+call nc_get_variable(ncid, trim(LAT_VAR_NAME), lats, routine)
+
+! 
longitude +nlon = nc_get_dimension_size(ncid, trim(LON_DIM_NAME), routine) +allocate(lons(nlon)) +call nc_get_variable(ncid, trim(LON_VAR_NAME), lons, routine) + +end subroutine assign_dimensions + +!----------------------------------------------------------------------- +! Parse the table of variables characteristics into arrays for easier access. + +function assign_var(variables, MAX_STATE_VARIABLES) result(var) + +character(len=vtablenamelength), intent(in) :: variables(:, :) +integer, intent(in) :: MAX_STATE_VARIABLES + +type(var_type) :: var +integer :: ivar +character(len=vtablenamelength) :: table_entry + +!----------------------------------------------------------------------- +! Codes for interpreting the NUM_STATE_TABLE_COLUMNS of the variables table +integer, parameter :: NAME_INDEX = 1 ! ... variable name +integer, parameter :: QTY_INDEX = 2 ! ... DART qty +integer, parameter :: MIN_VAL_INDEX = 3 ! ... minimum value if any +integer, parameter :: MAX_VAL_INDEX = 4 ! ... maximum value if any +integer, parameter :: UPDATE_INDEX = 5 ! ... update (state) or not + +! Loop through the variables array to get the actual count of the number of variables +do ivar = 1, MAX_STATE_VARIABLES + ! If the element is an empty string, the loop has exceeded the extent of the variables + if (variables(1, ivar) == '') then + var%count = ivar-1 + exit + endif +enddo + +! Allocate the arrays in the var derived type +allocate(var%names(var%count), var%qtys(var%count), var%clamp_values(var%count, 2), var%updates(var%count)) + +do ivar = 1, var%count + + var%names(ivar) = trim(variables(NAME_INDEX, ivar)) + + table_entry = variables(QTY_INDEX, ivar) + call to_upper(table_entry) + + var%qtys(ivar) = get_index_for_quantity(table_entry) + + if (variables(MIN_VAL_INDEX, ivar) /= 'NA') then + read(variables(MIN_VAL_INDEX, ivar), '(d16.8)') var%clamp_values(ivar,1) + else + var%clamp_values(ivar,1) = MISSING_R8 + endif + + if (variables(MAX_VAL_INDEX, ivar) /= 'NA') then + read(variables(MAX_VAL_INDEX, ivar), '(d16.8)') var%clamp_values(ivar,2) + else + var%clamp_values(ivar,2) = MISSING_R8 + endif + + table_entry = variables(UPDATE_INDEX, ivar) + call to_upper(table_entry) + + if (table_entry == 'UPDATE') then + var%updates(ivar) = .true. + else + var%updates(ivar) = .false. + endif + +enddo + +end function assign_var + +!----------------------------------------------------------------------- +! Extract state values needed by the interpolation from all ensemble members. + +subroutine get_quad_vals(state_handle, ens_size, varid, four_lons, four_lats, & + lon_lat_vert, which_vert, quad_vals, istatus) + +type(ensemble_type), intent(in) :: state_handle +integer, intent(in) :: ens_size +integer, intent(in) :: varid +integer, intent(in) :: four_lons(4), four_lats(4) +real(r8), intent(in) :: lon_lat_vert(3) +integer, intent(in) :: which_vert +real(r8), intent(out) :: quad_vals(4, ens_size) +integer, intent(out) :: istatus(ens_size) + +integer :: lev1, lev2, stat +real(r8) :: vert_val, vert_fract +character(len=512) :: error_string_1 + +character(len=*), parameter :: routine = 'get_quad_vals' + +quad_vals(:,:) = MISSING_R8 +istatus(:) = GENERAL_ERROR_CODE + +vert_val = lon_lat_vert(3) + +if ( which_vert == VERTISHEIGHT ) then + call find_enclosing_indices(nlev, levs(:), vert_val, lev1, lev2, & + vert_fract, stat, log_scale = .false.) 
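+   ! find_enclosing_indices returns the indices of the two levels bracketing
+   ! vert_val and the fractional distance between them, e.g. (illustrative
+   ! numbers, not from the code): levs = (/100., 110., 120., .../) and
+   ! vert_val = 115. give lev1 = 2, lev2 = 3, vert_fract = 0.5.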
+
+   if (stat /= 0) then
+      istatus = INVALID_ALTITUDE_VAL_ERROR_CODE
+      return
+   end if
+else
+   istatus(:) = INVALID_VERT_COORD_ERROR_CODE
+   write(error_string_1, *) 'unsupported vertical type: ', which_vert
+   call error_handler(E_ERR, routine, error_string_1, source)
+endif
+
+! we have all the indices and fractions we could ever want.
+! now get the data values at the bottom levels, the top levels,
+! and do vertical interpolation to get the 4 values in the columns.
+! the final horizontal interpolation will happen later.
+
+if (varid > 0) then
+
+   call get_four_state_values(state_handle, ens_size, four_lons, four_lats, &
+        lev1, lev2, vert_fract, varid, quad_vals, istatus)
+else
+   write(error_string_1, *) 'unsupported variable: ', varid
+   call error_handler(E_ERR, routine, error_string_1, source)
+endif
+
+if (any(istatus /= 0)) return
+
+! when you get here, istatus() was set either by passing it to a
+! subroutine, or by setting it explicitly here.
+end subroutine get_quad_vals
+
+!-----------------------------------------------------------------------
+! Interpolate in the vertical between 2 arrays of items.
+!
+! vert_fract: 0 is 100% of the first level and
+!             1 is 100% of the second level
+
+subroutine vert_interp(nitems, levs1, levs2, vert_fract, out_vals)
+
+integer,  intent(in)  :: nitems
+real(r8), intent(in)  :: levs1(nitems)
+real(r8), intent(in)  :: levs2(nitems)
+real(r8), intent(in)  :: vert_fract
+real(r8), intent(out) :: out_vals(nitems)
+
+out_vals(:) = (levs1(:) * (1.0_r8 - vert_fract)) + &
+              (levs2(:) *           vert_fract )
+
+end subroutine vert_interp
+
+!-----------------------------------------------------------------------
+! Extract the state values at the corners of the 2 quads used for interpolation.
+
+subroutine get_four_state_values(state_handle, ens_size, four_lons, four_lats, &
+           lev1, lev2, vert_fract, varid, quad_vals, istatus)
+
+type(ensemble_type), intent(in)  :: state_handle
+integer,             intent(in)  :: ens_size
+integer,             intent(in)  :: four_lons(4), four_lats(4)
+integer,             intent(in)  :: lev1, lev2
+real(r8),            intent(in)  :: vert_fract
+integer,             intent(in)  :: varid
+real(r8),            intent(out) :: quad_vals(4, ens_size) !< array of interpolated values
+integer,             intent(out) :: istatus(ens_size)
+
+integer     :: icorner
+integer(i8) :: state_indx
+real(r8)    :: vals1(ens_size), vals2(ens_size)
+real(r8)    :: qvals(ens_size)
+
+character(len=*), parameter :: routine = 'get_four_state_values:'
+
+do icorner = 1, 4
+
+   ! Most rapidly varying dim must be first
+   state_indx = get_dart_vector_index(lev1, four_lats(icorner), &
+                four_lons(icorner), dom_id, varid)
+
+   if (state_indx < 0) then
+      write(error_string_1,'(A)') 'Could not find dart state index from '
+      write(error_string_2,'(A,3I8)') 'lon, lat, and lev1 index :', &
+           four_lons(icorner), four_lats(icorner), lev1
+      call error_handler(E_ERR, routine, error_string_1, source, &
+           text2=error_string_2)
+      return
+   endif
+
+   vals1(:) = get_state(state_indx, state_handle) ! all the ensemble members for level (i)
+
+   state_indx = get_dart_vector_index(lev2, four_lats(icorner), &
+                four_lons(icorner), dom_id, varid)
+
+   if (state_indx < 0) then
+      write(error_string_1,'(A)') 'Could not find dart state index from '
+      write(error_string_2,'(A,3I8)') 'lon, lat, and lev2 index :', &
+           four_lons(icorner), four_lats(icorner), lev2
+      call error_handler(E_ERR, routine, error_string_1, source, &
+           text2=error_string_2)
+      return
+   endif
+
+   vals2(:) = get_state(state_indx, state_handle) ! all the ensemble members for level (i)
+
+   ! Using quad_vals directly here would create a temporary array and give a
+   ! warning, so interpolate into qvals first.
+   call vert_interp(ens_size, vals1, vals2, vert_fract, qvals)
+   quad_vals(icorner, :) = qvals
+enddo
+
+istatus = 0
+
+end subroutine get_four_state_values
+
+!-----------------------------------------------------------------------
+! End of model_mod
+!-----------------------------------------------------------------------
+end module model_mod
+
diff --git a/models/aether_lat-lon/model_mod.nml b/models/aether_lat-lon/model_mod.nml
new file mode 100644
index 0000000000..f68b1b0244
--- /dev/null
+++ b/models/aether_lat-lon/model_mod.nml
@@ -0,0 +1,8 @@
+&model_nml
+  template_file = 'if other than filter_input_0001.nc'
+  variables = 'Temperature', 'QTY_TEMPERATURE',    '1000.0', 'NA', 'UPDATE',
+              'Opos',        'QTY_DENSITY_ION_OP', 'NA',     'NA', 'UPDATE'
+  time_step_days = 0
+  time_step_seconds = 3600
+  /
+
diff --git a/models/aether_lat-lon/readme.rst b/models/aether_lat-lon/readme.rst
new file mode 100644
index 0000000000..4b146b9073
--- /dev/null
+++ b/models/aether_lat-lon/readme.rst
@@ -0,0 +1,230 @@
+Aether Rectangular Grid Interface
+=================================
+
+Overview
+--------
+
+The Aether ("eether") space weather model can be implemented
+on a logically rectangular "lat-lon" grid or on a cubed-sphere grid.
+This is the interface to the lat-lon version.
+The model code is available on
+`GitHub `_ .
+
+Aether writes history and restart files.
+The restart fields are divided among 2 types of files: neutrals and ions.
+They are further divided into "blocks", which are subdomains of the globe.
+The numbering of blocks starts in the southwest corner of the lat-lon grid
+and goes east first, then to the west end of the next row north,
+and ends in the northeast corner.
+Each block has a halo around it filled with field values from neighboring blocks.
+All of these need to be combined to make a single state vector for filter.
+There's a unique set of these files for each member.
+The restart file names reflect this information ::
+
+    {neutrals,ions}_mMMMM_gBBBB.nc
+    MMMM = ensemble member (0-based)
+    BBBB = block number (0-based)
+
+The restart files do not have grid information in them.
+Grid information must be read from ::
+
+    grid_gBBBB.nc
+
+Programs ``aether_to_dart`` and ``dart_to_aether`` read the same namelist,
+``transform_state_nml``.
+The fields chosen to be part of the model state are specified in 'variables'.
+``Aether_to_dart`` will read the specified fields from all the restarts
+for a member, plus the grid files, and repackage them into an ensemble state vector file
+(filter_input.nc).  Filter_input.nc has a single domain and no halos.
+The field names will be transformed into CF-compliant names in filter_input.nc.
+
+``Filter`` will read a list of variables from ``model_nml`` (not ``transform_state_nml``),
+then read the ensemble of filter_input.nc files, assimilate,
+and write an ensemble of filter_output.nc files.
+
+``Dart_to_aether`` will convert the fields' names to the CF-compliant filter names,
+find those names in filter_output.nc, extract the updated field data,
+and overwrite those fields in the appropriate Aether restart files.
+
+Namelists
+---------
+
+- The namelists are read from the file ``input.nml``.
+- Namelists start with an ampersand '&' and terminate with a slash '/'.
+- Character strings that contain a '/' must be enclosed in quotes
+  to prevent them from prematurely terminating the namelist.
+
+transform_state_nml
+...................
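+
+A minimal example (the directory path and block counts are illustrative;
+the parameters are described below)::
+
+    &transform_state_nml
+       aether_restart_dirname = '/path/to/aether/run'
+       variables   = 'Temperature', 'neutrals',
+                     'O+',          'ions',
+       nblocks_lon = 2
+       nblocks_lat = 2
+       nblocks_lev = 1
+       debug       = 0
+       /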
+ + aether_restart_dirname + The directory where the Aether restart files reside, + and will be transformed (the "run" directory). + + nblocks_lon, nblocks_lat, nblocks_lev + Number of Aether domain "blocks" in the longitudinal, latitudinal, + and vertical directions. Vertical is always 1 (2024-2). + The total number of blocks (nblocks_lon x nblocks_lat x nblocks_lev) + is defined by the number of processors used by Aether. + + variables + The Aether fields to be transformed into a model state are specified + in the 'variables' namelist variable in transform_state_nml. + The following information must be provided for each field + + 1) Aether field name + 2) which file contains the field ("neutrals" or "ions") + + Aether field names are not CF-compliant and are translated + to CF-compliant forms by aether_to_dart. + + In ``transform_state_nml`` there is no association of DART "quantities" + (QTY\_\*) with fields. + A subset of the transformed variables to be included in the model state + is specified in :ref:`model_nml:variables`, using the CF-compliant names. + That is where the associations with QTYs are made. + See the :ref:`QTY` section, below. + + The neutrals restart files contain the following fields. + The most important fields are **noted in bold text** + + | **Temperature**, **velocity_east**, **velocity_north**, + | velocity_up, N, O2, N2, NO, He, N_2D, N_2P, H, O_1D, CO2 + + Similarly for the ions restart files + + | **O+**, **O+_2D**, **O+_2P**, **O2+**, **N2+**, NO+, N+, He+, + | Temperature_bulk_ion, Temperature_electron + + In addition, there are 7 (independent) fields associated with *each* ion density + :: + + - Temperature\ \(O+\) + - velocity_parallel_east\ \(O+\) + - velocity_parallel_north\ \(O+\) + - velocity_parallel_up\ \(O+\) + - velocity_perp_east\ \(O+\) + - velocity_perp_north\ \(O+\) + - velocity_perp_up\ \(O+\) + +.. WARNING:: + As of this writing (2024-1-30) the electron density and solar radiation + parameter "f10.7" are not available through the restart files, + even though electron temperature is. + They may be available in the history files. + + +.. _model_nml: + +model_nml +......... + +template_file + = 'filter_input_0001.nc' is the default + +variables + Each field to be included in the state vector requires 5 descriptors: + + 1) field name (transformed to CF-compliant) + #) DART "quantity" to be associated with the field + #) min value + #) max value + #) update the field in the restart file? {UPDATE,NO_COPY_BACK} + + The field names listed in 'variables' must be the *transformed* names, + as found in the filter_input.nc files (see :ref:`Usage`). + In general the transformation does the following + + - Remove all '\\', '(', and ')' + - Replace blanks with underscores + - Replace '+' with 'pos' and '-' with 'neg' + - For ions, move the ion name from the end to the beginning. + + For example 'velocity_parallel_east\\ \\(O+_2D\\)' becomes 'Opos_2D_velocity_parallel_east'. + +.. _QTY: + + The DART QTY associated with each field is an open question, + depending on the forward operators required for the available observations + and on the scientific objective. The default choices are not necessarily correct + for your assimilation. 
For the fields identified as most important + in early Aether assimilation experiments, these are the defaults: + +============== ==================== +variables quantity (kind) +============== ==================== +Temperature QTY_TEMPERATURE +velocity_east QTY_U_WIND_COMPONENT +velocity_north QTY_V_WIND_COMPONENT +Opos QTY_DENSITY_ION_OP +O2pos QTY_DENSITY_ION_O2P +N2pos QTY_DENSITY_ION_N2P +O2pos_2D QTY_DENSITY_ION_O2DP +O2pos_2P QTY_DENSITY_ION_O2PP +============== ==================== + + Some fields could have one of several QTYs associated with them. + For example, the field 'Opos_velocity_parallel_up' + could potentially have these existing QTYs associated with it:: + + - QTY_VELOCITY_W + - QTY_VELOCITY_W_ION + - QTY_VERTICAL_VELOCITY + + It's possible that several fields could have the same QTY. + A third possibility is that the experiment may require the creation of a new QTY. + The example above may require something like QTY_VEL_PARALLEL_VERT_OP. + +.. WARNING:: + The size of these parameters may be limited to 31 characters (``types_mod.f90``) + +time_step_days, time_step_seconds + = 0, 3600 The hindcast period between assimilations. + +.. _Usage: + +Usage +----- + +The workflow and scripting for fully cycling assimilation +(ensemble hindcast, then assimilation, repeat as needed) +has not been defined yet for Aether (2024-2), +but we expect that all of the DART executables will be in a directory +which is defined in the script. +So the script will be able to run the programs using a full pathname. +In addition, all of the Aether restart files will be in a "run" directory, +which has plenty of space for the data. +The DART executables will be run in this directory using their full pathnames. + +To run a more limited test (no assimilation), +which is just the transformation of files for a member (0) +use the following csh commands, or equivalents in your preferred languange. +These build the ``aether_to_dart`` and ``dart_to_aether`` executables +in $DART/models/aether_lat-lon/work directory. +Also in that directory, edit input.nml to set ``transform_state_nml:`` ``aether_restart_dirname`` +to be the full pathname of the directory where the Aether restart and grid files are. + +:: + +> set exec_dir = $DART/models/aether_lat-lon/work +> cd $exec_dir +> ./quick_build.sh +> cd {aether_restart_dirname} +> mkdir Orig +> cp *m0000* Orig/ +> cp ${exec_dir}/input.nml . +> ${exec_dir}/aether_to_dart 0 +> cp filter_input_0001.nc filter_output_0001.nc +> ${exec_dir}/dart_to_aether 0 + +| Compare the modified Aether restart files with those in Orig. +| The filter\_ files will contain the CF-compliant field names + which must be used in ``model_nml:variables``. + +.. NOTE:: + Some halo parts may have no data in them because Aether currently (2024-2) + does not use those regions. +.. WARNING:: + The restart files have dimensions ordered such that common viewing tools + (e.g. ncview) may display the pictures transposed from what is expected. 
+ diff --git a/models/aether_lat-lon/transform_state.nml b/models/aether_lat-lon/transform_state.nml new file mode 100644 index 0000000000..85275b0d88 --- /dev/null +++ b/models/aether_lat-lon/transform_state.nml @@ -0,0 +1,11 @@ +&transform_state_nml + aether_restart_dirname = + '/Users/raeder/DAI/Manhattan/models/aether_lat-lon/testdata4' + variables = + 'Temperature', 'neutrals', + 'O+', 'ions', + nblocks_lon = 2 + nblocks_lat = 2 + nblocks_lev = 1 + debug = 0 + / diff --git a/models/aether_lat-lon/transform_state_mod.f90 b/models/aether_lat-lon/transform_state_mod.f90 new file mode 100644 index 0000000000..382fc1599a --- /dev/null +++ b/models/aether_lat-lon/transform_state_mod.f90 @@ -0,0 +1,623 @@ +! DART software - Copyright UCAR. This open source software is provided +! by UCAR, "as is", without charge, subject to all terms of use at +! http://www.image.ucar.edu/DAReS/DART/DART_download +! + +module transform_state_mod + +!----------------------------------------------------------------------- +! +! Routines used by aether_to_dart and dart_to_aether +! +!----------------------------------------------------------------------- + +use types_mod, only : & + r4, r8, MISSING_R4, MISSING_R8, vtablenamelength, MISSING_I, RAD2DEG + +use time_manager_mod, only : & + time_type, set_calendar_type, set_time, get_time, set_date, & + print_date, print_time + + +use utilities_mod, only : & + open_file, close_file, file_exist, & + error_handler, E_ERR, E_MSG, E_WARN, & + nmlfileunit, do_output, do_nml_file, do_nml_term, & + find_namelist_in_file, check_namelist_read + +use netcdf_utilities_mod, only : & + nc_open_file_readonly, nc_open_file_readwrite, nc_create_file, & + nc_get_dimension_size, nc_get_variable, & + nc_close_file + +implicit none +private + +public :: static_init_blocks, & + state_time, & + block_file_name, open_block_file, aether_name_to_dart, & + nblocks_lon, nblocks_lat, nblocks_lev, & + lons, lats, levs, & + nlon, nlat, nlev, & + nx_per_block, ny_per_block, nz_per_block, nghost, & + variables, VT_ORIGININDX, VT_VARNAMEINDX, & + nvar, nvar_neutral, nvar_ion, & + aether_restart_dirname, & + purge_chars, debug + +character(len=256), parameter :: source = 'aether_lat-lon/transform_state_mod.f90' + +logical :: module_initialized = .false. + +!----------------------------------------------------------------------- +! namelist parameters with default values. +!----------------------------------------------------------------------- + +character(len=256) :: aether_restart_dirname = '.' +! An ensemble of file names is created using this root and $member in it, + +integer, parameter :: MAX_STATE_VARIABLES = 100 +integer, parameter :: NUM_STATE_TABLE_COLUMNS = 2 +character(len=vtablenamelength) :: variables(NUM_STATE_TABLE_COLUMNS,MAX_STATE_VARIABLES) = ' ' + +! number of blocks along each dim +integer :: nblocks_lon=MISSING_I, nblocks_lat=MISSING_I, nblocks_lev=MISSING_I +! These are not used in DA, and lon_start is used only for 1D modeling +! real(r8) :: lat_start =MISSING_I, lat_end =MISSING_I, lon_start=MISSING_I + +integer :: debug = 0 + +namelist /transform_state_nml/ aether_restart_dirname, variables, debug, & + nblocks_lon, nblocks_lat, nblocks_lev + +!----------------------------------------------------------------------- +! Dimensions + +! To be assigned get_grid_from_blocks (aether_to_dart, dart_to_aether). +integer :: nlev, nlat, nlon +real(r8), allocatable :: levs(:), lats(:), lons(:) + +! 
Aether block parameters (nblocks_{lon,lat,lev} are read from a namelist)
+integer :: nx_per_block, ny_per_block, nz_per_block
+
+integer, parameter :: nghost = 2 ! number of ghost cells on all edges
+
+!-----------------------------------------------------------------------
+! Codes for interpreting the NUM_STATE_TABLE_COLUMNS of the variables table
+! VT_ORIGININDX is used differently from the usual domains context.
+integer, parameter :: VT_VARNAMEINDX = 1 ! ... variable name
+integer, parameter :: VT_ORIGININDX  = 2 ! file of origin
+
+!-----------------------------------------------------------------------
+! Day 0 in Aether's calendar is (+/- a day) -4710/11/24 0 UTC
+! integer :: aether_ref_day = 2451545 ! cJULIAN2000 in Aether = day of date 2000/01/01.
+character(len=32) :: calendar = 'GREGORIAN'
+
+! But what we care about is the reference time for the times in the files, which is 1965-01-01 00:00
+integer :: aether_ref_date(5) = (/1965,1,1,0,0/) ! y,mo,d,h,m (secs assumed 0)
+type(time_type) :: aether_ref_time, state_time
+integer :: aether_ref_ndays, aether_ref_nsecs
+
+!-----------------------------------------------------------------------
+! to be assigned in the verify_variables subroutine
+integer :: nvar, nvar_neutral, nvar_ion
+
+!-----------------------------------------------------------------------
+character(len=512) :: error_string_1, error_string_2
+
+contains
+
+!-----------------------------------------------------------------------
+! Like static_init_model, but for aether_to_dart and dart_to_aether:
+! read the namelist,
+! parse the 'variables' table,
+! get the Aether grid information, and
+! convert the Aether time into a DART time.
+
+subroutine static_init_blocks(member)
+
+integer, intent(in) :: member
+
+character(len=128) :: aether_filename
+integer :: iunit, io
+
+character(len=*), parameter :: routine = 'static_init_blocks'
+
+if (module_initialized) return ! only need to do this once
+
+! This prevents subroutines called from here from calling static_init_mod.
+module_initialized = .true.
+
+!------------------
+! Read the namelist
+
+call find_namelist_in_file("input.nml", 'transform_state_nml', iunit)
+read(iunit, nml = transform_state_nml, iostat = io)
+! Record the namelist values used for the run
+if (do_nml_file()) write(nmlfileunit, nml=transform_state_nml)
+if (do_nml_term()) write( * , nml=transform_state_nml)
+call check_namelist_read(iunit, io, 'transform_state_nml') ! closes the file, too.
+
+
+! Error-check and convert the namelist input 'variables' to global variables.
+call verify_variables(variables)
+
+! Aether uses Julian time internally and/or a Julian calendar
+! (days from the start of the calendar), depending on the context.
+call set_calendar_type( calendar )
+
+!--------------------------------
+! 1) get grid dimensions
+! 2) allocate space for the grids
+! 3) read them from the block restart files; the spacing may be irregular.
+! Opens and closes the grid block file, but not the filter netcdf file.
+call get_grid_from_blocks()
+
+if( debug > 0 ) then
+    write(error_string_1,'(A,3I5)') 'grid dims are ', nlon, nlat, nlev
+    call error_handler(E_MSG, routine, error_string_1, source)
+endif
+
+! Convert the Aether reference date (not the calendar's day 0 date)
+! to the days and seconds of the calendar set in model_mod_nml.
+aether_ref_time = set_date(aether_ref_date(1), aether_ref_date(2), aether_ref_date(3), &
+                           aether_ref_date(4), aether_ref_date(5))
+call get_time(aether_ref_time, aether_ref_nsecs, aether_ref_ndays)
+
+! 
Get the model time from a restart file.
+aether_filename = block_file_name(variables(VT_ORIGININDX,1), member, 0)
+state_time = read_aether_time(trim(aether_restart_dirname)//'/'//trim(aether_filename))
+
+if ( debug > 0 ) then
+    write(error_string_1,'("grid: nlon, nlat, nlev =",3(1x,i5))') nlon, nlat, nlev
+    call error_handler(E_MSG, routine, error_string_1, source)
+endif
+
+end subroutine static_init_blocks
+
+!-----------------------------------------------------------------------
+! Parse the table of variables' characteristics.
+
+subroutine verify_variables(variables)
+
+character(len=*), intent(in) :: variables(:,:)
+
+character(len=vtablenamelength) :: varname, rootstr
+integer :: i
+
+character(len=*), parameter :: routine = 'verify_variables'
+
+! Initialize the variable counts before tallying the table entries.
+nvar         = 0
+nvar_neutral = 0
+nvar_ion     = 0
+MY_LOOP : do i = 1, size(variables,2)
+
+   varname = variables(VT_VARNAMEINDX,i)
+   rootstr = variables(VT_ORIGININDX,i)
+
+   if ( varname == ' ' .and. rootstr == ' ' ) exit MY_LOOP ! Found end of list.
+
+   if ( varname == ' ' .or. rootstr == ' ' ) then
+      error_string_1 = 'variable list not fully specified'
+      call error_handler(E_ERR, routine, error_string_1, source)
+   endif
+
+   if (i > 1) then
+      if (variables(VT_ORIGININDX,i-1) == 'ions' .and. rootstr /= 'ions' ) then
+         write(error_string_1,'(A,I1,A)') ' File type (',i, &
+            ') in transform_state_nml:variables is out of order or invalid.'
+         call error_handler(E_ERR, routine, error_string_1, source)
+      endif
+   endif
+
+   ! The internal DART routines check if the variable name is valid.
+
+   ! All good to here - fill the output variables
+
+   nvar = nvar + 1
+   if (variables(VT_ORIGININDX,i) == 'neutrals') nvar_neutral = nvar_neutral + 1
+   if (variables(VT_ORIGININDX,i) == 'ions')     nvar_ion     = nvar_ion + 1
+
+
+enddo MY_LOOP
+
+if (nvar == MAX_STATE_VARIABLES) then
+   error_string_1 = 'WARNING: you may need to increase "MAX_STATE_VARIABLES"'
+   write(error_string_2,'(''you have specified at least '',i4,'' perhaps more.'')') nvar
+   call error_handler(E_MSG, routine, error_string_1, source, text2=error_string_2)
+endif
+
+end subroutine verify_variables
+
+!-----------------------------------------------------------------------
+! ? Will this need to open the grid_{below,corners,down,left} filetypes?
+!   This code can handle it; a longer filetype passed in, and no member.
+! ? Aether output files?
+
+function block_file_name(filetype, memnum, blocknum)
+
+character(len=*), intent(in) :: filetype ! one of {grid,ions,neutrals}
+integer,          intent(in) :: blocknum
+integer,          intent(in) :: memnum
+character(len=128) :: block_file_name
+
+character(len=*), parameter :: routine = 'block_file_name'
+
+block_file_name = trim(filetype)
+if (memnum   >= 0) write(block_file_name, '(A,A2,I0.4)') trim(block_file_name), '_m', memnum
+if (blocknum >= 0) write(block_file_name, '(A,A2,I0.4)') trim(block_file_name), '_g', blocknum
+block_file_name = trim(block_file_name)//'.nc'
+if ( debug > 0 ) then
+   write(error_string_1,'("filename, memnum, blocknum = ",A,2(1x,i5))') &
+        trim(block_file_name), memnum, blocknum
+   call error_handler(E_MSG, routine, error_string_1, source)
+endif
+
+end function block_file_name
+
+!-----------------------------------------------------------------------
+! Read block grid values (2D arrays) from a grid NetCDF file.
+! Allocate and fill the full-domain 1-D dimension arrays (lon, lat, levs)
+
+! This routine needs:
+!
+! 1. A base dirname for the restart files (aether_restart_dirname).
+!    The filenames have the format 'dirname/{neutrals,ions}_mMMMM_gBBBB.nc'
+! 
where BBBB is the block number, MMMM is the member number, +! and they have leading 0s. Blocks start in the +! southwest corner of the lat/lon grid and go east first, +! then to the west end of the next row north and end in the northeast corner. +! +! In the process, the routine will find: +! +! 1. The number of blocks in Lon and Lat (nblocks_lon, nblocks_lat) +! +! 2. The number of lons and lats in a single grid block (nx_per_block, ny_per_block, nz_per_block) +! +! 3. The overall grid size, {nlon,nlat,nalt} when you've read in all the blocks. +! +! 4. The number of neutral species (and probably a mapping between +! the species number and the variable name) (nvar_neutral) +! +! 5. The number of ion species (ditto - numbers <-> names) (nvar_ion) +! +! In addition to reading in the state data, it fills Longitude, Latitude, and Altitude arrays. +! This grid is orthogonal and rectangular but can have irregular spacing along +! any of the three dimensions. + +subroutine get_grid_from_blocks() + +integer :: nb, offset, ncid, nboff +integer :: starts(3), ends(3), xcount, ycount, zcount +character(len=256) :: filename +real(r4), allocatable :: temp(:,:,:) + +character(len=*), parameter :: routine = 'get_grid_from_blocks' + +! Read the x,y,z from a NetCDF block file(s), +! in order to calculate the n[xyz]_per_block dimensions. +! grid_g0000.nc looks like a worthy candidate, but a restart could be used. +write (filename,'(2A)') trim(aether_restart_dirname),'/grid_g0000.nc' +ncid = nc_open_file_readonly(filename, routine) + +! The grid (and restart) file variables have halos, so strip them off +! to get the number of actual data values in each dimension of the block. +nx_per_block = nc_get_dimension_size(ncid, 'x', routine) - (2 * nghost) +ny_per_block = nc_get_dimension_size(ncid, 'y', routine) - (2 * nghost) +nz_per_block = nc_get_dimension_size(ncid, 'z', routine) + +nlon = nblocks_lon * nx_per_block +nlat = nblocks_lat * ny_per_block +nlev = nblocks_lev * nz_per_block + +write(error_string_1,'(3(A,I5))') 'nlon = ', nlon, ', nlat = ', nlat, ', nlev = ', nlev +call error_handler(E_MSG, routine, error_string_1, source) + +allocate( lons( nlon )) +allocate( lats( nlat )) +allocate( levs( nlev )) + +if (debug > 4) then + write(error_string_1,'(2A)') 'Successfully read Aether grid file:', trim(filename) + call error_handler(E_MSG, routine, error_string_1, source) + write(error_string_1,'(A,I5)') ' nx_per_block:', nx_per_block, & + ' ny_per_block:', ny_per_block, ' nz_per_block:', nz_per_block + call error_handler(E_MSG, routine, error_string_1, source) +endif + +! A temp array large enough to hold any of the 3D +! Lon, Lat or Alt arrays from a block plus ghost cells. +! The restart files have C-indexing (fastest changing dim is the last). +allocate(temp( 1:nz_per_block, & + 1-nghost:ny_per_block+nghost, & + 1-nghost:nx_per_block+nghost)) +temp = MISSING_R4 + +starts(1) = 1 - nghost +starts(2) = 1 - nghost +starts(3) = 1 +ends(1) = nx_per_block + nghost +ends(2) = ny_per_block + nghost +ends(3) = nz_per_block +xcount = nx_per_block + (2 * nghost) +ycount = ny_per_block + (2 * nghost) +zcount = nz_per_block +if ( debug > 0 ) then + write(error_string_1,'(2(A,3i5),A,3(1X,i5))') & + 'starts = ',starts, 'ends = ',ends, '[xyz]counts = ',xcount,ycount,zcount + call error_handler(E_MSG, routine, error_string_1, source) +endif + +! go across the south-most block row picking up all longitudes +do nb = 1, nblocks_lon + + ! filename is trimmed by passage to open_block_file + "len=*" there. 
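+   ! As a concrete example (assuming nblocks_lon = 2, as in the sample
+   ! transform_state_nml): block_file_name('grid', -1, nb-1) yields
+   ! 'grid_g0000.nc' and 'grid_g0001.nc' for nb = 1, 2, the south-most
+   ! row of blocks, whose longitudes fill lons(1:nlon).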
+   filename = trim(aether_restart_dirname)//'/'//block_file_name('grid', -1, nb-1)
+   ncid = open_block_file(filename, 'read')
+
+   ! Read the 3D array and extract the longitudes of the non-halo data of this block.
+   ! The restart files have C-indexing (the fastest changing dim is the last),
+   ! so invert the dimension bounds.
+   call nc_get_variable(ncid, 'Longitude', &
+        temp(starts(3):ends(3), starts(2):ends(2), starts(1):ends(1)), &
+        context=routine, &
+        nc_count=(/ zcount,ycount,xcount /))
+
+   offset = (nx_per_block * (nb - 1))
+   lons(offset+1:offset+nx_per_block) = temp(1,1,1:nx_per_block)
+
+   call nc_close_file(ncid)
+enddo
+
+! go up the west-most column of blocks picking up all latitudes
+do nb = 1, nblocks_lat
+
+   ! Aether's block name counter starts with 0, but the lat values can come from
+   ! any lon=const column of blocks.
+   nboff = ((nb - 1) * nblocks_lon)
+   filename = trim(aether_restart_dirname)//'/'//block_file_name('grid', -1, nboff)
+   ncid = open_block_file(filename, 'read')
+
+   call nc_get_variable(ncid, 'Latitude', &
+        temp(starts(3):ends(3), starts(2):ends(2), starts(1):ends(1)), &
+        context=routine, nc_count=(/zcount,ycount,xcount/))
+
+
+   offset = (ny_per_block * (nb - 1))
+   lats(offset+1:offset+ny_per_block) = temp(1,1:ny_per_block,1)
+
+   call nc_close_file(ncid)
+enddo
+
+
+! this code assumes all columns share the same altitude array,
+! so we can read it from the first block.
+! if this is not the case, this code has to change.
+
+filename = trim(aether_restart_dirname)//'/'//block_file_name('grid', -1, 0)
+ncid = open_block_file(filename, 'read')
+
+temp = MISSING_R4
+call nc_get_variable(ncid, 'Altitude', &
+     temp(starts(3):ends(3), starts(2):ends(2), starts(1):ends(1)), &
+     context=routine, nc_count=(/zcount,ycount,xcount/))
+
+levs(1:nz_per_block) = temp(1:nz_per_block,1,1)
+
+call nc_close_file(ncid)
+
+deallocate(temp)
+
+! convert from radians into degrees
+lons = lons * RAD2DEG
+lats = lats * RAD2DEG
+
+if (debug > 4) then
+   print *, routine, 'All lons ', lons
+   print *, routine, 'All lats ', lats
+   print *, routine, 'All levs ', levs
+endif
+
+if ( debug > 1 ) then ! Check dimension limits
+   write(error_string_1,'(A,2F15.4)') 'LON range ', minval(lons), maxval(lons)
+   call error_handler(E_MSG, routine, error_string_1, source)
+   write(error_string_1,'(A,2F15.4)') 'LAT range ', minval(lats), maxval(lats)
+   call error_handler(E_MSG, routine, error_string_1, source)
+   write(error_string_1,'(A,2F15.4)') 'ALT range ', minval(levs), maxval(levs)
+   call error_handler(E_MSG, routine, error_string_1, source)
+endif
+
+end subroutine get_grid_from_blocks
+
+!-----------------------------------------------------------------------
+! Read the Aether restart file time and convert it to a DART time.
+
+function read_aether_time(filename)
+type(time_type)              :: read_aether_time
+character(len=*), intent(in) :: filename
+
+integer :: ncid
+integer :: tsimulation ! the time read from a restart file; seconds from aether_ref_date.
+integer :: ndays, nsecs
+
+character(len=*), parameter :: routine = 'read_aether_time'
+
+tsimulation = MISSING_I
+
+ncid = open_block_file(filename, 'read')
+call nc_get_variable(ncid, 'time', tsimulation, context=routine)
+call nc_close_file(ncid, routine, filename)
+
+! Calculate the DART time of the file time.
+ndays = tsimulation / 86400
+nsecs = tsimulation - (ndays * 86400)
+! The ref day is not finished, but we don't need to subtract 1 because
+! that was accounted for in the integer calculation of ndays. 
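+! As a worked example (hypothetical value, not from a real file):
+! tsimulation = 90000 s is 25 hours past the reference date, so
+! ndays = 90000/86400 = 1 and nsecs = 90000 - 86400 = 3600, and the
+! lines below yield a DART time of 01:00 UT on 1965-01-02.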
+ndays = aether_ref_ndays + ndays
+read_aether_time = set_time(nsecs, ndays)
+
+if (do_output()) &
+    call print_time(read_aether_time, routine//': time in restart file '//filename)
+if (do_output()) &
+    call print_date(read_aether_time, routine//': date in restart file '//filename)
+
+if (debug > 8) then
+   write(error_string_1,'(A,I0)')'tsimulation ', tsimulation
+   call error_handler(E_MSG, routine, error_string_1, source)
+   write(error_string_1,'(A,I0)')'ndays       ', ndays
+   call error_handler(E_MSG, routine, error_string_1, source)
+   write(error_string_1,'(A,I0)')'nsecs       ', nsecs
+   call error_handler(E_MSG, routine, error_string_1, source)
+
+   call print_date(aether_ref_time, routine//':model base date')
+   call print_time(aether_ref_time, routine//':model base time')
+endif
+
+end function read_aether_time
+
+!-----------------------------------------------------------------------
+! Convert Aether's non-CF-compliant names into CF-compliant names for filter.
+! For the ions, this moves the name of the ion from the end of the variable name
+! to the beginning.
+
+function aether_name_to_dart(varname)
+
+character(len=vtablenamelength), intent(in) :: varname
+
+character(len=vtablenamelength) :: aether_name_to_dart, aether
+character(len=64)               :: parts(8), var_root
+integer                         :: char_num, first, i_parts, aether_len, end_str
+
+aether     = trim(varname)
+aether_len = len_trim(varname)
+parts = ''
+
+! Look for the last ' '. The characters after that are the species.
+! If there's no ' ', the whole string is the species.
+char_num = 0
+char_num = scan(trim(aether),' ', back=.true.)
+var_root = aether(char_num+1:aether_len)
+! purge_chars removes unwanted [()\]
+parts(1) = purge_chars( trim(var_root),')(\', plus_minus=.true.)
+end_str  = char_num
+
+! Transform the remaining pieces of varname into DART versions.
+char_num = MISSING_I
+first = 1
+i_parts = 2
+P_LOOP: do
+   ! This returns the position of the first blank *within the substring* passed in.
+   char_num = scan(aether(first:end_str),' ', back=.false.)
+   if (char_num > 0 .and. first < aether_len) then
+      parts(i_parts) = purge_chars(aether(first:first+char_num-1), '.)(\', plus_minus=.true.)
+
+      first = first + char_num
+      i_parts = i_parts + 1
+   else
+      exit P_LOOP
+   endif
+enddo P_LOOP
+
+! Construct the DART field name from the parts
+aether_name_to_dart = trim(parts(1))
+i_parts = 2
+Build : do
+   if (trim(parts(i_parts)) /= '') then
+      aether_name_to_dart = trim(aether_name_to_dart)//'_'//trim(parts(i_parts))
+      i_parts = i_parts + 1
+   else
+      exit Build
+   endif
+enddo Build
+
+end function aether_name_to_dart
+
+!-----------------------------------------------------------------------
+! Replace undesirable characters with better ones.
+
+function purge_chars(ugly_string, chars, plus_minus)
+
+character (len=*), intent(in) :: ugly_string, chars
+logical,           intent(in) :: plus_minus
+character (len=64)            :: purge_chars
+
+character (len=256) :: temp_str
+
+integer :: char_num, end_str, pm_num
+
+! Trim is not needed here
+temp_str = ugly_string
+end_str  = len_trim(temp_str)
+char_num = MISSING_I
+Squeeze : do
+   ! Returns 0 if chars are not found
+   char_num = scan(temp_str, chars)
+   ! Need to change it to a char that won't be found by scan in the next iteration,
+   ! and can be easily removed.
+   if (char_num > 0) then
+      ! Squeeze out the character
+      temp_str(char_num:end_str-1) = temp_str(char_num+1:end_str)
+      temp_str(end_str:end_str) = ''
+!      temp_str(char_num:char_num) = ' '
+   else
+      exit Squeeze
+   endif
+enddo Squeeze
+
+! Replace + and - with pos and neg. 
Assume there's only 1.
+temp_str = trim(adjustl(temp_str))
+end_str  = len_trim(temp_str)
+pm_num   = scan(trim(temp_str),'+-', back=.false.)
+if (pm_num == 0 .or. .not. plus_minus) then
+   purge_chars = trim(temp_str)
+else
+   if (temp_str(pm_num:pm_num) == '+') then
+      purge_chars = temp_str(1:pm_num-1)//'pos'
+   else if (temp_str(pm_num:pm_num) == '-') then
+      purge_chars = temp_str(1:pm_num-1)//'neg'
+   endif
+   if (pm_num + 1 <= end_str) &
+      purge_chars = trim(purge_chars)//temp_str(pm_num+1:end_str)
+endif
+
+end function purge_chars
+
+!-----------------------------------------------------------------------
+! Open an Aether restart block file (neutral, ion, ...?)
+
+function open_block_file(filename, rw)
+
+! filename is trimmed by this definition
+character(len=*), intent(in) :: filename
+character(len=*), intent(in) :: rw ! 'read' or 'readwrite'
+integer :: open_block_file
+
+character(len=*), parameter :: routine = 'open_block_file'
+
+if ( .not. file_exist(filename) ) then
+   write(error_string_1,'(4A)') 'cannot open file ', filename,' for ', rw
+   call error_handler(E_ERR, routine, error_string_1, source)
+endif
+
+if (debug > 0) then
+   write(error_string_1,'(4A)') 'Opening file ', trim(filename), ' for ', rw
+   call error_handler(E_MSG, routine, error_string_1, source)
+end if
+
+
+if (rw == 'read') then
+   open_block_file = nc_open_file_readonly(filename, routine)
+else if (rw == 'readwrite') then
+   open_block_file = nc_open_file_readwrite(filename, routine)
+else
+   error_string_1 = ': must be called with rw={read,readwrite}, not '//rw
+   call error_handler(E_ERR, routine, error_string_1, source)
+endif
+
+
+if (debug > 80) then
+   ! The file descriptor is an integer, so it needs an integer edit descriptor.
+   write(error_string_1,'(A,I0)') 'Returned file descriptor is ', open_block_file
+   call error_handler(E_MSG, routine, error_string_1, source)
+end if
+
+end function open_block_file
+
+end module transform_state_mod
diff --git a/models/aether_lat-lon/work/filter_inputs.txt b/models/aether_lat-lon/work/filter_inputs.txt
new file mode 100644
index 0000000000..c5891b8e29
--- /dev/null
+++ b/models/aether_lat-lon/work/filter_inputs.txt
@@ -0,0 +1,20 @@
+filter_input_0001.nc
+filter_input_0002.nc
+filter_input_0003.nc
+filter_input_0004.nc
+filter_input_0005.nc
+filter_input_0006.nc
+filter_input_0007.nc
+filter_input_0008.nc
+filter_input_0009.nc
+filter_input_0010.nc
+filter_input_0011.nc
+filter_input_0012.nc
+filter_input_0013.nc
+filter_input_0014.nc
+filter_input_0015.nc
+filter_input_0016.nc
+filter_input_0017.nc
+filter_input_0018.nc
+filter_input_0019.nc
+filter_input_0020.nc
diff --git a/models/aether_lat-lon/work/filter_outputs.txt b/models/aether_lat-lon/work/filter_outputs.txt
new file mode 100644
index 0000000000..1b23ee7982
--- /dev/null
+++ b/models/aether_lat-lon/work/filter_outputs.txt
@@ -0,0 +1,20 @@
+filter_output_0001.nc
+filter_output_0002.nc
+filter_output_0003.nc
+filter_output_0004.nc
+filter_output_0005.nc
+filter_output_0006.nc
+filter_output_0007.nc
+filter_output_0008.nc
+filter_output_0009.nc
+filter_output_0010.nc
+filter_output_0011.nc
+filter_output_0012.nc
+filter_output_0013.nc
+filter_output_0014.nc
+filter_output_0015.nc
+filter_output_0016.nc
+filter_output_0017.nc
+filter_output_0018.nc
+filter_output_0019.nc
+filter_output_0020.nc
diff --git a/models/aether_lat-lon/work/input.nml b/models/aether_lat-lon/work/input.nml
new file mode 100644
index 0000000000..c793d2cda5
--- /dev/null
+++ b/models/aether_lat-lon/work/input.nml
@@ -0,0 +1,327 @@
+&probit_transform_nml
+   /
+
+&algorithm_info_nml
+   qceff_table_filename = ''
+   /
+ 
+&quality_control_nml + input_qc_threshold = 3.0 + outlier_threshold = 3.0 +/ + +&state_vector_io_nml + / + +&perfect_model_obs_nml + read_input_state_from_file = .true. + single_file_in = .false. + input_state_files = 'pmo not ready for use' + init_time_days = -1 + init_time_seconds = -1 + + write_output_state_to_file = .false. + single_file_out = .false. + output_state_files = 'perfect_output_d01.nc' + output_interval = 1 + + obs_seq_in_file_name = "obs_seq.in" + obs_seq_out_file_name = "obs_seq.out" + first_obs_days = -1 + first_obs_seconds = -1 + last_obs_days = -1 + last_obs_seconds = -1 + + async = 0 + adv_ens_command = "../shell_scripts/advance_model.csh" + + trace_execution = .true. + output_timestamps = .false. + print_every_nth_obs = -1 + output_forward_op_errors = .true. + silence = .false. + / + +&filter_nml + single_file_in = .false., + input_state_files = '' + input_state_file_list = 'filter_inputs.txt' + init_time_days = 153131, + init_time_seconds = 0, + perturb_from_single_instance = .false., + perturbation_amplitude = 0.2, + + stages_to_write = 'preassim', 'analysis' + + single_file_out = .false., + output_state_files = '' + output_state_file_list = 'filter_outputs.txt' + output_interval = 1, + output_members = .true. + num_output_state_members = 20, + output_mean = .true. + output_sd = .true. + write_all_stages_at_end = .false. + compute_posterior = .true. + + ens_size = 20, + num_groups = 1, + distributed_state = .true. + + async = 4, + adv_ens_command = "./advance_model.csh", + tasks_per_model_advance = 1 + + obs_sequence_in_name = "obs_seq.out.1", + obs_sequence_out_name = "obs_seq.final", + num_output_obs_members = 20, + first_obs_days = -1, + first_obs_seconds = -1, + last_obs_days = -1, + last_obs_seconds = -1, + obs_window_days = -1, + obs_window_seconds = -1, + + inf_flavor = 5, 0, + inf_initial_from_restart = .false., .false., + inf_sd_initial_from_restart = .false., .false., + inf_deterministic = .true., .true., + inf_initial = 1.0, 1.0, + inf_lower_bound = 0.0, 1.0, + inf_upper_bound = 1000000.0, 1000000.0, + inf_damping = 1.0, 1.0, + inf_sd_initial = 0.6, 0.0, + inf_sd_lower_bound = 0.6, 0.0 + inf_sd_max_change = 1.05, 1.05, + + trace_execution = .false., + output_timestamps = .false., + output_forward_op_errors = .false., + write_obs_every_cycle = .false., + silence = .false., + + allow_missing_clm = .false. + / + + + +&ensemble_manager_nml + / + +&assim_tools_nml + cutoff = 0.2 + sort_obs_inc = .false. + spread_restoration = .false. + sampling_error_correction = .false. + adaptive_localization_threshold = -1 + output_localization_diagnostics = .false. + localization_diagnostics_file = 'localization_diagnostics' + print_every_nth_obs = 0 + / + +&transform_state_nml + aether_restart_dirname = + '/Users/raeder/DAI/Manhattan/models/aether_lat-lon/test4' + variables = + 'Temperature', 'neutrals', + 'O+', 'ions', + nblocks_lon = 2 + nblocks_lat = 2 + nblocks_lev = 1 + debug = 0 + / + +&model_nml + template_file = 'filter_input_0001.nc' + variables = 'Temperature', 'QTY_TEMPERATURE', '0.0', 'NA', 'UPDATE', + 'Opos', 'QTY_DENSITY_ION_OP', '0.0', 'NA', 'UPDATE' + time_step_days = 0 + time_step_seconds = 3600 + / + +&cov_cutoff_nml + select_localization = 1 + / + +®_factor_nml + select_regression = 1 + input_reg_file = "time_mean_reg" + save_reg_diagnostics = .false. + reg_diagnostics_file = "reg_diagnostics" + / + +&obs_sequence_nml + write_binary_obs_sequence = .false. 
+ / + +&obs_kind_nml + assimilate_these_obs_types = 'AIRS_TEMPERATURE' + evaluate_these_obs_types = '' + / + +&location_nml + horiz_dist_only = .true. + vert_normalization_pressure = 100000.0 + vert_normalization_height = 10000.0 + vert_normalization_level = 20.0 + approximate_distance = .false. + nlon = 71 + nlat = 36 + output_box_info = .false. + / + +&preprocess_nml + overwrite_output = .true. + input_obs_qty_mod_file = '../../../assimilation_code/modules/observations/DEFAULT_obs_kind_mod.F90' + output_obs_qty_mod_file = '../../../assimilation_code/modules/observations/obs_kind_mod.f90' + input_obs_def_mod_file = '../../../observations/forward_operators/DEFAULT_obs_def_mod.F90' + output_obs_def_mod_file = '../../../observations/forward_operators/obs_def_mod.f90' + obs_type_files = '../../../observations/forward_operators/obs_def_upper_atm_mod.f90', + '../../../observations/forward_operators/obs_def_reanalysis_bufr_mod.f90', + '../../../observations/forward_operators/obs_def_altimeter_mod.f90', + '../../../observations/forward_operators/obs_def_metar_mod.f90', + '../../../observations/forward_operators/obs_def_dew_point_mod.f90', + '../../../observations/forward_operators/obs_def_rel_humidity_mod.f90', + '../../../observations/forward_operators/obs_def_gps_mod.f90', + '../../../observations/forward_operators/obs_def_vortex_mod.f90', + '../../../observations/forward_operators/obs_def_gts_mod.f90' + quantity_files = '../../../assimilation_code/modules/observations/atmosphere_quantities_mod.f90', + '../../../assimilation_code/modules/observations/space_quantities_mod.f90', + '../../../assimilation_code//modules/observations/chemistry_quantities_mod.f90' + / + +&utilities_nml + TERMLEVEL = 1 + module_details = .true. + logfilename = 'dart_log.out' + nmlfilename = 'dart_log.nml' + write_nml = 'file' + print_debug = .true. + / + +&mpi_utilities_nml + / + + +# The times in the namelist for the obs_diag program are vectors +# that follow the following sequence: +# year month day hour minute second +# max_num_bins can be used to specify a fixed number of bins +# in which case last_bin_center should be safely in the future. +# +# Acceptable latitudes range from [-90, 90] +# Acceptable longitudes range from [ 0, Inf] + +&obs_diag_nml + obs_sequence_name = 'obs_seq.final' + obs_sequence_list = '' + first_bin_center = 2005, 9, 9, 0, 0, 0 + last_bin_center = 2005, 9, 10, 0, 0, 0 + bin_separation = 0, 0, 0, 1, 0, 0 + bin_width = 0, 0, 0, 1, 0, 0 + time_to_skip = 0, 0, 0, 1, 0, 0 + max_num_bins = 1000 + trusted_obs = 'null' + Nregions = 4 + hlevel = 0, 100000, 200000, 300000, 400000, 500000, 600000, 700000, 800000, 900000, 1000000 + lonlim1 = 0.0, 0.0, 0.0, 235.0 + lonlim2 = 360.0, 360.0, 360.0, 295.0 + latlim1 = 20.0, -80.0, -20.0, 25.0 + latlim2 = 80.0, -20.0, 20.0, 55.0 + reg_names = 'Northern Hemisphere', 'Southern Hemisphere', 'Tropics', 'North America' + print_mismatched_locs = .false. + create_rank_histogram = .true. + outliers_in_histogram = .true. + use_zero_error_obs = .false. + verbose = .true. + / + +# obs_seq_to_netcdf also requires the schedule_nml. +# In this context, schedule_nml defines how many netcdf files get created. +# Each 'bin' results in an obs_epoch_xxxx.nc file. +# default is to put everything into one 'bin'. + +&obs_seq_to_netcdf_nml + obs_sequence_name = 'obs_seq.final' + obs_sequence_list = '' + append_to_netcdf = .false. + lonlim1 = 0.0 + lonlim2 = 360.0 + latlim1 = -90.0 + latlim2 = 90.0 + verbose = .false. 
+ / + +&schedule_nml + calendar = 'Gregorian' + first_bin_start = 1601, 1, 1, 0, 0, 0 + first_bin_end = 2999, 1, 1, 0, 0, 0 + last_bin_end = 2999, 1, 1, 0, 0, 0 + bin_interval_days = 1000000 + bin_interval_seconds = 0 + max_num_bins = 1000 + print_table = .true. + / + +&obs_sequence_tool_nml + num_input_files = 1 + filename_seq = 'obs_seq.out' + filename_out = 'obs_seq.processed' + first_obs_days = -1 + first_obs_seconds = -1 + last_obs_days = -1 + last_obs_seconds = -1 + obs_types = '' + keep_types = .false. + print_only = .false. + min_lat = -90.0 + max_lat = 90.0 + min_lon = 0.0 + max_lon = 360.0 + / + + test 6 produces an exhaustive list of metadata for EVERY element in the DART state vector. + num_ens must = 1 + x_ind is for test3. The default (-1) will fail. + interp_test_dX are the model grid resolutions, + or numbers to use as such in the testing. + _d[xyz] is for cartesian grids, + _d{lon,lat,vert} is for spherical grids + interp_test_d[xyz] take precedence over d{lon,lat,vert} + all 3 must be specified. + aether (54 levels) dz ranges from ~1500 in the low levels to ~15,000 at the top. + interp_test_{lon,lat,vert}range; model domain limits (or a subdomain?) + Aether longitudes; in the filter_input_#.nc some are not whole numbers.; 75.00001 + Doc error: web page says run_tests uses entries from test1thru, + but that has test 0, which is not an option in model_mod_check. + tests_to_run is not dimensioned '(0:'. + +&model_mod_check_nml + num_ens = 1 + single_file = .FALSE. + input_state_files = 'filter_input_0001.nc' + output_state_files = 'filter_output_0001.nc' + quantity_of_interest = 'QTY_DENSITY_ION_OP' + all_metadata_file = 'test6_metadata.txt' + x_ind = 1234 + loc_of_interest = 15.0, -2.5, 100000. + interp_test_dlon = 10.0 + interp_test_dlat = 5.0 + interp_test_dvert = 1500.0 + interp_test_lonrange = 0, 360 + interp_test_latrange = -87.5, 87.5 + interp_test_vertrange = 96952.5625, 436360.25 + interp_test_dx = -888888.0 + interp_test_dy = -888888.0 + interp_test_dz = -888888.0 + interp_test_xrange = -888888.0, -888888.0 + interp_test_yrange = -888888.0, -888888.0 + interp_test_zrange = -888888.0, -888888.0 + interp_test_vertcoord = 'VERTISHEIGHT' + test1thru = -1 + run_tests = 1,2,3,4,5,7 + verbose = .FALSE. + / + +&quad_interpolate_nml +/ diff --git a/models/aether_lat-lon/work/obs_seq.out.1 b/models/aether_lat-lon/work/obs_seq.out.1 new file mode 100644 index 0000000000..1cd14ccb15 --- /dev/null +++ b/models/aether_lat-lon/work/obs_seq.out.1 @@ -0,0 +1,31 @@ + obs_sequence +obs_kind_definitions + 1 + 33 AIRS_TEMPERATURE + num_copies: 1 num_qc: 1 + num_obs: 2 max_num_obs: 2 +observation +Data QC + first: 1 last: 2 + OBS 1 + 271.330627441406 + 0.000000000000000E+000 + -1 2 -1 +obdef +loc3d + 3.406717740263719 0.5806184282903090 100000.0000000000 3 +kind + 33 +84601 153130 + 1.07229244766182 + OBS 2 + 27450.2966235645 + 0.000000000000000E+000 + 1 -1 -1 +obdef +loc3d + 3.484538383406885 0.5925166389933947 120000.0000000000 3 +kind + 33 +84601 153130 + 1.03153675838621 \ No newline at end of file diff --git a/models/aether_lat-lon/work/quickbuild.sh b/models/aether_lat-lon/work/quickbuild.sh new file mode 100755 index 0000000000..dd1b063219 --- /dev/null +++ b/models/aether_lat-lon/work/quickbuild.sh @@ -0,0 +1,51 @@ +#!/usr/bin/env bash + +# DART software - Copyright UCAR. 
This open source software is provided +# by UCAR, "as is", without charge, subject to all terms of use at +# http://www.image.ucar.edu/DAReS/DART/DART_download + +main() { + +export DART=$(git rev-parse --show-toplevel) +source "$DART"/build_templates/buildfunctions.sh + +MODEL=aether_lat-lon +LOCATION=threed_sphere + +programs=( +filter +model_mod_check +perfect_model_obs +) + +serial_programs=( +create_fixed_network_seq +create_obs_sequence +obs_diag +obs_seq_to_netcdf +) + +model_serial_programs=( +aether_to_dart +dart_to_aether) + +arguments "$@" + +# clean the directory +\rm -f -- *.o *.mod Makefile .cppdefs + +# build any NetCDF files from .cdl files +cdl_to_netcdf + +# build and run preprocess before making any other DART executables +buildpreprocess + +# build DART +buildit + +# clean up +\rm -f -- *.o *.mod + +} + +main "$@" diff --git a/models/clm/dart_to_clm.f90 b/models/clm/dart_to_clm.f90 index 93b669a95b..f1707bed5f 100644 --- a/models/clm/dart_to_clm.f90 +++ b/models/clm/dart_to_clm.f90 @@ -415,7 +415,7 @@ subroutine update_snow(dom_id, ncid_dart, ncid_clm, ncolumn, nlevel, nlevsno, re real(r8) :: dart_H2OICE(nlevel,ncolumn) real(r8) :: clm_H2OSNO(ncolumn) !(column,time) for vector history -real(r8) :: clm_SNLSNO(ncolumn) +integer :: clm_SNLSNO(ncolumn) real(r8) :: clm_SNOWDP(ncolumn) real(r8) :: clm_DZSNO(nlevsno,ncolumn) real(r8) :: clm_ZSNO(nlevsno,ncolumn) @@ -423,7 +423,7 @@ subroutine update_snow(dom_id, ncid_dart, ncid_clm, ncolumn, nlevel, nlevsno, re real(r8) :: clm_H2OLIQ(nlevel,ncolumn) real(r8) :: clm_H2OICE(nlevel,ncolumn) -real(r8) :: snlsno(ncolumn) +integer :: snlsno(ncolumn) real(r8) :: h2osno_pr(ncolumn) real(r8) :: h2osno_po(ncolumn) real(r8) :: snowdp_pr(ncolumn) @@ -611,7 +611,7 @@ subroutine update_snow(dom_id, ncid_dart, ncid_clm, ncolumn, nlevel, nlevsno, re ! Leave layer aggregation/initialization to CLM do ilevel=1,-snlsno(icolumn) - h2oliq_po(ilevel,icolumn) = 0.0_r8 + h2oliq_po(ilevel,icolumn) = 0.00000001_r8 h2oice_po(ilevel,icolumn) = 0.00000001_r8 dzsno_po(ilevel,icolumn) = 0.00000001_r8 @@ -686,7 +686,10 @@ subroutine update_snow(dom_id, ncid_dart, ncid_clm, ncolumn, nlevel, nlevsno, re ! Apply the increment for liquid, ice and depth for each layer. h2oliq_po(ilevel,icolumn) = h2oliq_pr(ilevel,icolumn) + gain_h2oliq h2oice_po(ilevel,icolumn) = h2oice_pr(ilevel,icolumn) + gain_h2oice - + + if (h2oliq_po(ilevel,icolumn) < 0.0_r8) h2oliq_po(ilevel,icolumn) = 0.00000001_r8 + if (h2oice_po(ilevel,icolumn) < 0.0_r8) h2oice_po(ilevel,icolumn) = 0.00000001_r8 + ! Important to update snow layer dimensions because CLM code relies ! on snow layer thickness for compaction/aggregation snow algorithm ! to function properly @@ -695,7 +698,8 @@ subroutine update_snow(dom_id, ncid_dart, ncid_clm, ncolumn, nlevel, nlevsno, re if (abs(h2osno_po(icolumn) - h2osno_pr(icolumn)) > 0.0_r8) then dzsno_po(ilevel,icolumn) = dzsno_pr(ilevel,icolumn) + gain_dzsno(ilevel,icolumn) - + if (dzsno_po(ilevel,icolumn) < 0.0_r8) dzsno_po(ilevel,icolumn) = 0.00000001_r8 + ! For consistency with updated dzsno_po (thickness) ! also update zsno_po (middle depth) and zisno (top interface depth) @@ -748,12 +752,13 @@ subroutine update_snow(dom_id, ncid_dart, ncid_clm, ncolumn, nlevel, nlevsno, re ! and recieved an H2OSNO adjustment. This eliminates operating ! on columns that do not require re-partitioning if (abs(h2osno_po(icolumn) - h2osno_pr(icolumn)) & - > 0.0_r8 .and. snlsno(icolumn) < 0.0_r8) then + > 0.0_r8 .and. 
snlsno(icolumn) < 0) then snowdp_po(icolumn) = snowdp_pr(icolumn) + sum(gain_dzsno(nlevsno+1+snlsno(icolumn):nlevsno,icolumn)) else snowdp_po(icolumn) = snowdp_pr(icolumn) endif + if (snowdp_po(icolumn) < 0.0_r8) snowdp_po(icolumn) = 0.00000001_r8 endif diff --git a/models/clm/shell_scripts/cesm2_2/DART_params.csh b/models/clm/shell_scripts/cesm2_2/DART_params.csh index 54feff81af..be45dc806a 100755 --- a/models/clm/shell_scripts/cesm2_2/DART_params.csh +++ b/models/clm/shell_scripts/cesm2_2/DART_params.csh @@ -77,7 +77,7 @@ endif # setenv dartroot /glade/work/${USER}/DART setenv use_SourceMods TRUE -setenv SourceModDir ${dartroot}/DART_SourceMods/cesm2_2_0/SourceMods +setenv SourceModDir ${dartroot}/models/clm/DART_SourceMods/cesm2_2_0/SourceMods # ============================================================================== # Directories: diff --git a/models/seir/model_mod.f90 b/models/seir/model_mod.f90 new file mode 100644 index 0000000000..d71657c17c --- /dev/null +++ b/models/seir/model_mod.f90 @@ -0,0 +1,439 @@ +! DART software - Copyright UCAR. This open source software is provided +! by UCAR, "as is", without charge, subject to all terms of use at +! http://www.image.ucar.edu/DAReS/DART/DART_download +! + +module model_mod + +use types_mod, only : r8, i8, i4, MISSING_R8 + +use time_manager_mod, only : time_type, set_time + +use location_mod, only : location_type, set_location, get_location, & + get_close_obs, get_close_state, & + convert_vertical_obs, convert_vertical_state + +use utilities_mod, only : register_module, do_nml_file, do_nml_term, & + nmlfileunit, find_namelist_in_file, & + check_namelist_read + +use location_io_mod, only : nc_write_location_atts, nc_write_location + +use netcdf_utilities_mod, only : nc_add_global_attribute, nc_synchronize_file, & + nc_add_global_creation_time, nc_begin_define_mode, & + nc_end_define_mode + +use obs_kind_mod, only : QTY_STATE_VARIABLE + +use mpi_utilities_mod, only : my_task_id + +use random_seq_mod, only : random_seq_type, init_random_seq, random_gaussian + +use ensemble_manager_mod, only : ensemble_type, get_my_num_vars, get_my_vars + +use distributed_state_mod, only : get_state + +use state_structure_mod, only : add_domain + +use default_model_mod, only : end_model, nc_write_model_vars + +use dart_time_io_mod, only : read_model_time, write_model_time + +implicit none +private + +public :: get_model_size, & + get_state_meta_data, & + model_interpolate, & + shortest_time_between_assimilations, & + static_init_model, & + init_conditions, & + init_time, & + adv_1step, & + nc_write_model_atts + +public :: pert_model_copies, & + nc_write_model_vars, & + get_close_obs, & + get_close_state, & + end_model, & + convert_vertical_obs, & + convert_vertical_state, & + read_model_time, & + write_model_time + +character(len=*), parameter :: source = "seir/model_mod.f90" + +type(location_type) :: state_loc ! state location, compute once and store for speed +type(random_seq_type) :: random_seq +type(time_type) :: time_step + +! Input parameters +integer(i8) :: model_size = 7 +integer :: time_step_days = 0 +integer :: time_step_seconds = 3600 +real(r8) :: t_incub = 5.6_r8 ! Incubation period (days) +real(r8) :: t_infec = 3.8_r8 ! Infection time (days) +real(r8) :: t_recov = 14.0_r8 ! Recovery period (days) +real(r8) :: t_death = 7.0_r8 ! Time until death (days) +real(r8) :: alpha = 0.007_r8 ! Vaccination rate (per day) +integer(i8) :: theta = 12467 ! New birth and new residents (persons per day) +real(r8) :: mu = 0.000025_r8 ! 
Natural death rate (per day)
+real(r8)    :: sigma    = 0.05_r8      ! Vaccination inefficacy (e.g., 0.05 for a 95% effective vaccine)
+real(r8)    :: beta     = 1.36e-9_r8   ! Transmission rate (per day)
+real(r8)    :: kappa    = 0.00308_r8   ! Mortality rate
+real(r8)    :: delta_t  = 0.04167_r8   ! Model time step; 1/24 day = 1 hour
+integer(i8) :: num_pop  = 331996199    ! Population (US)
+real(r8)    :: pert_size = 1.0_r8      ! Size of perturbation (lognormal pdf param)
+
+real(r8) :: gama, delta, lambda, rho
+
+! Other related model parameters
+real(r8), parameter :: E0 = 1.0_r8 ! Exposed (not yet infected)
+real(r8), parameter :: I0 = 1.0_r8 ! Infected (not yet quarantined)
+real(r8), parameter :: Q0 = 1.0_r8 ! Quarantined (confirmed and infected)
+real(r8), parameter :: R0 = 1.0_r8 ! Recovered
+real(r8), parameter :: D0 = 1.0_r8 ! Dead
+real(r8), parameter :: V0 = 1.0_r8 ! Vaccinated
+real(r8)            :: S0          ! Susceptible
+
+namelist /model_nml/ model_size, time_step_days, time_step_seconds, &
+                     delta_t, num_pop, pert_size, &
+                     t_incub, t_infec, t_recov, t_death, &
+                     alpha, theta, beta, sigma, kappa, mu
+
+contains
+
+
+!------------------------------------------------------------------
+!
+
+subroutine static_init_model()
+
+real(r8) :: x_loc
+integer  :: i, dom_id
+
+! Do any initial setup needed, including reading the namelist values
+call initialize()
+
+! Define the locations of the model state variables
+! The SEIR variables have no physical location,
+! and so I'm placing all 7 variables at the same
+! virtual point in space.
+x_loc = 0.5_r8
+state_loc = set_location(x_loc)
+
+! This time is both the minimum time you can ask the model to advance
+! and it sets the assimilation window.
+! All observations within +/- 1/2 this interval from the current
+! model time will be assimilated.
+time_step = set_time(time_step_seconds, time_step_days)
+
+! Tell the DART I/O routines how large the model data is so they
+! can read/write it.
+dom_id = add_domain(model_size)
+
+end subroutine static_init_model
+
+!------------------------------------------------------------------
+! Advance the SEIR model using a four-stage Runge-Kutta (RK4) scheme
+
+subroutine adv_1step(x, time)
+
+real(r8), intent(inout) :: x(:)
+type(time_type), intent(in) :: time
+
+real(r8), dimension(size(x)) :: xi, x1, x2, x3, x4, dx
+
+! 1st step
+call seir_eqns(x, dx)
+x1 = delta_t * dx
+
+! 2nd step
+xi = x + 0.5_r8 * delta_t * dx
+call seir_eqns(xi, dx)
+x2 = delta_t * dx
+
+! 3rd step
+xi = x + 0.5_r8 * delta_t * dx
+call seir_eqns(xi, dx)
+x3 = delta_t * dx
+
+! 4th step
+xi = x + delta_t * dx
+call seir_eqns(xi, dx)
+x4 = delta_t * dx
+
+! Compute the new value for x as the weighted RK4 combination of the stages
+x = x + x1/6.0_r8 + x2/3.0_r8 + x3/3.0_r8 + x4/6.0_r8
+
+end subroutine adv_1step
+
+!------------------------------------------------------------------
+! SEIR Model Equations
+! The following extended SEIR Model with Vaccination
+! is adapted from Ghostine et al. (2021):
+! Ghostine, R., Gharamti, M., Hassrouny, S. and Hoteit, I.,
+! "An Extended SEIR Model with Vaccination for Forecasting
+! the COVID-19 Pandemic in Saudi Arabia Using an Ensemble
+! Kalman Filter", Mathematics 2021, 9, 636.
+! https://dx.doi.org/10.3390/math9060636
+
+subroutine seir_eqns(x, fx)
+
+! 
State: x = [S, E, I, Q, R, D, V] + +real(r8), intent(in) :: x(:) +real(r8), intent(out) :: fx(:) + +fx(1) = theta - beta * x(1) * x(3) - alpha * x(1) - mu * x(1) +fx(2) = beta * x(1) * x(3) - gama * x(2) + sigma * beta * x(7) * x(3) - mu * x(2) +fx(3) = gama * x(2) - delta * x(3) - mu * x(3) +fx(4) = delta * x(3) - (1.0_r8 - kappa) * lambda * x(4) - kappa * rho * x(4) - mu * x(4) +fx(5) = (1.0_r8 - kappa) * lambda * x(4) - mu * x(5) +fx(6) = kappa * rho * x(4) +fx(7) = alpha * x(1) - sigma * beta * x(7) * x(3) - mu * x(7) + +end subroutine seir_eqns + +!------------------------------------------------------------------ +! Perturbs a model state for generating initial ensembles. +! Returning interf_provided .true. means this code has +! added uniform small independent perturbations to a +! single ensemble member to generate the full ensemble. +subroutine pert_model_copies(state_ens_handle, ens_size, pert_amp, interf_provided) + +type(ensemble_type), intent(inout) :: state_ens_handle +integer, intent(in) :: ens_size +real(r8), intent(in) :: pert_amp +logical, intent(out) :: interf_provided + +integer :: i,j, num_my_grid_points +real(r8) :: rng + +interf_provided = .true. + +call init_random_seq(random_seq, my_task_id()+1) +! if we are running with more than 1 task, then +! we have all the ensemble members for a subset of +! the model state. which variables we have are determined +! by looking at the global index number into the state vector. + +num_my_grid_points = get_my_num_vars(state_ens_handle) + +do i=1,num_my_grid_points + + ! Lognormal Distribution + do j= 1, ens_size + rng = pert_size * random_gaussian(random_seq, 0.0_r8, 1.0_r8) + state_ens_handle%copies(j, i) = & + state_ens_handle%copies(j, i) * exp(rng) + end do +end do + + +end subroutine pert_model_copies + +!------------------------------------------------------------------ +! Returns a model state vector, x, that is some sort of appropriate +! initial condition for starting up a long integration of the model. +! At present, this is only used if the namelist parameter +! start_from_restart is set to .false. in the program perfect_model_obs. +! If this option is not to be used in perfect_model_obs, or if no +! synthetic data experiments using perfect_model_obs are planned, +! this can be a NULL INTERFACE. + +subroutine init_conditions(x) + +real(r8), dimension (model_size) :: x0 +real(r8), intent(out) :: x(:) + +x0 = (/S0, E0, I0, Q0, R0, D0, V0/) +x = x0 + +end subroutine init_conditions + +!------------------------------------------------------------------ +! Companion interface to init_conditions. Returns a time that is somehow +! appropriate for starting up a long integration of the model. +! At present, this is only used if the namelist parameter +! start_from_restart is set to .false. in the program perfect_model_obs. +! If this option is not to be used in perfect_model_obs, or if no +! synthetic data experiments using perfect_model_obs are planned, +! this can be a NULL INTERFACE. + +subroutine init_time(time) + +type(time_type), intent(out) :: time + +! for now, just set to 0 +time = set_time(0,0) + +end subroutine init_time + + +!------------------------------------------------------------------ +! Returns the number of items in the state vector as an integer. +! This interface is required for all applications. + +function get_model_size() + +integer(i8) :: get_model_size + +get_model_size = model_size + +end function get_model_size + + + +!------------------------------------------------------------------ +! 
Returns the smallest increment in time that the model is capable +! of advancing the state in a given implementation, or the shortest +! time you want the model to advance between assimilations. +! This interface is required for all applications. + +function shortest_time_between_assimilations() + +type(time_type) :: shortest_time_between_assimilations + +shortest_time_between_assimilations = time_step + +end function shortest_time_between_assimilations + + + +!------------------------------------------------------------------ +! Given a state handle, a location, and a model state variable quantity, +! interpolates the state variable fields to that location and returns +! the values in expected_obs. The istatus variables should be returned as +! 0 unless there is some problem in computing the interpolation in +! which case an alternate value should be returned. The itype variable +! is an integer that specifies the quantity of field (for +! instance temperature, zonal wind component, etc.). In low order +! models that have no notion of types of variables this argument can +! be ignored. For applications in which only perfect model experiments +! with identity observations (i.e. only the value of a particular +! state variable is observed), this can be a NULL INTERFACE. + +subroutine model_interpolate(state_handle, ens_size, location, iqty, expected_obs, istatus) + +type(ensemble_type), intent(in) :: state_handle +integer, intent(in) :: ens_size +type(location_type), intent(in) :: location +integer, intent(in) :: iqty +real(r8), intent(out) :: expected_obs(ens_size) !< array of interpolated values +integer, intent(out) :: istatus(ens_size) + +! Given the nature of the SEIR model, no interpolation +! is needed. Only identity observations are utilized. + +! This should be the result of the interpolation of a +! given quantity (iqty) of variable at the given location. +expected_obs(:) = MISSING_R8 + +! The return code for successful return should be 0. +! Any positive number is an error. +! Negative values are reserved for use by the DART framework. +! Using distinct positive values for different types of errors can be +! useful in diagnosing problems. +istatus(:) = 1 + +end subroutine model_interpolate + + + +!------------------------------------------------------------------ +! Given an integer index into the state vector structure, returns the +! associated location. A second intent(out) optional argument quantity +! (qty) can be returned if the model has more than one type of field +! (for instance temperature and zonal wind component). This interface is +! required for all filter applications as it is required for computing +! the distance between observations and state variables. + +subroutine get_state_meta_data(index_in, location, qty) + +integer(i8), intent(in) :: index_in +type(location_type), intent(out) :: location +integer, intent(out), optional :: qty + +! these should be set to the actual location and state quantity +location = state_loc !(index_in) +if (present(qty)) qty = QTY_STATE_VARIABLE + +end subroutine get_state_meta_data + + + +!------------------------------------------------------------------ +! Do any initialization/setup, including reading the +! namelist values. + +subroutine initialize() + +integer :: iunit, io + +! Print module information +call register_module(source) + +! Read the namelist +call find_namelist_in_file("input.nml", "model_nml", iunit) +read(iunit, nml = model_nml, iostat = io) +call check_namelist_read(iunit, io, "model_nml") + +! 
Output the namelist values if requested
+if (do_nml_file()) write(nmlfileunit, nml=model_nml)
+if (do_nml_term()) write( * , nml=model_nml)
+
+! Compute other model parameters
+gama   = 1.0_r8 / t_incub
+delta  = 1.0_r8 / t_infec
+lambda = 1.0_r8 / t_recov
+rho    = 1.0_r8 / t_death
+
+! Compute initial value for S:
+S0 = num_pop - E0 - I0 - Q0 - R0 - D0
+
+end subroutine initialize
+
+
+!------------------------------------------------------------------
+! Writes model-specific attributes to a netCDF file
+
+subroutine nc_write_model_atts(ncid, domain_id)
+
+integer, intent(in) :: ncid
+integer, intent(in) :: domain_id
+integer(i4) :: offset
+
+! put file into define mode.
+
+integer :: msize
+
+msize = int(model_size, i4)
+
+call nc_begin_define_mode(ncid)
+
+call nc_add_global_creation_time(ncid)
+
+call nc_add_global_attribute(ncid, "model_source", source)
+
+call nc_add_global_attribute(ncid, "model", "seir")
+
+call nc_write_location_atts(ncid, msize)
+call nc_end_define_mode(ncid)
+
+! Note that location for this model isn't used
+do offset = 1, msize
+   call nc_write_location(ncid, state_loc, offset)
+enddo
+
+! Flush the buffer and leave netCDF file open
+call nc_synchronize_file(ncid)
+
+end subroutine nc_write_model_atts
+
+!===================================================================
+! End of model_mod
+!===================================================================
+end module model_mod
+
diff --git a/models/seir/readme.rst b/models/seir/readme.rst
new file mode 100644
index 0000000000..54bb894f3b
--- /dev/null
+++ b/models/seir/readme.rst
@@ -0,0 +1,138 @@
+SEIR
+====
+
+Overview
+--------
+
+The extended SEIR Model with Vaccination was first proposed by Ghostine et al. (2021) [1]_
+to simulate the novel coronavirus disease (COVID-19) spread. The model considers 7
+stages of infection:
+
+ 1. Susceptible (S),
+ 2. Exposed (E),
+ 3. Infected (I),
+ 4. Quarantined (Q),
+ 5. Recovered (R),
+ 6. Deaths (D),
+ 7. Vaccinated (V).
+
+There are several parameters that can be changed to study different cases and regions:
+
+ - :math:`\theta`: New births and new residents per unit of time,
+ - :math:`\beta`: Transmission rate divided by the population size,
+ - :math:`\alpha`: Vaccination rate,
+ - :math:`\mu`: Natural death rate,
+ - :math:`\gamma`: Average latent time,
+ - :math:`\delta`: Average quarantine time,
+ - :math:`\kappa`: Mortality rate,
+ - :math:`\lambda`: Average days until recovery,
+ - :math:`\rho`: Average days until death,
+ - :math:`\sigma`: Vaccine in-efficacy (:math:`0 \leq \sigma \leq 1`).
+
+Earth system models are often discretized in space, so their states represent
+variables at different spatial locations. The variables of the SEIR model, in contrast,
+describe the stage/phase of the disease and do not have a physical location.
+Consequently, techniques such as spatial localization are not applicable to this model. DART assumes
+that all 7 variables belong to the same *virtual* point in space. Any assimilated
+observation will impact all 7 variables.
+
+The SEIR model uses identity observations. Typical observations that can be assimilated
+are:
+
+ *Recovered*, *Deaths* and *Vaccinated*.
+
+Some agencies provide data for "*Confirmed*" cases. This can be used to compute and
+assimilate the number of active (which is equivalent to quarantined) cases as shown:
+
+*Active/Quarantined (Q) = Confirmed - Recovered (R) - Deaths (D)*
+
+Initial versions of the model were tested using DART_LAB. 
This was conducted by
+**Shaniah Reece** as part of her SIParCS internship at NSF NCAR (2022).
+
+Namelist
+--------
+
+The ``&model_nml`` namelist is read from the ``input.nml`` file. Namelists
+start with an ampersand ``&`` and terminate with a slash ``/``. Character
+strings that contain a ``/`` must be enclosed in quotes to prevent them from
+prematurely terminating the namelist.
+
+.. code-block:: fortran
+
+  &model_nml
+     model_size        = 7,
+     delta_t           = 0.04167,
+     time_step_days    = 0,
+     time_step_seconds = 3600,
+     num_pop           = 331996199,
+     pert_size         = 0.5,
+     t_incub           = 5.6,
+     t_infec           = 3.8,
+     t_recov           = 14.0,
+     t_death           = 7.0,
+     alpha             = 0.000001,
+     theta             = 12467,
+     mu                = 0.000025,
+     sigma             = 0.05,
+     beta              = 0.00000000136,
+     kappa             = 0.00308,
+     /
+
+Description of each namelist entry
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
++-------------------+----------+-------------------------------------------+
+| Item              | Type     | Description                               |
++===================+==========+===========================================+
+| model_size        | integer  | Number of variables in model.             |
++-------------------+----------+-------------------------------------------+
+| delta_t           | real(r8) | Non-dimensional timestep. This is         |
+|                   |          | mapped to the dimensional timestep        |
+|                   |          | specified by time_step_days and           |
+|                   |          | time_step_seconds.                        |
++-------------------+----------+-------------------------------------------+
+| time_step_days    | integer  | Number of days for dimensional            |
+|                   |          | timestep, mapped to delta_t.              |
++-------------------+----------+-------------------------------------------+
+| time_step_seconds | integer  | Number of seconds for dimensional         |
+|                   |          | timestep, mapped to delta_t.              |
++-------------------+----------+-------------------------------------------+
+| num_pop           | integer  | Population size.                          |
++-------------------+----------+-------------------------------------------+
+| pert_size         | real(r8) | Size of perturbation used to create       |
+|                   |          | an ensemble using a lognormal pdf.        |
++-------------------+----------+-------------------------------------------+
+| t_incub           | real(r8) | Incubation period                         |
+|                   |          | :math:`\equiv 1/\gamma`.                  |
++-------------------+----------+-------------------------------------------+
+| t_infec           | real(r8) | Infection time                            |
+|                   |          | :math:`\equiv 1/\delta`.                  |
++-------------------+----------+-------------------------------------------+
+| t_recov           | real(r8) | Recovery period                           |
+|                   |          | :math:`\equiv 1/\lambda`.                 |
++-------------------+----------+-------------------------------------------+
+| t_death           | real(r8) | Time until death                          |
+|                   |          | :math:`\equiv 1/\rho`.                    |
++-------------------+----------+-------------------------------------------+
+| alpha             | real(r8) | Vaccination rate. If the study period     |
+|                   |          | starts before vaccination is              |
+|                   |          | available, this must be set to 0.         |
++-------------------+----------+-------------------------------------------+
+| theta             | integer  | New births and new residents.             |
++-------------------+----------+-------------------------------------------+
+| mu                | real(r8) | Natural death rate.                       |
++-------------------+----------+-------------------------------------------+
+| sigma             | real(r8) | Vaccination inefficacy (e.g., if the      |
+|                   |          | vaccine is 95% effective, then            |
+|                   |          | :math:`\sigma = 1-0.95 = 0.05`).          |
++-------------------+----------+-------------------------------------------+
+| beta              | real(r8) | Transmission rate divided by population   |
+|                   |          | size.                                     |
++-------------------+----------+-------------------------------------------+
+| kappa             | real(r8) | Mortality rate.                           |
++-------------------+----------+-------------------------------------------+
+
+References
+----------
+
+.. [1] Ghostine, R.; Gharamti, M.; Hassrouny, S.; Hoteit, I. An Extended SEIR Model with Vaccination for Forecasting the COVID-19 Pandemic in Saudi Arabia Using an Ensemble Kalman Filter. Mathematics 2021, 9, 636. https://dx.doi.org/10.3390/math9060636.
diff --git a/models/seir/work/input.nml b/models/seir/work/input.nml
new file mode 100644
index 0000000000..f5681a265d
--- /dev/null
+++ b/models/seir/work/input.nml
@@ -0,0 +1,245 @@
+&probit_transform_nml
+   /
+
+&algorithm_info_nml
+   qceff_table_filename = ''
+   /
+
+&perfect_model_obs_nml
+   read_input_state_from_file = .true.,
+   single_file_in    = .true.
+   input_state_files = "perfect_input.nc"
+
+   write_output_state_to_file = .true.,
+   single_file_out    = .true.
+   output_state_files = "perfect_output.nc"
+   output_interval    = 1,
+
+   async                 = 0,
+   adv_ens_command       = "./advance_model.csh",
+
+   obs_seq_in_file_name  = "obs_seq.in",
+   obs_seq_out_file_name = "obs_seq.out",
+   init_time_days        = 0,
+   init_time_seconds     = 0,
+   first_obs_days        = -1,
+   first_obs_seconds     = -1,
+   last_obs_days         = -1,
+   last_obs_seconds      = -1,
+
+   trace_execution          = .false.,
+   output_timestamps        = .false.,
+   print_every_nth_obs      = -1,
+   output_forward_op_errors = .false.,
+   silence                  = .false.,
+   /
+
+&filter_nml
+   single_file_in               = .true.,
+   input_state_files            = ''
+   input_state_file_list        = 'filter_input_list.txt'
+
+   stages_to_write              = 'preassim', 'analysis', 'output'
+
+   single_file_out              = .true.,
+   output_state_files           = ''
+   output_state_file_list       = 'filter_output_list.txt'
+   output_interval              = 1,
+   output_members               = .true.
+   num_output_state_members     = 20,
+   output_mean                  = .true.
+   output_sd                    = .true.
+
+   ens_size                     = 3,
+   num_groups                   = 1,
+   perturb_from_single_instance = .true.,
+   perturbation_amplitude       = 0.2,
+   distributed_state            = .true. 
+
+   async                    = 0,
+   adv_ens_command          = "./advance_model.csh",
+
+   obs_sequence_in_name     = "obs_seq.out",
+   obs_sequence_out_name    = "obs_seq.final",
+   num_output_obs_members   = 20,
+   init_time_days           = 0,
+   init_time_seconds        = 0,
+   first_obs_days           = -1,
+   first_obs_seconds        = -1,
+   last_obs_days            = -1,
+   last_obs_seconds         = -1,
+
+   inf_flavor                  = 5, 0,
+   inf_initial_from_restart    = .false., .false.,
+   inf_sd_initial_from_restart = .false., .false.,
+   inf_deterministic           = .true., .true.,
+   inf_initial                 = 1.0, 1.0,
+   inf_lower_bound             = 0.0, 1.0,
+   inf_upper_bound             = 1000000.0, 1000000.0,
+   inf_damping                 = 0.9, 1.0,
+   inf_sd_initial              = 0.6, 0.0,
+   inf_sd_lower_bound          = 0.6, 0.0,
+   inf_sd_max_change           = 1.05, 1.05,
+
+   trace_execution          = .false.,
+   output_timestamps        = .false.,
+   output_forward_op_errors = .false.,
+   write_obs_every_cycle    = .false.,
+   silence                  = .false.,
+   /
+
+
+&ensemble_manager_nml
+   /
+
+&assim_tools_nml
+   cutoff                          = 0.02,
+   sort_obs_inc                    = .false.,
+   spread_restoration              = .false.,
+   sampling_error_correction       = .false.,
+   adaptive_localization_threshold = -1,
+   output_localization_diagnostics = .false.,
+   localization_diagnostics_file   = 'localization_diagnostics',
+   print_every_nth_obs             = 0,
+   rectangular_quadrature          = .true.,
+   gaussian_likelihood_tails       = .false.,
+   /
+
+&cov_cutoff_nml
+   select_localization = 1,
+   /
+
+&reg_factor_nml
+   select_regression    = 1,
+   input_reg_file       = "time_mean_reg",
+   save_reg_diagnostics = .false.,
+   reg_diagnostics_file = "reg_diagnostics",
+   /
+
+&obs_sequence_nml
+   write_binary_obs_sequence = .false.,
+   read_binary_file_format   = 'native'
+   /
+
+&obs_kind_nml
+   assimilate_these_obs_types = 'RAW_STATE_VARIABLE'
+   /
+
+&model_nml
+   model_size        = 7,
+   delta_t           = 0.04167,
+   time_step_days    = 0,
+   time_step_seconds = 3600,
+   num_pop           = 331996199,
+   pert_size         = 0.5,
+   t_incub           = 5.6,
+   t_infec           = 3.8,
+   t_recov           = 14.0,
+   t_death           = 7.0,
+   alpha             = 0.000001,
+   theta             = 12467,
+   mu                = 0.000025,
+   sigma             = 0.05,
+   beta              = 0.00000000136,
+   kappa             = 0.00308,
+   /
+
+&utilities_nml
+   termlevel      = 1,
+   module_details = .false.,
+   logfilename    = 'dart_log.out',
+   nmlfilename    = 'dart_log.nml',
+   write_nml      = 'file',
+   print_debug    = .false.,
+   /
+
+&mpi_utilities_nml
+   /
+
+&preprocess_nml
+   overwrite_output        = .true.
+   input_obs_def_mod_file  = '../../../observations/forward_operators/DEFAULT_obs_def_mod.F90'
+   output_obs_def_mod_file = '../../../observations/forward_operators/obs_def_mod.f90'
+   input_obs_qty_mod_file  = '../../../assimilation_code/modules/observations/DEFAULT_obs_kind_mod.F90'
+   output_obs_qty_mod_file = '../../../assimilation_code/modules/observations/obs_kind_mod.f90'
+   obs_type_files          = '../../../observations/forward_operators/obs_def_1d_state_mod.f90'
+   quantity_files          = '../../../assimilation_code/modules/observations/oned_quantities_mod.f90'
+   /
+
+&obs_sequence_tool_nml
+   filename_seq      = 'obs1.out', 'obs2.out',
+   filename_seq_list = '',
+   filename_out      = 'obs_seq.combined',
+   first_obs_days    = -1,
+   first_obs_seconds = -1,
+   last_obs_days     = -1,
+   last_obs_seconds  = -1,
+   print_only        = .false.,
+   gregorian_cal     = .false.,
+   /
+
+&obs_diag_nml
+   obs_sequence_name     = 'obs_seq.final',
+   bin_width_days        = -1,
+   bin_width_seconds     = -1,
+   init_skip_days        = 0,
+   init_skip_seconds     = 0,
+   Nregions              = 3,
+   trusted_obs           = 'null',
+   lonlim1               = 0.00, 0.00, 0.50
+   lonlim2               = 1.01, 0.50, 1.01
+   reg_names             = 'whole', 'lower', 'upper'
+   create_rank_histogram = .true.,
+   outliers_in_histogram = .true.,
+   use_zero_error_obs    = .false.,
+   verbose               = .false.
+ / + +&schedule_nml + calendar = 'Gregorian', + first_bin_start = 1601, 1, 1, 0, 0, 0, + first_bin_end = 2999, 1, 1, 0, 0, 0, + last_bin_end = 2999, 1, 1, 0, 0, 0, + bin_interval_days = 1000000, + bin_interval_seconds = 0, + max_num_bins = 1000, + print_table = .true. + / + +&obs_seq_to_netcdf_nml + obs_sequence_name = 'obs_seq.final', + obs_sequence_list = '', + append_to_netcdf = .false., + lonlim1 = 0.0, + lonlim2 = 1.0, + verbose = .true. + / + +&state_vector_io_nml + / + +&quality_control_nml + input_qc_threshold = 3.0, + outlier_threshold = -1.0, + / + +&integrate_model_nml + trace_execution = .true. + ic_file_name = 'temp_ic.nc' + ud_file_name = 'temp_uc.nc' + / + +&model_mod_check_nml + input_state_files = 'perfect_input.nc' + output_state_files = 'mmc_output.nc' + num_ens = 1 + single_file = .false. + test1thru = 7 + run_tests = 1,2,3,4,5,6 + x_ind = 4 + loc_of_interest = 0.3 + quantity_of_interest = 'QTY_STATE_VARIABLE' + interp_test_dx = 0.02 + interp_test_xrange = 0.0, 1.0 + verbose = .true. + / diff --git a/models/seir/work/obs_seq.out b/models/seir/work/obs_seq.out new file mode 100644 index 0000000000..7c6ec7e6ae --- /dev/null +++ b/models/seir/work/obs_seq.out @@ -0,0 +1,189 @@ + obs_sequence +obs_type_definitions + 0 + num_copies: 2 num_qc: 1 + num_obs: 15 max_num_obs: 15 +observations +truth +Quality Control + first: 1 last: 15 + OBS 1 + 1.5550552250651206 + 0.57764969235130070 + 0.0000000000000000 + -1 2 -1 +obdef +loc1d + 0.5000000000000000 +kind + -5 + 0 1 + 0.50000000000000000 + OBS 2 + 0.90066834931003292 + 0.30047988296891687 + 0.0000000000000000 + 1 3 -1 +obdef +loc1d + 0.5000000000000000 +kind + -6 + 0 1 + 0.29999999999999999 + OBS 3 + 332.04521140436549 + 332.12052020582382 + 0.0000000000000000 + 2 4 -1 +obdef +loc1d + 0.5000000000000000 +kind + -7 + 0 1 + 1.0000000000000000E-002 + OBS 4 + 1.2957274103345058 + 0.66709604449775439 + 0.0000000000000000 + 3 5 -1 +obdef +loc1d + 0.5000000000000000 +kind + -5 + 0 2 + 0.50000000000000000 + OBS 5 + -0.28483908888387055 + 0.30103267079449658 + 0.0000000000000000 + 4 6 -1 +obdef +loc1d + 0.5000000000000000 +kind + -6 + 0 2 + 0.29999999999999999 + OBS 6 + 664.18897402952678 + 664.13657445350088 + 0.0000000000000000 + 5 7 -1 +obdef +loc1d + 0.5000000000000000 +kind + -7 + 0 2 + 1.0000000000000000E-002 + OBS 7 + 1.2510430688864891 + 0.76733890921198655 + 0.0000000000000000 + 6 8 -1 +obdef +loc1d + 0.5000000000000000 +kind + -5 + 0 3 + 0.50000000000000000 + OBS 8 + 0.14649035752369940 + 0.30165218527115972 + 0.0000000000000000 + 7 9 -1 +obdef +loc1d + 0.5000000000000000 +kind + -6 + 0 3 + 0.29999999999999999 + OBS 9 + 996.25575565683448 + 996.14816275897965 + 0.0000000000000000 + 8 10 -1 +obdef +loc1d + 0.5000000000000000 +kind + -7 + 0 3 + 1.0000000000000000E-002 + OBS 10 + 1.5062181726711423 + 0.87797602399919394 + 0.0000000000000000 + 9 11 -1 +obdef +loc1d + 0.5000000000000000 +kind + -5 + 0 4 + 0.50000000000000000 + OBS 11 + 0.54462279403932623 + 0.30233594243716233 + 0.0000000000000000 + 10 12 -1 +obdef +loc1d + 0.5000000000000000 +kind + -6 + 0 4 + 0.29999999999999999 + OBS 12 + 1327.9926698239965 + 1328.1552851235194 + 0.0000000000000000 + 11 13 -1 +obdef +loc1d + 0.5000000000000000 +kind + -7 + 0 4 + 1.0000000000000000E-002 + OBS 13 + 1.3441050859614210 + 0.99898224420578696 + 0.0000000000000000 + 12 14 -1 +obdef +loc1d + 0.5000000000000000 +kind + -5 + 0 5 + 0.50000000000000000 + OBS 14 + 0.96724285779869490 + 0.30308378852289997 + 0.0000000000000000 + 13 15 -1 +obdef +loc1d + 0.5000000000000000 +kind + -6 + 0 5 + 
0.29999999999999999 + OBS 15 + 1660.0844485052482 + 1660.1579415388080 + 0.0000000000000000 + 14 -1 -1 +obdef +loc1d + 0.5000000000000000 +kind + -7 + 0 5 + 1.0000000000000000E-002 diff --git a/models/seir/work/perfect_input.cdl b/models/seir/work/perfect_input.cdl new file mode 100644 index 0000000000..f07c245e32 --- /dev/null +++ b/models/seir/work/perfect_input.cdl @@ -0,0 +1,45 @@ +netcdf perfect_input { +dimensions: + member = 1 ; + metadatalength = 32 ; + location = 7 ; + time = UNLIMITED ; // (1 currently) +variables: + + char MemberMetadata(member, metadatalength) ; + MemberMetadata:long_name = "description of each member" ; + + double location(location) ; + location:short_name = "loc1d" ; + location:long_name = "location on a unit circle" ; + location:dimension = 1 ; + location:valid_range = 0., 1. ; + + double state(time, member, location) ; + state:long_name = "the model state" ; + + double time(time) ; + time:long_name = "valid time of the model state" ; + time:axis = "T" ; + time:cartesian_axis = "T" ; + time:calendar = "none" ; + time:units = "days" ; + +// global attributes: + :title = "true state from control" ; + :version = "$Id$" ; + :model = "SEIR" ; + :history = "NA" ; + +data: + + MemberMetadata = + "true state" ; + + location = 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 ; + + state = 331996196, 1.0, 1.0, 1.0, 0.5, 0.3, 0.1 ; + + time = 1.0 ; + +} diff --git a/models/seir/work/quickbuild.sh b/models/seir/work/quickbuild.sh new file mode 100755 index 0000000000..a08deca17f --- /dev/null +++ b/models/seir/work/quickbuild.sh @@ -0,0 +1,59 @@ +#!/usr/bin/env bash + +# DART software - Copyright UCAR. This open source software is provided +# by UCAR, "as is", without charge, subject to all terms of use at +# http://www.image.ucar.edu/DAReS/DART/DART_download + +main() { + +export DART=$(git rev-parse --show-toplevel) +source "$DART"/build_templates/buildfunctions.sh + +MODEL=seir +LOCATION=oned + + +programs=( +closest_member_tool +filter +model_mod_check +perfect_model_obs +) + +serial_programs=( +create_fixed_network_seq +create_obs_sequence +fill_inflation_restart +integrate_model +obs_common_subset +obs_diag +obs_sequence_tool +) + +model_programs=( +) + +model_serial_programs=( +) + +# quickbuild arguments +arguments "$@" + +# clean the directory +\rm -f -- *.o *.mod Makefile .cppdefs + +# build any NetCDF files from .cdl files +cdl_to_netcdf + +# build and run preprocess before making any other DART executables +buildpreprocess + +# build +buildit + +# clean up +\rm -f -- *.o *.mod + +} + +main "$@" diff --git a/models/wrf/WRF_BC/pert_wrf_bc.f90 b/models/wrf/WRF_BC/pert_wrf_bc.f90 index 2cfde431f4..2cf94facd3 100644 --- a/models/wrf/WRF_BC/pert_wrf_bc.f90 +++ b/models/wrf/WRF_BC/pert_wrf_bc.f90 @@ -61,8 +61,6 @@ program pert_wrf_bc integer, dimension(4) :: dims -integer, external :: iargc - real(r8), allocatable, dimension(:,:) :: tend2d, scnd2d, frst2d real(r8), allocatable, dimension(:,:,:) :: tend3d, scnd3d, frst3d, full3d, full3d_next diff --git a/models/wrf/WRF_BC/update_wrf_bc.f90 b/models/wrf/WRF_BC/update_wrf_bc.f90 index 68228d2770..cd78e5a84f 100644 --- a/models/wrf/WRF_BC/update_wrf_bc.f90 +++ b/models/wrf/WRF_BC/update_wrf_bc.f90 @@ -60,8 +60,6 @@ program update_wrf_bc integer, dimension(4) :: dims -integer, external :: iargc - real(r8), allocatable, dimension(:,:) :: tend2d, scnd2d, frst2d real(r8), allocatable, dimension(:,:,:) :: tend3d, scnd3d, frst3d, full3d, full3d_mean diff --git a/models/wrf/WRF_DART_utilities/add_pert_where_high_refl.f90 
b/models/wrf/WRF_DART_utilities/add_pert_where_high_refl.f90 index a89f063f28..ae4a0d228f 100644 --- a/models/wrf/WRF_DART_utilities/add_pert_where_high_refl.f90 +++ b/models/wrf/WRF_DART_utilities/add_pert_where_high_refl.f90 @@ -34,7 +34,6 @@ PROGRAM add_pert_where_high_refl use utilities_mod, only : error_handler, E_ERR, initialize_utilities, finalize_utilities use random_seq_mod, only : random_gaussian, random_seq_type, init_random_seq use netcdf -use f2kcli implicit none diff --git a/models/wrf/WRF_DART_utilities/advance_cymdh.f90 b/models/wrf/WRF_DART_utilities/advance_cymdh.f90 index 4b29fc9ffd..9cc291c316 100644 --- a/models/wrf/WRF_DART_utilities/advance_cymdh.f90 +++ b/models/wrf/WRF_DART_utilities/advance_cymdh.f90 @@ -22,7 +22,7 @@ program advance_cymdh character(len=10) :: ccyymmddhh - nargum=iargc() + nargum=COMMAND_ARGUMENT_COUNT() if(nargum /= 2) then write(unit=*, fmt='(a)') & @@ -34,7 +34,7 @@ program advance_cymdh do n=1,80 argum(i)(n:n)=' ' enddo - call getarg(i,argum(i)) + call GET_COMMAND_ARGUMENT(i,argum(i)) enddo ccyymmddhh = trim(argum(1)) diff --git a/models/wrf/WRF_DART_utilities/f2kcli.f90 b/models/wrf/WRF_DART_utilities/f2kcli.f90 deleted file mode 100644 index 89b1247bd1..0000000000 --- a/models/wrf/WRF_DART_utilities/f2kcli.f90 +++ /dev/null @@ -1,197 +0,0 @@ -! This code is not protected by the DART copyright agreement. -! DART $Id$ - -! F2KCLI : Fortran 200x Command Line Interface -! copyright Interactive Software Services Ltd. 2002 -! For conditions of use see manual.txt -! -! Platform : Mac OS/X -! Compiler : Absoft Pro Fortran -! To compile : f95 -c f2kcli.f90 -! Implementer : Lawson B. Wakefield, I.S.S. Ltd. -! Date : June 2002 -! - MODULE F2KCLI -! - CONTAINS -! - SUBROUTINE GET_COMMAND(COMMAND,LENGTH,STATUS) -! -! Description. Returns the entire command by which the program was -! invoked. -! -! Class. Subroutine. -! -! Arguments. -! COMMAND (optional) shall be scalar and of type default character. -! It is an INTENT(OUT) argument. It is assigned the entire command -! by which the program was invoked. If the command cannot be -! determined, COMMAND is assigned all blanks. -! LENGTH (optional) shall be scalar and of type default integer. It is -! an INTENT(OUT) argument. It is assigned the significant length -! of the command by which the program was invoked. The significant -! length may include trailing blanks if the processor allows commands -! with significant trailing blanks. This length does not consider any -! possible truncation or padding in assigning the command to the -! COMMAND argument; in fact the COMMAND argument need not even be -! present. If the command length cannot be determined, a length of -! 0 is assigned. -! STATUS (optional) shall be scalar and of type default integer. It is -! an INTENT(OUT) argument. It is assigned the value 0 if the -! command retrieval is sucessful. It is assigned a processor-dependent -! non-zero value if the command retrieval fails. -! - CHARACTER(LEN=*), INTENT(OUT), OPTIONAL :: COMMAND - INTEGER , INTENT(OUT), OPTIONAL :: LENGTH - INTEGER , INTENT(OUT), OPTIONAL :: STATUS -! - INTEGER :: IARG,NARG,IPOS - INTEGER , SAVE :: LENARG - CHARACTER(LEN=2000), SAVE :: ARGSTR - LOGICAL , SAVE :: GETCMD = .TRUE. -! -! Reconstruct the command line from its constituent parts. -! This may not be the original command line. -! 
- IF (GETCMD) THEN - NARG = IARGC() - IF (NARG > 0) THEN - IPOS = 1 - DO IARG = 1,NARG - CALL GETARG(IARG,ARGSTR(IPOS:)) - LENARG = LEN_TRIM(ARGSTR) - IPOS = LENARG + 2 - IF (IPOS > LEN(ARGSTR)) EXIT - END DO - ELSE - ARGSTR = ' ' - LENARG = 0 - ENDIF - GETCMD = .FALSE. - ENDIF - IF (PRESENT(COMMAND)) COMMAND = ARGSTR - IF (PRESENT(LENGTH)) LENGTH = LENARG - IF (PRESENT(STATUS)) STATUS = 0 - RETURN - END SUBROUTINE GET_COMMAND -! - INTEGER FUNCTION COMMAND_ARGUMENT_COUNT() -! -! Description. Returns the number of command arguments. -! -! Class. Inquiry function -! -! Arguments. None. -! -! Result Characteristics. Scalar default integer. -! -! Result Value. The result value is equal to the number of command -! arguments available. If there are no command arguments available -! or if the processor does not support command arguments, then -! the result value is 0. If the processor has a concept of a command -! name, the command name does not count as one of the command -! arguments. -! - COMMAND_ARGUMENT_COUNT = IARGC() - RETURN - END FUNCTION COMMAND_ARGUMENT_COUNT -! - SUBROUTINE GET_COMMAND_ARGUMENT(NUMBER,VALUE,LENGTH,STATUS) -! -! Description. Returns a command argument. -! -! Class. Subroutine. -! -! Arguments. -! NUMBER shall be scalar and of type default integer. It is an -! INTENT(IN) argument. It specifies the number of the command -! argument that the other arguments give information about. Useful -! values of NUMBER are those between 0 and the argument count -! returned by the COMMAND_ARGUMENT_COUNT intrinsic. -! Other values are allowed, but will result in error status return -! (see below). Command argument 0 is defined to be the command -! name by which the program was invoked if the processor has such -! a concept. It is allowed to call the GET_COMMAND_ARGUMENT -! procedure for command argument number 0, even if the processor -! does not define command names or other command arguments. -! The remaining command arguments are numbered consecutively from -! 1 to the argument count in an order determined by the processor. -! VALUE (optional) shall be scalar and of type default character. -! It is an INTENT(OUT) argument. It is assigned the value of the -! command argument specified by NUMBER. If the command argument value -! cannot be determined, VALUE is assigned all blanks. -! LENGTH (optional) shall be scalar and of type default integer. -! It is an INTENT(OUT) argument. It is assigned the significant length -! of the command argument specified by NUMBER. The significant -! length may include trailing blanks if the processor allows command -! arguments with significant trailing blanks. This length does not -! consider any possible truncation or padding in assigning the -! command argument value to the VALUE argument; in fact the -! VALUE argument need not even be present. If the command -! argument length cannot be determined, a length of 0 is assigned. -! STATUS (optional) shall be scalar and of type default integer. -! It is an INTENT(OUT) argument. It is assigned the value 0 if -! the argument retrieval is sucessful. It is assigned a -! processor-dependent non-zero value if the argument retrieval fails. -! -! NOTE -! One possible reason for failure is that NUMBER is negative or -! greater than COMMAND_ARGUMENT_COUNT(). -! - INTEGER , INTENT(IN) :: NUMBER - CHARACTER(LEN=*), INTENT(OUT), OPTIONAL :: VALUE - INTEGER , INTENT(OUT), OPTIONAL :: LENGTH - INTEGER , INTENT(OUT), OPTIONAL :: STATUS -! -! A temporary variable for the rare case case where LENGTH is -! 
specified but VALUE is not. An arbitrary maximum argument length -! of 1000 characters should cover virtually all situations. -! - CHARACTER(LEN=1000) :: TMPVAL -! -! Possible error codes: -! 1 = Argument number is less than minimum -! 2 = Argument number exceeds maximum -! - IF (NUMBER < 0) THEN - IF (PRESENT(VALUE )) VALUE = ' ' - IF (PRESENT(LENGTH)) LENGTH = 0 - IF (PRESENT(STATUS)) STATUS = 1 - RETURN - ELSE IF (NUMBER > IARGC()) THEN - IF (PRESENT(VALUE )) VALUE = ' ' - IF (PRESENT(LENGTH)) LENGTH = 0 - IF (PRESENT(STATUS)) STATUS = 2 - RETURN - END IF -! -! Get the argument if VALUE is present -! - IF (PRESENT(VALUE)) CALL GETARG(NUMBER,VALUE) -! -! As under Unix, the LENGTH option is probably fairly pointless here, -! but LEN_TRIM is used to ensure at least some sort of meaningful result. -! - IF (PRESENT(LENGTH)) THEN - IF (PRESENT(VALUE)) THEN - LENGTH = LEN_TRIM(VALUE) - ELSE - CALL GETARG(NUMBER,TMPVAL) - LENGTH = LEN_TRIM(TMPVAL) - END IF - END IF -! -! Since GETARG does not return a result code, assume success -! - IF (PRESENT(STATUS)) STATUS = 0 - RETURN - END SUBROUTINE GET_COMMAND_ARGUMENT -! - END MODULE F2KCLI - - -! -! $URL$ -! $Id$ -! $Revision$ -! $Date$ diff --git a/models/wrf/WRF_DART_utilities/grid_refl_obs.f90 b/models/wrf/WRF_DART_utilities/grid_refl_obs.f90 index 0d9703b1e7..d2c20bc5be 100644 --- a/models/wrf/WRF_DART_utilities/grid_refl_obs.f90 +++ b/models/wrf/WRF_DART_utilities/grid_refl_obs.f90 @@ -48,7 +48,6 @@ PROGRAM grid_refl_obs use utilities_mod, only : error_handler, E_ERR, E_MSG, file_exist, & initialize_utilities, finalize_utilities use netcdf -use f2kcli implicit none @@ -111,14 +110,14 @@ PROGRAM grid_refl_obs integer :: var_id, ncid, ierr character(len=80) :: varname -! f2kcli stuff +! command-line parameters stuff integer :: status, length character(len=120) :: string call initialize_utilities('grid_refl_obs') -! Get command-line parameters, using the F2KCLI interface. See f2kcli.f90 for details. +! Get command-line parameters, using the fortran 2003 intrinsics. if( COMMAND_ARGUMENT_COUNT() .ne. 7 ) then print*, 'INCORRECT # OF ARGUMENTS ON COMMAND LINE: ', COMMAND_ARGUMENT_COUNT() diff --git a/models/wrf/experiments/Radar/IC/sounding_perturbation/pert_sounding_module.f90 b/models/wrf/experiments/Radar/IC/sounding_perturbation/pert_sounding_module.f90 index f4b447c1c4..4d6ca20cd9 100644 --- a/models/wrf/experiments/Radar/IC/sounding_perturbation/pert_sounding_module.f90 +++ b/models/wrf/experiments/Radar/IC/sounding_perturbation/pert_sounding_module.f90 @@ -18,7 +18,7 @@ module pert_sounding_mod implicit none real, parameter :: PI = 3.1415926535897932346 -real :: iseed1, iseed2 +integer :: iseed1, iseed2 contains diff --git a/models/wrf/readme.rst b/models/wrf/readme.rst index 4746f9124e..0f37ad09fe 100644 --- a/models/wrf/readme.rst +++ b/models/wrf/readme.rst @@ -9,6 +9,10 @@ DART interface module for the Weather Research and Forecasting `(WRF) `__ model. This page documents the details of the module compiled into DART that interfaces with the WRF data in the state vector. +**The WRF-DART interface is compatible with WRF versions 4 and later, and is +no longer backwards compatible with WRFv3.9 and earlier.** +For more information on the interface changes required between +different WRF versions see the WRF tutorial link in the next section. 
WRF+DART Tutorial
-----------------

@@ -362,4 +366,4 @@ Files
 References
 ----------
 
-http://www2.mmm.ucar.edu/wrf/users/docs/user_guide_V3/contents.html
+https://www2.mmm.ucar.edu/wrf/users/docs/user_guide_v4/contents.html
diff --git a/models/wrf/shell_scripts/add_bank_perts.ncl b/models/wrf/shell_scripts/add_bank_perts.ncl
index 7b03baec9d..964cb61f2d 100644
--- a/models/wrf/shell_scripts/add_bank_perts.ncl
+++ b/models/wrf/shell_scripts/add_bank_perts.ncl
@@ -43,16 +43,19 @@ begin
  asciiwrite("mem"+MEM_NUM+"_pert_bank_num",ens_mem_num)
  print ("bank member number "+ens_mem_num)
+
+;For WRFv4 and later, the prognostic temperature variable is THM
  pert_fields = (/"U", "V", "T", "QVAPOR","MU"/)
+ wrf_fields  = (/"U", "V", "THM", "QVAPOR","MU"/)
  pert_scale  = (/scale_U,scale_V,scale_T,scale_Q,scale_M/)
  nperts = dimsizes(pert_fields)
  pert_in = addfile(pert_bank_path+"/"+pert_bank_file,"r")
  wrf_in = addfile(wrf_file,"w")
  do n=0,nperts-1
-   temp_w = wrf_in->$pert_fields(n)$
+   temp_w = wrf_in->$wrf_fields(n)$
    temp_p = pert_in->$pert_fields(n)$
    temp_c = temp_w+(temp_p * pert_scale(n))
-   wrf_in->$pert_fields(n)$ = temp_c
+   wrf_in->$wrf_fields(n)$ = temp_c
    delete(temp_w)
    delete(temp_p)
    delete(temp_c)
diff --git a/models/wrf/shell_scripts/assim_advance.csh b/models/wrf/shell_scripts/assim_advance.csh
index 1db7d64d26..b9fa8e5e44 100755
--- a/models/wrf/shell_scripts/assim_advance.csh
+++ b/models/wrf/shell_scripts/assim_advance.csh
@@ -50,7 +50,7 @@ if ( -e $RUN_DIR/advance_temp${emember}/wrf.info ) then
 endif
 touch wrf.info
 
-if ( $SUPER_PLATFORM == 'yellowstone' ) then
+if ( $SUPER_PLATFORM == 'LSF queuing system' ) then
 
  cat >! $RUN_DIR/advance_temp${emember}/wrf.info << EOF
 ${gdatef[2]} ${gdatef[1]}
@@ -60,7 +60,7 @@ $yyyy $mm $dd $hh $nn $ss
    mpirun.lsf ./wrf.exe
 EOF
 
-else if ( $SUPER_PLATFORM == 'cheyenne' ) then
+else if ( $SUPER_PLATFORM == 'derecho' ) then
 
#  module load openmpi
  cat >! 
$RUN_DIR/advance_temp${emember}/wrf.info << EOF @@ -68,7 +68,7 @@ ${gdatef[2]} ${gdatef[1]} ${gdate[2]} ${gdate[1]} $yyyy $mm $dd $hh $nn $ss $domains - mpiexec_mpt dplace -s 1 ./wrf.exe + mpiexec -n 128 -ppn 128 ./wrf.exe EOF endif diff --git a/models/wrf/shell_scripts/assimilate.csh b/models/wrf/shell_scripts/assimilate.csh index 871f0f311d..00cce0d190 100755 --- a/models/wrf/shell_scripts/assimilate.csh +++ b/models/wrf/shell_scripts/assimilate.csh @@ -23,20 +23,19 @@ if ( -e ${RUN_DIR}/obs_seq.final ) ${REMOVE} ${RUN_DIR}/obs_seq.final if ( -e ${RUN_DIR}/filter_done ) ${REMOVE} ${RUN_DIR}/filter_done # run data assimilation system -if ( $SUPER_PLATFORM == 'yellowstone' ) then +if ( $SUPER_PLATFORM == 'LSF queuing system' ) then setenv TARGET_CPU_LIST -1 setenv FORT_BUFFERED true mpirun.lsf ./filter || exit 1 -else if ( $SUPER_PLATFORM == 'cheyenne' ) then +else if ( $SUPER_PLATFORM == 'derecho' ) then -# TJH MPI_SHEPHERD TRUE may be a very bad thing setenv MPI_SHEPHERD FALSE setenv TMPDIR /dev/shm limit stacksize unlimited - mpiexec_mpt dplace -s 1 ./filter || exit 1 + mpiexec -n 256 -ppn 128 ./filter || exit 1 endif diff --git a/models/wrf/shell_scripts/driver.csh b/models/wrf/shell_scripts/driver.csh index 3c609bbab1..dbe9b6c113 100755 --- a/models/wrf/shell_scripts/driver.csh +++ b/models/wrf/shell_scripts/driver.csh @@ -78,13 +78,13 @@ while ( 1 == 1 ) # # NOTE that multiple domains might be present, but only looking for domain 1 - if ( $SUPER_PLATFORM == 'yellowstone' ) then + if ( $SUPER_PLATFORM == 'LSF queuing system' ) then set ic_queue = caldera set logfile = "${RUN_DIR}/ic_gen.log" set sub_command = "bsub -q ${ic_queue} -W 00:05 -o ${logfile} -n 1 -P ${COMPUTER_CHARGE_ACCOUNT}" - else if ( $SUPER_PLATFORM == 'cheyenne' ) then - set ic_queue = "economy" - set sub_command = "qsub -l select=1:ncpus=2:mpiprocs=36:mem=5GB -l walltime=00:03:00 -q ${ic_queue} -A ${COMPUTER_CHARGE_ACCOUNT} -j oe -k eod -N icgen " + else if ( $SUPER_PLATFORM == 'derecho' ) then + set ic_queue = "main" + set sub_command = "qsub -l select=1:ncpus=128:mpiprocs=128:mem=5GB -l walltime=00:03:00 -q ${ic_queue} -A ${COMPUTER_CHARGE_ACCOUNT} -j oe -k eod -N icgen " endif echo "this platform is $SUPER_PLATFORM and the job submission command is $sub_command" @@ -120,7 +120,7 @@ while ( 1 == 1 ) set n = 1 while ( $n <= $NUM_ENS ) - if ( $SUPER_PLATFORM == 'cheyenne' ) then # can't pass along arguments in the same way + if ( $SUPER_PLATFORM == 'derecho' ) then # can't pass along arguments in the same way $sub_command -v mem_num=${n},date=${datep},domain=${domains},paramf=${paramfile} ${SHELL_SCRIPTS_DIR}/prep_ic.csh else $sub_command " ${SHELL_SCRIPTS_DIR}/prep_ic.csh ${n} ${datep} ${dn} ${paramfile} " @@ -147,9 +147,8 @@ while ( 1 == 1 ) @ loop++ if ( $loop > 60 ) then # wait 5 minutes for the ic file to be ready, else run manually echo "gave up on ic member $n - redo" - # TJH this is not the command for cheyenne, why not $sub_command from above ${SHELL_SCRIPTS_DIR}/prep_ic.csh ${n} ${datep} ${dn} ${paramfile} - # TJH the job queued above is still queued and should be killed ... + # If manual execution of script, shouldn't queued job be killed? endif endif end @@ -210,7 +209,7 @@ while ( 1 == 1 ) # run filter to generate the analysis ${REMOVE} script.sed - if ( $SUPER_PLATFORM == 'yellowstone' ) then + if ( $SUPER_PLATFORM == 'LSF queuing system' ) then # This is a most unusual application of 'sed' to insert the batch submission # directives into a file. The last backslash '\' before the quote is essential. 
@@ -241,7 +240,7 @@ while ( 1 == 1 ) endif set this_filter_runtime = $FILTER_TIME - else if ( $SUPER_PLATFORM == 'cheyenne' ) then + else if ( $SUPER_PLATFORM == 'derecho' ) then echo "2i\" >! script.sed echo "#=================================================================\" >> script.sed @@ -250,6 +249,7 @@ while ( 1 == 1 ) echo "#PBS -A ${COMPUTER_CHARGE_ACCOUNT}\" >> script.sed echo "#PBS -l walltime=${FILTER_TIME}\" >> script.sed echo "#PBS -q ${FILTER_QUEUE}\" >> script.sed + echo "#PBS -l job_priority=${FILTER_PRIORITY}\" >> script.sed echo "#PBS -m ae\" >> script.sed echo "#PBS -M ${EMAIL}\" >> script.sed echo "#PBS -k eod\" >> script.sed @@ -382,7 +382,7 @@ while ( 1 == 1 ) set n = 1 while ( $n <= $NUM_ENS ) - if ( $SUPER_PLATFORM == 'yellowstone' ) then + if ( $SUPER_PLATFORM == 'LSF queuing system' ) then echo "2i\" >! script.sed echo "#==================================================================\" >> script.sed @@ -407,7 +407,7 @@ while ( 1 == 1 ) bsub < assim_advance_mem${n}.csh endif - else if ( $SUPER_PLATFORM == 'cheyenne' ) then + else if ( $SUPER_PLATFORM == 'derecho' ) then echo "2i\" >! script.sed echo "#=================================================================\" >> script.sed @@ -416,6 +416,7 @@ while ( 1 == 1 ) echo "#PBS -A ${COMPUTER_CHARGE_ACCOUNT}\" >> script.sed echo "#PBS -l walltime=${ADVANCE_TIME}\" >> script.sed echo "#PBS -q ${ADVANCE_QUEUE}\" >> script.sed + echo "#PBS -l job_priority=${ADVANCE_PRIORITY}\" >> script.sed echo "#PBS -m a\" >> script.sed echo "#PBS -M ${EMAIL}\" >> script.sed echo "#PBS -k eod\" >> script.sed @@ -456,7 +457,7 @@ while ( 1 == 1 ) # Wait for the script to start while ( ! -e ${RUN_DIR}/start_member_${n} ) - if ( $SUPER_PLATFORM == 'yellowstone' ) then + if ( $SUPER_PLATFORM == 'LSF queuing system' ) then if ( `bjobs -w | grep assim_advance_${n} | wc -l` == 0 ) then @@ -470,7 +471,12 @@ while ( 1 == 1 ) endif - else if ( $SUPER_PLATFORM == 'cheyenne' ) then + else if ( $SUPER_PLATFORM == 'derecho' ) then + + # Prevent double submission for member 1 only + if ( $n == 1) then + sleep 5 + endif if ( `qstat -wa | grep assim_advance_${n} | wc -l` == 0 ) then @@ -502,7 +508,7 @@ while ( 1 == 1 ) # Obviously, the job crashed. Resubmit to queue ${REMOVE} start_member_${n} echo "didn't find the member done file" - if ( $SUPER_PLATFORM == 'yellowstone' ) then + if ( $SUPER_PLATFORM == 'LSF queuing system' ) then if ( $?reservation ) then echo "MEMBER ${n} USING RESERVATION," `/contrib/lsf/get_my_rsvid` @@ -511,15 +517,15 @@ while ( 1 == 1 ) bsub < assim_advance_mem${n}.csh endif - else if ( $SUPER_PLATFORM == 'cheyenne' ) then + else if ( $SUPER_PLATFORM == 'derecho' ) then qsub assim_advance_mem${n}.csh - + sleep 5 endif break endif - sleep 10 # this might need to be longer, though I moved the done flag lower in the + sleep 15 # this might need to be longer, though I moved the done flag lower in the # advance_model.csh to hopefully avoid the file moves below failing end diff --git a/models/wrf/shell_scripts/first_advance.csh b/models/wrf/shell_scripts/first_advance.csh index 28e7fe2f3a..32ec373706 100755 --- a/models/wrf/shell_scripts/first_advance.csh +++ b/models/wrf/shell_scripts/first_advance.csh @@ -38,7 +38,7 @@ endif touch wrf.info -if ( $SUPER_PLATFORM == 'yellowstone' ) then +if ( $SUPER_PLATFORM == 'LSF queuing system' ) then cat >! 
$RUN_DIR/advance_temp${emember}/wrf.info << EOF ${gdatef[2]} ${gdatef[1]} @@ -48,11 +48,8 @@ if ( $SUPER_PLATFORM == 'yellowstone' ) then mpirun.lsf ./wrf.exe EOF -else if ( $SUPER_PLATFORM == 'cheyenne' ) then +else if ( $SUPER_PLATFORM == 'derecho' ) then - # TJH MPI_IB_CONGESTED, MPI_LAUNCH_TIMEOUT used after cheyenne O/S change in July 2019 - # TJH setenv MPI_IB_CONGESTED 1 - # TJH setenv MPI_LAUNCH_TIMEOUT 40 setenv MPI_SHEPHERD false cat >! $RUN_DIR/advance_temp${emember}/wrf.info << EOF @@ -60,7 +57,7 @@ else if ( $SUPER_PLATFORM == 'cheyenne' ) then ${gdate[2]} ${gdate[1]} $yyyy $mm $dd $hh $nn $ss $domains - mpiexec_mpt dplace -s 1 ./wrf.exe + mpiexec -n 128 -ppn 128 ./wrf.exe EOF endif diff --git a/models/wrf/shell_scripts/gen_retro_icbc.csh b/models/wrf/shell_scripts/gen_retro_icbc.csh index 4b2bb6bf49..14b4fb3b69 100755 --- a/models/wrf/shell_scripts/gen_retro_icbc.csh +++ b/models/wrf/shell_scripts/gen_retro_icbc.csh @@ -42,7 +42,7 @@ echo "gen_retro_icbc.csh is running in `pwd`" set datea = 2017042700 set datefnl = 2017042712 # set this appropriately #%%%# -set paramfile = /glade2/scratch2/USERNAME/WORK_DIR/scripts/param.csh # set this appropriately #%%%# +set paramfile = /glade/derecho/scratch/USERNAME/WORK_DIR/scripts/param.csh # set this appropriately #%%%# source $paramfile @@ -171,22 +171,23 @@ EOF #if ( -e rsl.out.0000 ) cat rsl.out.0000 >> out.real.exe rm script.sed real_done rsl.* - echo "2i\" >! script.sed - echo "#======================================\" >> script.sed - echo "#PBS -N run_real\" >> script.sed - echo "#PBS -A ${COMPUTER_CHARGE_ACCOUNT}\" >> script.sed - echo "#PBS -l walltime=00:05:00\" >> script.sed - echo "#PBS -q ${ADVANCE_QUEUE}\" >> script.sed - echo "#PBS -o run_real.out\" >> script.sed - echo "#PBS -j oe\" >> script.sed - echo "#PBS -k eod\" >> script.sed - echo "#PBS -l select=3:ncpus=36:mpiprocs=36\" >> script.sed - echo "#PBS -V\" >> script.sed - echo "#======================================\" >> script.sed - echo "\" >> script.sed - echo "" >> script.sed - echo 's%${1}%'"${paramfile}%g" >> script.sed - sed -f script.sed ${SHELL_SCRIPTS_DIR}/real.csh >! real.csh + echo "2i\" >! script.sed + echo "#======================================\" >> script.sed + echo "#PBS -N run_real\" >> script.sed + echo "#PBS -A ${COMPUTER_CHARGE_ACCOUNT}\" >> script.sed + echo "#PBS -l walltime=00:05:00\" >> script.sed + echo "#PBS -q ${ADVANCE_QUEUE}\" >> script.sed + echo "#PBS -l job_priority=${ADVANCE_PRIORITY}\" >> script.sed + echo "#PBS -o run_real.out\" >> script.sed + echo "#PBS -j oe\" >> script.sed + echo "#PBS -k eod\" >> script.sed + echo "#PBS -l select=1:ncpus=128:mpiprocs=128\" >> script.sed + echo "#PBS -V\" >> script.sed + echo "#======================================\" >> script.sed + echo "\" >> script.sed + echo "" >> script.sed + echo 's%${1}%'"${paramfile}%g" >> script.sed + sed -f script.sed ${SHELL_SCRIPTS_DIR}/real.csh >! 
real.csh qsub real.csh diff --git a/models/wrf/shell_scripts/init_ensemble_var.csh b/models/wrf/shell_scripts/init_ensemble_var.csh index 3b1196e00a..a6af8845a4 100755 --- a/models/wrf/shell_scripts/init_ensemble_var.csh +++ b/models/wrf/shell_scripts/init_ensemble_var.csh @@ -83,6 +83,7 @@ EOF #PBS -A ${COMPUTER_CHARGE_ACCOUNT} #PBS -l walltime=${ADVANCE_TIME} #PBS -q ${ADVANCE_QUEUE} +#PBS -l job_priority=${ADVANCE_PRIORITY} #PBS -m ae #PBS -M ${EMAIL} #PBS -k eod diff --git a/models/wrf/shell_scripts/param.csh b/models/wrf/shell_scripts/param.csh index e41ccdb1d8..e1dc1aff96 100755 --- a/models/wrf/shell_scripts/param.csh +++ b/models/wrf/shell_scripts/param.csh @@ -9,8 +9,7 @@ # ASSIM_INT_MINUTES support needs to be added to param.csh, # it is referenced in assim_advance.csh but not declared in param.csh -# Set up environment. Current settings are for NCAR's Cheyenne -module load mpt # set this appropriately #%%%# +# Set up environment. Current settings are for NCAR's Derecho module load nco # set this appropriately #%%%# module load ncl/6.6.2 # set this appropriately #%%%# @@ -25,7 +24,7 @@ set NUM_DOMAINS = 1 # Directories where things are run # IMPORTANT : Scripts provided rely on this directory structure and names relative to BASE_DIR. # Do not change, otherwise tutorial will fail. -set BASE_DIR = /glade2/scratch2/USER/WORK_DIR # set this appropriately #%%%# +set BASE_DIR = /glade/derecho/scratch/USER/WORK_DIR # set this appropriately #%%%# set RUN_DIR = ${BASE_DIR}/rundir set TEMPLATE_DIR = ${BASE_DIR}/template set OBSPROC_DIR = ${BASE_DIR}/obsproc @@ -35,57 +34,55 @@ set POST_STAGE_DIR = ${BASE_DIR}/post set OBS_DIAG_DIR = ${BASE_DIR}/obs_diag set PERTS_DIR = ${BASE_DIR}/perts -# Directories that can be used by many things +# Assign path to DART, WRF, WPS and WRFDA build set SHELL_SCRIPTS_DIR = ${BASE_DIR}/scripts -set DART_DIR = /glade/p/work/USER/DART_manhattan # set this appropriately #%%%# -set WRF_DM_SRC_DIR = /glade/p/work/USER/WRFV3_dmpar # set this appropriately #%%%# -set WPS_SRC_DIR = /glade/p/work/USER/WPS # set this appropriately #%%%# -set VAR_SRC_DIR = /glade/p/work/USER/WRFDA # set this appropriately #%%%# +set DART_DIR = /glade/work/USER/DART # set this appropriately #%%%# +set WRF_DM_SRC_DIR = /glade/work/USER/WRFV3 # set this appropriately #%%%# +set WPS_SRC_DIR = /glade/work/USER/WPS # set this appropriately #%%%# +set VAR_SRC_DIR = /glade/work/USER/WRFDA # set this appropriately #%%%# # for generating wrf template files -set GEO_FILES_DIR = /glade/p/work/USER/WPS # set this appropriately #%%%# -set GRIB_DATA_DIR = /glade/p/work/USER/WPS/GRIB # set this appropriately #%%%# -set GRIB_SRC = 'GFS' # set this appropriately #%%%# +set GEO_FILES_DIR = /glade/u/home/wrfhelp/WPS_GEOG # set this appropriately #%%%# +set GRIB_DATA_DIR = ${ICBC_DIR}/grib_data # set this appropriately #%%%# +set GRIB_SRC = 'GFS' # set this appropriately #%%%# # list of variables for extraction and cycling -set extract_vars_a = ( U V PH T MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN \ +set extract_vars_a = ( U V PH THM MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN \ U10 V10 T2 Q2 PSFC TSLB SMOIS TSK RAINC RAINNC GRAUPELNC ) -set extract_vars_b = ( U V W PH T MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN \ +set extract_vars_b = ( U V W PH THM MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN \ U10 V10 T2 Q2 PSFC TSLB SMOIS TSK RAINC RAINNC GRAUPELNC \ REFL_10CM VT_DBZ_WT ) -set cycle_vars_a = ( U V PH T MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN \ +set 
cycle_vars_a = ( U V PH THM MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN \ U10 V10 T2 Q2 PSFC TSLB SMOIS TSK ) -set increment_vars_a = ( U V PH T MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN U10 V10 T2 Q2 PSFC ) +set increment_vars_a = ( U V PH THM MU QVAPOR QCLOUD QRAIN QICE QSNOW QGRAUP QNICE QNRAIN U10 V10 T2 Q2 PSFC ) # Diagnostic parameters set OBS_VERIF_DAYS = 7 # Generic queuing system parameters -set SUPER_PLATFORM = cheyenne - -# TJH consistent way of checking the SUPER_PLATFORM and injecting that -# header information into the scripts ... rather than have scripts -# that have redundant blocks in them ... -# +set SUPER_PLATFORM = derecho set COMPUTER_CHARGE_ACCOUNT = YOUR_ACCT # set this appropriately #%%%# -set EMAIL = YOUR_EMAIL@SOMEPLACE.COM # set this appropriately #%%%# +set EMAIL = YOUR_EMAIL # set this appropriately #%%%# -if ( $SUPER_PLATFORM == 'cheyenne') then - # cheyenne values (uses 'PBS' queueing system) - # set this appropriately #%%%# ... ALL OF THESE if using PBS - set FILTER_QUEUE = regular +if ( $SUPER_PLATFORM == 'derecho') then + # Derecho values (uses 'PBS' queueing system) + # Set these appropriately for your PBS system #%%%# + set FILTER_QUEUE = main + set FILTER_PRIORITY = premium set FILTER_TIME = 0:35:00 - set FILTER_NODES = 10 - set FILTER_PROCS = 36 - set FILTER_MPI = 36 - set ADVANCE_QUEUE = regular - set ADVANCE_TIME = 0:20:00 - set ADVANCE_NODES = 3 - set ADVANCE_PROCS = 36 - set ADVANCE_MPI = 36 + set FILTER_NODES = 2 + set FILTER_PROCS = 128 + set FILTER_MPI = 128 + + set ADVANCE_QUEUE = main + set ADVANCE_PRIORITY = premium + set ADVANCE_TIME = 0:20:00 + set ADVANCE_NODES = 1 + set ADVANCE_PROCS = 128 + set ADVANCE_MPI = 128 else - # yellowstone (uses 'LSF' queueing system) - # set this appropriately #%%%# ... ALL OF THESE if using LSF + # 'LSF' queueing system example + # Set these appropriately for your LSF or Slurm system #%%%# set FILTER_QUEUE = regular set FILTER_TIME = 0:25 set FILTER_CORES = 512 @@ -97,8 +94,6 @@ else endif # System specific commands -# TJH ... The LINK command probably should not have the force option. -# TJH ... and if the LINK fails, should it die right there? setenv REMOVE 'rm -rf' setenv COPY 'cp -pfr' setenv MOVE 'mv -f' diff --git a/models/wrf/shell_scripts/real.csh b/models/wrf/shell_scripts/real.csh index 0185106bc9..d1459b6168 100755 --- a/models/wrf/shell_scripts/real.csh +++ b/models/wrf/shell_scripts/real.csh @@ -5,7 +5,7 @@ source $paramfile cd ${ICBC_DIR} - mpiexec_mpt dplace -s 1 ${RUN_DIR}/WRF_RUN/real.exe + mpiexec -n 128 -ppn 128 ${RUN_DIR}/WRF_RUN/real.exe #if ( `grep "Successful completion of program real.exe" ./rsl.out.0000 | wc -l ` == 1 ) touch ${ICBC_DIR}/real_done diff --git a/models/wrf/tutorial/README.rst b/models/wrf/tutorial/README.rst index 1acd4919a4..80cd82c344 100644 --- a/models/wrf/tutorial/README.rst +++ b/models/wrf/tutorial/README.rst @@ -8,49 +8,52 @@ Introduction This document will describe how to get started with your own Weather Research and Forecasting (WRF) data assimilation experiments using DART -and only covers only the WRF-specific aspects of integrating with DART. -It is not wise to try to run WRF/DART if you have no experience with WRF -and/or no experience with DART. - -This tutorial was assembled to be compatible with ~WRF V3.9.1 and the -DART Manhattan release. Other releases of WRF may or may not be -backwards or forwards compatible with this tutorial. - -You must already be comfortable running the -`WRF `__ -system (WPS, real_em build of WRF). 
If not, work through the `WRF model
-tutorial `__
-first before trying to link WRF and DART together. Check the WRF user
-guide or the
-`WRFHELP `__
-forum for WRF-specific assistance.
+and only covers the WRF-specific aspects of coupling with DART.
+It is not wise to try to run WRF-DART if you have no experience with
+either WRF or DART.
+
+.. Important ::
+
+  This tutorial was designed to be compatible with WRF Version 4 and was
+  tested with WRFv4.5.2. This tutorial should not be used with DART
+  versions 11.4.0 and earlier because those older versions do not account
+  for different coordinate systems, including the sigma hybrid coordinates, as
+  described in `DART Issue #650 `__.
+  Furthermore, older versions do not account for the prognostic temperature variable
+  switch from ``T`` (perturbation potential temperature) to ``THM`` (either perturbation
+  potential temperature or perturbation moist potential temperature) as described in
+  `DART issue #661 `__. The current implementation
+  of the code sets ``T=THM`` because ``use_theta_m=0`` within the &dynamics
+  section of ``namelist.input``.
+
+  Earlier versions of WRF (v3.9) may run without errors with more recent versions of
+  DART (later than 11.4.0), but the assimilation performance will be degraded.
+  If you need to run with earlier versions of WRF, please review the changes required
+  to switch from WRFv4 to WRFv3 as documented within
+  `DART issue #661 `__,
+  or contact the DART team. Earlier WRF versions also require different settings
+  within the WRF ``namelist.input`` file to promote vertical stability for the tutorial
+  example. These settings are also described in DART Issue #661.
+
+Prior to running this tutorial, we urge users to familiarize themselves with the
+`WRF system `__
+(WRF_ARW, WPS and WRFDA), and to read through the `WRFv4.5 User's Guide
+`__
+and the `WRF model tutorials `__.
+
+The DART team is not responsible for and does not maintain the WRF code. For WRF-related issues check out the
+`WRF User Forum `__
+or the `WRF github page `__.
 
 If you are new to DART, we recommend that you become familiar with DART
 by working through the :doc:`../../../theory/readme` and then
 understanding the :ref:`DART getting started ` documentation.
-before attempting the WRF/DART tutorial as you will find many helpful
-resources for learning the base DART configuration.
-
-*We do not claim that this is a “turnkey” or “black box” system.* Be
-mentally prepared to invest a reasonable amount of time on the learning
-curve. There are many outstanding research issues which have no easy
-answers. This is not a one week/grad student/naive user system. Even
-after you get the code up and running, you have to be able to interpret
-the results, which requires developing specific skills. There are a lot
-of ways to alter how the system works – localization, inflation, which
-variables and observations are assimilated, the assimilation window
-time, the model resolution, etc, etc. This is both good and bad - you
-have many ways of improving your results, but you have to take care on
-how you leave all the settings of these inputs. Getting a set of scripts
-that runs doesn’t mean the system is running well, or producing useful
-results. So - if you’re still reading: Let the adventure begin!
-
-This tutorial introduces a “canned” WRF/DART experiment involving an
+This tutorial is **not** a toy simulation, but represents a realistic WRF-DART
+assimilation for the continental United States. 
It uses a WRF ensemble of 50 members
 that will be initialized from GFS initial
-conditions at 2017/04/27 00:00 UTC using a domain of the continental
-United States. The data included in the tutorial lasts until 2017/04/30
-18:00 UTC. During this period, there was a strong rain and wind event
+conditions at 2017/04/27 00:00 UTC. The data included in the tutorial lasts
+until 2017/04/30 18:00 UTC. During this period, there was a strong rain and wind event
 that affected a large portion of the United States, causing record rains,
 localized flooding, and numerous tornadoes. For more information on the
 physical account of this case, see
@@ -63,46 +66,67 @@
 observations will then be performed at 06:00 UTC, at which time analysis
 files will be generated to begin a new ensemble forecast. The WRF model
 will be advanced for 6 hours and a final assimilation cycle will be
 performed at 12:00 UTC. This process could then continue in order to
-investigate the strong rain and wind event. For what it's worth, on
-NSF NCAR's *Cheyenne* under the default test configuration for this case, it
-can take an hour to complete a forecast/assimilation cycle. Since the
-tutorial runs for two cycles, it can take twice as long.
-
-The goals of this tutorial are to demonstrate how WRF/DART works. After
-running this tutorial, you will be able to understand the major steps
-involved in setting up your own data assimilation (DA) experiments.
-However, you will need to do additional work before you can expect to
-have a fully functional WRF/DART system, as some of the steps involved
+investigate the strong rain and wind event. On NSF NCAR's *Derecho*,
+the tutorial requires at least 30 minutes of run time, and can take
+much longer (1-2 hours) depending upon the PBS queue wait time.
+
+The goal of this tutorial is to demonstrate how WRF-DART works, and to provide an
+understanding of the major steps within a data assimilation (DA) experiment.
+However, you will need to do additional work before you can apply
+WRF-DART to your own research application, as some of the steps involved
 in this tutorial (in particular, the perturbation bank and the
 observation sequence files) are provided for you in order to simplify
-the process. Furthermore, if you are not running on the NSF NCAR
-Cheyenne supercomputing system, you will likely need to customize the
-assimilation scripts to match the details of your particular system.
+the process. We provide a diagnostic section at the end of the tutorial to
+assess the skill/success of the assimilation. Be aware that an assimilation is
+not successful just because it runs to completion. A successful assimilation
+generally uses the vast majority of the observations provided and minimizes
+the bias and RMSE between the posterior model state and the observations.
+Finally, if you are not running on the NSF NCAR Derecho (PBS) supercomputing system, you will
+need to customize the assimilation scripts (located in /DART/models/wrf/shell_scripts/) to match the details of your particular system.
+Specifically, you will need to edit the DART csh scripting to match your system settings,
+whether that is, for example, a PBS, Slurm, or LSF HPC system. Although the DART team can
+offer advice on how to customize the scripting to accommodate your HPC system, your
+HPC system administrator is likely the best resource to resolve these issues.
 
-.. 
important ::
-   We have provided instructions for the NSF NCAR supercomputer
-   Cheyenne, so you may need to tailor these instructions to your system if
-   you are not using Cheyenne. These system-specific setup steps may take a
-   good deal of effort, especially if you are unfamiliar with details such
-   as MPI, NetCDF, etc. Furthermore, even after you get the code up and
-   running, you will need to properly interpret your results.
+.. Important ::
+
+   The tutorial scripting and instructions are based on the NSF NCAR supercomputer
+   Derecho, so you will need to edit the scripts and interpret the instructions for
+   other HPC systems. The scripting uses examples of a PBS queuing system (e.g. Derecho)
+   and an LSF queuing system (e.g. the decommissioned Yellowstone). You can use these as a
+   template for your own system.
 
 Step 1: Setup
 -------------
 
-There are several dependencies for the executables and scripting
-components. On Cheyennne, users have reported success building WRF, WPS,
-WRFDA, and DART with the default module environment including Intel
-compilers, MPT, and netCDF4. In addition, you'll need to load the
+There are several required dependencies for the executables and WRF-DART scripting
+components. On NSF NCAR's Derecho, users have reported success building WRF, WPS,
+WRFDA, and DART using gfortran with the following module environment. Note: not all
+of the modules listed below are required to compile and run the tutorial.
+
+ ::
+
+  Currently Loaded Modules:
+    1) ncarenv/23.09 (S)   3) udunits/2.2.28   5) ncarcompilers/1.0.0   7) cray-mpich/8.1.27   9) netcdf-mpi/4.9.2
+    2) gcc/12.2.0          4) ncview/2.1.9     6) craype/2.7.23         8) hdf5-mpi/1.12.2    10) hdf/4.2.15
+
+In addition, you'll need to load the
 `nco `__ and `ncl `__ modules to run the set of scripts
-that accompany the tutorial.
+that accompany the tutorial. For Derecho the nco and ncl
+packages can be automatically loaded using the following commands:
 
-There are multiple phases for the setup: building the DART executables,
-getting the initial WRF boundary conditions etc., building (or using
+ ::
+
+  module load nco
+  module load ncl/6.6.2
+
+These commands are provided by default with the param.csh script. More details
+are provided below. There are multiple phases for the setup: building the DART executables,
+downloading the initial WRF boundary conditions, building (or using
 existing) WRF executables, and configuring and staging the scripting
 needed to perform an experiment.
@@ -138,6 +162,13 @@ might need for an experiment with that model.
 not, you will need to do so now. See :ref:`Getting Started ` for more
 detail, if necessary.
 
+.. Important ::
+
+   If using gfortran to compile DART on Derecho, a successful configuration
+   of the ``mkmf.template`` includes using the ``mkmf.template.gfortran`` script
+   and customizing the compiler flags as follows:
+   FFLAGS = -O2 -ffree-line-length-none -fallow-argument-mismatch -fallow-invalid-boz $(INCS)
+
 2. [OPTIONAL] Modify the DART code to use 32bit reals. Most WRF/DART
    users run both the WRF model and the DART assimilation code using
    32bit reals. This is not the default for the DART code. Make this
@@ -176,7 +207,7 @@ might need for an experiment with that model.
 
       cd $DART_DIR/models/wrf
      cp tutorial/template/input.nml.template work/input.nml
 
-4. Build the WRF/DART executables:
+4. Build the WRF-DART executables:
 
    ::
 
@@ -204,7 +235,8 @@ Preparing the experiment directory.
 
 Approximately 100Gb of space is needed to run the tutorial. Create a
 "work" directory someplace with a lot of free space. 
The rest of the
 instructions assume you have an environment variable called *BASE_DIR*
-that points to this directory.
+that points to this directory. On Derecho it is convenient to use your
+scratch directory for this purpose.
 
 ===== ====================================================
 shell command
@@ -221,13 +253,16 @@ bash  ``export BASE_DIR=``
 
    ::
 
      cd $BASE_DIR
-     wget http://www.image.ucar.edu/wrfdart/tutorial/wrf_dart_tutorial_23May2018_v3.tar.gz
-     tar -xzvf wrf_dart_tutorial_23May2018_v3.tar.gz
+     wget http://www.image.ucar.edu/wrfdart/tutorial/wrf_dart_tutorial_29Apr2024.tar.gz
+     tar -xzvf wrf_dart_tutorial_29Apr2024.tar.gz
 
    After untarring the file you should see the following directories:
   *icbc, output, perts,* and *template.* The directory names (case
   sensitive) are important, as the scripts rely on these local paths
-   and file names.
+   and file names. Please note that the perturbation, surface and initial
+   condition files were derived from an earlier version (pre-4.0) of WRF/WPS/WRFDA
+   but still maintain compatibility with the (post-4.0, post-11.4.0)
+   WRF-DART versions recommended to run this WRF assimilation example.
 
 2. You will need template WRF namelists from the
    ``$DART_DIR/models/wrf/tutorial/template`` directory:
@@ -245,40 +280,50 @@
 
      mkdir $BASE_DIR/scripts
     cp -R $DART_DIR/models/wrf/shell_scripts/* $BASE_DIR/scripts
 
-Build or locate WRF executables.
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The
-`WRFDA `__
-package is needed to generate a set of perturbed initial ensemble member
+Build or locate the WRF, WPS and WRFDA executables
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Instructions for downloading the WRF package are located
+`here `__.
+The WRF package consists of 3 parts: the WRF atmospheric model (WRF-ARW), the
+WRF Preprocessing System (WPS), and the WRF Data Assimilation System (WRFDA).
+
+Importantly, DART is used to perform the ensemble DA for this tutorial; however,
+the WRFDA package is required to generate a set of perturbed initial ensemble member
 files and also to generate perturbed boundary condition files. Since the
 tutorial provides a perturbation bank for a specific case, it is not
 required to actually *run da_wrfvar.exe* but it needs to be in the
 ``WRF_RUN`` directory for the tutorial.
 
-Build (or locate an appropriate build of) WRF, WPS and WRFDA.
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
 WRF and WRFDA should be built with the "dmpar" option, while WPS can be
-built "serial"ly. See the WRF/WRFDA documentation for more information
+built "serial"ly. See the WRF documentation for more information
 about building these packages.
 
-.. note::
+.. Warning::
 
   For consistency and to avoid errors, you should build WRF, WPS, WRFDA,
   and DART with the same compiler you use for NetCDF. Likewise MPI should
   use the same compiler. You will need the location of the WRF and WRFDA builds to customize the
-   *params.csh* script in the next step.
+   *params.csh* script in the next step. If using gfortran to compile WRF on Derecho,
+   we recommend using option 34 (gnu dmpar) to configure WRF, option 1 (gnu serial) to
+   configure WPS, and option 34 (gnu dmpar) to configure WRFDA. You will also need
+   the location of the WPS build to customize the *params.csh* script.
+
+   Using the gfortran compiler on Derecho required custom flag settings to successfully
+   compile the WRF, WPS and WRFDA executables. For more information please see
+   NCAR/DART `github issue 627. 
`__ + Configure ``$BASE_DIR/scripts/param.csh`` with proper paths, info, etc. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This is a script that sets variables which will be read by other -WRF/DART scripts. There are some specific parameters for either the -Cheyenne supercomputing system using the +WRF-DART scripts. There are some specific parameters for either the +Derecho supercomputing system using the `PBS `__ queueing system or the (decommissioned) Yellowstone system which used the *LSF* queueing -system. If you are not using Cheyenne, you may still want to use this +system. If you are not using Derecho, you may still want to use this script to set your queueing-system specific parameters. .. important:: @@ -291,8 +336,6 @@ script to set your queueing-system specific parameters. +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ | Script variable | Description | +=========================+=====================================================================================================================================================+ - | module load mpt | The Environment Modules MPI compiler to use (here the HPE MPI) compiler). Note that on Cheyenne the default compiler is Intel. | - +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ | module load nco | The nco package. | +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ | module load ncl/6.6.2 | The ncl package. | @@ -307,7 +350,7 @@ script to set your queueing-system specific parameters. +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ | VAR_SRC_DIR | The directory of the WRFDA installation. | +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ - | GEO_FILES_DIR | The root directory of the WPS_GEOG files. NOTE: on Cheyenne these are available in the /glade/u/home/wrfhelp/WPS_GEOG directory | + | GEO_FILES_DIR | The root directory of the WPS_GEOG files. NOTE: on Derecho these are available in the /glade/u/home/wrfhelp/WPS_GEOG directory | +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ | GRIB_DATA_DIR | The root directory of the GRIB data input into ungrib.exe. For this tutorial the grib files are included, so use ${ICBC_DIR}/grib_data | +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ @@ -315,7 +358,7 @@ script to set your queueing-system specific parameters. +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+ | COMPUTER_CHARGE_ACCOUNT | The project account for supercomputing charges. See your supercomputing project administrator for more information. 
|
   +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+
-   | EMAIL | The e-mail address used by the queueing system to send job summary information. |
+   | EMAIL | The e-mail address used by the queueing system to send job summary information. This is optional. |
   +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------+

@@ -427,7 +470,7 @@ find the following scripts:
+-----------------------+-------------------------------------------------------------------------------------------+
| new_advance_model.csh | advances the WRF model after running DART in a cycling context. |
+-----------------------+-------------------------------------------------------------------------------------------+
-| param.csh | Contains most of the key settings to run the WRF/DART system. |
+| param.csh | Contains most of the key settings to run the WRF-DART system. |
+-----------------------+-------------------------------------------------------------------------------------------+
| prep_ic.csh | Prepares the initial conditions for a single ensemble member. |
+-----------------------+-------------------------------------------------------------------------------------------+
@@ -570,14 +613,22 @@ you when each ensemble member has finished.

Step 3: Prepare observations [OPTIONAL]
---------------------------------------

-For the tutorial exercise, observation sequence files are provided to
-enable you to quickly get started running a test WRF/DART system. If you
-want to run with the example observations, you can skip to Step
-4.
-
-However, observation processing is critical to the success of running
-DART and was covered in :ref:`Getting Started `. In
-brief, to add your own observations to WRF/DART you will need to
+.. warning::
+
+   The observation sequence files to run this tutorial are already provided
+   for you. If you want to run with the provided tutorial observations, you
+   can skip to Step 4 right now. If you are interested in using custom
+   observations for a WRF experiment other than the tutorial, you should read on.
+   The remaining instructions provided below in Step 3 are meant as a guideline
+   for converting raw PREPBUFR data files into the ``obs_seq`` format
+   required by DART. Be aware that there is ongoing discussion of the proper
+   archived data set (RDA ds090.0 or ds337.0) that should be used to obtain
+   the PREPBUFR data. See the discussion in `bug report #634 `__.
+   If you have questions, please contact the DART team.
+
+Observation processing is critical to the success of running
+DART and is covered in :ref:`Getting Started `. In
+brief, to add your own observations to WRF-DART you will need to
understand the relationship between observation definitions and
observation sequences, observation types and observation quantities, and
understand how observation converters extract observations from their
@@ -589,26 +640,22 @@ contain a wide array of observation types from many platforms within a
single file.
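For orientation, an ``obs_seq`` file is plain text: a short header records the
number of copies, QC fields, and observations, followed by one ``OBS`` block per
observation. The sketch below is illustrative only, is not part of DART, and
assumes the standard ASCII ``obs_seq`` layout; it shows one way to sanity-check
a file before assimilating it::

    # summarize_obs_seq.py -- hypothetical helper, not shipped with DART.
    # Counts the OBS blocks in an ASCII obs_seq file and compares the total
    # against the num_obs value claimed by the header.
    import sys

    def summarize(path):
        num_obs = 0      # count claimed by the header
        obs_blocks = 0   # OBS blocks actually present
        with open(path) as f:
            for line in f:
                tokens = line.split()
                if tokens[:1] == ['num_obs:']:   # e.g. "num_obs:  805  max_num_obs:  805"
                    num_obs = int(tokens[1])
                elif tokens[:1] == ['OBS']:      # e.g. "OBS            1"
                    obs_blocks += 1
        print(f'{path}: header claims {num_obs} obs, found {obs_blocks} OBS blocks')

    if __name__ == '__main__':
        summarize(sys.argv[1])

DART's own tools, such as ``obs_sequence_tool`` and ``obs_seq_to_netcdf``, remain
the supported way to subset and inspect observation sequence files.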
If you wanted to generate your own observation sequence files from
-PREPBUFR for an experiment with WRF/DART, you should follow the guidance
+PREPBUFR for an experiment with WRF-DART, you should follow the guidance
on the
`prepbufr <../../../observations/obs_converters/NCEP/prep_bufr/prep_bufr.html>`__
page to build the bufr conversion programs, get observation files for the
dates you plan to build an analysis for, and run the codes to generate an
observation sequence file.

-For completeness, we list here how you could generate these observation
-sequence files yourself.
-
-.. important::
-
-   the following steps are **not
-   necessary** for the tutorial as the processed PREPBUFR observation
-   sequence files have already been provided for you. However, these steps
-   are provided in order to help users get started with these observations
-   quickly for their own experiments.
+The steps listed below to generate these observation
+sequence files are meant as a guideline for NSF NCAR Research Data
+Archive (RDA) dataset ds090.0. **Be aware that not all of the software required
+to perform this conversion has been migrated to Derecho.**
+See `bug report #634 `__
+for the latest information.

-To (again, *optionally*) reproduce the observation sequence files in the
-*output* directories, you would do the following:
+To reproduce the observation sequence files in the *output* directories,
+you would do the following:

-  Go into your DART prep_bufr observation converter directory and
   install the PREPBUFR utilities as follows:
@@ -835,18 +882,17 @@ necessary for ensemble data assimilation, for large models such as WRF
that are run on a supercomputer queueing system, an additional layer of
scripts is necessary to glue all of the pieces together. A set of
scripts is provided with the tutorial tarball to give you a starting
-point for your own WRF/DART system. You will need to edit these scripts,
+point for your own WRF-DART system. You will need to edit these scripts,
perhaps extensively, to run them within your particular computing
-environment. If you will run on NSF NCAR's Cheyenne environment, fewer edits
+environment. If you will run on NSF NCAR's Derecho environment, fewer edits
may be needed, but you should familiarize yourself with `running jobs on
-Cheyenne `__
+Derecho `__
if necessary. A single forecast/assimilation cycle of this tutorial can
-take an hour on Cheyenne - longer if debug options are enabled or the
-shared nodes are busy - shorter if more cores or a higher optimization
-level is acceptable.
+take up to 10 minutes on Derecho - longer if debug options are enabled or
+if there is a wait time during the queue submission.

In this tutorial, we have previously edited the *param.csh* and other
-scripts. Throughout the WRF/DART scripts, there are many options to
+scripts. Throughout the WRF-DART scripts, there are many options to
adjust cycling frequency, domains, ensemble size, etc., which are
available when adapting this set of scripts for your own research. To
become more familiar with this set of scripts and to eventually make
@@ -900,10 +946,10 @@ between the background (prior) and the analysis (posterior) after running

The ``analysis_increment.nc`` file includes the following atmospheric
variables:
-``MU, PH, PSFC, QRAIN, QCLOUD, QGRAUP, QICE, QNICE, QSNOW, QVAPOR, T`` and ``T2``.
-The example figure below shows the increments for temperature (T) only. You can
-use **ncview** to advance through all 11 atmospheric pressure levels.
You should
-see spatial patterns that look something like the meteorology of the day.
+``MU, PH, PSFC, QRAIN, QCLOUD, QGRAUP, QICE, QNICE, QSNOW, QVAPOR, THM`` and ``T2``.
+The example figure below shows the increments for THM (perturbation potential temperature)
+only. You can use **ncview** to advance through all 11 atmospheric pressure levels.
+You should see spatial patterns that look something like the meteorology of the day.

+--------------------------+--------------------------------+
| |ncview1| | |ncview2| |
+--------------------------+--------------------------------+

@@ -951,7 +997,7 @@ In some cases there could be multiple obs_epoch*.nc files, but in general, the user
should use the obs_epoch file appended with the largest numeric value as it
contains the most complete set of observations. The diagnostic scripts used
here are included within the DART package, and require a MATLAB license to run. The
-commands shown below to run the diagnostics use NSF NCAR's Cheyenne, but a user could
+commands shown below to run the diagnostics use NSF NCAR's Derecho, but a user could
also run them on a local machine.

First explore the obs_epoch*.nc file and identify the variety of observations included
@@ -1274,9 +1320,9 @@ period of the assimilation.
   calendar = 'Gregorian',
   first_bin_start = 1601, 1, 1, 0, 0, 0,
   first_bin_end = 2999, 1, 1, 0, 0, 0,
-   last_bin_end = 2999, 1, 1, 0, 0, 0,
-   bin_interval_days = 0,
-   bin_interval_seconds = 21600,
+   last_bin_end = 2999, 1, 1, 0, 0, 0,
+   bin_interval_days = 1000000,
+   bin_interval_seconds = 0,
   max_num_bins = 1000,
   print_table = .true
   /
@@ -1340,12 +1386,12 @@ Additional materials from previous in-person tutorials

- Introduction - `DART Lab materials <../../../guide/DART_LAB/DART_LAB.html>`__
-- WRF/DART basic building blocks
+- WRF-DART basic building blocks
  -`slides `__ (some material is outdated)
- Computing environment support
  -`slides `__
-- WRF/DART application examples
+- WRF-DART application examples
  -`slides `__ (some material is outdated)
- Observation processing
@@ -1357,10 +1403,8 @@ More Resources
--------------

- `Check or Submit DART Issues `__
-- `DAReS website `__
-- `Preparing
-  MATLAB `__
-  to use with DART.
+- `DAReS website `__
+- :ref:`Preparing MATLAB` to use with DART.
- `WRF model users page `__

..
|ncview1| image:: ../../../guide/images/WRF_tutorial_ncview1.png diff --git a/models/wrf/tutorial/template/input.nml.template b/models/wrf/tutorial/template/input.nml.template index 5b3fd1dfab..3e2a1c1007 100644 --- a/models/wrf/tutorial/template/input.nml.template +++ b/models/wrf/tutorial/template/input.nml.template @@ -1,3 +1,10 @@ +&probit_transform_nml + / + +&algorithm_info_nml + qceff_table_filename = '' + / + &filter_nml async = 2, adv_ens_command = "./advance_model.csh", @@ -12,8 +19,8 @@ first_obs_seconds = -1, last_obs_days = -1, last_obs_seconds = -1, - num_output_state_members = 0, - num_output_obs_members = 3, + num_output_state_members = 50, + num_output_obs_members = 50, output_interval = 1, num_groups = 1, output_forward_op_errors = .false., @@ -65,10 +72,9 @@ &assim_tools_nml - filter_kind = 1, cutoff = 0.10, sort_obs_inc = .false., - spread_restoration = .true., + spread_restoration = .false., sampling_error_correction = .true., print_every_nth_obs = 1000, adaptive_localization_threshold = 2000, @@ -111,7 +117,7 @@ wrf_state_variables = 'U','QTY_U_WIND_COMPONENT','TYPE_U','UPDATE','999', 'V','QTY_V_WIND_COMPONENT','TYPE_V','UPDATE','999', 'W','QTY_VERTICAL_VELOCITY','TYPE_W','UPDATE','999', - 'T','QTY_POTENTIAL_TEMPERATURE','TYPE_T','UPDATE','999', + 'THM','QTY_POTENTIAL_TEMPERATURE','TYPE_T','UPDATE','999', 'PH','QTY_GEOPOTENTIAL_HEIGHT','TYPE_GZ','UPDATE','999', 'MU','QTY_PRESSURE','TYPE_MU','UPDATE','999', 'QVAPOR','QTY_VAPOR_MIXING_RATIO','TYPE_QV','UPDATE','999', diff --git a/models/wrf/tutorial/template/namelist.input.meso b/models/wrf/tutorial/template/namelist.input.meso index 410352fccd..841bd16635 100644 --- a/models/wrf/tutorial/template/namelist.input.meso +++ b/models/wrf/tutorial/template/namelist.input.meso @@ -109,6 +109,8 @@ diff_6th_opt = 2, 2, diff_6th_factor = 0.25, 0.12, epssm = 0.1 + use_theta_m = 0 + zadvect_implicit = 1 / &bdy_control diff --git a/models/wrf/work/input.nml b/models/wrf/work/input.nml index 37451085e5..2c19329ded 100644 --- a/models/wrf/work/input.nml +++ b/models/wrf/work/input.nml @@ -48,8 +48,8 @@ first_obs_seconds = -1, last_obs_days = -1, last_obs_seconds = -1, - num_output_state_members = 0, - num_output_obs_members = 32, + num_output_state_members = 3, + num_output_obs_members = 3, output_interval = 1, num_groups = 1, distributed_state = .true. diff --git a/models/wrf_hydro/create_identity_streamflow_obs.f90 b/models/wrf_hydro/create_identity_streamflow_obs.f90 index f39141eddb..dc8ace47cc 100644 --- a/models/wrf_hydro/create_identity_streamflow_obs.f90 +++ b/models/wrf_hydro/create_identity_streamflow_obs.f90 @@ -66,7 +66,7 @@ program create_identity_streamflow_obs integer, parameter :: NUM_COPIES = 1 ! number of copies in sequence integer, parameter :: NUM_QC = 1 ! number of QC entries real(r8), parameter :: MIN_OBS_ERR_STD = 0.1_r8 ! 
m^3/sec -real(r8), parameter :: MAX_OBS_ERR_STD = 100000.0_r8 +real(r8), parameter :: MAX_OBS_ERR_STD = 1000000.0_r8 real(r8), parameter :: NORMAL_FLOW = 10.0_r8 real(r8), parameter :: contract = 0.001_r8 @@ -104,7 +104,7 @@ program create_identity_streamflow_obs real(r8), allocatable :: discharge(:) character(len=IDLength), allocatable :: desired_gages(:) -integer :: n_wanted_gages +integer :: n_wanted_gages, n_desired_gages real(r8) :: oerr, qc integer :: oday, osec type(obs_type) :: obs @@ -127,6 +127,7 @@ program create_identity_streamflow_obs character(len=256) :: location_file = 'location.nc' character(len=256) :: gages_list_file = '' real(r8) :: obs_fraction_for_error = 0.01 +logical :: assimilate_all = .false. integer :: debug = 0 namelist / create_identity_streamflow_obs_nml / & @@ -135,6 +136,7 @@ program create_identity_streamflow_obs location_file, & gages_list_file, & obs_fraction_for_error, & + assimilate_all, & debug !------------------------------------------------------------------------------- @@ -209,7 +211,12 @@ program create_identity_streamflow_obs call init_obs(obs, num_copies=NUM_COPIES, num_qc=NUM_QC) call init_obs(prev_obs, num_copies=NUM_COPIES, num_qc=NUM_QC) -n_wanted_gages = set_desired_gages(gages_list_file) +! Collect all the gauges: +! - desired ones will have the provided obs_err_sd +! - remaining gauges are dummy with very large obs_err_sd + +n_desired_gages = set_desired_gages(gages_list_file) +n_wanted_gages = 0 !set_desired_gages(gages_list_file) call find_textfile_dims(input_files, nfiles) num_new_obs = estimate_total_obs_count(input_files, nfiles) @@ -308,7 +315,8 @@ program create_identity_streamflow_obs OBSLOOP: do n = 1, nobs - if ( discharge(n) < 0.0_r8 ) cycle OBSLOOP + ! make sure discharge is physical + if ( discharge(n) < 0.0_r8 .or. discharge(n) /= discharge(n) ) cycle OBSLOOP ! relate the TimeSlice:station to the RouteLink:gage so we can ! determine the location @@ -318,13 +326,13 @@ program create_identity_streamflow_obs ! relate the physical location to the dart state vector index dart_index = linkloc_to_dart(lat(indx), lon(indx)) - ! oerr is the observation error standard deviation in this application. - ! The observation error variance encoded in the observation file - ! will be oerr*oerr - oerr = max(discharge(n)*obs_fraction_for_error, MIN_OBS_ERR_STD) - - ! MEG: A fix to not crush the ensemble in a no-flood period (stagnant water). - !if ( discharge(n) < NORMAL_FLOW ) then + ! desired gauges get the provided obs_err + ! remaining ones are for verification purposes + if (ANY(desired_gages == station_strings(n)) .or. assimilate_all) then + oerr = max(discharge(n)*obs_fraction_for_error, MIN_OBS_ERR_STD) + else + oerr = MAX_OBS_ERR_STD + endif ! don't correct that much, the gauge observations imply that the flow ! in the stream is small. This is not a flood period. Streamflow values ! indicate a more or less lake situation rather than a strongly flowing stream. @@ -660,9 +668,9 @@ function estimate_total_obs_count(file_list,nfiles) result (num_obs) ! We need to know how many observations there may be. ! Specifying too many is not really a problem. -! I am adding 20% +! I am multiplying by 10. 
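+! (The factor of 10 is generous by design: with the dummy-gauge approach
+! above, every gauge in each TimeSlice file can now yield an observation,
+! not just the gauges in the desired list, so the old 20% headroom would
+! be far too small. As noted above, specifying too many is not a problem.)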
-num_obs = 1.2_r8 * nobs * nfiles
+num_obs = 10.0_r8 * nobs * nfiles

end function estimate_total_obs_count

diff --git a/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/create_usgs_daily_obs_seq.py b/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/create_usgs_daily_obs_seq.py
index 38d38c194c..ec95a4a968 100644
--- a/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/create_usgs_daily_obs_seq.py
+++ b/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/create_usgs_daily_obs_seq.py
@@ -50,7 +50,7 @@ def parallel_process_day(arg_dict):

    the_cmd = exe_cmd.format(
        **{
-            'cmd': './' + the_converter.name,
+            'cmd': './create_identity_streamflow_obs',
            'nproc': 1
        }
    )

diff --git a/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/setup_usgs_daily.py b/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/setup_usgs_daily.py
index c3e8b75852..d81a4db50c 100755
--- a/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/setup_usgs_daily.py
+++ b/models/wrf_hydro/hydro_dart_py/hydrodartpy/core/setup_usgs_daily.py
@@ -27,7 +27,8 @@ def setup_usgs_daily(
    input_dir = usgs_daily_config['input_dir']
    output_dir = usgs_daily_config['output_dir']
    # Output directory: make if DNE
-    output_dir.mkdir(exist_ok=False, parents=True)
+    #output_dir.mkdir(exist_ok=False, parents=True)
+    output_dir.mkdir(exist_ok=True, parents=True)

    # converter: identity or regular obs converter?
    # Check that the desired obs converter is in the dart build
@@ -75,7 +76,8 @@ def setup_usgs_daily(
    run_dir = config['experiment']['run_dir']
    m0 = pickle.load(open(run_dir / "member_000/WrfHydroSim.pkl", 'rb'))
    route_link_f = run_dir / 'member_000' / m0.base_hydro_namelist['hydro_nlist']['route_link_f']
-    (output_dir / route_link_f.name).symlink_to(route_link_f)
+    # Stage only when the destination is absent, so re-running setup is a no-op.
+    if not (output_dir / route_link_f.name).exists():
+        (output_dir / route_link_f.name).symlink_to(route_link_f)
    input_nml[converter_nml]['location_file'] = route_link_f.name

    #input.nml input_files: create a list of files in the start and end range.
@@ -101,27 +103,35 @@ def setup_usgs_daily(

    if usgs_daily_config['identity_obs']:
        hydro_rst_file = run_dir / 'member_000' / m0.base_hydro_namelist['hydro_nlist']['restart_file']
-        (output_dir / hydro_rst_file.name).symlink_to(hydro_rst_file)
+        if not (output_dir / hydro_rst_file.name).exists():
+            (output_dir / hydro_rst_file.name).symlink_to(hydro_rst_file)
        input_nml['model_nml']['domain_order'] = 'hydro'
        input_nml['model_nml']['domain_shapefiles'] = str(hydro_rst_file.name)
        f90nml.Namelist(m0.base_hydro_namelist).write(output_dir / 'hydro.namelist', force=True)
        top_level_dir = get_top_level_dir_from_config(config, m0)
-        (output_dir / top_level_dir).symlink_to(config['wrf_hydro']['domain_src'] / top_level_dir)
+        nwm_dir = output_dir / top_level_dir
+        if not nwm_dir.is_dir():
+            nwm_dir.symlink_to(config['wrf_hydro']['domain_src'] / top_level_dir)

    # Now we are done editing it, write the input.nml back out.
-    input_nml.write(output_dir / 'input.nml')
+    out_input = output_dir / 'input.nml'
+    if not out_input.is_file():
+        input_nml.write(output_dir / 'input.nml')

    # Symlink the config file into the output_dir so the default yaml file name
    # can be used by create_usgs_daily_obs_seq.
    if config_file is None:
        config_file = sorted(exp_dir.glob('original.*.yaml'))[0]
-    (output_dir / 'config_file.yaml').symlink_to(config_file)
+    if not (output_dir / 'config_file.yaml').exists():
+        (output_dir / 'config_file.yaml').symlink_to(config_file)

    # Stage the file that does the batch processing.
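+    # The staging steps above and below all repeat the same create-if-absent
+    # pattern; a tiny helper (an illustrative sketch only, not part of this
+    # module) could factor it out:
+    #
+    #     def symlink_if_absent(dst, src):
+    #         # Link dst -> src only when dst is missing, so re-running
+    #         # setup over an existing experiment directory is a no-op.
+    #         if not dst.exists():
+    #             dst.symlink_to(src)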
    this_file = pathlib.Path(__file__)
    batcher_base = 'create_usgs_daily_obs_seq.py'
-    (output_dir / batcher_base).symlink_to(this_file.parent / batcher_base)
+    pyscript = output_dir / batcher_base
+    if not pyscript.exists():
+        pyscript.symlink_to(this_file.parent / batcher_base)

    # Setup the scheduled script.
    orig_submit_script = this_file.parent / 'submission_scripts/submit_usgs_daily_obs_converter.sh'
@@ -152,10 +162,11 @@

    # Select statement
    # Right now, only single node processing
-    select_stmt = 'select=1:ncpus={ncpus}:mpiprocs={mpiprocs}'.format(
+    select_stmt = 'select=1:ncpus={ncpus}:mpiprocs={mpiprocs}:mem={reqmem}GB'.format(
        **{
            'ncpus': usgs_sched['ncpus'],
-            'mpiprocs': usgs_sched['mpiprocs']
+            'mpiprocs': usgs_sched['mpiprocs'],
+            'reqmem': usgs_sched['reqmem']
        }
    )
    replace_in_file(this_submit_script, 'PBS_SELECT_TEMPLATE', select_stmt)
@@ -191,6 +202,7 @@ def setup_usgs_daily(
    all_obs_dir = pathlib.PosixPath(config['observation_preparation']['all_obs_dir'])
    all_obs_seq = output_dir.glob('obs_seq.*')
    for oo in all_obs_seq:
-        (all_obs_dir / oo.name).symlink_to(oo)
+        if not (all_obs_dir / oo.name).exists():
+            (all_obs_dir / oo.name).symlink_to(oo)

    return 0

diff --git a/models/wrf_hydro/matlab/HydroDARTdiags.m b/models/wrf_hydro/matlab/HydroDARTdiags.m
index a730c80b6c..d2ea7d31bd 100644
--- a/models/wrf_hydro/matlab/HydroDARTdiags.m
+++ b/models/wrf_hydro/matlab/HydroDARTdiags.m
@@ -1,4 +1,5 @@
-function [observation, openloop, forecast, analysis, exp] = HydroDARTdiags(dir_exps, obs, dir_ol, disp_res, plot_state)
+function [observation, openloop, forecast, analysis, exp] = ...
+    HydroDARTdiags(dir_exps, obs, dir_ol, disp_res, plot_state, fig_to_pdf)

%% DART software - Copyright UCAR. This open source software is provided
% by UCAR, "as is", without charge, subject to all terms of use at
@@ -35,19 +36,19 @@

elseif nargin > 2

    plot_ol = true;

-elseif nargin == 2
-    plot_ol = false;
+elseif nargin == 2 % no open-loop case
+    plot_ol = false; plot_state = false;
+    disp_res = 1;
+    fig_to_pdf = 'results.pdf';
end

gY = [ 150, 150, 150 ]/255;
lB = [ 153, 255, 255 ]/255;
-lP = [ 204, 153, 255 ]/255;
bK = [ 0, 0, 0 ]/255;
bL = [ 30, 144, 255 ]/255;
rD = [ 255, 51, 51 ]/255;
gR = [ 0, 153, 0 ]/255;
-pR = [ 153, 51, 255 ]/255;
oR = [ 255, 153, 51 ]/255;

@@ -82,50 +83,75 @@ end

end

+% Do we have any hybrid weight files?
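+% (exist(...,'file') == 2 is MATLAB's code for "a file with this name exists",
+% so an experiment directory containing all_output_hybridweight_mean.nc is
+% treated as a hybrid-DA run and its hyb_flav_n entry is set below.)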
+ishybrid = '/all_output_hybridweight_mean.nc'; + +hyb_flavor = cell(num_exps); +hyb_flav_n = zeros(num_exps); +for e = 1:num_exps + if exist(char(strcat(dir_exps(e), ishybrid)), 'file') == 2 + hyb_flavor{e} = '/all_output_hybridweight_'; + hyb_flav_n(e) = 1; + end +end + nc = struct; for e = 1:num_exps diag_dir = dir_exps(e); - nc(e).state_mean_pr = char(strcat(diag_dir, '/all_preassim_mean.nc' )); % aggregated prior state_mean - nc(e).state_sd_pr = char(strcat(diag_dir, '/all_preassim_sd.nc' )); % aggregated prior state_sd + nc(e).state_mean_pr = char(strcat(diag_dir, '/all_preassim_mean.nc' )); % aggregated prior state_mean + nc(e).state_sd_pr = char(strcat(diag_dir, '/all_preassim_sd.nc' )); % aggregated prior state_sd - nc(e).state_mean_po = char(strcat(diag_dir, '/all_analysis_mean.nc' )); % aggregated analysis state_mean - nc(e).state_sd_po = char(strcat(diag_dir, '/all_analysis_sd.nc' )); % aggregated analysis state_sd + nc(e).state_mean_po = char(strcat(diag_dir, '/all_analysis_mean.nc' )); % aggregated analysis state_mean + nc(e).state_sd_po = char(strcat(diag_dir, '/all_analysis_sd.nc' )); % aggregated analysis state_sd if inf_flav_n(e, 1) > 0 && inf_flav_n(e, 2) > 0 - nc(e).pr_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 1}, 'mean.nc' )); % aggregated inf_mean - nc(e).pr_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 1}, 'sd.nc' )); % aggregated inf_std - nc(e).po_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 2}, 'mean.nc' )); % aggregated inf_mean - nc(e).po_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 2}, 'sd.nc' )); % aggregated inf_std + nc(e).pr_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 1}, 'mean.nc')); % aggregated inf_mean + nc(e).pr_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 1}, 'sd.nc' )); % aggregated inf_std + nc(e).po_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 2}, 'mean.nc')); % aggregated inf_mean + nc(e).po_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 2}, 'sd.nc' )); % aggregated inf_std elseif inf_flav_n(e, 1) > 0 - nc(e).pr_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 1}, 'mean.nc' )); % aggregated inf_mean - nc(e).pr_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 1}, 'sd.nc' )); % aggregated inf_std + nc(e).pr_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 1}, 'mean.nc')); % aggregated inf_mean + nc(e).pr_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 1}, 'sd.nc' )); % aggregated inf_std elseif inf_flav_n(e, 2) > 0 - nc(e).po_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 2}, 'mean.nc' )); % aggregated inf_mean - nc(e).po_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 2}, 'sd.nc' )); % aggregated inf_std + nc(e).po_inflate_mean = char(strcat(diag_dir, inf_flavor{e, 2}, 'mean.nc')); % aggregated inf_mean + nc(e).po_inflate_sd = char(strcat(diag_dir, inf_flavor{e, 2}, 'sd.nc' )); % aggregated inf_std + end + if hyb_flav_n(e) > 0 + nc(e).hybrid_mean = char(strcat(diag_dir, hyb_flavor{e}, 'mean.nc')); % aggregated hyb_mean + nc(e).hybrid_sd = char(strcat(diag_dir, hyb_flavor{e}, 'sd.nc' )); % aggregated hyb_std end - - nc(e).routelink = char(strcat(diag_dir, '/RouteLink.nc' )); % routelink file - nc(e).obs_diag = char(strcat(diag_dir, '/obs_diag_output.nc' )); % output of obs_diag - nc(e).obs_epoc = char(strcat(diag_dir, '/obs_epoch_001.nc' )); % output of obs_seq_to_netcdf + nc(e).routelink = char(strcat(diag_dir, '/../RouteLink.nc' )); % routelink file + + nc(e).obs_diag = char(strcat(diag_dir, '/obs_diag_output.nc' )); % output of obs_diag + nc(e).obs_epoc = char(strcat(diag_dir, '/obs_epoch_001.nc' 
)); % output of obs_seq_to_netcdf % open loop if plot_ol - ol.obs_diag = char(strcat(dir_ol , '/obs_diag_output.nc' )); - ol.obs_epoc = char(strcat(dir_ol , '/obs_epoch_001.nc' )); + ol.obs_diag = char(strcat(dir_ol , '/obs_diag_output.nc' )); + ol.obs_epoc = char(strcat(dir_ol , '/obs_epoch_001.nc' )); end end +% Figure the links and time variables in the netcdf file +ncid = netcdf.open(nc(e).state_mean_pr, 'NC_NOWRITE'); +ncvar = netcdf.inqDim(ncid, 0); +if strcmp(ncvar, 'links') + iL = 0; iT = 1; +else + iT = 0; iL = 1; +end + % Retrieve the dimensions from the netcdf file Nt_tmp = zeros(1, num_exps); for e = 1:num_exps ncid = netcdf.open(nc(e).state_mean_pr, 'NC_NOWRITE'); - [~, Nt_tmp(e)] = netcdf.inqDim(ncid, 0); % # of assim cycles + [~, Nt_tmp(e)] = netcdf.inqDim(ncid, iT); % # of assim cycles netcdf.close(ncid); end if length(unique(Nt_tmp)) > 1 @@ -134,7 +160,7 @@ Nt = Nt_tmp(1); ncid = netcdf.open(nc(1).state_mean_pr, 'NC_NOWRITE'); -[~, Nl] = netcdf.inqDim(ncid, 1); % # of links in the domain +[~, Nl] = netcdf.inqDim(ncid, iL); % # of links in the domain netcdf.close(ncid); exp = struct; @@ -146,6 +172,18 @@ netcdf.close(ncid); end +% ensemble size of the open loop +ncid = netcdf.open(ol.obs_diag, 'NC_NOWRITE'); +[~, rbins] = netcdf.inqDim(ncid, 14); % # of bins in the rank histogram (i.e., ens_size+1) +ol.ens_size = rbins-1; % size of the ensemble +netcdf.close(ncid); + +% separate exp name from path +exp_name = string(missing); +for e = 1:num_exps + sp_names = strsplit(dir_exps{e}, '/'); + exp_name(e) = sp_names{end}; +end %% TIME HANDLING: @@ -165,10 +203,17 @@ xticks = current(time_label); xtickslabel = goodtime(time_label, :); +% obs_diag time +od_time = ncread(nc(e).obs_diag, 'time') + origin; +od_time_b1 = find(od_time == current(1)); +od_time_b2 = find(od_time == current(Nt)); +diag_range = od_time_b1:od_time_b2; %% PROCESSING DATA: gauge_id = strtrim(ncread(nc(1).routelink, 'gages')'); +obserr_L = 1.e8; + % All available gauges in the domain: k = 0; for i = 1:Nl @@ -210,14 +255,16 @@ end - % Reading: for e = 1:num_exps exp(e).ensemble = double(ncread(nc(e).obs_epoc, 'observations')); exp(e).obs_ind = -1 * double(ncread(nc(e).obs_epoc, 'obs_type')); exp(e).O_time = double(ncread(nc(e).obs_epoc, 'time')) + origin; - if plot_ol, ol.ensemble = double(ncread(ol.obs_epoc, 'observations')); end + if plot_ol + ol.ensemble = double(ncread(ol.obs_epoc, 'observations')); + ol.obs_ind = -1 * double(ncread(ol.obs_epoc, 'obs_type')); + end % State and spread files exp(e).pr.state.x1 = ncread(nc(e).state_mean_pr, 'qlink1'); @@ -272,15 +319,27 @@ exp(e).po.infs.x2 = ncread(nc(e).po_inflate_sd , 'z_gwsubbas'); end end + + % Hybrid files + if hyb_flav_n(e) > 0 + exp(e).pr.hybm.x1 = ncread(nc(e).hybrid_mean, 'qlink1'); + exp(e).pr.hybs.x1 = ncread(nc(e).hybrid_sd , 'qlink1'); + + if bucket + exp(e).pr.hybm.x2 = ncread(nc(e).hybrid_mean, 'z_gwsubbas'); + exp(e).pr.hybs.x2 = ncread(nc(e).hybrid_sd , 'z_gwsubbas'); + end + end end end -openloop = cell( 3, gauges.want.num ); -forecast = cell(10, gauges.want.num, num_exps); +openloop = cell( 4, gauges.want.num ); +forecast = cell(12, gauges.want.num, num_exps); observation = cell( 9, gauges.want.num, num_exps); analysis = cell( 9, gauges.want.num, num_exps); flood = zeros(gauges.want.num, num_exps); + for i = 1:gauges.want.num k = gauges.want.IND(i); @@ -289,6 +348,8 @@ find_obs = k == exp(e).obs_ind; + fprintf('exp: %2d, obs no: %3d, dart-index: %6d, USGS gauge ID: %10d\n', e, i, k, gauges.want.OID(i)) + tmp.obs_val = exp(e).ensemble(1, 
find_obs); tmp.ens_mean_f = exp(e).ensemble(2, find_obs); tmp.ens_sd_f = exp(e).ensemble(4, find_obs); @@ -298,7 +359,14 @@ tmp.ens_sd_a = exp(e).ensemble(5, find_obs); tmp.ensemble_a = exp(e).ensemble(7:2:end, find_obs); - if plot_ol, tmp.ens_mean_ol = ol.ensemble(2, find_obs); tmp.ens_sd_ol = ol.ensemble(4, find_obs); end + if plot_ol + % Just in case, open loop has more gauges + ol_obs = k == ol.obs_ind; + + tmp.ens_mean_ol = ol.ensemble(2, ol_obs); + tmp.ens_sd_ol = ol.ensemble(4, ol_obs); + tmp.ensemble_ol = ol.ensemble(6:2:end-1, ol_obs); + end ens_time = zeros(1, Nt+1); Found_time = exp(e).O_time(find_obs); @@ -331,7 +399,11 @@ ens_sd_a = zeros(1, Nt); ensemble_a = NaN(exp(e).ens_size, Nt); - if plot_ol, ens_mean_ol = zeros(1, Nt); ens_sd_ol = zeros(1, Nt); end + if plot_ol + ens_mean_ol = zeros(1, Nt); + ens_sd_ol = zeros(1, Nt); + ensemble_ol = NaN(ol.ens_size, Nt); + end for j = 1:Nt if ~isnan(ens_time(j)) && ~isnan(ens_time(j+1)) @@ -343,10 +415,11 @@ ens_mean_a(j) = mean(tmp.ens_mean_a(:, ens_time(j):ens_time(j+1)), 2); ens_sd_a(j) = mean(tmp.ens_sd_a (:, ens_time(j):ens_time(j+1)), 2); ensemble_a(:, j) = mean(tmp.ensemble_a(:, ens_time(j):ens_time(j+1)), 2); - + if plot_ol - ens_mean_ol(j) = mean(tmp.ens_mean_ol(:, ens_time(j):ens_time(j+1)), 2); - ens_sd_ol(j) = mean(tmp.ens_sd_ol(:, ens_time(j):ens_time(j+1)), 2); + ens_mean_ol(j) = mean(tmp.ens_mean_ol(:, ens_time(j):ens_time(j+1)), 2); + ens_sd_ol(j) = mean(tmp.ens_sd_ol(:, ens_time(j):ens_time(j+1)), 2); + ensemble_ol(:, j) = mean(tmp.ensemble_ol(:, ens_time(j):ens_time(j+1)), 2); end elseif ~isnan(ens_time(j)) @@ -360,8 +433,9 @@ ensemble_a(:, j) = tmp.ensemble_a(:, ens_time(j)); if plot_ol - ens_mean_ol(j) = tmp.ens_mean_ol(:, ens_time(j)); - ens_sd_ol(j) = tmp.ens_sd_ol(:, ens_time(j)); + ens_mean_ol(j) = tmp.ens_mean_ol(:, ens_time(j)); + ens_sd_ol(j) = tmp.ens_sd_ol(:, ens_time(j)); + ensemble_ol(:, j) = tmp.ensemble_ol(:, ens_time(j)); end @@ -376,8 +450,9 @@ ensemble_a(:, j) = tmp.ensemble_a(:, ens_time(j+1)); if plot_ol - ens_mean_ol(j) = tmp.ens_mean_ol(:, ens_time(j+1)); - ens_sd_ol(j) = tmp.ens_sd_ol(:, ens_time(j+1)); + ens_mean_ol(j) = tmp.ens_mean_ol(:, ens_time(j+1)); + ens_sd_ol(j) = tmp.ens_sd_ol(:, ens_time(j+1)); + ensemble_ol(:, j) = tmp.ensemble_ol(:, ens_time(j+1)); end else @@ -390,13 +465,17 @@ ens_sd_a(j) = NaN; ensemble_a(:, j) = NaN; - if plot_ol, ens_mean_ol(j) = NaN; ens_sd_ol(j) = NaN; end + if plot_ol + ens_mean_ol(j) = NaN; + ens_sd_ol(j) = NaN; + ensemble_ol(:, j) = NaN; + end end end clear tmp - - flood(i, e) = find(obs_val == nanmax(obs_val), 1); + + flood(i, e) = find(obs_val == max(obs_val, [], 'omitnan'), 1); varname_f = strcat(gauges.want.names(i, :), '_guess'); varname_a = strcat(gauges.want.names(i, :), '_analy'); @@ -406,15 +485,16 @@ % Manage the forecast copies tmp_f = squeeze(double(ncread(nc(e).obs_diag, varname_f))); + tmp_f = tmp_f(:, diag_range); rmse_f = abs(ens_mean_f - obs_val); bias_f = obs_val - ens_mean_f; totspread_f = sqrt(ens_sd_f.^2 + obs_var.^2); rank_hist_f = squeeze(double(ncread(nc(e).obs_diag, strcat(varname_f, '_RankHist')))); + rank_hist_f = rank_hist_f(:, diag_range); if inf_flav_n(e, 1) > 0 pr_inflate_mean = exp(e).pr.infm.x1(k, :); - pr_inflate_sd = exp(e).pr.infs.x1(k, :); - + pr_inflate_sd = exp(e).pr.infs.x1(k, :); elseif inf_flav_n(e, 1) == 0 pr_inflate_mean = nan; pr_inflate_sd = nan; @@ -428,6 +508,14 @@ ensemble_f_def(:, j) = 1/lambda(j) * (ensemble_f(:, j) - ens_mean_f(j)) + ens_mean_f(j); end end + + if hyb_flav_n(e) > 0 + hybrid_mean = 
exp(e).pr.hybm.x1(k, :);
+            hybrid_sd = exp(e).pr.hybs.x1(k, :);
+        else
+            hybrid_mean = nan;
+            hybrid_sd = nan;
+        end

        % Manage the observation copies
        obs_poss = tmp_f(1, :);
@@ -471,6 +559,7 @@
            openloop(:, i) = { ens_mean_ol ; ...
                               rmse_ol     ; ...
                               ens_sd_ol   ; ...
+                              ensemble_ol ; ...
                             };
        end
@@ -484,6 +573,8 @@
                               ensemble_f_def  ; ...
                               pr_inflate_mean ; ...
                               pr_inflate_sd   ; ...
+                              hybrid_mean     ; ...
+                              hybrid_sd       ; ...
                             };

        observation(:, i, e) = { k ; ...
@@ -517,6 +608,8 @@

if disp_res

+if isfile(fig_to_pdf), delete(fig_to_pdf); end
+
%% TIME SERIES EVOLUTION:

for o = 1:gauges.want.num
@@ -524,13 +617,10 @@
    if num_exps > 1

        if num_exps == 2
-            figure('uni','pi','pos',[50, 600, 1600, 450]);
        elseif num_exps <= 4
-            figure('uni','pi','pos',[50, 600, 1600, 900]);
        else
-            figure('uni','pi','pos',[50, 600, 1600, 1000]);
        end
@@ -538,13 +628,24 @@
        for e = 1:num_exps

            if num_exps == 2
+
+                start_x = [.05, .55];
+                fig_wid = .40;
+
+                fig_ht1 = .50;
+                fig_ht2 = .26;
+
+                sep   = .05;
+                bot_y = .08;
+                top_y = bot_y + sep + fig_ht2;
+
+                subplot('Position', [start_x(e), top_y, fig_wid, fig_ht1]);
-                subplot(1, 2, e)

                en_f = plot(current, forecast{8, o, e} , '-', 'Color', gY); hold on
                en_a = plot(current, analysis{7, o, e} , '-', 'Color', lB); grid on
                ob_a = plot(current, observation{7, o, e}, '*', 'Color', gR);
-                ob_r = plot(current, observation{8, o, e}, '*', 'Color', rD);
+                ob_r = plot(current, observation{8, o, e}, '*', 'Color', rD);

                if plot_ol, op = plot(current, openloop{1, o} , '-', 'Color', oR, 'LineWidth', 3); end

                mf = plot(current, forecast{6, o, e} , '-', 'Color', bK, 'LineWidth', 3);
@@ -556,44 +657,106 @@

                limsy = get(gca, 'YLim');

-                strY = [ gauges.want.names(o, :), ' | ID: ', num2str(gauges.want.OID(o)) ];
+                strY = 'Stream flow (cms)';

-                set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', xtickslabel, 'Ylim', [0 limsy(2)])
-                ylabel(strY, 'FontSize', 18, 'Interpreter', 'none', 'FontWeight', 'bold')
+                set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', {}, 'Ylim', [0 limsy(2)])
+                ylabel(strY, 'FontSize', 18)
+
+                if mean(observation{9, o, e}, 'omitnan') < obserr_L
+                    obs_status = 'Used Obs (Assimilated)';
+                else
+                    obs_status = 'Used Obs (Evaluated only)';
+                end

                if plot_ol
-                    L = legend([ob_a, ob_r, en_f(1), en_a(1), op, mf , ...
-                        ma, so, sf, sa], 'Used Obs' , ...
+                    L = legend([ob_a, ob_r, en_f(1), en_a(1), op, mf , ma, so, sf, sa], ...
+                        obs_status , ...
                        sprintf('Rejected Obs: %.2f%%' , 100-observation{4, o, e}/observation{3, o, e}*100), ...
                        'Prior Members', 'Posterior Members' , ...
-                        sprintf('Open Loop, RMSE: %.2f' , nanmean(openloop{2, o})), ...
-                        sprintf('Prior Mean, RMSE: %.2f' , nanmean(forecast{2, o, e})), ...
-                        sprintf('Posterior Mean, RMSE: %.2f' , nanmean(analysis{2, o, e})), ...
-                        sprintf('Open loop Spread, avg: %.2f', nanmean(openloop{3, o})), ...
-                        sprintf('Prior Spread, avg: %.2f' , nanmean(forecast{4, o, e})), ...
-                        sprintf('Posterior Spread, avg: %.2f', nanmean(analysis{4, o, e})), ...
+                        sprintf('Open Loop, RMSE: %.2f' , mean(openloop{2, o}, 'omitnan')), ...
+                        sprintf('Prior Mean, RMSE: %.2f' , mean(forecast{2, o, e}, 'omitnan')), ...
+                        sprintf('Posterior Mean, RMSE: %.2f' , mean(analysis{2, o, e}, 'omitnan')), ...
+                        sprintf('Open loop Spread, avg: %.2f', mean(openloop{3, o}, 'omitnan')), ...
+                        sprintf('Prior Spread, avg: %.2f' , mean(forecast{4, o, e}, 'omitnan')), ...
+                        sprintf('Posterior Spread, avg: %.2f', mean(analysis{4, o, e}, 'omitnan')), ...
'Location', 'NorthEast');
                else
-                    L = legend([ob_a, ob_r, en_f(1), en_a(1), mf , ...
-                        ma, sf, sa], 'Used Obs' , ...
+                    L = legend([ob_a, ob_r, mf , ma], obs_status , ...
                        sprintf('Rejected Obs: %.2f%%' , 100-observation{4, o, e}/observation{3, o, e}*100), ...
-                        'Prior Members', 'Posterior Members' , ...
-                        sprintf('Prior Mean, RMSE: %.2f' , nanmean(forecast{2, o, e})), ...
-                        sprintf('Posterior Mean, RMSE: %.2f' , nanmean(analysis{2, o, e})), ...
-                        sprintf('Prior Spread, avg: %.2f' , nanmean(forecast{4, o, e})), ...
-                        sprintf('Posterior Spread, avg: %.2f', nanmean(analysis{4, o, e})), ...
+                        sprintf('Prior Mean, RMSE: %.2f' , mean(forecast{2, o, e}, 'omitnan')), ...
+                        sprintf('Posterior Mean, RMSE: %.2f' , mean(analysis{2, o, e}, 'omitnan')), ...
                        'Location', 'NorthEast');
                end

-                set(L, 'Interpreter', 'none', 'FontSize', 10)
-                title(L, {'Experiment', dir_exps{e}}, 'FontSize', 12)
+                set(L, 'Interpreter', 'none', 'FontSize', 12, 'color', 'none')
+                title(L, exp_name(e), 'FontSize', 14)

-                str2 = 'Statistics Time-Series (Hydrograph)';
+                str2 = [ 'Hydrograph: ', gauges.want.names(o, :), ', Gauge ID: ', num2str(gauges.want.OID(o)) ];
                title(str2, 'FontSize', 20, 'FontWeight', 'bold', 'Interpreter', 'none')
+
+                subplot('Position', [start_x(e), bot_y, fig_wid, fig_ht2]);
+
+                if sum(inf_flav_n) > 1
+                    i_pr_m = plot(current, forecast{9, o, e}, '-', 'Color', bK, 'LineWidth', 2); hold on
+                    i_po_m = plot(current, analysis{8, o, e}, '-', 'Color', bL, 'LineWidth', 2); grid on
+
+                    plot(current, ones(1, Nt), '--', 'Color', gY); ax1 = gca;
+
+                    limsy = get(gca, 'YLim');
+
+                    set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', xtickslabel, 'Ylim', [0.9 limsy(2)])
+                    ylabel('Inflation', 'FontSize', 18)
+
+                    if hyb_flav_n(e) > 0
+                        yyaxis right
+                        hyb_m = plot(current, forecast{11, o, e}, '-', 'Color', gR, 'LineWidth', 2);
+                        set(gca, 'YColor', bK, 'YLim', [-0.1, 1.1], 'YTick', [0, 0.5, 1])
+                        ylabel('Hyb. Weight', 'FontSize', 18)
+
+                        L = legend(ax1, [i_pr_m, i_po_m, hyb_m], ...
+                            sprintf('Prior Inflation Mean, avg: %.2f' , mean(forecast{9, o, e}, 'omitnan')), ...
+                            sprintf('Posterior Inflation Mean, avg: %.2f' , mean(analysis{8, o, e}, 'omitnan')), ...
+                            sprintf('Hybrid Weight Mean, avg: %.2f' , mean(forecast{11, o, e}, 'omitnan')), ...
+                            'Location', 'NorthEast');
+                    else
+                        L = legend(ax1, [i_pr_m, i_po_m], ...
+                            sprintf('Prior Inflation Mean, avg: %.2f' , mean(forecast{9, o, e}, 'omitnan')), ...
+                            sprintf('Posterior Inflation Mean, avg: %.2f' , mean(analysis{8, o, e}, 'omitnan')), ...
+                            'Location', 'NorthEast');
+                    end
+
+                    set(L, 'Interpreter', 'none', 'Box', 'off', 'FontSize', 12, 'color', 'none')
+                else
+                    i_pr_m = plot(current, forecast{9, o, e}, '-', 'Color', bK, 'LineWidth', 2); hold on
+
+                    plot(current, ones(1, Nt), '--', 'Color', gY); grid on; ax1 = gca;
+
+                    limsy = get(gca, 'YLim');
+
+                    set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', xtickslabel, 'Ylim', [0.9 limsy(2)])
+                    ylabel('Inflation', 'FontSize', 18)
+
+                    if hyb_flav_n(e) > 0
+                        yyaxis right
+                        hyb_m = plot(current, forecast{11, o, e}, '-', 'Color', gR, 'LineWidth', 2);
+                        set(gca, 'YColor', bK, 'YLim', [-0.1, 1.1], 'YTick', [0, 0.5, 1])
+                        ylabel('Hyb. Weight', 'FontSize', 18)
+
+                        L = legend(ax1, [i_pr_m, hyb_m], ...
+                            sprintf('Prior Inflation Mean, avg: %.2f' , mean(forecast{9, o, e}, 'omitnan')), ...
+                            sprintf('Hybrid Weight Mean, avg: %.2f' , mean(forecast{11, o, e}, 'omitnan')), ...
+ 'Location', 'NorthEast'); + else + L = legend(sprintf('Prior Inflation Mean, avg: %.2f', mean(forecast{9, o, e}, 'omitnan')), ... + 'Location', 'NorthEast'); + end + + set(L, 'Interpreter', 'none', 'Box', 'off', 'FontSize', 12, 'color', 'none') + end else - % num_exps > 2 + % num_exps > 2 :: no place for inflation! rows = ceil(num_exps/2); subplot(rows, 2, e) @@ -616,36 +779,46 @@ set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', xtickslabel, 'Ylim', [0 limsy(2)]) ylabel(['Gauge: ', num2str(observation{2, o, e})], 'FontSize', 18) + if mean(observation{9, o, e}, 'omitnan') < obserr_L + obs_status = 'Used Obs (Assimilated)'; + else + obs_status = 'Used Obs (Evaluated only)'; + end + if plot_ol L = legend([ob_a, ob_r, en_f(1), en_a(1), op, mf , ... - ma, so, sf, sa], 'Used Obs' , ... + ma, so, sf, sa], obs_status , ... sprintf('Rejected Obs: %.2f%%' , 100-observation{4, o, e}/observation{3, o, e}*100), ... 'Prior Members', 'Posterior Members' , ... - sprintf('Open Loop, RMSE: %.2f' , nanmean(openloop{2, o})), ... - sprintf('Prior Mean, RMSE: %.2f' , nanmean(forecast{2, o, e})), ... - sprintf('Posterior Mean, RMSE: %.2f' , nanmean(analysis{2, o, e})), ... - sprintf('Open loop Spread, avg: %.2f', nanmean(openloop{3, o})), ... - sprintf('Prior Spread, avg: %.2f' , nanmean(forecast{4, o, e})), ... - sprintf('Posterior Spread, avg: %.2f', nanmean(analysis{4, o, e})), ... + sprintf('Open Loop, RMSE: %.2f' , mean(openloop{2, o}, 'omitnan')), ... + sprintf('Prior Mean, RMSE: %.2f' , mean(forecast{2, o, e}, 'omitnan')), ... + sprintf('Posterior Mean, RMSE: %.2f' , mean(analysis{2, o, e}, 'omitnan')), ... + sprintf('Open loop Spread, avg: %.2f', mean(openloop{3, o}, 'omitnan')), ... + sprintf('Prior Spread, avg: %.2f' , mean(forecast{4, o, e}, 'omitnan')), ... + sprintf('Posterior Spread, avg: %.2f', mean(analysis{4, o, e}, 'omitnan')), ... 'Location', 'NorthEast'); else L = legend([ob_a, ob_r, en_f(1), en_a(1), mf , ... - ma, sf, sa], 'Used Obs' , ... + ma, sf, sa], obs_status , ... sprintf('Rejected Obs: %.2f%%' , 100-observation{4, o, e}/observation{3, o, e}*100), ... 'Prior Members', 'Posterior Members' , ... - sprintf('Prior Mean, RMSE: %.2f' , nanmean(forecast{2, o, e})), ... - sprintf('Posterior Mean, RMSE: %.2f' , nanmean(analysis{2, o, e})), ... - sprintf('Prior Spread, avg: %.2f' , nanmean(forecast{4, o, e})), ... - sprintf('Posterior Spread, avg: %.2f', nanmean(analysis{4, o, e})), ... + sprintf('Prior Mean, RMSE: %.2f' , mean(forecast{2, o, e}, 'omitnan')), ... + sprintf('Posterior Mean, RMSE: %.2f' , mean(analysis{2, o, e}, 'omitnan')), ... + sprintf('Prior Spread, avg: %.2f' , mean(forecast{4, o, e}, 'omitnan')), ... + sprintf('Posterior Spread, avg: %.2f', mean(analysis{4, o, e}, 'omitnan')), ... 
'Location', 'NorthEast'); end - set(L, 'Interpreter', 'none', 'FontSize', 10) + set(L, 'Interpreter', 'none', 'FontSize', 10, 'color', 'none') - title(['Exp: ', dir_exps{e}], 'FontSize', 16, 'FontWeight', 'bold', 'Interpreter', 'none') + title('Exp: ' + exp_name(e), 'FontSize', 16, 'FontWeight', 'bold', 'Interpreter', 'none') end end + if fig_to_pdf + exportgraphics(gcf, fig_to_pdf, 'Append', true, 'ContentType', 'vector') + close + end else @@ -656,9 +829,9 @@ fig_wid = .80; fig_ht1 = .50; - fig_ht2 = .30; + fig_ht2 = .26; - sep = .01; + sep = .05; bot_y = .08; top_y = bot_y + sep + fig_ht2; @@ -669,7 +842,7 @@ en_a = plot(current, analysis{7, o, e} , '-', 'Color', lB); grid on ob_a = plot(current, observation{7, o, e}, '*', 'Color', gR); - ob_r = plot(current, observation{8, o, e}, '*', 'Color', rD); + ob_r = plot(current, observation{8, o, e}, '*', 'Color', rD); if plot_ol, op = plot(current, openloop{1, o} , '-', 'Color', oR, 'LineWidth', 3); end mf = plot(current, forecast{6, o, e} , '-', 'Color', bK, 'LineWidth', 3); @@ -684,30 +857,36 @@ set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', {}, 'Ylim', [0 limsy(2)]) ylabel('Stream flow (cms)', 'FontSize', 18) + if mean(observation{9, o, e}, 'omitnan') < obserr_L + obs_status = 'Used Obs (Assimilated)'; + else + obs_status = 'Used Obs (Evaluated only)'; + end + if plot_ol L = legend([ob_a, ob_r, en_f(1), en_a(1), op, mf, ma, ... - so, sf, sa], 'Used Obs' , ... + so, sf, sa], obs_status , ... sprintf('Rejected Obs: %.2f%%' , 100-observation{4, o, e}/observation{3, o, e}*100), ... 'Prior Members', 'Posterior Members' , ... - sprintf('Open Loop, RMSE: %.2f' , nanmean(openloop{2, o })), ... - sprintf('Prior Mean, RMSE: %.2f' , nanmean(forecast{2, o, e})), ... - sprintf('Posterior Mean, RMSE: %.2f' , nanmean(analysis{2, o, e})), ... - sprintf('Open loop Spread, avg: %.2f', nanmean(openloop{3, o })), ... - sprintf('Prior Spread, avg: %.2f' , nanmean(forecast{4, o, e})), ... - sprintf('Posterior Spread, avg: %.2f', nanmean(analysis{4, o, e})), ... + sprintf('Open Loop, RMSE: %.2f' , mean(openloop{2, o }, 'omitnan')), ... + sprintf('Prior Mean, RMSE: %.2f' , mean(forecast{2, o, e}, 'omitnan')), ... + sprintf('Posterior Mean, RMSE: %.2f' , mean(analysis{2, o, e}, 'omitnan')), ... + sprintf('Open loop Spread, avg: %.2f', mean(openloop{3, o }, 'omitnan')), ... + sprintf('Prior Spread, avg: %.2f' , mean(forecast{4, o, e}, 'omitnan')), ... + sprintf('Posterior Spread, avg: %.2f', mean(analysis{4, o, e}, 'omitnan')), ... 'Location', 'NorthEast'); else L = legend([ob_a, ob_r, en_f(1), en_a(1), mf, ma, ... - sf, sa], 'Used Obs' , ... + sf, sa], obs_status , ... sprintf('Rejected Obs: %.2f%%' , 100-observation{4, o, e}/observation{3, o, e}*100), ... 'Prior Members', 'Posterior Members' , ... - sprintf('Prior Mean, RMSE: %.2f' , nanmean(forecast{2, o, e})), ... - sprintf('Posterior Mean, RMSE: %.2f' , nanmean(analysis{2, o, e})), ... - sprintf('Prior Spread, avg: %.2f' , nanmean(forecast{4, o, e})), ... - sprintf('Posterior Spread, avg: %.2f', nanmean(analysis{4, o, e})), ... + sprintf('Prior Mean, RMSE: %.2f' , mean(forecast{2, o, e}, 'omitnan')), ... + sprintf('Posterior Mean, RMSE: %.2f' , mean(analysis{2, o, e}, 'omitnan')), ... + sprintf('Prior Spread, avg: %.2f' , mean(forecast{4, o, e}, 'omitnan')), ... + sprintf('Posterior Spread, avg: %.2f', mean(analysis{4, o, e}, 'omitnan')), ... 
'Location', 'NorthEast');
        end

-        set(L, 'Interpreter', 'none', 'Box', 'off', 'FontSize', 12)
+        set(L, 'Interpreter', 'none', 'Box', 'off', 'FontSize', 12, 'color', 'none')

        str1 = [ gauges.want.names(o, :), ' | Gauge ID: ', num2str(gauges.want.OID(o)) ];
        str2 = 'Hydrograph: Obs, Prior/Posterior Ensemble, Mean, Spread & Inflation';
@@ -715,23 +894,69 @@

        subplot('Position', [start_x, bot_y, fig_wid, fig_ht2]);

-        i_pr_m = plot(current, forecast{9, o, e}, '-', 'Color', bK, 'LineWidth', 2); hold on
-        i_po_m = plot(current, analysis{8, o, e}, '-', 'Color', bL, 'LineWidth', 2); grid on
-
-        plot(current, ones(1, Nt), '--', 'Color', gY)
-
-        limsy = get(gca, 'YLim');
-
-        set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', xtickslabel, 'Ylim', [0.8 limsy(2)-0.1])
-        ylabel('Inflation', 'FontSize', 18)
-
-        L = legend([i_pr_m, i_po_m], ...
-            sprintf('Prior Inflation Mean, avg: %.2f' , nanmean(forecast{9, o, e})), ...
-            sprintf('Posterior Inflation Mean, avg: %.2f' , nanmean(analysis{8, o, e})), ...
-            'Location', 'NorthEast');
+        if sum(inf_flav_n) > 1
+            i_pr_m = plot(current, forecast{9, o, e}, '-', 'Color', bK, 'LineWidth', 2); hold on
+            i_po_m = plot(current, analysis{8, o, e}, '-', 'Color', bL, 'LineWidth', 2); grid on
+
+            plot(current, ones(1, Nt), '--', 'Color', gY); ax1 = gca;
+
+            limsy = get(gca, 'YLim');
+
+            set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', xtickslabel, 'Ylim', [0.9 limsy(2)])
+            ylabel('Inflation', 'FontSize', 18)
+
+            if hyb_flav_n(e) > 0
+                yyaxis right
+                hyb_m = plot(current, forecast{11, o, e}, '-', 'Color', gR, 'LineWidth', 2);
+                set(gca, 'YColor', bK, 'YLim', [-0.1, 1.1], 'YTick', [0, 0.5, 1])
+                ylabel('Hyb. Weight', 'FontSize', 18)
+
+                L = legend(ax1, [i_pr_m, i_po_m, hyb_m], ...
+                    sprintf('Prior Inflation Mean, avg: %.2f' , mean(forecast{9, o, e}, 'omitnan')), ...
+                    sprintf('Posterior Inflation Mean, avg: %.2f' , mean(analysis{8, o, e}, 'omitnan')), ...
+                    sprintf('Hybrid Weight Mean, avg: %.2f' , mean(forecast{11, o, e}, 'omitnan')), ...
+                    'Location', 'NorthEast');
+            else
+                L = legend(ax1, [i_pr_m, i_po_m], ...
+                    sprintf('Prior Inflation Mean, avg: %.2f' , mean(forecast{9, o, e}, 'omitnan')), ...
+                    sprintf('Posterior Inflation Mean, avg: %.2f' , mean(analysis{8, o, e}, 'omitnan')), ...
+                    'Location', 'NorthEast');
+            end
+
+            set(L, 'Interpreter', 'none', 'Box', 'off', 'FontSize', 12, 'color', 'none')
+        else
+            i_pr_m = plot(current, forecast{9, o, e}, '-', 'Color', bK, 'LineWidth', 2); hold on
+
+            plot(current, ones(1, Nt), '--', 'Color', gY); grid on; ax1 = gca;
+
+            limsy = get(gca, 'YLim');
+
+            set(gca, 'FontSize', 16, 'XLim', [xticks(1), xticks(end)], 'XTick', xticks, 'XTickLabel', xtickslabel, 'Ylim', [0.9 limsy(2)])
+            ylabel('Inflation', 'FontSize', 18)
+
+            if hyb_flav_n(e) > 0
+                yyaxis right
+                hyb_m = plot(current, forecast{11, o, e}, '-', 'Color', gR, 'LineWidth', 2);
+                set(gca, 'YColor', bK, 'YLim', [-0.1, 1.1], 'YTick', [0, 0.5, 1])
+                ylabel('Hyb. Weight', 'FontSize', 18)
+
+                L = legend(ax1, [i_pr_m, hyb_m], ...
+                    sprintf('Prior Inflation Mean, avg: %.2f' , mean(forecast{9, o, e}, 'omitnan')), ...
+                    sprintf('Hybrid Weight Mean, avg: %.2f' , mean(forecast{11, o, e}, 'omitnan')), ...
+                    'Location', 'NorthEast');
+            else
+                L = legend(sprintf('Prior Inflation Mean, avg: %.2f', mean(forecast{9, o, e}, 'omitnan')), ...
+                    'Location', 'NorthEast');
+            end
+
+            set(L, 'Interpreter', 'none', 'Box', 'off', 'FontSize', 12, 'color', 'none')
+        end

-        set(L, 'Interpreter', 'none', 'Box', 'off', 'FontSize', 12)
-
+        if fig_to_pdf
+            %print(gcf, '-dpsc', '-vector', '-append', '-bestfit', pdf_filename)
+            exportgraphics(gcf, fig_to_pdf, 'Append', true, 'ContentType', 'vector')
+            close
+        end

    if plot_state && o == gauges.want.num
@@ -748,20 +973,20 @@

        subplot(221)
        plot_connections(Xm1_f, nc(e).routelink, get(gca, 'position'), 'cms', tiny_flow_s)
-        title({['Experiment: ' dir_exps{e}],'Stream Flow: Time-Avg. Prior Mean'}, 'FontSize', 14, 'Interpreter', 'none')
+        title({'Experiment: ' + exp_name(e),'Stream Flow: Time-Avg. Prior Mean'}, 'FontSize', 14, 'Interpreter', 'none')

        subplot(222)
        plot_connections(Xs1_f, nc(e).routelink, get(gca, 'position'), 'cms', tiny_flow_s)
-        title({['Experiment: ' dir_exps{e}],'Stream Flow: Time-Avg. Prior Spread'}, 'FontSize', 14, 'Interpreter', 'none')
+        title({'Experiment: ' + exp_name(e),'Stream Flow: Time-Avg. Prior Spread'}, 'FontSize', 14, 'Interpreter', 'none')

        subplot(223)
        if bucket
            Xm2_f = mean(exp(e).pr.state.x2 , 2);
            plot_connections(Xm2_f, nc(e).routelink, get(gca, 'position'), 'mm', tiny_flow_b)
-            title({['Experiment: ' dir_exps{e}],'Bucket: Time-Avg. Prior Mean'}, 'FontSize', 16, 'Interpreter', 'none')
+            title({'Experiment: ' + exp_name(e),'Bucket: Time-Avg. Prior Mean'}, 'FontSize', 16, 'Interpreter', 'none')
        else
            plot_connections(Xm1_a, nc(e).routelink, get(gca, 'position'), 'cms', tiny_flow_s)
-            title({['Experiment: ' dir_exps{e}],'Stream Flow: Time-Avg. Posterior Mean'}, 'FontSize', 16, 'Interpreter', 'none')
+            title({'Experiment: ' + exp_name(e),'Stream Flow: Time-Avg. Posterior Mean'}, 'FontSize', 16, 'Interpreter', 'none')
        end

@@ -769,10 +994,10 @@
        if bucket
            Xs2_f = mean(exp(e).pr.spread.x2, 2);
            plot_connections(Xs2_f, nc(e).routelink, get(gca, 'position'), 'mm', tiny_flow_b)
-            title({['Experiment: ' dir_exps{e}],'Bucket: Time-Avg. Prior Spread'}, 'FontSize', 16, 'Interpreter', 'none')
+            title({'Experiment: ' + exp_name(e),'Bucket: Time-Avg. Prior Spread'}, 'FontSize', 16, 'Interpreter', 'none')
        else
            plot_connections(Xs1_a, nc(e).routelink, get(gca, 'position'), 'cms', tiny_flow_s)
-            title({['Experiment: ' dir_exps{e}],'Stream Flow: Time-Avg. Posterior Spread'}, 'FontSize', 16, 'Interpreter', 'none')
+            title({'Experiment: ' + exp_name(e),'Stream Flow: Time-Avg. Posterior Spread'}, 'FontSize', 16, 'Interpreter', 'none')
        end

        % display increment
@@ -785,18 +1010,18 @@

        subplot(121)
        plot_connections(Xi1, nc(e).routelink, get(gca, 'position'), 'cms', tiny_flow_s)
-        title({['Experiment: ' dir_exps{e}],'Stream Flow: DA Increment (Prior-Posterior)', ...
+        title({'Experiment: ' + exp_name(e),'Stream Flow: DA Increment (Prior-Posterior)', ...
            ['Event: ' longtime(event, :)]}, 'FontSize', 16, 'Interpreter', 'none')

        subplot(122)
        if bucket
            Xi2 = exp(e).pr.state.x2(:, event) - exp(e).po.state.x2(:, event);
            plot_connections(Xi2, nc(e).routelink, get(gca, 'position'), 'mm', tiny_flow_b)
-            title({['Experiment: ' dir_exps{e}],'Bucket: DA Increment (Prior-Posterior)', ...
+            title({'Experiment: ' + exp_name(e),'Bucket: DA Increment (Prior-Posterior)', ...
                ['Event: ' longtime(event, :)]}, 'FontSize', 16, 'Interpreter', 'none')
        else
            plot_connections(Xia, nc(e).routelink, get(gca, 'position'), 'cms', tiny_flow_s)
-            title({['Experiment: ' dir_exps{e}],'Stream Flow: DA Increment (Prior-Posterior)', ...
+            title({'Experiment: ' + exp_name(e),'Stream Flow: DA Increment (Prior-Posterior)', ...
'Time-Average'}, 'FontSize', 16, 'Interpreter', 'none') end @@ -812,5 +1037,4 @@ % % % % $URL: $ % % $Revision: $ -% % $Date: $ - +% % $Date: $ \ No newline at end of file diff --git a/models/wrf_hydro/model_mod.f90 b/models/wrf_hydro/model_mod.f90 index da4b2bdb4a..8f943989ea 100644 --- a/models/wrf_hydro/model_mod.f90 +++ b/models/wrf_hydro/model_mod.f90 @@ -26,7 +26,8 @@ module model_mod use netcdf_utilities_mod, only : nc_check, nc_add_global_attribute, & nc_synchronize_file, nc_end_define_mode, & - nc_add_global_creation_time, nc_begin_define_mode + nc_add_global_creation_time, nc_begin_define_mode, & + nc_get_dimension_size, nc_open_file_readonly use obs_def_utilities_mod, only : track_status @@ -534,8 +535,9 @@ function read_model_time(filename) character(len=STRINGLENGTH) :: datestring_scalar integer :: year, month, day, hour, minute, second integer :: DimID, VarID, strlen, ntimes -logical :: isLsmFile +logical :: isLsmFile, isClimFile integer :: ncid, io +integer :: c_link io = nf90_open(filename, NF90_NOWRITE, ncid) call nc_check(io,routine,'open',filename) @@ -543,6 +545,10 @@ function read_model_time(filename) ! Test if "Time" is a dimension in the file. isLsmFile = nf90_inq_dimid(ncid, 'Time', DimID) == NF90_NOERR +! Test if "time" is a dimension +! Only read model time from the restart, use a dummy one here! +isClimFile = nf90_inq_varid(ncid, 'static_time', VarID) == NF90_NOERR + if(isLsmFile) then ! Get the time from the LSM restart file ! TJH ... my preference is to read the dimension IDs for the Times variable @@ -576,6 +582,28 @@ function read_model_time(filename) io = nf90_get_var(ncid, VarID, datestring) call nc_check(io, routine, 'get_var','Times',filename) +elseif (isClimFile) then + + ! Dummy time for static files + ntimes = 1 + allocate(datestring(ntimes)) + datestring(1) = '1980-01-01_00:00:00' + + ! Also check if the state in the climatology is consistent + ! with the state in the restarts + ncid = nc_open_file_readonly(filename, routine) + c_link = nc_get_dimension_size(ncid, 'links', routine) + + if ( c_link /= n_link ) then + write(string1,'(A)')'The size of the state in the climatology files is not consistent with the current domain size.' + write(string2, * )'number of links: ', c_link, & + ' from "'//trim(filename)//'"' + write(string3,*)'number of links: ',int(n_link,i8), & + ' from "'//get_hydro_domain_filename()//'"' + call error_handler(E_ERR, routine, string1, & + source, revision, revdate, text2=string2, text3=string3) + endif + else ! Get the time from the hydro or parameter file io = nf90_inquire_attribute(ncid, NF90_GLOBAL, 'Restart_Time', len=strlen) @@ -637,7 +665,7 @@ subroutine get_state_meta_data(index_in, location, var_type) location = domain_info(domid)%location(iloc,jloc,kloc) -if (do_output() .and. debug > 99) then +if (do_output() .and. debug > 1000) then call write_location(0,location,charstring=string1) write(*,*)'gsmd index,i,j,k = ',index_in, iloc, jloc, kloc, trim(string1) endif @@ -1385,22 +1413,39 @@ subroutine get_my_close(num_superset, superset_indices, superset_distances, & integer, intent(out) :: close_ind(:) real(r8), intent(out) :: dist(:) -integer :: itask, isuper +integer, dimension(:), allocatable :: index_map +integer :: i, idx, il, ir num_close = 0 -do itask = 1,size(my_task_indices) - do isuper = 1,num_superset - - ! if stuff on my task ... equals ... global stuff I want ... 
- if ( my_task_indices(itask) == superset_indices(isuper) ) then - num_close = num_close + 1 - close_ind(num_close) = itask - dist(num_close) = superset_distances(isuper) - endif - - enddo -enddo +! Determine the range of my_task_indices +il = minval(my_task_indices) +ir = maxval(my_task_indices) + +! Create a map for quick lookup +allocate(index_map(il:ir)) +index_map = 0 +do i = 1, num_superset + idx = superset_indices(i) + if (idx >= il .and. idx <= ir) then + index_map(idx) = i + end if +end do + +! Loop over my_task_indices and find matches using the map +do i = 1, size(my_task_indices) + idx = my_task_indices(i) + if (idx >= il .and. idx <= ir) then + if (index_map(idx) > 0) then + num_close = num_close + 1 + close_ind(num_close) = i + dist(num_close) = superset_distances(index_map(idx)) + end if + end if +end do + +! Deallocate the map +deallocate(index_map) end subroutine get_my_close diff --git a/models/wrf_hydro/noah_hydro_mod.f90 b/models/wrf_hydro/noah_hydro_mod.f90 index f31ea43a11..8af0cee699 100644 --- a/models/wrf_hydro/noah_hydro_mod.f90 +++ b/models/wrf_hydro/noah_hydro_mod.f90 @@ -150,6 +150,7 @@ module noah_hydro_mod real(r8), allocatable, dimension(:) :: length integer, allocatable, dimension(:) :: to integer, allocatable, dimension(:) :: BucketMask +integer, allocatable, dimension(:) :: num_up_links integer, parameter :: IDSTRLEN = 15 ! must match declaration in netCDF file character(len=IDSTRLEN), allocatable, dimension(:) :: gageID ! NHD Gage Event ID from SOURCE_FEA field in Gages feature class @@ -456,127 +457,125 @@ subroutine get_hydro_constants(filename) integer :: i, ii, jj, io integer :: ncid, DimID, VarID, numdims, dimlen, xtype - -io = nf90_open(filename, NF90_NOWRITE, ncid) -call nc_check(io, routine, 'open', filename) - -! The number of latitudes is dimension 'y' -io = nf90_inq_dimid(ncid, 'y', DimID) -call nc_check(io, routine, 'inq_dimid y', filename) - -io = nf90_inquire_dimension(ncid, DimID, len=n_hlat) -call nc_check(io, routine,'inquire_dimension y',filename) - -! The number of longitudes is dimension 'x' -io = nf90_inq_dimid(ncid, 'x', DimID) -call nc_check(io, routine,'inq_dimid x',filename) - -io = nf90_inquire_dimension(ncid, DimID, len=n_hlong) -call nc_check(io, routine,'inquire_dimension x',filename) - -!>@todo could just check the dimension lengths for LONGITUDE -!> and use them for all ... removes the dependency on knowing -!> the dimension names are 'y' and 'x' ... and the order. - -!! module allocation -allocate(hlong(n_hlong, n_hlat)) -allocate( hlat(n_hlong, n_hlat)) - -!! local allocation -allocate(hlongFlip(n_hlong, n_hlat)) -allocate( hlatFlip(n_hlong, n_hlat)) - -! Require that the xlong and xlat are the same shape.?? -io = nf90_inq_varid(ncid, 'LONGITUDE', VarID) -call nc_check(io, routine,'inq_varid LONGITUDE',filename) - -io = nf90_inquire_variable(ncid, VarID, dimids=dimIDs, & - ndims=numdims, xtype=xtype) -call nc_check(io, routine, 'inquire_variable LONGITUDE',filename) - -! numdims = 2, these are all 2D fields -! Form the start/count such that we always get the 'latest' time. -ncstart(:) = 0 -nccount(:) = 0 -do i = 1,numdims - write(string1,'(''LONGITUDE inquire dimension '',i2,A)') i,trim(filename) - io = nf90_inquire_dimension(ncid, dimIDs(i), name=dimname, len=dimlen) - call nc_check(io, routine, string1) - ncstart(i) = 1 - nccount(i) = dimlen - if ((trim(dimname) == 'Time') .or. 
(trim(dimname) == 'time')) then - ncstart(i) = dimlen - nccount(i) = 1 - endif -enddo - -if (debug > 99) then - write(*,*)'DEBUG get_hydro_constants ncstart is',ncstart(1:numdims) - write(*,*)'DEBUG get_hydro_constants nccount is',nccount(1:numdims) -endif - -!get the longitudes -io = nf90_get_var(ncid, VarID, hlong, start=ncstart(1:numdims), & - count=nccount(1:numdims)) -call nc_check(io, routine, 'get_var LONGITUDE',filename) - -where(hlong < 0.0_r8) hlong = hlong + 360.0_r8 -where(hlong == 360.0_r8) hlong = 0.0_r8 - -!get the latitudes -io = nf90_inq_varid(ncid, 'LATITUDE', VarID) -call nc_check(io, routine,'inq_varid LATITUDE',filename) -io = nf90_get_var(ncid, VarID, hlat, start=ncstart(1:numdims), & - count=nccount(1:numdims)) -call nc_check(io, routine, 'get_var LATITUDE',filename) - -where (hlat < -90.0_r8) hlat = -90.0_r8 -where (hlat > 90.0_r8) hlat = 90.0_r8 - -! Flip the longitues and latitudes -do ii=1,n_hlong - do jj=1,n_hlat - hlongFlip(ii,jj) = hlong(ii,n_hlat-jj+1) - hlatFlip(ii,jj) = hlat(ii,n_hlat-jj+1) - end do -end do -hlong = hlongFlip -hlat = hlatFlip -deallocate(hlongFlip, hlatFlip) - -!get the channelgrid -! i'm doing this exactly to match how it's done in the wrf_hydro code -! (line 1044 of /home/jamesmcc/WRF_Hydro/ndhms_wrf_hydro/trunk/NDHMS/Routing/module_HYDRO_io.F) -! so that the output set of indices correspond to the grid in the Fulldom file -! and therefore these can be used to grab other channel attributes in that file. -! but the code is really long so I've put it in a module subroutine. -! Dont need to flip lat and lon in this (already done) but will flip other vars from Fulldom file. -! Specify channel routing option: 1=Muskingam-reach, 2=Musk.-Cunge-reach, 3=Diff.Wave-gridded - +integer, allocatable, dimension(:) :: col + + !get the channelgrid + ! i'm doing this exactly to match how it's done in the wrf_hydro code + ! (line 1044 of /home/jamesmcc/WRF_Hydro/ndhms_wrf_hydro/trunk/NDHMS/Routing/module_HYDRO_io.F) + ! so that the output set of indices correspond to the grid in the Fulldom file + ! and therefore these can be used to grab other channel attributes in that file. + ! but the code is really long so I've put it in a module subroutine. + ! Dont need to flip lat and lon in this (already done) but will flip other vars from Fulldom file. + ! Specify channel routing option: 1=Muskingam-reach, 2=Musk.-Cunge-reach, 3=Diff.Wave-gridded + if ( chanrtswcrt == 1 ) then - - if ( channel_option == 2) then - - call get_routelink_constants(route_link_f) - - else if ( channel_option == 3) then - - call getChannelGridCoords(filename, ncid, numdims, ncstart, nccount) - call get_basn_msk( filename, ncid, numdims, ncstart, nccount, n_hlong, n_hlat) + + if ( channel_option == 2) then + + call get_routelink_constants(route_link_f) + + else if ( channel_option == 3) then + + io = nf90_open(filename, NF90_NOWRITE, ncid) + call nc_check(io, routine, 'open', filename) + + ! The number of latitudes is dimension 'y' + io = nf90_inq_dimid(ncid, 'y', DimID) + call nc_check(io, routine, 'inq_dimid y', filename) + + io = nf90_inquire_dimension(ncid, DimID, len=n_hlat) + call nc_check(io, routine,'inquire_dimension y',filename) + + ! The number of longitudes is dimension 'x' + io = nf90_inq_dimid(ncid, 'x', DimID) + call nc_check(io, routine,'inq_dimid x',filename) + + io = nf90_inquire_dimension(ncid, DimID, len=n_hlong) + call nc_check(io, routine,'inquire_dimension x',filename) + + !>@todo could just check the dimension lengths for LONGITUDE + !> and use them for all ... 
removes the dependency on knowing + !> the dimension names are 'y' and 'x' ... and the order. + + !! module allocation + allocate(hlong(n_hlong, n_hlat), hlat(n_hlong, n_hlat)) + + !! local allocation + !allocate(hlongFlip(n_hlong, n_hlat), hlatFlip(n_hlong, n_hlat)) + allocate(col(n_hlat)) + + ! Require that the xlong and xlat are the same shape.?? + io = nf90_inq_varid(ncid, 'LONGITUDE', VarID) + call nc_check(io, routine,'inq_varid LONGITUDE',filename) + + io = nf90_inquire_variable(ncid, VarID, dimids=dimIDs, & + ndims=numdims, xtype=xtype) + call nc_check(io, routine, 'inquire_variable LONGITUDE',filename) + + ! numdims = 2, these are all 2D fields + ! Form the start/count such that we always get the 'latest' time. + ncstart(:) = 0 + nccount(:) = 0 + do i = 1,numdims + write(string1,'(''LONGITUDE inquire dimension '',i2,A)') i,trim(filename) + io = nf90_inquire_dimension(ncid, dimIDs(i), name=dimname, len=dimlen) + call nc_check(io, routine, string1) + ncstart(i) = 1 + nccount(i) = dimlen + if ((trim(dimname) == 'Time') .or. (trim(dimname) == 'time')) then + ncstart(i) = dimlen + nccount(i) = 1 + endif + enddo + + if (debug > 99) then + write(*,*)'DEBUG get_hydro_constants ncstart is',ncstart(1:numdims) + write(*,*)'DEBUG get_hydro_constants nccount is',nccount(1:numdims) + endif + + !get the longitudes + io = nf90_get_var(ncid, VarID, hlong, start=ncstart(1:numdims), & + count=nccount(1:numdims)) + call nc_check(io, routine, 'get_var LONGITUDE',filename) + + where(hlong < 0.0_r8) hlong = hlong + 360.0_r8 + where(hlong == 360.0_r8) hlong = 0.0_r8 + + !get the latitudes + io = nf90_inq_varid(ncid, 'LATITUDE', VarID) + call nc_check(io, routine,'inq_varid LATITUDE',filename) + io = nf90_get_var(ncid, VarID, hlat, start=ncstart(1:numdims), & + count=nccount(1:numdims)) + call nc_check(io, routine, 'get_var LATITUDE',filename) + + where (hlat < -90.0_r8) hlat = -90.0_r8 + where (hlat > 90.0_r8) hlat = 90.0_r8 + + ! Flip the longitues and latitudes + do jj = 1, n_hlat + col(jj) = n_hlat-jj+1 + enddo + hlong = hlong(:, col) + hlat = hlat(:, col) + + call getChannelGridCoords(filename, ncid, numdims, ncstart, nccount) + call get_basn_msk( filename, ncid, numdims, ncstart, nccount, n_hlong, n_hlat) + + io = nf90_close(ncid) + call nc_check(io, routine, filename) else write(string1,'("channel_option ",i1," is not supported.")')channel_option call error_handler(E_ERR,routine,string1,source) - endif + else + write(string1,'("CHANRTSWCRT ",i1," is not supported.")')chanrtswcrt write(string2,*)'This is specified in hydro.namelist' call error_handler(E_ERR,routine,string1,source) -endif -io = nf90_close(ncid) -call nc_check(io, routine, filename) +endif end subroutine get_hydro_constants @@ -654,10 +653,8 @@ subroutine getChannelGridCoords(filename, iunit, numdims, ncstart, nccount) end do deallocate(CH_NETRT_in, LAKE_MSKRT_in, DIRECTION_in, ELRT_in) -! This replaces a double for loop that counts NLINKS which can be removed from the -! code inserted below. ! subset to the 1D channel network as presented in the hydro restart file. -n_link = sum(CH_NETRT*0+1, mask = CH_NETRT >= 0) +n_link = count(CH_NETRT>=0) ! 
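A numpy analogue of the two coordinate fix-ups in get_hydro_constants above, the wrap of longitudes into [0, 360) and the index-vector latitude flip; the toy arrays stand in for the Fulldom LONGITUDE/LATITUDE fields (dimensioned n_hlong x n_hlat, as in the Fortran):

```python
import numpy as np

hlong = np.array([[-75.0, -74.5], [-75.2, -74.7]])
hlat  = np.array([[ 41.0,  40.5], [ 41.0,  40.5]])

hlong = np.where(hlong < 0.0, hlong + 360.0, hlong)   # DART wants [0, 360)
hlong = np.where(hlong == 360.0, 0.0, hlong)
hlat  = np.clip(hlat, -90.0, 90.0)

# col(jj) = n_hlat - jj + 1 in the Fortran is just a reversed second axis:
hlong = hlong[:, ::-1]
hlat  = hlat[:, ::-1]
print(hlong, hlat, sep="\n")
```

Reversing via an index vector (or a reversed slice) avoids the two temporary flip arrays the old code allocated.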
allocate the necessary wrf_hydro variables with module scope allocate(channelIndsX(n_link), channelIndsY(n_link)) @@ -1000,25 +997,25 @@ subroutine get_routelink_constants(filename) allocate (fromIndsStart( n_link)) allocate (fromIndsEnd( n_link)) allocate (toIndex( n_link)) +allocate (num_up_links( n_link)) call nc_get_variable(ncid,'fromIndices', fromIndices, routine) call nc_get_variable(ncid,'fromIndsStart',fromIndsStart,routine) call nc_get_variable(ncid,'fromIndsEnd', fromIndsEnd, routine) call nc_get_variable(ncid,'toIndex', toIndex, routine) -n_upstream = maxval(fromIndsEnd - fromIndsStart) + 1 +n_upstream = maxval(fromIndsEnd - fromIndsStart) + 1 +num_up_links = fromIndsEnd - fromIndsStart + 1 !! Allocate these module variables allocate(linkLong(n_link), linkLat(n_link), linkAlt(n_link)) -allocate(roughness(n_link)) -allocate(linkID(n_link)) -allocate(gageID(n_link)) allocate(channelIndsX(n_link), channelIndsY(n_link)) allocate(connections(n_link)) + do i = 1, n_link - allocate(connections(i)%upstream_linkID(n_upstream)) - allocate(connections(i)%upstream_index(n_upstream)) + allocate(connections(i)%upstream_index(num_up_links(i))) enddo + allocate(length(n_link)) allocate(to(n_link)) allocate(BucketMask(n_link)) @@ -1030,24 +1027,13 @@ subroutine get_routelink_constants(filename) ! length: Length (Stream length (m)) ! to: "To Link ID (PlusFlow table TOCOMID for every FROMCOMID)" -call nc_get_variable(ncid,'lon', linkLong, routine) -call nc_get_variable(ncid,'lat', linkLat, routine) -call nc_get_variable(ncid,'alt', linkAlt, routine) -call nc_get_variable(ncid,'n', roughness, routine) -call nc_get_variable(ncid,'link', linkID, routine) -call nc_get_variable(ncid,'Length',length, routine) -call nc_get_variable(ncid,'to', to, routine) +call nc_get_variable(ncid,'lon', linkLong, routine) +call nc_get_variable(ncid,'lat', linkLat, routine) +call nc_get_variable(ncid,'alt', linkAlt, routine) +call nc_get_variable(ncid,'Length', length, routine) +call nc_get_variable(ncid,'to', to, routine) call nc_get_variable(ncid,'bucket_comid_mask',BucketMask,routine) -! no snappy accessor routine for character arrays -! call nc_get_variable(ncid,'gages', gageID, routine) -io = nf90_inq_varid(ncid,'gages', VarID) -call nc_check(io, routine, 'inq_varid', 'gages', filename) -io = nf90_get_var(ncid, VarID, gageID) -call nc_check(io, routine, 'get_var', 'gages', filename) - -call nc_close_file(ncid, routine) - ! Longitude [DART uses longitudes [0,360)] where(linkLong < 0.0_r8) linkLong = linkLong + 360.0_r8 where(linkLong == 360.0_r8) linkLong = 0.0_r8 @@ -1060,15 +1046,13 @@ subroutine get_routelink_constants(filename) if (debug > 99) then do i=1,n_link - write(*,*)'link ',i,linkLong(i),linkLat(i),linkAlt(i),gageID(i),roughness(i),linkID(i),BucketMask(i) + write(*,*)'link ',i,linkLong(i),linkLat(i),linkAlt(i),BucketMask(i) enddo write(*,*)'Longitude range is ',minval(linkLong),maxval(linkLong) write(*,*)'Latitude range is ',minval(linkLat),maxval(linkLat) write(*,*)'Altitude range is ',minval(linkAlt),maxval(linkAlt) endif -deallocate(linkID) -deallocate(gageID) deallocate(length) deallocate(to) deallocate(toIndex) @@ -1090,17 +1074,16 @@ subroutine fill_connections(toIndex,fromIndices,fromIndsStart,fromIndsEnd) integer, intent(in) :: fromIndsEnd(:) integer :: i, j, id, nfound +integer, parameter :: MAX_UPSTREAM_LINKS = 5 + ! 
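The switch from a uniform n_upstream allocation to per-link num_up_links sizes makes the upstream table ragged: each link owns exactly the slice of fromIndices between its start and end offsets. A small Python sketch of that offset bookkeeping, with made-up 1-based indices:

```python
# per-link slices of one flat fromIndices array, sized by
# num_up_links = fromIndsEnd - fromIndsStart + 1 instead of the global max
from_indices    = [3, 7, 2, 9, 5]   # hypothetical 1-based link indices
from_inds_start = [1, 3, 0, 4]      # 0 means "nothing upstream"
from_inds_end   = [2, 3, 0, 5]

upstream = []
for s, e in zip(from_inds_start, from_inds_end):
    if s == 0:
        upstream.append([])                      # headwater link
    else:
        upstream.append(from_indices[s - 1:e])   # 1-based -> Python slice
print(upstream)   # [[3, 7], [2], [], [9, 5]]
```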
hydro_domain_offset = 0 !>@todo get the actual offset somehow do i = 1,n_link - connections(i)%gageName = gageID(i) - connections(i)%linkID = linkID(i) connections(i)%linkLength = length(i) connections(i)%domain_offset = i connections(i)%downstream_linkID = to(i) connections(i)%downstream_index = toIndex(i) - connections(i)%upstream_linkID(:) = MISSING_I connections(i)%upstream_index(:) = MISSING_I enddo @@ -1116,32 +1099,29 @@ subroutine fill_connections(toIndex,fromIndices,fromIndsStart,fromIndsEnd) UPSTREAM : do id = 1,n_link ! If there is nothing upstream ... already set to MISSING - if ( fromIndsStart(id) == 0 ) cycle UPSTREAM + if ( fromIndsStart(id) == 0 ) then + num_up_links(id) = 0 + cycle UPSTREAM + endif + connections(id)%upstream_index(:) = fromIndices(fromIndsStart(id):fromIndsEnd(id)) - nfound = 0 - do j = fromIndsStart(id),fromIndsEnd(id) ! loops over dimension to query - nfound = nfound + 1 - connections(id)%upstream_linkID(nfound) = linkID(fromIndices(j)) - connections(id)%upstream_index(nfound) = fromIndices(j) - enddo enddo UPSTREAM +! Ignore links that have outlets at the lakes +! This removes those having an extreme number of upstream links from +! the localization tree search +where(num_up_links > MAX_UPSTREAM_LINKS) num_up_links = 0 + if (debug > 99) then write(string1,'("PE ",i3)') my_task_id() -! do i = 1,n_link - do i = 54034,54034 - write(*,*)'' - write(*,*)trim(string1),' connectivity for link : ',i - write(*,*)trim(string1),' gageName : ',connections(i)%gageName - write(*,*)trim(string1),' linkID : ',connections(i)%linkID - write(*,*)trim(string1),' linkLength : ',connections(i)%linkLength - write(*,*)trim(string1),' domain_offset : ',connections(i)%domain_offset - - write(*,*)trim(string1),' downstream_linkID : ',connections(i)%downstream_linkID - write(*,*)trim(string1),' downstream_index : ',connections(i)%downstream_index - - write(*,*)trim(string1),' upstream_linkID : ',connections(i)%upstream_linkID - write(*,*)trim(string1),' upstream_index : ',connections(i)%upstream_index + do i = 1,n_link + write(*,*)'' + write(*,*)trim(string1),' connectivity for link : ',i + write(*,*)trim(string1),' linkLength : ',connections(i)%linkLength + write(*,*)trim(string1),' domain_offset : ',connections(i)%domain_offset + write(*,*)trim(string1),' downstream_linkID : ',connections(i)%downstream_linkID + write(*,*)trim(string1),' downstream_index : ',connections(i)%downstream_index + write(*,*)trim(string1),' upstream_index : ',connections(i)%upstream_index enddo endif @@ -1189,13 +1169,13 @@ recursive subroutine get_link_tree(my_index, reach_cutoff, depth, & write(*,*)trim(string1),' glt:task, nclose ', nclose write(*,*)trim(string1),' glt:task, close_indices ', close_indices(1:nclose) write(*,*)trim(string1),' glt:task, distances ', distances(1:nclose) + write(*, '(A, i5, f10.2, i8, i4, 5i8)') 'depth, distance, node, # upstream, up nodes: ', & + depth, direct_length, my_index, num_up_links(my_index), connections(my_index)%upstream_index(:) endif -do iup = 1,n_upstream - if ( connections(my_index)%upstream_index(iup) /= MISSING_I8 ) then - call get_link_tree( connections(my_index)%upstream_index(iup), & - reach_cutoff, depth+1, direct_length, nclose, close_indices, distances) - endif +do iup = 1,num_up_links(my_index) + call get_link_tree(connections(my_index)%upstream_index(iup), & + reach_cutoff, depth+1, direct_length, nclose, close_indices, distances) enddo end subroutine get_link_tree diff --git a/models/wrf_hydro/pmo/gauges.sh b/models/wrf_hydro/pmo/gauges.sh new 
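A minimal sketch of the pruned recursive upstream walk that get_link_tree performs above: links with more than MAX_UPSTREAM_LINKS upstream neighbours (the lake outlets) are treated as having none, so the localization tree search stays cheap. The cap of 5 mirrors the diff; the little network and reach lengths are made up.

```python
MAX_UPSTREAM_LINKS = 5

upstream = {1: [2, 3], 2: [], 3: [4], 4: []}        # link -> upstream links
lengths  = {1: 0.0, 2: 500.0, 3: 800.0, 4: 1200.0}  # reach lengths (m)

def link_tree(idx, cutoff, length_so_far, close, dists):
    """Collect links within 'cutoff' metres travelling upstream of idx."""
    if length_so_far > cutoff:
        return
    close.append(idx)
    dists.append(length_so_far)
    ups = upstream[idx]
    if len(ups) > MAX_UPSTREAM_LINKS:   # the where(num_up_links > ...) pruning
        ups = []
    for up in ups:
        link_tree(up, cutoff, length_so_far + lengths[up], close, dists)

close, dists = [], []
link_tree(1, 2000.0, 0.0, close, dists)
print(close, dists)   # [1, 2, 3, 4] [0.0, 500.0, 800.0, 2000.0]
```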
file mode 100755 index 0000000000..d91c46521c --- /dev/null +++ b/models/wrf_hydro/pmo/gauges.sh @@ -0,0 +1,59 @@ +#!/bin/bash + +# Directories of the truth run +# and the routelink file here: +truth_d=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/ +route_l=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/RouteLink.nc +gageind=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/gage_ind_small.txt +gageids=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/gauge_ids.txt +cd $truth_d + +echo ' ' + +rm combo_IDs || true + +ncdump -f F -v gages RouteLink.nc | grep " 01" | cut -d, -f1 | tr '"' ' ' | sed 's/^[ \t]*//;s/[ \t]*$//' > gauges_rl +ncdump -f F -v gages RouteLink.nc | grep " 01" | cut -d, -f3 | tr ")" " " > indxes_rl + +num_gauges_rl=`cat gauges_rl | wc -l` +echo $num_gauges_rl + +k=1 +while read -r line; do + gauges[$k]=$line + let 'k+=1' +done < "gauges_rl" + +k=1 +while read -r line; do + indxes[$k]=$line + let 'k+=1' +done < "indxes_rl" + +# All gauges available in the RL file +for k in `seq 1 $num_gauges_rl`; do + echo $k,${gauges[$k]},${indxes[$k]} >> combo_IDs + echo Gauge: $k, ID: ${gauges[$k]}, Feature: ${indxes[$k]} +done + + +# Fetch Gauges/Links/ +# If both link file and gauge file are given, priority +# is for the gauge IDs: +if [[ -f "${gageids}" ]]; then + # User provided set of gauges + while read -r line; do + grep $line combo_IDs | cut -d, -f3 >> tmp + done < "${gageids}" + echo -e "\nThe truth will be computed at user-provided gauge ID locations." + +elif [[ -f "${gageind}" ]]; then + # User provided set of links + cp ${gageind} ${truth_d}/tmp + echo -e "\nThe truth will be computed at user-provided index locations." + +else + # Couldn't find files that contain either gauge IDs or links + mv indxes_rl tmp + echo -e "\nThe gage locations from the RouteLink file are used to form the truth." +fi diff --git a/models/wrf_hydro/pmo/gen_truth.sh b/models/wrf_hydro/pmo/gen_truth.sh new file mode 100755 index 0000000000..467d7a47a5 --- /dev/null +++ b/models/wrf_hydro/pmo/gen_truth.sh @@ -0,0 +1,188 @@ +#!/bin/bash + +# Directories of the truth run, gauges indices, gauge IDs +# and the routelink file here: +ref_dir=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/ +route_l=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/RouteLink.nc +gageind=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/gage_ind.txt +gageids=/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/gage_ids.txt + +cd ${ref_dir} + +echo ' ' + +# time stamps: OSSE start to OSSE end +osse_s=2019-06-01_00 +osse_e=2019-06-01_23 + +# hydro restart file +hydroL=HYDRO_RST. 
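The ncdump/grep/cut pipeline in gauges.sh above can also be done directly with netcdf4-python; a hedged sketch, assuming the usual Route Link layout where `gages` is a (feature_id, IDSTRLEN) character array and blanks mean "no gauge attached":

```python
from netCDF4 import Dataset, chartostring

with Dataset("RouteLink.nc") as nc:
    gages = chartostring(nc.variables["gages"][:])

for i, gid in enumerate(gages, start=1):   # 1-based, like ncdump -f F
    gid = str(gid).strip()
    if gid:
        print(f"{i},{gid}")
```

Note the shell version keys on `grep " 01"`, i.e. it assumes every gauge ID in this domain begins with 01; the sketch above instead keeps any non-blank ID.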
+hydroR=_DOMAIN1 + +# Truth file name +truth_x=pmo_drb_truth.nc +truth_d=pmo_drb_truth_all_gages.nc +truth_g=pmo_drb_truth_des_gages.nc + +rm -f $truth_x $truth_d $truth_g + +# Form a list of all needed files +ls -1 ${hydroL}*${hydroR} > majorlist.txt + +line1=`grep -n $osse_s majorlist.txt | head -n 1 | cut -d: -f1` +line2=`grep -n $osse_e majorlist.txt | head -n 1 | cut -d: -f1` + +sed -n "${line1},${line2}p" majorlist.txt > newlist.txt + +nhoura=`cat newlist.txt | wc -l` +nhourb=`echo "($nhoura - 1)" | bc -l` +timesq=`seq 0 $nhourb` + +# Remove all variables and keep qlink1 +f=0 + +for file in `cat newlist.txt`; do + + let "f+=1" + + echo $file, Cycle: $f + + ex=`printf '%05.f' $f` + + # Extract qlink1 only + ncks -O -v qlink1 $file member${ex}.nc + + # Rename variable and dimension + ncrename -O -d links,feature_id -v qlink1,streamflow member${ex}.nc member${ex}.nc + + # Add record time dimension + ncecat -O -u time member${ex}.nc member${ex}.nc +done + +# Concatenate all files +ncrcat -F -O -h -H member?????.nc $truth_x + +rm member*.nc || true + +ncap2 -O -s 'streamflow@units="m3 s-1";streamflow@long_name="River Flow";streamflow@grid_mapping="crs"; streamflow@missing_value=-9999.f' $truth_x $truth_x +ncatted -O -a cell_methods,streamflow,d,, $truth_x $truth_x + +# Add time variable +ncap2 -O -s 'time=array(0,1,$time)' $truth_x $truth_x +ncap2 -O -s 'time@long_name="valid output time";time@standard_name="time";time@units="hours since ${osse_s}";time@calendar="proleptic_gregorian"' $truth_x $truth_x +ncatted -O -a units,time,o,c,"hours since ${osse_s}:00" $truth_x + +# Bring in the feature IDs from RL +ncks -A -v link $route_l $truth_x +ncrename -O -v link,feature_ids $truth_x $truth_x + +# Clean-up some global attributes +ncatted -O -a Restart_Time,global,d,, $truth_x +ncatted -O -a Since_Date,global,d,, $truth_x +ncatted -O -a his_out_counts,global,d,, $truth_x +ncatted -O -a featureType,global,a,c,"timeSeries" $truth_x +ncatted -O -a station_dimension,global,a,c,"feature_id" $truth_x +ncatted -O -a NCO,global,d,, $truth_x +ncatted -O -a history,global,d,, $truth_x + +echo -e "\n** Created reference truth trajectory: ${ref_dir}${truth_x}\n" + + +# Still need to subset based on gages +ncdump -f F -v gages RouteLink.nc | grep " 01" | cut -d, -f1 | tr '"' ' ' | sed 's/^[ \t]*//;s/[ \t]*$//' > gauges_rl +ncdump -f F -v gages RouteLink.nc | grep " 01" | cut -d, -f3 | tr ")" " " > indxes_rl + +num_gauges_rl=`cat gauges_rl | wc -l` +echo -e "** The number of gauges in the Route Link file is: $num_gauges_rl\n" + +k=1 +while read -r line; do + gauges[$k]=$line + let 'k+=1' +done < "gauges_rl" + +k=1 +while read -r line; do + indxes[$k]=$line + let 'k+=1' +done < "indxes_rl" + +sleep 0.5 + +# All gauges available in the RL file +for k in `seq 1 $num_gauges_rl`; do + echo $k,${gauges[$k]},${indxes[$k]} >> combo_IDs + echo Gauge: $k, ID: ${gauges[$k]}, Index: ${indxes[$k]} +done + +# Fetch Gauges/Links/ +# If both link file and gauge file are given, priority +# is for the gauge IDs: +if [[ -f "${gageids}" ]]; then + # User provided set of gauges + while read -r line; do + grep $line combo_IDs | cut -d, -f3 >> tmp + done < "${gageids}" + + num_gauges_des=`cat ${gageids} | wc -l` + + echo -e "\n** The truth will be computed at user-provided gauge ID locations only." + +elif [[ -f "${gageind}" ]]; then + # User provided set of links + cp ${gageind} ${ref_dir}/tmp + + num_gauges_des=`cat ${gageind} | wc -l` + + echo -e "\n** The truth will be computed at user-provided index locations only." 
+ +else + # Couldn't find files that contain either gauge IDs or links + cp indxes_rl tmp + + num_gauges_des=${num_gauges_rl} + + echo -e "\n** The gage locations from the RouteLink file are used to form the truth." +fi + +# Permutate the record to feature_id instead of time +ncpdq -O -a feature_id,time $truth_x new_pmo.nc + +echo -e "\n** Creating truth file at the desired gauges: ${ref_dir}$truth_g" + +# Create individual files at the gage locations +cc=0 +for i in `cat tmp`; do + let 'cc+=1' + + fid=$(($i-1)) + ncks -O -v feature_ids,time,streamflow -d feature_id,$fid new_pmo.nc test_${cc}.nc +done + +# Now, concatenate the resulting files +ncrcat -O test_*.nc $truth_g +ncpdq -O -a time,feature_id $truth_g $truth_g +ncatted -O -a history,global,d,, $truth_g + +# Check if we need to provide the truth for all gauges +if [[ $num_gauges_rl != $num_gauges_des ]]; then + cp indxes_rl tmp + + cc=0 + for i in `cat tmp`; do + let 'cc+=1' + + fid=$(($i-1)) + ncks -O -v feature_ids,time,streamflow -d feature_id,$fid new_pmo.nc test_a${cc}.nc + done + + ncrcat -O test_a*.nc $truth_d + ncpdq -O -a time,feature_id $truth_d $truth_d + ncatted -O -a history,global,d,, $truth_d +else + cp $truth_g $truth_d +fi + +rm test*.nc new_pmo.nc tmp combo_IDs indxes_rl gauges_rl || true + +echo -e "\n ##### Done #####" diff --git a/models/wrf_hydro/pmo/pmo_osse.py b/models/wrf_hydro/pmo/pmo_osse.py new file mode 100644 index 0000000000..ddc0fc1476 --- /dev/null +++ b/models/wrf_hydro/pmo/pmo_osse.py @@ -0,0 +1,304 @@ +#!/usr/bin/env python +# coding: utf-8 + +# When given a NetCDF file from a deterministic run of WRF-Hydro, this script +# creates daily obs_sequence files, denoted by YYYYMMDD strings appended to +# the end of the obs_seq string. + +# This script needs one third-party module: netcdf4-python. +# On CISL resources (Cheyenne and Casper) please run these two commands first: +# > module load python +# > ncar_pylib + +# Then run this script: +# > python pmo_osse.py + +# IMPORT STANDARD LIBRARIES + +from __future__ import print_function +from __future__ import division +import os +import time +import sys +import datetime +from math import pi + +# IMPORT THIRD PARTY MODULE(S) +import netCDF4 + +# PRINT SCRIPT INFORMATION +# Print script data in case output is redirected from standard output to file. +this_script = sys.argv[0] +this_script = this_script[0:-3] +# This prints the name of the script +print('Running', this_script) +# This prints the last time the script was modified +print('Which was last modified at', + time.ctime(os.path.getmtime(os.path.realpath(__file__)))) +print('\n') + +# CONSTANTS +# Get the name of the user running the script so we can output to their work +# directory. +user = os.environ['USER'] +# Frequency of output +ntimes_per_day = 24 +# Following the observation error procedure from +# create_identity_streamflow_obs.f90 we define a mininum error bound and a +# observational error fraction (40%) and pick whichever is larger. +min_err = 0.1 +max_err = 1000000.0 +obs_fraction_for_error = 0.4 +deg2rad = pi/180.0 + +# STRINGS AND PATHS +# Change the following strings to match the experiment names and locations of +# the files. +domain_name = 'drb' +pmo_name = 'osse_id2' +input_path = '/glade/work/gharamti/DART/DART_development/models/wrf_hydro/pmo/drb_mem0/' +output_path = '/glade/work/' + user + '/wrfhydro_dart/' + +# Check to see if the output path exists. 
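The subsetting stage of gen_truth.sh above (the ncpdq/ncks/ncrcat loop over `tmp`) has a rough Python equivalent; this sketch pulls streamflow at a set of 1-based Route Link indices out of the concatenated truth file, with file and variable names following the script:

```python
from netCDF4 import Dataset
import numpy as np

desired = np.loadtxt("gage_ind.txt", dtype=int, ndmin=1)  # 1-based indices
with Dataset("pmo_drb_truth.nc") as nc:
    flow = nc.variables["streamflow"][:, desired - 1]     # time x gauge
    fids = nc.variables["feature_ids"][desired - 1]
print(flow.shape, fids)
```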
+obs_seq_path = output_path + domain_name + '/obs_seqs/' + pmo_name + '/' +if not os.path.exists(obs_seq_path): + # If not, create the path. + print('Making directory for obs_seq files:', obs_seq_path) + os.makedirs(obs_seq_path) + +# Perfect Model Output +# This file, when created by DART'S PMO, is typically called perfect_output.nc +pmo_path = input_path + 'pmo_drb_truth_des_gages.nc' +pmo_all_path = input_path + 'pmo_drb_truth_all_gages.nc' +# Route Link File +rl_path = input_path + 'RouteLink.nc' + +# PMO +# Get the necessary data from the PMO file +pmo_pointer = netCDF4.Dataset(pmo_path) +ntimes = len(pmo_pointer.dimensions['time']) +print('ntimes:', ntimes) + +pmo_all_pointer = netCDF4.Dataset(pmo_all_path) + +# TIMES +# Use the units string on the time variable to assign integers for year, month, +# day, etc +pmo_time = pmo_pointer.variables['time'] +start_year = int(pmo_time.units[12:16]) +start_month = int(pmo_time.units[17:19]) +start_day = int(pmo_time.units[20:22]) +start_hour = int(pmo_time.units[23:25]) +start_minute = 0 +start_second = 0 + +print('Start date:', start_year, start_month, start_day, start_hour) + +# Create an integration start_time using the integers from the time units +# string. +integration_start_time = datetime.datetime(start_year, start_month, start_day, + start_hour, start_minute, + start_second) + +# If obs_seq files should only be output after a certain day, change it here. +# The default behavior is that the output_start_day is the same as the +# deterministic run start day. +output_start_day = datetime.datetime(start_year, start_month, start_day) +# output_start_day = datetime.datetime(2018, 9, 7) + +# This loop starts the observation sequence loop only after the +# output_start_day specified by the user. +start_time = 0 +nday = -1 +ndays = int(ntimes/ntimes_per_day) + +for iday in range(0, ndays): + if output_start_day > integration_start_time+datetime.timedelta(days=iday): + nday = nday + 1 + start_time = start_time + ntimes_per_day + +# DART uses time since 1601-01-01 00:00:00 +overall_start_time = datetime.datetime(1601, 1, 1, 0, 0, 0) +# Get the time since DART start time at the beginning of the file +file_start_time_delta = integration_start_time-overall_start_time +print('DART start time:', file_start_time_delta.seconds, file_start_time_delta.days) + +# Get feature information from the perfect obs file +nfeatures = len(pmo_all_pointer.dimensions['feature_id']) +nfeatures_des = len(pmo_pointer.dimensions['feature_id']) +print('RL gauges:', nfeatures, ', desired ones:', nfeatures_des) + +nobs = ntimes*nfeatures +nobs_day = ntimes_per_day*nfeatures + +pmo_reach_id = pmo_all_pointer.variables['feature_ids'] +pmo_time = pmo_all_pointer.variables['time'] +pmo_streamflow = pmo_all_pointer.variables['streamflow'] + +pmo_des_reach_id = pmo_pointer.variables['feature_ids'] + +# ROUTELINK +# Get the necessary data from the Route Link file. +rl_pointer = netCDF4.Dataset(rl_path) +# Get the variables from the Route Link file. +lat = rl_pointer.variables['lat'] +lon = rl_pointer.variables['lon'] +alt = rl_pointer.variables['alt'] +link = rl_pointer.variables['link'] + +# GAGE LISTS +# Build lists with the following: +# 1. Index of the link with the desired gage +ilinks = [] +# 2. Latitude of link +lats = [] +# 3. Longitude of link +lons = [] +# 4. 
Altitude of link +alts = [] + +# Loop through the reach ids to build the lists +print('\n') +print('Looping through the links in the Route Link file to get the location ' + 'data for each desired gage.') +print('Thank you', user, 'for your patience.') +print('\n') + +gg = 0 +for ipmo_reach, this_pmo_reach in enumerate(pmo_reach_id): + for ilink, this_link in enumerate(link): + if this_pmo_reach == this_link: + gg = gg + 1 + print('Gauge no:', gg) + print('Feature ID:', this_pmo_reach, 'and Link Index:', ilink+1) + print('Location: lat', lat[ilink], ', lon', lon[ilink], ', alt', alt[ilink], '\n') + ilinks.append(ilink) + this_lat = lat[ilink]*deg2rad + lats.append(this_lat) + this_lon = lon[ilink] + if this_lon < 0.0: + this_lon = this_lon+360.0 + this_lon = this_lon*deg2rad + lons.append(this_lon) + alts.append(alt[ilink]) + +# OBS SEQUENCE LOOP +# Loop through the times in the PMO file to build the obs_seq files + +for itime in range(start_time, ntimes): + # The commented line assumes that we want to make the observation time half + # an observation period ahead of when it actually occurs so that the window + # is centered on when the observation was taken. Do we want this? + # If so, uncomment the next line and comment the following one. + # this_time = file_start_time_delta+datetime.timedelta(hours=itime) - \ + # datetime.timedelta(hours=0.5) + this_time = file_start_time_delta+datetime.timedelta(hours=itime) + + # If the time index modulo the number of times per day equals zero, then + # that means we're at the start of a day and we should thus open a new + # obs_seq file designated with the proper YYYYMMDD string and write the + # appropriate header information to it. + if itime % ntimes_per_day == 0: + # Get the YYYYMMDD string of this new day so that we can name the + # obs_seq file appropriately. + nday = nday+1 + + file_start_day = integration_start_time+datetime.timedelta(days=nday) + time_string = str(file_start_day.year) + \ + str(file_start_day.month).zfill(2) + \ + str(file_start_day.day).zfill(2) + + # Append 'obs_seq.' with the YYYYMMDD string + obs_seq_file = obs_seq_path + 'obs_seq.' + time_string + print('Writing observation sequences for day', str(nday).zfill(2), + 'to:', obs_seq_file) + + # Create the file pointer for writing + obs_seq = open(obs_seq_file, 'w') + + # Write the header strings 'obs_sequence', 'obs_kind_definitions', + # etc to the file + print(' obs_sequence', file=obs_seq) + print('obs_kind_definitions', file=obs_seq) + # There aren't any obs_kind_definitions because this file only contains + # identity obs. + print(' 0', file=obs_seq) + print('num_copies: 1 num_qc: 1', file=obs_seq) + print('num_obs: ', nobs_day, ' max_num_obs: ', nobs_day, + file=obs_seq) + print(' observation', file=obs_seq) + print('QC VALUE', file=obs_seq) + print('first:', '1'.rjust(8), 'last:', str(nobs_day).rjust(8), + file=obs_seq) + # Reset the obs counter so the OBS line is correct and the linked list + # strings are correct as well. + iobs = -1 + + for ifeature in range(0, nfeatures): + # Now we're looping through the actual gages and writing them, their + # linked list, state_vector strings and observation error variance + # to the obs_sequence files. + iobs = iobs + 1 + this_obs = pmo_streamflow[itime, ifeature] + + # The observation error standard deviation is specified as the larger + # of the observation magnitude times error fraction or the minimum + # error threshold. 
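Two small numbers in this script are worth pinning down before the writing loop below: the observation-error rule just described, and the (seconds, days) time stamp DART expects on each observation. A quick check in plain Python, with illustrative values:

```python
import datetime

# error sd = max(0.4*obs, 0.1) for desired gauges (max_err for the rest);
# the squared value is the variance written to the obs_seq file
min_err, obs_fraction_for_error = 0.1, 0.4
this_obs = 0.2                                  # a tiny flow, m3 s-1
obs_err = max(this_obs*obs_fraction_for_error, min_err)
print(obs_err**2)                               # ~0.01

# DART times are two integers, seconds and days since 1601-01-01 00:00:00
dart_epoch = datetime.datetime(1601, 1, 1)
delta = datetime.datetime(2019, 6, 1, 5, 0, 0) - dart_epoch
print(delta.seconds, delta.days)                # what the time line holds
```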
+        if any(pmo_des_reach_id == pmo_reach_id[ifeature]):
+            obs_err = max(this_obs*obs_fraction_for_error, min_err)
+        else:
+            obs_err = max_err
+
+        # Square it to get the variance
+        obs_var = obs_err*obs_err
+
+        # The observations are 1 indexed, but python loops are 0 indexed so
+        # we add 1 to the observation index before writing to the file
+        print('OBS', str(iobs+1).rjust(10), file=obs_seq)
+        # Write the value of the observation
+        print(this_obs, file=obs_seq)
+        # Write the QC value, 1.0
+        print('1.000000000000000E+000', file=obs_seq)
+
+        # The linked list line has three configurations
+        if iobs == 0:
+            # If it's the first observation, the first integer is -1
+            print('-1'.rjust(4), str(iobs+2).rjust(4), '-1'.rjust(4),
+                  file=obs_seq)
+        elif iobs == nobs_day-1:
+            # If it's the last observation the second integer is -1
+            print(str(iobs).rjust(4), '-1'.rjust(4), '-1'.rjust(4),
+                  file=obs_seq)
+        else:
+            # If it's any other observation the first integer is the obs
+            # number minus 1 (assuming the observations are 1 indexed) and
+            # the second integer is obs number plus one (again assuming
+            # observations are 1 indexed).
+            print(str(iobs).rjust(4), str(iobs+2).rjust(4), '-1'.rjust(4),
+                  file=obs_seq)
+        # Then we have the obdef section of the observation.
+        print('obdef', file=obs_seq)
+        # This is a 3-D observation with....
+        print('loc3d', file=obs_seq)
+        # ...latitude, longitude, altitude, and the 3 denotes that the
+        # vertical coordinate is a height in meters, VERTISHEIGHT.
+        print(str(lons[ifeature]).rjust(12), str(lats[ifeature]).rjust(12),
+              str(alts[ifeature]).rjust(12), '3'.rjust(12), file=obs_seq)
+        print('kind', file=obs_seq)
+        # Since these are identity observations they're the negative of the
+        # position within the state vector.
+        print(' -'+str(ilinks[ifeature]+1), file=obs_seq)
+        # The time of the observation is seconds and days since 1601-01-01
+        # 00:00:00
+        print(this_time.seconds, this_time.days, file=obs_seq)
+        # Finally, write the observation error variance
+        print(obs_var, file=obs_seq)
+
+    # If the next time modulo the number of times per day equals zero, for
+    # example if we just wrote the observations for 11PM (23 hours), then we
+    # just wrote the last appropriate observations to this file and we need
+    # to close the file.
+    if (itime + 1) % ntimes_per_day == 0:
+        obs_seq.close()
+
+
diff --git a/observations/forward_operators/obs_def_rttov13_mod.f90 b/observations/forward_operators/obs_def_rttov13_mod.f90
index 6403995051..a1d65773a4 100644
--- a/observations/forward_operators/obs_def_rttov13_mod.f90
+++ b/observations/forward_operators/obs_def_rttov13_mod.f90
@@ -592,7 +592,7 @@ module obs_def_rttov_mod
 type(rttov_platform_type), pointer :: platforms(:)
 
 ! version controlled file description for error handling, do not edit
-character(len=*), parameter :: source   = 'obs_def_rttov_mod.f90'
+character(len=*), parameter :: source   = 'obs_def_rttov13_mod.f90'
 character(len=*), parameter :: revision = ''
 character(len=*), parameter :: revdate  = ''
 
@@ -629,7 +629,7 @@ module obs_def_rttov_mod
 ! -----------------------------------------------------------------------------
 ! DART/RTTOV options in the input.nml namelist.
 !
-! DART exposes all of the RTTOV 12.3 options available and passes them to
+! DART exposes all of the RTTOV 13 options available and passes them to
 ! RTTOV with little to no additional checking for consistency. The default in
 ! most cases can be used and need not be specified in the namelist.
 !
@@ -708,6 +708,8 @@ module obs_def_rttov_mod
 integer :: htfrtc_n_pc = -1 !
number of PCs to use (HTFRTC only, max 300) logical :: htfrtc_simple_cloud = .false. ! use simple-cloud scattering (HTFRTC only) logical :: htfrtc_overcast = .false. ! calculate overcast radiances (HTFRTC only) +real(r8) :: wfetc_value = 100000.0_r8 ! Real wfetc Wind fetch (m) (length of water over which the wind has blown, typical + ! value 100000m for open ocean). Used if wfetc not provided by model. namelist / obs_def_rttov_nml/ rttov_sensor_db_file, & first_lvl_is_sfc, & @@ -779,7 +781,8 @@ module obs_def_rttov_mod use_htfrtc, & htfrtc_n_pc, & htfrtc_simple_cloud, & - htfrtc_overcast + htfrtc_overcast, & + wfetc_value type(atmos_profile_type) :: atmos type(trace_gas_profile_type) :: trace_gas @@ -2211,6 +2214,7 @@ subroutine do_forward_model(ens_size, nlevels, flavor, location, & end if ! depending on the vertical velocity and land type, classify clouds the way RTTOV wants + runtime % profiles(imem) % cloud(:,:) = 0.0_jprb if (.not. is_cumulus) then ! stratus if (surftype == 0) then @@ -2342,6 +2346,8 @@ subroutine do_forward_model(ens_size, nlevels, flavor, location, & if (allocated(atmos % wfetch)) then ! Wind fetch over the ocean (m) runtime % profiles(imem) % s2m % wfetc = atmos % wfetch(imem) + else + runtime % profiles(imem) % s2m % wfetc = wfetc_value end if ! Surface type (0=land, 1=sea, 2=sea-ice) diff --git a/observations/forward_operators/obs_def_rttov_mod.rst b/observations/forward_operators/obs_def_rttov_mod.rst index f6d29de44c..df98b706cc 100644 --- a/observations/forward_operators/obs_def_rttov_mod.rst +++ b/observations/forward_operators/obs_def_rttov_mod.rst @@ -10,11 +10,18 @@ DART RTTOV observation module, including the observation operators for the two p RTTOV-observation types -- visible/infrared radiances and microwave radiances/brightness temperatures. -The obs_def_rttov_mod.f90 module acts as a pass-through for RTTOV version 12.3. For more information, -see `the RTTOV site `__. +DART can be built with either RTTOV v12 *or* v13. Edit :ref:`&preprocess_nml ` to select +the appropriate obs_def: -For RTTOV v13 use the obs_def_rttov13_mod.f90 module contributed by Lukas Kugler -of the University of Vienna. +- obs_def_rttov_mod.f90 for v12.3 +- obs_def_rttov13_mod.f90 for v13 (contributed by Lukas Kugler of the University of Vienna). + +Note the namelist options for &obs_def_rttov_nml differ for v12 and v13. + +- `RTTOV v12 Namelist`_ &obs_def_rttov_nml +- `RTTOV v13 Namelist`_ &obs_def_rttov_nml + +For more detail on RTTOV see the `RTTOV user guide `__. DART supports both RTTOV-direct for visible/infrared/microwave as well as RTTOV-scatt for microwave computations. The code, in principle, supports all features of version 12.3 @@ -32,10 +39,24 @@ in cloud phase (ice versus water) makes a much larger difference. Trace gases a may be important for actual observation system experiments using visible/infrared; this may depend on the precise frequencies you wish to use. +For RTTOV 13 DART has a ``wfetch_value`` namelist option. This allows you to set a wfetch value +to use when ``use_wfetch = .true.`` if the model you are using does not provide QTY_WIND_FETCH. + Although a model may not have the necessary inputs by itself, the defaults in RTTOV based on climatology can be used. The impact on the quality of the results should be investigated. +The quanities for each observation type are defined in obs_def_rttov{13}_mod.f90, like so: + +.. code:: + + ! 
HIMAWARI_8_AHI_RADIANCE, QTY_RADIANCE + +If you want to change the quantity associated with an observation, for example, if you want +to assimilate HIMAWARI_8_AHI_RADIANCE as QTY_BRIGHTNESS_TEMPERATURE, edit the QTY +in obs_def_rttov{13}_mod.f90 and rerun quickbuild.sh. + + Known issues: - DART does not yet provide any type of bias correction @@ -45,200 +66,15 @@ Known issues: number of ensemble members. Using the maximum peak of the weighting function or the cloud-top may be appropriate. There are also other potential approaches being investigated. -| Author and Contact information: - -- DART Code: Jeff Steward -- Original DART/RTTOV work: Nancy Collins, Johnny Hendricks - -Backward compatibility note -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Other modules used ------------------- - -:: - - types_mod - utilities_mod - location_mod (threed_sphere) - assim_model_mod - obs_def_utilitie_mod - ensemble_manager_mod - utilities_mod - parkind1 (from RTTOV) - rttov_types (from RTTOV) - obs_kind_mod - -Public interfaces ------------------ - -=============================== ======================== -*use obs_def_rttov_mod, only :* set_visir_metadata -\ set_mw_metadata -\ get_expected_radiance -\ get_rttov_option_logical -=============================== ======================== - -Namelist interface ``&obs_def_rttov_mod_nml`` is read from file ``input.nml``. - -A note about documentation style. Optional arguments are enclosed in brackets *[like this]*. - -| - -.. container:: routine - - *call set_visir_metadata(key, sat_az, sat_ze, sun_az, sun_ze, & platform_id, sat_id, sensor_id, channel, - specularity)* - :: - - integer, intent(out) :: key - real(r8), intent(in) :: sat_az - real(r8), intent(in) :: sat_ze - real(r8), intent(in) :: sun_az - real(r8), intent(in) :: sun_ze - integer, intent(in) :: platform_id, sat_id, sensor_id, channel - real(r8), intent(in) :: specularity - -.. container:: indent1 - - Visible / infrared observations have several auxillary metadata variables. Other than the key, which is standard DART - fare, the RTTOV satellite azimuth and satellite zenith angle must be specified. See the RTTOV user guide for more - information (in particular, see figure 4). If the ``addsolar`` namelist value is set to true, then the solar azimuth - and solar zenith angles must be specified - again see the RTTOV user guide. In addition to the platform/satellite/ - sensor ID numbers, which are the RTTOV unique identifiers, the channel specifies the chanenl number in the RTTOV - coefficient file. Finally, if ``do_lambertian`` is true, specularity must be specified here. Again, see the RTTOV - user guide for more information. - - =============== ================================================================ - ``key`` The DART observation key. - ``sat_az`` The satellite azimuth angle. - ``sat_ze`` The satellite zenith angle. - ``sun_az`` The solar azimuth angle. Only relevant if addsolar is true. - ``sun_ze`` The solar zenith angle. Only relevant if addsolar is true. - ``platform_id`` The RTTOV platform ID. - ``sat_id`` The RTTOV satellite ID. - ``sensor_id`` The RTTOV sensor ID. - ``channel`` The RTTOV channel number. - ``specularity`` The surface specularity. Only relevant if do_lambertian is true. - =============== ================================================================ - -| - -.. 
container:: routine - - *call set_mw_metadata(key, sat_az, sat_ze, platform_id, sat_id, sensor_id, channel, mag_field, cosbk, fastem_p1, - fastem_p2, fastem_p3, fastem_p4, fastem_p5)* - :: - - integer, intent(out) :: key - real(r8), intent(in) :: sat_az - real(r8), intent(in) :: sat_ze - integer, intent(in) :: platform_id, sat_id, sensor_id, channel - real(r8), intent(in) :: mag_field - real(r8), intent(in) :: cosbk - real(r8), intent(in) :: fastem_p[1-5] - -.. container:: indent1 - - Microwave observations have several auxillary metadata variables. Other than the key, which is standard DART fare, - the RTTOV satellite azimuth and satellite zenith angle must be specified. See the RTTOV user guide for more - information (in particular, see figure 4). In addition to the platform/satellite/ sensor ID numbers, which are the - RTTOV unique identifiers, the channel specifies the chanenl number in the RTTOV coefficient file. In addition, if - ``use_zeeman`` is true, the magnetic field and cosine of the angle between the magnetic field and angle of - propagation must be specified. See the RTTOV user guide for more information. Finally, the fastem parameters for land - must be specified here. This may be difficult for observations to set, so default values (see table 21 in the RTTOV - user guide) can be used until a better solution is devised. - - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``key`` | The DART observation key. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``sat_az`` | The satellite azimuth angle. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``sat_ze`` | The satellite zenith angle. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``platform_id`` | The RTTOV platform ID. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``sat_id`` | The RTTOV satellite ID. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``sensor_id`` | The RTTOV sensor ID. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``channel`` | The RTTOV channel number. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``mag_field`` | The strength of the magnetic field. Only relevant if add_zeeman is true. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``cosbk`` | The cosine of the angle between the magnetic field and direction of EM propagation. Only | - | | relevant if add_zeeman is true. | - +-------------------+-------------------------------------------------------------------------------------------------+ - | ``fastem_p[1-5]`` | The five parameters used for fastem land/sea ice emissivities. For ocean emissivities, an | - | | internal model is used based on the value of fastem_version. | - +-------------------+-------------------------------------------------------------------------------------------------+ - -| - -.. 
container:: routine - - *call get_expected_radiance(obs_kind_ind, state_handle, ens_size, location, key, val, istatus)* - :: - - integer, intent(in) :: obs_kind_ind - type(ensemble_type), intent(in) :: state_handle - integer, intent(in) :: ens_size - type(location_type), intent(in) :: location - integer, intent(in) :: key - real(r8), intent(out) :: val(ens_size) - integer, intent(out) :: istatus(ens_size) - -.. container:: indent1 - - Given a location and the state vector from one of the ensemble members, compute the model-predicted satellite - observation. This can be either in units of radiance (mW/cm-1/sr/sq.m) or a brightness temperature (in K), depending - on if this is a visible/infrared observation or a microwave observation. - - +------------------+--------------------------------------------------------------------------------------------------+ - | ``obs_kind_ind`` | The index of the observation kind; since many observation kinds are handled by this module, this | - | | can be used to determine precisely which observation kind is being used. | - +------------------+--------------------------------------------------------------------------------------------------+ - | ``state_handle`` | The ensemble of model states to be used for the observation operator calculations. | - +------------------+--------------------------------------------------------------------------------------------------+ - | ``location`` | Location of this observation | - +------------------+--------------------------------------------------------------------------------------------------+ - | ``key`` | Unique identifier associated with this satellite observation | - +------------------+--------------------------------------------------------------------------------------------------+ - | ``val`` | The returned observation in units of either radiance or brightness temperature. | - +------------------+--------------------------------------------------------------------------------------------------+ - | ``istatus`` | Returned integer status code describing problems with applying forward operator. 0 is a good | - | | value; any positive value indicates an error; negative values are reserved for internal DART use | - | | only. | - +------------------+--------------------------------------------------------------------------------------------------+ - -| - -.. container:: routine - - *p = get_rttov_option_logical(field_name)* - :: - - character(len=*), intent(in) :: field_name - logical, result :: p - -.. container:: indent1 - - Return the logical value of the RTTOV parameter associated with the field_name. - - ============== ======================================================= - ``field_name`` The name of the RTTOV parameter from the namelist. - ``p`` The logical return value associated with the parameter. - ============== ======================================================= - -| - -Namelist --------- -This namelist is read from the file ``input.nml``. Namelists start with an ampersand '&' and terminate with a slash '/'. +The namelist ``&obs_def_rttov_mod_nml`` is read from file ``input.nml``. Namelists start with an ampersand '&' +and terminate with a slash '/'. Character strings that contain a '/' must be enclosed in quotes to prevent them from prematurely terminating the namelist. +RTTOV v12 Namelist +------------------ + :: &obs_def_rttov_nml @@ -540,62 +376,92 @@ namelist. | | | user guide). 
| +------------------------+--------------------+----------------------------------------------------------------------+ -| -Files ------ +RTTOV v13 namelist +------------------ -- A DART observation sequence file containing Radar obs. +.. code-block:: text + + &obs_def_rttov_nml + first_lvl_is_sfc = .true. ! is level 1 the surface (true) or top of atmosphere (false)? + mw_clear_sky_only = .false. ! only use clear-sky for MW (plus clw emission if clw_data is true) or full RTTOV-SCATT (false)? + interp_mode = 1 ! Interpolation mode: Rochon on OD (1), Log-linear (2), Rochon on log-linear OD (3), Rochon on WF (4), Rochon on log-linear WF (5) + do_checkinput = .true. ! check if profiles are within absolute and regression limits + apply_reg_limits = .false. ! clamp to min/max values + verbose = .true. ! if false, only fatal errors output + fix_hgpl = .true. ! surface elevation assigned to 2m pressure (true) or surface pressure (true) + do_lambertian = .false. ! treat surface as Lambertian instead of specular? (all) + lambertian_fixed_angle = .true. ! use fixed angle for Lambertian calculations? (all, do_lambertian only) + rad_down_lin_tau = .true. ! use linear-in-tau approximation? (all) + max_zenith_angle = 75. ! maximum zenith angle to accept (in degrees) (all) + use_q2m = .false. ! use surface humidity? (all) + use_uv10m = .false. ! use u and v 10 meters? (all, used in sea surface emissivity and BRDF models) + use_wfetch = .false. ! use wind fetch (length of water wind has blown over in m) (all, used in sea surface BRDF models) + use_water_type = .false. ! use water type (0 = fresh, ocean = 1) (all, used in surface BRDF atlas and models) + addrefrac = .true. ! enable atmospheric refraction (all) + plane_parallel = .false. ! treat atmosphere as strictly plane-parallel? (all) + use_salinity = .false. ! use ocean salinity (in practical salinity units) (MW, FASTEM 4-6 and TESSEM2) + cfrac_data = .false. ! specify cloud fraction? (VIS/IR/MW) + clw_data = .false. ! specify non-precip cloud liquid water? (VIS/IR/MW) + rain_data = .false. ! specify precip cloud liquid water? (VIS/IR/MW) + ciw_data = .false. ! specify non-precip cloud ice? (VIS/IR) + snow_data = .false. ! specify precip cloud fluffy ice? (VIS/IR/MW) + graupel_data = .false. ! specify precip cloud soft-hail? (VIS/IR/MW) + hail_data = .false. ! specify precip cloud hard-hail? (VIS/IR/MW) + w_data = .false. ! specify vertical velocity (used for classifying clouds as cumulus versus stratus)? (VIS/IR) + clw_scheme = 2 ! Liebe (1) or Rosenkranz (2) or TKC (3) (MW, clear-sky only) + clw_cloud_top = 322.0_r8 ! lower hPa limit for clw calculations; clw at lower pressures is ignored (MW, clear-sky only) + fastem_version = 6 ! MW sea-surface emissivity model to use (0-6). 1-6: FASTEM version 1-6, 0: TESSEM2 (MW) + supply_foam_fraction = .false. ! include foam fraction in skin%foam_fraction? FASTEM only. (MW) + use_totalice = .false. ! Specify totalice instead of precip/non-precip ice (MW, RTTOV-SCATT only) + use_zeeman = .false. ! Simulate Zeeman effect (MW) + cc_threshold = 0.001_r8 ! if effective cloud fraction below this value, treat simulation as clear-sky (MW, 0-1, RTTOV-SCATT only) + ozone_data = .false. ! specify ozone profiles? (VIS/IR) + co2_data = .false. ! specify CO2 profiles? (VIS/IR) + n2o_data = .false. ! specify N2O profiles? (VIS/IR) + co_data = .false. ! specify CO profiles? (VIS/IR) + ch4_data = .false. ! specify CH4 profiles? (VIS/IR) + so2_data = .false. ! specify SO2 profiles? (VIS/IR) + addsolar = .false. ! 
include solar calculations (VIS/IR) + rayleigh_single_scatt = .true. ! if false, disable Rayleigh (VIS, addsolar only) + do_nlte_correction = .false. ! if true include non-LTE bias correction for hires sounders (VIS/IR) + solar_sea_brdf_model = 2 ! JONSWAP (1) or Elfouhaily (2) (VIS) + ir_sea_emis_model = 2 ! ISEM (1) or IREMIS (2) (IR) + use_sfc_snow_frac = .false. ! use sfc snow cover (0-1) (IR, used in emis atlas) + add_aerosl = .false. ! enable aerosol scattering (VIS/IR) + aerosl_type = 1 ! OPAC (1) or CAMS (2) (VIS/IR, add_aerosl only) + add_clouds = .true. ! enable cloud scattering (VIS/IR) + ice_scheme = 1 ! SSEC (1) or Baran 2014 (2) or Baran 2018 (3) (VIS/IR, add_clouds only) + use_icede = .false. ! use ice effective diameter (IR, add_clouds, ice_scheme = 1) + idg_scheme = 2 ! Ou and Liou (1), Wyser (2), Boudala (3), McFarquar (2003) (VIS/IR, add_clouds only, ice_scheme = 1) + user_aer_opt_param = .false. ! specify aerosol scattering properties (VIS/IR, add_clouds only) + user_cld_opt_param = .false. ! specify cloud scattering properties (VIS/IR, add_clouds only) + grid_box_avg_cloud = .true. ! cloud concentrations are grid box averages. False = concentrations for cloudy layer only. (VIS/IR, add_clouds and not user_cld_opt_param only) + cldcol_threshold = -1.0_r8 ! threshold for cloud stream weights for scattering (VIS/IR, add_clouds only) + cloud_overlap = 1 ! default: 1 (max/random overlap) + cc_low_cloud_top = 750.0_r8 ! cloud fraction maximum in layers from ToA down to specified hPa (VIS/IR, cloud_overlap only) + ir_scatt_model = 2 ! DOM (1) or Chou-scaling (2) (IR, add_clouds or add_aerosl only) + vis_scatt_model = 1 ! DOM (1), single scat (2), or MFASIS (3) (VIS, addsolar and add_clouds or add_aerosl only) + dom_nstreams = 8 ! number of streams to use with DOM (VIS/IR, add_clouds or add_aerosl and DOM model only, must be >= 2 and even) + dom_accuracy = 0.0_r8 ! convergence criteria for DOM (VIS/IR, add_clouds or addaerosol and DOM model only) + dom_opdep_threshold = 0.0_r8 ! DOM ignores layers below this optical depth (VIS/IR, add_clouds or addaerosol and DOM model only) + addpc = .false. ! do principal component calculations? (VIS/IR) + npcscores = -1 ! number of PC scores to use (VIS/IR, addpc only) + addradrec = .false. ! reconstruct the radiances (VIS/IR, addpc only) + ipcreg = 1 ! number of predictors, see Table 29 of user guide (VIS/IR, addpc only) + use_htfrtc = .false. ! use HTFRTC of Havemann 2018 + htfrtc_n_pc = -1 ! number of PCs to use (HTFRTC only, max 300) + htfrtc_simple_cloud = .false. ! use simple-cloud scattering (HTFRTC only) + htfrtc_overcast = .false. ! calculate overcast radiances (HTFRTC only) + wfetc_value = 100000.0_r8 ! Real wfetc Wind fetch (m) (length of water over which the wind has blown, typical + ! value 100000m for open ocean). Used if wfetc not provided by model. + / References ---------- - `RTTOV user guide `__ -Private components ------------------- - -=============================== =============================== -*use obs_def_rttov_mod, only :* initialize_module -\ initialize_rttov_sensor_runtime -\ initialize_rttov_sensor_runtime -=============================== =============================== - -| - -.. container:: routine - - *call initialize_module()* - -.. 
container:: indent1 - - Reads the namelist, allocates space for the auxiliary data associated wtih satellite observations, initializes the - constants used in subsequent computations (possibly altered by values in the namelist), and prints out the list of - constants and the values in use. - -| - -.. container:: routine - - *call initialize_rttov_sensor_runtime(sensor,ens_size,nlevels)* - :: - - type(rttov_sensor_type), pointer :: sensor - integer, intent(in) :: ens_size - integer, intent(in) :: nlevels - -.. container:: indent1 - - Initialize a RTTOV sensor runtime. A rttov_sensor_type instance contains information such as options and coefficients - that are initialized in a "lazy" fashion only when it will be used for the first time. - - ============ =============================================== - ``sensor`` The sensor type to be initialized - ``ens_size`` The size of the ensemble - ``nlevels`` The number of vertical levels in the atmosphere - ============ =============================================== - -| Error codes and conditions diff --git a/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh b/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh deleted file mode 100755 index 3cba5115f3..0000000000 --- a/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh +++ /dev/null @@ -1,270 +0,0 @@ -#!/bin/sh -# -# updated 4 Dec 2020 - -echo -echo 'These converters require either the HDF-EOS or the HDF-EOS5 libraries.' -echo 'These libraries are, in general, not compatible with each other.' -echo 'There is a compatibility library that "provides uniform access to HDF-EOS2' -echo 'and 5 files though one set of API calls." which sounds great.' -echo -echo 'The HDF-EOS5 libraries are installed on the supercomputers, and are' -echo 'available via MacPorts (hdfeos5). The HDF-EOS libraries are older and' -echo 'are much less available. Consequently, I have used the HDF-EOS5 interfaces' -echo 'where possible.' -echo -echo 'If the he5_hdfeos libraries are installed on your system, you are in luck.' -echo 'On our system, it has been useful to define variables like:' -echo -echo 'setenv("NCAR_INC_HDFEOS5", "/glade/u/apps/ch/opt/hdf-eos5/5.1.16/intel/19.0.5/include")' -echo 'setenv("NCAR_LDFLAGS_HDFEOS5","/glade/u/apps/ch/opt/hdf-eos5/5.1.16/intel/19.0.5/lib")' -echo 'setenv("NCAR_LIBS_HDFEOS5","-Wl,-Bstatic -lGctp -lhe5_hdfeos -lsz -lz -Wl,-Bdynamic")' -echo 'which we then use in when compiling convert_airs_L2' -echo -echo 'If you need to build the HDF-EOS and/or the HDF-EOS5 libraries, you may ' -echo 'try to follow the steps outlined in this script. They will need to be ' -echo 'modified for your system.' -echo -echo 'You will have to edit this script, first, by removing the early exit ...' 
-echo - -exit - -# ------------------------------------------------------------------------------ -## -## The NASA Earthdata Data Access Services portal serves as the download site: -## https://wiki.earthdata.nasa.gov/display/DAS/Toolkit+Downloads -## -## The following packages were downloaded: -## -## zlib-1.2.11.tar.gz -## jpegsrc.v9b.tar.gz -## hdf-4.2.13.tar.gz -## HDF-EOS2.20v1.00.tar.Z -## HDF-EOS2.20v1.00_TestDriver.tar.Z -## szip-2.1.1.tar.gz -## hdf5-1.8.19.tar.gz -## HDF-EOS5-5.1.16.tar.Z -## HDF-EOS5-5.1.16_TESTDRIVERS.tar.Z -## -## The documentation files were downloaded: -## -## HDF-EOS_REF.pdf -## HDF-EOS_UG.pdf -## HDF-EOS5_REF.pdf -## HDF-EOS5_UG.pdf -## -## Some other useful websites for HDF and HDF-related products are: -## https://portal.hdfgroup.org/display/support/Downloads -## https://hdfeos.org/software/library.php#HDF-EOS2 -## https://opensource.gsfc.nasa.gov/projects/HDF-EOS2/index.php - -# Change this to 'true' to uncompress the packages. You only need to uncompress them -# once, but you may need to run this script several times. - -if ( `false` ); then - - for i in zlib-1.2.11.tar.gz \ - jpegsrc.v9b.tar.gz \ - hdf-4.2.13.tar.gz \ - szip-2.1.1.tar.gz \ - hdf5-1.8.19.tar.gz - do - tar -zxvf $i - done - - uncompress HDF-EOS2.20v1.00.tar.Z - uncompress HDF-EOS2.20v1.00_TestDriver.tar.Z - uncompress HDF-EOS5.1.16.tar.Z - uncompress HDF-EOS5.1.16_TESTDRIVERS.tar.Z - - tar -xvf HDF-EOS2.20v1.00.tar - tar -xvf HDF-EOS5-5.1.16.tar - -fi - -# ------------------------------------------------------------------------------ -# start with smaller libs, work up to HDF-EOS. -# ------------------------------------------------------------------------------ - -# set the installation location of the final libraries -H4_PREFIX=/glade/work/${USER}/local/hdf-eos -H5_PREFIX=/glade/work/${USER}/local/hdf-eos5 - -# make the target install dirs -mkdir -p ${H4_PREFIX}/{lib,bin,include,man,man/man1,share} -mkdir -p ${H5_PREFIX}/{lib,bin,include,man,man/man1,share} - -# record the build script and environment -echo > ${H4_PREFIX}/build_environment_log.txt -echo 'the build script' >> ${H4_PREFIX}/build_environment_log.txt -cat $0 >> ${H4_PREFIX}/build_environment_log.txt -echo >> ${H4_PREFIX}/build_environment_log.txt -echo '=====================' >> ${H4_PREFIX}/build_environment_log.txt -echo 'the build environment' >> ${H4_PREFIX}/build_environment_log.txt -echo >> ${H4_PREFIX}/build_environment_log.txt -env | sort >> ${H4_PREFIX}/build_environment_log.txt - -# start with smaller libs, work up to HDF-EOS. - -echo '' -echo '======================================================================' -if [ -f ${H4_PREFIX}/lib/libz.a ]; then - echo 'zlib already exists - no need to build.' -else - - export CC='icc' - export CFLAGS='-fPIC' - export FFLAGS='-fPIC' - - echo 'building zlib at '`date` - cd zlib-1.2.11 || exit 1 - ./configure --prefix=${H4_PREFIX} || exit 1 - make clean || exit 1 - make || exit 1 - make test || exit 1 - make install || exit 1 - cd .. -fi - - -echo '' -echo '======================================================================' -if [ -f ${H4_PREFIX}/lib/libsz.a ]; then - echo 'szip already exists - no need to build.' -else - - export CC='icc' - export CFLAGS='-fPIC' - export FFLAGS='-fPIC' - - echo 'building szip at '`date` - cd szip-2.1.1 || exit 1 - ./configure --prefix=${H4_PREFIX} || exit 1 - make clean || exit 1 - make || exit 1 - make test || exit 1 - make install || exit 1 - cd .. 
-fi - -echo '' -echo '======================================================================' -# This is peculiar - on Cheyenne: -# If I build with --libdir=H4_PREFIX, subsequent linking works. -# If I build with --libdir=H4_PREFIX/lib, subsequent linking FAILS with an -# undefined reference to 'rpl_malloc'. -if [ -f ${H4_PREFIX}/lib/libjpeg.a ]; then - echo 'jpeg already exists - no need to build.' -else - echo 'buiding jpeg at '`date` - cd jpeg-9b || exit 2 - ./configure CC='icc -Df2cFortran' CFLAGS='-fPIC' FFLAGS='-fPIC' \ - --prefix=${H4_PREFIX} || exit 2 - make clean || exit 2 - make || exit 2 - make test || exit 2 - make install || exit 2 - cd .. - cd ${H4_PREFIX} - \ln -s lib/libjpeg* . - cd - -fi - -echo '' -echo '======================================================================' -if [ -f ${H4_PREFIX}/lib/libmfhdf.a ]; then - echo 'hdf4 already exists - no need to build.' -else - echo 'building hdf4 at '`date` - # (apparently there is no 'make test') - - cd hdf-4.2.13 || exit 3 - ./configure CC='icc -Df2cFortran' CFLAGS='-fPIC' FFLAGS='-fPIC' \ - --prefix=${H4_PREFIX} \ - --disable-netcdf \ - --with-zlib=${H4_PREFIX} \ - --with-jpeg=${H4_PREFIX} || exit 3 - make clean || exit 3 - make || exit 3 - make install || exit 3 - cd .. -fi - -echo '' -echo '======================================================================' -if [ -f ${H4_PREFIX}/lib/libhdfeos.a ]; then - echo 'hdf-eos already exists - no need to build.' -else - echo 'building HDF-EOS2.20v1.00 at '`date` - echo 'after expanding the .tar.gz file, the source is in "hdfeos"' - cd hdfeos || exit 4 - # (the CC options are crucial to provide Fortran interoperability) - ./configure CC='icc -Df2cFortran' CFLAGS='-fPIC' FFLAGS='-fPIC' \ - --prefix=${H4_PREFIX} \ - --enable-install-include \ - --with-zlib=${H4_PREFIX} \ - --with-jpeg=${H4_PREFIX} \ - --with-hdf=${H4_PREFIX} || exit 4 - make clean || exit 4 - make || exit 4 - make install || exit 4 - cd .. -fi - -#------------------------------------------------------------------------------- -# HDF-EOS5 record the build script and environment -#------------------------------------------------------------------------------- - -echo > ${H5_PREFIX}/build_environment_log.txt -echo 'the build script' >> ${H5_PREFIX}/build_environment_log.txt -cat $0 >> ${H5_PREFIX}/build_environment_log.txt -echo >> ${H5_PREFIX}/build_environment_log.txt -echo '=====================' >> ${H5_PREFIX}/build_environment_log.txt -echo 'the build environment' >> ${H5_PREFIX}/build_environment_log.txt -echo >> ${H5_PREFIX}/build_environment_log.txt -env | sort >> ${H5_PREFIX}/build_environment_log.txt - -echo '======================================================================' -if [ -f ${H5_PREFIX}/lib/libhdf5.a ]; then - echo 'hdf5 already exists - no need to build.' -else - echo 'building hdf5 at '`date` - - cd hdf5-1.8.19 || exit 3 - ./configure CC='icc -Df2cFortran' CFLAGS='-fPIC' FFLAGS='-fPIC' \ - --prefix=${H5_PREFIX} \ - --enable-fortran \ - --enable-fortran2003 \ - --enable-production \ - --with-zlib=${H4_PREFIX} || exit 3 - make clean || exit 3 - make || exit 3 - make check || exit 3 - make install || exit 3 - make check-install || exit 3 - cd .. -fi - -echo '' -echo '======================================================================' -if [ -f ${H5_PREFIX}/lib/libhe5_hdfeos.a ]; then - echo 'hdf-eos5 already exists - no need to build.' 
-else - echo 'building HDF-EOS5.1.16 at '`date` - echo 'after expanding the .tar.Z file, the source is in "hdfeos5"' - cd hdfeos5 || exit 4 - # (the CC options are crucial to provide Fortran interoperability) - ./configure CC='icc -Df2cFortran' CFLAGS='-fPIC' FFLAGS='-fPIC' \ - --prefix=${H5_PREFIX} \ - --enable-install-include \ - --with-zlib=${H4_PREFIX} \ - --with-hdf5=${H5_PREFIX} || exit 4 - make clean || exit 4 - make || exit 4 - make check || exit 4 - make install || exit 4 - cd .. -fi - -exit 0 diff --git a/observations/obs_converters/AIRS/L1_AMSUA_to_netcdf.f90 b/observations/obs_converters/AIRS/L1_AMSUA_to_netcdf.f90 deleted file mode 100644 index 4f6ce23293..0000000000 --- a/observations/obs_converters/AIRS/L1_AMSUA_to_netcdf.f90 +++ /dev/null @@ -1,157 +0,0 @@ -! DART software - Copyright UCAR. This open source software is provided -! by UCAR, "as is", without charge, subject to all terms of use at -! http://www.image.ucar.edu/DAReS/DART/DART_download - -program L1_AMSUA_to_netcdf - -use utilities_mod, only : initialize_utilities, register_module, & - error_handler, finalize_utilities, E_ERR, E_MSG, & - find_namelist_in_file, check_namelist_read, & - do_nml_file, do_nml_term, set_filename_list, & - nmlfileunit, get_next_filename - -use netcdf_utilities_mod, only : nc_create_file, nc_begin_define_mode, & - nc_end_define_mode, nc_close_file - -use amsua_netCDF_support_mod, only : define_amsua_dimensions, & - define_amsua_variables, & - fill_amsua_variables - -use amsua_bt_mod, only : amsua_bt_granule, amsua_bt_rdr, & - AMSUA_BT_GEOXTRACK, AMSUA_BT_GEOTRACK, AMSUA_BT_CHANNEL, & - AMSUA_BT_CALXTRACK, AMSUA_BT_SPACEXTRACK, AMSUA_BT_BBXTRACK, & - AMSUA_BT_WARMPRTA11, AMSUA_BT_WARMPRTA12, AMSUA_BT_WARMPRTA2 - -implicit none - -! ---------------------------------------------------------------------- -! Declare local parameters -! ---------------------------------------------------------------------- - -! version controlled file description for error handling, do not edit -character(len=*), parameter :: source = 'L1_AMSUA_to_netcdf.f90' -character(len=*), parameter :: revision = '' -character(len=*), parameter :: revdate = '' - -type(amsua_bt_granule) :: amsua_bt_gran - -integer :: iunit, io, ncid -integer :: chan - -! ---------------------------------------------------------------------- -! Declare namelist parameters -! ---------------------------------------------------------------------- - -character(len=256) :: file_name = '' -character(len=256) :: outputfile = 'amsua_bt_granule.nc' -integer :: track = 1 ! 1-based index along track -integer :: xtrack = 0 ! 1-based index across-track - -namelist /L1_AMSUA_to_netcdf_nml/ file_name, outputfile, & - track, xtrack - -! ---------------------------------------------------------------------- -! start of executable program code -! ---------------------------------------------------------------------- - -call initialize_utilities('L1_AMSUA_to_netcdf') -call register_module(source,revision,revdate) - -call error_handler(E_ERR,source,'ROUTINE NOT USABLE.', & - text2='Routine barely started. Needs a lot of work and expect', & - text3='complications with simultaneous HDF4, netCDF, and HDF5.') - -!---------------------------------------------------------------------- -! 
Read the namelist -!---------------------------------------------------------------------- - -call find_namelist_in_file('input.nml', 'L1_AMSUA_to_netcdf_nml', iunit) -read(iunit, nml = L1_AMSUA_to_netcdf_nml, iostat = io) -call check_namelist_read(iunit, io, 'L1_AMSUA_to_netcdf_nml') - -! Record the namelist values used for the run ... -if (do_nml_file()) write(nmlfileunit, nml=L1_AMSUA_to_netcdf_nml) -if (do_nml_term()) write( * , nml=L1_AMSUA_to_netcdf_nml) - - -!if (iargc().ne.3) then - print *, "This code extracts a single profile from a specified" - print *, " input file to stdout. It requires exactly three " - print *, "arguments." - print *, " 1) scan line number [1, 45]" - print *, " 2) field-of-view number [1, 30]" - print *, " 3) file name" -! STOP -! end if - -if (track.lt.1.OR.track.gt.45) then - print *, "Error: along-track scan line number [1, 45]" - print *, "got ", track - STOP -endif - -if (xtrack.lt.1.OR.xtrack.gt.30) then - print *, "Error: second argument must be scan line number [1, 30]" - print *, "got ", xtrack - STOP -endif - -call amsua_bt_rdr(file_name, amsua_bt_gran) - -! Each AMSU-A scan has 2 "state"s, indicating whether the AMSU-A1 and -! AMSU-A2 instruments were in science mode when the data -! was taken and whether the data was successfully transmitted. - -if (amsua_bt_gran%state1(track).ne.0) then - if (amsua_bt_gran%state1(track).EQ.1) then - print *, "Warning, AMSU-A1 state for this profile is SPECIAL" - else if (amsua_bt_gran%state1(track).EQ.2) then - print *, "Warning, AMSU-A1 state for this profile is ERRONEOUS" - else if (amsua_bt_gran%state1(track).EQ.3) then - print *, "Warning, AMSU-A1 state for this profile is MISSING" - else - print *, "Warning, AMSU-A1 state for this profile is UNKNOWN" - endif - - print *, "NOT PROCESS" - -endif - -if (amsua_bt_gran%state2(track).ne.0) then - if (amsua_bt_gran%state2(track).EQ.1) then - print *, "Warning, AMSU-A2 state for this profile is SPECIAL" - else if (amsua_bt_gran%state2(track).EQ.2) then - print *, "Warning, AMSU-A2 state for this profile is ERRONEOUS" - else if (amsua_bt_gran%state2(track).EQ.3) then - print *, "Warning, AMSU-A2 state for this profile is MISSING" - else - print *, "Warning, AMSU-A2 state for this profile is UNKNOWN" - endif - - print *, "NOT PROCESS" - -endif - -print *, "# AMSU Brightness Temperatures (Kelvins)" -print *, "# Channels 1-15" -print *, "# -9999 flags bad value" - -do chan = 1, AMSUA_BT_CHANNEL - write(*, "(f8.2)") amsua_bt_gran%brightness_temp(chan,xtrack,track) -enddo - -!------------------------------------------------------------------------------- -! convert the granule to netCDF -!------------------------------------------------------------------------------- - -ncid = nc_create_file( outputfile, source) -call nc_begin_define_mode( ncid, source) -call define_amsua_dimensions(ncid, source) -call define_amsua_variables( amsua_bt_gran, ncid, source) -call nc_end_define_mode( ncid, source) -call fill_amsua_variables( amsua_bt_gran, ncid, source) -call nc_close_file( ncid, source) - -call finalize_utilities() - -end program L1_AMSUA_to_netcdf diff --git a/observations/obs_converters/AIRS/README.rst b/observations/obs_converters/AIRS/README.rst index 117656f40a..7fcaf77d6b 100644 --- a/observations/obs_converters/AIRS/README.rst +++ b/observations/obs_converters/AIRS/README.rst @@ -1,137 +1,50 @@ AIRS and AMSU ============= -.. caution:: +The AIRS directory contains both the AIRS and AMSU-A observation converters. 
+AIRS refers to Atmospheric Infrared Sounder (AIRS) Level 2 observations.
+AMSU-A refers to Advanced Microwave Sounding Unit (AMSU-A) L1B Brightness Temperatures.

-   Before you begin: Installing the libraries needed to read these files can be
-   fairly troublesome. The NASA Earthdata Data Access Services website is the
-   `download site `__
-   for the necessary libraries. An example build script (`AIRS/Build_HDF-EOS.sh`)
-   is intended to provide some guidance.
-
-This directory covers two observation converters:
-
-- :doc:`./convert_airs_L2` for temperature and moisture retrievals.
-
-- :doc:`./convert_amsu_L1` for radiances.
+- :doc:`./convert_airs_L2` is used to convert AIRS temperature and
+  specific humidity vertical profile observations.
+- :doc:`./convert_amsu_L1` is used to convert AMSU-A radiances (brightness temperature)
+  observations.

Both converters are in the AIRS directory because of the complicated history
of the data used to create the AIRS L2 product (which includes some AMSU
observations). Since both datasets are HDF - it was believed that some of the
routines could be used by both converters. Alas, that has not proven to be the case.

-Atmospheric Infrared Sounder (AIRS) Level 2 observations
---------------------------------------------------------
-
-The `AIRS `__ instrument is an Atmospheric
-Infrared Sounder flying on the `Aqua `__
-spacecraft. Aqua is one of a group of satellites flying close together
-in a polar orbit, collectively known as the “A-train”. The programs in
-this directory help to extract the data from the distribution files and
-put them into DART observation sequence (obs_seq) file format.
-
-AIRS data includes atmospheric temperature in the troposphere, derived
-moisture profiles, land and ocean surface temperatures, surface
-emissivity, cloud fraction, cloud top height, and ozone burden in the
-atmosphere.
-
-
-Advanced Microwave Sounding Unit (AMSU-A) L1B Brightness Temperatures
----------------------------------------------------------------------
-
-The *DART/observations/obs_converters/AIRS* directory contains the code
-to convert the L1B AMSU-A Brightness Temperatures in HDF-EOS2 format to
-the DART observation sequence file format.
-
-There is a little bit of confusing history to be aware of for AMSU/A:
-
-https://en.wikipedia.org/wiki/Advanced_microwave_sounding_unit#History
-
-AMSU/A was flown on NOAA 15-17. It is also on the Aqua satellite (that
-also houses AIRS) as well as the European MetOp. It has been replaced by
-ATMS on NOAA-20.

Dependencies
------------

-Both *convert_airs_L2* and *convert_amsu_L1* require the HDF-EOS libraries.
-*convert_amsu_L1* also requires HDF5 support because of
-the RTTOV libraries. HDF5 is incompatible with HDF-EOS, so a two-step
-conversion is necessary for the AMSU observations.
-The data must be converted from HDF to netCDF
-(which can be done without HDF5) and then the netCDF files can be
-converted to DART radiance observation format - which requires
-``obs_def_rttov_mod.f90``, which depends on HDF5. To simplify things,
-An example build script (*DART/observations/obs_converters/AIRS/Build_HDF-EOS.sh*)
-is supplied and may provide some guidance on downloading and building
-the libraries required by NASA.
-
-The NASA Earthdata Data Access Services website is the `download
-site `__,
-at press time, the following packages were required to build HDF-EOS
-Release v2.20:
-
-- hdf-4.2.13.tar.gz
-- HDF-EOS2.20v1.00.tar.Z
-- HDF-EOS2.20v1.00_TestDriver.tar.Z
-- HDF-EOS_REF.pdf
-- HDF-EOS_UG.pdf
-- jpegsrc.v9b.tar.gz
-- zlib-1.2.11.tar.gz
-
-Similarly for HDF-EOS5 Release v5.1.16:
-
-- HDF-EOS5.1.16.tar.Z
-- HDF-EOS5.1.16_TESTDRIVERS.tar.Z
-- HDF-EOS5_REF.pdf
-- HDF-EOS5_UG.pdf
-- hdf5-1.8.19.tar.gz
-- szip-2.1.1.tar.gz
-
-*BUILD_HDF-EOS.sh* may help you build these libraries. You *will* have to modify it for your
-system, and you *probably will* have to iterate on that process. The
-script takes the stance that if you have to build HDF4, HDF-EOS, HDF5 …
-you might as well build HDF-EOS5 too. The HDF-EOS5 is entirely optional.
-The HDF5 will be needed by RTTOV.
-
-Converting from HDF4 to netCDF
-------------------------------
+Both ``convert_airs_L2`` and ``convert_amsu_L1`` require the HDF-EOS2 libraries,
+which, in turn, require HDF4. HDF4 is available on Derecho using the ``module load hdf``
+command.

-There are multiple ways to convert from HDF4 to netCDF. The HDF-EOS
-Tools and Information Center provides binaries for several common
-platforms as well as source code should you need to build your own.
+The ``convert_amsu_L1`` script also requires the RTTOV libraries.

-HDF4 CF CONVERSION TOOLKIT
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following mkmf.templates, for the gfortran and intel compilers respectively,
+are available in DART/build_templates, and they have been used to compile
+the AIRS and AMSUA observation converters on Derecho. They include the
+proper library paths for both HDF-EOS2 and RTTOV. The HDF-EOS2 library
+required a patch to work with DART observation converters.
+For details on the patch, see `issue #590 `_.

-The HDF-EOS Tools and Information Center provides the `HDF4 CF
-CONVERSION TOOLKIT `__
+.. code:: text
+
+   mkmf.template.AIRS.gfortran
+   mkmf.template.AIRS.intel

-   The HDF4 CF (H4CF) Conversion Toolkit can access various NASA HDF4
-   external and HDF-EOS2 external files by following the CF conventions
-   external. The toolkit includes a conversion library for application
-   developers and a conversion utility for NetCDF users. We have
-   translated the information obtained from various NASA HDF-EOS2 and
-   HDF4 files and the corresponding product documents into the
-   information required by CF into the conversion library. We also have
-   implemented an HDF4-to-NetCDF (either NetCDF-3 or NetCDF-4 classic)
-   conversion tool by using this conversion library. In this web page,
-   we will first introduce how to build the conversion library and the
-   tool from the source. Then, we will provide basic usage of the tool
-   and the conversion library APIs. The information for the supported
-   NASA HDF-EOS2 and HDF4 products and visualization screenshots of some
-   converted NetCDF files will also be presented.
+In addition to gfortran and intel compiled hdf-eos as mentioned above,
+Derecho also includes nvhpc and cray compiled hdf-eos libraries, with
+the paths provided below.

-If you download a binary, it’s a good habit to verify the checksum. The download page has a link
-to a .pdf that has the known checksums. `Here’s how to generate the
-checksum `__.
-Be aware that when I downloaded the file (via Chrome or ‘wget’) on an
-OSX system, the checksum did not match. When I downloaded the file on a
-linux system, the checksum *did* match.
+.. 
code:: text -If you download the source, the tar file comes with a ``README`` and an ``INSTALL``. Please become -familiar with them. DART also has a build script: -``AIRS/shell_scripts/Build_HDF_to_netCDF.csh`` that you can customize -after you read the ``INSTALL`` document. + /glade/campaign/cisl/dares/libraries/hdf-eos_intel + /glade/campaign/cisl/dares/libraries/hdf-eos_gfortran + /glade/campaign/cisl/dares/libraries/hdf-eos_nvhpc + /glade/campaign/cisl/dares/libraries/hdf-eos_cray diff --git a/observations/obs_converters/AIRS/airs_JPL_mod.f90 b/observations/obs_converters/AIRS/airs_JPL_mod.f90 index 9e4e8acb61..19f6468be4 100644 --- a/observations/obs_converters/AIRS/airs_JPL_mod.f90 +++ b/observations/obs_converters/AIRS/airs_JPL_mod.f90 @@ -2,7 +2,7 @@ ! adapted from original JPL code, example AIRS readers ! -! updated for version 6 of the AIRS data formats +! updated for version 6 and 7 of the AIRS data formats ! added fields needed to support radiances ! removed unused items to streamline the code. ! @@ -511,7 +511,7 @@ subroutine airs_ret_rdr(file_name, airs_ret_gran, ver) print *, "Error ", statn, " reading field ", & "TAirStdErr" - if (ver == 6) then + if (ver .eq. 6 .or. ver .eq. 7) then edge(3) = 45 edge(2) = 30 edge(1) = 28 @@ -543,7 +543,7 @@ subroutine airs_ret_rdr(file_name, airs_ret_gran, ver) print *, "Error ", statn, " reading field ", & "TSurfAirErr" - if (ver == 6) then + if (ver .eq. 6 .or. ver .eq. 7) then edge(2) = 45 edge(1) = 30 statn = SWrdfld(swid, "TSurfAir_QC", & @@ -621,7 +621,7 @@ subroutine airs_ret_rdr(file_name, airs_ret_gran, ver) print *, "Error ", statn, " reading field ", & "H2OMMRStdErr" - if (ver == 6) then + if (ver .eq. 6 .or. ver .eq. 7) then edge(3) = 45 edge(2) = 30 edge(1) = 14 @@ -653,7 +653,7 @@ subroutine airs_ret_rdr(file_name, airs_ret_gran, ver) print *, "Error ", statn, " reading field ", & "totH2OStdErr" - if (ver == 6) then + if (ver .eq. 6 .or. ver .eq. 7) then edge(2) = 45 edge(1) = 30 statn = SWrdfld(swid, "totH2OStd_QC", & diff --git a/observations/obs_converters/AIRS/airs_obs_mod.f90 b/observations/obs_converters/AIRS/airs_obs_mod.f90 index aca52d2b38..c1ba55e665 100644 --- a/observations/obs_converters/AIRS/airs_obs_mod.f90 +++ b/observations/obs_converters/AIRS/airs_obs_mod.f90 @@ -268,7 +268,7 @@ subroutine make_obs_sequence ( seq, granule, lon1, lon2, lat1, lat2, & vert_Q_loop: do ivert=istart,humidity_top_index - if ((version == 6) .and. (granule%H2OMMRStd_QC(ivert, icol, irow) > 0)) cycle vert_Q_loop + if ((version.eq.6 .or. version.eq.7) .and. (granule%H2OMMRStd_QC(ivert, icol, irow) > 0)) cycle vert_Q_loop qqc = 0 ! if we get here, the quality control is 'Best' == 0 diff --git a/observations/obs_converters/AIRS/convert_airs_L2.nml b/observations/obs_converters/AIRS/convert_airs_L2.nml index 497de7da07..3aaf0f9661 100644 --- a/observations/obs_converters/AIRS/convert_airs_L2.nml +++ b/observations/obs_converters/AIRS/convert_airs_L2.nml @@ -11,6 +11,6 @@ cross_track_thin = 0 along_track_thin = 0 use_NCEP_errs = .false. - version = 6 + version = 7 / diff --git a/observations/obs_converters/AIRS/convert_airs_L2.rst b/observations/obs_converters/AIRS/convert_airs_L2.rst index 5703a5985a..8fbc451fd6 100644 --- a/observations/obs_converters/AIRS/convert_airs_L2.rst +++ b/observations/obs_converters/AIRS/convert_airs_L2.rst @@ -1,43 +1,35 @@ Program ``convert_airs_L2`` =========================== -.. caution:: - - Before you begin: Installing the libraries needed to read these files can be - fairly troublesome. 
The NASA Earthdata Data Access Services website is the
-   `download site `__
-   for the necessary libraries. An example build script (`AIRS/Build_HDF-EOS.sh`)
-   is intended to provide some guidance.
-
-
Overview
--------

-The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second
-Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination with
+The Atmospheric Infrared Sounder `(AIRS) `_ is a facility
+instrument aboard the second Earth Observing System (EOS) polar-orbiting platform
+`Aqua `_. Aqua is one of a group of satellites flying close
+together in a polar orbit, collectively known as the “A-train”. In combination with
the Advanced Microwave Sounding Unit (AMSU) and the Humidity Sounder for Brazil (HSB),
AIRS constitutes an innovative atmospheric sounding group of visible, infrared, and
-microwave sensors. AIRS data will be generated continuously. Global coverage will
-be obtained twice daily (day and night) on a 1:30pm sun synchronous orbit from a
-705-km altitude.
+microwave sensors. AIRS data will be generated continuously.

The AIRS Standard Retrieval Product consists of retrieved estimates of cloud
and surface properties, plus profiles of retrieved temperature, water vapor,
ozone, carbon monoxide and methane. Estimates of the errors associated with these
quantities will also be part of the Standard Product. The temperature profile
-vertical resolution is 28 levels total between 1100 mb and 0.1 mb, while moisture
-profile is reported at 14 atmospheric layers between 1100 mb and 50 mb. The
+vertical resolution is 28 levels total between 1100 and 0.1 hPa, while moisture
+profile is reported at 14 atmospheric layers between 1100 hPa and 50 hPa. The
horizontal resolution is 50 km. An AIRS granule has been set as 6 minutes of data,
-30 footprints cross track by 45 lines along track. The Shortname for this product
-is AIRX2RET. (AIRS2RET is the same product but without the AMSU data.)
+30 footprints cross track by 45 lines along track. There are 240 granules per day,
+with an orbit repeat cycle of approximately 16 days.

-Atmospheric Infrared Sounder (AIRS) Level 2 observations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Overview of L1-L3 Atmospheric Infrared Sounder (AIRS) Observations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``convert_airs_L2`` converter is designed specifically for
+**temperature and moisture retrievals for L2 observations** only.
+For reference, we provide a brief description of the L1-L3 AIRS data
+products below. For more detailed information, please see the
+`AIRS documentation page `_.

-Several types of AIRS data, with varying levels of processing, are available.
-The following descriptions are taken from the
-`V5_Data_Release_UG `__
-document:

   The L1B data product includes geolocated, calibrated observed microwave,
   infrared and visible/near infrared radiances, as well as Quality Assessment
@@ -55,66 +47,77 @@ document:
   There are three products: daily, 8-day and monthly. Each product provides
   separate ascending (daytime) and descending (nighttime) binned data sets.

-The converter in this directory processes level 2 (L2) data files, using data
-set ``AIRS_DP`` and data product ``AIRX2RET`` or ``AIRS2RET`` without ``HSB``
-(the instrument measuring humidity which failed).
-
-Getting the data currently means putting in a start/stop time at
-`this web page `__.
-The keyword is ``AIRX2RET`` and put in the time range of interest and optionally a
-geographic region.
Each file contains 6 minutes of data, is about 2.3 Megabytes,
-and globally there are 240 files/day (about 550 Megabytes/day). There are additional
-options for getting only particular variables of interest, but the current reader
-expects whole files to be present. Depending on your connection to the internet,
-there are various options for downloading. We have chosen to download a ``wget``
-script which is created by the web page after adding the selected files to a 'cart'
-and 'checking out'. The script has a series of ``wget`` commands which downloads
-each file, one at a time, which is run on the machine where you want the data.
+
+Downloading Atmospheric Infrared Sounder (AIRS) L2 Observations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are several data file types and versions that contain L2
+observations for temperature and moisture profiles. **We recommend the use of
+the AIRS2RET version 7 (AIRS2RETv7) data product.** The ``AIRS2RET`` data (AIRS data only)
+product is preferred to the ``AIRX2RET`` (AIRS/AMSU data) because the radiometric
+noise in several AMSU channels increased (since June 2007), degrading the
+``AIRX2RET`` product. Furthermore, the version 7 product is higher quality than version 6
+because of an improved retrieval algorithm leading to significantly improved RMSE and bias statistics.
+See the `AIRS2RETv7 documentation `_
+for more information.
+
+Although we recommend ``AIRS2RETv7``, the ``convert_airs_L2`` converter is compatible
+with ``AIRS2RET`` and ``AIRX2RET`` versions 5-7. Version 5 is no longer available
+within the GES DISC database. For more information on these data products, see the
+links below:
+
+- `AIRS2RETv6 `_
+- `AIRX2RETv6 `_
+- `AIRX2RETv7 `_
+
+The AIRS data is available from the Goddard Earth Sciences Data and Information
+Services Center (GES DISC), `located here `_. You need
+to create an Earthdata account before you can download data. As an example, to
+access the AIRS2RETv7 data, search on keyword ``AIRS2RET`` and locate
+the AIRS2RET 7.0 data set within your search results. The full name is listed as
+**Aqua/AIRS L2 Standard Physical Retrieval (AIRS-only) V7.0 (AIRS2RET)**. Next, click on the
+``Subset/Get Data`` link within the `Data Access` portion of the webpage. This will
+bring up a separate window that allows you to refine your search results
+by 1) ``Refine range (time)`` and 2) ``Refine region (spatial)``.
+
+There are various options for downloading; however, the most straightforward approach
+for macOS and Linux users is to use the ``wget`` command. The ``download instructions``
+provide the proper wget flags/options. The ``Download Links List`` provides
+the AIRS file list based on your search results.

convert_airs_L2.f90
-------------------

The ``convert_airs_L2`` converter is for **temperature and moisture retrievals** from
-the L2 data. The temperature observations are at the
-corresponding vertical pressure levels. However, the moisture obs are the mean for
-the layer, so the location in the vertical is the midpoint, in log space, of the
-current layer and the layer above it. There is an alternative computation for the
-moisture across the layer which may be more accurate, but requires a forward
-operator subroutine to be written and for the observation to contain metadata.
-The observation could be defined with a layer top, in pressure, and a number of
-points to use for the integration across the layer.
Then the forward operator would
-query the model at each of the N points in the vertical for a given horizontal
-location, and compute the mean moisture value. This code has not been implemented
-yet, and would require a different QTY_xxx to distinguish it from the simple
-location/value moisture obs. See the GPS non-local operator code for an example
-of how this would need to be implemented.
-
-The temperature observations are located on standard levels; there is a single array
-of heights in each file and all temperature data is located on one of these levels.
-The moisture observations, however, are an integrated quantity for the space between
-the levels; in their terminology the fixed heights are 'levels' and the space between
-them are 'layers'. The current converter locates the moisture obs at the midpoint,
-in log space, between the levels.
-
-The hdf files need to be downloaded from the data server, in any manner you choose.
-The converter program reads each hdf granule and outputs a DART obs_seq file
+the L2 data.
+The vertical coordinate is pressure.
+The temperature observations are defined at standard pressure levels (see Overview).
+Those are defined in each file by the array
+'StdPressureLev:L2_Standard_atmospheric&surface_product'.
+Between 2 levels is a "layer".
+A moisture observation is an average across the layer
+and is defined at the midpoint (in log(pressure)) of the layer.
+This choice puts half of the mass of the layer above the midpoint and half below.
+The midpoints are defined in 'H2OPressureLay:L2_Standard_atmospheric&surface_product'.
+
+There is an alternative computation for the moisture across the layer
+which may be more accurate, but requires a forward operator subroutine
+to be written and for the observation converter to include additional metadata
+to support this forward operator.
+For more information, see the Future Plans section below.
+
+The converter program reads each AIRS hdf file granule and outputs a DART obs_seq file
containing up to 56700 observations. Only those with a quality control of 0 (Best)
are kept. The resulting obs_seq files can be merged with the
-:doc:`../../../assimilation_code/programs/obs_sequence_tool/obs_sequence_tool` into
+:ref:`obs sequence tool` into
larger time periods.

-It is possible to restrict the output observation sequence to contain data from a
-region of interest throught the use of the namelist parameters. If you need a region
-that spans the Prime Meridian lon1 can be a larger number than lon2, for example,
-a region from 300 E to 40 E and 60 S to 30 S (some of the South Atlantic),
-would be *lon1 = 300, lon2 = 40, lat1 = -60, lat2 = -30*.
-
-The ``DART/observations/obs_converters/AIRS/shell_scripts`` directory includes scripts
-(``download_L2.sh`` and ``oneday_down.sh``) that make use of the fact that the AIRS data
-is also archived on the NSF NCAR HPSS (tape library) in daily tar files.
-``oneday_down.sh`` has options to download a day of granule files, convert them, merge them
-into daily files, and remove the original data files and repeat the process for any
-specified time period.
+During the execution of the obs converter, it is possible to restrict the output
+observation sequence to contain data from a region of interest through the use of
+the namelist parameters (described in the Namelist section below). If you need a region
+that spans the Prime Meridian, ``lon1`` can be a larger number than ``lon2``.
+For example, a region from 300 E to 40 E and 60 S to 30 S (some of the South Atlantic), +would be ``lon1 = 300``, ``lon2 = 40``, ``lat1 = -60``, ``lat2 = -30``. Namelist @@ -142,7 +145,7 @@ The default values are shown below. More realistic values are provided in cross_track_thin = 0 along_track_thin = 0 use_NCEP_errs = .false. - version = 6 + version = 7 / | @@ -152,54 +155,53 @@ The default values are shown below. More realistic values are provided in +--------------------+------------------------+--------------------------------------------------------------+ | Contents | Type | Description | +====================+========================+==============================================================+ - | l2_files | character(len=256), | A list of one or more names of the HDF file(s) to read, | - | | dimension(512) | NOT including the directory. If multiple files are listed, | - | | | each will be read and the results will be placed in a | - | | | separate file with an output filename constructed based on | - | | | the input filename. | + | l2_files | character(len=256), | A list of one or more names of the HDF file(s) to read. | + | | dimension(512) | If multiple files are listed, each will be read and | + | | | the results will be placed in a separate file with | + | | | an output filename constructed based on the input filename. | +--------------------+------------------------+--------------------------------------------------------------+ | l2_file_list | character(len=256) | The name of an ascii text file which contains one filename | - | | | per line, NOT including the directory. Each file will be | - | | | read and the observations converted into an output file | - | | | where the output filename is based on the input filename. | - | | | Only one of 'l2_files' and 'l2_file_list' can be | - | | | specified. The other must be ' ' (empty). | + | | | per line. Each file will be read and the observations | + | | | converted into an output file where the output filename | + | | | is based on the input filename. | + | | | Only one of 'l2_files' and 'l2_file_list' can be specified. | + | | | The other must be ' ' (empty). | +--------------------+------------------------+--------------------------------------------------------------+ | outputfile | character(len=256) | The name of the output observation sequence file. | +--------------------+------------------------+--------------------------------------------------------------+ - | lon1 | real(r8) | the West-most longitude of interest in degrees. [0.0, 360] | + | lon1 | real(r8) | The West-most longitude of interest in degrees. [0.0, 360] | +--------------------+------------------------+--------------------------------------------------------------+ - | lon2 | real(r8) | the East-most longitude of interest in degrees. [0.0, 360] | + | lon2 | real(r8) | The East-most longitude of interest in degrees. [0.0, 360] | +--------------------+------------------------+--------------------------------------------------------------+ - | lat1 | real(r8) | the South-most latitude of interest in degrees. [-90.0,90.0] | + | lat1 | real(r8) | The South-most latitude of interest in degrees. [-90.0,90.0] | +--------------------+------------------------+--------------------------------------------------------------+ - | lat2 | real(r8) | the North-most latitude of interest in degrees. [-90.0,90.0] | + | lat2 | real(r8) | The North-most latitude of interest in degrees. 
[-90.0,90.0] | +--------------------+------------------------+--------------------------------------------------------------+ - | min_MMR_threshold | real(r8) | The data files contains 'Retrieved Water Vapor Mass Mixing | - | | | Ratio'. This is the minimum threshold, in gm/kg, that will | - | | | be converted into a specific humidity observation. | + | min_MMR_threshold | real(r8) | The data files contain 'Retrieved Water Vapor Mass Mixing | + | | | Ratio'. This is the minimum threshold (g/kg) that will | + | | | be converted into a specific humidity observation (kg/kg). | +--------------------+------------------------+--------------------------------------------------------------+ - | top_pressure_level | real(r8) | The highest pressure level of interest (in mb). | + | top_pressure_level | real(r8) | The highest pressure level of interest (in hPa). | +--------------------+------------------------+--------------------------------------------------------------+ - | cross_track_thin | integer | provides ability to thin the data by keeping every Nth data | + | cross_track_thin | integer | Provides ability to thin the data by keeping every Nth data | | | | value in the cross-track scan. [0,30] | | | | e.g. 3 == keep every third value. 0 is no thinning. | +--------------------+------------------------+--------------------------------------------------------------+ - | along_track_thin | integer | provides ability to thin the data by keeping every Nth data | + | along_track_thin | integer | Provides ability to thin the data by keeping every Nth data | | | | value in the along-track scan. [0,45] | | | | e.g. 4 == keep only every 4th row. 0 is no thinning. | +--------------------+------------------------+--------------------------------------------------------------+ - | use_NCEP_errs | logical | if .true. use the maximum observation error from either the | + | use_NCEP_errs | logical | If .true. use the maximum observation error from either the | | | | granule or the NCEP equivalent (from ``obs_error_mod.f90``) | +--------------------+------------------------+--------------------------------------------------------------+ - | version | integer | The AIRS file format version. | + | version | integer | The AIRS file format version. Version 7 is recommended, but | + | | | the converter is compatible with versions 5-7. | +--------------------+------------------------+--------------------------------------------------------------+ - -Dependencies -~~~~~~~~~~~~ - -See the :doc:`Dependencies Section<./README>` of the AIRS/README. + | Included here are some example values for the l2_files namelist option. + | Version 5 file: ``l2_files = '../data/AIRS.2007.11.01.001.L2.RetStd.v5.2.2.0.G08078150655.hdf'`` + | Version 6 file: ``l2_files = '../data/AIRS.2017.01.01.110.L2.RetStd_IR.v6.0.31.1.G19058124823.hdf'`` + | Version 7 file: ``l2_files = '../data/AIRS.2020.06.15.224.L2.RetStd_IR.v7.0.4.0.G20330033505.hdf'`` Known Bugs ~~~~~~~~~~ @@ -217,5 +219,13 @@ Future Plans ~~~~~~~~~~~~ If a more accurate moisture observation was needed, the observation value could be computed by actually integrating multiple values between the levels. -At this point it doesn't seem necessary. +The observation could be defined with a layer top, in pressure, and a number of +points to use for the integration across the layer. Then the forward operator would +query the model at each of the N points in the vertical for a given horizontal +location, and compute the mean moisture value. 
This code has not been implemented
+yet, and would require a different QTY_xxx to distinguish it from the simple
+location/value moisture obs. The observation converter would also have to bring
+in moisture observation metadata for this forward operator. See the
+GPS non-local operator code (:ref:`gps`) for an example of how this
+would need to be implemented.
diff --git a/observations/obs_converters/AIRS/convert_amsu_L1.rst b/observations/obs_converters/AIRS/convert_amsu_L1.rst
index f3bcd1a57f..2ef3724097 100644
--- a/observations/obs_converters/AIRS/convert_amsu_L1.rst
+++ b/observations/obs_converters/AIRS/convert_amsu_L1.rst
@@ -1,87 +1,158 @@
Program ``convert_amsu_L1``
===========================
-.. caution::
+Overview
+---------
+
+The following is an excerpt from the AIRS L1B AMSU-A documentation.
+The complete documentation provided by the Goddard Earth Sciences Data
+and Information Services Center `(GES DISC) `_
+is available within the Documentation->README Document, `found here `_.
+
+The Atmospheric Infrared Sounder (AIRS) Version 5 Level 1B Advanced Microwave
+Sounding Unit (AMSU)-A Products (AIRABRAD) contain calibrated and
+geolocated brightness temperatures in degrees Kelvin. AIRABRAD_NRT (Near Real Time)
+products are also available within ~3 hours of observations globally and stay for
+about 5 days from the time they are generated. This data set is generated from
+AMSU-A level 1A digital numbers (DN) and contains 15 microwave channels in the
+50-90 GHz and 23-32 GHz regions of the spectrum. A day's worth of data is divided
+into 240 scenes (granules), each of 6-minute duration. An AMSU-A scene contains
+30 cross-track footprints in each of 45 along-track scanlines, for a total of
+45 x 30 = 1350 footprints per scene. AMSU-A scans three times as slowly as AIRS
+(once per 8 seconds) and its footprints are approximately three times as large as
+those of AIRS (45 km at nadir). This results in three AIRS scans per AMSU-A scan
+and nine AIRS footprints per AMSU-A footprint.
+
+For more details on the history of the AMSU/A satellite instrumentation,
+see the following `link `_.
+
+To summarize, AMSU/A was flown on satellites NOAA 15-17. Versions of AMSU-A also
+fly on the Aqua satellite (that also houses AIRS) as well as the European MetOp
+satellite. It has been replaced by the Advanced Technology Microwave Sounder (ATMS)
+on the satellite NOAA-20.
+
+Instructions to download the AMSU-A L1B Version 5 (AIRABRAD) dataset
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The AMSU-A data is available from the Goddard Earth Sciences Data and Information
+Services Center (GES DISC), `located here `_. You need
+to create an Earthdata account before you can download data. To access the
+AMSU-A data, search on keyword ``AIRABRAD`` and locate
+the **AIRS/Aqua L1B AMSU (A1/A2) geolocated and calibrated brightness temperatures V005
+(AIRABRAD)** heading within your search results.
+
+Next, under the Data Access header, click on `Subset/Get Data`, then refine your
+search results by 1) date range (time) and 2) spatial region.
+
+There are various options for downloading; however, the most straightforward approach
+for macOS and Linux users is to use the ``wget`` command. The ``download instructions``
+provide the proper wget flags/options. The ``Download Links List`` provides
+the AMSU-A file list based on your search results.
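+
+For reference, a minimal sketch of an Earthdata-authenticated ``wget`` session
+is shown below. This is only an illustration: the exact flags are listed in the
+``download instructions``, and the username, password, and file-list name used
+here are placeholders.
+
+::
+
+   # one-time setup: store Earthdata credentials where wget can find them
+   echo 'machine urs.earthdata.nasa.gov login MY_USERNAME password MY_PASSWORD' >> ~/.netrc
+   chmod 0600 ~/.netrc
+   touch ~/.urs_cookies
+
+   # fetch every granule named in the Download Links List
+   wget --load-cookies ~/.urs_cookies --save-cookies ~/.urs_cookies \
+        --keep-session-cookies --content-disposition -i download_links_list.txt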
+
+
+| Each granule is about 560K and has names like
+
+::
+
+   AIRS.2019.06.22.236.L1B.AMSU_Rad.v5.0.0.0.G19174110442.hdf
+
+Advanced Microwave Sounding Unit (AMSU-A) L1B Brightness Temperatures
+---------------------------------------------------------------------
+
+Perform the following steps to convert the AMSU_L1 observations:
+
+1. Download the `h4tonccf_nc4 tool `_ provided
+   by the hdf-eos website. Options are provided for Mac, Linux and Windows platforms.
+   For example, the following command downloads the CentOS7 v1.3 executable that
+   works on Derecho:
+   ::
+
+     wget https://hdfeos.org/software/h4cflib/bin/linux/v1.3/CentOS7/h4tonccf_nc4
+
+2. Convert the AMSU data file from HDF-EOS to netCDF format using the ``h4tonccf_nc4``
+   executable as shown below. Be sure to provide execute permission first:
+   ::
+
+     chmod +x h4tonccf_nc4
+     ./h4tonccf_nc4 AMSU.hdf
+
+     Done with writing netcdf file AMSU.nc
+
+2. b. Optional: The netCDF files have two global attributes that are exceedingly large and uninformative. If needed, you can remove these attributes using the
+   ``ncatted`` command from
+   `NCO `_ as follows:
+   ::
+
+     module load nco
+     ncatted -a coremetadata,global,d,,, -a StructMetadata_0,global,d,,, AMSU.nc AMSU_final.nc
+
+3. Run ``convert_amsu_L1`` to convert the AMSU_final.nc file to the DART obs_seq format.
+   **Important:** Be sure to configure your namelist settings (below) before running the
+   converter. Also be sure you have compiled the ``convert_amsu_L1`` executable using
+   the proper ~/DART/build_templates/mkmf.template that includes both RTTOV and HDF-EOS2
+   libraries as described here: :doc:`./README`
+
+   ::
+
+     ./convert_amsu_L1
+
+
+Check the completed ``obs_seq``. It should include brightness temperatures for
+the ``EOS_2_AMSUA_TB`` observation type. The converter should also produce the
+following metadata underneath the ``mw`` (microwave) header as shown in the table
+below. For more information on the metadata, see the
+`RTTOV documentation `_.
+
+.. container::
+
+   +-----------------------+------------------------+
+   | Metadata variable Name| Description            |
+   +=======================+========================+
+   | Sat_az                | Azimuth of satellite   |
+   |                       | position (degrees)     |
+   +-----------------------+------------------------+
+   | Sat_ze                | Zenith of satellite    |
+   |                       | position (degrees)     |
+   +-----------------------+------------------------+
+   | Platform_id           | EOS (9), RTTOV User    |
+   |                       | Guide, Table 2         |
+   +-----------------------+------------------------+
+   | Sat_id                | (2), RTTOV User        |
+   |                       | Guide, Table 2         |
+   +-----------------------+------------------------+
+   | Sensor_id             | AMSU-A (3), RTTOV User |
+   |                       | Guide, Table 2         |
+   +-----------------------+------------------------+
+   | Channel               | Microwave frequency    |
+   |                       | channel (1-15)         |
+   +-----------------------+------------------------+
+   | Mag_field             | Earth magnetic field   |
+   |                       | strength (Gauss)       |
+   +-----------------------+------------------------+
+   | cosbk                 | Cosine of angle between|
+   |                       | magnetic field and     |
+   |                       | viewing direction      |
+   +-----------------------+------------------------+
+   | Fastem_p(1-5)         | Land/sea-ice parameters|
+   |                       | 1-5 for FASTEM         |
+   |                       | emissivity model       |
+   |                       | Table 21, RTTOV User   |
+   |                       | Guide                  |
+   +-----------------------+------------------------+

-   Before you begin: Installing the libraries needed to read these files can be
-   fairly troublesome. The NASA Earthdata Data Access Services website is the
-   `download site `__
-   for the necessary libraries. 
An example build script (`AIRS/Build_HDF-EOS.sh`) - is intended to provide some guidance. -Overview --------- - -There is a little bit of confusing history to be aware of for AMSU/A: - -https://en.wikipedia.org/wiki/Advanced_microwave_sounding_unit#History - -AMSU/A was flown on NOAA 15-17. It is also on the Aqua satellite (that -also houses AIRS) as well as the European MetOp. It has been replaced by -ATMS on NOAA-20. - -The datset of interest is: “AIRS/Aqua L1B AMSU (A1/A2) geolocated and -calibrated brightness temperatures V005 (AIRABRAD) at GES DISC” The -*short name* for this dataset is ‘AIRABRAD’ - -The introductory paragraph for the dataset is: - - Version 5 is the current version of the data set.tmospheric Infrared - Sounder (AIRS) is a grating spectrometer (R = 1200) aboard the second - Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In - combination with the Advanced Microwave Sounding Unit (AMSU) and the - Humidity Sounder for Brazil (HSB), AIRS constitutes an innovative - atmospheric sounding group of visible, infrared, and microwave - sensors. The AMSU-A instrument is co-aligned with AIRS so that - successive blocks of 3 x 3 AIRS footprints are contained within one - AMSU-A footprint. AMSU-A is primarily a temperature sounder that - provides atmospheric information in the presence of clouds, which can - be used to correct the AIRS infrared measurements for the effects of - clouds. This is possible because non-precipitating clouds are for the - most part transparent to microwave radiation, in contrast to visible - and infrared radiation which are strongly scattered and absorbed by - clouds. AMSU-A1 has 13 channels from 50 - 90 GHz and AMSU-A2 has 2 - channels from 23 - 32 GHz. The AIRABRAD_005 products are stored in - files (often referred to as “granules”) that contain 6 minutes of - data, 30 footprints across track by 45 lines along track. - -The citation information for this dataset is: - - Title: AIRS/Aqua L1B AMSU (A1/A2) geolocated and calibrated - brightness temperatures V005 Version: 005 Creator: AIRS project - Publisher: Goddard Earth Sciences Data and Information Services - Center (GES DISC) Release Date: 2007-07-26T00:00:00.000Z Linkage: - https://disc.gsfc.nasa.gov/datacollection/AIRABRAD_005.html - -NASA provides a `README.AIRABRAD.pdf `__ -through the Goddard Earth Sciences Data and Information Services Center. - -convert_amsua_L1.f90 --------------------- - -``convert_amsua_L1`` converts the L1B AMSU-A Brightness -Temperatures in netCDF format to the DART observation sequence file format. -The native HDF-EOS2 format files must be converted to netCDF. -The conversion from HDF-EOS2 to netCDF is easily performed by the -`h4tonccf_nc4 `__ converter. - -As you can imagine, you need to download each satellite’s data in a -different way. Also, just for your information, AMSU/B has been replaced -on newer satellites by MHS and HSB, but especially MHS is almost -identical. Namelist ~~~~~~~~ -DARTs design structure has the support for radiance observations (like brightness -temperatures) provided by the :doc:`../../forward_operators/obs_def_rttov_mod` -which depends on HDF5 libraries. Consequently, the ``obs_def_rttov_mod_nml`` namelist -must appear in the ``input.nml``. However, only two options are used when converting -the observations: *use_zeeman* and *rttov_sensor_db_file*. +The ``convert_amsu_L1`` converter requires :ref:`obs_def_rttov_mod`. 
+Only two ``&obs_def_rttov_nml`` options are required when converting +the observations: ``use_zeeman`` and ``rttov_sensor_db_file``. Be aware that if the RTTOV namelist option ``use_zeeman = .true.`` certain metadata must be available in the observation. This is not fully -implemented in the AMSU-A observation converter. For more information, +implemented in the AMSU-A observation converter, so we recommend setting +``use_zeeman = .false.``. For more information, please see GitHub Issue 99 “`AIRS AMSUA observation converter … Zeeman coefficients and channels `__” @@ -94,7 +165,7 @@ The default values are shown below. More realistic values are provided in :: - &convert_amsua_L1_nml + &convert_amsu_L1_nml l1_files = '' l1_file_list = '' outputfile = '' @@ -109,6 +180,12 @@ The default values are shown below. More realistic values are provided in verbose = 0 / +:: + + &obs_def_rttov_nml + rttov_sensor_db_file = '../../../forward_operators/rttov_sensor_db.csv' + use_zeeman = .false. + / .. container:: @@ -138,21 +215,21 @@ The default values are shown below. More realistic values are provided in | channel_list | character(len=8), | The AMSU channels desired. | | | dimension(15) | See the table below for valid input. | +--------------------+------------------------+--------------------------------------------------------------+ - | along_track_thin | integer | provides ability to thin the data by keeping every Nth data | + | along_track_thin | integer | Provides ability to thin the data by keeping every Nth data | | | | value in the along-track scan. [0,45] | | | | e.g. 4 == keep only every 4th row. 0 is no thinning. | +--------------------+------------------------+--------------------------------------------------------------+ - | cross_track_thin | integer | provides ability to thin the data by keeping every Nth data | + | cross_track_thin | integer | Provides ability to thin the data by keeping every Nth data | | | | value in the cross-track scan. [0,30] | | | | e.g. 3 == keep every third value. 0 is no thinning. | +--------------------+------------------------+--------------------------------------------------------------+ - | lon1 | real(r8) | the West-most longitude of interest in degrees. [0.0, 360] | + | lon1 | real(r8) | The West-most longitude of interest in degrees. [0.0, 360] | +--------------------+------------------------+--------------------------------------------------------------+ - | lon2 | real(r8) | the East-most longitude of interest in degrees. [0.0, 360] | + | lon2 | real(r8) | The East-most longitude of interest in degrees. [0.0, 360] | +--------------------+------------------------+--------------------------------------------------------------+ - | lat1 | real(r8) | the South-most latitude of interest in degrees. [-90.0,90.0] | + | lat1 | real(r8) | The South-most latitude of interest in degrees. [-90.0,90.0] | +--------------------+------------------------+--------------------------------------------------------------+ - | lat2 | real(r8) | the North-most latitude of interest in degrees. [-90.0,90.0] | + | lat2 | real(r8) | The North-most latitude of interest in degrees. [-90.0,90.0] | +--------------------+------------------------+--------------------------------------------------------------+ | verbose | integer | Controls the amount of run-time output. | | | | 0 == bare minimum. 3 is very verbose. | @@ -163,6 +240,10 @@ The default values are shown below. 
More realistic values are provided in

Channel Specification
~~~~~~~~~~~~~~~~~~~~~

+The following channel description is excerpted from the
+Documentation->README Document `found here `_.
+
+
   "AMSU-A primarily provides temperature soundings. It is a 15-channel microwave
   temperature sounder implemented as two independently operated modules. Module 1
   (AMSU-A1) has 12 channels in the 50-58 GHz oxygen absorption band which provide
@@ -172,11 +253,17 @@ Channel Specification
   precipitable water and cloud liquid water)."

-To facilitate the selection of channels, either the 'Integer' or 'String' values
-may be used to specify ``channel_list``. The 'Documentation' and 'netCDF' values
-are provided for reference only. The 'Documentation' values are from the
-`README.AIRABRAD.pdf `__ document.
+To facilitate the selection of channels, either the ``Integer`` or ``String`` values
+may be used to specify ``channel_list`` within ``&convert_amsu_L1_nml``. The
+`Documentation` and `netCDF` values are provided for reference only.
+For example, the following ``channel_list`` settings are identical and
+specify the AMSU channels centered on 50.3 and 89 GHz:
+
+::
+
+   channel_list = 3,15
+   channel_list = 'A1-1','A1-13'

.. container::

@@ -221,158 +308,5 @@ are provided for reference only. The 'Documentation' values are from the
   +---------+---------+---------------+---------------+

-Known Bugs
-~~~~~~~~~~
-
-None.
-
-
-Future Plans
-~~~~~~~~~~~~
-
-None.
-
-
-----------
-
-
-.. _instructions-to-download-the-airabrad-dataset-1:
-
-Instructions to download the AIRABRAD dataset
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-1. Go to https://earthdata.nasa.gov
-2. Log in (or create an account if necessary)
-3. Search for AIRABRAD
-4. Scroll down past datasets to “Matching results.”
-
-- Follow the link to “AIRS/Aqua L1B AMSU (A1/A2) geolocated and
-  calibrated brightness temperatures V005 (AIRABRAD) at GES DISC”
-
-5. You should now be at
-   ‘https://cmr.earthdata.nasa.gov/search/concepts/C1243477366-GES_DISC.html’
-   (unless they’ve changed the site).
-
-- Select the ‘Download data’ tab
-- Select ‘Earthdata search’
-- Select the AIRS link under ‘Matching datasets’ (I have not tested the
-  NRT products)
-
-6. You can now select ‘Granule filters’ to choose your start and end
-   dates.
-7. Select the granules you want, then click ‘download all’ and
-   'download data’
-8. Click download access script
-9. Follow the instructions on that page to download the data.
-
-
-| Each granule is about 560K and has names like
-
-::
-
-   AIRS.2019.06.22.236.L1B.AMSU_Rad.v5.0.0.0.G19174110442.hdf
-
-
-Build
-^^^^^^
-
-See the :doc:`Dependencies Section<./README>` of the AIRS/README.
-
-Because the data are distributed in HDF-EOS format, and the RTTOV
-libraries require HDF5 (incompatible with HDF-EOS) a two-step conversion
-is necessary. The data must be converted from HDF to netCDF (which can
-be done without HDF5) and then the netCDF files can be converted to DART
-radiance observation format - which is the part that requires
-``obs_def_rttov_mod.f90``, which is the part that requires HDF5.
- -The NASA Earthdata Data Access Services website is the `download -site `__, -at press time, the following packages were required to build HDF-EOS -Release v2.20: - -- hdf-4.2.13.tar.gz -- HDF-EOS2.20v1.00.tar.Z -- HDF-EOS2.20v1.00_TestDriver.tar.Z -- HDF-EOS_REF.pdf -- HDF-EOS_UG.pdf -- jpegsrc.v9b.tar.gz -- zlib-1.2.11.tar.gz - -Similarly for HDF-EOS5 Release v5.1.16: - -- HDF-EOS5.1.16.tar.Z -- HDF-EOS5.1.16_TESTDRIVERS.tar.Z -- HDF-EOS5_REF.pdf -- HDF-EOS5_UG.pdf -- hdf5-1.8.19.tar.gz -- szip-2.1.1.tar.gz - -DART provides a script ``DART/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh`` -that may help provide support for these libraries. You *will* have to modify it for your -system, and you *probably will* have to iterate on that process. The -script takes the stance that if you have to build HDF4, HDF-EOS, HDF5 … -you might as well build HDF-EOS5 too. The HDF-EOS5 is entirely optional. -The HDF5 will be needed by RTTOV. - -Converting from HDF4 to netCDF ------------------------------- - -There are multiple ways to convert from HDF4 to netCDF. The HDF-EOS -Tools and Information Center provides binaries for several common -platforms as well as source code should you need to build your own. - -HDF4 CF CONVERSION TOOLKIT -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The HDF-EOS Tools and Information Center provides the `HDF4 CF -CONVERSION TOOLKIT `__ - - The HDF4 CF (H4CF) Conversion Toolkit can access various NASA HDF4 - external and HDF-EOS2 external files by following the CF conventions - external. The toolkit includes a conversion library for application - developers and a conversion utility for NetCDF users. We have - translated the information obtained from various NASA HDF-EOS2 and - HDF4 files and the corresponding product documents into the - information required by CF into the conversion library. We also have - implemented an HDF4-to-NetCDF (either NetCDF-3 or NetCDF-4 classic) - conversion tool by using this conversion library. In this web page, - we will first introduce how to build the conversion library and the - tool from the source. Then, we will provide basic usage of the tool - and the conversion library APIs. The information for the supported - NASA HDF-EOS2 and HDF4 products and visualization screenshots of some - converted NetCDF files will also be presented. - -If you download a binary, it’s a good habit to verify the checksum. -The download page has a link -to a .pdf that has the known checksums. -`Here’s how to generate the checksum `__. -Be aware that when I downloaded the file (via Chrome or ‘wget’) on an -OSX system, the checksum did not match. When I downloaded the file on a -linux system, the checksum *did* match. - -If you download the source, the tar file comes with a ``README`` and an -``INSTALL``. Please become familiar with them. DART also has a build script: -``AIRS/shell_scripts/Build_HDF_to_netCDF.csh`` that you can customize -after you read the ``INSTALL`` document. - -Actually converting to netCDF -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -While the converter creates very nice netCDF files, there are two global -attributes that are exceedingly large and uninformative. Should you want -to remove them, I suggest using the ``ncatted`` command from -`NCO `__. 
- -:: - - h4tonccf_nc4 AIRS.2019.06.22.236.L1B.AMSU_Rad.v5.0.0.0.G19174110442.hdf bob.nc - ncatted -a coremetadata,global,d,,, -a StructMetadata_0,global,d,,, bob.nc bill.nc -The DART ``L1_AMSUA_to_netcdf.f90`` program -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Before I became aware of ``h4tonccf_nc4``, I was in the process of -writing my own converter ``L1_AMSUA_to_netcdf.f90``. *It is not -finished.* Furthermore, at this stage, I don’t know which variables are -needed to be a viable DART observation sequence file, and I don’t see -the point in converting EVERYTHING. diff --git a/observations/obs_converters/AIRS/shell_scripts/Build_HDF_to_netCDF.sh b/observations/obs_converters/AIRS/shell_scripts/Build_HDF_to_netCDF.sh deleted file mode 100755 index fe4e67c8d5..0000000000 --- a/observations/obs_converters/AIRS/shell_scripts/Build_HDF_to_netCDF.sh +++ /dev/null @@ -1,57 +0,0 @@ -#!/bin/bash -# -# DART software - Copyright UCAR. This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download -# -# This file is intended to provide guidance on how to compile the -# "HDF4 CF CONVERSION TOOLKIT" from the HDF-EOS Tools and Information Center -# -# The URL of the HDF-EOS Tools and Information Center is: -# http://hdfeos.org/software/h4cflib.php -# -# The URL of the "HDF4 CF CONVERSION TOOLKIT" is: -# http://hdfeos.org/software/h4cflib/h4cflib_1.3.tar.gz -# -# This is not a substitute for the README and INSTALL contained in the tar file. -# -# My habit is to install software for my personal use in my $HOME/local directory. - -export MYHOME=/glade/work/thoar - -./configure \ - --prefix=${MYHOME}/local/h4cf_1.3 \ - --with-hdf4=${MYHOME}/local/hdf-eos \ - --with-jpeg=${MYHOME}/local/hdf-eos \ - --with-zlib=${MYHOME}/local/hdf-eos \ - --with-hdfeos2=${MYHOME}/local/hdf-eos \ - --with-netcdf=$NETCDF \ - --with-szlib=/usr/local/szip \ - CPPFLAGS=-I${MYHOME}/local/hdf-eos/include \ - LDFLAGS=-L${MYHOME}/local/hdf-eos/lib || exit 1 - -make || exit 2 -make check || exit 3 -make install || exit 4 - -exit - -# The best way to get the most current configure options is to use configure: -# ./configure --help -# -# Here is a recap of just the environment variables, there are many more options -# -# Some influential environment variables: -# CC C compiler command -# CFLAGS C compiler flags -# LDFLAGS linker flags, e.g. -L if you have libraries in a -# nonstandard directory -# LIBS libraries to pass to the linker, e.g. -l -# CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I if -# you have headers in a nonstandard directory -# LT_SYS_LIBRARY_PATH -# User-defined run-time library search path. -# CPP C preprocessor -# CXX C++ compiler command -# CXXFLAGS C++ compiler flags -# CXXCPP C++ preprocessor diff --git a/observations/obs_converters/AIRS/shell_scripts/Convert_HDF_to_netCDF.csh b/observations/obs_converters/AIRS/shell_scripts/Convert_HDF_to_netCDF.csh deleted file mode 100755 index 7035e07158..0000000000 --- a/observations/obs_converters/AIRS/shell_scripts/Convert_HDF_to_netCDF.csh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/csh -# -# DART software - Copyright UCAR. 
This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download - - -cd ../data - -foreach FILE ( *hdf ) - - set BASE = $FILE:r - set NEWNAME = $BASE.nc - - echo - echo "Converting $FILE to" - echo " $NEWNAME" - echo - - \rm -f bob.nc - h4tonccf_nc4 $FILE bob.nc || exit 1 - ncatted -a coremetadata,global,d,,, -a StructMetadata_0,global,d,,, bob.nc $NEWNAME - -end - -\rm -f bob.nc - diff --git a/observations/obs_converters/AIRS/shell_scripts/README b/observations/obs_converters/AIRS/shell_scripts/README deleted file mode 100644 index 3da99d9312..0000000000 --- a/observations/obs_converters/AIRS/shell_scripts/README +++ /dev/null @@ -1,16 +0,0 @@ -# DART software - Copyright UCAR. This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download -# -# DART $Id$ - -These scripts are intended to help download the original AIRS hdf -data files, convert them in bulk, and merge the resulting obs_seq files. - -In most cases, they're intended to be copied over to the ../work directory -and then customized for the particular time period and local directory names. - -# -# $URL$ -# $Revision$ -# $Date$ diff --git a/observations/obs_converters/AIRS/shell_scripts/download_L2.sh b/observations/obs_converters/AIRS/shell_scripts/download_L2.sh deleted file mode 100755 index 71cedb74a7..0000000000 --- a/observations/obs_converters/AIRS/shell_scripts/download_L2.sh +++ /dev/null @@ -1,62 +0,0 @@ -#!/bin/bash -# -# DART software - Copyright UCAR. This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download -# -# download the requested tar files from the NCAR mass store. - -# set the first and last days. can roll over -# month and year boundaries now! -let start_year=2006 -let start_month=10 -let start_day=1 - -let end_year=2007 -let end_month=1 -let end_day=31 - -# end of things you should have to set in this script - -# convert the start and stop times to gregorian days, so we can -# compute total number of days including rolling over month and -# year boundaries. make sure all values have leading 0s if they -# are < 10. do the end time first so we can use the same values -# to set the initial day while we are doing the total day calc. 
-mon2=`printf %02d $end_month` -day2=`printf %02d $end_day` -end_d=(`echo ${end_year}${mon2}${day2}00 0 -g | ./advance_time`) - -mon2=`printf %02d $start_month` -day2=`printf %02d $start_day` -start_d=(`echo ${start_year}${mon2}${day2}00 0 -g | ./advance_time`) - -curday=(`echo ${start_year}${mon2}${day2}00 0 | ./advance_time`) - -# how many total days are going to be converted (for the loop counter) -let totaldays=${end_d[0]}-${start_d[0]}+1 - -# loop over each day -let d=1 -while (( d <= totaldays)) ; do - - # parse out the parts from a string which is YYYYMMDDHH - year=${curday:0:4} - month=${curday:4:2} - day=${curday:6:2} - - - echo getting ${year}${month}${day}.tar from mass store - hsi get /MIJEONG/AIRS/V5/L2/${year}${month}/${year}${month}${day}.tar - - - # advance the day; the output is YYYYMMDD00 - curday=(`echo ${year}${month}${day}00 +1d | ./advance_time`) - - # advance the loop counter - let d=d+1 - -done - -exit 0 - diff --git a/observations/obs_converters/AIRS/shell_scripts/oneday_down.sh b/observations/obs_converters/AIRS/shell_scripts/oneday_down.sh deleted file mode 100755 index bc8b44a329..0000000000 --- a/observations/obs_converters/AIRS/shell_scripts/oneday_down.sh +++ /dev/null @@ -1,118 +0,0 @@ -#!/bin/bash -# -# DART software - Copyright UCAR. This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download -# -# this version gets the tar file from the mass store first. -# unpack one day of tar files at a time, convert them into -# individual obs_seq files. this program also does the merge -# of the 240 individual daily swaths into a single obs_seq file. -# -# this program should be started from the work directory. -# it assumes ../data, ../tars, the output dir, etc -# exist relative to starting from AIRS/work. - -# set the first and last days to be converted. can roll over -# month and year boundaries now! -let start_year=2006 -let start_month=10 -let start_day=1 - -let end_year=2007 -let end_month=1 -let end_day=31 - -# relative to work dir -output_dir=../output.thin - -# whether to download the tar file from the mass store or not -# set to one of: true or false -download=true - -# end of things you should have to set in this script - -# convert the start and stop times to gregorian days, so we can -# compute total number of days including rolling over month and -# year boundaries. make sure all values have leading 0s if they -# are < 10. do the end time first so we can use the same values -# to set the initial day while we are doing the total day calc. -mon2=`printf %02d $end_month` -day2=`printf %02d $end_day` -end_d=(`echo ${end_year}${mon2}${day2}00 0 -g | ./advance_time`) - -mon2=`printf %02d $start_month` -day2=`printf %02d $start_day` -start_d=(`echo ${start_year}${mon2}${day2}00 0 -g | ./advance_time`) - -curday=(`echo ${start_year}${mon2}${day2}00 0 | ./advance_time`) - -# how many total days are going to be converted (for the loop counter) -let totaldays=${end_d[0]}-${start_d[0]}+1 - -# loop over each day -let d=1 -while (( d <= totaldays)) ; do - - # parse out the parts from a string which is YYYYMMDDHH - year=${curday:0:4} - month=${curday:4:2} - day=${curday:6:2} - - # compute the equivalent gregorian day here. 
- g=(`echo ${year}${month}${day}00 0 -g | ./advance_time`) - greg=${g[0]} - - echo starting AIRS to obs ${year}${month}${day} - echo gregorian: $greg - - # download the tar file from the hpss first - if [[ "$download" = "true" ]]; then - echo getting ${year}${month}${day}.tar from mass store - (cd ../tars; hsi get /MIJEONG/AIRS/V5/L2/${year}${month}/${year}${month}${day}.tar ) - fi - - # assume the original collection of data (hdf files, one per swath) - # are in ../tars and that the filenames inside the tar files are named - # YYYYMM/YYYYMMDD/*.hdf - (cd ../data; tar -xvf ../tars/${year}${month}${day}.tar >> tarlog) - - # construct the input list of files for the converter. - # cd there first in a subshell so the ls just contains simple file names - (cd ../data/${year}${month}/${year}${month}${day}; ls AIR*hdf > flist) - - # get back to work dir and edit a template file to set the - # values that change in the namelists. - sed -e "s/YYYY/${year}/g" \ - -e "s/MM/${month}/g" \ - -e "s/DD/${day}/g" \ - -e "s/GREG/${greg}/g" < ./input.nml.template > input.nml - - # actually make the obs_seq files, one per input. these still need to - # be merged if you want daily files. - ./convert_airs_L2 - - # do the merge now - ls ${output_dir}/AIRS.${year}.${month}.${day}.*.out > olist - ./obs_sequence_tool - - # start local mods - # ok, this is a local mod - to try to keep from running out of disk space - remote_dir=/gpfs/ptmp/dart/Obs_sets/AIRS_24_subx4_ascii/${year}${month}/ - cp -f ${output_dir}/AIRS.${year}${month}${day}.out $remote_dir - # and clean up so we don't run out of disk space - (cd ../data/${year}${month}/${year}${month}${day}; rm AIR*hdf) - (cd ${output_dir}; rm AIRS.${year}.${month}.${day}.*.out) - (cd ../tars; rm ${year}${month}${day}.tar) - # end local mods - - # advance the day; the output is YYYYMMDD00 - curday=(`echo ${year}${month}${day}00 +1d | ./advance_time`) - - # advance the loop counter - let d=d+1 - -done - -exit 0 - diff --git a/observations/obs_converters/AIRS/work/input.nml b/observations/obs_converters/AIRS/work/input.nml index 83303ae5e9..0651cef097 100644 --- a/observations/obs_converters/AIRS/work/input.nml +++ b/observations/obs_converters/AIRS/work/input.nml @@ -12,11 +12,6 @@ '../../../../observations/forward_operators/obs_def_AIRS_mod.f90' / -! version 5 file?: -! l2_files = '../data/AIRS.2007.11.01.001.L2.RetStd.v5.2.2.0.G08078150655.hdf' -! version 6 file?: -! l2_files = '../data/AIRS.2017.01.01.110.L2.RetStd_IR.v6.0.31.1.G19058124823.hdf' - &convert_airs_L2_nml l2_files = 'AIRS.2017.01.01.110.L2.RetStd_IR.v6.0.31.1.G19058124823.hdf' l2_file_list = '' @@ -29,6 +24,7 @@ lon2 = 360.0 lat1 = -90.0 lat2 = 90.0 + version = 7 / @@ -45,7 +41,6 @@ # All these are identical: # channel_list = 3,15 # channel_list = 'A1-1','A1-13' -# channel_list = 50.3,89 &convert_amsu_L1_nml l1_files = '../data/AIRS.2019.06.22.236.L1B.AMSU_Rad.v5.0.0.0.G19174110442.nc', @@ -63,14 +58,6 @@ verbose = 1 / -# The 'L1_AMSUA_to_netcdf.f90' program is not working yet. -&L1_AMSUA_to_netcdf_nml - file_name = '../data/AIRS.2019.06.22.236.L1B.AMSU_Rad.v5.0.0.0.G19174110442.hdf' - outputfile = 'amsua_bt.nc' - track = 23 - xtrack = 30 - / - &obs_sequence_nml write_binary_obs_sequence = .false. @@ -127,76 +114,6 @@ &obs_def_rttov_nml rttov_sensor_db_file = '../../../forward_operators/rttov_sensor_db.csv' - first_lvl_is_sfc = .true. - mw_clear_sky_only = .false. - interp_mode = 1 - do_checkinput = .true. - apply_reg_limits = .true. - verbose = .true. - fix_hgpl = .false. - do_lambertian = .false. 
- lambertian_fixed_angle = .true. - rad_down_lin_tau = .true. - use_q2m = .true. - use_uv10m = .true. - use_wfetch = .false. - use_water_type = .false. - addrefrac = .false. - plane_parallel = .false. - use_salinity = .false. - do_lambertian = .false. - apply_band_correction = .true. - cfrac_data = .true. - clw_data = .true. - rain_data = .true. - ciw_data = .true. - snow_data = .true. - graupel_data = .true. - hail_data = .false. - w_data = .true. - clw_scheme = 1 - clw_cloud_top = 322. - fastem_version = 6 - supply_foam_fraction = .false. - use_totalice = .true. use_zeeman = .false. - cc_threshold = 0.05 - ozone_data = .false. - co2_data = .false. - n2o_data = .false. - co_data = .false. - ch4_data = .false. - so2_data = .false. - addsolar = .false. - rayleigh_single_scatt = .true. - do_nlte_correction = .false. - solar_sea_brdf_model = 2 - ir_sea_emis_model = 2 - use_sfc_snow_frac = .false. - add_aerosl = .false. - aerosl_type = 1 - add_clouds = .true. - ice_scheme = 1 - use_icede = .false. - idg_scheme = 2 - user_aer_opt_param = .false. - user_cld_opt_param = .false. - grid_box_avg_cloud = .true. - cldstr_threshold = -1.0 - cldstr_simple = .false. - cldstr_low_cloud_top = 750.0 - ir_scatt_model = 2 - vis_scatt_model = 1 - dom_nstreams = 8 - dom_accuracy = 0.0 - dom_opdep_threshold = 0.0 - addpc = .false. - npcscores = -1 - addradrec = .false. - ipcreg = 1 - use_htfrtc = .false. - htfrtc_n_pc = -1 - htfrtc_simple_cloud = .false. - htfrtc_overcast = .false. / diff --git a/observations/obs_converters/AIRS/work/quickbuild.sh b/observations/obs_converters/AIRS/work/quickbuild.sh index e5c2bc45f3..4b1fcedd61 100755 --- a/observations/obs_converters/AIRS/work/quickbuild.sh +++ b/observations/obs_converters/AIRS/work/quickbuild.sh @@ -15,7 +15,6 @@ EXTRA="$DART/observations/obs_converters/obs_error/ncep_obs_err_mod.f90" programs=( -L1_AMSUA_to_netcdf advance_time convert_airs_L2 convert_amsu_L1 diff --git a/observations/obs_converters/AVISO/convert_aviso.f90 b/observations/obs_converters/AVISO/convert_aviso.f90 index 6b203a75b6..5b8ee08656 100644 --- a/observations/obs_converters/AVISO/convert_aviso.f90 +++ b/observations/obs_converters/AVISO/convert_aviso.f90 @@ -89,7 +89,7 @@ program convert_aviso call initialize_utilities('convert_aviso') ! command line argument -call getarg(1, input_file) +call GET_COMMAND_ARGUMENT(1, input_file) if (input_file == '') then write(string1,*)'.. Require a command-line argument specifying the input file.' diff --git a/observations/obs_converters/gps/gps.rst b/observations/obs_converters/gps/gps.rst index 2f018a51e2..772ce806b6 100644 --- a/observations/obs_converters/gps/gps.rst +++ b/observations/obs_converters/gps/gps.rst @@ -1,3 +1,5 @@ +.. _gps: + GPS Observations ================ diff --git a/observations/obs_converters/quikscat/QuikSCAT.rst b/observations/obs_converters/quikscat/QuikSCAT.rst index e942f9a8e8..5ff3692158 100644 --- a/observations/obs_converters/quikscat/QuikSCAT.rst +++ b/observations/obs_converters/quikscat/QuikSCAT.rst @@ -59,23 +59,27 @@ convert_L2b.f90 ~~~~~~~~~~~~~~~ ``convert_L2b`` converts the HDF files distributed by JPL to an obs_sequence file. -To build ``convert_l2b`` using ``quickbuild.sh`` you will first need to build the HDF4 library. +To build ``convert_L2b`` using ``quickbuild.sh``, you will need the HDF4 library. +HDF4 is available on the NSF NCAR machine Derecho: ``module load hdf``. ..
warning:: To avoid conflicts with netCDF library required by DART, we recommend building HDF4 *without* the HDF4 versions of the NetCDF API. -After successfully building HDF, add the appropriate library flags to your mkmf.template file. -Below is a snippet from an mkmf.template file used to link to both NetCDF and HDF4. +After successfully building HDF, add the appropriate library flags to your mkmf.template file, +or, if you are on Derecho, use the ready-made template files at +*DART/build_templates/mkmf.template.quikscat.intel* or +*DART/build_templates/mkmf.template.quikscat.gfortran*. Below is a snippet from an +mkmf.template file used to link to both NetCDF and HDF4. .. code:: text - NETCDF = /glade/u/apps/ch/opt/netcdf/4.8.1/intel/19.1.1 - HDF = /glade/p/cisl/dares/libraries/hdf + NETCDF = /glade/u/apps/derecho/23.06/spack/opt/spack/netcdf/4.9.2/oneapi/2023.0.0/iijr + HDF = /glade/u/apps/derecho/23.06/spack/opt/spack/hdf/4.2.15/oneapi/2023.0.0/yo2r INCS = -I$(NETCDF)/include -I$(HDF)/include - LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf -L$(HDF)/lib -lmfhdf -ljpeg + LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf -L$(HDF)/lib -lmfhdf -ljpeg -lz -ltirpc -lsz -ldf FFLAGS = -O -assume buffered_io $(INCS) LDFLAGS = $(FFLAGS) $(LIBS)
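As a rough sketch, a Derecho build of ``convert_L2b`` might then look like the following (paths are illustrative; it assumes the usual DART convention of copying the chosen template over *build_templates/mkmf.template* before running ``quickbuild.sh`` from the converter's *work* directory):

::

   # Make the HDF4 library available, select the quikscat template,
   # and build the converter from its work directory.
   module load hdf
   cd DART/build_templates
   cp mkmf.template.quikscat.intel mkmf.template
   cd ../observations/obs_converters/quikscat/work
   ./quickbuild.sh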