Physics update in cumulus convection, PBL & surface layer for UFS_P7 #665
Conversation
@JongilHan66 Please consider merging this PR from me (JongilHan66#1) into your PR branch. These changes WILL NOT affect FV3/UFS in any way; they are only to make sure that the SCM stays compatible with the changes in this PR when specified surface fluxes are used in the SCM. If JongilHan66#1 is not merged into your branch, it will need to go into ccpp-physics in a follow-up PR, so including it now will probably save some work for @climbfuji and me. Thanks!
It is now being regression tested on each machine with a new baseline. Could you check the merging question with Jun Wang (Jun.Wang@noaa.gov), who keeps a PR schedule for each update?
Sure, I can ask Jun. The changed files in ccpp-physics in JongilHan66#1 are not even parsed by the FV3 build system, so I think it is impossible for those changes to affect the UFS regression tests.
I think I can speak for the ufs code managers: this PR can be merged as long as you update the submodule pointers in fv3atm and ufs-weather-model accordingly. I recommend taking a local backup of the entire ufs-weather-model directory, then merging in Grant's changes and updating the submodule pointers. Then confirm that only the two files have changed. Finally, check out your ufs-weather-model PR recursively in a different directory and make sure it is identical to the directory where you did the merge and submodule pointer updates. I can do this last step for you if you prefer. This way you are 100% certain that everything is as expected and that the ufs-weather-model will not be affected.
Thanks @climbfuji. The alternative (and what typically happens) is to do a follow-up PR with any SCM-only changes to ccpp-physics to be merged along with a future PR. It's not a big deal to do it that way, there is just a lag for SCM users, and there has been a request to use Jongil's changes with the SCM already.
@JongilHan66 @grantfirl @junwang-noaa here is what we will do. We merge Jongil's PR as is (assuming that all the testing is successful). Afterwards, we merge Grant's changes into ccpp-physics main directly without testing in the UFS, but the changes must be tested in the SCM (since this is the only model reading the two files that changed). The next PR has to update to the head of main anyway and will get Grant's SCM-only changes automatically.
OK, @climbfuji that is fine. I'll remove the PR into Jongil's branch and put it into main, to be updated once #665 is merged.
Well, Grant may have to update some more to keep up with my next updates.
```fortran
      else
        ztmax1 = 99.0_kp
        xkzo = gdx / xkgdx
      endif
```
Why not write this as just "xkzo = min(one, gdx/xkgdx)"?
Sure. You can write that as "xkzo = min(one, gdx/xkgdx)".
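To make the suggested simplification concrete, here is a minimal standalone sketch (not the scheme itself) showing that the two-branch form and the `min` one-liner agree. It assumes the if-branch omitted from the diff sets `xkzo` to `one`, and the grid-size values below are made up for illustration.

```fortran
! Minimal standalone sketch, not the actual satmedmfvdifq code: it assumes the
! if-branch omitted from the diff sets xkzo to one when gdx >= xkgdx, so the
! two-branch form and the suggested one-liner give the same result.
program xkzo_equivalence
  implicit none
  integer, parameter :: kp = selected_real_kind(12)
  real(kind=kp), parameter :: one = 1.0_kp
  real(kind=kp) :: gdx, xkgdx, xkzo_branch, xkzo_min

  xkgdx = 5000.0_kp          ! hypothetical reference grid size
  gdx   = 3000.0_kp          ! hypothetical local grid size

  ! original two-branch form
  if (gdx >= xkgdx) then
    xkzo_branch = one
  else
    xkzo_branch = gdx / xkgdx
  endif

  ! reviewer's suggested one-liner
  xkzo_min = min(one, gdx/xkgdx)

  print *, xkzo_branch, xkzo_min   ! both print 0.6
end program xkzo_equivalence
```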
```fortran
!
!  --- ...  flag for open water
      do i = 1, im
        flag(i) = (wet(i) .and. flag_iter(i) .and. .not. use_flake(i))
      enddo
```
Why do you need to define this extra array "flag"? It is used only once!
I take that back. I didn't see the code below.
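For context on why the mask lives in an array, here is a hedged sketch of the general pattern rather than the actual surface-scheme code: `flag` is computed once and then reused by later masked loops, which is what "the code below" refers to. The test data and the placeholder loop body are illustrative assumptions.

```fortran
! Hedged sketch of the pattern, not the actual surface-layer code: the logical
! mask is built once and then reused by several subsequent loops, which is why
! it is stored in an array rather than recomputed inline.
program flag_mask_sketch
  implicit none
  integer, parameter :: im = 4
  logical :: wet(im), flag_iter(im), use_flake(im), flag(im)
  real    :: qss(im)
  integer :: i

  wet       = (/ .true., .true., .false., .true. /)   ! made-up test data
  flag_iter = (/ .true., .false., .true., .true. /)
  use_flake = (/ .false., .false., .false., .true. /)
  qss       = 0.0

  ! --- ...  flag for open water (as in the diff)
  do i = 1, im
    flag(i) = (wet(i) .and. flag_iter(i) .and. .not. use_flake(i))
  enddo

  ! later loops reuse the same mask (placeholder body)
  do i = 1, im
    if (flag(i)) qss(i) = 1.0
  enddo

  print *, flag
  print *, qss
end program flag_mask_sketch
```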
Since there were more problems with these code changes on Cheyenne (which uses a newer Intel compiler) than on other systems, I am going to test this code with Intel 2020.2 on Hera today.
I ran rt.conf on Hera with Intel 2020.2. The following two tests failed because they timed out towards the end of the run:
control_wrtGauss_netcdf_parallel 013 failed
control_wrtGauss_netcdf_parallel 013 failed in run_test
regional_quilt_netcdf_parallel 036 failed
regional_quilt_netcdf_parallel 036 failed in run_test
All other tests completed successfully within the walltime. @junwang-noaa @DeniseWorthen @DusanJovic-NOAA
Thanks Dom for doing these tests. So there is no stability issue with Intel 2020.2 (the tests time out rather than fail), while there is with Intel 2019. Currently the two tests take 170 s and 290 s in the develop branch on Hera; it looks to me like there is some issue with the parallel netCDF library. Can you run Intel 2020.2 with the develop branch to see if the same slowness happens for the two tests?
Maybe not Intel 19(.1.1), maybe with SGI MPT or something else on Cheyenne. Will try develop on Hera with Intel 2020.2.
Same for develop. Could also be a mistake on my end when installing hpc-stack for Intel 2020.2. Definitely something to keep in mind.
```diff
@@ -917,13 +950,23 @@ subroutine satmedmfvdifq_run(im,km,ntrac,ntcw,ntiw,ntke, &
       if(n == 1) then
         dz = zl(i,1)
         tem1 = tsea(i)*(1.+fv*max(q1(i,1,1),qmin))
!        tem1 = 0.5 * (tem1 + thvx(i,n))
!!       tem1 = 0.5 * (tem1 + thlvx(i,n))
```
At some point (not this PR) we should remove all these comments.
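As an aside for readers of this hunk: the active line computes a surface virtual temperature of the form Tv = T*(1 + fv*q). A minimal standalone sketch follows; the value of fv (roughly Rv/Rd - 1) and the sample inputs are assumptions for illustration, not the model's constants.

```fortran
! Minimal sketch of the virtual-temperature factor used in the hunk above,
! tem1 = tsea*(1 + fv*max(q, qmin)); the value of fv = Rv/Rd - 1 and the
! sample inputs are assumptions for illustration, not model constants.
program virtual_temperature_sketch
  implicit none
  integer, parameter :: kp = selected_real_kind(12)
  real(kind=kp), parameter :: fv   = 0.608_kp     ! ~ Rv/Rd - 1 (assumed)
  real(kind=kp), parameter :: qmin = 1.0e-8_kp    ! assumed floor on q
  real(kind=kp) :: tsea, q1, tem1

  tsea = 300.0_kp        ! made-up sea surface temperature [K]
  q1   = 0.018_kp        ! made-up specific humidity [kg/kg]

  tem1 = tsea*(1.0_kp + fv*max(q1, qmin))
  print *, 'virtual temperature =', tem1    ! ~ 303.3 K
end program virtual_temperature_sketch
```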
Regression testing completed, will merge this PR now.
Cumulus convection:
a) Stricter convection trigger
b) Reduced entrainment rate below cloud base
c) Enhanced downdraft detrainment starting from 60 mb above the ground surface
d) Reduced rain evaporation
e) Modification of the cloud depth separating shallow convection from deep convection
PBL and surface layer:
a) Inclusion of a wind shear effect that reduces the characteristic mixing length
b) Reduction of background diffusivity in inversion layers as a function of surface roughness and green vegetation fraction
c) PBL updraft overshooting limited by a bulk-Richardson-number-based PBL depth (a generic sketch of such a diagnosis follows this list)
d) Inclusion of a new canopy heat storage parameterization
e) Modification of the thermal roughness calculation over land
f) Increase of the momentum roughness length over the sea
g) Inclusion of a sea spray effect parameterization
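Referenced from item c) above: a generic, hedged sketch of how a bulk-Richardson-number-based PBL depth can be diagnosed, taking the PBL top at the first level where Rib exceeds a critical value. The critical value, the use of the lowest level as the surface reference, the made-up sounding, and the linear interpolation are illustrative assumptions; the actual satmedmfvdifq formulation differs in detail.

```fortran
! Generic sketch of a bulk-Richardson-number PBL-depth diagnosis (illustrative
! only; the actual satmedmfvdifq formulation differs in detail).
!   Rib(z) = g * z * (thv(z) - thv(sfc)) / (thv(sfc) * U(z)**2)
! The PBL top is taken at the first level where Rib exceeds a critical value.
program bulk_richardson_sketch
  implicit none
  integer, parameter :: kp = selected_real_kind(12)
  integer, parameter :: km = 5
  real(kind=kp), parameter :: grav  = 9.81_kp
  real(kind=kp), parameter :: ribcr = 0.25_kp       ! assumed critical value
  real(kind=kp) :: z(km), thv(km), u(km), rib(km), hpbl
  integer :: k

  ! made-up sounding: heights [m], virtual potential temperature [K], wind [m/s]
  z   = (/  50._kp, 250._kp, 600._kp, 1100._kp, 1800._kp /)
  thv = (/ 300._kp, 300.1_kp, 300.3_kp, 301.5_kp, 304._kp /)
  u   = (/   4._kp,   6._kp,   7._kp,    8._kp,    9._kp /)

  hpbl = z(km)
  do k = 1, km
    rib(k) = grav * z(k) * (thv(k) - thv(1)) / (thv(1) * max(u(k)**2, 1.e-4_kp))
    if (rib(k) > ribcr) then
      ! linear interpolation in Rib between levels k-1 and k (illustrative)
      hpbl = z(k-1) + (z(k) - z(k-1)) * (ribcr - rib(k-1)) / (rib(k) - rib(k-1))
      exit
    endif
  enddo

  print *, 'diagnosed PBL depth [m] =', hpbl
end program bulk_richardson_sketch
```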
The verification results for the physics updates above can be found at https://www.emc.ncep.noaa.gov/gmb/jhan/vsdbw/ccaa14
A more detailed description of the updates and the ccaa14 results is given in the attached PowerPoint file:
Tests_physics_MRF_MJO_JHan_WLi.pptx
Verification results for additional physics updates are in:
Winter: https://www.emc.ncep.noaa.gov/gmb/jhan/vsdbw/ccaa41
Summer: https://www.emc.ncep.noaa.gov/gmb/jhan/vsdbw/ccaa41x
The final update is based on 'ccaa41'.
A description of the new maximum z/L parameterization can be found in the attached PowerPoint file:
PBL_SL_inv_JHan.pptx
Dependencies
NOAA-EMC/fv3atm#314
ufs-community/ufs-weather-model#592