
fixes addressing leaf area and demotion/promotion #501

Merged · 37 commits · Apr 5, 2019

Conversation

@rgknox (Contributor) commented Mar 8, 2019

Description:

This pull request contains a suite of fixes aimed at increasing model stability, particularly addressing instabilities related to (1) overflow of crown leaf area, and (2) the demotion/promotion logic.

See issue #500 for details on the crashes. Following that issue, here is a list of fixes. Fix 5 and fix 6 are new.

Fix 1: A genuine bug has been found that was sending plants in the lowest canopy layer to the litter pool (killing them) when they should have been ignored.

https://github.com/NGEET/fates/pull/501/files#diff-14f02c3d54c6afd6502e04e81b2917a6R724

Also: the error-checking tolerances for demotion/promotion precision were expanded to include both absolute and relative error checks.
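As a rough illustration of that expanded check (a minimal sketch with hypothetical names, tolerances, and values, not the PR's actual code):

program tolerance_check_sketch
   implicit none
   integer,  parameter :: r8 = selected_real_kind(12)
   real(r8), parameter :: abs_tol = 1.e-9_r8    ! absolute tolerance (m2), illustrative
   real(r8), parameter :: rel_tol = 1.e-6_r8    ! relative tolerance (-), illustrative
   real(r8) :: area_before, area_after, err

   area_before = 1234.5678_r8
   area_after  = area_before + 1.e-10_r8        ! tiny round-off difference
   err = abs(area_after - area_before)

   ! flag a conservation failure only if the error is large in BOTH
   ! absolute and relative terms
   if (err > abs_tol .and. err > rel_tol*abs(area_before)) then
      write(*,*) 'demotion/promotion area conservation violated: ', err
   else
      write(*,*) 'within tolerance'
   end if
end program tolerance_check_sketch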

Fix 2: Charlie’s crown-area conservation algorithm for fusion makes the promotion/demotion scheme “smoother”. Promotion/demotion uses fusion, and if the total crown area changes during fusion, promotion/demotion must constantly re-shuffle in response.

Fix 3: Charlie and Ryan found that it is important to keep one of the grass biomass pools tied to allometry (as structure is for woody plants). With woody plants, if the structural biomass pool exceeds the maximum structural biomass associated with the tree's stature (dbh), we reset the plant's dbh so that its maximum structural biomass (aka target biomass) matches the existing structural biomass. The scenario where actual biomass is higher than the target is an artifact, arising either from numerical integration error or from non-linearity during fusion. It is important to keep the biomass pools at, or below, the target biomass so that the growth algorithm works correctly.

Grasses don't have structural biomass, and up to now we had not been forcing grass plants to have diameters tied to a biomass pool (as with woody plants). In this fix, we tie grass dbh to root biomass: if the root biomass exceeds the carrying capacity of the plant, we increase the dbh to match the root biomass. As a result, when fusion generates grass plants with abnormally high leaf biomass relative to their size, the plant's stature is adjusted (increased) and the leaf-area-to-crown-area ratio gradually reduces until it is within the expected range.

This is mediated by calls through "ForceDBH()" (a conceptual sketch follows the links below):

https://github.com/NGEET/fates/pull/501/files#diff-6403a5d68498c9a46a99461bedeb157aR2203
https://github.com/NGEET/fates/pull/501/files#diff-4bbd4c8a44000d218ae88b6b78485536R1003
https://github.com/NGEET/fates/pull/501/files#diff-4bbd4c8a44000d218ae88b6b78485536R1040
https://github.com/NGEET/fates/pull/501/files#diff-d180d6c64267eff0250516fea633a2e3R469
https://github.com/NGEET/fates/pull/501/files#diff-d180d6c64267eff0250516fea633a2e3R497
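As a conceptual sketch of the idea (the power-law allometry and parameters below are made up for illustration; the actual code inverts FATES's real allometry routines):

program force_dbh_sketch
   implicit none
   integer,  parameter :: r8 = selected_real_kind(12)
   real(r8), parameter :: p1 = 0.05_r8, p2 = 1.9_r8   ! fake allometry parameters
   real(r8) :: dbh, broot, broot_target

   dbh   = 1.2_r8                 ! cm
   broot = 0.20_r8                ! kgC; actual pool, e.g. inflated by fusion
   broot_target = p1 * dbh**p2    ! target pool implied by the current dbh

   if (broot > broot_target) then
      ! invert the (fake) allometry so the target matches the actual pool
      dbh = (broot/p1)**(1.0_r8/p2)
   end if
   write(*,*) 'adjusted dbh = ', dbh
end program force_dbh_sketch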

Fix 4: The demotion and promotion logic was modified to avoid an iterative solve, thus reducing the number of math operations. An iterative solve had been needed because the probabilistic demotion/promotion algorithm may, on the first pass, impose that one or more cohorts demote more than 100% of their area. The algorithm that re-scales these demotion/promotion fractions so they are all <=100% is the complicated part (a toy sketch follows the links below).

https://drive.google.com/file/d/1cyylwhtBzoRSPGIwS_Hr4Ma0vgwisLWz/view

https://github.com/NGEET/fates/pull/501/files#diff-14f02c3d54c6afd6502e04e81b2917a6R408
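As a toy illustration of the rescaling step (hypothetical names and numbers, not the PR's actual code, which walks the cohort linked list and handles further edge cases, such as rescaling pushing another cohort past its own area):

program rescale_sketch
   implicit none
   integer,  parameter :: r8 = selected_real_kind(12)
   integer,  parameter :: n = 3
   real(r8) :: c_area(n), excl(n), demand, capped, uncapped, scale
   integer  :: i

   c_area = (/ 5.0_r8, 3.0_r8, 2.0_r8 /)  ! cohort crown areas
   excl   = (/ 6.0_r8, 1.0_r8, 1.0_r8 /)  ! first pass: cohort 1 wants >100%
   demand = sum(excl)                     ! total area that must demote

   capped   = sum(c_area, mask = excl >= c_area)  ! area capped at 100%
   uncapped = sum(excl,   mask = excl <  c_area)  ! first-pass area of the rest
   scale    = (demand - capped)/uncapped          ! one pass, no iteration

   do i = 1, n
      if (excl(i) >= c_area(i)) then
         excl(i) = c_area(i)          ! cap at the cohort's full crown area
      else
         excl(i) = excl(i)*scale      ! rescale so the total still matches
      end if
   end do
   write(*,'(a,3f8.3)') ' demoted areas: ', excl   ! 5.0, 1.5, 1.5 (sums to 8)
end program rescale_sketch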

Fix 5: The patch termination algorithm was modified to fuse, at all costs, patches with areas less than 0.01 m2. Excessively small patches were surviving the previous algorithm and were creating numerical havoc in demotion/promotion and potentially in radiation transfer.

https://github.com/rgknox/fates/blob/rgknox-demprob-merge-carea/biogeochem/EDPatchDynamicsMod.F90#L1865

Fix 6: The maximum number of canopy layers was reduced back to 2. Errors were being generated in the radiation transfer algorithm, and reducing the layering back to 2 removed them. Understanding these errors is beyond the scope of these fixes and should be the target of a more comprehensive analysis.

Collaborators:

@ckoven @jkshuman @peterhor

Expectation of Answer Changes:

Yes, answers should change. The crown-area conservation in the fusion routine, as well as the change in the number of canopy layers, should have non-trivial impacts on the simulation. The rest of the fixes should introduce at least round-off-level changes.

Checklist:

  • My change requires a change to the documentation.
  • I have updated the in-code documentation .AND. (the technical note .OR. the wiki) accordingly.
  • I have read the CONTRIBUTING document.
  • FATES PASS/FAIL regression tests were run
  • If answers were expected to change, evaluation was performed and provided

Test Results:

Long-term tests using the 14-pft default file on the f45 grid are ongoing and showing promise.

Standard FATES test suite: TBD

CTSM (or) E3SM (specify which) test hash-tag:

CTSM (or) E3SM (specify which) baseline hash-tag:

FATES baseline hash-tag:

Test Output:

ckoven and others added 25 commits December 4, 2018 20:05
…mization to inverse crown area allometry. Implemented safer logic against a real.
…ster

integrate minor updates to crown-conservation
… out cohorts that surpass maxlayers during demotion.
…ceeds its target, dbh will increase to match. This will allow grasses to not force growth, and should reduce divergence from allometries.
… radiation errors. Also, made provisions to forcefully terminate patches that have super small areas, even if they want to fuse with the youngest patch.
@rgknox (author) commented Mar 8, 2019

Long-term tests are ongoing. A good amount of the new code has temporary diagnostics I was using to track down bugs, which still need to be removed.

@rgknox (author) commented Mar 11, 2019

A 120-year test with 14 pfts on the 4x5 grid and a 100-year base-comparison test have completed. Will follow up with some diagnostics.

@rgknox (author) commented Mar 15, 2019

Here is a new comparison: global simulations, 16 years into the run.

Base: master branch with 3 small tweaks (or else it won't complete), 2 canopy layers, 40 leaf layers, and the bug fix for erroneously terminating cohorts in the lowest layer during fusion.

Test: the current branch, after merging in the master (thus including the new photosynthesis parameters)

https://drive.google.com/file/d/1_VzK8sjFbTSmCn4Wu_OkLkbmUiF2XQUr/view?usp=sharing

My take-aways:

  1. Current master in this test configuration shows broadly similar patterns in leaf area and biomass to the test branch.
  2. Although, re 1), the test branch does include answer-changing code (i.e., the demotion logic and crown fusion), features that were added specifically to overcome crashes and were an important part of these changes.
  3. I think the default parameterization, or the set of PFTs used, needs to be tweaked. It would be nice to generate a global demographic of biomass and leaf area that is more representative of reality, which would really help testing.

@ckoven (Contributor) commented Mar 15, 2019

thanks @rgknox. glad that the code changes here don't seem to have much of a qualitative effect on the outcomes, and that the default-pft-file C4 grass world takeover is a separate problem. could you assign code reviewers to this PR so that we can get it merged soon?

@rgknox requested review from ckoven, jkshuman and peterhor and removed the request for ckoven, March 15, 2019 19:32
… to use leaves instead of fineroots for grasses
@rgknox (author) commented Mar 25, 2019

All expected PASS:
/glade/u/home/rgknox/clmed-tests/fates.cheyenne.intel.demprom-carea-fixes-Ccb5be8e-Fd6c81e0

sumweights_old = sumweights_old + currentCohort%excl_weight
end if

currentCohort%excl_weight = currentCohort%excl_weight/sumweights
@glemieux (Contributor) commented Apr 4, 2019:

Do I understand correctly here that the code will still run through the normal (stochastic) case first (starting old line 381)? Otherwise sumweights would start off as 0.0_r8 and result in a divide-by-zero?

@glemieux (Contributor) replied:

I think I answered my own question: this has to be true, given that currentCohort%excl_weight is also computed in that section.

@glemieux (Contributor) commented Apr 4, 2019:

I see; we need to run through the existing weights first to get their sum and then run back through them again to normalize. Hence the two loops.
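In array form, the two-pass pattern @glemieux describes looks roughly like this (a sketch with hypothetical values; the actual code traverses the cohort linked list twice rather than an array):

program two_pass_normalize
   implicit none
   integer,  parameter :: r8 = selected_real_kind(12)
   integer,  parameter :: n = 4
   real(r8) :: excl_weight(n), sumweights
   integer  :: i

   excl_weight = (/ 2.0_r8, 1.0_r8, 4.0_r8, 3.0_r8 /)

   sumweights = 0.0_r8
   do i = 1, n                  ! pass 1: accumulate the sum
      sumweights = sumweights + excl_weight(i)
   end do

   do i = 1, n                  ! pass 2: normalize; sumweights is nonzero
      excl_weight(i) = excl_weight(i)/sumweights   ! because pass 1 ran first
   end do
   write(*,'(a,4f8.4)') ' normalized: ', excl_weight
end program two_pass_normalize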

if ( currentCohort%excl_weight < (currentCohort%c_area-nearzero) ) then
do while (associated(currentCohort))
if(currentCohort%canopy_layer == i_lyr) then
currentCohort%excl_weight = currentCohort%c_area * currentCohort%excl_weight * scale_factor
@glemieux (Contributor) commented Apr 4, 2019:

Note to self: scale_factor includes c_area, so it needs to be cancelled out. This is f_{c(i)}.

@glemieux (Contributor) commented Apr 4, 2019:

Update: See comment on line 574 below. The intent is to compute the excluded area immediately here, which makes sense looking at the debug below and the ed_cohort_type comments.

if( currentCohort%excl_weight < (currentCohort%c_area-nearzero) ) then
sumweights = sumweights + currentCohort%excl_weight
currentCohort%excl_weight = currentCohort%c_area * &
(currentCohort%excl_weight * scale_factor_min + &
@glemieux (Contributor) commented:

@rgknox I don't think you need to multiply by the area here, since scale_factor_min doesn't include the area like scale_factor does, correct?

@glemieux (Contributor) commented Apr 4, 2019:

Actually, I assumed that this was computing f_c (as in the technical paper) and then using that elsewhere to calculate the excluded weight. I see now that the intent is to do that here.

@rgknox (author) replied:

ok, I'm looking through this now as well and double-checking.


end do
! Non-trivial case, at least 1 cohort's demotion
@glemieux (Contributor) commented Apr 4, 2019:

@rgknox Comment typo: should read promotion, not demotion, correct?

@rgknox (author) replied:

yup, good catch

@glemieux (Contributor) commented Apr 4, 2019

Finished reviewing the promotion/demotion subroutines; looks good. Per our Gchat conversation, my brain itched to turn the repeated pattern:

do while (associated(currentCohort))
               if(currentCohort%canopy_layer  ==  i_lyr) then 

into a callable subroutine. @rgknox noted that:

Typically, things in-line are more efficient than subroutines in Fortran, but this code is not called every second, so improvements for clarity that slow the code are an acceptable trade-off here.

I'm assuming that temporarily holding all of the cohort exclusion or promotion areas in a vector wouldn't be (memory) efficient either, correct? My thought was to allow using min() instead of an 'if' loop to find scale_factor_min (see the sketch below).
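For illustration, the two forms might compare like this (a sketch with hypothetical ratios, not FATES code):

program minval_sketch
   implicit none
   integer,  parameter :: r8 = selected_real_kind(12)
   integer,  parameter :: n = 3
   real(r8) :: ratio(n), scale_factor_min
   integer  :: i

   ratio = (/ 0.9_r8, 0.4_r8, 0.7_r8 /)   ! hypothetical per-cohort ratios

   ! 'if' loop form, in the style of the PR's linked-list traversal
   scale_factor_min = huge(1.0_r8)
   do i = 1, n
      if (ratio(i) < scale_factor_min) scale_factor_min = ratio(i)
   end do

   ! array form: one intrinsic call, at the cost of a temporary vector
   scale_factor_min = minval(ratio)
   write(*,*) 'scale_factor_min = ', scale_factor_min
end program minval_sketch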

For posterity and potential future reference, here's the link to the Jupyter notebook where I worked through the demotion technical paper by hand to get a feel for it.

@rgknox (author) commented Apr 5, 2019

Running final test; results permitting, will integrate.

@rgknox (author) commented Apr 5, 2019

all expected PASS:
/gpfs/fs1/scratch/rgknox/clmed-tests/fates.cheyenne.intel.demprom-vfinal-Cbc17668-F459f09e
