
large share of memory footprint in two variables #272

Open
rgknox opened this issue Sep 15, 2017 · 12 comments

rgknox (Contributor) commented Sep 15, 2017

We have two cohort arrays that are dimensioned by nlevleaf:

     real(r8) ::  ts_net_uptake(nlevleaf)              ! Net uptake of leaf layers: kgC/m2/s
     real(r8) ::  year_net_uptake(nlevleaf)            ! Net uptake of leaf layers: kgC/m2/year

nlevleaf is quite large, default = 40

This is the equivalent of 80 scalar cohort-level variables, and these two arrays may be taking up more than half of the total memory footprint. We should investigate how to reduce or remove these arrays.
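
For a rough sense of scale, a back-of-envelope sketch (assuming 8-byte reals and the default nlevleaf of 40; this is an illustration, not FATES code):

     ! Illustration only: per-cohort memory used by the two uptake arrays.
     program uptake_memory_estimate
        implicit none
        integer, parameter :: nlevleaf     = 40   ! default number of leaf layers
        integer, parameter :: bytes_per_r8 = 8    ! size of an r8 real
        integer :: bytes_per_cohort
        ! two arrays (ts_net_uptake and year_net_uptake), each of length nlevleaf
        bytes_per_cohort = 2 * nlevleaf * bytes_per_r8
        print *, 'bytes per cohort for the two uptake arrays: ', bytes_per_cohort   ! 640
     end program uptake_memory_estimate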

rosiealice (Contributor) commented:

OK, so the reason we have two of these is that ts_net_uptake is calculated inside iteration loops, so we can't just update year_net_uptake directly. The reason we have 40 layers is less clear. Ostensibly, this is the maximum plausible LAI divided by dinc, the layer thickness, which is currently 1.0. 40 seems a bit high but is probably there to accommodate crazy things happening. I can imagine we could halve it and all would be well (maybe we want it to crash if LAI > 20?).
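
As a minimal sketch of that sizing relationship (names here are illustrative, not the actual FATES parameter names):

     ! Illustration only: how nlevleaf follows from an assumed maximum LAI and dinc.
     integer,  parameter :: r8 = selected_real_kind(12)
     real(r8), parameter :: dinc    = 1.0_r8    ! leaf-layer thickness, in LAI units
     real(r8), parameter :: lai_max = 40.0_r8   ! assumed maximum plausible LAI
     integer,  parameter :: nlevleaf_est = ceiling(lai_max / dinc)   ! -> 40 layers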

The alternative is only looking at the bottom 'n' layers of the canopy, but that gets complicated by the canopy thickness changing over time. Or we could not do this optimization at all, or something else I haven't thought of yet...

rgknox (Contributor, Author) commented Sep 20, 2017

Without having looked at the code recently... maybe we could create a patch-level array to hold this info, dimensioned (pft, nlayer, nlevleaf)? I don't know how much memory that would save; it depends on how much larger maxcohortperpatch is than numpft*nclmax.
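
As a rough element-count comparison (a sketch; the cohort, PFT, and canopy-layer counts below are assumptions for illustration):

     ! Illustration only: per-patch element counts per uptake variable, for the two layouts.
     integer, parameter :: nlevleaf          = 40    ! leaf layers
     integer, parameter :: maxcohortperpatch = 100   ! assumed maximum cohorts per patch
     integer, parameter :: numpft            = 14    ! assumed number of PFTs
     integer, parameter :: nclmax            = 2     ! assumed number of canopy layers
     integer, parameter :: n_cohort_layout = maxcohortperpatch * nlevleaf   ! 4000 elements
     integer, parameter :: n_patch_layout  = numpft * nclmax * nlevleaf     ! 1120 elements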

rosiealice (Contributor) commented:

Hmm. I guess it is really just a PFT/leaf-layer quantity. I think?

rgknox (Contributor, Author) commented Sep 22, 2017

Looking more at the code, I think we can get away with just one of the two; or rather, I think we can remove ts_net_uptake. We could add to ccohort%year_net_uptake in the same place we fill in ts_net_uptake: line 475 of FatesPlantRespPhotosynthMod.F90.
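
A minimal sketch of that accumulation (the loop bound, the per-timestep uptake variable, and the dtime weighting are assumptions for illustration, not the actual FATES code; the interaction with the iteration loop raised later in this thread would still need handling):

     ! Illustration only: accumulate directly into the annual array rather than
     ! storing a separate per-timestep array for each cohort.
     do iv = 1, nv_used   ! number of occupied leaf layers (name hypothetical)
        ccohort%year_net_uptake(iv) = ccohort%year_net_uptake(iv) + &
             net_uptake_layer(iv) * dtime   ! kgC/m2/s * s, accumulated over the year
     end do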

Another option: we could just track the bottom-most (and thereby the most carbon starved, hypothetically) layer, as well as the index of that layer. And instead of calling the routine yearly, we could call at a higher frequency, which would help to ameliorate the possibility of plants that would want to trim multiple layers per year.

rosiealice (Contributor) commented:

The issue with sub-annual calling is places that have an off season, as I recall. If, say, leaves come into positive carbon balance in March and you are cropping leaves based on their 3-month net uptake, the bottom leaves are still going to look highly unproductive for a while after they start to have NPP > 0.

On the bottom-layer tracking, how does it work if the LAI grows from 5.9 to 6.1, so that the 5-6 layer is no longer the bottom? (Particularly if it then declines back to 5.9, having missed out the intervening period?)

rosiealice (Contributor) commented:

Isn't the ts_net_uptake calculation still inside the temperature iteration also? (Apologies for the negative responses to ideas. I've been grappling with this too...)

rgknox (Contributor, Author) commented Oct 1, 2018

Sorry for the late response, @rosiealice; this one probably made my brain hurt while I was trying to figure it out.

@rosiealice, regarding your point on the bottom-layer tracking issue, thinking out loud here: I guess the uptake rate we would track would be for whatever the lowest layer is for the fully flushed plant at its target (or carrying capacity), at the current trimming rate. If a plant is constantly at a leaf area/biomass that does not reach this layer, then I suppose we would not be updating the uptake rate in that lowest layer, and we wouldn't need to, because the plant doesn't even fill out that far and it wouldn't be relevant anyway.

Now, if the plant grows in stature and the index of the bottom leaf layer gets larger as it grows, I'm asking myself whether the uptakes that have been tracked at the lower layers are still relevant/meaningful/accurate enough to influence the decision on how to trim with respect to the new, larger lowest leaf-layer index.

If the plant was not productive in lower leaf layers as a smaller plant, I would suppose that information would still be relevant as the plant gets larger, and would be meaningful as we decide to trim leaf layers at its larger stature.

This assumes that it has not jumped into a new canopy layer (but the vectorized uptake array would not catch this either).

ckoven (Contributor) commented Oct 1, 2018

Following up on @rgknox's update and responses to @rosiealice's comment #272 (comment): in general the bottom layer shouldn't change with growth, via the same logic laid out in #420 (comment), which says that so long as bleaf and canopy area use the same allometric exponent, LAI should be invariant with tree size. This isn't strictly true in the understory, since the SLA scaling could change due to things happening above, but even in that case the inexactness arising from only tracking the carbon balance of the lowermost expressed leaf layer shouldn't be very large.
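
To spell out that invariance argument (a sketch in generic allometric symbols, not the exact FATES parameter names):

     \mathrm{LAI} = \frac{b_{\mathrm{leaf}}\,\mathrm{SLA}}{A_{\mathrm{crown}}},
     \qquad b_{\mathrm{leaf}} \propto d^{\,p}, \quad A_{\mathrm{crown}} \propto d^{\,p}
     \;\Rightarrow\; \mathrm{LAI} \propto \mathrm{SLA}\ \text{(independent of stem diameter } d\text{)}

so as long as both allometries share the exponent p, the number of expressed leaf layers, and hence which layer is lowermost, does not shift as the tree grows.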

So I support trying to see if only tracking the lowermost layer works.

ckoven (Contributor) commented Oct 2, 2018

A (possibly) better idea: what if, instead of just tracking the bottom layer, we tracked the bottom two leaf layers, and then used the local values of the carbon balance increment as a function of the bleaf increment to estimate the approximate intercept of that function, and trimmed to that value (assuming it is within +/- trim_inc of the present value) rather than always trimming by increments of trim_inc itself? This might be a better way to do the optimization, and might avoid the oscillations that @kovenock described in #383?
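
In equation form (a sketch with generic symbols, not FATES variable names): let C_z and C_{z-1} be the annual carbon balances of the lowest and second-lowest expressed leaf layers, with z counting layers downward from the top of the canopy. Then

     s = C_z - C_{z-1}, \qquad C(x) \approx C_z + s\,(x - z) = 0
     \;\Rightarrow\; x_0 = z - \frac{C_z}{s}

and the trim target would move toward the zero-crossing depth x_0, clamped to within +/- trim_inc of the present value.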

rgknox (Contributor, Author) commented Oct 2, 2018

@ckoven could you explain in a little more detail? Sounds interesting:

local values of carbon balance increment as a function of bleaf increment to

rgknox (Contributor, Author) commented Oct 3, 2018

@ckoven and I stepped up to the whiteboard, and I think I have a loose idea of how a solution could be approached. My interpretation is that, by knowing the annual carbon balance of the two lowest layers, we can calculate the slope and then use a linear extrapolation to see at what trimming value we would have zero net uptake. I have to look at the code more, but I will self-assign and keep you all updated.
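
A minimal sketch of that extrapolation as a standalone routine (names and the clamping behavior are assumptions for illustration; the actual trimming logic in FATES may differ):

     ! Illustration only: extrapolate the annual carbon balance of the two lowest
     ! leaf layers to the depth at which net uptake would reach zero.
     subroutine trim_target_from_two_layers(cb_bot, cb_above, z_bot, trim_inc, z_target)
        implicit none
        integer,  parameter   :: r8 = selected_real_kind(12)
        real(r8), intent(in)  :: cb_bot    ! annual C balance of the lowest layer (kgC/m2/yr)
        real(r8), intent(in)  :: cb_above  ! annual C balance of the next layer up (kgC/m2/yr)
        real(r8), intent(in)  :: z_bot     ! depth (in leaf layers) of the lowest expressed layer
        real(r8), intent(in)  :: trim_inc  ! maximum allowed change in the target per call
        real(r8), intent(out) :: z_target  ! depth at which the extrapolated C balance is zero
        real(r8) :: slope
        slope = cb_bot - cb_above                 ! change in C balance per layer of depth
        if (abs(slope) > tiny(slope)) then
           z_target = z_bot - cb_bot / slope      ! linear zero crossing
        else
           z_target = z_bot                       ! flat profile: leave the target unchanged
        end if
        ! never move the target by more than trim_inc in one step
        z_target = max(z_bot - trim_inc, min(z_bot + trim_inc, z_target))
     end subroutine trim_target_from_two_layers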

rgknox self-assigned this Oct 3, 2018

rosiealice (Contributor) commented Oct 3, 2018 via email
