This is motivated by issue #986. It may address part of the problem identified in that post, but perhaps not all.
In all FATES modes, FATES must tell the HLM how much memory will be needed to store its highest-density information in the restart files. This is most often the cohort-scale data. However, if the user specifies a small maximum cohort count, it is feasible that the litter data is the highest-density information. The variable that FATES passes to the HLM is fates_maxElementsPerSite, see here https://github.com/NGEET/fates/blob/main/main/FatesInterfaceMod.F90#L848. This value is based on fates_maxElementsPerPatch and the maximum number of patches.
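Roughly, the relationship looks like the sketch below (the max() expression matches the proposal further down; the name of the patch-count variable is a placeholder, so check FatesInterfaceMod.F90 for the exact form):

```fortran
! Sketch of how the per-site restart dimension is assembled (assumed form).
! The per-patch count is set by the densest per-patch array (cohorts or
! litter-by-soil-layer), and the per-site count scales it by the maximum
! number of patches a site can hold.
fates_maxElementsPerPatch = max(max_cohort_per_patch,  &
                                ndcmpy*hlm_maxlevsoil,  &
                                ncwd*hlm_maxlevsoil)
fates_maxElementsPerSite  = max_patches_per_site * fates_maxElementsPerPatch   ! placeholder name
```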
When FATES SP is active, the value of fates_maxElementsPerSite should be relatively small. This is because there need only be 1 cohort per patch, and we do not use litter at all. As far as I can tell, the number of patches per site seems accurate in an SP case.
I propose, first, to modify fates_maxElementsPerPatch so that in an SP run it is simplified:
```fortran
if(hlm_use_sp) then
   fates_maxElementsPerPatch = 1
else
   fates_maxElementsPerPatch = max(max_cohort_per_patch, &
                                   ndcmpy*hlm_maxlevsoil, &
                                   ncwd*hlm_maxlevsoil)
end if
```
And second (optionally), to avoid allocating the litter object that is embedded in the patch structure when SP is active. This code should not be touched when SP is on, and preventing its allocation is a good way to make sure we are not accessing junk values in the object when we aren't supposed to be.
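A minimal sketch of what the guarded allocation could look like, assuming the litter array hangs off the patch as an allocatable component (the variable and component names here are illustrative, not the exact FATES ones):

```fortran
! Hypothetical guard: only build the per-patch litter objects when SP is off.
! In SP mode the litter code is never exercised, so leaving the component
! unallocated makes any accidental access fail loudly instead of reading junk.
if (.not. hlm_use_sp) then
   allocate(currentPatch%litter(num_elements))
   ! ... per-element litter initialization would follow here ...
end if
```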
@rgknox if I make your proposed modification, how should I modify the code so that this error (it seems to be related to the restart file) is not triggered:
```fortran
if( (nlevsoil_in*ndcmpy) > fates_maxElementsPerPatch .or. &
    (nlevsoil_in*ncwd)   > fates_maxElementsPerPatch) then
   write(fates_log(), *) 'The restart files require that space is allocated'
   write(fates_log(), *) 'to accomodate the multi-dimensional patch arrays'
   write(fates_log(), *) 'that are nlevsoil*numpft and nlevsoil*ncwd'
   write(fates_log(), *) 'fates_maxElementsPerPatch = ',fates_maxElementsPerPatch
   write(fates_log(), *) 'nlevsoil = ',nlevsoil_in
   write(fates_log(), *) 'dcmpy = ',ndcmpy
   write(fates_log(), *) 'ncwd = ',ncwd
   write(fates_log(), *) 'numpft*nlevsoil = ',nlevsoil_in*numpft
   write(fates_log(), *) 'ncwd*nlevsoil = ',ncwd * nlevsoil_in
   write(fates_log(), *) 'To increase max_elements, change numlevsoil_max'
   call endrun(msg=errMsg(sourcefile, __LINE__))
end if
```
If I comment this check out for SP mode, will it affect the memory for the restart file?
Or should I modify it like this:
```fortran
if(hlm_use_sp) then
   fates_maxElementsPerPatch = max(1, ndcmpy*hlm_maxlevsoil, ncwd*hlm_maxlevsoil)
end if
```
@pnlfang @glemieux