Logging event code > 10000 is not working due to mass balance error #961
@sharma-bharat Can you provide some more details about the case you are running and the particular versions of fates?
The error triggers immediately after the event, which suggests to me that the problem is directly tied to the event itself. Here is the call to the balance check, which the log indicated was call index 3. Note that the check fails right after spawn_patches, so there must be some problem with the litter (also noting what @glemieux said) being generated from the disturbance event. Here is the call to litter generation from a logging event:

I would put print statements in that routine (or run a debugger) wherever we update the new_litter% and cur_litter% terms. The print statement will very often fail if trying to print a NaN or Inf, so keep that in mind too. Also, updating the xml file in your case directory to debug mode might help: ./xmlchange DEBUG=True

Here is the first place that the litter gets updated:
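The NaN-printing caveat above can be made concrete with a small sketch (Python here rather than Fortran, purely for illustration; the function and variable names are hypothetical, not FATES code):

```python
import math

def safe_report(name, value):
    # Printing a NaN or Inf directly can obscure the diagnostic (and
    # with some compilers/runtimes abort it), so classify the value first.
    if math.isnan(value):
        return f"{name} is NaN"
    if math.isinf(value):
        return f"{name} is Inf"
    return f"{name} = {value:.6e}"

# Example: a litter flux that went bad upstream
print(safe_report("new_litter", float("nan")))  # new_litter is NaN
print(safe_report("cur_litter", 1.25e-3))
```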
I'm not sure if this is relevant but logging events will throw a mass balance error with values of
@JoshuaRady thanks, that's an interesting point. In principle the
A comment in response to the one immediately above: I don't think
@sharma-bharat has been digging into the logging code and logging event failures that cause the code to crash. Our heads are in knots a little, but I think we're getting an idea of what's going on.

The proximal issue is that with a single or annual logging event

This can be fixed by writing a catch into the litter partitioning logic that zeros litter fluxes when

I think that's worth doing for robustness, but it doesn't answer why we get

This is all good, but when we initiate an annual logging event from the parameter file, we set harvest rate to 1. Combined with the above disturbance fraction logic, which remains the same, this means that when we initiate an annual logging event all of our area is disturbed. I don't think that's the logic we want, right? I'm not sure what degradation disturbance is supposed to represent in the case where we initiate a logging event from the param file. Seems like collateral and mechanical should cover that?

I think it's a fairly easy fix, I just don't know exactly what logic we want. The fix can go here and look something like:
Or add an additional case to represent the degraded disturbance logic for a logging event initiated from the parameter file.

TL;DR: an annual logging event from the param file causes all of a patch to be disturbed, resulting in a divide by zero in the new patch generation logic. We can fix this by 1) changing the harvest_rate / degraded land logic for a parameter-file-initiated logging event and 2) making the new patch creation logic robust to a zero remainder area (possibly only during a logging event).

@ckoven @rgknox @jkshuman @glemieux @JoshuaRady -- let us know what you think and we'll implement the fixes. At the least fix 1), possibly 2).
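As a sketch of fix 2) (hypothetical Python pseudocode, not the actual FATES routine): when the harvest rate disturbs the entire patch, the remainder area is zero and any per-area normalization of litter on the remaining patch divides by zero, so the partitioning needs an explicit guard:

```python
def split_patch(patch_area, disturbed_frac, litter_mass):
    """Return (new_patch_area, remainder_area, remainder_litter_density).

    Hypothetical sketch: litter retained on the remainder patch is
    expressed per unit remaining area, which blows up when
    disturbed_frac == 1 unless the zero remainder is guarded.
    """
    new_area = patch_area * disturbed_frac
    remainder = patch_area - new_area
    if remainder <= 0.0:
        # Whole patch disturbed: no remainder patch, zero its litter flux
        return new_area, 0.0, 0.0
    return new_area, remainder, litter_mass / remainder

# With a full (harvest rate = 1) logging event the guard avoids 1/0:
print(split_patch(100.0, 1.0, 5.0))  # (100.0, 0.0, 0.0)
```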
@walkeranthonyp I think the key thing is to distinguish what is meant by each of the terms here.
In the case of the event-based logging, since the assumption is that it is applied to the entire area, I can see how the
@walkeranthonyp I'm still trying to think this through some more in real time. The one concern that occurs to me is the subsequent dynamics due to patch labeling (which wasn't yet a thing in FATES as of https://bg.copernicus.org/articles/17/4999/2020/ ). If you set
For what we want to do with FACE simulations we actually want to clear the whole area, so it would all be labeled

But for the broader case where not the whole area is disturbed by logging, would what you suggest be a problem? Would it get messy with successive logging events, with the code not knowing where to apply them or something?
@walkeranthonyp so long as your canopy is completely closed, I think it should work fine. And yeah, maybe it shouldn't matter in the general case what the labels are at the site scale anyway, they really only get used in the larger-scale sense through the separate drivers for subsequent logging in order to allow large-scale forest rotations. Maybe give it a try and just make sure nothing weird happens?
As @ckoven confirmed above, the addition of

If you want to split the patch with logging, the change you propose should allow this, but I would proceed with caution. In addition to the primary vs. secondary issue mentioned by @ckoven, there may be other issues. I reviewed the code at some point in the past and I recall I had some concerns that the mass partitioning code might not work as expected if the patch is split. Sorry to be vague but I haven't found my notes on that; I just know I had to do a lot of careful debugging when I implemented patch splitting in the VM Module. Additionally, when a patch is split after a harvest the canopy can undergo rearrangement that leads to unpleasant and unrealistic side effects (again see issue #714).
Sorry, my comment above was too slow and @walkeranthonyp's subsequent post renders part of it moot. @walkeranthonyp, I think the current behavior is what you want. If you change

Clearly the litter divide-by-zero issue should be fixed.
Thanks for the rapid comments @ckoven @JoshuaRady. I'm somewhat agnostic about what we do with

From your comments and my reading of the code, logging enacted from the param file is supposed to affect 100% of the area, so that all of a logged patch becomes a new patch. But damage to plants only happens through
I have a fix for this that's working by catching when

My questions above still stand. Thinking more on this, by setting
This discussion will continue in #1003.
Hi all, I'm on a recent commit of the main branch (7f448b0, sci.1.65.2_api.25.4.0) that includes the fix from PR #996, but I'm seeing (nearly) the same error as described above: NaN in mass balance checks following event-based logging. I'm trying to simulate a one-time stand-clearing disturbance (all trees in all patches dead) and total logging seemed like the easiest way to go.

My failure has call index 4, rather than 3, so it is being triggered here after patch fusion (the TotalBalanceCheck after spawn_patches seems to pass ok). It looks like a patch with a very small negative area was somehow created:

Logging Event Enacted on date: 01-01-1993
patch area: -6.2527760746888816E-013

Maybe I missed something in the discussion above. Is it possible to allow fates_landuse_logging_direct_frac = 1.0 and fates_landuse_logging_dbhmin = 0.0?
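The reported area of -6.25e-13 looks like floating-point round-off from repeated add/subtract of patch areas during fusion. One defensive pattern (a hypothetical Python sketch, not FATES code; the tolerance value is an assumption) is to clamp near-zero negatives within a tolerance and treat anything more negative as a genuine error:

```python
AREA_TOL = 1.0e-9  # hypothetical tolerance, in the same units as patch area

def sanitize_area(area):
    # Repeated area transfers can leave a patch area slightly below zero
    # purely from round-off; clamp those, but flag real negatives.
    if area < -AREA_TOL:
        raise ValueError(f"patch area {area} is negative beyond round-off")
    return max(area, 0.0)

print(sanitize_area(-6.2527760746888816e-13))  # 0.0
```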
Hi @JessicaNeedham. These options (fates_landuse_logging_direct_frac = 1.0 and fates_landuse_logging_dbhmin = 0.0) work fine in my cases.
Thanks @sshu88, that's good to know. I only see this error with the logging event on certain dates in full FATES. I guess a certain patch structure prior to logging must be triggering it.
We (Dr. Anthony Walker and I) were trying to perform logging on a specific day (icode > 10000) and we are getting a mass balance error, as follows:
In the log file, it looks like the abs error is a NaN. That would likely be causing the mass balance error.
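For reference, this failure mode can be reproduced with a minimal balance-check sketch (hypothetical Python, not the actual FATES TotalBalanceCheck): once any flux is NaN, the absolute error is NaN, and because NaN compares false with everything, a check written as "fail if the error exceeds tolerance" silently passes unless NaN is tested explicitly:

```python
import math

def balance_ok(stock_before, stock_after, net_flux, tol=1.0e-9):
    abs_err = abs(stock_after - (stock_before + net_flux))
    # NaN compares False with everything, so a check of the form
    # `if abs_err > tol: fail` would silently pass a NaN error;
    # test for NaN explicitly.
    if math.isnan(abs_err):
        return False
    return abs_err < tol

print(balance_ok(10.0, 12.0, 2.0))          # True
print(balance_ok(10.0, float("nan"), 2.0))  # False
```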
Could you please look into this issue and help us resolve it?
Thanks,
Bharat