track the coupling fluxes separately, to avoid confusion before/after scale_fluxes #67
That looks right to me, if I understand the logic.
I think we should have a conversation about this. It has just come up again in the context of the ponds. The pond fraction is:

apond = sum(apondn(n)*aicen(n)), n = 1, ncat

However, there is a history variable apond_ai, i.e. apond*aice. Do we ever divide apond by aice in the code? If not, apond_ai effectively has aice multiplied in twice. We have a couple of variables like this that are not sent to the coupler and hence are not scaled.
Note that I am using apondn as the short form for trcrn(:,:,nt_apnd,:,:) here.
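To make the double-scaling concern concrete, here is a tiny numerical sketch (Python rather than the model's Fortran, with entirely hypothetical values):

```python
# Hypothetical two-category example of the concern above.
apondn = [0.3, 0.1]   # pond area fraction tracer, per category
aicen  = [0.4, 0.2]   # ice area fraction, per category
aice   = sum(aicen)   # total ice area fraction = 0.6

# Aggregate as in the formula above: apond = sum(apondn(n)*aicen(n))
apond = sum(p * a for p, a in zip(apondn, aicen))   # ~0.14

# If apond is never divided by aice, then a history field computed as
# apond_ai = apond*aice carries aice twice:
apond_ai_suspect = apond * aice                     # ~0.084

# Dividing first gives a per-ice-area value whose _ai product
# recovers the original aggregate:
apond_ai_ok = (apond / aice) * aice                 # back to ~0.14
```

The question in this comment is exactly whether the division step happens anywhere before the `_ai` multiplication.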
Pinging @zswolff
Actually, when level ponds are turned on, then apond and apond_ai are:
I'm not seeing the problem here. E.g. at the end of compute_ponds_lvl, the tracer array is reloaded correctly:
That's also done for the cesm and topo cases. History uses the tracer fields directly, as you note above, and there's a special subroutine for merging (aggregating) tracer fields based on their dependencies. Can you be a little more specific about what you are looking at in the code?
Thanks for highlighting this. I guess I didn't realize this was done in the melt pond code. I do think it is worth making sure all of our _ai and non-_ai fields are done correctly.
Actually, I looked at the aggregate subroutine, which does indeed aggregate (merge) the tracers according to their dependencies. For example, for the level ice:

atrcr(:,:,:) = atrcr(:,:,:) + trcrn(:,:,nt_apnd,:,n)*trcrn(:,:,nt_alvl,:,n)*aicen(:,:,:,n)

However, I don't see where trcr(:,:,nt_apnd,:) is divided by the gridcell trcr(:,:,nt_alvl,:) and aice(:,:,:). Then it is multiplied by these in history ... I'm very confused.
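As a sketch of the merge-then-recover pattern being discussed (Python with hypothetical values and simplified names, not the actual Icepack routines), the aggregate weights the pond tracer by its full dependency chain, and recovering the fractional value means dividing those dependencies back out:

```python
ncat  = 2
apndn = [0.5, 0.2]   # pond fraction of level-ice area, per category
alvln = [0.8, 0.6]   # level-ice fraction of ice area, per category
aicen = [0.4, 0.2]   # ice area fraction, per category

# Merge step, weighting by the dependency chain (as in the line quoted above)
atrcr_apnd = sum(apndn[n] * alvln[n] * aicen[n] for n in range(ncat))
atrcr_alvl = sum(alvln[n] * aicen[n] for n in range(ncat))
aice = sum(aicen)

# Recovery step: divide the aggregated dependencies back out to get
# fractional tracer values again
alvl = atrcr_alvl / aice
apnd = atrcr_apnd / (alvl * aice)   # equivalently atrcr_apnd / atrcr_alvl
```

Whether and where the recovery step happens in the code is exactly the question raised here.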
I'm still not sure what you're looking at. There are two aggregate routines now: aggregate_area (which only does area, not tracers) and icepack_aggregate, which does everything. The division is done in subroutine icepack_compute_tracers, called at the end of icepack_aggregate. Which version of the code are you looking at? I'm not finding the line of code above. Also, the history output uses the trcr arrays directly, not atrcr.
This has come up yet again in #764. Cleaning up the coupling/non-coupling fields/fieldnames should be a higher priority. In #764, we decided to use _iavg to indicate a field that is "per ice area" (divided by aice). We'll want to go through the code carefully to instantiate new fields with the suffix _iavg, refactor subroutine scale_fluxes, and then review where those fields are used elsewhere in the code.
I think I am finally ready to take this on. I am always getting confused about the _ai quantities. As an example, I'm going to propose that

flux = sum(fluxn(n)*aicen(n))

is the flux over the ice-covered region only, and then

flux_gbm = flux / aice

is the "grid box mean". We already use _gbm in a few places. These are what are sent to the coupler, but for CICE standalone the _gbm extension makes more sense.
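With hypothetical numbers, the proposal in this comment reads as follows (Python sketch, mirroring the comment's own definitions rather than the actual code):

```python
fluxn = [100.0, 80.0]   # per-category flux over ice (hypothetical values)
aicen = [0.4, 0.2]      # per-category ice area fraction
aice  = sum(aicen)

# flux = sum(fluxn(n)*aicen(n)), which this comment calls the flux
# over the ice-covered region only
flux = sum(f * a for f, a in zip(fluxn, aicen))   # ~56.0

# flux_gbm = flux / aice, which this comment calls the "grid box mean"
flux_gbm = flux / aice
```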
I'd like to suggest flux_cell instead of flux_gbm. Models on unstructured meshes don't generally refer to their cells as grid boxes (or grid cells), because 'grid' implies a rectangular geometry.
I like that idea. However, even "cell" is not necessarily general enough; in a sense they should be elements. Though I guess the CMIP / CF term is cell_measures, so I will start prototyping something with "cell". Back to your original point, it is super confusing with downward fluxes over the cell, because flwdn is the same over all the subgridscale categories!
This makes sense because flwdn arrives from the atmosphere component as a mean value for the entire cell, per square meter, and so it has the same value over the ice as over the whole cell. You can skip the division by aice for it.
Also, I always get these backwards. I'm not sure "cell" is the right term. The way to write this is:

flux = sum(aicen(n)*fluxn(n)), n = 0, ncat

However, the ice fraction in category zero (open water) is zero, so it does not contribute to this value. So we can also think of flux_cell as:

flux_cell = sum(aicen(n)*fluxn(n)) / sum(aicen(n)), n = 0, ncat

Again, the open ocean category contributes nothing. I guess this is where _iavg came in, as Tony notes above.
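Numerically, with category 0 as open water (Python sketch, hypothetical values):

```python
fluxn = [90.0, 100.0, 80.0]   # n = 0 (open water), 1, 2
aicen = [0.0,  0.4,   0.2]    # ice fraction in category 0 is zero

# flux_cell = sum(aicen(n)*fluxn(n)) / sum(aicen(n)), n = 0, ncat
flux_cell = sum(a * f for a, f in zip(aicen, fluxn)) / sum(aicen)

# Category 0 drops out since aicen(0) = 0, so the same value results
# from summing over the ice categories alone; this is the per-ice-area
# average, i.e. the "_iavg" sense mentioned above.
flux_cell_ice_only = (sum(a * f for a, f in zip(aicen[1:], fluxn[1:]))
                      / sum(aicen[1:]))
```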
I would also like @duvivier and @marikaholland to chime in here.
I have always found the _ai outputs really confusing, and I like the idea of using _cell as the suffix for these types of fields. Since I only work with structured grids (so far) it makes sense to me that we'd call them this, but I think "cell" could work even for an unstructured grid: when I think of a cell I think of something bounded by walls (like a biological cell), so the term is okay for an unstructured mesh too. And if it's the CF/CMIP standard, that makes sense to me as well.
Some (but not all) fluxes are divided by aice before being sent to the cesm coupler. This is highly confusing because normally, multiplying a non-flux value (e.g. thickness) specific to the ice-covered area by aice (or similarly area-averaging over categories) produces a grid-cell-mean value. The subroutine scale_fluxes divides by aice, so multiplying by aice then just brings it back to the ice-covered area value. It would be better to save the coupling fluxes separately so that the physical interpretations of the primary variables aren't changing. E.g. from a user's question on how to compute net longwave from the history output:
flwdn in history is flw elsewhere in the code, and that is the value for any point in the grid cell, whether it’s ice or ocean. It’s not multiplied by aice, but it is still a grid-cell average because it’s the same over ice and ocean alike.
flwup in history is flwout elsewhere in the code, and in the code calculations this is the value only over sea ice. However it’s later divided by aice for the cesm coupler, and that happens before it’s sent to history. flwup_ai in history is flwout*aice, so it’s back to the only-over-ice value.
So the net longwave is
over ice: flwnet = flwup_ai - flwdn
over the grid cell: flwnet(ice) + flwnet(ocn) = (flwup_ai - flwdn)*aice + (sigma*SST^4 - flwdn)*(1-aice)
Is that right?
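For what it's worth, the two formulas above can be checked numerically like this (Python sketch; the flux values and sign convention are hypothetical, sigma is the Stefan-Boltzmann constant):

```python
sigma    = 5.670374419e-8   # Stefan-Boltzmann constant, W m-2 K-4
flwdn    = 300.0            # downwelling LW, same over ice and ocean
flwup_ai = 310.0            # upwelling LW over ice only (flwout*aice)
SST      = 271.35           # sea surface temperature, K
aice     = 0.6              # ice area fraction

# over ice:
flwnet_ice = flwup_ai - flwdn

# over the grid cell: ice part weighted by aice, open-ocean part
# (blackbody emission at SST minus the same flwdn) weighted by 1-aice
flwnet_cell = (flwup_ai - flwdn) * aice \
            + (sigma * SST**4 - flwdn) * (1 - aice)
```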