VL+CT QoL Improvements #94
Conversation
…the primitives. The dual energy cloud test currently fails for the hllc solver. The expected answer needs to be updated.
…nstead of the integration quantities) and tweaked the docstring to convert integrable->integration
… the names reconstructable->primitive and integrable->integration. Other changes include: - I stopped storing quantities that are both a primitive and an integration quantity as aliases. - I dropped all mentions of the left and right reconstructed pressure from EnzoMethodMHDVlct (including mentions in EnzoMethodMHDVlct::setup_arrays_ and EnzoMethodMHDVlct::compute) - I additionally deleted a note from the EnzoMethodMHDVlct::compute_flux_ docstring where I had suggested breaking the compute_flux_ into 2 parts. I don't really think that would be a good idea (especially now that the instance method is fairly short) - Starting to change the names used by EnzoMethodMHDVlct::compute and EnzoMethodMHDVlct::timestep
…ssive scalars in conserved form. - This also includes some changes to EnzoIntegrableUpdate and EnzoEquationOfState (plus its subclasses) - Finished changing integrable->integration and reconstructable->primitive throughout VL+CT integrator and EnzoEquationOfState (plus subclasses)
… integrable->integration throughout the class definition
…he Riemann Solver) to reflect the fact that we have shifted from using the names reconstructable and integrable to primitive and integration. These changes also reflect the fact that we have - stopped using aliased arrays to hold the quantities that are both primitives and integration quantities - switched the passive scalars held with the integration quantities to be in conserved form
…mpute the fluxes (and the new "integration quantity" terminology) Other changes include: - Adjusted the max asymmetry residual in the Dual Energy Cloud Test for the HLLC Riemann Solver. - I transitioned from registering the integration quantities with the Riemann Solver (in the Riemann Solver factory method) to having the Riemann Solver directly specify its expected primitives and integration quantities. This seems to make much more sense than independently coming up with lists of integration and primitive quantities in EnzoMethodMHDVlct and making sure that they are consistent with the Riemann Solver's requirements. This change includes tweaks to EnzoRiemann::construct_riemann and the introduction of the EnzoRiemann::integration_quantities() and EnzoRiemann::primitive_quantities() interface methods. - This change made the EnzoMethodMHDVlct::determine_quantities_ helper method unnecessary, so I deleted it. - I also deleted the now-unnecessary method from EnzoEquationOfState, eint_from_primitive. Finally, I updated the website documentation for EnzoRiemann to reflect these changes
To fix the diff in this PR (since it builds on changes from PR #74 ), I'm going to follow the instructions from here - I've seen this work before. Basically I'm going to:
This should not cause any issues for @forrestglines's branch since I'm not actually going to be modifying the commit history.
I'm not currently sure why this test is failing (a couple of days ago, when I ran it locally, it passed). I'm going to try to look into it this evening. I suspect that the tolerance might have to be tweaked (to account for machine-dependent floating-point differences).
Don't worry about my branch, I can always rebase to this branch since I expect any changes to be fairly self-contained.
I reviewed this last week but forgot to finish the review. I made a lot of suggestions about what things could be `const`, but these were mostly to help me examine the code more closely.
I think these improvements are great, the code organization and flow is improving.
The EOS design is heavier than I would prefer. In another code I had success defining inline functions for computing pressure/sound speeds and another function for inverting the conserved variables. The Riemann solvers were then templated on the EOS. We don't have to implement that now but we could explore that later.
I'm still concerned about evolving the velocity instead of momentum. The more I look at it, the more I think it would be more robust long term to evolve only conserved variables. However, that change is way beyond the scope of this PR.
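The EOS pattern described above (inline functions for pressure/sound speed, with the Riemann solver templated on the EOS type) might look roughly like the sketch below. All names here (`IdealEOS`, `HLLSolverSketch`) are illustrative, not an actual API from either code:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical ideal-gas EOS: cheap inline functions the compiler can inline.
struct IdealEOS {
  double gamma;
  double pressure(double rho, double eint_specific) const {
    return (gamma - 1.0) * rho * eint_specific;
  }
  double sound_speed(double rho, double p) const {
    return std::sqrt(gamma * p / rho);
  }
};

// A Riemann solver templated on the EOS type: EOS calls resolve at compile
// time, avoiding virtual dispatch in the inner loop.
template <class EOS>
struct HLLSolverSketch {
  EOS eos;
  // Toy 1D signal-speed estimate (illustration only, not a full solver).
  double max_signal_speed(double rho, double v, double p) const {
    return std::abs(v) + eos.sound_speed(rho, p);
  }
};
```

The design choice here is that swapping the EOS means instantiating the solver template with a different struct, rather than branching at runtime.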
EFlt3DArray velocity_j = integration_map.at(v_names[j]);
EFlt3DArray velocity_k = integration_map.at(v_names[k]);
EFlt3DArray bfield_j = integration_map.at(b_names[j]);
EFlt3DArray bfield_k = integration_map.at(b_names[k]);
Suggested change:
- EFlt3DArray velocity_j = integration_map.at(v_names[j]);
- EFlt3DArray velocity_k = integration_map.at(v_names[k]);
- EFlt3DArray bfield_j = integration_map.at(b_names[j]);
- EFlt3DArray bfield_k = integration_map.at(b_names[k]);
+ const EFlt3DArray &velocity_j = integration_map.at(v_names[j]);
+ const EFlt3DArray &velocity_k = integration_map.at(v_names[k]);
+ const EFlt3DArray &bfield_j = integration_map.at(b_names[j]);
+ const EFlt3DArray &bfield_k = integration_map.at(b_names[k]);
I've thought about this exact sort of change a lot, and I'm not sure whether this is an improvement or not.
The thing is that `EFlt3DArray` can be thought of as a shared pointer. By making a copy of it (what we have now), we pay the overhead of incrementing/decrementing the reference count (although the current case probably uses move semantics). By storing a reference to it, we avoid the increments/decrements, but now we effectively have a reference to a pointer (which is similar to a pointer to a pointer). I'm not sure which is better.
An alternative idea is just to mark `velocity_j`, `velocity_k`, `bfield_j`, and `bfield_k` as `const`.
The same sort of semantics apply to `Kokkos::View`s, so I'm willing to go with your instinct here. But before I change it, do you have anything else to add?
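The three options under discussion can be illustrated with a toy handle that has shared-pointer semantics. `ToyArray` below is a hypothetical stand-in for `EFlt3DArray`, not the real class:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical stand-in for EFlt3DArray: a handle with shared-pointer
// semantics. With pointer semantics, element access can be a const member
// function because only the handle, not the data, is const.
struct ToyArray {
  std::shared_ptr<std::vector<double>> data;
  long use_count() const { return data.use_count(); }
  double& operator()(std::size_t i) const { return (*data)[i]; }
};
```

Copying the handle bumps the reference count (option 1); binding a `const` reference adds no owner but indirects through a reference-to-handle (option 2); a `const` copy cannot be reseated yet still exposes writable elements (option 3, the "just mark it `const`" alternative).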
> The thing is that `EFlt3DArray` can be thought of as a shared pointer. By making a copy of it (what we have now), we pay the overhead of incrementing/decrementing the reference count (although the current case probably uses move semantics). By storing a reference to it, we avoid the increments/decrements, but now we effectively have a reference to a pointer (which is similar to a pointer to a pointer). I'm not sure which is better.
We've been using essentially `const Kokkos::View&` recently; I appreciate that we keep the same object. Sometimes it can make debugging easier since the address of the container doesn't change between contexts.
The other confusing piece to me is that all of our arrays can probably be made `const &`, since even though we're changing the data in the array, we're not changing the array object itself.
> The same sort of semantics apply to `Kokkos::View`s, so I'm willing to go with your instinct here. But before I change it, do you have anything else to add?
I'm not planning on adding any more suggestions. I would suggest trying to batch all of my suggestions together into one commit via GitHub, then see if it compiles and runs. I've probably missed a few things that would also need to be constant, but that should be straightforward to find via the compiler. Worst case, you could undo the commit entirely.
> The other confusing piece to me is that all of our arrays can probably be made `const &`, since even though we're changing the data in the array, we're not changing the array object itself.

That's part of the reason why I've largely avoided declaring `EFlt3DArray` objects as `const` in most places yet.
When I originally developed `CelloArray`, a lot of its design was strongly motivated by my familiarity with NumPy. I wasn't fond of the way that traditional "container semantics" produce a deep copy when you perform an assignment (it also makes subarray creation more confusing). It was originally my intention to prevent modification of the contents of a `const CelloArray`, but that led to some strange quirks in the copy constructor and copy-assignment operator. (Some stuff with temporaries breaks. It's confusing to new users why certain things break, but I don't think it actually limits what we want to do in the main codebase.)
At around the time that I was struggling with this, I started reading about Kokkos and I realized that `CelloArray` has pointer semantics just like `Kokkos::View` (a better name for `CelloArray` is probably `CelloView`), so I decided to lean into it.
I'm working on something that might help the situation. I think I mentioned this before, but I've slowly been working on a new PR in which I introduce support for `CelloArray<const T, D>`, which prevents the contents of the array from being mutated (I needed to make sure that element access works right in this case and add code to facilitate casts between `CelloArray<T, D>` and `CelloArray<const T, D>`). I got a little side-tracked over the past 2 weeks, but I hope to get back to it soon-ish. Once I'm done, it's my intention to alias `CelloArray<const enzo_float, D>` as something like `RdOnlyEFlt3DArray` (I'm definitely open to better names) and start introducing that throughout the VL+CT integrator.
But it might be worthwhile to discuss changing the `CelloArray` semantics:

- If you and @pgrete don't think it would be confusing, it might be worth revisiting the idea of making `const CelloArray` totally immutable. I'm not convinced that it's completely impossible. It might be easier now that I have a working test suite and I'm planning to remove the bulk assignment stuff from the `CelloArray` API (which will drastically simplify the implementation).
- I'd also be willing to consider a shift to container semantics (I don't think this would break anything). But if we did that, I would want some help with improving
Just as an aside, the aforementioned support for `CelloArray<const T, D>` is being implemented as part of PR #113.
For the `EFlt3DArray` arrays that are currently used, I do think using `const EFlt3DArray&` is preferable to `EFlt3DArray`, since the array shouldn't be changed even if the data is changed, and using the compiler to enforce that is preferable.
> I'm working on something that might help the situation. I think I mentioned this before, but I've slowly been working on a new PR in which I introduce support for `CelloArray<const T, D>`, which prevents the contents of the array from being mutated (I needed to make sure that element access works right in this case and add code to facilitate casts between `CelloArray<T, D>` and `CelloArray<const T, D>`).

This would be the preferable syntax, so that it's clear the array points to `const` data.
> I got a little side-tracked over the past 2 weeks, but I hope to get back to it soon-ish. Once I'm done, it's my intention to alias `CelloArray<const enzo_float, D>` as something like `RdOnlyEFlt3DArray` (I'm definitely open to better names) and start introducing that throughout the VL+CT integrator.

I'd lean against aliasing the type. The code would be more explicit using `CelloArray<const enzo_float, D>` and `CelloArray<enzo_float, D>` everywhere, so that it's clear that it's a `CelloArray` and whether it references const or non-const data.
> * If you and @pgrete, don't think it would be confusing, it might be worth revisiting the idea of making `const CelloArray` totally immutable. I'm not convinced that it's completely impossible. It might be easier now that I have a working test suite and I'm planning to remove the bulk assignment stuff from the `CelloArray` API (which will drastically simplify the implementation).

Make it so `const CelloArray` means the data it points to can't be modified? I'm against that approach; I think it makes sense to preserve the distinction between the `CelloArray<const T, D>` and `const CelloArray<T, D>` types.
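The distinction being defended here mirrors `Kokkos::View`, and can be shown with a minimal toy view. `ViewSketch` below is a hypothetical, non-owning, 1D stand-in for `CelloArray`, purely for illustration:

```cpp
#include <cassert>
#include <cstddef>

// Minimal pointer-semantics view, a toy stand-in for CelloArray / Kokkos::View.
template <class T>
struct ViewSketch {
  T* ptr;         // non-owning for simplicity
  std::size_t n;
  // const member function: a const *handle* still yields writable elements
  // when T is non-const; with T = const double, elements are read-only.
  T& operator()(std::size_t i) const { return ptr[i]; }
};
```

With this shape, `const ViewSketch<double>` only prevents reseating the handle (elements stay writable), while `ViewSketch<const double>` makes the pointed-to data immutable, which is exactly the `const CelloArray<T, D>` vs. `CelloArray<const T, D>` split.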
eint = prim_map.get("internal_energy", stale_depth);
rho_center = coord.get_subarray(rho, full_ax, full_ax, CSlice(1, -1));
eint_center = coord.get_subarray(eint, full_ax, full_ax, CSlice(1, -1));
EFlt3DArray pressure = prim_map.get("pressure", stale_depth);
Suggested change:
- EFlt3DArray pressure = prim_map.get("pressure", stale_depth);
+ const EFlt3DArray& pressure = prim_map.get("pressure", stale_depth);
An additional commit will follow after testing the remaining changes. Co-authored-by: forrestglines <[email protected]>
I also included relevant requested changes from PR enzo-project#74. These changes were delayed to avoid creating merge conflicts.
@forrestglines I do intend to come back to this and finish implementing your suggestions in the next few days, but I wanted to briefly respond to your broader comments:
I think we're on a similar page about this. I've been holding off on refactoring the EOS until I refactor the Riemann Solver (my plans to do that are briefly described in PR #113), since that might influence some design choices. Some thoughts I had include:
You've mentioned this a number of times and I just want to make sure I'm clear about the benefits:
As I think I've said before, I would be on board with this change (and would be happy to update the VL+CT integrator and its tests), as long as this is implemented uniformly across Enzo-E. Changing the VL+CT integrator is probably the easy part. You would need to get buy-in from the other developers (who might be resistant, since it could make porting things from Enzo more difficult). A lot of small changes would also need to be implemented in:
Yes, that sounds right. The Ideal EOS might look like:
modulo any complexities with dual energy formalism.
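The code block that accompanied this comment did not survive the page extraction. A hypothetical sketch of what inline ideal-gas (gamma-law) EOS helpers might look like follows; all names and signatures are illustrative, not Enzo-E's actual interface, and the dual-energy complications mentioned above are ignored:

```cpp
#include <cassert>
#include <cmath>

namespace eos_sketch {

// p = (gamma - 1) * rho * e_int, with e_int = E - |v|^2 / 2 (pure hydro).
inline double pressure(double gamma, double rho, double etot_specific,
                       double v_squared) {
  return (gamma - 1.0) * rho * (etot_specific - 0.5 * v_squared);
}

// Adiabatic sound speed: c_s = sqrt(gamma * p / rho).
inline double sound_speed(double gamma, double rho, double p) {
  return std::sqrt(gamma * p / rho);
}

}  // namespace eos_sketch
```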
The more I think about it, the more I think SRMHD would work poorly or not at all when evolving the velocity. First, the three-velocity we use varies very slowly for relativistic flows, so a relativistic method would be imprecise when relativistic velocities appear. Second, the relationship between the primitives and the conserved variables is much more complicated than for non-relativistic flows. Going from velocity to conserved momentum requires the density, Lorentz factor, and enthalpy, i.e. essentially all the other primitives. This would make the current approach for flux corrections, where we correct the density first and then use the new density to fix the other integration quantities, incompatible with a relativistic method. However, I don't know how many people would be interested in a relativistic method in Enzo-E, and I don't have plans at the moment to implement one, so this point might be moot.
Yes. This might also arise when we add other nonstandard physics or extend the conservation laws (maybe adding new source terms): you have to consider how each change converts to the non-conserved quantities. It's manageable but not ideal.
Every time we do the division that converts the integrated quantities from conserved to non-conserved variables, we compound the error introduced by the division. For long-running simulations, you would expect that error to multiply over time, instead of only adding over time via the error introduced when converting to compute fluxes. Or at least that's the conventional wisdom. Beyond that, the conservation laws are written in terms of conserved variables, so it's more natural to work with conserved variables. Apart from, apparently, Enzo, all the other codes I've worked with evolve the conserved variables.
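A small illustration of the conversion the comment above worries about: one conserved ↔ primitive round trip for momentum (toy values, double precision). Each floating-point multiply/divide can introduce up to half an ulp of relative error, and evolving the velocity pays that cost every timestep, while evolving the momentum avoids it:

```cpp
#include <cassert>
#include <cmath>

// One momentum -> velocity -> momentum round trip.
inline double momentum_round_trip(double rho, double mom) {
  double v = mom / rho;  // conserved -> primitive (velocity)
  return rho * v;        // primitive -> conserved (momentum)
}
```

A single round trip stays within a few ulp; the worry is that over many timesteps (and with varying density) these small errors accumulate in the conserved totals.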
This is a huge change and not a priority right now. I don't think you should take it on alone.
Yes, many things would need to change. If other developers are on board, then sometime in the intermediate future would be the best time to make this change, before too many things become dependent on the current design.
@forrestglines @pgrete I just wanted to offer a brief update. I've been plugging away at PR #113 (hopefully I'll have that done in about a week). As a consequence of a design decision that I made regarding const-correctness, it will not be possible to write:

const EFlt3DArray& pressure_array_l = prim_map_l.at("pressure");

Instead, we would do something like the following (i.e. we make a copy of the array):

const CelloArray<const enzo_float, 3> pressure_array_l = prim_map_l.at("pressure");

I'll try to add a more detailed explanation later today. But I think this means that I should defer Forrest's remaining suggestions to be addressed in PR #113 (or later, if that makes his MHD Flux Correction refactor easier). It's worth noting that I could change that design decision to make this possible.
…sibly move to prior PR)
That last commit made a minor tweak to the docs (it corrected a section that I had previously missed).
(Force-pushed 86ea47b to 860d2a6.)
This looks very nice to me! My comments are almost all about typos in the docs.
This change was inspired by one of @gregbryan's comments from PR enzo-project#94
This PR introduces some quality of life improvements to the VL+CT integrator that should make the implementation easier to understand. This PR also includes all relevant updates to the website documentation. This PR builds on #74 and should nominally be reviewed after #74 is approved.
@pgrete and @forrestglines, this PR introduces the changes that I mentioned a few weeks back on Slack.
In more detail, this PR introduces the following changes:
Throughout the VL+CT integrator I've updated the variable names to now classify hydro/MHD quantities as "integration quantities" and "primitives". These effectively replace the old names "integrable" and "reconstructable".
The "integration quantities" are the quantities used throughout the rest of Enzo-E to characterize the current state of the fluid. These are the quantities that a hydro/MHD integrator is expected to update from one timestep to the next. In a hydro-only simulation (without passive scalars or the dual energy formalism) this would include density, velocity, and specific total energy.
The "primitives" follow the textbook definition and are basically just used for reconstruction.
In accordance with this change, I have renamed the `EnzoIntegrableUpdate` class; it is now called `EnzoIntegrationQuanUpdate` (I am flexible on this name change).

The integration quantities and primitives are now stored in separate instances of `EnzoEFltArrayMap`. Previously, quantities that are considered both an integration quantity and a primitive would have been stored in a single array that would have effectively been aliased. To make the control flow easier to follow, such quantities are now duplicated (as deep copies) instead.

I've tweaked the treatment of the passively advected scalars. Now the integration quantities include the passively advected scalars in "conserved form" (as densities) and the primitive quantities store the passively advected scalars in "specific form" (as mass fractions). Under the previous approach, the passively advected scalars included with the integration quantities would have been in "specific form". This change makes things slightly more self-consistent (since the conserved form of passively advected scalars is the default format throughout the rest of Enzo-E) and simplifies some code.
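The two passive-scalar representations described above differ by a factor of the gas density. A minimal sketch of the conversion (function names are hypothetical, purely for illustration):

```cpp
#include <cassert>
#include <cmath>

// "Conserved form" stores a density (rho * mass fraction);
// "specific form" stores the mass fraction itself.
inline double scalar_to_specific(double scalar_density, double rho) {
  return scalar_density / rho;  // density -> mass fraction
}

inline double scalar_to_conserved(double mass_fraction, double rho) {
  return mass_fraction * rho;   // mass fraction -> density
}
```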
I made a series of small changes to `EnzoRiemann` that were largely motivated by change 2:

- `EnzoRiemann::solve` now directly computes the fluxes from the reconstructed primitives. A more faithful adaptation of the old approach would have involved computing the reconstructed integration quantities (from the reconstructed primitives) and then passing the reconstructed integration quantities to the Riemann solver. That approach was much less expensive prior to change 2 (and 3), because of the high degree of duplication between the integration quantities and primitives.
- I've also introduced the `EnzoRiemann::integration_quantities()` and `EnzoRiemann::primitive_quantities()` instance methods so that a Riemann solver can specify the expected primitives and integration quantities for `EnzoRiemann::solve`. These methods are now used by `EnzoMethodMHDVlct` to determine the list of all (non-passive) quantities (and fields) involved in its execution. The alternative (under the old approach) would have been to pass the expected lists of primitives and integration quantities to the Riemann solver during construction and check that they were compatible with the Riemann solver algorithm. This is actually a change I've been anticipating making for a while now.
- I grouped together some helper functions for the Riemann solver in `enzo_EnzoRiemannUtils.hpp` as static methods of an `EOSStructIdeal` struct that explicitly assume an ideal equation of state. I'm still not sure what the best long-term solution is for supporting multiple equations of state, but this seemed like a start. However, if you would prefer, I would be happy to move these functions back into the `enzo_riemann_utils` namespace until we have a better plan.

To help illustrate how the primitive and integration quantities are used, I have included two flow charts that illustrate the new data flow for pure hydro and MHD simulations (they are hidden behind the summary tags). These flow charts were inspired by some figures that I saw for Athena++.
Click here for hydro flow chart
Click here for MHD (CT) flow chart
In these flow charts, `EnzoMethodMHDVlct` doesn't directly interact with some of the arrays shown; instead, all operations involving them occur through `EnzoBfieldMethodCT`.
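The interface change described in the second bullet above, where each Riemann solver advertises the quantities it expects instead of having the integrator build those lists independently, might be sketched like this. Class names and quantity lists below are illustrative, not Enzo-E's actual declarations:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of a Riemann-solver base class that declares its expected inputs,
// in the spirit of EnzoRiemann::integration_quantities() /
// EnzoRiemann::primitive_quantities().
class RiemannSketch {
 public:
  virtual ~RiemannSketch() = default;
  virtual std::vector<std::string> integration_quantities() const = 0;
  virtual std::vector<std::string> primitive_quantities() const = 0;
};

class HLLCSketch : public RiemannSketch {
 public:
  std::vector<std::string> integration_quantities() const override {
    return {"density", "velocity", "total_energy"};
  }
  std::vector<std::string> primitive_quantities() const override {
    return {"density", "velocity", "pressure"};
  }
};
```

The integrator can then query a concrete solver for both lists and allocate exactly the fields it needs, rather than maintaining a parallel list that must be kept consistent by hand.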
Finally, I wanted to note that `EnzoRiemannImpl` could probably stand to be refactored. However, that seems somewhat beyond the scope of this PR. Moreover, it would probably be a good idea to make such design choices with actual performance benchmarks. I plan to introduce some changes to `EnzoEFltArrayMap` relatively soon that may impact this (if you're interested, I'm happy to expand on this).