VL+CT QoL Improvements #94

Merged (15 commits) Nov 30, 2021

Conversation

@mabruzzo (Contributor) commented May 9, 2021:

This PR introduces some quality of life improvements to the VL+CT integrator that should make the implementation easier to understand. It also includes all relevant updates to the website documentation. This PR builds on #74 and should nominally be reviewed after #74 is approved.

@pgrete and @forrestglines, this PR introduces the changes that I mentioned a few weeks back on Slack.

In more detail, this PR introduces the following changes:

  1. Throughout the VL+CT integrator I've updated the variable names to classify hydro/MHD quantities as "integration quantities" and "primitives". These effectively replace the old names "integrable" and "reconstructable".

    • The "integration quantities" are the quantities used throughout the rest of the Enzo-E to characterize the current state of the fluid. These are the quantities that a Hydro/MHD integrator is expected to update from one timestep to the next. In a hydro-only simulation (without passive scalars or the dual energy formalsim) this would include density, velocity, and specific total energy.

    • The "primitives" follow the textbook definition and are basically just used for reconstruction

    • In accordance with this change, I have renamed the EnzoIntegrableUpdate class; it is now called EnzoIntegrationQuanUpdate (I am flexible on this name change)

  2. The integration quantities and primitives are now stored in separate instances of EnzoEFltArrayMap. Previously, a quantity that is considered both an integration quantity and a primitive would have been stored in a single array that was effectively aliased between the two sets. To make the control flow easier to follow, such quantities are now duplicated (as deep copies) instead.

  3. I've tweaked the treatment of the passively advected scalars. Now the integration quantities include the passively advected scalars in "conserved form" (as densities) and the primitive quantities store the passively advected scalars in "specific form" (as mass fractions). Under the previous approach, the passively advected scalars included with the integration quantities would have been in "specific form". This change makes things slightly more self-consistent (since the conserved form of passively advected scalars is the default format throughout the rest of Enzo-E) and simplifies some code.

  4. I made a series of small changes to EnzoRiemann that were largely motivated by change 2:

    • EnzoRiemann::solve now directly computes the fluxes from the reconstructed primitives. A more faithful adaptation of the old approach would have involved computing the reconstructed integration quantities (from the reconstructed primitives) and then passing those to the Riemann Solver. That approach was much less expensive prior to changes 2 and 3, because of the high degree of duplication between the integration quantities and primitives.

    • I've also introduced the EnzoRiemann::integration_quantities() and EnzoRiemann::primitive_quantities() instance methods so that a Riemann Solver can specify the primitives and integration quantities expected by EnzoRiemann::solve. These methods are now used by EnzoMethodMHDVlct to determine the list of all (non-passive) quantities (and fields) involved in its execution. The alternative (under the old approach) would have been to pass the expected lists of primitives and integration quantities to the Riemann Solver during construction and check that they were compatible with the Riemann Solver's algorithm. This is actually a change I've been anticipating making for a while now.

    • I grouped together some helper functions for the Riemann Solver in enzo_EnzoRiemannUtils.hpp as static methods of an EOSStructIdeal struct that explicitly assumes an ideal equation of state. I'm still not sure what the best long-term solution is for supporting multiple equations of state, but this seemed like a start. However, if you would prefer, I would be happy to move these functions back into the enzo_riemann_utils namespace until we have a better plan.
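To make the terminology concrete, here is a rough summary of the two quantity sets for an ideal-gas MHD setup (the exact field names in Enzo-E may differ; γ is the adiabatic index):

```math
\text{integration: } (\rho,\; v_x, v_y, v_z,\; e_{\rm tot},\; B_x, B_y, B_z), \qquad \text{primitive: } (\rho,\; v_x, v_y, v_z,\; p,\; B_x, B_y, B_z)
```

with the two sets related (outside of the dual energy formalism) by

```math
e_{\rm tot} = \frac{p}{(\gamma - 1)\,\rho} + \frac{1}{2}\lvert \mathbf{v} \rvert^2 + \frac{\lvert \mathbf{B} \rvert^2}{2\rho}.
```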

To help illustrate how the primitive and integration quantities are used, I have included two flow charts that illustrate the new data-flow for pure hydro and MHD simulations (they are hidden behind the summary tags). These flow charts were inspired by some figures that I saw for Athena++.

[Hydro flow chart: hydro_flow_chart]

[MHD (CT) flow chart: MHD_flow_chart]

In these flow charts:

  • I represents maps of integration quantities. For an ideal gas, this includes the density, velocity components, specific total energy, and any passive scalars in conserved form. In the MHD case, this additionally includes the magnetic field components. If using the dual energy formalism, it also includes the specific internal energy.
  • P represents maps of primitives. For an ideal gas, this includes the density, velocity components, pressure, and any passive scalars in specific form. Again, in the MHD case, this also includes the magnetic field components. The contents are NOT affected when using the dual energy formalism.
  • F holds fluxes for every integration quantity.
  • Extra holds any extra face-centered quantities computed by the Riemann Solver. This mainly just includes the interface velocity used for computing the internal energy source terms (when using the dual energy formalism).
  • dU is where the changes to the conserved form of every integration quantity are accumulated over the current (partial) timestep (the corresponding update is sketched after this list). Note that when using Constrained Transport, the magnetic fields are omitted from here. When using the dual energy formalism, we treat the internal energy density as the conserved form of the specific internal energy (even though it's not technically conserved).
  • B_i holds the face-centered magnetic fields used for CT. EnzoMethodMHDVlct doesn't directly interact with these. Instead, all operations involving them occur through EnzoBfieldMethodCT.
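For reference, the accumulation into dU follows the usual finite-volume form. Schematically, for a single integration quantity with conserved form U and fluxes F in the x-direction (the implementation sums analogous contributions from every dimension, plus any source terms):

```math
\Delta U_i \;\mathrel{+}=\; -\frac{\Delta t}{\Delta x}\left(F_{i+1/2} - F_{i-1/2}\right)
```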

Finally, I wanted to note that EnzoRiemannImpl could probably stand to be refactored. However, that seems somewhat beyond the scope of this PR. Moreover, it would probably be a good idea to make those design choices with actual performance benchmarks in hand. I plan to introduce some changes to EnzoEFltArrayMap relatively soon that may impact this (if you're interested, I'm happy to expand on this).

…the primitives.

The dual energy cloud test currently fails for the hllc solver. The expected answer needs to be updated.
…nstead of the integration quantities) and tweaked the docstring to convert integrable->integration
… the names reconstructable->primitive and integrable->integration.

Other changes include:
- I stopped storing quantities that are both a primitive and an integration quantity as aliases.
- I dropped all mentions of the left and right reconstructed pressure from EnzoMethodMHDVlct (including mentions in EnzoMethodMHDVlct::setup_arrays_ and EnzoMethodMHDVlct::compute)
- I additionally deleted a note from the EnzoMethodMHDVlct::compute_flux_ docstring where I had suggested breaking the compute_flux_ into 2 parts. I don't really think that would be a good idea (especially now that the instance method is fairly short)
- Starting to change the names used by EnzoMethodMHDVlct::compute and EnzoMethodMHDVlct::timestep
…ssive scalars in conserved form.

- This also includes some changes to EnzoIntegrableUpdate and EnzoEquationOfState (plus its subclasses)
- Finished changing integrable->integration and reconstructable->primitive throughout VL+CT integrator and EnzoEquationOfState (plus subclasses)
… integrable->integration throughout the class definition
…he Riemann Solver) to reflect the fact that we have shifted from using the names reconstructable and integrable to primitive and integration.

These changes also reflect the fact that we have
- stopped using aliased arrays to hold the quantities that are both primitives and integration quantities
- switched the passive scalars held with the integration quantities to be in conserved form
…mpute the fluxes (and the new "integration quantity" terminology)

Other changes include:
- Adjusted the max asymmetry residual in the Dual Energy Cloud Test for the HLLC Riemann Solver.
- I transitioned from registering the integration quantities with the Riemann Solver (in the Riemann Solver factory method) to having the Riemann Solver directly specify its expected primitives and integration quantities. This seems to make much more sense than independently coming up with lists of integration and primitive quantities in EnzoMethodMHDVlct and making sure that they are consistent with the Riemann Solver's requirements. This change includes tweaks to EnzoRiemann::construct_riemann and the introduction of the EnzoRiemann::integration_quantities() and EnzoRiemann::primitive_quantities() interface methods.
- This change made the EnzoMethodMHDVlct::determine_quantities_ helper method unnecessary, so I deleted it.
- I also deleted the now unnecessary method from EnzoEquationOfState, eint_from_primitive.

Finally, I updated the website documentation for EnzoRiemann to reflect these changes
@mabruzzo (Contributor, Author) commented:

To fix the diff in this PR (since it builds on changes from PR #74), I'm going to follow the instructions from here - I've seen this work before. Basically I'm going to:

  1. Add a new empty commit to this branch
  2. Push this branch
  3. Drop the empty commit
  4. Force-push the local branch

This should not cause any issues for @forrestglines's branch since I'm not actually going to be modifying the commit history.

@mabruzzo (Contributor, Author) commented May 20, 2021:

I'm not currently sure why this test is failing (a couple of days ago, when I ran it locally, it passed). I'm going to try to look into it this evening.

I suspect that the tolerance might have to be tweaked (to account for machine-dependent floating point differences)

@forrestglines (Contributor) commented:

This should not cause any issues for @forrestglines's branch since I'm not actually going to be modifying the commit history

Don't worry about my branch, I can always rebase to this branch since I expect any changes to be fairly self-contained.

@forrestglines (Contributor) left a review:

I reviewed this last week but forgot to finish the review. I made a lot of suggestions about what things could be const but these were mostly to help me examine the code more closely.

I think these improvements are great; the code organization and flow are improving.

The EOS design is heavier than I would prefer. In another code I had success defining inline functions for computing pressure/sound speeds and another function for inverting the conserved variables. The Riemann solvers were then templated on the EOS. We don't have to implement that now but we could explore that later.

I'm still concerned about evolving the velocity instead of momentum. The more I look at it, the more I think it would be more robust long term to evolve only conserved variables. However, that change is way beyond the scope of this PR.

src/Enzo/enzo_EnzoBfieldMethodCT.cpp (review comment, resolved)
Comment on lines +263 to +266
EFlt3DArray velocity_j = integration_map.at(v_names[j]);
EFlt3DArray velocity_k = integration_map.at(v_names[k]);
EFlt3DArray bfield_j = integration_map.at(b_names[j]);
EFlt3DArray bfield_k = integration_map.at(b_names[k]);
@forrestglines (Contributor) suggested:

Suggested change
EFlt3DArray velocity_j = integration_map.at(v_names[j]);
EFlt3DArray velocity_k = integration_map.at(v_names[k]);
EFlt3DArray bfield_j = integration_map.at(b_names[j]);
EFlt3DArray bfield_k = integration_map.at(b_names[k]);
const EFlt3DArray &velocity_j = integration_map.at(v_names[j]);
const EFlt3DArray &velocity_k = integration_map.at(v_names[k]);
const EFlt3DArray &bfield_j = integration_map.at(b_names[j]);
const EFlt3DArray &bfield_k = integration_map.at(b_names[k]);

@mabruzzo (Contributor, Author) replied:

I've thought about this exact sort of change a lot, and I'm not sure whether this is an improvement or not.

The thing is that EFlt3DArray can be thought of as a shared pointer. By making a copy of it (what we have now), we pay the overhead of incrementing/decrementing the reference count (although the current case probably uses move semantics). By storing a reference to it, we avoid the increments/decrements, but now we effectively have a reference to a pointer (which is similar to a pointer to a pointer). I'm not sure which is better.

An alternative idea is just to mark velocity_j, velocity_k, bfield_j, and bfield_k as const.

The same sort of semantics apply to Kokkos::Views, so I'm willing to go with your instinct here. But before I change it, do you have anything else to add?
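For concreteness, the three bindings under discussion behave roughly like this. (A self-contained sketch: Array below is a stand-in with shared-pointer semantics, not the actual EFlt3DArray/CelloArray API.)

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for EFlt3DArray: a lightweight handle with shared-pointer semantics.
struct Array { std::shared_ptr<double[]> data; };

int main() {
  std::map<std::string, Array> integration_map;
  integration_map["velocity_x"] = Array{std::shared_ptr<double[]>(new double[8])};

  // Option 1 (current code): copy the handle; may bump the reference count
  // (or be moved from, if at() returned a temporary).
  Array v1 = integration_map.at("velocity_x");

  // Option 2 (the alternative mentioned above): a const copy; same cost,
  // but the handle itself can no longer be rebound.
  const Array v2 = integration_map.at("velocity_x");

  // Option 3 (the suggestion): a const reference; no ref-count traffic,
  // but it is effectively a reference to a handle (akin to a pointer to a pointer).
  const Array& v3 = integration_map.at("velocity_x");

  (void)v1; (void)v2; (void)v3;
  return 0;
}
```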

@forrestglines (Contributor) replied:

The thing is that EFlt3DArray can be thought of as a shared pointer. By making a copy of it (what we have now), we pay the overhead of incrementing/decrementing the reference count (although the current case probably uses move semantics). By storing a reference to it, we avoid the increments/decrements, but now we effectively have a reference to a pointer (which is similar to a pointer to a pointer). I'm not sure which is better

We've been using essentially const Kokkos::View& recently; I appreciate that we keep the same object. Sometimes it can make debugging easier since the address of the container doesn't change between contexts.

The other confusing piece to me is that all of our arrays can probably be made const &, since even though we're changing the data in the array, we're not changing the array itself.

The same sort of semantics apply to Kokkos::Views, so I'm willing to go with your instinct here. But before I change it, do you have anything else to add?

I'm not planning on adding any more suggestions. I would suggest trying to batch all of my suggestions together into one commit via GitHub, then see if it compiles and runs. I've probably missed a few things that would also need to be constant, but those should be straightforward to find via the compiler. Worst case, you could undo the commit entirely.

@mabruzzo (Contributor, Author) replied:

The other confusing piece to me is that all of our arrays can probably be made const &, since even though we're changing the data in the array, we're not changing the array itself

That's part of the reason why I've largely avoided marking EFlt3DArray as const in most places so far.

When I originally developed CelloArray, a lot of its design was strongly motivated by my familiarity with NumPy. I wasn't fond of the way that traditional "container semantics" produce a deep copy when you perform an assignment (it also makes subarray creation more confusing). It was originally my intention to prevent modification of the contents of a const CelloArray, but that led to some strange quirks in the copy constructor and copy-assignment operation. (Some stuff with temporaries breaks. It's confusing to new users why certain things break, but I don't think it actually limits what we want to do in the main codebase.)

At around the time that I was struggling with this, I started reading about Kokkos and I realized that CelloArray has pointer-semantics just like Kokkos::View (a better name for CelloArray is probably CelloView), so I decided to lean into it.

I'm working on something that might help the situation. I think I mentioned this before, but I've slowly been working on a new PR in which I introduce support for CelloArray<const T, D>, which prevents the contents of the array from being mutated (I needed to make sure that element access works correctly in this case and add code to facilitate casts from CelloArray<T, D> to CelloArray<const T, D>). I got a little side-tracked over the past 2 weeks, but I hope to get back to it soon-ish. Once I'm done, it's my intention to alias CelloArray<const enzo_float, D> as something like RdOnlyEFlt3DArray (I'm definitely open to better names) and start introducing that throughout the VL+CT integrator.

But it might be worthwhile to discuss changing the CelloArray semantics:

  • If you and @pgrete don't think it would be confusing, it might be worth revisiting the idea of making const CelloArray totally immutable. I'm not convinced that it's completely impossible. It might be easier now that I have a working test suite and I'm planning to remove the bulk assignment stuff from the CelloArray API (which will drastically simplify the implementation).
  • I'd also be willing to consider a shift to container semantics (I don't think this would break anything). But if we did that, I would want some help with improving
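(As an aside for readers, the pointer-vs-container distinction being discussed amounts to the following; this sketch uses standard-library types rather than the actual CelloArray API.)

```cpp
#include <cassert>
#include <memory>
#include <vector>

int main() {
  // Container semantics (e.g. std::vector): assignment performs a deep copy,
  // so modifying the copy leaves the original untouched.
  std::vector<double> a(4, 1.0);
  std::vector<double> b = a;
  b[0] = 2.0;
  assert(a[0] == 1.0);

  // Pointer semantics (the CelloArray / Kokkos::View / NumPy model): assignment
  // copies a handle, so both handles refer to the same underlying data.
  auto p = std::make_shared<std::vector<double>>(4, 1.0);
  auto q = p;
  (*q)[0] = 2.0;
  assert((*p)[0] == 2.0);

  return 0;
}
```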

@mabruzzo (Contributor, Author) added:

Just as an aside, the aforementioned support for CelloArray<const T, D> is being implemented as part of PR #113

@forrestglines (Contributor) replied:

For the EFlt3DArray arrays that are currently used, I do think using const EFlt3DArray& is preferable to EFlt3DArray since the array shouldn't be changed even if the data is changed, and using the compiler to enforce that is preferable.

I'm working on something that might help the situation. I think I mentioned this before, but I've slowly been working on a new PR in which I introduce support for CelloArray<const T, D>, which prevents the contents of the array from being mutated (I needed to make sure that element access works correctly in this case and add code to facilitate casts from CelloArray<T, D> to CelloArray<const T, D>).

This would be the preferable syntax so that it's clear the array points to const data.

I got a little side-tracked over the past 2 weeks, but I hope to get back to it soon-ish. Once I'm done, it's my intention to alias CelloArray<const enzo_float, D> as something like RdOnlyEFlt3DArray (I'm definitely open to better names) and start introducing that throughout the VL+CT integrator.

I'd lean against aliasing the type. The code would be more explicit using CelloArray<const enzo_float, D> and CelloArray<enzo_float, D> everywhere so that it's clear that it's a CelloArray and references const or non-const data.

* If you and @pgrete don't think it would be confusing, it might be worth revisiting the idea of making `const CelloArray` totally immutable. I'm not convinced that it's completely impossible. It might be easier now that I have a working test suite and I'm planning to remove the bulk assignment stuff from the `CelloArray` API (which will drastically simplify the implementation).

Make it so const CelloArray means the data it points to can't be modified? I'm against that approach; I think it makes sense to preserve the distinction between the CelloArray<const T, D> and const CelloArray<T, D> types.
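(For readers, the distinction being preserved is analogous to pointer-to-const versus const-pointer; the raw-pointer sketch below is an analogy, not CelloArray code.)

```cpp
int main() {
  double data[2] = {1.0, 2.0};

  // Analogue of CelloArray<const T, D>: the handle can be rebound,
  // but the data cannot be modified through it.
  const double* view_of_const = data;
  view_of_const = data + 1;   // OK: rebinding the handle
  // *view_of_const = 3.0;    // error: the pointed-to data is const

  // Analogue of const CelloArray<T, D>: the handle cannot be rebound,
  // but (with pointer semantics) the underlying data is still mutable.
  double* const const_view = data;
  *const_view = 3.0;          // OK: mutating the data
  // const_view = data + 1;   // error: the handle itself is const

  return 0;
}
```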

Other resolved review comments:
  • src/Enzo/enzo_EnzoCenteredFieldRegistry.cpp
  • src/Enzo/enzo_EnzoCenteredFieldRegistry.hpp
  • src/Enzo/enzo_EnzoEOSIdeal.cpp
  • src/Enzo/enzo_EnzoRiemannUtils.hpp (4 comments)
eint = prim_map.get("internal_energy", stale_depth);
rho_center = coord.get_subarray(rho, full_ax, full_ax, CSlice(1, -1));
eint_center = coord.get_subarray(eint, full_ax, full_ax, CSlice(1, -1));
EFlt3DArray pressure = prim_map.get("pressure", stale_depth);
@forrestglines (Contributor) suggested:

Suggested change
EFlt3DArray pressure = prim_map.get("pressure", stale_depth);
const EFlt3DArray& pressure = prim_map.get("pressure", stale_depth);

mabruzzo and others added 3 commits May 28, 2021 13:30
An additional commit will follow after testing the remaining changes

Co-authored-by: forrestglines <[email protected]>
I also included relevant requested changes from PR enzo-project#74. These changes were delayed to avoid creating merge conflicts.
@mabruzzo (Contributor, Author) commented Jun 9, 2021:

@forrestglines I do intend to come back to this and finish implementing your suggestions in the next few days, but I wanted to briefly respond to your broader comments:

The EOS design is heavier than I would prefer. In another code I had success defining inline functions for computing pressure/sound speeds and another function for inverting the conserved variables. The Riemann solvers were then templated on the EOS. We don't have to implement that now but we could explore that later.

I think we're on a similar page about this. I've been holding off on refactoring the EOS until I refactor the Riemann Solver (my plans to do that are briefly described in PR #113), since that might influence some design choices. Some thoughts I had include:

  • I was basically thinking of defining the functions you mentioned, and packaging them together as static inline methods of a struct so that I could easily pass them as a single template argument to the Riemann Solver (This is sort of where I was headed when I introduced the EOSStructIdeal in enzo_EnzoRiemannUtils.hpp).
  • I don't have a full picture in mind for what this would look like, but I was tentatively imagining that these functions would operate on a single-element basis. To apply these operations at the block level (if that's necessary), I was thinking that we could wrap these functions in something like EnzoEquationOfState/EnzoEOSIdeal (the names could obviously change), and possibly make those classes into Physics objects (if we wanted to unify the EOS operations across different methods). A rough sketch of the templating idea follows this list.
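To illustrate what I mean by passing the EOS as a single template argument, something along these lines (entirely hypothetical names and signatures, not the current Enzo-E API):

```cpp
#include <cmath>

// Hypothetical EOS "policy": static inline methods only, so it can be handed
// to a Riemann solver as a template parameter and inlined into the inner loop.
struct EOSStructIdealSketch {
  static inline double sound_speed(double gamma, double rho, double pres)
  { return std::sqrt(gamma * pres / rho); }
  static inline double internal_energy(double gamma, double rho, double pres)
  { return pres / (rho * (gamma - 1.0)); }
};

// A Riemann solver templated on the EOS policy; no virtual dispatch is needed.
template <class EOS>
struct HypotheticalSolver {
  double max_signal_speed(double gamma, double rho, double pres, double vx) const
  { return std::fabs(vx) + EOS::sound_speed(gamma, rho, pres); }
};

int main() {
  HypotheticalSolver<EOSStructIdealSketch> solver;
  return (solver.max_signal_speed(5.0 / 3.0, 1.0, 1.0, 0.5) > 0.0) ? 0 : 1;
}
```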

I'm still concerned about evolving the velocity instead of momentum. The more I look at it, the more I think it would be more robust long term to evolve only conserved variables. However, that change is way beyond the scope of this PR.

You've mentioned this a number of times and I just want to make sure I'm clear about the benefits:

  • This would make implementation of relativistic (M)HD slightly easier
  • We don't have to convert non-conserved integration quantities (e.g. velocity) to and from conserved form (e.g. momentum density) when using the fluxes to update the integration quantities. This would simplify the implementation of these operations and slightly improve the conservation precision (due to slightly fewer floating point operations; the extra conversions are sketched after this list).
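To spell out the extra conversions, the velocity-based update conceptually does the following in each cell (a 1D schematic with momentum flux F_m; the actual update also includes the other dimensions and source terms):

```math
m^{n} = \rho^{n} v^{n}, \qquad m^{n+1} = m^{n} - \frac{\Delta t}{\Delta x}\left(F_{m,\,i+1/2} - F_{m,\,i-1/2}\right), \qquad v^{n+1} = \frac{m^{n+1}}{\rho^{n+1}},
```

whereas evolving the momentum density directly would skip the multiplication and division at either end.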

As I think I've said before, I would be on board with this change (and would be happy to update the VL+CT integrator and its tests), as long as this is implemented uniformly across Enzo-E.

Changing the VL+CT integrator is probably the easy part. You would need to get buy-in from the other developers (who might be resistant since it could make porting things from Enzo more difficult). A lot of small changes would also need to be implemented in:

  • Other Method subclasses (e.g. EnzoMethodPpm, EnzoMethodPpml, EnzoMethodGrackle, and probably the feedback methods)
  • The Compute, Initial, and (possibly) the Prolong subclasses
  • The parameter files for regression tests involving hydro integrators that are initialized with the "value" initializer
  • The yt frontend for Enzo-E

@forrestglines (Contributor) commented:

I think we're on a similar page about this. I've been holding off on refactoring the EOS until I refactor the Riemann Solver (my plans to do that are briefly described in PR #113), since that might influence some design choices. Some thoughts I had include:

* I was basically thinking of defining the functions you mentioned, and packaging them together as static inline methods of a struct so that I could easily pass them as a single template argument to the Riemann Solver (This is sort of where I was headed when I introduced the `EOSStructIdeal` in `enzo_EnzoRiemannUtils.hpp`).

* I don't have a full picture in mind for what this would look like, but I was tentatively imagining that these functions would operate on a single-element basis. To apply these operations on the block level (if it's necessary), I was thinking that we could wrap these functions in something like `EnzoEquationOfState`/`EnzoEOSIdeal` (the names could obviously change of course), and possibly make those classes into `Physics` objects (if we wanted to unify the EOS operations across different methods).

Yes, that sounds right. The Ideal EOS might look like:

```cpp
#include <cmath>

class IdealEOS {
    const double adiabatic_index_;
  public:
    IdealEOS(const double adiabatic_index)
      : adiabatic_index_(adiabatic_index) {}

    // specific internal energy from density and pressure
    inline double internal_e(const double rho, const double pres) const {
      return pres / (rho * (adiabatic_index_ - 1.));
    }
    // adiabatic sound speed
    inline double cs(const double rho, const double pres) const {
      return std::sqrt(adiabatic_index_ * pres / rho);
    }

    // conserved (density, momentum, total energy density) -> primitive
    inline void cons_to_prim(
        const double dens, const double mx, const double my, const double mz, const double eng,
        double& rho, double& ux, double& uy, double& uz, double& pres) const {
      rho = dens;
      ux = mx / dens;
      uy = my / dens;
      uz = mz / dens;
      pres = (eng / rho - 0.5 * (ux*ux + uy*uy + uz*uz)) * rho * (adiabatic_index_ - 1.);
    }

    // primitive -> conserved
    inline void prim_to_cons(
        const double rho, const double ux, const double uy, const double uz, const double pres,
        double& dens, double& mx, double& my, double& mz, double& eng) const {
      dens = rho;
      mx = rho * ux;
      my = rho * uy;
      mz = rho * uz;
      eng = rho * (0.5 * (ux*ux + uy*uy + uz*uz) + internal_e(rho, pres));
    }
};
```

modulo any complexities with dual energy formalism.
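For what it's worth, a quick round-trip check of a sketch like this might look as follows (illustrative only; it assumes the IdealEOS class above is visible in the same translation unit):

```cpp
#include <cassert>
#include <cmath>

int main() {
  const IdealEOS eos(5.0 / 3.0);

  // start from primitives, convert to conserved form and back
  double dens, mx, my, mz, eng;
  eos.prim_to_cons(1.0, 0.1, 0.2, 0.3, 0.6, dens, mx, my, mz, eng);

  double rho, ux, uy, uz, pres;
  eos.cons_to_prim(dens, mx, my, mz, eng, rho, ux, uy, uz, pres);

  // the round trip should reproduce the original primitives to machine precision
  assert(std::fabs(rho - 1.0) < 1e-12);
  assert(std::fabs(pres - 0.6) < 1e-12);
  return 0;
}
```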

You've mentioned this a number of times and I just want to make sure I'm clear about the benefits:

* This would make implementation of relativistic (M)HD slightly easier

The more I think about it, the more I think SRMHD would work poorly or not at all when evolving the velocity. First, the three-velocity varies very slowly in relativistic flows, so a method that evolves it would be imprecise once relativistic velocities appear.

Second, the relationship between the primitives and the conserved variables is much more complicated for relativistic flows. Going from velocity to conserved momentum requires the density, Lorentz factor, and enthalpy, so essentially all the other primitives (see the expression below). This would make the current approach for flux corrections, where we correct density first and then use the new density to fix the other integrated quantities, incompatible with a relativistic method.

However, I don't know how many people would be interested in a relativistic method in Enzo-E, and I don't have plans at the moment to implement one, so this point might be moot.
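For reference, in special-relativistic hydrodynamics (with c = 1) the conserved momentum density is

```math
S^i = \rho\, h\, W^2\, v^i, \qquad W = \frac{1}{\sqrt{1 - v^2}},
```

where h is the specific enthalpy, so recovering v^i from S^i already involves the density and pressure (through h) and, in practice, an iterative inversion; the MHD case adds further electromagnetic terms.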

* We don't have to convert non-conserved integration quantities (e.g. velocity) to-and-from conserved-form (e.g. momentum density) when using the fluxes to update the integration quantities. 

Yes. This might also arise when we add other nonstandard physics or extend the conservation laws (maybe adding new source terms): we would have to consider converting to the non-conserved quantities. It's manageable but not ideal.

This would simplify the implementation of these operations and slightly improve the conservation precision (due to slightly fewer floating point operations).

Every time we do the division that moves the integrated quantities from conserved to non-conserved variables, we multiply in the error introduced by the division. For long-running simulations, you would expect that error to compound multiplicatively over time, rather than accumulating additively like the error introduced when converting to compute fluxes. Or at least that's the conventional wisdom.

Otherwise, the conservation laws are written in terms of conserved variables -- it's more natural to work with conserved variables. Except for Enzo, apparently, all the other codes I've worked with evolve the conserved variables.

As I think I've said before, I would be on board with this change (and would be happy to update the VL+CT integrator and its tests), as long as this is implemented uniformly across Enzo-E.

This is a huge change and not a priority right now. I don't think you should take it on alone.

Changing the VL+CT integrator is probably the easy part. You would need to get buy-in from the other developers (who might be resistant since it could make porting things from Enzo more difficult). A lot of small changes would also need to be implemented in:

* Other `Method` subclasses (e.g. `EnzoMethodPpm`, `EnzoMethodPpml`, `EnzoMethodGrackle`, and probably the feedback methods)

* The `Compute`, `Initial`, and (possibly) the `Prolong` subclasses

* The parameter files for regression tests involving hydro integrators that are initialized with the "value" initializer

* The yt frontend for Enzo-E

Yes, many things would need to change. If other developers are on board, then sometime in the intermediate future would be the best time to make this change before too many things become dependent on the current design.

@mabruzzo (Contributor, Author) commented Oct 25, 2021:

@forrestglines @pgrete I just wanted to offer a brief update. I've been plugging away at PR #113 (hopefully I'll have that done in about a week).

As a consequence of a design decision that I made regarding the const-correctness of EnzoEFltArray in that branch, it's become apparent that there might be a minor issue with assigning arrays loaded from EnzoEFltArrayMap to variables that hold const references to EFlt3DArray. This means that there might be a problem with doing something like the following:

const EFlt3DArray& pressure_array_l = prim_map_l.at("pressure");

Instead, we would do something like the following (i.e. we make a copy of the CelloArray type).

const CelloArray<const enzo_float, 3> pressure_array_l = prim_map_l.at("pressure");

I'll try to add a more detailed explanation later today. But I think this means that I should defer Forrest's remaining suggestions to be addressed in PR #113 (or later, if that makes his MHD flux correction refactor easier).

It's worth noting that I could change that design decision to make this possible.

@mabruzzo (Contributor, Author) commented Nov 13, 2021:

That last commit made a minor tweak to the docs (it corrected a section that I had previously missed).

I can guarantee that the failing test is broken. Please ignore it! (I've started to follow up on this in Issue #129.) EDIT: That failing test has now been resolved. Note that the force-push was necessary to relaunch the failing test (I pushed an empty commit and then removed it). It didn't actually affect the revision history, so it should not cause problems for any other branches.

@gregbryan (Contributor) left a review:

This looks very nice to me! My comments are mostly about typos in the docs.

doc/source/devel/hydro_infrastructure.rst (9 review comments, resolved)
mabruzzo added a commit to mabruzzo/enzo-e that referenced this pull request Nov 30, 2021
This change was inspired by one of @gregbryan's comments from PR enzo-project#94
@jwise77 jwise77 merged commit d0755a7 into enzo-project:main Nov 30, 2021
@mabruzzo mabruzzo deleted the vlct-varnames branch November 30, 2021 22:09
mabruzzo added a commit to mabruzzo/enzo-e that referenced this pull request Dec 9, 2021
This change was inspired by one of @gregbryan's comments from PR enzo-project#94