diff --git a/doc/source/analyzing/fields.rst b/doc/source/analyzing/fields.rst
index 50529db95b..32b452bbf6 100644
--- a/doc/source/analyzing/fields.rst
+++ b/doc/source/analyzing/fields.rst
@@ -439,9 +439,9 @@ which results in:
     p_B = \frac{B^2}{2} \\
     v_A = \frac{B}{\sqrt{\rho}}

-Using this convention is currently only available for :ref:`Athena`
-and :ref:`Athena++` datasets, though it will likely be available
-for more datasets in the future.
+Using this convention is currently only available for :ref:`Athena`,
+:ref:`Athena++`, and :ref:`AthenaPK` datasets,
+though it will likely be available for more datasets in the future.

 yt automatically detects on a per-frontend basis what units the magnetic
 field should be in, and allows conversion between different magnetic field
 units in the different unit systems as well. To determine how to set up
 special magnetic field handling when designing a new frontend, check out
diff --git a/doc/source/examining/loading_data.rst b/doc/source/examining/loading_data.rst
index 0415712367..55b209efa1 100644
--- a/doc/source/examining/loading_data.rst
+++ b/doc/source/examining/loading_data.rst
@@ -553,6 +553,123 @@ using a ``parameters`` dict, accepting the following keys:
   release.
 * Domains may be visualized assuming periodicity.

+.. _loading-parthenon-data:
+
+Parthenon Data
+--------------
+
+Parthenon HDF5 data is supported and cared for by Forrest Glines and Philipp Grete.
+The Parthenon framework is the basis for various downstream codes, e.g.,
+`AthenaPK <https://github.com/parthenon-hpc-lab/athenapk>`_,
+`Phoebus <https://github.com/lanl/phoebus>`_,
+`KHARMA <https://github.com/AFD-Illinois/kharma>`_,
+RIOT, and the
+`parthenon-hydro <https://github.com/pgrete/parthenon-hydro>`_ miniapp.
+Support for these codes is handled through the common Parthenon frontend,
+with code-specific details described below.
+Note that currently only AthenaPK data is automatically converted to the
+standard fields known by yt.
+For other codes, the raw field data stored in the output file is accessible,
+but the conversion to yt standard fields needs to be done manually.
+
+.. rubric:: Caveats
+
+* Reading particle data from Parthenon output is currently not supported.
+* Spherical and cylindrical coordinates only work for AthenaPK data.
+* Only periodic boundary conditions are properly handled. Calculating
+  quantities requiring larger stencils (like derivatives) will be incorrect
+  at mesh boundaries that are not periodic.
+
+AthenaPK
+^^^^^^^^
+
+Fluid data on uniform-grid, SMR, and AMR datasets in Cartesian coordinates are fully supported.
+
+AthenaPK data may contain information on units in the output (when specified
+via the ``<units>`` block in the input file when the simulation was originally run).
+If that information is present, it will be used by yt.
+Otherwise, the default unit system will be the code unit system, with
+1 ``code_length`` equal to 1 cm, and so on (given yt's default
+cgs/"Gaussian" unit system). If you would like to provide different
+conversions, you may supply conversions for length, time, and mass to ``load``
+using the ``units_override`` functionality:
+
+.. code-block:: python
+
+    import yt
+
+    units_override = {
+        "length_unit": (1.0, "Mpc"),
+        "time_unit": (1.0, "Myr"),
+        "mass_unit": (1.0e14, "Msun"),
+    }
+
+    ds = yt.load("parthenon.restart.final.rhdf", units_override=units_override)
+
+This means that the yt fields, e.g.
``("gas","density")``, +``("gas","velocity_x")``, ``("gas","magnetic_field_x")``, will be in cgs units +(or whatever unit system was specified), but the AthenaPK fields, e.g., +``("parthenon","prim_density")``, ``("parthenon","prim_velocity_1")``, +``("parthenon","prim_magnetic_field_1")``, will be in code units. + +The default normalization for various magnetic-related quantities such as +magnetic pressure, Alfven speed, etc., as well as the conversion between +magnetic code units and other units, is Gaussian/CGS, meaning that factors +of :math:`4\pi` or :math:`\sqrt{4\pi}` will appear in these quantities, e.g. +:math:`p_B = B^2/8\pi`. To use the Lorentz-Heaviside normalization instead, +in which the factors of :math:`4\pi` are dropped (:math:`p_B = B^2/2), for +example), set ``magnetic_normalization="lorentz_heaviside"`` in the call to +``yt.load``: + +.. code-block:: python + + ds = yt.load( + "parthenon.restart.final.rhdf", + units_override=units_override, + magnetic_normalization="lorentz_heaviside", + ) + +Alternative values (i.e., overriding the default ones stored in the simulation +output) for the following simulation parameters may be specified +using a ``parameters`` dict, accepting the following keys: + +* ``gamma``: ratio of specific heats, Type: Float. If not specified, + :math:`\gamma = 5/3` is assumed. +* ``mu``: mean molecular weight, Type: Float. If not specified, :math:`\mu = 0.6` + (for a fully ionized primordial plasma) is assumed. + +Other Parthenon based codes +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +As mentioned above, a default conversion from code fields to yt fields (e.g., +from a density field to ``("gas","density")``) is currently not available -- +though more specialized frontends may be added in the future. + +All raw data of a Parthenon-based simulation output is available through +the ``("parthenon","NAME")`` fields where ``NAME`` varies between codes +and the respective code documentation should be consulted. + +One option to manually convert those raw fields to the standard yt fields +is by adding derived fields, e.g., for the field named "``mass.density``" +that is stored in cgs units on disk: + +.. code-block:: python + + from yt import derived_field + + + @derived_field(name="density", units="g*cm**-3", sampling_type="cell") + def _density(field, data): + return data[("parthenon", "mass.density")] * yt.units.g / yt.units.cm**3 + +Moreover, an ideal equation of state is assumed with the following parameters, +which may be specified using a ``parameters`` dict, accepting the following keys: + +* ``gamma``: ratio of specific heats, Type: Float. If not specified, + :math:`\gamma = 5/3` is assumed. +* ``mu``: mean molecular weight, Type: Float. If not specified, :math:`\mu = 0.6` + (for a fully ionized primordial plasma) is assumed. + + .. _loading-orion-data: AMReX / BoxLib Data diff --git a/doc/source/reference/code_support.rst b/doc/source/reference/code_support.rst index 1a513732e2..c583a884f8 100644 --- a/doc/source/reference/code_support.rst +++ b/doc/source/reference/code_support.rst @@ -70,6 +70,8 @@ each supported output format using yt. 
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
 | OWLS/EAGLE            | Y          | Y         | Y          | Y     | Y        | Y        | Y          | Full        |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
+| Parthenon             | Y          | N         | Y          | Y     | Y        | Y        | Y          | Partial     |
++-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
 | Piernik               | Y          | N/A       | Y          | Y     | Y        | Y        | Y          | Full        |
 +-----------------------+------------+-----------+------------+-------+----------+----------+------------+-------------+
 | Pluto                 | Y          | N         | Y          | Y     | Y        | Y        | Y          | Partial     |
diff --git a/pyproject.toml b/pyproject.toml
index 08d31233d6..ecd0649e75 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -124,6 +124,7 @@ nc4-cm1 = ["yt[netCDF4]"]
 open-pmd = ["yt[HDF5]"]
 owls = ["yt[HDF5]"]
 owls-subfind = ["yt[HDF5]"]
+parthenon = ["yt[HDF5]"]
 ramses = ["yt[Fortran]"]
 rockstar = []
 sdf = ["requests>=2.20.0"]
@@ -183,6 +184,7 @@ full = [
     "yt[open_pmd]",
     "yt[owls]",
     "yt[owls_subfind]",
+    "yt[parthenon]",
     "yt[ramses]",
     "yt[rockstar]",
     "yt[sdf]",
@@ -353,6 +355,7 @@ addopts = '''
     --ignore-glob='/*/yt/frontends/open_pmd/tests/test_outputs.py'
     --ignore-glob='/*/yt/frontends/owls/tests/test_outputs.py'
     --ignore-glob='/*/yt/frontends/owls_subfind/tests/test_outputs.py'
+    --ignore-glob='/*/yt/frontends/parthenon/tests/test_outputs.py'
     --ignore-glob='/*/yt/frontends/ramses/tests/test_outputs.py'
     --ignore-glob='/*/yt/frontends/rockstar/tests/test_outputs.py'
     --ignore-glob='/*/yt/frontends/tipsy/tests/test_outputs.py'
diff --git a/tests/tests.yaml b/tests/tests.yaml
index 40836b3381..267c1a95a8 100644
--- a/tests/tests.yaml
+++ b/tests/tests.yaml
@@ -171,6 +171,11 @@ answer_tests:
   local_nc4_cm1_002: # PR 2176, 2998
     - yt/frontends/nc4_cm1/tests/test_outputs.py:test_cm1_mesh_fields

+  local_parthenon_001: # PR 4323
+    - yt/frontends/parthenon/tests/test_outputs.py:test_loading_data
+    - yt/frontends/parthenon/tests/test_outputs.py:test_cluster
+    - yt/frontends/parthenon/tests/test_outputs.py:test_derived_fields
+
 other_tests:
   unittests:
      # keep in sync with nose_ignores.txt
diff --git a/yt/frontends/__init__.py b/yt/frontends/__init__.py
index 4a5d6a2ff7..ef209207fb 100644
--- a/yt/frontends/__init__.py
+++ b/yt/frontends/__init__.py
@@ -35,6 +35,7 @@
     "open_pmd",
     "owls",
     "owls_subfind",
+    "parthenon",
     "ramses",
     "rockstar",
     "sdf",
diff --git a/yt/frontends/parthenon/__init__.py b/yt/frontends/parthenon/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/yt/frontends/parthenon/api.py b/yt/frontends/parthenon/api.py
new file mode 100644
index 0000000000..317f8071aa
--- /dev/null
+++ b/yt/frontends/parthenon/api.py
@@ -0,0 +1,4 @@
+from . import tests
+from .data_structures import ParthenonDataset, ParthenonGrid, ParthenonHierarchy
+from .fields import ParthenonFieldInfo
+from .io import IOHandlerParthenon
diff --git a/yt/frontends/parthenon/data_structures.py b/yt/frontends/parthenon/data_structures.py
new file mode 100644
index 0000000000..db3dfd60cf
--- /dev/null
+++ b/yt/frontends/parthenon/data_structures.py
@@ -0,0 +1,322 @@
+import os
+import warnings
+import weakref
+
+import numpy as np
+
+from yt.data_objects.index_subobjects.grid_patch import AMRGridPatch
+from yt.data_objects.static_output import Dataset
+from yt.fields.magnetic_field import get_magnetic_normalization
+from yt.funcs import mylog, setdefaultattr
+from yt.geometry.api import Geometry
+from yt.geometry.grid_geometry_handler import GridIndex
+from yt.utilities.chemical_formulas import compute_mu
+from yt.utilities.file_handler import HDF5FileHandler
+
+from .fields import ParthenonFieldInfo
+
+_geom_map = {
+    "UniformCartesian": Geometry.CARTESIAN,
+    "UniformCylindrical": Geometry.CYLINDRICAL,
+    "UniformSpherical": Geometry.SPHERICAL,
+}
+
+# fmt: off
+_cis = np.array(
+    [
+        [0, 0, 0],
+        [0, 0, 1],
+        [0, 1, 0],
+        [0, 1, 1],
+        [1, 0, 0],
+        [1, 0, 1],
+        [1, 1, 0],
+        [1, 1, 1],
+    ],
+    dtype="int64",
+)
+# fmt: on
+
+
+class ParthenonGrid(AMRGridPatch):
+    _id_offset = 0
+
+    def __init__(self, id, index, level):
+        AMRGridPatch.__init__(self, id, filename=index.index_filename, index=index)
+        self.Parent = None
+        self.Children = []
+        self.Level = level
+
+    def _setup_dx(self):
+        # So first we figure out what the index is. We don't assume
+        # that dx=dy=dz, at least here. We probably do elsewhere.
+        id = self.id - self._id_offset
+        LE, RE = self.index.grid_left_edge[id, :], self.index.grid_right_edge[id, :]
+        self.dds = self.ds.arr((RE - LE) / self.ActiveDimensions, "code_length")
+        if self.ds.dimensionality < 2:
+            self.dds[1] = 1.0
+        if self.ds.dimensionality < 3:
+            self.dds[2] = 1.0
+        self.field_data["dx"], self.field_data["dy"], self.field_data["dz"] = self.dds
+
+    def retrieve_ghost_zones(self, n_zones, fields, all_levels=False, smoothed=False):
+        if smoothed:
+            warnings.warn(
+                "ghost-zones interpolation/smoothing is not "
+                "currently supported for Parthenon data.",
+                category=RuntimeWarning,
+                stacklevel=2,
+            )
+            smoothed = False
+        return super().retrieve_ghost_zones(
+            n_zones, fields, all_levels=all_levels, smoothed=smoothed
+        )
+
+
+class ParthenonHierarchy(GridIndex):
+    grid = ParthenonGrid
+    _dataset_type = "parthenon"
+    _data_file = None
+
+    def __init__(self, ds, dataset_type="parthenon"):
+        self.dataset = weakref.proxy(ds)
+        self.directory = os.path.dirname(self.dataset.filename)
+        self.dataset_type = dataset_type
+        # for now, the index file is the dataset!
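+        # (Parthenon writes the mesh layout and the cell data into a single
+        # HDF5 file, so no separate index file exists.)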
+        self.index_filename = self.dataset.filename
+        self._handle = ds._handle
+        GridIndex.__init__(self, ds, dataset_type)
+
+    def _detect_output_fields(self):
+        self.field_list = [("parthenon", k) for k in self.dataset._field_map]
+
+    def _count_grids(self):
+        self.num_grids = self._handle["Info"].attrs["NumMeshBlocks"]
+
+    def _parse_index(self):
+        num_grids = self._handle["Info"].attrs["NumMeshBlocks"]
+
+        # TODO: In the unlikely case that this uses too much memory, implement
+        # a chunked read along one dimension
+        x = self._handle["Locations"]["x"][:, :]
+        y = self._handle["Locations"]["y"][:, :]
+        z = self._handle["Locations"]["z"][:, :]
+        mesh_block_size = self._handle["Info"].attrs["MeshBlockSize"]
+
+        self.grids = np.empty(self.num_grids, dtype="object")
+        levels = self._handle["Levels"][:]
+        for i in range(num_grids):
+            self.grid_left_edge[i] = np.array(
+                [x[i, 0], y[i, 0], z[i, 0]], dtype="float64"
+            )
+            self.grid_right_edge[i] = np.array(
+                [x[i, -1], y[i, -1], z[i, -1]], dtype="float64"
+            )
+            self.grid_dimensions[i] = mesh_block_size
+            self.grids[i] = self.grid(i, self, levels[i])
+
+        if self.dataset.dimensionality <= 2:
+            self.grid_right_edge[:, 2] = self.dataset.domain_right_edge[2]
+        if self.dataset.dimensionality == 1:
+            self.grid_right_edge[:, 1:] = self.dataset.domain_right_edge[1:]
+        self.grid_particle_count = np.zeros([self.num_grids, 1], dtype="int64")
+
+    def _populate_grid_objects(self):
+        for g in self.grids:
+            g._prepare_grid()
+            g._setup_dx()
+        self.max_level = self._handle["Info"].attrs["MaxLevel"]
+
+
+class ParthenonDataset(Dataset):
+    _field_info_class = ParthenonFieldInfo
+    _dataset_type = "parthenon"
+    _index_class = ParthenonHierarchy
+
+    def __init__(
+        self,
+        filename,
+        dataset_type="parthenon",
+        storage_filename=None,
+        parameters=None,
+        units_override=None,
+        unit_system="cgs",
+        default_species_fields=None,
+        magnetic_normalization="gaussian",
+    ):
+        self.fluid_types += ("parthenon",)
+        if parameters is None:
+            parameters = {}
+        self.specified_parameters = parameters
+        if units_override is None:
+            units_override = {}
+        self._handle = HDF5FileHandler(filename)
+        xrat = self._handle["Info"].attrs["RootGridDomain"][2]
+        yrat = self._handle["Info"].attrs["RootGridDomain"][5]
+        zrat = self._handle["Info"].attrs["RootGridDomain"][8]
+        if xrat != 1.0 or yrat != 1.0 or zrat != 1.0:
+            raise NotImplementedError(
+                "Logarithmic grids not yet supported/tested in Parthenon frontend."
+            )
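+
+        # (get_magnetic_normalization returns 4*pi for "gaussian" and 1 for
+        # "lorentz_heaviside"; this factor enters magnetic_unit below.)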
+        self._magnetic_factor = get_magnetic_normalization(magnetic_normalization)
+
+        self.geometry = _geom_map[self._handle["Info"].attrs["Coordinates"]]
+
+        if self.geometry == "cylindrical":
+            axis_order = ("r", "theta", "z")
+        else:
+            axis_order = None
+
+        Dataset.__init__(
+            self,
+            filename,
+            dataset_type,
+            units_override=units_override,
+            unit_system=unit_system,
+            default_species_fields=default_species_fields,
+            axis_order=axis_order,
+        )
+        if storage_filename is None:
+            storage_filename = self.basename + ".yt"
+        self.storage_filename = storage_filename
+
+    def _set_code_unit_attributes(self):
+        """
+        Generates the conversion to various physical units based on the
+        parameter file
+        """
+        for unit, cgs in [
+            ("length", "cm"),
+            ("time", "s"),
+            ("mass", "g"),
+        ]:
+            unit_param = f"Hydro/code_{unit}_cgs"
+            # Use units, if provided in the output
+            if unit_param in self.parameters:
+                setdefaultattr(
+                    self, f"{unit}_unit", self.quan(self.parameters[unit_param], cgs)
+                )
+            # otherwise assume code units are cgs
+            else:
+                mylog.warning(f"Assuming 1.0 code_{unit} = 1.0 {cgs}")
+                setdefaultattr(self, f"{unit}_unit", self.quan(1.0, cgs))
+
+        self.magnetic_unit = np.sqrt(
+            self._magnetic_factor
+            * self.mass_unit
+            / (self.time_unit**2 * self.length_unit)
+        )
+        self.magnetic_unit.convert_to_units("gauss")
+        self.velocity_unit = self.length_unit / self.time_unit
+
+    def _parse_parameter_file(self):
+        self.parameters.update(self.specified_parameters)
+        for key, val in self._handle["Params"].attrs.items():
+            if key in self.parameters.keys():
+                mylog.warning(
+                    f"Overriding existing {key!r} key in ds.parameters from data 'Params'"
+                )
+            self.parameters[key] = val
+
+        xmin, xmax = self._handle["Info"].attrs["RootGridDomain"][0:2]
+        ymin, ymax = self._handle["Info"].attrs["RootGridDomain"][3:5]
+        zmin, zmax = self._handle["Info"].attrs["RootGridDomain"][6:8]
+
+        self.domain_left_edge = np.array([xmin, ymin, zmin], dtype="float64")
+        self.domain_right_edge = np.array([xmax, ymax, zmax], dtype="float64")
+
+        self.domain_width = self.domain_right_edge - self.domain_left_edge
+        self.domain_dimensions = self._handle["Info"].attrs["RootGridSize"]
+
+        self._field_map = {}
+        k = 0
+
+        dnames = self._handle["Info"].attrs["OutputDatasetNames"]
+        num_components = self._handle["Info"].attrs["NumComponents"]
+
+        if "OutputFormatVersion" in self._handle["Info"].attrs.keys():
+            self.output_format_version = self._handle["Info"].attrs[
+                "OutputFormatVersion"
+            ]
+        else:
+            raise NotImplementedError("Could not determine OutputFormatVersion.")
+
+        # For a single variable, we need to wrap it in a tuple for the
+        # following zip to work.
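+        # (NumComponents is read back as a scalar np.uint64 rather than an
+        # array when the file stores exactly one output variable.)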
+        if isinstance(num_components, np.uint64):
+            dnames = (dnames,)
+            num_components = (num_components,)
+
+        component_name_offset = 0
+        for dname, num_component in zip(dnames, num_components):
+            for j in range(num_component):
+                fname = self._handle["Info"].attrs["ComponentNames"][
+                    j + component_name_offset
+                ]
+                self._field_map[fname] = (dname, j)
+                k += 1
+            component_name_offset = int(component_name_offset + num_component)
+
+        self.refine_by = 2
+        dimensionality = 3
+        if self.domain_dimensions[2] == 1:
+            dimensionality = 2
+        if self.domain_dimensions[1] == 1:
+            dimensionality = 1
+        self.dimensionality = dimensionality
+        self.current_time = self._handle["Info"].attrs["Time"]
+        self.num_ghost_zones = 0
+        self.field_ordering = "fortran"
+        self.boundary_conditions = [1] * 6
+        self.cosmological_simulation = False
+
+        if "periodicity" in self.parameters:
+            self._periodicity = tuple(self.parameters["periodicity"])
+        else:
+            boundary_conditions = self._handle["Info"].attrs["BoundaryConditions"]
+
+            inner_bcs = boundary_conditions[::2]
+            # outer_bcs = boundary_conditions[1::2]
+            # # Check self-consistency
+            # for inner_bc, outer_bc in zip(inner_bcs, outer_bcs):
+            #     if (inner_bc == "periodic" or outer_bc == "periodic") and inner_bc != outer_bc:
+            #         raise Exception("Inconsistent periodicity in boundary conditions")
+
+            self._periodicity = tuple(bc == "periodic" for bc in inner_bcs)
+
+        if "gamma" in self.parameters:
+            self.gamma = float(self.parameters["gamma"])
+        elif "Hydro/AdiabaticIndex" in self.parameters:
+            self.gamma = self.parameters["Hydro/AdiabaticIndex"]
+        else:
+            mylog.warning(
+                "Adiabatic index gamma could not be determined. Falling back to 5/3."
+            )
+            self.gamma = 5.0 / 3.0
+
+        if "mu" in self.parameters:
+            self.mu = self.parameters["mu"]
+        elif "Hydro/mu" in self.parameters:
+            self.mu = self.parameters["Hydro/mu"]
+        # Legacy He_mass_fraction parameter implemented in AthenaPK
+        # (fully ionized H/He mix: mu = 1 / (2X + 3Y/4) with X = 1 - Y)
+        elif "Hydro/He_mass_fraction" in self.parameters:
+            He_mass_fraction = self.parameters["Hydro/He_mass_fraction"]
+            self.mu = 1 / (He_mass_fraction * 3.0 / 4.0 + (1 - He_mass_fraction) * 2)
+        # Fall back to primordial gas composition (and show a warning)
+        else:
+            mylog.warning(
+                "Plasma composition could not be determined in the data file. Falling back to fully ionized primordial composition."
+            )
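+            # ("mu" is absent from self.parameters in this branch, so the
+            # .get below simply resolves to compute_mu's fallback value)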
+            self.mu = self.parameters.get("mu", compute_mu(self.default_species_fields))
+
+    @classmethod
+    def _is_valid(cls, filename: str, *args, **kwargs) -> bool:
+        return filename.endswith((".phdf", ".rhdf"))
+
+    @property
+    def _skip_cache(self):
+        return True
+
+    def __str__(self):
+        return self.basename.rsplit(".", 1)[0]
diff --git a/yt/frontends/parthenon/definitions.py b/yt/frontends/parthenon/definitions.py
new file mode 100644
index 0000000000..70a2797752
--- /dev/null
+++ b/yt/frontends/parthenon/definitions.py
@@ -0,0 +1,6 @@
+"""
+Various definitions shared by the other Parthenon frontend modules and routines.
+"""
diff --git a/yt/frontends/parthenon/fields.py b/yt/frontends/parthenon/fields.py
new file mode 100644
index 0000000000..90c342d29d
--- /dev/null
+++ b/yt/frontends/parthenon/fields.py
@@ -0,0 +1,171 @@
+import numpy as np
+
+from yt._typing import KnownFieldsT
+from yt.fields.field_info_container import FieldInfoContainer
+from yt.funcs import mylog
+from yt.utilities.physical_constants import kboltz, mh
+
+mag_units = "code_magnetic"
+pres_units = "code_mass/(code_length*code_time**2)"
+rho_units = "code_mass / code_length**3"
+vel_units = "code_length / code_time"
+mom_units = "code_mass / code_length**2 / code_time"
+eng_units = "code_mass / code_length / code_time**2"
+
+
+def velocity_field(mom_field):
+    def _velocity(field, data):
+        return data[mom_field] / data["gas", "density"]
+
+    return _velocity
+
+
+def _cooling_time_field(field, data):
+    cooling_time = (
+        data["gas", "density"]
+        * data["gas", "specific_thermal_energy"]
+        / data["gas", "cooling_rate"]
+    )
+
+    # Set the cooling time to infinity where the cooling rate is zero
+    inf_ct_mask = data["gas", "cooling_rate"] == 0
+    cooling_time[inf_ct_mask] = data.ds.quan(np.inf, "s")
+
+    return cooling_time
+
+
+class ParthenonFieldInfo(FieldInfoContainer):
+    known_other_fields: KnownFieldsT = (
+        # Need to provide info for both primitive and conserved variables, as
+        # they can be written independently (or even to the same output file).
+        # New field naming (i.e., "variable_component") of primitive variables
+        ("prim_density", (rho_units, ["density"], None)),
+        ("prim_velocity_1", (vel_units, ["velocity_x"], None)),
+        ("prim_velocity_2", (vel_units, ["velocity_y"], None)),
+        ("prim_velocity_3", (vel_units, ["velocity_z"], None)),
+        ("prim_pressure", (pres_units, ["pressure"], None)),
+        # Magnetic fields carry units of 1/sqrt(pi), so we cannot forward them
+        # directly and need to set up aliases below.
+        ("prim_magnetic_field_1", (mag_units, [], None)),
+        ("prim_magnetic_field_2", (mag_units, [], None)),
+        ("prim_magnetic_field_3", (mag_units, [], None)),
+        # New field naming (i.e., "variable_component") of conserved variables
+        ("cons_density", (rho_units, ["density"], None)),
+        ("cons_momentum_density_1", (mom_units, ["momentum_density_x"], None)),
+        ("cons_momentum_density_2", (mom_units, ["momentum_density_y"], None)),
+        ("cons_momentum_density_3", (mom_units, ["momentum_density_z"], None)),
+        ("cons_total_energy_density", (eng_units, ["total_energy_density"], None)),
+        # Magnetic fields carry units of 1/sqrt(pi), so we cannot forward them
+        # directly and need to set up aliases below.
+        ("cons_magnetic_field_1", (mag_units, [], None)),
+        ("cons_magnetic_field_2", (mag_units, [], None)),
+        ("cons_magnetic_field_3", (mag_units, [], None)),
+        # Legacy naming. Given that there's no conflict with the names above,
+        # we can just define those here so that the frontend works with older data.
+ ("Density", (rho_units, ["density"], None)), + ("Velocity1", (mom_units, ["velocity_x"], None)), + ("Velocity2", (mom_units, ["velocity_y"], None)), + ("Velocity3", (mom_units, ["velocity_z"], None)), + ("Pressure", (pres_units, ["pressure"], None)), + ("MagneticField1", (mag_units, [], None)), + ("MagneticField2", (mag_units, [], None)), + ("MagneticField3", (mag_units, [], None)), + ("MomentumDensity1", (mom_units, ["momentum_density_x"], None)), + ("MomentumDensity2", (mom_units, ["momentum_density_y"], None)), + ("MomentumDensity3", (mom_units, ["momentum_density_z"], None)), + ("TotalEnergyDensity", (eng_units, ["total_energy_density"], None)), + ) + + def setup_fluid_fields(self): + from yt.fields.magnetic_field import setup_magnetic_field_aliases + + unit_system = self.ds.unit_system + # Add velocity fields (if only momemtum densities are given) + for i, comp in enumerate(self.ds.coordinates.axis_order): + # Support both current and legacy scheme + for mom_field_name in ["MomentumDensity", "cons_momentum_density_"]: + mom_field = ("parthenon", f"{mom_field_name}{i+1}") + if mom_field in self.field_list: + self.add_field( + ("gas", f"velocity_{comp}"), + sampling_type="cell", + function=velocity_field(mom_field), + units=unit_system["velocity"], + ) + # Figure out thermal energy field + if ("parthenon", "Pressure") in self.field_list or ( + "parthenon", + "prim_pressure", + ) in self.field_list: + # only show warning for non-AthenaPK codes + if "Hydro/AdiabaticIndex" not in self.ds.parameters: + mylog.warning( + f"Adding a specific thermal energy field assuming an ideal gas with an " + f"adiabatic index of {self.ds.gamma}" + ) + + def _specific_thermal_energy(field, data): + return ( + data["gas", "pressure"] + / (data.ds.gamma - 1.0) + / data["gas", "density"] + ) + + self.add_field( + ("gas", "specific_thermal_energy"), + sampling_type="cell", + function=_specific_thermal_energy, + units=unit_system["specific_energy"], + ) + elif ("parthenon", "TotalEnergyDensity") in self.field_list or ( + "parthenon", + "cons_total_energy_density", + ) in self.field_list: + + def _specific_thermal_energy(field, data): + eint = ( + data["gas", "total_energy_density"] + - data["gas", "kinetic_energy_density"] + ) + + if ( + ("parthenon", "MagneticField1") in self.field_list + or ("parthenon", "prim_magnetic_field_1") in self.field_list + or ("parthenon", "cons_magnetic_field_1") in self.field_list + ): + eint -= data["gas", "magnetic_energy_density"] + return eint / data["gas", "density"] + + self.add_field( + ("gas", "specific_thermal_energy"), + sampling_type="cell", + function=_specific_thermal_energy, + units=unit_system["specific_energy"], + ) + + # Add temperature field + def _temperature(field, data): + return ( + (data["gas", "pressure"] / data["gas", "density"]) + * data.ds.mu + * mh + / kboltz + ) + + self.add_field( + ("gas", "temperature"), + sampling_type="cell", + function=_temperature, + units=unit_system["temperature"], + ) + + # We can simply all all variants as only fields present will be added + setup_magnetic_field_aliases( + self, "parthenon", ["MagneticField%d" % ax for ax in (1, 2, 3)] + ) + setup_magnetic_field_aliases( + self, "parthenon", ["prim_magnetic_field_%d" % ax for ax in (1, 2, 3)] + ) + setup_magnetic_field_aliases( + self, "parthenon", ["cons_magnetic_field_%d" % ax for ax in (1, 2, 3)] + ) diff --git a/yt/frontends/parthenon/io.py b/yt/frontends/parthenon/io.py new file mode 100644 index 0000000000..0880abdbf2 --- /dev/null +++ b/yt/frontends/parthenon/io.py 
@@ -0,0 +1,78 @@
+from itertools import groupby
+
+import numpy as np
+
+from yt.utilities.io_handler import BaseIOHandler
+from yt.utilities.logger import ytLogger as mylog
+
+
+# http://stackoverflow.com/questions/2361945/detecting-consecutive-integers-in-a-list
+def grid_sequences(grids):
+    g_iter = sorted(grids, key=lambda g: g.id)
+    for _, g in groupby(enumerate(g_iter), lambda i_x1: i_x1[0] - i_x1[1].id):
+        seq = [v[1] for v in g]
+        yield seq
+
+
+ii = [0, 1, 0, 1, 0, 1, 0, 1]
+jj = [0, 0, 1, 1, 0, 0, 1, 1]
+kk = [0, 0, 0, 0, 1, 1, 1, 1]
+
+
+class IOHandlerParthenon(BaseIOHandler):
+    _particle_reader = False
+    _dataset_type = "parthenon"
+
+    def __init__(self, ds):
+        super().__init__(ds)
+        self._handle = ds._handle
+
+    def _read_fluid_selection(self, chunks, selector, fields, size):
+        chunks = list(chunks)
+        f = self._handle
+        rv = {}
+        for field in fields:
+            # Always use *native* 64-bit float.
+            rv[field] = np.empty(size, dtype="=f8")
+        ng = sum(len(c.objs) for c in chunks)
+        mylog.debug(
+            "Reading %s cells of %s fields in %s blocks",
+            size,
+            [f2 for f1, f2 in fields],
+            ng,
+        )
+        last_dname = None
+        for field in fields:
+            ftype, fname = field
+            dname, fdi = self.ds._field_map[fname]
+            if dname != last_dname:
+                ds = f[f"/{dname}"]
+            ind = 0
+            for chunk in chunks:
+                for gs in grid_sequences(chunk.objs):
+                    start = gs[0].id - gs[0]._id_offset
+                    end = gs[-1].id - gs[-1]._id_offset + 1
+                    data = ds[start:end, fdi, :, :, :].transpose()
+                    for i, g in enumerate(gs):
+                        ind += g.select(selector, data[..., i], rv[field], ind)
+            last_dname = dname
+        return rv
+
+    def _read_chunk_data(self, chunk, fields):
+        f = self._handle
+        rv = {}
+        for g in chunk.objs:
+            rv[g.id] = {}
+        if len(fields) == 0:
+            return rv
+        for field in fields:
+            ftype, fname = field
+            dname, fdi = self.ds._field_map[fname]
+            ds = f[f"/{dname}"]
+            for gs in grid_sequences(chunk.objs):
+                start = gs[0].id - gs[0]._id_offset
+                end = gs[-1].id - gs[-1]._id_offset + 1
+                data = ds[start:end, fdi, :, :, :].transpose()
+                for i, g in enumerate(gs):
+                    rv[g.id][field] = np.asarray(data[..., i], "=f8")
+        return rv
diff --git a/yt/frontends/parthenon/misc.py b/yt/frontends/parthenon/misc.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/yt/frontends/parthenon/tests/__init__.py b/yt/frontends/parthenon/tests/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/yt/frontends/parthenon/tests/test_outputs.py b/yt/frontends/parthenon/tests/test_outputs.py
new file mode 100644
index 0000000000..2737b2cce4
--- /dev/null
+++ b/yt/frontends/parthenon/tests/test_outputs.py
@@ -0,0 +1,181 @@
+import numpy as np
+
+from yt.frontends.parthenon.api import ParthenonDataset
+from yt.loaders import load
+from yt.testing import (
+    assert_allclose,
+    assert_equal,
+    assert_true,
+    requires_file,
+)
+from yt.utilities.answer_testing.framework import (
+    GenericArrayTest,
+    data_dir_load,
+    requires_ds,
+    small_patch_amr,
+)
+
+_fields_parthenon_advection = (
+    ("parthenon", "advected_0_0"),
+    ("parthenon", "one_minus_advected"),
+    ("parthenon", "one_minus_advected_sq"),
+    ("parthenon", "one_minus_sqrt_one_minus_advected_sq_12"),
+    ("parthenon", "one_minus_sqrt_one_minus_advected_sq_37"),
+)
+
+# Simple 2D test (advected spherical blob) with AMR from the main Parthenon
+# test suite, adjusted so that x1 != x2.
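+# (The asymmetric domain presumably guards against transposed-axis bugs when
+# block extents are read back.)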
+# Ran with `./example/advection/advection-example -i ../tst/regression/test_suites/output_hdf5/parthinput.advection parthenon/mesh/nx1=128 parthenon/mesh/x1min=-1.0 parthenon/mesh/x1max=1.0 Advection/vx=2`
+# on changeset e5059ad
+parthenon_advection = "parthenon_advection/advection_2d.out0.final.phdf"
+
+
+@requires_ds(parthenon_advection)
+def test_loading_data():
+    ds = data_dir_load(parthenon_advection)
+    assert_equal(str(ds), "advection_2d.out0.final")
+    dd = ds.all_data()
+    # test mesh dims
+    vol = np.prod(ds.domain_right_edge - ds.domain_left_edge)
+    assert_equal(vol, ds.quan(2.0, "code_length**3"))
+    assert_allclose(dd.quantities.total_quantity("cell_volume"), vol)
+    # test data
+    for field in _fields_parthenon_advection:
+
+        def field_func(name):
+            return dd[name]
+
+        yield GenericArrayTest(ds, field_func, args=[field])
+
+    # read two fields and compare them against each other (one is stored as
+    # the square of the other in the output)
+    ad = ds.all_data()
+    assert_allclose(
+        ad[("parthenon", "one_minus_advected")] ** 2.0,
+        ad[("parthenon", "one_minus_advected_sq")],
+    )
+
+    # check that the peak is in the domain center (and at the highest
+    # refinement level)
+    dist_of_max_from_center = np.linalg.norm(
+        ad.quantities.max_location(("parthenon", "advected_0_0"))[1:] - ds.domain_center
+    )
+
+    dx_min, dx_max = ad.quantities.extrema(("index", "dx"))
+    dy_min, dy_max = ad.quantities.extrema(("index", "dy"))
+
+    assert_true(dist_of_max_from_center < np.min((dx_min, dy_min)))
+
+
+# 3D magnetized cluster center from the downstream Parthenon code AthenaPK
+# (restart output, conserved variables)
+athenapk_cluster = "athenapk_cluster/athenapk_cluster.restart.00000.rhdf"
+
+# Keplerian disk in 2D cylindrical coordinates from the downstream Parthenon
+# code AthenaPK (data output, primitive variables)
+athenapk_disk = "athenapk_disk/athenapk_disk.prim.00000.phdf"
+
+
+@requires_file(athenapk_cluster)
+def test_AthenaPK_rhdf():
+    # Test that a downstream AthenaPK dataset can be loaded with this
+    # Parthenon frontend
+    ds = data_dir_load(athenapk_cluster)
+    assert isinstance(ds, ParthenonDataset)
+
+    assert_equal(ds.domain_left_edge.in_units("code_length").v, (-0.15, -0.18, -0.2))
+    assert_equal(ds.domain_right_edge.in_units("code_length").v, (0.15, 0.18, 0.2))
+
+
+@requires_file(athenapk_disk)
+def test_AthenaPK_phdf():
+    # Test that a downstream AthenaPK dataset can be loaded with this
+    # Parthenon frontend
+    assert isinstance(data_dir_load(athenapk_disk), ParthenonDataset)
+
+
+_fields_derived = (
+    ("gas", "temperature"),
+    ("gas", "specific_thermal_energy"),
+)
+
+_fields_derived_cluster = (("gas", "magnetic_field_strength"),)
+
+
+@requires_ds(athenapk_cluster)
+def test_cluster():
+    ds = data_dir_load(athenapk_cluster)
+    assert_equal(str(ds), "athenapk_cluster.restart.00000")
+    for test in small_patch_amr(ds, _fields_derived + _fields_derived_cluster):
+        test_cluster.__name__ = test.description
+        yield test
+
+
+@requires_ds(athenapk_disk)
+@requires_ds(athenapk_cluster)
+def test_derived_fields():
+    # Check that derived fields like temperature are present for downstream
+    # codes which define them
+
+    # Check that temperature and specific thermal energy become defined from
+    # the primitive variables
+    ds = data_dir_load(athenapk_disk)
+    dd = ds.all_data()
+
+    for field in _fields_derived:
+
+        def field_func(name):
+            return dd[name]
+
+        yield GenericArrayTest(ds, field_func, args=[field])
+
+    # Check that hydro, magnetic, and cooling fields become defined from the
+    # conserved variables
+    ds = data_dir_load(athenapk_cluster)
+    dd = ds.all_data()
+
+    for field in _fields_derived + _fields_derived_cluster:
+
+        def field_func(name):
+            return dd[name]
+
+        yield GenericArrayTest(ds, field_func, args=[field])
+
+
+@requires_file(athenapk_cluster)
+@requires_file(athenapk_disk)
+def test_adiabatic_index():
+    # Read the adiabatic index from the dataset parameters
+    ds = data_dir_load(athenapk_cluster)
+    assert_allclose(ds.gamma, 5.0 / 3.0, rtol=1e-12)
+
+    ds = data_dir_load(athenapk_disk)
+    assert_allclose(ds.gamma, 4.0 / 3.0, rtol=1e-12)
+
+    # Override the adiabatic index via the parameters dict
+    ds = load(athenapk_disk, parameters={"gamma": 9.0 / 8.0})
+    assert_allclose(ds.gamma, 9.0 / 8.0, rtol=1e-12)
+
+
+@requires_file(athenapk_cluster)
+@requires_file(athenapk_disk)
+def test_molecular_mass():
+    # Read mu from the dataset parameters
+    ds = data_dir_load(athenapk_cluster)
+    assert_allclose(float(ds.mu), 0.5925925925925926, rtol=1e-12)
+
+    # Override mu via the parameters dict
+    ds = load(athenapk_disk, parameters={"mu": 137})
+    assert_equal(ds.mu, 137)
+
+
+@requires_file(athenapk_cluster)
+def test_units():
+    # Check that the units in the dataset are loaded correctly
+    ds = data_dir_load(athenapk_cluster)
+    assert_allclose(float(ds.quan(1, "code_time").in_units("Gyr")), 1, rtol=1e-12)
+    assert_allclose(float(ds.quan(1, "code_length").in_units("Mpc")), 1, rtol=1e-12)
+    assert_allclose(float(ds.quan(1, "code_mass").in_units("msun")), 1e14, rtol=1e-12)
+
+
+@requires_file(athenapk_disk)
+def test_load_cylindrical():
+    # Load a cylindrical dataset of a full disk
+    ds = data_dir_load(athenapk_disk)
+
+    # Check that the domain edges match r in [0.5, 2.0], theta in [0, 2*pi]
+    assert_equal(ds.domain_left_edge.in_units("code_length").v[:2], (0.5, 0))
+    assert_equal(ds.domain_right_edge.in_units("code_length").v[:2], (2.0, 2 * np.pi))
diff --git a/yt/sample_data_registry.json b/yt/sample_data_registry.json
index 5878a5c375..4e972fac0a 100644
--- a/yt/sample_data_registry.json
+++ b/yt/sample_data_registry.json
@@ -544,6 +544,18 @@
     "load_name": "snap_N64L16_068.parameter",
     "url": "https://yt-project.org/data/ahf_halos.tar.gz"
   },
+  "athenapk_cluster.tar.gz": {
+    "hash": "95f641b31cca00a39fa728be367861db9abc75ec3e3adcd9103325f675d3e9c1",
+    "load_kwargs": {},
+    "load_name": "athenapk_cluster.restart.00000.rhdf",
+    "url": "https://yt-project.org/data/athenapk_cluster.tar.gz"
+  },
+  "athenapk_disk.tar.gz": {
+    "hash": "3bd59a494a10b28cd691d1f69700f9e100a6a3a32541b0e16348d81163404cd0",
+    "load_kwargs": {},
+    "load_name": "athenapk_disk.prim.00000.phdf",
+    "url": "https://yt-project.org/data/athenapk_disk.tar.gz"
+  },
   "bw_cartesian_3d.tar.gz": {
     "hash": "de8b3b4c2a8c93f857c6729a118960eb1bcf244e1de07aee231d6fcc589c5628",
     "load_kwargs": {},
@@ -814,6 +826,12 @@
     "load_name": "groups_008/group_008.0.hdf5",
     "url": "https://yt-project.org/data/owls_fof_halos.tar.gz"
   },
+  "parthenon_advection.tar.gz": {
+    "hash": "93e9e747cc9c0f230f87ea960a116c7e8c117b348f1488d72402aaaa906e4e89",
+    "load_kwargs": {},
+    "load_name": "advection_2d.out0.final.phdf",
+    "url": "https://yt-project.org/data/parthenon_advection.tar.gz"
+  },
   "ramses_empty_record.tar.gz": {
     "hash": "1d119d8afa144abce5b980e5ba47c45bdfe5c851de8c7a43823b62524b0bd6e0",
     "load_kwargs": {},