Mergeback v3.0.x release feature branch (#4276)
* Add release highlights and pin rc version (#3898)

* Add release highlights and pin rc version

* review actions

* reorder release highlights (#3899)

Tweak release highlights

* Add whatsnew announcement (#3900)

* Fix spelling (#3903)

* Fix unit label handling (#3902)

* Add failing test of plotting

* Implement fix to pass test

* Update idiff to ignore irrelevant hyphens in path

* Update imagerepo (following docs)

* Update after review by @trexfeathers

* Add whatsnew entries

* Move whatsnew entries into correct file

* Release Docs Improvements (#3895)

* Minor phrasing change in 'Release candidate'.

* Before release deprecations.

* Whatsnew highlights section.

* Relax setup.py setup requirements (#3909)

* Updated CF saver version in User Guide and docstring (#3925)

* Updated CF saver version in User Guide and docstring

* Remove references to CF version of the loader in docstrings

* Added whatsnew

* Pin cftime<1.3.0

* Migrate to cirrus-ci (#3928)

* migrate from travis-ci to cirrus-ci

* added whatsnew entries

* ignore url for doc link check (#3929)

* whatsnew for coord default units (#3924)

* Cube._summary_coord_extra: efficiency and bugfix (#3922)

* Add Documentation Title Case Capitalization (#3940)

* Use Title Case Capitalisation for Documentation

* add whatsnew entry

* CI requirements drop pip packages (#3939)

* requirements pip to conda

* use pip install over develop

* default PY_VER to python versions

* update links (#3942)

* update links

* added s to http

* Add support for 1-d weights in collapse. (#3943)

* Remove warning for convert_units on lazy data (#3951)

* drop stickler references in docs (#3953)

* drop stickler references in docs

* remove sticker from common links

* update docs for travis-ci to cirrus-ci (#3954)

* update docs for travis-ci to cirrus-ci

* add 'travis-ci' reference locally to whatsnew

* update whatsnew comment

* docs for nox (#3955)

* docs for nox

* add titles, notices and additional detail

* review actions

* Resolve test coverage (#3947)

* test coverage for __init__ and __call__

* test coverage for metadata resolve and coverage

* partial test coverage for metadata mapping

* python 3.6 workaround for deepcopy of mock.sentinel

* test coverage for Resolve._free_mapping

* test coverage for Resolve convenience methods

* add test stub for Resolve._metadata_mapping

* fix Test__tgt_cube_position

* test coverage for shape

* test coverage for _as_compatible_cubes

* test coverage for Resolve._metadata_mapping

* test coverage for Resolve._prepare_common_dim_payload

* test coverage for Resolve._prepare_common_aux_payload

* test coverage for Resolve._prepare_points_and_bounds

* test coverage for Resolve._create_prepared_item

* test coverage for Resolve._prepare_local_payload_dim

* test coverage for Resolve._prepare_local_payload_aux

* test coverage for Resolve._prepare_local_payload_scalar + docs URL skip

* test coverage for Resolve._prepare_local_payload

* test coverage for Resolve._metadata_prepare

* added docs URL linkcheck skip

* test coverage for Resolve._prepare_factory_payload

* test coverage for Resolve._get_prepared_item

* review actions

* test coverage for Resolve.cube

* pin v3.0.0 version and whatsnew date (#3956)

* update github ci checks image (#3957)

* Promote unknown units to dimensionless in aux factories (#3965)

* promote unknown to dimensionless units in aux factories

* patch aux factories to promote unknown to dimensionless units for formula terms

* add whatsnew PR entry

* Release branch prepare for v3.0.2 (#4044)

* update intersphinx mapping and matplotlib urls (#4003)

* update intersphinx mapping and matplotlib urls

* use matplotlib intersphinx where possible

* review actions

* review actions

* cirrus-ci compute credits (#4007)

* cirrus-ci conditional tasks (#4019)

* cirrus-ci conditional tasks

* use bc for bash arithmetic

* revert back to sed

* use expr

* reword

* minor documentation changes

* review actions

* prepare v3.0.2 release

* Fix test_incompatible_dimensions test (#3977)

* test_incompatible_dimensions used a ragged array for the test, which has been deprecated in numpy, and now fails if dtype is anything other than object. This test appears to be checking that the addition of a [2x4] masked array to a [2x3] masked cube should raise a ValueError. This commit fixes the creation of the `data3` object to be a [2x4] non-ragged array.

* Added entry to what's new

* Added name to core developer list :)

* Update latest.rst

Fixed space in PR macro call

* update whatsnew v3.0.2

Co-authored-by: James Penn <[email protected]>

* um_stash_source attribute improved handling (#4035)

* Modified pyke rule

* Tests added

* Black and whatsnew

* Include PR number

* Remove latest.rst

* Add what's new

* Support for py38 and Cartopy 0.19 (#4130)

* mpl 3.4.1 updates (#4087)

* replace most recent hashes (#4112)

* Corrected plot_anomaly_log_colouring for new Matplotlib linscale rules. (#4115)

* Cartopy 0.19 updates (#4128)

* Use assertArrayAllClose for sqrt test (#4118)

* using AllClose for sqrt test

* Omitting the checksum from test cml

* use ArrayAllClose (rebase reset it?)

* Iris py38 (#3976)

* support for py38

* update CI and noxfile

* enforce alphabetical xml element attribute order

* full tests for py38 + fix docs-tests

* add whatsnew entry

* update doc-strings + review actions

* Alternate xml handling routine (#29)

* all xml tests pass for nox tests-3.8

* restored docstrings

* move sort_xml_attrs

* make sort_xml_attrs a classmethod

* update sort_xml_attr doc-string

Co-authored-by: Bill Little <[email protected]>

* add jamesp to whatsnew + minor tweak

Co-authored-by: James Penn <[email protected]>

* reinstate black and nox links

* Linkcheck update (#4104)

* Updated links

* Added login remark

* Removed extra space

* change to kick cirrus

* kick cirrus

* test verbose on cirrus

* Removed test settings.

Co-authored-by: Bill Little <[email protected]>
Co-authored-by: Martin Yeo <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: James Penn <[email protected]>
Co-authored-by: James Penn <[email protected]>
Co-authored-by: tkknight <[email protected]>

* Intersection bounds fix (replacement PR) (#4059)

* fix intersection out of bounds point

* fix intersection out of bounds point take 2

* add referencing comment

* bootstrap ci for 3.0.x (#4154)

* add cirrus-ci docker support

* wip

* wip

* wip

* wip

* wip

* add pip

* revert docker

* fix add_weekday (#32)

* fix add_weekday

* fix black link

* fix link

Co-authored-by: Ruth Comer <[email protected]>

* Tweak to speed up dask wrapping of netcdf variables (#4135)

* Use 'meta' in da.from_array to stop it sampling netcdf variables, which is quite slow.

* Fix PR number.

* Fix test.

* Review changes.

* Update docs/iris/src/whatsnew/3.0.2.rst

Co-authored-by: lbdreyer <[email protected]>

* Fb fix cube coord arithmetic (#4159)

* fix coord with cube arithmetic

* add coord arithmetic test coverage

* add a whatsnew entry

* Pp daskfix (#4141)

* Remove workaround when dask-wrapping PP data, obsoleted by #4135.

* Remove old slice testing

* Add whats new

* move whitespace?

* missing line

Co-authored-by: lbdreyer <[email protected]>

* add release date to v3.0.2 whatsnew (#4160)

* update readme logo img src and href (#4006) (#4216)

* In cube.intersection, find split cells using a tolerant equality check (#4220)

* In cube.intersection, find split cells using a tolerant equality check

* remove unused var

* Add bounds check to tests; add what's new

* Fix whats new; update version number

* Update 3.0.3.rst

Update release date in `3.0.3.rst`

* Wide cubestr fix 3v0vx v2 (#4233)

* Widen cube printout for long ancil or cell-measure names.

* Adjust result for fixed cube-units printout.

* Added whatsnew.

* Include newest whatsnew in index.

* Review: fix whatsnew structure.

* Fix initial sections display.

* Fix some typos in the cube maths docs (#4248)

* (More of) Wide cubestr fix 3v0vx v2 (#4238)

* Fix whatsnew for #4233.

* Move 'empty slicings' whatsnew entry to 3.0.2 section.

* unpin cftime (#4222)

* Test wrangling vs v3.0.x (#4249)

* test changes

* add awol init

* rename system test cases

* fix PartialDateTime tests

* fix typo

* add whatsnew

* Fix mergeback PR #4035

* Fix mergeback PR #4035 tests

* Update mergeback nox conda-lock files

Co-authored-by: tkknight <[email protected]>
Co-authored-by: Zeb Nicholls <[email protected]>
Co-authored-by: Martin Yeo <[email protected]>
Co-authored-by: Jon Seddon <[email protected]>
Co-authored-by: Ruth Comer <[email protected]>
Co-authored-by: Patrick Peglar <[email protected]>
Co-authored-by: James Penn <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: James Penn <[email protected]>
Co-authored-by: Ruth Comer <[email protected]>
Co-authored-by: lbdreyer <[email protected]>
Co-authored-by: lbdreyer <[email protected]>
13 people authored Aug 10, 2021
1 parent 06d5242 commit 4abaa8f
Showing 39 changed files with 2,335 additions and 403 deletions.
4 changes: 2 additions & 2 deletions docs/src/userguide/cube_maths.rst
@@ -157,7 +157,7 @@ Let's extend this example slightly, by taking a slice from the middle
air_temperature / (K) (longitude: 49; time: 240)

Compared to our original time-series, the *air_temp_T_slice* cube has one
less dimension *and* it's shape if different. However, this doesn't prevent
less dimension *and* its shape is different. However, this doesn't prevent
us from performing cube arithmetic with it, thanks to the extended cube
broadcasting behaviour::

@@ -237,7 +237,7 @@ by a cube with unit ``'1'`` will preserve units, so the cube ``temperature``
will be given the same units as are in ``pot_temperature``. It should be
noted that some combinations of units, particularly those involving power
operations, will not result in a valid unit and will cause the calculation
to fail. For example, a cube ``a`` had units ``'m'`` then ``a ** 0.5``
to fail. For example, if a cube ``a`` had units ``'m'`` then ``a ** 0.5``
would result in an error since the square root of a meter has no meaningful
unit (if ``a`` had units ``'m2'`` then ``a ** 0.5`` would result in a cube
with units ``'m'``).
3 changes: 1 addition & 2 deletions docs/src/whatsnew/3.0.1.rst
@@ -24,12 +24,11 @@ This document explains the changes made to Iris for this release
factory, and the associated derived coordinate will be missing. (:pull:`3965`)


.. dropdown:: :opticon:`report` Release Highlights
.. dropdown:: :opticon:`report` v3.0.0 Release Highlights
:container: + shadow
:title: text-primary text-center font-weight-bold
:body: bg-light
:animate: fade-in
:open:

The highlights for this major release of Iris include:

571 changes: 571 additions & 0 deletions docs/src/whatsnew/3.0.2.rst

Large diffs are not rendered by default.

586 changes: 586 additions & 0 deletions docs/src/whatsnew/3.0.3.rst

Large diffs are not rendered by default.

620 changes: 620 additions & 0 deletions docs/src/whatsnew/3.0.4.rst

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions docs/src/whatsnew/index.rst
@@ -11,6 +11,9 @@ Iris versions.
:maxdepth: 1

latest.rst
3.0.4.rst
3.0.3.rst
3.0.2.rst
3.0.1.rst
3.0.rst
2.4.rst
4 changes: 3 additions & 1 deletion lib/iris/_lazy_data.py
@@ -191,7 +191,9 @@ def as_lazy_data(data, chunks=None, asarray=False):
if isinstance(data, ma.core.MaskedConstant):
data = ma.masked_array(data.data, mask=data.mask)
if not is_lazy_data(data):
data = da.from_array(data, chunks=chunks, asarray=asarray)
data = da.from_array(
data, chunks=chunks, asarray=asarray, meta=np.ndarray
)
return data
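The change above passes `meta` to `da.from_array` so that dask does not index the wrapped array just to infer its metadata — for a netCDF variable that probe touches the file and is slow. A minimal standalone sketch of the idea, using a hypothetical `SlowArray` stand-in rather than a real netCDF variable:

```python
import dask.array as da
import numpy as np


class SlowArray:
    """Stand-in for a netCDF variable: indexing is expensive."""

    shape = (4,)
    dtype = np.dtype("f8")
    ndim = 1

    def __getitem__(self, keys):
        # In a real netCDF variable this would hit the disk.
        return np.zeros(self.shape, self.dtype)[keys]


# Without meta, dask samples the array (one extra __getitem__ call)
# purely to learn its type; meta=np.ndarray skips that probe.
lazy = da.from_array(SlowArray(), chunks=-1, meta=np.ndarray)
print(lazy.compute())
```

The data are still read lazily on `compute()`; only the up-front metadata probe is avoided.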


37 changes: 14 additions & 23 deletions lib/iris/coords.py
@@ -403,28 +403,15 @@ def __binary_operator__(self, other, mode_constant):
# Note: this method includes bounds handling code, but it only runs
# within Coord type instances, as only these allow bounds to be set.

if isinstance(other, _DimensionalMetadata) or not isinstance(
other, (int, float, np.number)
):

def typename(obj):
if isinstance(obj, Coord):
result = "Coord"
else:
# We don't really expect this, but do something anyway.
result = self.__class__.__name__
return result

emsg = "{selftype} {operator} {othertype}".format(
selftype=typename(self),
operator=self._MODE_SYMBOL[mode_constant],
othertype=typename(other),
if isinstance(other, _DimensionalMetadata):
emsg = (
f"{self.__class__.__name__} "
f"{self._MODE_SYMBOL[mode_constant]} "
f"{other.__class__.__name__}"
)
raise iris.exceptions.NotYetImplementedError(emsg)

else:
# 'Other' is an array type : adjust points, and bounds if any.
result = NotImplemented
if isinstance(other, (int, float, np.number)):

def op(values):
if mode_constant == self._MODE_ADD:
@@ -441,8 +428,14 @@ def op(values):

new_values = op(self._values_dm.core_data())
result = self.copy(new_values)

if self.has_bounds():
result.bounds = op(self._bounds_dm.core_data())
else:
# must return NotImplemented to ensure invocation of any
# associated reflected operator on the "other" operand
# see https://docs.python.org/3/reference/datamodel.html#emulating-numeric-types
result = NotImplemented

return result

@@ -461,8 +454,7 @@ def __div__(self, other):
def __truediv__(self, other):
return self.__binary_operator__(other, self._MODE_DIV)

def __radd__(self, other):
return self + other
__radd__ = __add__

def __rsub__(self, other):
return (-self) + other
@@ -473,8 +465,7 @@ def __rdiv__(self, other):
def __rtruediv__(self, other):
return self.__binary_operator__(other, self._MODE_RDIV)

def __rmul__(self, other):
return self * other
__rmul__ = __mul__

def __neg__(self):
values = -self._core_values()
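The rewritten `__binary_operator__` above returns `NotImplemented` for unhandled operand types (instead of raising), so Python can fall back to the other operand's reflected operator, and `__radd__`/`__rmul__` become plain aliases of the forward operators. A toy sketch of that protocol — these classes are illustrative only, not Iris code:

```python
class Scalar:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, (int, float)):
            return Scalar(self.value + other)
        # Returning NotImplemented (not raising) lets Python try the
        # reflected operator on the other operand instead.
        return NotImplemented

    # Addition is commutative here, so the reflected form is an alias.
    __radd__ = __add__


class Wrapper:
    def __init__(self, scalar):
        self.scalar = scalar

    def __radd__(self, other):
        # Invoked because Scalar.__add__ returned NotImplemented.
        return Wrapper(Scalar(other.value + self.scalar.value))


print((Scalar(1) + 2).value)    # 3
print((3 + Scalar(1)).value)    # 4, via the __radd__ alias
print((Scalar(1) + Wrapper(Scalar(2))).scalar.value)  # 3, via Wrapper.__radd__
```

This is why raising from the forward operator (as the old code did) was a bug: it prevented the other operand from ever handling the operation.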
62 changes: 38 additions & 24 deletions lib/iris/cube.py
@@ -3193,24 +3193,26 @@ def _intersect_modulus(
# and call the new bounds = the new points + the difference.
pre_wrap_delta = np.diff(coord.bounds[inside_indices])
post_wrap_delta = np.diff(bounds[inside_indices])
close_enough = np.allclose(pre_wrap_delta, post_wrap_delta)
if not close_enough:
split_cell_indices, _ = np.where(
pre_wrap_delta != post_wrap_delta
)

# Recalculate the extended minimum.
split_cell_indices, _ = np.where(
~np.isclose(pre_wrap_delta, post_wrap_delta)
)
if split_cell_indices.size:
indices = inside_indices[split_cell_indices]
cells = bounds[indices]
cells_delta = np.diff(coord.bounds[indices])

# Watch out for ascending/descending bounds
if cells_delta[0, 0] > 0:
cells[:, 0] = cells[:, 1] - cells_delta[:, 0]
minimum = np.min(cells[:, 0])
else:
cells[:, 1] = cells[:, 0] + cells_delta[:, 0]
minimum = np.min(cells[:, 1])
if maximum % modulus not in cells:
# Recalculate the extended minimum only if the output bounds
# do not span the requested (minimum, maximum) range. If
# they do span that range, this adjustment would give unexpected
# results (see #3391).
cells_delta = np.diff(coord.bounds[indices])

# Watch out for ascending/descending bounds.
if cells_delta[0, 0] > 0:
cells[:, 0] = cells[:, 1] - cells_delta[:, 0]
minimum = np.min(cells[:, 0])
else:
cells[:, 1] = cells[:, 0] + cells_delta[:, 0]
minimum = np.min(cells[:, 1])

points = wrap_lons(coord.points, minimum, modulus)
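The fix above replaces an exact `!=` comparison with `~np.isclose(...)` when locating cells split by the wrap-around, so floating-point noise introduced by wrapping no longer flags whole cells as split. A standalone sketch with made-up cell widths:

```python
import numpy as np

# Cell widths before and after longitude wrapping: the second pair
# differs only by floating-point noise, the third is genuinely split.
pre_wrap_delta = np.array([[10.0], [10.0], [10.0]])
post_wrap_delta = np.array([[10.0], [10.0 + 1e-12], [-350.0]])

# Exact comparison wrongly flags the noisy cell as split too.
exact, _ = np.where(pre_wrap_delta != post_wrap_delta)

# Tolerant comparison only flags the genuinely split cell.
tolerant, _ = np.where(~np.isclose(pre_wrap_delta, post_wrap_delta))

print(exact.tolist())     # [1, 2]
print(tolerant.tolist())  # [2]
```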

@@ -3764,37 +3766,49 @@ def __ne__(self, other):
def __hash__(self):
return hash(id(self))

def __add__(self, other):
return iris.analysis.maths.add(self, other)
__add__ = iris.analysis.maths.add

def __iadd__(self, other):
return iris.analysis.maths.add(self, other, in_place=True)

__radd__ = __add__

def __sub__(self, other):
return iris.analysis.maths.subtract(self, other)
__sub__ = iris.analysis.maths.subtract

def __isub__(self, other):
return iris.analysis.maths.subtract(self, other, in_place=True)

def __rsub__(self, other):
return (-self) + other

__mul__ = iris.analysis.maths.multiply
__rmul__ = iris.analysis.maths.multiply

def __imul__(self, other):
return iris.analysis.maths.multiply(self, other, in_place=True)

__rmul__ = __mul__

__div__ = iris.analysis.maths.divide

def __idiv__(self, other):
return iris.analysis.maths.divide(self, other, in_place=True)

__truediv__ = iris.analysis.maths.divide
def __rdiv__(self, other):
data = 1 / self.core_data()
reciprocal = self.copy(data=data)
return iris.analysis.maths.multiply(reciprocal, other)

def __itruediv__(self, other):
return iris.analysis.maths.divide(self, other, in_place=True)
__truediv__ = __div__

__itruediv__ = __idiv__

__rtruediv__ = __rdiv__

__pow__ = iris.analysis.maths.exponentiate

def __neg__(self):
return self.copy(data=-self.core_data())

# END OPERATOR OVERLOADS

def collapsed(self, coords, aggregator, **kwargs):
20 changes: 17 additions & 3 deletions lib/iris/fileformats/_nc_load_rules/actions.py
@@ -43,11 +43,15 @@
from functools import wraps
import warnings

from iris.config import get_logger
import iris.fileformats.cf
import iris.fileformats.pp as pp

from . import helpers as hh

# Configure the logger.
logger = get_logger(__name__, fmt="[%(funcName)s]")


def _default_rulenamesfunc(func_name):
# A simple default function to deduce the rules-name from an action-name.
@@ -385,13 +389,23 @@ def action_ukmo_stash(engine):
attr_name = "ukmo__um_stash_source"
attr_value = getattr(var, attr_name, None)
if attr_value is None:
attr_altname = "um_stash_source" # legacy form
attr_value = getattr(var, attr_altname, None)
attr_name = "um_stash_source" # legacy form
attr_value = getattr(var, attr_name, None)
if attr_value is None:
rule_name += "(NOT-TRIGGERED)"
else:
# No helper routine : just do it
engine.cube.attributes["STASH"] = pp.STASH.from_msi(attr_value)
try:
stash_code = pp.STASH.from_msi(attr_value)
except (TypeError, ValueError):
engine.cube.attributes[attr_name] = attr_value
msg = (
"Unable to set attribute STASH as not a valid MSI "
f'string "mXXsXXiXXX", got "{attr_value}"'
)
logger.debug(msg)
else:
engine.cube.attributes["STASH"] = stash_code

return rule_name
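The new `try/except` above keeps a malformed `um_stash_source` value as a plain cube attribute instead of failing the whole load. A rough standalone sketch of the same fallback pattern, using a simplified MSI parser (not the real `pp.STASH.from_msi`):

```python
import re


def parse_msi(value):
    """Parse an 'mXXsXXiXXX' MSI string into (model, section, item)."""
    match = re.fullmatch(r"m(\d{2})s(\d{2})i(\d{3})", str(value))
    if match is None:
        raise ValueError(f"not a valid MSI string: {value!r}")
    return tuple(int(group) for group in match.groups())


attributes = {}
for name, value in [("um_stash_source", "m01s16i004"),
                    ("um_stash_source", "not-a-stash-code")]:
    try:
        attributes["STASH"] = parse_msi(value)
    except (TypeError, ValueError):
        # Fall back to storing the raw attribute so loading continues.
        attributes[name] = value

print(attributes)  # {'STASH': (1, 16, 4), 'um_stash_source': 'not-a-stash-code'}
```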

8 changes: 4 additions & 4 deletions lib/iris/fileformats/name_loaders.py
@@ -478,7 +478,7 @@ def _generate_cubes(
coord_units = _parse_units("FL")
if coord.name == "time":
coord_units = time_unit
pts = time_unit.date2num(coord.values)
pts = np.float_(time_unit.date2num(coord.values))

if coord.dimension is not None:
if coord.name == "longitude":
@@ -505,7 +505,7 @@
):
dt = coord.values - field_headings["Av or Int period"]
bnds = time_unit.date2num(np.vstack((dt, coord.values)).T)
icoord.bounds = bnds
icoord.bounds = np.float_(bnds)
else:
icoord.guess_bounds()
cube.add_dim_coord(icoord, coord.dimension)
@@ -522,7 +522,7 @@
):
dt = coord.values - field_headings["Av or Int period"]
bnds = time_unit.date2num(np.vstack((dt, coord.values)).T)
icoord.bounds = bnds[i, :]
icoord.bounds = np.float_(bnds[i, :])
cube.add_aux_coord(icoord)

# Headings/column headings which are encoded elsewhere.
@@ -1250,7 +1250,7 @@ def load_NAMEIII_trajectory(filename):

long_name = units = None
if isinstance(values[0], datetime.datetime):
values = time_unit.date2num(values)
values = np.float_(time_unit.date2num(values))
units = time_unit
if name == "Time":
name = "time"
4 changes: 2 additions & 2 deletions lib/iris/fileformats/netcdf.py
@@ -1033,7 +1033,7 @@ def write(
dtype(i.e. 'i2', 'short', 'u4') or a dict of packing parameters as
described below. This provides support for netCDF data packing as
described in
http://www.unidata.ucar.edu/software/netcdf/documentation/NUG/best_practices.html#bp_Packed-Data-Values
https://www.unidata.ucar.edu/software/netcdf/documentation/NUG/best_practices.html#bp_Packed-Data-Values
If this argument is a type (or type string), appropriate values of
scale_factor and add_offset will be automatically calculated based
on `cube.data` and possible masking. For more control, pass a dict
@@ -2579,7 +2579,7 @@ def save(
(i.e. 'i2', 'short', 'u4') or a dict of packing parameters as described
below or an iterable of such types, strings, or dicts.
This provides support for netCDF data packing as described in
http://www.unidata.ucar.edu/software/netcdf/documentation/NUG/best_practices.html#bp_Packed-Data-Values
https://www.unidata.ucar.edu/software/netcdf/documentation/NUG/best_practices.html#bp_Packed-Data-Values
If this argument is a type (or type string), appropriate values of
scale_factor and add_offset will be automatically calculated based
on `cube.data` and possible masking. For more control, pass a dict with
