Merge branch 'master' into nimrod_file_format
* master:
  Remove TestGribMessage (SciTools#3672)
  Removed iris.tests.integration.test_grib_load and related CML files. (SciTools#3670)
  Removed grib-specific test to iris-grib. (SciTools#3671)
  Fixed asv project name to 'scitools-iris'. (SciTools#3660)
  Remove cube iter (SciTools#3656)
  Remove test_grib_save.py (SciTools#3669)
  Remove test_grib2 integration tests (SciTools#3664)
  Remove uri callback test which is moved to iris-grib (SciTools#3665)
  2v4 mergeback picks (SciTools#3668)
  Remove test_grib_save_rules.py which has been moved to iris-grib (SciTools#3666)
  Removed ununused skipIf. (SciTools#3632)
  Remove grib-specific test. (SciTools#3663)
  Remove obsolete test. (SciTools#3662)
  remove redundant tests (SciTools#3650)
  Fixed tests since Numpy 1.18 deprecation of non-int num arguments for linspace. (SciTools#3655)
MoseleyS committed Mar 6, 2020
2 parents 2aca4c5 + 06e9e2b commit 5e63577
Showing 61 changed files with 302 additions and 3,273 deletions.
2 changes: 1 addition & 1 deletion asv.conf.json
@@ -2,7 +2,7 @@
 // details on what can be included in this file.
 {
     "version": 1,
-    "project": "iris",
+    "project": "scitools-iris",
     "project_url": "https://github.com/SciTools/iris",
     "repo": ".",
     "environment_type": "conda",
3 changes: 3 additions & 0 deletions docs/iris/src/userguide/subsetting_a_cube.rst
@@ -103,6 +103,9 @@ same way as loading with constraints:

 Cube iteration
 ^^^^^^^^^^^^^^^
+It is not possible to directly iterate over an Iris cube. That is, you cannot use code such as
+``for x in cube:``. However, you can iterate over cube slices, as this section details.
+
 A useful way of dealing with a Cube in its **entirety** is by iterating over its layers or slices.
 For example, to deal with a 3 dimensional cube (z,y,x) you could iterate over all 2 dimensional slices in y and x
 which make up the full 3d cube.::
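The slicing pattern this documentation change describes can be sketched with plain NumPy (the `data` array below is a hypothetical stand-in for a 3-dimensional cube; Iris itself provides `Cube.slices` for the real operation):

```python
import numpy as np

# Hypothetical stand-in for a 3-dimensional (z, y, x) cube.
data = np.arange(24).reshape(2, 3, 4)  # z=2, y=3, x=4

# Iterate over the 2-dimensional (y, x) slices that make up the full
# 3-dimensional array, one slice per z index.
for yx_slice in (data[i] for i in range(data.shape[0])):
    print(yx_slice.shape)  # (3, 4)
```

Direct iteration (`for x in cube:`) is exactly what the cube-iter removal above disallows; slicing is the supported replacement.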
59 changes: 59 additions & 0 deletions docs/iris/src/whatsnew/2.4.rst
@@ -0,0 +1,59 @@
What's New in Iris 2.4.0
************************

:Release: 2.4.0
:Date: 2020-02-20

This document explains the new/changed features of Iris in version 2.4.0
(:doc:`View all changes <index>`.)


Iris 2.4.0 Features
===================

.. admonition:: Last Python 2 version of Iris

   Iris 2.4 is a final extra release of Iris 2, which back-ports specific desired features from
   Iris 3 (not yet released).

   The purpose of this is both to support early adoption of certain newer features,
   and to provide a final release for Python 2.

   The next release of Iris will be version 3.0: a major-version release which
   introduces breaking API and behavioural changes, and only supports Python 3.

* :class:`iris.coord_systems.Geostationary` can now accept creation arguments of
  `false_easting=None` or `false_northing=None`, equivalent to values of 0.
  Previously these kwargs could be omitted, but could not be set to `None`.
  This also enables loading of NetCDF data on a Geostationary grid where either of these
  keys is not present as a grid-mapping variable property. Previously, loading any
  such data caused an exception.
* The area weights used when performing area weighted regridding with :class:`iris.analysis.AreaWeighted`
  are now cached.
  This allows a significant speedup when regridding multiple similar cubes, by repeatedly using
  a `'regridder' object <../iris/iris/analysis.html?highlight=regridder#iris.analysis.AreaWeighted.regridder>`_
  created beforehand.
* Name constraint matching against cubes during loading or extracting has been relaxed from strictly matching
  against the :meth:`~iris.cube.Cube.name` to matching against any of the
  ``standard_name``, ``long_name``, NetCDF ``var_name``, or ``STASH`` metadata attributes of a cube.
* Cubes and coordinates now have a new ``names`` property that contains a tuple of the
  ``standard_name``, ``long_name``, NetCDF ``var_name``, and ``STASH`` metadata attributes.
* The :class:`~iris.NameConstraint` provides richer name constraint matching when loading or extracting
  against cubes, by supporting a constraint against any combination of
  ``standard_name``, ``long_name``, NetCDF ``var_name``, and ``STASH``
  from the attributes dictionary of a :class:`~iris.cube.Cube`.


Iris 2.4.0 Dependency Updates
=============================
* Iris is now able to use the latest version of matplotlib.


Bugs Fixed
==========
* Fixed a problem which was causing file loads to fetch *all* field data
  whenever UM files (PP or Fieldsfiles) were loaded.
  With large source files, initial file loads were slow, with large memory usage
  before any cube data was even fetched, and large enough files could cause a crash.
  The problem occurred only with Dask versions >= 2.0.
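The relaxed name matching described in the features list above can be illustrated with a small sketch. This is not the iris implementation; the `matches_name` helper and the metadata dict are hypothetical, standing in for a cube's name-related fields:

```python
# Illustrative sketch (not the iris implementation) of relaxed name
# matching: a target name matches if it equals any one of the
# name-related metadata fields.
def matches_name(metadata, target):
    keys = ("standard_name", "long_name", "var_name", "STASH")
    return any(metadata.get(key) == target for key in keys)

# Hypothetical name metadata for a single cube.
cube_meta = {
    "standard_name": "air_temperature",
    "long_name": "screen-level air temperature",
    "var_name": "tas",
    "STASH": "m01s03i236",
}

print(matches_name(cube_meta, "tas"))          # True
print(matches_name(cube_meta, "temperature"))  # False
```

Under strict matching only the single `name()` result would have been considered; here any of the four fields can satisfy the constraint.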

@@ -0,0 +1,3 @@
* The `__iter__()` method in :class:`iris.cube.Cube` was set to `None`.
  `TypeError` is still raised if a `Cube` is iterated over, but
  `isinstance(cube, collections.Iterable)` now behaves as expected.
1 change: 1 addition & 0 deletions docs/iris/src/whatsnew/index.rst
@@ -11,6 +11,7 @@ Iris versions.

    latest.rst
    3.0.rst
+   2.4.rst
    2.3.rst
    2.2.rst
    2.1.rst
5 changes: 3 additions & 2 deletions lib/iris/cube.py
@@ -2622,8 +2622,9 @@ def _repr_html_(self):
         representer = CubeRepresentation(self)
         return representer.repr_html()
 
-    def __iter__(self):
-        raise TypeError("Cube is not iterable")
+    # Indicate that the iter option is not available. Python will raise
+    # TypeError with a useful message if a Cube is iterated over.
+    __iter__ = None
 
     def __getitem__(self, keys):
         """
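The `__iter__ = None` idiom in this hunk is standard Python, and can be demonstrated with a minimal stand-in class (`FakeCube` is hypothetical, not the real `iris.cube.Cube`):

```python
import collections.abc

class FakeCube:
    # Setting __iter__ to None marks the type as explicitly non-iterable:
    # iter() raises TypeError with a clear message, and the Iterable ABC's
    # subclass hook now reports False rather than True.
    __iter__ = None

cube = FakeCube()
print(isinstance(cube, collections.abc.Iterable))  # False
try:
    iter(cube)
except TypeError:
    print("Cube-like object is not iterable")
```

With the old `def __iter__(self): raise TypeError(...)` definition, the method's mere existence made `isinstance(cube, collections.abc.Iterable)` return `True`, which is the inconsistency this change fixes.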
34 changes: 20 additions & 14 deletions lib/iris/fileformats/pp.py
@@ -38,7 +38,7 @@
 )
 import iris.fileformats.rules
 import iris.coord_systems
-
+from iris.util import _array_slice_ifempty
 
 try:
     import mo_pack
@@ -594,19 +594,25 @@ def ndim(self):
         return len(self.shape)
 
     def __getitem__(self, keys):
-        with open(self.path, "rb") as pp_file:
-            pp_file.seek(self.offset, os.SEEK_SET)
-            data_bytes = pp_file.read(self.data_len)
-        data = _data_bytes_to_shaped_array(
-            data_bytes,
-            self.lbpack,
-            self.boundary_packing,
-            self.shape,
-            self.src_dtype,
-            self.mdi,
-        )
-        data = data.__getitem__(keys)
-        return np.asanyarray(data, dtype=self.dtype)
+        # Check for 'empty' slicings, in which case don't fetch the data.
+        # Because, since Dask v2, 'dask.array.from_array' performs an empty
+        # slicing and we must not fetch the data at that time.
+        result = _array_slice_ifempty(keys, self.shape, self.dtype)
+        if result is None:
+            with open(self.path, "rb") as pp_file:
+                pp_file.seek(self.offset, os.SEEK_SET)
+                data_bytes = pp_file.read(self.data_len)
+            data = _data_bytes_to_shaped_array(
+                data_bytes,
+                self.lbpack,
+                self.boundary_packing,
+                self.shape,
+                self.src_dtype,
+                self.mdi,
+            )
+            result = data.__getitem__(keys)
+
+        return np.asanyarray(result, dtype=self.dtype)
 
     def __repr__(self):
         fmt = (
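The guard added in this hunk avoids opening the file at all for 'empty' slicings. A self-contained sketch of the same idea follows; the helper name and the broadcast-probe technique are illustrative, not the actual `iris.util._array_slice_ifempty` code:

```python
import numpy as np

# Sketch of the empty-slicing check: if the requested keys select zero
# elements, return an empty array of the right shape/dtype without
# touching the (expensive) data source; otherwise return None so the
# caller performs the real read.
def slice_if_empty(keys, shape, dtype):
    # Probe the indexing against a zero-copy structural stand-in.
    probe = np.broadcast_to(np.empty((), dtype=dtype), shape)
    result = probe[keys]
    if result.size == 0:
        return np.empty(result.shape, dtype=dtype)
    return None

print(slice_if_empty((slice(0, 0),), (10, 5), np.float32).shape)  # (0, 5)
print(slice_if_empty((slice(0, 2),), (10, 5), np.float32))        # None
```

This matters because, as the new comment explains, Dask >= 2.0 performs an empty slicing inside `dask.array.from_array`, so fetching data at that point would defeat lazy loading.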
110 changes: 0 additions & 110 deletions lib/iris/tests/__init__.py
@@ -1121,116 +1121,6 @@ class GraphicsTest_nometa(GraphicsTestMixin, IrisTest_nometa):
pass


class TestGribMessage(IrisTest):
    def assertGribMessageContents(self, filename, contents):
        """
        Evaluate whether all messages in a GRIB2 file contain the provided
        contents.

        * filename (string)
            The path on disk of an existing GRIB file
        * contents
            An iterable of GRIB message keys and expected values.

        """
        messages = GribMessage.messages_from_filename(filename)
        for message in messages:
            for element in contents:
                section, key, val = element
                self.assertEqual(message.sections[section][key], val)

    def assertGribMessageDifference(
        self, filename1, filename2, diffs, skip_keys=(), skip_sections=()
    ):
        """
        Evaluate that the two messages only differ in the ways specified.

        * filename[0|1] (string)
            The path on disk of existing GRIB files
        * diffs
            A dictionary of GRIB message keys and expected diff values:
            {key: (m1val, m2val),...} .
        * skip_keys
            An iterable of key names to ignore during comparison.
        * skip_sections
            An iterable of section numbers to ignore during comparison.

        """
        messages1 = list(GribMessage.messages_from_filename(filename1))
        messages2 = list(GribMessage.messages_from_filename(filename2))
        self.assertEqual(len(messages1), len(messages2))
        for m1, m2 in zip(messages1, messages2):
            m1_sect = set(m1.sections.keys())
            m2_sect = set(m2.sections.keys())

            for missing_section in m1_sect ^ m2_sect:
                what = (
                    "introduced" if missing_section in m1_sect else "removed"
                )
                # Assert that an introduced section is in the diffs.
                self.assertIn(
                    missing_section,
                    skip_sections,
                    msg="Section {} {}".format(missing_section, what),
                )

            for section in m1_sect & m2_sect:
                # For each section, check that the differences are
                # known diffs.
                m1_keys = set(m1.sections[section]._keys)
                m2_keys = set(m2.sections[section]._keys)

                difference = m1_keys ^ m2_keys
                unexpected_differences = difference - set(skip_keys)
                if unexpected_differences:
                    self.fail(
                        "There were keys in section {} which \n"
                        "weren't in both messages and which weren't "
                        "skipped.\n{}"
                        "".format(section, ", ".join(unexpected_differences))
                    )

                keys_to_compare = m1_keys & m2_keys - set(skip_keys)

                for key in keys_to_compare:
                    m1_value = m1.sections[section][key]
                    m2_value = m2.sections[section][key]
                    msg = "{} {} != {}"
                    if key not in diffs:
                        # We have a key which we expect to be the same for
                        # both messages.
                        if isinstance(m1_value, np.ndarray):
                            # A large tolerance appears to be required for
                            # gribapi 1.12, but not for 1.14.
                            self.assertArrayAlmostEqual(
                                m1_value, m2_value, decimal=2
                            )
                        else:
                            self.assertEqual(
                                m1_value,
                                m2_value,
                                msg=msg.format(key, m1_value, m2_value),
                            )
                    else:
                        # We have a key which we expect to be different
                        # for each message.
                        self.assertEqual(
                            m1_value,
                            diffs[key][0],
                            msg=msg.format(key, m1_value, diffs[key][0]),
                        )

                        self.assertEqual(
                            m2_value,
                            diffs[key][1],
                            msg=msg.format(key, m2_value, diffs[key][1]),
                        )


def skip_data(fn):
    """
    Decorator to choose whether to run tests, based on the availability of
@@ -108,8 +108,9 @@ def _resampled_coord(coord, samplefactor):
     delta = 0.00001 * np.sign(upper - lower) * abs(bounds[0, 1] - bounds[0, 0])
     lower = lower + delta
     upper = upper - delta
+    samples = int(len(bounds) * samplefactor)
     new_points, step = np.linspace(
-        lower, upper, len(bounds) * samplefactor, endpoint=False, retstep=True
+        lower, upper, samples, endpoint=False, retstep=True
     )
     new_points += step * 0.5
     new_coord = coord.copy(points=new_points)
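The fix above addresses NumPy 1.18's deprecation of non-integer `num` arguments to `np.linspace`. The same pattern in isolation, with illustrative numbers standing in for `len(bounds)` and `samplefactor`:

```python
import numpy as np

# Since NumPy 1.18, np.linspace rejects a non-integer `num`, so the
# sample count must be cast to int explicitly before the call.
samples = int(8 * 1.5)  # e.g. len(bounds) * samplefactor

new_points, step = np.linspace(0.0, 12.0, samples, endpoint=False, retstep=True)
new_points += step * 0.5  # shift points to the interval mid-points

print(samples)        # 12
print(step)           # 1.0
print(new_points[0])  # 0.5
```

Before the fix, a float `samplefactor` made `len(bounds) * samplefactor` a float, which newer NumPy raises on rather than silently truncating.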