Merge branch 'default_units' of https://github.com/SciTools/iris into default_units_patch

* 'default_units' of https://github.com/SciTools/iris:
  Unify saving behaviour of "unknown" and "no_unit" (SciTools#3711)
  Change default loading unit from "1" to "unknown" (correct branch) (SciTools#3709)
  Change default units to "unknown" for all DimensionalMetadata (SciTools#3713)
  Update docs CubeList.extract method (SciTools#3694)
  Correct and improve dev-guide section on fixing graphics-tests. (SciTools#3683)
  New image hashes for mpl 3x2 (SciTools#3682)
  Switched use of datetime.weekday() to datetime.dayofwk. (SciTools#3687)
  Remove TestGribMessage (SciTools#3672)
  Removed iris.tests.integration.test_grib_load and related CML files. (SciTools#3670)
  Removed grib-specific test to iris-grib. (SciTools#3671)
  Fixed asv project name to 'scitools-iris'. (SciTools#3660)
  Remove cube iter (SciTools#3656)
  Remove test_grib_save.py (SciTools#3669)
  Remove test_grib2 integration tests (SciTools#3664)
  Remove uri callback test which is moved to iris-grib (SciTools#3665)
  2v4 mergeback picks (SciTools#3668)
  Remove test_grib_save_rules.py which has been moved to iris-grib (SciTools#3666)
  Removed ununused skipIf. (SciTools#3632)
  Remove grib-specific test. (SciTools#3663)
  Remove obsolete test. (SciTools#3662)
stephenworsley committed Jun 8, 2020
2 parents 4e70583 + 912f500 commit 4f267ad
Showing 120 changed files with 712 additions and 3,491 deletions.
2 changes: 1 addition & 1 deletion asv.conf.json
@@ -2,7 +2,7 @@
// details on what can be included in this file.
{
"version": 1,
"project": "iris",
"project": "scitools-iris",
"project_url": "https://github.com/SciTools/iris",
"repo": ".",
"environment_type": "conda",
2 changes: 1 addition & 1 deletion docs/iris/example_code/Meteorology/lagged_ensemble.py
@@ -40,7 +40,7 @@ def realization_metadata(cube, field, fname):
import iris.coords

realization_coord = iris.coords.AuxCoord(
- np.int32(realization_number), "realization"
+ np.int32(realization_number), "realization", units="1"
)
cube.add_aux_coord(realization_coord)

173 changes: 102 additions & 71 deletions docs/iris/src/developers_guide/graphics_tests.rst
@@ -10,9 +10,10 @@ For this, a basic 'graphics test' assertion operation is provided in the method
match against a stored reference.
A "graphics test" is any test which employs this.

At present (Iris version 1.10), such tests include the testing for modules
`iris.tests.test_plot` and `iris.tests.test_quickplot`, and also some other
'legacy' style tests (as described in :ref:`developer_tests`).
At present, such tests include the testing for modules `iris.tests.test_plot`
and `iris.tests.test_quickplot`, all output plots from the gallery examples
(contained in `docs/iris/example_tests`), and a few other 'legacy' style tests
(as described in :ref:`developer_tests`).
It is conceivable that new 'graphics tests' of this sort can still be added.
However, as graphics tests are inherently "integration" style rather than true
unit tests, results can differ with the installed versions of dependent
@@ -38,80 +39,110 @@ Testing actual plot results introduces some significant difficulties :
Graphics Testing Strategy
=========================

Prior to Iris 1.10, all graphics tests compared against a stored reference
image with a small tolerance on pixel values.
In the Iris Travis matrix, and over time, graphics tests must run with
multiple versions of Python, and of key dependencies such as matplotlib.
To make this manageable, the "check_graphic" test routine tests against
multiple alternative 'acceptable' results. It does this using an image "hash"
comparison technique, which avoids storing reference images in the Iris
repository itself and so keeps the repository small.

From Iris v1.11 onward, we want to support testing Iris against multiple
versions of matplotlib (and some other dependencies).
To make this manageable, we have now rewritten "check_graphic" to allow
multiple alternative 'correct' results without including many more images in
the Iris repository.
This consists of :

* using a perceptual 'image hash' of the outputs (see
https://github.com/JohannesBuchner/imagehash) as the basis for checking
* The 'check_graphic' function uses a perceptual 'image hash' of the outputs
(see https://github.com/JohannesBuchner/imagehash) as the basis for checking
test results.
* storing the hashes of 'known accepted results' for each test in a
database in the repo (which is actually stored in
``lib/iris/tests/results/imagerepo.json``).
* storing associated reference images for each hash value in a separate public
repository, currently in https://github.com/SciTools/test-images-scitools ,
allowing human-eye judgement of 'valid equivalent' results.
* a new version of the 'iris/tests/idiff.py' assists in comparing proposed
new 'correct' result images with the existing accepted ones.

BRIEF...
There should be sufficient work-flow detail here to allow an iris developer to:

* understand the new check graphic test process
* understand the steps to take and tools to use to add a new graphic test
* understand the steps to take and tools to use to diagnose and fix a graphics test failure


Basic workflow
==============

If you notice that a graphics test in the Iris testing suite has failed
following changes in Iris or any of its dependencies, this is the process
you now need to follow:

#. Create a directory in iris/lib/iris/tests called 'result_image_comparison'.
#. From your Iris root directory, run the tests by using the command:
``python setup.py test``.
#. Navigate to iris/lib/iris/tests and run the command: ``python idiff.py``.
This will open a window for you to visually inspect the changes to the
graphic and then either accept or reject the new result.
#. Upon acceptance of a change or a new image, a copy of the output PNG file
is added to the reference image repository in
https://github.com/SciTools/test-images-scitools. The file is named
according to the image hash value, as ``<hash>.png``.
#. The hash value of the new result is added into the relevant set of 'valid
result hashes' in the image result database file,
``tests/results/imagerepo.json``.
#. The tests must now be re-run, and the 'new' result should be accepted.
Occasionally there are several graphics checks in a single test, only the
first of which will be run should it fail. If this is the case, then you
may well encounter further graphical test failures in your next runs, and
you must repeat the process until all the graphical tests pass.
#. To add your changes to Iris, you need to make two pull requests. The first
should be made to the test-images-scitools repository, and this should
contain all the newly-generated png files copied into the folder named
'image_files'.
#. The second pull request should be created in the Iris repository, and should
only include the change to the image results database
(``tests/results/imagerepo.json``) :
This pull request must contain a reference to the matching one in
test-images-scitools.
* The hashes of known 'acceptable' results for each test are stored in a
lookup dictionary, saved to the repo file
``lib/iris/tests/results/imagerepo.json`` .
* An actual reference image for each hash value is stored in a *separate*
public repository : https://github.com/SciTools/test-iris-imagehash .
* The reference images allow human-eye assessment of whether a new output is
judged to be 'close enough' to the older ones, or not.
* The utility script ``iris/tests/idiff.py`` automates checking, enabling the
developer to easily compare proposed new 'acceptable' result images against the
existing accepted reference images, for each failing test.
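The hash-based comparison the bullets above describe can be illustrated with a small, self-contained sketch. This is *not* Iris's actual test code (Iris uses perceptual hashes from the imagehash library); it only demonstrates the underlying idea of comparing images by hash bit-distance instead of exact pixel equality.

```python
# Illustrative sketch of "average hashing" (aHash), a simple member of the
# perceptual-hash family that libraries like imagehash implement. Each bit
# of the hash records whether a pixel is brighter than the image mean, so
# small rendering differences flip only a few bits.

def average_hash(pixels):
    """Hash an 8x8 grayscale image (8 rows of 8 ints, 0-255) to 64 bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(hash_a, hash_b):
    """Number of differing bits: 0 means identical, small means 'close'."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

# A plot-like image: mostly dark, with one bright row standing in for a line.
dark = [[10] * 8 for _ in range(8)]
plot_a = [row[:] for row in dark]
plot_a[3] = [200] * 8
# The "same" plot rendered slightly differently: the line shifted one pixel.
plot_b = [row[:] for row in dark]
plot_b[4] = [200] * 8

print(hamming_distance(average_hash(plot_a), average_hash(plot_b)))  # 16
```

Only 16 of the 64 bits differ, so a thresholded hash comparison can judge the two renderings "acceptably similar" even though a pixel-for-pixel comparison would fail.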

Note: the Iris pull-request will not test out successfully in Travis until the
test-images-scitools pull request has been merged : This is because there is
an Iris test which ensures the existence of the reference images (uris) for all
the targets in the image results database.

How to Add New 'Acceptable' Result Images to Existing Tests
===========================================================

When you find that a graphics test in the Iris testing suite has failed,
following changes in Iris or the run dependencies, this is the process
you should follow:

#. Create a new, empty directory to store temporary image results, at the path
``lib/iris/tests/result_image_comparison`` in your Iris repository checkout.

#. **In your Iris repo root directory**, run the relevant (failing) tests
directly as python scripts, or by using a command such as
``python -m unittest discover paths/to/test/files``.

#. **In the** ``iris/lib/iris/tests`` **folder**, run the command: ``python idiff.py``.
This will open a window for you to visually inspect side-by-side 'old', 'new'
and 'difference' images for each failed graphics test.
Hit a button to either "accept", "reject" or "skip" each new result ...

* If the change is *"accepted"* :

* the imagehash value of the new result image is added into the relevant
set of 'valid result hashes' in the image result database file,
``tests/results/imagerepo.json`` ;

* the relevant output file in ``tests/result_image_comparison`` is
renamed according to the image hash value, as ``<hash>.png``.
A copy of this new PNG file must then be added into the reference image
repository at https://github.com/SciTools/test-iris-imagehash.
(See below).

* If a change is *"skipped"* :

* no further changes are made in the repo.

* when you run idiff again, the skipped choice will be presented again.

Fixing a failing graphics test
==============================
* If a change is *"rejected"* :

* the output image is deleted from ``result_image_comparison``.

Adding a new graphics test
==========================
  * when you run idiff again, the rejected result will not appear, unless
    and until the relevant failing test is re-run.

#. Now re-run the tests. The 'new' result should now be recognised and the
relevant test should pass. However, some tests can perform *multiple* graphics
checks within a single testcase function : In those cases, any failing
check will prevent the following ones from being run, so a test re-run may
encounter further (new) graphical test failures. If that happens, simply
repeat the check-and-accept process until all tests pass.

#. To add your changes to Iris, you need to make two pull requests :

* (1) The first PR is made in the test-iris-imagehash repository, at
https://github.com/SciTools/test-iris-imagehash.

* First, add all the newly-generated referenced PNG files into the
``images/v4`` directory. In your Iris repo, these files are to be found
in the temporary results folder ``iris/tests/result_image_comparison``.

.. Note::

The ``result_image_comparison`` folder is covered by a project
``.gitignore`` setting, so those files *will not show up* in a
``git status`` check.

* Then, run ``python recreate_v4_files_listing.py``, to update the file
which lists available images, ``v4_files_listing.txt``.

* Create a PR proposing these changes, in the usual way.

* (2) The second PR is created in the Iris repository, and
should only include the change to the image results database,
``tests/results/imagerepo.json`` :
The description box of this pull request should contain a reference to
the matching one in test-iris-imagehash.

Note: the Iris pull-request will not test out successfully in Travis until the
test-iris-imagehash pull request has been merged : This is because there is
an Iris test which ensures the existence of the reference images (uris) for all
the targets in the image results database. N.B. likewise, it will *also* fail
if you forgot to run ``recreate_v4_files_listing.py`` to update the image-listing
file in test-iris-imagehash.
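The accept/reject bookkeeping described above boils down to maintaining a lookup of known-good hashes per test. The exact schema of ``imagerepo.json`` is not shown on this page, so the ``{test_name: [hash, ...]}`` layout below is an assumption, used purely to illustrate the workflow.

```python
# Hypothetical sketch of querying and extending a hash database like
# imagerepo.json. The {test_name: [hash, ...]} schema is an assumption,
# not the file's documented format.
import json

def is_accepted(repo, test_name, result_hash):
    """True if result_hash is among the known-good hashes for test_name."""
    return result_hash in repo.get(test_name, [])

def accept(repo, test_name, result_hash):
    """Record a newly approved hash (what 'accepting' in idiff.py amounts to)."""
    repo.setdefault(test_name, [])
    if result_hash not in repo[test_name]:
        repo[test_name].append(result_hash)

repo = json.loads('{"iris.tests.test_plot.TestContourf.test_simple": ["fa1d"]}')
print(is_accepted(repo, "iris.tests.test_plot.TestContourf.test_simple", "fa1d"))  # True
accept(repo, "iris.tests.test_plot.TestContourf.test_simple", "b07e")
print(len(repo["iris.tests.test_plot.TestContourf.test_simple"]))  # 2
```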
13 changes: 1 addition & 12 deletions docs/iris/src/developers_guide/tests.rst
@@ -139,16 +139,5 @@ This is the only way of testing the modules :mod:`iris.plot` and
:mod:`iris.quickplot`, but is also used for some other legacy and integration-
style testcases.

Prior to Iris version 1.10, a single reference image for each testcase was
stored in the main Iris repository, and a 'tolerant' comparison was performed
against this.

From version 1.11 onwards, graphics testcase outputs are compared against
possibly *multiple* known-good images, of which only the signature is stored.
This uses a sophisticated perceptual "image hashing" scheme (see:
<https://github.com/JohannesBuchner/imagehash>).
Only imagehash signatures are stored in the Iris repo itself, thus freeing up
valuable space. Meanwhile, the actual reference *images* -- which are required
for human-eyes evaluation of proposed new "good results" -- are all stored
elsewhere in a separate public repository.
There are specific mechanisms for handling this.
See :ref:`developer_graphics_tests`.
2 changes: 1 addition & 1 deletion docs/iris/src/userguide/navigating_a_cube.rst
@@ -229,7 +229,7 @@ by field basis *before* they are automatically merged together:
# Add our own realization coordinate if it doesn't already exist.
if not cube.coords('realization'):
realization = np.int32(filename[-6:-3])
- ensemble_coord = icoords.AuxCoord(realization, standard_name='realization')
+ ensemble_coord = icoords.AuxCoord(realization, standard_name='realization', units="1")
cube.add_aux_coord(ensemble_coord)

filename = iris.sample_data_path('GloSea4', '*.pp')
3 changes: 3 additions & 0 deletions docs/iris/src/userguide/subsetting_a_cube.rst
@@ -103,6 +103,9 @@ same way as loading with constraints:

Cube iteration
^^^^^^^^^^^^^^^
It is not possible to directly iterate over an Iris cube. That is, you cannot use code such as
``for x in cube:``. However, you can iterate over cube slices, as this section details.

A useful way of dealing with a Cube in its **entirety** is by iterating over its layers or slices.
For example, to deal with a 3 dimensional cube (z,y,x) you could iterate over all 2 dimensional slices in y and x
which make up the full 3d cube.::
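The Iris API itself is not runnable here, but slice-wise iteration can be sketched with a plain-Python analogy: a 3-D (z, y, x) volume is visited as a sequence of 2-D (y, x) layers, much as iterating ``cube.slices(...)`` visits 2-D sub-cubes.

```python
# Plain-Python analogy of slice-wise iteration (not the Iris API itself):
# yield each 2-D (y, x) layer of a nested-list (z, y, x) volume.

def slices_2d(volume):
    """Yield each (y, x) layer of a nested-list (z, y, x) volume."""
    for layer in volume:
        yield layer

# A 2x3x4 volume: 2 layers, each 3 rows of 4 values, encoded as z*100+y*10+x.
volume = [[[z * 100 + y * 10 + x for x in range(4)] for y in range(3)]
          for z in range(2)]
layers = list(slices_2d(volume))
print(len(layers))      # 2 layers
print(layers[1][0][0])  # 100: the first value of the second layer
```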
59 changes: 59 additions & 0 deletions docs/iris/src/whatsnew/2.4.rst
@@ -0,0 +1,59 @@
What's New in Iris 2.4.0
************************

:Release: 2.4.0
:Date: 2020-02-20

This document explains the new/changed features of Iris in version 2.4.0
(:doc:`View all changes <index>`.)


Iris 2.4.0 Features
===================

.. admonition:: Last python 2 version of Iris

Iris 2.4 is a final extra release of Iris 2, which back-ports specific desired features from
Iris 3 (not yet released).

The purpose of this is both to support early adoption of certain newer features,
and to provide a final release for Python 2.

The next release of Iris will be version 3.0 : a major-version release which
introduces breaking API and behavioural changes, and only supports Python 3.

* :class:`iris.coord_systems.Geostationary` can now accept creation arguments of
`false_easting=None` or `false_northing=None`, equivalent to values of 0.
Previously these kwargs could be omitted, but could not be set to `None`.
This also enables loading of netcdf data on a Geostationary grid, where either of these
keys is not present as a grid-mapping variable property : Previously, loading any
such data caused an exception.
* The area weights used when performing area weighted regridding with :class:`iris.analysis.AreaWeighted`
are now cached.
This allows a significant speedup when regridding multiple similar cubes, by repeatedly using
a `'regridder' object <../iris/iris/analysis.html?highlight=regridder#iris.analysis.AreaWeighted.regridder>`_
which you created first.
* Name constraint matching against cubes during loading or extracting has been relaxed from strictly matching
against the :meth:`~iris.cube.Cube.name`, to matching against either the
``standard_name``, ``long_name``, NetCDF ``var_name``, or ``STASH`` attributes metadata of a cube.
* Cubes and coordinates now have a new ``names`` property that contains a tuple of the
``standard_name``, ``long_name``, NetCDF ``var_name``, and ``STASH`` attributes metadata.
* The :class:`~iris.NameConstraint` provides richer name constraint matching when loading or extracting
against cubes, by supporting a constraint against any combination of
``standard_name``, ``long_name``, NetCDF ``var_name`` and ``STASH``
from the attributes dictionary of a :class:`~iris.cube.Cube`.
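The relaxed matching described in the bullets above can be sketched in a few lines. This is a hypothetical helper over a dict stand-in, not Iris's implementation; it only shows the idea of checking a name against any of the ``standard_name``, ``long_name``, ``var_name`` or ``STASH`` metadata.

```python
# Sketch of the *idea* behind relaxed name matching: a candidate name is
# checked against any of standard_name, long_name, var_name, or a STASH
# attribute. Hypothetical helper over a dict stand-in, not Iris's code.

def name_matches(cube_like, name):
    """True if `name` matches any of the cube's name-like metadata fields."""
    candidates = (
        cube_like.get("standard_name"),
        cube_like.get("long_name"),
        cube_like.get("var_name"),
        cube_like.get("attributes", {}).get("STASH"),
    )
    return name in [c for c in candidates if c is not None]

cube_like = {
    "standard_name": "air_temperature",
    "long_name": "screen temperature",
    "var_name": "tas",
    "attributes": {"STASH": "m01s03i236"},
}
print(name_matches(cube_like, "tas"))         # True: matches var_name
print(name_matches(cube_like, "m01s03i236"))  # True: matches STASH
print(name_matches(cube_like, "pressure"))    # False
```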


Iris 2.4.0 Dependency Updates
=============================
* Iris is now able to use the latest version of matplotlib.


Bugs Fixed
==========
* Fixed a problem which was causing file loads to fetch *all* field data
whenever UM files (PP or Fieldsfiles) were loaded.
With large sourcefiles, initial file loads are slow, with large memory usage
before any cube data is even fetched. Large enough files will cause a crash.
The problem occurs only with Dask versions >= 2.0.

@@ -0,0 +1,3 @@
* The `__iter__()` method in :class:`iris.cube.Cube` was set to `None`.
`TypeError` is still raised if a `Cube` is iterated over but
`isinstance(cube, collections.Iterable)` now behaves as expected.
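The mechanism this note describes can be demonstrated with a minimal stand-in class (not ``iris.cube.Cube`` itself): assigning ``__iter__ = None`` makes ``isinstance(obj, Iterable)`` return ``False``, while attempting iteration still raises ``TypeError``.

```python
# Minimal demonstration of the mechanism described above. Setting
# __iter__ = None opts the class out of the iteration protocol: the
# Iterable ABC sees the None override, and iter() raises TypeError
# without falling back to __getitem__. (A stand-in, not iris.cube.Cube.)
from collections.abc import Iterable

class FakeCube:
    __iter__ = None  # explicitly opt out of iteration

    def __getitem__(self, index):  # indexing still works
        return index

cube = FakeCube()
print(isinstance(cube, Iterable))  # False
print(cube[2])                     # 2: indexing is unaffected
try:
    iter(cube)
except TypeError:
    print("TypeError")             # iterating is still an error
```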
@@ -0,0 +1 @@
* Updated the documentation for the :meth:`iris.cube.CubeList.extract` method, to specify when a single cube, as opposed to a CubeList, may be returned.
@@ -0,0 +1 @@
* When loading data from netcdf-CF files, where a variable has no "units" property, the corresponding Iris object will have "units='unknown'". Prior to Iris 3.0, these cases defaulted to "units='1'".
1 change: 1 addition & 0 deletions docs/iris/src/whatsnew/index.rst
@@ -11,6 +11,7 @@ Iris versions.

latest.rst
3.0.rst
2.4.rst
2.3.rst
2.2.rst
2.1.rst
4 changes: 3 additions & 1 deletion lib/iris/analysis/__init__.py
@@ -802,7 +802,9 @@ def post_process(self, collapsed_cube, data_result, coords, **kwargs):
# order cube.
for point in points:
cube = collapsed_cube.copy()
- coord = iris.coords.AuxCoord(point, long_name=coord_name)
+ coord = iris.coords.AuxCoord(
+     point, long_name=coord_name, units="percent"
+ )
cube.add_aux_coord(coord)
cubes.append(cube)

6 changes: 3 additions & 3 deletions lib/iris/coord_categorisation.py
@@ -182,7 +182,7 @@ def add_day_of_year(cube, coord, name="day_of_year"):
def add_weekday_number(cube, coord, name="weekday_number"):
"""Add a categorical weekday coordinate, values 0..6 [0=Monday]."""
add_categorised_coord(
- cube, name, coord, lambda coord, x: _pt_date(coord, x).weekday()
+ cube, name, coord, lambda coord, x: _pt_date(coord, x).dayofwk
)


@@ -192,7 +192,7 @@ def add_weekday_fullname(cube, coord, name="weekday_fullname"):
cube,
name,
coord,
- lambda coord, x: calendar.day_name[_pt_date(coord, x).weekday()],
+ lambda coord, x: calendar.day_name[_pt_date(coord, x).dayofwk],
units="no_unit",
)

@@ -203,7 +203,7 @@ def add_weekday(cube, coord, name="weekday"):
cube,
name,
coord,
- lambda coord, x: calendar.day_abbr[_pt_date(coord, x).weekday()],
+ lambda coord, x: calendar.day_abbr[_pt_date(coord, x).dayofwk],
units="no_unit",
)
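The hunks above swap ``datetime.weekday()`` for the ``.dayofwk`` attribute of cftime-style datetimes. Both follow the same numbering convention, which this stdlib-only sketch demonstrates (cftime itself is not shown here).

```python
# Stdlib-only demonstration of the weekday numbering convention used by
# the categorisation functions above: 0..6 with 0 = Monday. cftime's
# .dayofwk attribute is not shown; only the shared convention is.
import calendar
import datetime

d = datetime.date(2020, 6, 8)          # the date this merge was committed
print(d.weekday())                      # 0: Monday is weekday number 0
print(calendar.day_name[d.weekday()])   # Monday
print(calendar.day_abbr[d.weekday()])   # Mon
```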

