Commit 679dd4d: Merge pull request #603 from ekatef/fix_docs_clean
Update documentation with the cleaned history
davide-f authored Mar 16, 2023
2 parents 85d010f + 1df3670
Showing 3 changed files with 46 additions and 15 deletions.
2 changes: 1 addition & 1 deletion doc/data_workflow.rst
These data are used in the `build_renewable_profiles` rule. `GEBCO <https://www.

* **hydrobasins** datasets on watershed boundaries and basins, as available from HydroBASINS. These data are used to estimate the hydropower generation in the `build_renewable_profiles` rule.

* **landcover** describes the shapes of world protected areas, which are needed to identify areas where no (renewable) assets can be installed. The `landcover` dataset was used to generate a `natura.tiff` raster. Nowadays the pre-compiled `natura.tiff` raster has global coverage, so there is no need to re-calculate it locally to be able to run the modeling workflow.

Economical
------------------------------------
10 changes: 5 additions & 5 deletions doc/short_tutorial.rst
To do that, you may want to make a backup copy of your current configuration file:
.. code:: bash

    .../pypsa-earth (pypsa-earth) % cp config.tutorial.yaml config.yaml
In the configuration file `config.yaml` there is a flag `retrieve_databundle` which triggers data loading and a `tutorial` flag which determines that the loaded data belong to the tutorial kit. Currently the tutorial can be run only for Nigeria ("NG"), Benin ("BJ"), Botswana ("BW") and Morocco ("MA").
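A minimal sketch of the relevant part of `config.yaml` is shown below. Only the flag names `tutorial` and `retrieve_databundle` come from the text above; the nesting of `retrieve_databundle` under an `enable` section is an assumption, so check the layout of your own `config.yaml`:

.. code:: yaml

    tutorial: true        # load the reduced tutorial data kit
    enable:
      retrieve_databundle: true   # trigger data loading on the next run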

Just open in our `notebooks repository <https://github.com/pypsa-meets-earth/doc
the file `sample-network-analysis.ipynb`. For further inspiration on what you can analyse and do with PyPSA,
you can explore the `examples section in the PyPSA framework documentation <https://pypsa.readthedocs.io/en/latest/examples-basic.html>`_.

After playing with the tutorial model and before playing with different functions,
it's important to clean up the data in your model folder before proceeding further, to avoid data conflicts.
You may use the `clean` rule to do so:

.. code:: bash

    .../pypsa-earth (pypsa-earth) % snakemake -j 1 clean
Generally, it's a good idea to repeat the cleaning procedure every time the underlying data are changed, to avoid conflicts between run settings corresponding to different scenarios.

It is also possible to do a manual clean-up by removing the folders "resources", "networks" and "results". These folders store the intermediate output of the workflow; if you don't need them anymore, it is safe to delete them.
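A minimal sketch of such a manual clean-up, assuming the command is run from the pypsa-earth root folder (the folder names come from the paragraph above):

.. code:: bash

    # Remove the intermediate workflow outputs; only do this if you no longer need them.
    rm -rf resources networks results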

.. note::

49 changes: 40 additions & 9 deletions doc/tutorial.rst
Likewise, the example's temporal scope can be restricted (e.g. to 7 days):

.. code:: yaml

      end: "2013-03-7"
      inclusive: "left"  # end is not inclusive
Year-related parameters are also used when specifying `load_options`:

.. code:: yaml

    load_options:
      ssp: "ssp2-2.6"
      weather_year: 2013
      prediction_year: 2030
      scale: 1
The `weather_year` value corresponds to the weather data which was used to generate the electricity demand profiles for a selected area, while `prediction_year` corresponds to a point on the SSP trajectory. The available values for `weather_year` and `prediction_year` can be checked by looking into the `pypsa-earth/data/ssp2-2.6` folder. Currently, there are pre-calculated demand data for the 2011, 2013 and 2018 weather years and for the 2030, 2040, 2050 and 2100 scenario prediction years.

It is also possible to allow lower or higher carbon-dioxide emissions relative to the current ones.
For example, a net-zero target can be modeled by setting the `co2limit` to zero:
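As a sketch, the setting could look as follows; the parameter name `co2limit` comes from the text above, while its placement under the `electricity` section is an assumption about the configuration layout:

.. code:: yaml

    electricity:
      co2limit: 0  # tCO2/a; zero enforces a net-zero target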

It is advisable to adapt the required range of coordinates to the selection of countries.

.. code:: yaml

      dy: 0.3  # cutout resolution
      # The cutout time is automatically set by the snapshot range.
Please note that the temporal dimension of the cutout should be consistent with the values set for the `snapshots` parameter. The time range of the cutout is determined by the parameters set when building it, while the time resolution corresponds to that of the underlying climate archives. For the ERA5 dataset used in PyPSA-Earth by default, hourly resolution is implied.

It is also possible to decide which weather data source should be used to calculate potentials and capacity factor time-series for each carrier.
For example, we may want to use the ERA-5 dataset for solar and not the default SARAH-2 dataset.
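A sketch of such a per-carrier override is shown below; the layout of the `renewable` section is an assumption, and the cutout name is purely illustrative:

.. code:: yaml

    renewable:
      solar:
        cutout: cutout-2013-era5  # use an ERA5-based cutout instead of SARAH-2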

It could be helpful to keep in mind the following points:

The cutout is the main concept of climate data management in the PyPSA ecosystem, introduced in the `atlite <https://atlite.readthedocs.io/en/latest/>`_ package. The cutout is an archive containing a spatio-temporal subset of one or more topology and weather datasets. Since such datasets are typically global and span multiple decades, the Cutout class allows atlite to reduce the scope to a more manageable size. More details about the climate data processing concepts are contained in the `JOSS paper <https://joss.theoj.org/papers/10.21105/joss.03294>`_.

Generally, the spatial and time resolution of the cutout data is determined by the parameters of the underlying dataset: for ERA5, recommended for usage in PyPSA-Earth, that is a 30 km x 30 km grid and hourly resolution.

The pre-built cutout for Africa is available for the year 2013 and can be loaded directly from zenodo through the rule `retrieve_cutout`. There is also a smaller cutout for Africa built for a two-week time span; it is automatically downloaded when retrieving common data with `retrieve_databundle_light`.

.. note::
    Skip this recommendation if the region of your interest is within Africa and you are fine with the 2013 weather year.

In case you are interested in other parts of the world, you have to generate a cutout yourself using the `build_cutouts` rule. To run it, you will need to:

1. be registered on the `Copernicus Climate Data Store <https://cds.climate.copernicus.eu>`_;

These steps are required to use the CDS API, which allows automatic file downloads while executing the `build_cutouts` rule.
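The CDS API reads its credentials from a `~/.cdsapirc` file in your home folder. A minimal sketch of the standard setup is given below; the `url` value is the regular CDS endpoint, and the `key` placeholder must be replaced with the UID and API key shown on your own CDS profile page:

.. code:: yaml

    url: https://cds.climate.copernicus.eu/api/v2
    key: <your-UID>:<your-API-key>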

The `build_cutout` flag should be set to `true` to generate the cutout. After the cutout is ready, it's recommended to set `build_cutout` to `false` to avoid overwriting the existing cutout by accident. The `snapshots` values set when generating the cutout will determine its temporal parameters. The years which can be used to build a cutout depend on ERA5 data availability: the `ERA5 page <https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5>`_ explains that the data is available from 1950 and updated continuously with about a 3-month delay, while the data for 1950-1978 should be treated as preliminary, as it is a rather recent addition.

After the first run, if you don't change the country and don't need a wider time span than the one you created the cutout with, you may set both `retrieve_databundle` and `build_cutout` to `false`.
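For example, the two flags could then look as follows; their nesting under an `enable` section is an assumption, so check the exact layout in your `config.yaml`:

.. code:: yaml

    enable:
      retrieve_databundle: false  # data bundle already downloaded
      build_cutout: false         # keep the existing cutout untouched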

Spatial extent
^^^^^^^^^^^^^^

Normally the cutout extent is calculated from the shape of the requested region defined by the `countries` parameter in the configuration file `config.yaml`. It can make sense to make the countries list as large as feasible when generating a cutout; the considered area can be narrowed at any time when building a specific model by adjusting the `countries` list.

There is also an option to set the cutout extent by specifying `x` and `y` values directly. However, these values will overwrite the values extracted from the countries shape, which means that nothing prevents `build_cutout` from extracting data which has no relation to the requested countries. Please use the direct definition of `x` and `y` only if you fully understand what you are doing and why.
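As a sketch, a direct definition might look as follows; the cutout name and the coordinate ranges are purely illustrative, and the nesting under an `atlite`/`cutouts` section is an assumption about the configuration layout:

.. code:: yaml

    atlite:
      cutouts:
        cutout-2013-era5:
          x: [-12., 35.]  # longitude range in degrees
          y: [33., 72.]   # latitude range in degrees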

Temporal extent
^^^^^^^^^^^^^^^

If you create the cutout for a certain year (say, 2013) and want to run scenarios for a subset of this year, you don't need to rerun `build_cutout`, as the cutout still contains all the hours of 2013. The workflow will automatically subset the cutout archive to extract data for the particular timeframe of interest. If instead you want to run a 2014 scenario, then rerunning `build_cutout` is needed.

In case you need to model a number of years, a convenient approach may be to create the cutout for the whole period of interest (e.g. 2013-2015) so that you don't need to build any additional cutouts. Note, however, that the disk space requirements increase in this case.
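For instance, a 2013-2015 cutout could be requested by widening the snapshot range before building the cutout. A sketch, following the `snapshots` layout used in the tutorial example (with `inclusive: "left"`, the end date itself is excluded):

.. code:: yaml

    snapshots:
      start: "2013-01-01"
      end: "2016-01-01"
      inclusive: "left"  # covers 2013, 2014 and 2015 in full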

3. Check natura.tiff raster
-----------------------------

A raster file `natura.tiff` is used to store the shapes of protected and reserved nature areas. Such land-use restrictions can be taken into account when calculating the renewable potential with `build_renewable_profiles` by switching on the `natura` option:

.. code:: yaml

    natura: true
A pre-built `natura.tiff` is loaded along with other data needed to run a model by the `retrieve_databundle_light` rule. This raster file contains data on protected areas around the world where no (renewable) assets can be installed. The `natura.tiff` raster now has global coverage, so you don't need to create it locally.

How to validate?
================
The following validation notebooks are worth a look when validating your energy model:

1. A detailed `network validation <https://github.com/pypsa-meets-earth/documentation/blob/main/notebooks/validation/network_validation.ipynb>`_.

2. Analysis of `the installed capacity <https://github.com/pypsa-meets-earth/documentation/blob/main/notebooks/validation/capacity_validation.ipynb>`_ for the considered area.

3. Validation of `the power demand <https://github.com/pypsa-meets-earth/documentation/blob/main/notebooks/validation/demand_validation.ipynb>`_ values and profile.

