When indexing (with either `sel` or `isel`) over (lat, lon) GRIB files loaded with `open_mfdataset` (and therefore containing chunked data), cfgrib attempts to load all chunks into memory. This causes excessive RAM consumption and slow performance.

From the discussion we had, the hypothesis is that cfgrib needs to scan the entire file in order to subset along only a few dimensions.

Still, it should be possible to avoid loading the entire dataset into memory when performing the operation.
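A minimal reproduction sketch, assuming ERA5-style GRIB files (the file pattern, chunking, and lat/lon ranges below are illustrative, not from the original report):

```python
import xarray as xr

# Open several GRIB files lazily with the cfgrib engine; the result
# is dask-backed (chunked), not loaded into memory yet.
ds = xr.open_mfdataset(
    "era5_*.grib",          # hypothetical file pattern
    engine="cfgrib",
    combine="by_coords",
    chunks={"time": 1},
)

# Select a small spatial box. Only a few chunks are actually needed,
# but computing the result makes cfgrib read every message in every file.
subset = ds.sel(latitude=slice(50, 45), longitude=slice(5, 10))
subset.load()  # RAM usage grows towards the size of the full dataset
```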
I'm interested in this too. I am trying to extract a small subset from an ERA5-Land file but, independently of the chunk size, xarray/dask tries to read the entire file into memory.
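A sketch of that access pattern, assuming a single ERA5-Land GRIB file (the path and chunk sizes are placeholders); the full-file read happens regardless of the chunking chosen here:

```python
import xarray as xr

# Explicit chunking at open time makes no observable difference:
# cfgrib still scans the whole file when the subset is computed.
ds = xr.open_dataset(
    "era5_land.grib",       # hypothetical path
    engine="cfgrib",
    chunks={"latitude": 100, "longitude": 100},
)
point = ds.isel(latitude=0, longitude=0).load()
```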
If I understand the problem correctly, this issue is partly because ecCodes can only read the whole message (field) from disk, even if you only want some meta-data. We have plans to improve that situation, but there is no firm time-frame for it yet. When we do, cfgrib should benefit enormously from it.
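A sketch of this behaviour using the eccodes Python bindings (the path is a placeholder): each iteration pulls a complete message off disk even though only two metadata keys are queried.

```python
import eccodes

with open("era5_land.grib", "rb") as f:   # hypothetical path
    while True:
        # Reads one *entire* GRIB message (headers plus packed field
        # values) into memory, even if we only want metadata from it.
        gid = eccodes.codes_grib_new_from_file(f)
        if gid is None:      # end of file
            break
        print(eccodes.codes_get(gid, "shortName"),
              eccodes.codes_get(gid, "step"))
        eccodes.codes_release(gid)        # free the decoded message
```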
Related to dask/dask#9451 (and probably to fsspec/kerchunk#198).