
Help about Netcdf Data #2839

Closed
markjinz opened this issue Jun 26, 2024 · 7 comments

@markjinz

markjinz commented Jun 26, 2024

Thank you for your help

@mraspaud
Member

Hi @markjinz

Something like this can work to generate a netcdf file:

    from satpy import Scene

    # filenames: the list of HRIT segment files to read
    scn = Scene(filenames=filenames,
                reader='seviri_l1b_hrit',
                )

    composites = ["VIS006", "VIS008", ...]
    scn.load(composites)
    rscn = scn.resample("eurol")
    rscn.save_datasets(writer="cf")

Or have you tried that already?

@markjinz
Author

markjinz commented Jul 1, 2024

Hello @mraspaud,
this is my code:

    from satpy import Scene
    import glob

    filenames = glob.glob('./raw02/H*202405281015*')
    scn = Scene(filenames=filenames, reader='seviri_l1b_hrit')
    scn.load(['IR_120', 'IR_108'])
    scn.save_datasets(writer='cf', datasets=['IR_120', 'IR_108'], filename='seviri_test.nc',
                      exclude_attrs=['raw_metadata'], groups={'ir': ['IR_120', 'IR_108']})

but I have this problem:

    /home/mark/anaconda3/envs/finland/lib/python3.10/site-packages/satpy/writers/cf_writer.py:571: FutureWarning: The default behaviour of the CF writer will soon change to not compress data by default.
      warnings.warn("The default behaviour of the CF writer will soon change to not compress data by default.",
    No time dimension in datasets of group ir, skipping time bounds creation.
    [None, None]

And I get a heavy file as output. The main idea is that I want to convert my HRIT segments (8 files per channel) into a single NetCDF or GRIB file, but as a result I get a file with many times in seconds.
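
For reference, a quick way to check what actually ended up in the file (a sketch using xarray, assuming the seviri_test.nc filename and the "ir" group from the snippet above):

    import xarray as xr

    # Open the "ir" group written by the CF writer and print its dimensions,
    # coordinates and variables (IR_108, IR_120 and related variables).
    ds = xr.open_dataset("seviri_test.nc", group="ir")
    print(ds)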

@markjinz changed the title from "Help about Netcdf/Grib Data" to "Help about Netcdf Data" on Jul 1, 2024
@djhoese
Member

djhoese commented Jul 2, 2024

Sorry @markjinz, Martin is on holiday and I have very little experience with SEVIRI data and with outputting to NetCDF files in Satpy but maybe I can help or find someone who can help more.

  1. What exactly is the problem with the file being produced?
  2. You mentioned you have times in seconds, why is that bad?
  3. Could you provide the ncdump -h of the file being produced?
  4. Asking the above in another way, what do you have now and what do you want it to look like?
  5. I wouldn't worry about the warning you were getting about the compression deprecation for now (unless you really need compression).
  6. I'm not sure where the [None, None] output is coming from. Is that a print in your code or a print left over in the CF writer code somewhere?

Otherwise, @pytroll/seviri_power_users might be able to help.

@gerritholl
Member

gerritholl commented Jul 2, 2024

To avoid huge files with the NetCDF writer, you need to pass encoding= for each of your variables:

    import os

    # "ls" here is the (resampled) Scene object; "out_dir" is the output directory
    ls.save_datasets(
            writer="cf",
            format="NETCDF4",
            filename=os.path.join(
                out_dir,
                "{platform_name}-{sensor}-{area.area_id}-{name}-{start_time:%Y%m%d%H%M%S}-{end_time:%Y%m%d%H%M%S}.nc"),
            encoding={
                "IR_108":
                {"zlib": True,
                 "complevel": 9,
                 "scale_factor": 0.1,
                 "dtype": "int16",
                 "_FillValue": 0}},
            include_lonlats=False)

Example for FCI, producing a simple NetCDF file with a single uncalibrated channel:

    import hdf5plugin
    from satpy import Scene
    from glob import glob

    filenames = glob(".../W_XX-EUMETSAT-Darmstadt,IMG+SAT,MTI1+FCI-1C-RRAD-*-FD--CHK-BODY--DIS-NC4E_C_EUMT_20231115*_IDPFI_OPE_20231115*_20231115*_N_JLS_C_0054_*.nc")
    sc = Scene(filenames=filenames, reader="fci_l1c_nc")
    sc.load(["vis_06"], calibration="counts")
    sc.save_dataset(
            "vis_06",
            "/media/nas/x21308/scratch/FCI/{platform_name}-{sensor}-{area.area_id}-{name}-{start_time:%Y%m%d%H%M%S}-{end_time:%Y%m%d%H%M%S}.nc",
            encoding={"vis_06": {"dtype": "int16", "zlib": True}},
            include_lonlats=False)

@gerritholl
Member

> The problem is that the output gives me multiple numbers such as 3712 (the number of x/y), whereas I only want one specific timestamp. It seems as though there are no dimensions of time in the file test2.nc

I don't really understand what you mean by multiple numbers. It is accurate that x=3712 and y=3712, as two separate dimensions.

It is a known issue that there is no direct way to include a time dimension. At #2428 someone has proposed a (possibly noncompliant) way to include one.
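
As an illustration only (a rough sketch, not the proposal from #2428, and not necessarily CF-compliant), a scalar time dimension could be added after the fact with xarray:

    import pandas as pd
    import xarray as xr

    # Post-processing sketch: add a length-1 "time" dimension to a file written
    # by the CF writer; the file name and timestamp below are placeholders.
    ds = xr.open_dataset("seviri_test.nc", group="ir")
    start = pd.to_datetime("2024-05-28T10:15:00")
    ds = ds.expand_dims(time=[start.to_datetime64()])
    ds.to_netcdf("seviri_test_with_time.nc")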

@djhoese
Member

djhoese commented Jul 3, 2024

Those are likely the acquisition times for each row. It looks like McIDAS is finding those when you select the image variable (IR_108 versus IR_108_acq_time) because it is listed as the only thing under coordinates for the IR_108 variable. Otherwise, there are attributes on the variable that give the time you want (e.g. start_time).

I would consider the fact that y and x are not listed as coordinates as a bug, but again I'm not an expert on the CF writer in Satpy.
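
If the per-scanline acquisition times are not wanted in the output at all, one possible workaround (a sketch only, assuming the SEVIRI reader exposes them as an acq_time coordinate, which the IR_108_acq_time variable suggests) would be to drop that coordinate before saving:

    from satpy import Scene

    # filenames: the HRIT segment files, as in the snippets above
    scn = Scene(filenames=filenames, reader="seviri_l1b_hrit")
    scn.load(["IR_108"])
    # Drop the (assumed) per-scanline acquisition-time coordinate so that only
    # the y/x dimensions remain attached to the written variable.
    scn["IR_108"] = scn["IR_108"].drop_vars("acq_time")
    scn.save_datasets(writer="cf", filename="seviri_ir108.nc")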

@ameraner
Member

Looks like some comments have been removed from the issue by the OP and the description has been edited, so it's impossible to follow the discussion anymore. I'll close the issue.
