
Reference results for correctness checks and benchmarking #168

Open
DCaviedesV opened this issue Mar 3, 2023 · 7 comments
Labels
datasets Anything related to public datasets, model datasets, validation, etc test setups Concerning supported test setups


DCaviedesV (Contributor) commented Mar 3, 2023

To implement correctness checks for benchmarking workflows, and possibly CI/CD, we need reference results for a few of the standard supported setups. It is probably not necessary to cover all of them; a small selection should suffice.
From #53 a reference solution for EUR11 was generated.
We need something similar for a few selected cases. We should discuss and agree which ones.
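A correctness check of this kind could, at its simplest, compare a run's output file against a recorded reference checksum and fail the CI job on a mismatch. The sketch below illustrates the idea only; the file names are placeholders and not part of any TSMP convention.

```shell
# Hypothetical sketch of a reference-based correctness check.
# "run_output.dat" and "reference.md5" are placeholder names.
workdir=$(mktemp -d)
run_output="$workdir/run_output.dat"
reference_md5="$workdir/reference.md5"

# Create example data so the sketch is self-contained; in practice
# run_output would come from the test run and reference.md5 would be
# stored alongside the reference solution.
printf 'example model output\n' > "$run_output"
md5sum "$run_output" > "$reference_md5"

# The actual check: md5sum -c verifies the file against the recorded sum.
if md5sum -c "$reference_md5" >/dev/null 2>&1; then
    status="PASS"
else
    status="FAIL"
fi
echo "correctness check: $status"
```

Note that a bitwise checksum comparison is strict; for floating-point model output that may legitimately vary across machines or compilers, a tolerance-based comparison of the netCDF fields would likely be more appropriate.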

@DCaviedesV DCaviedesV added datasets Anything related to public datasets, model datasets, validation, etc test setups Concerning supported test setups labels Mar 3, 2023
chartick (Contributor) commented Mar 7, 2023

The reference results for EUR11 are probably outdated because of changes in the static files. Creating reference results for the ideal cases does not have this problem.

DCaviedesV (Contributor, Author) commented

The update of the static files for EUR11 is currently reflected neither in the static fields in datapub, which are fetched by the setup_tsmp workflow, nor in the reference solution, as @chartick points out. I guess both need updating then.
Indeed, this is not a problem for the ideal cases. I would propose generating a reference solution for one size configuration of idealRTD and one of idealScal, even though the two cases are fairly similar.
@s-poll, is there another case you would like to see?

s-poll (Member) commented Mar 7, 2023

I agree with @chartick that the EUR11 reference data are not up to date with the changes in the static fields, and they should probably be recreated with the new ones. But the new static data should already be in use in the setup_tsmp workflow: the download-data script should fetch the static fields v2 from datapub, and all setup scripts should point to the new data.

I think the most important ones are already mentioned. If necessary, we can extend the pool of benchmarking cases in the future.

DCaviedesV (Contributor, Author) commented Mar 7, 2023

@s-poll ok, yes, I see that download_data_for_test_cases.ksh points to the static fields v2 in datapub.
But are these really up to date? They seem to have been last updated in summer 2022. Shouldn't they include the fixes made to the EUR11 domain over the last couple of months?

DCaviedesV (Contributor, Author) commented

@chartick I think you can start collecting the machine-related material for the cases we are sure we want to keep, while we see what happens with the others.

s-poll (Member) commented Mar 7, 2023

@DCaviedesV These data include the corrections to the CLM coordinates and the ParFlow slopes. But you are right that the namelists and data are not up to date with recent developments, especially within DETECT. Are the data and namelists already settled for the EUR11 domain simulations?

kgoergen (Contributor) commented Mar 7, 2023

Within the DETECT CRC, we do have a setup and configuration (https://icg4geo.icg.kfa-juelich.de/Configurations/TSMP/DETECT_EUR-11_ECMWF-ERA5_evaluation_r1i1p1_FZJ-COSMO5-01-CLM3-5-0-ParFlow3-12-0_vBaseline) with related constant/static fields (https://icg4geo.icg.kfa-juelich.de/Configurations/TSMP_statfiles_IBG3/TSMP_EUR-11), aligned with EURO-CORDEX and the CLM-Community; minor refinements are ongoing.
