(v3.6.9) - Best model metrics saving in evaluations #458

Merged
7 commits merged into main from feat/best-model-specific-metrics-wandb on Nov 15, 2024

Conversation

AlejandroCN7
Member

Description

This update mainly addresses the Sinergym callback LoggerEvalCallback. When combined with the integrated WandB functionality, it now saves specific metrics about the best model obtained during evaluations.

This considerably simplifies searching and preprocessing the information, since the WandB platform does not allow keeping summary rows in the run tables based on the maximum value of the mean reward. With this change, the best model's metrics become directly accessible on the platform.
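A minimal sketch of the general WandB pattern this enables, assuming an active run; the metric keys and values are placeholders, and the actual implementation inside LoggerEvalCallback may differ:

```python
import wandb

# Illustrative sketch only: metric keys and values are placeholders, not the
# exact names logged by LoggerEvalCallback.
run = wandb.init(project="sinergym-evaluations")

# WandB can keep the maximum of a single metric in the run summary...
wandb.define_metric("eval/mean_reward", summary="max")

# ...but to keep a whole set of related metrics for the best model, they are
# overwritten explicitly whenever a new best model is found, so they are
# directly visible in the run table.
best_model_metrics = {
    "best_model/mean_reward": 123.4,
    "best_model/mean_ep_length": 35040,
}
run.summary.update(best_model_metrics)

run.finish()
```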

Some minor updates are also included, such as preventing errors with future pandas versions. For more information, consult the changelog below.
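For context, this most likely refers to the FutureWarning that recent pandas versions emit when pd.concat receives empty or all-NA frames. A common workaround, sketched here with hypothetical column names rather than the callback's actual code, is to filter out empty frames before concatenating:

```python
import pandas as pd

# Hypothetical evaluation rows; column names are illustrative only.
previous = pd.DataFrame(columns=["episode", "mean_reward"])   # may be empty
new_row = pd.DataFrame([{"episode": 1, "mean_reward": 123.4}])

# Recent pandas versions warn when concatenating empty or all-NA frames,
# so keep only non-empty frames before calling concat().
frames = [df for df in (previous, new_row) if not df.empty]
evaluation_metrics = pd.concat(frames, ignore_index=True) if frames else previous
```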

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Improvement (of an existing feature)
  • Others

Checklist:

  • I've read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests.
  • I have updated the documentation accordingly.
  • I have reformatted the code using autopep8 second level aggressive.
  • I have reformatted the code using isort.
  • I have ensured cd docs && make spelling && make html pass (required if documentation has been updated).
  • I have ensured pytest tests/ -vv passes (required).
  • I have ensured pytype -d import-error sinergym/ passes (required).

Changelog:

  • Wrapper documentation: Moved boxes to the beginning.
  • LoggerEvalCallback: Added truncated and terminated flags so they are ignored by default.
  • LoggerEvalCallback: Defined a WandB metric for the best_model save.
  • LoggerEvalCallback: Updated the evaluation_metrics concatenation to avoid a pandas FutureWarning about inconsistencies in future versions.
  • LoggerEvalCallback: Best model metrics are now overwritten in WandB whenever a new best model is found.
  • LoggerEvalCallback: Updated the documentation to describe the new callback feature.
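As a rough usage sketch (not taken from this PR): LoggerEvalCallback extends stable-baselines3's EvalCallback, so attaching it to a training run would look roughly like the following. The environment id, import paths, and constructor arguments are assumptions based on Sinergym's examples and may not match v3.6.9 exactly; consult the documentation for the real API.

```python
import gymnasium as gym
import sinergym  # registers the Eplus-* environments
from stable_baselines3 import PPO
from sinergym.utils.callbacks import LoggerEvalCallback

# Environment id and wrapper setup are simplified; the logger/WandB wrappers
# from Sinergym's examples would normally be applied here as well.
env = gym.make('Eplus-5zone-hot-continuous-v1')
eval_env = gym.make('Eplus-5zone-hot-continuous-v1')

# Argument names follow the pattern used in Sinergym's examples and may
# differ in v3.6.9; check the documentation for the exact signature.
eval_callback = LoggerEvalCallback(
    eval_env=eval_env,
    train_env=env,
    n_eval_episodes=1,
    eval_freq_episodes=2,
)

model = PPO('MlpPolicy', env)
model.learn(total_timesteps=100_000, callback=eval_callback)
```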

AlejandroCN7 merged commit 7b08a44 into main on Nov 15, 2024
6 checks passed
AlejandroCN7 deleted the feat/best-model-specific-metrics-wandb branch on November 15, 2024 at 11:04