Evaluation logger (TensorBoard callback) doesn't register mean_reward and mean_power correctly. This happens in LoggerEvalCallback.
To Reproduce
Thanks to @manjavacas, who spotted the problem. Run an experiment in DRL_battery.py with TensorBoard and evaluation active, and you get the following:
The logged values themselves are OK (the apparent discrepancy is a TensorBoard scale visualization issue), but mean_reward and mean_power_consumption do not appear.
The code where this error lives (sinergym/utils/callbacks.py):
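The original snippet is not reproduced here, so below is a minimal sketch of the kind of logging pattern that would produce this symptom, assuming a custom callback built on stable-baselines3's BaseCallback. The class name, keys, and use of evaluate_policy are illustrative, not the actual Sinergym code:

```python
import numpy as np

from stable_baselines3.common.callbacks import BaseCallback
from stable_baselines3.common.evaluation import evaluate_policy


class EvalLoggerSketch(BaseCallback):
    """Illustrative sketch only, not the actual Sinergym LoggerEvalCallback."""

    def __init__(self, eval_env, eval_freq=10000, n_eval_episodes=5, verbose=0):
        super().__init__(verbose)
        self.eval_env = eval_env
        self.eval_freq = eval_freq
        self.n_eval_episodes = n_eval_episodes

    def _on_step(self) -> bool:
        if self.n_calls % self.eval_freq == 0:
            # Per-episode rewards for the evaluation rollouts (in Sinergym the
            # power consumption would come from the environment's info dict;
            # it is omitted here to keep the sketch self-contained).
            episode_rewards, _ = evaluate_policy(
                self.model,
                self.eval_env,
                n_eval_episodes=self.n_eval_episodes,
                return_episode_rewards=True)

            # The cumulative value is recorded under 'eval/'...
            self.logger.record('eval/cumulative_reward',
                               float(np.sum(episode_rewards)))

            # ...but the mean never is, which matches the reported symptom
            # (no eval/mean_reward scalar in TensorBoard):
            # self.logger.record('eval/mean_reward',
            #                    float(np.mean(episode_rewards)))

            self.logger.dump(self.num_timesteps)
        return True
```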
* Renamed .vscode to .vscode_conf so that it does not affect the developer's current local VS Code workspace configuration
* Solved bug #168
* Trying to solve issue #171
* Solved #173
* Fixed bug causing duplicated progress.csv logs when the DRL_battery.py script is used
* Added env.close() in DRL_battery.py for cases where the DRL algorithm's learning process doesn't close the environment automatically (also added a log message in the simulator backend when the simulation is closed, to make this problem easier to detect in the future).
* DRL_battery.py: the total number of timesteps is the number of episodes multiplied by the episode size minus 1, to avoid a final reset that would add an empty episode (see the sketch after this list)
* Restructured the evaluation callback (separation between evaluation callback and evaluation policy in the Sinergym code)
* Deleted reset in EvalCallback in order to avoid empty episodes
* Migrated the evaluation policy out of callbacks
* Separated evaluation environment from training environment
* Fixed tests to match the new changes
* Added sinergym/data files to MANIFEST.in
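As a rough sketch of the timestep computation mentioned in the list above (the episode count and episode length are illustrative values, not the actual DRL_battery.py arguments):

```python
# Illustrative values, not taken from DRL_battery.py: one Sinergym episode
# modelled as a simulated year at a 15-minute timestep.
episodes = 10
timesteps_per_episode = 35040  # 365 days * 24 h * 4 steps/h

# Subtract 1 so that learning stops just before the last reset and the run
# does not start an extra, empty episode.
total_timesteps = episodes * timesteps_per_episode - 1

# model.learn(total_timesteps=total_timesteps)
print(total_timesteps)  # 350399
```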
Expected behavior
cumulative_reward and power_consumption should be mean_reward and mean_power in the TensorBoard evaluation graphs.
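A minimal sketch of the expected logging calls, assuming the stable-baselines3 logger API; the key names (eval/mean_reward, eval/mean_power), the output folder, and the numbers are illustrative. Inside the callback the logger would be self.logger; a standalone logger is configured here only to keep the example runnable:

```python
import numpy as np
from stable_baselines3.common.logger import configure

# Per-episode totals gathered during the evaluation rollouts
# (illustrative placeholder values).
episode_rewards = [-16500.0, -17200.0, -16800.0]
episode_powers = [1.9e8, 2.1e8, 2.0e8]

# Standalone TensorBoard logger for demonstration purposes.
logger = configure('./tensorboard_eval_demo', ['tensorboard'])

# Record means so they show up next to the other 'eval/' scalars.
logger.record('eval/mean_reward', float(np.mean(episode_rewards)))
logger.record('eval/mean_power', float(np.mean(episode_powers)))
logger.dump(step=0)
```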
Checklist
📝 Please, don't forget to include more labels besides bug if necessary.