Add metric visualizations #429
Conversation
* `VisualizerCallback` is refactored into a base `VisualizerCallbackBase` class, which is used to construct `VisualizerCallbackImage` via inheritance.
* Fix a bug in `VisualizerCallback.on_test_end`, where `pl_module.logger` was accessed instead of iterating over `trainer.loggers` (wandb errored silently before, iirc).
* A skeleton for `VisualizerCallbackMetric` is added.
```python
for logger in trainer.loggers:
    if isinstance(logger, AnomalibWandbLogger):
        logger.save()
```
Unsure whether it's "bad practice" to call `.save` on an `AnomalibWandbLogger` multiple times, as is done currently. Any insights?
It has been a while since I have looked into `AnomalibWandbLogger`, but ideally it should only be called once at the very end. This is explained in the comment above: "This is because logging as a single batch ensures that all images appear as part of the same step." In case it is being called multiple times, my guess is that the `isinstance` check falls back to the base class, and the logger probably always ends up being treated as an instance of `AnomalibWandbLogger` (`LightningLogger` more specifically). Maybe a good idea would be to use `type()`, but mypy or pylint might complain.
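The distinction can be sketched in plain Python (the class names below are stand-ins, not the actual anomalib/Lightning hierarchy):

```python
class Logger:                 # stand-in for the Lightning base logger
    pass

class WandbLogger(Logger):    # stand-in for AnomalibWandbLogger
    pass

base = Logger()
wandb = WandbLogger()

# isinstance matches the class and all of its subclasses ...
print(isinstance(wandb, Logger))      # True
print(isinstance(base, WandbLogger))  # False

# ... while type() compares against the exact class only
print(type(wandb) is Logger)          # False
print(type(wandb) is WandbLogger)     # True
```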
I opted to clear `AnomalibWandbLogger.image_list` on save. Thus, all predictions are in the first step, and all metrics in the second step (though still under the same namespace).
@ORippler, thanks a lot for your contribution! Just so you know, the code quality checks have failed for a number of reasons. For more details, you could refer to this. Will you be able to address this?
Sure, once I get all the functionality implemented.
PyTorch provides a function for logging `Figure` objects, so let's not reinvent the wheel: https://pytorch.org/docs/stable/tensorboard.html?highlight=add_figure#torch.utils.tensorboard.writer.SummaryWriter.add_figure
Otherwise, matplotlib starts to complain and figures aren't garbage-collected properly.
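On the garbage-collection point: pyplot keeps every figure it creates alive until it is explicitly closed, which is why producing figures in a loop without closing them triggers matplotlib's "too many open figures" warning. (The `add_figure` method linked above closes the figure for you by default via its `close=True` argument.) A minimal illustration, assuming matplotlib is installed:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# Create a few figures without closing them ...
for _ in range(3):
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])

n_open = len(plt.get_fignums())    # 3 -- pyplot still tracks all of them

# ... then release them explicitly once they have been logged
plt.close("all")
n_closed = len(plt.get_fignums())  # 0
```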
`Visualizer` is needed by all Callbacks for writing images to disk
Every metric that should be visualized needs to implement its own `generate_figure` function, and the resulting plot is then saved by `VisualizerCallbackMetric`
* Needed to change the signature of `AnomalyModule._collect_outputs` for pixel-wise metrics, as we need the spatial dimensions for connected-component analysis in AUPRO.
* Add AUPRO, which uses kornia and the fact that per-region overlap == per-region TPR for fast AUPRO computation.
* Updated docstrings for AUPR/AUROC.
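As an aside, the connected-component step can be illustrated without kornia (a tiny pure-Python 4-connectivity labelling sketch; the PR itself relies on `kornia` for GPU tensors):

```python
def label_regions(mask):
    """Label connected regions (4-connectivity) in a binary mask given as lists of 0/1."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                current += 1
                stack = [(sy, sx)]  # flood fill from this seed pixel
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not labels[y][x]:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels

gt = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
labels = label_regions(gt)
print(labels)  # two separate ground-truth regions, labelled 1 and 2
```

Per-region TPR is then simply the fraction of pixels with a given label that the prediction covers, which is why per-region overlap and per-region TPR coincide.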
Due to bugs in the CLI, the feature could not be tested well.
I got the core functionality implemented and tested it locally so far. Some points from my side:
they don't necessarily all appear when I open a
Thanks for the efforts!
```python
metrics = getattr(pl_module, metric_type)
for metric in metrics.values():
    # `generate_figure` needs to be defined for every metric that should be plotted automatically
    if hasattr(metric, "generate_figure"):
```
I agree, maybe having a base metric with `generate_figure` as an abstract method would be a cleaner approach, but we can address this in a different PR.
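A rough sketch of that alternative (class names are illustrative, not the actual anomalib classes; a real version would mix this into `torchmetrics.Metric`):

```python
from abc import ABC, abstractmethod

class PlottableMetric(ABC):
    """Hypothetical base class: subclasses must know how to plot themselves."""

    @abstractmethod
    def generate_figure(self):
        """Return a figure visualizing the metric state."""

class AUROC(PlottableMetric):
    def generate_figure(self):
        return "figure-placeholder"  # a real metric would build a matplotlib Figure here

# The callback could then filter by type instead of using hasattr:
metric = AUROC()
print(isinstance(metric, PlottableMetric))  # True
```

A side benefit is that forgetting to implement `generate_figure` raises a `TypeError` at instantiation time rather than silently skipping the metric.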
Since unique fpr/tpr curves are generated for each label, we cannot use `fpr_index` across calls to `roc`. Instead, we now bilinearly resample the generated PRO curves at `fpr <= self.fpr_limit` to fixed sampling points, and then aggregate over the resampled curves.
We follow a scheme similar to the ROC curve, where the PRO curve is composed of per-region TPR plotted against global FPR.
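The resampling scheme can be sketched in plain Python (a simplified stand-in: the actual implementation interpolates torch tensors, and `shared_fpr` below plays the role of the fixed sampling points up to `fpr_limit`):

```python
def resample_curve(fpr, tpr, new_fpr):
    """Linearly interpolate tpr values at the fixed sampling points new_fpr.

    Assumes fpr is sorted ascending. This is a simplified stand-in for the
    resampling described above, not the anomalib implementation.
    """
    out = []
    for x in new_fpr:
        for i in range(1, len(fpr)):
            if fpr[i] >= x:  # found the surrounding support points
                x0, x1 = fpr[i - 1], fpr[i]
                y0, y1 = tpr[i - 1], tpr[i]
                t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                out.append(y0 + t * (y1 - y0))
                break
        else:
            out.append(tpr[-1])  # x lies beyond the last support point
    return out

# Two per-region curves with different fpr supports ...
shared_fpr = [0.0, 0.1, 0.2, 0.3]  # fixed sampling points up to fpr_limit
curve_a = resample_curve([0.0, 0.15, 0.3], [0.0, 0.6, 1.0], shared_fpr)
curve_b = resample_curve([0.0, 0.3], [0.0, 1.0], shared_fpr)

# ... can now be aggregated point-wise on the shared support
mean_tpr = [(a + b) / 2 for a, b in zip(curve_a, curve_b)]
```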
In addition to bugfixing the PRO calculation, I passed the mypy checks and clarified the wording.
Thanks for the update. We'll have a look at this.
@ORippler, thanks again for such a contribution! I only have a single comment regarding the naming. Other than that, it looks great!
We now prepend the qualifier instead of appending it, i.e. `VisualizerCallbackBase` -> `BaseVisualizerCallback`.
Alright, so the test for ganomaly should pass now as well (there were a couple of bugs in the visualizer for the
Thanks, yeah, I also noticed the classification bug in the visualizer yesterday. Will fix it in a separate PR.
Many thanks for your effort and valuable contribution!
Description
This PR adds metric visualizations by refactoring `VisualizerCallback` into `VisualizerCallbackBase` and `VisualizerCallbackImage`, and by introducing a new `VisualizerCallbackMetric` class.
Changes
Checklist