🐛 Bug
When an update contains no predictions, the MAP class raises a ValueError. However, when no predictions are made but targets are present, the False Negative count should rise and make Recall worse; the MAP class does not seem to handle this case.
To Reproduce
Run the code sample in metrics/detection_map.py but empty the tensors in the preds dictionary.
Code sample
import torch

preds = [
    dict(
        # The boxes keyword should contain an [N, 4] tensor, where N is
        # the number of detected boxes, each in the format
        # [xmin, ymin, xmax, ymax] in absolute image coordinates.
        boxes=torch.Tensor([]),
        # The scores keyword should contain an [N,] tensor where each
        # element is a confidence score between 0 and 1.
        scores=torch.Tensor([]),
        # The labels keyword should contain an [N,] tensor with the
        # integer indices of the predicted classes.
        labels=torch.IntTensor([]),
    )
]
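For completeness, a minimal end-to-end sketch of the failure might look like the following; the import path and the single ground-truth box in target are assumptions for illustration and may differ by torchmetrics version:

import torch
from torchmetrics.detection.map import MAP  # assumed import path; varies across torchmetrics versions

# Empty predictions, as in the sample above.
preds = [dict(boxes=torch.Tensor([]), scores=torch.Tensor([]), labels=torch.IntTensor([]))]

# Hypothetical target with one ground-truth box: with zero predictions,
# this box should count as a False Negative and lower Recall.
target = [dict(boxes=torch.Tensor([[10.0, 20.0, 30.0, 40.0]]), labels=torch.IntTensor([0]))]

metric = MAP()
metric.update(preds, target)  # currently raises ValueError instead of recording a False Negative
print(metric.compute())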
Expected behavior
Recall should be affected (lowered by the added False Negatives) rather than a ValueError being raised.
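Concretely, with N ground-truth boxes and zero predictions there are TP = 0 true positives and FN = N false negatives, so Recall = TP / (TP + FN) = 0 / (0 + N) = 0; the metric should report that rather than raise.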
Environment
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (conda, pip, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
Additional context
I may be missing something about how Recall is calculated that is not immediately obvious. I just want to make sure the metrics are correct.
Hey @Borda, I see the change now, but I am still running into the issue when my tensor is torch.Tensor([]). The fix seems to address the case where a tensor is torch.Tensor([[]]). Should it handle both cases?
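For reference, the two spellings produce tensors of different shapes, which is likely why only one case is caught:

import torch

print(torch.Tensor([]).shape)    # torch.Size([0])    -- a 1-D tensor with zero elements
print(torch.Tensor([[]]).shape)  # torch.Size([1, 0]) -- a 2-D tensor: one row, zero columns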