Compute per category metrics #6585

Open · wants to merge 2 commits into master
Conversation


@philippschw philippschw commented Apr 16, 2019

Inspired by #5010 and #4778 (comment).

I would like to submit this pull request to the TensorFlow Object Detection API to enable everyone to readily compute per-category metrics.

As described in those threads, three code additions are needed:

  1. Addition in "pycocotools/cocoeval.py", lines 499 to 648.
    I made a corresponding pull request: Get mAP for each class cocodataset/cocoapi#282. (A standalone sketch of the idea follows the config example below.)

  2. Addition in "tensorflow/models/research/object_detection/metrics/coco_tools.py", lines 240 to 244.

  3. Add

{
    metrics_set: "coco_detection_metrics"
    include_metrics_per_category: true
}

to the eval_config section of the pipeline config file.
For instance:

eval_config: {
  num_examples: 8000
  max_evals: 10
  num_visualizations: 20
  metrics_set: "coco_detection_metrics" 
  include_metrics_per_category: true   
}
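
For readers who want the gist of what the cocoeval.py addition enables, here is a minimal, self-contained sketch (not this PR's exact code) of pulling per-category AP out of pycocotools after evaluation. The annotation and detection file paths are placeholders:

import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val.json')  # placeholder path
coco_dt = coco_gt.loadRes('detections.json')      # placeholder path

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()

# eval['precision'] has shape [T, R, K, A, M]:
# IoU thresholds x recall thresholds x categories x area ranges x max dets.
precision = coco_eval.eval['precision']
for k, cat_id in enumerate(coco_eval.params.catIds):
    # area range "all" (index 0), highest maxDets setting (last index)
    p = precision[:, :, k, 0, -1]
    ap = np.mean(p[p > -1]) if (p > -1).any() else -1.0
    name = coco_gt.loadCats(cat_id)[0]['name']
    print('DetectionBoxes_PerformanceByCategory/mAP/%s: %f' % (name, ap))

Categories with no ground truth come out as -1, which matches the -1.000000 entries in the log output further down this thread.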

I have tested these code changes and can see the per-category metric stats successfully plotted in TensorBoard. This is my first ever pull request; any feedback would be appreciated.


ubill88 commented May 8, 2019

@philippschw Thanks for the reference! I can see all the classes and their mAP displayed on my TensorBoard, and output like the following in the terminal (I use legacy/eval.py to evaluate).

INFO:tensorflow:DetectionBoxes_PerformanceByCategory/mAP/bc: 0.152886
INFO:tensorflow:DetectionBoxes_PerformanceByCategory/mAP/br: -1.000000
INFO:tensorflow:DetectionBoxes_PerformanceByCategory/mAP/fchm: -1.000000
INFO:tensorflow:DetectionBoxes_PerformanceByCategory/mAP/fon|: 0.000000
INFO:tensorflow:DetectionBoxes_PerformanceByCategory/mAP/fow-: 0.166513
INFO:tensorflow:DetectionBoxes_PerformanceByCategory/mAP/fs: 0.288510

I'm just wondering: does this mAP refer to the line below?
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ]

Also, is there any way I can see other evaluation results like small mAP, medium mAP, large mAP, and AR?

Last, I think COCO is a little strict; is there any way I can change the IoU threshold (e.g. from 0.50:0.95 to 0.30:0.75)?

Thank you very much!
All the best!
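
Regarding the IoU question above: this is outside the scope of this PR, but pycocotools exposes the thresholds through COCOeval.params, so a looser range such as 0.30:0.75 can be set before evaluate() runs. A rough sketch (paths are placeholders, as in the snippet earlier in the thread):

import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val.json')  # placeholder path
coco_dt = coco_gt.loadRes('detections.json')      # placeholder path

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
# Replace the default 0.50:0.95 range with 0.30:0.75, keeping the 0.05 step.
coco_eval.params.iouThrs = np.linspace(
    0.30, 0.75, int(round((0.75 - 0.30) / 0.05)) + 1, endpoint=True)
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

Inside the Object Detection API this would have to be threaded through coco_tools.py, since as far as I know the pipeline config has no field for it.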


as1392 commented Jul 22, 2019

@ubill88 Yes, it is. If you want to see other metrics, it seems you should add all_metrics_per_category: true to eval_config (or just set its default value to true in ComputeMetrics in coco_tools.py).

Edit: it gives an error when going through the protos... you may need to modify eval.proto and recompile it. (Adding
optional bool all_metrics_per_category = 28 [default=false]; seems right, though I'm not sure. A sketch of the resulting eval_config is below.)
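
Assuming that proto field is added and eval.proto is recompiled, the eval_config would then look something like this (untested sketch, field name taken from the line above):

eval_config: {
  metrics_set: "coco_detection_metrics"
  include_metrics_per_category: true
  all_metrics_per_category: true
}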

@zishanahmed08

@jaeyounkim Is there an update on this PR? This is a very important feature request.
