Different Results #104
Replies: 2 comments
-
Hi @Sammytop123 , Thank you for your message. I do not have details on the approach used to compute the results reported by method 1. By the name it seems to be YOLO, right? Two things that directly affect the results and can help your investigation are: the IOU threshold used by YOLO, and how the files are ordered when detections in different files have the same confidence. I hope that helps.
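To make the second point concrete, here is a minimal Python sketch of all-point interpolated AP (illustrative code, not this repo's implementation; the detection lists are made up) showing that two rankings differing only in the order of two equal-confidence detections yield different APs:

```python
# Each detection is (confidence, is_true_positive); the data is illustrative.
def average_precision(detections, total_ground_truths):
    """All-point interpolated AP over a ranked detection list."""
    tp_cum = 0
    points = []  # (recall, precision) at each rank
    for rank, (_, is_tp) in enumerate(detections, start=1):
        tp_cum += int(is_tp)
        points.append((tp_cum / total_ground_truths, tp_cum / rank))
    # interpolate: precision at recall r is the max precision at any recall >= r
    ap, prev_recall = 0.0, 0.0
    for i, (recall, _) in enumerate(points):
        max_prec = max(p for _, p in points[i:])
        ap += (recall - prev_recall) * max_prec
        prev_recall = recall
    return ap

# Two detections share confidence 0.9; only their relative order differs.
order_a = [(0.9, True), (0.9, False), (0.8, True)]
order_b = [(0.9, False), (0.9, True), (0.8, True)]
print(average_precision(order_a, 2))  # ~0.833
print(average_precision(order_b, 2))  # ~0.667
```

Over thousands of detections the per-tie effect shrinks, but two tools that break ties differently can still report slightly different per-class APs.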
-
Hey @rafaelpadilla , Yes, I'm using a custom-trained YOLOv4 network which was trained on a custom dataset.
Do you think that the ordering of the files when confidences of different detections are the same has such a big impact? Sammy
-
Hi, I'm comparing 2 methods of calculating the mAP for my custom dataset with 5 classes.
Here are the details in summary:
I'm using a test set of 121 images.
Method 1: When we use the `./darknet detector map xxx.data xxx.cfg xxx.weights` command on the test set, we get AP results for every class and a mAP.
Result:
Method 2:
a. I run `./darknet detect -ext_output ... > result.txt`, exporting all of the detections in the 121 images into one txt file.
b. I run a script that takes this result.txt file and generates 121 label files, one per image, in the following format (a sketch of such a script follows this list):
`<class_name> <confidence> <left> <top> <width> <height>` (absolute coordinates)
c. I run this repo's GUI.
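For step b, a hypothetical converter along these lines could produce those label files. Everything here is an assumption: it presumes the AlexeyAB-style `-ext_output` line format shown in the comments, and the `convert` function, regexes, and `detections` output folder are illustrative, so adjust them to your actual result.txt. Note that `-ext_output` prints confidence as a rounded integer percentage, which by itself creates many tied confidences, feeding exactly the ordering issue discussed above.

```python
import re
from pathlib import Path

# Assumed -ext_output format (verify against your result.txt):
#   data/img_001.jpg: Predicted in 35.2 milli-seconds.
#   dog: 99%	(left_x:  123   top_y:  45   width:  67   height:  89)
IMG_RE = re.compile(r"^(\S+\.(?:jpg|jpeg|png)):", re.IGNORECASE)
DET_RE = re.compile(
    r"^(?P<name>[\w-]+):\s*(?P<conf>\d+)%\s*"          # class names without spaces
    r"\(left_x:\s*(?P<left>-?\d+)\s*top_y:\s*(?P<top>-?\d+)\s*"
    r"width:\s*(?P<w>\d+)\s*height:\s*(?P<h>\d+)\)"
)

def convert(result_txt, out_dir):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    current = None  # label file for the image currently being parsed
    with open(result_txt) as f:
        for line in f:
            img = IMG_RE.match(line)
            if img:
                if current:
                    current.close()
                current = open(out_dir / f"{Path(img.group(1)).stem}.txt", "w")
                continue
            det = DET_RE.match(line.strip())
            if det and current:
                # <class_name> <confidence> <left> <top> <width> <height>, absolute
                conf = int(det["conf"]) / 100.0  # rounded % -> many tied confidences
                current.write(
                    f"{det['name']} {conf:.2f} {det['left']} {det['top']} "
                    f"{det['w']} {det['h']}\n"
                )
    if current:
        current.close()

convert("result.txt", "detections")
```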
Result:
The problem is that I get different results between Method 1 and Method 2.
Any suggestions?