
yolov10n verification of COCO test2017 MAP bug #356

Open
yang-0201 opened this issue Jul 18, 2024 · 1 comment

Comments

@yang-0201

When I ran the official code to measure the accuracy of yolov10n on COCO test2017, I used the `coco.yaml` configuration file and ran inference on the test set with the following code:

```python
from ultralytics import YOLOv10

if __name__ == '__main__':
    model = YOLOv10('yolov10n.pt')
    model.val(data='coco.yaml', device=0, split='test', save_json=True)
```

I then submitted the saved JSON file to the official evaluation server: https://codalab.lisn.upsaclay.fr/competitions/7384#participate-submit_results. The submission results showed that the mAP was almost always 0. After troubleshooting, I determined that the category ids were the problem. There is a bug in the code:
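For context, each entry in the predictions JSON submitted to the evaluation server must follow the standard COCO results format, in which `category_id` is the official 91-class COCO id rather than the model's 0-79 class index. A minimal sketch of one (hypothetical) entry:

```python
import json

# A hypothetical single detection in COCO results format.
detection = {
    "image_id": 397133,                  # numeric id parsed from the image filename
    "category_id": 1,                    # official COCO 91-class id (1 = person), NOT the 0-79 model index
    "bbox": [102.5, 80.0, 50.0, 120.0],  # [x, y, width, height] in pixels
    "score": 0.87,                       # detection confidence
}

# The submitted file is simply a JSON array of such entries.
print(json.dumps([detection]))
```

If the 0-79 model indices are written out unconverted, nearly every detection is scored against the wrong category, which is why the server reports mAP close to 0.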

In `ultralytics/models/yolo/detect/val.py`:

```python
self.is_coco = isinstance(val, str) and "coco" in val and val.endswith(f"{os.sep}val2017.txt")
```

The current code only recognizes `val2017.txt` as a COCO dataset and performs the label transformation; `test-dev2017.txt` is not recognized as COCO, so the category ids are never converted. Change the code to:

```python
self.is_coco = isinstance(val, str) and "coco" in val and (
    val.endswith(f"{os.sep}val2017.txt") or val.endswith(f"{os.sep}test-dev2017.txt")
)
```
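When `is_coco` is true, the validator converts the model's contiguous 0-79 class indices to the official 91-class COCO category ids before writing the JSON. A minimal sketch of that mapping (mirroring the `coco80_to_coco91_class` helper the library uses; the function name here is illustrative):

```python
def coco80_to_coco91_class():
    """Return a list mapping 0-79 model class indices to official COCO 91-class ids."""
    # Ten ids are unused in the official 91-class COCO annotation scheme.
    missing = {12, 26, 29, 30, 45, 66, 68, 69, 71, 83}
    return [i for i in range(1, 91) if i not in missing]

map91 = coco80_to_coco91_class()
# e.g. model class 0 ("person") maps to COCO category id 1
print(map91[0], map91[-1])  # -> 1 90
```

Without the fix above, this conversion is skipped for `test-dev2017.txt`, so the raw 0-79 indices end up in the submitted JSON.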

After modifying the code, I obtained the correct result:
yolov10n-cocotest2017.txt

```
overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.387
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.541
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.421
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.174
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.417
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.537
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.323
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.537
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.585
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.328
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.637
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.779
Done (t=508.27s)
```

@yang-0201
Author

I've resolved the issue and submitted PR #357.
