
Add evaluation scripts (Updated on 2023-11-07) #119

Open
fengyuentau opened this issue Jan 15, 2023 · 16 comments
@fengyuentau
Member

fengyuentau commented Jan 15, 2023

We now have over 15 models covering more than 10 tasks in the zoo. Although most of the models are converted to ONNX directly from their original formats, such conversion can lead to a drop in accuracy, especially for FP16 and Int8-quantized models. To show our users the actual accuracy, we already have some evaluation scripts in https://github.com/opencv/opencv_zoo/tree/master/tools/eval, which meet the following conditions:

  1. Reproduce the claimed accuracy with the converted FP32 ONNX model, using OpenCV DNN as the inference framework. The claimed accuracy comes either from the source repository or from the paper, on the same dataset, and needs to be specified in the first comment of the pull request.
  2. Once the accuracy is reproduced, apply the same evaluation script to the FP16 and Int8-quantized models.
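
For reference, the core of such an evaluation is simply running the converted ONNX model through OpenCV DNN and scoring its predictions against the ground truth of the dataset. Below is a minimal sketch; the top-1-accuracy metric, the preprocessing and the function name are illustrative placeholders, not the actual scripts in tools/eval:

```python
import cv2
import numpy as np

def evaluate(model_path, dataset, input_size=(224, 224)):
    """Report top-1 accuracy of an ONNX model run through OpenCV DNN.

    `dataset` is assumed to yield (image, label) pairs; preprocessing and the
    metric must be adapted to match the original training recipe of the model.
    """
    net = cv2.dnn.readNet(model_path)  # OpenCV DNN reads ONNX directly
    correct = total = 0
    for image, label in dataset:
        blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255.0,
                                     size=input_size, swapRB=True, crop=False)
        net.setInput(blob)
        pred = int(np.argmax(net.forward()))
        correct += int(pred == label)
        total += 1
    return correct / total

# Step 1: reproduce the claimed accuracy with the FP32 ONNX model, e.g.
#   evaluate("model_fp32.onnx", dataset)
# Step 2: rerun the exact same script on the FP16 / Int8-quantized variants, e.g.
#   evaluate("model_fp16.onnx", dataset) and evaluate("model_int8.onnx", dataset)
```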

Take a look at the task list below for the current status. Feel free to leave a comment to claim a task or to discuss it before you start contributing.

| Status | Task | Dataset | Models | Notes |
| --- | --- | --- | --- | --- |
| ✅ Done in #70 | Face Detection | WIDERFace | YuNet | - |
| ✅ Done in #72 | Face Recognition | LFW | SFace | - |
| ❗️ Need Contribution | License Plate Detection | ? | LPD-YuNet | - |
| ❗️ Need Contribution | Object Detection | COCO | YOLOX & NanoDet | Refer to #91 |
| ❗️ Need Contribution | Text Detection | ? | DB | - |
| ✅ Done in #71 | Text Recognition | ICDAR2003 & IIIT5K | CRNN (EN & CN) | - |
| ✅ Done in #69 | Image Classification | ImageNet | PP-ResNet50 & MobileNet V1 / V2 | - |
| ✅ Done in #130 | Human Segmentation | Mini Supervisely Persons | PP-HumanSeg | - |
| ❗️ Need Contribution | QR Code Detection / Parsing | ? | WeChatQRCode | - |
| ❗️ Need Contribution | Person Re-identification | ? | YoutuReID | - |
| ❗️ Need Contribution | Palm Detection | ? | MP-PalmDet | - |
| ❗️ Need Contribution | Hand Pose Estimation | ? | MP-HandPose | - |
| ❗️ Need Contribution | Person Detection | ? | MP-PersonDet | - |
| ❗️ Need Contribution | Pose Estimation | ? | MP-Pose | - |
| ❗️ Need Contribution | Facial Expression Recognition | RAF-DB | FER | - |
| ❗️ Need Contribution | Object Tracking | ? | VitTrack | Could be done via #205 |
@fengyuentau added the evaluation (adding tools for evaluation or bugs of eval scripts) and feature labels on Jan 15, 2023
@fengyuentau self-assigned this on Jan 15, 2023
@fengyuentau pinned this issue on Jan 15, 2023
@labeeb-7z
Contributor

Hey @fengyuentau, I'd love to contribute to this and need some clarification.

Say I were to add an eval script for the PP-HumanSeg model. I should:

  • add a dataset file (e.g. coco.py) in tools/eval/datasets
  • implement in this file a dataset class (e.g. COCO) with the following methods:
    • load_label (loads inputs and ground truth for evaluation)
    • eval (runs the evaluation process)
    • get_result
    • print_result
  • update the datasets dictionary in eval.py
  • update the DATASETS registry

Am I getting this right? Thanks.

@fengyuentau
Member Author

@labeeb-7z Basically yes. You can also take a look at #70 for reference.
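
Roughly, such a dataset class ends up as the skeleton below. The class and method names just follow the flow you listed (the model object is assumed to expose an infer() method, as the models in this zoo do); the exact signatures in tools/eval may differ slightly:

```python
class MiniSupervisely:  # illustrative name; use the dataset you evaluate on
    def __init__(self, root):
        self.root = root
        self.samples = []   # filled by load_label()
        self.results = []   # filled by eval()
        self.load_label()

    @property
    def name(self):
        return self.__class__.__name__

    def load_label(self):
        """Collect (image path, ground truth) pairs from self.root."""
        ...

    def eval(self, model):
        """Run model.infer(image) over all samples and accumulate the metric."""
        ...

    def get_result(self):
        """Return the accumulated metric, e.g. mIoU or accuracy."""
        ...

    def print_result(self):
        print(f"{self.name}: {self.get_result()}")
```

After that, register the new class in the datasets registry so eval.py can look it up by name.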

@fengyuentau added the feature (New feature or request) label and removed the feature label on Feb 1, 2023
@labeeb-7z
Contributor

Hey @fengyuentau, I started by looking for datasets for the models. Below is what I found; I'd like your input before proceeding further.

PP_HumanSeg Model

I found this Readme, which contains some information about the original model trained with PaddlePaddle.
(Consider adding a direct link to it on the HumanSeg model page.)

They do provide a link to a dataset for inference, validation and training. However, I'm unsure whether this is the exact dataset that was used to train the model present in opencv_zoo.

I would like confirmation before proceeding with this dataset.

HandPose Estimation and Palm Detection Model

The estimation model is derived(?) from the palm detection model, as per this blog from MediaPipe, so presumably the same dataset.

Information about the dataset is not mentioned in the blog; however, it links to a paper they published. The paper describes the datasets they used (in-the-wild and in-house collected gesture), but provides no links, and I couldn't find them online.
I could find some other datasets used for palm detection; should I proceed with them?

License plate detection Model

The provided references did not include any relevant information about the datasets; they appear to be for a face detection model(?).
I also could not find any relevant information at watrix.ai.

WeChatQRCode Model

Again, the provided references do not contain any information about the dataset used for training.

Maybe we can ask the original contributors of the respective models about the datasets used for training?

I think including more information about the models (datasets, model architecture, etc.) in the future would be helpful for users as well as developers.

@fengyuentau
Member Author

Thank you for all the research! Please see below.

PP_HumanSeg Model

In PaddleSeg, they provide everything, including a validation script and data for testing, but no accuracy numbers. The model here in the zoo is converted from PaddlePaddle using this command, and you can see the keyword "hrnet18_small_v1" in the filename, which is basically the same as in PaddleSeg. If you have enough time, you can get the model, test data and code from their repo and run val.py to get the accuracy, then validate the one here in the zoo using OpenCV DNN as the inference framework and the same test data.
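
For the cross-check itself, the metric to compare is mIoU computed on the same test data; a minimal sketch of that computation is below (binary person/background segmentation and integer label maps assumed; the preprocessing feeding it must mirror PaddleSeg's val.py):

```python
import numpy as np

def mean_iou(pred_mask, gt_mask, num_classes=2):
    """Mean IoU between predicted and ground-truth label maps (HxW integer arrays)."""
    ious = []
    for c in range(num_classes):
        pred_c, gt_c = (pred_mask == c), (gt_mask == c)
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:
            continue  # class absent in both masks; skip it
        ious.append(np.logical_and(pred_c, gt_c).sum() / union)
    return float(np.mean(ious))
```

If the number from PaddleSeg's val.py and the number computed from OpenCV DNN outputs on the same images match, the conversion did not cost accuracy.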

HandPose Estimation and Palm Detection Model

Since they do not provide the dataset, it is worth trying other datasets that are popular and widely used.

License plate detection Model

This one is adapted from a face detection model and trained with Chinese license plate datasets. I believe you can also find some datasets available online.

WeChatQRCode Model

This model comes from the WeChat-CV team, and I am not sure that they will provide the data. But again, it is worth a try.


> I think including more information about the models (datasets, model architecture, etc.) in the future would be helpful for users as well as developers.

It could be helpful if the information is accurate. Normally we put a link to the source of the model and let people look up what they are interested in. Not everyone is willing or able to share datasets, due to various limitations and restrictions.

@labeeb-7z
Contributor

Hey @fengyuentau, apologies for the delayed response; I was caught up with college work.

I've submitted a PR for the PP-HumanSeg model: #130.

For palm detection and hand pose estimation, I came across the HandNet dataset and think it could be a good place to start for evaluation.
Other potential datasets can be found here: awesome-hand-pose-estimation.
Let me know which one I should proceed with.

@fengyuentau
Member Author

Hello @labeeb-7z, thank you for the update! I've reviewed the pull request; please take a look at the comments.

As for the evaluation of palm detection and hand pose estimation, I suggest you look at palm detection first, because we are planning to upgrade the hand pose estimation model from 2D to 3D output.

@labeeb-7z
Contributor

Hello @fengyuentau @zihaomu, I am interested in ideas #7 and #8 of OpenCV's GSoC ideas for this year. I went through the resources and have some points to discuss before writing the proposal.

Is there a forum/mailing list for such discussions? The contributor+mentor mailing list provided on the wiki page doesn't seem to work.

Also, I see that the ideas for '23 are the same as the ones for '22, so I just wanted to confirm whether they'll still be part of GSoC '23.

@fengyuentau
Member Author

fengyuentau commented Mar 20, 2023

Hello @labeeb-7z, the forum/mailing list is https://groups.google.com/g/opencv-gsoc-202x. Some of the ideas are the same because of limited slots and a lack of proposals.

Please have that discussion there; this issue is for the discussion of evaluation scripts.

@fengyuentau changed the title from "Add evaluation scripts for models in the zoo" to "Add evaluation scripts (Updated on 2023-11-07)" on Nov 7, 2023
@kshitijdshah99

kshitijdshah99 commented Jan 4, 2024

Hey @fengyuentau, I would love to contribute to this issue.

I have some doubts regarding the dataset used for the Text Detection PP-OCRv3 model and need clarification. I was looking into the available datasets in this research paper and found many of them. To test the accuracy of our model, can we use any of these datasets, or should we use the ones you mention in the project's README (IC15 and TD500)?

@fengyuentau
Member Author

@kshitijdshah99 You are welcome to contribute. You can use any dataset as long as it is publicly accessible. We do favor datasets that are popular; for example, the COCO dataset is used for evaluation in basically every object detection paper.

@ryan1288
Contributor

Hello @fengyuentau, I'm also interested in tackling one of these evaluation scripts. Do you have a model that you'd prioritize over the others? 👍 I can take a look and come back with a brief proposal.

@ryan1288
Contributor

I'm interested in the Object Detection Evaluation. If no one else is working on it, can I give it a try? @fengyuentau

@fengyuentau
Member Author

> I'm interested in the Object Detection Evaluation. If no one else is working on it, can I give it a try? @fengyuentau

Yes, feel free to do so.
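
For context, COCO-style mAP is usually computed with pycocotools once the detector's outputs are dumped in the standard results-JSON format; a minimal sketch follows (the file names are placeholders, and this is the common recipe rather than a finished tools/eval script):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# detections.json: a list of {"image_id", "category_id", "bbox": [x, y, w, h], "score"}
# entries, e.g. produced by running YOLOX / NanoDet through OpenCV DNN on val2017.
coco_gt = COCO("annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("detections.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP50, AP75, etc.
```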

@Hmm-1224

Hmm-1224 commented Jan 9, 2025

Hello @fengyuentau,
My name is Sonal Kumari, and I am passionate about deep learning and machine learning. I am particularly interested in contributing to QR code detection and parsing.

I have explored the WeChatQRCode model and conducted research using publicly available datasets, including high-resolution QR code datasets and noisy/occluded datasets. I evaluated the model's performance using Google Colab, with test datasets collected from Kaggle:

  • On high-resolution QR codes, the model achieves 100% accuracy with excellent efficiency.
  • On noisy and occluded datasets, the accuracy is 82.61%, with a processing speed of 0.024 seconds per image.

While these results are impressive, I believe there is room for improvement in handling challenging conditions. Additionally, I find areas like benchmarking, augmented dataset creation, and mobile optimization to be promising fields for tackling real-world problems.

I would love your guidance on where I should focus my contributions to make the most impactful improvements. Any suggestions on the most relevant direction would be greatly appreciated.
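
For reference, a decode-accuracy check of this kind boils down to something like the sketch below. It assumes the wechat_qrcode module from opencv-contrib-python, locally downloaded detector/super-resolution model files, and a layout where each image's filename is its expected decoded string; none of this is necessarily the exact setup used above:

```python
import glob
import os
import time

import cv2

# Assumption: opencv-contrib-python is installed and the four WeChatQRCode model
# files (detector and super-resolution prototxt + caffemodel) are available locally.
detector = cv2.wechat_qrcode_WeChatQRCode(
    "detect.prototxt", "detect.caffemodel", "sr.prototxt", "sr.caffemodel")

correct, total, elapsed = 0, 0, 0.0
for path in glob.glob("qr_dataset/*.png"):
    expected = os.path.splitext(os.path.basename(path))[0]  # ground truth in filename
    image = cv2.imread(path)
    start = time.time()
    texts, _points = detector.detectAndDecode(image)
    elapsed += time.time() - start
    correct += int(expected in texts)
    total += 1

if total:
    print(f"accuracy: {correct / total:.2%}, {elapsed / total:.3f} s/image")
```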

@fengyuentau
Member Author

Hello @Hmm-1224, thank you for exploring the QR code model! You are welcome to contribute the evaluation script to this repo, specifically https://github.com/opencv/opencv_zoo/tree/main/tools/eval.

Meanwhile, you can train your own models on the datasets that you found. If yours outperforms the one we have now, model contributions are also welcome.

@Hmm-1224

Hmm-1224 commented Jan 12, 2025 via email
