Metric: CLIPIQA #348
Conversation
Kudos, SonarCloud Quality Gate passed! 0 Bugs, no Coverage information.
Ready for re-review.
Some comments for discussion
For some reason I cannot reply directly to that comment, so I'll do it here. @denproc, nice catch, and this one actually lets me guess with high probability why the original implementation had this type conversion. It turns out that:
As a result, I think it will be fair to allow computation of the metric only in
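For illustration, such a restriction could be a small dtype guard on the metric's input. This is only a sketch: `_validate_input_dtype` and `SUPPORTED_DTYPE` are hypothetical names, and `torch.float32` is a placeholder since the thread above does not preserve the exact dtype being discussed.

```python
import torch

# Placeholder: the exact dtype from the discussion is not preserved above.
SUPPORTED_DTYPE = torch.float32

def _validate_input_dtype(x: torch.Tensor) -> torch.Tensor:
    """Reject inputs in dtypes the metric does not support.

    CLIP checkpoints are distributed in half precision, so running the
    metric in an unexpected dtype can silently change the scores.
    """
    if x.dtype != SUPPORTED_DTYPE:
        raise TypeError(
            f"Metric supports only {SUPPORTED_DTYPE} inputs, got {x.dtype}."
        )
    return x
```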
I must have missed some of the entries of the pos_embedding parameter. I think we should change it to True as well, to match the original CLIP behaviour.
In addition, we have to add CLIP-IQA to the documentation. This also makes me wonder whether all of our metrics are covered in the documentation; I might check that later.
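To make the suggestion concrete: the original CLIP vision encoder always adds a learnable positional embedding, so any flag controlling it should default to True. A minimal sketch below, where `VisionEncoder` and its `pos_embedding` argument are hypothetical names standing in for the actual identifiers in this PR.

```python
import torch
import torch.nn as nn

class VisionEncoder(nn.Module):
    """Illustrative encoder stub; only the positional-embedding flag matters here."""

    def __init__(self, seq_len: int, dim: int, pos_embedding: bool = True) -> None:
        super().__init__()
        # Original CLIP always applies a learnable positional embedding,
        # hence the suggestion to default the flag to True everywhere.
        self.pos_embedding = (
            nn.Parameter(torch.randn(seq_len, dim) * 0.02) if pos_embedding else None
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.pos_embedding is not None:
            x = x + self.pos_embedding
        return x
```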
Kudos, SonarCloud Quality Gate passed! 0 Bugs, no Coverage information.
LGTM!
This PR implements the CLIP-IQA metric described in Wang et al. (2022).
Closes #331
The main reason to implement CLIP-IQA here is to let users compute the metric without bringing additional dependencies (mmcv/mmedit) into the project.
Note that CLIP-IQA+ won't be implemented here because its CLIP weights were fine-tuned with mmcv and hence cannot be loaded and run without it.
Note that the values produced by this implementation match those of the official CLIP-IQA implementation. SRCC scores on public benchmarks may differ from the ones listed in the paper; we consider the official code and weights to be the ultimate source of truth and hence stick with them.
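For context, the core scoring rule of CLIP-IQA is a softmax over the similarities between an image embedding and an antonym prompt pair ("Good photo." vs. "Bad photo."), taking the probability of the "good" prompt as the quality score. Below is a minimal sketch of that idea using the openai/CLIP package; it illustrates the metric's logic as described in the paper, not this PR's exact encoder modifications or the final piq API.

```python
import clip  # pip install git+https://github.com/openai/CLIP.git
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Antonym prompt pair from the CLIP-IQA paper.
prompts = clip.tokenize(["Good photo.", "Bad photo."]).to(device)

@torch.no_grad()
def clip_iqa_score(image: torch.Tensor) -> torch.Tensor:
    """Return P("Good photo.") as the quality score for a preprocessed image batch."""
    image_features = model.encode_image(image)
    text_features = model.encode_text(prompts)
    # Cosine similarity via L2-normalized embeddings.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    logits = 100.0 * image_features @ text_features.t()  # fixed CLIP-style logit scale
    return logits.softmax(dim=-1)[..., 0]  # probability of the "good" prompt
```

Typical usage would be `clip_iqa_score(preprocess(img).unsqueeze(0).to(device))` for a PIL image `img`; higher values indicate better perceived quality.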
Proposed Changes
results_benchmark.py
Some decisions that may be questioned in the future