about test dataset #3
Hi.
Yes. For a fair evaluation, we aggregate these three datasets to form the full real dataset, and combine it with every fake dataset for testing.
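The pairing described above can be sketched as follows. This is an illustrative snippet, not the repository's actual code; the dataset and method names inside are placeholders.

```python
# Sketch of the evaluation pairing described in the answer above:
# the three real test sets are merged once, and that merged real set is
# then combined with each fake set. All names/paths here are hypothetical.
real_sets = {"VC1": ["vc1_vid"], "Zscope": ["zs_vid"], "OSora": ["os_vid"]}
fake_sets = {"method_A": ["a_vid"], "method_B": ["b_vid"]}  # one per generator

# Aggregate all real videos into a single list.
all_real = [v for videos in real_sets.values() for v in videos]

# Each fake dataset is evaluated against the same aggregated real set.
test_pairs = {name: {"real": all_real, "fake": fakes}
              for name, fakes in fake_sets.items()}

for name, pair in test_pairs.items():
    print(name, len(pair["real"]), "real vs", len(pair["fake"]), "fake")
```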
Hi! The 'stop_count' parameter is equivalent to 'sample_size' in the test bash script. If you set it to a positive value, only the first {stop_count} videos are evaluated for each class. For a precise evaluation, we set it to '-1' to evaluate all videos. To save time, we recommend setting this parameter to an appropriate positive number (e.g. 500) to get a rough result.
I hope this message is helpful to you.
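The `stop_count` behavior described above can be sketched like this. This is a minimal illustration of the semantics only; the function name and data layout are assumptions, not the repository's API.

```python
# Hypothetical sketch of the `stop_count` / `sample_size` semantics:
# a positive value caps the number of videos evaluated per class,
# while -1 means "evaluate everything".
def select_videos(videos_by_class, stop_count):
    """Return at most `stop_count` videos per class; -1 keeps all."""
    selected = {}
    for cls, videos in videos_by_class.items():
        selected[cls] = videos if stop_count < 0 else videos[:stop_count]
    return selected

videos = {"real": ["r1", "r2", "r3"], "fake": ["f1", "f2"]}
print(select_videos(videos, 2))   # rough result: first 2 videos per class
print(select_videos(videos, -1))  # precise result: all videos
```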
Hi. My device is a single RTX 3090. May I ask how long it takes you to test a single dataset?
Hi. I think the inference time may be several hours for a single dataset, depending on the number of videos.
Hi, I want to confirm: the real test data come from three datasets (VC1, Zscope, OSora). Does this mean testing is performed on every fake test dataset from the diffusion generative methods, together with these three real test datasets?