The results of validation/test accuracy in NATS-Bench paper #38

Closed · Tommy787576 opened this issue Jan 27, 2022 · 4 comments
Labels: question (Further information is requested)
Tommy787576 commented Jan 27, 2022

Hi! I want to get the validation and test accuracy as in Table 4 of the "NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size" paper. I just want to check whether the following commands are correct:

After finishing the architecture search (I'm studying the weight-sharing approach), I get the genotype. Then I get the arch_index via:
arch_index = api.query_index_by_arch('......genotype here......')
For CIFAR-10 validation accuracy:
info = api.get_more_info(arch_index, 'cifar10-valid', hp=200)
For CIFAR-10 test accuracy:
info = api.get_more_info(arch_index, 'cifar10', hp=200)
For CIFAR-100 validation/test accuracy:
info = api.get_more_info(arch_index, 'cifar100', hp=200)
For ImageNet16-120 validation/test accuracy:
info = api.get_more_info(arch_index, 'ImageNet16-120', hp=200)
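A minimal end-to-end sketch of the queries above, assuming the nats_bench package's create()/get_more_info() interface; recent versions expect hp as the string '200', and the exact accuracy keys in the returned dict may differ across versions, so treat this as illustrative rather than the paper's own script:

```python
# Sketch only: assumes the NATS-Bench topology benchmark file is available locally.
from nats_bench import create

api = create(None, 'tss', fast_mode=True, verbose=False)  # 'tss' = topology search space

# Replace the placeholder with the searched genotype string.
arch_index = api.query_index_by_arch('......genotype here......')

for dataset in ('cifar10-valid', 'cifar10', 'cifar100', 'ImageNet16-120'):
    # hp='200' -> full training schedule; is_random=False -> average over seeds.
    info = api.get_more_info(arch_index, dataset, hp='200', is_random=False)
    print(dataset, {k: v for k, v in info.items() if 'accuracy' in k})
```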

The following are some points I want to check:

  1. Do the CIFAR-10 test accuracy results use the train + valid set for training and the test set for testing? In other words, should I use 'cifar10' instead of 'cifar10-valid' to get the test accuracy?
  2. What does valtest-accuracy mean for CIFAR-100 and ImageNet16-120?
  3. For ImageNet16-120, I get an architecture whose validation accuracy is higher than the "Optimal" values reported in the paper. Why?
    (screenshot of the queried results omitted)

Great Thanks!

@D-X-Y self-assigned this Jan 27, 2022
@D-X-Y added the question (Further information is requested) label Jan 27, 2022
@D-X-Y (Owner) commented Jan 28, 2022

Thanks for your interest in NATS-Bench.

1. Yes, please use 'cifar10' to get the test accuracy.

2. valtest-accuracy means the accuracy on the union of the validation and test sets. Note that the validation and test sets here follow the split strategy in our paper, which is different from the original CIFAR-100 setting.

3. Possibly because you are using is_random=True, which randomly selects one training seed. When we report the "Optimal" values, we use the average results over all seeds.
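To illustrate point 3, a small sketch, assuming is_random=True picks one training seed while is_random=False returns the seed-averaged statistics (api and arch_index as in the snippet earlier in this thread):

```python
# One randomly-seeded run can exceed the seed-averaged "Optimal" value.
single_seed = api.get_more_info(arch_index, 'ImageNet16-120', hp='200', is_random=True)
averaged = api.get_more_info(arch_index, 'ImageNet16-120', hp='200', is_random=False)
print('one random seed   :', single_seed['valid-accuracy'])
print('average over seeds:', averaged['valid-accuracy'])
```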

@Tommy787576 (Author)

Hi, thank you for the quick reply!
So I should set is_random=False to get the average results over all seeds. Am I correct?

@D-X-Y (Owner) commented Jan 28, 2022

Yes, when you use is_random=False, you get the average results.

If you are benchmarking your own NAS algorithm, we also suggest using the simulate_train_eval API, as in our examples: https://github.com/D-X-Y/AutoDL-Projects/blob/main/exps/NATS-algos/regularized_ea.py#L144
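A rough sketch of that usage pattern, assuming simulate_train_eval returns (validation_accuracy, latency, time_cost, total_time_cost) with the total cost accumulated internally, as in recent nats_bench versions; propose_architecture() is a hypothetical stand-in for whatever your search algorithm does, and api is the benchmark handle from the earlier snippet:

```python
# Sketch: simulated search loop under a (hypothetical) time budget, in seconds.
TIME_BUDGET = 20000
total_time = 0.0
while total_time < TIME_BUDGET:
    arch = propose_architecture()  # hypothetical: your NAS algorithm's next candidate
    # hp='12' simulates the cheap 12-epoch proxy typically used during search.
    acc, latency, time_cost, total_time = api.simulate_train_eval(arch, 'cifar10-valid', hp='12')
    # ...feed `acc` back into your search algorithm...
```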

@Tommy787576 (Author)

Ok. Thank you so much!
Hope you have a wonderful day!

@D-X-Y pinned this issue Mar 5, 2022
@D-X-Y mentioned this issue Mar 5, 2022