This repository contains the code for the paper NPENAS: Neural Predictor Guided Evolution for Neural Architecture Search.
If you use this code, please cite our paper:
```bibtex
@article{Wei2020NPENASNP,
  title={NPENAS: Neural Predictor Guided Evolution for Neural Architecture Search},
  author={Chen Wei and Chuang Niu and Yiping Tang and Ji-min Liang},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.12857}
}
```
- Python 3.7
- PyTorch 1.3
- TensorFlow 1.14.0
- ptflops: `pip install --upgrade git+https://github.com/sovrasov/flops-counter.pytorch.git`
- torch-scatter: `pip install torch-scatter==1.4.0`
- torch-sparse: `pip install torch-sparse==0.4.3`
- torch-cluster: `pip install torch-cluster==1.4.5`
- torch-spline-conv: `pip install torch-spline-conv==1.1.1`
- Ubuntu 18.04
- CUDA 10.0
- cuDNN 7.5.1
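After installing the dependencies, the following minimal sketch (it only imports the packages listed above and prints their versions; nothing in it is specific to this repository) can help confirm the environment before launching a search:

```python
# Quick environment check: import the main dependencies and print their versions.
import torch
import tensorflow as tf
import torch_scatter
import torch_sparse
import torch_cluster       # imported only to confirm installation
import torch_spline_conv   # imported only to confirm installation
import ptflops             # imported only to confirm installation

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("tensorflow:", tf.__version__)
print("torch_scatter:", torch_scatter.__version__)
print("torch_sparse:", torch_sparse.__version__)
```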
```bash
git clone https://github.com/auroua/NPENASv1
cd NPENASv1
```
1. Download the NAS-Bench-101 dataset first. We only use the `nasbench_only108.tfrecord` file.
2. Modify the `tf_records_path` variable in `nas_lib/config.py` to the absolute path of `nasbench_only108.tfrecord` (a quick sanity-check sketch for loading this file appears after the analysis commands below).
3. You can test the default sampling pipeline by running the following command. Change `save_dir` to your own directory before running.
```bash
# gpus: the number of gpus used to execute searching.
# save_dir: the output path.
python train_multiple_gpus_close_domain.py --trials 600 --search_budget 150 --search_space nasbench_case1 --algo_params nasbench101_case1 --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_101/ --comparison_type algorithm --record_full_data F
```
4. You can test the new sampling pipeline by running the following command. Change `save_dir` to your own directory before running.
```bash
# gpus: the number of gpus used to execute searching.
# save_dir: the output path.
python train_multiple_gpus_close_domain.py --trials 600 --search_budget 150 --search_space nasbench_case2 --algo_params nasbench101_case2 --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_101/ --comparison_type algorithm --record_full_data F
```
5. Run the following command to visualize the comparison of algorithms. Change `save_dir` to the `save_dir` used in step 3 or 4.
```bash
python tools_close_domain/visualize_results.py --search_space nasbench_101 --draw_type ERRORBAR --save_dir /home/albert_wei/Disk_A/train_output_npenas/close_domain_case1/
```
This visualizes the results of comparing algorithms on the NAS-Bench-101 search space with the new sampling pipeline.
- Scaling Factor Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 150 --search_space nasbench_case2 --algo_params nasbench101_case2 --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_101/ --comparison_type scalar_compare --record_full_data T --record_kt T
```
- ReLU CELU Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 150 --search_space nasbench_case2 --algo_params nasbench101_case2 --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_101/ --comparison_type relu_celu --record_full_data T --record_kt T --relu_celu_comparison_algo_type NPENAS_NP
```
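To confirm that the `nasbench_only108.tfrecord` file configured in step 2 is readable, a minimal sketch using the official `nasbench` API is shown below. It is independent of this repository's pipeline, and the example cell is the one from the NAS-Bench-101 documentation:

```python
from nasbench import api

# Path must match the tf_records_path set in nas_lib/config.py.
nasbench = api.NASBench('/absolute/path/to/nasbench_only108.tfrecord')

# Example cell from the NAS-Bench-101 documentation: a 7-node DAG with
# an upper-triangular adjacency matrix and an operation label per node.
spec = api.ModelSpec(
    matrix=[[0, 1, 1, 1, 0, 1, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 0, 0]],
    ops=['input', 'conv1x1-bn-relu', 'conv3x3-bn-relu', 'conv3x3-bn-relu',
         'conv3x3-bn-relu', 'maxpool3x3', 'output'])

# Query the tabular benchmark for this architecture.
data = nasbench.query(spec)
print(data['validation_accuracy'], data['test_accuracy'])
```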
For the NAS-Bench-201 search space, we only compared algorithms using the new sampling pipeline.
1. Download the NAS-Bench-201 dataset first. In this experiment we use the NAS-Bench-201 dataset with version `v1_1-096897`; the file name is `NAS-Bench-201-v1_1-096897.pth` (a quick sanity-check sketch for loading this file appears after the analysis commands below).
2. Modify the `nas_bench_201_path` variable in `nas_lib/config.py` to the absolute path of `NAS-Bench-201-v1_1-096897.pth`.
3. The NAS-Bench-201 dataset is large, and it encodes operations on edges and features on nodes, so the dataset has to be converted before it can be used.
4. Modify the `nas_bench_201_converted_path` variable in `nas_lib/config.py` to the path that stores the converted NAS-Bench-201 dataset.
5. Run the following command to perform the conversion. This step is memory-intensive; the machine we used has 32 GB of RAM.
```bash
# dataset: the dataset used to train the architectures in NASBench-201, choices: ['cifar10-valid', 'cifar100', 'ImageNet16-120']
python tools_close_domain/train_init_dataset.py --dataset cifar10-valid
```
6. You can run the following command to compare algorithms on the NAS-Bench-201 dataset. Change `save_dir` to your own directory before running.
```bash
# gpus: the number of gpus used to execute searching.
# save_dir: the output path.
python train_multiple_gpus_close_domain.py --trials 600 --search_budget 100 --search_space nasbench_201 --algo_params nasbench_201 --gpus 1 --multiprocessing-distributed False --save_dir /home/albert_wei/Disk_A/train_output_npenas/npenas_201/ --comparison_type algorithm --record_full_data F --dataset cifar100
```
7. Run the following command to visualize the comparison of algorithms. Change `save_dir` to the `save_dir` used in step 6.
```bash
python tools_close_domain/visualize_results.py --search_space nasbench_201 --save_dir /home/albert_wei/Disk_A/train_output_npenas/npenas_201/ --draw_type ERRORBAR
```
- Scaling Factor Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 100 --search_space nasbench_201 --algo_params nasbench_201 --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_201/ --comparison_type scalar_compare --record_full_data T --record_kt T --dataset cifar100
```
- ReLU CELU Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 100 --search_space nasbench_201 --algo_params nasbench_201 --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_201/ --comparison_type relu_celu --record_full_data T --record_kt T --relu_celu_comparison_algo_type NPENAS_NP --dataset cifar100
```
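To confirm that the downloaded `NAS-Bench-201-v1_1-096897.pth` file is readable before running the conversion in step 5, a minimal sketch using the official `nas_201_api` package (not part of this repository's pipeline) is:

```python
from nas_201_api import NASBench201API as API

# Path must match the nas_bench_201_path set in nas_lib/config.py.
api = API('/absolute/path/to/NAS-Bench-201-v1_1-096897.pth', verbose=False)

print('number of architectures:', len(api))   # NAS-Bench-201 contains 15625 cells
print('architecture 0:', api.arch(0))         # architecture string of index 0
```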
For the NAS-Bench-NLP search space, we only compared algorithms using the new sampling pipeline.
1. Download the NAS-Bench-NLP dataset first.
2. Modify the `nas_bench_nlp_path` variable in `nas_lib/config.py` to the directory that stores the NAS-Bench-NLP dataset.
3. You can run the following command to compare algorithms on the NAS-Bench-NLP dataset. Change `save_dir` to your own directory before running.
```bash
# gpus: the number of gpus used to execute searching.
# save_dir: the output path.
python train_multiple_gpus_close_domain.py --trials 600 --search_budget 100 --search_space nasbench_nlp --algo_params nasbench_nlp --gpus 1 --multiprocessing-distributed False --save_dir /home/albert_wei/Disk_A/train_output_npenas/nasbench_nlp/ --comparison_type algorithm --record_full_data F
```
4. Run the following command to visualize the comparison of algorithms. Change `save_dir` to the `save_dir` used in step 3.
```bash
python tools_close_domain/visualize_results.py --search_space nasbench_nlp --save_dir /home/albert_wei/Disk_A/train_output_npenas/nasbench_nlp/ --draw_type ERRORBAR
```
- Scaling Factor Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 100 --search_space nasbench_nlp --algo_params nasbench_nlp --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/nasbench_nlp/ --comparison_type scalar_compare --record_full_data T --record_kt T
```
- ReLU CELU Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 100 --search_space nasbench_nlp --algo_params nasbench_nlp --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/nasbench_nlp/ --comparison_type relu_celu --record_full_data T --record_kt T --relu_celu_comparison_algo_type NPENAS_BO
```
For the NAS-Bench-ASR search space, we only compared algorithms using the new sampling pipeline.
1. Download the NAS-Bench-ASR dataset first.
2. Modify the `nas_bench_asr_path` variable in `nas_lib/config.py` to the directory that stores the NAS-Bench-ASR dataset.
3. You can run the following command to compare algorithms on the NAS-Bench-ASR dataset. Change `save_dir` to your own directory before running.
```bash
# gpus: the number of gpus used to execute searching.
# save_dir: the output path.
python train_multiple_gpus_close_domain.py --trials 600 --search_budget 100 --search_space nasbench_asr --algo_params nasbench_asr --gpus 1 --multiprocessing-distributed False --save_dir /home/albert_wei/Disk_A/train_output_npenas/npenas_asr/ --comparison_type algorithm --record_full_data F
```
4. Run the following command to visualize the comparison of algorithms. Change `save_dir` to the `save_dir` used in step 3.
```bash
python tools_close_domain/visualize_results.py --search_space nasbench_asr --save_dir /home/albert_wei/Disk_A/train_output_npenas/npenas_asr/ --draw_type ERRORBAR
```
- Scaling Factor Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 100 --search_space nasbench_asr --algo_params nasbench_asr --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_asr/ --comparison_type scalar_compare --record_full_data T --record_kt T
```
- ReLU CELU Analysis
```bash
python train_multiple_gpus_close_domain.py --trials 200 --search_budget 100 --search_space nasbench_asr --algo_params nasbench_asr --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_2021/npenas_asr/ --comparison_type relu_celu --record_full_data T --record_kt T --relu_celu_comparison_algo_type NPENAS_BO
```
1. Run the following command to search for architectures in the DARTS search space with the NPENAS-BO algorithm. Change `save_dir` to your own directory before running.
```bash
python train_multiple_gpus_open_domain.py --gpus 1 --algorithm gin_uncertainty_predictor --budget 150 --save_dir /home/albert_wei/Disk_A/train_output_npenas/npenas_open_domain_darts_1/
```
2. Run the following command to search for architectures in the DARTS search space with the NPENAS-NP algorithm. Change `save_dir` to your own directory before running.
```bash
python train_multiple_gpus_open_domain.py --gpus 1 --algorithm gin_predictor --budget 100 --save_dir /home/albert_wei/Disk_A/train_output_npenas/npenas_open_domain_darts_2/
```
3. Run the following command to rank the searched architectures and select the best one to retrain. Replace `model_path` with the actual output path of the searched architectures.
```bash
python tools_open_domain/rank_searched_darts_arch.py --model_path /home/albert_wei/Disk_A/train_output_npenas/npenas_open_domain_darts_2/model_pkl/
```
4. Retrain the selected architecture with the following command.
```bash
# model_name: the id (file name) of the selected architecture
# save_dir: the output path
python tools_open_domain/train_darts_cifar10.py --seed 1 --model_name ace85b6b1618a4e0ebdc0db40934f2982ac57a34ec9f31dcd8d209b3855dce1f.pkl --save_dir /home/albert_wei/Disk_A/train_output_npenas/npenas_open_domain_darts_2/
```
5. Test the retrained architecture with the following command.
```bash
# model_name: the id (file name) of the selected architecture
# save_dir: the save_dir used in step 4
# model_path: the save_dir used in step 1 or 2
python tools_open_domain/test_darts_cifar10.py --model_name xxxx --save_dir xxxx --model_path xxxx
```
6. Run the following command to visualize the normal cell and the reduction cell of the best searched architecture. A sketch of the DARTS-style genotype structure that these scripts work with follows this list.
```bash
python tools_open_domain/visualize_results.py --model_path xxxx --model_name xxxx
```
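The searched architectures are saved as pickle files and handled by the scripts above. Their exact contents are defined by this repository, so the following is only an illustrative sketch of the standard DARTS `Genotype` structure, with a made-up cell rather than any searched result:

```python
from collections import namedtuple

# Standard DARTS genotype container: each cell is a list of (operation, input-node) pairs,
# and *_concat lists the intermediate nodes whose outputs are concatenated.
Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')

# Purely illustrative cell using standard DARTS operation names.
example = Genotype(
    normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_3x3', 1),
            ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 2), ('dil_conv_3x3', 3)],
    normal_concat=[2, 3, 4, 5],
    reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 1),
            ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('max_pool_3x3', 1)],
    reduce_concat=[2, 3, 4, 5])

print(example.normal)
```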
If you encounter the following error, please refer to this link: possible deadlock in dataloader.
```
RuntimeError: unable to open shared memory object </torch_31124_2696026929> in read-write mode
```
Visualization of the normal cell and the reduction cell searched by NPENAS-NP; this architecture achieves a test error of 2.44%.
You can download the best architecture's genotype file from the genotype link (extract code itw9) and the retrained weight file from the ckpt link (extract code t9xq).
You can use the command in step 5 to verify the model.
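If you download the retrained weights, the sketch below shows one way to inspect the checkpoint. It assumes a standard PyTorch checkpoint saved with `torch.save`, so the file name and key layout are illustrative rather than the exact format written by `tools_open_domain/train_darts_cifar10.py`:

```python
import torch

# Load the downloaded checkpoint on CPU and list its top-level entries.
# The path below is a placeholder; point it at the file you downloaded.
ckpt = torch.load('/path/to/downloaded_ckpt.pt', map_location='cpu')

if isinstance(ckpt, dict):
    for key in ckpt:
        print(key)          # e.g. model weights, optimizer state, epoch, ...
else:
    print(type(ckpt))       # the file may also store a full model object
```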
There are two mutation strategies: one-parent-one-child and one-parent-multiple-children; a conceptual sketch of the difference is given below. Change `save_dir` to your own directory before running.
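A conceptual sketch of the difference between the two strategies, using a placeholder `mutate` function (the real mutation operators are search-space specific and live in this repository's search code):

```python
import random

def mutate(arch):
    # Placeholder mutation: the actual operator depends on the search space encoding.
    return arch + (random.random(),)

def one_parent_one_child(parent):
    """Each selected parent produces exactly one mutated child."""
    return [mutate(parent)]

def one_parent_multiple_children(parent, num_children=5):
    """Each selected parent produces several mutated children,
    giving the predictor a larger candidate pool to choose from."""
    return [mutate(parent) for _ in range(num_children)]

parent = (0.1, 0.2)
print(one_parent_one_child(parent))
print(one_parent_multiple_children(parent, num_children=3))
```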
1. Run the following command.
```bash
python train_multiple_gpus_close_domain.py --trials 600 --search_budget 150 --search_space nasbench_case1 --algo_params evaluation_compare --gpus 1 --save_dir /home/albert_wei/Disk_A/train_output_npenas/evolutionary_compare/
```
2. Visualize the results. Set `save_dir` to the `save_dir` used in step 1.
```bash
python tools_close_domain/visualize_results.py --search_space evaluation_compare --draw_type ERRORBAR --save_dir /home/albert_wei/Disk_A/train_output_npenas/evolutionary_compare/
```
There are two different architecture sampling pipelines: the default sampling pipeline and the new sampling pipeline. Run the following command to compare the path distributions of the two sampling pipelines:
```bash
python tools_close_domain/visualize_sample_distribution.py --sample_num 5000 --seed 98765
```
- Compare the four methods mentioned in the paper.
```bash
python tools_close_domain/prediction_compare.py --trials 300 --seed 434 --search_space nasbench_case1 --save_path /home/albert_wei/Disk_A/train_output_npenas/prediction_compare/prediction_compare.pkl
```
- Parse the results generated by the above step. A generic sketch for inspecting the resulting pickle file is given after this list.
```bash
python tools_close_domain/prediction_compare_parse.py --save_path /home/albert_wei/Disk_A/train_output_npenas/prediction_compare/prediction_compare.pkl
```
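If you want to look at the raw results file before parsing it, the generic sketch below loads the pickle and reports its top-level structure; the actual layout is defined by `tools_close_domain/prediction_compare.py`, so treat the printout as a starting point only:

```python
import pickle

# Load the results pickle written by prediction_compare.py; the path is the
# example --save_path from above, replace it with your own.
with open('/home/albert_wei/Disk_A/train_output_npenas/prediction_compare/prediction_compare.pkl', 'rb') as f:
    results = pickle.load(f)

print(type(results))
if isinstance(results, dict):
    print(list(results.keys()))
elif isinstance(results, (list, tuple)):
    print(len(results))
```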
Experiment | Visualization script* | Link | Password
---|---|---|---
npenas close domain search | `tools_close_domain/visualize_results.py` | link | k3iq
scaling factor analysis | `tools_close_domain/visualize_results_scaling_factor.py` | link | qd1s
relu celu comparison | `tools_close_domain/visualize_results_relu_celu.py` | link | pwvk
search space correlation analysis | `tools_ss_analysis/search_space_analysis_correlation.py` | link | ebrh
search space distance distribution analysis | `tools_ss_analysis/search_space_analysis_dist_distribution.py` | link | h63o
statistical testing | `tools_ss_analysis/stats_ttest.py` | link | h63o
mutation strategy analysis | `tools_close_domain/visualizee_results_nasbench_nlp_mutation_strategy.py` | link | pn7s

\* Modify the parameters of the visualization script to view the results.
- bananas
- naszilla
- pytorch_geometric
- maskrcnn-benchmark
- detectron2
- NAS-Bench-101
- NAS-Bench-201
- darts
- AlphaX
- NAS-Bench-NLP
- NAS-Bench-ASR
Chen Wei
email: [email protected]