From 6fad9f219dd472c0b35242d7c3d6918a808b16b8 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Mon, 21 Nov 2022 17:52:28 +0900 Subject: [PATCH 1/6] updated documents Signed-off-by: Yunchu Lee --- .gitignore | 2 +- QUICK_START_GUIDE.md | 462 ++++++++++++++---------------- README.md | 81 ++++-- ote_cli/ote_cli/tools/demo.py | 11 +- ote_cli/ote_cli/tools/deploy.py | 2 +- ote_cli/ote_cli/tools/eval.py | 9 +- ote_cli/ote_cli/tools/export.py | 2 +- ote_cli/ote_cli/tools/optimize.py | 12 +- ote_cli/ote_cli/tools/train.py | 11 + 9 files changed, 330 insertions(+), 262 deletions(-) diff --git a/.gitignore b/.gitignore index b923b765711..1a079c76fc4 100644 --- a/.gitignore +++ b/.gitignore @@ -4,7 +4,7 @@ __pycache__ .vscode/ *.iml -venv +*venv*/ env .env .tox diff --git a/QUICK_START_GUIDE.md b/QUICK_START_GUIDE.md index 40d04ef845e..7c76727c64d 100644 --- a/QUICK_START_GUIDE.md +++ b/QUICK_START_GUIDE.md @@ -2,135 +2,140 @@ ## Prerequisites -- Ubuntu 18.04 / 20.04 -- Python 3.8+ -- for training on GPU: [CUDA Toolkit 11.1](https://developer.nvidia.com/cuda-11.1.1-download-archive) +Current version of project was tested under following environments -**Note:** If using CUDA, make sure you are using a proper driver version. To do so, use `ls -la /usr/local | grep cuda`. If necessary, [install CUDA 11.1](https://developer.nvidia.com/cuda-11.1.0-download-archive?target_os=Linux) and select it with `export CUDA_HOME=/usr/local/cuda-11.1`. +- Ubuntu 20.04 +- Python 3.8.x +- (Opional) To use the NVidia GPU for the training: [CUDA Toolkit 11.1](https://developer.nvidia.com/cuda-11.1.1-download-archive) + +> **_Note:_** If using CUDA, make sure you are using a proper driver version. To do so, use `ls -la /usr/local | grep cuda`. If necessary, [install CUDA 11.1](https://developer.nvidia.com/cuda-11.1.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=2004&target_type=runfilelocal) (requires 'sudo' permission) and select it with `export CUDA_HOME=/usr/local/cuda-11.1`. ## Setup OpenVINO™ Training Extensions 1. Clone the training_extensions repository with the following commands: - ``` - git clone https://github.com/openvinotoolkit/training_extensions.git - cd training_extensions - git checkout develop - git submodule update --init --recursive + ```bash + $ git clone https://github.com/openvinotoolkit/training_extensions.git + $ cd training_extensions + $ git checkout develop + $ git submodule update --init --recursive ``` -2. Install prerequisites with: +1. Install prerequisites with: - ``` - sudo apt-get install python3-pip python3-venv + ```bash + $ sudo apt-get install python3-pip python3-venv + # verify your python version + $ python3 --version; pip3 --version; virtualenv --version + Python 3.8.10 + pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8) + virtualenv 20.0.17 from /usr/lib/python3/dist-packages/virtualenv/__init__.py ``` - Although they are not required, You may also want to use Jupyter notebooks or OTE CLI tools: + (Optional) You may also want to use Jupyter notebooks or OTE CLI tools: ``` - pip3 install notebook; cd ote_cli/notebooks/; jupyter notebook + $ pip3 install notebook; cd ote_cli/notebooks/; jupyter notebook ``` -3. Search for available scripts that create python virtual environments for different task types: +1. 
There available scripts that create python virtual environments for different task types:

   ```bash
-   find external/ -name init_venv.sh
+   $ find external/ -name init_venv.sh
   ```

-   Sample output:
+   > **_Note:_** The following scripts are valid for the current version of the project.

   ```
-   external/mmdetection/init_venv.sh
-   external/mmsegmentation/init_venv.sh
-   external/deep-object-reid/init_venv.sh
+   external/model-preparation-algorithm/init_venv.sh
+   external/anomaly/init_venv.sh
   ```

-   Each line in the output gives an `init_venv.sh` script that creates a virtual environment
-   for the corresponding task type.
-
-4. Choose a task type, for example,`external/mmdetection` for Object Detection.
+   - `external/model-preparation-algorithm/init_venv.sh` can be used to create a virtual environment for the following task types.

-   ```bash
-   TASK_ALGO_DIR=./external/mmdetection/
-   ```
+   - Classification
+   - Detection
+   - Segmentation

-   Note that the variable `TASK_ALGO_DIR` is set in this example for simplicity and will not be used in scripts.
+   - `external/anomaly/init_venv.sh` can be used to create a virtual environment for the following task types.
+     - Anomaly-classification
+     - Anomaly-detection
+     - Anomaly-segmentation

-5. Create and activate a virtual environment for the chosen task, then install the `ote_cli`.
-   Note that the virtual environment directory may be created anywhere on your system.
-   The `./cur_task_venv` is just an example used here for convenience.
+1. Create and activate a virtual environment for the chosen task, then install the `ote_cli`. The following example shows how to create a virtual environment in the `.venv_mpa` folder of your current directory for a detection task.

   ```bash
-   bash $TASK_ALGO_DIR/init_venv.sh ./cur_task_venv python3.8
-   source ./cur_task_venv/bin/activate
-   pip3 install -e ote_cli/ -c $TASK_ALGO_DIR/constraints.txt
+   # create virtual env.
+   $ external/model-preparation-algorithm/init_venv.sh .venv_mpa
+   # activate virtual env.
+   $ source .venv_mpa/bin/activate
+   # install 'ote_cli' to the activated virtual env.
+   (mpa)...$ pip3 install -e ote_cli/ -c external/model-preparation-algorithm/constraints.txt
   ```

-   Note that `python3.8` is pointed as the second parameter of the script
-   `init_venv.sh` -- it is the version of python that should be used. You can
-   use any `python3.8+` version here if it is installed on your system.
-
-   Also note that during installation of `ote_cli` the constraint file
-   from the chosen task folder is used to avoid breaking constraints
-   for the OTE task.
+   > **_Note:_** During installation of `ote_cli`, the constraint file
+   > from the chosen backend folder is used to avoid breaking its constraints.

-6. When `ote_cli` is installed in the virtual environment, you can use the
-   `ote` command line interface to perform various actions for templates related to the chosen task type, such as running, training, evaluating, exporting, etc.
+1. Once `ote_cli` is installed in the virtual environment, you can use the
+   `ote` command line interface in that virtual environment to run the commands for templates of the chosen task type, as described in [OTE CLI commands](#ote-cli-commands).

## OTE CLI commands

-### ote find
+### Find

-`ote find` lists model templates available for the given virtual environment.
+`find` lists model templates available for the given virtual environment.
```
-ote find --root $TASK_ALGO_DIR
-```
-
-Output for the mmdetection used in the above example looks as follows:
+usage: ote find [-h] [--root ROOT] [--task_type TASK_TYPE] [--experimental]
+optional arguments:
+  -h, --help show this help message and exit
+  --root ROOT A root dir where templates should be searched.
+  --task_type TASK_TYPE filter with the task type (e.g., classification)
+  --experimental
```
-- id: Custom_Object_Detection_Gen3_VFNet
-  name: VFNet
-  path: ./external/mmdetection/configs/ote/custom-object-detection/gen3_resnet50_VFNet/template.yaml
+
+```bash
+# example to find templates for the detection task
+(mpa) ...$ ote find --task_type detection
+- id: Custom_Object_Detection_Gen3_SSD
+  name: SSD
+  path: /local/yunchule/workspace/training_extensions/external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml
+  task_type: DETECTION
+- id: Custom_Object_Detection_YOLOX
+  name: YOLOX
+  path: /local/yunchule/workspace/training_extensions/external/model-preparation-algorithm/configs/detection/cspdarknet_yolox_cls_incr/template.yaml
  task_type: DETECTION
- id: Custom_Object_Detection_Gen3_ATSS
  name: ATSS
-  path: ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml
+  path: /local/yunchule/workspace/training_extensions/external/model-preparation-algorithm/configs/detection/mobilenetv2_atss_cls_incr/template.yaml
  task_type: DETECTION
-- id: Custom_Object_Detection_Gen3_SSD
-  name: SSD
-  path: ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_SSD/template.yaml
-  task_type: DETECTION
-- ...
```

-### ote train
+### Training

-`ote train` trains a model (a particular model template) on a dataset and saves results in two files:
+`train` trains a model (a particular model template) on a dataset and saves results in two files:

- `weights.pth` - a model snapshot
- `label_schema.json` - a label schema used in training, created from a dataset

-These files can be used by other `ote` commands: `ote export`, `ote eval`, `ote demo`.
-
-With the `--help` command, you can list additional information, such as its parameters common to all model templates and model-specific hyper parameters.
+These files can be used by other commands: `export`, `eval`, and `demo`.

-#### common parameters
-
-command example:
+The `train` command requires `template` as a positional argument. It can be taken from the output of the `find` command above.

```
-ote train ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --help
+usage: ote train template
```

-output example:
+With the `--help` flag following `template`, you can list additional information, such as the parameters common to all model templates and the model-specific hyper-parameters.

-```
-usage: ote train [-h] --train-ann-files TRAIN_ANN_FILES --train-data-roots
-                 TRAIN_DATA_ROOTS --val-ann-files VAL_ANN_FILES
-                 --val-data-roots VAL_DATA_ROOTS [--load-weights LOAD_WEIGHTS]
-                 --save-model-to SAVE_MODEL_TO
+#### Common parameters
+
+```bash
+# command example to get the parameters common to any model template
+(mpa) ...$ ote train external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --help
+usage: ote train [-h] --train-ann-files TRAIN_ANN_FILES --train-data-roots TRAIN_DATA_ROOTS --val-ann-files VAL_ANN_FILES --val-data-roots VAL_DATA_ROOTS [--load-weights LOAD_WEIGHTS] --save-model-to SAVE_MODEL_TO
+                 [--enable-hpo] [--hpo-time-ratio HPO_TIME_RATIO]
                  template {params} ...
positional arguments: @@ -157,30 +162,21 @@ optional arguments: Expected ratio of total time to run HPO to time taken for full fine-tuning. ``` -#### model template-specific parameters +#### Model template-specific parameters command example: -``` -ote train ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml params --help -``` - -output example: - -``` -usage: ote train template params [-h] - [--learning_parameters.batch_size BATCH_SIZE] - [--learning_parameters.learning_rate LEARNING_RATE] - [--learning_parameters.learning_rate_warmup_iters LEARNING_RATE_WARMUP_ITERS] - [--learning_parameters.num_iters NUM_ITERS] - [--learning_parameters.enable_early_stopping ENABLE_EARLY_STOPPING] - [--learning_parameters.early_stop_patience EARLY_STOP_PATIENCE] - [--learning_parameters.early_stop_iteration_patience EARLY_STOP_ITERATION_PATIENCE] - [--postprocessing.confidence_threshold CONFIDENCE_THRESHOLD] - [--postprocessing.result_based_confidence_threshold RESULT_BASED_CONFIDENCE_THRESHOLD] - [--nncf_optimization.enable_quantization ENABLE_QUANTIZATION] - [--nncf_optimization.enable_pruning ENABLE_PRUNING] - [--nncf_optimization.maximal_accuracy_degradation MAXIMAL_ACCURACY_DEGRADATION] +```bash +# command example to get tamplate-specific parameters +(mpa) ...$ ote train external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml params --help +usage: ote train template params [-h] [--learning_parameters.batch_size BATCH_SIZE] [--learning_parameters.learning_rate LEARNING_RATE] [--learning_parameters.learning_rate_warmup_iters LEARNING_RATE_WARMUP_ITERS] + [--learning_parameters.num_iters NUM_ITERS] [--learning_parameters.enable_early_stopping ENABLE_EARLY_STOPPING] [--learning_parameters.early_stop_start EARLY_STOP_START] + [--learning_parameters.early_stop_patience EARLY_STOP_PATIENCE] [--learning_parameters.early_stop_iteration_patience EARLY_STOP_ITERATION_PATIENCE] + [--learning_parameters.use_adaptive_interval USE_ADAPTIVE_INTERVAL] [--postprocessing.confidence_threshold CONFIDENCE_THRESHOLD] + [--postprocessing.result_based_confidence_threshold RESULT_BASED_CONFIDENCE_THRESHOLD] [--nncf_optimization.enable_quantization ENABLE_QUANTIZATION] + [--nncf_optimization.enable_pruning ENABLE_PRUNING] [--nncf_optimization.pruning_supported PRUNING_SUPPORTED] [--tiling_parameters.enable_tiling ENABLE_TILING] + [--tiling_parameters.enable_adaptive_params ENABLE_ADAPTIVE_PARAMS] [--tiling_parameters.tile_size TILE_SIZE] [--tiling_parameters.tile_overlap TILE_OVERLAP] + [--tiling_parameters.tile_max_number TILE_MAX_NUMBER] optional arguments: -h, --help show this help message and exit @@ -193,97 +189,92 @@ optional arguments: --learning_parameters.learning_rate LEARNING_RATE header: Learning rate type: FLOAT - default_value: 0.008 + default_value: 0.01 max_value: 0.1 min_value: 1e-07 --learning_parameters.learning_rate_warmup_iters LEARNING_RATE_WARMUP_ITERS header: Number of iterations for learning rate warmup type: INTEGER - default_value: 200 + default_value: 3 max_value: 10000 min_value: 0 - --learning_parameters.num_iters NUM_ITERS - header: Number of training iterations - type: INTEGER - default_value: 300 - max_value: 100000 - min_value: 1 - --learning_parameters.enable_early_stopping ENABLE_EARLY_STOPPING - header: Enable early stopping of the training - type: BOOLEAN - default_value: True - --learning_parameters.early_stop_patience EARLY_STOP_PATIENCE - header: Patience for early stopping - type: INTEGER - 
default_value: 10 - max_value: 50 - min_value: 0 - --learning_parameters.early_stop_iteration_patience EARLY_STOP_ITERATION_PATIENCE - header: Iteration patience for early stopping - type: INTEGER - default_value: 0 - max_value: 1000 - min_value: 0 - --postprocessing.confidence_threshold CONFIDENCE_THRESHOLD - header: Confidence threshold - type: FLOAT - default_value: 0.35 - max_value: 1 - min_value: 0 - --postprocessing.result_based_confidence_threshold RESULT_BASED_CONFIDENCE_THRESHOLD - header: Result based confidence threshold - type: BOOLEAN - default_value: True - --nncf_optimization.enable_quantization ENABLE_QUANTIZATION - header: Enable quantization algorithm - type: BOOLEAN - default_value: True - --nncf_optimization.enable_pruning ENABLE_PRUNING - header: Enable filter pruning algorithm - type: BOOLEAN - default_value: False - --nncf_optimization.maximal_accuracy_degradation MAXIMAL_ACCURACY_DEGRADATION - header: Maximum accuracy degradation - type: FLOAT - default_value: 1.0 - max_value: 100.0 - min_value: 0.0 +... ``` -### ote optimize +#### Command example of the training -`ote optimize` optimizes a pre-trained model using NNCF or POT depending on the model format. +```bash +(mpa) ...$ ote train external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --train-ann-file data/airport/annotation_person_train.json --train-data-roots data/airport/train/ --val-ann-files data/airport/annotation_person_val.json --val-data-roots data/airport/val/ --save-model-to outputs +... -- NNCF optimization used for trained snapshots in a framework-specific format -- POT optimization used for models exported in the OpenVINO IR format - -For example: -Optimize a PyTorch model (.pth) with OpenVINO NNCF: +---------------iou_thr: 0.5--------------- -``` -ote optimize ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --load-weights weights.pth --save-model-to ./nncf_output --save-performance ./nncf_output/performance.json --train-ann-file ./data/car_tree_bug/annotations/instances_default.json --train-data-roots ./data/car_tree_bug/images --val-ann-file ./data/car_tree_bug/annotations/instances_default.json --val-data-roots ./data/car_tree_bug/images ++--------+-----+------+--------+-------+ +| class | gts | dets | recall | ap | ++--------+-----+------+--------+-------+ +| person | 0 | 2000 | 0.000 | 0.000 | ++--------+-----+------+--------+-------+ +| mAP | | | | 0.000 | ++--------+-----+------+--------+-------+ +2022-11-17 11:08:15,245 | INFO : run task done. +2022-11-17 11:08:15,318 | INFO : Inference completed +2022-11-17 11:08:15,319 | INFO : called evaluate() +2022-11-17 11:08:15,334 | INFO : F-measure after evaluation: 0.8809523809523808 +2022-11-17 11:08:15,334 | INFO : Evaluation completed +Performance(score: 0.8809523809523808, dashboard: (1 metric groups)) ``` -Optimize OpenVINO model (.bin or .xml) with OpenVINO POT: +### Exporting -``` -ote optimize ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --load-weights openvino.xml --save-model-to ./pot_output --save-performance ./pot_output/performance.json --train-ann-file ./data/car_tree_bug/annotations/instances_default.json --train-data-roots ./data/car_tree_bug/images --val-ann-file ./data/car_tree_bug/annotations/instances_default.json --val-data-roots ./data/car_tree_bug/images -``` +`export` exports a trained model to the OpenVINO format in order to efficiently run it on Intel hardware. 
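+
+> **_Note:_** As a quick sanity check outside of the `ote` tooling, the exported IR can be loaded directly with the OpenVINO Python API. The snippet below is a minimal sketch, assuming the `openvino` package is available in the active environment, that the IR was saved as `outputs/ov/openvino.xml` (as in the command example that follows), and that the model has a static input shape.
+
+```python
+import numpy as np
+from openvino.runtime import Core  # OpenVINO 2022.1+ API
+
+core = Core()
+# The .bin weights file is picked up automatically from next to the .xml.
+model = core.read_model("outputs/ov/openvino.xml")
+compiled = core.compile_model(model, "CPU")
+
+# One dummy inference confirms that the exported IR loads and executes.
+dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
+result = compiled.create_infer_request().infer({0: dummy})
+print(result[compiled.output(0)].shape)
+```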
-With the `--help` command, you can list additional information. +With the `--help` command, you can list additional information, such as its parameters common to all model templates: command example: +```bash +(mpa) ...$ ote export external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --help +usage: ote export [-h] --load-weights LOAD_WEIGHTS --save-model-to SAVE_MODEL_TO template + +positional arguments: + template + +optional arguments: + -h, --help show this help message and exit + --load-weights LOAD_WEIGHTS + Load weights from saved checkpoint for exporting + --save-model-to SAVE_MODEL_TO + Location where exported model will be stored. ``` -ote optimize ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --help + +#### Command example of the exporting + +The command below performs exporting to the [trained model](#command-example-of-the-training) `outputs/weights.pth` in previous section and save exported model to the `outputs/ov/` folder. + +```bash +(mpa) ...$ ote export external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --load-weights outputs/weights.pth --save-model-to outputs/ov +... +[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. +Find more information about API v2.0 and IR v11 at https://docs.openvino.ai +2022-11-21 15:40:06,534 | INFO : Exporting completed +2022-11-21 15:40:06,534 | INFO : run task done. +2022-11-21 15:40:06,538 | INFO : Exporting completed ``` -Output example: +### Optimization + +`optimize` optimizes a model using NNCF or POT depending on the model format. + +- NNCF optimization used for trained snapshots in a framework-specific format such as checkpoint (pth) file from Pytorch +- POT optimization used for models exported in the OpenVINO IR format + +With the `--help` command, you can list additional information. +command example: ``` -usage: ote optimize [-h] --train-ann-files TRAIN_ANN_FILES --train-data-roots TRAIN_DATA_ROOTS --val-ann-files - VAL_ANN_FILES --val-data-roots VAL_DATA_ROOTS --load-weights LOAD_WEIGHTS --save-model-to - SAVE_MODEL_TO [--aux-weights AUX_WEIGHTS] - template {params} ... +(mpa) ...$ ote optimize external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --help +usage: ote optimize [-h] --train-ann-files TRAIN_ANN_FILES --train-data-roots TRAIN_DATA_ROOTS --val-ann-files VAL_ANN_FILES --val-data-roots VAL_DATA_ROOTS --load-weights LOAD_WEIGHTS --save-model-to SAVE_MODEL_TO + [--save-performance SAVE_PERFORMANCE] + template {params} ... positional arguments: template @@ -301,31 +292,39 @@ optional arguments: --val-data-roots VAL_DATA_ROOTS Comma-separated paths to validation data folders. --load-weights LOAD_WEIGHTS - Load weights of trained model + Load weights of trained model (for NNCF) or exported OpenVINO model (for POT) --save-model-to SAVE_MODEL_TO Location where trained model will be stored. - --aux-weights AUX_WEIGHTS - Load weights of trained auxiliary model + --save-performance SAVE_PERFORMANCE + Path to a json file where computed performance will be stored. 
``` -### ote eval +#### Command example for optimizing a PyTorch model (.pth) with OpenVINO NNCF: -`ote eval` runs evaluation of a trained model on a particular dataset. - -With the `--help` command, you can list additional information, such as its parameters common to all model templates: -command example: +The command below performs optimization to the [trained model](#command-example-of-the-training) `outputs/weights.pth` in previous section and save optimized model to the `outputs/nncf` folder. +```bash +(mpa) ...$ ote optimize external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --train-ann-files data/airport/annotation_person_train.json --train-data-roots data/airport/train/ --val-ann-files data/airport/annotation_person_val.json --val-data-roots data/airport/val/ --load-weights outputs/weights.pth --save-model-to outputs/nncf --save-performance outputs/nncf/performance.json ``` -ote eval ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --help -``` -output example: +#### Command example for optimizing OpenVINO model (.xml) with OpenVINO POT: + +The command below performs optimization to the [exported model](#command-example-of-the-exporting) `outputs/ov/openvino.xml` in previous section and save optimized model to the `outputs/ov/pot` folder. +```bash +(mpa) ...$ ote optimize external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --train-ann-files data/airport/annotation_person_train.json --train-data-roots data/airport/train/ --val-ann-files data/airport/annotation_person_val.json --val-data-roots data/airport/val/ --load-weights outputs/ov/openvino.xml --save-model-to outputs/ov/pot --save-performance outputs/ov/pot/performance.json ``` -usage: ote eval [-h] --test-ann-files TEST_ANN_FILES --test-data-roots - TEST_DATA_ROOTS --load-weights LOAD_WEIGHTS - [--save-performance SAVE_PERFORMANCE] - template {params} ... + +### Evaluation + +`eval` runs evaluation of a model on the particular dataset. + +With the `--help` command, you can list additional information, such as its parameters common to all model templates: +command example: + +```bash +(mpa) yunchu@yunchu-desktop:~/workspace/training_extensions$ ote eval external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --help +usage: ote eval [-h] --test-ann-files TEST_ANN_FILES --test-data-roots TEST_DATA_ROOTS --load-weights LOAD_WEIGHTS [--save-performance SAVE_PERFORMANCE] template {params} ... positional arguments: template @@ -339,59 +338,50 @@ optional arguments: --test-data-roots TEST_DATA_ROOTS Comma-separated paths to test data folders. --load-weights LOAD_WEIGHTS - Load only weights from previously saved checkpoint + Load weights to run the evaluation. It could be a trained/optimized model or exported model. --save-performance SAVE_PERFORMANCE - Path to a json file where computed performance will be - stored. + Path to a json file where computed performance will be stored. ``` -### ote export +> **_Note_**: Work-In-Progress for `params` argument. -`ote export` exports a trained model to the OpenVINO format in order to efficiently run it on Intel hardware. 
+#### Command example of the evaluation -With the `--help` command, you can list additional information, such as its parameters common to all model templates: -command example: +The command below performs evaluation to the [trained model](#command-example-of-the-training) `outputs/weights.pth` in previous section and save result performance to the `outputs/performance.json` file. -``` -ote export ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --help -``` - -output example: +```bash +(mpa) ...$ ote eval external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --test-ann-files data/airport/annotation_person_val.json --test-data-roots data/airport/val/ --load-weights outputs/weights.pth --save-performance outputs/performance.json +... +[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10/10, 7.9 task/s, elapsed: 1s, ETA: 0s +---------------iou_thr: 0.5--------------- ++--------+-----+------+--------+-------+ +| class | gts | dets | recall | ap | ++--------+-----+------+--------+-------+ +| person | 0 | 2000 | 0.000 | 0.000 | ++--------+-----+------+--------+-------+ +| mAP | | | | 0.000 | ++--------+-----+------+--------+-------+ +2022-11-21 15:30:04,695 | INFO : run task done. +2022-11-21 15:30:04,734 | INFO : Inference completed +2022-11-21 15:30:04,734 | INFO : called evaluate() +2022-11-21 15:30:04,746 | INFO : F-measure after evaluation: 0.8799999999999999 +2022-11-21 15:30:04,746 | INFO : Evaluation completed +Performance(score: 0.8799999999999999, dashboard: (1 metric groups)) ``` -usage: ote export [-h] --load-weights LOAD_WEIGHTS --save-model-to - SAVE_MODEL_TO - template -positional arguments: - template +### Demonstrate -optional arguments: - -h, --help show this help message and exit - --load-weights LOAD_WEIGHTS - Load only weights from previously saved checkpoint - --save-model-to SAVE_MODEL_TO - Location where exported model will be stored. -``` +`demo` runs model inference on images, videos, or webcam streams in order to see how it works with user's data -### ote demo - -`ote demo` runs model inference on images, videos, or webcam streams in order to see how it works with user's data +> **_Note:_** `demo` command requires GUI backend to your system for displaying inference results. With the `--help` command, you can list additional information, such as its parameters common to all model templates: command example: -``` -ote demo ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --help -``` - -output example: - -``` -usage: ote demo [-h] -i INPUT --load-weights LOAD_WEIGHTS - [--fit-to-size FIT_TO_SIZE FIT_TO_SIZE] [--loop] - [--delay DELAY] [--display-perf] - template {params} ... +```bash +(mpa) ...$ ote demo external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --help +usage: ote demo [-h] -i INPUT --load-weights LOAD_WEIGHTS [--fit-to-size FIT_TO_SIZE FIT_TO_SIZE] [--loop] [--delay DELAY] [--display-perf] template {params} ... positional arguments: template @@ -401,40 +391,34 @@ positional arguments: optional arguments: -h, --help show this help message and exit -i INPUT, --input INPUT - Source of input data: images folder, image, webcam and - video. + Source of input data: images folder, image, webcam and video. --load-weights LOAD_WEIGHTS - Load only weights from previously saved checkpoint + Load weights to run the evaluation. It could be a trained/optimized model or exported model. 
--fit-to-size FIT_TO_SIZE FIT_TO_SIZE - Width and Height space-separated values. Fits - displayed images to window with specified Width and - Height. This options applies to result visualisation - only. + Width and Height space-separated values. Fits displayed images to window with specified Width and Height. This options applies to result visualisation only. --loop Enable reading the input in a loop. --delay DELAY Frame visualization time in ms. - --display-perf This option enables writing performance metrics on - displayed frame. These metrics take into account not - only model inference time, but also frame reading, - pre-processing and post-processing. + --display-perf This option enables writing performance metrics on displayed frame. These metrics take into account not only model inference time, but also frame reading, pre-processing and post-processing. ``` -### ote deploy +#### Command example of the demostration -`ote deploy` creates openvino.zip with a self-contained python package, a demo application, and an exported model. - -With the `--help` command, you can list additional information, such as its parameters common to all model templates: -command example: +The command below performs demonstration to the [optimized model](#command-example-for-optimizing-a-pytorch-model-pth-with-openvino-nncf) `outputs/nncf/weights.pth` in previous section with images in the given input folder. -``` -ote deploy ./external/mmdetection/configs/ote/custom-object-detection/gen3_mobilenetV2_ATSS/template.yaml --help +```bash +TBD ``` -output example: +### Deployment -``` -usage: ote deploy [-h] --load-weights LOAD_WEIGHTS - [--save-model-to SAVE_MODEL_TO] - template +`deploy` creates openvino.zip with a self-contained python package, a demo application, and an exported model. + +With the `--help` command, you can list additional information, such as its parameters common to all model templates: +command example: + +```bash +(mpa) ...$ ote deploy external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --help +usage: ote deploy [-h] --load-weights LOAD_WEIGHTS [--save-model-to SAVE_MODEL_TO] template positional arguments: template @@ -442,7 +426,7 @@ positional arguments: optional arguments: -h, --help show this help message and exit --load-weights LOAD_WEIGHTS - Load only weights from previously saved checkpoint. + Load model's weights from. --save-model-to SAVE_MODEL_TO Location where openvino.zip will be stored. ``` diff --git a/README.md b/README.md index 0337680efba..84a8c62c2b4 100644 --- a/README.md +++ b/README.md @@ -5,31 +5,68 @@ [![mypy](https://img.shields.io/badge/%20type_checker-mypy-%231674b1?style=flat)]() [![openvino](https://img.shields.io/badge/openvino-2021.4-purple)]() -OpenVINO™ Training Extensions provide a convenient environment to train -Deep Learning models and convert them using the [OpenVINO™ -toolkit](https://software.intel.com/en-us/openvino-toolkit) for optimized -inference. +> **_DISCLAIMERS_**: Some features described below are under development. You can find more detailed estimation from the [Roadmap](#roadmap) section below. -## Prerequisites +## Overview -- Ubuntu 18.04 / 20.04 -- Python 3.8+ -- [CUDA Toolkit 11.1](https://developer.nvidia.com/cuda-11.1.1-download-archive) - for training on GPU +OpenVINO™ Training Extensions (OTE) is command-line interface (CLI) framework designed for low-code deep learning model training. 
OTE lets developers train/infer/optimize models with a diverse combination of model architectures and learning methods using the [OpenVINO™
+toolkit](https://software.intel.com/en-us/openvino-toolkit). For example, users can train a ResNet18-based SSD ([Single Shot Detection](https://arxiv.org/abs/1512.02325)) model in a semi-supervised manner without setting up a configuration manually. The `ote build` and `ote train` commands automatically analyze the user's dataset and perform the tasks needed to train the model with the best configuration. OTE provides the following features:

-## Repository components
+- Provide a set of pre-configured models for quick start
+  - `ote find` helps you quickly find the best pre-configured models for common task types like classification, detection, segmentation, and anomaly analysis.
+- Configure and train a model from torchvision, [OpenVINO Model Zoo (OMZ)](https://github.com/openvinotoolkit/open_model_zoo)
+  - `ote build` can help you configure your own model based on torchvision and OpenVINO Model Zoo models. You can replace backbones, necks and heads to your own preference (currently only backbones are supported).
+- Provide several learning methods including supervised, semi-supervised, imbalanced-learn, class-incremental, self-supervised representation learning
+  - `ote build` helps you automatically identify the best learning method for your data and model. All you need to do is provide your data in a supported format. If you don't specify a model, the framework automatically selects the best model for you. For example, if your dataset has long-tailed and partially-annotated bounding box annotations, the OTE auto-configurator will choose a semi-supervised imbalanced-learning method and an appropriate model with the best parameters.
+- Integrated efficient hyper-parameter optimization
+  - OTE has an integrated, efficient hyper-parameter optimization module, so you don't need to worry about searching for the right hyper-parameters. Through a dataset proxy and the built-in hyper-parameter optimizer, you get much faster hyper-parameter optimization than with other off-the-shelf tools. The hyper-parameter optimization is dynamically scheduled based on your resource budget.
+- Support widely-used annotation formats
+  - OTE uses [datumaro](https://github.com/openvinotoolkit/datumaro), which is designed for dataset building and transformation, as the default interface for dataset management. All formats supported by datumaro are consumable by OTE without explicit data conversion. If you want to build your own custom dataset format, you can do so via the datumaro CLI and API.
-- [OTE SDK](ote_sdk) -- [OTE CLI](ote_cli) -- [OTE Algorithms](external) +--- + +## Roadmap + +### v0.4.0 (4Q22) + +- New algorithms + +### v1.0.0 (1Q22) + +- Installation through PyPI + - Package will be renamed as OTX (OpenVINO Training eXtension) +- CLI update + - Update `find` command to find configurations of tasks/algorithms + - Introduce `build` command to customize task or model configurations + - Automatic algorihm selection for the `train` command using the given input dataset +- Adaptation of [Datumaro](https://github.com/openvinotoolkit/datumaro) component as a dataset interface + +### v1.1.0 (2Q22) + +- Structural update +- Integrate hyper-parameter optimizations + +--- + +## Repository + +- Components + - [OTE SDK](ote_sdk) + - [OTE CLI](ote_cli) + - [OTE Algorithms](external) +- Branches + - [develop](https://github.com/openvinotoolkit/training_extensions/tree/develop) + - Mainly maintained branch for releasing new features in the future + - [misc](https://github.com/openvinotoolkit/training_extensions/tree/misc) + - Previously developed models can be found on this branch + +--- ## Quick start guide In order to get started with OpenVINO™ Training Extensions see [the quick-start guide](QUICK_START_GUIDE.md). -## GitHub Repository - -The project files can be found in [OpenVINO™ Training Extensions](https://github.com/openvinotoolkit/training_extensions). -Previously developed models can be found on the [misc branch](https://github.com/openvinotoolkit/training_extensions/tree/misc). +--- ## License @@ -37,13 +74,23 @@ Deep Learning Deployment Toolkit is licensed under [Apache License Version 2.0]( By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms. +--- + +## Issues / Discussions + +Please use [Issues](https://github.com/openvinotoolkit/training_extensions/issues/new/choose) tab for your bug reporting, feature requesting, or any questions. + +--- + ## Contributing Please read the [Contribution guide](CONTRIBUTING.md) before starting work on a pull request. +--- + ## Known limitations -Training, export, and evaluation scripts for TensorFlow- and most PyTorch-based models from the [misc](#misc) branch are, currently, not production-ready. They serve exploratory purposes and are not validated. +Training, export, and evaluation scripts for TensorFlow- and most PyTorch-based models from the [misc](https://github.com/openvinotoolkit/training_extensions/tree/misc) branch are, currently, not production-ready. They serve exploratory purposes and are not validated. --- diff --git a/ote_cli/ote_cli/tools/demo.py b/ote_cli/ote_cli/tools/demo.py index b17430ea27d..8e0e318eef9 100644 --- a/ote_cli/ote_cli/tools/demo.py +++ b/ote_cli/ote_cli/tools/demo.py @@ -50,6 +50,15 @@ def parse_args(): pre_parser = argparse.ArgumentParser(add_help=False) pre_parser.add_argument("template") + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser.add_argument("-i", "--input") + pre_parser.add_argument("--load-weights") + pre_parser.add_argument("--fit-to-size") + pre_parser.add_argument("--loop") + pre_parser.add_argument("--delay") + pre_parser.add_argument("--display-perf") + parsed, _ = pre_parser.parse_known_args() # Load template.yaml file. 
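+    # parse_known_args() above ignores any flag it does not recognize, so
+    # declaring the value-taking options in the pre-parser is what keeps
+    # their values from being consumed as this "template" positional.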
template = find_and_parse_model_template(parsed.template) @@ -68,7 +77,7 @@ def parse_args(): parser.add_argument( "--load-weights", required=True, - help="Load only weights from previously saved checkpoint", + help="Load weights to run the evaluation. It could be a trained/optimized model or exported model.", ) parser.add_argument( "--fit-to-size", diff --git a/ote_cli/ote_cli/tools/deploy.py b/ote_cli/ote_cli/tools/deploy.py index 70fd2c8b685..2224f88e8ca 100644 --- a/ote_cli/ote_cli/tools/deploy.py +++ b/ote_cli/ote_cli/tools/deploy.py @@ -37,7 +37,7 @@ def parse_args(): parser.add_argument( "--load-weights", required=True, - help="Load only weights from previously saved checkpoint.", + help="Load model's weights from.", ) parser.add_argument( "--save-model-to", diff --git a/ote_cli/ote_cli/tools/eval.py b/ote_cli/ote_cli/tools/eval.py index 658be72e1db..5f038ef361e 100644 --- a/ote_cli/ote_cli/tools/eval.py +++ b/ote_cli/ote_cli/tools/eval.py @@ -44,6 +44,13 @@ def parse_args(): pre_parser = argparse.ArgumentParser(add_help=False) pre_parser.add_argument("template") + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser.add_argument("--test-ann-files") + pre_parser.add_argument("--test-data-roots") + pre_parser.add_argument("--load-weights") + pre_parser.add_argument("--save-performance") + parsed, _ = pre_parser.parse_known_args() # Load template.yaml file. template = find_and_parse_model_template(parsed.template) @@ -66,7 +73,7 @@ def parse_args(): parser.add_argument( "--load-weights", required=True, - help="Load only weights from previously saved checkpoint", + help="Load weights to run the evaluation. It could be a trained/optimized model or exported model.", ) parser.add_argument( "--save-performance", diff --git a/ote_cli/ote_cli/tools/export.py b/ote_cli/ote_cli/tools/export.py index 8a235802ee9..fb4b8a665be 100644 --- a/ote_cli/ote_cli/tools/export.py +++ b/ote_cli/ote_cli/tools/export.py @@ -41,7 +41,7 @@ def parse_args(): parser.add_argument( "--load-weights", required=True, - help="Load only weights from previously saved checkpoint", + help="Load weights from saved checkpoint for exporting", ) parser.add_argument( "--save-model-to", diff --git a/ote_cli/ote_cli/tools/optimize.py b/ote_cli/ote_cli/tools/optimize.py index 32976d75322..ee40e6f2fc8 100644 --- a/ote_cli/ote_cli/tools/optimize.py +++ b/ote_cli/ote_cli/tools/optimize.py @@ -47,6 +47,16 @@ def parse_args(): pre_parser = argparse.ArgumentParser(add_help=False) pre_parser.add_argument("template") + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser.add_argument("--train-ann-files") + pre_parser.add_argument("--train-data-roots") + pre_parser.add_argument("--val-ann-files") + pre_parser.add_argument("--val-data-roots") + pre_parser.add_argument("--load-weights") + pre_parser.add_argument("--save-model-to") + pre_parser.add_argument("--save-performance") + parsed, _ = pre_parser.parse_known_args() # Load template.yaml file. 
template = find_and_parse_model_template(parsed.template) @@ -79,7 +89,7 @@ def parse_args(): parser.add_argument( "--load-weights", required=True, - help="Load weights of trained model", + help="Load weights of trained model (for NNCF) or exported OpenVINO model (for POT)", ) parser.add_argument( "--save-model-to", diff --git a/ote_cli/ote_cli/tools/train.py b/ote_cli/ote_cli/tools/train.py index 4257fb17a37..79599d7ad34 100644 --- a/ote_cli/ote_cli/tools/train.py +++ b/ote_cli/ote_cli/tools/train.py @@ -54,6 +54,17 @@ def parse_args(): pre_parser = argparse.ArgumentParser(add_help=False) pre_parser.add_argument("template") + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser.add_argument("--train-ann-files") + pre_parser.add_argument("--train-data-roots") + pre_parser.add_argument("--val-ann-files") + pre_parser.add_argument("--val-data-roots") + pre_parser.add_argument("--load-weights") + pre_parser.add_argument("--save-model-to") + pre_parser.add_argument("--enable-hpo") + pre_parser.add_argument("--hpo-time-ratio") + parsed, _ = pre_parser.parse_known_args() # Load template.yaml file. template = find_and_parse_model_template(parsed.template) From b76677ac0f471c29bf9d02794fd9f07e0b2bbb80 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 22 Nov 2022 13:06:19 +0900 Subject: [PATCH 2/6] Update QUICK_START_GUIDE.md Co-authored-by: Songki Choi --- QUICK_START_GUIDE.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/QUICK_START_GUIDE.md b/QUICK_START_GUIDE.md index 7c76727c64d..b77a0616503 100644 --- a/QUICK_START_GUIDE.md +++ b/QUICK_START_GUIDE.md @@ -2,7 +2,7 @@ ## Prerequisites -Current version of project was tested under following environments +Current version of OTE was tested under following environments - Ubuntu 20.04 - Python 3.8.x From 67340abbfe600f096962fc28bca4919fca32e5da Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 22 Nov 2022 13:08:21 +0900 Subject: [PATCH 3/6] Update README.md Co-authored-by: Songki Choi --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 84a8c62c2b4..c41f216f973 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ [![mypy](https://img.shields.io/badge/%20type_checker-mypy-%231674b1?style=flat)]() [![openvino](https://img.shields.io/badge/openvino-2021.4-purple)]() -> **_DISCLAIMERS_**: Some features described below are under development. You can find more detailed estimation from the [Roadmap](#roadmap) section below. +> **_DISCLAIMERS_**: Some features described below are under development (refer to feature/otx branch). You can find more detailed estimation from the [Roadmap](#roadmap) section below. 
## Overview From c36daf1ea2d0a9e3a07082e2ac6a1f2b25ae8942 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 22 Nov 2022 15:02:21 +0900 Subject: [PATCH 4/6] updated docs and cli Signed-off-by: Yunchu Lee --- QUICK_START_GUIDE.md | 24 ++++++++---- README.md | 13 +++---- ote_cli/ote_cli/tools/demo.py | 54 ++++++++++++++------------ ote_cli/ote_cli/tools/eval.py | 52 ++++++++++++++----------- ote_cli/ote_cli/tools/optimize.py | 63 ++++++++++++++++--------------- ote_cli/ote_cli/tools/train.py | 63 ++++++++++++++++--------------- 6 files changed, 145 insertions(+), 124 deletions(-) diff --git a/QUICK_START_GUIDE.md b/QUICK_START_GUIDE.md index b77a0616503..c8540a6e0e9 100644 --- a/QUICK_START_GUIDE.md +++ b/QUICK_START_GUIDE.md @@ -38,7 +38,9 @@ Current version of OTE was tested under following environments $ pip3 install notebook; cd ote_cli/notebooks/; jupyter notebook ``` -1. There available scripts that create python virtual environments for different task types: + > **_Important note:_** You should confirm that the Python version that installed on your machine should be 3.8.X. For the future release of OTE will support wide range of the Python version. + +1. There are available scripts that create python virtual environments for different task types: ```bash $ find external/ -name init_venv.sh @@ -56,11 +58,13 @@ Current version of OTE was tested under following environments - Classification - Detection - Segmantation + - Instance segmentation + - Rotated detection - `external/anomaly/init_venv.sh` can be used to create a virtual environment for the following task types. - - Anomaly-classification - - Anomaly-detection - - Anomaly-segmentation + - Anomaly_classification + - Anomaly_detection + - Anomaly_segmentation 1. Create and activate a virtual environment for the chosen task, then install the `ote_cli`. The following example shows that creating virtual environment to the `.venv_mpa` folder in your current directory for detection task. @@ -343,7 +347,7 @@ optional arguments: Path to a json file where computed performance will be stored. ``` -> **_Note_**: Work-In-Progress for `params` argument. +> **_Note:_** Work-In-Progress for `params` argument. #### Command example of the evaluation @@ -376,6 +380,8 @@ Performance(score: 0.8799999999999999, dashboard: (1 metric groups)) > **_Note:_** `demo` command requires GUI backend to your system for displaying inference results. +> **_Note:_** The model optimzied with `NNCF` is not supported for the `demo` command. + With the `--help` command, you can list additional information, such as its parameters common to all model templates: command example: @@ -403,12 +409,16 @@ optional arguments: #### Command example of the demostration -The command below performs demonstration to the [optimized model](#command-example-for-optimizing-a-pytorch-model-pth-with-openvino-nncf) `outputs/nncf/weights.pth` in previous section with images in the given input folder. +The command below performs demonstration to the [optimized model](#command-example-for-optimizing-openvino-model-xml-with-openvino-pot) `outputs/ov/pot/openvino.xml` in previous section with images in the given input folder. ```bash -TBD +(mpa) ...$ ote demo external/model-preparation-algorithm/configs/detection/mobilenetv2_ssd_cls_incr/template.yaml --input data/airport/val/ --load-weights outputs/ov/pot/openvino.xml --display-perf --delay 1000 +... 
+[ INFO ] OpenVINO inference completed ``` +> **_Note:_** The inference results with a model will be display to the GUI window with 1 second interval. If you execute this command from the remote environment (e.g., using text-only SSH via terminal) without having remote GUI client software, you can meet some error message from this command. + ### Deployment `deploy` creates openvino.zip with a self-contained python package, a demo application, and an exported model. diff --git a/README.md b/README.md index c41f216f973..8dc3bb22a51 100644 --- a/README.md +++ b/README.md @@ -27,11 +27,7 @@ toolkit](https://software.intel.com/en-us/openvino-toolkit). For example, users ## Roadmap -### v0.4.0 (4Q22) - -- New algorithms - -### v1.0.0 (1Q22) +### v1.0.0 (1Q23) - Installation through PyPI - Package will be renamed as OTX (OpenVINO Training eXtension) @@ -40,11 +36,12 @@ toolkit](https://software.intel.com/en-us/openvino-toolkit). For example, users - Introduce `build` command to customize task or model configurations - Automatic algorihm selection for the `train` command using the given input dataset - Adaptation of [Datumaro](https://github.com/openvinotoolkit/datumaro) component as a dataset interface +- Integrate hyper-parameter optimizations +- Support action recognition task -### v1.1.0 (2Q22) +### v1.1.0 (2Q23) -- Structural update -- Integrate hyper-parameter optimizations +- SDK/API update --- diff --git a/ote_cli/ote_cli/tools/demo.py b/ote_cli/ote_cli/tools/demo.py index 8e0e318eef9..0110c8da0b3 100644 --- a/ote_cli/ote_cli/tools/demo.py +++ b/ote_cli/ote_cli/tools/demo.py @@ -43,41 +43,23 @@ ESC_BUTTON = 27 -def parse_args(): +def init_arguments(parser, parse_template_only=False): """ - Parses command line arguments. + initialize arguments to parser. if 'parse_template_only' set as 'True', + 'required' attribute to all arguments will be set as 'False' to simply get + the template argument. """ - - pre_parser = argparse.ArgumentParser(add_help=False) - pre_parser.add_argument("template") - # WA: added all available args to correctly parsing "template" positional arg - # to get the available hyper-parameters - pre_parser.add_argument("-i", "--input") - pre_parser.add_argument("--load-weights") - pre_parser.add_argument("--fit-to-size") - pre_parser.add_argument("--loop") - pre_parser.add_argument("--delay") - pre_parser.add_argument("--display-perf") - - parsed, _ = pre_parser.parse_known_args() - # Load template.yaml file. - template = find_and_parse_model_template(parsed.template) - # Get hyper parameters schema. - hyper_parameters = template.hyper_parameters.data - assert hyper_parameters - - parser = argparse.ArgumentParser() parser.add_argument("template") parser.add_argument( "-i", "--input", - required=True, + required=not parse_template_only, help="Source of input data: images folder, image, webcam and video.", ) parser.add_argument( "--load-weights", - required=True, - help="Load weights to run the evaluation. It could be a trained/optimized model or exported model.", + required=not parse_template_only, + help="Load weights to run the evaluation. It could be a trained/optimized model (POT only) or exported model.", ) parser.add_argument( "--fit-to-size", @@ -100,6 +82,28 @@ def parse_args(): "These metrics take into account not only model inference time, but also " "frame reading, pre-processing and post-processing.", ) + return parser + + +def parse_args(): + """ + Parses command line arguments. 
+ """ + + pre_parser = argparse.ArgumentParser(add_help=False) + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser = init_arguments(pre_parser, parse_template_only=True) + + parsed, _ = pre_parser.parse_known_args() + # Load template.yaml file. + template = find_and_parse_model_template(parsed.template) + # Get hyper parameters schema. + hyper_parameters = template.hyper_parameters.data + assert hyper_parameters + + parser = argparse.ArgumentParser() + parser = init_arguments(parser) add_hyper_parameters_sub_parser(parser, hyper_parameters, modes=("INFERENCE",)) diff --git a/ote_cli/ote_cli/tools/eval.py b/ote_cli/ote_cli/tools/eval.py index 5f038ef361e..0bb68a081d8 100644 --- a/ote_cli/ote_cli/tools/eval.py +++ b/ote_cli/ote_cli/tools/eval.py @@ -37,48 +37,54 @@ ) -def parse_args(): +def init_arguments(parser, parse_template_only=False): """ - Parses command line arguments. + initialize arguments to parser. if 'parse_template_only' set as 'True', + 'required' attribute to all arguments will be set as 'False' to simply get + the template argument. """ - - pre_parser = argparse.ArgumentParser(add_help=False) - pre_parser.add_argument("template") - # WA: added all available args to correctly parsing "template" positional arg - # to get the available hyper-parameters - pre_parser.add_argument("--test-ann-files") - pre_parser.add_argument("--test-data-roots") - pre_parser.add_argument("--load-weights") - pre_parser.add_argument("--save-performance") - - parsed, _ = pre_parser.parse_known_args() - # Load template.yaml file. - template = find_and_parse_model_template(parsed.template) - # Get hyper parameters schema. - hyper_parameters = template.hyper_parameters.data - assert hyper_parameters - - parser = argparse.ArgumentParser() parser.add_argument("template") parser.add_argument( "--test-ann-files", - required=True, + required=not parse_template_only, help="Comma-separated paths to test annotation files.", ) parser.add_argument( "--test-data-roots", - required=True, + required=not parse_template_only, help="Comma-separated paths to test data folders.", ) parser.add_argument( "--load-weights", - required=True, + required=not parse_template_only, help="Load weights to run the evaluation. It could be a trained/optimized model or exported model.", ) parser.add_argument( "--save-performance", help="Path to a json file where computed performance will be stored.", ) + return parser + + +def parse_args(): + """ + Parses command line arguments. + """ + + pre_parser = argparse.ArgumentParser(add_help=False) + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser = init_arguments(pre_parser, parse_template_only=True) + + parsed, _ = pre_parser.parse_known_args() + # Load template.yaml file. + template = find_and_parse_model_template(parsed.template) + # Get hyper parameters schema. + hyper_parameters = template.hyper_parameters.data + assert hyper_parameters + + parser = argparse.ArgumentParser() + parser = init_arguments(parser) add_hyper_parameters_sub_parser(parser, hyper_parameters, modes=("INFERENCE",)) diff --git a/ote_cli/ote_cli/tools/optimize.py b/ote_cli/ote_cli/tools/optimize.py index ee40e6f2fc8..592e032a406 100644 --- a/ote_cli/ote_cli/tools/optimize.py +++ b/ote_cli/ote_cli/tools/optimize.py @@ -39,67 +39,70 @@ ) -def parse_args(): +def init_arguments(parser, parse_template_only=False): """ - Parses command line arguments. 
- It dynamically generates help for hyper-parameters which are specific to particular model template. + initialize arguments to parser. if 'parse_template_only' set as 'True', + 'required' attribute to all arguments will be set as 'False' to simply get + the template argument. """ - - pre_parser = argparse.ArgumentParser(add_help=False) - pre_parser.add_argument("template") - # WA: added all available args to correctly parsing "template" positional arg - # to get the available hyper-parameters - pre_parser.add_argument("--train-ann-files") - pre_parser.add_argument("--train-data-roots") - pre_parser.add_argument("--val-ann-files") - pre_parser.add_argument("--val-data-roots") - pre_parser.add_argument("--load-weights") - pre_parser.add_argument("--save-model-to") - pre_parser.add_argument("--save-performance") - - parsed, _ = pre_parser.parse_known_args() - # Load template.yaml file. - template = find_and_parse_model_template(parsed.template) - # Get hyper parameters schema. - hyper_parameters = template.hyper_parameters.data - assert hyper_parameters - - parser = argparse.ArgumentParser() parser.add_argument("template") parser.add_argument( "--train-ann-files", - required=True, + required=not parse_template_only, help="Comma-separated paths to training annotation files.", ) parser.add_argument( "--train-data-roots", - required=True, + required=not parse_template_only, help="Comma-separated paths to training data folders.", ) parser.add_argument( "--val-ann-files", - required=True, + required=not parse_template_only, help="Comma-separated paths to validation annotation files.", ) parser.add_argument( "--val-data-roots", - required=True, + required=not parse_template_only, help="Comma-separated paths to validation data folders.", ) parser.add_argument( "--load-weights", - required=True, + required=not parse_template_only, help="Load weights of trained model (for NNCF) or exported OpenVINO model (for POT)", ) parser.add_argument( "--save-model-to", - required=True, + required=not parse_template_only, help="Location where trained model will be stored.", ) parser.add_argument( "--save-performance", help="Path to a json file where computed performance will be stored.", ) + return parser + + +def parse_args(): + """ + Parses command line arguments. + It dynamically generates help for hyper-parameters which are specific to particular model template. + """ + + pre_parser = argparse.ArgumentParser(add_help=False) + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser = init_arguments(pre_parser, parse_template_only=True) + + parsed, _ = pre_parser.parse_known_args() + # Load template.yaml file. + template = find_and_parse_model_template(parsed.template) + # Get hyper parameters schema. + hyper_parameters = template.hyper_parameters.data + assert hyper_parameters + + parser = argparse.ArgumentParser() + parser = init_arguments(parser) add_hyper_parameters_sub_parser(parser, hyper_parameters) diff --git a/ote_cli/ote_cli/tools/train.py b/ote_cli/ote_cli/tools/train.py index 79599d7ad34..ceb5678be24 100644 --- a/ote_cli/ote_cli/tools/train.py +++ b/ote_cli/ote_cli/tools/train.py @@ -46,62 +46,40 @@ ) -def parse_args(): +def init_arguments(parser, parse_template_only=False): """ - Parses command line arguments. - It dynamically generates help for hyper-parameters which are specific to particular model template. + initialize arguments to parser. 
if 'parse_template_only' set as 'True', + 'required' attribute to all arguments will be set as 'False' to simply get + the template argument. """ - - pre_parser = argparse.ArgumentParser(add_help=False) - pre_parser.add_argument("template") - # WA: added all available args to correctly parsing "template" positional arg - # to get the available hyper-parameters - pre_parser.add_argument("--train-ann-files") - pre_parser.add_argument("--train-data-roots") - pre_parser.add_argument("--val-ann-files") - pre_parser.add_argument("--val-data-roots") - pre_parser.add_argument("--load-weights") - pre_parser.add_argument("--save-model-to") - pre_parser.add_argument("--enable-hpo") - pre_parser.add_argument("--hpo-time-ratio") - - parsed, _ = pre_parser.parse_known_args() - # Load template.yaml file. - template = find_and_parse_model_template(parsed.template) - # Get hyper parameters schema. - hyper_parameters = template.hyper_parameters.data - assert hyper_parameters - - parser = argparse.ArgumentParser() parser.add_argument("template") parser.add_argument( "--train-ann-files", - required=True, + required=not parse_template_only, help="Comma-separated paths to training annotation files.", ) parser.add_argument( "--train-data-roots", - required=True, + required=not parse_template_only, help="Comma-separated paths to training data folders.", ) parser.add_argument( "--val-ann-files", - required=True, + required=not parse_template_only, help="Comma-separated paths to validation annotation files.", ) parser.add_argument( "--val-data-roots", - required=True, + required=not parse_template_only, help="Comma-separated paths to validation data folders.", ) parser.add_argument( "--load-weights", - required=False, help="Load only weights from previously saved checkpoint", ) parser.add_argument( "--save-model-to", - required="True", + required=not parse_template_only, help="Location where trained model will be stored.", ) parser.add_argument( @@ -115,6 +93,29 @@ def parse_args(): type=float, help="Expected ratio of total time to run HPO to time taken for full fine-tuning.", ) + return parser + + +def parse_args(): + """ + Parses command line arguments. + It dynamically generates help for hyper-parameters which are specific to particular model template. + """ + + pre_parser = argparse.ArgumentParser(add_help=False) + # WA: added all available args to correctly parsing "template" positional arg + # to get the available hyper-parameters + pre_parser = init_arguments(pre_parser, parse_template_only=True) + + parsed, _ = pre_parser.parse_known_args() + # Load template.yaml file. + template = find_and_parse_model_template(parsed.template) + # Get hyper parameters schema. + hyper_parameters = template.hyper_parameters.data + assert hyper_parameters + + parser = argparse.ArgumentParser() + parser = init_arguments(parser) add_hyper_parameters_sub_parser(parser, hyper_parameters) From 0581c3ed8943678450071af3d5f7634be44d3d20 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 22 Nov 2022 15:05:16 +0900 Subject: [PATCH 5/6] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 8dc3bb22a51..f455a358464 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ [![mypy](https://img.shields.io/badge/%20type_checker-mypy-%231674b1?style=flat)]() [![openvino](https://img.shields.io/badge/openvino-2021.4-purple)]() -> **_DISCLAIMERS_**: Some features described below are under development (refer to feature/otx branch). 
You can find more detailed estimation from the [Roadmap](#roadmap) section below. +> **_DISCLAIMERS_**: Some features described below are under development (refer to [feature/otx branch](https://github.com/openvinotoolkit/training_extensions/tree/feature/otx)). You can find more detailed estimation from the [Roadmap](#roadmap) section below. ## Overview From dfb0275907f7f09c84d248831243a13453582b49 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 22 Nov 2022 16:03:34 +0900 Subject: [PATCH 6/6] added choice option to find command Signed-off-by: Yunchu Lee --- QUICK_START_GUIDE.md | 14 ++++++++------ README.md | 2 +- ote_cli/ote_cli/tools/find.py | 12 +++++++++++- 3 files changed, 20 insertions(+), 8 deletions(-) diff --git a/QUICK_START_GUIDE.md b/QUICK_START_GUIDE.md index c8540a6e0e9..1366f449821 100644 --- a/QUICK_START_GUIDE.md +++ b/QUICK_START_GUIDE.md @@ -58,8 +58,8 @@ Current version of OTE was tested under following environments - Classification - Detection - Segmantation - - Instance segmentation - - Rotated detection + - Instance_segmentation + - Rotated_detection - `external/anomaly/init_venv.sh` can be used to create a virtual environment for the following task types. - Anomaly_classification @@ -90,12 +90,14 @@ Current version of OTE was tested under following environments `find` lists model templates available for the given virtual environment. ``` -usage: ote find [-h] [--root ROOT] [--task_type TASK_TYPE] [--experimental] +(mpa) ...$ ote find --help +usage: ote find [-h] [--root ROOT] [--task_type {classification,detection,segmentation,instance_segmantation,rotated_detection,anomaly_classification,anomaly_detection,anomaly_segmentation}] + [--experimental] optional arguments: - -h, --help show this help message and exit - --root ROOT A root dir where templates should be searched. - --task_type TASK_TYPE filter with the task type (e.g., classification) + -h, --help show this help message and exit + --root ROOT A root dir where templates should be searched. + --task_type {classification,detection,segmentation,instance_segmantation,rotated_detection,anomaly_classification,anomaly_detection,anomaly_segmentation} --experimental ``` diff --git a/README.md b/README.md index 8dc3bb22a51..f455a358464 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ [![mypy](https://img.shields.io/badge/%20type_checker-mypy-%231674b1?style=flat)]() [![openvino](https://img.shields.io/badge/openvino-2021.4-purple)]() -> **_DISCLAIMERS_**: Some features described below are under development (refer to feature/otx branch). You can find more detailed estimation from the [Roadmap](#roadmap) section below. +> **_DISCLAIMERS_**: Some features described below are under development (refer to [feature/otx branch](https://github.com/openvinotoolkit/training_extensions/tree/feature/otx)). You can find more detailed estimation from the [Roadmap](#roadmap) section below. ## Overview diff --git a/ote_cli/ote_cli/tools/find.py b/ote_cli/ote_cli/tools/find.py index 57ad6fa60fc..0e60eb9221a 100644 --- a/ote_cli/ote_cli/tools/find.py +++ b/ote_cli/ote_cli/tools/find.py @@ -30,7 +30,17 @@ def parse_args(): parser.add_argument( "--root", help="A root dir where templates should be searched.", default="." 
    )
-    parser.add_argument("--task_type")
+    task_types = [
+        "classification",
+        "detection",
+        "segmentation",
+        "instance_segmentation",
+        "rotated_detection",
+        "anomaly_classification",
+        "anomaly_detection",
+        "anomaly_segmentation",
+    ]
+    parser.add_argument("--task_type", choices=task_types, type=str.lower)
    parser.add_argument("--experimental", action="store_true")

    return parser.parse_args()
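
Taken together, patches 1 and 4 settle on a single argparse idiom for these tools: a permissive pre-parser recovers the `template` positional first, so that the template's hyper-parameter schema can be attached to the real parser before the full parse. The sketch below distills that idiom; it is illustrative only, with hypothetical option names standing in for the tools' real arguments and a plain option standing in for `add_hyper_parameters_sub_parser`.

```python
import argparse


def parse_args(argv=None):
    # Stage 1: a help-less pre-parser whose only job is recovering the
    # "template" positional. Value-taking options must be declared here
    # too; otherwise "tool --load-weights w.pth t.yaml" would bind
    # "w.pth" to the positional instead of to --load-weights.
    pre = argparse.ArgumentParser(add_help=False)
    pre.add_argument("template")
    pre.add_argument("--load-weights")
    pre.add_argument("--save-model-to")
    parsed, _ = pre.parse_known_args(argv)

    # Stage 2: with the template path known (parsed.template), build the
    # real parser, now with "required" enforced and, in OTE, the
    # template-specific hyper-parameters attached.
    parser = argparse.ArgumentParser()
    parser.add_argument("template")
    parser.add_argument("--load-weights", required=True)
    parser.add_argument("--save-model-to", required=True)
    parser.add_argument("--learning-rate", type=float, default=0.01)
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args(
        ["t.yaml", "--load-weights", "w.pth",
         "--save-model-to", "out", "--learning-rate", "0.005"]
    )
    print(args.template, args.learning_rate)
```

Patch 4's `init_arguments(parser, parse_template_only=...)` helper is the same idea with the option list declared once and `required` switched off for the pre-parse.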