Update e2e tests to use convert_model (#2152)
### Changes

- Update torch examples: added the `--export-model-path` argument (replacing
`--to-onnx`); the output format is selected by the `.xml` or `.onnx` suffix
(see the example command below this list).
- Update tests to use `Command` to run the examples.
- Remove tests of the q_dq export path.
- Remove the code that generated the HTML report; metrics are now dumped to `results.csv`.
- Add a "build url" field to `results.csv`.
- Update reference metrics.
- Move input normalization to the accuracy checker configs.
- Use `target_ov` and `target_pt` to define reference metrics for the backends.
- Skip the `unet_mapillary_int8` and `unet_mapillary_magnitude_sparsity_int8` models (ticket 123448).
- Skip the `resnet18_imagenet_binarization_dorefa` model (ticket 22543).
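
For reference, the new export invocation looks roughly like this (the config and checkpoint paths are illustrative):

```
# Export to OpenVINO IR (format is selected by the .xml suffix)
python main.py -m export --config <config.json> --resume <checkpoint.pth> \
    --export-model-path=results/model_int8.xml

# Export to ONNX (format is selected by the .onnx suffix)
python main.py -m export --config <config.json> --resume <checkpoint.pth> \
    --export-model-path=results/model_int8.onnx
```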

Updates in CI:
- Update the script to report the final `e2e_result.html` for e2e tests.
- Now, if any test fails, the trigger_job fails as well.

After the merge, the ci-pipelines need to be updated.

### Related tickets

117885

### Tests

tests/torch/test_sota_checkpoints.py

### TODO
- Update metrics after merge (#2227)
- Check metrics after #2211
AlexanderDokuchaev authored Nov 30, 2023
1 parent db786a8 commit ce061bb
Showing 93 changed files with 1,047 additions and 840 deletions.
6 changes: 4 additions & 2 deletions examples/torch/classification/README.md
@@ -64,7 +64,9 @@ python main.py \
- Use the `--resume` flag with the path to a previously saved model to resume training.
- For Torchvision-supported image classification models, set `"pretrained": true` inside the NNCF config JSON file supplied via `--config` to initialize the model to be compressed with Torchvision-supplied pretrained weights, or, alternatively:
- Use the `--weights` flag with the path to a compatible PyTorch checkpoint in order to load all matching weights from the checkpoint into the model - useful if you need to start compression-aware training from a previously trained uncompressed (FP32) checkpoint instead of performing compression-aware training from scratch.
- Use the `--no_strip_on_export` to export not stripped model.
- Use `--export-model-path` to specify the path for the exported model; the OpenVINO or ONNX format is selected by the `.xml` or `.onnx` suffix, respectively (see the example below this list).
- Use the `--no-strip-on-export` flag to export the model without stripping it.
- Use the `--export-to-ir-via-onnx` flag when exporting to OpenVINO; the serialized OV IR is produced by first exporting the torch model to an `.onnx` file and then converting that `.onnx` file to an OV IR file.
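
For example, a possible export command combining these flags (paths are illustrative):

```
python main.py -m export \
    --config=configs/quantization/mobilenet_v2_imagenet_int8.json \
    --resume=<path_to_checkpoint> \
    --export-model-path=../../results/mobilenet_v2_int8.xml \
    --export-to-ir-via-onnx
```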

### Validate Your Model Checkpoint

@@ -86,7 +88,7 @@ To export a trained model to the ONNX format, use the following command:
python main.py -m export \
--config=configs/quantization/mobilenet_v2_imagenet_int8.json \
--resume=../../results/quantization/mobilenet_v2_int8/6/checkpoints/epoch_1.pth \
--to-onnx=../../results/mobilenet_v2_int8.onnx
--export-model-path=../../results/mobilenet_v2_int8.onnx
```

### Export to OpenVINO™ Intermediate Representation (IR)
@@ -27,5 +27,6 @@
"{re}ResNet/Sequential\\[layer4\\]/BasicBlock\\[0\\]/Sequential\\[downsample\\]/.*"]
}
],
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -27,5 +27,6 @@
"{re}ResNet/Sequential\\[layer4\\]/BasicBlock\\[0\\]/Sequential\\[downsample\\]/.*"]
}
],
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -40,5 +40,6 @@
"lr_poly_drop_duration_epochs": 10
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -35,5 +35,6 @@
}
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -166,5 +166,6 @@
"disable_wd_start_epoch": 50
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -45,5 +45,6 @@
"lr_poly_drop_duration_epochs": 10
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -35,5 +35,6 @@
"disable_wd_start_epoch": 20
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -171,5 +171,6 @@
}
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -173,5 +173,6 @@
"disable_wd_start_epoch": 20
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -35,5 +35,6 @@
}
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -9,5 +9,6 @@
"target_device": "TRIAL",
"compression": {
"algorithm": "quantization"
}
},
"no_strip_on_export": true
}
@@ -99,5 +99,6 @@
}
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
@@ -111,5 +111,6 @@
"disable_wd_start_epoch": 50
}
},
"no_strip_on_export": true
"no_strip_on_export": true,
"export_to_ir_via_onnx": true
}
6 changes: 2 additions & 4 deletions examples/torch/classification/main.py
@@ -235,8 +235,7 @@ def model_eval_fn(model):
load_state(model, model_state_dict, is_resume=True)

if is_export_only:
export_model(compression_ctrl, config.to_onnx, config.no_strip_on_export)
logger.info(f"Saved to {config.to_onnx}")
export_model(compression_ctrl, config)
return

model, _ = prepare_model_for_execution(model, config)
@@ -328,8 +327,7 @@ def configure_optimizers_fn():
config.mlflow.end_run()

if "export" in config.mode:
export_model(compression_ctrl, config.to_onnx, config.no_strip_on_export)
logger.info(f"Saved to {config.to_onnx}")
export_model(compression_ctrl, config)


def train(
8 changes: 3 additions & 5 deletions examples/torch/classification/staged_quantization_worker.py
@@ -210,7 +210,7 @@ def autoq_eval_fn(model, eval_loader):

best_acc1 = 0
# optionally resume from a checkpoint
if resuming_checkpoint is not None and config.to_onnx is None:
if resuming_checkpoint is not None and config.export_model_path is None:
best_acc1 = resuming_checkpoint["best_acc1"]
if "train" in config.mode:
kd_loss_calculator.original_model.load_state_dict(resuming_checkpoint["original_model_state_dict"])
@@ -228,8 +228,7 @@ def autoq_eval_fn(model, eval_loader):
log_common_mlflow_params(config)

if is_export_only:
export_model(compression_ctrl, config.to_onnx, config.no_strip_on_export)
logger.info(f"Saved to {config.to_onnx}")
export_model(compression_ctrl, config)
return

if config.execution_mode != ExecutionMode.CPU_ONLY:
@@ -262,8 +261,7 @@ def autoq_eval_fn(model, eval_loader):
validate(val_loader, model, criterion, config)

if "export" in config.mode:
export_model(compression_ctrl, config.to_onnx, config.no_strip_on_export)
logger.info(f"Saved to {config.to_onnx}")
export_model(compression_ctrl, config)


def train_staged(
24 changes: 18 additions & 6 deletions examples/torch/common/argparser.py
@@ -104,7 +104,14 @@ def get_common_argument_parser():
parser.add_argument("--dist-url", default="tcp://127.0.0.1:8899", help="URL used to set up distributed training")
parser.add_argument("--rank", default=0, type=int, help="Node rank for distributed training")
parser.add_argument("--dist-backend", default="nccl", type=str, help="Distributed backend")
parser.add_argument("--no_strip_on_export", help="Set to export not stripped model.", action="store_true")
parser.add_argument("--no-strip-on-export", help="Set to export not stripped model.", action="store_true")
parser.add_argument(
"--export-to-ir-via-onnx",
help="When used with the `exported-model-path` option to export to OpenVINO, will produce the serialized "
"OV IR object by first exporting the torch model object to an .onnx file and then converting that .onnx file "
"to an OV IR file.",
action="store_true",
)

# Hyperparameters
parser.add_argument(
@@ -141,7 +148,7 @@ def get_common_argument_parser():

# Dataset
parser.add_argument(
"--data", dest="dataset_dir", type=str, help="Path to the root directory of the selected dataset. "
"--data", dest="dataset_dir", type=str, help="Path to the root directory of the selected dataset."
)

# Settings
@@ -169,8 +176,13 @@
)

parser.add_argument("--save-freq", default=5, type=int, help="Checkpoint save frequency (epochs). Default: 5")

parser.add_argument("--to-onnx", type=str, metavar="PATH", default=None, help="Export to ONNX model by given path")
parser.add_argument(
"--export-model-path",
type=str,
metavar="PATH",
default=None,
help="The path to export the model in OpenVINO or ONNX format by using the .xml or .onnx suffix, respectively.",
)

# Display
parser.add_argument(
@@ -191,6 +203,6 @@

def parse_args(parser, argv):
args = parser.parse_args(argv)
if "export" in args.mode and args.to_onnx is None:
raise RuntimeError("--mode export requires --to-onnx argument to be set")
if "export" in args.mode and args.export_model_path is None:
raise RuntimeError("--mode export requires --export-model-path argument to be set")
return args
53 changes: 44 additions & 9 deletions examples/torch/common/export.py
@@ -8,29 +8,64 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pathlib import Path

import torch

from examples.common.sample_config import SampleConfig
from examples.torch.common.example_logger import logger
from nncf.api.compression import CompressionAlgorithmController
from nncf.torch.exporter import count_tensors
from nncf.torch.exporter import generate_input_names_list
from nncf.torch.exporter import get_export_args


def export_model(ctrl: CompressionAlgorithmController, save_path: str, no_strip_on_export: bool) -> None:
def export_model(ctrl: CompressionAlgorithmController, config: SampleConfig) -> None:
"""
Export compressed model. Supported only 'onnx' format.
Export a compressed model to OpenVINO IR or ONNX format.
:param ctrl: The compression controller.
:param save_path: Path to save onnx file.
:param no_strip_on_export: Set to skip strip model before export.
:param config: The sample config.
"""

model = ctrl.model if no_strip_on_export else ctrl.strip()

model = ctrl.model if config.no_strip_on_export else ctrl.strip()
model = model.eval().cpu()

export_args = get_export_args(model, device="cpu")
input_names = generate_input_names_list(count_tensors(export_args))

with torch.no_grad():
torch.onnx.export(model, export_args, save_path, input_names=input_names)
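# Build example input tensors with batch size 1 based on the model's recorded input shapes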
input_tensor_list = []
input_shape_list = []
for info in model.nncf.input_infos.elements:
input_shape = tuple([1] + info.shape[1:])
input_tensor_list.append(torch.rand(input_shape))
input_shape_list.append(input_shape)

if len(input_tensor_list) == 1:
input_tensor_list = input_tensor_list[0]
input_shape_list = input_shape_list[0]

model_path = Path(config.export_model_path)
model_path.parent.mkdir(exist_ok=True, parents=True)
extension = model_path.suffix

if extension == ".onnx":
with torch.no_grad():
torch.onnx.export(model, input_tensor_list, model_path, input_names=input_names)
elif extension == ".xml":
import openvino as ov
from openvino.tools.mo import convert_model

if config.export_to_ir_via_onnx:
model_onnx_path = model_path.with_suffix(".onnx")
with torch.no_grad():
torch.onnx.export(model, input_tensor_list, model_onnx_path, input_names=input_names)
ov_model = convert_model(model_onnx_path)
else:
ov_model = convert_model(model, example_input=input_tensor_list, input_shape=input_shape_list)
# Rename input nodes
for input_node, input_name in zip(ov_model.inputs, input_names):
input_node.node.set_friendly_name(input_name)
ov.save_model(ov_model, model_path)
else:
raise ValueError(f"--export-model-path argument should have suffix `.xml` or `.onnx` but got {extension}")
logger.info(f"Saved to {model_path}")
6 changes: 4 additions & 2 deletions examples/torch/object_detection/README.md
@@ -49,7 +49,9 @@ This scenario demonstrates quantization with fine-tuning of SSD300 on VOC datase
- Use `--weights` flag with the path to a compatible PyTorch checkpoint in order to load all matching weights from the checkpoint into the model - useful if you need to start compression-aware training from a previously trained uncompressed (FP32) checkpoint instead of performing compression-aware training from scratch. This flag is optional, but highly recommended to use.
- Use `--multiprocessing-distributed` flag to run in the distributed mode.
- Use `--resume` flag with the path to a previously saved model to resume training.
- Use the `--no_strip_on_export` to export not stripped model.
- Use `--export-model-path` to specify the path for the exported model; the OpenVINO or ONNX format is selected by the `.xml` or `.onnx` suffix, respectively (see the example below this list).
- Use the `--no-strip-on-export` flag to export the model without stripping it.
- Use the `--export-to-ir-via-onnx` flag when exporting to OpenVINO; the serialized OV IR is produced by first exporting the torch model to an `.onnx` file and then converting that `.onnx` file to an OV IR file.
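
For example, a possible export command using these flags (paths are illustrative):

```
python main.py -m export \
    --config configs/ssd300_vgg_voc_int8.json \
    --data <path_to_dataset> \
    --resume <path_to_compressed_model_checkpoint> \
    --export-model-path=../../results/ssd300_int8.xml
```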

### Validate your model checkpoint

@@ -62,7 +64,7 @@ If you want to validate an FP32 model checkpoint, make sure the compression algo
### Export compressed model

To export a trained model to the ONNX format, use the following command:
`python main.py -m export --config configs/ssd300_vgg_voc_int8.json --data <path_to_dataset> --resume <path_to_compressed_model_checkpoint> --to-onnx=../../results/ssd300_int8.onnx`
`python main.py -m export --config configs/ssd300_vgg_voc_int8.json --data <path_to_dataset> --resume <path_to_compressed_model_checkpoint> --export-model-path=../../results/ssd300_int8.onnx`

### Export to OpenVINO Intermediate Representation (IR)

@@ -34,6 +34,6 @@
"clip": false,
"flip": true,
"top_k": 200
}
},
"export_to_ir_via_onnx": true
}

@@ -57,6 +57,6 @@
{
"algorithm": "quantization"
}
]
],
"export_to_ir_via_onnx": true
}

@@ -50,6 +50,6 @@
{
"algorithm": "quantization"
}
]
],
"export_to_ir_via_onnx": true
}

3 changes: 2 additions & 1 deletion examples/torch/object_detection/configs/ssd300_vgg_voc.json
@@ -31,5 +31,6 @@
"steps": [8, 16, 32, 64, 100, 300],
"aspect_ratios": [[2], [2, 3], [2, 3], [2, 3], [2], [2]],
"flip": true
}
},
"export_to_ir_via_onnx": true
}
@@ -41,5 +41,6 @@
"num_init_samples": 1280
}
}
}
},
"export_to_ir_via_onnx": true
}
@@ -49,5 +49,6 @@
"num_init_samples": 1280
}
}
}
},
"export_to_ir_via_onnx": true
}
@@ -57,5 +57,6 @@
}
}
}
]
],
"export_to_ir_via_onnx": true
}
@@ -44,5 +44,6 @@
"filter_importance": "geometric_median"
}
}
]
],
"export_to_ir_via_onnx": true
}
3 changes: 2 additions & 1 deletion examples/torch/object_detection/configs/ssd512_vgg_voc.json
@@ -32,5 +32,6 @@
"variance": [0.1, 0.1, 0.2, 0.2],
"clip": false,
"flip": true
}
},
"export_to_ir_via_onnx": true
}
@@ -38,5 +38,6 @@
"num_init_samples": 640
}
}
}
},
"export_to_ir_via_onnx": true
}