Feature/azaytsev/cherry picks from 2021 2 (#4069)
* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <[email protected]>

* Updated openvino_docs.xml

* Added Intel® Iris® Xe Dedicated Graphics, naming convention info (#3523)

* Added Intel® Iris® Xe Dedicated Graphics, naming convention info

* Added GPU.0 GPU.1

* added info about Intel® Iris® Xe MAX Graphics drivers

* Feature/azaytsev/transition s3 bucket (#3609)

* Replaced https://download.01.org/ links with https://storage.openvinotoolkit.org/

* Fixed links
# Conflicts:
#	inference-engine/ie_bridges/java/samples/README.md

* Benchmarks 2021 2 (#3590)

* Initial changes

* Updates

* Updates

* Updates

* Fixed graph names

* minor fix

* Fixed link

* Implemented changes according to the review comments

* fixed links

* Updated Legal_Information.md according to review feedback

* Replaced Uzel* UI-AR8 with Mustang-V100-MX8

* Feature/azaytsev/ovsa docs (#3627)

* Added ovsa_get_started.md

* Fixed formatting issues

* Fixed formatting issues

* Fixed formatting issues

* Fixed formatting issues

* Fixed formatting issues

* Fixed formatting issues

* Fixed formatting issues

* Updated the GSG topic, added a new image

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Formatting issues fixes

* Revert "Formatting issues fixes"

This reverts commit c6e6207.

* Moved to the Security section

* doc fixes (#3626)

Co-authored-by: Nikolay Tyukaev <[email protected]>
# Conflicts:
#	docs/IE_DG/network_state_intro.md

* fix latex formula (#3630)

Co-authored-by: Nikolay Tyukaev <[email protected]>

* fix comments ngraph api 2021.2 (#3520)

* fix comments ngraph api

* remove whitespace

* fixes

Co-authored-by: Nikolay Tyukaev <[email protected]>

* Feature/azaytsev/g api docs (#3731)

* Initial commit

* Added content

* Added new content for g-api documentation. Removed obsolete links throughout all docs

* Fixed layout

* Fixed layout

* Added new topics

* Added new info

* added a note

* Removed redundant .svg
# Conflicts:
#	docs/get_started/get_started_dl_workbench.md

* [Cherry-pick] DL Workbench cross-linking (#3488)

* Added links to MO and Benchmark App

* Changed wording

* Fixes a link

* fixed a link

* Changed the wording

* Links to WB

* Changed wording

* Changed wording

* Fixes

* Changes the wording

* Minor corrections

* Removed an extra point

* cherry-pick

* Added the doc

* More instructions and images

* Added slide

* Borders for screenshots

* fixes

* Fixes

* Added link to Benchmark app

* Replaced the image

* tiny fix

* tiny fix

* Fixed a typo

* Feature/azaytsev/g api docs (#3731)

* Initial commit

* Added content

* Added new content for g-api documentation. Removed obsolete links throughout all docs

* Fixed layout

* Fixed layout

* Added new topics

* Added new info

* added a note

* Removed redundant .svg

* Doc updates 2021 2 (#3749)

* Change the name of parameter tensorflow_use_custom_operations_config to transformations_config

* Fixed formatting

* Corrected MYRIAD plugin name

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Installation Guides formatting fixes

* Fixed link to Model Optimizer Extensibility

* Fixed link to Model Optimizer Extensibility

* Fixed link to Model Optimizer Extensibility

* Fixed link to Model Optimizer Extensibility

* Fixed link to Model Optimizer Extensibility

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Fixed formatting

* Updated IGS, added links to Get Started Guides

* Fixed links

* Fixed formatting issues

* Fixed formatting issues

* Fixed formatting issues

* Fixed formatting issues

* Move the Note to the proper place

* Removed optimization notice
# Conflicts:
#	docs/ops/detection/DetectionOutput_1.md

* minor fix

* Benchmark updates (#4041)

* Link fixes for 2021.2 benchmark page  (#4086)

* Benchmark updates

* Fixed links

Co-authored-by: Trawinski, Dariusz <[email protected]>
Co-authored-by: Nikolay Tyukaev <[email protected]>
Co-authored-by: Nikolay Tyukaev <[email protected]>
Co-authored-by: Alina Alborova <[email protected]>
5 people authored Feb 2, 2021
1 parent ecb6d86 commit 235cd56
Showing 86 changed files with 2,883 additions and 649 deletions.
1 change: 0 additions & 1 deletion docs/HOWTO/Custom_Layers_Guide.md
@@ -369,7 +369,6 @@ python3 mri_reconstruction_demo.py \
 - [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
 - [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
 - [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
-- [Inference Engine Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
 - For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).

 ## Converting Models:
2 changes: 1 addition & 1 deletion docs/IE_DG/API_Changes.md
@@ -156,7 +156,7 @@ The sections below contain detailed list of changes made to the Inference Engine

 ### Deprecated API

-**Myriad Plugin API:**
+**MYRIAD Plugin API:**

 * VPU_CONFIG_KEY(IGNORE_IR_STATISTIC)

4 changes: 2 additions & 2 deletions docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
@@ -24,11 +24,11 @@ The `ngraph::onnx_import::Node` class represents a node in ONNX model. It provid
 New operator registration must happen before the ONNX model is read, for example, if an ONNX model uses the 'CustomRelu' operator, `register_operator("CustomRelu", ...)` must be called before InferenceEngine::Core::ReadNetwork.
 Re-registering ONNX operators within the same process is supported. During registration of the existing operator, a warning is printed.

-The example below demonstrates an examplary model that requires previously created 'CustomRelu' operator:
+The example below demonstrates an exemplary model that requires previously created 'CustomRelu' operator:
 @snippet onnx_custom_op/onnx_custom_op.cpp onnx_custom_op:model


-For a reference on how to create a graph with nGraph operations, visit [nGraph tutorial](../nGraphTutorial.md).
+For a reference on how to create a graph with nGraph operations, visit [Custom nGraph Operations](AddingNGraphOps.md).
 For a complete list of predefined nGraph operators, visit [available operations sets](../../ops/opset.md).

 If operator is no longer needed, it can be unregistered by calling `unregister_operator`. The function takes three arguments `op_type`, `version`, and `domain`.
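For context on the hunk above, registering a custom ONNX operator follows the pattern below. This is a minimal sketch assuming the 2021.x nGraph ONNX importer API; the operator body, version `1`, and domain `"com.example"` are illustrative placeholders, not the snippet the `@snippet` directive refers to, and header paths may differ by distribution.

```cpp
#include <memory>
#include <onnx_import/onnx_utils.hpp>
#include <ngraph/opsets/opset5.hpp>

// Must run before InferenceEngine::Core::ReadNetwork reads a model
// that uses the 'CustomRelu' operator.
void register_custom_relu() {
    ngraph::onnx_import::register_operator(
        "CustomRelu", 1, "com.example",  // op_type, version, domain (placeholders)
        [](const ngraph::onnx_import::Node& node) -> ngraph::OutputVector {
            ngraph::OutputVector inputs{node.get_ng_inputs()};
            // For illustration, map CustomRelu onto the standard nGraph Relu.
            return {std::make_shared<ngraph::opset5::Relu>(inputs.at(0))};
        });
}

// Later, if the operator is no longer needed:
// ngraph::onnx_import::unregister_operator("CustomRelu", 1, "com.example");
```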
3 changes: 2 additions & 1 deletion docs/IE_DG/InferenceEngine_QueryAPI.md
@@ -32,7 +32,8 @@ MYRIAD.1.4-ma2480
 FPGA.0
 FPGA.1
 CPU
-GPU
+GPU.0
+GPU.1
 ...
 ```

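The device list in this hunk is what device enumeration returns. As a rough sketch (assuming the 2021.x Inference Engine C++ API), the same names can be obtained with `Core::GetAvailableDevices`:

```cpp
#include <iostream>
#include <string>
#include <vector>
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Yields names such as "CPU", "GPU.0", "GPU.1", "MYRIAD.1.2-ma2480".
    std::vector<std::string> devices = core.GetAvailableDevices();
    for (const std::string& device : devices) {
        std::cout << device << std::endl;
    }
    return 0;
}
```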
5 changes: 1 addition & 4 deletions docs/IE_DG/Introduction.md
@@ -122,7 +122,4 @@ The open source version is available in the [OpenVINO™ toolkit GitHub reposito
 - [Intel&reg; Deep Learning Deployment Toolkit Web Page](https://software.intel.com/en-us/computer-vision-sdk)


-[scheme]: img/workflow_steps.png
-
-#### Optimization Notice
-<sup>For complete information about compiler optimizations, see our [Optimization Notice](https://software.intel.com/en-us/articles/optimization-notice#opt-en).</sup>
+[scheme]: img/workflow_steps.png
3 changes: 0 additions & 3 deletions docs/IE_DG/Optimization_notice.md

This file was deleted.

4 changes: 2 additions & 2 deletions docs/IE_DG/Samples_Overview.md
@@ -43,7 +43,7 @@ To run the sample applications, you can use images and videos from the media fil

 ## Samples that Support Pre-Trained Models

-You can download the [pre-trained models](@ref omz_models_intel_index) using the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use [public](@ref omz_models_public_index) or [Intel's](@ref omz_models_intel_index) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader_README).

 ## Build the Sample Applications

@@ -127,7 +127,7 @@ You can also build a generated solution manually. For example, if you want to bu
 Microsoft Visual Studio and open the generated solution file from the `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\Samples.sln`
 directory.

-### <a name="build_samples_linux"></a>Build the Sample Applications on macOS*
+### <a name="build_samples_macos"></a>Build the Sample Applications on macOS*

 The officially supported macOS* build environment is the following:
1 change: 0 additions & 1 deletion docs/IE_DG/protecting_model_guide.md
@@ -59,5 +59,4 @@ should be called with `weights` passed as an empty `Blob`.
 - Inference Engine Developer Guide: [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)
 - For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.md)
 - For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
-- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
 - For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
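The hunk header above mentions calling `ReadNetwork` with `weights` passed as an empty `Blob`. A minimal sketch of reading a decrypted model from memory (assuming the 2021.x Inference Engine C++ API; the file names and the pass-through `decrypt` are placeholders):

```cpp
#include <cstdint>
#include <cstring>
#include <fstream>
#include <sstream>
#include <string>
#include <inference_engine.hpp>

// Placeholder for a real decryption routine; a pass-through here.
static std::string decrypt(std::string data) { return data; }

static std::string read_file(const std::string& path) {
    std::ifstream file(path, std::ios::binary);
    std::ostringstream buffer;
    buffer << file.rdbuf();
    return buffer.str();
}

int main() {
    const std::string xml = decrypt(read_file("model.xml.enc"));
    const std::string bin = decrypt(read_file("model.bin.enc"));

    InferenceEngine::Core core;

    // Wrap the decrypted weights in a Blob. For a model with no separate
    // weights file, an empty InferenceEngine::Blob::CPtr is passed instead.
    auto weights = InferenceEngine::make_shared_blob<uint8_t>(
        InferenceEngine::TensorDesc(InferenceEngine::Precision::U8,
                                    {bin.size()}, InferenceEngine::Layout::C));
    weights->allocate();
    std::memcpy(weights->buffer().as<uint8_t*>(), bin.data(), bin.size());

    InferenceEngine::CNNNetwork network = core.ReadNetwork(xml, weights);
    return 0;
}
```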
29 changes: 25 additions & 4 deletions docs/IE_DG/supported_plugins/CL_DNN.md
@@ -1,9 +1,30 @@
 GPU Plugin {#openvino_docs_IE_DG_supported_plugins_CL_DNN}
 =======

-The GPU plugin uses the Intel&reg; Compute Library for Deep Neural Networks ([clDNN](https://01.org/cldnn)) to infer deep neural networks.
-clDNN is an open source performance library for Deep Learning (DL) applications intended for acceleration of Deep Learning Inference on Intel&reg; Processor Graphics including Intel&reg; HD Graphics and Intel&reg; Iris&reg; Graphics.
-For an in-depth description of clDNN, see: [clDNN sources](https://github.com/intel/clDNN) and [Accelerate Deep Learning Inference with Intel&reg; Processor Graphics](https://software.intel.com/en-us/articles/accelerating-deep-learning-inference-with-intel-processor-graphics).
+The GPU plugin uses the Intel® Compute Library for Deep Neural Networks (clDNN) to infer deep neural networks.
+clDNN is an open source performance library for Deep Learning (DL) applications intended for acceleration of Deep Learning Inference on Intel® Processor Graphics including Intel® HD Graphics, Intel® Iris® Graphics, Intel® Iris® Xe Graphics, and Intel® Iris® Xe MAX graphics.
+For an in-depth description of clDNN, see [Inference Engine source files](https://github.com/openvinotoolkit/openvino/tree/master/inference-engine/src/cldnn_engine) and [Accelerate Deep Learning Inference with Intel® Processor Graphics](https://software.intel.com/en-us/articles/accelerating-deep-learning-inference-with-intel-processor-graphics).

+## Device Naming Convention
+* Devices are enumerated as "GPU.X" where `X={0, 1, 2,...}`. Only Intel® GPU devices are considered.
+* If the system has an integrated GPU, it always has id=0 ("GPU.0").
+* Other GPUs have undefined order that depends on the GPU driver.
+* "GPU" is an alias for "GPU.0"
+* If the system doesn't have an integrated GPU, then devices are enumerated starting from 0.
+
+For demonstration purposes, see the [Hello Query Device C++ Sample](../../../inference-engine/samples/hello_query_device/README.md) that can print out the list of available devices with associated indices. Below is an example output (truncated to the device names only):
+
+```sh
+./hello_query_device
+Available devices:
+Device: CPU
+...
+Device: GPU.0
+...
+Device: GPU.1
+...
+Device: HDDL
+```
+
 ## Optimizations

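Given the naming convention added above, a short sketch of targeting a specific GPU (assuming the 2021.x Inference Engine C++ API; `model.xml` and the presence of a second GPU are assumptions):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // "GPU" is an alias for "GPU.0" (the integrated GPU, if one is present).
    auto exec_default = core.LoadNetwork(network, "GPU");

    // An explicit index selects a particular device, e.g. a discrete GPU.
    auto exec_gpu1 = core.LoadNetwork(network, "GPU.1");
    return 0;
}
```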
@@ -92,7 +113,7 @@ When specifying key values as raw strings (that is, when using Python API), omit
 | `KEY_CLDNN_PLUGIN_THROTTLE` | `<0-3>` | `0` | OpenCL queue throttling (before usage, make sure your OpenCL driver supports appropriate extension)<br> Lower value means lower driver thread priority and longer sleep time for it. 0 disables the setting. |
 | `KEY_CLDNN_GRAPH_DUMPS_DIR` | `"<dump_dir>"` | `""` | clDNN graph optimizer stages dump output directory (in GraphViz format) |
 | `KEY_CLDNN_SOURCES_DUMPS_DIR` | `"<dump_dir>"` | `""` | Final optimized clDNN OpenCL sources dump output directory |
-| `KEY_GPU_THROUGHPUT_STREAMS` | `KEY_GPU_THROUGHPUT_AUTO`, or positive integer| 1 | Specifies a number of GPU "execution" streams for the throughput mode (upper bound for a number of inference requests that can be executed simultaneously).<br>This option is can be used to decrease GPU stall time by providing more effective load from several streams. Increasing the number of streams usually is more effective for smaller topologies or smaller input sizes. Note that your application should provide enough parallel slack (e.g. running many inference requests) to leverage full GPU bandwidth. Additional streams consume several times more GPU memory, so make sure the system has enough memory available to suit parallel stream execution. Multiple streams might also put additional load on CPU. If CPU load increases, it can be regulated by setting an appropriate `KEY_CLDNN_PLUGIN_THROTTLE` option value (see above). If your target system has relatively weak CPU, keep throttling low. <br>The default value is 1, which implies latency-oriented behaviour.<br>`KEY_GPU_THROUGHPUT_AUTO` creates bare minimum of streams to improve the performance; this is the most portable option if you are not sure how many resources your target machine has (and what would be the optimal number of streams). <br> A positive integer value creates the requested number of streams. |
+| `KEY_GPU_THROUGHPUT_STREAMS` | `KEY_GPU_THROUGHPUT_AUTO`, or positive integer| 1 | Specifies a number of GPU "execution" streams for the throughput mode (upper bound for a number of inference requests that can be executed simultaneously).<br>This option is can be used to decrease GPU stall time by providing more effective load from several streams. Increasing the number of streams usually is more effective for smaller topologies or smaller input sizes. Note that your application should provide enough parallel slack (e.g. running many inference requests) to leverage full GPU bandwidth. Additional streams consume several times more GPU memory, so make sure the system has enough memory available to suit parallel stream execution. Multiple streams might also put additional load on CPU. If CPU load increases, it can be regulated by setting an appropriate `KEY_CLDNN_PLUGIN_THROTTLE` option value (see above). If your target system has relatively weak CPU, keep throttling low. <br>The default value is 1, which implies latency-oriented behavior.<br>`KEY_GPU_THROUGHPUT_AUTO` creates bare minimum of streams to improve the performance; this is the most portable option if you are not sure how many resources your target machine has (and what would be the optimal number of streams). <br> A positive integer value creates the requested number of streams. |
 | `KEY_EXCLUSIVE_ASYNC_REQUESTS` | `YES` / `NO` | `NO` | Forces async requests (also from different executable networks) to execute serially.|

 ## Note on Debug Capabilities of the GPU Plugin
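For the `KEY_GPU_THROUGHPUT_STREAMS` row above, a hedged sketch of setting the option (assuming the `CONFIG_KEY`/`CONFIG_VALUE` macros from `ie_plugin_config.hpp` in the 2021.x C++ API; `model.xml` and the stream count are placeholders):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Let the plugin choose a bare minimum of streams for throughput mode.
    core.SetConfig({{CONFIG_KEY(GPU_THROUGHPUT_STREAMS),
                     CONFIG_VALUE(GPU_THROUGHPUT_AUTO)}}, "GPU");

    // Alternatively, request an explicit number of streams at load time.
    auto exec = core.LoadNetwork(network, "GPU",
        {{CONFIG_KEY(GPU_THROUGHPUT_STREAMS), "2"}});
    return 0;
}
```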
2 changes: 1 addition & 1 deletion docs/IE_DG/supported_plugins/HDDL.md
@@ -21,7 +21,7 @@ For the "Supported Networks", please reference to [MYRIAD Plugin](MYRIAD.md)
 See VPU common configuration parameters for the [VPU Plugins](VPU.md).
 When specifying key values as raw strings (that is, when using Python API), omit the `KEY_` prefix.

-In addition to common parameters for Myriad plugin and HDDL plugin, HDDL plugin accepts the following options:
+In addition to common parameters for MYRIAD plugin and HDDL plugin, HDDL plugin accepts the following options:

 | Parameter Name | Parameter Values | Default | Description |
 | :--- | :--- | :--- | :--- |
8 changes: 5 additions & 3 deletions docs/IE_DG/supported_plugins/MULTI.md
@@ -47,11 +47,13 @@ Inference Engine now features a dedicated API to enumerate devices and their cap
 ```sh
 ./hello_query_device
 Available devices:
-Device: CPU
+    Device: CPU
 ...
-Device: GPU
+    Device: GPU.0
 ...
-Device: HDDL
+    Device: GPU.1
+...
+    Device: HDDL
 ```
 Simple programmatic way to enumerate the devices and use with the multi-device is as follows:
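A minimal sketch of such enumeration-plus-MULTI usage (assuming the 2021.x Inference Engine C++ API; the device priorities shown are only an example, not the documentation's own snippet):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Device priorities are listed directly in the device string; the list
    // could also be built at run time from core.GetAvailableDevices().
    auto exec = core.LoadNetwork(network, "MULTI:GPU.1,GPU.0,CPU");
    return 0;
}
```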

12 changes: 6 additions & 6 deletions docs/IE_DG/supported_plugins/VPU.md
@@ -9,12 +9,12 @@ This chapter provides information on the Inference Engine plugins that enable in
 ## Known Layers Limitations

-* `'ScaleShift'` layer is supported for zero value of `'broadcast'` attribute only.
-* `'CTCGreedyDecoder'` layer works with `'ctc_merge_repeated'` attribute equal 1.
-* `'DetectionOutput'` layer works with zero values of `'interpolate_orientation'` and `'num_orient_classes'` parameters only.
-* `'MVN'` layer uses fixed value for `'eps'` parameters (1e-9).
-* `'Normalize'` layer uses fixed value for `'eps'` parameters (1e-9) and is supported for zero value of `'across_spatial'` only.
-* `'Pad'` layer works only with 4D tensors.
+* `ScaleShift` layer is supported for zero value of `broadcast` attribute only.
+* `CTCGreedyDecoder` layer works with `ctc_merge_repeated` attribute equal 1.
+* `DetectionOutput` layer works with zero values of `interpolate_orientation` and `num_orient_classes` parameters only.
+* `MVN` layer uses fixed value for `eps` parameters (1e-9).
+* `Normalize` layer uses fixed value for `eps` parameters (1e-9) and is supported for zero value of `across_spatial` only.
+* `Pad` layer works only with 4D tensors.

 ## Optimizations

6 changes: 2 additions & 4 deletions docs/Legal_Information.md
@@ -4,17 +4,15 @@ This software and the related documents are Intel copyrighted materials, and you

 This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request. Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting [www.intel.com/design/literature.htm](https://www.intel.com/design/literature.htm).

-Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
-
-Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit [www.intel.com/benchmarks](https://www.intel.com/benchmarks).
+Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).

 Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

 Your costs and results may vary.

 Intel technologies may require enabled hardware, software or service activation.

-© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. \*Other names and brands may be claimed as the property of others.
+© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. \*Other names and brands may be claimed as the property of others.

 ## OpenVINO™ Logo
 To build equity around the project, the OpenVINO logo was created for both Intel and community usage. The logo may only be used to represent the OpenVINO toolkit and offerings built using the OpenVINO toolkit.
7 changes: 7 additions & 0 deletions docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -12,6 +12,13 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi

 * <code>.bin</code> - Contains the weights and biases binary data.

+> **TIP**: You also can work with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
+> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare
+> performance of deep learning models on various Intel® architecture
+> configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components.
+> <br>
+> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
+
 ## What's New in the Model Optimizer in this Release?

 * Common changes:
@@ -23,7 +23,7 @@ A summary of the steps for optimizing and deploying a model that was trained wit
 * **Object detection models:**
   * SSD300-VGG16, SSD500-VGG16
   * Faster-RCNN
-  * RefineDet (Myriad plugin only)
+  * RefineDet (MYRIAD plugin only)

 * **Face detection models:**
   * VGG Face
@@ -280,7 +280,7 @@ python3 mo_tf.py --input_model inception_v1.pb -b 1 --tensorflow_custom_operatio

 * Launching the Model Optimizer for Inception V1 frozen model and use custom sub-graph replacement file `transform.json` for model conversion. For more information about this feature, refer to [Sub-Graph Replacement in the Model Optimizer](../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
 ```sh
-python3 mo_tf.py --input_model inception_v1.pb -b 1 --tensorflow_use_custom_operations_config transform.json
+python3 mo_tf.py --input_model inception_v1.pb -b 1 --transformations_config transform.json
 ```

 * Launching the Model Optimizer for Inception V1 frozen model and dump information about the graph to TensorBoard log dir `/tmp/log_dir`
@@ -46,7 +46,7 @@ To generate the IR of the EfficientDet TensorFlow model, run:<br>
 ```sh
 python3 $MO_ROOT/mo.py \
 --input_model savedmodeldir/efficientdet-d4_frozen.pb \
---tensorflow_use_custom_operations_config $MO_ROOT/extensions/front/tf/automl_efficientdet.json \
+--transformations_config $MO_ROOT/extensions/front/tf/automl_efficientdet.json \
 --input_shape [1,$IMAGE_SIZE,$IMAGE_SIZE,3] \
 --reverse_input_channels
 ```
@@ -56,7 +56,7 @@ EfficientDet models were trained with different input image sizes. To determine
 dictionary in the [hparams_config.py](https://github.com/google/automl/blob/96e1fee/efficientdet/hparams_config.py#L304) file.
 The attribute `image_size` specifies the shape to be specified for the model conversion.

-The `tensorflow_use_custom_operations_config` command line parameter specifies the configuration json file containing hints
+The `transformations_config` command line parameter specifies the configuration json file containing hints
 to the Model Optimizer on how to convert the model and trigger transformations implemented in the
 `$MO_ROOT/extensions/front/tf/AutomlEfficientDet.py`. The json file contains some parameters which must be changed if you
 train the model yourself and modified the `hparams_config` file or the parameters are different from the ones used for EfficientDet-D4.