From 6493a0f6bc1b8bedc471fa7d141bafffecb318ab Mon Sep 17 00:00:00 2001
From: Tim Zerrell
Date: Tue, 2 Feb 2021 19:15:56 -0800
Subject: [PATCH] Merge from upstream (#132)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* [IE][VPU]: Remove protocol checks for updating memType and watchdog flag (#3762)
  Remove protocol checks for updating memType and watchdog flag. This has been verified by Microsoft on their target platform with 2 ma2085 over PCIe. The target was able to run an OpenVINO sample with these changes.
* Broken first inference time counter (#3848)
* [IE CLDNN] Performance / accuracy fixes (#3729)
  - Added linear_onnx mode support into resample_opt kernel.
  - Fixed byxf layout check.
  - Added Resample + Eltwise fusing support
  - Updated dequantize merge pass to work with eltwise instead of scale
  - Fixed uninitialized m_maxBatch value for query mode
  - Fixed missing AddPrimitiveToProfiler for DeformablePSRoiPooling
  - Fixed 0d gather
  - Added WA for Resample + Eltwise fusing
Co-authored-by: Gleb Kazantaev
* MO dev guide refactoring (#3266) (#3595)
* Release MO dev guide refactoring (#3266)
* Updated MO extension guide
* Minor changes and adding SVG images
* Added additional information about operation extractors. Fixed links and markdown issues
* Added missing file with information about Caffe Python layers and an image for the MO transformations dependencies graph
* Added section with common graph transformation attributes and a diagram with anchor transformations. Added list of available front phase transformations
* Added description of front-phase transformations except the scope-defined and points-defined ones. Removed legacy document and examples for such transformations.
* Added sections about node-name-pattern-defined front phase transformations. Copy-pasted the old one for the points-defined front transformation
* Added description of the rest of the front transformations and all middle and back phase transformations
* Refactored Legacy_Mode_for_Caffe_Custom_Layers and updated Customize_Model_Optimizer with information about extractor order
* Added TOC for the MO Dev guide document and replaced SVG images with PNG ones
* Fixed broken link. Removed redundant image
* Fixed broken links
* Added information about the transformation attributes 'run_not_recursively', 'force_clean_up' and 'force_shape_inference'
* Code review comments
* Added a section about `Port`s
* Extended Ports description with examples
* Added information about Connections
* Updated MO README.md and removed a lot of redundant and misleading information
* Updates to Customize_Model_Optimizer.md
* More updates to Customize_Model_Optimizer.md
* Final updates for Customize_Model_Optimizer.md
* Fixed some broken links
* More fixed links
* Refactored Custom Layers Guide: removed legacy and incorrect text, added up-to-date content.
* Draft implementation of the Custom Layer Guide example for the MO part
* Fixed broken links using #.
* Change layer->operation in extensibility documents
* Updated Custom operation guide with IE part
* Fixed broken links and minor updates to the Custom Operations Guide
* Updating links
* Layer->Operation
* Moved FFTOp implementation to the template extension
* Update the CMake for template_extension to build the FFT op conditionally
* Fixed template extension compilation
* Fixed CMake for template extension
* Fixed broken snippet
* Added mri_demo script and updated documentation
* One more compilation error fix
* Added missing header for a demo file
* Added reference to OpenCV
* Fixed unit test for the template extension
* Fixed typos in the template extension
* Fixed compilation of the template extension for the case when the ONNX importer is disabled
Co-authored-by: Alexander Zhogov
* [BF16] Simulation mode was added to documentation (#3649)
* [BF16] Simulation mode was added to documentation
* Update Bfloat16Inference.md
Co-authored-by: Anastasiya Ageeva
* Inconsistent inference results for hello_classification sample between Windows and Linux (44369) (#3849)
* Fix image load issue
* remove extra line
* Attributes have different values in MO and NGraph IRs (#3793)
* regionyolo do_softmax attribute
* add serialization single layer tests for normalizel2 and reshape
* add prelu sslt, change letter case in op name to align with MO
* add shufflechannels sslt, add workaround to serialize the op with the proper opset number
* add broadcast sslt, change attribute string representations to lowercase
* add pad sslt, change attribute string representations to lowercase
* Unify sslt name prefixes
* add prelu name translation for serialization
* change expected type of regionyolo do_softmax attribute to bool
* transform autobroadcast type attr to lowercase, add unit test, add special opset mapping in serialization
* style fix
* fix indentation
* fix indentation 2
* Possibility of different opset assignment for different op versions
* Update header dates in modified files
* Match special opset to type_info_t instead of a string
* Adjust the comment to match the code
* Fix sanitizer build (#3869)
* [ONNX Importer] Reduce amount of ONNX Importer public includes. (#3837)
* Initial moving
* ONNX Importer is private now - CMakeLists.txt
* ONNX Importer is private now - Includes
* Make some files visible
* Style apply
* Review fix
* Public headers have a prefix now
* Style
* hide more headers
* [IE][VPU]: Added MYRIAD_THROUGHPUT_STREAMS in default config (#3861)
* [IE][VPU]: Fixed the calculation of timeout for x32 system (#3842)
* [Python API] Small fixes in tests and change of format string to f-string (#3865)
* Added -Wall for IE Core libraries (#3852)
* doc files copyright (#3843)
* doc files copyright
* fix indentation
* Enhancing Object Detection Sample SSD C Sample docs to cover the use of ie_network_reshape() for setting the batch size (#3875)
* Allow FakeQuantize with output_low scalar to be transformed (#3812)
* Allow FakeQuantize with output_low scalar to be transformed
* add test case with scalar
* Enable CNN2D tests for GNA Lib 2.1.0.1048 (#3529)
  Enable tests including rectangular kernel and multiple kernels
  Pad filters to 16B
  Fix style: space after 'if' before '(' needed
  Fix PRETTY_FUNCTION double def
  Fix canMatchWith1AsyncThread
  Fix ifdefs for GNA 2.0
  Add and fix mock
  Simplify and fix condition for Rotate features
  Refine comments in GNA CONV tests file
  Apply review, refactor ConvolutionPrimitive
  Refine CNN enforce legacy
  Add debug print
  Move debug dump definitions
  Add new metric for GNA library version
  Add comments on FP32
* Add Clamp fusion transformation (#3756)
* Add Clamp fusion transformation
  It fuses a Maximum->Minimum subgraph to the Clamp operator. Ticket: 44783
* address review comments
* update year in headers
* Adding docs for macOS samples building (#3862)
* Develop Bucketize Reference Implementation (#3693)
* Bucketize: Revise op class and add type_prop unit tests
* Bucketize: Develop reference implementation
* Bucketize: Add unit tests
* Bucketize: Add single layer test and cpu instantiation
* Bucketize: Add unit test with empty buckets for INTERPRETER
* Bucketize: Typo in buckets element type check
* Bucketize: Add custom generated inputs in single layer test class
* Bucketize: Use random_device to generate seed for data blob
* Bucketize: Remove unsupported f64 precision
* Bucketize: Add function description
* Bucketize: Remove randomness of inputs generation by using static seed
* Bucketize: Support different precisions for data and bucket inputs
* Bucketize: Refactor type_prop tests and improve backend unit test coverage
* Fixed warnings generation for ngraph API (#3864)
* ONNX model editor - replacing input shapes (#3844)
* Azure CI: Prepare for updating IB ver (#3085)
* Add install_ib_console.bat
* [ONNX Importer] move null_node to private scope (#3877)
* Replaced size_t with int64_t (#3744)
* Offline transformation API (#3408)
* Added offline transformations library
* Added python API for calling MOCTransformations
* Added CF flag for MOC Transformations
* Divided offline api into a separate independent module
* Update MOC pipeline to execute only fusions
* Disable CF for PriorBox ops
* Clean-up
* Added python test
* Removed transformation pipeline as it is not ready yet
* Removed changes not related to this PR
* Fixed build for dev package case; renamed to offline_transformations_api
* Removed unrelated changes
* Removed excess exports from cmake
* Removed useless custom command from cmake
* Remove deprecated methods usage from transformation library (#3881)
* Remove deprecated methods usage from transformation library
* graph_rewrite_callback -> matcher_pass_callback
* Clean-up legacy library from deprecated methods usage
* Update func tests
* [IE][VPU]: changing condition in HW tiling (#3695)
* Update MO extensions enabling/disabling mechanism (#3873)
* Add id for NormalizeToNormalizeL2 transformation
* Update copyright year
* Update extensions enabling/disabling mechanism
* Remove copyright change
* Copyright year
* Update documentation
* Revert missed year in copyright
* Add MVN-6 related transformations (#3710)
* Add MVN decomposition transformation
* Add MVN-1 to MVN-6 transformation
* Apply review feedback
* Apply review feedback
* Fix build
* Fix if statement and add 5D tests
* Apply review feedback
* Apply review feedback
* Apply feedback
* Revert "Apply feedback"
  This reverts commit 039fefbff9590a718b1450ebac4417c2064e4103.
* Apply review feedback
* Apply review feedback
* Fix build issue
* Apply review feedback
* Apply review feedback
* Apply feedback
* Proper cpplint target for object libraries (#3883)
* Enable calculation of reference data without prior run of infer request (#3856)
* Implement ExperimentalDetectronDetectionOutput and ExperimentalDetectronPriorGridGenerator operations as nGraph ops (#3374)
* Commit.
* Started to write nGraph operation ExperimentalDetectronDetectionOutput. Written draft of the header file.
* Written draft of the cpp-file for nGraph operation ExperimentalDetectronDetectionOutput.
* Small fix.
* Added reading of ExperimentalDetectronDetectionOutput as nGraph operation.
* Some fix.
* Unregistered old shape infer function of the operation ExperimentalDetectronDetectionOutput.
* Written the header file for the operation ExperimentalDetectronPriorGridGenerator.
* Small refactoring.
* Small fix.
* Added set_output_size(3) into op::ExperimentalDetectronDetectionOutput::validate_and_infer_types().
* Added check for number of inputs of ExperimentalDetectronDetectionOutput.
* Reverted some changes.
* Changed IR for ExperimentalDetectronDetectionOutput serialization test.
* Written cpp-file of nGraph operation ExperimentalDetectronPriorGridGenerator.
* Small fix.
* Some fixes.
* Fixes in type and shape infer functions of the MO operation ExperimentalDetectronDetectionOutput.
* Now ExperimentalDetectronPriorGridGenerator is read as an nGraph operation.
* Fixed the infer function of the nGraph operation ExperimentalDetectronPriorGridGenerator.
* Started to write tests for the shape infer function of the nGraph operation ExperimentalDetectronDetectionOutput.
* Written the draft of the test for the shape infer function of the nGraph operation ExperimentalDetectronDetectionOutput.
* Small fix.
* Fixed ngraph/test/CMakeLists.txt.
* Started to write tests for the shape infer function of the nGraph operation ExperimentalDetectronPriorGridGenerator.
* Now the shape infer function of the nGraph operation ExperimentalDetectronPriorGridGenerator correctly handles the case of dynamic input shapes with static ranks.
* Continued to write tests for the nGraph operation ExperimentalDetectronPriorGridGenerator.
* Small fixes.
* Written tests for the shape infer function of the nGraph operation ExperimentalDetectronPriorGridGenerator (case when input shapes are partially dynamic).
* Added test for reading ExperimentalDetectronDetectionOutput as an operation from opset6.
* Some fixes.
* Added some debug outputs.
* Deleted inserted debug output.
* Small fixes.
* Small fix.
* Small fix.
* Small change.
* Added comments to attributes of ExperimentalDetectronDetectionOutput.
* Reverted changes.
* Deleted shape infer for output port 3.
* Small fixes.
* Deleted redundant keyword 'virtual'.
* Deleted redundant usings in header files of nGraph operations ExperimentalDetectronDetectionOutput and ExperimentalDetectronPriorGridGenerator.
* Some fixes.
* Small change.
* Now GridGenerator::validate takes three args (input partial shapes).
* Small fix.
* Deleted some usings.
* Small code style fix.
* Reverted changes in validate_and_infer_types() and validate() of op::v6::ExperimentalDetectronPriorGridGenerator.
* Added description of the class ExperimentalDetectronDetectionOutput.
* Added some comments into the header file of the nGraph operation ExperimentalDetectronPriorGridGenerator.
* Some fixes.
* Added some comments to the class of the nGraph operation ExperimentalDetectronPriorGridGenerator.
* Now the MO operation ExperimentalDetectronDetectionOutput has the attribute 'version' as 'opset6'.
* Now the MO operation ExperimentalDetectronPriorGridGenerator has the attribute 'version' as 'opset6'.
* Some fixes in the MO class ExperimentalDetectronDetectionOutput.
* Fixes in the shape infer function of the nGraph operation ExperimentalDetectronPriorGridGenerator.
* Renamed test XML model for ExperimentalDetectronDetectionOutput serialization tests.
* Added validation of input shapes for the nGraph operation ExperimentalDetectronDetectionOutput.
* Small fixes in the XML models for serialization testing of ExperimentalDetectronDetectionOutput.
* Added tests of the shape infer function of the nGraph operation ExperimentalDetectronDetectionOutput for the case when input shapes are partially dynamic.
* Added tests of the shape infer function of the nGraph operation ExperimentalDetectronDetectionOutput for the case when some input shapes have dynamic ranks.
* Small fixes.
* Small fix in the MO operation ExperimentalDetectronDetectionOutput shape infer function.
* Fixes in op::v6::ExperimentalDetectronDetectionOutput::validate_and_infer_types().
* Code style fix.
* Small refactoring.
* Added NGRAPH_OP_SCOPE into ExperimentalDetectronDetectionOutput nGraph class.
* Added NGRAPH_OP_SCOPE to the nGraph class ExperimentalDetectronPriorGridGenerator.
* Small fixes.
* Some refactoring.
* Small fix.
* Small fixes.
* Reverted some changes in ExperimentalDetectronDetectionOutput::validate_and_infer_types().
* Now VPU reads the attribute class_agnostic_box_regression of ExperimentalDetectronDetectionOutput as Bool.
* Now MO generates the attribute 'class_agnostic_box_regression' of ExperimentalDetectronDetectionOutput only with values false or true.
* Small fix.
* Tabs were replaced by spaces in some XMLs.
* Fixed copyrights.
* Refactoring in op::v6::ExperimentalDetectronDetectionOutput::validate_and_infer_types().
* Refactoring in op::v6::ExperimentalDetectronPriorGridGenerator::validate_and_infer_types().
* Small fixes.
* Started to write ExperimentalDetectronPriorGridGenerator shape infer tests for the case when dynamic input dimensions are intervals.
* Deleted redundant 'return'.
* Written tests for interval values of input shapes of op::v6::ExperimentalDetectronPriorGridGenerator (interval dimensions are sketched below).
* Code style fix.
* Code style fix.
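The interval-value shape tests mentioned just above exercise nGraph's Dimension/PartialShape interval arithmetic. A minimal standalone sketch of that behavior (illustrative shape values, not the actual test fixtures; assumes only the public nGraph headers):

```cpp
// Illustrates the interval dimensions used by the ExperimentalDetectron*
// shape-infer tests above. The concrete values here are made up.
#include <iostream>
#include <ngraph/dimension.hpp>
#include <ngraph/partial_shape.hpp>

int main() {
    using ngraph::Dimension;
    using ngraph::PartialShape;

    // A partially dynamic shape: static rank, one interval dimension.
    PartialShape rois{Dimension(100, 300), 4};
    std::cout << "rank is static: " << rois.rank().is_static() << "\n";  // prints 1

    // Merging two compatible interval dimensions yields their intersection.
    Dimension merged;
    if (Dimension::merge(merged, Dimension(100, 300), Dimension(200, 400))) {
        std::cout << "merged: " << merged.get_min_length() << ".."
                  << merged.get_max_length() << "\n";  // 200..300
    }
    return 0;
}
```

Dimension::merge returns false for non-overlapping intervals, which is typically how incompatible input shapes surface as validation errors from validate_and_infer_types().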
* Add support for ONNX Operator ReduceSum v13 and revise other Reduce operators (#3605)
* Fixed -Wall warnings on ARM build (#3885)
* ONNX tests mismatch error (#3836)
* Deprecated IVariableState interface (#3884)
* Improvements for subnormal floats zeroing in CPU plugin (#3797)
* Added CC macro to transformations (#3795)
* Added CC macro to transformations
* Fixed typo
* Added MATCHER_SCOPE
* Fixed review comments
* Try to remove MATCHER_CALLBACK_SCOPE
* Fixed matcher name
* Fixed MATCHER_SCOPE
* Added documentation
* Fixed typo
* Fixed CC for Linux
* Fixed names
* Fixed docs
* Fixed typo
* Fixed comments
* Add more CC macros
* [CPU] Runtime precision for execution graph (#3886)
* [IE][VPU][DTS]: shrink mask support for StridedSlice and test (#3835)
* [IE TESTS][CPU] Fusing tests added to the CPU specific single layer tests. (#3015)
* Fix signed/unsigned comparison warnings (#3900)
  They are treated as errors, which leads to build failure. Tested on Ubuntu 20.04, gcc 9.3.0.
* [IE CLDNN] Changed weights layout used in the plugin (#3858)
  Before this patch a constant with weights might not be detected if it wasn't directly connected to a Conv/Deconv layer. Now weights always use the common data format (bfzyx) in the plugin, which is converted into a weights format (goiyx, oiyx, etc.) later, so the weights sub-graph can now contain anything.
* Fix nGraph doxygen for master (#3899)
* Changed style of some headers
* Fixed shared buffer
* Remove chrome_trace
* Fixed comment
* [IE CLDNN] Added missing pointer type case for blocked read (#3866)
* [IE CLDNN] Change memory reset rules (#2909)
* [IE CLDNN] Eltwise b_fs_yx_fsv16 mixed precision support (#3734)
* Visitor API TI serialization (#3777)
* Add on_adapter(Function) for serialization.
* Add port_map and back_edges serialization.
* Add 2 unit tests for TI serialization.
* Convert lambda expression into function pointer.
* Add single layer test for tensor iterator.
* Add limitation for file name length during serialization.
* Add file name length limitation for Serialize().
* Add WA for LSTMCell v0 in serialize class, new test class for TI serialization with dynamic weights, add bin path to SerializationParams, replace call to ngfunction_2_irv10 with visitor.on_attribute().
* Remove hacks for TI from ngfunction_2_irv10(), validate buffers in port_map.
* Changed year in newly added test files.
* Add check for version of LSTMv0 WA, add assert for model read from file.
* Remove append_copy for xml Function, changed comparison for LSTMv0 WA.
* Update second WA for LSTMCell v0 with version check.
* Remove find_child when searching for port_map and back_edges.
* Fixed cmake 'message' for multiple arguments (#3901)
* [nGraph][ONNX] Extend ONNX Importer for operation "GatherElements-6" (#3822)
* [IE TESTS][CPU] I8 and U8 precisions were enabled in the reverse sequence single layer test. (#3722)
* Change `json.loads` to `json.load` for `--db_metadata` (#3871)
* Fixed import module name (#3906)
* Set output blobs precision for IE tests (#3905)
* Calling SetPrecision on CNN network outputs
* added tests
* get_output_name refactor
* add missing test file
* run tests on all backends (or disable if backend is not available)
* fixed tests
* fixed TestEngine
* [CPU] ROIAlign: fixed misprint in input width value (#3853)
* [IE][VPU]: Fix buffer size calculation bug (#3825)
* [IE][VPU][Tests]: Support DTS for GatherElements (#3688)
* Support DTS for GatherElements
* Extract GatherBase to a common part
* Introduce tests on inference
* Introduce tests on function comparing
* Disable failing tests
* Added PythonAPI For LowLatency Transformation (#3910)
* [IE][VPU][Tests]: Fix ROIPooling test (#3705)
* fix initialization bug of spatial_scale in tests (affected input generation)
* fix input generation for bilinear ROI Pooling
* correct parameters for myriad tests:
  - the myriad plugin does not support batch for this layer;
  - decrease threshold since myriad uses fp16 calculations
* Azure CI: Add install (#3876)
* Azure CI: Add install
* Fix virtual inheritance for class PadLayerTest (#3912)
* Implement ExperimentalDetectronROIFeatureExtractor operation as nGraph op (#3739)
* Commit.
* Written the header file of the nGraph operation ExperimentalDetectronROIFeatureExtractor.
* Started to write cpp-file for the nGraph operation ExperimentalDetectronROIFeatureExtractor.
* Deleted in_ports_count attribute for the MO operation ExperimentalDetectronROIFeatureExtractor.
* Written validate_and_infer_types() method of ngraph::op::v6::ExperimentalDetectronROIFeatureExtractor.
* Code style fixes.
* Code style fixes.
* Small fixes.
* Code style fixes.
* Now operation ExperimentalDetectronROIFeatureExtractor is read as nGraph operation ExperimentalDetectronROIFeatureExtractor.
* Implemented op::v6::ExperimentalDetectronROIFeatureExtractor::clone_with_new_inputs().
* Added macro NGRAPH_OP_SCOPE to the cpp-file of the nGraph operation ExperimentalDetectronROIFeatureExtractor.
* Fixes in some tests.
* Code style fix.
* Fixed yet another test for ExperimentalDetectronROIFeatureExtractor.
* Added tests for reading ExperimentalDetectronROIFeatureExtractor as operation from opset6.
* Added more tests for reading ExperimentalDetectronROIFeatureExtractor as operation from opset6.
* Started to write shape infer tests for the nGraph operation ExperimentalDetectronROIFeatureExtractor.
* Corrected ngraph/test/CMakeLists.txt.
* Small changes.
* Code style fix.
* Small fixes.
* Added ctor of ExperimentalDetectronROIFeatureExtractor with NodeVector as argument.
* Added setting the attribute in_ports_count when the MO operation ExperimentalDetectronROIFeatureExtractor is created in the MO transformation ONNXMaskRCNNTransformation.
* Added shape infer for the second output of the nGraph operation ExperimentalDetectronROIFeatureExtractor.
* Written shape infer tests for nGraph operation ExperimentalDetectronROIFeatureExtractor (case when input shapes are partially dynamic).
* Small fixes.
* Code style fix.
* Deleted redundant &expected_channels.
* Code style fix.
* Small refactoring.
* Small fixes.
* Small changes.
* Small fixes.
* Deleted attribute distribute_rois_between_levels of ExperimentalDetectronROIFeatureExtractor.
* Deleted attribute preserve_rois_order of ExperimentalDetectronROIFeatureExtractor.
* Deleted attribute image_id of ExperimentalDetectronROIFeatureExtractor.
* Now MO generates the attribute 'aligned' of ExperimentalDetectronROIFeatureExtractor only with values 'true' or 'false'.
* Small fix.
* Fix in the conversion of the attribute 'aligned' of the MO operation ExperimentalDetectronROIFeatureExtractor to string.
* Tabs were replaced by spaces in some XMLs.
* Tabs were replaced by spaces in ngraph_reshape_tests.cpp.
* Fixed copyrights.
* Applied small patch to IREngine from PR https://github.com/openvinotoolkit/openvino/pull/3814.
* Tabs were replaced by spaces in cnn_ngraph_impl_tests.cpp.
* op::v6::ExperimentalDetectronROIFeatureExtractor::validate_and_infer_types() was rewritten using operator & for channels.
* Added tests for input shapes of op::v6::ExperimentalDetectronROIFeatureExtractor in the case when input shapes consist of intervals.
* Fixes in test type_prop.detectron_roi_feature_extractor_intervals.
* [IE CLDNN] Convolution b_fs_zyx_fsv16_imad tuning improvements (#3011)
* Add ReluFakeQuantize transformation (#3811)
* Add ReluFakeQuantize transformation
* address review comments
* replace constant with any_input
* use MATCHER_SCOPE macro
* Extend and fix input/output precisions support in functional tests (#3933)
* Use the actual blob type to get the nGraph element type when generating inputs and calculating reference results. This allows running tests with the `undefined` preset and using the real type returned from the device to generate and process input. It also fixes the case of several inputs with different types.
* Extend the `convertOutputPrecision` function to fully support conversions from/to fp16/bf16 types. The device might return blobs in those formats, so they need to be supported by the testing framework.
* [LPT] StridedSlice Transformation (#3817)
* [nGraph] evaluate_strided_slice: replace read_vec with host_tensor_2_vec
* [LPT] StridedSliceTransformation
* Develop ConvertLike Reference Implementation (#3857)
* ConvertLike: Develop reference implementation
* ConvertLike: Enable single layer tests for GPU plugin
* ConvertLike: Enable bf16 precision for evaluate method
* ConvertLike: Add unit tests
* ConvertLike: Add dynamic shape test case
* ConvertLike: Remove unnecessary ngraph namespace and using declaration for v1::ConvertLike
* ConvertLike: Simplified reference::convert by using std::enable_if
* [IE CLDNN] Kernel Selector improvements (#2998)
* [IE TESTS] Fix pass rate in report (#3904)
* [IE TESTS] Fix pass rate in report
* add skip
* update manifest (#3939)
* [IE CLDNN] Convolution fsv16 improvements: several fixes after code review (#3637)
* Added SubGraphOp support in compare_function (#3943)
* Add PadFusion transformation (#3785)
* Add PadFusion transformation
  Ticket: 46482
* set pad explicit
* fuse ifs
* address review comments
* fix signed to unsigned comparison warning
* add MATCHER_SCOPE
* Remove deprecated classes/methods usage from Legacy/Tests/VPUPlugin (#3907)
* Fix legacy converter for Mul/Add/Sub ops
* Updated VPU plugin to use pass_config; updated tests to avoid legacy classes/methods
* Updated VPU pipeline
* rename time test config (#3923)
* Update OV telemetry message for 2021.3 (#3926)
* Update OV telemetry message for 2021.3
* Fixed misprint
* [BUG] Add extra op clone to have default inputs/outputs set. (#3855)
* Add extra op clone to have default inputs/outputs set.
* Call `validate_and_infer_types` after clone for TensorIterator and Loop
* Add gtest to check if default values work with dynamic shapes
* Apply suggestions from PR.
Co-authored-by: Patryk Elszkowski
* Don't add a new Result operation if the output port is already connected to a Result (#3934)
* Fixed issue with running a stateful network with several infer requests on MKLDNNPlugin (#3711)
* Refactored VPU tests not to use old interfaces (#3888)
* Refactored VPU tests not to use old interfaces
* Added handling of exceptions
* Commented out failing part of HDDL tests
* [MO] Implement TensorFlow 2 While and Keras RNN support in MO (#3573)
* [MO] Implement TensorFlow 2 While support in MO
  Signed-off-by: Roman Kazantsev
* Add extractors for both While and StatelessWhile and do minor changes
  Signed-off-by: Roman Kazantsev
* Improve update_body_graph function and manage graph names properly
  Signed-off-by: Roman Kazantsev
* Fix a map for original names of parameters from body and cond
  Signed-off-by: Roman Kazantsev
* Implement draft version of support of TF2 Keras RNN
  Signed-off-by: Roman Kazantsev
* Implement Keras LSTM and GRU support in MO
  Signed-off-by: Roman Kazantsev
* Improve code for Keras RNN support
  Signed-off-by: Roman Kazantsev
* Finalize implementation of TF2 Keras RNN support in MO
  Signed-off-by: Roman Kazantsev
* Apply the first part of the comments after review #1
  Signed-off-by: Roman Kazantsev
* Avoid use of explicit values of port indices in the transformation
  Signed-off-by: Roman Kazantsev
* Finalize code after the first-round review
  Signed-off-by: Roman Kazantsev
* Apply comments after the second-round review
  Signed-off-by: Roman Kazantsev
* Implement ExperimentalDetectronTopKROIs and ExperimentalDetectronGenerateProposalsSingleImage operations as nGraph operations (#3680)
* Commit.
* Written the header file for the nGraph operation ExperimentalDetectronTopKROIs.
* Written an implementation file of the nGraph operation ExperimentalDetectronTopKROIs.
* Small fix.
* Added the nGraph operation ExperimentalDetectronTopKROIs into the table of ops of opset6.
* Written the header file for the nGraph operation ExperimentalDetectronGenerateProposalsSingleImage.
* Code style fix.
* Written cpp-file of the nGraph operation ExperimentalDetectronGenerateProposalsSingleImage.
* Now the operation ExperimentalDetectronGenerateProposalsSingleImage is read as an nGraph operation.
* Code style fix.
* Fix in ngraph/ops.hpp
* Added NGRAPH_OP_SCOPE to the nGraph class ExperimentalDetectronGenerateProposalsSingleImage.
* Added NGRAPH_OP_SCOPE to the nGraph class ExperimentalDetectronTopKROIs.
* Code style fix.
* Small fix.
* Added NGraphReshapeTests of ExperimentalDetectronGenerateProposalsSingleImage when ExperimentalDetectronGenerateProposalsSingleImage is considered an opset6 operation.
* Changed copyright year to 2021.
* Deleted the method ExperimentalDetectronTopKROIs::set_max_rois.
* Deleted redundant virtual.
* Now ExperimentalDetectronTopKROIs::validate_and_infer_types() handles all cases when input 0 and input 1 have static/dynamic rank independently.
* Code style fix.
* Small fix.
* Started to write shape infer tests for the nGraph operation ExperimentalDetectronTopKROIs.
* Written shape infer tests for the nGraph operation ExperimentalDetectronTopKROIs.
* Code style fix.
* Added checks of input shapes into ExperimentalDetectronGenerateProposalsSingleImage::validate_and_infer_types(). Started to write tests for ExperimentalDetectronGenerateProposalsSingleImage::validate_and_infer_types().
* Small fix.
* Fixes in ExperimentalDetectronGenerateProposalsSingleImage::validate_and_infer_types(). Written draft tests for ExperimentalDetectronGenerateProposalsSingleImage::validate_and_infer_types().
* Code style fix.
* Fixes in reshape tests for ExperimentalDetectronGenerateProposalsSingleImage.
* Added Doxygen documentation to the nGraph operation class ExperimentalDetectronGenerateProposalsSingleImage.
* Deleted methods validate_scores_shape and validate_deltas_shape of op::v6::ExperimentalDetectronGenerateProposalsSingleImage.
* Deleted methods validate_input_rois_shape and validate_rois_probs_shape of op::v6::ExperimentalDetectronTopKROIs.
* Added class description for nGraph operations ExperimentalDetectronTopKROIs and ExperimentalDetectronGenerateProposalsSingleImage.
* Revise abs (#3931)
* remove type_prop test file for abs operator
* add abs operator to unary_ops
* remove abs type_prop from CMakeList
* Fixed CVS-47220 (#3958)
* Add MVN-6 reference implementation and tests (#3896)
* Add MVN-6 reference implementation and tests
* Return old version reference
* Apply feedback
* Fix build
* Remove ops from Layer Creator/ Node Converter - part 7 (#3961)
* remove result op from layer creator
* remove squareddifference op from layer creator
* remove regionyolo op from layer creator
* fix indentation
* [BUG] Serialize loses runtime info (#3903)
* rt_info serialization for ngraph::Node
* add test for rt_info serialization
* try to fix CentOS build
Co-authored-by: Patryk Elszkowski
* Cleanup in ngraph_test_utils.hpp/cpp (#3959)
Co-authored-by: Patryk Elszkowski
* Added export / import for Template and Hetero plugins (#3940)
* Added export / import for Template and Hetero plugins
* Added WA for Apple RTTI
* MaxPool bug fix (#3718)
* style-apply
* Update spec
* Remove maxpool back_prop method
* style-apply
* Revise reduce mean (#3786)
* Update spec
* create type_prop tests
* add reduce_mean type_prop tests to CMakeList
* Update spec
* fix typo
* Add dynamic type_prop tests
* style fix
* Visitor API Loop deserialization (#3894)
* Add on_adapter() implementation for special body parts.
* Remove NodeConverter and LayerCreator for Loop. Add WA for different number of inputs during Loop init by constructor and visit_attributes().
* Format files.
* Implement use case external_port_id=-1 for output port_map, change API for map_type_in_function.
* Replace GetStrAttr() with GetInt64Attr().
* Correct WA for input_offset when using visitor API. It shall search all input descriptions for duplicated indexes.
* Apply proper file format.
* Throw exception when input_offset < 0.
* Add more detailed description for input_offset WA.
* Add missing types in convertIE2nGraphPrc() test util function. (#3957)
* Unused variables (#3963)
* Added -Wunused-variable flag
* Fixes for clang compiler
* Removed wrong -Wno-error from protobuf compilation
* More fixes
* delete unwanted doc script (#3960)
* ConcatTransformation naming fix (#3965)
* concat naming fix
* [LPT] concat with child and output plugin tests
* Python API For compare_functions (#3938)
* Added python API for compare_functions
* Fixed compare_function constant comparison, graph traversal
* Add tests for python API functions
* Move CompareNetworks to separate python module
* Update python API tests
* Added dev package support
* ENABLE_TESTS
* Update constant comparator
* Fix merge conflict
* Imironov/ref ngraph ctc greedy decoder (#3867)
* Add CTC greedy decoder seq len op to ngraph
* Remove some comments
* Add second constructor
* Fix code style
* Fix code style
* Add unit tests
* Add tests to cmake
* Fix according to review
* Fix code style
* fix
* Change input layout
* Fix code style
* Add unit tests
* Add 3 input tensor check
* Update shell impl
* Fix code style
* Fix code style
* Add doxygen
* Fix code style
* Update doxygen
* Update constructor description
* Fix code style
* Refactoring code
* fix code style
* Optimize op constructor
* Add macros. Optimize code for validate_and_infer_types
* Refactoring code
* Fix code style
* Fix code style
* Fix check of blank_index shape
* Fix code style
* Add ref impl
* Fix unit test for dynamic case
* Fix code style
* Fix copyright
* reverse changes
* Update copyright
* Add ref implementation
* rollback
* Fix code style
* Fix code style
* Fix
* Add unit tests
* Refactoring ref impl
* Refactoring code style
* Fix code style
* Fix code style
* fix unit tests
* Refactoring code
* Refactoring code
* Fix code style
* Refactoring unit tests
* Fix style
* Fix style
* Missing attr serialization (#3920)
* add override method for int since attribute keep_top_k from detection_output requires it
* remove if statement to prevent duplicate gtest test names for avg_pool
* add single layer serialization tests for AvgPool, PriorBoxClustered and DetectionOutput operators
* add appropriate styling of the code
* [IE CLDNN] Disabled v3 -> v1 conversion for Broadcast (#3991)
* [IE CLDNN] Disabled eltwise fusion to const node when second input is in data flow (#3993)
* Remove ops from Layer Creator/ Node Converter - part 8 (#3979)
* remove equal op from layer creator
* remove greaterequal op from layer creator
* remove lstmcell op from layer creator
* remove psroipooling op from layer creator
* add missing newline
* alignment
* Demo scripts improvements. (#3954)
* Demo script improvements.
  - Detect Visual Studio version installed into a non-default location
  - Fix change directory for the case when VS and OpenVINO reside on different disks
  - Align indents
* fix indents
* Pre-deprecation of ICNNNetwork (#3887)
* Deprecated ICNNNetwork
* MKLDNN plugin: partially
* MYRIAD plugin: partially
* Fixed Myriad Plugin
* Improved GNA; fixed MKLDNN
* Fixed tests
* Fixed GNA
* Fixed unit tests linkage
* Removed ICNNNetwork from tests
* Removed obsolete tests
* [IE TESTS] Remove random weights generation of blobs (#3998)
* Revise topk (#3819)
* Add visit_attribute and node validation check
* add type_prop test for default values
* style-apply
* Update node validation check for index_element_type
* Update type_prop test for default index_element_type
* Add index_element_type attribute to TopK_1 spec
* Fix restoring constant ops with old numbering (#3951)
* Add check to correct restoring of constant ops with old numbering
* Resolve comments
* Update comment
* [IE TESTS] Remove TEST_P(Range... from lib with shared_classes (#3999)
* [nGraph][ONNX] WA for currently unsupported precisions (#3964)
* [nGraph] Fix bound check for reference GatherElements (#3981)
* fix bound check for reference GatherElements
* apply review comments
* Remove layerCreator for logical ops. (#3970)
* Remove layerCreator for logical ops.
* Remove visit_attributes() for LogicalOr op.
* [LPT] Disabling LPT StridedSlice plugin tests that failed on GPU (#4001)
* [CMake] Fixes for TBB tmp location (#3997)
* fixes for TBB tmp location:
  - DL_SDK_TEMP path is not normalized, which leads to a path check mismatch in CMake conditions
  - TBB is not downloaded again in case the tmp location is cleaned up and the build restarted (TBB_DIR and TBBROOT are cache variables)
* use reset_deps_cache & update_deps_cache for TBB_DIR var.
* single reset_deps_cache call
* [IE Common][VPU]: Fix small input size inference for dynamic model (#3847)
* Eliminate an Unsqueeze+Gather pair when Gather gathers data along the single dimension previously added by Unsqueeze; the pair is effectively doing nothing.
* Calculate K only once in StaticShapeTopK. The problem happens when we have a ShapeOf->Concat->ReduceMin subgraph for K evaluation. With a sufficiently small input size, the value received from ShapeOf may be less than the one it is concatenated with (e.g. ShapeOf 283 vs const 300), so ReduceMin returns 283. After ShapeOf elimination we have no chance to propagate 283, so we get 300 as a result and shape inference then fails. There are no problems with bigger input sizes simply because ShapeOf always propagates a value >300 and no such mismatch occurs.
* Move ClampFusion before HSwishFusion and HSigmoidFusion (#3994)
  HSwishFusion and HSigmoidFusion use Clamp in their patterns, so this change allows for even more fusions (ordering sketched below).
* [CPU] GatherElements implementation. (#3860)
* Add Loop serialization, SLT and regular test. (#3980)
* Add Loop serialization, SLT and regular test.
* Remove loop test from SerializationTensorIteratorTest, add bin for test loop xml.
* Remove metadata section from loop xml file.
* Remove m_num_iterations initialization, it is done during validate_and_infer_types().
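On the ClampFusion reordering above: ngraph::pass::Manager executes passes in registration order, so registering ClampFusion first lets the Clamp nodes it creates feed the HSwish/HSigmoid patterns. A minimal sketch of that arrangement (include paths are assumed from this revision's layout and may differ elsewhere):

```cpp
#include <memory>

#include <ngraph/function.hpp>
#include <ngraph/pass/manager.hpp>
#include <transformations/common_optimizations/clamp_fusion.hpp>
#include <transformations/common_optimizations/hsigmoid_fusion.hpp>
#include <transformations/common_optimizations/hswish_fusion.hpp>

void run_fusions(const std::shared_ptr<ngraph::Function>& f) {
    ngraph::pass::Manager manager;
    manager.register_pass<ngraph::pass::ClampFusion>();    // Maximum->Minimum => Clamp
    manager.register_pass<ngraph::pass::HSwishFusion>();   // pattern matches on Clamp
    manager.register_pass<ngraph::pass::HSigmoidFusion>(); // pattern matches on Clamp
    manager.run_passes(f);
}
```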
* Bump tox from 3.20.1 to 3.21.2 in /ngraph/python (#4003)
* Update ONNX build dependency to v1.8 (#3716)
* [IE COMMON] Fixes for EliminateUnsqueezeGather transformation (#4013)
* Add a target to fetch and build Intel SEAPI (#3915)
* [GNA] Fix ParseBlobName and message for output layer names (#3527)
* Remove Fake Quantize OP decomposition (#3506)
* Remove Fake Quantize OP decomposition
* Fix FQ OP inheritance
* [CPU] Plugin migration to oneDNN (v1.6) (#3725)
* Add support for nonconstant optional NMS-5 inputs (#3640)
* [IE CLDNN] StridedSliceTransformation removed from GPU plugin (#4016)
* [dependencies.cmake] Add license to tbb archive (#4018)
* Generic LoadTime Optimizations (#4011)
* Updated container passes to return false to avoid excess function validation
* Added support for nested GraphRewrite registration
* Updated passes to use MatcherPass; reorganized CommonOptimizations pipeline
* Disable node validation when graph is not modified
* Enable Bertsquad-10 in ONNX CI (#3889)
* [Scripts] setupvars.bat: Added logic for passing '-pyver' option (#4026)
* setupvars.bat: Added logic for passing '-pyver' option as we already have in Linux* setupvars
* setupvars.bat: Fixed python_version name in echo
* [GNA] Fix import model with header version 2.1 (#4023)
  Added version import fix. Added template test. Added test for backward compatibility. Added test.
* [GNA] Convolution without --disable_nhwc_to_nchw option for TF models. (#3918)
* Fixed compilation with ENABLE_V7_SERIALIZE (#4037)
* [LPT] Handle empty dequantization in MultiplyToGroupConvolution (#3818)
  Add const
* [LPT] Add NormalizeDequantization function in NetworkHelper (#3458)
* [LPT] Add NormalizeDequantization function in NetworkHelper.
* [LPT] Handling subtract constant index in makeDequantization
* [LPT] Extend Add and Multiply transformations with normalizeDequantization.
* [LPT] Add/Subtract: simplify normalizeDequantization call
* [LPT] normalizeDequantization: use replace_node instead of copy assignment
* [LPT] Update lpt paths
* [LPT] normalizeDequantization completion + refactoring
Co-authored-by: Aleksandr Pertovsky
* [CPU] PSROIPooling node enhancements (#3851)
  - bf16 support for PSROIPooling
  - nhwc, blocking formats support
  - code refactor & performance improvements
  - cpu specific tests
* Updated readme.md (#4052)
* Add conditional compilation tests (#3996)
* Added copy tools dir for all OS (#4007)
* Fixed compilation of ngraph python on some compilers (#4015)
* Fixed compilation of ngraph python on some compilers
* Fixed ONNX importer dependencies compilation for gcc 5.4.0 and 5.5.0
* Removed obsolete ie_profiling.hpp (#4043)
* Refactor install_openvino_dependencies script: extra options and cleanup (#3868)
* Refactor install_openvino_dependencies script: extra options and cleanup
* install_dependencies: added more python tools
* install_openvino_dependencies: extra OS checks to verify consistency for future edits
* install_openvino_dependencies: clarify messages
* Proposal test uses special run() method to check exception throwing (#4062)
* proposal test uses special run() method to check exception throwing
* validate() removed from run()
* Introduce the Broker API to map original framework names to OV (#3800)
* Added tests
* Fixed tests
* Added tests to check addOutput method
* Added support of port names in the IR
* Update copyrights
* Deprecate tensor name
* Fixed comments
* Enabled functional tests for GPU, GNA and Myriad
* Fixed get_tensor().get_names()
* Added unit test to check tensor names
* Fixed code style
* Skip add output test for GNA
* Added serialization support
* Added PythonAPI
* Fixed tests
* Fixed tests
* Fixed typo
* Try to disable GNA test
* Fixed tests
* Removed unused variables
* Fixed tests
* Update documentation
* Fixed comment
* -no-unused-XXX options added for selective build mode (#3702)
* [CPU] Disabled input zero point fusing into fp32 Convolution (#4056)
* Update operation attributes (#3814)
* Align attribute values in spec
* Fix wrong attribute name in spec
* Add `get_boolean_attr` function
* Add get_type function
* Update conv attrs
* Update copyright year
* Add missing attrs, update copyright year
* Fix year in copyright
* Update ir parser for RegionYolo layer
* Remove wrong changes for BinaryConvolution
* Remove get_type function as it is no longer needed
* Update check for reduce ops
* Fix error in reduce attrs
* Update ir_engine to work with bool attrs
* Update DetectionOutput operation
* Update PSROIPooling
* remove redundant attrs from spec
* Update get_boolean_attr function
* Update Reduce operations
* Update DetectionOutput specification
* Update specification for missed attrs
* Apply comments
* Fix const renumbering logic
* Fix typo
* Change default value to fix broken shape inference
* Add additional asserts
* Add comment
* model-optimizer/mo/utils/ir_reader/layer_to_class.py
* Sort imports
* Sort imports
* Update year in copyright
* Update const
* Remove changes from const restoring
* Rename function
* remove unnecessary changes
* model-optimizer/mo/front/extractor_test.py
* Fix year in copyright
* Add soft_get
* Fix exclude-pad attribute name for AvgPool operation
* Update exclude_pad attribute values
* Remove useless comment
* Update examples in specification
* Remove file added by mistake
* Resolve comments
* Resolve comments
* Add return value
* Align global_pool attribute
* [compare_function] compare ops attributes (#3966)
* [compare_function] compare ops attributes value by value
* Storage cleanup
* Add comparison for:
  - SubGraphOpInputDescription
  - SubGraphOpOutputDescription
  - SpecialBodyPorts
* cleanup
* Report error on unhandled types
* Change comparison of floating-point values to a general approach
Co-authored-by: Patryk Elszkowski
* Suppressing warning about unused variables for selective build of MKLDNN plugin. (#4039)
* [CPU] ROIPooling with 1x1 pooled shape in bilinear mode fixed (#4020)
* Added support for the MxNet op take (#4071)
* Design test config and integrate it into CC tests (#4051)
* Prevent targets installation for 3rd party libs (mkl-dnn) (#4096)
* [Python API] Support of FP16 blobs (#3893)
* [Python API] Support of FP16 blobs
* test_Blob refactoring
* support fp16 for exec_net.infer method
* add precisions
Co-authored-by: anastasia.kuporosova
* added log extractor for tf (#4090)
* Re-implement Caffe old-style extractors with extractor extensions (#3675)
* move crop extractor
* Add concat_ext.py
* Add roipooling_ext.py
* Add roipooling_ext
* Add scale extractor
* Add scale extractor
* Add bn_ext.py and dropout_ext.py
* Add bn_ext.py and dropout_ext.py
* Add bn_ext.py and dropout_ext.py
* Fix bn.ext.py
* Sort fix
* Fix bn_test.py
* rename to batchnorm_ext
* Add bn_ext
* Fix batchnorm_ext.py
* small fix
* Small fix
* Add MVN-6 op to ONNX importer (#4012)
* docs copy code button (#4017)
* copy code button
* copy code button updates
* [IE][VPU]: Fixes Extract Dynamic Batch (#3978)
  The LCA (Least Common Ancestor) search procedure must take a sub-graph without external connections to be able to count node depths. Previously, just two steps of removing external connections were executed (once for the top sub-graph and once for the bottom one). If the top sub-graph originally had no external connections but the bottom one did, then after removing external nodes from the bottom sub-graph some of the nodes from the top sub-graph could become external. To prevent that, removing external connections must loop until no external connections are found in either the top or the bottom sub-graph.
  Signed-off-by: Gladilov, Gleb
* [CPU] Interpolate node: 5d support for onnx_linear mode (#3471)
* Azure CI: Disable nGraph Mac tests IE_CPU/GRUSequenceOp.onnx_model_gru*
* [DOC] ShapeInference.md update. slyalin comments (#3355) (#4104)
* [DOC] ShapeInference.md update. slyalin comments
* Apply suggestions from code review
Co-authored-by: Alina Alborova
* Apply suggestions from code review
Co-authored-by: Alina Alborova
* Update docs/IE_DG/ShapeInference.md
Co-authored-by: Alina Alborova
Co-authored-by: Alina Alborova
* fix comments ngraph api - master (#3519)
* fix comments ngraph api
* remove whitespace
* fixes
Co-authored-by: Nikolay Tyukaev
* Update ONNX dependency to v1.8.1 (#4114)
* Update copyright

Co-authored-by: Krishna Prabhakaran
Co-authored-by: Vladislav Volkov
Co-authored-by: Vladimir Paramuzov
Co-authored-by: Gleb Kazantaev
Co-authored-by: Evgeny Lazarev
Co-authored-by: Alexander Zhogov
Co-authored-by: Alexey Varyzgin
Co-authored-by: Anastasiya Ageeva
Co-authored-by: Sergey Lyubimtsev
Co-authored-by: Bartosz Lesniewski
Co-authored-by: Andrey Somsikov
Co-authored-by: Tomasz Socha
Co-authored-by: Nikita Kudriavtsev
Co-authored-by: Anastasia Kuporosova
Co-authored-by: Ilya Lavrenov
Co-authored-by: Nikolay Tyukaev
Co-authored-by: Maksim Makridin
Co-authored-by: Mateusz Tabaka
Co-authored-by: Krzysztof Bruniecki
Co-authored-by: Gabriele Galiero Casay
Co-authored-by: Tomasz Dołbniak
Co-authored-by: Liubov Batanina
Co-authored-by: Gleb Kazantaev
Co-authored-by: Aleksandr Korolev
Co-authored-by: Anton Chetverikov
Co-authored-by: Maxim Vafin
Co-authored-by: Alexander Perepelkin
Co-authored-by: Vladimir Gavrilov
Co-authored-by: Bartosz Sledz
Co-authored-by: Mateusz Bencer
Co-authored-by: Ilya Churaev
Co-authored-by: Gorokhov Dmitriy
Co-authored-by: Maksim Kutakov
Co-authored-by: Vladislav Vinogradov
Co-authored-by: Sergey Shlyapnikov
Co-authored-by: Szymon Durawa
Co-authored-by: Tomasz Jankowski
Co-authored-by: Vitaliy Urusovskij
Co-authored-by: Yury Gaydaychuk
Co-authored-by: Andrey Sokolov
Co-authored-by: Andrew Bakalin
Co-authored-by: Ilya Znamenskiy
Co-authored-by: Vladislav Golubev
Co-authored-by: Irina Efode
Co-authored-by: Victor Kuznetsov <32412802+just-sparta@users.noreply.github.com>
Co-authored-by: Anastasia Kazantaeva
Co-authored-by: Patryk Elszkowski
Co-authored-by: Patryk Elszkowski
Co-authored-by: Svetlana Dolinina
Co-authored-by: Roman Kazantsev
Co-authored-by: Piotr Szmelczynski
Co-authored-by: Jozef Daniecki
Co-authored-by: iliya mironov
Co-authored-by: Bartek Szmelczynski
Co-authored-by: Jan Iwaszkiewicz
Co-authored-by: Pavel Esir
Co-authored-by: Edward Shogulin
Co-authored-by: Nikolay Shchegolev
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michał Karzyński <4430709+postrational@users.noreply.github.com>
Co-authored-by: Andrey Dmitriev
Co-authored-by: Mikhail Treskin
Co-authored-by: Alexey Suhov
Co-authored-by: Artyom Anokhov
Co-authored-by: Elizaveta Lobanova
Co-authored-by: Aleksandr Pertovsky
Co-authored-by: Anton Romanov
Co-authored-by: Maksim Shabunin
Co-authored-by: anastasia.kuporosova
Co-authored-by: Anna Likholat
Co-authored-by: Eugeny Volosenkov
Co-authored-by: Ewa Tusień
Co-authored-by: Gladilov, Gleb
Co-authored-by: Chenhu Wang
Co-authored-by: Evgenya Stepyreva
Co-authored-by: Alina Alborova
Co-authored-by: Nikolay Tyukaev
Co-authored-by: Yanglei Zou
---
 .ci/azure/linux.yml | 10 +- .ci/azure/mac.yml | 8 +- .ci/azure/windows.yml | 16 +- .../IEDevScriptsConfig.cmake | 8 +- .../compile_flags/os_flags.cmake | 1 + .../compile_flags/sanitizer.cmake | 8 + cmake/developer_package/message.cmake | 11 +- docs/CMakeLists.txt | 4 + docs/HOWTO/Custom_Layers_Guide.md | 477 +++-- docs/HOWTO/img/IE_extensions_flow.png | 3 - docs/HOWTO/img/MEG_generic_flow.png | 3 -
docs/HOWTO/img/MO_extensions_flow.png | 3 - docs/HOWTO/img/converted_subgraph.png | 3 + docs/HOWTO/img/mo_caffe_priorities.png | 3 - docs/HOWTO/img/unsupported_subgraph.png | 3 + docs/HOWTO/mo_extensions/front/tf/Complex.py | 57 + .../mo_extensions/front/tf/ComplexAbs.py | 40 + docs/HOWTO/mo_extensions/front/tf/FFT_ext.py | 47 + docs/HOWTO/mo_extensions/ops/FFT.py | 40 + docs/HOWTO/mri_reconstruction_demo.py | 119 ++ docs/IE_DG/Bfloat16Inference.md | 42 +- .../IE_DG/Extensibility_DG/AddingNGraphOps.md | 7 +- docs/IE_DG/Extensibility_DG/CPU_Kernel.md | 2 +- docs/IE_DG/Extensibility_DG/Extension.md | 5 +- docs/IE_DG/Extensibility_DG/GPU_Kernel.md | 31 +- docs/IE_DG/Extensibility_DG/Intro.md | 23 +- docs/IE_DG/Samples_Overview.md | 57 + docs/IE_DG/ShapeInference.md | 62 +- .../Deep_Learning_Model_Optimizer_DevGuide.md | 9 +- .../Supported_Frameworks_Layers.md | 1 + .../convert_model/Converting_Model.md | 4 +- .../convert_model/Cutting_Model.md | 1 - .../Customize_Model_Optimizer.md | 1328 +++++++++++- ...Net_Model_Optimizer_with_New_Primitives.md | 64 +- ...odel_Optimizer_with_Caffe_Python_Layers.md | 89 + ...ing_Model_Optimizer_with_New_Primitives.md | 477 +---- .../Legacy_Mode_for_Caffe_Custom_Layers.md | 34 +- .../Subgraph_Replacement_Model_Optimizer.md | 363 +--- ...sorFlow_Faster_RCNN_ObjectDetection_API.md | 449 ---- .../TensorFlow_SSD_ObjectDetection_API.md | 339 --- docs/doxygen/assets/bootstrap.bundle.min.js | 8 + docs/doxygen/assets/bootstrap.min.css | 7 + docs/doxygen/assets/customdoxygen.css | 52 +- docs/doxygen/assets/menu.js | 53 - docs/doxygen/assets/openvino-layout.js | 64 + docs/doxygen/build_main_layout.py | 16 + docs/doxygen/doxy_md_filter.py | 16 + docs/doxygen/footer.html.in | 20 +- docs/doxygen/header.html.in | 23 +- docs/doxygen/ie_c_api.config | 15 + docs/doxygen/ie_c_api.xml | 18 + docs/doxygen/ie_docs.xml | 38 +- docs/doxygen/ie_plugin_api.config | 15 + docs/doxygen/ie_plugin_api.xml | 18 + docs/doxygen/ie_py_api.config | 15 + docs/doxygen/ie_py_api.xml | 18 + docs/doxygen/log.py | 16 + docs/doxygen/ngraph_cpp_api.config | 15 + docs/doxygen/ngraph_cpp_api.xml | 18 + docs/doxygen/ngraph_py_api.config | 15 + docs/doxygen/ngraph_py_api.xml | 23 +- docs/doxygen/openvino_docs.xml | 18 + docs/doxygen/pyx_filter.py | 16 + docs/how_tos/how-to-links.md | 6 +- docs/img/MO_connection_example_1.png | 3 + docs/img/MO_conversion_pipeline.png | 3 + docs/img/MO_graph_after_extractors.png | 3 + docs/img/MO_graph_after_loader.png | 3 + .../img/MO_graph_before_partial_inference.png | 3 + docs/img/MO_ports_example_1.png | 3 + docs/img/MO_ports_example_2.png | 3 + docs/img/MO_transformations_graph.png | 3 + docs/nGraph_DG/nGraphTransformation.md | 7 + .../ops/detection/DeformablePSROIPooling_1.md | 2 +- docs/ops/detection/DetectionOutput_1.md | 2 +- docs/ops/detection/PriorBoxClustered_1.md | 2 +- docs/ops/detection/PriorBox_1.md | 2 +- docs/ops/detection/RegionYolo_1.md | 6 +- docs/ops/image/Interpolate_4.md | 2 +- docs/ops/pooling/AvgPool_1.md | 14 +- docs/ops/pooling/MaxPool_1.md | 2 +- docs/ops/reduction/ReduceMean_1.md | 5 +- docs/ops/sequence/CTCGreedyDecoder_1.md | 5 +- docs/ops/sort/TopK_1.md | 8 + docs/snippets/CMakeLists.txt | 10 +- docs/snippets/MULTI3.cpp | 2 +- docs/template_extension/CMakeLists.txt | 15 +- docs/template_extension/extension.cpp | 37 +- docs/template_extension/fft_kernel.cpp | 119 ++ docs/template_extension/fft_kernel.hpp | 32 + docs/template_extension/fft_op.cpp | 34 + docs/template_extension/fft_op.hpp | 28 + docs/template_plugin/CMakeLists.txt | 4 + 
.../src/template_async_infer_request.cpp | 9 +- .../src/template_executable_network.cpp | 81 +- docs/template_plugin/src/template_plugin.cpp | 19 +- inference-engine/cmake/dependencies.cmake | 74 +- ...renceEngineDeveloperPackageConfig.cmake.in | 7 + inference-engine/cmake/vpu_dependencies.cmake | 6 +- .../object_detection_sample_ssd/README.md | 4 +- .../object_detection_sample_ssd/main.c | 2 + .../object_detection_sample_ssd.h | 6 +- .../ie_bridges/c/src/ie_c_api.cpp | 4 +- .../ie_bridges/c/tests/test_model_repo.hpp | 2 +- .../ie_bridges/python/CMakeLists.txt | 5 + .../classification_sample_async.py | 30 +- .../hello_classification.py | 16 +- .../hello_query_device/hello_query_device.py | 12 +- .../ngraph_function_creation_sample.py | 23 +- .../object_detection_sample_ssd.py | 17 +- .../style_transfer_sample.py | 8 +- .../openvino/inference_engine/CMakeLists.txt | 10 +- .../openvino/inference_engine/constants.pyx | 5 +- .../src/openvino/inference_engine/ie_api.pyx | 84 +- .../openvino/inference_engine/ie_api_impl.cpp | 8 + .../openvino/inference_engine/ie_api_impl.hpp | 3 + .../inference_engine/ie_api_impl_defs.pxd | 2 + .../offline_transformations/CMakeLists.txt | 56 + .../offline_transformations/__init__.py | 2 + .../offline_transformations_api.pyx | 25 + .../offline_transformations_api_impl.cpp | 30 + .../offline_transformations_api_impl.hpp | 16 + .../offline_transformations_api_impl_defs.pxd | 7 + .../src/openvino/test_utils/CMakeLists.txt | 45 + .../src/openvino/test_utils/__init__.py | 2 + .../openvino/test_utils/test_utils_api.pyx | 26 + .../test_utils/test_utils_api_impl.cpp | 14 + .../test_utils/test_utils_api_impl.hpp | 14 + .../test_utils/test_utils_api_impl_defs.pxd | 8 + .../ie_bridges/python/tests/test_Blob.py | 69 +- .../python/tests/test_ExecutableNetwork.py | 12 +- .../ie_bridges/python/tests/test_IECore.py | 21 +- .../ie_bridges/python/tests/test_IENetwork.py | 79 +- .../python/tests/test_InferRequest.py | 8 +- .../ie_bridges/python/tests/test_NGraph.py | 9 +- .../python/tests/test_offline_api.py | 40 + .../ie_bridges/python/tests/test_utils.py | 26 + inference-engine/include/cpp/ie_cnn_network.h | 138 +- .../include/cpp/ie_executable_network.hpp | 6 +- .../include/cpp/ie_infer_request.hpp | 2 + .../include/cpp/ie_memory_state.hpp | 50 +- inference-engine/include/gna/gna_config.hpp | 8 + inference-engine/include/ie_icnn_network.hpp | 54 +- .../include/ie_iexecutable_network.hpp | 8 + .../include/ie_iinfer_request.hpp | 23 +- inference-engine/include/ie_imemory_state.hpp | 7 + inference-engine/include/ie_precision.hpp | 2 +- .../include/vpu/vpu_plugin_config.hpp | 8 +- inference-engine/samples/CMakeLists.txt | 6 + .../samples/hello_classification/main.cpp | 6 +- .../samples/speech_sample/main.cpp | 2 +- .../samples/speech_sample/speech_sample.hpp | 2 +- inference-engine/src/CMakeLists.txt | 6 + .../src/cldnn_engine/CMakeLists.txt | 7 +- .../src/cldnn_engine/cldnn_common_utils.h | 26 +- .../src/cldnn_engine/cldnn_config.cpp | 1 + .../src/cldnn_engine/cldnn_engine.cpp | 11 +- .../src/cldnn_engine/cldnn_graph.cpp | 2 - .../src/cldnn_engine/cldnn_program.cpp | 2 +- .../src/cldnn_engine/cldnn_program.h | 2 +- .../src/cldnn_engine/ops/broadcast.cpp | 10 +- .../src/cldnn_engine/ops/constant.cpp | 104 +- .../src/cldnn_engine/ops/convolution.cpp | 115 +- .../src/cldnn_engine/ops/matmul.cpp | 1 - .../cldnn_engine/ops/non_max_suppression.cpp | 5 - .../src/cldnn_engine/ops/result.cpp | 2 + .../src/cldnn_engine/ops/roi_pooling.cpp | 27 +- .../src/cldnn_engine/ops/split.cpp | 2 + 
.../src/cldnn_engine/ops/strided_slice.cpp | 18 +-
.../src/cldnn_engine/ops/transpose.cpp | 36 +-
.../src/gna_plugin/CMakeLists.txt | 4 +
.../src/gna_plugin/backend/am_intel_dnn.cpp | 116 +-
.../src/gna_plugin/backend/am_intel_dnn.hpp | 43 +
.../src/gna_plugin/backend/dnn.hpp | 12 -
.../src/gna_plugin/backend/dnn_types.cpp | 57 +
.../src/gna_plugin/backend/dnn_types.h | 128 +-
.../gna_plugin/frontend/model_quantizer.hpp | 27 +-
.../gna_plugin/frontend/scale_factor_calc.hpp | 1 -
.../src/gna_plugin/gna2_model_helper.cpp | 36 +-
.../src/gna_plugin/gna2_model_helper.hpp | 3 +
.../src/gna_plugin/gna_device.cpp | 37 +-
.../src/gna_plugin/gna_device.hpp | 21 +-
.../src/gna_plugin/gna_executable_network.hpp | 2 +-
.../src/gna_plugin/gna_graph_compiler.cpp | 322 ++-
.../src/gna_plugin/gna_graph_compiler.hpp | 13 +-
.../src/gna_plugin/gna_model_serial.cpp | 11 +-
.../src/gna_plugin/gna_plugin.cpp | 55 +-
.../src/gna_plugin/gna_plugin.hpp | 4 +-
.../src/gna_plugin/gna_plugin_log.hpp | 1 +
.../src/gna_plugin/gna_plugin_query_api.cpp | 1 +
.../src/gna_plugin/layers/gna_layer_type.cpp | 5 +-
.../src/gna_plugin/layers/gna_layer_type.hpp | 2 +-
.../gna_plugin/optimizer/gna_pass_manager.cpp | 151 +-
.../gna_plugin/optimizer/gna_pass_manager.hpp | 8 +-
.../runtime/gna_float_runtime_op.cpp | 1 -
.../src/gna_plugin/runtime/pwl.cpp | 1 -
.../serial/headers/2dot4/gna_model_header.hpp | 11 +
.../hetero_executable_network.cpp | 135 +-
.../hetero_plugin/hetero_infer_request.cpp | 1 -
.../src/hetero_plugin/hetero_plugin.cpp | 16 +-
.../src/inference_engine/CMakeLists.txt | 4 +-
.../src/inference_engine/blob_transform.cpp | 44 +-
.../cnn_network_ngraph_impl.cpp | 88 +-
.../cnn_network_ngraph_impl.hpp | 13 +-
.../inference_engine/cpp/ie_cnn_network.cpp | 128 ++
.../cpp/ie_variable_state.cpp | 45 +
.../src/inference_engine/generic_ie.cpp | 15 +-
.../src/inference_engine/ie_core.cpp | 7 +-
.../src/inference_engine/ie_layouts.cpp | 4 +-
.../os/lin/lin_system_conf.cpp | 2 +-
.../shape_infer/ie_built_in_holder.cpp | 12 +-
.../ie_detectionoutput_onnx_shape_infer.hpp | 46 -
...ie_priorgridgenerator_onnx_shape_infer.hpp | 49 -
.../ie_proposal_onnx_shape_infer.hpp | 33 -
...e_roifeatureextractor_onnx_shape_infer.hpp | 44 -
.../ie_topkrois_onnx_shape_infer.hpp | 33 -
.../src/legacy_api/CMakeLists.txt | 4 +-
.../include/legacy/cnn_network_impl.hpp | 15 +-
.../convert_function_to_cnn_network.hpp | 8 +-
.../details/ie_cnn_network_iterator.hpp | 2 +-
.../legacy/details/ie_cnn_network_tools.h | 4 +-
.../legacy_api/include/legacy/graph_tools.hpp | 39 +-
.../include/legacy/ie_util_internal.hpp | 12 +-
.../src/legacy_api/include/legacy/net_pass.h | 12 +-
.../legacy/ngraph_ops/gru_sequence_ie.hpp | 2 +-
.../legacy/ngraph_ops/lstm_sequence_ie.hpp | 2 +-
.../legacy/ngraph_ops/rnn_sequence_ie.hpp | 2 +-
.../convert_matmul_to_fc_or_gemm.hpp | 18 +-
...convert_mul_add_to_scaleshift_or_power.hpp | 9 +-
.../convert_mul_or_add_finally.hpp | 277 ---
.../convert_prior_to_ie_prior.hpp | 27 +-
.../reshape_1d_ops.hpp | 20 +-
.../reshape_fc_fusion.hpp | 16 +-
.../src/legacy_api/src/cnn_network_impl.cpp | 22 +-
.../src/convert_function_to_cnn_network.cpp | 123 +-
.../src/legacy_api/src/graph_tools.cpp | 2 +-
.../src/legacy_api/src/graph_transformer.cpp | 33 +-
.../src/ie_cnn_layer_builder_ngraph.cpp | 289 ---
.../legacy_api/src/ie_layer_validators.cpp | 70 +-
.../src/legacy_api/src/ie_layers_internal.cpp | 6 +-
.../src/legacy_api/src/ie_util_internal.cpp | 27 +-
.../src/legacy_api/src/net_pass.cpp | 64 +-
.../legacy_api/src/network_serializer_v7.cpp | 64 +-
.../legacy_api/src/network_serializer_v7.hpp | 2 +-
.../src/legacy_api/src/ngraph_ops/crop_ie.cpp | 2 +-
.../src/ngraph_ops/gru_sequence_ie.cpp | 2 +-
.../src/legacy_api/src/ngraph_ops/interp.cpp | 3 +-
.../src/ngraph_ops/lstm_sequence_ie.cpp | 2 +-
.../src/ngraph_ops/normalize_ie.cpp | 2 +-
.../src/ngraph_ops/prior_box_clustered_ie.cpp | 7 +-
.../legacy_api/src/ngraph_ops/proposal_ie.cpp | 2 +-
.../src/ngraph_ops/rnn_sequence_ie.cpp | 2 +-
.../src/legacy_api/src/ngraph_ops/tile_ie.cpp | 2 +-
...vert_interpolate_to_interp_or_resample.cpp | 4 +-
...convert_mul_add_to_scaleshift_or_power.cpp | 12 +-
.../convert_mul_or_add_finally.cpp | 292 ++-
.../convert_nms_5_to_legacy.cpp | 2 +-
.../convert_normalizel2_to_normalize_ie.cpp | 2 +-
.../convert_opset1_to_legacy.cpp | 20 +-
.../convert_prior_to_ie_prior.cpp | 20 +-
.../convert_tile_to_ie_tile.cpp | 2 +-
.../reshape_fc_fusion.cpp | 6 +-
.../reshape_fully_connected.cpp | 2 +-
.../include/low_precision/network_helper.hpp | 2 +
.../include/low_precision/strided_slice.hpp | 26 +
.../low_precision_transformations/src/add.cpp | 3 +
.../src/concat.cpp | 10 +-
.../src/concat_multi_channels.cpp | 4 +-
.../src/convert.cpp | 1 -
.../src/interpolate.cpp | 4 +-
.../src/layer_transformation.cpp | 9 +-
.../src/multiply.cpp | 3 +
.../src/multiply_to_group_convolution.cpp | 11 +-
.../low_precision_transformations/src/mvn.cpp | 3 -
.../src/network_helper.cpp | 34 +-
.../src/reshape.cpp | 8 +-
.../src/split.cpp | 1 -
.../src/strided_slice.cpp | 105 +
.../src/transformer.cpp | 4 +
.../src/weightable_layer_transformation.cpp | 6 +-
.../src/mkldnn_plugin/CMakeLists.txt | 49 +-
.../src/mkldnn_plugin/bf16transformer.cpp | 6 -
.../src/mkldnn_plugin/bf16transformer.h | 6 +-
inference-engine/src/mkldnn_plugin/config.cpp | 5 +-
.../src/mkldnn_plugin/mkldnn/cpu_engine.h | 68 -
.../src/mkldnn_plugin/mkldnn/cpu_prim_layer.h | 50 -
.../mkldnn_plugin/mkldnn/cpu_prim_tensor.h | 34 -
.../mkldnn_plugin/mkldnn/desc_iterator.hpp | 166 --
.../src/mkldnn_plugin/mkldnn/ie_mkldnn.cpp | 152 ++
.../src/mkldnn_plugin/mkldnn/ie_mkldnn.h | 21 +
.../mkldnn_plugin/mkldnn/iml_type_mapper.cpp | 15 +-
.../src/mkldnn_plugin/mkldnn_descriptor.cpp | 93 +-
.../src/mkldnn_plugin/mkldnn_descriptor.h | 31 +-
.../src/mkldnn_plugin/mkldnn_dims.h | 3 +-
.../src/mkldnn_plugin/mkldnn_edge.cpp | 32 +-
.../src/mkldnn_plugin/mkldnn_edge.h | 11 +-
.../src/mkldnn_plugin/mkldnn_exec_network.cpp | 121 +-
.../src/mkldnn_plugin/mkldnn_exec_network.h | 6 +-
.../mkldnn_plugin/mkldnn_extension_utils.cpp | 126 +-
.../mkldnn_plugin/mkldnn_extension_utils.h | 58 +-
.../src/mkldnn_plugin/mkldnn_graph.cpp | 31 +-
.../src/mkldnn_plugin/mkldnn_graph.h | 4 +-
.../src/mkldnn_plugin/mkldnn_graph_dumper.cpp | 24 +-
.../mkldnn_plugin/mkldnn_graph_optimizer.cpp | 350 ++-
.../mkldnn_plugin/mkldnn_graph_optimizer.h | 6 -
.../mkldnn_plugin/mkldnn_infer_request.cpp | 56 +-
.../src/mkldnn_plugin/mkldnn_infer_request.h | 4 +-
.../src/mkldnn_plugin/mkldnn_memory.cpp | 1873 ++++-----------
.../src/mkldnn_plugin/mkldnn_memory.h | 117 +-
.../mkldnn_plugin/mkldnn_memory_solver.hpp | 4 +-
.../src/mkldnn_plugin/mkldnn_memory_state.cpp | 17 +-
.../src/mkldnn_plugin/mkldnn_memory_state.h | 12 +-
.../src/mkldnn_plugin/mkldnn_node.cpp | 163 +-
.../src/mkldnn_plugin/mkldnn_node.h | 60 +-
.../src/mkldnn_plugin/mkldnn_plugin.cpp | 36 +-
.../src/mkldnn_plugin/mkldnn_primitive.cpp | 65 +-
.../src/mkldnn_plugin/mkldnn_primitive.h | 7 +-
.../mkldnn_plugin/nodes/common/cpu_memcpy.h | 1 -
.../mkldnn_plugin/nodes/common/emitter.cpp | 30 +-
.../src/mkldnn_plugin/nodes/common/emitter.h | 21 +- .../mkldnn_plugin/nodes/common/softmax.cpp | 67 +- .../mkldnn_plugin/nodes/gather_elements.cpp | 149 ++ .../nodes/jit_eltwise_emitters.cpp | 455 ++-- .../nodes/jit_eltwise_emitters.hpp | 94 +- .../nodes/jit_mkldnn_emitters.cpp | 33 +- .../nodes/jit_mkldnn_emitters.hpp | 13 +- .../src/mkldnn_plugin/nodes/list_tbl.hpp | 3 +- .../nodes/mkldnn_batchnorm_node.cpp | 130 +- .../nodes/mkldnn_batchnorm_node.h | 9 +- .../nodes/mkldnn_bin_conv_node.cpp | 1556 ++++++++++---- .../nodes/mkldnn_bin_conv_node.h | 96 +- .../nodes/mkldnn_concat_node.cpp | 126 +- .../mkldnn_plugin/nodes/mkldnn_concat_node.h | 4 +- .../mkldnn_plugin/nodes/mkldnn_conv_node.cpp | 262 +-- .../mkldnn_plugin/nodes/mkldnn_conv_node.h | 5 +- .../mkldnn_plugin/nodes/mkldnn_crop_node.cpp | 36 +- .../nodes/mkldnn_deconv_node.cpp | 122 +- .../mkldnn_plugin/nodes/mkldnn_deconv_node.h | 5 +- .../nodes/mkldnn_def_conv_node.cpp | 1196 +++++++++-- .../nodes/mkldnn_def_conv_node.h | 84 +- .../nodes/mkldnn_eltwise_node.cpp | 276 ++- .../mkldnn_plugin/nodes/mkldnn_eltwise_node.h | 9 +- .../nodes/mkldnn_fullyconnected_node.cpp | 307 ++- .../nodes/mkldnn_fullyconnected_node.h | 11 +- .../mkldnn_plugin/nodes/mkldnn_gemm_node.cpp | 22 +- .../mkldnn_plugin/nodes/mkldnn_gemm_node.h | 4 +- .../nodes/mkldnn_generic_node.cpp | 7 +- .../mkldnn_plugin/nodes/mkldnn_input_node.cpp | 32 +- .../nodes/mkldnn_interpolate_node.cpp | 1187 +++++++---- .../nodes/mkldnn_interpolate_node.h | 19 +- .../mkldnn_plugin/nodes/mkldnn_lrn_node.cpp | 11 +- .../nodes/mkldnn_memory_node.cpp | 12 +- .../mkldnn_plugin/nodes/mkldnn_mvn_node.cpp | 173 +- .../src/mkldnn_plugin/nodes/mkldnn_mvn_node.h | 4 + .../nodes/mkldnn_normalize_node.cpp | 256 +-- .../nodes/mkldnn_normalize_node.h | 14 +- .../mkldnn_plugin/nodes/mkldnn_pad_node.cpp | 38 +- .../src/mkldnn_plugin/nodes/mkldnn_pad_node.h | 1 - .../nodes/mkldnn_permute_node.cpp | 238 +-- .../mkldnn_plugin/nodes/mkldnn_permute_node.h | 2 + .../nodes/mkldnn_pooling_node.cpp | 94 +- .../mkldnn_plugin/nodes/mkldnn_pooling_node.h | 1 + .../nodes/mkldnn_quantize_node.cpp | 1394 ++++++++++-- .../nodes/mkldnn_quantize_node.h | 71 +- .../nodes/mkldnn_reduce_node.cpp | 275 +-- .../mkldnn_plugin/nodes/mkldnn_reduce_node.h | 4 + .../nodes/mkldnn_reorder_node.cpp | 88 +- .../nodes/mkldnn_reshape_node.cpp | 9 +- .../src/mkldnn_plugin/nodes/mkldnn_rnn.cpp | 364 ++-- .../src/mkldnn_plugin/nodes/mkldnn_rnn.h | 17 +- .../nodes/mkldnn_roi_align_node.cpp | 56 +- .../nodes/mkldnn_roi_align_node.h | 1 - .../nodes/mkldnn_roi_pooling_node.cpp | 567 ++++- .../nodes/mkldnn_roi_pooling_node.h | 61 +- .../nodes/mkldnn_scatter_update_node.cpp | 21 +- .../nodes/mkldnn_softmax_node.cpp | 20 +- .../mkldnn_plugin/nodes/mkldnn_split_node.cpp | 16 +- .../nodes/mkldnn_tensoriterator_node.cpp | 63 +- .../nodes/mkldnn_tensoriterator_node.h | 7 +- .../mkldnn_plugin/nodes/mkldnn_tile_node.cpp | 15 +- .../src/mkldnn_plugin/nodes/psroi.cpp | 670 ++++-- .../src/mkldnn_plugin/nodes/region_yolo.cpp | 63 +- .../src/mkldnn_plugin/utils/bfloat16.hpp | 5 +- .../src/mkldnn_plugin/utils/blob_dump.cpp | 3 +- .../src/mkldnn_plugin/utils/general_utils.h | 43 + .../multi_device_infer_request.hpp | 4 +- .../src/multi_device/multi_device_plugin.cpp | 2 +- .../offline_transformations/CMakeLists.txt | 36 + .../include/moc_transformations.hpp | 33 + .../src/moc_transformations.cpp | 14 + .../base/ie_executable_network_base.hpp | 10 +- .../base/ie_infer_async_request_base.hpp | 3 +- .../base/ie_variable_state_base.hpp | 4 + 
.../cpp_interfaces/exception2status.hpp | 6 + .../impl/ie_executable_network_internal.hpp | 8 +- .../impl/ie_plugin_internal.hpp | 22 +- .../interface/ie_internal_plugin_config.hpp | 6 - .../interface/ie_iplugin_internal.hpp | 2 +- .../src/plugin_api/ie_profiling.hpp | 242 --- .../src/plugin_api/precision_utils.h | 49 +- .../src/preprocessing/CMakeLists.txt | 4 +- .../ie_preprocess_gapi_kernels_neon.hpp | 2 +- .../ie_preprocess_gapi_kernels_avx2.cpp | 5 + .../ie_preprocess_gapi_kernels_avx512.cpp | 27 +- .../ie_preprocess_gapi_kernels_sse42.cpp | 7 +- .../src/preprocessing/ie_preprocess_data.cpp | 1 - .../src/preprocessing/ie_preprocess_data.hpp | 1 - .../src/preprocessing/ie_preprocess_gapi.cpp | 9 +- .../src/preprocessing/ie_preprocess_gapi.hpp | 1 - .../ie_preprocess_gapi_kernels.cpp | 143 +- .../ie_preprocess_gapi_kernels_impl.hpp | 6 +- .../ie_preprocess_gapi_kernels_simd_impl.hpp | 12 +- .../src/readers/ir_reader/ie_ir_parser.cpp | 541 ++--- .../src/readers/ir_reader/ie_ir_parser.hpp | 17 +- .../readers/ir_reader_v7/ie_format_parser.cpp | 4 +- .../src/readers/ir_reader_v7/ie_ir_parser.cpp | 11 - .../ir_reader_v7/ie_layer_validators.cpp | 152 +- .../src/readers/reader_api/ie_blob_stream.cpp | 13 +- .../src/readers/reader_api/ie_blob_stream.hpp | 9 + .../src/transformations/CMakeLists.txt | 2 +- .../include/ngraph_ops/deconvolution_ie.hpp | 2 +- .../algebraic_simplification.hpp | 4 +- .../common_optimizations/clamp_fusion.hpp | 35 + .../common_optimizations/conv_bias_fusion.hpp | 20 +- .../depth_to_space_fusion.hpp | 9 +- .../eliminate_unsqueeze_gather.hpp | 31 + .../common_optimizations/hsigmoid_fusion.hpp | 31 +- .../common_optimizations/hswish_fusion.hpp | 30 +- .../lin_op_sequence_fusion.hpp | 20 +- .../common_optimizations/nop_elimination.hpp | 4 +- .../normalize_l2_fusion.hpp | 26 +- .../common_optimizations/pad_fusion.hpp | 117 + .../relu_fake_quantize_fusion.hpp | 31 + .../remove_filtering_boxes_by_size.hpp | 9 +- .../common_optimizations/swish_fusion.hpp | 30 +- .../bidirectional_sequences_decomposition.hpp | 18 + .../op_conversions/convert_convolutions.hpp | 24 +- .../op_conversions/convert_gather_0d.hpp | 31 + .../op_conversions/convert_mvn1_to_mvn6.hpp | 27 + .../convert_reduce_to_pooling.hpp | 40 +- .../convert_scatter_elements_to_scatter.hpp | 9 +- .../op_conversions/convert_shapeof3.hpp | 9 +- .../convert_shuffle_channels3.hpp | 9 +- .../op_conversions/convert_topk3.hpp | 9 +- .../op_conversions/mvn6_decomposition.hpp | 27 + .../include/transformations/serialize.hpp | 9 +- .../src/transformations/src/itt.hpp | 71 + .../src/ngraph_ops/convolution_ie.cpp | 6 +- .../src/ngraph_ops/deconvolution_ie.cpp | 17 +- .../src/ngraph_ops/nms_ie_internal.cpp | 4 + .../algebraic_simplification.cpp | 58 +- .../broadcast_elementwise_fusion.cpp | 4 +- .../common_optimizations/clamp_fusion.cpp | 60 + .../common_optimizations.cpp | 74 +- .../common_optimizations/conv_bias_fusion.cpp | 174 +- .../common_optimizations/conv_mul_fusion.cpp | 13 +- .../convert_quantize_dequantize.cpp | 4 +- .../depth_to_space_fusion.cpp | 12 +- .../eliminate_unsqueeze_gather.cpp | 60 + .../common_optimizations/fq_mul_fusion.cpp | 113 +- .../fq_reshape_fusion.cpp | 4 +- .../common_optimizations/hsigmoid_fusion.cpp | 13 +- .../common_optimizations/hswish_fusion.cpp | 13 +- .../lin_op_sequence_fusion.cpp | 10 +- .../common_optimizations/mish_fusion.cpp | 6 +- .../common_optimizations/nop_elimination.cpp | 74 +- .../normalize_l2_fusion.cpp | 13 +- .../optimize_strided_slice.cpp | 9 +- 
.../common_optimizations/pad_fusion.cpp | 387 ++++ .../pull_transpose_through_fq.cpp | 4 +- .../relu_fake_quantize_fusion.cpp | 60 + .../remove_filtering_boxes_by_size.cpp | 10 +- .../common_optimizations/softplus_fusion.cpp | 4 +- .../softplus_to_mish_fusion.cpp | 4 +- .../common_optimizations/swish_fusion.cpp | 13 +- .../control_flow/unroll_tensor_iterator.cpp | 12 +- .../src/transformations/convert_precision.cpp | 6 +- .../src/transformations/init_node_info.cpp | 2 + .../src/transformations/itt.hpp | 35 - .../batch_norm_decomposition.cpp | 7 +- .../bidirectional_sequences_decomposition.cpp | 11 +- .../op_conversions/convert_batch_to_space.cpp | 11 +- .../op_conversions/convert_broadcast3.cpp | 10 +- .../convert_broadcast_to_tiles.cpp | 4 +- .../op_conversions/convert_convolutions.cpp | 13 +- .../op_conversions/convert_depth_to_space.cpp | 6 +- .../op_conversions/convert_divide.cpp | 6 +- .../op_conversions/convert_gather_0d.cpp | 55 + .../op_conversions/convert_gelu.cpp | 6 +- .../convert_interpolate1_to_interpolate4.cpp | 6 +- .../convert_minimum_to_power_and_max.cpp | 6 +- .../op_conversions/convert_mod.cpp | 4 +- .../op_conversions/convert_mvn1_to_mvn6.cpp | 55 + .../op_conversions/convert_negative.cpp | 6 +- .../convert_nms_to_nms_ie_internal.cpp | 6 +- .../convert_pad_to_group_conv.cpp | 6 +- .../convert_previous_nms_to_nms_5.cpp | 10 +- .../convert_reduce_to_pooling.cpp | 25 + .../convert_scatter_elements_to_scatter.cpp | 12 +- .../convert_sequences_to_tensor_iterator.cpp | 14 +- .../op_conversions/convert_shapeof3.cpp | 16 +- .../convert_shuffle_channels3.cpp | 12 +- .../op_conversions/convert_space_to_batch.cpp | 9 +- .../op_conversions/convert_space_to_depth.cpp | 6 +- .../op_conversions/convert_subtract.cpp | 6 +- .../convert_ti_to_sequences.cpp | 20 +- .../op_conversions/convert_topk3.cpp | 16 +- .../op_conversions/gru_cell_decomposition.cpp | 4 +- .../op_conversions/hsigmoid_decomposition.cpp | 4 +- .../op_conversions/hswish_decomposition.cpp | 4 +- .../log_softmax_decomposition.cpp | 4 +- .../lstm_cell_decomposition.cpp | 4 +- .../op_conversions/mvn6_decomposition.cpp | 74 + .../reduce_l1_decomposition.cpp | 4 +- .../reduce_l2_decomposition.cpp | 4 +- .../op_conversions/rnn_cell_decomposition.cpp | 4 +- .../op_conversions/softplus_decomposition.cpp | 4 +- .../convert_opset2_to_opset1.cpp | 10 +- .../convert_opset3_to_opset2.cpp | 10 +- .../src/transformations/serialize.cpp | 330 ++- .../smart_reshape/matmul_sr.cpp | 18 +- .../smart_reshape/mimic_set_batch_size.cpp | 2 + .../proposal_scales_stridedslice.cpp | 7 +- .../smart_reshape/reshape_to_1D.cpp | 4 +- .../smart_reshape/set_batch_size.cpp | 3 +- .../smart_reshape/smart_reshape.cpp | 2 + .../smart_reshape/strided_slice_squeeze.cpp | 11 +- .../src/transformations/utils/utils.cpp | 2 +- inference-engine/src/vpu/CMakeLists.txt | 4 + .../src/vpu/common/CMakeLists.txt | 1 - .../include/vpu/ngraph/query_network.hpp | 4 +- ...ynamic_to_static_shape_gather_elements.hpp | 13 + .../eliminate_shapeof_after_dsr.hpp | 2 +- .../merge_subsequent_dsr_operations.hpp | 2 +- .../vpu/common/include/vpu/utils/dot_io.hpp | 2 +- .../src/vpu/common/include/vpu/utils/io.hpp | 5 +- .../operations/dynamic_shape_resolver.cpp | 2 +- .../operations/out_shape_of_reshape.cpp | 4 +- .../ngraph/operations/static_shape_topk.cpp | 2 +- .../vpu/common/src/ngraph/query_network.cpp | 4 +- .../dynamic_to_static_shape.cpp | 3 + .../dynamic_to_static_shape_broadcast.cpp | 4 +- ...ynamic_to_static_shape_gather_elements.cpp | 31 + 
.../dynamic_to_static_shape_strided_slice.cpp | 16 +- ...dynamic_to_static_shape_variadic_split.cpp | 2 +- .../eliminate_shapeof_after_dsr.cpp | 8 +- .../extract_dynamic_batch.cpp | 46 +- .../merge_subsequent_dsr_operations.cpp | 8 +- .../src/vpu/graph_transformer/CMakeLists.txt | 1 - .../include/vpu/frontend/frontend.hpp | 16 +- .../vpu/frontend/ie_parsed_network.hpp | 4 +- .../include/vpu/graph_transformer.hpp | 6 +- .../src/frontend/detect_network_batch.cpp | 27 +- .../src/frontend/frontend.cpp | 78 +- .../src/frontend/ie_parsed_network.cpp | 7 +- .../src/frontend/remove_const_layers.cpp | 7 +- .../src/frontend/unroll_loops.cpp | 4 +- .../src/graph_transformer.cpp | 11 +- .../src/middleend/hw/tiling.cpp | 2 +- .../src/stages/exp_detectionoutput.cpp | 4 +- .../src/stages/static_shape_nms.cpp | 39 +- .../myriad_executable_network.cpp | 4 +- .../myriad_plugin/myriad_executable_network.h | 26 +- .../src/vpu/myriad_plugin/myriad_plugin.cpp | 5 +- .../async_infer_request_test.cpp | 2 +- .../inference_engine/blob_copy_test.cpp | 1 - .../cnn_network/cnn_ngraph_impl_tests.cpp | 651 +++++- .../convert_ngraph_to_cnn_network_tests.cpp | 6 +- .../cnn_network/matmul_sr_tests.cpp | 1 - .../inference_engine/cnn_network_test.cpp | 24 +- .../inference_engine/executable_network.cpp | 2 +- .../models/conv_with_rt_info.bin | 0 .../models/conv_with_rt_info.xml | 67 + ...xperimental_detectron_detection_output.xml | 138 +- ...ntal_detectron_detection_output_opset6.xml | 106 + ...mental_detectron_roi_feature_extractor.xml | 150 +- ...detectron_roi_feature_extractor_opset6.xml | 118 ++ .../ir_serialization/models/loop_2d_add.bin | Bin 0 -> 29 bytes .../ir_serialization/models/loop_2d_add.xml | 237 +++ .../models/pad_with_shape_of.bin | Bin 0 -> 20 bytes .../models/pad_with_shape_of.xml | 93 + .../models/ti_negative_stride.xml | 272 +++ .../ir_serialization/models/ti_resnet.xml | 300 +++ .../ir_serialization/serialize.cpp | 49 +- .../ir_serialization/tensor_iterator.cpp | 86 + .../ir_serialization/tensor_names.cpp | 58 + .../keep_constant_inputs_tests.cpp | 31 +- ...ermediate_with_constant_transformation.cpp | 1 - ..._to_scaleshift_or_power_transformation.cpp | 1 + ...ly_to_group_convolution_transformation.cpp | 41 +- .../multiply_transformation.cpp | 2 - ...ormalize_dequantization_transformation.cpp | 170 ++ .../strided_slice_transformation.cpp | 376 ++++ .../inference_engine/net_reader_test.cpp | 6 +- .../ngraph_reader/tensor_names.cpp | 89 + .../inference_engine/ngraph_reshape_tests.cpp | 403 +++- .../inference_engine/parameter_tests.cpp | 12 +- .../serialization/single_layer/broadcast.cpp | 33 + .../single_layer/detection_output.cpp | 74 + .../single_layer/normalize_l2.cpp | 42 + .../serialization/single_layer/pad.cpp | 40 + .../serialization/single_layer/pooling.cpp | 92 + .../serialization/single_layer/prelu.cpp | 43 + .../single_layer/prior_box_clustered.cpp | 77 + .../single_layer/region_yolo.cpp | 49 + .../serialization/single_layer/reshape.cpp | 35 + .../single_layer/shuffle_channels.cpp | 49 + .../single_layer/tensor_iterator.cpp | 43 + .../single_layer/variadic_split.cpp | 4 +- .../inference_engine/task_executor_tests.cpp | 1 - .../transformations/clamp_fusion.cpp | 108 + .../compare_functions_test.cpp | 211 ++ .../convert_gather_0d_test.cpp | 85 + .../transformations/convert_matmul_test.cpp | 6 +- .../convert_mvn1_to_mvn6_test.cpp | 101 + .../transformations/convert_nms5_test.cpp | 5 - .../convert_nms_to_nms_ie_internal_test.cpp | 4 - .../convert_nms_to_nms_ie_test.cpp | 4 - 
.../convert_pad_to_group_conv.cpp | 35 - ...nvert_scatter_elements_to_scatter_test.cpp | 7 +- .../transformations/convert_shapeof3.cpp | 13 +- .../convert_shuffle_channels3_test.cpp | 7 +- .../transformations/convert_topk3_test.cpp | 26 +- .../depth_to_space_fusion_test.cpp | 55 +- .../eliminate_unsqueeze_gather.cpp | 87 + .../transformations/fq_reshape_fusion.cpp | 1 - .../mul_add_conversion_test.cpp | 7 +- .../mvn6_decomposition_test.cpp | 124 ++ .../transformations/nop_elimination.cpp | 2 - .../transformations/pad_fusion.cpp | 420 ++++ .../relu_fake_quantize_fusion.cpp | 98 + .../reshape_fc_fusion_test.cpp | 18 +- .../plugin/cpu/bfloat16/conv_conv.cpp | 2 +- .../conv_relu_pool_conv_relu_pool.cpp | 2 +- .../runtime_precision.cpp | 32 + .../concat_with_child_and_output.cpp | 53 + .../convolution_transformation.cpp | 14 +- .../layer_transformation.cpp | 16 +- .../mat_mul_with_constant_transformation.cpp | 4 +- .../strided_slice_transformation.cpp | 101 + .../single_layer_tests/bucketize.cpp | 56 + .../single_layer_tests/fake_quantize.cpp | 48 +- .../single_layer_tests/gather_elements.cpp | 76 + .../single_layer_tests/interpolate.cpp | 12 +- .../single_layer_tests/mat_mul.cpp | 4 +- .../single_layer_tests/reverse_sequence.cpp | 4 +- .../single_layer_tests/roi_align.cpp | 6 +- .../single_layer_tests/roi_pooling.cpp | 1 + .../single_layer_tests/scatter_update.cpp | 2 +- .../skip_tests_config.cpp | 7 +- .../subgraph_tests/tensor_names.cpp | 16 + .../cpu/single_layer_tests/activation.cpp | 2 +- .../plugin/cpu/single_layer_tests/convert.cpp | 1 - .../cpu/single_layer_tests/convolution.cpp | 403 ++++ .../plugin/cpu/single_layer_tests/crop.cpp | 2 +- .../plugin/cpu/single_layer_tests/eltwise.cpp | 6 +- .../single_layer_tests/gather_elements.cpp | 92 + .../single_layer_tests/group_convolution.cpp | 86 +- .../cpu/single_layer_tests/interpolate.cpp | 97 +- .../plugin/cpu/single_layer_tests/logical.cpp | 2 +- .../plugin/cpu/single_layer_tests/mvn.cpp | 2 +- .../cpu/single_layer_tests/normalize.cpp | 3 +- .../plugin/cpu/single_layer_tests/pad.cpp | 2 +- .../plugin/cpu/single_layer_tests/permute.cpp | 2 +- .../cpu/single_layer_tests/psroi_pooling.cpp | 184 ++ .../cpu/single_layer_tests/reduce_ops.cpp | 2 +- .../cpu/single_layer_tests/region_yolo.cpp | 5 +- .../cpu/single_layer_tests/roialign.cpp | 3 +- .../plugin/cpu/single_layer_tests/split.cpp | 3 +- .../cpu/subgraph_tests/src/conv_concat.cpp | 26 +- .../plugin/cpu/test_utils/cpu_test_utils.cpp | 71 +- .../plugin/cpu/test_utils/cpu_test_utils.hpp | 93 +- .../cpu/test_utils/fusing_test_utils.cpp | 102 + .../cpu/test_utils/fusing_test_utils.hpp | 177 ++ .../remove_permutations_NHWC_to_NCHW_pass.cpp | 307 ++- .../single_layer_tests/convolution.cpp | 46 +- .../skip_tests_config.cpp | 5 +- .../subgraph_tests/tensor_names.cpp | 17 + .../behavior/core_threading_tests.cpp | 2 +- .../concat_with_child_and_output.cpp | 53 + .../layer_transformation.cpp | 16 +- .../strided_slice_transformation.cpp | 98 + .../multi/gpu_remote_blob_tests.cpp | 2 +- .../single_layer_tests/scatter_update.cpp | 2 +- .../skip_tests_config.cpp | 7 +- .../quantized_convolution_backprop_data.cpp | 80 + ...ntized_group_convolution_backprop_data.cpp | 83 + .../subgraph_tests/tensor_names.cpp | 18 + .../dynamic_to_static_shape_clamp.cpp | 4 +- .../dynamic_to_static_shape_convert.cpp | 4 +- ...ynamic_to_static_shape_gather_elements.cpp | 133 ++ .../dynamic_to_static_shape_reduce.cpp | 2 - .../dynamic_to_static_shape_transpose.cpp | 4 +- ...amic_to_static_shape_unary_elementwise.cpp | 6 +- 
.../eliminate_shapeof_after_dsr.cpp | 15 +- .../merge_subsequent_dsr_operations.cpp | 12 +- .../behavior/core_integration.cpp | 26 +- .../single_layer_tests/roi_pooling.cpp | 2 +- .../skip_tests_config.cpp | 6 +- .../subgraph_tests/tensor_names.cpp | 19 + .../myriad/subgraph_tests/dsr_gather.cpp | 54 +- .../myriad/subgraph_tests/dsr_gather_base.hpp | 66 + .../subgraph_tests/dsr_gather_elements.cpp | 66 + .../myriad/subgraph_tests/dsr_reshape.cpp | 12 +- .../subgraph_tests/dsr_strided_slice.cpp | 2 + .../plugin/myriad/subgraph_tests/dsr_topk.cpp | 1 - .../subgraph_tests/nonzero_transpose.cpp | 6 +- .../subgraph_tests/topk_k_propagation.cpp | 49 + .../subgraph_tests/unsqueeze_gather.cpp | 61 + .../plugin/shared/include/behavior/config.hpp | 2 +- .../include/behavior/core_integration.hpp | 22 +- .../include/behavior/core_threading_tests.hpp | 1 - .../shared/include/behavior/infer_request.hpp | 11 +- .../behavior/invalid_cases/proposal.hpp | 1 + .../shared/include/behavior/preprocessing.hpp | 10 - .../shared/include/behavior/test_plugin.hpp | 4 +- .../runtime_precision.hpp | 45 + .../concat_with_child_and_output.hpp | 38 + .../strided_slice_transformation.hpp | 45 + .../single_layer_tests/binary_convolution.hpp | 2 +- .../include/single_layer_tests/bucketize.hpp | 9 +- .../include/single_layer_tests/one_hot.hpp | 2 +- .../include/single_layer_tests/prior_box.hpp | 2 +- .../include/single_layer_tests/range.hpp | 4 + .../include/single_layer_tests/reverse.hpp | 2 +- .../single_layer_tests/squared_difference.hpp | 2 +- .../subgraph_tests/concat_quantization.hpp | 2 +- .../include/subgraph_tests/tensor_names.hpp | 166 ++ .../src/behavior/invalid_cases/proposal.cpp | 5 + .../shared/src/behavior/memory_states.cpp | 182 +- .../exec_graph_serialization.cpp | 14 +- .../network_serializer.cpp | 11 +- .../num_inputs_fusing_bin_conv.cpp | 10 +- .../runtime_precision.cpp | 136 ++ .../unique_node_names.cpp | 2 +- .../plugin/shared/src/hetero/synthetic.cpp | 1 - .../concat_with_child_and_output.cpp | 63 + ...put_layers_handling_in_transformations.cpp | 1 - ...handling_in_transformations_for_concat.cpp | 3 - ...ansformations_for_concat_multi_channel.cpp | 1 - .../strided_slice_transformation.cpp | 86 + .../base/layer_test_utils.hpp | 11 +- .../single_layer/binary_convolution.hpp | 2 +- .../single_layer/bucketize.hpp | 33 +- .../single_layer/gather_elements.hpp | 33 + .../single_layer/one_hot.hpp | 2 +- .../shared_test_classes/single_layer/pad.hpp | 4 +- .../single_layer/prior_box.hpp | 2 +- .../single_layer/reverse.hpp | 2 +- .../single_layer/scatter_update.hpp | 6 +- .../single_layer/squared_difference.hpp | 2 +- .../subgraph/tensor_names.hpp | 28 + .../src/base/layer_test_utils.cpp | 62 +- .../src/single_layer/binary_convolution.cpp | 2 +- .../src/single_layer/broadcast.cpp | 2 +- .../src/single_layer/bucketize.cpp | 100 +- .../single_layer/extract_image_patches.cpp | 2 +- .../src/single_layer/gather_elements.cpp | 54 + .../src/single_layer/normalize_l2.cpp | 2 +- .../src/single_layer/one_hot.cpp | 2 +- .../src/single_layer/pooling.cpp | 4 +- .../src/single_layer/prior_box.cpp | 2 +- .../src/single_layer/psroi_pooling.cpp | 210 +- .../src/single_layer/range.cpp | 4 - .../src/single_layer/reorg_yolo.cpp | 1 - .../src/single_layer/reverse.cpp | 2 +- .../src/single_layer/roi_pooling.cpp | 12 +- .../src/single_layer/scatter_update.cpp | 8 +- .../src/single_layer/space_to_batch.cpp | 2 +- .../src/single_layer/squared_difference.cpp | 2 +- .../src/subgraph/basic_lstm.cpp | 1 - 
.../src/subgraph/concat_multi_input.cpp | 2 +- .../src/subgraph/constant_result.cpp | 1 - .../memory_eltwise_reshape_concat.cpp | 2 +- .../src/subgraph/parameter_result.cpp | 1 - .../subgraph/reshape_squeeze_reshape_relu.cpp | 1 - .../src/subgraph/tensor_names.cpp | 35 + .../src/subgraph/trivial_concat.cpp | 3 - .../two_fake_quantize_to_fullyconnected.cpp | 2 +- .../common_test_utils/common_utils.hpp | 11 +- .../common_test_utils/data_utils.hpp | 98 +- .../common_test_utils/ngraph_test_utils.cpp | 737 ++++++- .../common_test_utils/ngraph_test_utils.hpp | 24 +- .../common_test_utils/unicode_utils.cpp | 25 + .../common_test_utils/unicode_utils.hpp | 28 +- .../common_test_utils/w_dirent.h | 1 + .../functional_test_utils/blob_utils.hpp | 118 +- .../functional_test_utils/precision_utils.hpp | 6 +- .../impl/mock_inference_plugin_internal.hpp | 5 +- .../interface/mock_iinference_plugin.hpp | 4 +- .../mocks/mock_engine/mock_plugin.cpp | 2 +- .../mocks/mock_icnn_network.hpp | 8 +- .../mocks/mock_ie_ivariable_state.hpp | 4 + .../mocks/mock_iexecutable_network.hpp | 4 + .../mocks/mock_iinfer_request.hpp | 4 + .../common/dequantization_operations.hpp | 4 +- .../lpt_ngraph_functions/concat_function.hpp | 6 + ...multiply_to_group_convolution_function.hpp | 3 +- .../normalize_dequantization_function.hpp | 25 + .../strided_slice_function.hpp | 64 + .../src/common/builders.cpp | 9 +- .../src/concat_function.cpp | 35 +- ...fake_quantize_and_convolution_function.cpp | 1 - ...multiply_to_group_convolution_function.cpp | 22 +- .../src/normalize_dequantization_function.cpp | 45 + .../src/strided_slice_function.cpp | 131 ++ .../include/ngraph_functions/builders.hpp | 10 +- .../ngraph_functions/pass/convert_prc.hpp | 43 +- .../ngraph_functions/subgraph_builders.hpp | 28 +- .../ngraph_functions/utils/data_utils.hpp | 17 +- .../ngraph_functions/utils/ngraph_helpers.hpp | 4 +- .../ngraph_functions/src/gather_elements.cpp | 37 + .../ngraph_functions/src/scatter_update.cpp | 6 +- .../src/utils/ngraph_helpers.cpp | 132 +- .../unit/cpu/mkldnn_memory_desc_test.cpp | 126 ++ .../tests/unit/cpu/mkldnn_memory_test.cpp | 20 + .../onnx_import/onnx_importer_test.cpp | 1 - .../ie_executable_network_base_test.cpp | 2 + .../ie_memory_state_internal_test.cpp | 6 + .../ie_executable_network_test.cpp | 2 + .../inference_engine/ie_extension_test.cpp | 2 +- .../passes_tests/adjust_data_batch_tests.cpp | 2 + .../tests_deprecated/CMakeLists.txt | 4 + .../vpu/myriad_tests/aot_behavior_tests.cpp | 4 +- .../helpers/myriad_load_network_case.cpp | 3 +- .../helpers/myriad_protocol_case.hpp | 4 +- .../vpu/myriad_tests/vpu_protocol_tests.cpp | 14 +- .../fluid_preproc/CMakeLists.txt | 5 +- .../include/classification_matcher.hpp | 2 +- .../ie_tests/include/ie_core_adapter.hpp | 21 +- .../ie_tests/src/classification_matcher.cpp | 25 +- .../ie_tests/src/custom_matcher.cpp | 14 +- .../ie_tests/src/ie_core_adapter.cpp | 32 +- .../functional/ie_tests/src/raw_matcher.cpp | 1 - .../ie_tests/src/segmentation_matcher.cpp | 32 +- .../single_layer_tests.cpp | 10 +- .../io_blob_tests/layout_tests.cpp | 2 +- .../network_tests/network_test.cpp | 16 +- .../single_layer_tests/conv_int8_tests.cpp | 10 - .../single_layer_tests/pooling_tests.cpp | 2 +- .../single_layer_tests/region_yolo_tests.cpp | 1 - .../functional/shared_tests/CMakeLists.txt | 2 +- .../common_single_layer_tests/conv_ref.cpp | 10 +- .../def_conv_ref.cpp | 2 - .../common_single_layer_tests/pool_ref.cpp | 8 +- .../single_layer_tests.hpp | 1 - .../shared_tests/lstm/lstm_cell_test.hpp | 14 +- 
.../shared_tests/lstm/lstm_ir_test.hpp | 18 +- .../functional/shared_tests/lstm/rnn_util.cpp | 2 - .../single_layer_tests/quantize_tests.hpp | 8 +- .../common/layers/myriad_layers_blob_test.cpp | 132 +- .../layers/myriad_layers_concat_test.cpp | 34 +- .../layers/myriad_layers_conv_nd_test.hpp | 28 +- .../layers/myriad_layers_convolution1x1.hpp | 66 +- .../layers/myriad_layers_convolution3x3.hpp | 68 +- .../layers/myriad_layers_convolution_test.cpp | 45 +- .../layers/myriad_layers_convolution_test.hpp | 3 - .../common/layers/myriad_layers_crop_test.hpp | 2 - .../layers/myriad_layers_custom_test.hpp | 79 +- .../myriad_layers_detection_output_test.cpp | 99 +- .../layers/myriad_layers_eltwise_test.cpp | 79 +- ...myriad_layers_exp_detectionoutput_test.hpp | 405 ++-- .../myriad_layers_fully_connected_tests.hpp | 2 - .../common/layers/myriad_layers_gemm_test.hpp | 1 - .../common/layers/myriad_layers_lstm_cell.cpp | 34 +- .../common/layers/myriad_layers_lstm_cell.hpp | 4 - .../common/layers/myriad_layers_mvn_test.cpp | 24 +- .../common/layers/myriad_layers_pad_test.hpp | 8 - .../layers/myriad_layers_permute_test.hpp | 1 - .../layers/myriad_layers_pool_nd_test.hpp | 28 +- .../layers/myriad_layers_pooling_test.hpp | 2 - ...myriad_layers_prior_box_clustered_test.cpp | 31 +- .../layers/myriad_layers_prior_box_test.cpp | 92 +- .../layers/myriad_layers_proposal_test.cpp | 34 +- .../myriad_layers_psroipooling_test.hpp | 22 +- .../layers/myriad_layers_reduce_test.hpp | 55 +- .../common/layers/myriad_layers_relu_test.cpp | 11 +- .../common/layers/myriad_layers_relu_test.hpp | 24 +- .../layers/myriad_layers_reshape_test.cpp | 86 +- .../layers/myriad_layers_reshape_test.hpp | 24 +- .../common/layers/myriad_layers_rfcn_test.cpp | 72 +- .../layers/myriad_layers_roi_align_test.hpp | 30 +- ...riad_layers_roi_feature_extractor_test.hpp | 6 - .../layers/myriad_layers_roi_pooling_test.hpp | 24 +- ...ad_layers_scatter_elements_update_test.hpp | 42 +- .../myriad_layers_scatter_update_test.hpp | 46 +- .../layers/myriad_layers_select_test.hpp | 1 - .../layers/myriad_layers_strided_slice_test.h | 25 +- .../common/layers/myriad_layers_tile_test.hpp | 1 - .../common/layers/myriad_layers_topk_test.hpp | 28 +- .../layers/myriad_layers_unsqueeze_test.hpp | 3 - .../vpu/common/myriad_get_output_tests.hpp | 48 +- .../common/myriad_get_perf_count_tests.cpp | 21 +- .../vpu/common/myriad_hw_extra_tests.hpp | 49 +- .../vpu/common/myriad_hw_network_tests.hpp | 26 +- .../vpu/common/myriad_hw_tests_base.hpp | 32 +- .../vpu/common/myriad_infer_tests.cpp | 85 +- .../vpu/common/myriad_merge_permute_tests.hpp | 3 +- .../graph_transformer/gt_functional_tests.cpp | 7 +- .../vpu/myriad_tests/myriad_configs_tests.cpp | 38 +- .../myriad_multiple_graph_tests.cpp | 7 +- .../vpu/vpu_base/myriad_layers_tests.hpp | 1 - .../vpu/vpu_base/vpu_layers_tests.cpp | 34 +- .../vpu/vpu_base/vpu_layers_tests.hpp | 7 +- .../functional/vpu/vpu_base/vpu_test_net.cpp | 1 - .../helpers/tests_common_func.hpp | 8 +- .../tests_deprecated/unit/CMakeLists.txt | 38 +- .../cnn_network/cnn_network_impl_test.cpp | 261 --- .../unit/cnn_network/xml_father_tests.cpp | 61 - .../unit/engines/gna/I8_quantisation_test.cpp | 6 +- .../unit/engines/gna/gna_matcher.cpp | 8 + .../unit/engines/gna/gna_memory_test.cpp | 1 - .../engines/gna/i16_quantisation_test.cpp | 6 +- .../engines/gna/matchers/conv_matcher.hpp | 5 +- .../unit/engines/mkldnn/convert_desc_test.cpp | 170 +- .../graph/layers/extensions/gather_tests.cpp | 2 +- .../layers/extensions/normalize_tests.cpp | 1 - 
.../graph/layers/extensions/onehot_tests.cpp | 6 +-
.../layers/extensions/strided_slice_tests.cpp | 14 +-
.../layers/internal/graph_activation_test.cpp | 6 +
.../graph_batchnorm_scaleshift_test.cpp | 4 +-
.../layers/internal/graph_batchnorm_test.cpp | 4 +-
.../layers/internal/graph_concat_test.cpp | 9 +-
.../graph/layers/internal/graph_conv_test.cpp | 38 +-
.../layers/internal/graph_deconv_test.cpp | 2 +-
.../layers/internal/graph_depthwise_test.cpp | 3 +
.../layers/internal/graph_input_test.cpp | 5 +-
.../layers/internal/graph_pooling_test.cpp | 12 +-
.../layers/internal/graph_reshape_test.cpp | 3 +-
.../internal/graph_roi_pooling_test.cpp | 2 +-
.../graph_conv_depthwise_fusing_test.cpp | 3 +
.../graph/structure/graph_structure_test.cpp | 15 +-
.../unit/engines/mkldnn/graph/test_graph.hpp | 34 +-
.../engines/mkldnn/mkldnn_primitive_test.cpp | 2 +-
.../vpu/mvnc/pthread_semaphore_tests.cpp | 2 +-
.../unit/graph_tools/graph_copy_tests.cpp | 29 +-
.../unit/graph_tools/graph_test_base.hpp | 9 +-
.../unit/graph_tools/graph_tools_test.cpp | 10 +-
.../util_const_infer_test.cpp | 30 +-
.../unit/inference_engine_tests/util_test.cpp | 10 +-
.../unit/stress_tests/stress_tests.cpp | 44 -
inference-engine/thirdparty/CMakeLists.txt | 17 +-
.../thirdparty/clDNN/CMakeLists.txt | 2 +-
.../thirdparty/clDNN/api/convolution.hpp | 28 +
.../thirdparty/clDNN/api/deconvolution.hpp | 10 +
.../thirdparty/clDNN/api/device.hpp | 4 +-
.../thirdparty/clDNN/api/tensor.hpp | 41 +-
.../kernel_selector/common/tensor_type.cpp | 5 +
.../activation/activation_kernel_base.cpp | 4 -
.../activation/activation_kernel_opt.cpp | 6 +-
.../activation/activation_kernel_opt.h | 1 +
.../activation/activation_kernel_ref.cpp | 4 +
.../activation/activation_kernel_ref.h | 1 +
.../arg_max_min/arg_max_min_kernel_axis.cpp | 6 +-
.../arg_max_min/arg_max_min_kernel_axis.h | 1 +
.../arg_max_min/arg_max_min_kernel_base.cpp | 6 +-
.../arg_max_min/arg_max_min_kernel_base.h | 2 +-
.../arg_max_min_kernel_gpu_ref.cpp | 10 +-
.../arg_max_min/arg_max_min_kernel_gpu_ref.h | 3 +-
.../arg_max_min/arg_max_min_kernel_opt.cpp | 8 +-
.../arg_max_min/arg_max_min_kernel_opt.h | 3 +-
.../average_unpooling_kernel_base.cpp | 5 +-
.../average_unpooling_kernel_base.h | 2 +-
.../average_unpooling_kernel_gpu_ref.cpp | 11 +-
.../average_unpooling_kernel_gpu_ref.h | 3 +-
.../batch_to_space_kernel_base.cpp | 4 +-
.../batch_to_space_kernel_base.h | 2 +-
.../batch_to_space_kernel_ref.cpp | 6 +-
.../batch_to_space_kernel_ref.h | 1 +
.../binary_convolution_kernel_1x1.cpp | 6 +-
.../binary_convolution_kernel_1x1.h | 1 +
...y_convolution_kernel_1x1_b_fs_yx_fsv16.cpp | 6 +-
...ary_convolution_kernel_1x1_b_fs_yx_fsv16.h | 1 +
.../binary_convolution_kernel_base.cpp | 2 -
.../binary_convolution_kernel_generic.cpp | 6 +-
.../binary_convolution_kernel_generic.h | 1 +
.../binary_convolution_kernel_ref.cpp | 6 +-
.../binary_convolution_kernel_ref.h | 1 +
.../border/border_kernel_base.cpp | 5 +-
.../border/border_kernel_base.h | 2 +-
.../border/border_kernel_ref.cpp | 8 +-
.../actual_kernels/border/border_kernel_ref.h | 1 +
.../broadcast/broadcast_kernel_base.cpp | 4 +-
.../broadcast/broadcast_kernel_base.h | 2 +-
.../broadcast/broadcast_kernel_ref.cpp | 8 +-
.../broadcast/broadcast_kernel_ref.h | 1 +
.../concatenation_kernel_b_fs_yx_fsv16.cpp | 6 +-
.../concatenation_kernel_b_fs_yx_fsv16.h | 1 +
.../concatenation_kernel_base.cpp | 6 -
...ncatenation_kernel_depth_bfyx_no_pitch.cpp | 6 +-
...concatenation_kernel_depth_bfyx_no_pitch.h | 1 +
.../concatenation_kernel_fs_b_yx_fsv32.cpp | 10 +-
.../concatenation_kernel_fs_b_yx_fsv32.h | 1 +
.../concatenation_kernel_ref.cpp | 4 +
.../concatenation/concatenation_kernel_ref.h | 1 +
.../concatenation_kernel_simple_ref.cpp | 6 +-
.../concatenation_kernel_simple_ref.h | 1 +
.../convolution_kernel_b_fs_yx_fsv16.cpp | 102 +-
.../convolution_kernel_b_fs_yx_fsv16.h | 12 +-
.../convolution_kernel_b_fs_yx_fsv16_1x1.cpp | 116 +-
.../convolution_kernel_b_fs_yx_fsv16_1x1.h | 12 +-
...olution_kernel_b_fs_yx_fsv16_depthwise.cpp | 11 +-
...nvolution_kernel_b_fs_yx_fsv16_depthwise.h | 1 +
...volution_kernel_b_fs_yx_fsv16_imad_1x1.cpp | 32 +-
...onvolution_kernel_b_fs_yx_fsv16_imad_1x1.h | 1 +
.../convolution_kernel_b_fs_yx_fsv4_int8.cpp | 12 +-
.../convolution_kernel_b_fs_yx_fsv4_int8.h | 1 +
...ution_kernel_b_fs_yx_fsv_16_32_imad_dw.cpp | 8 +-
...ution_kernel_b_fs_yx_fsv_16_32_imad_dw.hpp | 1 +
.../convolution_kernel_b_fs_zyx_fsv16.cpp | 10 +-
.../convolution_kernel_b_fs_zyx_fsv16.h | 1 +
...convolution_kernel_b_fs_zyx_fsv16_imad.cpp | 390 ++--
.../convolution_kernel_b_fs_zyx_fsv16_imad.h | 2 +
.../convolution/convolution_kernel_base.cpp | 22 +-
.../convolution_kernel_bfyx_1x1.cpp | 6 +-
.../convolution/convolution_kernel_bfyx_1x1.h | 1 +
.../convolution_kernel_bfyx_1x1_gemm_buf.cpp | 6 +-
.../convolution_kernel_bfyx_1x1_gemm_buf.h | 1 +
.../convolution_kernel_bfyx_1x1_opt.cpp | 8 +-
.../convolution_kernel_bfyx_1x1_opt.h | 1 +
...tion_kernel_bfyx_depthwise_weights_lwg.cpp | 6 +-
...lution_kernel_bfyx_depthwise_weights_lwg.h | 1 +
...onvolution_kernel_bfyx_direct_10_12_16.cpp | 6 +-
.../convolution_kernel_bfyx_direct_10_12_16.h | 1 +
.../convolution_kernel_bfyx_gemm_like.cpp | 8 +-
.../convolution_kernel_bfyx_gemm_like.h | 1 +
.../convolution_kernel_bfyx_iyxo.cpp | 6 +-
.../convolution_kernel_bfyx_iyxo.h | 1 +
.../convolution_kernel_bfyx_os_iyx_osv16.cpp | 6 +-
.../convolution_kernel_bfyx_os_iyx_osv16.h | 1 +
...nvolution_kernel_bfyx_to_b_fs_yx_fsv16.cpp | 11 +-
...convolution_kernel_bfyx_to_b_fs_yx_fsv16.h | 1 +
...on_kernel_bfyx_to_bs_fs_yx_bsv16_fsv16.cpp | 7 +-
...tion_kernel_bfyx_to_bs_fs_yx_bsv16_fsv16.h | 1 +
...onvolution_kernel_bfyx_to_fs_byx_fsv32.cpp | 6 +-
.../convolution_kernel_bfyx_to_fs_byx_fsv32.h | 1 +
.../convolution_kernel_fs_byx_fsv32.cpp | 6 +-
.../convolution_kernel_fs_byx_fsv32.h | 1 +
.../convolution_kernel_fs_byx_fsv32_1x1.cpp | 6 +-
.../convolution_kernel_fs_byx_fsv32_1x1.h | 1 +
...volution_kernel_fs_byx_fsv32_depthwise.cpp | 6 +-
...onvolution_kernel_fs_byx_fsv32_depthwise.h | 1 +
.../convolution/convolution_kernel_imad.cpp | 10 +-
.../convolution/convolution_kernel_imad.h | 1 +
...nvolution_kernel_imad_b_fs_yx_fsv4_1x1.cpp | 9 +-
...convolution_kernel_imad_b_fs_yx_fsv4_1x1.h | 1 +
...onvolution_kernel_imad_b_fs_yx_fsv4_dw.cpp | 7 +-
...onvolution_kernel_imad_b_fs_yx_fsv4_dw.hpp | 1 +
...n_kernel_imad_bs_fs_yx_bsv16_fsv16_1x1.cpp | 6 +-
...ion_kernel_imad_bs_fs_yx_bsv16_fsv16_1x1.h | 1 +
...n_kernel_imad_bs_fs_yx_bsv16_fsv16_3x3.cpp | 6 +-
...ion_kernel_imad_bs_fs_yx_bsv16_fsv16_3x3.h | 1 +
.../convolution_kernel_mmad_b_fs_yx_fsv32.cpp | 9 +-
.../convolution_kernel_mmad_b_fs_yx_fsv32.h | 1 +
...nvolution_kernel_mmad_b_fs_yx_fsv32_dw.cpp | 8 +-
...convolution_kernel_mmad_b_fs_yx_fsv32_dw.h | 1 +
...tion_kernel_mmad_bfyx_to_b_fs_yx_fsv32.cpp | 10 +-
...lution_kernel_mmad_bfyx_to_b_fs_yx_fsv32.h | 1 +
...ution_kernel_mmad_bfyx_to_b_fs_yx_fsv4.cpp | 10 +-
...olution_kernel_mmad_bfyx_to_b_fs_yx_fsv4.h | 1 +
.../convolution/convolution_kernel_ref.cpp | 4 +
.../convolution/convolution_kernel_ref.h | 1 +
.../convolution_kernel_winograd_2x3_s1.cpp | 6 +- .../convolution_kernel_winograd_2x3_s1.h | 1 + ...nvolution_kernel_winograd_2x3_s1_fused.cpp | 6 +- ...convolution_kernel_winograd_2x3_s1_fused.h | 1 + ...nvolution_kernel_winograd_6x3_s1_fused.cpp | 6 +- ...convolution_kernel_winograd_6x3_s1_fused.h | 1 + .../convolution_kernel_yxfb_ref.cpp | 6 +- .../convolution/convolution_kernel_yxfb_ref.h | 1 + .../convolution_kernel_yxfb_yxio_b16.cpp | 12 +- .../convolution_kernel_yxfb_yxio_b16.h | 1 + ...n_kernel_yxfb_yxio_b1_block_multiple_x.cpp | 4 + ...ion_kernel_yxfb_yxio_b1_block_multiple_x.h | 1 + .../convolution_kernel_yxfb_yxio_b8.cpp | 6 +- .../convolution_kernel_yxfb_yxio_b8.h | 1 + ...eformable_convolution_kernel_bfyx_conv.cpp | 6 +- .../deformable_convolution_kernel_bfyx_conv.h | 1 + ...ormable_convolution_kernel_bfyx_interp.cpp | 5 +- ...eformable_convolution_kernel_bfyx_interp.h | 1 + ...deformable_convolution_kernel_bfyx_ref.cpp | 4 + .../deformable_convolution_kernel_bfyx_ref.h | 1 + .../ctc_greedy_decoder_kernel_base.cpp | 5 +- .../ctc_greedy_decoder_kernel_base.h | 2 +- .../ctc_greedy_decoder_kernel_ref.cpp | 6 +- .../ctc_greedy_decoder_kernel_ref.h | 1 + .../cum_sum/cum_sum_kernel_base.cpp | 5 +- .../cum_sum/cum_sum_kernel_base.h | 2 +- .../cum_sum/cum_sum_kernel_partial_sum.cpp | 10 +- .../cum_sum/cum_sum_kernel_partial_sum.h | 3 +- .../cum_sum/cum_sum_kernel_ref.cpp | 6 +- .../cum_sum/cum_sum_kernel_ref.h | 1 + .../deconvolution_kernel_b_fs_zyx_fsv16.cpp | 6 +- .../deconvolution_kernel_b_fs_zyx_fsv16.h | 1 + ...deconvolution_kernel_b_fs_zyx_fsv16_dw.cpp | 6 +- .../deconvolution_kernel_b_fs_zyx_fsv16_dw.h | 1 + .../deconvolution_kernel_base.cpp | 3 - .../deconvolution_kernel_bfyx_opt.cpp | 5 +- .../deconvolution_kernel_bfyx_opt.h | 1 + ...nvolution_kernel_imad_along_f_tile_bfx.cpp | 17 +- ...nvolution_kernel_imad_along_f_tile_bfx.hpp | 1 + .../deconvolution_kernel_imad_ref.cpp | 6 +- .../deconvolution_kernel_imad_ref.hpp | 1 + .../deconvolution_kernel_ref.cpp | 4 + .../deconvolution/deconvolution_kernel_ref.h | 1 + .../depth_to_space_kernel_base.cpp | 4 +- .../depth_to_space_kernel_base.h | 2 +- .../depth_to_space_kernel_block2_opt.cpp | 6 +- .../depth_to_space_kernel_block2_opt.h | 1 + .../depth_to_space_kernel_ref.cpp | 6 +- .../depth_to_space_kernel_ref.h | 1 + .../eltwise/eltwise_kernel_b_fs_yx_fsv16.cpp | 43 +- .../eltwise/eltwise_kernel_b_fs_yx_fsv16.h | 1 + .../eltwise/eltwise_kernel_base.cpp | 2 - .../eltwise/eltwise_kernel_fs_b_yx_fsv32.cpp | 8 +- .../eltwise/eltwise_kernel_fs_b_yx_fsv32.h | 1 + ...se_kernel_mixed_byxf_and_fs_b_yx_fsv32.cpp | 18 +- ...wise_kernel_mixed_byxf_and_fs_b_yx_fsv32.h | 1 + .../eltwise/eltwise_kernel_ref.cpp | 4 + .../eltwise/eltwise_kernel_ref.h | 1 + .../eltwise/eltwise_kernel_vload8.cpp | 6 +- .../eltwise/eltwise_kernel_vload8.h | 1 + .../embedding_bag_kernel_ref.cpp | 6 +- .../embedding_bag/embedding_bag_kernel_ref.h | 1 + .../extract_image_patches_kernel_base.cpp | 5 +- .../extract_image_patches_kernel_base.h | 2 +- .../extract_image_patches_kernel_ref.cpp | 6 +- .../extract_image_patches_kernel_ref.h | 1 + .../fully_connected_kernel_base.cpp | 4 - .../fully_connected_kernel_base.h | 2 - .../fully_connected_kernel_bf_io_gemm.cpp | 7 +- .../fully_connected_kernel_bf_io_gemm.h | 1 + ...y_connected_kernel_bf_io_input_spatial.cpp | 36 +- ...lly_connected_kernel_bf_io_input_spatial.h | 1 + .../fully_connected_kernel_bf_io_ref.cpp | 6 +- .../fully_connected_kernel_bf_io_ref.h | 1 + .../fully_connected_kernel_bf_tiled.cpp | 26 +- 
.../fully_connected_kernel_bf_tiled.h | 1 + .../fully_connected_kernel_bfyx_ref.cpp | 5 +- .../fully_connected_kernel_bfyx_ref.h | 1 + .../fully_connected_kernel_bs_f_bsv16_af8.cpp | 5 +- .../fully_connected_kernel_bs_f_bsv16_af8.h | 1 + .../fully_connected_kernel_bs_f_bsv16_b1.cpp | 5 +- .../fully_connected_kernel_bs_f_bsv16_b1.h | 1 + .../fully_connected_kernel_bs_f_bsv8_af8.cpp | 5 +- .../fully_connected_kernel_bs_f_bsv8_af8.h | 1 + .../fully_connected_kernel_fb_io_b8_f8.cpp | 14 +- .../fully_connected_kernel_fb_io_b8_f8.h | 1 + .../fully_connected_kernel_fb_io_block.cpp | 14 +- .../fully_connected_kernel_fb_io_block.h | 1 + .../fully_connected_kernel_fb_io_ref.cpp | 4 +- .../fully_connected_kernel_fb_io_ref.h | 1 + .../fully_connected_kernel_fb_oi_b8_ref.cpp | 5 +- .../fully_connected_kernel_fb_oi_b8_ref.h | 1 + .../fully_connected_kernel_fb_oi_ref.cpp | 4 +- .../fully_connected_kernel_fb_oi_ref.h | 1 + .../fully_connected_kernel_fs_byx_fsv32.cpp | 7 +- .../fully_connected_kernel_fs_byx_fsv32.h | 1 + .../fully_connected_kernel_imad.cpp | 4 +- .../fully_connected_kernel_imad.h | 1 + .../fully_connected_kernel_mmad.cpp | 5 +- .../fully_connected_kernel_mmad.h | 1 + .../fully_connected_kernel_yxfb_ref.cpp | 4 +- .../fully_connected_kernel_yxfb_ref.h | 1 + .../fused_conv_eltwise_kernel_base.cpp | 2 - ...fused_conv_eltwise_kernel_bfyx_1x1_opt.cpp | 8 +- .../fused_conv_eltwise_kernel_bfyx_1x1_opt.h | 1 + .../fused_conv_eltwise_kernel_bfyx_iyxo.cpp | 6 +- .../fused_conv_eltwise_kernel_bfyx_iyxo.h | 1 + ..._conv_eltwise_kernel_bfyx_os_iyx_osv16.cpp | 6 +- ...ed_conv_eltwise_kernel_bfyx_os_iyx_osv16.h | 1 + ...used_conv_eltwise_kernel_yxfb_yxio_b16.cpp | 12 +- .../fused_conv_eltwise_kernel_yxfb_yxio_b16.h | 1 + .../gather/gather_kernel_ref.cpp | 6 +- .../actual_kernels/gather/gather_kernel_ref.h | 1 + .../gather_tree/gather_tree_kernel_base.cpp | 4 +- .../gather_tree/gather_tree_kernel_base.h | 2 +- .../gather_tree/gather_tree_kernel_ref.cpp | 6 +- .../gather_tree/gather_tree_kernel_ref.h | 1 + .../actual_kernels/gemm/gemm_kernel_base.cpp | 5 +- .../actual_kernels/gemm/gemm_kernel_base.h | 2 +- .../gemm/gemm_kernel_mmad_int8.cpp | 9 +- .../gemm/gemm_kernel_mmad_int8.h | 1 + .../gemm/gemm_kernel_mmad_int8_slm.cpp | 13 +- .../gemm/gemm_kernel_mmad_int8_slm.h | 1 + .../actual_kernels/gemm/gemm_kernel_ref.cpp | 6 +- .../actual_kernels/gemm/gemm_kernel_ref.h | 1 + .../gemm/gemm_kernel_tiled_opt.cpp | 6 +- .../gemm/gemm_kernel_tiled_opt.h | 1 + .../actual_kernels/grn/grn_kernel_base.cpp | 5 +- .../core/actual_kernels/grn/grn_kernel_base.h | 2 +- .../actual_kernels/grn/grn_kernel_ref.cpp | 7 +- .../core/actual_kernels/grn/grn_kernel_ref.h | 1 + ...ernel_across_channel_multiple_features.cpp | 8 +- ..._kernel_across_channel_multiple_features.h | 1 + ...across_channel_multiple_features_fsv16.cpp | 6 +- ...l_across_channel_multiple_features_fsv16.h | 1 + .../lrn/lrn_kernel_across_channel_opt_b8.cpp | 6 +- .../lrn/lrn_kernel_across_channel_opt_b8.h | 1 + .../lrn/lrn_kernel_across_channel_ref.cpp | 8 +- .../lrn/lrn_kernel_across_channel_ref.h | 1 + .../actual_kernels/lrn/lrn_kernel_base.cpp | 5 +- .../core/actual_kernels/lrn/lrn_kernel_base.h | 2 +- .../actual_kernels/lrn/lrn_kernel_ref.cpp | 6 +- .../core/actual_kernels/lrn/lrn_kernel_ref.h | 1 + .../lrn_kernel_within_channel_byxf_opt.cpp | 6 +- .../lrn/lrn_kernel_within_channel_byxf_opt.h | 1 + .../lrn/lrn_kernel_within_channel_ref.cpp | 6 +- .../lrn/lrn_kernel_within_channel_ref.h | 1 + .../lrn/lrn_kernel_within_channel_ref_opt.cpp | 6 +- 
.../lrn/lrn_kernel_within_channel_ref_opt.h | 1 + .../lstm/lstm_elt_kernel_base.cpp | 3 - .../lstm/lstm_elt_kernel_ref.cpp | 6 +- .../actual_kernels/lstm/lstm_elt_kernel_ref.h | 3 +- .../lstm/lstm_gemm_kernel_base.cpp | 5 +- .../lstm/lstm_gemm_kernel_ref.cpp | 6 +- .../lstm/lstm_gemm_kernel_ref.h | 3 +- ...m_gemv_gpu_subgroup1x64_bfyx_ff_simd16.cpp | 19 +- ...stm_gemv_gpu_subgroup1x64_bfyx_ff_simd16.h | 3 +- ...m_gemv_gpu_subgroup1x64_bfyx_hh_simd16.cpp | 19 +- ...stm_gemv_gpu_subgroup1x64_bfyx_hh_simd16.h | 3 +- .../lstm_dynamic_input_bfyx_opt.cpp | 5 +- .../lstm_dynamic_input_bfyx_opt.h | 1 + .../lstm_dynamic_input_kernel_base.cpp | 4 +- .../lstm_dynamic_input_kernel_base.h | 3 +- .../lstm_dynamic_input_ref_kernel.cpp | 8 +- .../lstm_dynamic_input_ref_kernel.h | 1 + .../lstm_dynamic_timeloop_kernel_base.cpp | 4 +- .../lstm_dynamic_timeloop_kernel_base.h | 3 +- .../lstm_dynamic_timeloop_ref_kernel.cpp | 8 +- .../lstm_dynamic_timeloop_ref_kernel.h | 1 + .../max_unpooling_kernel_base.cpp | 7 +- .../max_unpooling/max_unpooling_kernel_base.h | 2 +- .../max_unpooling_kernel_gpu_ref.cpp | 10 +- .../max_unpooling_kernel_gpu_ref.h | 3 +- .../mvn/mvn_kernel_b_fs_yx_fsv16_imad.cpp | 12 +- .../mvn/mvn_kernel_b_fs_yx_fsv16_imad.hpp | 3 +- .../actual_kernels/mvn/mvn_kernel_base.cpp | 5 +- .../core/actual_kernels/mvn/mvn_kernel_base.h | 2 +- .../mvn/mvn_kernel_bfyx_opt.cpp | 6 +- .../actual_kernels/mvn/mvn_kernel_bfyx_opt.h | 1 + .../actual_kernels/mvn/mvn_kernel_ref.cpp | 6 +- .../core/actual_kernels/mvn/mvn_kernel_ref.h | 1 + .../normalize_kernel_across_spatial_ref.cpp | 6 +- .../normalize_kernel_across_spatial_ref.h | 1 + .../normalize/normalize_kernel_base.cpp | 5 +- .../normalize/normalize_kernel_base.h | 2 +- .../normalize_kernel_within_spatial_ref.cpp | 6 +- .../normalize_kernel_within_spatial_ref.h | 3 +- .../one_hot/one_hot_kernel_base.cpp | 4 +- .../one_hot/one_hot_kernel_base.h | 2 +- .../one_hot/one_hot_kernel_ref.cpp | 8 +- .../one_hot/one_hot_kernel_ref.h | 1 + .../permute/permute_kernel_ref.cpp | 6 +- .../permute/permute_kernel_ref.h | 1 + .../pooling/pooling_kernel_base.cpp | 5 +- .../pooling/pooling_kernel_base.h | 2 +- .../pooling_kernel_gpu_b_fs_yx_fsv16.cpp | 12 +- .../pooling_kernel_gpu_b_fs_yx_fsv16.h | 1 + .../pooling_kernel_gpu_b_fs_yx_fsv4.cpp | 6 +- .../pooling/pooling_kernel_gpu_b_fs_yx_fsv4.h | 1 + ...pooling_kernel_gpu_b_fs_zyx_fsv16_imad.cpp | 6 +- .../pooling_kernel_gpu_b_fs_zyx_fsv16_imad.h | 1 + .../pooling_kernel_gpu_bfyx_block_opt.cpp | 6 +- .../pooling_kernel_gpu_bfyx_block_opt.h | 1 + ...ooling_kernel_gpu_bs_fs_yx_bsv16_fsv16.cpp | 7 +- .../pooling_kernel_gpu_bs_fs_yx_bsv16_fsv16.h | 1 + .../pooling_kernel_gpu_bsv16_fsv16.cpp | 8 +- .../pooling/pooling_kernel_gpu_bsv16_fsv16.h | 1 + .../pooling/pooling_kernel_gpu_byxf_opt.cpp | 6 +- .../pooling/pooling_kernel_gpu_byxf_opt.h | 1 + .../pooling_kernel_gpu_byxf_padding_opt.cpp | 6 +- .../pooling_kernel_gpu_byxf_padding_opt.h | 1 + .../pooling_kernel_gpu_fs_b_yx_fsv32.cpp | 6 +- .../pooling_kernel_gpu_fs_b_yx_fsv32.h | 1 + .../pooling/pooling_kernel_gpu_int8_ref.cpp | 6 +- .../pooling/pooling_kernel_gpu_int8_ref.h | 1 + .../pooling/pooling_kernel_gpu_ref.cpp | 6 +- .../pooling/pooling_kernel_gpu_ref.h | 1 + .../pyramid_roi_align_kernel_base.cpp | 5 +- .../pyramid_roi_align_kernel_base.h | 2 +- .../pyramid_roi_align_kernel_ref.cpp | 6 +- .../pyramid_roi_align_kernel_ref.h | 1 + .../quantize/quantize_kernel_base.cpp | 2 - .../quantize/quantize_kernel_ref.cpp | 5 +- .../quantize/quantize_kernel_ref.h | 1 + 
.../quantize_kernel_scale_shift_opt.cpp | 5 + .../quantize_kernel_scale_shift_opt.h | 1 + .../reduce/reduce_kernel_b_fs_yx_fsv16.cpp | 6 +- .../reduce/reduce_kernel_b_fs_yx_fsv16.h | 1 + .../reduce/reduce_kernel_base.cpp | 4 +- .../reduce/reduce_kernel_base.h | 2 +- .../reduce/reduce_kernel_ref.cpp | 5 +- .../actual_kernels/reduce/reduce_kernel_ref.h | 1 + .../region_yolo/region_yolo_kernel_ref.cpp | 6 +- .../region_yolo/region_yolo_kernel_ref.h | 1 + .../reorder/reorder_biplanar_nv12.cpp | 8 +- .../reorder/reorder_biplanar_nv12.h | 1 + .../reorder_from_winograd_2x3_kernel.cpp | 8 +- .../reorder_from_winograd_2x3_kernel.h | 1 + .../actual_kernels/reorder/reorder_kernel.cpp | 8 +- .../actual_kernels/reorder/reorder_kernel.h | 1 + .../reorder/reorder_kernel_base.cpp | 8 +- .../reorder/reorder_kernel_base.h | 5 +- .../reorder/reorder_kernel_binary.cpp | 6 +- .../reorder/reorder_kernel_binary.h | 1 + .../reorder/reorder_kernel_fast_b1.cpp | 12 +- .../reorder/reorder_kernel_fast_b1.h | 1 + .../reorder_kernel_fs_b_yx_fsv32_to_bfyx.cpp | 7 +- .../reorder_kernel_fs_b_yx_fsv32_to_bfyx.h | 1 + .../reorder_kernel_to_yxfb_batched.cpp | 10 +- .../reorder/reorder_kernel_to_yxfb_batched.h | 1 + .../reorder_to_winograd_2x3_kernel.cpp | 8 +- .../reorder/reorder_to_winograd_2x3_kernel.h | 1 + .../reorder/reorder_weights_binary_kernel.cpp | 6 +- .../reorder/reorder_weights_binary_kernel.h | 1 + .../reorder_weights_image_fyx_b_kernel.cpp | 6 +- .../reorder_weights_image_fyx_b_kernel.h | 1 + ...rder_weights_image_winograd_6x3_kernel.cpp | 8 +- ...eorder_weights_image_winograd_6x3_kernel.h | 1 + .../reorder/reorder_weights_kernel.cpp | 8 +- .../reorder/reorder_weights_kernel.h | 3 +- .../reorder/reorder_weights_opt.cpp | 6 +- .../reorder/reorder_weights_opt.h | 1 + .../reorder_weights_winograd_2x3_kernel.cpp | 8 +- .../reorder_weights_winograd_2x3_kernel.h | 1 + .../reorder_weights_winograd_6x3_kernel.cpp | 8 +- .../reorder_weights_winograd_6x3_kernel.h | 1 + .../reorg_yolo/reorg_yolo_kernel_ref.cpp | 6 +- .../reorg_yolo/reorg_yolo_kernel_ref.h | 1 + .../resample/resample_kernel_base.cpp | 4 - .../resample/resample_kernel_opt.cpp | 7 +- .../resample/resample_kernel_opt.h | 2 + .../resample/resample_kernel_ref.cpp | 6 +- .../resample/resample_kernel_ref.h | 2 + .../reshape/reshape_kernel_ref.cpp | 6 +- .../reshape/reshape_kernel_ref.h | 1 + .../reverse_sequence_kernel_ref.cpp | 6 +- .../reverse_sequence_kernel_ref.h | 1 + .../roi_pooling/roi_pooling_kernel_base.cpp | 5 +- .../roi_pooling/roi_pooling_kernel_base.h | 2 +- .../roi_pooling/roi_pooling_kernel_ps_ref.cpp | 6 +- .../roi_pooling/roi_pooling_kernel_ps_ref.h | 1 + .../roi_pooling/roi_pooling_kernel_ref.cpp | 6 +- .../roi_pooling/roi_pooling_kernel_ref.h | 1 + .../scatter_update_kernel_ref.cpp | 6 +- .../scatter_update_kernel_ref.h | 1 + .../select/select_kernel_base.cpp | 2 - .../select/select_kernel_ref.cpp | 4 + .../actual_kernels/select/select_kernel_ref.h | 1 + .../shuffle_channels_kernel_ref.cpp | 6 +- .../shuffle_channels_kernel_ref.h | 1 + .../softmax/softmax_kernel_base.cpp | 2 - .../softmax/softmax_kernel_bf.cpp | 5 +- .../softmax/softmax_kernel_bf.h | 1 + .../softmax/softmax_kernel_fb.cpp | 5 +- .../softmax/softmax_kernel_fb.h | 1 + .../softmax_kernel_items_class_optimized.cpp | 51 +- .../softmax_kernel_items_class_optimized.h | 1 + .../softmax/softmax_kernel_ref.cpp | 6 +- .../softmax/softmax_kernel_ref.h | 1 + .../space_to_batch_kernel_base.cpp | 4 +- .../space_to_batch_kernel_base.h | 2 +- .../space_to_batch_kernel_ref.cpp | 6 +- 
.../space_to_batch_kernel_ref.h | 1 +
.../space_to_depth_kernel_ref.cpp | 6 +-
.../space_to_depth_kernel_ref.h | 1 +
.../strided_slice_kernel_ref.cpp | 6 +-
.../strided_slice/strided_slice_kernel_ref.h | 1 +
.../actual_kernels/tile/tile_kernel_ref.cpp | 6 +-
.../actual_kernels/tile/tile_kernel_ref.h | 1 +
.../clDNN/kernel_selector/core/auto_tuner.cpp | 9 +-
.../cl_kernels/convolution_gpu_bfyx_f16.cl | 107 +-
.../convolution_gpu_bfyx_f16_1x1.cl | 189 +-
.../core/cl_kernels/eltwise_b_fs_yx_fsv16.cl | 6 +-
.../cl_kernels/fused_conv_eltwise_gpu_imad.cl | 2 +-
.../cl_kernels/lstm_dynamic_timeloop_ref.cl | 4 +-
.../core/cl_kernels/quantize_gpu_ref.cl | 30 +-
.../quantize_gpu_scale_shift_opt.cl | 19 +-
.../core/cl_kernels/resample_opt.cl | 65 +-
.../core/cl_kernels/resample_ref.cl | 30 +
.../clDNN/kernel_selector/core/kernel_base.h | 9 +-
.../kernel_selector/core/kernel_selector.cpp | 48 +-
.../core/kernel_selector_common.h | 3 +-
.../core/kernel_selector_params.h | 11 +
.../thirdparty/clDNN/src/convolution.cpp | 19 +-
.../thirdparty/clDNN/src/deconvolution.cpp | 17 +-
.../clDNN/src/gpu/convolution_gpu.cpp | 4 +-
.../clDNN/src/gpu/deconvolution_gpu.cpp | 3 +-
.../thirdparty/clDNN/src/gpu/device_info.cpp | 5 +-
.../thirdparty/clDNN/src/gpu/device_info.h | 4 +-
.../clDNN/src/gpu/kernels_cache.cpp | 4 +-
.../thirdparty/clDNN/src/gpu/memory_gpu.cpp | 4 +-
.../clDNN/src/gpu/ocl_queue_wrapper.cpp | 26 -
.../thirdparty/clDNN/src/gpu/ocl_toolkit.cpp | 26 -
.../thirdparty/clDNN/src/gpu/quantize_gpu.cpp | 6 +
.../graph_optimizer/add_required_reorders.cpp | 1 -
.../graph_optimizer/pre_replace_deconv.cpp | 5 +
.../prepare_primitive_fusing.cpp | 18 +-
.../graph_optimizer/prepare_quantization.cpp | 31 +-
.../src/include/fused_conv_eltwise_inst.h | 2 +-
.../src/include/kernel_selector_helper.h | 12 +-
.../clDNN/src/include/memory_impl.h | 2 +-
.../clDNN/src/include/program_node.h | 2 +-
.../clDNN/src/include/to_string_utils.h | 2 +
.../clDNN/src/kernel_selector_helper.cpp | 26 +-
.../thirdparty/clDNN/src/layout_optimizer.cpp | 3 +-
.../thirdparty/clDNN/src/primitive_inst.cpp | 6 +-
.../thirdparty/clDNN/src/program.cpp | 3 +-
.../thirdparty/clDNN/src/resample.cpp | 2 +
.../thirdparty/clDNN/tests/CMakeLists.txt | 1 -
.../tests/test_cases/convolution_gpu_test.cpp | 23 +-
.../test_cases/depth_concatenate_gpu_test.cpp | 12 +-
.../tests/test_cases/eltwise_gpu_test.cpp | 88 +-
.../tests/test_cases/fusings_gpu_test.cpp | 175 +-
.../clDNN/tests/test_cases/lstm_gpu_test.cpp | 4 +-
.../tests/test_cases/reduce_gpu_test.cpp | 12 +-
.../tests/test_cases/reorder_gpu_test.cpp | 4 +-
.../tests/test_cases/reshape_gpu_test.cpp | 4 +-
.../tests/test_cases/softmax_gpu_test.cpp | 1 -
.../clDNN/tests/test_utils/network_test.h | 1 -
.../clDNN/tests_core_internal/CMakeLists.txt | 1 -
.../gapi/include/opencv2/gapi/own/cvdefs.hpp | 6 +-
inference-engine/thirdparty/mkl-dnn | 2 +-
inference-engine/thirdparty/mkldnn.cmake | 147 --
.../thirdparty/movidius/mvnc/src/mvnc_data.c | 18 +-
.../movidius/mvnc/src/watchdog/watchdog.cpp | 5 +-
.../tools/vpu/vpu_compile/README.md | 11 +-
model-optimizer/README.md | 147 +-
model-optimizer/automation/package_BOM.txt | 17 +-
.../back/SpecialNodesFinalization.py | 8 +-
.../extensions/front/caffe/batchnorm_ext.py | 53 +
model-optimizer/extensions/front/caffe/bn.py | 5 +-
.../front/caffe/bn_ext.py} | 19 +-
.../extensions/front/caffe/bn_test.py | 7 +-
.../extensions/front/caffe/concat_ext.py | 32 +
.../front/caffe/crop_ext.py} | 2 +-
.../front/caffe/crop_ext_test.py} | 4 +-
.../front/caffe/dropout_ext.py} | 23 +-
.../extensions/front/caffe/pooling_ext.py | 8 +-
.../front/caffe/pooling_ext_test.py | 16 +-
.../extensions/front/caffe/roipooling_ext.py | 35 +
.../extensions/front/caffe/scale_ext.py | 55 +
.../extensions/front/mxnet/pooling_ext.py | 4 +-
.../front/mxnet/pooling_ext_test.py | 4 +-
.../extensions/front/mxnet/take_ext.py | 33 +
.../front/onnx/mask_rcnn_conversion.py | 12 +-
.../person_detection_crossroad_conversion.py | 5 +-
.../extensions/front/onnx/pooling_ext.py | 6 +-
.../front/onnx/roifeatureextractor_ext.py | 4 +-
.../front/standalone_const_eraser.py | 5 +-
.../front/tf/KerasRNNTransformation.py | 268 +++
.../extensions/front/tf/WhileNormalize.py | 53 +
.../extensions/front/tf/activation_ext.py | 12 +-
.../extensions/front/tf/pooling_ext.py | 4 +-
.../extensions/front/tf/pooling_ext_test.py | 4 +-
.../extensions/front/tf/while_ext.py | 207 ++
model-optimizer/extensions/load/tf/loader.py | 3 +-
model-optimizer/extensions/ops/BN.py | 35 +
.../extensions/ops/DetectionOutput.py | 43 +-
model-optimizer/extensions/ops/GRUCell.py | 9 +-
model-optimizer/extensions/ops/MatMul.py | 7 +-
model-optimizer/extensions/ops/ReduceOps.py | 5 +-
.../extensions/ops/adaptive_avg_pooling.py | 4 +-
model-optimizer/extensions/ops/bucketize.py | 7 +-
.../extensions/ops/ctc_greedy_decoder.py | 9 +-
model-optimizer/extensions/ops/ctc_loss.py | 11 +-
model-optimizer/extensions/ops/cumsum.py | 6 +-
.../extensions/ops/detectionoutput_onnx.py | 12 +-
model-optimizer/extensions/ops/interpolate.py | 11 +-
model-optimizer/extensions/ops/loop.py | 60 +-
model-optimizer/extensions/ops/mvn.py | 9 +-
.../extensions/ops/non_max_suppression.py | 9 +-
model-optimizer/extensions/ops/priorbox.py | 14 +-
.../extensions/ops/priorbox_clustered.py | 9 +-
.../extensions/ops/priorgridgenerator_onnx.py | 4 +-
model-optimizer/extensions/ops/proposal.py | 14 +-
.../extensions/ops/psroipooling.py | 4 +-
model-optimizer/extensions/ops/regionyolo.py | 6 +-
.../ops/roifeatureextractor_onnx.py | 10 +-
model-optimizer/mo/front/caffe/extractor.py | 23 +-
.../mo/front/caffe/extractors/batchnorm.py | 63 -
.../front/caffe/extractors/batchnorm_test.py | 147 --
.../mo/front/caffe/extractors/concat_test.py | 37 -
.../mo/front/caffe/extractors/scale.py | 47 -
.../mo/front/caffe/extractors/scale_test.py | 144 --
model-optimizer/mo/front/extractor.py | 16 +-
model-optimizer/mo/front/extractor_test.py | 19 +-
model-optimizer/mo/front/tf/loader.py | 13 +-
model-optimizer/mo/graph/graph.py | 4 +-
.../mo/ops/deformable_convolution.py | 4 +-
model-optimizer/mo/ops/op.py | 7 +-
model-optimizer/mo/ops/pooling.py | 6 +-
model-optimizer/mo/ops/pooling_test.py | 12 +-
model-optimizer/mo/ops/reshape.py | 5 +-
model-optimizer/mo/ops/roipooling.py | 3 +-
.../mo/utils/class_registration.py | 8 +-
.../mo/utils/get_ov_update_message.py | 4 +-
.../mo/utils/ir_engine/ir_engine.py | 4 +-
.../mo/utils/ir_reader/layer_to_class.py | 13 +-
ngraph/cmake/external_onnx.cmake | 11 +-
ngraph/cmake/external_protobuf.cmake | 15 +-
.../include/ngraph/builder/make_constant.hpp | 13 +
.../builder/src/builder/make_constant.cpp | 106 +
ngraph/core/include/ngraph/chrome_trace.hpp | 143 --
.../core/include/ngraph/descriptor/output.hpp | 2 +
.../core/include/ngraph/descriptor/tensor.hpp | 7 +
ngraph/core/include/ngraph/node.hpp | 4 +
ngraph/core/include/ngraph/node_output.hpp | 1 +
.../core/include/ngraph/op/convert_like.hpp | 9 +-
ngraph/core/include/ngraph/op/equal.hpp | 1 +
...xperimental_detectron_detection_output.hpp | 88 +
...erimental_detectron_generate_proposals.hpp | 79 +
...imental_detectron_prior_grid_generator.hpp | 82 + .../op/experimental_detectron_roi_feature.hpp | 76 + .../op/experimental_detectron_topkrois.hpp | 61 + .../core/include/ngraph/op/fake_quantize.hpp | 3 +- ngraph/core/include/ngraph/op/greater_eq.hpp | 1 + ngraph/core/include/ngraph/op/loop.hpp | 15 + ngraph/core/include/ngraph/op/lstm_cell.hpp | 2 - .../include/ngraph/op/non_max_suppression.hpp | 1 + ngraph/core/include/ngraph/op/quantize.hpp | 120 -- .../core/include/ngraph/op/strided_slice.hpp | 4 +- .../op/util/binary_elementwise_logical.hpp | 2 +- ngraph/core/include/ngraph/ops.hpp | 6 +- .../core/include/ngraph/opsets/opset6_tbl.hpp | 7 +- .../include/ngraph/pass/graph_rewrite.hpp | 49 +- .../core/include/ngraph/pass/low_latency.hpp | 73 +- .../core/include/ngraph/pass/pass_config.hpp | 8 + ngraph/core/include/ngraph/provenance.hpp | 2 - .../include/ngraph/runtime/shared_buffer.hpp | 45 +- ngraph/core/include/ngraph/runtime/tensor.hpp | 1 + .../ngraph/runtime/reference/bucketize.hpp | 65 + .../ngraph/runtime/reference/convert.hpp | 17 +- .../reference/ctc_greedy_decoder_seq_len.hpp | 69 + .../reference/embedding_segments_sum.hpp | 2 - .../reference/extract_image_patches.hpp | 4 - .../runtime/reference/gather_elements.hpp | 8 +- .../ngraph/runtime/reference/interpolate.hpp | 3 - .../include/ngraph/runtime/reference/lrn.hpp | 1 - .../ngraph/runtime/reference/max_pool.hpp | 94 - .../include/ngraph/runtime/reference/mvn.hpp | 63 +- .../ngraph/runtime/reference/region_yolo.hpp | 1 - .../src/runtime/reference/interpolate.cpp | 1 - .../runtime/reference/non_max_suppression.cpp | 2 - ngraph/core/src/chrome_trace.cpp | 241 --- ngraph/core/src/descriptor/tensor.cpp | 35 +- ngraph/core/src/graph_util.cpp | 2 + ngraph/core/src/itt.hpp | 7 + ngraph/core/src/node.cpp | 20 +- ngraph/core/src/op/binary_convolution.cpp | 1 - ngraph/core/src/op/bucketize.cpp | 25 +- ngraph/core/src/op/clamp.cpp | 99 +- .../src/op/ctc_greedy_decoder_seq_len.cpp | 1 - ngraph/core/src/op/deformable_convolution.cpp | 1 - ngraph/core/src/op/depth_to_space.cpp | 1 - ngraph/core/src/op/equal.cpp | 6 + ...xperimental_detectron_detection_output.cpp | 152 ++ ...erimental_detectron_generate_proposals.cpp | 132 ++ ...imental_detectron_prior_grid_generator.cpp | 142 ++ .../op/experimental_detectron_roi_feature.cpp | 125 ++ .../op/experimental_detectron_topkrois.cpp | 88 + ngraph/core/src/op/fake_quantize.cpp | 88 +- ngraph/core/src/op/gather.cpp | 1 - ngraph/core/src/op/greater_eq.cpp | 6 + ngraph/core/src/op/interpolate.cpp | 2 +- ngraph/core/src/op/loop.cpp | 56 +- ngraph/core/src/op/lstm_cell.cpp | 15 + ngraph/core/src/op/lstm_sequence.cpp | 2 + ngraph/core/src/op/max_pool.cpp | 48 +- ngraph/core/src/op/non_max_suppression.cpp | 13 +- ngraph/core/src/op/one_hot.cpp | 1 - ngraph/core/src/op/quantize.cpp | 168 -- ngraph/core/src/op/reshape.cpp | 1 - ngraph/core/src/op/reverse.cpp | 1 - ngraph/core/src/op/shuffle_channels.cpp | 1 - ngraph/core/src/op/space_to_batch.cpp | 1 - ngraph/core/src/op/space_to_depth.cpp | 1 - ngraph/core/src/op/strided_slice.cpp | 8 +- ngraph/core/src/op/tensor_iterator.cpp | 2 +- ngraph/core/src/op/topk.cpp | 7 + ngraph/core/src/op/util/attr_types.cpp | 36 +- ngraph/core/src/pass/constant_folding.cpp | 5 +- ngraph/core/src/pass/graph_rewrite.cpp | 6 +- ngraph/core/src/provenance.cpp | 2 + ngraph/core/src/runtime/host_tensor.cpp | 7 +- ngraph/core/src/runtime/tensor.cpp | 2 + ngraph/frontend/onnx_import/CMakeLists.txt | 20 +- .../include/onnx_import/editor/editor.hpp | 18 +- 
.../onnx_import/include/onnx_import/onnx.hpp | 4 +- .../include/onnx_import/onnx_utils.hpp | 2 +- .../onnx_import/src/core/attribute.cpp | 6 +- .../onnx_import => src}/core/attribute.hpp | 2 +- .../frontend/onnx_import/src/core/graph.cpp | 8 +- .../onnx_import => src}/core/graph.hpp | 8 +- .../onnx_import/src/core/graph_cache.cpp | 2 +- .../onnx_import => src}/core/graph_cache.hpp | 0 .../frontend/onnx_import/src/core/model.cpp | 4 +- .../onnx_import => src}/core/model.hpp | 0 ngraph/frontend/onnx_import/src/core/node.cpp | 8 +- .../onnx_import/src/core/null_node.cpp | 2 +- .../onnx_import => src}/core/null_node.hpp | 0 .../onnx_import => src}/core/tensor.hpp | 2 +- .../onnx_import/src/core/transform.cpp | 6 +- .../onnx_import => src}/core/transform.hpp | 0 .../onnx_import => src}/core/value_info.hpp | 6 +- .../onnx_import => src}/default_opset.hpp | 0 .../onnx_import/src/editor/editor.cpp | 104 +- .../frontend/onnx_import/src/exceptions.cpp | 2 +- .../onnx_import => src}/exceptions.hpp | 2 +- ngraph/frontend/onnx_import/src/onnx.cpp | 11 +- .../frontend/onnx_import/src/onnx_utils.cpp | 2 +- .../{include/onnx_import => src}/op/abs.hpp | 2 +- .../{include/onnx_import => src}/op/acos.hpp | 2 +- .../{include/onnx_import => src}/op/acosh.hpp | 2 +- ngraph/frontend/onnx_import/src/op/add.cpp | 4 +- .../{include/onnx_import => src}/op/add.hpp | 0 .../{include/onnx_import => src}/op/and.hpp | 2 +- ngraph/frontend/onnx_import/src/op/argmax.cpp | 20 +- .../onnx_import => src}/op/argmax.hpp | 12 + ngraph/frontend/onnx_import/src/op/argmin.cpp | 20 +- .../onnx_import => src}/op/argmin.hpp | 12 + .../{include/onnx_import => src}/op/asin.hpp | 2 +- .../{include/onnx_import => src}/op/asinh.hpp | 2 +- .../{include/onnx_import => src}/op/atan.hpp | 2 +- .../{include/onnx_import => src}/op/atanh.hpp | 2 +- .../onnx_import/src/op/average_pool.cpp | 4 +- .../onnx_import => src}/op/average_pool.hpp | 0 .../onnx_import/src/op/batch_norm.cpp | 8 +- .../onnx_import => src}/op/batch_norm.hpp | 0 ngraph/frontend/onnx_import/src/op/cast.cpp | 6 +- .../{include/onnx_import => src}/op/cast.hpp | 0 .../{include/onnx_import => src}/op/ceil.hpp | 2 +- ngraph/frontend/onnx_import/src/op/clip.cpp | 10 +- .../{include/onnx_import => src}/op/clip.hpp | 0 ngraph/frontend/onnx_import/src/op/concat.cpp | 6 +- .../onnx_import => src}/op/concat.hpp | 0 .../frontend/onnx_import/src/op/constant.cpp | 6 +- .../onnx_import => src}/op/constant.hpp | 0 .../onnx_import/src/op/constant_of_shape.cpp | 8 +- .../op/constant_of_shape.hpp | 0 ngraph/frontend/onnx_import/src/op/conv.cpp | 10 +- .../{include/onnx_import => src}/op/conv.hpp | 0 .../onnx_import/src/op/conv_integer.cpp | 6 +- .../onnx_import => src}/op/conv_integer.hpp | 0 .../onnx_import/src/op/conv_transpose.cpp | 8 +- .../onnx_import => src}/op/conv_transpose.hpp | 0 ngraph/frontend/onnx_import/src/op/cos.cpp | 4 +- .../{include/onnx_import => src}/op/cos.hpp | 0 ngraph/frontend/onnx_import/src/op/cosh.cpp | 4 +- .../{include/onnx_import => src}/op/cosh.hpp | 0 .../frontend/onnx_import/src/op/cum_sum.cpp | 4 +- .../onnx_import => src}/op/cum_sum.hpp | 0 .../onnx_import/src/op/depth_to_space.cpp | 4 +- .../onnx_import => src}/op/depth_to_space.hpp | 0 .../onnx_import/src/op/dequantize_linear.cpp | 8 +- .../op/dequantize_linear.hpp | 0 .../{include/onnx_import => src}/op/div.hpp | 2 +- .../onnx_import => src}/op/dropout.hpp | 2 +- ngraph/frontend/onnx_import/src/op/elu.cpp | 4 +- .../{include/onnx_import => src}/op/elu.hpp | 0 .../{include/onnx_import => src}/op/equal.hpp | 2 
+- .../{include/onnx_import => src}/op/erf.hpp | 2 +- .../{include/onnx_import => src}/op/exp.hpp | 2 +- ngraph/frontend/onnx_import/src/op/expand.cpp | 4 +- .../onnx_import => src}/op/expand.hpp | 0 .../frontend/onnx_import/src/op/eye_like.cpp | 6 +- .../onnx_import => src}/op/eye_like.hpp | 0 .../frontend/onnx_import/src/op/flatten.cpp | 4 +- .../onnx_import => src}/op/flatten.hpp | 0 .../{include/onnx_import => src}/op/floor.hpp | 2 +- .../onnx_import => src}/op/gather.hpp | 2 +- .../onnx_import/src/op/gather_elements.hpp | 41 + .../frontend/onnx_import/src/op/gather_nd.cpp | 4 +- .../onnx_import => src}/op/gather_nd.hpp | 0 ngraph/frontend/onnx_import/src/op/gemm.cpp | 4 +- .../{include/onnx_import => src}/op/gemm.hpp | 0 .../src/op/global_average_pool.cpp | 4 +- .../op/global_average_pool.hpp | 0 .../onnx_import/src/op/global_max_pool.cpp | 4 +- .../op/global_max_pool.hpp | 0 .../onnx_import => src}/op/greater.hpp | 2 +- ngraph/frontend/onnx_import/src/op/gru.cpp | 8 +- .../{include/onnx_import => src}/op/gru.hpp | 0 .../onnx_import/src/op/hard_sigmoid.cpp | 4 +- .../onnx_import => src}/op/hard_sigmoid.hpp | 0 .../frontend/onnx_import/src/op/hardmax.cpp | 8 +- .../onnx_import => src}/op/hardmax.hpp | 0 .../onnx_import => src}/op/identity.hpp | 2 +- .../onnx_import/src/op/image_scaler.cpp | 4 +- .../onnx_import => src}/op/image_scaler.hpp | 0 .../onnx_import/src/op/instance_norm.cpp | 8 +- .../onnx_import => src}/op/instance_norm.hpp | 0 .../onnx_import/src/op/leaky_relu.cpp | 6 +- .../onnx_import => src}/op/leaky_relu.hpp | 0 .../{include/onnx_import => src}/op/less.hpp | 2 +- ngraph/frontend/onnx_import/src/op/log.cpp | 4 +- .../{include/onnx_import => src}/op/log.hpp | 0 .../onnx_import/src/op/log_softmax.cpp | 4 +- .../onnx_import => src}/op/log_softmax.hpp | 0 ngraph/frontend/onnx_import/src/op/loop.cpp | 13 +- .../{include/onnx_import => src}/op/loop.hpp | 0 .../frontend/onnx_import/src/op/lp_norm.cpp | 6 +- .../onnx_import => src}/op/lp_norm.hpp | 0 .../frontend/onnx_import/src/op/lp_pool.cpp | 8 +- .../onnx_import => src}/op/lp_pool.hpp | 0 ngraph/frontend/onnx_import/src/op/lrn.cpp | 4 +- .../{include/onnx_import => src}/op/lrn.hpp | 0 ngraph/frontend/onnx_import/src/op/lstm.cpp | 12 +- .../{include/onnx_import => src}/op/lstm.hpp | 0 .../onnx_import => src}/op/matmul.hpp | 2 +- .../onnx_import/src/op/matmul_integer.cpp | 2 +- .../onnx_import => src}/op/matmul_integer.hpp | 0 .../{include/onnx_import => src}/op/max.hpp | 4 +- .../frontend/onnx_import/src/op/max_pool.cpp | 6 +- .../onnx_import => src}/op/max_pool.hpp | 0 ngraph/frontend/onnx_import/src/op/mean.cpp | 6 +- .../{include/onnx_import => src}/op/mean.hpp | 0 .../src/op/mean_variance_normalization.cpp | 13 +- .../op/mean_variance_normalization.hpp | 0 .../{include/onnx_import => src}/op/min.hpp | 4 +- ngraph/frontend/onnx_import/src/op/mod.cpp | 6 +- .../{include/onnx_import => src}/op/mod.hpp | 0 .../{include/onnx_import => src}/op/mul.hpp | 2 +- .../{include/onnx_import => src}/op/neg.hpp | 0 .../src/op/non_max_suppression.cpp | 8 +- .../op/non_max_suppression.hpp | 0 .../frontend/onnx_import/src/op/non_zero.cpp | 4 +- .../onnx_import => src}/op/non_zero.hpp | 0 .../{include/onnx_import => src}/op/not.hpp | 2 +- ngraph/frontend/onnx_import/src/op/onehot.cpp | 6 +- .../onnx_import => src}/op/onehot.hpp | 0 .../{include/onnx_import => src}/op/or.hpp | 2 +- .../org.openvinotoolkit/detection_output.cpp | 4 +- .../org.openvinotoolkit/detection_output.hpp | 0 .../op/org.openvinotoolkit/fake_quantize.cpp | 4 +- 
.../op/org.openvinotoolkit/fake_quantize.hpp | 0 .../src/op/org.openvinotoolkit/group_norm.cpp | 8 +- .../op/org.openvinotoolkit/group_norm.hpp | 0 .../src/op/org.openvinotoolkit/normalize.cpp | 6 +- .../op/org.openvinotoolkit/normalize.hpp | 0 .../src/op/org.openvinotoolkit/prior_box.cpp | 4 +- .../op/org.openvinotoolkit/prior_box.hpp | 0 .../src/op/org.openvinotoolkit/swish.cpp | 10 +- .../op/org.openvinotoolkit/swish.hpp | 0 ngraph/frontend/onnx_import/src/op/pad.cpp | 8 +- .../{include/onnx_import => src}/op/pad.hpp | 0 ngraph/frontend/onnx_import/src/op/pow.cpp | 4 +- .../{include/onnx_import => src}/op/pow.hpp | 0 ngraph/frontend/onnx_import/src/op/prelu.cpp | 4 +- .../{include/onnx_import => src}/op/prelu.hpp | 0 .../onnx_import/src/op/qlinear_matmul.cpp | 2 +- .../onnx_import => src}/op/qlinear_matmul.hpp | 0 .../onnx_import/src/op/quant_conv.cpp | 6 +- .../onnx_import => src}/op/quant_conv.hpp | 0 .../onnx_import/src/op/quantize_linear.cpp | 8 +- .../op/quantize_linear.hpp | 0 ngraph/frontend/onnx_import/src/op/range.cpp | 4 +- .../{include/onnx_import => src}/op/range.hpp | 0 .../onnx_import/src/op/reciprocal.cpp | 4 +- .../onnx_import => src}/op/reciprocal.hpp | 0 ngraph/frontend/onnx_import/src/op/reduce.cpp | 111 +- .../onnx_import => src}/op/reduce.hpp | 16 + .../{include/onnx_import => src}/op/relu.hpp | 2 +- .../frontend/onnx_import/src/op/reshape.cpp | 8 +- .../onnx_import => src}/op/reshape.hpp | 0 ngraph/frontend/onnx_import/src/op/resize.cpp | 8 +- .../onnx_import => src}/op/resize.hpp | 0 .../onnx_import/src/op/reverse_sequence.cpp | 4 +- .../op/reverse_sequence.hpp | 0 ngraph/frontend/onnx_import/src/op/rnn.cpp | 6 +- .../{include/onnx_import => src}/op/rnn.hpp | 0 .../frontend/onnx_import/src/op/roi_align.cpp | 2 +- .../onnx_import => src}/op/roi_align.hpp | 0 ngraph/frontend/onnx_import/src/op/round.cpp | 4 +- .../{include/onnx_import => src}/op/round.hpp | 0 .../onnx_import/src/op/scatter_elements.cpp | 4 +- .../op/scatter_elements.hpp | 0 .../onnx_import/src/op/scatter_nd.cpp | 4 +- .../onnx_import => src}/op/scatter_nd.hpp | 0 ngraph/frontend/onnx_import/src/op/selu.cpp | 4 +- .../{include/onnx_import => src}/op/selu.hpp | 0 ngraph/frontend/onnx_import/src/op/shape.cpp | 4 +- .../{include/onnx_import => src}/op/shape.hpp | 0 ngraph/frontend/onnx_import/src/op/shrink.cpp | 6 +- .../onnx_import => src}/op/shrink.hpp | 0 .../onnx_import => src}/op/sigmoid.hpp | 2 +- .../{include/onnx_import => src}/op/sign.hpp | 2 +- .../{include/onnx_import => src}/op/sin.hpp | 2 +- .../{include/onnx_import => src}/op/sinh.hpp | 2 +- ngraph/frontend/onnx_import/src/op/size.cpp | 4 +- .../{include/onnx_import => src}/op/size.hpp | 0 ngraph/frontend/onnx_import/src/op/slice.cpp | 8 +- .../{include/onnx_import => src}/op/slice.hpp | 0 .../frontend/onnx_import/src/op/softmax.cpp | 4 +- .../onnx_import => src}/op/softmax.hpp | 0 .../frontend/onnx_import/src/op/softplus.cpp | 4 +- .../onnx_import => src}/op/softplus.hpp | 0 .../frontend/onnx_import/src/op/softsign.cpp | 4 +- .../onnx_import => src}/op/softsign.hpp | 0 .../onnx_import/src/op/space_to_depth.cpp | 4 +- .../onnx_import => src}/op/space_to_depth.hpp | 0 ngraph/frontend/onnx_import/src/op/split.cpp | 4 +- .../{include/onnx_import => src}/op/split.hpp | 0 .../{include/onnx_import => src}/op/sqrt.hpp | 2 +- .../frontend/onnx_import/src/op/squeeze.cpp | 6 +- .../onnx_import => src}/op/squeeze.hpp | 0 .../{include/onnx_import => src}/op/sub.hpp | 2 +- .../{include/onnx_import => src}/op/sum.hpp | 4 +- .../{include/onnx_import => 
src}/op/tan.hpp | 2 +- .../{include/onnx_import => src}/op/tanh.hpp | 2 +- .../onnx_import/src/op/thresholded_relu.cpp | 4 +- .../op/thresholded_relu.hpp | 0 ngraph/frontend/onnx_import/src/op/tile.cpp | 4 +- .../{include/onnx_import => src}/op/tile.hpp | 0 ngraph/frontend/onnx_import/src/op/topk.cpp | 6 +- .../{include/onnx_import => src}/op/topk.hpp | 0 .../frontend/onnx_import/src/op/transpose.cpp | 2 +- .../onnx_import => src}/op/transpose.hpp | 0 .../frontend/onnx_import/src/op/unsqueeze.cpp | 6 +- .../onnx_import => src}/op/unsqueeze.hpp | 0 .../frontend/onnx_import/src/op/upsample.cpp | 6 +- .../onnx_import => src}/op/upsample.hpp | 0 .../{include/onnx_import => src}/op/where.hpp | 2 +- .../{include/onnx_import => src}/op/xor.hpp | 2 +- .../frontend/onnx_import/src/ops_bridge.cpp | 259 +-- .../onnx_import => src}/ops_bridge.hpp | 0 .../src/utils/arg_min_max_factory.cpp | 4 +- .../utils/arg_min_max_factory.hpp | 2 +- .../frontend/onnx_import/src/utils/common.cpp | 4 +- .../onnx_import => src}/utils/common.hpp | 2 +- .../onnx_import/src/utils/convpool.cpp | 4 +- .../onnx_import => src}/utils/convpool.hpp | 0 .../frontend/onnx_import/src/utils/parser.cpp | 2 +- .../onnx_import => src}/utils/parser.hpp | 0 .../onnx_import/src/utils/pooling_factory.cpp | 8 +- .../utils/pooling_factory.hpp | 0 .../onnx_import/src/utils/provenance_tag.cpp | 2 +- .../utils/provenance_tag.hpp | 0 .../onnx_import/src/utils/recurrent.cpp | 6 +- .../onnx_import => src}/utils/recurrent.hpp | 0 .../onnx_import/src/utils/reshape.cpp | 4 +- .../onnx_import => src}/utils/reshape.hpp | 0 .../src/utils/tensor_external_data.cpp | 4 +- .../utils/tensor_external_data.hpp | 0 .../onnx_import => src}/utils/variadic.hpp | 0 ngraph/python/CMakeLists.txt | 21 +- ngraph/python/requirements_test.txt | 4 +- ngraph/python/src/ngraph/__init__.py | 2 +- ngraph/python/src/ngraph/exceptions.py | 8 +- ngraph/python/src/ngraph/helpers.py | 4 +- ngraph/python/src/ngraph/impl/op/__init__.py | 2 +- ngraph/python/src/ngraph/opset1/ops.py | 368 ++-- ngraph/python/src/ngraph/opset2/ops.py | 17 +- ngraph/python/src/ngraph/opset3/ops.py | 76 +- ngraph/python/src/ngraph/opset4/ops.py | 33 +- ngraph/python/src/ngraph/opset_utils.py | 2 +- ngraph/python/src/ngraph/utils/__init__.py | 2 +- .../python/src/ngraph/utils/broadcasting.py | 2 +- ngraph/python/src/ngraph/utils/decorators.py | 6 +- .../src/ngraph/utils/input_validation.py | 14 +- .../python/src/ngraph/utils/node_factory.py | 16 +- ngraph/python/src/ngraph/utils/reduction.py | 2 +- .../src/ngraph/utils/tensor_iterator_types.py | 32 +- ngraph/python/src/ngraph/utils/types.py | 18 +- ngraph/python/tests/__init__.py | 28 +- ngraph/python/tests/runtime.py | 37 +- ngraph/python/tests/test_ngraph/test_basic.py | 8 +- .../test_ngraph/test_sequence_processing.py | 4 +- .../tests/test_onnx/model_zoo_preprocess.sh | 2 +- ngraph/python/tests/test_onnx/test_backend.py | 96 +- .../tests/test_onnx/test_ops_nonlinear.py | 4 +- .../tests/test_onnx/test_ops_reduction.py | 94 +- .../python/tests/test_onnx/test_zoo_models.py | 9 +- ngraph/test/CMakeLists.txt | 19 +- ngraph/test/backend/auto_broadcast.in.cpp | 32 + ngraph/test/backend/bucketize.in.cpp | 71 + ngraph/test/backend/convert_like.in.cpp | 189 ++ .../backend/ctc_greedy_decoder_seq_len.in.cpp | 184 ++ ngraph/test/backend/fused_op.in.cpp | 2 - ngraph/test/backend/group_convolution.in.cpp | 1 - ngraph/test/backend/minimum.in.cpp | 23 + ngraph/test/backend/mvn.in.cpp | 151 ++ .../test/backend/non_max_suppression.in.cpp | 81 + 
ngraph/test/backend/region_yolo.in.cpp | 2 - ngraph/test/backend/roi_pooling.in.cpp | 1 - ngraph/test/graph_rewrite.cpp | 2 + ngraph/test/models/onnx/add_abc_3d.prototxt | 74 + .../onnx/argmax_select_last_index.prototxt | 60 + .../onnx/argmin_select_last_index.prototxt | 60 + .../onnx/gather_elements_float_1D.prototxt | 58 + .../gather_elements_float_3D_axis_2.prototxt | 76 + ...ther_elements_float_negative_axis.prototxt | 67 + .../gather_elements_int32_axis_0.prototxt | 67 + .../onnx/gather_elements_int8_axis_1.prototxt | 67 + .../shapes__add_two_inputs.prototxt | 54 + .../shapes__dynamic_rank_in_model.prototxt | 55 + ngraph/test/models/onnx/mvn_v6.prototxt | 57 + ...ppression_center_point_box_format.prototxt | 115 + .../nonmaxsuppression_single_box.prototxt | 110 + ...reduce_sum_13_axes_as_0_dim_input.prototxt | 75 + .../reduce_sum_13_axes_as_constant.prototxt | 67 + ..._13_axes_as_constant_keepdims_off.prototxt | 72 + ...m_13_axes_as_constant_single_axis.prototxt | 67 + .../onnx/reduce_sum_13_axes_as_input.prototxt | 63 + .../onnx/reduce_sum_13_axes_empty.prototxt | 49 + ..._13_axes_empty_dynamic_rank_input.prototxt | 35 + ...educe_sum_13_axes_empty_with_noop.prototxt | 63 + ...ce_sum_13_axes_empty_without_noop.prototxt | 63 + .../onnx/reduce_sum_13_input_dynamic.prototxt | 89 + .../reduce_sum_dynamic_rank_input.prototxt | 34 + .../models/onnx/test_clip_inbounds.prototxt | 41 + ngraph/test/onnx/onnx_editor.cpp | 166 +- ngraph/test/onnx/onnx_import.in.cpp | 445 +++- .../onnx/onnx_import_const_folding.in.cpp | 2 +- .../test/onnx/onnx_import_controlflow.in.cpp | 2 +- .../test/onnx/onnx_import_dyn_shapes.in.cpp | 2 +- ngraph/test/onnx/onnx_import_exceptions.cpp | 2 +- .../onnx/onnx_import_external_data.in.cpp | 2 +- .../test/onnx/onnx_import_provenance.in.cpp | 2 +- ngraph/test/onnx/onnx_test_utils.in.cpp | 88 + ngraph/test/op_eval/bucketize.cpp | 48 + ngraph/test/op_is.cpp | 9 - ngraph/test/op_version_tbl.hpp | 1 - ngraph/test/runtime/ie/unit_test.manifest | 57 +- .../runtime/interpreter/evaluates_map.cpp | 540 ++++- .../runtime/interpreter/int_executable.cpp | 6 +- .../runtime/interpreter/opset_int_tbl.hpp | 4 + .../runtime/interpreter/unit_test.manifest | 5 +- ngraph/test/runtime/opset0_tbl.hpp | 1 - .../runtime/pass/fused_op_decomposition.hpp | 5 +- .../pass/implicit_broadcast_elimination.hpp | 2 + ngraph/test/tensor.cpp | 21 + ngraph/test/type_prop/abs.cpp | 64 - ngraph/test/type_prop/bucketize.cpp | 84 +- ...xperimental_detectron_detection_output.cpp | 127 ++ ...erimental_detectron_generate_proposals.cpp | 116 + ...imental_detectron_prior_grid_generator.cpp | 178 ++ ...mental_detectron_roi_feature_extractor.cpp | 171 ++ .../experimental_detectron_topkrois.cpp | 68 + ngraph/test/type_prop/max_pool.cpp | 60 +- ngraph/test/type_prop/quantize.cpp | 806 ------- ngraph/test/type_prop/reduce_mean.cpp | 84 + ngraph/test/type_prop/top_k.cpp | 14 +- ngraph/test/type_prop/unary_ops.cpp | 2 +- ngraph/test/util/engine/ie_engines.cpp | 70 + ngraph/test/util/engine/ie_engines.hpp | 37 +- .../conditional_compilation/CMakeLists.txt | 24 + .../scripts/ccheader.py | 4 +- scripts/demo/demo_benchmark_app.bat | 124 +- scripts/demo/demo_security_barrier_camera.bat | 146 +- .../demo_squeezenet_download_convert_run.bat | 123 +- .../install_openvino_dependencies.sh | 527 +++-- scripts/setupvars/setupvars.bat | 28 +- tests/conditional_compilation/conftest.py | 95 + tests/conditional_compilation/test_collect.py | 40 + tests/conditional_compilation/test_config.yml | 15 + 
tests/conditional_compilation/test_infer.py | 17 + tests/lib/__init__.py | 0 tests/lib/path_utils.py | 22 + tests/lib/proc_utils.py | 43 + ...est_config.yml => desktop_test_config.yml} | 0 tests/time_tests/test_runner/conftest.py | 11 +- 1963 files changed, 42312 insertions(+), 20756 deletions(-) delete mode 100644 docs/HOWTO/img/IE_extensions_flow.png delete mode 100644 docs/HOWTO/img/MEG_generic_flow.png delete mode 100644 docs/HOWTO/img/MO_extensions_flow.png create mode 100644 docs/HOWTO/img/converted_subgraph.png delete mode 100644 docs/HOWTO/img/mo_caffe_priorities.png create mode 100644 docs/HOWTO/img/unsupported_subgraph.png create mode 100644 docs/HOWTO/mo_extensions/front/tf/Complex.py create mode 100644 docs/HOWTO/mo_extensions/front/tf/ComplexAbs.py create mode 100644 docs/HOWTO/mo_extensions/front/tf/FFT_ext.py create mode 100644 docs/HOWTO/mo_extensions/ops/FFT.py create mode 100644 docs/HOWTO/mri_reconstruction_demo.py create mode 100644 docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md delete mode 100644 docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md delete mode 100644 docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_SSD_ObjectDetection_API.md create mode 100644 docs/doxygen/assets/bootstrap.bundle.min.js create mode 100644 docs/doxygen/assets/bootstrap.min.css delete mode 100644 docs/doxygen/assets/menu.js create mode 100644 docs/img/MO_connection_example_1.png create mode 100644 docs/img/MO_conversion_pipeline.png create mode 100644 docs/img/MO_graph_after_extractors.png create mode 100644 docs/img/MO_graph_after_loader.png create mode 100644 docs/img/MO_graph_before_partial_inference.png create mode 100644 docs/img/MO_ports_example_1.png create mode 100644 docs/img/MO_ports_example_2.png create mode 100644 docs/img/MO_transformations_graph.png create mode 100644 docs/template_extension/fft_kernel.cpp create mode 100644 docs/template_extension/fft_kernel.hpp create mode 100644 docs/template_extension/fft_op.cpp create mode 100644 docs/template_extension/fft_op.hpp create mode 100644 inference-engine/ie_bridges/python/src/openvino/offline_transformations/CMakeLists.txt create mode 100644 inference-engine/ie_bridges/python/src/openvino/offline_transformations/__init__.py create mode 100644 inference-engine/ie_bridges/python/src/openvino/offline_transformations/offline_transformations_api.pyx create mode 100644 inference-engine/ie_bridges/python/src/openvino/offline_transformations/offline_transformations_api_impl.cpp create mode 100644 inference-engine/ie_bridges/python/src/openvino/offline_transformations/offline_transformations_api_impl.hpp create mode 100644 inference-engine/ie_bridges/python/src/openvino/offline_transformations/offline_transformations_api_impl_defs.pxd create mode 100644 inference-engine/ie_bridges/python/src/openvino/test_utils/CMakeLists.txt create mode 100644 inference-engine/ie_bridges/python/src/openvino/test_utils/__init__.py create mode 100644 inference-engine/ie_bridges/python/src/openvino/test_utils/test_utils_api.pyx create mode 100644 inference-engine/ie_bridges/python/src/openvino/test_utils/test_utils_api_impl.cpp create mode 100644 inference-engine/ie_bridges/python/src/openvino/test_utils/test_utils_api_impl.hpp create mode 100644 inference-engine/ie_bridges/python/src/openvino/test_utils/test_utils_api_impl_defs.pxd create mode 100644 inference-engine/ie_bridges/python/tests/test_offline_api.py create mode 100644 
inference-engine/ie_bridges/python/tests/test_utils.py create mode 100644 inference-engine/src/gna_plugin/backend/dnn_types.cpp create mode 100644 inference-engine/src/inference_engine/cpp/ie_cnn_network.cpp create mode 100644 inference-engine/src/inference_engine/cpp/ie_variable_state.cpp delete mode 100644 inference-engine/src/inference_engine/shape_infer/ie_detectionoutput_onnx_shape_infer.hpp delete mode 100644 inference-engine/src/inference_engine/shape_infer/ie_priorgridgenerator_onnx_shape_infer.hpp delete mode 100644 inference-engine/src/inference_engine/shape_infer/ie_proposal_onnx_shape_infer.hpp delete mode 100644 inference-engine/src/inference_engine/shape_infer/ie_roifeatureextractor_onnx_shape_infer.hpp delete mode 100644 inference-engine/src/inference_engine/shape_infer/ie_topkrois_onnx_shape_infer.hpp create mode 100644 inference-engine/src/low_precision_transformations/include/low_precision/strided_slice.hpp create mode 100644 inference-engine/src/low_precision_transformations/src/strided_slice.cpp delete mode 100644 inference-engine/src/mkldnn_plugin/mkldnn/cpu_engine.h delete mode 100644 inference-engine/src/mkldnn_plugin/mkldnn/cpu_prim_layer.h delete mode 100644 inference-engine/src/mkldnn_plugin/mkldnn/cpu_prim_tensor.h delete mode 100644 inference-engine/src/mkldnn_plugin/mkldnn/desc_iterator.hpp create mode 100644 inference-engine/src/mkldnn_plugin/mkldnn/ie_mkldnn.cpp create mode 100644 inference-engine/src/mkldnn_plugin/mkldnn/ie_mkldnn.h create mode 100644 inference-engine/src/mkldnn_plugin/nodes/gather_elements.cpp create mode 100644 inference-engine/src/mkldnn_plugin/utils/general_utils.h create mode 100644 inference-engine/src/offline_transformations/CMakeLists.txt create mode 100644 inference-engine/src/offline_transformations/include/moc_transformations.hpp create mode 100644 inference-engine/src/offline_transformations/src/moc_transformations.cpp delete mode 100644 inference-engine/src/plugin_api/ie_profiling.hpp create mode 100644 inference-engine/src/transformations/include/transformations/common_optimizations/clamp_fusion.hpp create mode 100644 inference-engine/src/transformations/include/transformations/common_optimizations/eliminate_unsqueeze_gather.hpp create mode 100644 inference-engine/src/transformations/include/transformations/common_optimizations/pad_fusion.hpp create mode 100644 inference-engine/src/transformations/include/transformations/common_optimizations/relu_fake_quantize_fusion.hpp create mode 100644 inference-engine/src/transformations/include/transformations/op_conversions/convert_gather_0d.hpp create mode 100644 inference-engine/src/transformations/include/transformations/op_conversions/convert_mvn1_to_mvn6.hpp create mode 100644 inference-engine/src/transformations/include/transformations/op_conversions/mvn6_decomposition.hpp create mode 100644 inference-engine/src/transformations/src/itt.hpp create mode 100644 inference-engine/src/transformations/src/transformations/common_optimizations/clamp_fusion.cpp create mode 100644 inference-engine/src/transformations/src/transformations/common_optimizations/eliminate_unsqueeze_gather.cpp create mode 100644 inference-engine/src/transformations/src/transformations/common_optimizations/pad_fusion.cpp create mode 100644 inference-engine/src/transformations/src/transformations/common_optimizations/relu_fake_quantize_fusion.cpp delete mode 100644 inference-engine/src/transformations/src/transformations/itt.hpp create mode 100644 
inference-engine/src/transformations/src/transformations/op_conversions/convert_gather_0d.cpp create mode 100644 inference-engine/src/transformations/src/transformations/op_conversions/convert_mvn1_to_mvn6.cpp create mode 100644 inference-engine/src/transformations/src/transformations/op_conversions/mvn6_decomposition.cpp create mode 100644 inference-engine/src/vpu/common/include/vpu/ngraph/transformations/dynamic_to_static_shape_gather_elements.hpp create mode 100644 inference-engine/src/vpu/common/src/ngraph/transformations/dynamic_to_static_shape_gather_elements.cpp create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/conv_with_rt_info.bin create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/conv_with_rt_info.xml create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/experimental_detectron_detection_output_opset6.xml create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/experimental_detectron_roi_feature_extractor_opset6.xml create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/loop_2d_add.bin create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/loop_2d_add.xml create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/pad_with_shape_of.bin create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/pad_with_shape_of.xml create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/ti_negative_stride.xml create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/models/ti_resnet.xml create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/tensor_iterator.cpp create mode 100644 inference-engine/tests/functional/inference_engine/ir_serialization/tensor_names.cpp create mode 100644 inference-engine/tests/functional/inference_engine/lp_transformations/normalize_dequantization_transformation.cpp create mode 100644 inference-engine/tests/functional/inference_engine/lp_transformations/strided_slice_transformation.cpp create mode 100644 inference-engine/tests/functional/inference_engine/ngraph_reader/tensor_names.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/broadcast.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/detection_output.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/normalize_l2.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/pad.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/pooling.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/prelu.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/prior_box_clustered.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/region_yolo.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/reshape.cpp create mode 100644 inference-engine/tests/functional/inference_engine/serialization/single_layer/shuffle_channels.cpp create mode 100644 
inference-engine/tests/functional/inference_engine/serialization/single_layer/tensor_iterator.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/clamp_fusion.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/compare_functions_test.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/convert_gather_0d_test.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/convert_mvn1_to_mvn6_test.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/eliminate_unsqueeze_gather.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/mvn6_decomposition_test.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/pad_fusion.cpp create mode 100644 inference-engine/tests/functional/inference_engine/transformations/relu_fake_quantize_fusion.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/shared_tests_instances/execution_graph_tests/runtime_precision.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/shared_tests_instances/low_precision_transformations/concat_with_child_and_output.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/shared_tests_instances/low_precision_transformations/strided_slice_transformation.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/shared_tests_instances/single_layer_tests/bucketize.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/shared_tests_instances/single_layer_tests/gather_elements.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/shared_tests_instances/subgraph_tests/tensor_names.cpp create mode 100755 inference-engine/tests/functional/plugin/cpu/single_layer_tests/convolution.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/single_layer_tests/gather_elements.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/single_layer_tests/psroi_pooling.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/test_utils/fusing_test_utils.cpp create mode 100644 inference-engine/tests/functional/plugin/cpu/test_utils/fusing_test_utils.hpp create mode 100644 inference-engine/tests/functional/plugin/gna/shared_tests_instances/subgraph_tests/tensor_names.cpp create mode 100644 inference-engine/tests/functional/plugin/gpu/shared_tests_instances/low_precision_transformations/concat_with_child_and_output.cpp create mode 100644 inference-engine/tests/functional/plugin/gpu/shared_tests_instances/low_precision_transformations/strided_slice_transformation.cpp create mode 100644 inference-engine/tests/functional/plugin/gpu/shared_tests_instances/subgraph_tests/quantized_convolution_backprop_data.cpp create mode 100644 inference-engine/tests/functional/plugin/gpu/shared_tests_instances/subgraph_tests/quantized_group_convolution_backprop_data.cpp create mode 100644 inference-engine/tests/functional/plugin/gpu/shared_tests_instances/subgraph_tests/tensor_names.cpp create mode 100644 inference-engine/tests/functional/plugin/myriad/ngraph/transformations/dynamic_to_static_shape_gather_elements.cpp create mode 100644 inference-engine/tests/functional/plugin/myriad/shared_tests_instances/subgraph_tests/tensor_names.cpp create mode 100644 inference-engine/tests/functional/plugin/myriad/subgraph_tests/dsr_gather_base.hpp create mode 100644 
inference-engine/tests/functional/plugin/myriad/subgraph_tests/dsr_gather_elements.cpp create mode 100644 inference-engine/tests/functional/plugin/myriad/subgraph_tests/unsqueeze_gather.cpp create mode 100644 inference-engine/tests/functional/plugin/shared/include/execution_graph_tests/runtime_precision.hpp create mode 100644 inference-engine/tests/functional/plugin/shared/include/low_precision_transformations/concat_with_child_and_output.hpp create mode 100644 inference-engine/tests/functional/plugin/shared/include/low_precision_transformations/strided_slice_transformation.hpp create mode 100644 inference-engine/tests/functional/plugin/shared/include/subgraph_tests/tensor_names.hpp create mode 100644 inference-engine/tests/functional/plugin/shared/src/execution_graph_tests/runtime_precision.cpp create mode 100644 inference-engine/tests/functional/plugin/shared/src/low_precision_transformations/concat_with_child_and_output.cpp create mode 100644 inference-engine/tests/functional/plugin/shared/src/low_precision_transformations/strided_slice_transformation.cpp create mode 100644 inference-engine/tests/functional/shared_test_classes/include/shared_test_classes/single_layer/gather_elements.hpp create mode 100644 inference-engine/tests/functional/shared_test_classes/include/shared_test_classes/subgraph/tensor_names.hpp create mode 100644 inference-engine/tests/functional/shared_test_classes/src/single_layer/gather_elements.cpp create mode 100644 inference-engine/tests/functional/shared_test_classes/src/subgraph/tensor_names.cpp create mode 100644 inference-engine/tests/ie_test_utils/common_test_utils/unicode_utils.cpp create mode 100644 inference-engine/tests/ngraph_helpers/lpt_ngraph_functions/include/lpt_ngraph_functions/normalize_dequantization_function.hpp create mode 100644 inference-engine/tests/ngraph_helpers/lpt_ngraph_functions/include/lpt_ngraph_functions/strided_slice_function.hpp create mode 100644 inference-engine/tests/ngraph_helpers/lpt_ngraph_functions/src/normalize_dequantization_function.cpp create mode 100644 inference-engine/tests/ngraph_helpers/lpt_ngraph_functions/src/strided_slice_function.cpp create mode 100644 inference-engine/tests/ngraph_helpers/ngraph_functions/src/gather_elements.cpp create mode 100644 inference-engine/tests/unit/cpu/mkldnn_memory_desc_test.cpp create mode 100644 inference-engine/tests/unit/cpu/mkldnn_memory_test.cpp delete mode 100644 inference-engine/tests_deprecated/unit/cnn_network/cnn_network_impl_test.cpp delete mode 100644 inference-engine/tests_deprecated/unit/cnn_network/xml_father_tests.cpp delete mode 100644 inference-engine/tests_deprecated/unit/stress_tests/stress_tests.cpp delete mode 100644 inference-engine/thirdparty/mkldnn.cmake create mode 100644 model-optimizer/extensions/front/caffe/batchnorm_ext.py rename model-optimizer/{mo/front/caffe/extractors/concat.py => extensions/front/caffe/bn_ext.py} (64%) create mode 100644 model-optimizer/extensions/front/caffe/concat_ext.py rename model-optimizer/{mo/front/caffe/extractors/crop.py => extensions/front/caffe/crop_ext.py} (96%) rename model-optimizer/{mo/front/caffe/extractors/crop_test.py => extensions/front/caffe/crop_ext_test.py} (94%) rename model-optimizer/{mo/front/caffe/extractors/roipooling.py => extensions/front/caffe/dropout_ext.py} (57%) create mode 100644 model-optimizer/extensions/front/caffe/roipooling_ext.py create mode 100644 model-optimizer/extensions/front/caffe/scale_ext.py create mode 100644 model-optimizer/extensions/front/mxnet/take_ext.py create mode 100644 
model-optimizer/extensions/front/tf/KerasRNNTransformation.py create mode 100644 model-optimizer/extensions/front/tf/WhileNormalize.py create mode 100644 model-optimizer/extensions/front/tf/while_ext.py create mode 100644 model-optimizer/extensions/ops/BN.py delete mode 100644 model-optimizer/mo/front/caffe/extractors/batchnorm.py delete mode 100644 model-optimizer/mo/front/caffe/extractors/batchnorm_test.py delete mode 100644 model-optimizer/mo/front/caffe/extractors/concat_test.py delete mode 100644 model-optimizer/mo/front/caffe/extractors/scale.py delete mode 100644 model-optimizer/mo/front/caffe/extractors/scale_test.py create mode 100644 ngraph/core/builder/src/builder/make_constant.cpp delete mode 100644 ngraph/core/include/ngraph/chrome_trace.hpp create mode 100644 ngraph/core/include/ngraph/op/experimental_detectron_detection_output.hpp create mode 100644 ngraph/core/include/ngraph/op/experimental_detectron_generate_proposals.hpp create mode 100644 ngraph/core/include/ngraph/op/experimental_detectron_prior_grid_generator.hpp create mode 100644 ngraph/core/include/ngraph/op/experimental_detectron_roi_feature.hpp create mode 100644 ngraph/core/include/ngraph/op/experimental_detectron_topkrois.hpp delete mode 100644 ngraph/core/include/ngraph/op/quantize.hpp create mode 100644 ngraph/core/reference/include/ngraph/runtime/reference/bucketize.hpp create mode 100644 ngraph/core/reference/include/ngraph/runtime/reference/ctc_greedy_decoder_seq_len.hpp delete mode 100644 ngraph/core/src/chrome_trace.cpp create mode 100644 ngraph/core/src/op/experimental_detectron_detection_output.cpp create mode 100644 ngraph/core/src/op/experimental_detectron_generate_proposals.cpp create mode 100644 ngraph/core/src/op/experimental_detectron_prior_grid_generator.cpp create mode 100644 ngraph/core/src/op/experimental_detectron_roi_feature.cpp create mode 100644 ngraph/core/src/op/experimental_detectron_topkrois.cpp delete mode 100644 ngraph/core/src/op/quantize.cpp rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/attribute.hpp (99%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/graph.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/graph_cache.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/model.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/null_node.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/tensor.hpp (99%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/transform.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/core/value_info.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/default_opset.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/exceptions.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/abs.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/acos.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/acosh.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/add.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/and.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/argmax.hpp (75%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/argmin.hpp (75%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/asin.hpp (96%) rename 
ngraph/frontend/onnx_import/{include/onnx_import => src}/op/asinh.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/atan.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/atanh.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/average_pool.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/batch_norm.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/cast.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/ceil.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/clip.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/concat.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/constant.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/constant_of_shape.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/conv.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/conv_integer.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/conv_transpose.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/cos.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/cosh.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/cum_sum.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/depth_to_space.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/dequantize_linear.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/div.hpp (98%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/dropout.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/elu.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/equal.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/erf.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/exp.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/expand.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/eye_like.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/flatten.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/floor.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/gather.hpp (97%) create mode 100644 ngraph/frontend/onnx_import/src/op/gather_elements.hpp rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/gather_nd.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/gemm.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/global_average_pool.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/global_max_pool.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/greater.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/gru.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/hard_sigmoid.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/hardmax.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/identity.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/image_scaler.hpp (100%) rename 
ngraph/frontend/onnx_import/{include/onnx_import => src}/op/instance_norm.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/leaky_relu.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/less.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/log.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/log_softmax.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/loop.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/lp_norm.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/lp_pool.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/lrn.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/lstm.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/matmul.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/matmul_integer.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/max.hpp (95%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/max_pool.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/mean.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/mean_variance_normalization.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/min.hpp (95%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/mod.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/mul.hpp (98%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/neg.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/non_max_suppression.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/non_zero.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/not.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/onehot.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/or.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/org.openvinotoolkit/detection_output.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/org.openvinotoolkit/fake_quantize.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/org.openvinotoolkit/group_norm.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/org.openvinotoolkit/normalize.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/org.openvinotoolkit/prior_box.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/org.openvinotoolkit/swish.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/pad.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/pow.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/prelu.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/qlinear_matmul.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/quant_conv.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/quantize_linear.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/range.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/reciprocal.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => 
src}/op/reduce.hpp (91%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/relu.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/reshape.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/resize.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/reverse_sequence.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/rnn.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/roi_align.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/round.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/scatter_elements.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/scatter_nd.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/selu.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/shape.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/shrink.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/sigmoid.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/sign.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/sin.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/sinh.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/size.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/slice.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/softmax.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/softplus.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/softsign.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/space_to_depth.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/split.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/sqrt.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/squeeze.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/sub.hpp (98%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/sum.hpp (94%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/tan.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/tanh.hpp (96%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/thresholded_relu.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/tile.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/topk.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/transpose.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/unsqueeze.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/upsample.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/where.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/op/xor.hpp (97%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/ops_bridge.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/arg_min_max_factory.hpp (98%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/common.hpp (99%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/convpool.hpp (100%) rename 
ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/parser.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/pooling_factory.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/provenance_tag.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/recurrent.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/reshape.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/tensor_external_data.hpp (100%) rename ngraph/frontend/onnx_import/{include/onnx_import => src}/utils/variadic.hpp (100%) create mode 100644 ngraph/test/backend/bucketize.in.cpp create mode 100644 ngraph/test/backend/convert_like.in.cpp create mode 100644 ngraph/test/backend/ctc_greedy_decoder_seq_len.in.cpp create mode 100644 ngraph/test/backend/mvn.in.cpp create mode 100644 ngraph/test/models/onnx/add_abc_3d.prototxt create mode 100644 ngraph/test/models/onnx/argmax_select_last_index.prototxt create mode 100644 ngraph/test/models/onnx/argmin_select_last_index.prototxt create mode 100644 ngraph/test/models/onnx/gather_elements_float_1D.prototxt create mode 100644 ngraph/test/models/onnx/gather_elements_float_3D_axis_2.prototxt create mode 100644 ngraph/test/models/onnx/gather_elements_float_negative_axis.prototxt create mode 100644 ngraph/test/models/onnx/gather_elements_int32_axis_0.prototxt create mode 100644 ngraph/test/models/onnx/gather_elements_int8_axis_1.prototxt create mode 100644 ngraph/test/models/onnx/model_editor/shapes__add_two_inputs.prototxt create mode 100644 ngraph/test/models/onnx/model_editor/shapes__dynamic_rank_in_model.prototxt create mode 100644 ngraph/test/models/onnx/mvn_v6.prototxt create mode 100644 ngraph/test/models/onnx/nonmaxsuppression_center_point_box_format.prototxt create mode 100644 ngraph/test/models/onnx/nonmaxsuppression_single_box.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_as_0_dim_input.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_as_constant.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_as_constant_keepdims_off.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_as_constant_single_axis.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_as_input.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_empty.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_empty_dynamic_rank_input.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_empty_with_noop.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_axes_empty_without_noop.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_13_input_dynamic.prototxt create mode 100644 ngraph/test/models/onnx/reduce_sum_dynamic_rank_input.prototxt create mode 100644 ngraph/test/models/onnx/test_clip_inbounds.prototxt create mode 100644 ngraph/test/onnx/onnx_test_utils.in.cpp create mode 100644 ngraph/test/op_eval/bucketize.cpp delete mode 100644 ngraph/test/type_prop/abs.cpp create mode 100644 ngraph/test/type_prop/experimental_detectron_detection_output.cpp create mode 100644 ngraph/test/type_prop/experimental_detectron_generate_proposals.cpp create mode 100644 ngraph/test/type_prop/experimental_detectron_prior_grid_generator.cpp create mode 100644 ngraph/test/type_prop/experimental_detectron_roi_feature_extractor.cpp create mode 100644 ngraph/test/type_prop/experimental_detectron_topkrois.cpp delete mode 
100644 ngraph/test/type_prop/quantize.cpp create mode 100644 ngraph/test/type_prop/reduce_mean.cpp create mode 100644 tests/conditional_compilation/conftest.py create mode 100644 tests/conditional_compilation/test_collect.py create mode 100644 tests/conditional_compilation/test_config.yml create mode 100644 tests/conditional_compilation/test_infer.py create mode 100644 tests/lib/__init__.py create mode 100644 tests/lib/path_utils.py create mode 100644 tests/lib/proc_utils.py rename tests/time_tests/test_runner/.automation/{tgl_test_config.yml => desktop_test_config.yml} (100%) diff --git a/.ci/azure/linux.yml b/.ci/azure/linux.yml index e153030e8b692a..a3cba475e2ca38 100644 --- a/.ci/azure/linux.yml +++ b/.ci/azure/linux.yml @@ -30,6 +30,8 @@ jobs: WORK_DIR: $(Pipeline.Workspace)/_w BUILD_DIR: $(WORK_DIR)/build BIN_DIR: $(REPO_DIR)/bin/intel64/$(BUILD_TYPE) + INSTALL_DIR: $(WORK_DIR)/install_pkg + SETUPVARS: $(INSTALL_DIR)/bin/setupvars.sh steps: - script: | @@ -52,10 +54,10 @@ jobs: displayName: 'System info' - script: | - echo TargetBranch: $(System.PullRequest.TargetBranch) - echo SourceBranch: $(Build.SourceBranch) rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR) rm -rf $(BUILD_DIR) ; mkdir $(BUILD_DIR) + echo TargetBranch: $(System.PullRequest.TargetBranch) + echo SourceBranch: $(Build.SourceBranch) displayName: 'Make dir' - checkout: self @@ -112,6 +114,10 @@ jobs: - script: ls -alR $(REPO_DIR)/bin/ displayName: 'List files' + - script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake + workingDirectory: $(BUILD_DIR) + displayName: 'Install' + - script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml displayName: 'nGraph UT' continueOnError: false diff --git a/.ci/azure/mac.yml b/.ci/azure/mac.yml index 30032ddd25a745..e2e03690ca614f 100644 --- a/.ci/azure/mac.yml +++ b/.ci/azure/mac.yml @@ -30,6 +30,8 @@ jobs: WORK_DIR: $(Pipeline.Workspace)/_w BUILD_DIR: $(WORK_DIR)/build BIN_DIR: $(REPO_DIR)/bin/intel64/$(BUILD_TYPE) + INSTALL_DIR: $(WORK_DIR)/install_pkg + SETUPVARS: $(INSTALL_DIR)/bin/setupvars.sh steps: - script: | @@ -99,7 +101,11 @@ jobs: - script: ls -alR $(REPO_DIR)/bin/ displayName: 'List files' - - script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid --gtest_output=xml:TEST-NGraphUT.xml + - script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake + workingDirectory: $(BUILD_DIR) + displayName: 'Install' + + - script: $(BIN_DIR)/unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU*:IE_CPU.onnx_model_sigmoid:IE_CPU/GRUSequenceOp.onnx_model_gru* --gtest_output=xml:TEST-NGraphUT.xml displayName: 'nGraph UT' continueOnError: false diff --git a/.ci/azure/windows.yml b/.ci/azure/windows.yml index 3f3e12d1b0b72e..c94153df5fbbe3 100644 --- a/.ci/azure/windows.yml +++ b/.ci/azure/windows.yml @@ -33,6 +33,8 @@ jobs: MSVS_VARS_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Auxiliary\Build\vcvars64.bat MSVC_COMPILER_PATH: C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC\Tools\MSVC\14.24.28314\bin\Hostx64\x64\cl.exe TEST_ENV_PATH: $(REPO_DIR)\inference-engine\temp\tbb\bin;$(REPO_DIR)\inference-engine\temp\opencv_4.5.1\opencv\bin;%PATH% + INSTALL_DIR: $(WORK_DIR)\install_pkg + SETUPVARS: $(INSTALL_DIR)\bin\setupvars.bat steps: - script: | @@ -79,16 +81,11 @@ jobs: displayName: 'Install dependencies' - script: | - certutil 
-urlcache -split -f https://incredibuilddiag1wu2.blob.core.windows.net/incredibuild/IBSetupConsole_9_5_0.exe IBSetupConsole_9_5_0.exe - call IBSetupConsole_9_5_0.exe /Install /Components=Agent,oneuse /Coordinator=11.1.0.4 /AGENT:OPENFIREWALL=ON /AGENT:AUTOSELECTPORTS=ON /ADDTOPATH=ON /AGENT:INSTALLADDINS=OFF + certutil -urlcache -split -f https://incredibuilddiag1wu2.blob.core.windows.net/incredibuild/install_ib_console.bat install_ib_console.bat + call install_ib_console.bat workingDirectory: $(WORK_DIR) displayName: 'Install IncrediBuild' - - script: | - echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent - reg add HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Xoreax\IncrediBuild\Builder /f /v LastEnabled /d 0 && echo Start IncrediBuild_Agent && net start IncrediBuild_Agent - displayName: 'Start IncrediBuild' - - script: | set PATH=$(WORK_DIR)\ninja-win;%PATH% call "$(MSVS_VARS_PATH)" && cmake -GNinja -DENABLE_FASTER_BUILD=ON -DENABLE_TEMPLATE_PLUGIN=ON -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -DENABLE_TESTS=ON -DIE_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" $(REPO_DIR) @@ -104,9 +101,14 @@ jobs: - script: echo Stop IncrediBuild_Agent && net stop IncrediBuild_Agent displayName: Stop IncrediBuild continueOnError: true + - script: dir $(REPO_DIR)\bin\ /s displayName: 'List files' + - script: cmake -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P cmake_install.cmake + workingDirectory: $(BUILD_DIR) + displayName: 'Install' + - script: | set PATH=$(TEST_ENV_PATH) $(BIN_DIR)\unit-test --gtest_print_time=1 --gtest_filter=-backend_api.config_unsupported:*IE_GPU* --gtest_output=xml:TEST-NGraphUT.xml diff --git a/cmake/developer_package/IEDevScriptsConfig.cmake b/cmake/developer_package/IEDevScriptsConfig.cmake index a28f77099b6ba8..76324e9aa6fbb1 100644 --- a/cmake/developer_package/IEDevScriptsConfig.cmake +++ b/cmake/developer_package/IEDevScriptsConfig.cmake @@ -46,13 +46,7 @@ endif() function(set_temp_directory temp_variable source_tree_dir) if (DEFINED ENV{DL_SDK_TEMP} AND NOT $ENV{DL_SDK_TEMP} STREQUAL "") message(STATUS "DL_SDK_TEMP environment is set : $ENV{DL_SDK_TEMP}") - - if (WIN32) - string(REPLACE "\\" "\\\\" temp $ENV{DL_SDK_TEMP}) - else() - set(temp $ENV{DL_SDK_TEMP}) - endif() - + file(TO_CMAKE_PATH $ENV{DL_SDK_TEMP} temp) if (ENABLE_ALTERNATIVE_TEMP) set(ALTERNATIVE_PATH ${source_tree_dir}/temp) endif() diff --git a/cmake/developer_package/compile_flags/os_flags.cmake b/cmake/developer_package/compile_flags/os_flags.cmake index 8e3a5606ab78c5..87359245b541e9 100644 --- a/cmake/developer_package/compile_flags/os_flags.cmake +++ b/cmake/developer_package/compile_flags/os_flags.cmake @@ -271,6 +271,7 @@ else() ie_add_compiler_flags(-fdiagnostics-show-option) ie_add_compiler_flags(-Wundef) ie_add_compiler_flags(-Wreturn-type) + ie_add_compiler_flags(-Wunused-variable) # Disable noisy warnings diff --git a/cmake/developer_package/compile_flags/sanitizer.cmake b/cmake/developer_package/compile_flags/sanitizer.cmake index e303b203100f7a..a9b8a47c72a171 100644 --- a/cmake/developer_package/compile_flags/sanitizer.cmake +++ b/cmake/developer_package/compile_flags/sanitizer.cmake @@ -4,6 +4,14 @@ include(CheckCXXCompilerFlag) +if (ENABLE_SANITIZER OR ENABLE_THREAD_SANITIZER) + # This is workaround for https://gitlab.kitware.com/cmake/cmake/-/issues/16609. + # It ensures pthread is searched without ASAN linking. 
+ # The line below must come before adding -fsanitize=address or -fsanitize=thread to
+ # build options for the trick to work.
+ find_package(Threads REQUIRED)
+endif()
+
 if (ENABLE_SANITIZER)
 set(SANITIZER_COMPILER_FLAGS "-g -fsanitize=address -fno-omit-frame-pointer")
 CHECK_CXX_COMPILER_FLAG("-fsanitize-recover=address" SANITIZE_RECOVER_SUPPORTED)
diff --git a/cmake/developer_package/message.cmake b/cmake/developer_package/message.cmake
index eb6a1af60035ad..26912b05566599 100644
--- a/cmake/developer_package/message.cmake
+++ b/cmake/developer_package/message.cmake
@@ -11,12 +11,17 @@ if(UNIX AND ENABLE_ERROR_HIGHLIGHT)
 list(GET ARGV 0 MessageType)
 list(REMOVE_AT ARGV 0)
+
+ foreach(arg IN LISTS ARGV)
+ set(_msg "${_msg}${arg}")
+ endforeach()
+
 if(MessageType STREQUAL FATAL_ERROR OR MessageType STREQUAL SEND_ERROR)
- _message(${MessageType} "${RED}${ARGV}${RESET}")
+ _message(${MessageType} "${RED}${_msg}${RESET}")
 elseif(MessageType STREQUAL WARNING)
- _message(${MessageType} "${YELLOW}${ARGV}${RESET}")
+ _message(${MessageType} "${YELLOW}${_msg}${RESET}")
 else()
- _message(${MessageType} "${ARGV}")
+ _message(${MessageType} "${_msg}")
 endif()
 endfunction()
endif()
diff --git a/docs/CMakeLists.txt b/docs/CMakeLists.txt
index a4ee2f62aa5851..e34e8fe3ade2e2 100644
--- a/docs/CMakeLists.txt
+++ b/docs/CMakeLists.txt
@@ -3,6 +3,10 @@
 #
 if(NOT ENABLE_DOCKER)
+ if(CMAKE_COMPILER_IS_GNUCXX)
+ ie_add_compiler_flags(-Wall)
+ endif()
+
 add_subdirectory(snippets)
 # Detect nGraph
diff --git a/docs/HOWTO/Custom_Layers_Guide.md b/docs/HOWTO/Custom_Layers_Guide.md
index 23437de247aabb..0cacca13451ad7 100644
--- a/docs/HOWTO/Custom_Layers_Guide.md
+++ b/docs/HOWTO/Custom_Layers_Guide.md
@@ -1,200 +1,371 @@
-# Custom Layers Guide {#openvino_docs_HOWTO_Custom_Layers_Guide}
+# Custom Operations Guide {#openvino_docs_HOWTO_Custom_Layers_Guide}
+
+The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with multiple frameworks, including
+TensorFlow*, Caffe*, MXNet*, Kaldi*, and the ONNX* file format. The list of supported operations (layers) differs for
+each of the supported frameworks. To see the operations supported by your framework, refer to
+[Supported Framework Layers](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
+
+Custom operations are operations that are not included in the list of known operations. If your model contains any
+operation that is not in the list of known operations, the Model Optimizer is not able to generate an Intermediate
+Representation (IR) for this model.
+
+This guide illustrates the workflow for running inference on topologies featuring custom operations, allowing you to
+plug in your own implementation for existing or completely new operations.
+
+> **NOTE:** *Layer* is a legacy term for an *operation* that came from the Caffe\* framework and is no longer used.
+> Refer to [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../MO_DG/IR_and_opsets.md)
+> for more information on the topic.
+
+## Terms Used in This Guide
+
+- *Intermediate Representation (IR)* — The neural network format used only by the Inference Engine in OpenVINO; it
+ abstracts the different source frameworks and describes the model topology, operation parameters, and weights.
+
+- *Operation* — The abstract concept of a math function that is selected for a specific purpose. Operations supported by
+ OpenVINO™ are listed in the supported operation set provided in the [Available Operations Sets](../ops/opset.md).
+ Examples of the operations are: [ReLU](../ops/activation/ReLU_1.md), [Convolution](../ops/convolution/Convolution_1.md),
+ [Add](../ops/arithmetic/Add_1.md), etc.
+
+- *Kernel* — The implementation of an operation function in the OpenVINO™ plugin, in this case, the math programmed (in
+ C++ and OpenCL) to perform the operation for target hardware (CPU or GPU).
+
+- *Inference Engine Extension* — Device-specific module implementing custom operations (a set of kernels).
+
+## Custom Operation Support Overview
+
+There are three steps to support inference of a model with custom operation(s):
+1. Add support for the custom operation in the [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) so
+the Model Optimizer can generate the IR with the operation.
+2. Create an operation set and implement a custom nGraph operation in it as described in
+[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md).
+3. Implement the custom operation in one of the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
+plugins to support inference of this operation using particular target hardware (CPU, GPU or VPU).
+
+To see the operations that are supported by each device plugin for the Inference Engine, refer to the
+[Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md).
+
+> **NOTE:** If a device doesn't support a particular operation, an alternative to creating a new operation is to target
+> an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be
+> used to run an inference model on multiple devices, allowing the unsupported operations on one device to "fallback" to
+> run on another device (e.g., CPU) that does support those operations.
+
+### Custom Operation Support for the Model Optimizer
+
+The Model Optimizer model conversion pipeline is described in detail in the "Model Conversion Pipeline" section on the
+[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
+It is recommended to read that article first for a better understanding of the following material.
+
+The Model Optimizer provides an extension mechanism to support new operations and implement custom model transformations to
+generate an optimized IR. This mechanism is described in the "Model Optimizer Extensions" section on the
+[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
+
+At a minimum, two types of Model Optimizer extensions should be implemented to support a custom operation:
+1. An operation class for the new operation. This class stores information about the operation, its attributes, the shape
+inference function, the attributes to be saved to the IR, and some other internally used attributes. Refer to the
+"Model Optimizer Operation" section on the
+[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
+detailed instructions on how to implement it.
+2. An operation attributes extractor. The extractor is responsible for parsing the framework-specific representation of the
+operation and uses the corresponding operation class to update the graph node attributes with the necessary attributes of the
+operation. Refer to the "Operation Extractor" section on the
+[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for
+detailed instructions on how to implement it. A minimal sketch combining both extension types is shown below.
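To make the two extension types concrete, here is a minimal illustrative sketch for a hypothetical TensorFlow\* operation "MyOp" with a single float attribute `alpha` (the operation name, attribute, and file placement are assumptions for illustration only, not part of this patch). The `FFT.py` and `FFT_ext.py` files added later in this patch follow the same pattern:

```py
# Hypothetical mo_extensions/ops/MyOp.py plus a TensorFlow extractor, shown
# together for brevity. "MyOp" and "alpha" are illustrative names only.
from mo.front.common.partial_infer.elemental import copy_shape_infer
from mo.front.extractor import FrontExtractorOp
from mo.graph.graph import Graph
from mo.ops.op import Op


class MyOp(Op):
    op = 'MyOp'
    enabled = False

    def __init__(self, graph: Graph, attrs: dict):
        super().__init__(graph, {
            'type': self.op,
            'op': self.op,
            'version': 'custom_opset',
            'alpha': None,              # operation attribute, saved to the IR
            'in_ports_count': 1,
            'out_ports_count': 1,
            'infer': copy_shape_infer,  # output shape equals input shape
        }, attrs)

    def backend_attrs(self):
        # Attributes serialized into the resulting IR.
        return ['alpha']


class MyOpFrontExtractor(FrontExtractorOp):
    op = 'MyOp'
    enabled = True

    @classmethod
    def extract(cls, node):
        # Read the attribute from the framework node (here: a TensorFlow
        # NodeDef, assuming a float attribute) and update the graph node.
        attrs = {'alpha': node.pb.attr['alpha'].f}
        MyOp.update_node_stat(node, attrs)
        return cls.enabled
```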
+
+> **NOTE:** In some cases you may also need to implement a transformation to support the operation. This topic is covered
+> in the "Graph Transformation Extensions" section on the
+> [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md).
+
+## Custom Operations Extensions for the Inference Engine
+
+The Inference Engine provides an extension mechanism to support new operations. This mechanism is described in the
+[Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
+
+Each device plugin includes a library of optimized implementations to execute known operations, which must be extended to
+execute a custom operation. The custom operation extension is implemented according to the target device:
+
+- Custom Operation CPU Extension
+ - A compiled shared library (`.so`, `.dylib` or `.dll`) needed by the CPU Plugin for executing the custom operation
+ on a CPU. Refer to [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more
+ details.
+- Custom Operation GPU Extension
+ - OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the GPU, along with an
+ operation description file (.xml) needed by the GPU Plugin for the custom operation kernel. Refer to
+ [How to Implement Custom GPU Operations](../IE_DG/Extensibility_DG/GPU_Kernel.md) for more details.
+- Custom Operation VPU Extension
+ - OpenCL source code (.cl) for the custom operation kernel that will be compiled to execute on the VPU, along with an
+ operation description file (.xml) needed by the VPU Plugin for the custom operation kernel. Refer to
+ [How to Implement Custom Operations for VPU](../IE_DG/Extensibility_DG/VPU_Kernel.md) for more details.
+
+Also, it is necessary to implement a custom nGraph operation as described in
+[Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) so the Inference Engine can read an IR with this
+operation and correctly infer the output tensor shape and type.
+
+## Enabling Magnetic Resonance Image Reconstruction Model
+This chapter provides step-by-step instructions on how to enable the magnetic resonance image reconstruction model
+implemented in the [repository](https://github.com/rmsouza01/Hybrid-CS-Model-MRI/) using a custom operation on CPU. The
+example is prepared for a model generated from the repository with hash `2ede2f96161ce70dcdc922371fe6b6b254aafcc8`.
+
+### Download and Convert the Model to a Frozen TensorFlow\* Model Format
+The original pre-trained model is provided in the HDF5 format, which is not supported by OpenVINO directly, so it needs to
+be converted to the TensorFlow\* frozen model format first.
+
+1. Download the repository `https://github.com/rmsouza01/Hybrid-CS-Model-MRI`:
+```py + import keras as K + import numpy as np + import Modules.frequency_spatial_network as fsnet + import tensorflow as tf -Custom layers are layers that are not included in the list of known layers. If your topology contains any layers that are not in the list of known layers, the Model Optimizer classifies them as custom. + under_rate = '20' -This guide illustrates the workflow for running inference on topologies featuring custom layers, allowing you to plug in your own implementation for existing or completely new layers. -For a step-by-step example of creating and executing a custom layer, see the [Custom Layer Implementation Tutorials for Linux and Windows.](https://github.com/david-drew/OpenVINO-Custom-Layers/tree/master/2019.r2.0) + stats = np.load("Data/stats_fs_unet_norm_" + under_rate + ".npy") + var_sampling_mask = np.load("Data/sampling_mask_" + under_rate + "perc.npy") -## Terms used in this guide + model = fsnet.wnet(stats[0], stats[1], stats[2], stats[3], kshape = (5,5), kshape2=(3,3)) + model_name = "Models/wnet_" + under_rate + ".hdf5" + model.load_weights(model_name) -- *Layer* — The abstract concept of a math function that is selected for a specific purpose (relu, sigmoid, tanh, convolutional). This is one of a sequential series of building blocks within the neural network. -- *Kernel* — The implementation of a layer function, in this case, the math programmed (in C++ and Python) to perform the layer operation for target hardware (CPU or GPU). -- *Intermediate Representation (IR)* — Neural Network used only by the Inference Engine in OpenVINO abstracting the different frameworks and describing topology, layer parameters and weights. -The original format will be a supported framework such as TensorFlow, Caffe, or MXNet. + inp = np.random.standard_normal([1, 256, 256, 2]).astype(np.float32) + np.save('inp', inp) -- *Model Extension Generator* — Generates template source code files for each of the extensions needed by the Model Optimizer and the Inference Engine. + sess = K.backend.get_session() + sess.as_default() + graph_def = sess.graph.as_graph_def() + graph_def = tf.graph_util.convert_variables_to_constants(sess, graph_def, ['conv2d_44/BiasAdd']) + with tf.gfile.FastGFile('wnet_20.pb', 'wb') as f: + f.write(graph_def.SerializeToString()) +``` -- *Inference Engine Extension* — Device-specific module implementing custom layers (a set of kernels). - - -## Custom Layer Overview - -The [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) searches the list of known layers for each layer contained in the input model topology before building the model's internal representation, optimizing the model, and producing the Intermediate Representation files. - -The [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) loads the layers from the input model IR files into the specified device plugin, which will search a list of known layer implementations for the device. If your topology contains layers that are not in the list of known layers for the device, the Inference Engine considers the layer to be unsupported and reports an error. To see the layers that are supported by each device plugin for the Inference Engine, refer to the [Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md) documentation. -
-> **NOTE:** If a device doesn't support a particular layer, an alternative to creating a new custom layer is to target an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be used to run an inference model on multiple devices allowing the unsupported layers on one device to "fallback" to run on another device (e.g., CPU) that does support those layers. - -## Custom Layer Implementation Workflow - -When implementing a custom layer for your pre-trained model in the Intel® Distribution of OpenVINO™ toolkit, you will need to add extensions to both the Model Optimizer and the Inference Engine. - -## Custom Layer Extensions for the Model Optimizer - -The following figure shows the basic processing steps for the Model Optimizer highlighting the two necessary custom layer extensions, the Custom Layer Extractor and the Custom Layer Operation. - -![](img/MO_extensions_flow.png) - - -The Model Optimizer first extracts information from the input model which includes the topology of the model layers along with parameters, input and output format, etc., for each layer. The model is then optimized from the various known characteristics of the layers, interconnects, and data flow which partly comes from the layer operation providing details including the shape of the output for each layer. Finally, the optimized model is output to the model IR files needed by the Inference Engine to run the model. - -The Model Optimizer starts with a library of known extractors and operations for each [supported model framework](../MO_DG/prepare_model/Supported_Frameworks_Layers.md) which must be extended to use each unknown custom layer. The custom layer extensions needed by the Model Optimizer are: - -- Custom Layer Extractor - - Responsible for identifying the custom layer operation and extracting the parameters for each instance of the custom layer. The layer parameters are stored per instance and used by the layer operation before finally appearing in the output IR. Typically the input layer parameters are unchanged, which is the case covered by this tutorial. -- Custom Layer Operation - - Responsible for specifying the attributes that are supported by the custom layer and computing the output shape for each instance of the custom layer from its parameters.
The `--mo-op` command-line argument shown in the examples below generates a custom layer operation for the Model Optimizer.
-
-## Custom Layer Extensions for the Inference Engine
-
-The following figure shows the basic flow for the Inference Engine highlighting two custom layer extensions for the CPU and GPU Plugins, the Custom Layer CPU extension and the Custom Layer GPU Extension.
-
-![](img/IE_extensions_flow.png)
-
-Each device plugin includes a library of optimized implementations to execute known layer operations which must be extended to execute a custom layer. The custom layer extension is implemented according to the target device:
-
-- Custom Layer CPU Extension
- - A compiled shared library (.so or .dll binary) needed by the CPU Plugin for executing the custom layer on the CPU.
-- Custom Layer GPU Extension
- - OpenCL source code (.cl) for the custom layer kernel that will be compiled to execute on the GPU along with a layer description file (.xml) needed by the GPU Plugin for the custom layer kernel.
-
-## Model Extension Generator
+As a result, the TensorFlow\* frozen model file "wnet_20.pb" is generated.
-Using answers to interactive questions or a *.json* configuration file, the Model Extension Generator tool generates template source code files for each of the extensions needed by the Model Optimizer and the Inference Engine. To complete the implementation of each extension, the template functions may need to be edited to fill-in details specific to the custom layer or the actual custom layer functionality itself.
+
+### Convert the Frozen TensorFlow\* Model to Intermediate Representation
-### Command-line
-
-The Model Extension Generator is included in the Intel® Distribution of OpenVINO™ toolkit installation and is run using the command (here with the "--help" option):
+
+First, open the model in TensorBoard or another TensorFlow* model visualization tool. The model supports a dynamic
+batch dimension because the value for the batch dimension is not hardcoded in the model. The Model Optimizer needs to set all
+dynamic dimensions to some specific value to create the IR; therefore, specify the command line parameter `-b 1` to set
+the batch dimension equal to 1. The actual batch size dimension can be changed at runtime using the Inference Engine API
+described in [Using Shape Inference](../IE_DG/ShapeInference.md). Also refer to
+[Converting a Model Using General Conversion Parameters](../MO_DG/prepare_model/convert_model/Converting_Model_General.md)
+and [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
+for more details and command line parameters used for the model conversion.
```bash
-python3 /opt/intel/openvino/deployment_tools/tools/extension_generator/extgen.py new --help
+.//mo.py --input_model /wnet_20.pb -b 1
```
-where the output will appear similar to:
-
-```
-usage: You can use any combination of the following arguments:
-
-Arguments to configure extension generation in the interactive mode:
-
-optional arguments:
- -h, --help show this help message and exit
- --mo-caffe-ext generate a Model Optimizer Caffe* extractor
- --mo-mxnet-ext generate a Model Optimizer MXNet* extractor
- --mo-tf-ext generate a Model Optimizer TensorFlow* extractor
- --mo-op generate a Model Optimizer operation
- --ie-cpu-ext generate an Inference Engine CPU extension
- --ie-gpu-ext generate an Inference Engine GPU extension
- --output_dir OUTPUT_DIR
- set an output directory. If not specified, the current
- directory is used by default.
+Model Optimizer produces the following error:
+```bash
+[ ERROR ] List of operations that cannot be converted to Inference Engine IR:
+[ ERROR ] Complex (1)
+[ ERROR ] lambda_2/Complex
+[ ERROR ] IFFT2D (1)
+[ ERROR ] lambda_2/IFFT2D
+[ ERROR ] ComplexAbs (1)
+[ ERROR ] lambda_2/Abs
+[ ERROR ] Part of the nodes was not converted to IR. Stopped.
+```
-The available command-line arguments are used to specify which extension(s) to generate templates for the Model Optimizer or Inference Engine. The generated extension files for each argument will appear starting from the top of the output directory as follows:
-
-Command-line Argument | Output Directory Location |
---------------------- | ------------------------------ |
-`--mo-caffe-ext` | user_mo_extensions/front/caffe |
-`--mo-mxnet-ext` | user_mo_extensions/front/mxnet |
-`--mo-tf-ext` | user_mo_extensions/front/tf |
-`--mo-op` | user_mo_extensions/ops |
-`--ie-cpu-ext` | user_ie_extensions/cpu |
-`--ie-gpu-ext` | user_ie_extensions/gpu |
-
-### Extension Workflow
-
-The workflow for each generated extension follows the same basic steps:
-
-![](img/MEG_generic_flow.png)
-
-**Step 1: Generate:** Use the Model Extension Generator to generate the Custom Layer Template Files.
-
-**Step 2: Edit:** Edit the Custom Layer Template Files as necessary to create the specialized Custom Layer Extension Source Code.
-
-**Step 3: Specify:** Specify the custom layer extension locations to be used by the Model Optimizer or Inference Engine.
+The error means that the Model Optimizer doesn't know how to handle 3 types of TensorFlow\* operations: "Complex",
+"IFFT2D" and "ComplexAbs". To see more details about the conversion process, run the model conversion with the
+additional parameter `--log_level DEBUG`. It is worth mentioning the following lines from the detailed output:
-## Caffe\* Models with Custom Layers
+```bash
+[ INFO ] Called "tf_native_tf_node_infer" for node "lambda_2/Complex"
+[ ] [ DEBUG ] [ tf:228 ] Added placeholder with name 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
+[ ] [ DEBUG ] [ tf:228 ] Added placeholder with name 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
+[ ] [ DEBUG ] [ tf:241 ] update_input_in_pbs: replace input 'lambda_2/lambda_3/strided_slice' with input 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
+[ ] [ DEBUG ] [ tf:249 ] Replacing input '0' of the node 'lambda_2/Complex' with placeholder 'lambda_2/lambda_3/strided_slice_port_0_ie_placeholder'
+[ ] [ DEBUG ] [ tf:241 ] update_input_in_pbs: replace input 'lambda_2/lambda_4/strided_slice' with input 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
+[ ] [ DEBUG ] [ tf:249 ] Replacing input '1' of the node 'lambda_2/Complex' with placeholder 'lambda_2/lambda_4/strided_slice_port_0_ie_placeholder'
+[ ] [ DEBUG ] [ tf:148 ] Inferred shape of the output tensor with index '0' of the node 'lambda_2/Complex': '[ 1 256 256]'
+[ ] [ DEBUG ] [ infer:145 ] Outputs:
+[ ] [ DEBUG ] [ infer:32 ] output[0]: shape = [ 1 256 256], value =
+[ ] [ DEBUG ] [ infer:129 ] --------------------
+[ ] [ DEBUG ] [ infer:130 ] Partial infer for lambda_2/IFFT2D
+[ ] [ DEBUG ] [ infer:131 ] Op: IFFT2D
+[ ] [ DEBUG ] [ infer:132 ] Inputs:
+[ ] [ DEBUG ] [ infer:32 ] input[0]: shape = [ 1 256 256], value =
+```
-If your Caffe\* model has custom layers:
+This is a part of the log of the partial inference phase of the model conversion.
See the "Partial Inference" section on +the [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for +more information about this phase. Model Optimizer inferred output shape for the unknown operation of type "Complex" +using a "fallback" to TensorFlow\*. However, it is not enough to generate the IR because Model Optimizer doesn't know +which attributes of the operation should be saved to IR. So it is necessary to implement Model Optimizer extensions to +support these operations. + +Before going into the extension development it is necessary to understand what these unsupported operations do according +to the TensorFlow\* framework specification. + +* "Complex" - returns a tensor of complex type constructed from two real input tensors specifying real and imaginary +part of a complex number. +* "IFFT2D" - returns a tensor with inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of + an input. +* "ComplexAbs" - returns a tensor with absolute values of input tensor with complex numbers. + +The part of the model with all three unsupported operations is depicted below: + +![Unsupported sub-graph](img/unsupported_subgraph.png) + +This model uses complex numbers during the inference but Inference Engine does not support tensors of this data type. So +it is necessary to find a way how to avoid using tensors of such a type in the model. Fortunately, the complex tensor +appear as a result of "Complex" operation, is used as input in the "IFFT2D" operation then is passed to "ComplexAbs" +which produces real value tensor as output. So there are just 3 operations consuming/producing complex tensors in the +model. + +Let's design an OpenVINO operation "FFT" which get a single real number tensor describing the complex number and +produces a single real number tensor describing output complex tensor. This way the fact that the model uses complex +numbers is hidden inside the "FFT" operation implementation. The operation gets a tensor of shape `[N, H, W, 2]` and +produces the output tensor with the same shape, where the innermost dimension contains pairs of real numbers describing +the complex number (its real and imaginary part). As we will see further this operation will allow us to support the +model. The implementation of the Model Optimizer operation should be saved to `mo_extensions/ops/FFT.py` file: + +@snippet FFT.py fft:operation + +The attribute `inverse` is a flag specifying type of the FFT to apply: forward or inverse. + +See the "Model Optimizer Operation" section on the +[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for the +detailed instruction on how to implement the operation. + +Now it is necessary to implement extractor for the "IFFT2D" operation according to the +"Operation Extractor" section on the +[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md). The +following snippet provides two extractors: one for "IFFT2D", another one for "FFT2D", however only on of them is used +in this example. The implementation should be saved to the file `mo_extensions/front/tf/FFT_ext.py`. 
+
+@snippet FFT_ext.py fft_ext:extractor
+
+> **NOTE:** The graph is in an inconsistent state after extracting node attributes because, according to the semantics of
+> the original operation "IFFT2D", its input should be a tensor of complex numbers, but the extractor instantiated an
+> operation "FFT" which expects a real tensor with a specific layout. The inconsistency is resolved when the front
+> phase transformations discussed below are applied.
+
+The output shape of the operation "AddV2" from the picture above is `[N, H, W, 2]`, where the innermost dimension
+contains pairs of real numbers describing a complex number (its real and imaginary parts). The following "StridedSlice"
+operations split the input tensor into 2 parts to get a tensor of real parts and a tensor of imaginary parts, which are then
+consumed by the "Complex" operation to produce a tensor of complex numbers. These "StridedSlice" and "Complex"
+operations can be removed so the "FFT" operation will get a real value tensor encoding complex numbers. To achieve this,
+we implement a front phase transformation that searches for a pattern of two "StridedSlice" operations with specific
+attributes producing data for a "Complex" operation and removes it from the graph. Refer to the
+"Pattern-Defined Front Phase Transformations" section on the
+[Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) for more
+information on how this type of transformation works. The code snippet should be saved to the file
+`mo_extensions/front/tf/Complex.py`.
+
+@snippet Complex.py complex:transformation
+
+> **NOTE:** The graph is in an inconsistent state because the "ComplexAbs" operation consumes a complex value tensor but
+> "FFT" produces a real value tensor.
+
+Now let's implement a transformation that replaces the "ComplexAbs" operation with a sub-graph of primitive operations
+which calculate the result using the following formula: \f$module(z) = \sqrt{real(z) \cdot real(z) + imag(z) \cdot imag(z)}\f$.
+The original "IFFT2D" operation produces a tensor of complex values, but the "FFT" operation produces a real value tensor with
+the same format and shape as its input. So the input shape for "ComplexAbs" will be `[N, H, W, 2]`,
+with the innermost dimension containing tuples with the real and imaginary parts of complex numbers. To calculate
+absolute values for the complex tensor, we do the following:
+1. Raise all elements to the power of 2.
+2. Calculate a reduced sum over the innermost dimension.
+3. Calculate a square root.
+
+The implementation should be saved to the file `mo_extensions/front/tf/ComplexAbs.py` and is provided below:
+
+@snippet ComplexAbs.py complex_abs:transformation
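As an aside, the three-step decomposition can be sanity-checked with plain NumPy (an editorial illustration, not part of the patch):

```py
import numpy as np

# Random tensor in the FFT operation layout: the innermost dimension of
# size 2 holds the (real, imaginary) parts of each complex number.
z = np.random.standard_normal([1, 256, 256, 2]).astype(np.float32)

# The three steps of the transformation: power of 2, reduce-sum over the
# innermost dimension, square root (power 0.5).
decomposed = np.power(np.sum(np.power(z, 2.0), axis=-1), 0.5)

# Reference: NumPy's own absolute value of the equivalent complex tensor.
reference = np.abs(z[..., 0] + 1j * z[..., 1])
assert np.allclose(decomposed, reference, atol=1e-5)
```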
+
+Now it is possible to convert the model using the following command line:
+```bash
+.//mo.py --input_model /wnet_20.pb -b 1 --extensions mo_extensions/
+```
-**Register the custom layers as extensions to the Model Optimizer**. For instructions, see [Extending Model Optimizer with New Primitives](../MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md). When your custom layers are registered as extensions, the Model Optimizer generates a valid and optimized Intermediate Representation. You will need a bit of Python\* code that lets the Model Optimizer;
+
+The sub-graph corresponding to the originally non-supported one is depicted in the image below:
-- Generate a valid Intermediate Representation according to the rules you specified.
-- Be independent from the availability of Caffe on your computer.
-
-If your model contains Custom Layers, it is important to understand the internal workflow of the Model Optimizer. Consider the following example.
+
+![Converted sub-graph](img/converted_subgraph.png)
-**Example**:
+
+> **NOTE:** The Model Optimizer performed conversion of the model from NHWC to NCHW layout, which is why the dimension with
-The network has:
+> the value 2 moved to another position.
+
-* One input layer (#1)
-* One output Layer (#5)
-* Three internal layers (#2, 3, 4)
+### Inference Engine Extension Implementation
+Now it is necessary to implement the extension for the CPU plugin with the operation "FFT" introduced previously. The code
+below is based on the template extension described in the
+[Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md).
+
-The custom and standard layer types are:
+#### CMake Build File
+The first step is to create a CMake configuration file which builds the extension. The content of the "CMakeLists.txt"
+file is the following:
+
-* Layers #2 and #5 are implemented as Model Optimizer extensions.
-* Layers #1 and #4 are supported in Model Optimizer out-of-the box.
-* Layer #3 is neither in the list of supported layers nor in extensions, but is specified in CustomLayersMapping.xml.
+@snippet ../template_extension/CMakeLists.txt cmake:extension
+
-> **NOTE**: If any of the layers are not in one of three categories described above, the Model Optimizer fails with an appropriate message and a link to the corresponding question in [Model Optimizer FAQ](../MO_DG/prepare_model/Model_Optimizer_FAQ.md).
+The CPU FFT kernel implementation uses OpenCV to perform the FFT, which is why the extension library is linked with
+"opencv_core", which comes with OpenVINO.
+
-The general process is as shown:
+#### Custom nGraph Operation "FFT" Implementation
+The next step is to create the nGraph operation FFT. The header file "fft_op.hpp" has the following content:
-![Example custom layer network](img/mo_caffe_priorities.png)
-
+
+@snippet ../template_extension/fft_op.hpp fft_op:header
-**Step 1:** The example model is fed to the Model Optimizer that **loads the model** with the special parser built on top of the `caffe.proto` file. In case of failure, the Model Optimizer asks you to prepare the parser that can read the model. For more information, refer to the Model Optimizer, FAQ #1.
+
+The operation has just one boolean attribute, `inverse`. The implementations of the necessary nGraph operation functions are
+in the "fft_op.cpp" file with the following content:
-**Step 2:** The Model Optimizer **extracts the attributes of all layers** by going through the list of layers and attempting to find the appropriate extractor. In order of priority, the Model Optimizer checks if the layer is:
+
+@snippet ../template_extension/fft_op.cpp fft_op:implementation
-
-* A. Registered as a Model Optimizer extension
-* B. Registered as a standard Model Optimizer layer
-
-When the Model Optimizer finds a satisfying condition from the list above, it extracts the attributes according to the following rules:
-
-* For A. - takes only the parameters specified in the extension
-* For B. - takes only the parameters specified in the standard extractor
-
+
+Refer to the [Custom nGraph Operation](../IE_DG/Extensibility_DG/AddingNGraphOps.md) for more details.
-**Step 3:** The Model Optimizer **calculates the output shape of all layers**. The logic is the same as it is for the priorities. **Important:** the Model Optimizer always takes the first available option.
+
+#### CPU FFT Kernel Implementation
+The operation implementation for the CPU plugin uses OpenCV to perform the FFT. The header file "fft_kernel.hpp" has the
+following content:
-**Step 4:** The Model Optimizer **optimizes the original model and produces the two Intermediate Representation (IR) files in .xml and .bin**.
-
+@snippet ../template_extension/fft_kernel.hpp fft_kernel:header
-## TensorFlow\* Models with Custom Layers
+
+The "fft_kernel.cpp" file with the CPU kernel implementation has the following content:
-You have two options for TensorFlow\* models with custom layers:
-
+@snippet ../template_extension/fft_kernel.cpp fft_kernel:implementation -* **Register those layers as extensions to the Model Optimizer.** In this case, the Model Optimizer generates a valid and optimized Intermediate Representation. -* **If you have sub-graphs that should not be expressed with the analogous sub-graph in the Intermediate Representation, but another sub-graph should appear in the model, the Model Optimizer provides such an option.** This feature is helpful for many TensorFlow models. To read more, see [Sub-graph Replacement in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md). - -## MXNet\* Models with Custom Layers +Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more details. -There are two options to convert your MXNet* model that contains custom layers: +#### Extension Library Implementation +The last step is to create an extension library "extension.cpp" and "extension.hpp" which will include the FFT +operation for the CPU plugin. The code of the library is described in the [Extension Library](../IE_DG/Extensibility_DG/Extension.md). -1. Register the custom layers as extensions to the Model Optimizer. For instructions, see [Extending MXNet Model Optimizer with New Primitives](../MO_DG/prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md). When your custom layers are registered as extensions, the Model Optimizer generates a valid and optimized Intermediate Representation. You can create Model Optimizer extensions for both MXNet layers with op `Custom` and layers which are not standard MXNet layers. +### Building and Running the Custom Extension +In order to build the extension run the following:
+```bash
+mkdir build && cd build
+source /opt/intel/openvino/bin/setupvars.sh
+cmake .. -DCMAKE_BUILD_TYPE=Release
+make --jobs=$(nproc)
+```
-2. If you have sub-graphs that should not be expressed with the analogous sub-graph in the Intermediate Representation, but another sub-graph should appear in the model, the Model Optimizer provides such an option. In MXNet the function is actively used for ssd models provides an opportunity to for the necessary subgraph sequences and replace them. To read more, see [Sub-graph Replacement in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).
+
+The result of this command is a compiled shared library (`.so`, `.dylib` or `.dll`). It should be loaded in the
+application using the `Core` class instance method `AddExtension`, like this:
+`core.AddExtension(make_so_pointer<IExtension>(compiled_library_file_name), "CPU");`.
-## Kaldi\* Models with Custom Layers
-For information on converting your Kaldi* model containing custom layers see [Converting a Kaldi Model in the Model Optimizer Developer Guide](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md).
+
+To test that the extension is implemented correctly, we can run the "mri_reconstruction_demo.py" with the following content:
-## ONNX\* Models with Custom Layers
-For information on converting your ONNX* model containing custom layers see [Converting an ONNX Model in the Model Optimizer Developer Guide](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md).
+
+@snippet mri_reconstruction_demo.py mri_demo:demo
-## Step-by-Step Custom Layers Tutorial
-For a step-by-step walk-through creating and executing a custom layer, see [Custom Layer Implementation Tutorial for Linux and Windows.](https://github.com/david-drew/OpenVINO-Custom-Layers/tree/master/2019.r2.0)
+
+The script can be executed using the following command line:
+```bash
+python3 mri_reconstruction_demo.py \
+ -m /wnet_20.xml \
+ -i .npy \
+ -p /Data/sampling_mask_20perc.npy \
+ -l /libtemplate_extension.so \
+ -d CPU
+```

## Additional Resources

- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Model Optimizer Extensibility](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
- [Inference Engine Extensibility Mechanism](../IE_DG/Extensibility_DG/Intro.md)
- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
@@ -204,9 +375,7 @@ For a step-by-step walk-through creating and executing a custom layer, see [Cust
## Converting Models:
- [Convert Your Caffe* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md)
+- [Convert Your Kaldi* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md)
- [Convert Your TensorFlow* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
- [Convert Your MXNet* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md)
- [Convert Your ONNX* Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md)
-
-
-
diff --git a/docs/HOWTO/img/IE_extensions_flow.png b/docs/HOWTO/img/IE_extensions_flow.png deleted file mode 100644 index
ca665ca3298bbb..00000000000000 --- a/docs/HOWTO/img/IE_extensions_flow.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c2f362a39ae6c2af080e4f055b6fdba4954f918f85731545d1df3d687d9213d5 -size 421056 diff --git a/docs/HOWTO/img/MEG_generic_flow.png b/docs/HOWTO/img/MEG_generic_flow.png deleted file mode 100644 index a492c3fff5026b..00000000000000 --- a/docs/HOWTO/img/MEG_generic_flow.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:cb5c700d003936779455353bfa4ed9432410c0975c46e2dfd30c6a1abccd1727 -size 23320 diff --git a/docs/HOWTO/img/MO_extensions_flow.png b/docs/HOWTO/img/MO_extensions_flow.png deleted file mode 100644 index 5009c0ce2604ad..00000000000000 --- a/docs/HOWTO/img/MO_extensions_flow.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:99d6b5146be85fa408dc5432883c3e2745cffe890133854a97dcf22f5c5962d4 -size 47564 diff --git a/docs/HOWTO/img/converted_subgraph.png b/docs/HOWTO/img/converted_subgraph.png new file mode 100644 index 00000000000000..6a5b7220777d54 --- /dev/null +++ b/docs/HOWTO/img/converted_subgraph.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7c8ab4f15874d235968471bcf876c89c795d601e69891208107b8b72aa58eb1 +size 70014 diff --git a/docs/HOWTO/img/mo_caffe_priorities.png b/docs/HOWTO/img/mo_caffe_priorities.png deleted file mode 100644 index 665892316c17fc..00000000000000 --- a/docs/HOWTO/img/mo_caffe_priorities.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0a4de6e502cae7542f1f311bcdbea6bb145f960f0d27d86a03160d1a60133778 -size 301310 diff --git a/docs/HOWTO/img/unsupported_subgraph.png b/docs/HOWTO/img/unsupported_subgraph.png new file mode 100644 index 00000000000000..80f7084a78a859 --- /dev/null +++ b/docs/HOWTO/img/unsupported_subgraph.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3d5ccf51fe1babb93d96d042494695a6a6e055d1f8ebf7eef5083d54d8987a23 +size 58789 diff --git a/docs/HOWTO/mo_extensions/front/tf/Complex.py b/docs/HOWTO/mo_extensions/front/tf/Complex.py new file mode 100644 index 00000000000000..465608dfaba644 --- /dev/null +++ b/docs/HOWTO/mo_extensions/front/tf/Complex.py @@ -0,0 +1,57 @@ +""" + Copyright (C) 2018-2020 Intel Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +""" + +#! 
[complex:transformation]
+import logging as log
+
+import numpy as np
+
+from mo.front.common.replacement import FrontReplacementSubgraph
+from mo.graph.graph import Graph
+
+
+class Complex(FrontReplacementSubgraph):
+ enabled = True
+
+ def pattern(self):
+ return dict(
+ nodes=[
+ ('strided_slice_real', dict(op='StridedSlice')),
+ ('strided_slice_imag', dict(op='StridedSlice')),
+ ('complex', dict(op='Complex')),
+ ],
+ edges=[
+ ('strided_slice_real', 'complex', {'in': 0}),
+ ('strided_slice_imag', 'complex', {'in': 1}),
+ ])
+
+ @staticmethod
+ def replace_sub_graph(graph: Graph, match: dict):
+ strided_slice_real = match['strided_slice_real']
+ strided_slice_imag = match['strided_slice_imag']
+ complex_node = match['complex']
+
+ # make sure that both strided slice operations get the same data as input
+ assert strided_slice_real.in_port(0).get_source() == strided_slice_imag.in_port(0).get_source()
+
+ # identify the output port of the operation producing data for the strided slice nodes
+ input_node_output_port = strided_slice_real.in_port(0).get_source()
+ input_node_output_port.disconnect()
+
+ # change the connection so now all consumers of "complex_node" get data from input node of strided slice nodes
+ complex_node.out_port(0).get_connection().set_source(input_node_output_port)
+#! [complex:transformation]
+
diff --git a/docs/HOWTO/mo_extensions/front/tf/ComplexAbs.py b/docs/HOWTO/mo_extensions/front/tf/ComplexAbs.py new file mode 100644 index 00000000000000..bac4140d732f91 --- /dev/null +++ b/docs/HOWTO/mo_extensions/front/tf/ComplexAbs.py @@ -0,0 +1,40 @@
+"""
+ Copyright (C) 2018-2020 Intel Corporation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+"""
+
+#! [complex_abs:transformation]
+import numpy as np
+
+from extensions.ops.elementwise import Pow
+from extensions.ops.ReduceOps import ReduceSum
+from mo.front.common.replacement import FrontReplacementOp
+from mo.graph.graph import Graph, Node
+from mo.ops.const import Const
+
+
+class ComplexAbs(FrontReplacementOp):
+ op = "ComplexAbs"
+ enabled = True
+
+ def replace_op(self, graph: Graph, node: Node):
+ pow_2 = Const(graph, {'value': np.float32(2.0)}).create_node()
+ reduce_axis = Const(graph, {'value': np.int32(-1)}).create_node()
+ pow_0_5 = Const(graph, {'value': np.float32(0.5)}).create_node()
+
+ sq = Pow(graph, dict(name=node.in_node(0).name + '/sq', power=2.0)).create_node([node.in_node(0), pow_2])
+ sum = ReduceSum(graph, dict(name=sq.name + '/sum')).create_node([sq, reduce_axis])
+ sqrt = Pow(graph, dict(name=sum.name + '/sqrt', power=0.5)).create_node([sum, pow_0_5])
+ return [sqrt.id]
+#!
[complex_abs:transformation] diff --git a/docs/HOWTO/mo_extensions/front/tf/FFT_ext.py b/docs/HOWTO/mo_extensions/front/tf/FFT_ext.py new file mode 100644 index 00000000000000..283c87ba838f80 --- /dev/null +++ b/docs/HOWTO/mo_extensions/front/tf/FFT_ext.py @@ -0,0 +1,47 @@ +""" + Copyright (C) 2018-2020 Intel Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +""" + +# ! [fft_ext:extractor] +from ...ops.FFT import FFT +from mo.front.extractor import FrontExtractorOp +from mo.utils.error import Error + + +class FFT2DFrontExtractor(FrontExtractorOp): + op = 'FFT2D' + enabled = True + + @classmethod + def extract(cls, node): + attrs = { + 'inverse': 0 + } + FFT.update_node_stat(node, attrs) + return cls.enabled + + +class IFFT2DFrontExtractor(FrontExtractorOp): + op = 'IFFT2D' + enabled = True + + @classmethod + def extract(cls, node): + attrs = { + 'inverse': 1 + } + FFT.update_node_stat(node, attrs) + return cls.enabled +# ! [fft_ext:extractor] diff --git a/docs/HOWTO/mo_extensions/ops/FFT.py b/docs/HOWTO/mo_extensions/ops/FFT.py new file mode 100644 index 00000000000000..c3f37f7d6d6919 --- /dev/null +++ b/docs/HOWTO/mo_extensions/ops/FFT.py @@ -0,0 +1,40 @@ +""" + Copyright (C) 2018-2020 Intel Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. +""" + +#! [fft:operation] +from mo.front.common.partial_infer.elemental import copy_shape_infer +from mo.graph.graph import Node, Graph +from mo.ops.op import Op + + +class FFT(Op): + op = 'FFT' + enabled = False + + def __init__(self, graph: Graph, attrs: dict): + super().__init__(graph, { + 'type': self.op, + 'op': self.op, + 'version': 'custom_opset', + 'inverse': None, + 'in_ports_count': 1, + 'out_ports_count': 1, + 'infer': copy_shape_infer + }, attrs) + + def backend_attrs(self): + return ['inverse'] +#! [fft:operation] diff --git a/docs/HOWTO/mri_reconstruction_demo.py b/docs/HOWTO/mri_reconstruction_demo.py new file mode 100644 index 00000000000000..74ce15721fc68a --- /dev/null +++ b/docs/HOWTO/mri_reconstruction_demo.py @@ -0,0 +1,119 @@ +""" + Copyright (C) 2018-2020 Intel Corporation + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ See the License for the specific language governing permissions and
+ limitations under the License.
+"""
+
+#! [mri_demo:demo]
+import numpy as np
+import cv2 as cv
+import argparse
+import time
+from openvino.inference_engine import IECore
+
+
+def kspace_to_image(kspace):
+ assert(len(kspace.shape) == 3 and kspace.shape[-1] == 2)
+ fft = cv.idft(kspace, flags=cv.DFT_SCALE)
+ img = cv.magnitude(fft[:,:,0], fft[:,:,1])
+ return cv.normalize(img, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(description='MRI reconstruction demo for network from https://github.com/rmsouza01/Hybrid-CS-Model-MRI (https://arxiv.org/abs/1810.12473)')
+ parser.add_argument('-i', '--input', dest='input', help='Path to input .npy file with MRI scan data.')
+ parser.add_argument('-p', '--pattern', dest='pattern', help='Path to sampling mask in .npy format.')
+ parser.add_argument('-m', '--model', dest='model', help='Path to .xml file of OpenVINO IR.')
+ parser.add_argument('-l', '--cpu_extension', dest='cpu_extension', help='Path to extensions library with FFT implementation.')
+ parser.add_argument('-d', '--device', dest='device', default='CPU',
+ help='Optional. Specify the target device to infer on; CPU, '
+ 'GPU, HDDL or MYRIAD is acceptable. For non-CPU targets, '
+ 'HETERO plugin is used with CPU fallbacks to FFT implementation. '
+ 'Default value is CPU')
+ args = parser.parse_args()
+
+ xml_path = args.model
+ assert(xml_path.endswith('.xml'))
+ bin_path = xml_path[:xml_path.rfind('.xml')] + '.bin'
+
+ ie = IECore()
+ ie.add_extension(args.cpu_extension, "CPU")
+
+ net = ie.read_network(xml_path, bin_path)
+
+ device = 'CPU' if args.device == 'CPU' else ('HETERO:' + args.device + ',CPU')
+ exec_net = ie.load_network(net, device)
+
+ # Hybrid-CS-Model-MRI/Data/stats_fs_unet_norm_20.npy
+ stats = np.array([2.20295299e-01, 1.11048916e+03, 4.16997984e+00, 4.71741395e+00], dtype=np.float32)
+ # Hybrid-CS-Model-MRI/Data/sampling_mask_20perc.npy
+ var_sampling_mask = np.load(args.pattern) # TODO: can we generate it at runtime?
+ print('Sampling ratio:', 1.0 - var_sampling_mask.sum() / var_sampling_mask.size) + + data = np.load(args.input) + num_slices, height, width = data.shape[0], data.shape[1], data.shape[2] + pred = np.zeros((num_slices, height, width), dtype=np.uint8) + data /= np.sqrt(height * width) + + print('Compute...') + start = time.time() + for slice_id, kspace in enumerate(data): + kspace = kspace.copy() + + # Apply sampling + kspace[var_sampling_mask] = 0 + kspace = (kspace - stats[0]) / stats[1] + + # Forward through network + input = np.expand_dims(kspace.transpose(2, 0, 1), axis=0) + outputs = exec_net.infer(inputs={'input_1': input}) + output = next(iter(outputs.values())) + output = output.reshape(height, width) + + # Save predictions + pred[slice_id] = cv.normalize(output, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U) + + print('Elapsed time: %.1f seconds' % (time.time() - start)) + + WIN_NAME = 'MRI reconstruction with OpenVINO' + + slice_id = 0 + def callback(pos): + global slice_id + slice_id = pos + + kspace = data[slice_id] + img = kspace_to_image(kspace) + + kspace[var_sampling_mask] = 0 + masked = kspace_to_image(kspace) + + rec = pred[slice_id] + + # Add a header + border_size = 20 + render = cv.hconcat((img, masked, rec)) + render = cv.copyMakeBorder(render, border_size, 0, 0, 0, cv.BORDER_CONSTANT, value=255) + cv.putText(render, 'Original', (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0) + cv.putText(render, 'Sampled (PSNR %.1f)' % cv.PSNR(img, masked), (width, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0) + cv.putText(render, 'Reconstructed (PSNR %.1f)' % cv.PSNR(img, rec), (width*2, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0) + + cv.imshow(WIN_NAME, render) + cv.waitKey(1) + + cv.namedWindow(WIN_NAME, cv.WINDOW_NORMAL) + print(num_slices) + cv.createTrackbar('Slice', WIN_NAME, num_slices // 2, num_slices - 1, callback) + callback(num_slices // 2) # Trigger initial visualization + cv.waitKey() +#! [mri_demo:demo] diff --git a/docs/IE_DG/Bfloat16Inference.md b/docs/IE_DG/Bfloat16Inference.md index e814a8948c44bb..136607af8ad435 100644 --- a/docs/IE_DG/Bfloat16Inference.md +++ b/docs/IE_DG/Bfloat16Inference.md @@ -2,7 +2,8 @@ ## Disclaimer -Inference Engine with the bfloat16 inference implemented on CPU must support the `avx512_bf16` instruction and therefore the bfloat16 data format. +Inference Engine with the bfloat16 inference implemented on CPU must support the native `avx512_bf16` instruction and therefore the bfloat16 data format. +It is possible to use bfloat16 inference in simulation mode on platforms with Intel® Advanced Vector Extensions 512 (Intel® AVX-512), but it leads to significant performance degradation in comparison with FP32 or native `avx512_bf16` instruction usage. ## Introduction @@ -12,7 +13,7 @@ Bfloat16 computations (referred to as BF16) is the Brain Floating-Point format w Preserving the exponent bits keeps BF16 to the same range as the FP32 (~1e-38 to ~3e38). This simplifies conversion between two data types: you just need to skip or flush to zero 16 low bits. Truncated mantissa leads to occasionally less precision, but according to [investigations](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus), neural networks are more sensitive to the size of the exponent than the mantissa size. Also, in lots of models, precision is needed close to zero but not so much at the maximum range. 
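To make the "skip 16 low bits" conversion described above concrete, here is an illustrative NumPy sketch (an editorial illustration, not part of the patch; real hardware typically uses round-to-nearest-even rather than plain truncation):

```py
import numpy as np

def fp32_to_bf16(x: np.ndarray) -> np.ndarray:
    # Reinterpret FP32 as uint32 and drop the 16 low mantissa bits (truncation).
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bf16_to_fp32(b: np.ndarray) -> np.ndarray:
    # Restore FP32 by shifting the 16 BF16 bits back into the high half.
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.1415926, 1e-3, 65504.0], dtype=np.float32)
# The round trip keeps the full FP32 exponent range but only ~2-3
# significant decimal digits of the mantissa.
print(bf16_to_fp32(fp32_to_bf16(x)))
```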
-Another useful feature of BF16 is possibility to encode an INT8 in BF16 without loss of accuracy, because INT8 range completely fits in BF16 mantissa field. It reduces data flow in conversion from INT8 input image data to BF16 directly without intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.
+Another useful feature of BF16 is the possibility to encode INT8 in BF16 without loss of accuracy, because the INT8 range completely fits into the BF16 mantissa field. It reduces data flow in conversion from INT8 input image data to BF16 directly, without an intermediate representation in FP32, or in combination of [INT8 inference](Int8Inference.md) and BF16 layers.

See the [Intel's site](https://software.intel.com/sites/default/files/managed/40/8b/bf16-hardware-numerics-definition-white-paper.pdf) for more bfloat16 format details.

@@ -22,14 +23,7 @@ There are two ways to check if CPU device can support bfloat16 computations for

@snippet snippets/Bfloat16Inference0.cpp part0

-Current Inference Engine solution for bfloat16 inference uses Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of the following layers in BF16 computation mode:
-* Convolution
-* FullyConnected
-* InnerProduct
-* LRN
-* Pooling
-
-This means that BF16 inference can only be performed with the CPU plugin on the layers listed above. All other layers are executed in FP32.
+The current Inference Engine solution for bfloat16 inference uses Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of a significant number of layers in BF16 computation mode.

## Lowering Inference Precision

@@ -43,18 +37,36 @@ Bfloat16 data usage provides the following benefits that increase performance:
4. Reduced size of data in memory, as a result, larger models fit in the same memory bounds.
5. Reduced amount of data that must be transferred, as a result, reduced data transition time.

-For default optimization on CPU, source model converts from FP32 or FP16 to BF16 and executes internally on platforms with native BF16 support. In that case, `KEY_ENFORCE_BF16` is set to `YES`.
+For default optimization on CPU, the source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In this case, `KEY_ENFORCE_BF16` is set to `YES`.
The code below demonstrates how to check if the key is set:

@snippet snippets/Bfloat16Inference1.cpp part1

-To disable BF16 internal transformations, set the `KEY_ENFORCE_BF16` to `NO`. In this case, the model infers AS IS without modifications with precisions that were set on each layer edge.
+To disable BF16 internal transformations, set `KEY_ENFORCE_BF16` to `NO`. In this case, the model infers as is, without modifications, with the precisions that were set on each layer edge.

@snippet snippets/Bfloat16Inference2.cpp part2

+To disable BF16 in the C API:
+
+```
+ie_config_t config = { "ENFORCE_BF16", "NO", NULL};
+ie_core_load_network(core, network, device_name, &config, &exe_network);
+```

-An exception with message `Platform doesn't support BF16 format` is formed in case of setting `KEY_ENFORCE_BF16` to `YES` on CPU without native BF16 support.
+An exception with the message `Platform doesn't support BF16 format` is thrown if `KEY_ENFORCE_BF16` is set to `YES` on a CPU without native BF16 support or BF16 simulation mode.

-Low-Precision 8-bit integer models do not convert to BF16, even if bfloat16 optimization is set by default.
+Low-Precision 8-bit integer models cannot be converted to BF16, even if bfloat16 optimization is set by default.
+
+## Bfloat16 Simulation Mode
+
+Bfloat16 simulation mode is available on CPUs with Intel® AVX-512 that do not support the native `avx512_bf16` instruction. The simulation mode does not guarantee adequate performance.
+To enable the bfloat16 simulation mode:
+* In [Benchmark App](../../inference-engine/samples/benchmark_app/README.md), add the `-enforcebf16=true` option
+* In C++ API, set `KEY_ENFORCE_BF16` to `YES`
+* In C API:
+```
+ie_config_t config = { "ENFORCE_BF16", "YES", NULL};
+ie_core_load_network(core, network, device_name, &config, &exe_network);
+```

## Performance Counters

@@ -77,4 +89,4 @@ prob      EXECUTED       layerType: SoftMax            realT

The `execType` column of the table includes inference primitives with specific suffixes.

-[bf16_format]: img/bf16_format.png
\ No newline at end of file
+[bf16_format]: img/bf16_format.png
diff --git a/docs/IE_DG/Extensibility_DG/AddingNGraphOps.md b/docs/IE_DG/Extensibility_DG/AddingNGraphOps.md
index 42eda8f83c0fa4..9717b08f1c427d 100644
--- a/docs/IE_DG/Extensibility_DG/AddingNGraphOps.md
+++ b/docs/IE_DG/Extensibility_DG/AddingNGraphOps.md
@@ -1,4 +1,4 @@
-# Add Custom nGraph Operations {#openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps}
+# Custom nGraph Operation {#openvino_docs_IE_DG_Extensibility_DG_AddingNGraphOps}

Inference Engine Extension API allows to register operation sets (opsets) with custom nGraph operations, it allows to support Networks with unknown operations.

@@ -71,10 +71,9 @@ nGraph provides opsets mechanism for operation versioning. Different opsets dist

When specifying opset names, follow the rules below:
* Use unique opset names.
-* Do not use the following built-in opset names: `extension`, `experimental`, `opset1`, `opest2`.
+* Do not use the following built-in opset names: `extension`, `experimental`, `opset1`, `opset2`, `opset3`, ... , `opsetN`.
* Make sure that the Model Optimizer and your extension use the same opset names.
-* IR v10 layers have the mandatory `version` attribute specifying the opset.
-* `opset1` is the name of default operations set.
+* IR v10 operations have the mandatory `version` attribute specifying the opset.

Operations from the default opset cannot be redefined. Use a custom opset to create a new operation or extend functionality of an existing operation from another opset.
diff --git a/docs/IE_DG/Extensibility_DG/CPU_Kernel.md b/docs/IE_DG/Extensibility_DG/CPU_Kernel.md
index 205ae64a6e1825..0e2adca76a8775 100644
--- a/docs/IE_DG/Extensibility_DG/CPU_Kernel.md
+++ b/docs/IE_DG/Extensibility_DG/CPU_Kernel.md
@@ -1,4 +1,4 @@
-# How to Implement Custom CPU Layers {#openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel}
+# How to Implement Custom CPU Operations {#openvino_docs_IE_DG_Extensibility_DG_CPU_Kernel}

The primary vehicle for the performance of the CPU codepath in the Inference Engine is the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), and new CPU kernels extend the Inference Engine plugin for the Intel MKL-DNN. Implementing the InferenceEngine::ILayerExecImpl defines a general CPU-side extension. There are no Intel MKL-DNN specifics in the way you need to implement a kernel.
diff --git a/docs/IE_DG/Extensibility_DG/Extension.md b/docs/IE_DG/Extensibility_DG/Extension.md
index 6df3a1424ec0e4..69bb614e605681 100644
--- a/docs/IE_DG/Extensibility_DG/Extension.md
+++ b/docs/IE_DG/Extensibility_DG/Extension.md
@@ -1,7 +1,10 @@
# Extension Library {#openvino_docs_IE_DG_Extensibility_DG_Extension}

Inference Engine provides an InferenceEngine::IExtension interface, which defines the interface for Inference Engine Extension libraries.
-All extension libraries should be inherited from this interface.
+All extension libraries should be inherited from this interface. The example below contains an implementation of two operations: `Template`,
+used as an example in this document, and `FFT`, used as a more complex example from the [Custom Operations Guide](../../HOWTO/Custom_Layers_Guide.md).
+
+> **NOTE**: The `FFT` operation is implemented using the OpenCV library functions `cv::dft` and `cv::idft`.

Based on that, declaration of an extension class can look as follows:
diff --git a/docs/IE_DG/Extensibility_DG/GPU_Kernel.md b/docs/IE_DG/Extensibility_DG/GPU_Kernel.md
index a918076e756112..59c0f070cf0693 100644
--- a/docs/IE_DG/Extensibility_DG/GPU_Kernel.md
+++ b/docs/IE_DG/Extensibility_DG/GPU_Kernel.md
@@ -1,16 +1,16 @@
-# How to Implement Custom GPU Layers {#openvino_docs_IE_DG_Extensibility_DG_GPU_Kernel}
+# How to Implement Custom GPU Operations {#openvino_docs_IE_DG_Extensibility_DG_GPU_Kernel}

-The GPU codepath abstracts many details about OpenCL™. You need to provide the kernel code in OpenCL C and the configuration file that connects the kernel and its parameters to the parameters of the layer.
+The GPU codepath abstracts many details about OpenCL™. You need to provide the kernel code in OpenCL C and the configuration file that connects the kernel and its parameters to the parameters of the operation.

-There are two options of using custom layer configuration file:
+There are two options for using a custom operation configuration file:
* Include a section with your kernels into the global automatically-loaded `cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file, which is hosted in the `<INSTALL_DIR>/deployment_tools/inference_engine/bin/intel64/{Debug/Release}` folder
-* Call the `InferenceEngine::Core::SetConfig()` method from your application with the `InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE` key and the configuration file name as a value before loading the network that uses custom layers to the plugin:
+* Call the `InferenceEngine::Core::SetConfig()` method from your application with the `InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE` key and the configuration file name as a value before loading the network that uses custom operations to the plugin:

@snippet snippets/GPU_Kernel.cpp part0

All Inference Engine samples, except trivial `hello_classification`,
-feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom layers for the classification sample, run the command below:
+feature a dedicated command-line option `-c` to load custom kernels.
For example, to load custom operations for the classification sample, run the command below:
```sh
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU -c <absolute_path_to_config>/custom_layer_example.xml
@@ -19,7 +19,7 @@ $ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validati

## Configuration File Format

The configuration file is expected to follow the `.xml` file structure
-with a node of the type `CustomLayer` for every custom layer you provide.
+with a node of the type `CustomLayer` for every custom operation you provide.

The definitions described in the sections below use the following notations:

@@ -32,14 +32,13 @@ Notation | Description

### CustomLayer Node and Sub-node Structure

-`CustomLayer` node contains the entire configuration for a single custom
-layer.
+`CustomLayer` node contains the entire configuration for a single custom operation.

| Attribute Name |\# | Description |
|-----|-----|-----|
-| `name` | (1) | The name of the layer type to be used. This name should be identical to the type used in the IR.|
-| `type` | (1) | Must be `SimpleGPU`. |
-| `version` | (1) | Must be `1`. |
+| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the IR.|
+| `type` | (1) | Must be `SimpleGPU`. |
+| `version` | (1) | Must be `1`. |

**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+), `WorkSizes` (0/1)

@@ -69,9 +68,9 @@ the sources during compilation (JIT).

| Attribute Name | \# | Description |
|------|-------|------|
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well (taken as a string). |
-| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
+| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
-| `default` | (0/1) | The default value to be used if the specified parameters is missing from the layer in the IR. |
+| `default` | (0/1) | The default value to be used if the specified parameter is missing from the operation in the IR. |

**Sub-nodes:** None

@@ -92,7 +91,7 @@ weights or biases).

| Attribute Name | \# | Description |
|----|-----|------|
-| `name` | (1) | Name of a blob attached to a layer in the IR |
+| `name` | (1) | Name of a blob attached to an operation in the IR |
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to |

**Sub-nodes**: None

@@ -105,7 +104,7 @@ weights or biases).
|------|-------|-------|
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
| `type` | (1) | `input` or `output` |
-| `port-index` | (1) | 0-based index in the layer’s input/output ports in the IR |
+| `port-index` | (1) | 0-based index in the operation's input/output ports in the IR |
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB` (also in all lowercase). Default value: `BFYX` |

### CompilerOptions Node and Sub-node Structure

@@ -178,7 +177,7 @@ For an example, see [Example Kernel](#example-kernel).
| `_PITCHES_SIZE`| The size of the `_PITCHES` array |
| `_OFFSET`| The number of elements from the start of the tensor to the first valid element (bypassing the lower padding) |
All `<TENSOR>` values are automatically defined for every tensor
-bound to this layer (`INPUT0`, `INPUT1`, `OUTPUT0`, and so on), as shown
+bound to this operation (`INPUT0`, `INPUT1`, `OUTPUT0`, and so on), as shown
in the following for example:

```sh
diff --git a/docs/IE_DG/Extensibility_DG/Intro.md b/docs/IE_DG/Extensibility_DG/Intro.md
index b5d90cba061ad3..06d030fc710294 100644
--- a/docs/IE_DG/Extensibility_DG/Intro.md
+++ b/docs/IE_DG/Extensibility_DG/Intro.md
@@ -2,19 +2,22 @@

Inference Engine Extensibility API allows to add support of custom operations to the Inference Engine.
Extension should contain operation sets with custom operations and execution kernels for custom operations.

-Physically, an extension library can be represented as a dynamic library exporting the single `CreateExtension` function that allows to create a new extension instance.
+Physically, an extension library can be represented as a dynamic library exporting the single `CreateExtension` function
+that allows creating a new extension instance.

-Extensibility library can be loaded to the InferenceEngine::Core object using the InferenceEngine::Core::AddExtension method.
+An extension library can be loaded into the `InferenceEngine::Core` object using the
+`InferenceEngine::Core::AddExtension` method.

## Inference Engine Extension Library

-Inference Engine Extension dynamic library contains several main components:
+Inference Engine Extension dynamic library contains several components:

- * [Extension class](Extension.md):
+ * [Extension Library](Extension.md):
   - Contains custom operation sets
   - Provides CPU implementations for custom operations
- * [Custom operations](Intro.md):
-   - Allows to use InferenceEngine::Core::ReadNetwork to read Intermediate Representation (IR) with unsupported operations
+ * [Custom nGraph Operation](AddingNGraphOps.md):
+   - Allows using `InferenceEngine::Core::ReadNetwork` to read Intermediate Representation (IR) with unsupported
+   operations
   - Allows to create `ngraph::Function` with unsupported operations
   - Provides shape inference mechanism for custom operations

@@ -26,13 +29,13 @@ at `/docs/template_extension`.

The Inference Engine workflow involves the creation of custom kernels and either custom or existing operations.

-An _Operation_ is a Network building block implemented in the training framework, for example, `Convolution` in Caffe*.
+An _Operation_ is a network building block implemented in the training framework, for example, `Convolution` in Caffe*.
A _Kernel_ is defined as the corresponding implementation in the Inference Engine.

-Refer to the [Custom Layers in the Model Optimizer](../../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) section for details on how
-mapping between framework layers and Inference Engine kernels is registered.
+Refer to the [Model Optimizer Extensibility](../../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
+for details on how a mapping between framework operations and Inference Engine kernels is registered.

-In short, you can plug your own kernel implementations into the Inference Engine and map them to the layers in the original framework.
+In short, you can plug your own kernel implementations into the Inference Engine and map them to the operations in the original framework.
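+
+For example, with the Python API this workflow looks as follows (a minimal sketch; the library and model file names are placeholders):
+
+```py
+from openvino.inference_engine import IECore
+
+ie = IECore()
+# Register an extension library that exports the CreateExtension function
+ie.add_extension('libtemplate_extension.so', 'CPU')
+# Now an IR with custom operations can be read and loaded as usual
+net = ie.read_network('model_with_custom_op.xml', 'model_with_custom_op.bin')
+exec_net = ie.load_network(net, 'CPU')
+```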
The following pages describe how to integrate custom _kernels_ into the Inference Engine:
diff --git a/docs/IE_DG/Samples_Overview.md b/docs/IE_DG/Samples_Overview.md
index d63a310de0b06b..245fa68e900e80 100644
--- a/docs/IE_DG/Samples_Overview.md
+++ b/docs/IE_DG/Samples_Overview.md
@@ -127,6 +127,63 @@ You can also build a generated solution manually. For example, if you want to bu
Microsoft Visual Studio and open the generated solution file from the `C:\Users\<user_name>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\Samples.sln`
directory.

+### Build the Sample Applications on macOS*
+
+The officially supported macOS* build environment is the following:
+
+* macOS* 10.15 64-bit
+* Clang* compiler from Xcode* 10.1 or higher
+* CMake* version 3.13 or higher
+
+> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the [build instructions on GitHub](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
+
+To build the C or C++ sample applications for macOS, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:
+```sh
+build_samples.sh
+```
+
+Once the build is completed, you can find sample binaries in the following folders:
+* C samples: `~/inference_engine_c_samples_build/intel64/Release`
+* C++ samples: `~/inference_engine_cpp_samples_build/intel64/Release`
+
+You can also build the sample applications manually:
+
+> **NOTE**: If you have installed the product as a root user, switch to root mode before you continue: `sudo -i`
+
+> **NOTE**: Before proceeding, make sure you have the OpenVINO™ environment set correctly. This can be done manually by running:
+```sh
+cd <INSTALL_DIR>/bin
+source setupvars.sh
+```
+
+1. Navigate to a directory that you have write access to and create a samples build directory. This example uses a directory named `build`:
+```sh
+mkdir build
+```
+> **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_samples_build/`
+
+2. Go to the created directory:
+```sh
+cd build
+```
+
+3. Run CMake to generate the Make files for release or debug configuration. For example, for C++ samples:
+   - For release configuration:
+   ```sh
+   cmake -DCMAKE_BUILD_TYPE=Release <INSTALL_DIR>/inference_engine/samples/cpp
+   ```
+   - For debug configuration:
+   ```sh
+   cmake -DCMAKE_BUILD_TYPE=Debug <INSTALL_DIR>/inference_engine/samples/cpp
+   ```
+4. Run `make` to build the samples:
+```sh
+make
+```
+
+For the release configuration, the sample application binaries are in `<path_to_build_directory>/intel64/Release/`;
+for the debug configuration, they are in `<path_to_build_directory>/intel64/Debug/`.
+
## Get Ready for Running the Sample Applications

### Get Ready for Running the Sample Applications on Linux*
diff --git a/docs/IE_DG/ShapeInference.md b/docs/IE_DG/ShapeInference.md
index a7cdddb784d676..ea86911ff397e0 100644
--- a/docs/IE_DG/ShapeInference.md
+++ b/docs/IE_DG/ShapeInference.md
@@ -1,6 +1,36 @@
Using Shape Inference {#openvino_docs_IE_DG_ShapeInference}
==========================================

+OpenVINO™ provides the following methods for runtime model reshaping:
+
+* **Set a new input shape** with the `InferenceEngine::CNNNetwork::reshape` method.
+ The `InferenceEngine::CNNNetwork::reshape` method updates input shapes and propagates them down to the outputs of the model through all intermediate layers. + +> **NOTES**: +> - Starting with the 2021.1 release, the Model Optimizer converts topologies keeping shape-calculating sub-graphs by default, which enables correct shape propagation during reshaping in most cases. +> - Older versions of IRs are not guaranteed to reshape successfully. Please regenerate them with the Model Optimizer of the latest version of OpenVINO™.
+> - If an ONNX model does not have a fully defined input shape and the model was imported with the ONNX importer, reshape the model before loading it to the plugin. + +* **Set a new batch dimension value** with the `InferenceEngine::CNNNetwork::setBatchSize` method.
+ The meaning of a model batch may vary depending on the model design. + This method does not deduce batch placement for inputs from the model architecture. + It assumes that the batch is placed at the zero index in the shape for all inputs and uses the `InferenceEngine::CNNNetwork::reshape` method to propagate updated shapes through the model. + + The method transforms the model before a new shape propagation to relax a hard-coded batch dimension in the model, if any. + + Use `InferenceEngine::CNNNetwork::reshape` instead of `InferenceEngine::CNNNetwork::setBatchSize` to set new input shapes for the model in case the model has: + * Multiple inputs with different zero-index dimension meanings + * Input without a batch dimension + * 0D, 1D, or 3D shape + + The `InferenceEngine::CNNNetwork::setBatchSize` method is a high-level API method that wraps the `InferenceEngine::CNNNetwork::reshape` method call and works for trivial models from the batch placement standpoint. + Use `InferenceEngine::CNNNetwork::reshape` for other models. + + Using the `InferenceEngine::CNNNetwork::setBatchSize` method for models with a non-zero index batch placement or for models with inputs that do not have a batch dimension may lead to undefined behaviour. + +You can change input shapes multiple times using the `InferenceEngine::CNNNetwork::reshape` and `InferenceEngine::CNNNetwork::setBatchSize` methods in any order. +If a model has a hard-coded batch dimension, use `InferenceEngine::CNNNetwork::setBatchSize` first to change the batch, then call `InferenceEngine::CNNNetwork::reshape` to update other dimensions, if needed. + Inference Engine takes three kinds of a model description as an input, which are converted into an `InferenceEngine::CNNNetwork` object: 1. [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md) through `InferenceEngine::Core::ReadNetwork` 2. [ONNX model](../IE_DG/OnnxImporterTutorial.md) through `InferenceEngine::Core::ReadNetwork` @@ -23,33 +53,7 @@ for (const auto & parameter : parameters) { To feed input data of a shape that is different from the model input shape, reshape the model first. -OpenVINO™ provides the following methods for runtime model reshaping: - -* **Set a new input shape** with the `InferenceEngine::CNNNetwork::reshape` method.
- The `InferenceEngine::CNNNetwork::reshape` method updates input shapes and propagates them down to the outputs of the model through all intermediate layers. - You can reshape a model multiple times like in this application scheme: - ``` - ReadNetwork -> reshape(input_1_shape) -> LoadNetwork -> infer(input_1) - \ - -> reshape(input_2_shape) -> LoadNetwork -> infer(input_2) - ``` - > **NOTES**: - > - Starting with the 2021.1 release, the Model Optimizer converts topologies keeping shape-calculating sub-graphs by default, which enables correct shape propagation during reshaping. - > - Older versions of IRs are not guaranteed to reshape successfully. Please regenerate them with the Model Optimizer of the latest version of OpenVINO™.
- > - If an ONNX model does not have a fully defined input shape and the model was imported with the ONNX importer, reshape the model before loading it to the plugin. -* **Set a new batch dimension value** with the `InferenceEngine::CNNNetwork::setBatchSize` method.
- The meaning of a model batch may vary depending on the model design. - The `InferenceEngine::CNNNetwork::setBatchSize` method deduces the index of a batch dimension based only on the input rank. - This method does not work for models with a non-zero index batch placement or models with inputs without a batch dimension. - The batch-setting algorithm does not involve the shape inference mechanism. - Batch of input and output shapes for all layers is set to a new batch value without layer validation. - It may cause both positive and negative side effects. - Due to the limitations described above, the current method is not recommended to use. - If you need to set a new batch size for the model, use the `CNNNetwork::reshape` method instead. - -Do not use runtime reshaping methods simultaneously, especially do not call the `CNNNetwork::reshape` method after you use `InferenceEngine::CNNNetwork::setBatchSize`. -The `InferenceEngine::CNNNetwork::setBatchSize` method causes irreversible conversion of the internal model representation into the legacy model representation. -The method does not use nGraph for shape inference which leads to reduced reshape opportunities and may affect the performance of the model. +Once the input shape of `InferenceEngine::CNNNetwork` is set, call the `InferenceEngine::Core::LoadNetwork` method to get an `InferenceEngine::ExecutableNetwork` object for inference with updated shapes. There are other approaches to reshape the model during the stage of IR generation or [nGraph::Function creation](../nGraph_DG/build_function.md). @@ -62,8 +66,8 @@ Shape collision during shape propagation may be a sign that a new shape does not Changing the model input shape may result in intermediate operations shape collision. Examples of such operations: -- `Reshape` operation with a hard-coded output shape value -- `MatMul` operation with the `Const` second input cannot be resized by spatial dimensions due to operation semantics +- [`Reshape` operation](../ops/shape/Reshape_1.md) with a hard-coded output shape value +- [`MatMul` operation](../ops/matrix/MatMul_1.md) with the `Const` second input cannot be resized by spatial dimensions due to operation semantics Model structure and logic should not change significantly after model reshaping. - The Global Pooling operation is commonly used to reduce output feature map of classification models output. 
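+
+For instance, the reshape-then-load workflow could look as follows with the Python API (a minimal sketch; the file paths and the target shape are placeholders):
+
+```py
+from openvino.inference_engine import IECore
+
+ie = IECore()
+net = ie.read_network('model.xml', 'model.bin')
+
+input_name = next(iter(net.input_info))
+net.reshape({input_name: [1, 3, 448, 448]})  # propagate the new input shape through the model
+exec_net = ie.load_network(net, 'CPU')       # load the network with the updated shapes
+```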
diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md index 98de8d014145c7..8ce80da1d1579b 100644 --- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md +++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md @@ -77,7 +77,6 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi * [Converting DeepSpeech from TensorFlow](prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md) * [Converting Language Model on One Billion Word Benchmark from TensorFlow](prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md) * [Converting Neural Collaborative Filtering Model from TensorFlow*](prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md) - * [Converting TensorFlow* Object Detection API Models](prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md) * [Converting TensorFlow*-Slim Image Classification Model Library Models](prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md) * [Converting CRNN Model from TensorFlow*](prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md) @@ -91,17 +90,15 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi * [Model Optimizations Techniques](prepare_model/Model_Optimization_Techniques.md) * [Cutting parts of the model](prepare_model/convert_model/Cutting_Model.md) * [Sub-graph Replacement in Model Optimizer](prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md) - * [(Deprecated) Case-Study: Converting SSD models created with the TensorFlow* Object Detection API](prepare_model/customize_model_optimizer/TensorFlow_SSD_ObjectDetection_API.md) - * [(Deprecated) Case-Study: Converting Faster R-CNN models created with the TensorFlow* Object Detection API](prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md) * [Supported Framework Layers](prepare_model/Supported_Frameworks_Layers.md) * [Intermediate Representation and Operation Sets](IR_and_opsets.md) * [Operations Specification](../ops/opset.md) * [Intermediate Representation suitable for INT8 inference](prepare_model/convert_model/IR_suitable_for_INT8_inference.md) - - * [Custom Layers in Model Optimizer](prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) + * [Model Optimizer Extensibility](prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) * [Extending Model Optimizer with New Primitives](prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md) + * [Extending Model Optimizer with Caffe Python Layers](prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md) + * [Extending Model Optimizer with Custom MXNet* Operations](prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md) * [Legacy Mode for Caffe* Custom Layers](prepare_model/customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md) - * [Model Optimizer Frequently Asked Questions](prepare_model/Model_Optimizer_FAQ.md) * [Known Issues](Known_Issues_Limitations.md) diff --git a/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md b/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md index 869cfa49d5e942..e938848a679444 100644 --- a/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md +++ b/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md @@ -108,6 +108,7 @@ Standard MXNet\* symbols: | 
SoftmaxActivation | No | | SoftmaxOutput | No | | SoftSign | No | +| Take | The attribute 'mode' is not supported | | Tile | No | | UpSampling | No | | Where | No | diff --git a/docs/MO_DG/prepare_model/convert_model/Converting_Model.md b/docs/MO_DG/prepare_model/convert_model/Converting_Model.md index b523897a773c57..2df7773b8ad57d 100644 --- a/docs/MO_DG/prepare_model/convert_model/Converting_Model.md +++ b/docs/MO_DG/prepare_model/convert_model/Converting_Model.md @@ -38,5 +38,5 @@ Framework-specific parameters for: ## See Also * [Configuring the Model Optimizer](../Config_Model_Optimizer.md) * [IR Notation Reference](../../IR_and_opsets.md) -* [Custom Layers in Model Optimizer](../customize_model_optimizer/Customize_Model_Optimizer.md) -* [Model Cutting](Cutting_Model.md) \ No newline at end of file +* [Model Optimizer Extensibility](../customize_model_optimizer/Customize_Model_Optimizer.md) +* [Model Cutting](Cutting_Model.md) diff --git a/docs/MO_DG/prepare_model/convert_model/Cutting_Model.md b/docs/MO_DG/prepare_model/convert_model/Cutting_Model.md index b208a5f5b5c307..a4bb4e98017276 100644 --- a/docs/MO_DG/prepare_model/convert_model/Cutting_Model.md +++ b/docs/MO_DG/prepare_model/convert_model/Cutting_Model.md @@ -9,7 +9,6 @@ The following examples are the situations when model cutting is useful or even r * model has pre- or post-processing parts that cannot be translated to existing Inference Engine layers. * model has a training part that is convenient to be kept in the model, but not used during inference. * model is too complex (contains lots of unsupported operations that cannot be easily implemented as custom layers), so the complete model cannot be converted in one shot. -* model is one of the supported [SSD models](../customize_model_optimizer/TensorFlow_SSD_ObjectDetection_API.md). In this case, you need to cut a post-processing part off. * problem with model conversion in the Model Optimizer or inference in the Inference Engine occurred. To localize the issue, limit the scope for conversion by iteratively searching for problematic places in the model. * single custom layer or a combination of custom layers is isolated for debugging purposes. diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md index 2eb6b1717a58f5..73e439d83fee39 100644 --- a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md +++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md @@ -1,82 +1,1300 @@ -# Custom Layers in the Model Optimizer {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer} +# Model Optimizer Extensibility {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer} -Model Optimizer searches for each layer of the input model in the list of known layers before building the model's internal representation, optimizing the model, and producing the Intermediate Representation. 
+* [Model Representation in Memory](#model-representation-in-memory)
+* [Model Conversion Pipeline](#model-conversion-pipeline)
+  * [Model Loading](#model-loading)
+  * [Operations Attributes Extracting](#operations-attributes-extracting)
+  * [Front Phase](#front-phase)
+  * [Partial Inference](#partial-inference)
+  * [Middle Phase](#middle-phase)
+  * [NHWC to NCHW Layout Change](#layout-change)
+  * [Back Phase](#back-phase)
+  * [Intermediate Representation Emitting](#ir-emitting)
+* [Graph Traversal and Modification Using `Port`s and `Connection`s](#graph-ports-and-connections)
+  * [Ports](#intro-ports)
+  * [Connections](#intro-connections)
+* [Model Optimizer Extensions](#extensions)
+  * [Model Optimizer Operation](#extension-operation)
+  * [Operation Extractor](#operation-extractor)
+  * [Graph Transformation Extensions](#graph-transformations)
+    * [Front Phase Transformations](#front-phase-transformations)
+      * [Pattern-Defined Front Phase Transformations](#pattern-defined-front-phase-transformations)
+      * [Specific Operation Front Phase Transformations](#specific-operation-front-phase-transformations)
+      * [Generic Front Phase Transformations](#generic-front-phase-transformations)
+      * [Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformations)
+      * [Front Phase Transformations Using Start and End Points](#start-end-points-front-phase-transformations)
+      * [Generic Front Phase Transformations Enabled with Transformations Configuration File](#generic-transformations-config-front-phase-transformations)
+    * [Middle Phase Transformations](#middle-phase-transformations)
+      * [Pattern-Defined Middle Phase Transformations](#pattern-defined-middle-phase-transformations)
+      * [Generic Middle Phase Transformations](#generic-middle-phase-transformations)
+    * [Back Phase Transformations](#back-phase-transformations)
+      * [Pattern-Defined Back Phase Transformations](#pattern-defined-back-phase-transformations)
+      * [Generic Back Phase Transformations](#generic-back-phase-transformations)

-The list of known layers is different for each of supported frameworks. To see the layers supported by your framework, refer to the [corresponding section](../Supported_Frameworks_Layers.md).
+The Model Optimizer extensibility mechanism allows supporting new operations and custom transformations to generate the
+optimized Intermediate Representation (IR) as described in the
+[Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../../IR_and_opsets.md). This
+mechanism is a core part of the Model Optimizer, which uses it under the hood, so the Model Optimizer
+itself is a huge set of examples of how to add custom logic to support your model.

-Custom layers are layers that are not included into a list of known layers. If your topology contains any layers that are not in the list of known layers, the Model Optimizer classifies them as custom.
+There are several cases when customization is needed:

-## Caffe\* Models with Custom Layers
+* A model contains operation(s) not known to the Model Optimizer, but these operation(s) can be expressed as a
+combination of supported operations. In this case, a custom transformation should be implemented to replace the unsupported
+operation(s) with supported ones.
+* A model contains a sub-graph of operations that can be replaced with a smaller number of operations to get better
+performance. This case corresponds to so-called fusing transformations.
For example, replace a sub-graph performing
+the following calculation \f$x / (1.0 + e^{-(\beta \cdot x)})\f$ with a single operation of type
+[Swish](../../../ops/activation/Swish_4.md).
+* A model contains a custom framework operation (an operation that is not a part of the official operation set of the
+framework) that was developed using the framework extensibility mechanism. In this case, the Model Optimizer should know
+how to handle the operation and generate a corresponding section in an IR for it.
+
+Before going into details of the Model Optimizer extensibility mechanism, it is necessary to figure out how the Model
+Optimizer represents a model in memory and converts it to an IR.
+
+> **NOTE**: All paths in this document are provided relative to the Model Optimizer installation directory if not
+> stated otherwise.
+
+## Model Representation in Memory
+The model can be represented as a directed graph, where nodes are operations and edges correspond to data passing from a
+producer operation (node) to a consumer operation (node).
+
+Model Optimizer uses an instance of the Python class `mo.graph.graph.Graph` to represent the computation graph in memory during
+the model conversion.
This class inherits from the `networkx.MultiDiGraph` class of the `networkx` Python
+library and provides many convenient methods to traverse and modify the graph. Refer to the `mo/graph/graph.py` file for
+examples.
+
+Model Optimizer keeps all necessary information about an operation in node attributes. Model Optimizer uses the class
+`mo.graph.graph.Node`, defined in the `mo/graph/graph.py` file, which is a wrapper on top of a `networkx` node attributes
+dictionary and provides many convenient methods to work with the node. For example, the attribute with the name
+`'my_attr'` of the node `my_node` can be retrieved with the code `my_node.my_attr`, which is equivalent to obtaining the
+attribute with the name `'my_attr'` from the `graph.node['my_node']` dictionary. Refer to `mo/graph/graph.py` for the
+class implementation details.
+
+An operation may have several inputs and outputs. For example, the operation [Split](../../../ops/movement/Split_1.md) has
+two inputs (data to split and the axis to split along) and a variable number of outputs depending on the value of the
+attribute `num_splits`. Each input data of the operation is passed to a specific operation **input port**. An operation
+produces output data from an **output port**. Input and output ports are numbered from 0 independently. Model Optimizer
+uses the classes `mo.graph.port.Port` and `mo.graph.connection.Connection`, which are useful abstractions for performing
+graph modifications like connecting and re-connecting nodes and traversing the graph. These classes are widely used in
+the Model Optimizer code, so it is easy to find a lot of usage examples.
+
+There is no dedicated class corresponding to an edge, so low-level graph manipulation is needed to get access to
+edge attributes if needed. Meanwhile, most manipulations with node connections should be done with the help of the
+`mo.graph.connection.Connection` and `mo.graph.port.Port` classes, because low-level graph manipulation is error-prone
+and strongly discouraged.
+
+Further details and examples related to the model representation in memory are provided in the sections below, where they
+can be explained in a proper context. Also, refer to the [Graph Traversal and Modification Using `Port`s and
+`Connection`s](#graph-ports-and-connections) section for more information on how to use ports and connections.
+
+## Model Conversion Pipeline
+A model conversion pipeline can be represented with the following diagram:
+
+![Model Conversion pipeline](../../../img/MO_conversion_pipeline.png)
+
+Let's review each conversion step in detail.
+
+### Model Loading
+Model Optimizer gets a trained model file as an input.
The model loader component of the Model Optimizer reads a model file
+using Python bindings provided with the framework and builds an in-memory representation of the computation graph. There
+is a separate loader for each supported framework. These loaders are implemented in the
+`extensions/load/<framework>/loader.py` files of the Model Optimizer.
+
+> **NOTE**: Model Optimizer uses a special parser for Caffe\* models built on top of the `caffe.proto` file. In case of a
+> model loading failure, the Model Optimizer throws an error and requests preparation of the parser that can read the model.
+> For more information on how to prepare the custom Caffe\* parser, refer to the [Model Optimizer Frequently Asked Questions #1](../Model_Optimizer_FAQ.md).
+
+The result of the model loading step is a `Graph` object, which can be depicted as in the following example:
+
+![Graph After Load](../../../img/MO_graph_after_loader.png)
+
+For each operation of an input model, the Model Optimizer loader saves the framework description of the operation
+instance (usually a Protobuf message) into a node attribute, usually with the name `pb`. It is important that this is a
+**framework-specific** description of an operation. This means that an operation, for example,
+[Convolution](../../../ops/convolution/Convolution_1.md), may be represented differently in, for example, the Caffe\* and
+TensorFlow\* frameworks but performs exactly the same calculations from a mathematical point of view.
+
+In the example above, "Operation 2" has one input and two outputs. The tensor produced from the output port 0 is
+consumed by "Operation 5" (the input port 0) and "Operation 3" (the input port 1). The tensor produced from the
+output port 1 is consumed by "Operation 4" (the input port 0).
+
+Each edge has two attributes, `in` and `out`, containing the input port number of the consumer node and the output port
+number of the producer node.
These attributes describe the fact that nodes are operations consuming some input tensors
+and producing some output tensors. But the nodes themselves are "black boxes" from the Model Optimizer perspective because
+they don't contain the required information about the operation they perform.
+
+### Operations Attributes Extracting
+The next step is to parse the framework-dependent operation representation saved in a node attribute and update the node
+attributes with the operation-specific attributes. There are three options to do this; a sketch of the recommended
+approach is shown after the list below.
+
+1. The extractor extension approach. This is the recommended way to extract attributes for an operation and it is
+explained in detail in the [Operation Extractor](#operation-extractor) section.
+
+2. The legacy approach with a built-in extractor. The file `mo/front/<framework>/extractor.py` (for example, the one
+for Caffe) defines a dictionary with extractors for specific operation types. A key in the dictionary is the type of an
+operation to trigger the extracting function for, and the value is the function itself. The function has one parameter – a node
+to extract attributes from. This is a legacy and non-extensible approach, so it should be avoided. This mechanism will be
+removed in future versions of the Model Optimizer.
+
+3. The Caffe-specific extractor using the `CustomLayersMapping.xml` file described in
+[Legacy Mode for Caffe\* Custom Layers](Legacy_Mode_for_Caffe_Custom_Layers.md). This approach is deprecated and will be
+removed in future versions of the Model Optimizer.
+
+The extractor execution order is the following:
+* `CustomLayersMapping.xml` (for Caffe models only).
+* Model Optimizer extension.
+* Built-in Model Optimizer extractor.
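+
+A minimal sketch of the recommended extractor extension approach could look as follows (schematic; it assumes an existing Model Optimizer operation class, here `ReLU` from `extensions/ops/activation_ops.py`):
+
+```py
+from extensions.ops.activation_ops import ReLU
+from mo.front.extractor import FrontExtractorOp
+
+
+class ReLUFrontExtractor(FrontExtractorOp):
+    op = 'Relu'      # the framework operation type that triggers this extractor
+    enabled = True
+
+    @classmethod
+    def extract(cls, node):
+        # Fill the node attributes from the Model Optimizer operation class
+        ReLU.update_node_stat(node)
+        return cls.enabled
+```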
+
+The result of the operation attributes extracting step can be depicted as in the following example:
+
+![Graph After Attributes Extraction](../../../img/MO_graph_after_extractors.png)
+
+The only difference in the graph from the previous step is that nodes contain a dictionary with the extracted attributes and
+operation-specific attributes needed for the Model Optimizer. But starting from this step, the Model Optimizer does not
+need the original representation of the operation/model and uses just the Model Optimizer representation (there are some
+very specific cases when the Model Optimizer still uses the `pb` attribute, and they are partially covered in this
+document). A detailed list of common node attributes and their values is provided below in the
+[Model Optimizer Operation](#extension-operation) section.
+
+### Front Phase
+For legacy reasons, a user must specify shapes for all inputs of the model that are not fully defined. In contrast, other
+machine learning frameworks, like TensorFlow\*, let the user create a model with undefined or partially defined input shapes.
+As an example, an undefined dimension is marked with an integer value `-1` in a TensorFlow\* model or has some string name
+in an ONNX\* model.
+
+During the front phase, the Model Optimizer knows the shapes of the model inputs and constants only and does not know the shapes
+(and even ranks) of the intermediate tensors. But information about shapes may not be needed to implement a particular
+transformation. For example, the transformation `extensions/front/TopKNormalize.py` removes the attribute `k` from a
+`TopK` node and adds an input constant with the value `k`. The transformation is needed to convert a `TopK` operation
+that comes from frameworks where the number of output elements is defined as an attribute of the operation to the
+OpenVINO™ [TopK](../../../ops/sort/TopK_3.md) operation semantic, which requires this value to be a separate input.
+
+It is important to mention that sometimes it seems like a transformation cannot be implemented during the front phase
+because the actual values of inputs or shapes are needed. But in fact, shape or value manipulations can be implemented
+using operations that are added to the graph. Consider the transformation
+`extensions/front/onnx/flattenONNX_to_reshape.py`, which replaces the ONNX\* operation
+[Flatten](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Flatten) with a sub-graph of operations performing
+the following (for the case when `axis` is not equal to 0 or 1):
+
+1. Calculate the shape of the `Flatten` input tensor using the [ShapeOf](../../../ops/shape/ShapeOf_3.md) operation.
+2. Get the first `axis` elements from the output of the `ShapeOf` operation and calculate their product using the
+[ReduceProd](../../../ops/reduction/ReduceProd_1.md) operation.
+3. Concatenate the output of the `ReduceProd` with a constant with the value `-1` (refer to the
+[Reshape](../../../ops/shape/Reshape_1.md) specification for an explanation of this value).
+4. Use the concatenated value as the second input to the `Reshape` operation.
+
+It is highly recommended to write shape-agnostic transformations to avoid model reshape-ability issues. Refer to
+[Using Shape Inference](../../../IE_DG/ShapeInference.md) for more information related to the reshaping of a model.
+
+More information on how to develop front phase transformations and a dedicated API description is provided in the
+[Front Phase Transformations](#front-phase-transformations) section; a small example follows.
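+
+For illustration, a front transformation in the spirit of the described `TopKNormalize.py` could be sketched as follows (simplified, not the actual implementation):
+
+```py
+import numpy as np
+
+from mo.front.common.replacement import FrontReplacementPattern
+from mo.graph.graph import Graph
+from mo.ops.const import Const
+
+
+class TopKNormalizeSketch(FrontReplacementPattern):
+    enabled = True
+
+    def find_and_replace_pattern(self, graph: Graph):
+        for topk in graph.get_op_nodes(op='TopK'):
+            if topk.has_valid('k') and topk.in_port(1).disconnected():
+                # Move the attribute 'k' into a constant connected to input port 1
+                k_const = Const(graph, {'value': np.array(topk.k)}).create_node()
+                topk.in_port(1).connect(k_const.out_port(0))
+```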
### Partial Inference
+Model Optimizer performs a partial inference of a model during the model conversion. This procedure includes output shape
+calculation for all operations in a model and constant folding (value calculation for constant sub-graphs). Constant
+folding is needed for the shape inference because in some cases evaluation of a constant sub-graph is needed to calculate
+output shapes. For example, the output shape for the [Reshape](../../../ops/shape/Reshape_1.md) operation may be
+defined as a mathematical expression using the [ShapeOf](../../../ops/shape/ShapeOf_3.md) operation output.
+
+> **NOTE**: Model Optimizer does not fold sub-graphs starting from the [ShapeOf](../../../ops/shape/ShapeOf_3.md)
+> operation by default because this makes the model non-reshape-able (the command-line parameter `--static_shape`
+> can override this behavior). Refer to [Using Shape Inference](../../../IE_DG/ShapeInference.md) for more information
+> related to reshaping of a model.
+
+Model Optimizer calculates output shapes for all operations in a model to write them to the Intermediate Representation
+files.
+
+> **NOTE**: This is a legacy requirement, because starting from IR version 10 the Inference Engine needs to know the shapes of
+> the [Const](../../../ops/infrastructure/Constant_1.md) and [Parameter](../../../ops/infrastructure/Parameter_1.md)
+> operations only. The nGraph component of the Inference Engine calculates output shapes for all operations in a model
+> using the shapes of the [Parameter](../../../ops/infrastructure/Parameter_1.md) and
+> [Const](../../../ops/infrastructure/Constant_1.md) operations defined with respective operation attributes.
+
+Model Optimizer inserts "data" nodes into the computation graph before starting the partial inference phase. A data node
+corresponds to the specific tensor produced by an operation. Each data node contains two attributes: `shape`,
+containing the shape of the tensor, and `value`, which may contain the actual value of the tensor. The `value`
+attribute is equal to `None` if the tensor value cannot be calculated. This happens in two cases: when the tensor value
+depends on the values passed to a [Parameter](../../../ops/infrastructure/Parameter_1.md) operation of the model, or when the
+Model Optimizer does not have a value propagation implementation for the operation.
+
+The graph before running the partial inference can be depicted as in the following example:
+
+![Graph Before Partial Inference](../../../img/MO_graph_before_partial_inference.png)
+
+Compared with the graph during the front phase, the graph structure differs not only in the data nodes, but also in the
+edge attributes. Note that an `out` attribute is specified for edges **from operation** nodes only, while an `in`
+attribute is specified for edges **from data** nodes only. This corresponds to the fact that a tensor (data node) is
+produced from a specific output port of an operation and is consumed by a specific input port of an operation. Also,
+a unique data node is created for each output port of an operation and may be used as an input node for several
+operation nodes, like the data node "data2_0", which is consumed by the input port 1 of the operation "Operation 3" and
+the input port 0 of the operation "Operation 5".
+
+Now consider how the Model Optimizer performs shape and value propagation. Model Optimizer performs a topological sort
+of the graph nodes. An error message is thrown if the graph contains a cycle.
+Then shape inference functions are called for each node in the graph according to the topological order. Each node of
+the graph must have an attribute called `infer` with a shape inference function, which is a function with one
+parameter – an instance of the `Node` class. The `infer` attribute is usually set in the operation extractor or when a
+node is added in some transformation using the Model Optimizer operation class inherited from the `mo.ops.Op` class.
+Refer to the [Model Optimizer Operation](#extension-operation) and [Operation Extractor](#operation-extractor) for
+more information on how to specify a shape inference function.
+
+A shape inference function should calculate an operation (node) output shape(s) based on input shape(s) and operation
+(node) attribute(s) and update the `shape` and optionally the `value` attributes of the corresponding data node(s). A
+simplified example of the shape inference function for the [Reshape](../../../ops/shape/Reshape_1.md) operation (the
+full version is available in the file `mo/ops/reshape.py`):
+
+```py
+    @staticmethod
+    def infer(node: Node):
+        name = node.soft_get('name', node.id)
+
+        input_shape = node.in_port(0).data.get_shape()  # get the input tensor shape
+        new_shape = node.in_port(1).data.get_value()  # get the value defining the output tensor shape. This tensor may
+                                                      # have special values like 0 and -1
+
+        output_shape = ...  # calculate output shape without special values like 0 and -1
+
+        if node.in_port(0).data.get_value() is not None:  # if the input value is defined then calculate output value;
+                                                          # shape will be updated automatically with the value shape
+            node.out_port(0).data.set_value(node.in_port(0).data.get_value().reshape(output_shape))
+        else:  # in the opposite case calculate the output shape only
+            node.out_port(0).data.set_shape(output_shape)
+```
+
+Methods `in_port()` and `out_port()` of the `Node` class are used to get and set data node attributes. Refer to the
+[Graph Traversal and Modification Using `Port`s and `Connection`s](#graph-ports-and-connections) section for details
+on how to use them.
+
+> **NOTE**: A shape inference function should perform output shape calculation in the original model layout. For
+> example, OpenVINO™ supports Convolution operations in NCHW layout only but TensorFlow\* supports NHWC layout as
+> well. Model Optimizer shape inference functions calculate output shapes for NHWC Convolutions in NHWC layout and the
+> shapes are converted to NCHW only during the layout change phase.
+
+> **NOTE**: There is a legacy approach to read data node attributes like `input_shape = op_node.in_node(0).shape` and
+> modify data node attributes like `op_node.out_node(0).shape = some_value`. This approach is still used in the Model
+> Optimizer code but is not recommended. Instead, use the approach described in the [Ports](#intro-ports) section.
+
+### Middle Phase
+The middle phase starts after the partial inference. At this phase a graph contains data nodes, and output shapes of
+all operations in the graph have been calculated. Any transformation implemented at this stage must update the `shape`
+attribute for all newly added operations. It is highly recommended to use the API described in the
+[Graph Traversal and Modification Using `Port`s and `Connection`s](#graph-ports-and-connections) section because
+modification of a graph using this API causes automatic re-inference of affected nodes as well as creation of the
+necessary data nodes.
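+
+For example, reading and setting shapes through the ports API compares to the legacy accessors as follows (a short
+sketch; `op_node` stands for any operation `Node` after the data nodes have been added to the graph):
+
+```py
+# legacy approach (still present in the code base, not recommended)
+input_shape = op_node.in_node(0).shape
+op_node.out_node(0).shape = input_shape
+
+# recommended ports-based API performing the same actions
+input_shape = op_node.in_port(0).data.get_shape()
+op_node.out_port(0).data.set_shape(input_shape)
+```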
+
+More information on how to develop middle transformations and dedicated API description is provided in the
+[Middle Phase Transformations](#middle-phase-transformations).
+
+### NHWC to NCHW Layout Change
+There are several middle transformations responsible for changing model layout from NHWC to NCHW. These
+transformations are triggered by default for TensorFlow\* models only because it is the only framework with
+Convolution operations in NHWC layout.
+
+> **NOTE**: If a TensorFlow\* model is in NCHW layout then a user should specify the `--disable_nhwc_to_nchw` command
+> line parameter to disable these transformations.
+
+The layout change is a complex problem and a detailed explanation of it is out of the scope of this document. A very
+brief explanation of this process is provided below:
+
+1. Model Optimizer changes output shapes of most of the operations producing 4D and 5D (four dimensional and five
+dimensional) tensors as if they were in NHWC layout to NCHW layout: `nchw_shape = np.array(nhwc_shape)[[0, 3, 1, 2]]`
+for 4D and `nchw_shape = np.array(nhwc_shape)[[0, 4, 1, 2, 3]]` for 5D. This permutation does not happen for some
+operations with specific conditions identified during a model conversion.
+2. Model Optimizer inserts [Gather](../../../ops/movement/Gather_1.md) operations into the sub-graphs related to shape
+calculation to perform the shape calculation in the correct layout.
+3. Model Optimizer inserts [Transpose](../../../ops/movement/Transpose_1.md) operations for some operations with
+specific conditions identified during a model conversion to produce correct inference results.
+
+The main transformations responsible for the layout change are: `extensions/middle/ApplyPermutations.py`,
+`extensions/middle/InsertLayoutPropagationTransposes.py`, `extensions/middle/MarkSubgraphsWithCorrectLayout.py`,
+`extensions/middle/ApplyNHWCtoNCHWpermutation.py` and `extensions/middle/LayoutChangeForConstantShapePaths.py`.
+Refer to the source code of these transformations for more details on how the layout change works.
+
+### Back Phase
+The back phase starts after the layout change to NCHW. This phase contains mostly the following transformations:
+
+1. Transformations that should work with a graph in the NCHW layout and thus cannot be implemented in the middle
+phase.
+2. Transformations that replace nodes corresponding to internal Model Optimizer operations with nodes corresponding to
+[opset](@ref openvino_docs_ops_opset) operations.
+3. Transformations that normalize operation inputs according to the specification.
+4. Final optimization transformations.
+
+A graph structure during the back phase is the same as during the middle phase. There is no difference in writing
+middle and back transformations.
+
+More information on how to develop back transformations and dedicated API description is provided in the
+[Back Phase Transformations](#back-phase-transformations).
+
+### Intermediate Representation Emitting
+The last phase of a model conversion is the Intermediate Representation emitting. Model Optimizer performs the
+following steps:
+
+1. Iterates over all operation nodes in the graph and checks that all nodes have the attribute `type` set. This
+attribute defines the operation type and is used in the Inference Engine to instantiate the proper operation from the
+[opset](@ref openvino_docs_ops_opset) specified in the `version` attribute of the node. If some node does not have the
+attribute `type` or its value is equal to `None` then the Model Optimizer exits with an error.
+2. Performs type inference of graph operations similar to the shape inference. Inferred data types are saved to port
+attributes in the IR.
+3. Performs topological sort of the graph and changes the `id` attribute of all operation nodes to be sequential
+integer values starting from 0.
+4. Saves all Constant values to the `.bin` file. Constants with the same value are shared among different operations.
+5. Generates an `.xml` file defining a graph structure. The information about operation inputs and outputs is prepared
+uniformly for all operations regardless of their type. A list of attributes to be saved to the `.xml` file is defined
+with the `backend_attrs()` or `supported_attrs()` methods of the `Op` class used for a graph node instantiation. For
+more information on how the operation attributes are saved to XML refer to the function `prepare_emit_ir()` in
+the `mo/pipeline/common.py` file and [Model Optimizer Operation](#extension-operation).
+
+## Graph Traversal and Modification Using `Port`s and `Connection`s
+There are three APIs for graph traversal and transformation used in the Model Optimizer:
+1. The API provided with the `networkx` Python library for the `networkx.MultiDiGraph` class which is the base class
+for the `mo.graph.graph.Graph` object. Refer to the [Model Representation in Memory](#model-representation-in-memory)
+for more details. For example, the following methods belong to this API level: `graph.add_edges_from([list])`,
+`graph.add_node(x, attrs)`, `graph.out_edges(node_id)` etc., where `graph` is an instance of the
+`networkx.MultiDiGraph` class. **This is the lowest-level API and its usage should be avoided in the Model Optimizer
+transformations**.
+2. The API built around the `mo.graph.graph.Node` class. The `Node` class is the primary class to work with graph
+nodes and their attributes. **There are some `Node` class methods not recommended for use and some functions defined
+in `mo.graph.graph` have been deprecated**. Examples of such methods and functions are:
+`node.in_node(y)`, `node.out_node(x)`, `node.get_outputs()`, `node.insert_node_after(n1, y)`, `create_edge(n1, n2)`
+etc. Refer to the `mo/graph/graph.py` for more details.
+3. The high-level API called Model Optimizer Graph API which uses `mo.graph.graph.Graph`, `mo.graph.port.Port` and
+`mo.graph.connection.Connection` classes. For example, the following methods belong to this API level:
+`node.in_port(x)`, `node.out_port(y)`, `port.get_connection()`, `connection.get_source()`,
+`connection.set_destination(dest_port)` etc. **This is the recommended API to be used in the Model Optimizer
+transformations and operations implementation**.
+
+The main benefit of using the Model Optimizer Graph API is that it hides some internal implementation details (the
+fact that the graph contains data nodes), provides an API to perform safe and predictable graph manipulations and adds
+operation semantics to the graph. This is achieved with the introduction of the concepts of ports and connections.
+This chapter is dedicated to the Model Optimizer Graph API and does not cover the other two non-recommended APIs.
+
+### Ports
+The semantics of an operation describe how many inputs and outputs the operation has.
+For example, operations
+[Parameter](../../../ops/infrastructure/Parameter_1.md) and [Const](../../../ops/infrastructure/Constant_1.md) have no
+inputs and have one output, operation [ReLU](../../../ops/activation/ReLU_1.md) has one input and one output, and
+operation [Split](../../../ops/movement/Split_1.md) has two inputs and a variable number of outputs depending on the
+value of the attribute `num_splits`.
+
+Each operation node in the graph (an instance of the `Node` class) has zero or more input and output ports (instances
+of the `mo.graph.port.Port` class). A `Port` object has several attributes:
+* `node` - the instance of the `Node` object the port belongs to.
+* `idx` - the port number. Input and output ports are numbered independently starting from `0`. Thus operation
+[ReLU](../../../ops/activation/ReLU_1.md) has one input port (with index `0`) and one output port (with index `0`).
+* `type` - the type of the port. It can be equal to either `"in"` or `"out"`.
+* `data` - the object which should be used to get attributes of the corresponding data node. This object has methods
+`get_shape()` / `set_shape()` and `get_value()` / `set_value()` to get/set shape/value of the corresponding data node.
+For example, `in_port.data.get_shape()` returns an input shape of a tensor connected to input port `in_port`
+(`in_port.type == 'in'`), `out_port.data.get_value()` returns a value of a tensor produced from output port `out_port`
+(`out_port.type == 'out'`).
+
+> **NOTE**: Functions `get_shape()` and `get_value()` return `None` until the partial inference phase. Refer to the
+> [Model Conversion Pipeline](#model-conversion-pipeline) for more information about model conversion phases and
+> [Partial Inference](#partial-inference) about the partial inference phase.
+
+There are several methods of the `Node` class to get the instance of a corresponding port:
+* `in_port(x)` and `out_port(x)` to get the input/output port with number `x`.
+* `in_ports()` and `out_ports()` to get a dictionary where the key is a port number and the value is the corresponding
+input/output port.
+
+Attributes `in_ports_count` and `out_ports_count` of the `Op` class instance define the default number of input and
+output ports to be created for the `Node`. However, additional input/output ports can be added using methods
+`add_input_port()` and `add_output_port()`. Ports can also be removed using the `delete_input_port()` and
+`delete_output_port()` methods.
+
+The `Port` class is just an abstraction which works with edges incoming/outgoing to/from a specific `Node` instance.
+For example, the output port with `idx = 1` corresponds to the outgoing edge of a node with an attribute `out = 1`,
+and the input port with `idx = 2` corresponds to the incoming edge of a node with an attribute `in = 2`.
+
+Consider an example of a graph part with four operation nodes "Op1", "Op2", "Op3" and "Op4" and a number of data nodes
+depicted with light green boxes.
+
+![Ports example 1](../../../img/MO_ports_example_1.png)
+
+Operation nodes have input ports (yellow squares) and output ports (light purple squares). An input port may not be
+connected; for example, input port 2 of node "Op1" does not have an incoming edge. An output port, in contrast, always
+has an associated data node (after the partial inference phase when the data nodes are added to the graph), which may
+have no consumers.
+
+Ports can be used to traverse a graph. The method `get_source()` of an input port returns an output port producing the
+tensor the input port consumes.
+It is important that the method works the same during the front, middle and back phases
+of a model conversion even though the graph structure changes (there are no data nodes in the graph during the front
+phase).
+
+Let's assume that there are four instances of the `Node` object: `op1`, `op2`, `op3` and `op4`, corresponding to nodes
+"Op1", "Op2", "Op3" and "Op4", respectively. The result of `op2.in_port(0).get_source()` and
+`op4.in_port(1).get_source()` is the same object `op1.out_port(1)` of type `Port`.
+
+The method `get_destination()` of an output port returns the input port of the node consuming this tensor. If there
+are multiple consumers of this tensor then an error is raised. The method `get_destinations()` of an output port
+returns a list of input ports consuming the tensor.
+
+The method `disconnect()` removes a node incoming edge corresponding to the specific input port. The method removes
+several edges if it is applied during the front phase for a node output port connected with multiple nodes.
+
+The method `port.connect(another_port)` connects output port `port` and input port `another_port`. The method handles
+the situation when the graph contains data nodes (middle and back phases): it does not just create an edge between two
+nodes but also automatically creates a data node or re-uses an existing one. If the method is used during the front
+phase, when data nodes do not exist, it creates an edge and properly sets the `in` and `out` edge attributes.
+
+For example, applying the following two methods to the graph above will result in the graph depicted below:
+
+```py
+op4.in_port(1).disconnect()
+op3.out_port(0).connect(op4.in_port(1))
+```
+
+![Ports example 2](../../../img/MO_ports_example_2.png)
+
+> **NOTE**: Refer to the `Node` class implementation in the `mo/graph/graph.py` and the `Port` class implementation in
+the `mo/graph/port.py` for a full list of available methods.
+
+### Connections
+Connection is a concept introduced to perform graph modifications easily and reliably. A connection corresponds to a
+link between a source output port with one or more destination input ports or a link between a destination input port
+and a source output port producing data. So each port is connected with one or more ports with the help of a
+connection. Model Optimizer uses the `mo.graph.connection.Connection` class to represent a connection.
+
+There is only one method `get_connection()` of the `Port` class to get the instance of the corresponding `Connection`
+object. If the port is not connected then the returned value is `None`.
+
+For example, the method `op3.out_port(0).get_connection()` returns a `Connection` object encapsulating the edge from
+node "Op3" to data node "data_3_0" and two edges from data node "data_3_0" to two ports of the node "Op4".
+
+The `Connection` class provides methods to get the source and destination(s) ports the connection corresponds to:
+* `connection.get_source()` - returns an output `Port` object producing the tensor.
+* `connection.get_destinations()` - returns a list of input `Port`s consuming the data.
+* `connection.get_destination()` - returns a single input `Port` consuming the data. If there are multiple consumers
+then an exception is raised.
+
+The `Connection` class provides methods to modify a graph by changing a source or destination(s) of a connection. For
+example, the function call `op3.out_port(0).get_connection().set_source(op1.out_port(0))` changes the source port of
+edges consuming data from port `op3.out_port(0)` to `op1.out_port(0)`.
+The transformed graph from the sample above is depicted below:
+
+![Connection example 1](../../../img/MO_connection_example_1.png)
+
+Another example is the method `connection.set_destination(dest_port)`. It disconnects all input ports the connection
+is currently connected to, disconnects `dest_port` and connects the connection source port to `dest_port`.
+
+Note that connections work seamlessly during the front, middle and back phases and hide the fact that the graph
+structure is different.
+
+> **NOTE**: Refer to the `Connection` class implementation in the `mo/graph/connection.py` for a full list of
+available methods.
+
+## Model Optimizer Extensions
+Model Optimizer extensions allow injecting some logic into the model conversion pipeline without changing the Model
+Optimizer core code. There are three types of the Model Optimizer extensions:
+
+1. Model Optimizer operation.
+2. A framework operation extractor.
+3. A model transformation which can be executed during the front, middle or back phase of the model conversion.
+
+An extension is just a plain text file with Python code. The file should contain a class (or classes) inherited from
+one of the extension base classes. Extension files should be saved to a directory with the following structure (the
+placeholders in angle brackets stand for user-chosen directory names):
+
+```sh
+./<directory_with_extensions>/
+    ops/                 - custom operations
+    front/               - framework independent front transformations
+        <framework_1>/   - front transformations for <framework_1> models only and extractors for operations
+        <framework_2>/   - front transformations for <framework_2> models only and extractors for operations
+        ...
+    middle/              - middle transformations
+    back/                - back transformations
+```
+
+Model Optimizer uses the same layout internally to keep built-in extensions. The only exception is that the directory
+`mo/ops/` is also used as a source of the Model Optimizer operations due to historical reasons.
+
+> **NOTE**: The name of a root directory with extensions should not be equal to "extensions" because it will result in
+> a name collision with the built-in Model Optimizer extensions.
+
+> **NOTE**: Model Optimizer itself is built using these extensions so there is a huge number of examples on how to use
+> them in the Model Optimizer code.
+
+### Model Optimizer Operation
+Model Optimizer defines a class `mo.ops.Op` (referred to as `Op` later in this document for brevity) which is a base
+class for an operation used in the Model Optimizer. The instance of the `Op` class serves several purposes:
+
+1. Stores the operation attributes.
+2. Stores the operation shape/value and type inference functions.
+3. Defines operation attributes to be saved to the corresponding IR section.
+4. Contains convenient methods to create a graph node from an `Op` object instance and connect it with the existing
+graph.
+5. Used in the extractors to store parsed attributes and operation specific attributes in the dedicated graph node.
+
+It is important to mention that there is no connection between the instance of the `Op` class and the `Node` object
+created from it. The `Op` class is just an attributes container describing the operation. Model Optimizer uses the
+`Op` class during a model conversion to create a node of the graph with attributes copied from the `Op` class
+instance. Graph manipulations are performed with graph `Node`s and their attributes and do not involve `Op`s.
+
+There are a number of common attributes used in the operations. Here is the list of these attributes with a
+description.
+
+* `id` — unique identifier of a node in a graph. Generated automatically, equal to the number of nodes in the graph
+plus 1 if not specified.
+**Mandatory**.
+* `name` — name of the operation. Generated automatically, equal to the `id` if not specified. **Mandatory**.
+* `type` — type of the operation according to the [opset specification](@ref openvino_docs_ops_opset). For the
+internal Model Optimizer operations this attribute should be set to `None`. The model conversion fails if an operation
+with `type` equal to `None` comes to the IR emitting phase. **Mandatory**.
+* `version` — the operation set (opset) name the operation belongs to. If not specified then the Model Optimizer sets
+it equal to `experimental`. Refer to [nGraph Basic Concepts](@ref openvino_docs_nGraph_DG_basic_concepts) for more
+information about operation sets. **Mandatory**.
+* `op` — Model Optimizer type of the operation. In many cases the value of `type` is equal to the value of `op`. But
+when the Model Optimizer cannot instantiate an opset operation during model loading it creates an instance of an
+internal operation and the attribute `op` is used as a type of this internal operation. Later in the pipeline the node
+created from an internal operation will be replaced during the front, middle or back phase with node(s) created from
+the opset.
+* `infer` — the attribute defines a function calculating output tensor(s) shape and optionally value(s). The attribute
+may be set to `None` for internal Model Optimizer operations used during the front phase only. Refer to the
+[Partial Inference](#partial-inference) section for more information about the shape inference function.
+* `type_infer` — the attribute defines a function calculating output tensor(s) data type. If the attribute is not
+defined then the default function is used. The function checks if the node attribute `data_type` is set and then
+propagates this type to the output tensor from port 0; otherwise, it propagates the data type of the tensor coming
+into input port 0 to the output tensor from port 0.
+* `in_ports_count` — default number of input ports to be created for the operation. Additional ports can be created or
+redundant ports can be removed using dedicated `Node` class API methods.
+* `out_ports_count` — default number of output ports to be created for the operation. Additional ports can be created
+or redundant ports can be removed using dedicated `Node` class API methods.
+
+Here is an example of the Model Optimizer class for the operation [SoftMax](../../../ops/activation/SoftMax_1.md) from
+the file `mo/ops/softmax.py` with in-code comments.
+
+```py
+class Softmax(Op):
+    # the class attribute defines a name of the operation so the operation class can be obtained using the
+    # "Op.get_op_class_by_name()" static method
+    op = 'SoftMax'
+
+    # the operation works as an extractor by default. This is a legacy behavior which is not recommended currently;
+    # thus the "enabled" class attribute is set to False.
+    # The recommended approach is to use a dedicated extractor extension.
+    enabled = False
+
+    def __init__(self, graph: Graph, attrs: dict):
+        super().__init__(graph, {  # the constructor of the base class Op is called with additional default attributes
+            'type': __class__.op,  # the operation is from the opset so the type is set to 'SoftMax'
+            'op': __class__.op,  # internal Model Optimizer operation has the same type
+            'version': 'opset1',  # the operation corresponds to opset1
+            'infer': Softmax.infer,  # shape inference function is defined below
+            'axis': 1,  # default value for the "axis" attribute of the operation SoftMax
+            'in_ports_count': 1,  # the operation has one input
+            'out_ports_count': 1,  # the operation produces one output
+        }, attrs)
+
+    # the method returns a list of operation-specific attributes. This method is important when implementing an
+    # extractor inherited from the CaffePythonFrontExtractorOp class to extract attributes for a Caffe Python
+    # operation. But currently it is used interchangeably with the "backend_attrs()" method. If "backend_attrs()" is
+    # not defined then "supported_attrs()" is used instead. In this particular case the operation has just one
+    # attribute "axis"
+    def supported_attrs(self):
+        return ['axis']
+
+    @staticmethod
+    def infer(node: Node):
+        "some code calculating output shape and values"
+```
+
+There is a dedicated method called `backend_attrs()` defining a list of attributes to be saved to the IR. Consider an
+example from the `mo/ops/pooling.py` file:
+```py
+    def backend_attrs(self):
+        return [
+            ('strides', lambda node: ','.join(map(str, node['stride'][node.spatial_dims]))),
+            ('kernel', lambda node: ','.join(map(str, node['window'][node.spatial_dims]))),
+
+            ('pads_begin', lambda node: ','.join(map(str, get_backend_pad(node.pad, node.spatial_dims, 0)))),
+            ('pads_end', lambda node: ','.join(map(str, get_backend_pad(node.pad, node.spatial_dims, 1)))),
+
+            ('pool-method', 'pool_method'),
+            ('exclude-pad', 'exclude_pad'),
+
+            'rounding_type',
+            'auto_pad',
+        ]
+```
+
+The `backend_attrs()` function returns a list of records. A record can be of one of the following formats:
+1. A string defining the attribute to be saved to the IR. If the value of the attribute is `None` then the attribute
+is not saved. Examples of this case are `rounding_type` and `auto_pad`.
+2. A tuple where the first element is a string defining the name of the attribute as it will appear in the IR and the
+second element is a function to produce the value for this attribute. The function gets an instance of the `Node` as
+the only parameter and returns a string with the value to be saved to the IR. Examples of this case are `strides`,
+`kernel`, `pads_begin` and `pads_end`.
+3. A tuple where the first element is a string defining the name of the attribute as it will appear in the IR and the
+second element is the name of the `Node` attribute to get the value from. Examples of this case are `pool-method` and
+`exclude-pad`.
+
+### Operation Extractor
+Model Optimizer runs a specific extractor for each operation in the model during the model loading. Refer to the
+[operations-attributes-extracting](#operations-attributes-extracting) for more information about this process.
+
+There are several types of Model Optimizer extractor extensions:
+1. The generic one which is described in this section.
+2. The special extractor for Caffe\* models with Python layers.
+This kind of extractor is described in
+[Extending the Model Optimizer with Caffe* Python Layers](Extending_Model_Optimizer_with_Caffe_Python_Layers.md).
+3. The special extractor for MXNet\* models with custom operations. This kind of extractor is described in
+[Extending the Model Optimizer for Custom MXNet* Operations](Extending_MXNet_Model_Optimizer_with_New_Primitives.md).
+4. The special extractor and fallback to Caffe\* for shape inference is described in
+[Legacy Mode for Caffe* Custom Layers](Legacy_Mode_for_Caffe_Custom_Layers.md).
+
+This chapter is focused on option 1 which provides a generic mechanism for the operation extractor applicable for all
+frameworks. Model Optimizer provides the class `mo.front.extractor.FrontExtractorOp` as a base class to implement the
+extractor. It has a class method `extract` which gets a single parameter, the `Node` corresponding to the graph node
+to extract data from. The operation description in the original framework format is stored in the attribute `pb` of
+the node. The goal of the extractor is to parse this attribute and save the necessary attributes to the corresponding
+node of the graph. Consider the extractor for the TensorFlow\* operation `Const` (refer to the file
+`extensions/front/tf/const_ext.py`):
+
+```py
+from mo.front.extractor import FrontExtractorOp
+from mo.front.tf.extractors.utils import tf_dtype_extractor, tf_tensor_shape, tf_tensor_content
+from mo.ops.const import Const
+
+
+class ConstExtractor(FrontExtractorOp):
+    # the "op" class attribute defines the type of the operation in the framework (in this case TensorFlow) for which
+    # the extractor should be triggered
+    op = 'Const'
+    enabled = True  # the flag that indicates that this extractor is enabled
+
+    @classmethod
+    def extract(cls, node):  # the entry point of the extractor
+        # node.pb attribute stores the TensorFlow representation of the operation which is a Protobuf message of the
+        # specific format. In particular the message contains the attribute called "value" containing the description
+        # of the constant.
The string "pb.attr["value"].tensor" is just a Python binding for Protobuf message parsing + pb_tensor = node.pb.attr["value"].tensor + # get the shape of the tensor from the protobuf message using the helper function "tf_tensor_shape" + shape = tf_tensor_shape(pb_tensor.tensor_shape) + # create a dictionary with necessary attributes + attrs = { + 'shape': shape, + # get the tensor value using "tf_tensor_content" helper function + 'value': tf_tensor_content(pb_tensor.dtype, shape, pb_tensor), + # get the tensor data type using "tf_dtype_extractor" helper function + 'data_type': tf_dtype_extractor(pb_tensor.dtype), + } + # update the node attributes using default attributes from the "Const" operation and attributes saved to the + # "attrs" dictionary + Const.update_node_stat(node, attrs) + return cls.enabled +``` + +Consider another example with an extractor of ONNX\* operation `Constant` (refer to the file +`extensions/front/onnx/const_ext.py`): + +```py +from onnx import numpy_helper +from onnx.numpy_helper import to_array + +from mo.front.extractor import FrontExtractorOp +from mo.front.onnx.extractors.utils import onnx_attr +from mo.ops.const import Const + + +class ConstantExtractor(FrontExtractorOp): + op = 'Constant' + enabled = True + + @classmethod + def extract(cls, node): + # use helper method "onnx_attr" which parses the Protobuf representation of the operation saved in the "node" + # gets the value of the attribute with name "value" as "TensorProto" type (specified with a keyword "t") + pb_value = onnx_attr(node, 'value', 't') + # use ONNX helper method "numpy_helper.to_array()" to convert "TensorProto" object to a numpy array + value = numpy_helper.to_array(pb_value) + + attrs = { + 'data_type': value.dtype, + 'value': value, + } + # update the node attributes using default attributes from the "Const" operation and attributes saved to the + # "attrs" dictionary + Const.update_node_stat(node, attrs) + return cls.enabled +``` + +The extractors for operations from different frameworks work similarly. The only difference is in the helper methods +used to parse operation attributes encoded with a framework-specific representation. + +A common practice is to use `update_node_stat()` method of the dedicated `Op` class to update the node attributes. This +method does the following: + +1. Sets values for common attributes like `op`, `type`, `infer`, `in_ports_count`, `out_ports_count`, `version` etc to +values specific to the dedicated operation (`Const` operation in this case). +2. Uses methods `supported_attrs()` and `backend_attrs()` defined in the `Op` class to update specific node attribute +`IE`. The IR emitter uses the value stored in the `IE` attribute to pre-process attribute values and save them to IR. +3. Optionally sets additional attributes provided to the `update_node_stat()` function as a second parameter. Usually +these attributes are parsed from the particular instance of the operation. + +> **NOTE**: Model Optimizer uses numpy arrays to store values and numpy arrays of type `np.int64` to store shapes in the +> graph. + +### Graph Transformation Extensions +Model Optimizer provides various base classes to implement [Front Phase Transformations](#front-phase-transformations), +[Middle Phase Transformations](#middle-phase-transformations) and [Back Phase Transformations](#back-phase-transformations). +All classes have the following common class attributes and methods: +1. Attribute `enabled` specifies whether the transformation is enabled or not. 
+The value can be changed during runtime to enable
+or disable execution of the transformation during a model conversion. Default value is `True`.
+2. Attribute `id` specifies a unique transformation string identifier. This transformation identifier can be used to
+enable (disable) the transformation by setting the environment variable `MO_ENABLED_TRANSFORMS`
+(`MO_DISABLED_TRANSFORMS`) with a comma separated list of `id`s. The environment variables override the value of the
+`enabled` attribute of the transformation. Instead of using the `id` attribute value, you can add a fully defined
+class name to the `MO_ENABLED_TRANSFORMS` (`MO_DISABLED_TRANSFORMS`) variable, for example,
+`extensions.back.NormalizeToNormalizeL2.NormalizeToNormalizeL2`. Optional attribute.
+3. Attribute `run_not_recursively` specifies whether the transformation should be executed in the sub-graphs, for
+example, the body of the [TensorIterator](../../../ops/infrastructure/TensorIterator_1.md) and
+[Loop](../../../ops/infrastructure/Loop_5.md). Default value is `True`.
+4. Attribute `force_clean_up` specifies whether the graph clean up should be executed after the transformation. The
+graph cleanup removes nodes of the graph not reachable from the model inputs. Default value is `False`.
+5. Attribute `force_shape_inference` specifies whether the nodes marked with the attribute `need_shape_inference`
+equal to `True` should be re-inferred after the transformation. Model Optimizer sets this attribute automatically for
+nodes whose input(s) were changed during the transformation, or a developer can set this attribute manually in the
+transformation for specific nodes. Default value is `False`.
+6. Attribute `graph_condition` specifies a list of functions with one parameter -- the `Graph` object. The
+transformation is executed if and only if all functions return `True`. If the attribute is not set then no check is
+performed.
+7. Method `run_before()` returns a list of transformation classes which this transformation should be executed before.
+8. Method `run_after()` returns a list of transformation classes which this transformation should be executed after.
+
+> **NOTE**: Some of the transformation types have specific class attributes and methods which are explained in the
+> corresponding sections of this document.
+
+Model Optimizer builds a graph of dependencies between registered transformations and executes them in the topological
+order. In order to execute the transformation during a proper model conversion phase the Model Optimizer defines
+several anchor transformations which do nothing. All transformations are ordered with respect to these anchor
+transformations. The diagram below shows anchor transformations, some of the built-in transformations and dependencies
+between them:
+
+![Transformations Graph](../../../img/MO_transformations_graph.png)
+
+User defined transformations are executed after the corresponding `Start` and before the corresponding `Finish` anchor
+transformations by default (if the `run_before()` and `run_after()` methods have not been overridden).
+
+> **NOTE**: The `PreMiddleStart` and `PostMiddleStart` anchors were introduced due to historical reasons to refactor
+> the Model Optimizer pipeline which initially had a hardcoded order of transformations.
+
+#### Front Phase Transformations
+There are several types of front phase transformations:
+
+1. [Pattern-Defined Front Phase Transformations](#pattern-defined-front-phase-transformations) triggered for each
+sub-graph of the original graph isomorphic to the specified pattern.
+2. [Specific Operation Front Phase Transformations](#specific-operation-front-phase-transformations) triggered for the
+node with a specific `op` attribute value.
+3. [Generic Front Phase Transformations](#generic-front-phase-transformations).
+4. Manually enabled transformations defined with a JSON configuration file (for TensorFlow\*, ONNX\* and MXNet\*
+models only) specified using the `--transformations_config` command line parameter:
+    1. [Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformations).
+    2. [Front Phase Transformations Using Start and End Points](#start-end-points-front-phase-transformations).
+    3. [Generic Front Phase Transformations Enabled with Transformations Configuration File](#generic-transformations-config-front-phase-transformations).
+
+##### Pattern-Defined Front Phase Transformations
+This type of transformation is implemented using `mo.front.common.replacement.FrontReplacementSubgraph` and
+`mo.front.common.replacement.FrontReplacementPattern` as base classes and works the following way.
+1. A developer defines a sub-graph to be matched using a list of nodes with attributes and edges connecting them
+(edges may also have attributes).
+2. Model Optimizer searches for all sub-graphs of the original graph isomorphic to the specified sub-graph (pattern).
+3. Model Optimizer executes the developer-defined function performing graph transformation for each instance of a
+matched sub-graph. A developer can override different functions in the base transformation class so the Model
+Optimizer works differently:
+    1. Override the method `replace_sub_graph(self, graph, match)`. In this case Model Optimizer only executes the
+    overridden function, passing the `graph` object and a dictionary describing the matched sub-graph. A developer is
+    responsible for writing the transformation and connecting the newly created nodes to the rest of the graph.
+    2. Override the method `generate_sub_graph(self, graph, match)`. This approach is not recommended because it is
+    the most complicated one and can be effectively replaced with one of the previous approaches, so it is not
+    explained in this section. The explanation of this function is provided in the
+    [Node Name Defined Sub-Graph Transformations](#node-name-defined-sub-graph-transformations) section.
+
+The sub-graph pattern is defined in the `pattern()` function. This function should return a dictionary with two keys:
+`nodes` and `edges`:
+* The value for the `nodes` key is a list of tuples with two elements.
+    * The first element is an alias name for a node which will be used to define edges between nodes and in the
+    transformation function.
+    * The second element is a dictionary with attributes. The key is a name of an attribute which should exist in the
+    node. The value for the attribute can be some specific value to match or a function which gets a single parameter
+    - the attribute value from the node. The function should return the result of comparing the attribute with a
+    dedicated value.
+* The value for the `edges` key is a list of tuples with two or three elements.
+    * The first element is the alias name of the node producing a tensor.
+    * The second element is the alias name of the node consuming the tensor.
+    * The third element (optional) is the dictionary with expected edge attributes. Usually this dictionary contains
+    attributes like `in` and `out` defining input and output ports.
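+
+Putting this together, a minimal schematic skeleton of a pattern-defined front transformation might look as follows
+(a sketch with placeholder operation types and an empty transformation body, not actual Model Optimizer code; a
+complete real-world example follows below):
+
+```py
+from mo.front.common.replacement import FrontReplacementSubgraph
+from mo.graph.graph import Graph
+
+
+class MulAddFusion(FrontReplacementSubgraph):  # hypothetical transformation name
+    enabled = True
+
+    def pattern(self):
+        return dict(
+            nodes=[
+                ('mul', dict(op='Mul')),  # alias "mul" matches nodes with the attribute op == 'Mul'
+                ('add', dict(op='Add')),  # alias "add" matches nodes with the attribute op == 'Add'
+            ],
+            edges=[
+                ('mul', 'add', {'out': 0, 'in': 0}),  # Mul output port 0 feeds Add input port 0
+            ])
+
+    def replace_sub_graph(self, graph: Graph, match: dict):
+        mul = match['mul']  # "Node" instances for the matched aliases
+        add = match['add']
+        # the actual graph modification logic goes here
+```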
+
+Consider the example of a front transformation implemented in the `extensions/front/Mish_fusion.py` file performing
+fusing of the sub-graph defining the [Mish](../../../ops/activation/Mish_4.md) activation function into a single
+operation:
+
+```py
+from extensions.front.Softplus_fusion import SoftplusFusion
+from extensions.ops.activation_ops import Mish
+from mo.front.common.replacement import FrontReplacementSubgraph
+from mo.front.subgraph_matcher import SubgraphMatch
+from mo.graph.graph import Graph, rename_nodes
+
+
+class MishFusion(FrontReplacementSubgraph):
+    """
+    The transformation looks for the pattern with Softplus defining the Mish function: Mish(x) = x * tanh(SoftPlus(x)).
+    """
+    enabled = True  # transformation is enabled
+
+    def run_after(self):  # run this transformation after the "SoftplusFusion" transformation
+        return [SoftplusFusion]
+
+    def pattern(self):  # define the pattern according to the formula x * tanh(SoftPlus(x))
+        return dict(
+            nodes=[
+                ('mul', dict(op='Mul')),
+                ('tanh', dict(op='Tanh')),
+                ('softplus', dict(op='SoftPlus')),
+            ],
+            edges=[
+                ('softplus', 'tanh'),
+                ('tanh', 'mul'),
+            ])
+
+    def replace_sub_graph(self, graph: Graph, match: [dict, SubgraphMatch]):  # entry point for the transformation
+        mul = match['mul']  # get the Node corresponding to the matched "mul" node
+        mul_name = mul.soft_get('name', mul.id)
+        softplus = match['softplus']  # get the Node corresponding to the matched "softplus" node
+
+        # determine the input port of Mul which gets the 'input' node output
+        input_port_idx = int(mul.in_port(0).get_connection().get_source().node.soft_get('op') == 'Tanh')
+
+        # check that the same tensor is provided as input to Mul and SoftPlus
+        if mul.in_port(input_port_idx).get_source() != softplus.in_port(0).get_source():
+            return
+
+        mish = Mish(graph, {}).create_node()  # create Mish operation
+        mish.in_port(0).connect(mul.in_port(input_port_idx).get_source())  # connect input to the Mish
+        mul.out_port(0).get_connection().set_source(mish.out_port(0))  # reconnect outgoing edge from "mul" to Mish
+
+        # rename the created Mish operation to have the name of the "mul" node which produced the value equal to the
+        # Mish output
+        rename_nodes([(mul, mul_name + '/TBR'), (mish, mul_name)])
+```
+
+##### Specific Operation Front Phase Transformations
+This type of transformation is implemented using `mo.front.common.replacement.FrontReplacementOp` as a base class and
+works the following way.
+1. A developer defines an operation type to trigger the transformation.
+2. Model Optimizer searches for all nodes in the graph with the attribute `op` equal to the specified value.
+3. Model Optimizer executes the developer-defined function performing graph transformation for each instance of a
+matched node. A developer can override different functions in the base transformation class and the Model Optimizer
+works differently:
+    1. Override the method `replace_sub_graph(self, graph, match)`. In this case Model Optimizer only executes the
+    overridden function, passing the `graph` object and a dictionary with a single key `op` with the matched node as
+    value. A developer is responsible for writing the transformation and connecting the newly created nodes to the
+    rest of the graph.
+    2. Override the method `replace_op(self, graph, node)`. In this case Model Optimizer executes the overridden
+    function, passing the `graph` object and the matched node as the `node` parameter. If the function returns an
+    `id` of some node then the `Node` with this `id` is connected to the consumers of the matched node.
+    After applying the transformation the matched node is removed from the graph.
+
+The `FrontReplacementOp` class provides a simpler mechanism to match a single operation with a specific value of the
+`op` attribute (set the `op` class attribute instead of defining a `pattern()` function) and perform the
+transformation.
+
+Consider an example transformation from the file `extensions/front/Pack.py` which replaces the operation `Pack` from
+TensorFlow\*:
+```py
+from mo.front.common.partial_infer.utils import int64_array
+from mo.front.common.replacement import FrontReplacementOp
+from mo.front.tf.graph_utils import create_op_with_const_inputs
+from mo.graph.graph import Node, Graph, rename_nodes
+from mo.ops.concat import Concat
+from mo.ops.unsqueeze import Unsqueeze
+
+
+class Pack(FrontReplacementOp):
+    op = "Pack"  # trigger transformation for all nodes in the graph with the attribute op = "Pack"
+    enabled = True  # transformation is enabled
+
+    def replace_op(self, graph: Graph, node: Node):  # entry point for the transformation
+        # create a Concat operation with a number of inputs equal to a number of inputs to Pack
+        out_node = Concat(graph, {'axis': node.axis, 'in_ports_count': len(node.in_ports())}).create_node()
+        pack_name = node.soft_get('name', node.id)
+
+        for ind in node.in_ports():
+            # add a dimension of size 1 to all inputs of the Pack operation and add them as Concat inputs
+            unsqueeze_node = create_op_with_const_inputs(graph, Unsqueeze, {1: int64_array([node.axis])},
+                                                         {'name': node.soft_get('name', node.id) + '/Unsqueeze'})
+            node.in_port(ind).get_connection().set_destination(unsqueeze_node.in_port(0))
+            unsqueeze_node.out_port(0).connect(out_node.in_port(ind))
+
+        # rename the created Concat operation to have the name of the "pack" node which produced the value equal to
+        # the Concat output
+        rename_nodes([(node, pack_name + '/TBR'), (out_node, pack_name)])
+        return [out_node.id]  # reconnect the Pack operation consumers to get input from Concat instead
+```
+
+##### Generic Front Phase Transformations
+Model Optimizer provides a mechanism to implement generic front phase transformations. This type of transformation is
+implemented using `mo.front.common.replacement.FrontReplacementSubgraph` or
+`mo.front.common.replacement.FrontReplacementPattern` as base classes. The only condition to execute the
+transformation is to check that it is enabled. Then the Model Optimizer executes the method
+`find_and_replace_pattern(self, graph)` and provides a `Graph` object as an input.
+
+Consider the example of a generic front transformation from the file `extensions/front/SqueezeNormalize.py` performing
+normalization of the [Squeeze](../../../ops/shape/Squeeze_1.md) operation. The older version of the operation had a
+list of axes to squeeze as an attribute, but now it is a separate input. For backward compatibility the Model
+Optimizer operation supports both semantics but before IR generation the operation should be normalized according to
+the specification.
+
+```py
+import logging as log
+
+from mo.front.common.partial_infer.utils import int64_array
+from mo.front.common.replacement import FrontReplacementPattern
+from mo.graph.graph import Graph
+from mo.ops.const import Const
+from mo.utils.error import Error
+
+
+class SqueezeNormalize(FrontReplacementPattern):
+    """
+    Normalizes inputs of the Squeeze layers. The layers should have two inputs: the input with data and the input with
+    the dimensions to squeeze. If the second input is omitted then all dimensions of size 1 should be removed.
+ """ + enabled = True # the transformation is enabled + + def find_and_replace_pattern(self, graph: Graph): # the function is called unconditionally + for squeeze_node in graph.get_op_nodes(op='Squeeze'): # iterate over all nodes with op='Squeeze' + # if the operation has only 1 input node and non None 'squeeze_dims' attribute then convert the attribute to + # the operation input + if len(squeeze_node.in_nodes()) == 1 and squeeze_node.has_valid('squeeze_dims'): + dims_node = Const(graph, {'name': squeeze_node.id + '/Dims', + 'value': int64_array(squeeze_node.squeeze_dims)}).create_node() + squeeze_node.in_port(1).connect(dims_node.out_port(0)) + del squeeze_node['squeeze_dims'] + # if two inputs already exists that meanss that the operation is already normalized + elif len(squeeze_node.in_nodes()) == 2: + log.debug('The Squeeze node "{}" is already normalized'.format(squeeze_node.name)) + # in all other cases raise an error + else: + raise Error('The Squeeze layer "{}" should either have 2 inputs or one input and an "squeeze_dims" ' + 'attribute'.format(squeeze_node.soft_get('name'))) +``` + +Refer to the `mo/front/common/replacement.py` for the implementation details on how these front phase transformations +work. + +##### Node Name Pattern Front Phase Transformations +Let's review a real life example before going into details how this type of transformation works. + +TensorFlow\* uses a mechanism of scope to group related operation nodes. It is a good practice to put nodes performing +particular task into the same scope. This approach divides a graph into logical blocks that are easier to review in the +TensorBoard\*. The scope, in fact, just defines a common name prefix for the nodes belonging to it. + +For example, Inception topologies contain several types of so-called "Inception blocks". Some of them are equal to each +other, but located in different places of the network. For example, Inception V4 from the +[TensorFlow-Slim image classification model library](https://github.com/tensorflow/models/tree/master/research/slim) has +inception blocks `Mixed_5b`, `Mixed_5c` and `Mixed_5d` with exactly the same nodes with the same set of attributes. + +Consider a situation when someone implemented these Inception blocks extremely efficiently using a single Inference +Engine operation called `InceptionBlock` and need to replace these blocks in the model with instances of this operation. +Model Optimizer provides mechanism to trigger the transformation for a sub-graph of operations defined by the node name +regular expressions (scope). In this particular case, some of the patterns are: `.*InceptionV4/Mixed_5b`, +`.*InceptionV4/Mixed_5c` and `.*InceptionV4/Mixed_5d`. Each pattern starts with `.*`, because a prefix `InceptionV4` +is added to all nodes names during a model freeze. + +This type of transformation is implemented using `mo.front.tf.replacement.FrontReplacementFromConfigFileSubGraph` as a +base class and works the following way. +1. Developer prepares a JSON configuration file template defining node names patterns. +2. Developer runs the Model Optimizer with a command line parameter `--tensorflow_custom_operations_config_update` and +Model Optimizer adds information about input and output nodes of the specified sub-graphs. +3. Model Optimizer executes developer-defined transformation **only** when an user specifies the path to the +configuration file updated in step 2 using the command line parameter `--transformations_config`. 
+
+Consider the following possible configuration file template for the Inception Block transformation:
+```json
+[
+    {
+        "custom_attributes": {
+            "attr1_key": "attr1_value",
+            "attr2_key": 123456
+        },
+        "id": "InceptionBlockTransformation",
+        "instances": [
+            ".*InceptionV4/Mixed_5b",
+            ".*InceptionV4/Mixed_5c",
+            ".*InceptionV4/Mixed_5d"
+        ],
+        "match_kind": "scope"
+    }
+]
+```
+
+The configuration file contains a list of dictionaries. Each dictionary defines one transformation. Each
+transformation is defined with several parameters:
+
+* `id` (mandatory) is a unique identifier of the transformation. It is used in the Python\* code that implements the
+transformation to link the class and the transformation description from the configuration file.
+* `match_kind` (mandatory) is a string that specifies the matching algorithm. For the node name pattern case the value
+should be equal to `scope`. Other possible values are described in the dedicated sections below.
+* `instances` (mandatory) specifies instances of the sub-graph to be matched. It contains a list of node name prefix
+patterns for the match kind `scope`.
+* `custom_attributes` (optional) is a dictionary with attributes that can be used in the transformation code.
+
+After running the Model Optimizer with the additional parameter `--tensorflow_custom_operations_config_update`
+pointing to the template configuration file, the content of the file is updated with two new sections `inputs` and
+`outputs`. The file content after the update is the following:
+```json
+[
+    {
+        "id": "InceptionBlockTransformation",
+        "custom_attributes": {
+            "attr1_key": "attr1_value",
+            "attr2_key": 123456
+        },
+        "instances": [
+            ".*InceptionV4/Mixed_5b",
+            ".*InceptionV4/Mixed_5c",
+            ".*InceptionV4/Mixed_5d"
+        ],
+        "match_kind": "scope",
+        "inputs": [
+            [
+                {
+                    "node": "Branch_2/Conv2d_0a_1x1/Conv2D$",
+                    "port": 0
+                },
+                {
+                    "node": "Branch_3/AvgPool_0a_3x3/AvgPool$",
+                    "port": 0
+                },
+                {
+                    "node": "Branch_1/Conv2d_0a_1x1/Conv2D$",
+                    "port": 0
+                },
+                {
+                    "node": "Branch_0/Conv2d_0a_1x1/Conv2D$",
+                    "port": 0
+                }
+            ]
+        ],
+        "outputs": [
+            {
+                "node": "concat$",
+                "port": 0
+            }
+        ]
+    }
+]
+```
+
+The value for the key `inputs` is a list of lists describing input tensors of the sub-graph. Each element of the
+top-level list corresponds to one unique input tensor of the sub-graph. Each internal list describes a list of nodes
+consuming this tensor and the port numbers where the tensor is consumed. Model Optimizer generates regular expressions
+for the input node names to uniquely identify them in each instance of the sub-graph defined by the `instances`. These
+nodes are referred to as input nodes of the sub-graph.
+
+In the InceptionV4 topology, the `InceptionV4/Mixed_5b` block has four input tensors from outside of the sub-graph,
+but all of them are produced by the node `InceptionV4/Mixed_5a/concat`. Therefore, the top-level list of the `inputs`
+contains one list corresponding to this tensor. Four input nodes of the sub-graph consume the tensor produced by the
+`InceptionV4/Mixed_5a/concat` node. In this case, all four input nodes consume the input tensor at port 0.
+
+The order of items in the internal list describing nodes does not matter, but the order of elements in the top-level
+list is important. This order defines the order in which the Model Optimizer attaches input tensors to a newly
+generated node if the sub-graph is replaced with a single node.
+The `i`-th input node of the sub-graph is obtained using the call
+`match.single_input_node(i)` in the sub-graph transformation code. More information about the API is given below. If
+it is necessary to change the order of input tensors, the configuration file can be edited in a text editor.
+
+The value for the key `outputs` is a list describing nodes of the sub-graph producing tensors that go outside of the
+sub-graph or do not have child nodes. These nodes are referred to as output nodes of the sub-graph. The order of
+elements in the list is important. The `i`-th element of the list describes the `i`-th output tensor of the sub-graph,
+which could be obtained using the call `match.output_node(i)`. The order of elements can be manually changed in the
+configuration file. Model Optimizer uses this order to connect output edges if the sub-graph is replaced with a single
+node.
+
+Refer to [Converting TensorFlow\* Object Detection API Models](../convert_model/tf_specific/Convert_Object_Detection_API_Models.md)
+for more examples of this type of transformation.
+
+##### Front Phase Transformations Using Start and End Points
+This type of transformation is implemented using `mo.front.tf.replacement.FrontReplacementFromConfigFileSubGraph` as a
+base class and works the following way.
+1. A developer prepares a JSON configuration file which defines the sub-graph to match using two lists of node names:
+"start" and "end" nodes.
+2. Model Optimizer executes the developer-defined transformation **only** when a user specifies the path to the
+configuration file using the command line parameter `--transformations_config`. Model Optimizer performs the following
+steps to match the sub-graph:
+    1. Starts a graph traversal from every start node following the direction of the graph edges. The search stops at
+    an end node or at a node without consumers. All visited nodes are added to the matched sub-graph.
+    2. Starts another graph traversal from each non-start node of the sub-graph, i.e. every node except nodes from the
+    "start" list. In this step the edges are traversed in the opposite direction. All newly visited nodes are added to
+    the matched sub-graph. This step is needed to add nodes required for calculating values of internal nodes of the
+    matched sub-graph.
+    3. Checks that all "end" nodes were reached from the "start" nodes. If not, the Model Optimizer exits with an
+    error.
+    4. Checks that there are no [Parameter](../../../ops/infrastructure/Parameter_1.md) operations among the added
+    nodes. If they exist then the sub-graph depends on the inputs of the model. Such a configuration is considered
+    incorrect so the Model Optimizer exits with an error.
+
+This algorithm finds all nodes "between" start and end nodes and the nodes needed for calculating the non-input nodes
+of the matched sub-graph.
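+
+The following is a schematic sketch of this matching algorithm in plain Python, not the actual Model Optimizer code.
+The `successors` and `predecessors` parameters are assumed to be simple callables returning the children and parents
+of a node:
+
+```py
+from collections import deque
+
+
+def match_between_points(successors, predecessors, start_nodes, end_nodes):
+    matched = set(start_nodes)
+
+    # step 1: forward traversal from every start node; the search stops at end nodes
+    # and, naturally, at nodes without consumers
+    queue = deque(start_nodes)
+    while queue:
+        node = queue.popleft()
+        if node in end_nodes:
+            continue
+        for child in successors(node):
+            if child not in matched:
+                matched.add(child)
+                queue.append(child)
+
+    # step 2: backward traversal from every non-start matched node to pick up nodes
+    # required for calculating values of internal nodes of the sub-graph
+    queue = deque(n for n in matched if n not in start_nodes)
+    while queue:
+        node = queue.popleft()
+        for parent in predecessors(node):
+            if parent not in matched:
+                matched.add(parent)
+                queue.append(parent)
+
+    # step 3: all end nodes must have been reached from the start nodes
+    if not set(end_nodes).issubset(matched):
+        raise ValueError('some end nodes were not reached from the start nodes')
+
+    # step 4 (not shown): check that no Parameter operations are among the matched nodes
+    return matched
+```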
+
+The example of a JSON configuration file for a transformation with start and end points is
+`extensions/front/tf/ssd_support_api_v1.15.json`:
+
+```json
+[
+    {
+        "custom_attributes": {
+            "code_type": "caffe.PriorBoxParameter.CENTER_SIZE",
+            "pad_mode": "caffe.ResizeParameter.CONSTANT",
+            "resize_mode": "caffe.ResizeParameter.WARP",
+            "clip_before_nms": false,
+            "clip_after_nms": true
+        },
+        "id": "ObjectDetectionAPISSDPostprocessorReplacement",
+        "include_inputs_to_sub_graph": true,
+        "include_outputs_to_sub_graph": true,
+        "instances": {
+            "end_points": [
+                "detection_boxes",
+                "detection_scores",
+                "num_detections"
+            ],
+            "start_points": [
+                "Postprocessor/Shape",
+                "Postprocessor/scale_logits",
+                "Postprocessor/Tile",
+                "Postprocessor/Reshape_1",
+                "Postprocessor/Cast_1"
+            ]
+        },
+        "match_kind": "points"
+    }
+]
+```
+
+The format of the file is similar to the one provided as an example in the
+[Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformations) section. The
+difference is in the value of the `match_kind` parameter which should be equal to `points` and the format of the
+`instances` parameter which should be a dictionary with two keys `start_points` and `end_points` defining start and
+end node names respectively.
+
+> **NOTE**: The `include_inputs_to_sub_graph` and `include_outputs_to_sub_graph` parameters are redundant and should
+> always be equal to `true`.
+
+> **NOTE**: This sub-graph match algorithm has a limitation that each start node must have only one input. Therefore,
+> it is not possible to specify, for example, a [Convolution](../../../ops/convolution/Convolution_1.md) node as a
+> start node because it has two inputs: the data tensor and the tensor with weights.
+
+For other examples of transformations with points, please refer to
+[Converting TensorFlow\* Object Detection API Models](../convert_model/tf_specific/Convert_Object_Detection_API_Models.md).
+
+##### Generic Front Phase Transformations Enabled with Transformations Configuration File
+This type of transformation works similarly to the
+[Generic Front Phase Transformations](#generic-front-phase-transformations) but requires a JSON configuration file to
+enable it, similarly to [Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformations)
+and [Front Phase Transformations Using Start and End Points](#start-end-points-front-phase-transformations).
+
+The base class for this type of transformation is
+`mo.front.common.replacement.FrontReplacementFromConfigFileGeneral`. The Model Optimizer executes the method
+`transform_graph(self, graph, replacement_descriptions)` and provides the `Graph` object and a dictionary with values
+parsed from the `custom_attributes` attribute of the provided JSON configuration file.
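+
+A minimal skeleton of such a transformation might look as follows (a sketch; the identifier "MyTransform" and the
+empty transformation body are placeholders, and a complete real example follows below):
+
+```py
+from mo.front.tf.replacement import FrontReplacementFromConfigFileGeneral
+from mo.graph.graph import Graph
+
+
+class MyTransform(FrontReplacementFromConfigFileGeneral):  # hypothetical transformation
+    replacement_id = 'MyTransform'  # must match the "id" value in the JSON configuration file
+
+    def transform_graph(self, graph: Graph, replacement_descriptions: dict):
+        # "replacement_descriptions" holds the values parsed from the "custom_attributes" section
+        pass
+```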
+
+An example of the configuration file for this type of transformation is `extensions/front/tf/yolo_v1_tiny.json`:
+
+```json
+[
+  {
+    "id": "TFYOLO",
+    "match_kind": "general",
+    "custom_attributes": {
+      "classes": 20,
+      "coords": 4,
+      "num": 2,
+      "do_softmax": 0
+    }
+  }
+]
+```
+and the corresponding transformation file is `./extensions/front/YOLO.py`:
+
+```py
+from extensions.front.no_op_eraser import NoOpEraser
+from extensions.front.standalone_const_eraser import StandaloneConstEraser
+from extensions.ops.regionyolo import RegionYoloOp
+from mo.front.tf.replacement import FrontReplacementFromConfigFileGeneral
+from mo.graph.graph import Node, Graph
+from mo.ops.result import Result
+from mo.utils.error import Error
+
+
+class YoloRegionAddon(FrontReplacementFromConfigFileGeneral):
+    """
+    Replaces all Result nodes in graph with YoloRegion->Result nodes chain.
+    YoloRegion node attributes are taken from configuration file
+    """
+    replacement_id = 'TFYOLO'  # the identifier matching the "id" attribute in the JSON file
+
+    def run_after(self):
+        return [NoOpEraser, StandaloneConstEraser]
+
+    def transform_graph(self, graph: Graph, replacement_descriptions):
+        op_outputs = [n for n, d in graph.nodes(data=True) if 'op' in d and d['op'] == 'Result']
+        for op_output in op_outputs:
+            last_node = Node(graph, op_output).in_node(0)
+            op_params = dict(name=last_node.id + '/YoloRegion', axis=1, end_axis=-1)
+            op_params.update(replacement_descriptions)
+            region_layer = RegionYoloOp(graph, op_params)
+            region_layer_node = region_layer.create_node([last_node])
+            # here we remove 'axis' from 'dim_attrs' to avoid permutation from axis = 1 to axis = 2
+            region_layer_node.dim_attrs.remove('axis')
+            Result(graph).create_node([region_layer_node])
+            graph.remove_node(op_output)
+```
+
+The configuration file has only three parameters: the transformation identifier `id`, the `match_kind` parameter (which
+should be equal to `general`), and the dictionary `custom_attributes` with custom attributes accessible in the
+transformation.
+
+#### Middle Phase Transformations
+There are two types of middle phase transformations:
+
+1. [Pattern-Defined Middle Phase Transformations](#pattern-defined-middle-phase-transformations) triggered for each
+sub-graph of the original graph isomorphic to the specified pattern.
+2. [Generic Middle Phase Transformations](#generic-middle-phase-transformations).
+
+##### Pattern-Defined Middle Phase Transformations
+This type of transformation is implemented using `mo.middle.replacement.MiddleReplacementPattern` as a base class and
+works similarly to the [Pattern-Defined Front Phase Transformations](#pattern-defined-front-phase-transformations).
+There are two differences:
+1. The transformation entry function name is `replace_pattern(self, graph, match)`.
+2. The pattern defining the graph should contain data nodes because the structure of the graph is different between
+front and middle phases. Refer to the [Partial Inference](#partial-inference) section for more information about the
+graph structure changes.
+
+Refer to the `extensions/middle/L2NormToNorm.py` for an example of a pattern-defined middle transformation.
+
+##### Generic Middle Phase Transformations
+Model Optimizer provides a mechanism to implement generic middle phase transformations. This type of transformation is
+implemented using `mo.middle.replacement.MiddleReplacementPattern` as a base class and works similarly to the
+[Generic Front Phase Transformations](#generic-front-phase-transformations). The only difference is that the
+transformation entry function name is `find_and_replace_pattern(self, graph: Graph)`.
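+
+As an illustration, here is a minimal sketch of a generic middle phase transformation. The class name and the matching
+logic inside the method are hypothetical and are not part of the Model Optimizer code; the sketch only shows the
+structure of such a transformation.
+
+```py
+from mo.graph.graph import Graph
+from mo.middle.replacement import MiddleReplacementPattern
+
+
+class MyGenericMiddleTransformation(MiddleReplacementPattern):
+    enabled = True  # the transformation is registered and executed
+
+    def find_and_replace_pattern(self, graph: Graph):
+        # the whole graph is available, so arbitrary matching logic can be used;
+        # here all operation nodes of type 'Convolution' are collected
+        convolutions = [n for n, d in graph.nodes(data=True)
+                        if d.get('kind') == 'op' and d.get('op') == 'Convolution']
+        # ... analyze or modify the found nodes here ...
+```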
+
+Refer to the `extensions/middle/CheckForCycle.py` for an example of such a transformation.
+
+#### Back Phase Transformations
+There are two types of back phase transformations:
+
+1. [Pattern-Defined Back Phase Transformations](#pattern-defined-back-phase-transformations) triggered for each
+sub-graph of the original graph isomorphic to the specified pattern.
+2. [Generic Back Phase Transformations](#generic-back-phase-transformations).
+
+> **NOTE**: The graph layout during the back phase is always NCHW. However, during the front and middle phases it can
+> be NHWC if the original model uses it. Refer to [Model Conversion Pipeline](#model-conversion-pipeline) for more
+> details.
+
+##### Pattern-Defined Back Phase Transformations
+This type of transformation is implemented using `mo.back.replacement.BackReplacementPattern` as a base class and
+works the same way as [Pattern-Defined Front Phase Transformations](#pattern-defined-front-phase-transformations).
+
+Refer to the `extensions/back/ShufflenetReLUReorder.py` for an example of a pattern-defined back transformation.
+
+##### Generic Back Phase Transformations
+Model Optimizer provides a mechanism to implement generic back phase transformations. This type of transformation is
+implemented using `mo.back.replacement.BackReplacementPattern` as a base class and works the same way as
+[Generic Middle Phase Transformations](#generic-middle-phase-transformations).
+
+Refer to the `extensions/back/GatherNormalizer.py` for an example of such a transformation.
+
+## See Also
+* [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](../../IR_and_opsets.md)
+* [Converting a Model to Intermediate Representation (IR)](../convert_model/Converting_Model.md)
+* [nGraph Basic Concepts](@ref openvino_docs_nGraph_DG_basic_concepts)
+* [Inference Engine Extensibility Mechanism](../../../IE_DG/Extensibility_DG/Intro.md)
+* [Extending the Model Optimizer with Caffe* Python Layers](Extending_Model_Optimizer_with_Caffe_Python_Layers.md)
+* [Extending the Model Optimizer for Custom MXNet* Operations](Extending_MXNet_Model_Optimizer_with_New_Primitives.md)
+* [Legacy Mode for Caffe* Custom Layers](Legacy_Mode_for_Caffe_Custom_Layers.md)
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md
index 4203a1f74114de..aa3b5697242657 100644
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md
@@ -1,45 +1,41 @@
-# Extending the MXNet Model Optimizer with New Primitives {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Extending_MXNet_Model_Optimizer_with_New_Primitives}
+# Extending Model Optimizer for Custom MXNet* Operations {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Extending_MXNet_Model_Optimizer_with_New_Primitives}
 
-This section describes how you can create a Model Optimizer extension for a custom layer from your MXNet* model. It supplements the main document [Extending Model Optimizer with New Primitives](Extending_Model_Optimizer_with_New_Primitives.md) and provides a step-by-step procedure. 
To create an extension for a particular layer, perform the following steps:
+This section provides instructions on how to support a custom MXNet operation (or, as it is called in the MXNet
+documentation, an "operator" or "layer") which is not a part of the MXNet operation set, for example, an operator
+implemented using the following [guide](https://mxnet.apache.org/versions/1.7.0/api/faq/new_op.html).
+
+This section describes how to extract operator attributes in the Model Optimizer. The rest of the
+operation enabling pipeline and documentation on how to support MXNet operations from the standard MXNet operation set
+are described in the main document [Customize_Model_Optimizer](Customize_Model_Optimizer.md).
+
+## Writing Extractor for Custom MXNet Operation
+Custom MXNet operations have an attribute `op` (defining the type of the operation) equal to `Custom` and an attribute
+`op_type` which is an operation type defined by a user. To extract attributes for such operations, implement an
+extractor class inherited from the `MXNetCustomFrontExtractorOp` class instead of the `FrontExtractorOp` class used
+for standard framework operations. The `op` class attribute value should be set to the `op_type` value so the
+extractor is triggered for this kind of operation.
+
+Below is an example of the extractor for a custom operation registered with type (`op_type` value) equal to
+`MyCustomOp` and having an attribute `my_attribute` of the floating point type with default value `5.6`. The sample
+assumes that the `CustomOp` class (inherited from the `Op` class) for the Model Optimizer operation corresponding to
+this MXNet custom operation has already been created as described in the
+[Customize_Model_Optimizer](Customize_Model_Optimizer.md).

-1. Create the file `custom_proposal_ext.py` in the folder `/deployment_tools/model_optimizer/extensions/front/mxnet`
-If your MXNet layer has op `Custom`, create the `CustomProposalFrontExtractor` class inherited from `MXNetCustomFrontExtractorOp`:
-```py
-from mo.front.extractor import MXNetCustomFrontExtractorOp
-class CustomProposalFrontExtractor(MXNetCustomFrontExtractorOp): 
-    pass
-```
-Otherwise, for layers that are not standard MXNet layers, create the `ProposalFrontExtractor` class inherited from `FrontExtractorOp`:
-```py
- from mo.front.extractor import FrontExtractorOp
- class ProposalFrontExtractor(FrontExtractorOp):
-     pass
-```
-2. Specify the operation that the extractor refers to and a specific flag. The flag represents whether the operation should be used by the Model Optimizer or should be excluded from processing:
-```py
-from mo.front.extractor import MXNetCustomFrontExtractorOp
-class CustomProposalFrontExtractor(MXNetCustomFrontExtractorOp):
-    op = '_contrib_Proposal'
-    enabled = True
-```
-3. 
Register a mapping rule between the original model and the `PythonProposalOp` attributes by overriding the following function:
 ```py
+from extensions.ops.custom_op import CustomOp  # implementation of the MO operation class
 from mo.front.mxnet.extractors.utils import get_mxnet_layer_attrs
 from mo.front.extractor import MXNetCustomFrontExtractorOp
-from mo.ops.op import Op
-class CustomProposalFrontExtractor(MXNetCustomFrontExtractorOp):
-    op = '_contrib_Proposal'
-    enabled = True
+class CustomProposalFrontExtractor(MXNetCustomFrontExtractorOp):  # inherit from the specific base class
+    op = 'MyCustomOp'  # the value corresponding to the `op_type` value of the MXNet operation
+    enabled = True  # the extractor is enabled
+
     @staticmethod
     def extract(node):
-        attrs = get_mxnet_layer_attrs(node.symbol_dict)
+        attrs = get_mxnet_layer_attrs(node.symbol_dict)  # parse the attributes to a dictionary with string values
        node_attrs = {
-            'feat_stride': attrs.float('feat_stride', 16)
+            'my_attribute': attrs.float('my_attribute', 5.6)
        }
-
-        # update the attributes of the node
-        Op.get_op_class_by_name('Proposal').update_node_stat(node, node_attrs) # <------ here goes the name ('Proposal') of the Operation that was implemented before
-        return __class__.enabled
-```
+        CustomOp.update_node_stat(node, node_attrs)  # update the attributes of the node
+        return CustomProposalFrontExtractor.enabled  # a static method has no 'self', so return via the class
+```
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md
new file mode 100644
index 00000000000000..c79da3ef0efaa0
--- /dev/null
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md
@@ -0,0 +1,89 @@
+# Extending Model Optimizer with Caffe* Python Layers {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Extending_Model_Optimizer_With_Caffe_Python_Layers}
+
+This section provides instructions on how to support a custom Caffe operation written only in Python. For example, the
+[Faster-R-CNN model](http://dl.dropboxusercontent.com/s/o6ii098bu51d139/faster_rcnn_models.tgz?dl=0) implemented in
+Caffe contains a custom layer Proposal written in Python. The layer is described in the
+[Faster-R-CNN prototxt](https://raw.githubusercontent.com/rbgirshick/py-faster-rcnn/master/models/pascal_voc/VGG16/faster_rcnn_end2end/test.prototxt)
+as follows:
+```sh
+layer {
+  name: 'proposal'
+  type: 'Python'
+  bottom: 'rpn_cls_prob_reshape'
+  bottom: 'rpn_bbox_pred'
+  bottom: 'im_info'
+  top: 'rois'
+  python_param {
+    module: 'rpn.proposal_layer'
+    layer: 'ProposalLayer'
+    param_str: "'feat_stride': 16"
+  }
+}
+```
+
+This section describes only the procedure for extracting operator attributes in the Model Optimizer. The rest of the
+operation enabling pipeline and documentation on how to support other Caffe operations (written in C++) are described
+in the main document [Customize_Model_Optimizer](Customize_Model_Optimizer.md).
+
+## Writing Extractor for Caffe Python Layer
+Custom Caffe Python layers have an attribute `type` (defining the type of the operation) equal to `Python` and two
+mandatory attributes `module` and `layer` in the `python_param` dictionary. The `module` defines the Python module name
+with the layer implementation, while the `layer` value is an operation type defined by a user. 
+To extract attributes for such an operation, implement an extractor class inherited from the
+`CaffePythonFrontExtractorOp` class instead of the `FrontExtractorOp` class used for standard framework layers. The
+`op` class attribute value should be set to the `module + "." + layer` value so the extractor is triggered for this
+kind of operation.
+
+Here is a simplified example of the extractor for the custom operation Proposal from the Faster-R-CNN model mentioned
+above. The full code with additional checks is provided in the
+`/deployment_tools/model_optimizer/extensions/front/caffe/proposal_python_ext.py`. The sample code uses the
+operation `ProposalOp`, which corresponds to the `Proposal` operation described in the [Available Operations Sets](../../../ops/opset.md)
+document. Refer to the source code below for a detailed explanation of the extractor.
+
+```py
+from extensions.ops.proposal import ProposalOp
+from mo.front.extractor import CaffePythonFrontExtractorOp
+
+
+class ProposalPythonFrontExtractor(CaffePythonFrontExtractorOp):
+    op = 'rpn.proposal_layer.ProposalLayer'  # module + "." + layer
+    enabled = True  # extractor is enabled
+
+    @staticmethod
+    def extract_proposal_params(node, defaults):
+        param = node.pb.python_param  # get the protobuf message representation of the layer attributes
+        # parse attributes from the layer protobuf message to a Python dictionary
+        attrs = CaffePythonFrontExtractorOp.parse_param_str(param.param_str)
+        update_attrs = defaults
+
+        # the operation expects ratio and scale values to be called "ratio" and "scale" while Caffe uses different names
+        if 'ratios' in attrs:
+            attrs['ratio'] = attrs['ratios']
+            del attrs['ratios']
+        if 'scales' in attrs:
+            attrs['scale'] = attrs['scales']
+            del attrs['scales']
+
+        update_attrs.update(attrs)
+        ProposalOp.update_node_stat(node, update_attrs)  # update the node attributes
+
+    @classmethod
+    def extract(cls, node):
+        # define default values for the Proposal layer attributes
+        defaults = {
+            'feat_stride': 16,
+            'base_size': 16,
+            'min_size': 16,
+            'ratio': [0.5, 1, 2],
+            'scale': [8, 16, 32],
+            'pre_nms_topn': 6000,
+            'post_nms_topn': 300,
+            'nms_thresh': 0.7
+        }
+        cls.extract_proposal_params(node, defaults)
+        return cls.enabled
+```
+
+## See Also
+* [Customize_Model_Optimizer](Customize_Model_Optimizer.md)
+* [Legacy Mode for Caffe* Custom Layers](Legacy_Mode_for_Caffe_Custom_Layers.md)
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md
index b94ddb52885f80..9fb0e9b26f2db7 100644
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md
@@ -1,476 +1,3 @@
-# Extending the Model Optimizer with New Primitives {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Extending_Model_Optimizer_with_New_Primitives}
+# Extending Model Optimizer with New Primitives {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Extending_Model_Optimizer_with_New_Primitives}

-This section explains how to register a custom layer in the Model Optimizer, including how to register Proposal as a custom layer. This section also demonstrates how `Proposal` works as a custom layer.
-
-Model Optimizer loads the model, goes through the topology, and tries to find each layer type in the list of known layers. 
If the Model Optimizer does not find a layer in that list, it looks for the layer in the list of custom layers. If the Model Optimizer fails to find the layer among the defined custom layers, it registers a Caffe\* fallback for for the output shape inference. If the Model Optimizer does not find Caffe and cannot infer shapes, the Model Optimizer fails with an appropriate message. - -You must know two things about custom layers with the Model Optimizer: - -* How to map a subgraph in a FW model to a subgraph consisting of Inference Engine layers. For Caffe, the subgraph is a 1-to-1 mapping of a Caffe layer to an Inference Engine layer. -* How to infer shapes for unknown subgraphs. This can be either for a step in which the internal representation consists of framework-specific layers, or for a step in which the internal representation consists of Inference Engine layers. - -You also have the option of a framework fallback for unknown subgraphs, for when the original framework is used for inference of output shapes of operations. The example below demonstrates the case in which the framework is not available or should not be used. - -## Preparing an Example Topology - -> **NOTE**: Skip this section if you have a topology with a layer that is not known to the Model Optimizer. - -The information in this section prepares a Caffe\* model with the provided, deployment-ready `prototxt` for a -well-known topology called -[Faster-R-CNN protoxt](https://raw.githubusercontent.com/rbgirshick/py-faster-rcnn/master/models/pascal_voc/VGG16/faster_rcnn_end2end/test.prototxt) -to demonstrate the workflow. To use this example, you must have -[weights and biases](http://dl.dropboxusercontent.com/s/o6ii098bu51d139/faster_rcnn_models.tgz?dl=0) for inference, -because `prototxt` just describes the structure of the topology. - -1. Download the `.caffemodel` and `.prototxt` files -2. Run the Model Optimizer on the `.caffemodel` and `.prototxt` files: -```shell -python mo.py --input_model VGG16_faster_rcnn_final.caffemodel --input_proto test.prototxt -``` -You will likely see the error message: -```shell -Error parsing text-format caffe.NetParameter: 196:16: Message type "caffe.DropoutParameter" has no field named "scale_train". -``` -Whether you see the error depends on your Caffe version. For example, BVLC Caffe does not support the boolean parameter `scale_train` for the `dropout` layer. The error message does not matter, because the dropout layer is needed only for training, and the Model Optimizer removes it. -3. To proceed, comment out these lines in `test.prototxt`: -```sh -... -layer { - name: "drop6" - type: "Dropout" - bottom: "fc6" - top: "fc6" - dropout_param { - dropout_ratio: 0.5 - # scale_train: false # <-------------- comment out this line - } -} -... -layer { - name: "drop7" - type: "Dropout" - bottom: "fc7" - top: "fc7" - dropout_param { - dropout_ratio: 0.5 - # scale_train: false # <-------------- comment out this line - } -} -... -``` -4. Run the Model Optimizer on this model again: -```shell -python mo.py --input_model VGG16_faster_rcnn_final.caffemodel --input_proto test.prototxt -``` - You get the model successfully converted to Intermediate Representation, and you can infer it with the Inference Engine. - - However, the aim of this tutorial is to demonstrate the way of supporting custom layers not yet supported by the Model Optimizer. - If you want to understand better how Model Optimizer works, remove the extension for layer `Proposal` and follow all steps of this tutorial. - -5. 
Remove the extension for layer `Proposal`: -```sh -mkdir extensions/old -mv extensions/front/caffe/proposal_python_ext.py extensions/old/proposal_python_ext_old.py -mv extensions/ops/proposal_python_example.py extensions/old/proposal_python__example_old.py -``` -6. Now you can run the Model Optimizer on this model once again: -```sh -python mo.py --input_model VGG16_faster_rcnn_final.caffemodel --input_proto test.prototxt -``` -You will see the message: -```shell -[ ERROR ] Found custom layer proposal. Model Optimizer does not support this layer. -Please, register it in CustomLayersMapping.xml or implement extension. -For more information please refer to Model Optimizer FAQ, question #FAQ45. -``` -This message means the Model Optimizer can load the model, but is unable to infer the shape and handle the custom layer properties. - -## Registering a Custom Layer as a Model Optimizer Extension - -In the following sections, you will learn how to make the Model Optimizer independent from Caffe\* when processing a -model that has a custom layer. In this example, the custom layer is referred to as the Proposal layer. - -Use this section to implement the mapping rules for the `Proposal` layer attributes and the output shape calculation. As part of these steps, you must first create a class for the `Proposal` layer and inherit it from general-purpose Op that defines the interface of every new custom layer. - -In this section, it is important to understand the `Op` class and its function. The implementation of this class shows that it expects a graph and attributes to be passed when initializing. The graph and attributes are in `/deployment_tools/model_optimizer/mo/ops/op.py` - -`Op` keeps the attributes for each operation and contains logic for handling node creation for internal model representation. `Op` is responsible for dumping each particular operation to the `.xml` format for the Intermediate Representation. By inheriting from it, the technical items are complete and you concentrate on the specificity of this layer: the attributes it supports and the rules on computing its output shape. - -Follow these steps: - -1. Create the file `python_proposal.py` in the directory `/deployment_tools/model_optimizer/extensions/ops`: -```python -from mo.ops.op import Op -class PythonProposalOp(Op): - pass -``` -2. Define the name of the operation and make a stub constructor: -```python -from mo.ops.op import Op -class PythonProposalOp(Op): - op = 'Proposal' - def __init__(self, graph, attrs): - super().__init__(graph) -``` -3. Every `Op` must have three specific fields defined: `type`, `op`, and `infer`. In most cases, the `type` and `op` names are the same, and `infer` is defined as a function to compute the output shape. Reflect these fields in your constructor: -```python -from mo.ops.op import Op -class PythonProposalOp(Op): - op = 'Proposal' - def __init__(self, graph, attrs): - mandatory_props = { - 'type': __class__.op, - 'op': __class__.op, - 'infer': None - } - super().__init__(graph, mandatory_props, attrs) -``` - According to the Intermediate Representation catalog, Proposal layer has the following attributes: - - * `pre_nms_topn` - * `post_nms_topn` - * `nms_thresh` - * `feat_stride` - * `min_size` - * `base_size` - * `ratio` - * `scale` -4. In defining supported attribute names, it is best to use the same names as in the original models. The names are similar to parameters and have no connection with the model layer properties. For clarity, you can use the name `my_ratio` for `ratio`. 
Other than defining the list of supported parameters, you can define only the parameters that appear in the Intermediate Representation in the `backend_attrs` method. - Define your attributes: -```python -class PythonProposalOp(Op): - # ... constructor - def supported_attrs(self): - return [ - 'pre_nms_topn', - 'post_nms_topn', - 'nms_thresh', - 'feat_stride', - 'min_size', - 'base_size', - 'ratio', - 'scale' - ] -``` -5. Model Optimizer now knows how to create the layer called Proposal when it is in the topology and what attributes this layer has. However, the Model Optimizer does not know how to calculate the output shape of this operation. Define a rule to calculate the output shape: -```python -import numpy as np -from mo.graph.graph import Node -from mo.ops.op import Op -class PythonProposalOp(Op): - def __init__(self, graph, attrs): - mandatory_props = { - 'type': __class__.op, - 'op': __class__.op, - 'infer': PythonProposalOp.calculate_output_shape - } - super().__init__(graph, mandatory_props, attrs) - # ... supported attrs - @staticmethod - def calculate_output_shape(node: Node): - node.out_node().shape = (1, 1, 1, 1) # any Proposal now has always the same output -``` -6. According to the Intermediate Representation catalog, Proposal layer has the following output calculation formula, where shape dynamically depends on the `post_nms_topn` parameter. - Implement the output calculation formula in Python\*: -```python -import numpy as np -class PythonProposalOp(Op): - # ... static fields - # ... constructor - # ... supported attrs - @staticmethod - def calculate_output_shape(node: Node): - input_shape = node.in_node(0).shape - out_shape = np.array([0, 0], dtype=np.int64) - # rois blob: holds R regions of interest, each is a 5 - tuple - # (n, x1, y1, x2, y2) specifying an image batch index n and a - # rectangle(x1, y1, x2, y2) - out_shape[0] = input_shape[0] * node.post_nms_topn - out_shape[1] = 5 - node.out_node(0).shape = out_shape -``` - The node does not contain this parameter because it should be initialized in the constructor and in other parameters. The Inference Engine contains the implementation of a Caffe\*-like Proposal layer and works well with the default values from `caffe.proto`: -``` -// Message that stores parameters used by ProposalLayer message ProposalParameter { optional uint32 feat_stride = 1 [default = 16]; optional uint32 base_size = 2 [default = 16]; optional uint32 min_size = 3 [default = 16]; repeated float ratio = 4; repeated float scale = 5; optional uint32 pre_nms_topn = 6 [default = 6000]; optional uint32 post_nms_topn = 7 [default = 300]; optional float nms_thresh = 8 [default = 0.7]; } -``` -7. Change the constructor as follows: -```python -class PythonProposalOp(Op): - # ... static fields - def __init__(self, graph, attrs): - mandatory_props = { - 'type': __class__.op, - 'op': __class__.op, - 'feat_stride': 16, - 'base_size': 16, - 'min_size': 16, - 'ratio': [0.5, 1, 2], - 'scale': [8, 16, 32], - 'pre_nms_topn': 6000, - 'post_nms_topn': 300, - 'nms_thresh': 0.7, - 'infer': PythonProposalOp.calculate_output_shape - } - super().__init__(graph, mandatory_props, attrs) - # ... supported attrs - # ... calculate output shape - -``` - -It is mandatory to call two functions right after the implementation of that class: - -``` -class ProposalPythonOp(Op): - ... 
- -register_caffe_python_extractor(ProposalPythonOp, 'rpn.proposal_layer.ProposalLayer') -Op.excluded_classes.append(ProposalPythonOp) -``` - -Note that the first call register_caffe_python_extractor(ProposalPythonOp, 'rpn.proposal_layer.ProposalLayer') registers the extension of the layer in the Model Optimizer that will be found by a specific name (it is mandatory to join module name and layer name): 'rpn.proposal_layer.ProposalLayer'. - -The second call prevents the Model Optimizer from using this extension as if it is an extension for a layer with type `Proposal`. Otherwise, this layer can be chosen as an implementation of extension that can lead to potential issues. - -**Summary** - -In this section you implemented support for a custom layer with type `Python` that is `Proposal` layer in the topology. You learned how to calculate output shape of this layer. - -The values of attributes are hardcoded, and in the next section you will learn how to extract these values from original framework model (Caffe model in this case). - -## Registering Rules to Pass Extension Layer Properties from a Caffe\* Model to the Intermediate Representation - -Model Optimizer now knows how to set the shape of the `PythonProposalOp` operation, but it is incorrect to initialize attributes with same values for every operation. Instead, the values should be extracted from the original topology. Model Optimizer does not know how to map the custom layer properties to the `PythonProposalOp`. For this, you must register the `FrontExtractorOp` instance. - -> **NOTE**: This step is required only if the layer requires parameters from the original model. - -1. Remove call functions `register_caffe_python_extractor` and `Op.excluded_classes.append` from the file with `op`, because you will implement extracted attributes from prototxt by yourself. -There are multiple types of layers in Caffe: for example, `Convolution` and `Pooling`. Also, there is a specific type for custom Python\* layers called `Python`. Therefore, it is necessary to distinguish between those 'usual' types of layers and custom ones. If you want to implement extensions for a layer with type different to `Python`, you need to inherit your class of operation (for example, `ProposalFrontExtractor`) from `FrontExtractorOp`. Otherwise, inherit your class of operation from `CaffePythonFrontExtractorOp`. -2. Create a file `python_proposal_ext.py` in the folder `/deployment_tools/model_optimizer/extensions/front/caffe` -```py -from mo.front.extractor import CaffePythonFrontExtractorOp -class PythonProposalFrontExtractor(CaffePythonFrontExtractorOp): - pass -``` -For other layers types, inherit from `FrontExtractorOp`: -```py - from mo.front.extractor import FrontExtractorOp - class ProposalFrontExtractor(FrontExtractorOp): - pass -``` -You will implement extractor for layer with type `Python`, however, the steps are generally the same for layers with other types. -3. Specify the operation that the extractor refers to and a specific flag. The flag represents whether the operation should be used by the Model Optimizer or should be excluded from processing: -```py -from mo.front.extractor import CaffePythonFrontExtractorOp -class PythonProposalFrontExtractor(CaffePythonFrontExtractorOp): - op = 'rpn.proposal_layer.ProposalLayer' - enabled = True -``` -4. 
Register a mapping rule between the original model and the `PythonProposalOp` attributes by overriding the following function: -```py -from mo.front.extractor import CaffePythonFrontExtractorOp -from mo.ops.op import Op -class ProposalPythonFrontExtractor(CaffePythonFrontExtractorOp): - op = 'rpn.proposal_layer.ProposalLayer' - enabled = True - @staticmethod - def extract(node): - proto_layer = node.pb - param = proto_layer.python_param # each layer has a specific parameter, take a look at caffe.proto - python_params = str(param.param_str) # for Python layers, all params are in param_str - attrs = { - 'feat_stride': int(python_params.split(':')[-1]) - } - # update the attributes of the node - Op.get_op_class_by_name('Proposal').update_node_stat(node, attrs) # <------ here goes the name ('Proposal') of the Operation that was implemented before - return __class__.enabled -``` -> **NOTE:** if you implement extension for layer with type different to `Python`, change the following line: Op.get_op_class_by_name('Proposal').update_node_stat(node, attrs) to this line: Op.get_op_class_by_name(__class__.op).update_node_stat(node, mapping_rule). -You have successfully extracted the parameter `feat_stride` from `prototxt`, assuming it is the only parameter in this layer. -5. To increase the implementation flexibility: -```py - from mo.front.extractor import CaffePythonFrontExtractorOp - from mo.ops.op import Op - class PythonProposalFrontExtractor(CaffePythonFrontExtractorOp): - op = 'rpn.proposal_layer.ProposalLayer' - enabled = True - @staticmethod - def extract(node): - param = node.pb.python_param - attrs = CaffePythonFrontExtractorOp.parse_param_str(param.param_str) - Op.get_op_class_by_name('Proposal').update_node_stat(node, attrs) - return ProposalPythonFrontExtractor.enabled -``` - -You can successfully convert the model. Open the `.xml` file and view your code: -```xml -... - - - - - 1 - 18 - 15 - 15 - - - 1 - 36 - 15 - 15 - - - 1 - 3 - - - - - 300 - 5 - - - -... -``` - -Look at the output shape of the custom layer you implemented. The shape was calculated according to the rules specified in `PythonProposalOp`. The `ratio` and `scale` properties have the value `[0.5, 1, 2]` and `[8, 16, 32]`. They have square brackets because they are originally a repeated parameter. You converted the parameter to a list in `PythonProposalOp`. Model Optimizer cast the value to a string. According to Python\* rules, a list has a string representation of opening and closing square brackets and values joined by commas. - -This is not a valid notation for the Intermediate Representation specification, because repeated parameters must be separated by a comma but without the brackets. Therefore, you must override the Model Optimizer default behavior regarding how it handles those parameters during the Intermediate Representation emitting stage, after the optimizations are complete. To do so, implement `backend_attrs()` in the `PythonProposalOp` class: -```python -class PythonProposalOp(Op): - ... 
other methods - def backend_attrs(self) -> list: - """ - Gets list of attributes that should appear in resulting IR - Returns: - list of attributes names or list of tuples (name of attribute, pre-processing rule) - """ - return [ - ( # a tuple per attribute - 'ratio', # name of attribute - # pre-processing rule in a form of lambda - # lambda takes a PythonProposalOp node with all defined properties - # it translates [1,2,3] -> "1,2,3" - lambda node: ','.join(map(str, node['ratio'])) - ), - ( - 'scale', - lambda node: ','.join(map(str, node['scale'])) - ), - 'feat_stride', - 'base_size', - 'min_size', - 'pre_nms_topn', - 'post_nms_topn', - 'nms_thresh' - ] -``` -The model can now be successfully converted. - -Open the `.xml` file. `ratio` and `scale` have the expected correct values `0.5,1,2` and `8,16,32`: -```xml - ... - - - - - ... - - - ... - - - - ... -``` - -> **NOTE**: Model Optimizer supports the Faster-R-CNN topology. Run the following command for the same Intermediate Representation: - -```sh -python mo.py --input_model VGG16_faster_rcnn_final.caffemodel --input_proto test.prototxt --extensions /deployment_tools/inference-engine/samples/object_detection_sample/fasterrcnn_extensions -``` - -**Summary** - -In this section you learned how to: - -1. Create a framework-independent extension implementation of the Intermediate Representation custom layer with unified logic for calculating output shapes, specified set of attributes -2. Use the Framework-Specific property extractor to map original model custom layer properties to the expected properties of the Framework-Independent extension -3. Manipulate the custom layer properties representation in the resulting Intermediate Representation - -Files used in this section: - -* `/deployment_tools/model_optimizer/extensions/ops/python_proposal.py`: - -```py -import networkx as nx -import numpy as np -from mo.front.extractor import attr_getter -from mo.graph.graph import Node -from mo.ops.op import Op - -class ProposalOp(Op): - op = 'Proposal' - - def __init__(self, graph: nx.MultiDiGraph, attrs: dict): - mandatory_props = { - 'type': __class__.op, - 'op': __class__.op, - 'post_nms_topn': 300, # default in caffe-shared - 'infer': ProposalOp.proposal_infer - } - super().__init__(graph, mandatory_props, attrs) - - def supported_attrs(self): - return [ - 'feat_stride', - 'base_size', - 'min_size', - 'ratio', - 'scale', - 'pre_nms_topn', - 'post_nms_topn', - 'nms_thresh' - ] - - def backend_attrs(self): - return [ - 'feat_stride', - 'base_size', - 'min_size', - ('ratio', lambda node: attr_getter(node, 'ratio')), - ('scale', lambda node: attr_getter(node, 'scale')), - 'pre_nms_topn', - 'post_nms_topn', - 'nms_thresh', - ] - - @staticmethod - def proposal_infer(node: Node): - input_shape = node.in_node(0).shape - out_shape = np.array([0, 0], dtype=np.int64) - # rois blob: holds R regions of interest, each is a 5 - tuple - # (n, x1, y1, x2, y2) specifying an image batch index n and a - # rectangle(x1, y1, x2, y2) - out_shape[0] = input_shape[0] * node.post_nms_topn - out_shape[1] = 5 - node.out_node(0).shape = out_shape -``` -* `/deployment_tools/model_optimizer/extensions/front/caffe/python_proposal_ext.py`: - -```py -from mo.front.extractor import CaffePythonFrontExtractorOp -from mo.ops.op import Op - -class ProposalPythonFrontExtractor(CaffePythonFrontExtractorOp): - op = 'rpn.proposal_layer.ProposalLayer' - enabled = True - - @staticmethod - def extract(node): - param = node.pb.python_param - attrs = 
CaffePythonFrontExtractorOp.parse_param_str(param.param_str)
-        Op.get_op_class_by_name('Proposal').update_node_stat(node, attrs)
-        return ProposalPythonFrontExtractor.enabled
-```
+This page is deprecated. Please refer to the [Model Optimizer Extensibility](Customize_Model_Optimizer.md) page for more information.
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md
index ba56ecfcaa147d..c106d489ea8af7 100644
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md
@@ -1,10 +1,23 @@
 # Legacy Mode for Caffe* Custom Layers {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Legacy_Mode_for_Caffe_Custom_Layers}

-> **NOTE**: This functionality is deprecated and will be removed in future releases.
+> **NOTE**: This functionality is deprecated and will be removed in a future release.

-Model Optimizer can register custom layers in a way that the output shape is calculated by the Caffe\* framework installed on your system. This chapter covers this option.
+Model Optimizer can register custom layers in a way that the output shape is calculated by the Caffe\* framework
+installed on your system. This approach has several limitations:

-> **NOTE**: Caffe Python\* API has an issue when layer name does not correspond to the name of its top. The fix was implemented on [BVLC Caffe\*](https://github.com/BVLC/caffe/commit/35a7b87ad87457291dfc79bf8a7e7cf7ef278cbb). The Caffe framework on your computer must contain this fix. Otherwise, Caffe framework can unexpectedly fail during the fallback procedure.
+* If the output shape of your layer depends on dynamic parameters, input data, or parameters of previous layers, the
+output shape of the layer calculated via Caffe can be incorrect. For example, `SimplerNMS` filters out bounding boxes
+that do not satisfy the condition. Internally, the Caffe fallback forwards the whole net without any meaningful data,
+just some noise, so it is natural to get only one bounding box (0,0,0,0) instead of the expected number (for example,
+15). There is an option to patch Caffe accordingly, but it makes the success of Intermediate Representation generation
+dependent on the patched Caffe on the particular machine. To keep the solution independent from Caffe, we recommend
+using the extensions mechanism for such layers, described in the
+[Model Optimizer Extensibility](Customize_Model_Optimizer.md).
+* It is not possible to produce an Intermediate Representation on a machine that does not have Caffe installed.
+
+> **NOTE**: Caffe Python\* API has an issue when the layer name does not correspond to the name of its top. The fix
+> was implemented in [BVLC Caffe\*](https://github.com/BVLC/caffe/commit/35a7b87ad87457291dfc79bf8a7e7cf7ef278cbb). The
+> Caffe framework on your computer must contain this fix. Otherwise, the Caffe framework can unexpectedly fail during
+> the fallback procedure.

 > **NOTE**: The Caffe fallback feature was validated against [this GitHub revision](https://github.com/BVLC/caffe/tree/99466224dac86ddb86296b1e727794fb836bd80f). You may have issues with forks or later Caffe framework versions.

@@ -25,7 +38,8 @@ Where:

 **Example**:

-1. 
`Proposal` layer has parameters, and they appear in the Intermediate Representation. The parameters are stored in +the `proposal_param` property of the layer: ```shell \ ``` @@ -34,16 +48,6 @@ Where: \ ``` -For this feature, you need an appropriate version of Caffe installed on the computer on which you run the Model Optimizer. - -## Constraints of Using the Caffe Fallback - -Several layers in the Caffe\* framework can have shapes that dynamically depend on the input data, not only the layers that proceed the layer and its parameters. For example, `SimplerNMS` is filtering out bounding boxes that do not satisfy the condition. Internally, Caffe fallback forwards the whole net without any meaningful data - just some noise. It is natural to get only one bounding box (0,0,0,0) instead of expected number (for example, 15). There is an option to patch Caffe accordingly, however, it makes success of Intermediate Representation generation on the patched Caffe on the particular machine. To keep the solution independent from Caffe, we recommend to use extensions mechanism for such layers. - -Known cases like `Proposal`, `DetectionOutput`, `SimplerNMS` are implemented as extensions and can be used out of the box. - -A detailed description of supported layers is in the [Operations Specification](../../../ops/opset.md) document. - ## Building Caffe\* 1. Build Caffe\* with Python\* 3.5: @@ -68,4 +72,4 @@ python3 import caffe ``` -If Caffe was installed correctly, the `caffe` module is imported without errors. \ No newline at end of file +If Caffe was installed correctly, the `caffe` module is imported without errors. diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md index d3ba399a87745d..a3e6eda7756ad7 100644 --- a/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md +++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md @@ -1,363 +1,4 @@ # Sub-Graph Replacement in the Model Optimizer {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Subgraph_Replacement_Model_Optimizer} -Several reasons exist for why the Model Optimizer could not generate an Intermediate Representation for a model. However, in some cases, the Intermediate Representation could be generated after providing certain hints to the tool. The examples of hints below are mostly related to TensorFlow\*, but potentially could be actual for models created in any framework: - -* Topology contains an operation (or a sub-graph of operations) not known for Model Optimizer, but this operation (sub-graph) could be expressed as a combination of known operations. A hint would be a description of this combination to the tool). -* Sub-graph of operations in the topology expresses a single layer known to Inference Engine. -* TensorFlow and Inference Engine use different layouts of tensors, NHWC and NCHW respectively. If some tensor in NHWC layout is flattened (for example, all the dimensions are squashed into single dim), it is not possible to convert it to NCHW layout required for Inference Engine, so Model Optimizer cannot produce correct Intermediate Representation. - -The detailed solutions for the examples above are given later, the next subsection shows what is common in all three examples. 
- -## Sub-graph Replacement - -In these cases, the sub-graph (or a single node) of initial graph is replaced with a new sub-graph (single node). The sub-graph replacement consists of the following steps: - -1. Identify an existing sub-graph for replacement - -2. Generate a new sub-graph - -3. Connect a new sub-graph to the graph (create input/output edges to the new sub-graph) - -4. Create output edges out of a new sub-graph to the graph - -5. Do something with the original sub-graph (for example, remove it) - -Model Optimizer provides several ways to perform most of the sub-graph replacement steps. The next subsections describe these methods. - -## Replace a Single Operation with a Sub-graph of Operations - -For example, there is an operation `SquaredDifference` in TensorFlow which calculates \f$(a - b)^2\f$, where \f$a\f$ and \f$b\f$ are input tensors. Inference Engine does not support such operation. However, `SquaredDifference` could be expressed using two `Power` operations and one `Eltwise Add`. The `Power` operation calculates \f$scale * (a ^ {power}) + shift\f$, where \f$a\f$ is a tensor and \f$scale\f$, \f$power\f$ and \f$shift\f$ are float values. The first `Power` operation negates the value of tensor \f$b\f$. The second one is used to square the result of \f$a + (- b)\f$ which is calculated using the `Eltwise Add` operation applied to tensor \f$a\f$ and tensor \f$-b\f$. - -Given that, we can replace all `SquaredDifference` operations in the initial model with two `Power` and one `Eltwise` operations. The replacer is implemented in the following file `/deployment_tools/model_optimizer/extensions/front/SquaredDifference.py`. -```python -import networkx as nx -from mo.front.common.replacement import FrontReplacementOp -from mo.graph.graph import Node -from mo.ops.eltwise import Eltwise -from mo.ops.power import Power -class SquaredDifference(FrontReplacementOp): - """ - Example class illustrating how to implement replacement of a single op in the front-end of the MO pipeline. - This class replaces a single op SquaredDifference by a sub-graph consisting of 3 lower-level ops. - """ - op = "SquaredDifference" - enabled = True - def replace_op(self, graph: nx.MultiDiGraph, node: Node): - negate = Power(graph, dict(scale=-1, name=node.name + '/negate_')) - add = Eltwise(graph, dict(operation='sum', name=node.name + '/add_')) - squared = Power(graph, dict(power=2, name=node.name + '/squared_')) - out_node = squared.create_node([add.create_node([node.in_node(0), negate.create_node([node.in_node(1)])])]) - # Replace edge from out port 0 of the matched node with a edge from node out_node.id with port 0. - # The "explicit" version of the return value is: [(out_node.id, 0)]) - return [out_node.id] -``` -Model Optimizer internal representation of the graph uses the networkx module. - -**Key lines**: - -* Line 1: Imports this module. - -* Line 3: Imports class `FrontReplacementOp` that is used to replace operation of particular type with a new sub-graph. This class performs the first step of the sub-graph replacement (identifies an existing sub-graph for replacement). It is important to mention that the replacement happens before shape inference and creation of data nodes representing tensors with values. At this stage of model conversion pipeline, all nodes in the graph are operation nodes or nodes of type `Const` that produce tensor with fixed value embedded into the node. - -* Line 4: Imports class `Node` representing a single node in the computation graph. 
- -* Lines 5 - 6: Import classes representing operations `Power` and `Eltwise`. These classes are inherited from base class `mo.ops.Op` that represents operation and stores its attributes. - -* Line 9: Defines class `SquaredDifference` inherited from `FrontReplacementOp`. This is a replacer class that is automatically registered and executed by Model Optimizer. Since the class is located in the common (not framework) specific directory `/deployment_tools/model_optimizer/extensions/front`, it is used for replacement for all supported frameworks. - -* Line 15: Defines the class variable `op` that stores the name of the operation to be replaced. In this case, it is `SquaredDifference`. - -* Line 16: Defines class variable `enabled` that controls whether the replacer is enabled or not. The only function that should be implemented in the class is `replace_op`. It gets graph to operate on and an instance of node of desired operation (`SquaredDifference` in this case). This function performs step two and three of the sub-graph replacement (generates a new sub-graph to replace with and connects a new sub-graph to the graph). - -* Lines 19 - 21: Create instances of operations classes with required attributes. - -* Line 23: Creates a sub-graph from the operations defined above. The `create_node` method of the `Op` class generates `Node` from the `Op` and uses single mandatory argument - the list of input nodes (represented as instances of `Node` class) to create input edges to the node being generated. Inputs of the `SquaredDifference` node are retrieved using `node.in_node(0)` and `node.in_node(1)` method calls. The `Eltwise Add` node gets first input as initial first input of `SquaredDifference` node, the second input of `add` is the result of negation of the second input of `SquaredDifference` node: `[add.create_node([node.in_node(0), negate.create_node([node.in_node(1)])])]`. Then the result of `Add` node is squared. `out_node` node performs this calculation. - -The `replace_op` function returns a list of node names used to create output edges of the sub-graph to connect it with the rest of the graph. Each element of the list describes mapping between old output edge of the matched node and new sub-graph node and output edge index. The i-th element of the list corresponds to the i-th output tensor of the matched node. In this case, `SquaredDifference` produces single tensor through output port 0, so the returned list contains single element. In general, each element is a tuple, where the first element is the name of a new node producing required tensor and the second is the output port for that tensor. If the output port is 0, it is possible to use shortcut - just the name of the node instead of a tuple. Line 26 uses this shortcut. The returned value is used to create the new sub-graph output edges (step 4 of the sub-graph replacement). - -Default implementation of the `FrontReplacementOp` class removes matched node and all its input/output edges (step 5 of the sub-graph replacement). - -Another example of such kind of replacement is in the `/deployment_tools/model_optimizer/extensions/front/Sub.py` class where all instances of `Sub` operations are replaced with two operations: `Power` to negate the second argument and the `Eltwise` to perform elementwise add. - -## Replace Sub-graph of Operations with a New Sub-graph of Operations - -The previous example considered situation when one single node of a specific type is replaced. 
When it is necessary to replace a sub-graph of operations it is necessary to tell Model Optimizer how to identify this sub-graph. There are three ways to achieve that: - -* Use graph isomorphism pattern of the networkx module - -* Use nodes name pattern to identify `scope` (according to TensorFlow terminology) to be replaced - -* Use sets of `start` and `end` node names to match all nodes "between" them - -The next sections explain each option using real examples. - -### Replace Sub-graph of Operations Using Graph Isomorphism Pattern - -networkx Python\* module provides methods to find graph isomorphic to the given one using nodes and edges match: for example, `networkx.algorithms.isomorphism.categorical_node_match`, `networkx.algorithms.isomorphism.categorical_multiedge_match`. Model Optimizer uses these methods and provides simple API to use that feature. - -For example, the Caffe\* has layer called [Mean-Variance Normalization (MVN)](http://caffe.berkeleyvision.org/tutorial/layers/mvn.html), which is also supported by the Inference Engine. This layer is implemented with low-level operations in TensorFlow: `Mean`, `StopGradient`, `SquaredDifference`, `Squeeze` and `FusedBatchNorm`. Model Optimizer should replace sub-graph with these operations with a single Inference Engine layer of type `MVN`. - -The file `/deployment_tools/model_optimizer/extensions/front/tf/mvn.py` performs such a replacement. The first part of the file is: -```python -class MVN(FrontReplacementSubgraph): - enabled = True - def pattern(self): - log.debug('Enabled MVN replacement') - return dict( - nodes=[ - ('mean', dict(op='Mean')), - ('stop_grad', dict(op='StopGradient')), - ('sqdiff', dict(op='SquaredDifference')), - ('variance', dict(op='Mean')), - ('squeeze_mean', dict(op='Squeeze')), - ('squeeze_variance', dict(op='Squeeze')), - ('fbn', dict(op='FusedBatchNorm')), - ], - edges=[ - ('mean', 'stop_grad', {'in': 0}), - ('stop_grad', 'sqdiff', {'in': 1}), - ('sqdiff', 'variance', {'in': 0}), - ('mean', 'squeeze_mean', {'in': 0}), - ('variance', 'squeeze_variance', {'in': 0}), - ('squeeze_mean', 'fbn', {'in': 3}), - ('squeeze_variance', 'fbn', {'in': 4}), - ], - node_attrs=['op'], - edge_attrs=['in']) -``` -**Key lines**: - -* Line 1: Defines class `MVN` inherited from class `FrontReplacementSubgraph` that performs sub-graph replacement using sub-graph isomorphism pattern. - -* Line 3: Sets class variable `enabled` to value True meaning that this replacer is enabled. - -* The function `pattern` defines the sub-graph constraints to be matched. It returns a dictionary with four keys: - - * the `nodes` defines a list of nodes to be matched. Each element in the list is a tuple. The first element is the alias name assigned for the matched node, the second element is a dictionary with desired attributes of the node. - - * the `edges` defines a list of edges to be matched. Each element in the list is a tuple. The first and the second elements are the start and end edge nodes alias names respectively. The third element is a dictionary with desired edge attributes. - - * the `node_attrs` contains the names of nodes attributes to use during sub-graph isomorphism search. - - * the `edge_attrs` contains the names of edges attributes to use during sub-graph isomorphism search. - - The sub-graph is matched if all provided constraints are satisfied. If at least one node with desired attributes is missing or at least one defined edge is absent, the sub-graph is not matched. 
-* Line 9: Adds constraint that sub-graph should contain node with attribute `op` with value `Mean`. The matched node gets an alias name `mean`. The same way the line 10 add constrain for node `StopGradient`, the matched node gets an alias name `stop_grad`. - -* Line 18: Defines edge from node with alias name `mean` to node with alias name `stop_grad` having attribute `in` equal to 0. This means that the output of node `mean` is connected to the node `stop_grad` as a first input (Model Optimizer uses zero-based indexing that is why `in` is 0). Another example of defining the edges constraints is in line 25 where the edge from `squeeze_mean` is connected to the `fbn` node as fourth input. - -* Lines 26 - 27: Specify a list of attributes to be checked. In fact, these lists are just list of all keys in the dictionaries for node and edge attributes. - -Now when the Model Optimizer knows how to find sub-graph (step 1 of the sub-graph replacement), it is necessary to implement function that will perform actual sub-graph replacement (step 2 and 3). The code for this function is: -```python -def replace_sub_graph(self, graph: nx.MultiDiGraph, match: dict): - fbn = match['fbn'] - input = fbn.in_node(0) - log.debug('Found potential MVN pattern after {} with name {}'.format(input.op, input.name)) - if input.id != match['mean'].in_node(0).id or input.id != match['sqdiff'].in_node(0).id: - return - log.debug('Confirmed MVN pattern after {} with name {}'.format(input.op, input.name)) - MVN = Op.get_op_class_by_name('MVN') - mvn = MVN(graph, dict( - name=fbn.name + '/MVN_', - eps=fbn.eps, - required_reduction_indices=[1,2] if fbn.data_format == b'NHWC' else [2,3] - )) - mvn.attrs['old_infer'] = mvn.attrs['infer'] - mvn.attrs['infer'] = __class__.infer - mul = Eltwise(graph, dict(operation='mul', name=fbn.name + '/Mul_')) - add = Eltwise(graph, dict(operation='sum', name=fbn.name + '/Add_')) - input_gamma = fbn.in_node(1) - input_beta = fbn.in_node(2) - mean_reduction = match['mean'].in_node(1) - variance_reduction = match['mean'].in_node(1) - new_subgraph = add.create_node([ - mul.create_node([ - mvn.create_node([input, mean_reduction, variance_reduction]), - input_gamma - ]), - input_beta - ]) - replace_node(fbn, new_subgraph) -``` -The function accepts two arguments - the graph and the dictionary `match`. The keys in the dictionary are the alias names of matched nodes (defined in the `nodes` list in the function `pattern`) and the values are the matched node of the graph (the instance of Node object). - -The function generates new sub-graph with node of type `MVN` and two nodes of the type `Eltwise` calculating sum and product. There is nothing interesting in how the graph is generated and mathematics behind that, so attention will be put to two aspects of this function. - -The first one is the call to function `replace_node` in line 36. `FusedBatchNorm` node is replaced with the output node of the generated sub-graph: all input edges of the `FusedBatchNorm` node are re-connected to the `new_subgraph` node, all consumers of the `FusedBatchNorm` node are updated to get inputs from the `new_subgraph` node. This action connects newly generated sub-graph with an existing graph (step 4 of the sub-graph replacement). - -The second one is that the default implementation of the inference function for `MVN` operation is overwritten. In line 16, the default implementation of the inference function for `MVN` is saved to attribute `old_infer`. 
In line 17, the new inference function is saved to the instance of the `MVN` operation class. The new inference function code is the following:
-```python
-@staticmethod
-def infer(node: Node):
-    if not (node.in_node(1).has_valid('value') and node.in_node(2).has_valid('value')):
-        log.warning('Reduction indices for mean and variance for MVN node {} are not constants'.format(node.name))
-        return
-    if not (all(node.in_node(1).value == node.required_reduction_indices) and
-            all(node.in_node(2).value == node.required_reduction_indices)):
-        log.warning('Reduction indices for mean {} and variance {} do not match required ones {}'.format(
-            node.in_node(1).value,
-            node.in_node(2).value,
-            node.required_reduction_indices
-        ))
-        return
-    node.graph.remove_edge(node.in_node(1).id, node.id)
-    node.graph.remove_edge(node.in_node(2).id, node.id)
-    node.old_infer(node)
-```
-The `infer` function is needed to infer the value of the node (if possible) and to infer the shapes of the output tensors of the node (mandatory). The custom `infer` function performs additional checks that describe limitations of the `MVN` layer implementation in the Inference Engine. For example, the reduction indices for mean and variance must be constants (line 10), while in TensorFlow they could be computed during model inference. In addition, the function removes two edges from the graph (lines 17 and 18) because all required information is already stored in the `MVN` node attributes. This is due to the different `MVN` layer implementations in the Inference Engine and TensorFlow\*: `mean` and `variance` are attributes of the node in the Inference Engine, while in TensorFlow they are input tensors. The edges are not removed in the `replace_sub_graph` function, because they are used in the `infer` function (lines 7-12).
-
-The last action in the `infer` method (line 19) is to call the default infer function for the `MVN`, which is saved in the attribute `old_infer` of the node, to infer the output tensor shapes.
-
-In step 5 of the sub-graph replacement, the six matched nodes are automatically removed during the dead code elimination pass that is performed after applying the defined custom sub-graph replacements. The six matched nodes are no longer connected to the inputs of the network after the node `fbn` is replaced with the newly created sub-graph node. Since they are not marked as output nodes (using the `--output` command line parameter), they can be removed.
-
-The replacement works for all sub-graph isomorphism instances found in the network.
-
-### Replace Sub-graph of Operations Using Nodes Name Pattern
-
-TensorFlow uses a mechanism of scope to group related operation nodes. It is a good practice to put nodes performing a particular task into a scope. This approach divides a graph into logical blocks that are easier to review in TensorBoard\*. The `scope`, in fact, just defines a common prefix for the node names in the scope.
-
-For example, Inception topologies contain several types of so-called "Inception blocks". Some of them are exactly equal to each other, but located in different places of the network. For example, Inception V4 from the `tensorflow.contrib.slim` module has inception blocks `Mixed_5b`, `Mixed_5c` and `Mixed_5d` with exactly the same nodes with the same attributes.
-
-Now consider the situation when someone has implemented these Inception blocks extremely efficiently using a single Inference Engine custom layer called `InceptionBlock` and would like to replace these blocks with instances of this layer to decrease inference time.
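-
-Before diving into the mechanism itself, here is a small standalone sketch (plain Python, not Model Optimizer code) of the idea behind matching nodes by a scope prefix; the node names below are hypothetical, shortened examples:
-```python
-import re
-
-# Hypothetical node names as they could appear in a frozen InceptionV4 graph
-node_names = [
-    'InceptionV4/Mixed_5b/Branch_0/Conv2d_0a_1x1/Conv2D',
-    'InceptionV4/Mixed_5b/concat',
-    'InceptionV4/Mixed_5c/Branch_1/Conv2d_0a_1x1/Conv2D',
-    'InceptionV4/Mixed_6a/concat',
-]
-
-# A scope is just a common name prefix, so a prefix pattern selects all
-# nodes belonging to one block
-scope = re.compile(r'.*InceptionV4/Mixed_5b')
-members = [name for name in node_names if scope.match(name)]
-# 'members' now contains only the two Mixed_5b nodes
-```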
-
-Model Optimizer provides a mechanism to replace sub-graphs of operations defined by regular expressions for the node name prefixes (scopes). In this particular case, some of the patterns are: `.*InceptionV4/Mixed_5b`, `.*InceptionV4/Mixed_5c` and `.*InceptionV4/Mixed_5d`. Each pattern starts with `.*`, because the prefix `InceptionV4` is added to all node names during model freezing.
-
-Sub-graph replacement using a node name pattern is a bit trickier than the replacement of a single operation or the networkx isomorphism pattern described above. In comparison with the previously described replacements, you should do the following additional steps:
-
-1. Prepare a configuration file template defining the node name patterns and information about the custom layer attributes.
-
-2. Run Model Optimizer with a command line parameter to add information about the input and output nodes of the specified sub-graphs.
-
-Consider the following possible configuration file for the Inception Block replacer:
-```json
-[
-    {
-        "custom_attributes": {
-            "attr1_key": "attr1_value",
-            "attr2_key": 123456
-        },
-        "id": "InceptionBlockReplacer",
-        "op": "InceptionBlock",
-        "instances": [
-            ".*InceptionV4/Mixed_5b",
-            ".*InceptionV4/Mixed_5c",
-            ".*InceptionV4/Mixed_5d"
-        ],
-        "match_kind": "scope"
-    }
-]
-```
-The `.json` file contains a list of dictionaries. Each dictionary defines one replacement. Each replacement is defined with several keys:
-
-* `id` (mandatory) is a unique identifier of the replacer. It is used in the Python\* code that implements the sub-graph replacement to link the class and the replacement description from the configuration file.
-
-* `match_kind` (mandatory) is a string that specifies the matching algorithm. Currently, `scope` and `points` are supported. In this example, the first one is considered; the `points` match kind is described below.
-
-* `instances` (mandatory) specifies the instances of the sub-graph to be matched. For the match kind `scope`, it contains a list of node name prefix patterns.
-
-* `custom_attributes` (optional) is a dictionary with static attributes of the layer to be dumped to the Inference Engine Intermediate Representation `.xml` file.
-
-* `op` (optional) is used only if the sub-graph replacement Python code is not needed, because the sub-graph should be replaced with a single node of type `op`. If this attribute is not set, it is necessary to implement Python code with the sub-graph generation logic. Both options are considered in this example.
-
-When the configuration file is ready, run Model Optimizer with the regular command line parameters pointing to the file with the model and input shapes (if necessary) and the additional parameter `--tensorflow_custom_operations_config_update` pointing to the generated configuration file. If the file is correct, Model Optimizer adds two keys to the `InceptionBlockReplacer` dictionary: `inputs` and `outputs`, with the following content:
-```json
-[
-    {
-        "id": "InceptionBlockReplacer",
-        ...
-        "inputs": [
-            [
-                {
-                    "node": "Branch_2/Conv2d_0a_1x1/Conv2D$",
-                    "port": 0
-                },
-                {
-                    "node": "Branch_3/AvgPool_0a_3x3/AvgPool$",
-                    "port": 0
-                },
-                {
-                    "node": "Branch_1/Conv2d_0a_1x1/Conv2D$",
-                    "port": 0
-                },
-                {
-                    "node": "Branch_0/Conv2d_0a_1x1/Conv2D$",
-                    "port": 0
-                }
-            ]
-        ],
-        "outputs": [
-            {
-                "node": "concat$",
-                "port": 0
-            }
-        ]
-    }
-]
-```
-The value for the key `inputs` is a list of lists describing the input tensors of the sub-graph. Each element of the top-level list corresponds to one unique input tensor of the sub-graph.
-Each internal list describes the list of nodes consuming this tensor and the port numbers where the tensor is consumed. Model Optimizer generates regular expressions for the input node names to uniquely identify them in each instance of the sub-graph defined by `instances`. These nodes are referred to as input nodes of the sub-graph.
-
-In the InceptionV4 topology, the `InceptionV4/Mixed_5b` block has four input tensors from outside of the sub-graph, but all of them are produced by the node `InceptionV4/Mixed_5a/concat`. Therefore, the top-level list of the `inputs` contains one list corresponding to this tensor. Four input nodes of the sub-graph consume the tensor produced by the `InceptionV4/Mixed_5a/concat` node. In this case, all four input nodes consume the input tensor at port 0.
-
-The order of items in the internal list describing nodes does not matter, but the order of elements in the top-level list is important. This order defines how Model Optimizer attaches input tensors to a newly generated node if the sub-graph is replaced with a single node. The i-th input node of the sub-graph is obtained using the call `match.single_input_node(i)` in the sub-graph replacer code. More information about the API is given below. If you need to change the order of input tensors, you can edit the configuration file in a text editor.
-
-The value for the key `outputs` is a list describing the nodes of the sub-graph that produce tensors going outside of the sub-graph or that do not have child nodes. These nodes are referred to as output nodes of the sub-graph. The order of elements in the list is important: the i-th element of the list describes the i-th output tensor of the sub-graph, which can be obtained using the call `match.output_node(i)`. The order of elements can be manually changed in the configuration file. Model Optimizer uses this order to connect output edges if the sub-graph is replaced with a single node.
-
-Now that the meaning of the `inputs` and `outputs` attributes is clear, return to the replacer implementation. The replacer `InceptionBlockReplacer` contains the attribute `op` with the value `InceptionBlock`, which means that the identified sub-graph should be replaced with a single layer of type `InceptionBlock`. This layer is not known to Model Optimizer, so it is necessary to define it. See [Extending the Model Optimizer with New Primitives](Extending_Model_Optimizer_with_New_Primitives.md). You must create the file `extensions/ops/InceptionBlock.py` with the following content:
-```python
-import numpy as np
-
-from mo.graph.graph import Node
-from mo.ops.op import Op
-
-
-class InceptionBlock(Op):
-    op = "InceptionBlock"
-    enabled = True
-
-    def __init__(self, graph, attrs):
-        super().__init__(graph, attrs, {
-            'type': __class__.op,
-            'op': __class__.op,
-        })
-```
-The shape inference function is not defined. In this case, Model Optimizer uses the TensorFlow fallback to calculate the shapes of the sub-graph output tensors.
-
-Run Model Optimizer with the regular command line parameters, the path to the model file and the input shape (if necessary), and the parameter `--tensorflow_use_custom_operations_config` pointing to the created configuration file. Model Optimizer generates an Intermediate Representation `.xml` file with three sequential layers of type `InceptionBlock`, like in the following example:
-```xml
-<layer id="1" name="InceptionV4/Mixed_5b" precision="FP32" type="InceptionBlock">
-    <input>
-        <port id="0">
-            <dim>1</dim>
-            <dim>384</dim>
-            <dim>35</dim>
-            <dim>35</dim>
-        </port>
-    </input>
-    <output>
-        <port id="1">
-            <dim>1</dim>
-            <dim>384</dim>
-            <dim>35</dim>
-            <dim>35</dim>
-        </port>
-    </output>
-</layer>
-```
-The implementation of the sub-graph replacement by scope with a single layer is complete.
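-
-As mentioned above, if the `op` attribute were omitted from the configuration file, a Python replacer class would have to generate the replacement sub-graph itself. The following is only a minimal sketch of what such a class could look like; the import path follows the convention used by the replacers in this era of Model Optimizer and may differ between versions, and the method body is intentionally left unimplemented:
-```python
-from mo.front.tf.replacement import FrontReplacementFromConfigFileSubGraph
-
-
-class InceptionBlockReplacement(FrontReplacementFromConfigFileSubGraph):
-    # must be equal to the "id" attribute of the replacement description
-    # in the configuration file
-    replacement_id = 'InceptionBlockReplacer'
-
-    def generate_sub_graph(self, graph, match):
-        # create the new nodes here and return a dictionary mapping alias
-        # names to the created Node objects; the aliases can then be used
-        # in the input_edges_match and output_edges_match methods
-        raise NotImplementedError('sub-graph generation code goes here')
-```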
-
-The next subsection explains
-how Model Optimizer replaces a sub-graph identified by start and end nodes (`points`) with another sub-graph.
-
-### Replace Sub-graph of Operations Using Points
-In this scenario, the user defines the sub-graph for the matching algorithm via a set of "start" and "end" nodes.
-Given these sets, Model Optimizer performs the following steps:
-1. Starts a graph traversal from every _start_ node following the direction of the graph edges.
-The search stops at _end_ nodes and at nodes without further children. All visited nodes are added to the matched sub-graph.
-2. Starts another graph traversal from each non-start node of the sub-graph, i.e. every node except the nodes from the "start" set.
-In this step, the edges are traversed in the opposite direction. All newly visited nodes are added to the
-matched sub-graph. This step is needed to add the nodes required for calculating the values of internal nodes of the
-matched sub-graph.
-3. Checks that all "end" nodes were reached from the "start" nodes. If not, it exits with an error.
-4. Checks that there are no `Placeholder` operations among the added nodes. If there are, then some side branch of
-the sub-graph (added in step 2) depends on inputs of the network. Such a configuration is not correct, so Model Optimizer exits with an error.
-
-This algorithm finds all nodes "between" the start and end nodes, plus the nodes needed to calculate the values of internal nodes of the
-matched sub-graph. The latter nodes produce _constant_ values because they do not depend on the input of the network.
-**This sub-graph matching has a limitation: each start node must have only one input.** Therefore, it is not possible
-to specify, for example, a convolution node as a start node, because it has two inputs: the data tensor and the tensor with weights.
-
-For an example of replacement with points, refer to the case study on the
-[conversion of the SSD models created with the TensorFlow Object Detection API](TensorFlow_SSD_ObjectDetection_API.md).
+The document has been deprecated. Refer to the [Model Optimizer Extensibility](Subgraph_Replacement_Model_Optimizer.md)
+for the up-to-date documentation.
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md b/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md
deleted file mode 100644
index 482cb1545abf97..00000000000000
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md
+++ /dev/null
@@ -1,449 +0,0 @@
-# Converting Faster R-CNN models, created with TensorFlow Object Detection API {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_TensorFlow_Faster_RCNN_ObjectDetection_API}
-
-This is a deprecated page. Please consider reading [this](../convert_model/tf_specific/Convert_Object_Detection_API_Models.md) page, which describes the new approach to converting Object Detection API models that gives results closer to TensorFlow inference.
-
-## Converting models created with TensorFlow Object Detection API version 1.6.0 or higher
-This chapter describes how to convert selected Faster R-CNN models from the TensorFlow Object Detection API model zoo of version 1.6.0 or higher. The full list of supported models is provided in the table below. Note that only batch size 1 is currently supported, and the only Inference Engine plugin that supports inference of these topologies is CPU.
-
-The Faster R-CNN models contain several building blocks similar to the building blocks from SSD models, so it is highly recommended to read the chapter about [enabling TensorFlow Object Detection API SSD models](TensorFlow_SSD_ObjectDetection_API.md) first. Detailed information about Faster R-CNN topologies is provided [here](https://arxiv.org/abs/1506.01497).
-
-The TensorFlow network consists of a number of big blocks grouped by scope:
-
-* `Preprocessor` performs scaling/resizing of the image and converts input data to the [0, 1] interval. It has two outputs: the first one is the modified input image and the second one is a constant tensor with shape `(batch_size, 3)` and values `(resized_image_height, resized_image_width, 3)`.
-
-* `FirstStageFeatureExtractor` is a backbone feature extractor.
-
-* `FirstStageBoxPredictor` calculates box and class predictions.
-
-* `GridAnchorGenerator` generates anchor coordinates.
-
-* `ClipToWindow` crops anchors to the resized image size.
-
-* `Decode` decodes the coordinates of boxes using the anchors and the data from the `FirstStageBoxPredictor`.
-
-* `BatchMultiClassNonMaxSuppression` performs non-maximum suppression.
-
-* `map` scales the coordinates of boxes to the [0, 1] interval by dividing the coordinates by (resized_image_height, resized_image_width).
-
-* `map_1` scales the coordinates from the [0, 1] interval back to the resized image sizes.
-
-* `SecondStageFeatureExtractor` is a feature extractor for the predicted Regions of Interest (ROIs).
-
-* `SecondStageBoxPredictor` refines box coordinates according to the output of `SecondStageFeatureExtractor`.
-
-* `SecondStagePostprocessor` is the DetectionOutput layer performing the final box predictions.
-
-### Sub-graph replacements
-There are three sub-graph replacements defined in the `extensions/front/tf/legacy_faster_rcnn_support.json` file used to convert these models:
-
-* the first one replaces the `Preprocessor` block. The implementation of this replacer is in the `/deployment_tools/model_optimizer/extensions/front/tf/Preprocessor.py` file
-
-* the second one replaces a number of blocks in the graph, including `GridAnchorGenerator`, `ClipToWindow`, `Decode`, `BatchMultiClassNonMaxSuppression`, `Tile`, `Tile_1` and `map`, with Proposal and ROIPooling layers and some additional layers to pre-process input data
-
-* the third one replaces `SecondStagePostprocessor` with a DetectionOutput layer.
-
-The second replacer is defined using the following configuration that matches the sub-graph by points:
-
-```json
-    {
-        "custom_attributes": {
-            "nms_threshold": 0.7,
-            "feat_stride": 16,
-            "max_proposals": 100,
-            "anchor_base_size": 256,
-            "anchor_scales": [0.25, 0.5, 1.0, 2.0],
-            "anchor_aspect_ratios": [0.5, 1.0, 2.0],
-            "roi_spatial_scale": 0.0625
-        },
-        "id": "TFObjectDetectionAPIFasterRCNNProposalAndROIPooling",
-        "include_inputs_to_sub_graph": true,
-        "include_outputs_to_sub_graph": true,
-        "instances": {
-            "end_points": [
-                "CropAndResize",
-                "map_1/TensorArrayStack/TensorArrayGatherV3",
-                "map_1/while/strided_slice/Enter",
-                "BatchMultiClassNonMaxSuppression/map/TensorArrayStack_4/TensorArrayGatherV3"
-            ],
-            "start_points": [
-                "FirstStageBoxPredictor/concat",
-                "FirstStageBoxPredictor/concat_1",
-                "GridAnchorGenerator/Identity",
-                "Shape",
-                "CropAndResize"
-            ]
-        },
-        "match_kind": "points"
-    }
-```
-
-The `start_points` list contains the following nodes:
-
-* `FirstStageBoxPredictor/concat` node produces box coordinate predictions.
-
-* `FirstStageBoxPredictor/concat_1` node produces class predictions, which will be used for the ROIs.
-
-* `GridAnchorGenerator/Identity` node produces anchor coordinates.
-
-* `Shape` and `CropAndResize` nodes are specified as inputs to correctly isolate the required sub-graph. Refer to the [chapter](Subgraph_Replacement_Model_Optimizer.md) for more information about replacements by points.
-
-The `end_points` list contains the following nodes:
-
-* `CropAndResize` is the node that performs the ROI pooling operation.
-
-* `map_1/TensorArrayStack/TensorArrayGatherV3`, `map_1/while/strided_slice/Enter` and `BatchMultiClassNonMaxSuppression/map/TensorArrayStack_4/TensorArrayGatherV3` are specified to correctly isolate the sub-graph.
-
-The `custom_attributes` dictionary contains attributes whose values are mostly taken from the topology-specific configuration file `samples/configs/faster_rcnn_*.config` of the [TensorFlow Object Detection API repository](https://github.com/tensorflow/models/tree/master/research/object_detection):
-
-* `nms_threshold` is the value of the `first_stage_nms_iou_threshold` parameter.
-
-* `feat_stride` is the value of the `height_stride` and `width_stride` parameters. Inference Engine supports only the case when these two values are equal, which is why the replacement configuration file contains just one parameter.
-
-* `max_proposals` is the value of the `max_total_detections` parameter, which is the maximum number of proposal boxes from the Proposal layer and detected boxes.
-
-* `anchor_base_size` is the base size of the generated anchor. The value 256 is the default for this parameter and is not specified in the configuration file.
-
-* `anchor_scales` is the value of the `scales` attribute.
-
-* `anchor_aspect_ratios` is the value of the `aspect_ratios` attribute.
-
-* `roi_spatial_scale` is needed for the Inference Engine ROIPooling layer. It is a default value that is not actually used.
-
-The identifier for this replacer is `TFObjectDetectionAPIFasterRCNNProposalAndROIPooling`. The Python implementation of this replacer is in the file `/deployment_tools/model_optimizer/extensions/front/tf/FasterRCNNs.py`.
-
-The first four functions of the replacer class are the following:
-
-```python
-class TFObjectDetectionAPIFasterRCNNProposalAndROIPooling(FrontReplacementFromConfigFileSubGraph):
-    """
-    This class replaces a sub-graph of operations with Proposal and ROIPooling layers and additional layers transforming
-    tensors from the layout of TensorFlow to the layout required by Inference Engine.
-    Refer to comments inside the functions for more information about the performed actions.
-    """
-    replacement_id = 'TFObjectDetectionAPIFasterRCNNProposalAndROIPooling'
-
-    def run_after(self):
-        return [PreprocessorReplacement]
-
-    def run_before(self):
-        return [SecondStagePostprocessorReplacement]
-
-    def output_edges_match(self, graph: nx.DiGraph, match: SubgraphMatch, new_sub_graph: dict):
-        return {match.output_node(0)[0].id: new_sub_graph['roi_pooling_node'].id}
-
-    def nodes_to_remove(self, graph: nx.MultiDiGraph, match: SubgraphMatch):
-        new_list = match.matched_nodes_names().copy()
-        # do not remove the nodes that produce box predictions and class predictions
-        new_list.remove(match.single_input_node(0)[0].id)
-        new_list.remove(match.single_input_node(1)[0].id)
-        return new_list
-```
-
-The function `run_after` returns a list of Python classes, inherited from one of the replacer classes (`FrontReplacementOp`, `FrontReplacementPattern`, `FrontReplacementFromConfigFileSubGraph`, etc.), that the current sub-graph replacement class must run after. In this case, the replacer must run after the `Preprocessor` block is removed by the `PreprocessorReplacement` replacer. In a similar way, the `run_before` function is used to tell Model Optimizer to execute `SecondStagePostprocessorReplacement` before this replacer.
-
-The `output_edges_match` function describes the matching between the output nodes of the sub-graph before and after the replacement. In this case, the only needed output node of the sub-graph is the `CropAndResize` node, which is identified with `match.output_node(0)[0]`. The new output node, which is created in the `generate_sub_graph` function, is identified with `new_sub_graph['roi_pooling_node']`.
-
-The `nodes_to_remove` function takes the default list of nodes to be removed, which contains all matched nodes, and removes from it the two input nodes identified with `match.single_input_node(0)[0]` and `match.single_input_node(1)[0]`. These nodes will be connected as inputs to the new nodes generated in the `generate_sub_graph` function, so they should not be removed.
-
-The code generating the new sub-graph is the following:
-
-```python
-    def generate_sub_graph(self, graph: nx.MultiDiGraph, match: SubgraphMatch):
-        log.debug('TFObjectDetectionAPIFasterRCNNProposal: matched_nodes = {}'.format(match.matched_nodes_names()))
-
-        config_attrs = match.custom_replacement_desc.custom_attributes
-        nms_threshold = config_attrs['nms_threshold']
-        feat_stride = config_attrs['feat_stride']
-        max_proposals = config_attrs['max_proposals']
-        anchor_base_size = config_attrs['anchor_base_size']
-        roi_spatial_scale = config_attrs['roi_spatial_scale']
-        proposal_ratios = config_attrs['anchor_aspect_ratios']
-        proposal_scales = config_attrs['anchor_scales']
-        anchors_count = len(proposal_ratios) * len(proposal_scales)
-```
-
-These lines get the parameters defined in the sub-graph replacement configuration file and calculate the initial anchors count.
-
-```python
-        # get the ROIPool size from the CropAndResize which performs the same action
-        if 'CropAndResize' not in graph.nodes():
-            raise Error('Failed to find node with name "CropAndResize" in the topology. Probably this is not Faster'
-                        ' RCNN topology or it is not supported')
-        roi_pool_size = Node(graph, 'CropAndResize').in_node(3).value[0]
-```
-
-The code above gets the ROI Pooling spatial output dimension size as a value from the fourth input of the node named `CropAndResize`.
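-
-As a quick, standalone sanity check of the two lookups performed so far (the `[14, 14]` crop size is an assumed example value, not taken from a real model; the anchor values come from the configuration file shown above):
-```python
-import numpy as np
-
-# assumed example: the 4th input of CropAndResize is the crop_size constant,
-# which is what in_node(3).value would return
-crop_size = np.array([14, 14])
-roi_pool_size = crop_size[0]  # only element 0 is kept, so a square pooling window is assumed
-
-# anchors count computed from the configuration file values shown above
-anchor_scales = [0.25, 0.5, 1.0, 2.0]
-anchor_aspect_ratios = [0.5, 1.0, 2.0]
-anchors_count = len(anchor_aspect_ratios) * len(anchor_scales)
-assert anchors_count == 12  # one anchor per (scale, ratio) pair at every feature map cell
-```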
-
-```python
-        # Convolution/matmul node that produces class predictions
-        # Permute the result tensor with class predictions so that it will be in the correct layout for Softmax
-        predictions_node = match.single_input_node(1)[0].in_node(0).in_node(0)
-        permute_predictions_op = Permute(graph, {'order': np.array([0, 2, 3, 1])})
-        permute_predictions_node = permute_predictions_op.create_node([], dict(name=predictions_node.name + '/Permute_'))
-        insert_node_after(predictions_node, permute_predictions_node, 0)
-
-        reshape_classes_op = Reshape(graph, {'dim': np.array([0, -1, 2])})
-        reshape_classes_node = reshape_classes_op.create_node([permute_predictions_node],
-                                                              dict(name='Reshape_FirstStageBoxPredictor_Class_'))
-        update_attrs(reshape_classes_node, 'shape_attrs', 'dim')
-
-        softmax_conf_op = Softmax(graph, {'axis': 1})
-        softmax_conf_node = softmax_conf_op.create_node([reshape_classes_node],
-                                                        dict(name='FirstStageBoxPredictor_SoftMax_Class_'))
-```
-
-The output with class predictions from the `FirstStageBoxPredictor` is generated with a convolution operation. The convolution output data layout in TensorFlow is NHWC, while Inference Engine uses the NCHW layout. By default, Model Optimizer converts the weights of TensorFlow convolutions to produce the output tensor in the NCHW layout required by Inference Engine. The issue arises because the class predictions tensor is passed through a Softmax operation to produce class probabilities. The Inference Engine Softmax is performed over the fastest-changing dimension, which is 'W' in Inference Engine. Thus, the softmax operation would be performed over the wrong dimension after the conversion of the convolution node producing class predictions. The solution is to add Permute and Reshape operations to prepare the input data for Softmax. The Reshape operation is required to make the size of the fastest-changing dimension equal to 2, because there are 2 classes being predicted: background and foreground.
-
-Another issue is that the layout of elements in the predicted classes tensor is different between TensorFlow and the Inference Engine Proposal layer requirements. In TensorFlow, the tensor has the virtual layout [N, H, W, num_anchors, num_classes], while the Inference Engine Proposal layer requires the virtual layout [N, num_classes, num_anchors, H, W].
-Thus, it is necessary to reshape, permute, and then reshape again the output of the Softmax to the shape required by the Proposal layer:
-
-```python
-        reshape_softmax_op = Reshape(graph, {'dim': np.array([1, anchors_count, 2, -1])})
-        reshape_softmax_node = reshape_softmax_op.create_node([softmax_conf_node], dict(name='Reshape_Softmax_Class_'))
-        update_attrs(reshape_softmax_node, 'shape_attrs', 'dim')
-
-        permute_reshape_softmax_op = Permute(graph, {'order': np.array([0, 1, 3, 2])})
-        permute_reshape_softmax_node = permute_reshape_softmax_op.create_node([reshape_softmax_node],
-                                                                              dict(name='Permute_'))
-
-        # implement a custom reshape infer function because we need to know the input convolution node output dimension
-        # sizes, but we can know them only after partial inference
-        reshape_permute_op = Reshape(graph, {'dim': np.ones([4]), 'anchors_count': anchors_count,
-                                             'conv_node': predictions_node})
-        reshape_permute_op.attrs['old_infer'] = reshape_permute_op.attrs['infer']
-        reshape_permute_op.attrs['infer'] = __class__.classes_probabilities_reshape_shape_infer
-        reshape_permute_node = reshape_permute_op.create_node([permute_reshape_softmax_node],
-                                                              dict(name='Reshape_Permute_Class_'))
-        update_attrs(reshape_permute_node, 'shape_attrs', 'dim')
-```
-
-The Proposal layer has three inputs: class probabilities, box predictions, and the input shape of the image. The first two tensors are ready, so it is necessary to create a Const operation that produces the desired third input tensor.
-
-```python
-        # create a constant input with the image height, width and scale H and scale W (if present) required for Proposal
-        const_value = np.array([[input_height, input_width, 1]], dtype=np.float32)
-        const_op = Const(graph, dict(value=const_value, shape=const_value.shape))
-        const_node = const_op.create_node([], dict(name='Proposal_const_image_size_'))
-```
-
-Now add the Proposal layer:
-
-```python
-        proposal_op = ProposalOp(graph, dict(min_size=10, framework='tensorflow', box_coordinate_scale=10,
-                                             box_size_scale=5, post_nms_topn=max_proposals, feat_stride=feat_stride,
-                                             ratio=proposal_ratios, scale=proposal_scales, base_size=anchor_base_size,
-                                             pre_nms_topn=2**31 - 1,
-                                             nms_thresh=nms_threshold))
-        proposal_node = proposal_op.create_node([reshape_permute_node,
-                                                 match.single_input_node(0)[0].in_node(0).in_node(0),
-                                                 const_node],
-                                                dict(name=proposal_op.attrs['type'] + '_'))
-```
-
-The box coordinates in TensorFlow are in the "YXYX" layout, while Inference Engine uses the "XYXY" layout, so it is necessary to swap the coordinates produced by the Proposal layer.
-This is implemented with the help of a convolution node with a special filter of size [5, 5]:
-
-```python
-        proposal_reshape_4d_op = Reshape(graph, {'dim': np.array([max_proposals, 1, 1, 5])})
-        proposal_reshape_4d_node = proposal_reshape_4d_op.create_node([proposal_node], dict(name="reshape_4d_"))
-        update_attrs(proposal_reshape_4d_node, 'shape_attrs', 'dim')
-
-        # create a convolution node to swap X and Y coordinates in the proposals
-        conv_filter_const_data = np.array(np.array([[1, 0, 0, 0, 0],
-                                                    [0, 0, 1, 0, 0],
-                                                    [0, 1, 0, 0, 0],
-                                                    [0, 0, 0, 0, 1],
-                                                    [0, 0, 0, 1, 0]],
-                                                   dtype=np.float32).reshape([1, 1, 5, 5]), dtype=np.float32)
-        conv_filter_const_op = Const(graph, dict(value=conv_filter_const_data, spatial_dims=np.array([2, 3])))
-        conv_filter_const_node = conv_filter_const_op.create_node([], dict(name="conv_weights"))
-
-        conv_op = Op(graph, {
-            'op': 'Conv2D',
-            'bias_addable': False,
-            'spatial_dims': np.array([1, 2]),
-            'channel_dims': np.array([3]),
-            'batch_dims': np.array([0]),
-            'pad': None,
-            'pad_spatial_shape': None,
-            'input_feature_channel': 2,
-            'output_feature_channel': 2,
-            'output_shape': [max_proposals, 1, 1, 5],
-            'dilation': np.array([1, 1, 1, 1], dtype=np.int64),
-            'stride': np.array([1, 1, 1, 1]),
-            'type': 'Convolution',
-            'group': None,
-            'layout': 'NHWC',
-            'infer': __class__.fake_conv_shape_infer})
-        predictions_node = conv_op.create_node([proposal_reshape_4d_node, conv_filter_const_node], dict(name="conv_"))
-        update_ie_fields(graph.node[predictions_node.id])
-
-        proposal_reshape_2d_op = Reshape(graph, {'dim': np.array([max_proposals, 5])})
-        proposal_reshape_2d_node = proposal_reshape_2d_op.create_node([predictions_node], dict(name="reshape_2d_"))
-        # set a specific name for this Reshape operation so we can use it in the DetectionOutput replacer
-        proposal_reshape_2d_node['name'] = 'swapped_proposals'
-```
-
-The ROIPooling layer in TensorFlow is implemented with an operation called `CropAndResize` with bilinear filtration. The Inference Engine implementation of the ROIPooling layer with bilinear filtration requires the input box coordinates to be scaled to the [0, 1] interval. Adding an element-wise multiplication of the box coordinates solves this issue:
-
-```python
-        # the Inference Engine ROIPooling with bilinear filtration needs proposals scaled by the image size
-        proposal_scale_const = np.array([1.0, 1 / input_height, 1 / input_width, 1 / input_height, 1 / input_width],
-                                        dtype=np.float32)
-        proposal_scale_const_op = Const(graph, dict(value=proposal_scale_const, shape=proposal_scale_const.shape))
-        proposal_scale_const_node = proposal_scale_const_op.create_node([], dict(name='Proposal_scale_const_'))
-
-        scale_proposals_op = Eltwise(graph, {'operation': 'mul'})
-        scale_proposals_node = scale_proposals_op.create_node([proposal_reshape_2d_node, proposal_scale_const_node],
-                                                              dict(name='scale_proposals_'))
-```
-
-The last step is to create the ROIPooling node with two inputs: the identified feature maps from the `FirstStageFeatureExtractor` and the scaled output of the Proposal layer:
-
-```python
-        feature_extractor_output_nodes = scope_output_nodes(graph, 'FirstStageFeatureExtractor')
-        if len(feature_extractor_output_nodes) != 1:
-            raise Error("Failed to determine FirstStageFeatureExtractor output node to connect it to the ROIPooling."
-                        "Found the following nodes: {}".format([node.name for node in feature_extractor_output_nodes]))
-
-        roi_pooling_op = ROIPooling(graph, dict(method="bilinear", framework="tensorflow",
-                                                pooled_h=roi_pool_size, pooled_w=roi_pool_size,
-                                                spatial_scale=roi_spatial_scale))
-        roi_pooling_node = roi_pooling_op.create_node([feature_extractor_output_nodes[0], scale_proposals_node],
-                                                      dict(name='ROI_Pooling_'))
-
-        return {'roi_pooling_node': roi_pooling_node}
-```
-
-There are two additional methods implemented in the replacer class:
-
-* The `fake_conv_shape_infer` function is a dummy infer function for the convolution that permutes the X and Y coordinates of the Proposal output; it avoids setting a lot of internal attributes required for proper shape inference.
-
-* The `classes_probabilities_reshape_shape_infer` function is used to update the output dimensions of the reshape operation. The output spatial dimensions depend on the convolution output spatial dimensions, and thus they are not known until the shape inference pass, which is performed after this sub-graph replacement class. So this custom infer function is called instead of the default Reshape shape inference function: it updates the required attribute `dim` of the node with the convolution output spatial dimensions, which are known at the time this inference function is called, and then calls the default Reshape inference function.
-
-```python
-    @staticmethod
-    def fake_conv_shape_infer(node: Node):
-        node.out_node(0).shape = node.in_node(0).shape
-        # call functions to update internal attributes required for correct IR generation
-        mark_input_bins(node)
-        assign_dims_to_weights(node.in_node(1), [0, 1], node.input_feature_channel, node.output_feature_channel, 4)
-
-    @staticmethod
-    def classes_probabilities_reshape_shape_infer(node: Node):
-        # now we can determine the reshape dimensions from the Convolution node
-        conv_node = node.conv_node
-        conv_output_shape = conv_node.out_node().shape
-
-        # update the desired shape of the Reshape node
-        node.dim = np.array([0, conv_output_shape[1], conv_output_shape[2], node.anchors_count * 2])
-        node.old_infer(node)
-```
-
-The second replacer defined in the sub-graph replacement configuration file replaces the `SecondStagePostprocessor` block and is defined using scope:
-
-```json
-    {
-        "custom_attributes": {
-            "code_type": "caffe.PriorBoxParameter.CENTER_SIZE",
-            "confidence_threshold": 0.01,
-            "keep_top_k": 300,
-            "nms_threshold": 0.6,
-            "pad_mode": "caffe.ResizeParameter.CONSTANT",
-            "resize_mode": "caffe.ResizeParameter.WARP",
-            "max_detections_per_class": 100,
-            "num_classes": 90
-        },
-        "id": "SecondStagePostprocessorReplacement",
-        "inputs": [
-            [
-                {
-                    "node": "Reshape$",
-                    "port": 0
-                }
-            ],
-            [
-                {
-                    "node": "Reshape_1$",
-                    "port": 0
-                }
-            ],
-            [
-                {
-                    "node": "ExpandDims$",
-                    "port": 0
-                }
-            ]
-        ],
-        "instances": [
-            ".*SecondStagePostprocessor/"
-        ],
-        "match_kind": "scope",
-        "outputs": [
-            {
-                "node": "BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3$",
-                "port": 0
-            }
-        ]
-    }
-```
-
-The replacement code is similar to the `SecondStagePostprocessor` replacement for the SSD topologies. There are two major differences:
-
-* The tensor with bounding boxes does not contain locations for class 0 (the background class), but the Inference Engine DetectionOutput layer requires them. A Const node with some dummy values is created and concatenated with the tensor.
-
-* The priors tensor is not constant as in SSDs, so the bounding boxes tensor must be scaled with the variances [0.1, 0.1, 0.2, 0.2].
-
-The differences described above are resolved with the following code:
-
-```python
-        # TF produces a locations tensor without boxes for the background.
-        # The Inference Engine DetectionOutput layer requires background boxes, so we generate them with some values
-        # and concatenate them with the locations tensor
-        fake_background_locs_blob = np.tile([[[1, 1, 2, 2]]], [max_detections_per_class, 1, 1])
-        fake_background_locs_const_op = Const(graph, dict(value=fake_background_locs_blob,
-                                                          shape=fake_background_locs_blob.shape))
-        fake_background_locs_const_node = fake_background_locs_const_op.create_node([])
-
-        reshape_loc_op = Reshape(graph, {'dim': np.array([max_detections_per_class, num_classes, 4])})
-        reshape_loc_node = reshape_loc_op.create_node([match.single_input_node(0)[0].in_node(0)],
-                                                      dict(name='Reshape_loc_'))
-
-        concat_loc_op = Concat(graph, {'axis': 1})
-        concat_loc_node = concat_loc_op.create_node([fake_background_locs_const_node, reshape_loc_node],
-                                                    dict(name='Concat_fake_loc_'))
-
-        # blob with variances
-        variances_blob = np.array([0.1, 0.1, 0.2, 0.2])
-        variances_const_op = Const(graph, dict(value=variances_blob, shape=variances_blob.shape))
-        variances_const_node = variances_const_op.create_node([])
-
-        # reshape the locations tensor to 2D so it can be passed to Eltwise, which will be converted to ScaleShift
-        reshape_loc_2d_op = Reshape(graph, {'dim': np.array([-1, 4])})
-        reshape_loc_2d_node = reshape_loc_2d_op.create_node([concat_loc_node], dict(name='reshape_locs_2d_'))
-
-        # element-wise multiply locations with variances
-        eltwise_locs_op = Eltwise(graph, {'operation': 'mul'})
-        eltwise_locs_node = eltwise_locs_op.create_node([reshape_loc_2d_node, variances_const_node],
-                                                        dict(name='scale_locs_'))
-```
-
-### Example of Model Optimizer Command-Line for TensorFlow's Faster R-CNNs
-The final command line to convert Faster R-CNNs from the TensorFlow\* Object Detection Zoo is the following:
-
-```sh
-./mo.py --input_model=<path_to_frozen.pb> --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config extensions/front/tf/legacy_faster_rcnn_support.json
-```
-
-Note that there are minor changes that should be made to the sub-graph replacement configuration file `/deployment_tools/model_optimizer/extensions/front/tf/legacy_faster_rcnn_support.json` before converting a particular Faster R-CNN topology. Refer to the table below.
-
-### Sub-Graph Replacement Configuration File Parameters to Convert Different Faster R-CNN Models
-|Model Name | Configuration File Changes|
-|:----|:----:|
-| faster_rcnn_inception_v2_coco | None |
-| faster_rcnn_resnet50_coco | None |
-| faster_rcnn_resnet50_lowproposals_coco | None |
-| faster_rcnn_resnet101_coco | None |
-| faster_rcnn_resnet101_lowproposals_coco | None |
-| faster_rcnn_inception_resnet_v2_atrous_coco | "feat_stride: 8" |
-| faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco | "feat_stride: 8" |
-
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_SSD_ObjectDetection_API.md b/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_SSD_ObjectDetection_API.md
deleted file mode 100644
index b43d5de15e21aa..00000000000000
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_SSD_ObjectDetection_API.md
+++ /dev/null
@@ -1,339 +0,0 @@
-# (Deprecated) Case Study: Converting SSD Models Created with TensorFlow* Object Detection API {#openvino_docs_MO_DG_prepare_model_customize_model_optimizer_TensorFlow_SSD_ObjectDetection_API}
-
-This is a deprecated page. Please consider reading [this](../convert_model/tf_specific/Convert_Object_Detection_API_Models.md) page, which describes the new approach to converting Object Detection API models that gives results closer to TensorFlow inference.
-
-## Converting Models Created with TensorFlow Object Detection API Versions prior to 1.6.0
-
-As explained in the [Sub-graph Replacement in Model Optimizer](Subgraph_Replacement_Model_Optimizer.md) section, there are multiple
-ways to set up the sub-graph matching. In this example, we are focusing on defining the sub-graph via a set of
-"start" and "end" nodes.
-The result of matching is two buckets of nodes:
-* Nodes "between" start and end nodes.
-* Nodes connected to the first list, but only on the constant path (that is, these nodes are not connected to the inputs of the entire graph).
-
-Let's look closer at the SSD models from the TensorFlow\* detection model
-zoo:
-[SSD MobileNet](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2017_11_17.tar.gz) and
-[SSD InceptionV2](http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz).
-
-A distinct layer of any SSD topology is the `DetectionOutput` layer. This layer is implemented with dozens of primitive operations in TensorFlow, while in Inference Engine, it is one [layer](../../../ops/opset.md). Thus, to convert an SSD model from TensorFlow, Model Optimizer should replace the entire sub-graph of operations that implement the `DetectionOutput` layer with a single well-known `DetectionOutput` node.
-
-The Inference Engine `DetectionOutput` layer consumes three tensors in the following order:
-
-1. Tensor with locations of bounding boxes
-2. Tensor with confidences for each bounding box
-3. Tensor with prior boxes (anchors in TensorFlow terminology)
-
-The `DetectionOutput` layer produces one tensor with seven numbers for each actual detection; the sketch below shows the conventional meaning of these numbers. There are more output tensors in the TensorFlow Object Detection API, but the values in them are consistent with the Inference Engine ones.
-
-The difference with [other examples](Subgraph_Replacement_Model_Optimizer.md) is that here the `DetectionOutput` sub-graph is replaced with a new sub-graph (not a single layer).
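-
-A hedged sketch of what one row of the `DetectionOutput` result holds, following the usual Inference Engine convention of seven numbers per detection (the concrete values below are made up):
-```python
-import numpy as np
-
-# the layer output has shape [1, 1, N, 7]; one hypothetical detection row:
-detection = np.array([0, 15, 0.87, 0.10, 0.20, 0.55, 0.75], dtype=np.float32)
-image_id, class_id, confidence = detection[0], detection[1], detection[2]
-x_min, y_min, x_max, y_max = detection[3:]  # box corners, normalized to [0, 1]
-```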
-
-Look at the sub-graph replacement configuration file `/deployment_tools/model_optimizer/extensions/front/tf/legacy_ssd_support.json` that is used to enable the two models listed above:
-```json
-[
-    {
-        "custom_attributes": {
-            "code_type": "caffe.PriorBoxParameter.CENTER_SIZE",
-            "confidence_threshold": 0.01,
-            "keep_top_k": 200,
-            "nms_threshold": 0.45,
-            "pad_mode": "caffe.ResizeParameter.CONSTANT",
-            "resize_mode": "caffe.ResizeParameter.WARP"
-        },
-        "id": "TFObjectDetectionAPIDetectionOutput",
-        "include_inputs_to_sub_graph": true,
-        "include_outputs_to_sub_graph": true,
-        "instances": {
-            "end_points": [
-                "detection_boxes",
-                "detection_scores",
-                "num_detections"
-            ],
-            "start_points": [
-                "Postprocessor/Shape",
-                "Postprocessor/Slice",
-                "Postprocessor/ExpandDims",
-                "Postprocessor/Reshape_1"
-            ]
-        },
-        "match_kind": "points"
-    },
-    {
-        "custom_attributes": {
-        },
-        "id": "PreprocessorReplacement",
-        "inputs": [
-            [
-                {
-                    "node": "map/Shape$",
-                    "port": 0
-                },
-                {
-                    "node": "map/TensorArrayUnstack/Shape$",
-                    "port": 0
-                },
-                {
-                    "node": "map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3$",
-                    "port": 2
-                }
-            ]
-        ],
-        "instances": [
-            ".*Preprocessor/"
-        ],
-        "match_kind": "scope",
-        "outputs": [
-            {
-                "node": "sub$",
-                "port": 0
-            },
-            {
-                "node": "map/TensorArrayStack_1/TensorArrayGatherV3$",
-                "port": 0
-            }
-        ]
-    }
-]
-```
-
-**Key lines**:
-
-* Lines 3-10 define static attributes that will be saved to the Intermediate Representation `.xml` file for the `DetectionOutput` layer.
-
-* Lines 12 and 13 define values for attributes that should always be set to "true" for this release of Model Optimizer. These two attributes are specific to sub-graph matching by points only.
-
-* Lines 14-26 define one instance of the sub-graph to be matched. This is an important difference between sub-graph matching by scope and by points. Several instances can be specified for matching by scope, but matching by points allows specifying just one instance. So the full node names (not regular expressions, as in the case of matching by scope) are specified in the `instances` dictionary.
-
-The second sub-graph replacer with the identifier `PreprocessorReplacement` is used to remove the `Preprocessor` block from the graph. The replacer removes all nodes from this scope except the nodes performing mean value subtraction and scaling (if applicable). The implementation of the replacer is in the `/deployment_tools/model_optimizer/extensions/front/tf/Preprocessor.py` file.
-
-Now let's analyze the structure of the topologies generated with the Object Detection API. There are several blocks in the graph performing a particular task:
-
-* `Preprocessor` block resizes, scales and subtracts mean values from the input image.
-
-* `FeatureExtractor` block is a [MobileNet](https://arxiv.org/abs/1704.04861) or another backbone to extract features.
-
-* `MultipleGridAnchorGenerator` block creates initial bounding box locations (anchors).
-
-* `Postprocessor` block acts as a `DetectionOutput` layer, so we need to replace the `Postprocessor` block with the `DetectionOutput` layer. It is necessary to add all input nodes of the `Postprocessor` scope to the `start_points` list. Consider the inputs of each of these nodes:
-
-    * `Postprocessor/Shape` consumes the tensor with locations.
-    * `Postprocessor/Slice` consumes the tensor with confidences.
-    * `Postprocessor/ExpandDims` consumes the tensor with prior boxes.
-    * `Postprocessor/Reshape_1` consumes the tensor with locations similarly to the `Postprocessor/Shape` node.
-Despite the fact that the last node `Postprocessor/Reshape_1` gets the same tensor as the node `Postprocessor/Shape`, it must be explicitly put into the list.
-
-The Object Detection API `Postprocessor` block generates the output nodes `detection_boxes`, `detection_scores`, `num_detections` and `detection_classes`.
-
-Now consider the implementation of the sub-graph replacer, available in the `/deployment_tools/model_optimizer/extensions/front/tf/SSDs.py` file. The file is rather big, so only some code snippets are shown:
-```python
-class PostprocessorReplacement(FrontReplacementFromConfigFileSubGraph):
-    replacement_id = 'TFObjectDetectionAPIDetectionOutput'
-```
-
-These lines define the new `PostprocessorReplacement` class inherited from `FrontReplacementFromConfigFileSubGraph`. `FrontReplacementFromConfigFileSubGraph` is designed to replace a sub-graph of operations described in the configuration file. There are several methods to override to implement the custom replacement logic that we need:
-
-* `generate_sub_graph` performs the new sub-graph generation and returns a dictionary where keys are alias names for the nodes and values are Node objects. The dictionary has the same format as the `match` parameter in the `replace_sub_graph` method in the example with the networkx sub-graph isomorphism pattern. This dictionary is passed as an argument to the next three methods, so it should contain entries for the nodes that those functions need.
-
-* `input_edges_match` specifies the mapping between the input edges of the sub-graph before replacement and after it. The key of the dictionary is a tuple specifying an input tensor of the sub-graph before replacement: the sub-graph input node name and the input port number for this node. The value for this key is also a tuple specifying the node where this tensor should be attached during replacement: the node name (or the alias name of the node) and the input port for this node. If the port number is zero, the parameter can be omitted, so the key or value is just a node name (alias). The default implementation of the method returns an empty dictionary, so Model Optimizer does not create new edges.
-
-* `output_edges_match` returns the mapping between the old output edges of the matched nodes and the new sub-graph node and output edge index. The format is similar to the dictionary returned by the `input_edges_match` method. The only difference is that instead of specifying input port numbers for the nodes, it is necessary to specify output port numbers. Of course, this mapping is needed for the output nodes only. The default implementation of the method returns an empty dictionary, so Model Optimizer does not create new edges.
-
-* `nodes_to_remove` specifies the list of nodes that Model Optimizer should remove after the sub-graph replacement. The default implementation of the method removes all sub-graph nodes.
-
-Let's review the replacer code, considering the details of the `DetectionOutput` layer implementation in the Inference Engine. There are several constraints on the input tensors of the `DetectionOutput` layer:
-
-* The tensor with locations must be of shape `[#batch, #prior_boxes * 4]` or `[#batch, #prior_boxes * 5]`, depending on whether locations are shared between different batches or not.
-* The tensor with confidences must be of shape `[#batch, #prior_boxes * #classes]`, and the confidence values must be in the [0, 1] range, that is, already passed through a softmax layer.
-* The tensor with prior boxes must be of shape `[#batch, 2, #prior_boxes * 4]`.
-Inference Engine expects it to contain variance values, which the TensorFlow Object Detection API does not add.
-
-To enable these models, add `Reshape` operations for the locations and confidences tensors and update the values of the prior boxes to include the variance constants (they are absent in the TensorFlow Object Detection API).
-
-Look at the `generate_sub_graph` method:
-```python
-def generate_sub_graph(self, graph: nx.MultiDiGraph, match: SubgraphMatch):
-    log.debug('PostprocessorReplacement.generate_sub_graph')
-    log.debug('matched_nodes = {}'.format(match.matched_nodes_names()))
-    # softmax to be applied to the confidence
-    softmax_conf_op = Softmax(graph, {'axis': 2, 'nchw_layout': True})
-    softmax_conf_node = softmax_conf_op.add_node(dict(name='DetectionOutput_SoftMax_conf_'))
-    # Inference Engine DetectionOutput layer consumes flattened tensors
-    # reshape operation to flatten the locations tensor
-    reshape_loc_op = Reshape(graph, {'dim': np.array([0, -1])})
-    reshape_loc_node = reshape_loc_op.add_node(dict(name='DetectionOutput_Reshape_loc_'))
-    # Inference Engine DetectionOutput layer consumes flattened tensors
-    # reshape operation to flatten the confidence tensor
-    reshape_conf_op = Reshape(graph, {'dim': np.array([0, -1])})
-    reshape_conf_node = reshape_conf_op.add_node(dict(name='DetectionOutput_Reshape_conf_'))
-    # create Node object from Op class
-    detection_output_op = DetectionOutput(graph, match.custom_replacement_desc.custom_attributes)
-    detection_output_op.attrs['old_infer'] = detection_output_op.attrs['infer']
-    detection_output_op.attrs['infer'] = __class__.do_infer
-    detection_output_node = detection_output_op.add_node(dict(name=detection_output_op.attrs['type'] + '_'))
-    # create internal edges of the sub-graph. In this case we add edges to connect input ports 0 and 1 of the
-    # detection output with the output of the reshape of locations and the reshape of confidence
-    create_edge(softmax_conf_node, reshape_conf_node, 0, 0)
-    create_edge(reshape_loc_node, detection_output_node, 0, 0)
-    create_edge(reshape_conf_node, detection_output_node, 0, 1)
-    return {'detection_output_node': detection_output_node, 'reshape_conf_node': softmax_conf_node,
-            'reshape_loc_node': reshape_loc_node}
-```
-The method has two inputs: the graph to operate on and an instance of the `SubgraphMatch` object, which describes the matched sub-graph. The latter class has several useful methods to get a particular input/output node of the sub-graph by input/output index or by node name pattern. Examples of the usage of these methods are given below.
-
-**Key lines**:
-
-* Lines 6 and 7 create a new instance of an operation of type `Softmax` and the graph Node object corresponding to that operation.
-
-* Lines 11-12 and 16-17 create new instances of operations of type `Reshape` to reshape the locations and confidences tensors respectively.
-
-* Lines 20-23 create a new instance of the operation `DetectionOutput` and the graph Node object corresponding to that operation.
-
-* Lines 27-29 connect the softmax node with the reshape node and connect the two reshaped locations and confidences tensors with the `DetectionOutput` node.
-
-* Lines 30-31 define a dictionary with aliases for the detection output node and the reshape locations and confidences nodes. These aliases are used in the `input_edges_match` and `output_edges_match` methods; the wiring they refer to is summarized in the sketch below.
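-
-To make the edges created above easier to follow, the wiring can be summarized as follows (a comment-only sketch; the priors connection is created later):
-```python
-# wiring created by generate_sub_graph via create_edge(src, dst, src_port, dst_port):
-#
-#   softmax_conf_node --(out 0 -> in 0)--> reshape_conf_node
-#   reshape_loc_node  --(out 0 -> in 0)--> detection_output_node  (input port 0)
-#   reshape_conf_node --(out 0 -> in 1)--> detection_output_node  (input port 1)
-#
-# the prior boxes tensor is attached to input port 2 of the
-# detection_output_node by the input_edges_match method shown next
-```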
-
-The `input_edges_match` method is the following:
-```python
-def input_edges_match(self, graph: nx.DiGraph, match: SubgraphMatch, new_sub_graph: dict):
-    locs_consumer_node, locs_consumer_node_port = match.input_nodes(0)[0]
-    conf_consumer_node, conf_consumer_node_port = match.input_nodes(1)[0]
-    priors_consumer_node, priors_consumer_node_port = match.input_nodes(2)[0]
-    # create matching nodes for locations and confidence tensors using the simple scheme "old_node_name: new_node_name",
-    # which in fact means "(old_node_name, 0): (new_node_name, 0)", where the first '0' means the old port and the second
-    # zero defines the 'new_port'.
-    return {locs_consumer_node.id: new_sub_graph['reshape_loc_node'].id,
-            conf_consumer_node.id: new_sub_graph['reshape_conf_node'].id,
-            priors_consumer_node.id: (new_sub_graph['detection_output_node'].id, 2),
-            }
-```
-The method has three parameters: the input `graph`, the `match` object describing the matched sub-graph, and the `new_sub_graph` dictionary with the alias names returned from the `generate_sub_graph` method.
-
-**Key lines**:
-
-* Lines 2-4 initialize the Node objects and input ports for the nodes where the input tensors of the sub-graph are consumed. The method `match.input_nodes(ind)` returns a list of tuples where the first element is a Node object and the second is the input port of this node which consumes the ind-th input tensor of the sub-graph. The `input_points` list in the configuration file defines the order of input tensors to the sub-graph. For example, the `locs_consumer_node` object of type Node is a node that consumes the tensor with locations at the port with the number `locs_consumer_node_port`.
-
-* Lines 8-11 define a dictionary with the mapping of tensors as described above. Note that the attribute `id` of the Node object contains the name of the node in the graph.
-
-The `output_edges_match` method is the following:
-```python
-def output_edges_match(self, graph: nx.DiGraph, match: SubgraphMatch, new_sub_graph: dict):
-    # the DetectionOutput in IE produces a single tensor, but in TF it produces two tensors, so we need to create only
-    # one output edge match
-    return {match.output_node(0)[0].id: new_sub_graph['detection_output_node'].id}
-```
-
-The method has the same three parameters as the `input_edges_match` method. The returned dictionary contains a mapping just for one tensor initially produced by the first output node of the sub-graph (which is `detection_boxes` according to the configuration file) to a single output tensor of the created `DetectionOutput` node. In fact, it is possible to use any output node of the initial sub-graph in the mapping, because the sub-graph output nodes are the output nodes of the whole graph (their output is not consumed by any other nodes).
-
-Now Model Optimizer knows how to replace the sub-graph. The last step to enable the model is to cut off some parts of the graph that are not needed during inference.
-
-It is necessary to remove the `Preprocessor` block where the image is resized. Inference Engine does not support dynamic input shapes, so Model Optimizer must freeze the input image size, and thus resizing of the image is not necessary. This is achieved by the replacer `/deployment_tools/model_optimizer/extensions/front/tf/Preprocessor.py`, which is executed automatically.
-
-There are several `Switch` operations in the `Postprocessor` block without output edges.
-For example:
-```sh
-Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_t
-Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond/cond/switch_f
-Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/cond/switch_t
-Postprocessor/BatchMultiClassNonMaxSuppression/map/while/PadOrClipBoxList/cond_1/cond/switch_f
-```
-
-Model Optimizer marks these nodes as output nodes of the topology. Because of that, some parts of the `Postprocessor` block are not removed during the sub-graph replacement. In order to fix this issue, it is necessary to specify the output nodes of the graph manually using the `--output` command line parameter.
-
-### Example Model Optimizer Command-Line for TensorFlow\* SSD
-
-The final command line to convert SSDs from the TensorFlow Object Detection API Zoo is:
-```sh
-./mo_tf.py --input_model=<path_to_frozen.pb> --tensorflow_use_custom_operations_config extensions/front/tf/legacy_ssd_support.json --output="detection_boxes,detection_scores,num_detections"
-```
-
-## Converting MobileNet V2 model created with TensorFlow Object Detection API
-The [MobileNet V2 model](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz) differs from the previous version, so converting the model requires a new sub-graph replacement configuration file and new command line parameters. The major differences are:
-
-* The `Preprocessor` block has two outputs: the pre-processed image and the pre-processed image size.
-* The `Postprocessor` block has one more input (in comparison with models created with TensorFlow Object Detection API
-version 1.6 or lower): the pre-processed image size.
-* Some node names have been changed in the `Postprocessor` block.
-
-The updated sub-graph replacement configuration file `extensions/front/tf/ssd_v2_support.json` reflecting these changes
-is the following:
-
-```json
-[
-    {
-        "custom_attributes": {
-            "code_type": "caffe.PriorBoxParameter.CENTER_SIZE",
-            "confidence_threshold": 0.01,
-            "keep_top_k": 200,
-            "nms_threshold": 0.6,
-            "pad_mode": "caffe.ResizeParameter.CONSTANT",
-            "resize_mode": "caffe.ResizeParameter.WARP"
-        },
-        "id": "TFObjectDetectionAPIDetectionOutput",
-        "include_inputs_to_sub_graph": true,
-        "include_outputs_to_sub_graph": true,
-        "instances": {
-            "end_points": [
-                "detection_boxes",
-                "detection_scores",
-                "num_detections"
-            ],
-            "start_points": [
-                "Postprocessor/Shape",
-                "Postprocessor/scale_logits",
-                "Postprocessor/ExpandDims",
-                "Postprocessor/Reshape_1",
-                "Postprocessor/ToFloat"
-            ]
-        },
-        "match_kind": "points"
-    },
-    {
-        "custom_attributes": {
-        },
-        "id": "PreprocessorReplacement",
-        "inputs": [
-            [
-                {
-                    "node": "map/Shape$",
-                    "port": 0
-                },
-                {
-                    "node": "map/TensorArrayUnstack/Shape$",
-                    "port": 0
-                },
-                {
-                    "node": "map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3$",
-                    "port": 2
-                }
-            ]
-        ],
-        "instances": [
-            ".*Preprocessor/"
-        ],
-        "match_kind": "scope",
-        "outputs": [
-            {
-                "node": "sub$",
-                "port": 0
-            },
-            {
-                "node": "map/TensorArrayStack_1/TensorArrayGatherV3$",
-                "port": 0
-            }
-        ]
-    }
-]
-```
-
-### Example of Model Optimizer Command-Line for TensorFlow SSD MobileNet V2
-The final command line to convert MobileNet SSD V2 from the TensorFlow Object Detection Zoo is the following:
-
-```sh
-./mo_tf.py --input_model=<path_to_frozen.pb> --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --output="detection_boxes,detection_scores,num_detections"
-```
diff --git a/docs/doxygen/assets/bootstrap.bundle.min.js b/docs/doxygen/assets/bootstrap.bundle.min.js
new file mode 100644
index 00000000000000..6952361b1b2a0b
--- /dev/null
+++ b/docs/doxygen/assets/bootstrap.bundle.min.js
@@ -0,0 +1,8 @@
+/*!
+ * Bootstrap v4.4.1 (https://getbootstrap.com/)
+ * Copyright 2011-2019 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
+ * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
+ */
+[minified Bootstrap v4.4.1 JavaScript bundle, followed by the accompanying minified Bootstrap stylesheet added in the same patch: vendored, pre-built documentation-theme assets omitted]
0%;min-width:0;margin-bottom:0}.input-group>.custom-file+.custom-file,.input-group>.custom-file+.custom-select,.input-group>.custom-file+.form-control,.input-group>.custom-select+.custom-file,.input-group>.custom-select+.custom-select,.input-group>.custom-select+.form-control,.input-group>.form-control+.custom-file,.input-group>.form-control+.custom-select,.input-group>.form-control+.form-control,.input-group>.form-control-plaintext+.custom-file,.input-group>.form-control-plaintext+.custom-select,.input-group>.form-control-plaintext+.form-control{margin-left:-1px}.input-group>.custom-file .custom-file-input:focus~.custom-file-label,.input-group>.custom-select:focus,.input-group>.form-control:focus{z-index:3}.input-group>.custom-file .custom-file-input:focus{z-index:4}.input-group>.custom-select:not(:last-child),.input-group>.form-control:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-select:not(:first-child),.input-group>.form-control:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.input-group>.custom-file{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center}.input-group>.custom-file:not(:last-child) .custom-file-label,.input-group>.custom-file:not(:last-child) .custom-file-label::after{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-file:not(:first-child) .custom-file-label{border-top-left-radius:0;border-bottom-left-radius:0}.input-group-append,.input-group-prepend{display:-ms-flexbox;display:flex}.input-group-append .btn,.input-group-prepend .btn{position:relative;z-index:2}.input-group-append .btn:focus,.input-group-prepend .btn:focus{z-index:3}.input-group-append .btn+.btn,.input-group-append .btn+.input-group-text,.input-group-append .input-group-text+.btn,.input-group-append .input-group-text+.input-group-text,.input-group-prepend .btn+.btn,.input-group-prepend .btn+.input-group-text,.input-group-prepend .input-group-text+.btn,.input-group-prepend .input-group-text+.input-group-text{margin-left:-1px}.input-group-prepend{margin-right:-1px}.input-group-append{margin-left:-1px}.input-group-text{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.375rem .75rem;margin-bottom:0;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da;border-radius:.25rem}.input-group-text input[type=checkbox],.input-group-text input[type=radio]{margin-top:0}.input-group-lg>.custom-select,.input-group-lg>.form-control:not(textarea){height:calc(1.5em + 1rem + 2px)}.input-group-lg>.custom-select,.input-group-lg>.form-control,.input-group-lg>.input-group-append>.btn,.input-group-lg>.input-group-append>.input-group-text,.input-group-lg>.input-group-prepend>.btn,.input-group-lg>.input-group-prepend>.input-group-text{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.input-group-sm>.custom-select,.input-group-sm>.form-control:not(textarea){height:calc(1.5em + .5rem + 2px)}.input-group-sm>.custom-select,.input-group-sm>.form-control,.input-group-sm>.input-group-append>.btn,.input-group-sm>.input-group-append>.input-group-text,.input-group-sm>.input-group-prepend>.btn,.input-group-sm>.input-group-prepend>.input-group-text{padding:.25rem 
.5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.input-group-lg>.custom-select,.input-group-sm>.custom-select{padding-right:1.75rem}.input-group>.input-group-append:last-child>.btn:not(:last-child):not(.dropdown-toggle),.input-group>.input-group-append:last-child>.input-group-text:not(:last-child),.input-group>.input-group-append:not(:last-child)>.btn,.input-group>.input-group-append:not(:last-child)>.input-group-text,.input-group>.input-group-prepend>.btn,.input-group>.input-group-prepend>.input-group-text{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.input-group-append>.btn,.input-group>.input-group-append>.input-group-text,.input-group>.input-group-prepend:first-child>.btn:not(:first-child),.input-group>.input-group-prepend:first-child>.input-group-text:not(:first-child),.input-group>.input-group-prepend:not(:first-child)>.btn,.input-group>.input-group-prepend:not(:first-child)>.input-group-text{border-top-left-radius:0;border-bottom-left-radius:0}.custom-control{position:relative;display:block;min-height:1.5rem;padding-left:1.5rem}.custom-control-inline{display:-ms-inline-flexbox;display:inline-flex;margin-right:1rem}.custom-control-input{position:absolute;left:0;z-index:-1;width:1rem;height:1.25rem;opacity:0}.custom-control-input:checked~.custom-control-label::before{color:#fff;border-color:#007bff;background-color:#007bff}.custom-control-input:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-control-input:focus:not(:checked)~.custom-control-label::before{border-color:#80bdff}.custom-control-input:not(:disabled):active~.custom-control-label::before{color:#fff;background-color:#b3d7ff;border-color:#b3d7ff}.custom-control-input:disabled~.custom-control-label,.custom-control-input[disabled]~.custom-control-label{color:#6c757d}.custom-control-input:disabled~.custom-control-label::before,.custom-control-input[disabled]~.custom-control-label::before{background-color:#e9ecef}.custom-control-label{position:relative;margin-bottom:0;vertical-align:top}.custom-control-label::before{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;pointer-events:none;content:"";background-color:#fff;border:#adb5bd solid 1px}.custom-control-label::after{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;content:"";background:no-repeat 50%/50% 50%}.custom-checkbox .custom-control-label::before{border-radius:.25rem}.custom-checkbox .custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%23fff' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z'/%3e%3c/svg%3e")}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::before{border-color:#007bff;background-color:#007bff}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='4' viewBox='0 0 4 4'%3e%3cpath stroke='%23fff' d='M0 2h4'/%3e%3c/svg%3e")}.custom-checkbox .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-checkbox .custom-control-input:disabled:indeterminate~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-radio .custom-control-label::before{border-radius:50%}.custom-radio 
.custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.custom-radio .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-switch{padding-left:2.25rem}.custom-switch .custom-control-label::before{left:-2.25rem;width:1.75rem;pointer-events:all;border-radius:.5rem}.custom-switch .custom-control-label::after{top:calc(.25rem + 2px);left:calc(-2.25rem + 2px);width:calc(1rem - 4px);height:calc(1rem - 4px);background-color:#adb5bd;border-radius:.5rem;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-switch .custom-control-label::after{transition:none}}.custom-switch .custom-control-input:checked~.custom-control-label::after{background-color:#fff;-webkit-transform:translateX(.75rem);transform:translateX(.75rem)}.custom-switch .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-select{display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem 1.75rem .375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;vertical-align:middle;background:#fff url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px;border:1px solid #ced4da;border-radius:.25rem;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-select:focus{border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-select:focus::-ms-value{color:#495057;background-color:#fff}.custom-select[multiple],.custom-select[size]:not([size="1"]){height:auto;padding-right:.75rem;background-image:none}.custom-select:disabled{color:#6c757d;background-color:#e9ecef}.custom-select::-ms-expand{display:none}.custom-select:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.custom-select-sm{height:calc(1.5em + .5rem + 2px);padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:.875rem}.custom-select-lg{height:calc(1.5em + 1rem + 2px);padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}.custom-file{position:relative;display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);margin-bottom:0}.custom-file-input{position:relative;z-index:2;width:100%;height:calc(1.5em + .75rem + 2px);margin:0;opacity:0}.custom-file-input:focus~.custom-file-label{border-color:#80bdff;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-file-input:disabled~.custom-file-label,.custom-file-input[disabled]~.custom-file-label{background-color:#e9ecef}.custom-file-input:lang(en)~.custom-file-label::after{content:"Browse"}.custom-file-input~.custom-file-label[data-browse]::after{content:attr(data-browse)}.custom-file-label{position:absolute;top:0;right:0;left:0;z-index:1;height:calc(1.5em + .75rem + 2px);padding:.375rem 
.75rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;border:1px solid #ced4da;border-radius:.25rem}.custom-file-label::after{position:absolute;top:0;right:0;bottom:0;z-index:3;display:block;height:calc(1.5em + .75rem);padding:.375rem .75rem;line-height:1.5;color:#495057;content:"Browse";background-color:#e9ecef;border-left:inherit;border-radius:0 .25rem .25rem 0}.custom-range{width:100%;height:1.4rem;padding:0;background-color:transparent;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-range:focus{outline:0}.custom-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-ms-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range::-moz-focus-outer{border:0}.custom-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-.25rem;background-color:#007bff;border:0;border-radius:1rem;-webkit-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-webkit-slider-thumb{-webkit-transition:none;transition:none}}.custom-range::-webkit-slider-thumb:active{background-color:#b3d7ff}.custom-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#007bff;border:0;border-radius:1rem;-moz-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-moz-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-moz-range-thumb{-moz-transition:none;transition:none}}.custom-range::-moz-range-thumb:active{background-color:#b3d7ff}.custom-range::-moz-range-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-ms-thumb{width:1rem;height:1rem;margin-top:0;margin-right:.2rem;margin-left:.2rem;background-color:#007bff;border:0;border-radius:1rem;-ms-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none}@media 
(prefers-reduced-motion:reduce){.custom-range::-ms-thumb{-ms-transition:none;transition:none}}.custom-range::-ms-thumb:active{background-color:#b3d7ff}.custom-range::-ms-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:transparent;border-color:transparent;border-width:.5rem}.custom-range::-ms-fill-lower{background-color:#dee2e6;border-radius:1rem}.custom-range::-ms-fill-upper{margin-right:15px;background-color:#dee2e6;border-radius:1rem}.custom-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.custom-range:disabled::-webkit-slider-runnable-track{cursor:default}.custom-range:disabled::-moz-range-thumb{background-color:#adb5bd}.custom-range:disabled::-moz-range-track{cursor:default}.custom-range:disabled::-ms-thumb{background-color:#adb5bd}.custom-control-label::before,.custom-file-label,.custom-select{transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-control-label::before,.custom-file-label,.custom-select{transition:none}}.nav{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding-left:0;margin-bottom:0;list-style:none}.nav-link{display:block;padding:.5rem 1rem}.nav-link:focus,.nav-link:hover{text-decoration:none}.nav-link.disabled{color:#6c757d;pointer-events:none;cursor:default}.nav-tabs{border-bottom:1px solid #dee2e6}.nav-tabs .nav-item{margin-bottom:-1px}.nav-tabs .nav-link{border:1px solid transparent;border-top-left-radius:.25rem;border-top-right-radius:.25rem}.nav-tabs .nav-link:focus,.nav-tabs .nav-link:hover{border-color:#e9ecef #e9ecef #dee2e6}.nav-tabs .nav-link.disabled{color:#6c757d;background-color:transparent;border-color:transparent}.nav-tabs .nav-item.show .nav-link,.nav-tabs .nav-link.active{color:#495057;background-color:#fff;border-color:#dee2e6 #dee2e6 #fff}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-left-radius:0;border-top-right-radius:0}.nav-pills .nav-link{border-radius:.25rem}.nav-pills .nav-link.active,.nav-pills .show>.nav-link{color:#fff;background-color:#007bff}.nav-fill .nav-item{-ms-flex:1 1 auto;flex:1 1 auto;text-align:center}.nav-justified .nav-item{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;text-align:center}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.navbar{position:relative;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between;padding:.5rem 1rem}.navbar .container,.navbar .container-fluid,.navbar .container-lg,.navbar .container-md,.navbar .container-sm,.navbar .container-xl{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between}.navbar-brand{display:inline-block;padding-top:.3125rem;padding-bottom:.3125rem;margin-right:1rem;font-size:1.25rem;line-height:inherit;white-space:nowrap}.navbar-brand:focus,.navbar-brand:hover{text-decoration:none}.navbar-nav{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0;list-style:none}.navbar-nav .nav-link{padding-right:0;padding-left:0}.navbar-nav .dropdown-menu{position:static;float:none}.navbar-text{display:inline-block;padding-top:.5rem;padding-bottom:.5rem}.navbar-collapse{-ms-flex-preferred-size:100%;flex-basis:100%;-ms-flex-positive:1;flex-grow:1;-ms-flex-align:center;align-items:center}.navbar-toggler{padding:.25rem 
.75rem;font-size:1.25rem;line-height:1;background-color:transparent;border:1px solid transparent;border-radius:.25rem}.navbar-toggler:focus,.navbar-toggler:hover{text-decoration:none}.navbar-toggler-icon{display:inline-block;width:1.5em;height:1.5em;vertical-align:middle;content:"";background:no-repeat center center;background-size:100% 100%}@media (max-width:575.98px){.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{padding-right:0;padding-left:0}}@media (min-width:576px){.navbar-expand-sm{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-sm .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-sm .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-sm .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-sm .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-sm .navbar-toggler{display:none}}@media (max-width:767.98px){.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{padding-right:0;padding-left:0}}@media (min-width:768px){.navbar-expand-md{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-md .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-md .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-md .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-md .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-md .navbar-toggler{display:none}}@media (max-width:991.98px){.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{padding-right:0;padding-left:0}}@media (min-width:992px){.navbar-expand-lg{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-lg .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-lg .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-lg .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-lg .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-lg .navbar-toggler{display:none}}@media 
(max-width:1199.98px){.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{padding-right:0;padding-left:0}}@media (min-width:1200px){.navbar-expand-xl{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-xl .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-xl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xl .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-xl .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-xl .navbar-toggler{display:none}}.navbar-expand{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{padding-right:0;padding-left:0}.navbar-expand .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand .navbar-nav .dropdown-menu{position:absolute}.navbar-expand .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand .navbar-toggler{display:none}.navbar-light .navbar-brand{color:rgba(0,0,0,.9)}.navbar-light .navbar-brand:focus,.navbar-light .navbar-brand:hover{color:rgba(0,0,0,.9)}.navbar-light .navbar-nav .nav-link{color:rgba(0,0,0,.5)}.navbar-light .navbar-nav .nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .active>.nav-link,.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .nav-link.show,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.5);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba(0, 0, 0, 0.5)' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-light .navbar-text{color:rgba(0,0,0,.5)}.navbar-light .navbar-text a{color:rgba(0,0,0,.9)}.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand{color:#fff}.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:rgba(255,255,255,.5)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:rgba(255,255,255,.75)}.navbar-dark .navbar-nav .nav-link.disabled{color:rgba(255,255,255,.25)}.navbar-dark .navbar-nav .active>.nav-link,.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .nav-link.show,.navbar-dark .navbar-nav 
.show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:rgba(255,255,255,.5);border-color:rgba(255,255,255,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba(255, 255, 255, 0.5)' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-dark .navbar-text{color:rgba(255,255,255,.5)}.navbar-dark .navbar-text a{color:#fff}.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-right:0;margin-left:0}.card>.list-group:first-child .list-group-item:first-child{border-top-left-radius:.25rem;border-top-right-radius:.25rem}.card>.list-group:last-child .list-group-item:last-child{border-bottom-right-radius:.25rem;border-bottom-left-radius:.25rem}.card-body{-ms-flex:1 1 auto;flex:1 1 auto;min-height:1px;padding:1.25rem}.card-title{margin-bottom:.75rem}.card-subtitle{margin-top:-.375rem;margin-bottom:0}.card-text:last-child{margin-bottom:0}.card-link:hover{text-decoration:none}.card-link+.card-link{margin-left:1.25rem}.card-header{padding:.75rem 1.25rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-header+.list-group .list-group-item:first-child{border-top:0}.card-footer{padding:.75rem 1.25rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 1px)}.card-header-tabs{margin-right:-.625rem;margin-bottom:-.75rem;margin-left:-.625rem;border-bottom:0}.card-header-pills{margin-right:-.625rem;margin-left:-.625rem}.card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1.25rem}.card-img,.card-img-bottom,.card-img-top{-ms-flex-negative:0;flex-shrink:0;width:100%}.card-img,.card-img-top{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-deck .card{margin-bottom:15px}@media (min-width:576px){.card-deck{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;margin-right:-15px;margin-left:-15px}.card-deck .card{-ms-flex:1 0 0%;flex:1 0 0%;margin-right:15px;margin-bottom:0;margin-left:15px}}.card-group>.card{margin-bottom:15px}@media (min-width:576px){.card-group{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap}.card-group>.card{-ms-flex:1 0 0%;flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-left:0;border-left:0}.card-group>.card:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-right-radius:0}.card-group>.card:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:first-child) .card-header,.card-group>.card:not(:first-child) .card-img-top{border-top-left-radius:0}.card-group>.card:not(:first-child) 
.card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-left-radius:0}}.card-columns .card{margin-bottom:.75rem}@media (min-width:576px){.card-columns{-webkit-column-count:3;-moz-column-count:3;column-count:3;-webkit-column-gap:1.25rem;-moz-column-gap:1.25rem;column-gap:1.25rem;orphans:1;widows:1}.card-columns .card{display:inline-block;width:100%}}.accordion>.card{overflow:hidden}.accordion>.card:not(:last-of-type){border-bottom:0;border-bottom-right-radius:0;border-bottom-left-radius:0}.accordion>.card:not(:first-of-type){border-top-left-radius:0;border-top-right-radius:0}.accordion>.card>.card-header{border-radius:0;margin-bottom:-1px}.breadcrumb{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding:.75rem 1rem;margin-bottom:1rem;list-style:none;background-color:#e9ecef;border-radius:.25rem}.breadcrumb-item+.breadcrumb-item{padding-left:.5rem}.breadcrumb-item+.breadcrumb-item::before{display:inline-block;padding-right:.5rem;color:#6c757d;content:"/"}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:underline}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:none}.breadcrumb-item.active{color:#6c757d}.pagination{display:-ms-flexbox;display:flex;padding-left:0;list-style:none;border-radius:.25rem}.page-link{position:relative;display:block;padding:.5rem .75rem;margin-left:-1px;line-height:1.25;color:#007bff;background-color:#fff;border:1px solid #dee2e6}.page-link:hover{z-index:2;color:#0056b3;text-decoration:none;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.page-item:first-child .page-link{margin-left:0;border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.page-item:last-child .page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.page-item.active .page-link{z-index:3;color:#fff;background-color:#007bff;border-color:#007bff}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;cursor:auto;background-color:#fff;border-color:#dee2e6}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem;line-height:1.5}.pagination-lg .page-item:first-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem;line-height:1.5}.pagination-sm .page-item:first-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.badge{display:inline-block;padding:.25em .4em;font-size:75%;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.badge{transition:none}}a.badge:focus,a.badge:hover{text-decoration:none}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.badge-pill{padding-right:.6em;padding-left:.6em;border-radius:10rem}.badge-primary{color:#fff;background-color:#007bff}a.badge-primary:focus,a.badge-primary:hover{color:#fff;background-color:#0062cc}a.badge-primary.focus,a.badge-primary:focus{outline:0;box-shadow:0 0 0 .2rem 
rgba(0,123,255,.5)}.badge-secondary{color:#fff;background-color:#6c757d}a.badge-secondary:focus,a.badge-secondary:hover{color:#fff;background-color:#545b62}a.badge-secondary.focus,a.badge-secondary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.badge-success{color:#fff;background-color:#28a745}a.badge-success:focus,a.badge-success:hover{color:#fff;background-color:#1e7e34}a.badge-success.focus,a.badge-success:focus{outline:0;box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.badge-info{color:#fff;background-color:#17a2b8}a.badge-info:focus,a.badge-info:hover{color:#fff;background-color:#117a8b}a.badge-info.focus,a.badge-info:focus{outline:0;box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.badge-warning{color:#212529;background-color:#ffc107}a.badge-warning:focus,a.badge-warning:hover{color:#212529;background-color:#d39e00}a.badge-warning.focus,a.badge-warning:focus{outline:0;box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.badge-danger{color:#fff;background-color:#dc3545}a.badge-danger:focus,a.badge-danger:hover{color:#fff;background-color:#bd2130}a.badge-danger.focus,a.badge-danger:focus{outline:0;box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.badge-light{color:#212529;background-color:#f8f9fa}a.badge-light:focus,a.badge-light:hover{color:#212529;background-color:#dae0e5}a.badge-light.focus,a.badge-light:focus{outline:0;box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.badge-dark{color:#fff;background-color:#343a40}a.badge-dark:focus,a.badge-dark:hover{color:#fff;background-color:#1d2124}a.badge-dark.focus,a.badge-dark:focus{outline:0;box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.jumbotron{padding:2rem 1rem;margin-bottom:2rem;background-color:#e9ecef;border-radius:.3rem}@media (min-width:576px){.jumbotron{padding:4rem 2rem}}.jumbotron-fluid{padding-right:0;padding-left:0;border-radius:0}.alert{position:relative;padding:.75rem 1.25rem;margin-bottom:1rem;border:1px solid transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-right:4rem}.alert-dismissible .close{position:absolute;top:0;right:0;padding:.75rem 1.25rem;color:inherit}.alert-primary{color:#004085;background-color:#cce5ff;border-color:#b8daff}.alert-primary hr{border-top-color:#9fcdff}.alert-primary .alert-link{color:#002752}.alert-secondary{color:#383d41;background-color:#e2e3e5;border-color:#d6d8db}.alert-secondary hr{border-top-color:#c8cbcf}.alert-secondary .alert-link{color:#202326}.alert-success{color:#155724;background-color:#d4edda;border-color:#c3e6cb}.alert-success hr{border-top-color:#b1dfbb}.alert-success .alert-link{color:#0b2e13}.alert-info{color:#0c5460;background-color:#d1ecf1;border-color:#bee5eb}.alert-info hr{border-top-color:#abdde5}.alert-info .alert-link{color:#062c33}.alert-warning{color:#856404;background-color:#fff3cd;border-color:#ffeeba}.alert-warning hr{border-top-color:#ffe8a1}.alert-warning .alert-link{color:#533f03}.alert-danger{color:#721c24;background-color:#f8d7da;border-color:#f5c6cb}.alert-danger hr{border-top-color:#f1b0b7}.alert-danger .alert-link{color:#491217}.alert-light{color:#818182;background-color:#fefefe;border-color:#fdfdfe}.alert-light hr{border-top-color:#ececf6}.alert-light .alert-link{color:#686868}.alert-dark{color:#1b1e21;background-color:#d6d8d9;border-color:#c6c8ca}.alert-dark hr{border-top-color:#b9bbbe}.alert-dark .alert-link{color:#040505}@-webkit-keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}@keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 
0}}.progress{display:-ms-flexbox;display:flex;height:1rem;overflow:hidden;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress-bar{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;overflow:hidden;color:#fff;text-align:center;white-space:nowrap;background-color:#007bff;transition:width .6s ease}@media (prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(45deg,rgba(255,255,255,.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,.15) 50%,rgba(255,255,255,.15) 75%,transparent 75%,transparent);background-size:1rem 1rem}.progress-bar-animated{-webkit-animation:progress-bar-stripes 1s linear infinite;animation:progress-bar-stripes 1s linear infinite}@media (prefers-reduced-motion:reduce){.progress-bar-animated{-webkit-animation:none;animation:none}}.media{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start}.media-body{-ms-flex:1;flex:1}.list-group{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.75rem 1.25rem;background-color:#fff;border:1px solid rgba(0,0,0,.125)}.list-group-item:first-child{border-top-left-radius:.25rem;border-top-right-radius:.25rem}.list-group-item:last-child{border-bottom-right-radius:.25rem;border-bottom-left-radius:.25rem}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#007bff;border-color:#007bff}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal .list-group-item.active{margin-top:0}.list-group-horizontal .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}@media (min-width:576px){.list-group-horizontal-sm{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-sm .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-sm .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-sm .list-group-item.active{margin-top:0}.list-group-horizontal-sm .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-sm .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:768px){.list-group-horizontal-md{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-md .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-md .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-md 
.list-group-item.active{margin-top:0}.list-group-horizontal-md .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-md .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:992px){.list-group-horizontal-lg{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-lg .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-lg .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-lg .list-group-item.active{margin-top:0}.list-group-horizontal-lg .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-lg .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-xl .list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xl .list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xl .list-group-item.active{margin-top:0}.list-group-horizontal-xl .list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xl .list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}.list-group-flush .list-group-item{border-right-width:0;border-left-width:0;border-radius:0}.list-group-flush .list-group-item:first-child{border-top-width:0}.list-group-flush:last-child .list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#004085;background-color:#b8daff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#004085;background-color:#9fcdff}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#004085;border-color:#004085}.list-group-item-secondary{color:#383d41;background-color:#d6d8db}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#383d41;background-color:#c8cbcf}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#383d41;border-color:#383d41}.list-group-item-success{color:#155724;background-color:#c3e6cb}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#155724;background-color:#b1dfbb}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#155724;border-color:#155724}.list-group-item-info{color:#0c5460;background-color:#bee5eb}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#0c5460;background-color:#abdde5}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#0c5460;border-color:#0c5460}.list-group-item-warning{color:#856404;background-color:#ffeeba}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#856404;background-color:#ffe8a1}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#856404;border-color:#856404}.list-group-item-danger{color:#721c24;background-color:#f5c6cb}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#721c24;background-color:#f1b0b7}.list-group-item-danger.list-group-item-action.active{color:#fff;backgr
ound-color:#721c24;border-color:#721c24}.list-group-item-light{color:#818182;background-color:#fdfdfe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#818182;background-color:#ececf6}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#818182;border-color:#818182}.list-group-item-dark{color:#1b1e21;background-color:#c6c8ca}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#1b1e21;background-color:#b9bbbe}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#1b1e21;border-color:#1b1e21}.close{float:right;font-size:1.5rem;font-weight:700;line-height:1;color:#000;text-shadow:0 1px 0 #fff;opacity:.5}.close:hover{color:#000;text-decoration:none}.close:not(:disabled):not(.disabled):focus,.close:not(:disabled):not(.disabled):hover{opacity:.75}button.close{padding:0;background-color:transparent;border:0;-webkit-appearance:none;-moz-appearance:none;appearance:none}a.close.disabled{pointer-events:none}.toast{max-width:350px;overflow:hidden;font-size:.875rem;background-color:rgba(255,255,255,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .25rem .75rem rgba(0,0,0,.1);-webkit-backdrop-filter:blur(10px);backdrop-filter:blur(10px);opacity:0;border-radius:.25rem}.toast:not(:last-child){margin-bottom:.75rem}.toast.showing{opacity:1}.toast.show{display:block;opacity:1}.toast.hide{display:none}.toast-header{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.25rem .75rem;color:#6c757d;background-color:rgba(255,255,255,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05)}.toast-body{padding:.75rem}.modal-open{overflow:hidden}.modal-open .modal{overflow-x:hidden;overflow-y:auto}.modal{position:fixed;top:0;left:0;z-index:1050;display:none;width:100%;height:100%;overflow:hidden;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:-webkit-transform .3s ease-out;transition:transform .3s ease-out;transition:transform .3s ease-out,-webkit-transform .3s ease-out;-webkit-transform:translate(0,-50px);transform:translate(0,-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{-webkit-transform:none;transform:none}.modal.modal-static .modal-dialog{-webkit-transform:scale(1.02);transform:scale(1.02)}.modal-dialog-scrollable{display:-ms-flexbox;display:flex;max-height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 1rem);overflow:hidden}.modal-dialog-scrollable .modal-footer,.modal-dialog-scrollable .modal-header{-ms-flex-negative:0;flex-shrink:0}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;min-height:calc(100% - 1rem)}.modal-dialog-centered::before{display:block;height:calc(100vh - 1rem);content:""}.modal-dialog-centered.modal-dialog-scrollable{-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;height:100%}.modal-dialog-centered.modal-dialog-scrollable .modal-content{max-height:none}.modal-dialog-centered.modal-dialog-scrollable::before{content:none}.modal-content{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid 
rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:justify;justify-content:space-between;padding:1rem 1rem;border-bottom:1px solid #dee2e6;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.modal-header .close{padding:1rem 1rem;margin:-1rem -1rem -1rem auto}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem}.modal-footer{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:end;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-right-radius:calc(.3rem - 1px);border-bottom-left-radius:calc(.3rem - 1px)}.modal-footer>*{margin:.25rem}.modal-scrollbar-measure{position:absolute;top:-9999px;width:50px;height:50px;overflow:scroll}@media (min-width:576px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{max-height:calc(100% - 3.5rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-dialog-centered::before{height:calc(100vh - 3.5rem)}.modal-sm{max-width:300px}}@media (min-width:992px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.tooltip{position:absolute;z-index:1070;display:block;margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .arrow::before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[x-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[x-placement^=top] .arrow,.bs-tooltip-top .arrow{bottom:0}.bs-tooltip-auto[x-placement^=top] .arrow::before,.bs-tooltip-top .arrow::before{top:0;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[x-placement^=right],.bs-tooltip-right{padding:0 .4rem}.bs-tooltip-auto[x-placement^=right] .arrow,.bs-tooltip-right .arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=right] .arrow::before,.bs-tooltip-right .arrow::before{right:0;border-width:.4rem .4rem .4rem 0;border-right-color:#000}.bs-tooltip-auto[x-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[x-placement^=bottom] .arrow,.bs-tooltip-bottom .arrow{top:0}.bs-tooltip-auto[x-placement^=bottom] .arrow::before,.bs-tooltip-bottom .arrow::before{bottom:0;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[x-placement^=left],.bs-tooltip-left{padding:0 .4rem}.bs-tooltip-auto[x-placement^=left] .arrow,.bs-tooltip-left .arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=left] .arrow::before,.bs-tooltip-left .arrow::before{left:0;border-width:.4rem 0 .4rem .4rem;border-left-color:#000}.tooltip-inner{max-width:200px;padding:.25rem 
.5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{position:absolute;top:0;left:0;z-index:1060;display:block;max-width:276px;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover .arrow{position:absolute;display:block;width:1rem;height:.5rem;margin:0 .3rem}.popover .arrow::after,.popover .arrow::before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[x-placement^=top],.bs-popover-top{margin-bottom:.5rem}.bs-popover-auto[x-placement^=top]>.arrow,.bs-popover-top>.arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=top]>.arrow::before,.bs-popover-top>.arrow::before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=top]>.arrow::after,.bs-popover-top>.arrow::after{bottom:1px;border-width:.5rem .5rem 0;border-top-color:#fff}.bs-popover-auto[x-placement^=right],.bs-popover-right{margin-left:.5rem}.bs-popover-auto[x-placement^=right]>.arrow,.bs-popover-right>.arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=right]>.arrow::before,.bs-popover-right>.arrow::before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=right]>.arrow::after,.bs-popover-right>.arrow::after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.bs-popover-auto[x-placement^=bottom],.bs-popover-bottom{margin-top:.5rem}.bs-popover-auto[x-placement^=bottom]>.arrow,.bs-popover-bottom>.arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=bottom]>.arrow::before,.bs-popover-bottom>.arrow::before{top:0;border-width:0 .5rem .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=bottom]>.arrow::after,.bs-popover-bottom>.arrow::after{top:1px;border-width:0 .5rem .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[x-placement^=bottom] .popover-header::before,.bs-popover-bottom .popover-header::before{position:absolute;top:0;left:50%;display:block;width:1rem;margin-left:-.5rem;content:"";border-bottom:1px solid #f7f7f7}.bs-popover-auto[x-placement^=left],.bs-popover-left{margin-right:.5rem}.bs-popover-auto[x-placement^=left]>.arrow,.bs-popover-left>.arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=left]>.arrow::before,.bs-popover-left>.arrow::before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=left]>.arrow::after,.bs-popover-left>.arrow::after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.popover-header{padding:.5rem .75rem;margin-bottom:0;font-size:1rem;background-color:#f7f7f7;border-bottom:1px solid #ebebeb;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:.5rem 
.75rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{-ms-touch-action:pan-y;touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:-webkit-transform .6s ease-in-out;transition:transform .6s ease-in-out;transition:transform .6s ease-in-out,-webkit-transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-right,.carousel-item-next:not(.carousel-item-left){-webkit-transform:translateX(100%);transform:translateX(100%)}.active.carousel-item-left,.carousel-item-prev:not(.carousel-item-right){-webkit-transform:translateX(-100%);transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;-webkit-transform:none;transform:none}.carousel-fade .carousel-item-next.carousel-item-left,.carousel-fade .carousel-item-prev.carousel-item-right,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:15%;color:#fff;text-align:center;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:20px;height:20px;background:no-repeat 50%/100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M5.25 0l-4 4 4 4 1.5-1.5L4.25 4l2.5-2.5L5.25 0z'/%3e%3c/svg%3e")}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M2.75 0l-1.5 1.5L3.75 4l-2.5 2.5L2.75 8l4-4-4-4z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:15;display:-ms-flexbox;display:flex;-ms-flex-pack:center;justify-content:center;padding-left:0;margin-right:15%;margin-left:15%;list-style:none}.carousel-indicators li{box-sizing:content-box;-ms-flex:0 1 auto;flex:0 1 auto;width:30px;height:3px;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators li{transition:none}}.carousel-indicators 
.active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:20px;left:15%;z-index:10;padding-top:20px;padding-bottom:20px;color:#fff;text-align:center}@-webkit-keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}@keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;border:.25em solid currentColor;border-right-color:transparent;border-radius:50%;-webkit-animation:spinner-border .75s linear infinite;animation:spinner-border .75s linear infinite}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@-webkit-keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1}}@keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;background-color:currentColor;border-radius:50%;opacity:0;-webkit-animation:spinner-grow .75s linear infinite;animation:spinner-grow .75s linear infinite}.spinner-grow-sm{width:1rem;height:1rem}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.bg-primary{background-color:#007bff!important}a.bg-primary:focus,a.bg-primary:hover,button.bg-primary:focus,button.bg-primary:hover{background-color:#0062cc!important}.bg-secondary{background-color:#6c757d!important}a.bg-secondary:focus,a.bg-secondary:hover,button.bg-secondary:focus,button.bg-secondary:hover{background-color:#545b62!important}.bg-success{background-color:#28a745!important}a.bg-success:focus,a.bg-success:hover,button.bg-success:focus,button.bg-success:hover{background-color:#1e7e34!important}.bg-info{background-color:#17a2b8!important}a.bg-info:focus,a.bg-info:hover,button.bg-info:focus,button.bg-info:hover{background-color:#117a8b!important}.bg-warning{background-color:#ffc107!important}a.bg-warning:focus,a.bg-warning:hover,button.bg-warning:focus,button.bg-warning:hover{background-color:#d39e00!important}.bg-danger{background-color:#dc3545!important}a.bg-danger:focus,a.bg-danger:hover,button.bg-danger:focus,button.bg-danger:hover{background-color:#bd2130!important}.bg-light{background-color:#f8f9fa!important}a.bg-light:focus,a.bg-light:hover,button.bg-light:focus,button.bg-light:hover{background-color:#dae0e5!important}.bg-dark{background-color:#343a40!important}a.bg-dark:focus,a.bg-dark:hover,button.bg-dark:focus,button.bg-dark:hover{background-color:#1d2124!important}.bg-white{background-color:#fff!important}.bg-transparent{background-color:transparent!important}.border{border:1px solid #dee2e6!important}.border-top{border-top:1px solid #dee2e6!important}.border-right{border-right:1px solid #dee2e6!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-left{border-left:1px solid 
#dee2e6!important}.border-0{border:0!important}.border-top-0{border-top:0!important}.border-right-0{border-right:0!important}.border-bottom-0{border-bottom:0!important}.border-left-0{border-left:0!important}.border-primary{border-color:#007bff!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#28a745!important}.border-info{border-color:#17a2b8!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#343a40!important}.border-white{border-color:#fff!important}.rounded-sm{border-radius:.2rem!important}.rounded{border-radius:.25rem!important}.rounded-top{border-top-left-radius:.25rem!important;border-top-right-radius:.25rem!important}.rounded-right{border-top-right-radius:.25rem!important;border-bottom-right-radius:.25rem!important}.rounded-bottom{border-bottom-right-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-left{border-top-left-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-lg{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-0{border-radius:0!important}.clearfix::after{display:block;clear:both;content:""}.d-none{display:none!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:-ms-flexbox!important;display:flex!important}.d-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}@media (min-width:576px){.d-sm-none{display:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:-ms-flexbox!important;display:flex!important}.d-sm-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:768px){.d-md-none{display:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:-ms-flexbox!important;display:flex!important}.d-md-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:992px){.d-lg-none{display:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:-ms-flexbox!important;display:flex!important}.d-lg-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media 
(min-width:1200px){.d-xl-none{display:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:-ms-flexbox!important;display:flex!important}.d-xl-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media print{.d-print-none{display:none!important}.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:-ms-flexbox!important;display:flex!important}.d-print-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}.embed-responsive{position:relative;display:block;width:100%;padding:0;overflow:hidden}.embed-responsive::before{display:block;content:""}.embed-responsive .embed-responsive-item,.embed-responsive embed,.embed-responsive iframe,.embed-responsive object,.embed-responsive video{position:absolute;top:0;bottom:0;left:0;width:100%;height:100%;border:0}.embed-responsive-21by9::before{padding-top:42.857143%}.embed-responsive-16by9::before{padding-top:56.25%}.embed-responsive-4by3::before{padding-top:75%}.embed-responsive-1by1::before{padding-top:100%}.flex-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-center{-ms-flex-align:center!important;align-items:center!important}.align-items-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}@media (min-width:576px){.flex-sm-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-sm-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-sm-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-sm-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-sm-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-sm-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-sm-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-sm-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-sm-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-sm-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-sm-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-sm-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-sm-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-sm-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-sm-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-sm-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-sm-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-sm-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-sm-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-sm-center{-ms-flex-align:center!important;align-items:center!important}.align-items-sm-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-sm-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-sm-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-sm-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-sm-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-sm-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-sm-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-sm-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-sm-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-sm-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-sm-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-sm-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-sm-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-sm-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:768px){.flex-md-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-md-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-md-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-md-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-md-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-md-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-md-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-md-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-md-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-md-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-md-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-md-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-md-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-md-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-md-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-md-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-md-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-md-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-md-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-md-center{-ms-flex-align:center!important;align-items:center!important}.align-items-md-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-md-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-md-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-md-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-md-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-md-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-md-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-md-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-md-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-md-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-md-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-md-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-md-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-md-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:992px){.flex-lg-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-lg-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-lg-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-lg-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-lg-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-lg-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-lg-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-lg-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-lg-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-lg-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-lg-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-lg-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-lg-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-lg-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-lg-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-lg-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-lg-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-lg-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-lg-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-lg-center{-ms-flex-align:center!important;align-items:center!important}.align-items-lg-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-lg-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-lg-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-lg-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-lg-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-lg-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-lg-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-lg-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-lg-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-lg-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-lg-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-lg-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-lg-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-lg-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:1200px){.flex-xl-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-xl-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-xl-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-xl-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-xl-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-xl-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-xl-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-xl-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-xl-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-xl-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-xl-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-xl-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-xl-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-xl-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-xl-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-xl-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-xl-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-xl-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-xl-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-xl-center{-ms-flex-align:center!important;align-items:center!important}.align-items-xl-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-xl-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-xl-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-xl-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-xl-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-xl-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-xl-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-xl-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-xl-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-xl-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-xl-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-xl-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-xl-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-xl-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}.float-left{float:left!important}.float-right{float:right!important}.float-none{float:none!important}@media (min-width:576px){.float-sm-left{float:left!important}.float-sm-right{float:right!important}.float-sm-none{float:none!important}}@media (min-width:768px){.float-md-left{float:left!important}.float-md-right{float:right!important}.float-md-none{float:none!important}}@media (min-width:992px){.float-lg-left{float:left!important}.float-lg-right{float:right!important}.float-lg-none{float:none!important}}@media (min-width:1200px){.float-xl-left{float:left!important}.float-xl-right{float:right!important}.float-xl-none{float:none!important}}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:-webkit-sticky!important;position:sticky!important}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}@supports ((position:-webkit-sticky) or 
(position:sticky)){.sticky-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;overflow:visible;clip:auto;white-space:normal}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15)!important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mw-100{max-width:100%!important}.mh-100{max-height:100%!important}.min-vw-100{min-width:100vw!important}.min-vh-100{min-height:100vh!important}.vw-100{width:100vw!important}.vh-100{height:100vh!important}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;pointer-events:auto;content:"";background-color:rgba(0,0,0,0)}.m-0{margin:0!important}.mt-0,.my-0{margin-top:0!important}.mr-0,.mx-0{margin-right:0!important}.mb-0,.my-0{margin-bottom:0!important}.ml-0,.mx-0{margin-left:0!important}.m-1{margin:.25rem!important}.mt-1,.my-1{margin-top:.25rem!important}.mr-1,.mx-1{margin-right:.25rem!important}.mb-1,.my-1{margin-bottom:.25rem!important}.ml-1,.mx-1{margin-left:.25rem!important}.m-2{margin:.5rem!important}.mt-2,.my-2{margin-top:.5rem!important}.mr-2,.mx-2{margin-right:.5rem!important}.mb-2,.my-2{margin-bottom:.5rem!important}.ml-2,.mx-2{margin-left:.5rem!important}.m-3{margin:1rem!important}.mt-3,.my-3{margin-top:1rem!important}.mr-3,.mx-3{margin-right:1rem!important}.mb-3,.my-3{margin-bottom:1rem!important}.ml-3,.mx-3{margin-left:1rem!important}.m-4{margin:1.5rem!important}.mt-4,.my-4{margin-top:1.5rem!important}.mr-4,.mx-4{margin-right:1.5rem!important}.mb-4,.my-4{margin-bottom:1.5rem!important}.ml-4,.mx-4{margin-left:1.5rem!important}.m-5{margin:3rem!important}.mt-5,.my-5{margin-top:3rem!important}.mr-5,.mx-5{margin-right:3rem!important}.mb-5,.my-5{margin-bottom:3rem!important}.ml-5,.mx-5{margin-left:3rem!important}.p-0{padding:0!important}.pt-0,.py-0{padding-top:0!important}.pr-0,.px-0{padding-right:0!important}.pb-0,.py-0{padding-bottom:0!important}.pl-0,.px-0{padding-left:0!important}.p-1{padding:.25rem!important}.pt-1,.py-1{padding-top:.25rem!important}.pr-1,.px-1{padding-right:.25rem!important}.pb-1,.py-1{padding-bottom:.25rem!important}.pl-1,.px-1{padding-left:.25rem!important}.p-2{padding:.5rem!important}.pt-2,.py-2{padding-top:.5rem!important}.pr-2,.px-2{padding-right:.5rem!important}.pb-2,.py-2{padding-bottom:.5rem!important}.pl-2,.px-2{padding-left:.5rem!important}.p-3{padding:1rem!important}.pt-3,.py-3{padding-top:1rem!important}.pr-3,.px-3{padding-right:1rem!important}.pb-3,.py-3{padding-bottom:1rem!important}.pl-3,.px-3{padding-left:1rem!important}.p-4{padding:1.5rem!important}.pt-4,.py-4{padding-top:1.5rem!important}.pr-4,.px-4{padding-right:1.5rem!important}.pb-4,.py-4{padding-bottom:1.5rem!important}.pl-4,.px-4{padding-left:1.5rem!important}.p-5{padding:3rem!important}.pt-5,.py-5{padding-top:3rem!important}.pr-5,.px-5{padding-right:3rem!important}.pb-5,.py-5{padding-bottom:3rem!important}.pl-5,.px-5{padding-left:3rem!important}.m-n1{margin:-.25rem!important}.mt-n1,.my-n1{margin-top:-.25rem!important}
.mr-n1,.mx-n1{margin-right:-.25rem!important}.mb-n1,.my-n1{margin-bottom:-.25rem!important}.ml-n1,.mx-n1{margin-left:-.25rem!important}.m-n2{margin:-.5rem!important}.mt-n2,.my-n2{margin-top:-.5rem!important}.mr-n2,.mx-n2{margin-right:-.5rem!important}.mb-n2,.my-n2{margin-bottom:-.5rem!important}.ml-n2,.mx-n2{margin-left:-.5rem!important}.m-n3{margin:-1rem!important}.mt-n3,.my-n3{margin-top:-1rem!important}.mr-n3,.mx-n3{margin-right:-1rem!important}.mb-n3,.my-n3{margin-bottom:-1rem!important}.ml-n3,.mx-n3{margin-left:-1rem!important}.m-n4{margin:-1.5rem!important}.mt-n4,.my-n4{margin-top:-1.5rem!important}.mr-n4,.mx-n4{margin-right:-1.5rem!important}.mb-n4,.my-n4{margin-bottom:-1.5rem!important}.ml-n4,.mx-n4{margin-left:-1.5rem!important}.m-n5{margin:-3rem!important}.mt-n5,.my-n5{margin-top:-3rem!important}.mr-n5,.mx-n5{margin-right:-3rem!important}.mb-n5,.my-n5{margin-bottom:-3rem!important}.ml-n5,.mx-n5{margin-left:-3rem!important}.m-auto{margin:auto!important}.mt-auto,.my-auto{margin-top:auto!important}.mr-auto,.mx-auto{margin-right:auto!important}.mb-auto,.my-auto{margin-bottom:auto!important}.ml-auto,.mx-auto{margin-left:auto!important}@media (min-width:576px){.m-sm-0{margin:0!important}.mt-sm-0,.my-sm-0{margin-top:0!important}.mr-sm-0,.mx-sm-0{margin-right:0!important}.mb-sm-0,.my-sm-0{margin-bottom:0!important}.ml-sm-0,.mx-sm-0{margin-left:0!important}.m-sm-1{margin:.25rem!important}.mt-sm-1,.my-sm-1{margin-top:.25rem!important}.mr-sm-1,.mx-sm-1{margin-right:.25rem!important}.mb-sm-1,.my-sm-1{margin-bottom:.25rem!important}.ml-sm-1,.mx-sm-1{margin-left:.25rem!important}.m-sm-2{margin:.5rem!important}.mt-sm-2,.my-sm-2{margin-top:.5rem!important}.mr-sm-2,.mx-sm-2{margin-right:.5rem!important}.mb-sm-2,.my-sm-2{margin-bottom:.5rem!important}.ml-sm-2,.mx-sm-2{margin-left:.5rem!important}.m-sm-3{margin:1rem!important}.mt-sm-3,.my-sm-3{margin-top:1rem!important}.mr-sm-3,.mx-sm-3{margin-right:1rem!important}.mb-sm-3,.my-sm-3{margin-bottom:1rem!important}.ml-sm-3,.mx-sm-3{margin-left:1rem!important}.m-sm-4{margin:1.5rem!important}.mt-sm-4,.my-sm-4{margin-top:1.5rem!important}.mr-sm-4,.mx-sm-4{margin-right:1.5rem!important}.mb-sm-4,.my-sm-4{margin-bottom:1.5rem!important}.ml-sm-4,.mx-sm-4{margin-left:1.5rem!important}.m-sm-5{margin:3rem!important}.mt-sm-5,.my-sm-5{margin-top:3rem!important}.mr-sm-5,.mx-sm-5{margin-right:3rem!important}.mb-sm-5,.my-sm-5{margin-bottom:3rem!important}.ml-sm-5,.mx-sm-5{margin-left:3rem!important}.p-sm-0{padding:0!important}.pt-sm-0,.py-sm-0{padding-top:0!important}.pr-sm-0,.px-sm-0{padding-right:0!important}.pb-sm-0,.py-sm-0{padding-bottom:0!important}.pl-sm-0,.px-sm-0{padding-left:0!important}.p-sm-1{padding:.25rem!important}.pt-sm-1,.py-sm-1{padding-top:.25rem!important}.pr-sm-1,.px-sm-1{padding-right:.25rem!important}.pb-sm-1,.py-sm-1{padding-bottom:.25rem!important}.pl-sm-1,.px-sm-1{padding-left:.25rem!important}.p-sm-2{padding:.5rem!important}.pt-sm-2,.py-sm-2{padding-top:.5rem!important}.pr-sm-2,.px-sm-2{padding-right:.5rem!important}.pb-sm-2,.py-sm-2{padding-bottom:.5rem!important}.pl-sm-2,.px-sm-2{padding-left:.5rem!important}.p-sm-3{padding:1rem!important}.pt-sm-3,.py-sm-3{padding-top:1rem!important}.pr-sm-3,.px-sm-3{padding-right:1rem!important}.pb-sm-3,.py-sm-3{padding-bottom:1rem!important}.pl-sm-3,.px-sm-3{padding-left:1rem!important}.p-sm-4{padding:1.5rem!important}.pt-sm-4,.py-sm-4{padding-top:1.5rem!important}.pr-sm-4,.px-sm-4{padding-right:1.5rem!important}.pb-sm-4,.py-sm-4{padding-bottom:1.5rem!important}.pl-sm-4,.px-sm-4{padding-left:1.5rem!important
}.p-sm-5{padding:3rem!important}.pt-sm-5,.py-sm-5{padding-top:3rem!important}.pr-sm-5,.px-sm-5{padding-right:3rem!important}.pb-sm-5,.py-sm-5{padding-bottom:3rem!important}.pl-sm-5,.px-sm-5{padding-left:3rem!important}.m-sm-n1{margin:-.25rem!important}.mt-sm-n1,.my-sm-n1{margin-top:-.25rem!important}.mr-sm-n1,.mx-sm-n1{margin-right:-.25rem!important}.mb-sm-n1,.my-sm-n1{margin-bottom:-.25rem!important}.ml-sm-n1,.mx-sm-n1{margin-left:-.25rem!important}.m-sm-n2{margin:-.5rem!important}.mt-sm-n2,.my-sm-n2{margin-top:-.5rem!important}.mr-sm-n2,.mx-sm-n2{margin-right:-.5rem!important}.mb-sm-n2,.my-sm-n2{margin-bottom:-.5rem!important}.ml-sm-n2,.mx-sm-n2{margin-left:-.5rem!important}.m-sm-n3{margin:-1rem!important}.mt-sm-n3,.my-sm-n3{margin-top:-1rem!important}.mr-sm-n3,.mx-sm-n3{margin-right:-1rem!important}.mb-sm-n3,.my-sm-n3{margin-bottom:-1rem!important}.ml-sm-n3,.mx-sm-n3{margin-left:-1rem!important}.m-sm-n4{margin:-1.5rem!important}.mt-sm-n4,.my-sm-n4{margin-top:-1.5rem!important}.mr-sm-n4,.mx-sm-n4{margin-right:-1.5rem!important}.mb-sm-n4,.my-sm-n4{margin-bottom:-1.5rem!important}.ml-sm-n4,.mx-sm-n4{margin-left:-1.5rem!important}.m-sm-n5{margin:-3rem!important}.mt-sm-n5,.my-sm-n5{margin-top:-3rem!important}.mr-sm-n5,.mx-sm-n5{margin-right:-3rem!important}.mb-sm-n5,.my-sm-n5{margin-bottom:-3rem!important}.ml-sm-n5,.mx-sm-n5{margin-left:-3rem!important}.m-sm-auto{margin:auto!important}.mt-sm-auto,.my-sm-auto{margin-top:auto!important}.mr-sm-auto,.mx-sm-auto{margin-right:auto!important}.mb-sm-auto,.my-sm-auto{margin-bottom:auto!important}.ml-sm-auto,.mx-sm-auto{margin-left:auto!important}}@media (min-width:768px){.m-md-0{margin:0!important}.mt-md-0,.my-md-0{margin-top:0!important}.mr-md-0,.mx-md-0{margin-right:0!important}.mb-md-0,.my-md-0{margin-bottom:0!important}.ml-md-0,.mx-md-0{margin-left:0!important}.m-md-1{margin:.25rem!important}.mt-md-1,.my-md-1{margin-top:.25rem!important}.mr-md-1,.mx-md-1{margin-right:.25rem!important}.mb-md-1,.my-md-1{margin-bottom:.25rem!important}.ml-md-1,.mx-md-1{margin-left:.25rem!important}.m-md-2{margin:.5rem!important}.mt-md-2,.my-md-2{margin-top:.5rem!important}.mr-md-2,.mx-md-2{margin-right:.5rem!important}.mb-md-2,.my-md-2{margin-bottom:.5rem!important}.ml-md-2,.mx-md-2{margin-left:.5rem!important}.m-md-3{margin:1rem!important}.mt-md-3,.my-md-3{margin-top:1rem!important}.mr-md-3,.mx-md-3{margin-right:1rem!important}.mb-md-3,.my-md-3{margin-bottom:1rem!important}.ml-md-3,.mx-md-3{margin-left:1rem!important}.m-md-4{margin:1.5rem!important}.mt-md-4,.my-md-4{margin-top:1.5rem!important}.mr-md-4,.mx-md-4{margin-right:1.5rem!important}.mb-md-4,.my-md-4{margin-bottom:1.5rem!important}.ml-md-4,.mx-md-4{margin-left:1.5rem!important}.m-md-5{margin:3rem!important}.mt-md-5,.my-md-5{margin-top:3rem!important}.mr-md-5,.mx-md-5{margin-right:3rem!important}.mb-md-5,.my-md-5{margin-bottom:3rem!important}.ml-md-5,.mx-md-5{margin-left:3rem!important}.p-md-0{padding:0!important}.pt-md-0,.py-md-0{padding-top:0!important}.pr-md-0,.px-md-0{padding-right:0!important}.pb-md-0,.py-md-0{padding-bottom:0!important}.pl-md-0,.px-md-0{padding-left:0!important}.p-md-1{padding:.25rem!important}.pt-md-1,.py-md-1{padding-top:.25rem!important}.pr-md-1,.px-md-1{padding-right:.25rem!important}.pb-md-1,.py-md-1{padding-bottom:.25rem!important}.pl-md-1,.px-md-1{padding-left:.25rem!important}.p-md-2{padding:.5rem!important}.pt-md-2,.py-md-2{padding-top:.5rem!important}.pr-md-2,.px-md-2{padding-right:.5rem!important}.pb-md-2,.py-md-2{padding-bottom:.5rem!important}.pl-md-2,.px-md-2{padding-left:
.5rem!important}.p-md-3{padding:1rem!important}.pt-md-3,.py-md-3{padding-top:1rem!important}.pr-md-3,.px-md-3{padding-right:1rem!important}.pb-md-3,.py-md-3{padding-bottom:1rem!important}.pl-md-3,.px-md-3{padding-left:1rem!important}.p-md-4{padding:1.5rem!important}.pt-md-4,.py-md-4{padding-top:1.5rem!important}.pr-md-4,.px-md-4{padding-right:1.5rem!important}.pb-md-4,.py-md-4{padding-bottom:1.5rem!important}.pl-md-4,.px-md-4{padding-left:1.5rem!important}.p-md-5{padding:3rem!important}.pt-md-5,.py-md-5{padding-top:3rem!important}.pr-md-5,.px-md-5{padding-right:3rem!important}.pb-md-5,.py-md-5{padding-bottom:3rem!important}.pl-md-5,.px-md-5{padding-left:3rem!important}.m-md-n1{margin:-.25rem!important}.mt-md-n1,.my-md-n1{margin-top:-.25rem!important}.mr-md-n1,.mx-md-n1{margin-right:-.25rem!important}.mb-md-n1,.my-md-n1{margin-bottom:-.25rem!important}.ml-md-n1,.mx-md-n1{margin-left:-.25rem!important}.m-md-n2{margin:-.5rem!important}.mt-md-n2,.my-md-n2{margin-top:-.5rem!important}.mr-md-n2,.mx-md-n2{margin-right:-.5rem!important}.mb-md-n2,.my-md-n2{margin-bottom:-.5rem!important}.ml-md-n2,.mx-md-n2{margin-left:-.5rem!important}.m-md-n3{margin:-1rem!important}.mt-md-n3,.my-md-n3{margin-top:-1rem!important}.mr-md-n3,.mx-md-n3{margin-right:-1rem!important}.mb-md-n3,.my-md-n3{margin-bottom:-1rem!important}.ml-md-n3,.mx-md-n3{margin-left:-1rem!important}.m-md-n4{margin:-1.5rem!important}.mt-md-n4,.my-md-n4{margin-top:-1.5rem!important}.mr-md-n4,.mx-md-n4{margin-right:-1.5rem!important}.mb-md-n4,.my-md-n4{margin-bottom:-1.5rem!important}.ml-md-n4,.mx-md-n4{margin-left:-1.5rem!important}.m-md-n5{margin:-3rem!important}.mt-md-n5,.my-md-n5{margin-top:-3rem!important}.mr-md-n5,.mx-md-n5{margin-right:-3rem!important}.mb-md-n5,.my-md-n5{margin-bottom:-3rem!important}.ml-md-n5,.mx-md-n5{margin-left:-3rem!important}.m-md-auto{margin:auto!important}.mt-md-auto,.my-md-auto{margin-top:auto!important}.mr-md-auto,.mx-md-auto{margin-right:auto!important}.mb-md-auto,.my-md-auto{margin-bottom:auto!important}.ml-md-auto,.mx-md-auto{margin-left:auto!important}}@media 
(min-width:992px){.m-lg-0{margin:0!important}.mt-lg-0,.my-lg-0{margin-top:0!important}.mr-lg-0,.mx-lg-0{margin-right:0!important}.mb-lg-0,.my-lg-0{margin-bottom:0!important}.ml-lg-0,.mx-lg-0{margin-left:0!important}.m-lg-1{margin:.25rem!important}.mt-lg-1,.my-lg-1{margin-top:.25rem!important}.mr-lg-1,.mx-lg-1{margin-right:.25rem!important}.mb-lg-1,.my-lg-1{margin-bottom:.25rem!important}.ml-lg-1,.mx-lg-1{margin-left:.25rem!important}.m-lg-2{margin:.5rem!important}.mt-lg-2,.my-lg-2{margin-top:.5rem!important}.mr-lg-2,.mx-lg-2{margin-right:.5rem!important}.mb-lg-2,.my-lg-2{margin-bottom:.5rem!important}.ml-lg-2,.mx-lg-2{margin-left:.5rem!important}.m-lg-3{margin:1rem!important}.mt-lg-3,.my-lg-3{margin-top:1rem!important}.mr-lg-3,.mx-lg-3{margin-right:1rem!important}.mb-lg-3,.my-lg-3{margin-bottom:1rem!important}.ml-lg-3,.mx-lg-3{margin-left:1rem!important}.m-lg-4{margin:1.5rem!important}.mt-lg-4,.my-lg-4{margin-top:1.5rem!important}.mr-lg-4,.mx-lg-4{margin-right:1.5rem!important}.mb-lg-4,.my-lg-4{margin-bottom:1.5rem!important}.ml-lg-4,.mx-lg-4{margin-left:1.5rem!important}.m-lg-5{margin:3rem!important}.mt-lg-5,.my-lg-5{margin-top:3rem!important}.mr-lg-5,.mx-lg-5{margin-right:3rem!important}.mb-lg-5,.my-lg-5{margin-bottom:3rem!important}.ml-lg-5,.mx-lg-5{margin-left:3rem!important}.p-lg-0{padding:0!important}.pt-lg-0,.py-lg-0{padding-top:0!important}.pr-lg-0,.px-lg-0{padding-right:0!important}.pb-lg-0,.py-lg-0{padding-bottom:0!important}.pl-lg-0,.px-lg-0{padding-left:0!important}.p-lg-1{padding:.25rem!important}.pt-lg-1,.py-lg-1{padding-top:.25rem!important}.pr-lg-1,.px-lg-1{padding-right:.25rem!important}.pb-lg-1,.py-lg-1{padding-bottom:.25rem!important}.pl-lg-1,.px-lg-1{padding-left:.25rem!important}.p-lg-2{padding:.5rem!important}.pt-lg-2,.py-lg-2{padding-top:.5rem!important}.pr-lg-2,.px-lg-2{padding-right:.5rem!important}.pb-lg-2,.py-lg-2{padding-bottom:.5rem!important}.pl-lg-2,.px-lg-2{padding-left:.5rem!important}.p-lg-3{padding:1rem!important}.pt-lg-3,.py-lg-3{padding-top:1rem!important}.pr-lg-3,.px-lg-3{padding-right:1rem!important}.pb-lg-3,.py-lg-3{padding-bottom:1rem!important}.pl-lg-3,.px-lg-3{padding-left:1rem!important}.p-lg-4{padding:1.5rem!important}.pt-lg-4,.py-lg-4{padding-top:1.5rem!important}.pr-lg-4,.px-lg-4{padding-right:1.5rem!important}.pb-lg-4,.py-lg-4{padding-bottom:1.5rem!important}.pl-lg-4,.px-lg-4{padding-left:1.5rem!important}.p-lg-5{padding:3rem!important}.pt-lg-5,.py-lg-5{padding-top:3rem!important}.pr-lg-5,.px-lg-5{padding-right:3rem!important}.pb-lg-5,.py-lg-5{padding-bottom:3rem!important}.pl-lg-5,.px-lg-5{padding-left:3rem!important}.m-lg-n1{margin:-.25rem!important}.mt-lg-n1,.my-lg-n1{margin-top:-.25rem!important}.mr-lg-n1,.mx-lg-n1{margin-right:-.25rem!important}.mb-lg-n1,.my-lg-n1{margin-bottom:-.25rem!important}.ml-lg-n1,.mx-lg-n1{margin-left:-.25rem!important}.m-lg-n2{margin:-.5rem!important}.mt-lg-n2,.my-lg-n2{margin-top:-.5rem!important}.mr-lg-n2,.mx-lg-n2{margin-right:-.5rem!important}.mb-lg-n2,.my-lg-n2{margin-bottom:-.5rem!important}.ml-lg-n2,.mx-lg-n2{margin-left:-.5rem!important}.m-lg-n3{margin:-1rem!important}.mt-lg-n3,.my-lg-n3{margin-top:-1rem!important}.mr-lg-n3,.mx-lg-n3{margin-right:-1rem!important}.mb-lg-n3,.my-lg-n3{margin-bottom:-1rem!important}.ml-lg-n3,.mx-lg-n3{margin-left:-1rem!important}.m-lg-n4{margin:-1.5rem!important}.mt-lg-n4,.my-lg-n4{margin-top:-1.5rem!important}.mr-lg-n4,.mx-lg-n4{margin-right:-1.5rem!important}.mb-lg-n4,.my-lg-n4{margin-bottom:-1.5rem!important}.ml-lg-n4,.mx-lg-n4{margin-left:-1.5rem!important}
.m-lg-n5{margin:-3rem!important}.mt-lg-n5,.my-lg-n5{margin-top:-3rem!important}.mr-lg-n5,.mx-lg-n5{margin-right:-3rem!important}.mb-lg-n5,.my-lg-n5{margin-bottom:-3rem!important}.ml-lg-n5,.mx-lg-n5{margin-left:-3rem!important}.m-lg-auto{margin:auto!important}.mt-lg-auto,.my-lg-auto{margin-top:auto!important}.mr-lg-auto,.mx-lg-auto{margin-right:auto!important}.mb-lg-auto,.my-lg-auto{margin-bottom:auto!important}.ml-lg-auto,.mx-lg-auto{margin-left:auto!important}}@media (min-width:1200px){.m-xl-0{margin:0!important}.mt-xl-0,.my-xl-0{margin-top:0!important}.mr-xl-0,.mx-xl-0{margin-right:0!important}.mb-xl-0,.my-xl-0{margin-bottom:0!important}.ml-xl-0,.mx-xl-0{margin-left:0!important}.m-xl-1{margin:.25rem!important}.mt-xl-1,.my-xl-1{margin-top:.25rem!important}.mr-xl-1,.mx-xl-1{margin-right:.25rem!important}.mb-xl-1,.my-xl-1{margin-bottom:.25rem!important}.ml-xl-1,.mx-xl-1{margin-left:.25rem!important}.m-xl-2{margin:.5rem!important}.mt-xl-2,.my-xl-2{margin-top:.5rem!important}.mr-xl-2,.mx-xl-2{margin-right:.5rem!important}.mb-xl-2,.my-xl-2{margin-bottom:.5rem!important}.ml-xl-2,.mx-xl-2{margin-left:.5rem!important}.m-xl-3{margin:1rem!important}.mt-xl-3,.my-xl-3{margin-top:1rem!important}.mr-xl-3,.mx-xl-3{margin-right:1rem!important}.mb-xl-3,.my-xl-3{margin-bottom:1rem!important}.ml-xl-3,.mx-xl-3{margin-left:1rem!important}.m-xl-4{margin:1.5rem!important}.mt-xl-4,.my-xl-4{margin-top:1.5rem!important}.mr-xl-4,.mx-xl-4{margin-right:1.5rem!important}.mb-xl-4,.my-xl-4{margin-bottom:1.5rem!important}.ml-xl-4,.mx-xl-4{margin-left:1.5rem!important}.m-xl-5{margin:3rem!important}.mt-xl-5,.my-xl-5{margin-top:3rem!important}.mr-xl-5,.mx-xl-5{margin-right:3rem!important}.mb-xl-5,.my-xl-5{margin-bottom:3rem!important}.ml-xl-5,.mx-xl-5{margin-left:3rem!important}.p-xl-0{padding:0!important}.pt-xl-0,.py-xl-0{padding-top:0!important}.pr-xl-0,.px-xl-0{padding-right:0!important}.pb-xl-0,.py-xl-0{padding-bottom:0!important}.pl-xl-0,.px-xl-0{padding-left:0!important}.p-xl-1{padding:.25rem!important}.pt-xl-1,.py-xl-1{padding-top:.25rem!important}.pr-xl-1,.px-xl-1{padding-right:.25rem!important}.pb-xl-1,.py-xl-1{padding-bottom:.25rem!important}.pl-xl-1,.px-xl-1{padding-left:.25rem!important}.p-xl-2{padding:.5rem!important}.pt-xl-2,.py-xl-2{padding-top:.5rem!important}.pr-xl-2,.px-xl-2{padding-right:.5rem!important}.pb-xl-2,.py-xl-2{padding-bottom:.5rem!important}.pl-xl-2,.px-xl-2{padding-left:.5rem!important}.p-xl-3{padding:1rem!important}.pt-xl-3,.py-xl-3{padding-top:1rem!important}.pr-xl-3,.px-xl-3{padding-right:1rem!important}.pb-xl-3,.py-xl-3{padding-bottom:1rem!important}.pl-xl-3,.px-xl-3{padding-left:1rem!important}.p-xl-4{padding:1.5rem!important}.pt-xl-4,.py-xl-4{padding-top:1.5rem!important}.pr-xl-4,.px-xl-4{padding-right:1.5rem!important}.pb-xl-4,.py-xl-4{padding-bottom:1.5rem!important}.pl-xl-4,.px-xl-4{padding-left:1.5rem!important}.p-xl-5{padding:3rem!important}.pt-xl-5,.py-xl-5{padding-top:3rem!important}.pr-xl-5,.px-xl-5{padding-right:3rem!important}.pb-xl-5,.py-xl-5{padding-bottom:3rem!important}.pl-xl-5,.px-xl-5{padding-left:3rem!important}.m-xl-n1{margin:-.25rem!important}.mt-xl-n1,.my-xl-n1{margin-top:-.25rem!important}.mr-xl-n1,.mx-xl-n1{margin-right:-.25rem!important}.mb-xl-n1,.my-xl-n1{margin-bottom:-.25rem!important}.ml-xl-n1,.mx-xl-n1{margin-left:-.25rem!important}.m-xl-n2{margin:-.5rem!important}.mt-xl-n2,.my-xl-n2{margin-top:-.5rem!important}.mr-xl-n2,.mx-xl-n2{margin-right:-.5rem!important}.mb-xl-n2,.my-xl-n2{margin-bottom:-.5rem!important}.ml-xl-n2,.mx-xl-n2{margin-left:-.5rem!important}.m-xl-n3{margin:
-1rem!important}.mt-xl-n3,.my-xl-n3{margin-top:-1rem!important}.mr-xl-n3,.mx-xl-n3{margin-right:-1rem!important}.mb-xl-n3,.my-xl-n3{margin-bottom:-1rem!important}.ml-xl-n3,.mx-xl-n3{margin-left:-1rem!important}.m-xl-n4{margin:-1.5rem!important}.mt-xl-n4,.my-xl-n4{margin-top:-1.5rem!important}.mr-xl-n4,.mx-xl-n4{margin-right:-1.5rem!important}.mb-xl-n4,.my-xl-n4{margin-bottom:-1.5rem!important}.ml-xl-n4,.mx-xl-n4{margin-left:-1.5rem!important}.m-xl-n5{margin:-3rem!important}.mt-xl-n5,.my-xl-n5{margin-top:-3rem!important}.mr-xl-n5,.mx-xl-n5{margin-right:-3rem!important}.mb-xl-n5,.my-xl-n5{margin-bottom:-3rem!important}.ml-xl-n5,.mx-xl-n5{margin-left:-3rem!important}.m-xl-auto{margin:auto!important}.mt-xl-auto,.my-xl-auto{margin-top:auto!important}.mr-xl-auto,.mx-xl-auto{margin-right:auto!important}.mb-xl-auto,.my-xl-auto{margin-bottom:auto!important}.ml-xl-auto,.mx-xl-auto{margin-left:auto!important}}.text-monospace{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace!important}.text-justify{text-align:justify!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.text-left{text-align:left!important}.text-right{text-align:right!important}.text-center{text-align:center!important}@media (min-width:576px){.text-sm-left{text-align:left!important}.text-sm-right{text-align:right!important}.text-sm-center{text-align:center!important}}@media (min-width:768px){.text-md-left{text-align:left!important}.text-md-right{text-align:right!important}.text-md-center{text-align:center!important}}@media (min-width:992px){.text-lg-left{text-align:left!important}.text-lg-right{text-align:right!important}.text-lg-center{text-align:center!important}}@media (min-width:1200px){.text-xl-left{text-align:left!important}.text-xl-right{text-align:right!important}.text-xl-center{text-align:center!important}}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.font-weight-light{font-weight:300!important}.font-weight-lighter{font-weight:lighter!important}.font-weight-normal{font-weight:400!important}.font-weight-bold{font-weight:700!important}.font-weight-bolder{font-weight:bolder!important}.font-italic{font-style:italic!important}.text-white{color:#fff!important}.text-primary{color:#007bff!important}a.text-primary:focus,a.text-primary:hover{color:#0056b3!important}.text-secondary{color:#6c757d!important}a.text-secondary:focus,a.text-secondary:hover{color:#494f54!important}.text-success{color:#28a745!important}a.text-success:focus,a.text-success:hover{color:#19692c!important}.text-info{color:#17a2b8!important}a.text-info:focus,a.text-info:hover{color:#0f6674!important}.text-warning{color:#ffc107!important}a.text-warning:focus,a.text-warning:hover{color:#ba8b00!important}.text-danger{color:#dc3545!important}a.text-danger:focus,a.text-danger:hover{color:#a71d2a!important}.text-light{color:#f8f9fa!important}a.text-light:focus,a.text-light:hover{color:#cbd3da!important}.text-dark{color:#343a40!important}a.text-dark:focus,a.text-dark:hover{color:#121416!important}.text-body{color:#212529!important}.text-muted{color:#6c757d!important}.text-black-50{color:rgba(0,0,0,.5)!important}.text-white-50{color:rgba(255,255,255,.5)!important}.text-hide{font:0/0 
a;color:transparent;text-shadow:none;background-color:transparent;border:0}.text-decoration-none{text-decoration:none!important}.text-break{word-break:break-word!important;overflow-wrap:break-word!important}.text-reset{color:inherit!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media print{*,::after,::before{text-shadow:none!important;box-shadow:none!important}a:not(.btn){text-decoration:underline}abbr[title]::after{content:" (" attr(title) ")"}pre{white-space:pre-wrap!important}blockquote,pre{border:1px solid #adb5bd;page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}@page{size:a3}body{min-width:992px!important}.container{min-width:992px!important}.navbar{display:none}.badge{border:1px solid #000}.table{border-collapse:collapse!important}.table td,.table th{background-color:#fff!important}.table-bordered td,.table-bordered th{border:1px solid #dee2e6!important}.table-dark{color:inherit}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#dee2e6}.table .thead-dark th{color:inherit;border-color:#dee2e6}}
+/*# sourceMappingURL=bootstrap.min.css.map */
diff --git a/docs/doxygen/assets/customdoxygen.css b/docs/doxygen/assets/customdoxygen.css
index 4e6ef063dfe7b6..452a80948e1dcf 100644
--- a/docs/doxygen/assets/customdoxygen.css
+++ b/docs/doxygen/assets/customdoxygen.css
@@ -1,3 +1,21 @@
+/*
+******************************************************************************
+Copyright 2017-2021 Intel Corporation
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+******************************************************************************
+*/
+
 /* CUSTOM FONTS */
 /* lato-100 - latin */
 @font-face {
@@ -361,7 +379,7 @@ h3 {
 }
 
 h4 {
-  font: normal 400 1.25/1.25 "Lato", "Helvetica", sans-serif;
+  font: normal 400 1.25rem/1.25 "Lato", "Helvetica", sans-serif;
 }
 
 /* "H1" headings */
@@ -773,7 +791,11 @@ a.see-all {
   }
 
   #download-link {
-    margin-right:3.28rem;
+    margin-right: 2.25rem;
+  }
+
+  #install-link {
+    margin-right: 3.28rem;
   }
 }
@@ -798,7 +820,11 @@ div.old-version > p {
 }
 
 #download-link {
-  margin-right: 3.75rem;
+  margin-right: 2.25rem;
+}
+
+#install-link {
+  margin-right: 3.75rem;
 }
 
 .nav-placeholder {
@@ -1280,11 +1306,26 @@ pre.fragment {
   background: #f9f9f9;
   border: none;
   counter-reset: codegroup;
-  margin: 1.1rem 0;
-  padding: 0.75rem 1rem;
+  margin-bottom: 1.1rem;
+  padding: 0 1rem 0.75rem 0;
   overflow: auto;
 }
 
+.code-container {
+  background: #f9f9f9;
+  margin-top: 0.75rem;
+}
+
+.code-header {
+  display: flex;
+  justify-content: flex-end;
+  padding: 0.3rem;
+}
+
+.copy-button {
+  cursor: pointer;
+}
+
 div.line {
   box-sizing: content-box;
   font-size: 12px;
@@ -1407,6 +1448,7 @@ iframe#MSearchResults {
   transform: translateY(-50%);
   right: 9rem;
   width: 60vw;
+  padding-left: 5px;
 }
 
 #search-slider {
diff --git a/docs/doxygen/assets/menu.js b/docs/doxygen/assets/menu.js
deleted file mode 100644
index 7465de60a4be5b..00000000000000
--- a/docs/doxygen/assets/menu.js
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- @licstart The following is the entire license notice for the JavaScript code in this file.
-
- The MIT License (MIT)
-
- Copyright (C) 1997-2020 by Dimitri van Heesch
-
- Permission is hereby granted, free of charge, to any person obtaining a copy of this software
- and associated documentation files (the "Software"), to deal in the Software without restriction,
- including without limitation the rights to use, copy, modify, merge, publish, distribute,
- sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all copies or
- substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
- BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
- DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
- @licend The above is the entire license notice for the JavaScript code in this file
- */
-function initMenu(relPath,searchEnabled,serverSide,searchPage,search) {
-  function makeTree(data,relPath) {
-    var result='';
-    if ('children' in data) {
-      result+='';
-    }
-    return result;
-  }
-
-  $('#main-nav').append(makeTree(menudata,relPath));
-  $('#main-nav').children(':first').addClass('sm sm-dox').attr('id','main-menu');
-  if (searchEnabled) {
-    if (serverSide) {
-      $('#main-menu').append('');
-    } else {
-      $('#main-menu').append('');
-    }
-  }
-  // Do not create smartmenus
-  // $('#main-menu').smartmenus();
-}
-/* @license-end */
-
\ No newline at end of file
diff --git a/docs/doxygen/assets/openvino-layout.js b/docs/doxygen/assets/openvino-layout.js
index 907e7f47d346dd..db73c971843a57 100644
--- a/docs/doxygen/assets/openvino-layout.js
+++ b/docs/doxygen/assets/openvino-layout.js
@@ -1,3 +1,21 @@
+/*
+******************************************************************************
+Copyright 2017-2021 Intel Corporation
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+******************************************************************************
+*/
+
 "use strict";
 
 /**
@@ -487,6 +505,7 @@ function openVinoContent() {
   searchSlider.on('click', function() {
     $(this).toggleClass('closed open');
     $("#MSearchField").animate({width:'toggle'},200);
+    $('#MSearchField').focus();
   });
   if (['http:', 'https:'].indexOf(window.location.protocol) !== -1) {
     $('#MSearchField').replaceWith('');
@@ -522,6 +541,51 @@ function openVinoContent() {
     $(".contents").prepend($(".header"));
   }
 
+  // assign clipboard button for each .fragment element
+  $('.fragment').wrap('<div class="code-container"></div>');
+  $('.code-container').prepend($('<div class="code-header"></div>'));
+  var $copyButton = $('<span class="copy-button">content_copy</span>');
+  $copyButton.click(function() {
+    var self = this;
+    $(self).text('check_circle_outline')
+      .css('color', '#003C71')
+      .css('pointer-events', 'none');
+    $(self).next('.copy-tooltip')
+      .attr('data-original-title', 'Copied!')
+      .tooltip('show')
+      .addClass('active');
+    var fragment = $(self.parentElement.parentElement).children('div.fragment')[0];
+    var text = [];
+    $(fragment).children('div.line').each(function(key, val) {
+      text.push(val.innerText);
+    });
+    text = text.join('\n');
+    var $placeholder = $('