Es/lpt/lpt to ngraph fixes2 with master (openvinotoolkit#2671)
* [LPT] Replace creation of dequantization with factory
* [ngraph][LPT] Add ScaleShift replace for dequantization operations
* [LPT] SubtractMultiplyToMultiplyAdd refactoring
* [LPT] Code style fix
* [LPT] Edit SubtractMultiplyToMultiplyAdd transformation for dequantization
* [LPT] Linux compilation quick fix
* [LPT] [WIP] runtime info applying
* [LPT] Concat transformation functional tests extending
* [LPT] MultiplyToConvolution + Subtract to Add fusing + improvements in LowPrecisionTransformer
* [LPT] Linux compilation error fix
* [LPT] compilation error fix
* [LPT] MultiplyToGroupConvolution fix: 5D support
* [LPT] Multiply transformation extending: FQ weights support (WIP)
* [LPT] FQ folding & precision selection
* [LPT] code style fixes
* [LPT] code style fixes
* [LPT] Linux compilation error fix
* [LPT] SubtractMultiplyToMultiplyAdd: refactoring
* [LPT] Tests fixes
* [LPT] MultiplyToGroupConvolution tests
* [LPT] Convert Subtract with int inputs to Eltwise sub
* [LPT] Constant folding fix for quantized models
* [LPT] 1) Asymmetric quantization improvement 2) tests extending
* [LPT] 2 fixes for se_resnext_50
* [LPT] Add transformation priority branch selection test
* [LPT] AddMultiplyFusion: legacy transformation quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] Fix for eltwise inputs with multiple outputs
* [LPT] Fix for FQ fuse
* [LPT] Reshape by channel, batch temporarily disabled
* [nGraph][LPT] MatMul fix for reading FP16 models
* [LPT] 1) Add (not after Convolution/GroupConvolution/MatMul with Constant) to Subtract 2) precision selection fix: MultiplyToGroupConvolution quick fix
* [LPT] DenseNet improvements: AddTransformation: Add to Subtract + tests
* [LPT] AddTransformation refactoring
* [LPT] AddTransformation tests temporarily disabled
* [LPT] ReshapeTransformation improvements: degradation fix
* [LPT] code style fix
* [LPT] Concat tests temporary disabling
* [LPT] tests unification: 1) plugin tests: added test-cases and nGraph validation for Clamp, Split and VariadicSplit 2) func tests: added test-cases 3) transformNGraph: added the ability to run additional transformations
* [LPT] Split & VariadicSplit merge fix
* [LPT] Clamp: added support for asymmetric quantization
* [LPT] added DequantizationAttr run-time attribute
* [LPT] debug info removal
* [LPT] ConcatTransformation: zero point fix
* [LPT] CNNNetwork ReLU transformation quick fix
* [LPT] 1) Concat fix 2) ConcatMultiChannels fix 3) added "Concat with Split" test-cases 4) Subgraph fix
* [LPT] 1) Concat fix 2) added "Concat with different precision on childs" test-case
* [LPT] Concat fix for Ubuntu 18
* [LPT] Concat test fixes
* [LPT] Non-FP32 FQ input support
* [LPT] MatMul fix + separateInStandaloneBranch fix
* [LPT] Fix reference input types in Mish fusion tests
* [LPT] Fix cpuFuncTests build on CentOS
* [nGraph][LPT] ScaleShift 2D, 3D nGraph conversion enabling
* [LPT] 1) FullyConnected workaround removal 2) validate_nodes_and_infer_types for LPT
* [ngraph] Add check for children for ConvertSubtract
* [LPT] Squeeze/Unsqueeze tests unification
* [LPT] Squeeze/Unsqueeze: changed signature for getReference/getOriginal
* [LPT] Mul & Add -> ScaleShift quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix
* [LPT] code style fix #2
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix #3
* [LPT] shared plugin tests temporary disabling
* [LPT] cleanup
* [LPT] nGraph unit_tests temporary disabling
* [LPT] nGraph unit tests disabling #2
* [LPT] nGraph tests disabling
* [LPT] nGraph tests temporary disabling
* [LPT] WA removal
* [LPT] CentOS compilation fix
* [LPT] KMB WA to avoid compilation error
* [LPT] functional test temporary disabling
* [nGraph] code style fixes
* [LPT] ConcatTransformation: data movement operation as intermediate handling
* [LPT] FuseSubtractToFakeQuantize after VariadicSplit
* [LPT] ConcatWithSplitTransformation functional test temporary disabling
* [LPT] Clamp and ConcatWithDifferentPrecisionsOnChilds: tests fix
* [LPT] MatMul: bert-nv-mlperf-quantized fix
* [LPT] Add to convolution biases fuse fix
* [LPT] GPU plugin tests fixes
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT] CLDNN plugin FP16 conversion
* [LPT] AvgPool: update precision if there is no FQ after + convolution precision limitation on activation
* [LPT] Convolution fixes
* [LPT] FuseSubtractToFakeQuantize & FuseMultiplyToFakeQuantize improvement
* [LPT] FuseSubtractToFakeQuantize test fix
* [LPT] FuseSubtractToFakeQuantizeTransformation tests
* [LPT] code style fix
* [LPT] AvgPool child recursive extend
* [LPT] AvgPool tests + fix
* [LPT] compilation quick fix
* [LPT] Add to convolution biases fuse fix
* [LPT] Linux issues: MatMulWithOptimizedConstantFakeQuantizeTransformation temporarily disabled
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT] 1) added the ability to create Sub without dequantizationAttribute 2) fixed optimizeMulAfter: added copying of rt_info 3) Tests Unification: Convolution transformation 4) added cleanRunTimeInfo into NetworkHelper
* [LPT] Tests Unification: GroupConvolution
* [LPT] removed debug info
* [LPT] functional tests for Convolution & GroupConvolution extending
* [LPT] [MatMul] quick fix for Ubuntu error
* [LPT] MatMulTransformation quick test fix: one constant for both intervals
* [nGraph] code style fix
* [LPT] added output_precision to NormalizeIE
* [nGraph] NormalizeIE fix for LPT support
* [LPT] nGraph WA removal
* [LPT] fixed fillSubgraph for Concat multi channels
* [LPT] MatMul fix
* [nGraph] WA removal: 1) nGraph tests enabling 2) LPT extending: do not handle in FP32
* [LPT] nGraph WA removal: function tests skip config rollback
* [LPT] WA removal: precision propagation fix
* [LPT] ConvertMulOrAddFinally transformation extending
* [nGraph] ConvolutionMultiplyFusion rollback (moved from legacy to common)
* [nGraph] ConvertMulAddToScaleShiftOrPower: WA removal
* [nGraph] TypeRelaxed: WA removal
* [nGraph] WA removal: TypeRelaxed
* [LPT] WA removal: ConcatTransformation
* [nGraph] WA removal: Eltwise & ConvertMulOrAddFinally fixes to support LPT
* [nGraph] MulAddConversion fix: 2D & 3D ScaleShift are supported
* [nGraph] VisualizeTree extending
* [LPT] FakeQuantizeDequantization extending: check element-wise dequantization operation
* [LPT] FakeQuantizeDequantization extending: SubtractMultiplyToMultiplyAddTransformation & WeightableLayerTransformation
* [LPT] Convolution + test infrastructure update
* [LPT] GPU compilation error fix
* [nGraph] BatchNorm plugin tests: input tensor definition
* [LPT] LowPrecisionTransformer::isFunctionQuantized was added
* [nGraph] WA final cleanup
* [nGraph] ScaleShiftIE quick fix
* [LPT] functional tests: added "Concat with intermediate with constant" test-cases
* [LPT] Transformer::isNetworkquantized fix
* [LPT] SubtractMultiplyToMultiplyAdd zero Add removal: fix for ssd300 on GPU
* [LPT] MultiplyToGroupConvolution: do not transform on Const
* [LPT] workaround for negative scales
* [LPT] Convert standalone dequantization Mul, Sub, Add to ScaleShift
* [LPT] SubtractMultiplyToMultiplyAdd test fix
* [LPT] Clamp transformation: GPU tests fix
* [LPT] Transformer tests
* [LPT] FakeQuantizePrecisionSelectionTransformation was disabled for GPU
* [LPT] TransformerIsFunctionQuantized refactoring
* [nGraph] code style fix
* [LPT] mobilenet_v2_tf_depthwise test update
* [LPT] TMP: dequantization folding
* [LPT] Elementwise transformation fix: dequantization operations constant folding
* [LPT] cleanup
* [LPT] denormal values fix
* [LPT] FuseFakeQuantize test fixed + negative multiply case
* [LPT] FP32 -> FP16 conversion info
* [LPT] FQ dot interval support + swapMultiplyAdd safe division
* [LPT] test fix
* [LPT] tests for dot interval on FQ + tests for AddTransformation enabling
* [LPT] Clamp transformation fix
* [LPT] FQ precision selection test fix
* [LPT] Clamp test case
* [LPT] Concat division precision fix
* [LPT] cleanup
* [LPT] merge fix
* [LPT] WIP: MatMul asymmetric quantization fix (BERT)
* [LPT] MatMulWithOptimizedConstantFakeQuantizeTransformation disabled
* [LPT] GPU plugin set config fix
* [LPT] fix merge mistakes
* [LPT] rollback of device-specific INT8
* [LPT] ReshapeFullyConnected fix: FullyConnected output fix
* [LPT] bert-base-chinese GPU fix
* [ngraph/LPT] tests for fix of convert_mul_or_add_finally with dequantization
* [ngraph/LPT] fix convert_mul_or_add_finally with dequantization
* [LPT] ScaleShift: dequantization conversion only for dim < 4
* [LPT] MatMul transformation tests extending
* [LPT] ReshapeFullyConnected legacy transformation: LPT test case addition
* [nGraph] VisualizeTree extending: property names displaying to simplify search
* [LPT] getDequantization extending
* [LPT] MulAddToScaleshiftOrPower: out precision fix & tests
* [LPT] Multiply to ScaleShiftIE: Multiply transformation: remove DEQUANTIZATION if not valid
* [LPT] Concat test case
* [nGraph] try to fix OpenCV compatibility
* [nGraph] nGraph code style fix
* [LPT] in-place dequantization folding
* [LPT] Multiply constant folding test
* [LPT] fix plugin test case for MatMulWithOptimizedConstantFakeQuantize
* [LPT] enable MatMulWithOptimizedConstantFakeQuantize plugin test
* [LPT] Convolution transformation: mulConst shape fix
* [LPT] removal of INT8 constant folding branch for elementwise ops optimization
* [LPT] eltwise for const branch fix
* [LPT] Linux fix
* [LPT] Multiply test refactoring
* [LPT] Convert fuse in Constant + tests
* [LPT] function comparison: runtime info comparison rollback
* [LPT] Linux build fix
* [LPT] Linux build fix #2
* [LPT] MatMul transformation limitation was added to be similar to CNNNetwork LPT
* [LPT] Reshape transformation update: don't broadcast by batch
* [LPT] MatMul transformation limitation was added to be similar to CNNNetwork LPT - refactoring
* [LPT] MatMul transformation: transpose input tensors fix
* [LPT] checkElementwise for AddTransformation WA: should be moved to getDequantization
* [LPT] merge fix
* [LPT] MatMul fix & tests
* [LPT] AddTransformation tests
* [LPT] Interpolate transformation enabled
* [LPT] constant folding before LPT
* [LPT] WIP: incomplete tests
* [LPT] GPU degradation fix
* [LPT] FuseConvert workaround
* [LPT] code cleanup
* [LPT] Interpolate GPU test quick fix
* [LPT] GroupConvolution fix
* [LPT] fix fusing Multiply for non-dequantization layers
* [LPT] GPU pipeline update: enableInt8 initialization place update
* [LPT] tests compilation fix
* [LPT] merge fix
* [LPT] tests enabling
* [LPT] merge issue resolving
* [LPT] LPT CNNNetwork usage macros: part #1: source code
* [LPT] LPT CNNNetwork usage macros: part #2: cmake files update and tests adoption
* [LPT] LPT workaround removal from nGraph core
* [LPT] previous LPT version tests
* [LPT] inference_engine_lp_transformations was brought back
* [LPT] replace_node rollback
* [LPT] ConvertSubtract fix
* [LPT] GPU: baselineIsFP16 reuse fix
* [LPT] FakeQuantizeTransformation: GPU workaround: I32 -> FP32 Convert is not fused
* [LPT] AvgPool output precision workaround
* [LPT] group convolution precision + Subtract to ScaleShift const fix
* [LPT] SubMulToMulAdd & Transpose: action-recognition-0001 fix
* [LPT] Transpose: added test with per-tensor quantization

Co-authored-by: Aleksandr Pertovsky <[email protected]>
Co-authored-by: Zinoviev, Vladimir <[email protected]>
Co-authored-by: Vladislav Golubev <[email protected]>
Co-authored-by: Gorokhov Dmitriy <[email protected]>