
[IE TESTS] [CPU] extended cpu specific tests to support int8 precision #6546

Closed

Conversation

@antonvor (Contributor) commented on Jul 6, 2021

This PR contains int8 tests for the following nodes:

  1. Convolution
  2. GroupConvolution
  3. ConvolutionBackpropData
  4. GroupConvolutionBackpropData

TODO

  • fill Convolution and GroupConvolution tests
  • fill ConvolutionBackpropData and GroupConvolutionBackpropData tests
  • split tests into files depending on the precision (fp32, bf16, int8)
  • add dw conv fusing tests for #6333 ([CPU] fixed conv + dw conv fusing)
  • add tests for issue 63021
  • add tests for issue 62549

@antonvor self-assigned this on Jul 6, 2021
@antonvor requested review from a team on July 6, 2021 at 20:56
@openvino-pushbot added the labels category: IE Tests (OpenVINO Test: plugins and common) and category: Core (OpenVINO Core, aka ngraph) on Jul 6, 2021
@antonvor force-pushed the feature/cpu_tests_improvements branch 3 times, most recently from 2c95845 to a3cffa1, on July 19, 2021 at 12:00
@antonvor requested review from a team on July 20, 2021 at 13:54
@antonvor force-pushed the feature/cpu_tests_improvements branch 2 times, most recently from d85f86f to 22fa239, on July 21, 2021 at 07:07
@antonvor changed the title from "WIP: [IE TESTS] [CPU] extended cpu specific tests to support int8 precision" to "[IE TESTS] [CPU] extended cpu specific tests to support int8 precision" on Jul 21, 2021
@antonvor (Contributor, Author):

@dmitry-gorokhov could you please review?

@antonvor force-pushed the feature/cpu_tests_improvements branch 2 times, most recently from aafeae7 to 1cedb93, on July 22, 2021 at 13:33
@antonvor mentioned this pull request on Jul 27, 2021
@antonvor force-pushed the feature/cpu_tests_improvements branch 4 times, most recently from 64d613f to ae2d28e, on August 3, 2021 at 08:08
@antonvor force-pushed the feature/cpu_tests_improvements branch from ae2d28e to 549258b on August 4, 2021 at 03:48
	1. extended cpu specific tests to support int8 precision

	2. added int8 deconvolution fusing tests
@antonvor force-pushed the feature/cpu_tests_improvements branch from 549258b to 8741933 on August 24, 2021 at 12:21
@antonvor assigned maxnick and unassigned dmitry-gorokhov on Aug 24, 2021
@antonvor (Contributor, Author):

@maxnick could you review please?

@antonvor requested a review from maxnick on August 24, 2021 at 12:43

```cpp
ngraph::element::Type outElemType = ngraph::element::f32;
// only for int8 testing
int quantizeInHigh = 1;
```
Reviewer (Contributor):

The name is not very descriptive. Can we change it so that the meaning becomes more obvious?

Comment on lines +273 to +278
```cpp
// u8: 256 levels, map inputs [0, quantizeInHigh] onto the full unsigned range [0, 255]
if (outElemType == ngraph::element::u8) {
    newLastNode = ngraph::builder::makeFakeQuantize(newLastNode, ngPrc, 256, {1, 1, 1, 1}, {0}, {static_cast<float>(quantizeInHigh)}, {0}, {255});
// i8: 255 levels, map inputs [0, quantizeInHigh] onto the symmetric signed range [-127, 127]
} else if (outElemType == ngraph::element::i8) {
    newLastNode = ngraph::builder::makeFakeQuantize(newLastNode, ngPrc, 255, {1, 1, 1, 1}, {0}, {static_cast<float>(quantizeInHigh)}, {-127}, {127});
}
```

Reviewer (Contributor):

It looks like CPUTestsBase::modifyGraph is a better place for this code. Do we have to place the FQ node after the chain of simple operations that we add in the fusing tests?
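
For illustration, a rough sketch of what moving this into an override might look like (the modifyGraph signature is assumed from the CPU test utilities of that period, and outElemType/quantizeInHigh are the fixture members shown above; this is a sketch, not the PR's actual change):

```cpp
// Hypothetical sketch: append the FQ node inside a modifyGraph override
// instead of after the fusing chain in the test body.
std::shared_ptr<ngraph::Node> modifyGraph(const ngraph::element::Type& ngPrc,
                                          ngraph::ParameterVector& params,
                                          const std::shared_ptr<ngraph::Node>& lastNode) override {
    // Let the base class build the fusing chain first, then quantize its output.
    auto newLastNode = CPUTestsBase::modifyGraph(ngPrc, params, lastNode);
    if (outElemType == ngraph::element::u8) {
        newLastNode = ngraph::builder::makeFakeQuantize(newLastNode, ngPrc, 256, {1, 1, 1, 1},
                                                        {0}, {static_cast<float>(quantizeInHigh)}, {0}, {255});
    } else if (outElemType == ngraph::element::i8) {
        newLastNode = ngraph::builder::makeFakeQuantize(newLastNode, ngPrc, 255, {1, 1, 1, 1},
                                                        {0}, {static_cast<float>(quantizeInHigh)}, {-127}, {127});
    }
    return newLastNode;
}
```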

Comment on lines +135 to +139
```cpp
if (outPrc == Precision::U8 || outPrc == Precision::I8) {
    threshold = 1.001f;
    outElemType = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(outPrc);
    quantizeInHigh = calculateQuantizeInHigh(kernel, inputShape[1]);
}
```
Reviewer (Contributor):

This looks like a code block that could be extracted to a base class method, since it is repeated in the SetUp() of every test that exercises a quantized layer. Since we do not have time to properly refactor the CPU single layer tests, we can solve it with some functional machinery: for example, define a functor pointer in CPUTestsBase that is called from this extracted method but initialized in the derived implementations; if it is not initialized, the method throws. That way at least the int8-related stages (changing threshold and outElemType, and calculating quantizeInHigh) are defined in one common place. Unfortunately the method itself still has to be called from SetUp(), but it isolates some amount of common code.
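
A minimal sketch of that functor idea (the names int8SetupHook and applyInt8Setup are hypothetical; only CPUTestsBase comes from the discussion):

```cpp
#include <functional>
#include <stdexcept>

class CPUTestsBase {
protected:
    // Initialized by the derived test fixture; performs the test-specific
    // int8 stages (threshold, outElemType, quantizeInHigh calculation).
    std::function<void()> int8SetupHook;

    // Common entry point called from each SetUp(); throws if a quantized
    // test forgot to initialize the hook.
    void applyInt8Setup() {
        if (!int8SetupHook)
            throw std::runtime_error("int8SetupHook is not initialized");
        int8SetupHook();
    }
};
```

A derived fixture would assign int8SetupHook in its constructor and call applyInt8Setup() from its SetUp().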

Comment on lines +141 to +144
```cpp
if (inPrc == Precision::U8 || inPrc == Precision::I8) {
    additionalPasses.push_back(std::make_shared<pass::ConvertPrecision<element::i8, element::f32>>());
    additionalPasses.push_back(std::make_shared<pass::ConvertPrecision<element::u8, element::f32>>());
}
```
Reviewer (Contributor):

This is also a good candidate for method extraction. If we had time for refactoring, we would derive CPUTestsBase from LayerTestCommon; then, using the template method pattern, we could define some mandatory SetUp stages and extract such repeated code blocks into base class methods.
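
A hedged sketch of that template method idea (class relationship per the comment; all method names below are hypothetical):

```cpp
// Hypothetical sketch: LayerTestCommon fixes the SetUp skeleton, derived
// CPU test classes fill in the mandatory stages.
class LayerTestCommon {
public:
    void SetUp() {
        configurePrecisions();  // mandatory, test-specific (e.g. int8 handling)
        configurePasses();      // optional, with a shared default
        buildFunction();        // mandatory, builds the tested subgraph
    }
protected:
    virtual void configurePrecisions() = 0;
    virtual void buildFunction() = 0;
    virtual void configurePasses() {
        // shared default, e.g. registering the ConvertPrecision passes for int8 inputs
    }
};

class CPUTestsBase : public LayerTestCommon {
    // repeated code blocks move into protected helpers here
};
```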

```cpp
InferenceEngine::SizeVector kernel, stride, dilation;
std::vector<ptrdiff_t> padBegin, padEnd, outPadding;
size_t convOutChannels;
std::tie(kernel, stride, padBegin, padEnd, dilation, convOutChannels, padType, outPadding) = convParams;
auto ngPrc = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(netPrecision);
auto inElementType = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(inPrc);
auto outElementType = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(outPrc);
```
Reviewer (Contributor):

It seems that this variable is not used.

```cpp
::testing::Values(Precision::FP32, Precision::U8, Precision::I8),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(InferenceEngine::Layout::ANY),
::testing::Values(std::vector<size_t>({2, 64, 7, 7})),
```
Reviewer (Contributor):

Here we test only channel counts that divide evenly by the SIMD vector size, which could hide problems related to tail processing. I suggest adding more shape variations where the input and output channel counts are not whole multiples of the block size. Please check the other new test instances as well, including those for other layers.
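
A hypothetical variation along those lines (the channel count 67 = 4 × 16 + 3 is illustrative, not from the PR; it exercises full SIMD blocks plus a tail on both 16-wide and 8-wide kernels):

```cpp
// Illustrative only: channels not a multiple of the block size,
// so the kernels must take the tail-processing path as well.
::testing::Values(std::vector<size_t>({2, 67, 7, 7})),
```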

```cpp
::testing::Values(Precision::FP32, Precision::U8, Precision::I8),
::testing::Values(Layout::ANY),
::testing::Values(Layout::ANY),
::testing::Values(std::vector<size_t>({2, 12, 7, 7, 7})),
```
Reviewer (Contributor):

12 channels do not fill a full vector on AVX-512 VNNI; maybe it makes sense to use a shape that contains at least one full SIMD vector plus a fractional part, to cover more possible code paths? Here and in the other new test instances.
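
For example, a hypothetical 5D shape satisfying that suggestion (19 = 16 + 3 gives one full 16-wide vector plus a remainder; illustrative values only):

```cpp
::testing::Values(std::vector<size_t>({2, 19, 7, 7, 7})),
```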

```cpp
InferenceEngine::SizeVector kernel, stride, dilation;
std::vector<ptrdiff_t> padBegin, padEnd, outputPadding;
size_t convOutChannels, numGroups;
std::tie(kernel, stride, padBegin, padEnd, dilation, convOutChannels, numGroups, padType, outputPadding) = groupConvParams;
auto inElementType = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(inPrc);
auto outElementType = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(outPrc);
```
Reviewer (Contributor):

It seems that the variable is not used.

@ilya-lavrenov (Contributor):

@antonvor @dmitry-gorokhov is it still valid?

@akladiev (Collaborator):

This PR will be closed in 2 weeks in case of no activity.

@akladiev added the Stale label on May 11, 2023
@akladiev (Collaborator):

This PR was closed because it had been stale for 2 weeks with no activity.

@akladiev closed this on May 23, 2023
Labels
category: Core (OpenVINO Core, aka ngraph), category: IE Tests (OpenVINO Test: plugins and common), Stale
Projects
None yet
6 participants