
Feature/cpu eltwise node dynamic #9

Closed
Changes from 1 commit
Commits
Show all changes
148 commits
b3428b5
update references for memcheck pre-commit (refs from 2021.4) (#7063)
Sep 1, 2021
e07ac53
Removed speech demo (#7298)
ilya-lavrenov Sep 1, 2021
28075fb
MaxPool 8 reference implementation (#7115)
Sep 1, 2021
07f7061
CODEOWNERS: add template plugin maintainers (#5722)
andrei-kochin Sep 1, 2021
9eca6ba
Move pass pattern to ov (#7255)
ilyachur Sep 2, 2021
72c34ce
Moved NGRAPH_CHECK to OV namespace (#7251)
ilyachur Sep 2, 2021
fc92eea
[IE TESTS] Remove verified refs highlighting from report (#7285)
iefode Sep 2, 2021
98eaa93
[GNA] Fixed accuracy degradation caused by the input quantization res…
elilobanova Sep 2, 2021
90b2265
[ONNX] QLinearConvolution (#7210)
tsocha Sep 2, 2021
6cbeb18
use 1D convolution (#7291)
evkotov Sep 2, 2021
2cf7065
[GNA] Fixed import of model with several inputs (#7277)
mryzhov Sep 2, 2021
30adf04
[GPU] Fuse reorder to convolution (#6396)
kelvinchoi-intel Sep 2, 2021
4664605
Removed speech demo docs (#7350)
ilya-lavrenov Sep 2, 2021
b78f228
Moved op utils to ov namespace (#7274)
ilyachur Sep 2, 2021
e0c178e
Update SLT classes functions to use parameters passed by reference (#…
dkozykowski Sep 3, 2021
d748f2a
Remove references to prototxt from documentation and docstrings (#7346)
postrational Sep 3, 2021
f33c03e
Move all a ops to ov (#7336)
ilyachur Sep 3, 2021
bf8113c
[CPU] Fix graph serialization, use ngraph serialization directly (#7261)
EgorDuplensky Sep 3, 2021
1eca8a6
Combine all PDPD model generation scripts into one python command (#7…
nosovmik Sep 3, 2021
6dd14bf
[GNA] Fixes for GNA 3.0 library (#7236)
kbruniec Sep 3, 2021
7e9d98f
Revert "Azure CI: Remove IncrediBuild on Windows (#7085)" (#7358)
SDxKeeper Sep 3, 2021
63cb989
Fixed leftovers after PR 7336 (#7355)
ilyachur Sep 3, 2021
f68f423
All operations from B and C symbols moved to ov namespace (#7338)
ilyachur Sep 3, 2021
6fa6e48
[IE Python Speech Sample] Enable `-oname` for a imported model (`-rg`…
dpigasin Sep 3, 2021
781dcdf
Azure CI: Run tests on Mac from install dir (#7356)
Sep 3, 2021
b86984f
MaxPool-8 evaluate() (#7363)
Sep 3, 2021
005e7da
Removed auto plugin (#7310)
ilya-lavrenov Sep 3, 2021
bb84d11
[IE Python Speech Sample] Add `--scale_factor` and `--performance_cou…
dpigasin Sep 3, 2021
35fef3d
Moved operations D-F to ov namespace (#7341)
ilyachur Sep 6, 2021
e3aed98
Moved operations G-L to ov namespace (#7344)
ilyachur Sep 6, 2021
4978e8e
Fix setup.py paths (#7345)
slyubimt Sep 6, 2021
d82fed9
RandomUniform reference implementation. (#7012)
popovaan Sep 6, 2021
15bef9e
Remove deprecated mvn class for SLTs (#7340)
ggalieroc Sep 6, 2021
5f1ffc5
Propose new Slice-8 operation - update (#7257)
mitruska Sep 6, 2021
f99bf64
Moved operations M-P to ov namespace (#7354)
ilyachur Sep 6, 2021
8eeee5e
[FrontEnd][PaddlePaddle] fix fill_constant_batch_size_like when attri…
ceciliapeng2011 Sep 7, 2021
c568791
Deprecate passing nodes to op constructor (#7327)
Sep 7, 2021
9e68a67
Moved operations R-Z to ov namespace (#7365)
ilyachur Sep 7, 2021
8985fef
[GNA] Rewrite RemoveSingleInputConcatPass using ngraph (#7208)
evkotov Sep 7, 2021
5d6ef44
Reenable AddFakeQuantizeFusion and MulFakeQuantizeFusion (#5574)
mateusztabaka Sep 7, 2021
5fc0abe
Assertion message when blob precisions dont match (#7394)
Sep 7, 2021
f890b12
[XXX-55386] Change nets version to v10 (#7289)
Sep 7, 2021
72fb7d2
Merge tools and inference_engine/tools folders (#7359)
ilya-lavrenov Sep 7, 2021
c0a3ceb
[IE TESTS] Enable Opset8 in Conformance report (#7369)
iefode Sep 7, 2021
27a287b
Extend coverage versions in requirements_dev.txt (#7404)
rkazants Sep 7, 2021
a2aae78
Moved opsets to ov namespace (#7388)
ilyachur Sep 7, 2021
322c874
Feature/azaytsev/cherry picks from 2021 4 (#7389)
andrew-zaytsev Sep 7, 2021
d5e063d
Mark ngraph dependent tests (#7392)
Sep 7, 2021
4d37790
Move all utils to common folder (#7303)
Sep 7, 2021
4547818
Move TF OD API docs to code + several fixes for TF OD API models conv…
lazarevevgeny Sep 8, 2021
5096fe1
[CPU] Dynamic shapes support using fallback on reference (#6882)
maxnick Sep 8, 2021
66a14f1
[GNA] Fixed scale factors propagation for Eltwise with very different…
elilobanova Sep 8, 2021
5d68e89
Removed incorrect link from cnpy readme (#7405)
ilyachur Sep 8, 2021
3c22b2a
Revise NotEqual (#7198)
nsemaev Sep 8, 2021
7228917
Changed ov::PartialShape to ov::Shape (#7154)
ilyachur Sep 8, 2021
990b7e6
[MO] MulFakeQuantizeFuse - don't fuse if mul constant has zero or neg…
mateusztabaka Sep 8, 2021
8bd41a1
Fixed compilation with ov::opsetN:op (#7415)
ilyachur Sep 8, 2021
60714ce
Fix return values for lift_up_through func (#7323)
iimironov Sep 8, 2021
b99e1d0
[OV20 Preprocessing] Preprocessing API - basic preprocessing function…
nosovmik Sep 8, 2021
3f44858
Fix 'preprocess' test compilation (#7423)
nosovmik Sep 8, 2021
d1b0f06
Aligned macro name OV_CHECK->OPENVINO_ASSERT (#7400)
ilyachur Sep 8, 2021
75808b0
Hot fix: rename OV_CHECK to OPENVINO_ASSERT (#7429)
ilyachur Sep 8, 2021
42b93be
Add support of pkgutil-style namespace packages (#7422)
slyubimt Sep 8, 2021
f89b3d7
[IE PYTHON] dynamic shape api for python (#7282)
Sep 8, 2021
7bc6a8e
Fix clone_function method in case of Assign/ReadValue v3 (#7406)
itikhono Sep 8, 2021
1c1401b
Added default exec network result (#7352)
apankratovantonp Sep 8, 2021
c33856b
[GPU] Improve memory usage management to distinguish allocation type …
andrew-k-park Sep 9, 2021
aa106ad
Unused transformations deleted (#7428)
Sep 9, 2021
f508991
MaxPool-8 python API (#7170)
Sep 9, 2021
f5767d4
[LPT][Transformations] Dynamic shapes support: functional issues fixe…
v-Golubev Sep 9, 2021
36318ca
[Python API] move ngraph python api to the new location (#7364)
akuporos Sep 9, 2021
2a0140d
[CPU] Fixed sort port bug (#6812)
fengyisun Sep 9, 2021
f68a116
[MO] add uint32/uint8 into list of supported data types (#7424)
pavel-esir Sep 9, 2021
b282c74
[README.md] change latest release to 2021.4.1
Sep 9, 2021
c862aba
GatherTree specification refactored (#7326)
ggalieroc Sep 10, 2021
305b86f
Added openvino executable network API (#7230)
apankratovantonp Sep 10, 2021
171f6a6
Removed FPGA related deprecated documentation (#7348)
ilya-lavrenov Sep 10, 2021
288a763
[IE][VPU] Fix execTimeMcs for VPU (#7442)
Sep 10, 2021
deeb964
Revise GatherTree reference implementation (#7275)
ggalieroc Sep 10, 2021
a952540
Openvino cmake config (#7419)
ilya-lavrenov Sep 10, 2021
3ea74bd
Add PlaceOpONNX and some missing Place's methods (#7269)
Sep 10, 2021
754ee2e
Change PowerIE to ops chain (#7439)
Sep 10, 2021
9d53b35
[MO] Updating MO to detect TF 2.X OD API models (#6983)
yekruglov Sep 10, 2021
021639a
Remove optimization for sea_itt_lib (#7463)
akoryach Sep 11, 2021
66bad41
udpate scatter spec (#7086)
bszmelcz Sep 13, 2021
c50c0d5
Add default constructor for op. (#7368)
sdurawa Sep 13, 2021
13321e4
Trying to re-use OpenVINOConfig.cmake (#7467)
ilya-lavrenov Sep 13, 2021
3afc034
update shared tests classes (#7385)
dkozykowski Sep 13, 2021
0bc17a2
update subgraph test classes (#7383)
dkozykowski Sep 13, 2021
f1f7376
[Frontend][Paddle]Handle Exception in Op Conversion. (#7296)
zhangYiIntel Sep 13, 2021
f44369c
Updated requirements (#7397)
ishariko Sep 13, 2021
b3d6d11
[IE TESTS] Move SKIP macro from test bodies to SetUp() in InferReque…
iefode Sep 13, 2021
eae448e
[CPU] Added inplace support for concat with axis != 1 (#6864)
a-sidorova Sep 13, 2021
8fa386b
[Memory tests] Add new tests (#7306)
Sep 13, 2021
92445d4
add new exec with vpu compiler option (set config with MLIR compiler)…
Sep 13, 2021
2093af4
[CPU] DO optimization (#6360)
chenhu-wang Sep 13, 2021
cb0d6db
[CPU] Disable NotImpelmented exceptions mechanism for generic node. (…
IvanNovoselov Sep 13, 2021
11c4288
StridedSlice beg/end_mask should be a non empty list (#7396)
luo-cheng2021 Sep 13, 2021
07a3dc6
[47750] Validate conditional compilation with models from OMZ (#7207)
Sep 13, 2021
47aad8e
Small fixes in cmake (#7472)
ilya-lavrenov Sep 13, 2021
1050580
Add apt update (#7483)
evgenytalanin-intel Sep 13, 2021
b11b1d4
Use one set of parentheses around gcc attribute deprecated arg (#7413)
serhii-pavlovskyi-altran Sep 13, 2021
b373cb8
Removed information about FPGA plugin (#7474)
ilya-lavrenov Sep 13, 2021
2793963
added openvino runtime plugin (#7259)
apankratovantonp Sep 13, 2021
3bec324
OV Performance Hints (CPU and GPU logic for selecting the actual conf…
myshevts Sep 13, 2021
2236c61
[OV20] Layout class implementation - basic API (#7452)
nosovmik Sep 13, 2021
4aad638
Fixed test runner (#7498)
ishariko Sep 14, 2021
7328ee1
fix build issue due to implicit-const-int-float-conversion and remove…
henrywu2019 Sep 14, 2021
651f07b
[GNA] Fix permute precision handling (#7466)
elilobanova Sep 14, 2021
5e6896d
Reverted to Remote Context (#7453)
apankratovantonp Sep 14, 2021
39120a7
Add MulConvFusion transformation (#6951)
mateusztabaka Sep 14, 2021
ba34a19
[GNA] Expanding transformations: swap_input_matmul and handle_transpo…
dmitriikhurtin Sep 14, 2021
2c4009e
Add support for com.microsoft.BiasGelu operator (#7480)
mateusztabaka Sep 14, 2021
7ea1960
Revert shape renaming (#7490)
ilyachur Sep 14, 2021
c06a51f
[CPU] Models cache for CPU plugin (#6403)
vladislav-volkov Sep 14, 2021
a4b75b7
[CPU] Add check for MKLDNNDeconvolutionNode for int8 execution (#7201)
apertovs Sep 14, 2021
ae64b5e
Exclude 3rd party python packages from lsan (#7489)
Sep 14, 2021
fda3f5d
[requirements] Set TF to 2.5.0 (#6620)
akladiev Sep 14, 2021
cf48792
Random Uniform MO implementation (#6694)
popovaan Sep 14, 2021
f5cd75a
New Slice-8 ngraph op shell (#7304)
mitruska Sep 15, 2021
ffef5bd
Avg pool bug fix (#7493)
pszmel Sep 15, 2021
0d76993
Fixed issue with memcpy(). (#7416)
popovaan Sep 15, 2021
bdaa44d
Fixed Minimum op if u8/16/32/64 data type is used (#6665)
Sep 15, 2021
7654789
Removed QueryNetworkResult from new API (#7507)
ilya-lavrenov Sep 15, 2021
bfc6e61
Updated RandomUniform spec to align with the implementation. (#7491)
popovaan Sep 15, 2021
715769a
Remove `demo_security_barrier_camera` and update relevant docs (#7494)
dpigasin Sep 15, 2021
c0f01cd
Extend ONNX Importer for operation "If" (#7319)
mateusztabaka Sep 15, 2021
bd89f78
[ONNX FE] Enable Place classes r-value optimization (#7485)
t-jankowski Sep 15, 2021
08ea036
RandomUniform Python API. (#7373)
popovaan Sep 15, 2021
0df7dab
New IRC package structure (#6255)
ilya-lavrenov Sep 15, 2021
5b285ed
[LPT] MoveFakeQuantize (#6723)
ndemashov Sep 15, 2021
6df94af
[IE PYTHON] wait for infer callback to complete (#7374)
Sep 15, 2021
97d937c
revise RNNCell RNNsequence operation class (#7335)
tiger100256-hu Sep 15, 2021
790ecd5
Add atan to template plugin test (#7509)
davidsnam-intel Sep 16, 2021
10f0075
RandomUniformFusion transformation. (#7187)
popovaan Sep 16, 2021
57b5170
[GNA] Depth-wise separable convolution support (#7281)
sirzabek Sep 16, 2021
cb80764
Add stress unit tests with several processes to config (#7451)
alexander-shchepetov Sep 16, 2021
ffa07eb
Refactor code to output result after each utterance infer (#7450)
dpigasin Sep 16, 2021
5847b35
Make time && stress tests independent from IEDeveloperPackage (#7411)
ishariko Sep 16, 2021
d2333cc
Introduced template for OV2.0 migration guide (#7360)
ilyachur Sep 17, 2021
44186c3
Fixed path to setupvars.sh ti readme (#7537)
ilya-lavrenov Sep 17, 2021
8690e14
Disabled TBB Executor (#7454)
apankratovantonp Sep 17, 2021
1f85d42
Add `use_device_mem` option to benchmark_app (#7433)
sshlyapn Sep 17, 2021
ac8db25
Enable CPU accelerate FIL in MULTI (#7380)
tiger100256-hu Sep 17, 2021
a6bdb87
[IE Python Speech Sample] Enable --scale_factor for multiple input fi…
Sep 17, 2021
660c106
[GPU] Performance counters fix (#7143)
Lyamin-Roman Sep 17, 2021
58af4eb
Revert IB again (#7546)
Sep 17, 2021
118a373
[CPU] Supporting dynamism into Eltwise and Reorder
steve-y Sep 3, 2021
Enable CPU accelerate FIL in MULTI (openvinotoolkit#7380)
* Enable CPU accelerate FIL in MULTI

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add configure to device

KEY_PERFORMANCE_HINT_NUM_REQUESTS

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
tiger100256-hu authored Sep 17, 2021

commit ac8db25864c6c3c41a48411298fb84fc8c5b1b9f
337 changes: 277 additions & 60 deletions inference-engine/src/multi_device/multi_device_exec_network.cpp

Large diffs are not rendered by default.

39 changes: 38 additions & 1 deletion inference-engine/src/multi_device/multi_device_exec_network.hpp
@@ -16,14 +16,21 @@
#include <cpp_interfaces/impl/ie_executable_network_thread_safe_default.hpp>
#include <ie_parallel.hpp>
#include <threading/ie_itask_executor.hpp>
#include <threading/ie_executor_manager.hpp>
#include "ie_icore.hpp"

#if (IE_THREAD == IE_THREAD_TBB || IE_THREAD == IE_THREAD_TBB_AUTO)
# include <tbb/concurrent_queue.h>
#endif


namespace MultiDevicePlugin {

class MultiDeviceInferencePlugin;

using DeviceName = std::string;
using NetworkFuture = std::future<InferenceEngine::SoExecutableNetworkInternal>;
using NetworkPromise = std::promise<InferenceEngine::SoExecutableNetworkInternal>;

struct DeviceInformation {
DeviceName deviceName;
@@ -105,10 +112,16 @@ class MultiDeviceExecutableNetwork : public InferenceEngine::ExecutableNetworkTh
};
using NotBusyWorkerRequests = ThreadSafeBoundedQueue<WorkerInferRequest*>;

explicit MultiDeviceExecutableNetwork(const DeviceMap<InferenceEngine::SoExecutableNetworkInternal>& networksPerDevice,
explicit MultiDeviceExecutableNetwork(const DeviceMap<InferenceEngine::SoExecutableNetworkInternal>& networksPerDevice,
const std::vector<DeviceInformation>& networkDevices,
const std::unordered_map<std::string, InferenceEngine::Parameter>& config,
const bool needPerfCounters = false);
MultiDeviceExecutableNetwork(const std::string& modelPath,
const InferenceEngine::CNNNetwork& network,
const std::vector<DeviceInformation>& metaDevices,
const std::string& strDevices,
MultiDeviceInferencePlugin* plugin,
const bool needPerfCounters = false);

void SetConfig(const std::map<std::string, InferenceEngine::Parameter> &config) override;
InferenceEngine::Parameter GetConfig(const std::string &name) const override;
@@ -138,6 +151,30 @@ class MultiDeviceExecutableNetwork : public InferenceEngine::ExecutableNetworkTh
std::unordered_map<std::string, InferenceEngine::Parameter> _config;
bool _needPerfCounters = false;
std::atomic_size_t _numRequestsCreated = {0};

private:
void GenerateWorkers(const std::string& device, const InferenceEngine::SoExecutableNetworkInternal& executableNetwork);
void WaitActualNetworkReady() const;
void WaitFirstNetworkReady();
static bool RunPipelineTask(InferenceEngine::Task& inferPipelineTask,
NotBusyWorkerRequests& idleWorkerRequests,
const DeviceName& preferred_device);

private:
std::shared_ptr<InferenceEngine::ICore> _core;
InferenceEngine::IStreamsExecutor::Ptr _executor;
MultiDeviceInferencePlugin* _multiPlugin;
InferenceEngine::SoExecutableNetworkInternal _networkFirstReady;
mutable InferenceEngine::SoExecutableNetworkInternal _networkActualNeeded;
NetworkFuture _cpuFuture;
NetworkPromise _cpuPromise;
mutable NetworkFuture _acceleratorFuture;
mutable NetworkPromise _acceleratorPromise;
mutable bool _alreadyActualNetwork = {false};
bool _workModeIsAUTO = {false};
DeviceInformation _cpuDevice;
DeviceInformation _acceleratorDevice;
mutable std::once_flag _oc;
};

} // namespace MultiDevicePlugin
153 changes: 129 additions & 24 deletions inference-engine/src/multi_device/multi_device_plugin.cpp
@@ -219,34 +219,50 @@ IExecutableNetworkInternal::Ptr MultiDeviceInferencePlugin::LoadNetworkImpl(cons
bool workModeAuto = workMode != fullConfig.end() && workMode->second == InferenceEngine::PluginConfigParams::YES;
auto priorities = fullConfig.find(MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES);

// not found device priorities for -d AUTO use case
if (priorities == fullConfig.end()) {
if (workModeAuto) {
std::string allDevices;
auto availableDevices = GetCore()->GetAvailableDevices();
if (availableDevices.empty()) {
IE_THROW(NotFound) << "No available device found";
}
for (auto&& device : availableDevices) {
allDevices += device;
allDevices += ((device == availableDevices[availableDevices.size()-1]) ? "" : ",");
}
metaDevices = ParseMetaDevices(allDevices, fullConfig);
multiNetworkConfig.insert({MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, allDevices});
} else {
IE_THROW() << "KEY_MULTI_DEVICE_PRIORITIES key is not set for " << GetName() << " device";
// if workMode is AUTO
if (workModeAuto) {
// check the config and decide whether the PerfCounters setting needs to be passed to the devices,
// then build the filter config
bool needPerfCounters = false;
std::map<std::string, std::string> filterConfig;
CheckConfig(fullConfig, needPerfCounters, filterConfig);
// keep only the devices that support the filter config
auto strDevices = GetDeviceList(fullConfig);
auto metaDevices = ParseMetaDevices(strDevices, fullConfig);
auto supportDevices = FilterDevice(metaDevices, filterConfig);
if (supportDevices.size() == 0) {
IE_THROW() << "no device supports the requested config";
}
// replace each device's config with the config AUTO wants to pass to it
// and rebuild strDevices from the supported devices
std::vector<std::string> validConfigKey;
validConfigKey.push_back(PluginConfigParams::KEY_PERF_COUNT);
validConfigKey.push_back(PluginConfigParams::KEY_EXCLUSIVE_ASYNC_REQUESTS);
validConfigKey.push_back(PluginConfigParams::KEY_PERFORMANCE_HINT);
validConfigKey.push_back(PluginConfigParams::KEY_PERFORMANCE_HINT_NUM_REQUESTS);
strDevices = "";
for (auto iter = supportDevices.begin(); iter != supportDevices.end(); iter++) {
std::map<std::string, std::string> deviceConfig;
auto& configs = iter->config;
for (auto& config : configs) {
if (std::find(validConfigKey.begin(), validConfigKey.end(), config.first) != validConfigKey.end()) {
deviceConfig.insert({config.first, config.second});
}
}
iter->config = deviceConfig;
strDevices = iter->deviceName;
strDevices += ((iter + 1) == supportDevices.end()) ? "" : ",";
}

return std::make_shared<MultiDeviceExecutableNetwork>(modelPath, network, supportDevices, strDevices, this, needPerfCounters);
}

if (priorities == fullConfig.end()) {
IE_THROW() << "KEY_MULTI_DEVICE_PRIORITIES key is not set for " << GetName() << " device";
} else { // for use case -d MULTI:xPU or -d AUTO:xPU
metaDevices = ParseMetaDevices(priorities->second, fullConfig);
multiNetworkConfig.insert(*priorities);
}
// check if it is -d AUTO or -d AUTO:xPU use case
if (workModeAuto) {
// select the device
auto device = SelectDevice(metaDevices, networkPrecision).deviceName;
// parse the config for the device
metaDevices = ParseMetaDevices(SelectDevice(metaDevices, networkPrecision).deviceName, fullConfig);
}

DeviceMap<SoExecutableNetworkInternal> executableNetworkPerDevice;
std::mutex load_mutex;
@@ -345,7 +361,6 @@ QueryNetworkResult MultiDeviceInferencePlugin::QueryNetwork(const CNNNetwork&
return queryResult;
}


DeviceInformation MultiDeviceInferencePlugin::SelectDevice(const std::vector<DeviceInformation>& metaDevices, const std::string& networkPrecision) {
if (metaDevices.empty()) {
IE_THROW(NotFound) << "No available device to select in " << GetName() << " plugin";
@@ -466,4 +481,94 @@ DeviceInformation MultiDeviceInferencePlugin::SelectDevice(const std::vector<Dev
return CPU[0];
}

std::string MultiDeviceInferencePlugin::GetDeviceList(const std::map<std::string, std::string>& config) const {
std::string allDevices;

auto deviceListConfig = config.find(MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES);
if (deviceListConfig == config.end()) {
auto deviceList = GetCore()->GetAvailableDevices();
for (auto&& device : deviceList) {
allDevices += device;
allDevices += ((device == deviceList[deviceList.size()-1]) ? "" : ",");
}
} else {
allDevices = deviceListConfig->second;
}

if (allDevices.empty()) {
IE_THROW() << "No supported devices can be used; please check the environment";
}

return allDevices;
}

void MultiDeviceInferencePlugin::CheckConfig(const std::map<std::string, std::string>& config,
bool& needPerfCounters, std::map<std::string, std::string>& filterConfig) {
// TODO: refactor to reduce the duplicated code below
const auto perf_hints_configs = PerfHintsConfig::SupportedKeys();
for (auto&& kvp : config) {
if (kvp.first.find("AUTO_") == 0) {
continue;
} else if (kvp.first == PluginConfigParams::KEY_PERF_COUNT) {
if (kvp.second == PluginConfigParams::YES) {
needPerfCounters = true;
filterConfig.insert({kvp.first, kvp.second});
} else if (kvp.second == PluginConfigParams::NO) {
needPerfCounters = false;
} else {
IE_THROW() << "Unsupported config value: " << kvp.second
<< " for key: " << kvp.first;
}
} else if (kvp.first == PluginConfigParams::KEY_EXCLUSIVE_ASYNC_REQUESTS) {
if (kvp.second == PluginConfigParams::YES ||
kvp.second == PluginConfigParams::NO) {
continue;
} else {
IE_THROW() << "Unsupported config value: " << kvp.second
<< " for key: " << kvp.first;
}
} else if (std::find(perf_hints_configs.begin(), perf_hints_configs.end(), kvp.first) != perf_hints_configs.end()) {
PerfHintsConfig::CheckConfigAndValue(kvp);
} else if (supported_configKeys.end() == std::find(supported_configKeys.begin(), supported_configKeys.end(), kvp.first)) {
IE_THROW() << "Unsupported config key: " << kvp.first;
}
}
}

std::vector<DeviceInformation> MultiDeviceInferencePlugin::FilterDevice(const std::vector<DeviceInformation>& metaDevices,
const std::map<std::string, std::string>& config) {
if (metaDevices.empty()) {
IE_THROW(NotFound) << "No available device to filter " << GetName() << " plugin";
}

if (config.size() == 0) {
return metaDevices;
}

std::vector<DeviceInformation> filterDevice;
for (auto&& item : metaDevices) {
bool support = true;
std::vector<std::string> supportedMetrics = GetCore()->GetMetric(item.deviceName, METRIC_KEY(SUPPORTED_METRICS));
if (std::find(supportedMetrics.begin(), supportedMetrics.end(), METRIC_KEY(SUPPORTED_CONFIG_KEYS)) != supportedMetrics.end()) {
std::vector<std::string> supportKeys = GetCore()->GetMetric(item.deviceName, METRIC_KEY(SUPPORTED_CONFIG_KEYS));
for (auto&& kvp : config) {
auto targetKey = std::find(supportKeys.begin(), supportKeys.end(), kvp.first);
// if the device reports the key, we consider the key supported
if (targetKey != supportKeys.end()) {
continue;
} else {
support = false;
break;
}
}
} else {
support = false;
}

if (support) {
filterDevice.push_back(item);
}
}
return filterDevice;
}
} // namespace MultiDevicePlugin
8 changes: 7 additions & 1 deletion inference-engine/src/multi_device/multi_device_plugin.hpp
@@ -36,6 +36,9 @@ class MultiDeviceInferencePlugin : public InferenceEngine::IInferencePlugin {
std::vector<MultiDevicePlugin::DeviceInformation> ParseMetaDevices(const std::string & devicesRequestsCfg,
const std::map<std::string, std::string> & config) const;

std::string GetDeviceList(const std::map<std::string, std::string>& config) const;
DeviceInformation SelectDevice(const std::vector<DeviceInformation>& metaDevices, const std::string& networkPrecision = METRIC_VALUE(FP32));

protected:
std::map<std::string, std::string> GetSupportedConfig(const std::map<std::string, std::string>& config,
const MultiDevicePlugin::DeviceName & deviceName) const;
@@ -45,7 +48,10 @@ class MultiDeviceInferencePlugin : public InferenceEngine::IInferencePlugin {
InferenceEngine::CNNNetwork network,
const std::map<std::string, std::string>& config,
const std::string &networkPrecision = METRIC_VALUE(FP32));
DeviceInformation SelectDevice(const std::vector<DeviceInformation>& metaDevices, const std::string& networkPrecision = METRIC_VALUE(FP32));
static void CheckConfig(const std::map<std::string, std::string>& config, bool& needPerfCounters,
std::map<std::string, std::string>& filterConfig);
std::vector<DeviceInformation> FilterDevice(const std::vector<DeviceInformation>& metaDevices,
const std::map<std::string, std::string>& config);
};

} // namespace MultiDevicePlugin
(diff for another changed file; filename not rendered)
@@ -18,6 +18,10 @@ const std::vector<std::map<std::string, std::string>> MulticonfigsPerfCounters =
{{ MULTI_CONFIG_KEY(DEVICE_PRIORITIES), targetDevice }}
};

const std::vector<std::map<std::string, std::string>> AutoconfigsPerfCounters = {
{{ MULTI_CONFIG_KEY(DEVICE_PRIORITIES), targetDevice }}
};

INSTANTIATE_TEST_SUITE_P(smoke_BehaviorTests, InferRequestPerfCountersTest,
::testing::Combine(
::testing::Values(targetDevice),
@@ -30,4 +34,11 @@ INSTANTIATE_TEST_SUITE_P(smoke_Multi_BehaviorTests, InferRequestPerfCountersTest
::testing::ValuesIn(MulticonfigsPerfCounters)),
InferRequestPerfCountersTest::getTestCaseName);

INSTANTIATE_TEST_SUITE_P(smoke_Auto_BehaviorTests, InferRequestPerfCountersTest,
::testing::Combine(
::testing::Values(CommonTestUtils::DEVICE_AUTO),
::testing::ValuesIn(AutoconfigsPerfCounters)),
InferRequestPerfCountersTest::getTestCaseName);


} // namespace
(diff for another changed file; filename not rendered)
@@ -62,18 +62,5 @@ namespace {
::testing::ValuesIn(MultiInConfigs)),
InferRequestConfigTest::getTestCaseName);

INSTANTIATE_TEST_SUITE_P(smoke_Auto_BehaviorTests, InferRequestConfigTest,
::testing::Combine(
::testing::Values(1u),
::testing::Values(CommonTestUtils::DEVICE_AUTO),
::testing::ValuesIn(multiConfigs)),
InferRequestConfigTest::getTestCaseName);


INSTANTIATE_TEST_SUITE_P(smoke_Auto_BehaviorTests_, InferRequestConfigTest,
::testing::Combine(
::testing::Values(1u),
::testing::Values(CommonTestUtils::DEVICE_AUTO),
::testing::ValuesIn(MultiInConfigs)),
InferRequestConfigTest::getTestCaseName);
} // namespace
(diff for another changed file; filename not rendered)
@@ -37,6 +37,10 @@ const std::vector<std::map<std::string, std::string>> Multiconfigs = {
{{ MULTI_CONFIG_KEY(DEVICE_PRIORITIES) , CommonTestUtils::DEVICE_CPU}}
};

const std::vector<std::map<std::string, std::string>> Autoconfigs = {
{{ MULTI_CONFIG_KEY(DEVICE_PRIORITIES) , CommonTestUtils::DEVICE_CPU}}
};

INSTANTIATE_TEST_SUITE_P(smoke_BehaviorTests, InferRequestPerfCountersTest,
::testing::Combine(
::testing::Values(CommonTestUtils::DEVICE_CPU),
@@ -48,4 +52,11 @@ INSTANTIATE_TEST_SUITE_P(smoke_Multi_BehaviorTests, InferRequestPerfCountersTest
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(Multiconfigs)),
InferRequestPerfCountersTest::getTestCaseName);

INSTANTIATE_TEST_SUITE_P(smoke_Auto_BehaviorTests, InferRequestPerfCountersTest,
::testing::Combine(
::testing::Values(CommonTestUtils::DEVICE_AUTO),
::testing::ValuesIn(Autoconfigs)),
InferRequestPerfCountersTest::getTestCaseName);

} // namespace
(diff for another changed file; filename not rendered)
@@ -32,6 +32,10 @@ namespace {
{InferenceEngine::PluginConfigParams::KEY_CPU_THROUGHPUT_STREAMS, InferenceEngine::PluginConfigParams::CPU_THROUGHPUT_AUTO}}
};

const std::vector<std::map<std::string, std::string>> AutoConfigsInputOutput = {
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES , CommonTestUtils::DEVICE_CPU}}
};

const std::vector<std::map<std::string, std::string>> configsOutput = {
{},
{{InferenceEngine::PluginConfigParams::KEY_CPU_THROUGHPUT_STREAMS, InferenceEngine::PluginConfigParams::CPU_THROUGHPUT_AUTO}}
@@ -56,7 +60,7 @@ namespace {
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_AUTO),
::testing::ValuesIn(MultiConfigsInputOutput)),
::testing::ValuesIn(AutoConfigsInputOutput)),
BehaviorTestOutput::getTestCaseName);

INSTANTIATE_TEST_SUITE_P(smoke_BehaviorTests, BehaviorTests,
@@ -98,7 +102,7 @@ namespace {
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_AUTO),
::testing::ValuesIn(MultiConfigsInputOutput)),
::testing::ValuesIn(AutoConfigsInputOutput)),
BehaviorTestInput::getTestCaseName);

} // namespace