Custom Layer Support [Bug] #8249

Closed
DwenGu opened this issue Oct 27, 2021 · 46 comments

DwenGu commented Oct 27, 2021

  • OpenVINO => 2021.4
  • Operating System / Platform => Windows 64 Bit
  • Compiler => Visual Studio 2017
  • Problem classification: Model Conversion
  • Framework: Caffe (if applicable)

Following the OpenVINO guide, I added myop_ext.py in extensions/front/caffe and myop.py in extensions/op. However, when converting the Caffe model to IR, some errors are reported as follows:

[ WARNING ] Consider building the Inference Engine Python API from sources or reinstall OpenVINO (TM) toolkit using "pip install openvino==2021.4"
[ ERROR ] List of operations that cannot be converted to Inference Engine IR:
[ ERROR ] BackwardWarp (1)
[ ERROR ] backwardwarp0
[ ERROR ] Part of the nodes was not converted to IR. Stopped.

Is there detailed documentation that can help me add the custom layer correctly?

Thanks!

The code of myop_ext.py: (screenshot)

The code of myop.py: (screenshot)
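The screenshots above are not legible in this transcript. As a rough sketch of the kind of logic such a Model Optimizer op must provide, the heart of it is shape inference. Everything below (the function name, the (dx, dy) flow-channel convention) is an illustrative assumption, not the poster's actual code, and the `Op`/`FrontExtractorOp` base classes this would plug into are omitted so the sketch stays self-contained:

```python
# Illustrative only: the shape-inference core of a hypothetical BackwardWarp
# op. In a real Model Optimizer extension this logic would live in an Op
# subclass under extensions/ops/ with a matching FrontExtractorOp under
# extensions/front/caffe/.

def backward_warp_infer(image_shape, flow_shape):
    """Backward warping resamples an image with a per-pixel flow field,
    so the output keeps the image's NCHW shape."""
    n, c, h, w = image_shape
    fn, fc, fh, fw = flow_shape
    assert fc == 2, "flow is assumed to carry 2 channels (dx, dy)"
    assert (fn, fh, fw) == (n, h, w), "flow must match image batch and size"
    return (n, c, h, w)

print(backward_warp_infer((1, 3, 64, 64), (1, 2, 64, 64)))  # (1, 3, 64, 64)
```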

@DwenGu DwenGu added bug Something isn't working support_request labels Oct 27, 2021
@DwenGu DwenGu changed the title [Bug] Custom Layer Support [Bug] Oct 27, 2021
@Iffa-Intel:

You'll need to use the OpenVINO Model Optimizer to convert a native model into IR before feeding it to your custom inference app.
Could you share your model and let us know which Caffe topology you are using?

DwenGu commented Oct 28, 2021

First of all, thanks for your reply. When I use mo.py to convert the Caffe model into IR, the errors are reported as above. For now, I have not fed this backwardwarp custom layer to my custom inference app.
Here is our Caffe model: https://drive.google.com/drive/folders/1vS1-7DazLYyzqxE67-kmymcFbcU-RFQ9?usp=sharing

DwenGu commented Oct 28, 2021

I want to know how to add the custom layer correctly. For now, I have already followed the OpenVINO guide to add the BackwardWarp layer, but it did not work well.

Iffa-Intel commented Oct 28, 2021

FYI, this is the list of supported layers for Caffe.

It seems that your layer (BackwardWarp) is not listed, therefore it's not supported.
The latest OpenVINO guide has changed the term "layer" to "operation".
You may refer to the documentation here.

DwenGu commented Oct 28, 2021

> Have you tried this instead? OpenVINO Custom Layer Implementation Tutorial for Windows
>
> This example has step-by-step instructions, including things required for creating and executing a custom layer.

Hello, thanks for your reply.
The organization of the latest version (2021.4) is somewhat different from the guide you linked above. In the latest version, I cannot find the extension_generator\extgen.py file under deployment_tools\tools, and there is no deployment_tools\inference_engine\src\extension path in OpenVINO 2021.4.

Is there any other guide on how to add a custom layer? Or can you figure out what's wrong with the code or process I posted?

Iffa-Intel commented Oct 28, 2021

It seems that your layer (BackwardWarp) is not listed in the supported framework layers, therefore it's not supported. You may refer to the link I shared in my previous comment: the list of supported layers for Caffe.

DwenGu commented Oct 28, 2021

I followed the guide you shared. Following the part "Convert the Frozen TensorFlow* Model to Intermediate Representation", I added backward.py into extensions/ops and backward_ext.py into extensions/front/caffe/. After these two steps, the guide suggests the custom layer should convert successfully. However, I still cannot convert it correctly.

Here is the convert log file.

Iffa-Intel commented Oct 29, 2021

TensorFlow and Caffe models require different conversion parameters. That is why I recommended you use the Model Optimizer for Caffe Model steps in the first place to get the IR file.

The guide you are referring to uses TensorFlow; you'll need to adjust it according to your model if it's not TensorFlow.

TensorFlow needs the frozen file format (.pb), while Caffe uses .caffemodel.

In your log, it's clear that you were getting "[ ERROR ] BackwardWarp (1)". Again, as I mentioned previously, the BackwardWarp layer in a Caffe model is not supported in OpenVINO; what is supported is only what is on this list: the list of supported layers for Caffe.

Your model needs both its topology (e.g. Caffe model, topology: MobileNet) and its layers (e.g. Softmax) to be supported by OpenVINO in order to use them, no matter for which operations.

Could you share your Caffe model for me to confirm on my side whether its topology/layers are supported? I couldn't access the Google Drive link you shared before.

@Iffa-Intel Iffa-Intel added category: MO Model Optimizer and removed bug Something isn't working labels Oct 29, 2021
DwenGu commented Oct 29, 2021


Hello, thanks for your reply.
Here is the Caffe model link.

As you mentioned previously, the BackwardWarp layer in a Caffe model is not supported in OpenVINO. Therefore, as the guide describes, I added the BackwardWarp layer in the specified path when converting the network.

Thanks!

DwenGu commented Nov 1, 2021

Is there any relevant progress?

Iffa-Intel commented Nov 2, 2021

Both generic and Caffe-specific parameter conversion using the OpenVINO Model Optimizer failed and showed the same issue: the unsupported custom layer (BackwardWarp).

We'll look further for any workaround that may apply to your case.

(attached logs: mo, mo_proto)

@Iffa-Intel Iffa-Intel added the PSE label Nov 2, 2021
@jgespino jgespino self-assigned this Nov 9, 2021
jgespino (Contributor) commented Nov 9, 2021

Hi @DunguTmp

I've been trying to add your custom layer by following the documentation and the code you provided but was unsuccessful. I will need to check with the development team for additional guidance.

Regards,
Jesus

Ref. 70316

DwenGu commented Nov 12, 2021


Hi, @Iffa-Meah:
For now, I can convert the Caffe model to IR successfully. This link is the newest code.

But there is another issue:

When using the following code to load the IR files:

ie = IECore()
model_xml = 'C:\workspace\LenovoVideoInterp\OpenVino\Model\Subtitle\IR/1021\PCv0_Subtitle.xml'
model_bin = 'C:\workspace\LenovoVideoInterp\OpenVino\Model\Subtitle\IR/1021\PCv0_Subtitle.bin'
net = ie.read_network(model=model_xml, weights=model_bin)

The issue is reported as:
net = ie.read_network(model=model_xml, weights=model_bin)
File "ie_api.pyx", line 324, in openvino.inference_engine.ie_api.IECore.read_network
File "ie_api.pyx", line 346, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: Cannot create BackwardWarp layer backwardwarp0 id:76 from unsupported opset: extension

Is this a common issue? What should I do to resolve it?
Thanks!

@jgespino (Contributor):

Hi @DunguTmp

Glad to see you are making progress! Have you implemented a custom operation for the Inference Engine Plugin?

There are three steps to support inference of a model with custom operation(s):

  1. Add support for a custom operation in the Model Optimizer - which you have already done
  2. Create an operation set and implement a custom nGraph operation
  3. Implement a custom operation in one of the Inference Engine plugins

Please take a look at the Custom Operation Support Overview for additional information.

Regards,
Jesus

DwenGu commented Nov 22, 2021

Hi, @jgespino

Thanks for your reply. Recently I have read the part about creating an operation set and implementing a custom nGraph operation.
Here are some of my questions:

  1. Do I need to download the code from GitHub, then add the corresponding class according to the documentation, and then compile the source code?
  2. Which path should the various files corresponding to the document be added to?
    (screenshot from the guide)
    The guide only tells me to add a class, but I don't know which path to add this new class to.
    Is there a more detailed document that can help me add the new op correctly?

Thanks!

@jgespino (Contributor):

Hi @DunguTmp

> Do I need to download the code from GitHub, then add the corresponding class according to the documentation, and then compile the source code?

No, you should be able to add the custom layer using the Intel Distribution of OpenVINO toolkit without building from source.

> Which path should the various files corresponding to the document be added to?

We have a document with more detailed information; however, it's based on the 2020.2 release. I will reach out to my peers and find out if there is a more recent version.
https://www.intel.com/content/dam/support/us/en/documents/boardsandkits/steps-to-add-unsupported-layers.pdf

Regards,
Jesus

DwenGu commented Nov 24, 2021


Hi @jgespino,

Thanks for the new document.
Please notify me when you get the latest version of the document.

Regards,
DwenGu

wdkwyf (Contributor) commented Nov 24, 2021

Hi, @DunguTmp
I notice that you may have an optical flow network, and the only gap is the backward warp operation.
Why not move this node out of the network and treat it as a post-processing step? Backward warp is not a standard deep learning operation (you can see that "backward warp" appears only once, at the end of the network).
(screenshot of the network graph)

DwenGu commented Nov 24, 2021

Hi, @wdkwyf
Thanks for your reply.
The provided network structure is just an example. In our optical flow network, the backward warp operator appears in the middle of the network, and its result feeds into the next convolution layer.

Best regards,
DwenGu

wdkwyf (Contributor) commented Nov 30, 2021

@DunguTmp Understood. I think the only gap is the IE implementation. You can easily modify the template extension to insert the BackwardWarp operation. The only difficulty, I think, is the kernel file. I think the OpenCV warpAffine function is similar to the PyTorch grid_sample.
I uploaded the changed parts with the code and tested it successfully using the benchmark app with the -l parameter:
benchmark.exe -m PCv0_Subtitle.xml -l template_extension.dll

https://drive.google.com/drive/folders/13LxBUg6k_yfxh8c4ynKJSRpA9JBTbDhk?usp=sharing
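To sanity-check a custom kernel like this, a tiny NumPy reference can help. The sketch below assumes flow channel 0 is dx and channel 1 is dy, and uses nearest-neighbour sampling with border clamping for brevity (a real grid_sample-style kernel would typically interpolate bilinearly), so treat it as a hedged reference, not the uploaded implementation:

```python
import numpy as np

def backward_warp_nn(image, flow):
    """Nearest-neighbour backward warp reference:
    out[n, c, y, x] = image[n, c, round(y + dy), round(x + dx)],
    with sampling clamped to the image border.
    image: (N, C, H, W); flow: (N, 2, H, W), channel 0 = dx, 1 = dy
    (the channel convention is an assumption)."""
    n, c, h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    out = np.empty_like(image)
    for b in range(n):
        src_x = np.clip(np.rint(xs + flow[b, 0]).astype(int), 0, w - 1)
        src_y = np.clip(np.rint(ys + flow[b, 1]).astype(int), 0, h - 1)
        out[b] = image[b][:, src_y, src_x]
    return out

# Zero flow is the identity warp:
img = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)
assert np.array_equal(backward_warp_nn(img, np.zeros((2, 2, 4, 4))), img)
```

Feeding the same random inputs to this function and to the compiled extension (e.g. via the benchmark or a small inference script) gives a quick correctness check, up to the interpolation difference.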

dkurt (Contributor) commented Nov 30, 2021

Hi! We have an example implementation of nn.functional.grid_sample at https://github.com/dkurt/openvino_pytorch_layers

DwenGu commented Nov 30, 2021

Hi, @dkurt
Thanks. I have already noticed your work. Your code helps me a lot.

Best regards,
DwenGu

DwenGu commented Nov 30, 2021

Hi, @wdkwyf
Thanks for your work.

Best regards,
DwenGu

jgespino (Contributor) commented Dec 2, 2021

Hi @DunguTmp

I've checked with the team; unfortunately, there isn't an updated version available.

Regards,
Jesus

DwenGu commented Dec 7, 2021

Hi @wdkwyf, @jgespino and @dkurt,

Thanks for all of your work.

Now I have followed the guides and already implemented this backwardwarp operation. I tested it successfully using the benchmark app with the -l parameter: benchmark.exe -m backward_warp.xml -l user_extension.dll. Now I want to check the result of my extension operation on Windows. I use the following code:

from openvino_extensions import get_extensions_path
from openvino.inference_engine import IECore

import argparse
import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument('--num_inputs', type=int, default=1)
parser.add_argument('-d', '--device', default="CPU")
ie = IECore()
ie.add_extension(get_extensions_path(), 'CPU')

However, some errors are reported as follows:
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading Inference Engine
[ INFO ] GPU extensions is loaded C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\user_ie_extensions\gpu_extensions.xml
[ INFO ] InferenceEngine:
IE version ......... 2021.4.1
Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
[ INFO ] Device info:
[ ERROR ] Failed to create plugin C:\Users\gujun\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release\clDNNPlugin.dll for device GPU
Please, check your environment
invalid stoi argument

I don’t know if you have encountered the above problems. Is there a problem with my usage?

Best regards,
DwenGu

wdkwyf (Contributor) commented Dec 7, 2021

Is your kernel implementation based on CPU or GPU? Running benchmark.exe -m backward_warp.xml -l user_extension.dll successfully means the CPU plugin is alright, but your second snippet runs it on GPU.

DwenGu commented Dec 7, 2021

Hi, @wdkwyf
Sorry, I pasted the wrong log before.

The following is the log when I try to use ie.add_extension(get_extensions_path(), 'CPU'):

C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master>python compare.py
C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\openvino_extensions\user_cpu_extension.dll
Traceback (most recent call last):
  File "compare.py", line 25, in <module>
    ie.add_extension(get_extensions_path(), 'CPU')
  File "ie_api.pyx", line 511, in openvino.inference_engine.ie_api.IECore.add_extension
RuntimeError: Cannot load library 'C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\openvino_extensions\user_cpu_extension.dll': 126 from cwd: C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master

wdkwyf (Contributor) commented Dec 7, 2021

Maybe it's related to the Windows path. I used C:\\xxx\\xxx.dll and it worked well.

from openvino.inference_engine import IECore
ie = IECore()
# test CPU extension
ie.add_extension("C:\\xxx\\template_extension.dll", 'CPU')
net = ie.read_network("C:\\xxx.xml", "C:\\xxx.bin")
exec_net = ie.load_network(net, "CPU")
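The fix works because backslashes in ordinary Python string literals are escape characters, so a single-backslash Windows path can silently change meaning before it ever reaches the DLL loader. A self-contained demonstration (the paths are placeholders):

```python
# Doubled backslashes and raw strings denote the same Windows path:
plain = "C:\\xxx\\template_extension.dll"
raw = r"C:\xxx\template_extension.dll"
assert plain == raw

# With single backslashes, "\t" silently becomes a TAB character,
# so the loader is handed a path that no longer exists on disk:
broken = "C:\template_extension.dll"
assert "\t" in broken and "\\" not in broken
```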

DwenGu commented Dec 7, 2021

Thanks a lot. This method works for me.

Best regards,
DwenGu.

DwenGu commented Dec 13, 2021

Hi,

Recently I implemented the backwardwarp operation with OpenCL. Following the OpenVINO guide, I built backward_warp_extensions.xml. However, when I use the following command:

benchmark_app.exe -m C:\workspace\LenovoVideoInterp\SupportLayers-Original\backward_warp.xml  -niter 200 -c C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\user_ie_extensions\backward_warp_extensions.xml -d GPU

some errors occurred as follows:
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading Inference Engine
[ INFO ] GPU extensions is loaded C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\user_ie_extensions\backward_warp_extensions.xml
[ INFO ] InferenceEngine:
IE version ......... 2021.4.1
Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
[ INFO ] Device info:
[ ERROR ] Failed to create plugin C:\Users\gujun\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release\clDNNPlugin.dll for device GPU
Please, check your environment
invalid stoi argument

I don’t know if you have encountered the above problems. Is there a problem with my usage?
This link contains our specific implementation.

Best regards,
DwenGu.

wdkwyf (Contributor) commented Dec 13, 2021

Hi,
I suspect you didn't install the GPU plugin during installation; you can check whether this file exists.
(screenshot)

I also found an issue: it seems benchmark_app -d GPU can't be used to test the GPU kernel, because it can't receive the CPU dll and the GPU xml at the same time, and the OpenCL implementation only includes the cl kernel and xml file, with no nGraph operation (inside a dll).
So I use this test.py to test:

from openvino.inference_engine import IECore
ie = IECore()
# test CPU extension
# ie.add_extension("C:\\xxx\\template_extension.dll", 'CPU')
# net = ie.read_network("C:\\xxx.xml", "C:\\xxx.bin")
# exec_net = ie.load_network(net, "CPU")

# test GPU extension
ie.add_extension("C:\\xxx\\template_extension.dll", 'CPU')
ie.set_config({'CONFIG_FILE': "C:\\gpu_extensions.xml"}, 'GPU')
net = ie.read_network("C:\\xxx.xml", "C:\\xxx.bin")
exec_net = ie.load_network(net, "GPU")

DwenGu commented Dec 13, 2021

Hi, @wdkwyf
I have checked the clDNNPlugin files, which are installed in the right path.
(screenshot)

I use the code:

ie.add_extension("C:\\workspace\\LenovoVideoInterp\\openvino_pytorch_layers-master\\user_ie_extensions\\build\\Release\\user_cpu_extension.dll", 'CPU')
ie.set_config({'CONFIG_FILE': "C:\\workspace\\LenovoVideoInterp\\openvino_pytorch_layers-master\\user_ie_extensions\\gpu_extensions.xml"}, 'GPU')

model_path = 'C:\\workspace\\LenovoVideoInterp\\openvino_pytorch_layers-master\\grid_sample.xml'
weight_path = 'C:\\workspace\\LenovoVideoInterp\\openvino_pytorch_layers-master\\grid_sample.bin'
net = ie.read_network(model_path, weight_path)
print("Successfully Read the Model!")
net.reshape(shapes)
exec_net = ie.load_network(net, 'GPU')
print("Successfully Load the Model!")

out = exec_net.infer(inputs)
out = next(iter(out.values()))
print(out.shape)

The same error occurred:
exec_net = ie.load_network(net, 'GPU')

File "ie_api.pyx", line 372, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 390, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Failed to create plugin C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\bin\intel64\Release\clDNNPlugin.dll for device GPU
Please, check your environment
invalid stoi argument


I have checked my system PATH:

C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master>set path
Path=C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\ngraph\lib;C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\external\tbb\bin;C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\bin\intel64\Release;C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\bin\intel64\Debug;C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\external\hddl\bin;C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\external\omp\lib;C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\external\gna\lib;;C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer;C:\Program Files (x86)\Intel\openvino_2021\opencv\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files\TortoiseSVN\bin;C:\Program Files\PuTTY;C:\software\platform-tools_r31.0.0-windows\platform-tools;C:\Program Files\Git\cmd;C:\Users\gujun\AppData\Local\Programs\Python\Python36;C:\Users\gujun\AppData\Local\Programs\Python\Python36\Scripts;C:\Program Files\dotnet;C:\Program Files\CMake\bin;C:\Program Files (x86)\Intel\openvino_2021.4.689\opencv\bin;C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE;C:\Program Files (x86)\Intel\openvino_2021.4.689\inference_engine\bin;C:\Users\gujun\AppData\Local\Programs\Python\Python36\Scripts;C:\Users\gujun\AppData\Local\Programs\Python\Python36;C:\Users\gujun\AppData\Local\Continuum\Anaconda3;C:\Users\gujun\AppData\Local\Continuum\Anaconda3\Scripts;C:\Users\gujun\AppData\Local\Continuum\Anaconda3\Library\bin;C:\Users\gujun\AppData\Local\Microsoft\WindowsApps;C:\Users\gujun\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\gujun\AppData\Local\Sony\sGrabber64\Redist;C:\Users\gujun.dotnet\tools;c:\program files\esafenet\cobra docguard client


I don’t know if you have encountered the above problems. Is there a problem with my usage?

Best regards,
DwenGu


wdkwyf (Contributor) commented Dec 13, 2021

Oh, I think you didn't update the graphics driver: https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows.html#optional-steps-for-intel-processor-graphics-gpu
My version is: (screenshot)

@DwenGu
Copy link
Author

DwenGu commented Dec 13, 2021

> oh, I think you didn't update the graphic driver: https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_windows.html#optional-steps-for-intel-processor-graphics-gpu My version is: (screenshot)
Hi, @wdkwyf
I have already updated the graphics driver to 27.20.100.9466.
(screenshot)

The same error occurred:
Traceback (most recent call last):
File "compare_gridsample.py", line 48, in
exec_net = ie.load_network(net, 'GPU')
File "ie_api.pyx", line 372, in openvino.inference_engine.ie_api.IECore.load_network
File "ie_api.pyx", line 390, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Failed to create plugin C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\inference_engine\bin\intel64\Release\clDNNPlugin.dll for device GPU
Please, check your environment
invalid stoi argument

wdkwyf (Contributor) commented Dec 13, 2021

Hi, did you restart your computer after the driver installation?
"invalid stoi argument" is strange.
Also, you can use clinfo.exe to test your GPU environment; it's useful. Maybe some conflict with CUDA?
(screenshot)

https://github.com/Oblomov/clinfo

DwenGu commented Dec 13, 2021

Hi, @wdkwyf
1. I restarted my computer after the driver installation.
2. There is no dGPU in my laptop.
3. I have used clinfo.exe. From the clinfo log, I think there are no issues with my iGPU environment.

clinfo

wdkwyf (Contributor) commented Dec 13, 2021

Oh, I know your root cause: the version must be '1', not 'extension'.
(screenshot)
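For context: the clDNN GPU custom-kernel config documented by OpenVINO declares each kernel with an integer version attribute on CustomLayer, and a non-numeric value such as "extension" would plausibly explain the "invalid stoi argument" failure. A hedged fragment with placeholder kernel, entry, and file names (not the poster's actual config):

```xml
<!-- Placeholder names; the key detail is that version parses as an integer. -->
<CustomLayer name="BackwardWarp" type="SimpleGPU" version="1">
    <Kernel entry="backward_warp_kernel">
        <Source filename="backward_warp.cl"/>
    </Kernel>
    <Buffers>
        <Tensor arg-index="0" type="input"  port-index="0" format="BFYX"/>
        <Tensor arg-index="1" type="input"  port-index="1" format="BFYX"/>
        <Tensor arg-index="2" type="output" port-index="0" format="BFYX"/>
    </Buffers>
    <WorkSizes global="X,Y,B*F"/>
</CustomLayer>
```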

DwenGu commented Dec 13, 2021

Hi,
Thanks for your quick reply. This is really a small detail.
After changing the version to '1', I can run the code correctly.
There is another question, about runtime performance. Since benchmark_app.exe cannot run the extension files on the GPU, is there any other tool that can correctly show the average runtime? (Calling the Python-side API does not correctly reflect the time consumed by our extension op; it reports much smaller times than the benchmark_app.exe tool.)

wdkwyf (Contributor) commented Dec 13, 2021

Hi,
Maybe you can try benchmark_app.exe -l xxx.dll -c xxx.xml -d HETERO:CPU,GPU; maybe that's the intended usage for a GPU extension?
Or you can modify the code, for example by deleting the first if statement:
(screenshot)

DwenGu commented Dec 13, 2021

Hello, I tried the first method. From the log, it seems the extension operation is not running on the GPU device.
I don't understand the second method. In my opinion, the time measured from Python code does not truly reflect the real time of the extension operation, because Python is not as efficient as C++. I don't know if my idea is correct.

wdkwyf (Contributor) commented Dec 14, 2021

Oh, I was wrong. GPU should be placed first: -d HETERO:GPU,CPU. "GPU,CPU" points to the fallback policy, with priority on GPU and fallback to CPU.
In fact, the Python code is only the Python API; the time-consuming parts are still written in C++, such as clDNNPlugin.dll and MKLDNNPlugin.dll. That's why OpenVINO uses Cython and ie_api.pyx.

DwenGu commented Dec 14, 2021

Hi, @wdkwyf:
Thanks for your reply. After changing -d HETERO:CPU,GPU to -d HETERO:GPU,CPU, the time consumption of the backward_warp operation is shortened a lot.
Now I have converted a model that includes multiple backward warp operations to IR. When I use the following command:
benchmark_app.exe -l C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\user_ie_extensions\build\Release\user_cpu_extension.dll -c xx\backward_warp_extensions.xml -m xx\backwardwarp.xml -d HETERO:GPU,CPU -api sync -niter 100
Some errors are reported as:

[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading Inference Engine
[ INFO ] CPU (MKLDNN) extensions is loaded C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\user_ie_extensions\build\Release\user_cpu_extension.dll
[ INFO ] GPU extensions is loaded C:\workspace\LenovoVideoInterp\openvino_pytorch_layers-master\user_ie_extensions\backward_warp_extensions.xml
[ INFO ] InferenceEngine:
        IE version ......... 2021.4.1
        Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
[ INFO ] Device info:
        CPU
        MKLDNNPlugin version ......... 2021.4.1
        Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
        GPU
        clDNNPlugin version ......... 2021.4.1
        Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
        HETERO
        heteroPlugin version ......... 2021.4.1
        Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4

[Step 3/11] Setting device configuration
[Step 4/11] Reading network files
[ INFO ] Loading network files
[ INFO ] Read network took 8.11 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
Network inputs:
    input : FP32 / NCHW
    input1 : FP32 / NCHW
Network outputs:
    16/Split.2 : FP32 / NCHW
    30/Split.6 : FP32 / NCHW
    output : FP32 / NCHW
[Step 7/11] Loading the model to the device
[ ERROR ] Function contains several inputs and outputs with one friendly name!

When I change -d HETERO:GPU,CPU to -d HETERO:CPU,GPU or -d CPU, this error is not reported.

This is our model. This is our extension operator source code and dll file.

wdkwyf (Contributor) commented Dec 16, 2021

I think it's because some operations have two outputs. This is allowed in the master branch: #6844.
But I think you'd better modify the benchmark_app to allow -d GPU, instead of HETERO:GPU,CPU.

DwenGu commented Dec 16, 2021

Hi,
Thanks for your reply.
Is there any guide for modifying the benchmark_app?

BR,
DwenGu

wdkwyf (Contributor) commented Dec 17, 2021

Hi, you can check this PR; it's a very simple modification.
https://github.com/openvinotoolkit/openvino/pull/9254/files

@jgespino (Contributor):

Closing due to inactivity, please re-open if additional assistance is needed.
