Is MatMul still not supported? #134
Comments
Dear @konradNowicki Thanks, Shubha
Hi, I managed to boil the problem down to just a few lines, and I have added some code for easy running and debugging.

```python
import os
import tensorflow as tf
from tensorflow.layers import Dense
from tensorflow.python.framework import graph_io
import numpy as np
import subprocess
slim = tf.contrib.slim

def graph_and_freeze():
    def the_graph(in1, in2):  # in1 = shape(?, 5, 64), in2 = shape(?, 5, 1)
        """Graph definition (simplified)."""
        net = Dense(5, activation=None, use_bias=False)(in1)          # shape(?, 5, 5)
        net = net + in2                                               # shape(?, 5, 5)
        net = Dense(1, activation=None, use_bias=False)(net)          # shape(?, 5, 1)
        net = tf.squeeze(net, axis=2)                                 # shape(?, 5)
        net = tf.nn.softmax(net, axis=-1)                             # shape(?, 5)
        out1 = tf.reshape(net, [-1, 1, net.shape[-1]], name="out1")   # shape(?, 1, 5)
        out2 = tf.matmul(out1, in1)                                   # shape(?, 1, 64) (THIS OPERATION IS MISSING IN THE XML)
        out2 = tf.reshape(out2, [-1, out2.shape[-1]], name="out2")    # shape(?, 64)
        return out1, out2

    in1 = tf.placeholder(tf.float32, (None, 5, 64), "in1")
    in2 = tf.placeholder(tf.float32, (None, 5, 1), "in2")
    out1, out2 = the_graph(in1, in2)
    with tf.Session() as sess:
        print(sess.graph_def)
        tf.global_variables_initializer().run()
        batch_size = 10
        sess.run([out1, out2], feed_dict={
            in1: np.random.rand(batch_size, *in1.get_shape().as_list()[1:]),
            in2: np.random.rand(batch_size, *in2.get_shape().as_list()[1:]),
        })
        frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['out1', 'out2'])
        graph_io.write_graph(frozen, './', 'test.frozen.pb', as_text=False)

def translate():
    # use .get() so the None check below actually works (os.environ[...] would raise KeyError)
    open_vino = os.environ.get('INTEL_OPENVINO_DIR')
    if open_vino is None:
        print("\n\nInitialize the INTEL_OPENVINO environment (setupvars.sh)\n\n\n")
        raise RuntimeError("No INTEL_OPENVINO_DIR env. variable")
    subprocess.call([
        "python3", open_vino + "/deployment_tools/model_optimizer/mo_tf.py",
        # "--log_level=DEBUG",
        "--input_model", "./test.frozen.pb",
        "--input", "in1,in2",
        "--input_shape", "[1,5,64],[1,5,1]",
        "--data_type", "FP16",
        "-o", "./"])

def run_on_vpu():
    # source $INTEL_OPENVINO_DIR/bin/setupvars.sh first
    from openvino.inference_engine import IENetwork, IEPlugin
    plugin = IEPlugin(device="MYRIAD", plugin_dirs=None)
    plugin.set_config({
        'LOG_LEVEL': 'LOG_DEBUG',
        'VPU_PLATFORM': 'VPU_2480'
    })
    net = IENetwork(
        model="./test.frozen.xml",
        weights="./test.frozen.bin"
    )
    exec_net = plugin.load(network=net, config={
        "VPU_LOG_LEVEL": "LOG_DEBUG",
        'VPU_FORCE_RESET': "NO"
    })
    # infer ...
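    # A possible completion (an assumption, not part of the original post;
    # based on the 2019.x Inference Engine Python API, where plugin.load()
    # returns an ExecutableNetwork):
    # res = exec_net.infer(inputs={
    #     'in1': np.random.rand(1, 5, 64).astype(np.float32),
    #     'in2': np.random.rand(1, 5, 1).astype(np.float32)})
    # print(res['out2'])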

if __name__ == '__main__':
    graph_and_freeze()
    translate()
    run_on_vpu()
```

And the error I receive is:
Dear @konradNowicki Here is more information on the FullyConnected layer: In order to diagnose your issue further, I would need 1) your model, 2) the mo command you used to generate the IR, and 3) a simple, short version of your inference script (which you gave above). Thanks, Shubha
Dear @konradNowicki So I believe this must be a bug in the Myriad Plugin. Thanks for your code above. I will reproduce it and file a bug against the Myriad Plugin. Sorry for the trouble and thanks for your patience! Shubha
Dear @shubha-ramani MatMul is supported, but the Model Optimizer might have a bug, because I accidentally found that if I remove the last reshape then the XML is generated.
Edit:
Thanks
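A minimal sketch of that workaround, assuming it amounts to freezing the graph at the matmul output and applying the final reshape on the host (names reused from the snippet above; not a confirmed fix):

```python
def the_graph_no_final_reshape(in1, in2):
    """Same graph as above, but with no reshape after the matmul."""
    net = Dense(5, activation=None, use_bias=False)(in1)
    net = net + in2
    net = Dense(1, activation=None, use_bias=False)(net)
    net = tf.squeeze(net, axis=2)
    net = tf.nn.softmax(net, axis=-1)
    out1 = tf.reshape(net, [-1, 1, net.shape[-1]], name="out1")
    out2 = tf.matmul(out1, in1, name="out2")  # shape (?, 1, 64); no trailing reshape
    return out1, out2

# After inference, recover the (?, 64) shape on the host instead:
# result = np.squeeze(result_from_device, axis=1)
```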
Dearest @konradNowicki, Thanks, Shubha
I have changed the device to CPU and the error is the same.
Dear @konradNowicki Shubha
Dear @konradNowicki Stay tuned, and thanks for using OpenVINO! Shubha
@shubha-ramani @konradNowicki Hello, I have the same issue:

```
[ ERROR ] Error reading network: in Layer detector/yolo-v3-tiny/pool2/MaxPool: trying to connect an edge to non existing output port: 2.1
1 3 416 416
1 3 416 416
1 16 416 416
1 16 416 416
1 16 208 208
```
Hello @shubha-ramani, I'm running into a similar issue; any insights would be appreciated. I'm trying to load the BERT extract-features model but am getting the same error.
This is how I'm running the Model Optimizer command with a saved_model folder:
Here are the .xml and .mapping files created by mo.py. Then I am just trying to load them with:
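For reference, since the exact command is not captured above, a saved_model conversion call might look like the following sketch, mirroring `translate()` from earlier in the thread; the model path is hypothetical:

```python
import os
import subprocess

# Hypothetical sketch of a saved_model conversion; the actual command and
# paths used above were not captured in this thread.
subprocess.call([
    "python3", os.environ["INTEL_OPENVINO_DIR"] + "/deployment_tools/model_optimizer/mo_tf.py",
    "--saved_model_dir", "./bert_saved_model",  # hypothetical path
    "-o", "./"])
```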
The issue has been fixed. Please verify with the latest OpenVINO from GitHub.
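One way to verify, as a sketch: parse the generated IR with the standard library and confirm the matmul layer is now present (file name taken from the snippet earlier in this thread):

```python
import xml.etree.ElementTree as ET

# List the layer types in the generated IR; the matmul should now appear
# as a layer instead of being silently dropped.
root = ET.parse('./test.frozen.xml').getroot()
print([layer.get('type') for layer in root.iter('layer')])
```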
Model Optimizer version: 2019.1.0-341-gc9b66a2

I have got a model that uses `tf.matmul`. In the previous version, `mo_tf.py` returned the error `MatMul cannot be converted to IE IR`, but in the new version it converts to IR successfully: `[ SUCCESS ] Generated IR model.` However, during loading time in IENetwork I get:

```
RuntimeError: Error reading network: in Layer X: trying to connect an edge to nonexisting output port: 98.2
```

Indeed, in the XML file there is no layer 98 at all: the `matmul` operation is missing. It appears the converter removed the unsupported operation without warning. After a search, I found the supported-operations page, but I do not understand how I can replace `matmul` with a Dense (FullyConnected) layer.

How can I enable `tf.matmul` in my graph? Can I just edit the XML? Are there examples with an explanation? Maybe IR docs?
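A note on the Dense/FullyConnected question above: a FullyConnected layer multiplies by a constant weight matrix, while the `tf.matmul` in this model multiplies two activation tensors, so the two are only interchangeable when one operand is a constant. A minimal sketch of the distinction, assuming the same TF 1.x API as the snippet above (shapes hypothetical):

```python
import tensorflow as tf
from tensorflow.layers import Dense

x = tf.placeholder(tf.float32, (None, 64))
w = tf.get_variable("w", shape=(64, 5))

# A Dense/FullyConnected layer computes x @ W for a *learned constant* W ...
y_dense = Dense(5, use_bias=False)(x)

# ... which matches tf.matmul only when one operand is such a constant:
y_matmul = tf.matmul(x, w)

# The matmul in the model above (out1 @ in1) multiplies two activation
# tensors, so it has no direct FullyConnected equivalent.
```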