Loading IR: Primitive descriptor was not found for node #157
Hello everyone,
I have converted my own TensorFlow frozen graph to the intermediate representation (.xml and .bin). Unfortunately, my code is unable to use it as an executable network, which means it fails at this command:
ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
The error message reads:
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
what(): Primitive descriptor was not found for node dense_1/MatMul.
I attached a zip containing my .xml, my .bin and my .pb file. I would be very thankful to receive support, as I have not been able to fix this issue for several days. Thanks!
PrimitiveDescriptorError.zip
Comments
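For context, here is a minimal sketch of the loading sequence described above, using the 2019-era Inference Engine C++ API. The file paths and the CPU device choice are illustrative assumptions, not details taken from the original post:

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    int main() {
        // Read the IR produced by the Model Optimizer (paths are hypothetical).
        CNNNetReader reader;
        reader.ReadNetwork("frozen_pb.xml");
        reader.ReadWeights("frozen_pb.bin");
        CNNNetwork network = reader.getNetwork();

        // Load the plugin for the target device (CPU assumed here).
        InferencePlugin plugin = PluginDispatcher().getPluginByDevice("CPU");

        // This is the call that throws "Primitive descriptor was not found".
        ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
        return 0;
    }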
Dear @el1995, there is an issue with MatMul. Sorry about the inconvenience. Could you kindly check out the following GitHub issue, 134? I reproduced that customer's bug and filed a bug ticket. Thanks! Shubha
Hello Shubha, I just realized that you are also the Intel expert here; we already discussed my issue here: I checked GitHub issue 134, however I get totally different error messages. But obviously both problems relate to MatMul. As we strongly depend on OpenVINO, I have two further questions for now:
Best regards and thank you, Elias
Edit: I am able to load the intermediate representation of Intel's alexnet_fp32.xml for classification. It also contains FullyConnected layers, and if I understand correctly the Model Optimizer converts TensorFlow's MatMul to FullyConnected. So it looks like the conversion is the problem? I tried the same with two other models; the error is exactly the same for all three. Please find attached zip files containing all three models, each with its .pb, .xml, .bin and .mapping. For model 3 the .pb is missing because the file was too large. Error for model 1: Error for model 2: Error for model 3: Model1.zip
Moreover, I should point out that we generate our frozen graphs from the Keras .h5 format. I also attach a typical .h5 file that we use.
Dear @el1995, thanks for your patience. I will post my findings here. Sincerely, Shubha
Hello Shubha, I hope they help; I tried plenty of them, so I thought it would be useful to provide them ;-) The task I want to do later on has nothing to do with classification; we are not even in the field of image recognition. I want to run a residual network that takes 3 input values (floats) and returns 15 output values (also floats); we use it within a fluid dynamics simulation. I only mentioned classification because I noticed that alexnet_fp32.xml has FullyConnected layers, and to my understanding MatMuls from a frozen graph should be converted to these. Generating an executable network worked in that case (the code I used can be found in ~/intel/openvino/inference_engine/samples/hello_autoresize_classification). Another example I tried (I may have mentioned it): I found a frozen graph called "googlenet-v3.frozen.pb" here: I will keep working on the problem and keep you updated. Greets, Elias
Dear @el1995,
When I do this, I get the following exception, which makes me suspect that your model has an issue:
File "C:\Users\sdramani\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\utils\generic_utils.py", line 165, in deserialize_keras_object
I didn't try your other models, because until I can convert a Keras model to a frozen pb first, it's pointless to do so. I am using the latest version of Keras, which is 2.2.4. I am wondering how you converted this model to a frozen pb. You must have succeeded, because you were able to generate IR. Looking forward to hearing your response. Thanks! Shubha
Hi Shubha, thanks for your reply. I am sorry, I forgot to mention our custom coeff_r2 function. I added our Python script (zipped, as GitHub did not accept .py) for conversion purposes, which was written by a colleague. The conversion is done using the command line: Please let me know if there is anything additional I should provide. Best regards, Elias
Edit: as I just generated a new .h5 file, I decided to also add it so that you have a comparison, but both files (the old one and this one) should be nearly identical.
Dear @el1995, Thanks,
Dear @el1995, thanks for your patience, and I'm truly sorry that it's taken me so long to get back to you. I was finally able to run your model and infer (using classification_sample.exe). Even with a version later than 2019 R1.1 I am getting an error, but it has nothing whatsoever to do with MatMul. The issue is that to build the IR I'm using the following command:
Note the input has only 2 dimensions, batch size and number of channels. What are the proper height and width required for your model input? Visualizing your Keras model in Netron doesn't give me clues either. The IR does get converted successfully, but I get an error when I run classification_sample.exe, an error which in fact makes perfect sense:
C:\Users\sdramani\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release>classification_sample.exe -i c:\users\sdramani\Downloads\pics\horse.bmp -m c:\users\sdramani\Downloads\github\frozen_pb.xml
[ INFO ] Loading network files:
What the error is telling you is that the --input_shape in the IR is just [1,3], which is not the layer shape that Inference Engine expects (NCHW). There is no way for Inference Engine to guess H or W; you must provide it somehow. I tried guessing a few HxW values based on the model image in Netron, but nothing seemed to work; Model Optimizer complained that the --input_shape passed in was invalid. Anyway, I hope this helps. Thanks for using OpenVINO! Shubha
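For reference, a Model Optimizer invocation that declares a two-dimensional [1,3] input generally looks like the following; the script path and model file name are illustrative assumptions, not the exact command used in this thread:

    python mo_tf.py --input_model frozen_pb.pb --input_shape [1,3]

With only two dimensions in --input_shape, the resulting IR input cannot be interpreted as NCHW, which is what the discussion above turns on.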
Hello Shubha, no problem, thanks for your support. As mentioned above, I do not want to do image recognition; instead we use the network for fluid dynamics. So, assuming for now that the batch size is 1, we have an input tensor of shape [1,3] and an output tensor of shape [1,15]. The 3 values passed to the network are physical quantities (e.g. the so-called progress variable that describes combustion processes), and the values obtained from the inference are also physical quantities (e.g. density of the fluid, temperature, concentration of CO2, ...). So for now I just want to get the code running with [1,3] as input shape and [1,15] as output shape. However, later on we will drastically increase the batch size. I am not getting your error because I set the layout to HW instead of NCHW (see the sketch at the end of this comment). To my understanding that is no problem, and the Inference Engine also accepts it without complaints. However, in line 75 of the attached file it crashes, during
ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
This is where I get the error I am talking about:
terminate called after throwing an instance of 'InferenceEngine::details::InferenceEngineException'
It would be great if you could try to run the inference with the code (and the CMake file) I attached and see whether you can get rid of the "primitive descriptor not found" bug. Another note: as you will see, I do not provide an input image or the hardware type (GPU, CPU, ...) when I run my code. The reasons are:
To run my code, I use the following command: Greets,
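Here is a minimal sketch of the input configuration being described, using the 2019-era Inference Engine C++ API; the variable names are illustrative and the code is reconstructed from the description above, not taken from the attachment:

    // The model takes a [1, 3] tensor; configure the (single) input accordingly.
    InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
    input_info->setPrecision(Precision::FP32);
    input_info->setLayout(Layout::HW);  // the layout choice described above

    // Crashes here with "Primitive descriptor was not found for node dense_1/MatMul".
    ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});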
Hi Shubha, have you already been able to reproduce my issue?
Dear @el1995, I ported your code over yesterday. Hopefully today I will have something for you. Thanks for your patience. Shubha
Dear @el1995, I think it's a bug, though I don't know why this should happen. I will file a bug on your behalf. It seems that someone else on the forum had an IDZ Forum issue similar to yours. I do believe it has to do with MatMul, as I told you earlier. Thanks, Shubha
Hello Shubha, great, thank you very much. Good to know you already found the underlying issue. As mentioned in my first response in this thread, the issue in the Intel forum was also reported by me; it is the identical issue:
I hope for a quick fix, as we would be very excited to finally use OpenVINO within our project. Best regards and thank you very much, Elias
Dear @el1995, Shubha
Dear @el1995, Thanks! Shubha
Hello Shubha, the error message remains unchanged:
Dear @el1995, Shubha
Dear @el1995, it looks like a coding bug. Please make the following change to your main.cpp: When I tested your code, this worked for me! Shubha
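The exact change is not quoted above. Based on the layout discussion earlier in the thread, a plausible candidate is declaring the [1, 3] input as NC rather than HW, so the CPU plugin can match a primitive for the FullyConnected (MatMul) node; this is an assumption reconstructed from context, not a confirmed quote of the actual diff:

    // Before (the layout described earlier in the thread):
    // input_info->setLayout(Layout::HW);

    // After: a [1, 3] input is batch x channels, i.e. layout NC.
    input_info->setLayout(Layout::NC);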
Worked perfectly, thank you very much! I'll close this and post a link to this thread in the second post in your Developer Zone thread: