Why RecursionLimit = 10 in OnnxTransformer? #5585

Closed
s-tory opened this issue Jan 13, 2021 · 3 comments · Fixed by #5796
Labels
onnx (Exporting ONNX models or loading ONNX models) · P2 (Priority of the issue for triage purposes: needs to be fixed at some point) · usability (Smoothing user interaction or experience)

Comments

@s-tory

s-tory commented Jan 13, 2021

What is the purpose of setting Google.Protobuf.CodedInputStream.RecursionLimit to 10 in the following code?

using (var codedStream = Google.Protobuf.CodedInputStream.CreateWithLimits(modelStream, Int32.MaxValue, 10))


It is set to 100 by default in the protocol buffers C# wrapper.

https://github.com/protocolbuffers/protobuf/blob/10599e6c8dde8a9875258e03054a696d53cadebd/csharp/src/Google.Protobuf/CodedInputStream.cs#L83

        internal const int DefaultRecursionLimit = 100;

Without a monkey-patched Microsoft.ML.OnnxTransformer.dll, I could not load some networks (.onnx files) because of the following exception, which is caused by RecursionLimit = 10.

Google.Protobuf.InvalidProtocolBufferException: Protocol message had too many levels of nesting.  May be malicious.  Use CodedInputStream.SetRecursionLimit() to increase the depth limit.

I think the affected networks are especially those that include Inception constructions
(e.g. a converted Faster-RCNN-Inception-V2 from tensorflow/models).
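
For context, a minimal sketch of what parsing with the Protobuf default limit would look like (this is not the ML.NET source; Onnx.ModelProto is only a placeholder for whichever generated ONNX ModelProto message class is in scope):

    using System;
    using System.IO;
    using Google.Protobuf;

    static class OnnxModelLoading
    {
        // "Onnx.ModelProto" is an assumed placeholder for the generated ONNX
        // protobuf message type available in your project.
        public static Onnx.ModelProto Load(Stream modelStream)
        {
            // CreateWithLimits(input, sizeLimit, recursionLimit):
            // keep the size limit effectively unbounded, but use Protobuf's
            // default recursion limit of 100 instead of 10, so deeply nested
            // graphs (e.g. Inception-style blocks) can still be parsed.
            using (var codedStream = CodedInputStream.CreateWithLimits(
                modelStream, Int32.MaxValue, 100))
            {
                return Onnx.ModelProto.Parser.ParseFrom(codedStream);
            }
        }
    }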

Thank you for coding great tools!

@s-tory changed the title from "Why RecursionLimit = 10 ?" to "Why RecursionLimit = 10 in OnnxTransformer?" Jan 13, 2021
@michaelgsharp
Member

Thanks for bringing this up. Let me discuss this with my team and see if we can figure out why we were setting this to 10.

@michaelgsharp added the onnx, P2, and usability labels Jan 13, 2021
@s-tory
Author

s-tory commented Mar 5, 2021

@michaelgsharp
Did you figure out anything?

@darth-vader-lg
Contributor

I'm having exactly the same problem with one of my EfficientDet-D1 custom object detection models built with TensorFlow 2.4.1 (the same model built as SSD MobileNet V2 doesn't have any problem).
For now I exported it as a frozen_graph.pb and everything is OK.
Frozen, because with the saved_model format there is another issue: I can use it for inference without any problem, but it cannot be saved afterwards as a standard ML.NET zip repository. At save time the whole pipeline is correctly written into the zip, but the saved_model file is missing...

darth-vader-lg added a commit to darth-vader-lg/protobuf that referenced this issue May 12, 2021
michaelgsharp added a commit that referenced this issue May 27, 2021
…#5796)

* Raised the recursion limit used when creating the CodedInputStream in the OnnxTransformer (to the default value in Google.Protobuf). Otherwise some models cannot be loaded (e.g. TF2 EfficientDet).

* Updated arcade to the latest version (#5783)

* updated arcade to the latest version

* updated eng/common correctly

* Fixed benchmark test.

* Use dotnet certificate (#5794)

* Use dotnet certificate

* Update 3.1 SDK

Co-authored-by: Prashanth Govindarajan <[email protected]>
Co-authored-by: Michael Sharp <[email protected]>

* Arm build changes (#5789)

* arm testing

* initial commit with build working on arm64

* windows changes

* build fixes for arm/arm64 with cross compilation

* cross build instructions added

* renamed arm to Arm. Changed TargetArchitecture to default to OS architecture

* fixed some formatting

* fixed capitalization

* fixed Arm capitalization

* Fix cross-compilation if statement

* building on apple silicon

* removed non build related files

* Changes from PR comments. Removal of FastTreeNative flag.

* Changes from pr comments.

* Fixes from PR comments.

* Changed how we are excluding files.

* Onnx load model (#5782)

* fixed onnx temp model deleting

* random file path fixed

* updates from pr

* Changes from PR comments.

* Changed how auto ml caches.

* PR fixes.

* Update src/Microsoft.ML.AutoML/API/ExperimentSettings.cs

Co-authored-by: Eric Erhardt <[email protected]>

* Tensorflow fixes from PR comments

* fixed filepath issues

Co-authored-by: Eric Erhardt <[email protected]>

Co-authored-by: Michael Sharp <[email protected]>
Co-authored-by: Matt Mitchell <[email protected]>
Co-authored-by: Prashanth Govindarajan <[email protected]>
Co-authored-by: Eric Erhardt <[email protected]>
@ghost ghost locked as resolved and limited conversation to collaborators Mar 17, 2022