Add a batch dimension #39

Closed
makaveli10 opened this issue Jan 17, 2021 · 4 comments · Fixed by #193
Labels: enhancement (New feature or request), question (Further information is requested)

Comments

@makaveli10

I was trying to add a batch dimension to the ONNX model and run inference on multiple images concurrently. While doing that I ran into this issue:

torch.onnx.export(
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/onnx/__init__.py", line 225, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/onnx/utils.py", line 85, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/onnx/utils.py", line 632, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/onnx/utils.py", line 409, in _model_to_graph
    graph, params, torch_out = _create_jit_graph(model, args,
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/onnx/utils.py", line 379, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/onnx/utils.py", line 342, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/jit/_trace.py", line 1148, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/jit/_trace.py", line 125, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/jit/_trace.py", line 116, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/gogetter/workspace-vineet/yolov5-rt-stack/models/yolo.py", line 132, in forward
    images, targets = self.transform(images, targets)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/gogetter/anaconda3/envs/yolov5_v31/lib/python3.8/site-packages/torchvision/models/detection/transform.py", line 102, in forward
    raise ValueError("images is expected to be a list of 3d tensors "
ValueError: images is expected to be a list of 3d tensors of shape [C, H, W], got torch.Size([2, 3, 1536, 2688])

I have previously added a batch dimension to ONNX models by simply expanding the dimension of the input, but that doesn't work in this case. Have you run into this issue, and do you have any pointers for me?
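
To illustrate what I mean by expanding the input dimension, here is a rough sketch of the export I used for other models (a toy model with a plain stacked NCHW input, not this repo's list-of-images interface):

import torch
from torch import nn

# Toy stand-in for a model that consumes a stacked NCHW tensor.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

dummy_input = torch.randn(1, 3, 640, 640)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["images"],
    output_names=["outputs"],
    # Mark the batch axis as dynamic so the exported graph accepts any batch size.
    dynamic_axes={"images": {0: "batch"}, "outputs": {0: "batch"}},
    opset_version=11,
)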

@makaveli10 added the enhancement label Jan 17, 2021
@zhiqwang (Owner) commented Jan 18, 2021

Hi @makaveli10 , I've provided an ONNX batch inference tutorial in this notebook, could you try that first?

Actually, the model exported as shown in the tutorial supports batch inference; I use the same batch inference trick as torchvision.

The only thing you need to do is put the multiple image inputs into a list and then run:

import onnxruntime

ort_session = onnxruntime.InferenceSession(export_onnx_name)
# compute onnxruntime output prediction
ort_inputs = dict((ort_session.get_inputs()[i].name, inpt) for i, inpt in enumerate(inputs))
ort_outs = ort_session.run(None, ort_inputs)
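
For reference, a rough sketch of how the inputs list above might be prepared, assuming the exported model expects float32 [C, H, W] arrays in [0, 1] (the file names are placeholders; the exact preprocessing is shown in the notebook):

import numpy as np
from PIL import Image

def to_chw_float32(path):
    # Load an image and convert it to a float32 [C, H, W] array in [0, 1].
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return np.transpose(img, (2, 0, 1))

# One array per image; the exported graph has one named input per image.
inputs = [to_chw_float32(p) for p in ("bus.jpg", "zidane.jpg")]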

BTW, the master branch is not stable right now, so make sure you are using the release/0.2.0 branch.

@makaveli10 (Author)

@zhiqwang thanks! I tried the notebook, and yes, I also tried the list-input approach. What happens is that when I create an ONNX model with an input list of length 4, it doesn't work if I only have 2 inputs; I have to give it some dummy inputs to pad the list to 4 images, and that makes inference take longer.

For 1 image the time is 0.06 s; for 4 images it's 0.24 s, so I don't see a speed-up when using multiple images in a list. Correct me if I am wrong, or if you got different results.
Thanks

@zhiqwang (Owner) commented Jan 18, 2021

Hi @makaveli10 ,

I tried the notebook, and yes, I also tried the list-input approach. What happens is that when I create an ONNX model with an input list of length 4, it doesn't work if I only have 2 inputs; I have to give it some dummy inputs to pad the list to 4 images, and that makes inference take longer.

Yep, I run into the same problem as you: when I export an ONNX model with an input list of length 4, the API asks for 4 full images, as in my notebooks.

ValueError: Model requires 4 inputs. Input Feed contains 2
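
The dummy-input workaround you describe would look roughly like this, reusing export_onnx_name and inputs from the snippet above (the dummy shape is only an example and should match your preprocessing):

import numpy as np
import onnxruntime

ort_session = onnxruntime.InferenceSession(export_onnx_name)
required = len(ort_session.get_inputs())  # 4 if the model was exported with a list of 4 images

# Pad the real inputs with zero-filled dummy images so the feed matches the graph.
dummy = np.zeros((3, 640, 640), dtype=np.float32)
padded = inputs + [dummy] * (required - len(inputs))

ort_inputs = dict((ort_session.get_inputs()[i].name, inpt) for i, inpt in enumerate(padded))
ort_outs = ort_session.run(None, ort_inputs)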

A quick search in https://github.com/microsoft/onnxruntime didn't turn up a good solution :( If you have a good solution, a PR or proposal is welcome :) and I'll look into how to solve this problem later.

Maybe a naive solution is that we could export multiple ONNX models supporting various lengths, such as length 1, 2, 3 ..., and then pick the ONNX model depending on the length of the inputs?
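
A rough sketch of that idea, assuming one exported file per supported length (the file names are hypothetical):

import onnxruntime

# One exported graph per supported number of images (hypothetical file names).
sessions = {n: onnxruntime.InferenceSession(f"yolort_batch_{n}.onnx") for n in (1, 2, 4)}

def run_batch(inputs):
    # Pick the session whose number of graph inputs matches the number of images.
    session = sessions[len(inputs)]
    feed = dict((session.get_inputs()[i].name, inpt) for i, inpt in enumerate(inputs))
    return session.run(None, feed)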

For 1 image the time is 0.06 s; for 4 images it's 0.24 s, so I don't see a speed-up when using multiple images in a list. Correct me if I am wrong, or if you got different results.

The _onnx_batch_images is a little outdated for historical reasons, as mentioned in pytorch/vision#3225 (comment); once that upstream is updated, I think it will help speed up the current batch inference.
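
For context, the torchvision trick is essentially padding every image to the largest height and width in the batch and stacking them into one tensor; a simplified sketch (not the exact torchvision code):

import torch

def batch_images(images, size_divisible=32):
    # Pad every [C, H, W] tensor to the max H/W in the batch, rounded up to a
    # multiple of size_divisible, then stack into a single [N, C, H, W] tensor.
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    max_h = ((max_h + size_divisible - 1) // size_divisible) * size_divisible
    max_w = ((max_w + size_divisible - 1) // size_divisible) * size_divisible

    batched = images[0].new_zeros((len(images), images[0].shape[0], max_h, max_w))
    for img, pad in zip(images, batched):
        pad[:, : img.shape[1], : img.shape[2]].copy_(img)
    return batched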

@zhiqwang added the question label Jan 19, 2021
@zhiqwang (Owner) commented Jan 28, 2021

Hi, @makaveli10

In my current limited understanding, I don't think this is a bug, and as such I'm closing this issue.

If you have a more flexible method to support dynamic batch inference, feel free to open a new issue, and let us know if you have further questions.
