How can I use batch to accelerate? #60
Comments
The performance boost from batching comes from parallelizing a larger amount of work more efficiently than a smaller amount. In the usual case, increasing the batch size simply lets you load the compute capacity more efficiently, i.e. throughput increases (while latency also increases with batch size; it is important to understand this trade-off of batch mode). 129 ms means the network is quite heavy, and probably even with the larger picture in your scenario all cores are already loaded in the most efficient way. The unchanged throughput suggests that parallelization over the spatial dimensions has the same efficiency as parallelization over the batch dimension.
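The trade-off described above can be checked with back-of-the-envelope arithmetic. The numbers below come from the question (129 ms for one 1280x720 frame, and about 129 ms for a batch of four 640x360 tiles); this is a sketch of the reasoning, not a measurement:

```python
def pixels_per_second(n_items: int, width: int, height: int, latency_s: float) -> float:
    """Throughput in processed pixels per second for one inference request."""
    return n_items * width * height / latency_s

full = pixels_per_second(1, 1280, 720, 0.129)   # one large frame
tiles = pixels_per_second(4, 640, 360, 0.129)   # batch of 4 quarter-size tiles

# Same total pixels, same wall time -> identical throughput: the device
# was already saturated by the single large input, so batching the tiles
# cannot add speed. Batching helps only when a single small input leaves
# the device underutilized, i.e. when a 4x batch finishes in less than
# 4x the single-input latency.
print(full == tiles)  # True
```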
@jlygit, could you please share which model is used? Is it from https://github.com/opencv/open_model_zoo?
@dkurt Sorry, I can't share it since we used our own model.
@jlygit, is it an object detection model? If so, try running it on a smaller input (use the network's reshape to fit the shapes beforehand), then look for the resolution that still gives satisfactory accuracy.
How can I use batch in the Python demo? Please help me. Thanks!
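As a starting point, the batched input itself can be prepared with plain NumPy. This is a hedged sketch (no OpenVINO calls shown, since the exact inference API depends on your OpenVINO version): it tiles the 1280x720 frame from the question into a batch of four 640x360 patches in NCHW layout, the input shape a batch-4 network would typically expect.

```python
import numpy as np

def tile_to_batch(frame_hwc: np.ndarray) -> np.ndarray:
    """Split an HxWxC frame into 4 half-size tiles and return an NCHW batch."""
    h, w, _ = frame_hwc.shape
    tiles = [
        frame_hwc[:h // 2, :w // 2],  # top-left
        frame_hwc[:h // 2, w // 2:],  # top-right
        frame_hwc[h // 2:, :w // 2],  # bottom-left
        frame_hwc[h // 2:, w // 2:],  # bottom-right
    ]
    batch = np.stack(tiles)             # (4, H/2, W/2, C)
    return batch.transpose(0, 3, 1, 2)  # NHWC -> NCHW

frame = np.zeros((720, 1280, 3), dtype=np.float32)
print(tile_to_batch(frame).shape)  # (4, 3, 360, 640)
```

Note that the network must be reshaped (or exported) with batch size 4 before such a tensor can be fed to it; as discussed above, this only pays off if the device is not already saturated by a single input.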
Hello!
I infer one image with a resolution of 1280x720 using OpenVINO, and the inference time is 129 ms.
I tried cropping this image into four patches of 640x360 and inferring them as a batch of four, and the inference time is still about 129 ms.
Why has batching not accelerated it? How can I use batch to accelerate, or is there another way?
Thanks,
jly