This repository has been archived by the owner on Apr 1, 2021. It is now read-only.
No significant change in iters/sec while comparing cpu vs gpu performance #138
Comments
hemantranvir edited the title several times on Nov 1, 2019, settling on "No significant change in iters/sec while comparing cpu vs gpu performance".
I don't think the current integration supports CUDA yet, but we have something WIP. @ilia-cher
I have a local patch that adds support for CUDA; ETA is to send it next week.
@ilia-cher Thanks for your response!
Plan to send the CUDA support PR this week.
@ilia-cher Any updates? Thanks!
I have installed torch_tvm with CUDA/OpenCL support by enabling the following options in config.cmake:
https://github.com/dmlc/tvm/blob/master/cmake/config.cmake#L32
https://github.com/dmlc/tvm/blob/master/cmake/config.cmake#L129
https://github.com/dmlc/tvm/blob/master/cmake/config.cmake#L132
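For reference, the relevant switches in config.cmake look roughly like this (a sketch of the 2019-era TVM config file, matching the three linked lines; the exact defaults and surrounding comments differ in the real file):

```cmake
# Enable the CUDA backend plus the cuDNN and cuBLAS contrib libraries
set(USE_CUDA ON)
set(USE_CUDNN ON)
set(USE_CUBLAS ON)
```

Note that TVM must be rebuilt from scratch after changing these options for them to take effect.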
I am trying to compare CPU vs GPU performance by running the following test: https://github.com/pytorch/tvm/blob/master/test/benchmarks.py
Execution Log:
I then edited line 39 of benchmarks.py to:
torch_tvm.enable(opt_level=3, device_type='cuda')
Execution Log:
As seen above, there is no significant change in iters/sec:
CPU version: 62.80919757107452 iters/sec
GPU version: 64.52328684937197 iters/sec
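An iters/sec figure like the one above is typically produced by a timing loop. The helper below is a minimal sketch of that idea, not the actual code in test/benchmarks.py; `iters_per_sec` and its parameters are hypothetical names introduced here for illustration.

```python
import time

def iters_per_sec(fn, warmup=5, iters=50):
    """Hypothetical helper: run fn a few times to warm up (JIT/compile,
    caches), then time `iters` calls and report iterations per second."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    elapsed = time.perf_counter() - start
    return iters / elapsed
```

With a helper like this, comparable CPU and GPU numbers only differ if the benchmarked callable actually dispatches to different backends; identical rates (as seen here) suggest both runs took the same code path.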
If I check the GPU memory usage with the nvidia-smi command, the GPU is idle, as expected given these numbers.
Is there any other configuration necessary to enable the GPU backend? That is, apart from setting
set(USE_CUDA ON)
set(USE_CUDNN ON)
set(USE_CUBLAS ON)
in https://github.com/dmlc/tvm/blob/master/cmake/config.cmake, and setting
torch_tvm.enable(opt_level=3, device_type='cuda')
in https://github.com/pytorch/tvm/blob/master/test/benchmarks.py.
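One sanity check worth trying before blaming torch_tvm: confirm the installed TVM runtime itself can see a CUDA device. The sketch below uses `tvm.gpu(0).exist`, the context-availability check from TVM of that era; it is a diagnostic sketch under that assumption, and it degrades gracefully when TVM is not installed.

```python
# Hedged sketch: check whether the TVM runtime was built with CUDA support
# and can see a GPU. If this prints False, no torch_tvm setting will help;
# the TVM build itself needs USE_CUDA enabled.
try:
    import tvm
    has_cuda = bool(tvm.gpu(0).exist)
except Exception:
    has_cuda = False  # TVM not installed (or import failed) in this environment

print("CUDA device visible to TVM:", has_cuda)
```

If this prints True but the benchmark still leaves the GPU idle, the problem is more likely in how torch_tvm selects the device than in the TVM build.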