TVMError: src/runtime/cuda/cuda_module.cc:93: CUDAError: cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX #1027
Comments
@tqchen already commented in #315 (comment):
But it's still not really clear to me where to look further.
Are you using a custom schedule for your model? Usually this is caused by a schedule not being able to handle a specific input shape, e.g., the input shape causes too much local memory to be used or too many threads per block to be allocated.
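To make the "too many threads per block" failure mode concrete, here is a minimal pure-Python sketch (the 1024 limit is an assumption: it is the common maximum threads per block on NVIDIA GPUs, but the exact value depends on the device). A schedule that sizes its thread block from the input shape can silently exceed this limit for uncommon shapes:

```python
# Illustrative check only; names here are hypothetical, not TVM API.
MAX_THREADS_PER_BLOCK = 1024  # common limit on NVIDIA GPUs (device-dependent)

def block_fits(threads_x, threads_y, threads_z=1):
    """Return True if a (tx, ty, tz) thread-block shape stays within the limit."""
    return threads_x * threads_y * threads_z <= MAX_THREADS_PER_BLOCK

# A 224x224 input tiled into 16x16 thread blocks fits comfortably:
print(block_fits(16, 16))   # True (256 threads)
# A schedule that naively maps a whole 50x50 tile onto one block does not:
print(block_fits(50, 50))   # False (2500 threads)
```

When the generated kernel violates such a limit, the failure often only surfaces later, at module load or launch time, which is consistent with the `CUDA_ERROR_INVALID_PTX` reported above.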
Hi, my code is exactly the same as here; the only difference is that the model is my own. Its input is
Ok, then the issue is likely due to an operator not handling one or more of the shapes in your model correctly. One way to verify this is to temporarily try more common shapes, e.g., those in ResNet-18 such as (224x224x3), and see if that works.
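The suggestion above amounts to bisecting over input shapes. A small sketch of that workflow, where `compile_model` is a hypothetical stand-in for whatever build call the reader uses (it is not a TVM function):

```python
# Shapes known to work in common models (NCHW layout assumed):
COMMON_SHAPES = [
    (1, 3, 224, 224),   # ResNet-18/50 default input
    (1, 3, 299, 299),   # Inception-v3 default input
]

def find_failing_shapes(compile_model, shapes):
    """Run the same build step over several shapes; return those that raise."""
    failing = []
    for shape in shapes:
        try:
            compile_model(shape)
        except Exception:
            failing.append(shape)
    return failing

# Demo with a fake compiler that only accepts 224x224 inputs:
def fake_compile(shape):
    if shape[-1] != 224:
        raise ValueError(f"unsupported shape {shape}")

print(find_failing_shapes(fake_compile, COMMON_SHAPES))  # [(1, 3, 299, 299)]
```

If the common shapes build and run fine while the model's own shape fails, the problem is almost certainly a shape-specific schedule, not the model itself.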
Ok, thanks for the tip, I'll try it
Actually, following the original example with the Keras ResNet-50 model, I got the error even earlier, on
but it's probably more about
This is confusing given what the error is complaining about. Can you verify that the target is the CUDA GPU?
The built-in example fails before the
BTW, with a CPU context and my own model, I got this trace:
The error for passing in a CPU context is correct, because we expect a GPU. Try the latest version in master; it might throw an error telling you which graph causes the problem.
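The mismatch described here is worth spelling out: a module compiled for the `cuda` target must be run with a GPU context, and passing a CPU context produces a (correct) runtime error. A minimal sketch of that consistency check, with hypothetical names (this is not the TVM API, just the logic being described):

```python
# Which device class each compile target expects (illustrative subset):
TARGET_TO_DEVICE = {"cuda": "gpu", "llvm": "cpu"}

def check_context(target, device):
    """Raise if the runtime device does not match the compile target."""
    expected = TARGET_TO_DEVICE[target]
    if device != expected:
        raise ValueError(
            f"module built for {target!r} expects a {expected} context, "
            f"got {device!r}"
        )

check_context("cuda", "gpu")    # consistent: no error
# check_context("cuda", "cpu")  # would raise, matching the trace above
```

In TVM terms this corresponds to building with `target="cuda"` and then running with a GPU context rather than a CPU one.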
Hi, following your suggestion, I rebuilt the latest
for my own model and
for the Keras ResNet-50 example.
Closing, since we are not able to act further on this; discussion has moved to https://discuss.tvmlang.org/
I get the exact same issue:

```
jetson@jetson:~/fast-depth/deploy$ python3 tx2_run_tvm.py --input-fp data/rgb.npy --output-fp data/pred.npy --model-dir ../results/tvm_compiled/tx2_gpu_mobilenet_nnconv5dw_skipadd_pruned/ --cuda True
  File "tx2_run_tvm.py", line 91, in <module>
  File "tx2_run_tvm.py", line 88, in main
  File "tx2_run_tvm.py", line 36, in run_model
  File "/home/jetson/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
tvm._ffi.base.TVMError: Traceback (most recent call last):
```

Still haven't found a solution to it. I am running it on a Jetson Nano. Please help.
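On Jetson boards, one common cause of `CUDA_ERROR_INVALID_PTX` at `cuModuleLoadData` is an architecture mismatch: the module was cross-compiled for a different compute capability than the board's GPU. The compute capabilities below are factual for these boards; the `"cuda -arch=sm_XX"` target-string form follows TVM's cuda target option, but treat the exact syntax for your TVM version as an assumption to verify:

```python
# Compute capability per Jetson board (factual):
JETSON_ARCH = {
    "jetson-nano": "sm_53",    # Maxwell, compute capability 5.3
    "jetson-tx2": "sm_62",     # Pascal, compute capability 6.2
    "jetson-xavier": "sm_72",  # Volta, compute capability 7.2
}

def cuda_target_for(board):
    """Build a TVM-style cuda target string for a known Jetson board."""
    return "cuda -arch=%s" % JETSON_ARCH[board]

print(cuda_target_for("jetson-nano"))  # cuda -arch=sm_53
```

A model compiled with the TX2 arch (as the `tx2_gpu_...` directory name suggests) and loaded on a Nano would produce exactly this kind of PTX load failure; recompiling for the Nano's architecture is worth trying.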
Did you find a solution? I have the exact same issue and don't know how to fix it; could you help me?
Hi everyone.
I got this error reproducing the toy example from
nnvm
but with my own model. Calling it, I get an error similar to #315 (comment):
Can you clarify what might be wrong?
Thanks in advance!
BTW, I'm a bit confused by the
tvm.gpu()
docstring 😃