Did you forget to bind? #274
Comments
Getting the same error.
Getting the same error with CUDA 12.1 and ROCm 5.7 in the Web Stable Diffusion notebook demo. Using the LLVM target and a CPU device I get a different error.
Using mlc-ai-nightly-cu121 and mlc-ai-nightly-rocm57 TVM, relax.build fails. EDIT: The same behaviour happens on Google Colab (Tesla T4 GPU with CUDA 12.1, mlc-ai-nightly-cu121, PyTorch nightly 2.4 CPU). This is probably a regression, or a failure to bind some variables correctly.
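If the failure really is unbound loops reaching codegen rather than a codegen regression, explicitly scheduling the module before relax.build should sidestep it. A minimal sketch, assuming the dlight default-schedule passes available in recent mlc-ai nightlies (function and variable names here are illustrative, not from the notebook):

```python
# Workaround sketch, not a confirmed fix: bind every unscheduled TIR PrimFunc
# to GPU threads before building. Assumes tvm.dlight is available in the
# installed mlc-ai nightly; relax_mod stands for the module the notebook builds.
import tvm
from tvm import dlight as dl
from tvm import relax


def build_with_default_schedule(relax_mod: tvm.IRModule, target: tvm.target.Target):
    """Attach a fallback GPU schedule (block/thread bindings) and then build."""
    with target:
        scheduled = dl.ApplyDefaultSchedule(dl.gpu.Fallback())(relax_mod)
    return relax.build(scheduled, target)


# Usage (hypothetical): ex = build_with_default_schedule(relax_mod, tvm.target.Target("cuda"))
```

If the error goes away with this applied, that would confirm the module is reaching codegen with unscheduled PrimFuncs.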
Expected behavior
Generate a .so model file.
Actual behavior
The build reports the following error:
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/mlc-relax/relax/build/libtvm.so(tvm::ApplyPasses(tvm::IRModule, tvm::transform::Sequential)+0x42) [0x7f5212c4d512]
[bt] (7) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule) const+0x56) [0x7f5212d21dd6]
[bt] (6) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x347) [0x7f5212d21aa7]
[bt] (5) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::SequentialNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x44a) [0x7f5212d2429a]
[bt] (4) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x347) [0x7f5212d21aa7]
[bt] (3) /home/mlc-relax/relax/build/libtvm.so(tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const+0x1f4) [0x7f5212d22b84]
[bt] (2) /home/mlc-relax/relax/build/libtvm.so(+0x256a123) [0x7f5213820123]
[bt] (1) /home/mlc-relax/relax/build/libtvm.so(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x3d) [0x7f521298bfcd]
[bt] (0) /home/mlc-relax/relax/build/libtvm.so(tvm::runtime::Backtrace[abi:cxx11]()+0x2c) [0x7f5214c849dc]
  File "/home/mlc-relax/relax/src/tir/analysis/verify_memory.cc", line 205
RuntimeError: Memory verification failed with the following errors:
# from tvm.script import tir as T
Did you forget to bind?
    Variable `A` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
    Variable `T_relu` is directly accessed by host memory (it is not contained in a thread environment or in the function arguments.
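For context, VerifyMemory requires every loop in a GPU PrimFunc to be bound to a block/thread index; an unscheduled serial loop over A and T_relu is exactly what trips this check. A hypothetical TVMScript sketch of a ReLU kernel that would pass it (only the buffer names A and T_relu come from the log above; the shape and launch extents are made up):

```python
import tvm
from tvm.script import tir as T


@T.prim_func
def relu_bound(A: T.Buffer((1024,), "float32"), T_relu: T.Buffer((1024,), "float32")):
    # Every loop is bound to a GPU thread axis, so neither buffer is accessed
    # from host code and the memory verifier is satisfied.
    for bx in T.thread_binding(4, thread="blockIdx.x"):
        for tx in T.thread_binding(256, thread="threadIdx.x"):
            with T.block("relu"):
                i = T.axis.spatial(1024, bx * 256 + tx)
                T_relu[i] = T.max(A[i], T.float32(0))
```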
Environment
Ubuntu 20.04, mlc-relax up to date, Python 3.8
Steps to reproduce
import tvm
from tvm import relax
from tvm.relax.testing import relay_translator

# Android NDK clang++ used to cross-compile the exported .so for aarch64
cross_compiler = "/home/AndroidSdk/ndk/23.2.8568313/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android28-clang++"
target = tvm.target.Target("opencl --device=mali", host="llvm --mtriple=aarch64-linux-gnu")
# target = tvm.target.Target("llvm --num-cores=4 --mtriple=aarch64-linux-android --mattr=+neon")
model_out = "./libs/mobilenet.so"

relay_mod, relay_param, _, _ = get_network("mobilenet", 1)
relax_mod = relay_translator.from_relay(relay_mod["main"], target, relay_param)
ex = relax.build(relax_mod, target)
ex.export_library(model_out, cc=cross_compiler)
In get_network, testing.mobilenet.get_workload is used to load the model.
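That helper is not included in the report; a rough stand-in consistent with the description above (assuming tvm.relay.testing.mobilenet.get_workload, with the return padded to the four values unpacked in the snippet):

```python
from tvm.relay import testing


def get_network(name, batch_size):
    # Only the mobilenet branch matters for this report: get_workload returns
    # the Relay module together with its randomly initialized parameters.
    assert name == "mobilenet"
    mod, params = testing.mobilenet.get_workload(batch_size=batch_size)
    input_shape = (batch_size, 3, 224, 224)
    output_shape = (batch_size, 1000)
    return mod, params, input_shape, output_shape
```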
Triage
* backend::OpenCL