Clarify error message if onnx_graphsurgeon is not installed? #353
Hi @mattpopovich , The reason is that we want to make the core functionality of yolort as light as possible. |
Got it, I'll make a PR in a bit to add some assertion information! |
Hi @mattpopovich , As you have found, there are some similar issues elsewhere. I just released version 0.6.2, and I also introduced this poor implementation in that new release for the sake of rapid development. The main reason is that I want to keep the core functionality of yolort as light as possible. Since there are many similar problems in yolort, I think it's time to resolve this in the next release. TorchAudio implements a helper for this:

```python
import importlib.util


def is_module_available(*modules: str) -> bool:
    r"""Returns if a top-level module with :attr:`name` exists *without*
    importing it. This is generally safer than try-catch block around a
    `import X`. It avoids third party libraries breaking assumptions of some of
    our tests, e.g., setting multiprocessing start method when imported
    (see librosa/#747, torchvision/#544).
    """
    return all(importlib.util.find_spec(m) is not None for m in modules)
```

And then they implement a decorator on top of it:

```python
from functools import wraps


def is_tensorrt_available():
    return is_module_available("tensorrt")


def requires_tensorrt():
    if is_tensorrt_available():

        def decorator(func):
            return func

    else:

        def decorator(func):
            @wraps(func)
            def wrapped(*args, **kwargs):
                raise RuntimeError(f"{func.__module__}.{func.__name__} requires TensorRT")

            return wrapped

    return decorator
```

More context |
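As a quick illustration of how that decorator gets applied (the function below is made up for this example and is not part of torchaudio or yolort):

```python
# Hypothetical call site: guard a TensorRT-only code path with the decorator above.
@requires_tensorrt()
def build_engine(onnx_path: str):
    import tensorrt as trt  # only reached when TensorRT is actually importable

    print(f"Building an engine from {onnx_path} with TensorRT {trt.__version__}")
```

Calling `build_engine(...)` without TensorRT installed then fails immediately with `RuntimeError: ... requires TensorRT` instead of an obscure import or attribute error further down.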
Would you prefer to write specific […]? I'm leaning towards just using […]. I feel like they made specific […] |
I agree with you here, and this approach is also really a bit complicated and not necessary for us. Seems the dependencies for […]. Maybe the […]:

```python
from importlib.util import find_spec


def _module_available(module_path: str) -> bool:
    """
    Check if a path is available in your environment.

    >>> _module_available('os')
    True
    >>> _module_available('bla.bla')
    False
    """
    try:
        return find_spec(module_path) is not None
    except AttributeError:
        # Python 3.6
        return False
    except ModuleNotFoundError:
        # Python 3.7+
        return False
    except ValueError:
        # Sometimes __spec__ can be None and gives a ValueError
        return True
```

And then they use a […]:

```python
from typing import List, Union


def example_requires(module_paths: Union[str, List[str]]):
    # `requires` is a decorator defined elsewhere in that codebase (not shown here).
    return requires(module_paths)(lambda: None)()
```

|
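If I read that helper right, it is meant to be called at the top of an example script so a missing dependency fails immediately with a readable error rather than at some later call site. A hypothetical usage, assuming `requires(...)` returns a decorator that raises when any listed module is unavailable:

```python
# Hypothetical fail-fast check at the top of an example script.
# Assumes `requires(...)` (not shown in this thread) raises a clear error
# when any of the listed modules cannot be imported.
example_requires(["onnx", "onnx_graphsurgeon"])

import onnx_graphsurgeon as gs  # safe to import once the check has passed
```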
Check out this commit that I made. It follows the same approach that pytorch/audio did, and I think it's pretty straightforward.

```console
root@user:/home/user/git/yolov5-rt-stack/# python3 function_using_YOLOTRTGraphSurgeon.py
We're using TensorRT: 8.2.3.0 on cuda device: 0.
NOTE! Installing ujson may make loading annotations faster.
[...]
Traceback (most recent call last):
  File "function_using_YOLOTRTGraphSurgeon.py", line 63, in <module>
    export_tensorrt_engine(
  File "/home/user/git/yolov5-rt-stack/yolort/runtime/trt_helper.py", line 69, in export_tensorrt_engine
    yolo_gs = YOLOTRTGraphSurgeon(checkpoint_path, version=version, input_sample=input_sample)
  File "/home/user/git/yolov5-rt-stack/yolort/utils/module_utils.py", line 39, in wrapped
    raise RuntimeError(f"{func.__module__}.{func.__name__} requires {req}")
RuntimeError: yolort.relay.trt_graphsurgeon.YOLOTRTGraphSurgeon requires module: onnx_graphsurgeon
root@user:/home/user/git/yolov5-rt-stack/# pip install onnx_graphsurgeon
[successful]
root@user:/home/user/git/yolov5-rt-stack/# python3 function_using_YOLOTRTGraphSurgeon.py
[no Traceback]
```

If you're okay with this change, I'll do the same for TensorRT, onnxruntime, and lightning. Then I'll make a PR to close this issue. Additionally, let me know if you are okay with the placement of […] |
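For readers following along, the traceback above implies a wrapper in `yolort/utils/module_utils.py` along these lines. This is only a sketch reconstructed from the error message and the torchaudio pattern quoted earlier, not the actual contents of the commit:

```python
import importlib.util
from functools import wraps


def requires_module(*modules: str):
    """Raise a readable error at call time when a required module is missing.

    Sketch reconstructed from the traceback above; the real helper in
    yolort/utils/module_utils.py may differ in names and details.
    """
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    if not missing:

        def decorator(func):
            return func

    else:
        req = "module: " + ", ".join(missing)

        def decorator(func):
            @wraps(func)
            def wrapped(*args, **kwargs):
                raise RuntimeError(f"{func.__module__}.{func.__name__} requires {req}")

            return wrapped

    return decorator
```

Decorating `YOLOTRTGraphSurgeon` with something like `@requires_module("onnx_graphsurgeon")` would produce an error of the form shown in the traceback above.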
Thanks, I think it's very pretty!
And there is already a file aimed at resolving the version and dependency problems: https://github.com/zhiqwang/yolov5-rt-stack/blob/main/yolort/utils/dependency.py . I guess it would be better if we could move the […] |
Sounds good. I'll make the PR tomorrow! |
🐛 Describe the bug

When following the TensorRT deployment instructions, there is a point where we try to use `YOLOTRTGraphSurgeon`, but to use that, we need to `import onnx_graphsurgeon as gs`. Otherwise, this line will fail because `gs = None`.

I'm not sure why you catch the `ImportError` if `onnx_graphsurgeon` is not found, as `YOLOTRTGraphSurgeon` is the only thing that is in that file and it seems like we need `onnx_graphsurgeon`, but just wanted to bring this to your attention. Low priority 😄
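For context, the guard being described is roughly the following (a simplified sketch, not the exact contents of `trt_graphsurgeon.py`):

```python
# Simplified sketch of the import guard described in the report.
try:
    import onnx_graphsurgeon as gs
except ImportError:
    gs = None

# Any later call such as gs.import_onnx(...) then fails with
# "AttributeError: 'NoneType' object has no attribute 'import_onnx'",
# which hides the real problem: onnx_graphsurgeon is not installed.
```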
Versions