[RELAY][DOCS] Port from_mxnet tutorial to relay #2608
Conversation
Force-pushed from 447525f to 349eb48.
tutorials/relay/from_mxnet.py (outdated)
target = 'cuda'
shape_dict = {'data': x.shape}
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build_module.build(net, target, params=params)
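For context, a minimal sketch of how the outputs of relay.build would typically be consumed with TVM's graph runtime of that era; the input name 'data' and the dtype are assumptions carried over from the snippet above, not part of this PR:

```python
import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Assumes `graph`, `lib`, `params`, and an input array `x` as in the snippet above.
ctx = tvm.gpu(0)
m = graph_runtime.create(graph, lib, ctx)            # wrap the compiled module
m.set_input('data', tvm.nd.array(x.astype('float32')))
m.set_input(**params)                                # bind the trained weights
m.run()
tvm_output = m.get_output(0).asnumpy()               # e.g. class scores
print(np.argmax(tvm_output))
```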
I think the following is the recommended way to do inference in Relay:
with relay.build_config(opt_level=1):
    intrp = relay.build_module.create_executor('graph', sym, tvm.gpu(0), target)
    tvm_output = intrp.evaluate(sym)(tvm.nd.array(x.astype(dtype)), **params).asnumpy()
Thanks for the suggestion; it looks like the output of the interpreter is already a NumPy array?
@zhreshold Please take a look again.
lgtm
Thanks @eqy @zhreshold. This is merged.
* check in
* update build and run
Port the prediction part of the NNVM from_mxnet tutorial to Relay (note that the checkpointing part is omitted for now).
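As a rough sketch of the ported prediction flow, the tutorial starts by converting an MXNet/Gluon model into Relay; the model name, input shape, and variable names below are illustrative assumptions, not lifted verbatim from the final tutorial:

```python
import tvm
from tvm import relay
from mxnet.gluon.model_zoo import vision

# Illustrative: any pretrained Gluon model from the model zoo would do.
block = vision.get_model('resnet18_v1', pretrained=True)
shape_dict = {'data': (1, 3, 224, 224)}  # assumed NCHW input shape

# Convert the Gluon model into a Relay function plus its parameter dict;
# `net` and `params` then feed into relay.build as in the diff above.
net, params = relay.frontend.from_mxnet(block, shape_dict)
```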