[Relay/TOPI][Op] Add erf intrinsic and op #3702
Conversation
Would you mind also adding this to the TensorFlow frontend? The op name is Erf.

@soiferj Sure, will add it.

@icemelon9, is there a timeline to get this PR ready for review? This would be really nice to unblock running BERT-based models!

@soiferj Sorry about the delay. I was working on the dynamic shape function support for Any. I'll start updating this PR this week.
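For context, a hedged sketch of what such a TensorFlow-frontend mapping typically looks like, following the usual converter pattern in python/tvm/relay/frontend/tensorflow.py (the PR's actual registration may differ in detail):

```python
from tvm.relay import op as _op

# Sketch of a converter for TensorFlow's Erf op: forward the single input
# to the Relay unary erf op. Illustrative only; the PR's real code may differ.
def _erf():
    def _impl(inputs, attr, params):
        return _op.erf(inputs[0])
    return _impl

# Entries like this live in the frontend's op-name -> converter map:
# _convert_map = {..., 'Erf': _erf(), ...}
```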
@tqchen Could you help check if the vectorizable intrinsic list is complete? |
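The squashed commit list at the end of this thread mentions "stop vectorization for erf" and "add whitelist for vectorizable intrin". A hypothetical sketch of that idea, purely for illustration (the names and list contents here are assumptions, not TVM's actual pass):

```python
# Illustration only -- names and contents are assumed, not TVM's real code.
# The idea: only intrinsics known to have vector forms on the target
# backends may be vectorized; calls to intrinsics like erf stay scalar.
VECTORIZABLE_INTRINSICS = {"exp", "log", "sqrt", "pow", "tanh"}

def can_vectorize_call(intrin_name: str) -> bool:
    """Return True if a loop calling this intrinsic may be vectorized."""
    return intrin_name in VECTORIZABLE_INTRINSICS
```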
```
@@ -85,6 +85,18 @@ RELAY_REGISTER_UNARY_OP("exp")
.set_support_level(1)
.set_attr<FTVMCompute>("FTVMCompute", RELAY_UNARY_COMPUTE(topi::exp));

RELAY_REGISTER_UNARY_OP("erf")
.describe(R"code(Returns the error function value for input array, computed element-wise.
```
.describe(R"code(Returns the error function value for input array, computed element-wise. | |
.describe(R"code(Returns the Gauss error function value for input array, computed element-wise. |
LGTM
erf changes look good to me
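For anyone trying the new op out, a minimal usage sketch, assuming the PR is merged and `erf` is exposed like the other Relay unary ops:

```python
from tvm import relay

# Tiny Relay function applying erf element-wise; the (4,) shape is an
# arbitrary example.
x = relay.var("x", shape=(4,), dtype="float32")
func = relay.Function([x], relay.erf(x))
print(func)  # fn (%x: Tensor[(4), float32]) { erf(%x) }
```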
```
@@ -55,10 +55,17 @@ def _mx_fully_connected(inputs, attrs):
    use_flatten = attrs.get_bool("flatten", True)
    if has_flatten and use_flatten:
        inputs[0] = _op.nn.batch_flatten(inputs[0])
    data_shape = _infer_type(inputs[0]).checked_type.shape
    if len(data_shape) > 2:
        inputs[0] = _op.reverse_reshape(inputs[0], [-1, 0])
```
Could you elaborate a bit on what these changes are for?
This is because mxnet's dense op allows (d1, d2, ..., dk, in_dim) x (out_dim, in_dim) --> (d1, d2, ..., dk, out_dim), but topi's dense op only accepts 2-D input, so the frontend collapses the leading dimensions first (see the sketch below).
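A minimal sketch of that shape handling with made-up shapes (the (2, 3) leading dims, in_dim=16, and out_dim=8 are assumptions for illustration):

```python
import numpy as np

# mxnet FullyConnected with flatten=False accepts >2-D data:
data = np.random.rand(2, 3, 16).astype("float32")  # (d1, d2, in_dim)
weight = np.random.rand(8, 16).astype("float32")   # (out_dim, in_dim)

# topi's dense only takes 2-D data, so the frontend first folds the leading
# dims into one -- this is what reverse_reshape(inputs[0], [-1, 0]) does
# ([-1, 0] keeps the last dim and infers the rest):
flat = data.reshape(-1, data.shape[-1])             # (6, 16)
out = flat @ weight.T                               # (6, 8)

# ...then the result is reshaped back to (d1, d2, out_dim):
out = out.reshape(*data.shape[:-1], weight.shape[0])  # (2, 3, 8)
assert out.shape == (2, 3, 8)
```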
Thanks @icemelon9 @yongwww
* add more ops
* stop vectorization for erf
* x
* cleanup
* fix
* add whitelist for vectorizable intrin
* add tf converter
* fix dense
* fix
* add missing intrin
* fix mxnet frontend
* fix nvptx
@yzhliu @masahi @yongwww @kevinthesun Could you help review this PR?