
[Relay][Frontend][TFLite] Add parser support for squared difference #4652

Merged
merged 4 commits into apache:master on Jan 16, 2020

Conversation

wyc-ruiker (Contributor)

Thanks for contributing to TVM! Please refer to the guideline at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from reviewers by @-mentioning them in the pull request thread.

@wyc-ruiker (Contributor Author)

Could you check this PR, @kevinthesun @FrozenGene?

@@ -723,6 +723,14 @@ def _test_greater(data):
""" One iteration of greater """
return _test_elemwise(math_ops.greater, data)

#######################################################################
# Squared_difference
# -------
Member

Align the `-------` underline to the same length as `Squared_difference`.

Contributor Author

fixed

# Check if the input tensor is quantized, call QNN op
if self.is_quantized(op):
raise tvm.error.OpNotImplemented(
'TFlite quantized greater operator is not supported yet.')
Member

The comment is wrong; this is not the greater op.

Contributor Author

fixed

raise tvm.error.OpNotImplemented(
'TFlite quantized greater operator is not supported yet.')
difference = self._convert_elemwise(_op.subtract, op)
out = _op.multiply(difference, difference)
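As a quick sanity check of the semantics being wired up here (plain NumPy, outside TVM; the helper name is illustrative, not part of the PR), the subtract-then-multiply lowering computes elementwise (a - b)^2:

```python
import numpy as np

def squared_difference(a, b):
    # Reference semantics of TFLite's SQUARED_DIFFERENCE op:
    # elementwise (a - b) ** 2, computed here as multiply(diff, diff),
    # mirroring the snippet above.
    diff = a - b
    return diff * diff

a = np.array([1.0, 2.0, 5.0], dtype=np.float32)
b = np.array([4.0, 2.0, 1.0], dtype=np.float32)
print(squared_difference(a, b))  # elementwise: 9, 0, 16
```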
Member

Maybe consider using `power` instead.

Contributor Author

fixed

@FrozenGene (Member) left a comment

lgtm. Thanks!

raise tvm.error.OpNotImplemented(
'TFlite quantized squared difference operator is not supported yet.')
difference = self._convert_elemwise(_op.subtract, op)
out = _op.power(difference, relay.const(2, 'float32'))
Member

It is better not to make it `relay.const(2, 'float32')`, because if the op's data were fp16 (in the future), the constant 2 shouldn't be float32. So we could call `get_tensor_type_str(self.get_output_tensors(op)[0].tensor.Type())` to get the type, then call `_op.power`. Or, if you think that is too complex, you could simply call `_op.multiply(difference, difference)`; I think that is OK. Previously I suggested `_op.power` because we could have an intrinsic for power to handle it, but after thinking twice I don't think it would be a bottleneck.
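The dtype concern above can be illustrated outside TVM with NumPy (illustrative only; NumPy's promotion rules differ from Relay's, but the shape of the problem is the same): if the exponent's dtype is derived from the input tensor rather than hard-coded to float32, fp16 data stays fp16, just as `multiply(difference, difference)` trivially does.

```python
import numpy as np

# Hypothetical fp16 inputs, as in the reviewer's future-fp16 scenario.
a = np.array([1.5, 2.0], dtype=np.float16)
b = np.array([0.5, 3.0], dtype=np.float16)
diff = a - b

# multiply(diff, diff) preserves the input dtype:
out_mul = diff * diff
print(out_mul.dtype)  # float16

# Deriving the exponent's dtype from the tensor (the analogue of
# relay.const(2, exp_type) with exp_type read from the output tensor)
# also preserves it:
two = np.asarray(2, dtype=diff.dtype)
out_pow = diff ** two
print(out_pow.dtype)  # float16
```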

Contributor Author

Thanks for your suggestion. I think `_op.power` is more appropriate here, and I have already fixed it.

@FrozenGene FrozenGene self-assigned this Jan 16, 2020
@FrozenGene FrozenGene added the status: need update need update based on feedbacks label Jan 16, 2020
@FrozenGene (Member)

Thanks @wyc-ruiker, LGTM now. Let us wait for the CI result.

@FrozenGene FrozenGene merged commit b1d93ec into apache:master Jan 16, 2020
@FrozenGene (Member)

Thanks @wyc-ruiker, this is merged.

@FrozenGene FrozenGene added status: accepted and removed status: need update need update based on feedbacks labels Jan 16, 2020
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 26, 2020
…pache#4652)

* [Relay][Frontend][TFLite] Add parser support for squared difference

* fix some error

* fix exp_type

* add comment
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 28, 2020
…pache#4652)

zhiics pushed a commit to neo-ai/tvm that referenced this pull request Mar 2, 2020
…pache#4652)
