
[PaddlePaddle Hackathon 4][Frontend][Paddle]add thresholded_relu/index_select/eye/linspace/take_alone_axis/dist for paddle frontend #14172

Merged — 8 commits, merged Mar 12, 2023

Conversation

@XG-zheng (Contributor) commented Mar 2, 2023

Add thresholded_relu/index_select/eye/linspace/take_alone_axis/dist for paddle frontend.

Note that in Paddle 2.1.3, eye/linspace/take_alone_axis are not supported; the test cases pass completely under version 2.4.2.

@tvm-bot (Collaborator) commented Mar 2, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@leandron (Contributor) commented Mar 9, 2023

@XG-zheng thanks for the PR.

Some operators here overlap with what's proposed in #14205, so can you work with @DadJie to see which PR gets merged?

@XG-zheng (Contributor, Author) replied:

> @XG-zheng thanks for the PR.
>
> Some operators here overlap with what's proposed in #14205, so can you work with @DadJie to see which PR gets merged?

The operators proposed in #14205 are covered by #14160 and #14172.


x = g.get_node(op.input("X")[0])
y = g.get_node(op.input("Y")[0])
z = _op.abs(_op.subtract(x, y))
Contributor:

z = x - y. It can't use _op.abs, otherwise inv_p = _expr.const(1.0 / p, dtype=dtype) is miscalculated.

@XG-zheng (Contributor, Author):

I am confused about how z would lead to the miscalculation of inv_p. I referred to the code https://github.com/PaddlePaddle/Paddle2ONNX/blob/develop/paddle2onnx/mapper/tensor/dist.cc#L33 and the API doc.

Contributor:

Sorry, I misread p as z. No problem here, but _op.abs(z) should no longer be used in the following code.
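The point of this exchange can be made concrete with a plain-NumPy reference of what the dist op computes (a hypothetical sketch, not the TVM converter code; `dist_ref` is an illustrative name): the absolute value is taken exactly once when forming z, so later steps must not apply abs again.

```python
import numpy as np

def dist_ref(x, y, p=2.0):
    """Hypothetical NumPy reference for the p-norm distance between x and y."""
    # z holds |x - y|; abs is applied exactly once here,
    # so the code below must not call abs on z again.
    z = np.abs(x - y)
    if np.isinf(p):
        return float(z.max())
    if p == 0.0:
        # "0-norm": count of non-zero differences
        return float(np.count_nonzero(z))
    # 1.0 / p is the inv_p exponent the review comment refers to
    return float((z ** p).sum() ** (1.0 / p))
```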

stop = g.get_node(op.input("Stop")[0])
num = g.get_node(op.input("Num")[0])
dtype = _convert_dtype_value(op.attr("dtype"))
start, infered = try_infer_value(start, parameters=g.get_params())
Contributor:

Dynamic shape should also be supported.

if num == 1:
    out = _op.full(_expr.const(start, dtype), shape=(1))
else:
    if dtype in ["int32", "int64"]:
Contributor:

dtype should support float32, float64, int32 and int64.

start = _expr.const(start, "float32")
stop = _expr.const(stop, "float32")
step = _expr.const(step, "float32")
out = _op.transform.arange(start=start, stop=stop, step=step, dtype="float32")
Contributor:

Why is dtype fixed to "float32"?

"""Operator converter for eye."""

num_rows = op.attr("num_rows")
num_columns = op.attr("num_columns")
Contributor:

num_columns might equal -1, in which case num_columns defaults to num_rows.


Contributor:

We should look at the C++ definition and implementation of the op rather than the API. The Paddle-to-TVM conversion is an op mapping rather than an API mapping.
https://github.com/PaddlePaddle/Paddle/blob/b780a3ff4f89f73f4efd095002b320a5fe673afe/paddle/phi/kernels/impl/eye_kernel_impl.h#L44
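To make the -1 convention concrete, here is a hypothetical NumPy reference of the behavior described above (`eye_ref` is an illustrative name, not the converter's): the op attribute carries -1 when num_columns is unset, and the kernel then defaults it to num_rows.

```python
import numpy as np

def eye_ref(num_rows, num_columns=-1, dtype="float32"):
    """Hypothetical NumPy reference for the eye op's shape handling."""
    # Paddle's op attribute is -1 when num_columns is unset;
    # in that case the result is square.
    if num_columns == -1:
        num_columns = num_rows
    return np.eye(num_rows, num_columns, dtype=dtype)
```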

class ThresholdedRelu(nn.Layer):
    @paddle.jit.to_static
    def forward(self, inputs):
        return nn.functional.thresholded_relu(inputs)
Contributor:

Add some cases where the threshold is 0.5, since paddle.randn values mostly fall between -1 and 1.
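For reference, the op under test computes x where x exceeds the threshold and 0 elsewhere; a minimal NumPy sketch (`thresholded_relu_ref` is an illustrative name) shows why a 0.5 threshold is a more discriminating test on randn-style inputs than the default of 1.0.

```python
import numpy as np

def thresholded_relu_ref(x, threshold=1.0):
    """Hypothetical NumPy reference: out[i] = x[i] if x[i] > threshold else 0."""
    return np.where(x > threshold, x, np.zeros_like(x))
```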

    def forward(self, inputs):
        return paddle.eye(3, 5, dtype="int32"), paddle.eye(3, 5, dtype="float32"), inputs

class Eye2(nn.Layer):
Contributor:

Add a case where num_columns is None.

@junrushao (Member) commented:

Please update the PR to resolve the merge conflict :-) Happy to get it in afterwards.

@junrushao (Member) left a review:

Thanks! LGTM

@junrushao junrushao merged commit 6fa88e3 into apache:main Mar 12, 2023
    tmp_dtype = "float32"
else:
    tmp_dtype = "float64"
start = _op.cast(start, tmp_dtype)
@heliqi (Contributor) commented Mar 13, 2023:

There should be both static and dynamic shape cases. If the shape is static, the start value can be inferred and only a few ops are converted to TVM; if the value cannot be inferred because the shape is dynamic, more ops are converted to TVM, which may affect performance.

@XG-zheng (Contributor, Author):

Sorry, I'm a bit confused. Even if the start value can be inferred, the output is still dynamic. How do we optimize it?

Contributor:

If the start, stop and num values can be inferred, that's what you implemented before. There is no need to add ops like where, subtract, divide, etc.

@XG-zheng (Contributor, Author):

Yes, but I think TVM will optimize this automatically. If the start, stop and num values can be inferred, these ops will be eliminated by the FoldConstant pass in Relay, and it only outputs a relay.Constant. https://github.com/apache/tvm/blob/main/python/tvm/relay/frontend/paddlepaddle.py#L2572

@XG-zheng (Contributor, Author):

I tested the case where the values can be inferred; the Relay IR is as follows.

def @main() {
  meta[relay.Constant][0]
}

Contributor:

👌
