add autogen code support for reverse op (#52701)
* add autogen code support for reverse op

* bug fixed
GreatV authored Apr 11, 2023
1 parent c4e1fcb commit ab75441
Showing 6 changed files with 27 additions and 132 deletions.
117 changes: 0 additions & 117 deletions paddle/fluid/operators/reverse_op.cc

This file was deleted.

6 changes: 6 additions & 0 deletions paddle/phi/api/yaml/backward.yaml
@@ -1328,6 +1328,12 @@
   kernel :
     func : renorm_grad

+- backward_op : reverse_grad
+  forward : reverse (Tensor x, IntArray axis) -> Tensor(out)
+  args : (Tensor out_grad, IntArray axis)
+  output : Tensor(x_grad)
+  invoke : reverse(out_grad, axis)
+
 - backward_op : roll_grad
   forward : roll(Tensor x, IntArray shifts, int64_t[] axis) -> Tensor(out)
   args : (Tensor x, Tensor out_grad, IntArray shifts, int64_t[] axis)

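The backward entry added above relies on reverse being its own adjoint: if out = reverse(x, axis), then x_grad = reverse(out_grad, axis), which is exactly what invoke : reverse(out_grad, axis) expresses, so no separate reverse_grad kernel is needed. A minimal NumPy sketch (illustrative only, not Paddle code; np.flip stands in for the reverse kernel's semantics) checks this rule against a finite-difference gradient:

import numpy as np

# Loss L(x) = sum(w * reverse(x, axis)); dL/dout = w, so the rule above
# predicts dL/dx = reverse(w, axis).
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
w = rng.standard_normal((3, 4))
axis = (0, 1)

x_grad_rule = np.flip(w, axis)   # reverse(out_grad, axis)

# Finite-difference gradient of L with respect to each element of x.
eps = 1e-6
x_grad_fd = np.zeros_like(x)
for idx in np.ndindex(*x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += eps
    xm[idx] -= eps
    x_grad_fd[idx] = ((w * np.flip(xp, axis)).sum() - (w * np.flip(xm, axis)).sum()) / (2 * eps)

assert np.allclose(x_grad_rule, x_grad_fd, atol=1e-5)
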
6 changes: 0 additions & 6 deletions paddle/phi/api/yaml/legacy_backward.yaml
@@ -884,12 +884,6 @@
   backward : reshape_double_grad
   inplace : (out_grad -> x_grad)

-- backward_op : reverse_grad
-  forward : reverse (Tensor x, IntArray axis) -> Tensor(out)
-  args : (Tensor out_grad, IntArray axis)
-  output : Tensor(x_grad)
-  invoke : reverse(out_grad, axis)
-
 - backward_op : rnn_grad
   forward : rnn (Tensor x, Tensor[] pre_state, Tensor[] weight_list, Tensor sequence_length, Tensor dropout_state_in, float dropout_prob, bool is_bidirec, int input_size, int hidden_size, int num_layers, str mode, int seed, bool is_test) -> Tensor(out), Tensor(dropout_state_out), Tensor[](state), Tensor(reserve)
   args : (Tensor x, Tensor[] pre_state, Tensor[] weight_list, Tensor sequence_length, Tensor out, Tensor dropout_state_out, Tensor reserve, Tensor out_grad, Tensor[] state_grad, float dropout_prob, bool is_bidirec, int input_size, int hidden_size, int num_layers, str mode, int seed, bool is_test)

9 changes: 0 additions & 9 deletions paddle/phi/api/yaml/legacy_ops.yaml
@@ -1133,15 +1133,6 @@
   intermediate : xshape
   backward: reshape_grad

-- op : reverse
-  args : (Tensor x, IntArray axis)
-  output : Tensor
-  infer_meta :
-    func : ReverseInferMeta
-  kernel :
-    func : reverse
-  backward : reverse_grad
-
 - op : rmsprop_
   args : (Tensor param, Tensor mean_square, Tensor grad, Tensor moment, Tensor learning_rate, Tensor mean_grad, Tensor master_param, float epsilon, float decay, float momentum, bool centered, bool multi_precision)
   output : Tensor(param_out), Tensor(moment_out), Tensor(mean_square_out), Tensor(mean_grad_out), Tensor(master_param_out)

11 changes: 11 additions & 0 deletions paddle/phi/api/yaml/op_compat.yaml
@@ -1751,6 +1751,17 @@
   extra :
     attrs : [bool use_mkldnn = false, str mkldnn_data_type = "float32", bool use_quantizer = false]

+- op : reverse
+  inputs:
+    x : X
+  outputs:
+    out : Out
+  int_array:
+    axis :
+      data_type : int
+      support_tensor : true
+  manual_signature : [reverse]
+
 - op : roll
   backward : roll_grad
   inputs :

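The op_compat entry above maps the legacy operator's argument names (X, Out) onto the new ones (x, out), keeps the manually written kernel signature for reverse, and marks axis as an IntArray that may be given either as a host-side list or as a runtime tensor (support_tensor : true). A plain-Python sketch (conceptual only, not Paddle internals) of what accepting both forms of axis amounts to:

import numpy as np

def reverse(x, axis):
    # axis may be a static list of ints or an integer array produced at
    # runtime (the tensor case); both are normalized to the same tuple of
    # axes before flipping.
    axes = tuple(int(a) for a in np.atleast_1d(np.asarray(axis)))
    return np.flip(x, axes)

x = np.arange(6).reshape(2, 3)
assert (reverse(x, [0]) == reverse(x, np.array([0]))).all()
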
10 changes: 10 additions & 0 deletions paddle/phi/api/yaml/ops.yaml
@@ -1436,6 +1436,16 @@
     func : renorm
   backward : renorm_grad

+- op : reverse
+  args : (Tensor x, IntArray axis)
+  output : Tensor
+  infer_meta :
+    func : ReverseInferMeta
+  kernel :
+    func : reverse
+    data_type : x
+  backward : reverse_grad
+
 - op : roll
   args : (Tensor x, IntArray shifts={}, int64_t[] axis={})
   output : Tensor(out)

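For reference, the generated forward op flips the input along every axis listed in the IntArray attribute and, per ReverseInferMeta and data_type : x, produces an output with the same shape and dtype as the input. A small NumPy stand-in (not the Paddle kernel itself) shows the expected semantics:

import numpy as np

x = np.arange(6).reshape(2, 3)       # [[0 1 2], [3 4 5]]

print(np.flip(x, axis=(0,)))         # reverse along axis 0 -> [[3 4 5], [0 1 2]]
print(np.flip(x, axis=(0, 1)))       # reverse along both axes -> [[5 4 3], [2 1 0]]

# The output keeps the input's shape and dtype, matching what the
# ReverseInferMeta / data_type : x entries above describe.
assert np.flip(x, axis=(0, 1)).shape == x.shape
assert np.flip(x, axis=(0, 1)).dtype == x.dtype
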
