[NNVM][TOPI] Add FTVMCompute for matmul #1239
Conversation
nnvm/python/nnvm/top/nn.py
Outdated
@@ -73,6 +73,9 @@ def schedule_dense(_, outs, target):

reg.register_pattern("dense", OpPattern.OUT_ELEMWISE_FUSABLE)

# matmul
reg.register_pattern("matmul", OpPattern.INJECTIVE)
Matmul is unlikely to be injective; it should be OUT_ELEMWISE_FUSABLE, but the current set of schedules does not support that well. Consider changing it to OUT_ELEMWISE_FUSABLE for now while still using the injective schedule.
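For illustration, a minimal sketch of what the suggested change could look like in nnvm/python/nnvm/top/nn.py (the schedule_matmul name and the fallback to the generic injective schedule are assumptions for this sketch, not code from the PR; it relies on the same tvm/topi imports already used by schedule_dense in this file):

# Sketch: register matmul as OUT_ELEMWISE_FUSABLE, but fall back to the
# generic injective schedule until a dedicated matmul schedule exists.
reg.register_pattern("matmul", OpPattern.OUT_ELEMWISE_FUSABLE)

@reg.register_schedule("matmul")
def schedule_matmul(_, outs, target):
    with tvm.target.create(target):
        return topi.generic.schedule_injective(outs)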
nnvm/src/top/tensor/matrix_op.cc
Outdated
const Array<Tensor>& out_info) {
  const MatMulParam& param = nnvm::get<MatMulParam>(attrs.parsed);
  return Array<Tensor>{
    topi::matmult(inputs[0], inputs[1], param.transpose_a, param.transpose_b)
topi::matmult is an outdated API name that needs to be updated to topi::matmul.
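As a hedged illustration, the corrected compute registration could look roughly like this (the NNVM_REGISTER_OP boilerplate is reconstructed from typical NNVM op registrations, not copied from the PR, and it assumes topi::matmul keeps the same argument order as the old topi::matmult):

// Sketch of the FTVMCompute registration with the renamed TOPI call.
NNVM_REGISTER_OP(matmul)
.set_attr<FTVMCompute>(
  "FTVMCompute", [](const NodeAttrs& attrs,
                    const Array<Tensor>& inputs,
                    const Array<Tensor>& out_info) {
    const MatMulParam& param = nnvm::get<MatMulParam>(attrs.parsed);
    // topi::matmul replaces the misspelled topi::matmult.
    return Array<Tensor>{
      topi::matmul(inputs[0], inputs[1],
                   param.transpose_a, param.transpose_b)};
  });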
Kind of hard to define the gradient in terms of the forward operator without having an implementation thereof ;)