This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

A todo list for the sparse feature (GPU) #10506

Open
7 tasks
eric-haibin-lin opened this issue Apr 11, 2018 · 10 comments

Comments

@eric-haibin-lin
Member

#8168

Operator (forward pass only unless specified):

  • elemwise_mul(csr, dense) = csr: both forward and backward
  • sum(csr)
  • mean(csr)
  • concat(csr, csr)
  • stack(csr, csr)
  • unary operators for csr (arcsin, arcsinh, arctan, arctanh, ceil, expm1, floor, log1p, power, rint, sign, sin, sinh, sqrt, tan, tanh, trunc) - most of them can just reuse the template code implemented for row_sparse
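The semantics of the first item above can be sketched in plain Python (this illustrates the expected behavior, not MXNet's implementation; the function name and CSR-as-three-lists representation are made up for the example). Because the product of a CSR matrix with a dense matrix keeps the CSR sparsity pattern, only the stored entries need to be touched:

```python
# Sketch of elemwise_mul(csr, dense) = csr semantics in pure Python.
# A CSR matrix is stored as (data, indices, indptr); the output reuses the
# input's indices/indptr and only rescales the stored values.

def elemwise_mul_csr_dense(data, indices, indptr, dense):
    """Multiply a CSR matrix elementwise by a dense matrix (list of rows)."""
    out_data = []
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            col = indices[k]
            out_data.append(data[k] * dense[row][col])
    return out_data, list(indices), list(indptr)

# 2x3 CSR matrix [[1, 0, 2], [0, 3, 0]]
data, indices, indptr = [1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3]
dense = [[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]]
new_data, new_indices, new_indptr = elemwise_mul_csr_dense(data, indices, indptr, dense)
print(new_data)  # [10.0, 60.0, 150.0]
```

The backward pass for each input follows the same pattern: the gradient w.r.t. the CSR input is again restricted to the stored positions.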

Iterator:

  • NDArrayIter(csr) with last_batch_handle = pad / roll_over
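A hedged sketch of the `last_batch_handle` semantics this item refers to (pure Python, not MXNet code; the assumption that `pad` fills the final batch by wrapping around to the start of the data is based on how dense `NDArrayIter` behaves):

```python
# "pad": the final short batch is filled by wrapping to the start of the data.
# "roll_over": leftover samples are carried into the next epoch instead.

def batches(samples, batch_size, last_batch_handle="pad", leftover=None):
    data = (leftover or []) + list(samples)
    out, i = [], 0
    while i + batch_size <= len(data):
        out.append(data[i:i + batch_size])
        i += batch_size
    rest = data[i:]
    if not rest:
        return out, []
    if last_batch_handle == "pad":
        pad = batch_size - len(rest)
        out.append(rest + data[:pad])   # wrap around to fill the batch
        return out, []
    return out, rest                    # roll_over: keep for the next epoch

full, rest = batches([1, 2, 3, 4, 5], 2, "pad")
print(full)  # [[1, 2], [3, 4], [5, 1]]
```

The CSR-specific difficulty is that slicing, padding, and concatenating batches all have to operate on the compressed row representation rather than on whole dense rows.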
@eric-haibin-lin
Member Author

cc @ZiyueHuang

@leezu
Contributor

leezu commented May 3, 2018

@eric-haibin-lin
Member Author

LogisticLoss with csr labels (requires reshape and reshape_like)
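For context, logistic loss with CSR labels can exploit sparsity: start from the loss under the assumption that every label is zero, then correct only the stored entries. A pure-Python sketch (hypothetical function, not an MXNet API):

```python
import math

def logistic_loss_csr_labels(pred, label_data, label_indices, label_indptr):
    """Binary cross-entropy where labels are CSR (mostly zero).

    Start from the all-zero-label loss -log(1 - p) at every position, then
    for each stored label y at (row, col) replace the assumed term with the
    true one: the correction is -y*log(p) + y*log(1 - p).
    """
    loss = 0.0
    for row, row_pred in enumerate(pred):
        for p in row_pred:
            loss += -math.log(1.0 - p)
        for k in range(label_indptr[row], label_indptr[row + 1]):
            col = label_indices[k]
            y, p = label_data[k], row_pred[col]
            loss += -y * math.log(p) + y * math.log(1.0 - p)
    return loss

# 1x2 predictions, one stored label y=1 at column 0
loss = logistic_loss_csr_labels([[0.5, 0.5]], [1.0], [0], [0, 1])
```

`reshape` / `reshape_like` come in because the predictions and the CSR labels generally need to be brought to matching shapes before the elementwise loss is computed.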

@ghost

ghost commented Oct 18, 2018

Has there been any progress on this?

@eric-haibin-lin
Member Author

Not that I'm aware of. @Codewithsk are you looking for any of these?

@ghost

ghost commented Oct 18, 2018

I'm looking to contribute and this looks interesting. I can maybe submit a proposal on the dev list?

@ghost

ghost commented Oct 19, 2018

@eric-haibin-lin Any pointers on where I can start?
I see this: A Guide To Implementing Sparse Operators in MXNet Backend

@szha szha removed the Feature label Nov 14, 2018
@jermainewang
Contributor

@eric-haibin-lin Do we have gradient support of sparse matrix for the dot operator right now?

@eric-haibin-lin
Member Author

@jermainewang No. For the operator dot(csr, dense) = dense, the gradient calculation for the left-hand side (the CSR input) is not yet supported in the backward pass. Do you know if anyone is interested in contributing?
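The missing piece can be stated concretely: for out = dot(csr, dense), the gradient w.r.t. the CSR input is dot(grad_out, dense.T) evaluated only at the stored positions of the CSR input. A pure-Python sketch (illustrative function name, not MXNet's implementation):

```python
def dot_csr_dense_backward_lhs(indices, indptr, grad_out, dense):
    """Gradient of dot(csr, dense) w.r.t. the CSR input.

    For each stored entry (row, col), d out[row, :] / d csr[row, col] is
    dense[col, :], so grad[row, col] = sum_j grad_out[row][j] * dense[col][j].
    Only the stored entries are computed; the output reuses the input's
    indices/indptr, keeping the gradient CSR as well.
    """
    grad_data = []
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            col = indices[k]
            grad_data.append(sum(g * d for g, d in zip(grad_out[row], dense[col])))
    return grad_data

# 2x3 CSR lhs with pattern [[*, 0, *], [0, *, 0]], 3x2 dense rhs, 2x2 grad_out
grad = dot_csr_dense_backward_lhs(
    [0, 2, 1], [0, 2, 3],
    [[1.0, 2.0], [3.0, 4.0]],
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
)
print(grad)  # [1.0, 3.0, 4.0]
```

Restricting the gradient to the lhs sparsity pattern is what makes this cheaper than a dense dot(grad_out, dense.T), and is the part a GPU kernel would need to implement.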

@jermainewang
Contributor

It's actually quite important for implementing Graph Attention Networks. PyTorch 1.0 has enabled this support. There is also a workaround here: https://github.com/Diego999/pyGAT. It would be great if this could be prioritized.
