
Support for GPU sparse matrix-matrix multiply #69

Open
bpasley opened this issue Sep 1, 2016 · 2 comments

bpasley commented Sep 1, 2016

It seems matrix multiply is not implemented for GSMat × GSMat (sparse-sparse on the GPU). Are there any plans to support this (e.g., via cuSPARSE's csrgemm)?

jcanny (Contributor) commented Oct 6, 2016

There are a few issues with making this work. The first is deciding what form of output you want (dense or sparse). Sparse is probably the easy case (though I'm not sure that's what you need), since most sparse-sparse operators return a sparse result. Then there's wrapping the CUDA routine, adding a native C code equivalent, adding operator logic for conversions, etc. We haven't really needed it, so we've avoided building it. What's your use case?
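
For reference, a rough sketch of what "wrapping the CUDA routine" would involve with the cuSPARSE API of that era: csrgemm is a two-phase call, a symbolic pass that computes the output row pointers and nnz, then a numeric pass that fills column indices and values. The `csr_spgemm` wrapper name below is just illustrative, and error checking is omitted.

```c
#include <cuda_runtime.h>
#include <cusparse.h>

/* Sketch: C = A * B with A (m x k), B (k x n), both CSR on the device.
 * Uses the legacy cuSPARSE csrgemm pair; error handling omitted. */
void csr_spgemm(cusparseHandle_t handle,
                int m, int n, int k,
                cusparseMatDescr_t descrA, int nnzA,
                const float *valA, const int *rowPtrA, const int *colIndA,
                cusparseMatDescr_t descrB, int nnzB,
                const float *valB, const int *rowPtrB, const int *colIndB,
                cusparseMatDescr_t descrC,
                float **valC, int **rowPtrC, int **colIndC, int *nnzC)
{
    cudaMalloc((void **)rowPtrC, (m + 1) * sizeof(int));

    /* Phase 1: symbolic pass, fills rowPtrC and returns total nnz of C. */
    cusparseXcsrgemmNnz(handle,
                        CUSPARSE_OPERATION_NON_TRANSPOSE, CUSPARSE_OPERATION_NON_TRANSPOSE,
                        m, n, k,
                        descrA, nnzA, rowPtrA, colIndA,
                        descrB, nnzB, rowPtrB, colIndB,
                        descrC, *rowPtrC, nnzC);

    cudaMalloc((void **)colIndC, (*nnzC) * sizeof(int));
    cudaMalloc((void **)valC, (*nnzC) * sizeof(float));

    /* Phase 2: numeric pass, fills the column indices and values of C. */
    cusparseScsrgemm(handle,
                     CUSPARSE_OPERATION_NON_TRANSPOSE, CUSPARSE_OPERATION_NON_TRANSPOSE,
                     m, n, k,
                     descrA, nnzA, valA, rowPtrA, colIndA,
                     descrB, nnzB, valB, rowPtrB, colIndB,
                     descrC, *valC, *rowPtrC, *colIndC);
}
```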

bpasley (Author) commented Oct 7, 2016

The main use case is building large (sparse) similarity matrices from sparse features (e.g., TF-IDF vectors). This works fine using the CUDA routine directly. Of course it would be more convenient to have that integrated directly into BIDMat, but it's not strictly necessary.
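
For this use case the computation is S = X * Xᵀ, where X is an ndocs × nterms CSR matrix of TF-IDF features. A hedged usage sketch on top of the `csr_spgemm` sketch above: the CSR layout of Xᵀ is the same as the CSC layout of X, so it can be produced with cusparseScsr2csc. All variable names here are illustrative.

```c
/* Hypothetical usage: S = X * X^T, with Xt = X^T already in CSR form
 * (e.g., obtained via cusparseScsr2csc). Error checks omitted. */
float *valS;
int *rowPtrS, *colIndS, nnzS;
csr_spgemm(handle, ndocs, ndocs, nterms,
           descrX,  nnzX, valX,  rowPtrX,  colIndX,    /* A = X   (ndocs x nterms) */
           descrXt, nnzX, valXt, rowPtrXt, colIndXt,   /* B = X^T (nterms x ndocs) */
           descrS, &valS, &rowPtrS, &colIndS, &nnzS);  /* S = X * X^T, sparse CSR  */
```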
