Unofficial Python interface to the Spectra library; GPU-accelerated solvers for eigenvalue and truncated (partial) SVD problems.
Spectra is a C++ library for large-scale eigenvalue problems.
By default Spectra uses the Eigen library for its computations, but thanks to its design the linear algebra can be delegated to any external library.
pyspectra allows you to redefine the matrix-vector operation in Python code. For example, you can use libraries with GPU support (such as PyTorch, TensorFlow, or CuPy), which can lead to large speedups; see the backend examples below.
```bash
pip install git+https://github.com/u1234x1234/pyspectra.git@0.0.1
```
You need to have the Eigen library installed; otherwise the build fails with `fatal error: Eigen/Core: No such file or directory`.

Installation using apt:

```bash
sudo apt install libeigen3-dev
```
```python
import pyspectra
import numpy as np

X = np.random.uniform(size=(10_000, 1000))

U, s, V = pyspectra.truncated_svd(X, 20)                   # similar to scipy.sparse.linalg.svds; Eigen backend
U, s, V = pyspectra.truncated_svd(X, 20, backend="torch")  # GPU acceleration with PyTorch

eigenvalues, eigenvectors = pyspectra.eigsh(X.T.dot(X), 20)  # symmetric eigenvalue problem, cf. scipy.sparse.linalg.eigsh
```
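As a quick sanity check, the leading singular values can be compared against SciPy's reference implementation. A minimal sketch, assuming nothing beyond the calls shown above plus `scipy.sparse.linalg.svds`:

```python
# Sanity-check sketch: compare singular values from pyspectra with scipy.sparse.linalg.svds.
import numpy as np
from scipy.sparse.linalg import svds
import pyspectra

X = np.random.uniform(size=(2_000, 500))
U, s, V = pyspectra.truncated_svd(X, 20)

# svds returns the k largest singular values in ascending order,
# so sort both sets before comparing.
_, s_ref, _ = svds(X, k=20)
print(np.allclose(np.sort(s), np.sort(s_ref)))  # expected True, up to solver tolerance
```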
- SymEigsSolver - real symmetric; dense/sparse; float32/float64
- PartialSVDSolver - partial SVD without explicitly forming A^T * A; float32/float64
Please open an issue if you want to use a solver that is not supported yet.
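For illustration, a hedged sketch of the symmetric solver on sparse float32 input. That `pyspectra.eigsh` accepts `scipy.sparse` matrices directly is an assumption based on the "dense/sparse" entry above, not a documented call:

```python
# Hedged sketch: sparse float32 input for the symmetric solver.
# Assumption: pyspectra.eigsh accepts scipy.sparse matrices, as suggested by
# the "dense/sparse; float32/float64" entry for SymEigsSolver.
import numpy as np
import scipy.sparse as sp
import pyspectra

A = sp.random(20_000, 20_000, density=1e-4, format="csr", dtype=np.float32)
A = A + A.T  # symmetrize: the solver expects a real symmetric matrix
eigenvalues, eigenvectors = pyspectra.eigsh(A, 10)
```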
- NumPy
- PyTorch
- CuPy
- TensorFlow
- JAX
- SciPy

- CuPy
- JAX

- NumPy
- PyTorch
- JAX
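A backend is selected with the `backend=` keyword from the usage example above. Only `backend="torch"` appears there, so the string used below for CuPy is an assumption:

```python
# Hedged sketch: backend selection via the backend= keyword argument.
# Only backend="torch" is shown in the usage example above; "cupy" as the
# name of the CuPy backend is an assumption.
import numpy as np
import pyspectra

X = np.random.uniform(size=(10_000, 1000))
U, s, V = pyspectra.truncated_svd(X, 20, backend="cupy")
```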
Example of a dense matrix-vector product with NumPy:

```python
class DenseNumpyBackend:
    def __init__(self, mat):
        self._mat = mat

    def perform_op(self, x, out):
        # out <- mat @ x, written into the buffer provided by the solver
        out[:] = self._mat.dot(x)
```
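Such a backend only has to implement `perform_op`. A small sketch of what it computes (how pyspectra consumes a user-defined backend object is not shown here):

```python
# Check what perform_op computes: out <- mat @ x.
# DenseNumpyBackend is the class defined above; handing it to pyspectra
# itself is not covered by this sketch.
import numpy as np

mat = np.random.uniform(size=(1_000, 1_000))
x = np.random.uniform(size=1_000)
out = np.empty(1_000)

DenseNumpyBackend(mat).perform_op(x, out)
print(np.allclose(out, mat @ x))  # True
```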
GPU-accelerated matrix-vector product with PyTorch:

```python
import torch

class DenseTorchBackend:
    def __init__(self, mat):
        # Copy the matrix to the GPU once, at construction time
        self._mat = torch.from_numpy(mat).cuda()

    def perform_op(self, x, out):
        # Only the vectors cross the CPU/GPU boundary on each call
        x = torch.from_numpy(x).cuda()
        yt = torch.from_numpy(out).cuda()
        torch.mv(self._mat, x, out=yt)
        out[:] = yt.cpu().numpy()
```
Note that pyspectra is not the fastest option for basic usage (e.g. a real symmetric eigensolver), as it introduces overhead: CPU/GPU memory copies and C++/Python interoperability.
For example, with the CuPy backend the sequence of calls looks roughly like this:
- your Python code
- pyspectra Python code
- pyspectra C++ code
- Spectra C++ code
- user-defined matrix ops in Python
- CuPy Python code
- the actual number crunching with cuBLAS
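For illustration, a CuPy backend analogous to the examples above makes those boundaries explicit. A hedged sketch; the class and method names mirror the earlier examples and are not part of pyspectra's public API:

```python
# Hedged sketch: a CuPy analogue of the backends above. Each perform_op call
# crosses the Python/C++ boundary and copies the vectors between host and
# device; only the final mat-vec product runs in cuBLAS via CuPy.
import cupy as cp

class DenseCupyBackend:
    def __init__(self, mat):
        self._mat = cp.asarray(mat)   # copy the matrix to GPU memory once

    def perform_op(self, x, out):
        xg = cp.asarray(x)            # host -> device copy of the input vector
        yg = self._mat.dot(xg)        # the actual number crunching (cuBLAS)
        out[:] = cp.asnumpy(yg)       # device -> host copy into the provided buffer
```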