# FAQ
### How can I compute the eigenvalues and/or singular values of a general sparse matrix? ###
MTJ uses LAPACK for computing eigenvalues and singular values. LAPACK only does this for dense or structured sparse matrices, not for general sparse matrices (such as the compressed-row matrices). Its singular value routines operate only on dense matrices, and if eigenvectors or singular vectors are computed, they are stored as the columns of dense matrices.
The power method and its derivatives are a simple way to get portions of the matrix spectrum, and they are easy to implement on top of MTJ (see the sketch below). More sophisticated methods are described in the Eigenvalue Templates book. Lastly, outside of Java there is the Fortran package ARPACK, a well-regarded library for sparse spectral analysis.
MTJ supports sparse matrix storage but does not provide eigenvalue solvers for general sparse matrices. Have a look at Sparse Eigensolvers for Java.
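As an illustration, here is a minimal power-method sketch on top of MTJ. It estimates only the dominant (largest-magnitude) eigenvalue, and the method name, iteration limit, and tolerance are illustrative choices, not part of MTJ:

```java
import no.uib.cipr.matrix.DenseVector;
import no.uib.cipr.matrix.Matrices;
import no.uib.cipr.matrix.Matrix;
import no.uib.cipr.matrix.Vector;

public class PowerMethod {

    /** Estimates the eigenvalue of A with the largest magnitude. */
    static double dominantEigenvalue(Matrix A, int maxIter, double tol) {
        // Random, normalised start vector (assumed not orthogonal to
        // the dominant eigenvector, and A x assumed nonzero)
        Vector x = Matrices.random(A.numColumns());
        x.scale(1.0 / x.norm(Vector.Norm.Two));

        Vector y = new DenseVector(x.size());
        double lambda = 0.0;
        for (int k = 0; k < maxIter; ++k) {
            A.mult(x, y);                 // y = A x
            double next = x.dot(y);       // Rayleigh quotient estimate
            x.set(y).scale(1.0 / y.norm(Vector.Norm.Two));
            if (Math.abs(next - lambda) < tol)
                return next;
            lambda = next;
        }
        return lambda;
    }
}
```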
### MTJ fails with UnsupportedClassVersionError ###
You must use Java 5 or newer. Older versions of Java are not supported.
### How does MTJ compare to other Java matrix packages? ###
The three primary alternatives are JAMA, Colt and Apache Commons Math.
JAMA is a small and easy to use package for dense matrix computations. It can compute all common decompositions and solve linear and least squares systems. It is based on the same algorithms as found in LINPACK and EISPACK. MTJ is a much larger package which includes more matrix types, is based on the more modern LAPACK library, and it supports general sparse computations.
Colt is a collection of libraries for high-performance computing. It includes much more than matrix algorithms: its own collection types, statistical methods, random number generators, and multidimensional arrays. Its linear algebra part can be divided in two: one part which is largely JAMA, but with some performance enhancements; and a second which consists of 1D, 2D, and 3D matrices storing `double`s and `Object`s, holding them in either dense or sparse arrays. The sparse arrays are implemented either as hashmaps or using compressed rows. MTJ does not supply 3D matrices, as they are not actually linear operators (more like 3D arrays). However, MTJ's sparse matrices are highly optimized, and it supplies a large set of iterative solvers and preconditioners (a small example follows below). The ability of its dense matrices to use a native BLAS ensures that they attain the best performance available on a given machine.
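For instance, here is a small sketch of solving a sparse symmetric positive definite system with MTJ's conjugate gradients solver; the 3×3 tridiagonal data is purely illustrative:

```java
import no.uib.cipr.matrix.DenseVector;
import no.uib.cipr.matrix.Vector;
import no.uib.cipr.matrix.sparse.CG;
import no.uib.cipr.matrix.sparse.CompRowMatrix;
import no.uib.cipr.matrix.sparse.IterativeSolverNotConvergedException;

public class SparseSolveExample {
    public static void main(String[] args) throws IterativeSolverNotConvergedException {
        // Sparsity pattern: column indices of the nonzeros in each row
        int[][] nz = { {0, 1}, {0, 1, 2}, {1, 2} };
        CompRowMatrix A = new CompRowMatrix(3, 3, nz);
        A.set(0, 0, 2); A.set(0, 1, -1);
        A.set(1, 0, -1); A.set(1, 1, 2); A.set(1, 2, -1);
        A.set(2, 1, -1); A.set(2, 2, 2);

        Vector b = new DenseVector(new double[] {1, 0, 1});
        Vector x = new DenseVector(3);  // start from the zero vector

        new CG(x).solve(A, b, x);       // x now holds the solution
    }
}
```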
Apache Commons Math is an attempt to create a complete toolkit for numerical methods. It includes linear algebra classes, but does not provide support for sparse matrices. If you have only simple matrix requirements (e.g. small dense matrices), I would actually recommend the Apache Commons Math library for new projects.
### Could MTJ be merged into Apache Commons Math? ###
We'd love it! Indeed, this idea was well received, but it was ultimately rejected.
Unfortunately, the netlib-java backend requires Fortran code that is converted directly into Java bytecode, and therefore MTJ is not eligible for inclusion in Apache Commons Math. The reason is that the Fortran netlib sources use `goto` statements, which cannot be converted into Java source.
### Does MTJ support other numerical types, such as single precision? ###
No. This is partly because MTJ is built on top of BLAS, which limits the numerical types to real and complex numbers, and partly because JLAPACK, the Java translation of BLAS and LAPACK, is only available in double precision. Also, CBLAS and CLAPACK differ somewhat in how complex numbers are treated.
### Must I compile a native BLAS library to use MTJ? ###
No. In the absence of a native BLAS, MTJ automatically falls back to JLAPACK, the Java translation. Only for larger problems should you expect performance differences, and even then the difference may not be large.
### Earlier versions included additional functionality ###
Previous incarnations of MTJ included some support for parallelisation and some simple sparse eigenvalue solvers. These were removed, and the interfaces of the package simplified, in the current version. The reasons were to make the package simpler to use, to remove sources of bugs and other problems, and to ensure a higher overall quality of each release. Also, this functionality was seldom used by target applications.
### How do I invert a matrix? ###
```java
DenseMatrix A = ...; // the matrix to invert
DenseMatrix I = Matrices.identity(A.numRows());
DenseMatrix AI = I.copy();
A.solve(I, AI); // AI now holds the inverse of A
```
If you just need to solve the linear system `AX = B`, it is faster to solve it directly, like this:
```java
DenseMatrix A = ...;
DenseMatrix B = ...;
DenseMatrix X = B.copy();
A.solve(B, X); // X now holds the solution of AX = B
```
### After performing a QR, EVD, SVD, etc. decomposition, my matrix changed ###
This is intentional. The `factor` methods overwrite the passed matrix to save memory, a design inherited from the use of LAPACK. However, the `factorize` methods operate on a copy, and can be used instead. Another option is to pass a copy of the matrix to the `factor` method. Both options are shown in the sketch below.
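Here is a minimal sketch of both options, using the `SVD` decomposition as an example; the 2×2 sample matrix is illustrative:

```java
import java.util.Arrays;
import no.uib.cipr.matrix.DenseMatrix;
import no.uib.cipr.matrix.NotConvergedException;
import no.uib.cipr.matrix.SVD;

public class FactorizeExample {
    public static void main(String[] args) throws NotConvergedException {
        DenseMatrix A = new DenseMatrix(new double[][] { {4, 0}, {3, -5} });

        // Option 1: factorize() works on an internal copy, so A is left intact
        SVD svd = SVD.factorize(A);
        System.out.println(Arrays.toString(svd.getS())); // the singular values

        // Option 2: factor() overwrites its argument; pass a copy to preserve A
        SVD svd2 = new SVD(A.numRows(), A.numColumns());
        svd2.factor(A.copy());
    }
}
```

The same pattern applies to `QR`, `EVD`, and the other decomposition classes.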