LargestEigenValueFinder bugs #3
Comments
I also recommend the following change, which makes #nextLargestEigenValueFinder return a finder configured with the same value for maximumIterations as the original.
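A minimal sketch of that change, assuming the finder exposes `desiredPrecision:` and `maximumIterations:` setters (the deflated matrix that the real method hands to the next finder is omitted here):

```smalltalk
nextLargestEigenValueFinder
	"Sketch only: answer a finder for the next-largest eigenvalue that is
	 configured like the receiver. Accessor names are assumptions, and the
	 deflated matrix normally passed to the new finder is left out."
	| next |
	next := self class new.
	next
		desiredPrecision: desiredPrecision;
		maximumIterations: maximumIterations.
	^ next
```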
Thanks for the feedback; we will investigate this.
Here's a unit test from VisualWorks: `rapidFirstEigenvalue`.
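A sketch of what such a test might look like, built around the matrix quoted in the report below; the `Matrix` and `LargestEigenValueFinder` class names are taken from this issue, while the constructor and accessor messages are assumptions:

```smalltalk
rapidFirstEigenvalue
	"Sketch of a regression test for the rapid-convergence case; the messages
	 sent to Matrix and LargestEigenValueFinder are assumptions.
	 The dominant eigenpair of this stochastic matrix is 1 -> (1 1 1)."
	| m finder next |
	m := Matrix rows: #( (0.5 0.4 0.1) (0.3 0.4 0.3) (0.2 0.3 0.5) ).
	finder := LargestEigenValueFinder new matrix: m; yourself.
	finder evaluate.
	self assert: (finder eigenvalue - 1.0) abs < 1.0e-6.
	"Without the fix, the finder for the next eigenvalue is built from an
	 unconverged transpose eigenvector and the later eigenvectors are wrong."
	next := finder nextLargestEigenValueFinder.
	next evaluate.
	self assert: (next eigenvalue - 0.34142136) abs < 1.0e-6
```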
Thanks for the tests!
Yes. The original code had this bug. It only shows up on matrices where the algorithm converges rapidly on the eigenvalue, so it is likely Didier didn't notice it.
The LargestEigenValueFinder class has a bug that causes it to produce incorrect eigenvectors in some cases. When the algorithm converges to the eigenvector quickly, the eigenvector of the transpose matrix may not have converged yet. This causes the #nextLargestEigenValueFinder method to pass an inaccurate matrix to the next finder. Normally this has no impact on the eigenvalues themselves, but it can significantly distort the eigenvectors. The fix is simple: add an instance variable transposeResult (the name is chosen to be symmetric with result; a better name might be transposeEigenValue) and modify #evaluateIteration so that it requires convergence on both the eigenvalue and the transpose eigenvalue:
```smalltalk
evaluateIteration
	"Iterate the product of the matrix of the eigen vector and the transpose.
	 (c) Copyrights Didier BESSET, 1999, all rights reserved.
	 Initial code: 6/1/99"
```
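A sketch of how the full modified method might read. The instance variables `result` and `transposeResult` are the ones discussed above; the remaining names (`matrix`, `eigenvector`, `transposeEigenvector`, `desiredPrecision`) and the matrix/vector arithmetic are assumptions. The key point is that the method answers the larger of the two changes, so the surrounding iterative process keeps going until both the eigenvalue and the transpose eigenvalue have settled:

```smalltalk
evaluateIteration
	"Iterate the product of the matrix of the eigen vector and the transpose.
	 Sketch of the proposed fix: track the eigenvalue estimates of both
	 iterations and answer the larger of the two changes."
	| oldResult oldTransposeResult |
	oldResult := result.
	oldTransposeResult := transposeResult.
	eigenvector := matrix * eigenvector.
	result := eigenvector at: 1.
	eigenvector := eigenvector * (1 / result).
	transposeEigenvector := transposeEigenvector * matrix.	"transpose iteration"
	transposeResult := transposeEigenvector at: 1.
	transposeEigenvector := transposeEigenvector * (1 / transposeResult).
	(oldResult isNil or: [ oldTransposeResult isNil ])
		ifTrue: [ ^ 10 * desiredPrecision ].
	^ (result - oldResult) abs max: (transposeResult - oldTransposeResult) abs
```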
Here's a matrix which shows the difference:

```smalltalk
m := Matrix rows: #((0.5d 0.4d 0.1d) (0.3d 0.4d 0.3d) (0.2d 0.3d 0.5d)).
```
Any accurate eigenvalue finder will list (value -> vector) -- note that many normalize the eigenvectors so that their last element is 1:
1 -> (1 1 1)
0.34142136 -> (-1.03099482 0.15872440 1)
0.05857864 -> (2.79570070 -3.33520499 1)
whereas the old algorithm gives:
with the supplied change we get:
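(As an aside, the accurate pairs listed above can be sanity-checked with a few lines of dialect-neutral Smalltalk, independent of any finder: for an eigenpair, A*v - lambda*v should be close to the zero vector, up to the rounding of the printed digits.)

```smalltalk
| a residual |
a := #( (0.5 0.4 0.1) (0.3 0.4 0.3) (0.2 0.3 0.5) ).
residual := [ :lambda :v |
	(a collect: [ :row |
		(row with: v collect: [ :aij :vj | aij * vj ])
			inject: 0 into: [ :sum :x | sum + x ] ])
		with: v collect: [ :avi :vi | avi - (lambda * vi) ] ].
Transcript show: (residual value: 1 value: #(1 1 1)) printString; cr.
Transcript show: (residual value: 0.34142136
	value: #(-1.03099482 0.15872440 1)) printString; cr
```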
I don't use Pharo (I use this code under VW), so I'm not going to go to the effort of a PR.
BTW, this bug will show up a lot in MCMC algorithms!