
I'm currently computing eigenvalues of several matrices and trying to find closed-form expressions for them. The matrix is Hermitian (self-adjoint) and tridiagonal; additionally, every diagonal element is positive and every off-diagonal element is negative.

Due to what I suspect is the general unsolvability of quintic (and higher-degree) polynomials in radicals, sympy cannot solve the eigenvalues of my 14x14 matrix symbolically.

Numpy has given me great results that I'm sometimes able to match to closed forms via wolfram-alpha, but other times the precision is insufficient to determine which of several candidate closed forms is correct. As a result, I'd like to increase the precision with which `numpy.linalg.eigvalsh` outputs eigenvalues. Any help would be greatly appreciated!

Cegarza
  • Check out whether the `scipy` version of it [`scipy.linalg.eigvalsh()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvalsh.html#scipy.linalg.eigvalsh) can be of any help. There is even a specialized method for tridiagonals [`scipy.linalg.eigvalsh_tridiagonal()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvalsh_tridiagonal.html#scipy.linalg.eigvalsh_tridiagonal). – norok2 Jul 16 '19 at 22:55
  • [this question](https://stackoverflow.com/questions/6876377/numpy-arbitrary-precision-linear-algebra) might be helpful – Anton Menshov Jul 17 '19 at 19:07

1 Answer


Eigenvalue problems of size >= 5 have no general closed-form solution (for the reason you mention: the characteristic polynomial has degree >= 5, and by the Abel–Ruffini theorem such polynomials are not generally solvable in radicals), so all general eigensolvers are iterative. As a result, there are two main sources of error. First, there is the convergence error of the algorithm itself: even if every computation were exact, you would need a certain number of iterations to reach a given accuracy. Second, finite precision limits the overall achievable accuracy. Numerical analysts study how accurate a solution you can get for a given algorithm and precision, and there are established results on this.

As for your specific problem: if you are not getting enough accuracy, there are a few things you can try. First, make sure you are using the best solver for your matrix type. Since your matrix is symmetric and tridiagonal, use a solver specialized for that structure (as norok2 suggested).
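For example, the specialized scipy routine from the comments takes just the two diagonals rather than the full matrix. This is a sketch using an illustrative 14x14 matrix with your sign pattern (positive diagonal, negative off-diagonals), not your actual entries:

```python
import numpy as np
from scipy.linalg import eigvalsh_tridiagonal

# Illustrative 14x14 symmetric tridiagonal matrix with the stated
# sign pattern: positive diagonal, negative off-diagonals.
n = 14
d = np.full(n, 2.0)        # diagonal entries
e = np.full(n - 1, -1.0)   # off-diagonal entries

# Specialized solver: works directly on the two diagonals and
# returns the eigenvalues in ascending order.
w = eigvalsh_tridiagonal(d, e)
print(w)
```

This still runs in float64 under the hood, but the structure-aware algorithm is typically both faster and more accurate than a generic dense solver.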

If that still doesn't give you enough accuracy, you can try increasing the precision. However, the main issue with doing this in numpy is that the LAPACK routines under the hood are compiled for float64. Thus, even if a numpy function accepts higher-precision input (e.g. float128), it will round it to float64 before calling LAPACK. It might be possible to recompile those routines at higher precision, but that is probably not worth the effort for your particular problem. (As a side note, I'm not very familiar with scipy, so it may have eigensolvers written in Python that support other types, but you would need to check that every step actually runs at the higher precision and doesn't silently round to float64 somewhere.)

For your problem, I would suggest the package mpmath, which supports arbitrary-precision linear algebra. It is slower, since everything is done in software, but for 14x14 matrices it should still be quick.
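A minimal sketch of that approach, using mpmath's symmetric eigensolver `mp.eigsy`; the 3x3 matrix, the helper name, and the 50-digit setting are illustrative choices, not part of your problem:

```python
from mpmath import mp, mpf

mp.dps = 50  # work with 50 significant decimal digits

def tridiag_eigenvalues(diag, off):
    """Eigenvalues of a real symmetric tridiagonal matrix at mp.dps digits.

    Builds a dense mpmath matrix from the diagonal and off-diagonal
    entries and calls mp.eigsy, mpmath's symmetric eigensolver (pure
    Python, so it runs at whatever precision mp.dps is set to).
    """
    n = len(diag)
    A = mp.zeros(n, n)
    for i in range(n):
        A[i, i] = mpf(diag[i])
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = mpf(off[i])
    E = mp.eigsy(A, eigvals_only=True)
    return sorted(E[i] for i in range(n))

# Illustrative 3x3 case with your sign pattern; the exact eigenvalues
# of tridiag(-1, 2, -1) of size n are 2 - 2*cos(k*pi/(n+1)), k = 1..n.
vals = tridiag_eigenvalues([2, 2, 2], [-1, -1])
print(vals)
```

With 30-50 digits of output you should be able to discriminate cleanly between candidate closed forms in wolfram-alpha.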

tch