The ScaLAPACK (Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed-memory MIMD parallel computers. It is currently written in a Single-Program-Multiple-Data style using explicit message passing for interprocessor communication. It assumes matrices are laid out in a two-dimensional block-cyclic decomposition. ScaLAPACK is designed for heterogeneous computing and is portable to any computer that supports MPI or PVM.
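The two-dimensional block-cyclic layout can be illustrated with a small sketch (plain Python, not ScaLAPACK's actual API; the function name `owner` and the grid parameters are hypothetical). Consecutive blocks of `block_size` rows or columns are dealt out to processes round-robin, which balances load across the process grid:

```python
def owner(global_index: int, block_size: int, num_procs: int) -> int:
    """Process coordinate that owns a given global row (or column) index
    under a 1D block-cyclic distribution; applying it to rows and columns
    independently gives the 2D block-cyclic mapping."""
    return (global_index // block_size) % num_procs

# Example: an 8x8 matrix with 2x2 blocks on a 2x2 process grid.
# grid[i][j] is the (process row, process column) owning entry (i, j).
N, NB, P = 8, 2, 2
grid = [[(owner(i, NB, P), owner(j, NB, P)) for j in range(N)] for i in range(N)]
```

Each process thus holds many small blocks scattered over the matrix, which is what gives the layout its good load balance for factorization algorithms.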
more from Wikipedia
PLAPACK
PLAPACK (Parallel Linear Algebra Package) is an infrastructure for coding parallel linear algebra algorithms at a high level of abstraction.
Inverse iteration
In numerical analysis, inverse iteration is an iterative eigenvalue algorithm. It allows one to find an approximate eigenvector when an approximation to a corresponding eigenvalue is already known. The method is conceptually similar to the power method and is also known as the inverse power method. It appears to have originally been developed to compute resonance frequencies in the field of structural mechanics.
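A minimal sketch of the method, assuming a dense matrix and using NumPy's linear solver (the function name and iteration count are illustrative, not a library API). Each step solves a shifted system and normalizes, which amplifies the eigenvector whose eigenvalue is closest to the shift:

```python
import numpy as np

def inverse_iteration(A, mu, num_iters=50):
    """Approximate the eigenvector of A for the eigenvalue closest to mu.
    mu must not be an exact eigenvalue, or the shifted matrix is singular."""
    n = A.shape[0]
    v = np.arange(1, n + 1, dtype=float)
    v /= np.linalg.norm(v)                # arbitrary normalized starting vector
    shifted = A - mu * np.eye(n)
    for _ in range(num_iters):
        w = np.linalg.solve(shifted, v)   # one step of the inverse power method
        v = w / np.linalg.norm(w)
    return v

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
v = inverse_iteration(A, mu=2.9)         # converges to the eigenvector for lambda = 3
```

The closer the shift mu is to the target eigenvalue, the faster the convergence, since the corresponding eigenvalue of the inverted shifted matrix dominates the others by a larger margin.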
Machine epsilon
Machine epsilon gives an upper bound on the relative error due to rounding in floating-point arithmetic. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps or unit roundoff, and it has the symbols Greek epsilon (ε) or bold Roman u, respectively.
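The value can be probed directly with a common textbook loop; this sketch assumes IEEE 754 double precision, which is what Python floats use:

```python
import sys

def machine_epsilon():
    """Smallest power of two eps such that 1.0 + eps != 1.0
    in this machine's floating-point arithmetic."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

machine_epsilon() == sys.float_info.epsilon  # 2**-52 for IEEE 754 double
```

Note that some texts define machine epsilon as half this value (the unit roundoff); the loop above finds the gap between 1.0 and the next representable float.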
Tridiagonal matrix
In linear algebra, a tridiagonal matrix is a matrix that has nonzero elements only on the main diagonal, the first diagonal below it, and the first diagonal above it. For example, the following matrix is tridiagonal:

( 1 4 0 0 )
( 3 4 1 0 )
( 0 2 3 4 )
( 0 0 1 3 )

The determinant of a tridiagonal matrix is given by a continuant of its elements. An orthogonal transformation of a symmetric matrix to tridiagonal form can be done with the Lanczos algorithm.
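The continuant is the three-term recurrence f_k = a_k f_(k-1) − b_(k-1) c_(k-1) f_(k-2), with f_0 = 1 and f_1 = a_1, where a is the main diagonal, b the superdiagonal, and c the subdiagonal. A sketch (the function name and argument layout are illustrative):

```python
def tridiag_det(a, b, c):
    """Determinant of a tridiagonal matrix via the continuant recurrence.
    a: main diagonal (length n); b: superdiagonal and c: subdiagonal (length n-1)."""
    f_prev2, f_prev1 = 1.0, a[0]              # f_0 = 1, f_1 = a_1
    for k in range(1, len(a)):
        f_k = a[k] * f_prev1 - b[k - 1] * c[k - 1] * f_prev2
        f_prev2, f_prev1 = f_prev1, f_k
    return f_prev1

# Example: the determinant of [[2,1,0],[1,2,1],[0,1,2]] is 4.
tridiag_det([2.0, 2.0, 2.0], [1.0, 1.0], [1.0, 1.0])
```

This runs in O(n) time and O(1) extra space, versus O(n^3) for a general determinant, which is why tridiagonal structure is worth exploiting.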
Dot product
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number obtained by multiplying corresponding entries and then summing those products. The name "dot product" is derived from the centered dot "·" that is often used to designate this operation; the alternative name "scalar product" emphasizes the scalar nature of the result.
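The definition translates directly into code; a minimal sketch:

```python
def dot(u, v):
    """Dot product: multiply corresponding entries, then sum the products."""
    if len(u) != len(v):
        raise ValueError("sequences must have equal length")
    return sum(x * y for x, y in zip(u, v))

dot([1, 3, -5], [4, -2, -1])  # 1*4 + 3*(-2) + (-5)*(-1) = 3
```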
Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose: A = A^T. The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to bottom right): if the entries are written as A = (aij), then aij = aji for all indices i and j. For example, the following 3×3 matrix is symmetric:

( 1 7 3 )
( 7 4 5 )
( 3 5 6 )

Every diagonal matrix is symmetric, since all off-diagonal entries are zero.
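The condition aij = aji is easy to check directly; a small sketch (the helper name is hypothetical, and the matrix is represented as a list of lists):

```python
def is_symmetric(A):
    """True iff square matrix A (list of lists) equals its transpose,
    i.e. A[i][j] == A[j][i] for all indices i and j."""
    n = len(A)
    return (all(len(row) == n for row in A)
            and all(A[i][j] == A[j][i] for i in range(n) for j in range(n)))

is_symmetric([[1, 7, 3], [7, 4, 5], [3, 5, 6]])  # True
is_symmetric([[1, 2], [3, 4]])                   # False
```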