Concepts in Stability of block algorithms with fast level-3 BLAS
Basic Linear Algebra Subprograms
Basic Linear Algebra Subprograms (BLAS) is a de facto application programming interface standard for libraries that perform basic linear algebra operations such as vector and matrix multiplication. The routines were first published in 1979 and are used to build larger packages such as LAPACK. Because BLAS is heavily used in high-performance computing, highly optimized implementations of the interface have been developed by hardware vendors such as Intel and AMD, as well as by other authors.
more from Wikipedia
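A minimal pure-Python sketch of the level-3 BLAS GEMM contract, C ← αAB + βC, may help make the interface concrete. The function name and list-of-lists representation here are illustrative only; real BLAS implementations compute the same operation with blocked, vectorized kernels.

```python
def gemm(alpha, A, B, beta, C):
    """Toy level-3 BLAS GEMM: return alpha * A @ B + beta * C.

    A is m x k, B is k x n, C is m x n; matrices are lists of rows.
    """
    m, k = len(A), len(A[0])
    n = len(B[0])
    return [[beta * C[i][j]
             + alpha * sum(A[i][p] * B[p][j] for p in range(k))
             for j in range(n)]
            for i in range(m)]
```

With alpha = 1 and beta = 0 this reduces to a plain matrix product, which is how GEMM is most commonly invoked.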
Block size (cryptography)
In modern cryptography, symmetric-key ciphers are generally divided into stream ciphers and block ciphers. Block ciphers operate on a fixed-length string of bits; the length of this bit string is the block size. Both the input and output are the same length: the output cannot be shorter than the input (this follows from the pigeonhole principle and the fact that the cipher must be reversible), and it is undesirable for the output to be longer than the input.
more from Wikipedia
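The fixed block size and the input/output-length equality can be illustrated with a deliberately insecure toy cipher (a single XOR with the key). The 8-byte block size is an arbitrary choice for the example; AES, by comparison, uses a 16-byte (128-bit) block.

```python
BLOCK_SIZE = 8  # bytes; toy value chosen for illustration

def encrypt_block(block: bytes, key: bytes) -> bytes:
    """XOR one fixed-size block with the key (toy cipher, NOT secure)."""
    assert len(block) == BLOCK_SIZE and len(key) == BLOCK_SIZE
    return bytes(b ^ k for b, k in zip(block, key))

# XOR is its own inverse, so decryption is the same operation --
# a concrete instance of the reversibility the pigeonhole argument needs.
decrypt_block = encrypt_block
```

Note that the ciphertext is exactly BLOCK_SIZE bytes, the same length as the plaintext.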
Coppersmith–Winograd algorithm
In linear algebra, the Coppersmith–Winograd algorithm, named after Don Coppersmith and Shmuel Winograd, was the asymptotically fastest known algorithm for square matrix multiplication until 2010. It can multiply two matrices in O(n^2.376) time. This is an improvement over the naïve O(n^3) algorithm and the O(n^2.807) Strassen algorithm. Algorithms with better asymptotic running time than the Strassen algorithm are rarely used in practice.
more from Wikipedia
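The source of Strassen's O(n^2.807) bound is that a 2x2 block product can be formed with 7 multiplications instead of the naive 8; applied recursively this gives exponent log2(7) ≈ 2.807. A sketch of the seven-product base case (scalar entries stand in for matrix blocks):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products (naive uses 8)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p1 + p5 - p3 - p7]]
```

The extra additions are why the scheme only pays off for large matrices, and part of why asymptotically faster algorithms such as Coppersmith–Winograd are rarely used in practice.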
Numerical linear algebra
Numerical linear algebra is the study of algorithms for performing linear algebra computations, most notably matrix operations, on computers. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, fluid dynamics, and many other areas.
more from Wikipedia
LAPACK
LAPACK (Linear Algebra PACKage) is a software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least-squares problems, eigenvalue problems, and singular value decompositions. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008).
more from Wikipedia
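LAPACK's linear-system drivers (such as dgesv) factorize the matrix and then solve, returning an `info` flag that reports singularity. A toy pure-Python solver echoing that interface shape, using Gaussian elimination with partial pivoting (the function name and return convention are illustrative, not the LAPACK API):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting; returns (x, info).

    info == 0 on success; info == k > 0 signals a zero pivot at step k,
    echoing how LAPACK drivers report singular matrices.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivot
        if M[p][k] == 0.0:
            return None, k + 1  # singular to working precision
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x, 0
```

In real code one would call LAPACK (e.g. via a vendor BLAS/LAPACK build) rather than reimplement this; the sketch only shows the factorize-then-solve pattern.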
LU decomposition
In linear algebra, LU decomposition (also called LU factorization) factorizes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. LU decomposition is a key step in several fundamental numerical algorithms in linear algebra such as solving a system of linear equations, inverting a matrix, or computing the determinant of a matrix. It can be viewed as the matrix form of Gaussian elimination.
more from Wikipedia
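The factorization itself can be sketched with the Doolittle scheme, which builds a unit lower triangular L and an upper triangular U row by row so that A = LU. This minimal version omits the pivoting (permutation matrix) mentioned above, so it assumes nonzero pivots:

```python
def lu(A):
    """Doolittle LU factorization without pivoting.

    Returns (L, U) with L unit lower triangular, U upper triangular,
    and A = L @ U. Assumes all pivots U[i][i] are nonzero.
    """
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

Once L and U are known, solving Ax = b reduces to one forward and one backward triangular substitution, which is exactly the matrix form of Gaussian elimination.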
Iterative refinement
Iterative refinement is an iterative method proposed by James H. Wilkinson to improve the accuracy of numerical solutions to systems of linear equations. When solving a linear system Ax = b, rounding errors can cause the computed solution x̂ to deviate from the exact solution x. Starting with x1 = x̂, iterative refinement computes a sequence {x1, x2, x3, …} which converges to x when certain assumptions are met.
more from Wikipedia
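The refinement loop is short: compute the residual r = b − Ax, solve for a correction d from r with the same (inexact) solver, and update x ← x + d. A sketch, where the 2x2 matrix A and its approximate inverse M used in the usage example are made-up values chosen so the iteration contracts:

```python
def iterative_refinement(matvec, approx_solve, b, iters=6):
    """Wilkinson-style iterative refinement.

    matvec(v) applies A; approx_solve(r) is an inexpensive, approximate
    solve for A d = r (e.g. reusing a low-precision factorization).
    """
    x = approx_solve(b)  # initial, inexact solution x1
    for _ in range(iters):
        r = [bi - axi for bi, axi in zip(b, matvec(x))]  # residual b - A x
        d = approx_solve(r)                              # correction
        x = [xi + di for xi, di in zip(x, d)]
    return x

# Toy usage: A = [[4, 1], [1, 3]] and an approximate inverse M ~ A^{-1}
# (illustrative numbers only); b is chosen so the exact solution is [1, 2].
def matvec(v):
    return [4.0 * v[0] + v[1], v[0] + 3.0 * v[1]]

def approx_solve(r):
    return [0.27 * r[0] - 0.09 * r[1], -0.09 * r[0] + 0.36 * r[1]]

x = iterative_refinement(matvec, approx_solve, [6.0, 7.0])
```

Each pass multiplies the error by (I − MA), so convergence requires the approximate solver to be good enough that this factor is a contraction, which is the "certain assumptions" clause above.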