Concepts in "A column approximate minimum degree ordering algorithm"
Degree (graph theory)
In graph theory, the degree (or valency) of a vertex of a graph is the number of edges incident to the vertex, with loops counted twice. The degree of a vertex v is denoted deg(v). The maximum degree of a graph G, denoted by Δ(G), and the minimum degree, denoted by δ(G), are the maximum and minimum of the degrees of its vertices. In a regular graph, all degrees are the same, and so we can speak of the degree of the graph.
more from Wikipedia
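As a small illustration (not from the article), vertex degrees and Δ(G), δ(G) can be computed in Python from an adjacency list, with a self-loop contributing 2 as the definition above requires:

```python
# A small undirected graph as an adjacency list (illustrative example).
graph = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1],
    3: [1],
    4: [],          # isolated vertex: degree 0
}

def degree(g, v):
    """Degree of v: each neighbour counts once, a self-loop counts twice."""
    return sum(2 if u == v else 1 for u in g[v])

degrees = {v: degree(graph, v) for v in graph}
max_deg = max(degrees.values())   # Δ(G)
min_deg = min(degrees.values())   # δ(G)
```

Here Δ(G) = 3 (vertex 1) and δ(G) = 0 (the isolated vertex 4).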
Pivot element
The pivot or pivot element is the element of a matrix, an array, or some other kind of finite set, which is selected first by an algorithm to perform certain calculations. In the case of matrix algorithms, a pivot entry is usually required to be nonzero, and often far from zero in absolute value; the process of finding this element is called pivoting.
more from Wikipedia
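A minimal sketch of partial pivoting in Python (an illustration, not the article's method): at elimination step k, choose as pivot the entry of largest absolute value in column k on or below the diagonal, and swap its row into place.

```python
def partial_pivot(A, k):
    """Row index of the largest |entry| in column k, at or below row k."""
    return max(range(k, len(A)), key=lambda i: abs(A[i][k]))

A = [[0.0, 2.0],
     [4.0, 1.0]]
p = partial_pivot(A, 0)        # row 1 holds the largest |entry| in column 0
A[0], A[p] = A[p], A[0]        # swap so the pivot is nonzero and large
```

The swap avoids dividing by the zero in position (0, 0) and improves numerical stability.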
Gaussian elimination
In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations. It can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix. The method is named after Carl Friedrich Gauss, but it was not invented by him. Elementary row operations are used to reduce a matrix to what is called triangular form (in numerical analysis) or row echelon form (in abstract algebra).
more from Wikipedia
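The elimination-then-back-substitution procedure can be sketched in a few lines of Python (a self-contained illustration, with partial pivoting for stability):

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.
    A is a list of rows; A and b are not modified."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # swap in the row with the largest |entry| in column k (pivoting)
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # eliminate column k below the diagonal
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back-substitution on the resulting triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

For example, gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]) returns the solution of 2x + y = 3, x + 3y = 5.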
LU decomposition
In linear algebra, LU decomposition (also called LU factorization) factorizes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. LU decomposition is a key step in several fundamental numerical algorithms in linear algebra such as solving a system of linear equations, inverting a matrix, or computing the determinant of a matrix. It can be viewed as the matrix form of Gaussian elimination.
more from Wikipedia
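A minimal Doolittle-style LU factorization in Python (an illustrative sketch; it omits the permutation matrix and assumes no zero pivot is encountered):

```python
def lu(A):
    """Factor A = L @ U with L unit lower triangular and U upper
    triangular (Doolittle method, no pivoting)."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):            # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):        # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

For A = [[4, 3], [6, 3]] this gives L = [[1, 0], [1.5, 1]] and U = [[4, 3], [0, -1.5]], and multiplying L by U recovers A.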
Cholesky decomposition
In linear algebra, the Cholesky decomposition or Cholesky triangle is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. It was discovered by André-Louis Cholesky for real matrices. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
more from Wikipedia
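For a real symmetric positive-definite matrix, the factor L with A = L Lᵀ can be computed directly; a short Python sketch (illustrative, no validity checks on the input):

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L @ L^T, for a real symmetric
    positive-definite matrix A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below the diagonal
    return L
```

For A = [[4, 2], [2, 3]] this yields L = [[2, 0], [1, √2]]; each entry needs only entries already computed, which is why the factorization takes roughly half the work of LU.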
Column vector
In linear algebra, a column vector or column matrix is an m × 1 matrix, i.e. a matrix consisting of a single column of m elements. The transpose of a column vector is a row vector and vice versa. The set of all column vectors with a given number of elements forms a vector space which is the dual space to the set of all row vectors with that number of elements.
more from Wikipedia
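Representing an m × 1 column vector as a nested list in Python makes the column/row duality concrete (a small illustration):

```python
# A 3 x 1 column vector stored as a list of single-element rows.
col = [[1], [2], [3]]

def transpose(M):
    """Transpose a matrix stored as a list of rows."""
    return [list(row) for row in zip(*M)]

row = transpose(col)   # 1 x 3 row vector: [[1, 2, 3]]
```

Transposing twice returns the original column vector.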
Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to bottom right): if the entries are written as A = (aij), then aij = aji for all indices i and j. Every diagonal matrix is symmetric, since all off-diagonal entries are zero.
more from Wikipedia
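The condition aij = aji translates directly into a one-line check in Python (an illustration for matrices stored as lists of rows):

```python
def is_symmetric(A):
    """True if the square matrix A equals its transpose (a_ij == a_ji)."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

S = [[1, 7, 3],
     [7, 4, 5],
     [3, 5, 6]]   # symmetric: mirror-image across the main diagonal
```

A diagonal matrix such as [[2, 0], [0, 5]] passes the check trivially, since every off-diagonal entry is zero.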