Concepts in Smoothed analysis of tensor decompositions
Higher-order singular value decomposition
In multilinear algebra, there does not exist a general decomposition method for multi-way arrays (also known as N-arrays, higher-order arrays, or data-tensors) with all the properties of a matrix singular value decomposition (SVD). A matrix SVD simultaneously computes (a) a rank-R decomposition and (b) the orthonormal row/column matrices. These two properties can be captured separately by two different decompositions for multi-way arrays.
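As a minimal numerical sketch (using NumPy; not part of the original text), the two properties of a matrix SVD described above can be checked directly: the decomposition expresses A as a sum of R rank-1 terms, and the factor matrices have orthonormal columns and rows.

```python
import numpy as np

# Build a 4x5 matrix of rank 3 by multiplying random factors.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# (a) rank-R decomposition: A is recovered from its R nonzero singular triples.
R = int(np.sum(s > 1e-10))            # numerical rank
A_rebuilt = (U[:, :R] * s[:R]) @ Vt[:R, :]
assert np.allclose(A, A_rebuilt)

# (b) orthonormal factors: columns of U and rows of Vt are orthonormal.
assert np.allclose(U.T @ U, np.eye(U.shape[1]))
assert np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0]))
```

For higher-order arrays, no single decomposition delivers both properties at once, which is why separate generalizations (e.g. CP decomposition for property (a), HOSVD for property (b)) exist.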
more from Wikipedia
Smoothed analysis
Smoothed analysis is a way of measuring the complexity of an algorithm. It gives a more realistic analysis of the practical performance of the algorithm, such as its running time, than worst-case or average-case analysis. For instance, the simplex algorithm runs in exponential time in the worst case, yet in practice it is a very efficient algorithm; this was one of the main motivations for developing smoothed analysis.
Tensor product
In mathematics, the tensor product, denoted by ⊗, may be applied in different contexts to vectors, matrices, tensors, vector spaces, algebras, topological vector spaces, and modules, among many other structures or objects. In each case the significance of the symbol is the same: the most general bilinear operation. In some contexts, this product is also referred to as outer product. The term "tensor product" is also used in relation to monoidal categories.
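For vectors, the tensor (outer) product u ⊗ v is the array with entries u_i · v_j, and it is bilinear in each argument. A small illustrative sketch (NumPy, not from the original text):

```python
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0, 5.0])

# Tensor product of two order-1 tensors: contract over zero axes.
T = np.tensordot(u, v, axes=0)    # equivalent to np.outer(u, v) for vectors
assert T.shape == (2, 3)
assert T[1, 2] == u[1] * v[2]     # entrywise definition: (u ⊗ v)_ij = u_i v_j

# Bilinearity in the first argument: (a·u) ⊗ v == a · (u ⊗ v).
assert np.allclose(np.tensordot(2 * u, v, axes=0), 2 * T)
```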
Linear independence
In linear algebra, a family of vectors is linearly independent if none of them can be written as a linear combination of finitely many other vectors in the collection. A family of vectors which is not linearly independent is called linearly dependent. For instance, in three-dimensional real vector space the vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1) are linearly independent, while adding (2, 3, 0) = 2·(1, 0, 0) + 3·(0, 1, 0) makes the family linearly dependent.
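A standard computational test, sketched here with NumPy (not part of the original text): a family of vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors.

```python
import numpy as np

def independent(*vectors):
    """Return True iff the given vectors are linearly independent."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

e1, e2, e3 = np.eye(3)                     # standard basis of R^3
assert independent(e1, e2, e3)             # independent
assert not independent(e1, e2, e1 + e2)    # third vector is a combination
```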
Singular value
In mathematics, in particular functional analysis, the singular values, or s-numbers, of a compact operator T : X → Y acting between Hilbert spaces X and Y are the square roots of the eigenvalues of the nonnegative self-adjoint operator T*T : X → X (where T* denotes the adjoint of T). The singular values are nonnegative real numbers, usually listed in decreasing order (s1, s2, …). If T is self-adjoint, then the largest singular value s1(T) is equal to the operator norm of T.
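In the finite-dimensional case this definition can be verified numerically; the following sketch (NumPy, not from the original text) compares the square roots of the eigenvalues of T*T against the singular values returned by an SVD routine.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 3))    # a linear map R^3 -> R^4

# Eigenvalues of the nonnegative self-adjoint operator T*T (ascending order).
eigvals = np.linalg.eigvalsh(T.T @ T)
s_from_eigs = np.sqrt(eigvals)[::-1]               # decreasing order
s_from_svd = np.linalg.svd(T, compute_uv=False)    # also decreasing

assert np.allclose(s_from_eigs, s_from_svd)

# The largest singular value equals the operator (spectral) norm of T.
assert np.isclose(s_from_svd[0], np.linalg.norm(T, 2))
```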
Tensor
Tensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Vectors and scalars themselves are also tensors. A tensor can be represented as a multi-dimensional array of numerical values.
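As a brief sketch of the array representation (NumPy, not part of the original text): scalars, vectors, and matrices are tensors of order 0, 1, and 2, and the dot product arises by contracting an order-2 tensor with two vectors.

```python
import numpy as np

scalar = np.array(3.0)               # order-0 tensor
vector = np.array([1.0, 2.0, 3.0])   # order-1 tensor
matrix = np.eye(3)                   # order-2 tensor (a linear map)
cube   = np.zeros((3, 3, 3))         # order-3 tensor

assert (scalar.ndim, vector.ndim, matrix.ndim, cube.ndim) == (0, 1, 2, 3)

# The dot product as a tensor contraction: g_ij u^i v^j with g the identity.
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
assert np.dot(u, v) == np.einsum('ij,i,j->', np.eye(3), u, v)
```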
Rank (linear algebra)
The column rank of a matrix A is the maximum number of linearly independent column vectors of A. The row rank of a matrix A is the maximum number of linearly independent row vectors of A. Equivalently, the column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A. A result of fundamental importance in linear algebra is that the column rank and the row rank are always equal. This common number is simply called the rank of A.
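The equality of row rank and column rank can be observed numerically; a small sketch (NumPy, not from the original text) computes the rank of a matrix and of its transpose, which swaps rows and columns.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # 2x the first row: linearly dependent
              [0.0, 1.0, 1.0]])

col_rank = np.linalg.matrix_rank(A)      # rank via the columns of A
row_rank = np.linalg.matrix_rank(A.T)    # transposing swaps rows and columns

assert col_rank == row_rank == 2         # row rank always equals column rank
```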
Invertible matrix
In linear algebra, an n-by-n (square) matrix A is called invertible (some authors use nonsingular or nondegenerate) if there exists an n-by-n matrix B such that AB = BA = I_n, where I_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A and is called the inverse of A, denoted by A^-1.
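A short numerical sketch of the definition (NumPy, not part of the original text): computing an inverse and checking that both products AB and BA give the identity.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # det(A) = 1, so A is invertible
B = np.linalg.inv(A)             # the (unique) inverse of A

I2 = np.eye(2)
assert np.allclose(A @ B, I2)    # AB = I_n
assert np.allclose(B @ A, I2)    # BA = I_n

# A singular (non-invertible) matrix has determinant zero.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(S), 0.0)
```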