Concepts in Least-squares fitting using orthogonal multinomials
Least squares
The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting.
more from Wikipedia
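As a hedged illustration of the idea in the excerpt (not taken from the paper itself), an overdetermined linear system can be solved in the least-squares sense with NumPy's `lstsq`; the data points and the straight-line model below are invented for demonstration.

```python
import numpy as np

# Overdetermined system: fit y ≈ c0 + c1*x through five points, so there
# are five equations but only two unknowns. Data are made up for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix: one row per equation, one column per unknown.
A = np.column_stack([np.ones_like(x), x])

# lstsq minimizes the sum of squared residuals ||A @ c - y||^2.
coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
c0, c1 = coeffs
```

With more equations than unknowns no exact solution generally exists, so `lstsq` returns the coefficients that make the total squared error smallest.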
Variable (mathematics)
In mathematics, a variable is a value that may change within the scope of a given problem or set of operations. In contrast, a constant is a value that remains unchanged, though often unknown or undetermined. The concepts of constants and variables are fundamental to many areas of mathematics and its applications. A "constant" in this context should not be confused with a mathematical constant which is a specific number independent of the scope of the given problem.
Orthogonality
Orthogonality comes from the Greek orthos, meaning "straight", and gonia, meaning "angle". It has somewhat different meanings depending on the context, but most involve the idea of perpendicular, non-overlapping, varying independently, or uncorrelated. In mathematics, two lines or curves are orthogonal if they are perpendicular at their point of intersection. Two vectors are orthogonal if and only if their dot product is zero.
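The dot-product criterion stated in the excerpt can be checked directly; the example vectors below are arbitrary illustrations, not from the paper.

```python
# Two vectors are orthogonal if and only if their dot product is zero.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = (3, 4)
v = (-4, 3)   # perpendicular to u in the plane: 3*(-4) + 4*3 = 0
w = (1, 1)    # not perpendicular to u: 3*1 + 4*1 = 7

orthogonal_uv = dot(u, v) == 0   # True
orthogonal_uw = dot(u, w) == 0   # False
```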
Inner product space
In mathematics, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product).
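A minimal sketch of how an inner product yields length and angle, assuming the standard dot product on R^n as the inner product; the vectors are invented examples.

```python
import math

# Using the standard dot product on R^n as the inner product.
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def length(v):
    # ||v|| = sqrt(<v, v>)
    return math.sqrt(inner(v, v))

def angle(u, v):
    # cos(theta) = <u, v> / (||u|| ||v||)
    return math.acos(inner(u, v) / (length(u) * length(v)))

u, v = (1, 0), (1, 1)
theta = angle(u, v)   # pi/4 for these example vectors
```

The same two formulas work for any inner product, not just the dot product, which is why inner product spaces support geometric notions in the abstract.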
Polynomial
In mathematics, a polynomial is an expression of finite length constructed from variables and constants, using only the operations of addition, subtraction, multiplication, and non-negative integer exponents. For example, x² − x/4 + 7 is a polynomial, but x² − 4/x + 7x^(3/2) is not, because its second term involves division by the variable x (4/x), and also because its third term contains an exponent that is not a non-negative integer (3/2).
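As a small illustration (not from the paper), Horner's scheme evaluates a polynomial using only the additions and multiplications the definition allows; the example coefficients encode x² − x/4 + 7.

```python
# Horner's scheme: evaluate a polynomial with only addition and
# multiplication, the operations permitted in the definition above.
def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i), coeffs given in increasing degree."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# x**2 - x/4 + 7, written as coefficients [7, -0.25, 1]:
value = horner([7, -0.25, 1], 2.0)   # 4 - 0.5 + 7 = 10.5
```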
Basis (linear algebra)
In linear algebra, a basis is a set of linearly independent vectors that, in a linear combination, can represent every vector in a given vector space or free module, or, more simply put, which define a "coordinate system" (as long as the basis is given a definite order).
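Finding the coordinates of a vector in a given basis amounts to solving a linear system; the basis below is an arbitrary (non-orthogonal) example for R², chosen only to illustrate the idea.

```python
import numpy as np

# Columns of B are the basis vectors (1,0) and (1,1); since they are
# linearly independent, every vector in R^2 has unique coordinates.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# Solve B @ c = v for the coordinates c of v in this basis.
c = np.linalg.solve(B, v)

# Reconstruct v as the corresponding linear combination of basis vectors.
reconstructed = B @ c
```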
Approximation
An approximation is a representation of something that is not exact, but still close enough to be useful. Although approximation is most often applied to numbers, it is also frequently applied to such things as mathematical functions, shapes, and physical laws. Approximations may be used because incomplete information prevents use of exact representations. Many problems in physics are either too complex to solve analytically, or impossible to solve using the available analytical tools.
Multinomial distribution
In probability theory, the multinomial distribution is a generalization of the binomial distribution. The binomial distribution is the probability distribution of the number of "successes" in n independent Bernoulli trials, with the same probability of "success" on each trial.
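As a hedged sketch of the generalization described above (using only the standard library), the multinomial probability mass function reduces exactly to the binomial one when there are two categories; the trial counts and probabilities below are invented.

```python
from math import comb, factorial, prod

# Multinomial pmf: P(counts) = n!/(k1!...km!) * p1**k1 * ... * pm**km.
def multinomial_pmf(counts, probs):
    n = sum(counts)
    coef = factorial(n)
    for k in counts:
        coef //= factorial(k)
    return coef * prod(p ** k for p, k in zip(probs, counts))

# Binomial pmf: the two-category special case.
def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# 5 trials, 3 "successes" with probability 0.6, 2 "failures" with 0.4:
m = multinomial_pmf([3, 2], [0.6, 0.4])
b = binomial_pmf(3, 5, 0.6)   # same value, by the reduction above
```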