Concepts in Numerical stability of barycentric Hermite root-finding
Root-finding algorithm
A root-finding algorithm is a numerical method, or algorithm, for finding a value x such that f(x) = 0, for a given function f. Such an x is called a root of the function f. This article is concerned with finding scalar, real or complex roots, approximated as floating point numbers. Finding integer roots or exact algebraic roots are separate problems, whose algorithms have little in common with those discussed here.
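One simple instance of such an algorithm (not the barycentric Hermite method this paper studies) is bisection, which brackets a root of a continuous function and halves the interval until the floating-point approximation is tight enough. A minimal sketch; the function name `bisect` and the tolerance are illustrative choices:

```python
def bisect(f, a, b, tol=1e-12):
    """Approximate x in [a, b] with f(x) = 0, assuming f(a) and f(b) differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:        # root lies in the left half
            b, fb = m, fm
        else:                   # root lies in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)  # approximates sqrt(2)
```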
Numerical stability
In the mathematical subfield of numerical analysis, numerical stability is a desirable property of numerical algorithms. The precise definition of stability depends on the context, but it is derived from the accuracy of the algorithm. An opposite phenomenon is instability.
Companion matrix
In linear algebra, the companion matrix of the monic polynomial p(t) = c0 + c1 t + … + c_{n−1} t^{n−1} + t^n is the n×n matrix C(p) with ones on the subdiagonal, with −c0, −c1, …, −c_{n−1} in the last column, and zeros elsewhere. With this convention, and writing the basis as e1, …, en, one has C e_i = e_{i+1} (for i < n), and C generates V as a K[C]-module: C cycles basis vectors. Some authors use the transpose of this matrix, which (dually) cycles coordinates, and is more convenient for some purposes, like linear recursive relations.
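The connection to root-finding is that the eigenvalues of the companion matrix are exactly the roots of the polynomial. A sketch using NumPy, with p(t) = t³ − 6t² + 11t − 6 = (t−1)(t−2)(t−3) as the example; the helper name `companion` is illustrative:

```python
import numpy as np

def companion(c):
    """Companion matrix of the monic polynomial t^n + c[n-1] t^(n-1) + ... + c[0]."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)    # ones on the subdiagonal
    C[:, -1] = -np.asarray(c)     # last column holds -c0, ..., -c_{n-1}
    return C

# Coefficients [c0, c1, c2] of t^3 - 6t^2 + 11t - 6:
roots = np.sort(np.linalg.eigvals(companion([-6.0, 11.0, -6.0])).real)
# roots ~ [1, 2, 3]
```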
Lagrange polynomial
In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of distinct points x0, …, xk and corresponding numbers y0, …, yk, the Lagrange polynomial is the polynomial of least degree that at each point xj assumes the corresponding value yj (i.e. the functions coincide at each point).
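A direct evaluation of this interpolant from its definition, L(x) = Σ_j yj·lj(x), where lj is the basis polynomial equal to 1 at xj and 0 at the other nodes. A minimal sketch (the name `lagrange_eval` is illustrative; in practice the barycentric form is preferred for stability):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs[j], ys[j]) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        lj = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                lj *= (x - xm) / (xj - xm)  # basis polynomial l_j(x)
        total += yj * lj
    return total

# Interpolating y = x^2 through three of its points reproduces it exactly:
value = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)  # -> 2.25
```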
Zero of a function
In mathematics, a zero, also sometimes called a root, of a real-, complex- or generally vector-valued function f is a member x of the domain of f such that f vanishes at x, that is, f(x) = 0. In other words, a "zero" of a function f is a value for x that produces a result of zero ("0"). For example, the function f defined by the formula f(x) = x − 3 has a root at 3, since f(3) = 3 − 3 = 0. If the function maps real numbers to real numbers, its zeros are the x-coordinates of the points where its graph meets the x-axis.
Eigenvalues and eigenvectors
The eigenvectors of a square matrix are the non-zero vectors that, after being multiplied by the matrix, remain parallel to the original vector. For each eigenvector, the corresponding eigenvalue is the factor by which the eigenvector is scaled when multiplied by the matrix. The prefix eigen- is adopted from the German word "eigen" for "self" in the sense of a characteristic description. The eigenvectors are sometimes also called characteristic vectors.
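The defining property Av = λv can be checked numerically with NumPy's eigendecomposition. A small sketch using a symmetric 2×2 matrix as the example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvalues are 3 and 1

# Each eigenvector, multiplied by A, stays parallel to itself,
# scaled by its eigenvalue:
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```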
Coefficient
In mathematics, a coefficient is a multiplicative factor in some term of an expression (or of a series); it is usually a number, but in any case does not involve any variables of the expression. For instance, in 7x² − 3xy + 1.5 + y, the first three terms respectively have the coefficients 7, −3, and 1.5 (in the third term there are no variables, so the coefficient is the term itself; it is called the constant term or constant coefficient of this expression).
Iterative method
In computational mathematics, an iterative method is a mathematical procedure that generates a sequence of improving approximate solutions for a class of problems. A specific implementation of an iterative method, including the termination criteria, is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations.
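A classic example is Newton's iteration for the square root of a, x_{k+1} = (x_k + a/x_k)/2, which generates a convergent sequence of improving approximations. A minimal sketch with an explicit termination criterion; the function name and tolerances are illustrative:

```python
def newton_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
    """Iteratively approximate sqrt(a) starting from the initial guess x0."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:   # termination criterion
            return x_next
        x = x_next
    return x

approx = newton_sqrt(2.0)  # converges to sqrt(2) ~ 1.41421356...
```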