A note on the computational complexity of the moment-SOS hierarchy for polynomial optimization

The moment-sum-of-squares (moment-SOS) hierarchy is one of the most celebrated and widely applied methods for approximating the minimum of an n-variate polynomial over a feasible region defined by polynomial (in)equalities. A key feature of the hierarchy is that, at a fixed level, it can be formulated as a semidefinite program of size polynomial in the number of variables n. Although this suggests that it can therefore be computed in polynomial time, this is not necessarily the case. Indeed, as O'Donnell (2017) and later Raghavendra & Weitz (2017) show, there exist examples where the sos-representations used in the hierarchy have exponential bit-complexity. We study the computational complexity of the moment-SOS hierarchy, complementing and expanding upon earlier work of Raghavendra & Weitz (2017). In particular, we establish algebraic and geometric conditions under which polynomial-time computation is guaranteed to be possible.


INTRODUCTION
Consider the polynomial optimization problem:

f_min := min f(x)  s.t.  g_i(x) ≥ 0 (1 ≤ i ≤ m),  h_j(x) = 0 (1 ≤ j ≤ ℓ),    (POP)

where f, g_i, h_j ∈ R[x] are given n-variate polynomials. The feasible region of (POP) is a basic semialgebraic set, which we denote by

S(g, h) := {x ∈ R^n : g_i(x) ≥ 0 for all i ∈ [m], h_j(x) = 0 for all j ∈ [ℓ]}.

Problems of the form (POP) are generally hard and non-convex. They naturally capture several classical combinatorial problems, and have applications in finance, energy optimization, machine learning, optimal control and quantum computing. As they are often intractable, several techniques have been proposed to approximate them. Perhaps the most well-known and studied among these techniques is the so-called moment-SOS hierarchy, due to Lasserre [13] and Parrilo [18]. The main idea behind the hierarchy is that one can certify the nonnegativity of a polynomial p ∈ R[x] on S(g, h) by representing it as a weighted sum of squares:

p = Σ_{i=0}^m σ_i g_i + Σ_{j=1}^ℓ q_j h_j,    (1)

where the σ_i ∈ Σ[x] are sums of squares, q_j ∈ R[x], and we set g_0(x) = 1 for convenience. We say that a representation (1) is of degree d if deg(σ_i g_i) ≤ d and deg(q_j h_j) ≤ d for all i, j. For r ∈ N, one then obtains a lower bound sos(f)_r ≤ f_min on the minimum of f by:

sos(f)_r := sup { λ ∈ R : f − λ has a representation (1) of degree 2r }.    (SOS)

For fixed level r, this lower bound may be computed by solving a semidefinite program (SDP) involving matrices of size polynomial in n. It is often claimed that one may therefore (approximately) compute sos(f)_r in polynomial time, for instance by applying the ellipsoid algorithm. As was noted by O'Donnell [17] and later by Raghavendra & Weitz [21], this is not necessarily the case. Indeed, polynomial runtime of the ellipsoid algorithm is only guaranteed when the feasible region of the SDP contains an inner ball which is not too small, and is contained in an outer ball which is not too large. Informally, these two balls ensure that it is possible to choose the coefficients of the multipliers σ_i, q_j in the representation (1) so that their bit-complexity is polynomial in n. We call such a representation compact. Roughly speaking, the following mild algebraic boundedness assumption on the feasible region S(g, h) guarantees the existence of the inner ball for the SDP formulation of sos(f)_r.
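To make the size claim concrete: the matrices in this SDP are indexed by the monomials of degree at most r, so for a fixed level r their side length grows only polynomially in n. A minimal sketch (the function name is ours):

```python
from math import comb

def h(n, r):
    """Number of n-variate monomials of degree at most r, i.e. the
    side length of the moment/SOS matrices at level r of the hierarchy."""
    return comb(n + r, n)

# For fixed r, h(n, r) = O(n^r) is polynomial in n:
assert h(2, 2) == 6       # 1, x1, x2, x1^2, x1*x2, x2^2
assert h(10, 2) == 66
assert h(100, 2) == 5151  # roughly n^2 / 2
```

For fixed r this is polynomial in n; it is the bit-complexity of the SDP's feasible solutions, not its dimension, that can blow up.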
The remaining question, then, is whether an outer ball always exists. O'Donnell [17] shows that, in fact, it does not; he constructs an example where every representation (1) of f(x) − sos(f)_r necessarily involves multipliers σ_i, q_j whose coefficients are doubly exponentially large in n. Raghavendra & Weitz [21] subsequently show that it is possible to construct such an example even when the equalities h include the boolean constraints x_i − x_i² = 0, negatively answering a question posed by O'Donnell [17]. On the positive side, they show conditions under which existence of a compact representation (1) is guaranteed. These conditions are met for the reformulation of several well-known combinatorial problems as a (POP), as well as for optimization over the unit hypersphere. To state our results and theirs, we make the natural assumption that the coefficients of the objective f and of the polynomials g_i, h_j defining the feasible region S(g, h) of (POP) have polynomial bit-complexity.

Assumption 4. Throughout, we assume that the coefficients of the polynomials f, g_i, h_j in (POP) have polynomial bit-complexity in n and that their degree is independent of n. We also assume that the number of constraints (m + ℓ) is polynomial in n.
The conditions of Theorem 5 have a natural interpretation in the dual formulation of (SOS), the moment formulation, which reads (see, e.g., [5]):

mom(f)_r := inf { L(f) : L ∈ R[x]*_{2r}, L(1) = 1, L(g p²) ≥ 0 whenever deg(g p²) ≤ 2r (g ∈ g), and L(h q) = 0 whenever deg(h q) ≤ 2r (h ∈ h) }.    (MOM)
We refer to Section 3.3 for a detailed discussion of (MOM) and the duality between it and (SOS). For now, let us introduce two types of matrices associated to linear functionals L ∈ R[x]*_{2r}: the moment matrix M_r(L) and, for a polynomial g, the localizing matrix M_g(L). Their entries are given by

M_r(L)_{α,β} = L(x^{α+β}),   M_g(L)_{α,β} = L(g(x) · x^{α+β}).

For ease of notation, we use a subscript g for the localizing matrix M_g(L) even though it is indexed by monomials of degree at most r − ⌈deg(g)/2⌉. Here we use terminology associated to measures, since the linear functionals L in (MOM) can be viewed as relaxations of probability measures: any probability measure μ whose support is contained in S(g, h) gives rise to a feasible L defined by L(p) := ∫ p dμ. We can then interpret the conditions in Theorem 5 as follows: condition (2) says that M_r(L) has smallest non-zero eigenvalue at least 2^{-poly(n)}, and condition (3) implies that the same holds for all localizing matrices M_g(L) with g ∈ g.
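For intuition, consider the univariate case with L arising from the uniform probability measure on [−1, 1] and the constraint g(x) = 1 − x² (a toy instance of ours, not from the paper). The numpy sketch below builds M_r(L) and M_g(L) directly from the definitions above and checks that both are positive semidefinite, as they must be for any L coming from a measure supported on S(g):

```python
import numpy as np

def moment(k):
    """k-th moment of the uniform probability measure on [-1, 1]."""
    return 0.0 if k % 2 else 1.0 / (k + 1)

r = 3
# Moment matrix M_r(L): entry (i, j) is L(x^(i+j)).
M = np.array([[moment(i + j) for j in range(r + 1)] for i in range(r + 1)])
# Localizing matrix M_g(L) for g(x) = 1 - x^2: entry (i, j) is
# L(g(x) x^(i+j)); it is indexed by degrees <= r - ceil(deg(g)/2) = r - 1.
Mg = np.array([[moment(i + j) - moment(i + j + 2)
                for j in range(r)] for i in range(r)])

# Both matrices are positive semidefinite (here even definite, since the
# measure has infinite support):
assert np.linalg.eigvalsh(M).min() > 0
assert np.linalg.eigvalsh(Mg).min() > 0
```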

Our contributions
The main goal of this paper is to map out carefully under what circumstances computation of the bounds sos(f)_r (and/or mom(f)_r) and the corresponding representation (1) is guaranteed to be possible in polynomial time.
Our starting point is the following proposition, which concerns the bounds mom(f)_r. Its proof consists of a straightforward reformulation of the conditions of strict feasibility for the SDP formulation of mom(f)_r for explicitly bounded polynomial optimization problems; see Section 3.4. As we will see there, conditions (1) and (2) in Proposition 6 below guarantee the existence of a ball in the feasible region of (MOM), whose radius depends on the smallest non-zero eigenvalue of the (localizing) moment matrices corresponding to a feasible solution L.

Proposition 6. Let S(g, h) be a semialgebraic set and let r ≥ ⌈deg(f)/2⌉ be fixed. Assume that S(g, h) is explicitly bounded: R² − ‖x‖²_2 ∈ g for some 1 ≤ R ≤ 2^{poly(n)}. Suppose furthermore that there exists an L ∈ R[x]*_{2r} with L(1) = 1 and the following properties:

(1) For any g ∈ g and any p ∈ R[x]_{r − ⌈deg(g)/2⌉}, if L(g p²) = 0, then there are q_1, q_2, . . ., q_ℓ ∈ R[x] such that

g p² = Σ_{j=1}^ℓ q_j h_j,  with deg(q_j) ≤ 2r − deg(h_j) for each j ∈ [ℓ].

(2) The smallest non-zero eigenvalue of M_r(L), and of each localizing matrix M_g(L) for g ∈ g, is at least 2^{-poly(n)}.

We recall that 1 ∈ g by convention.
Then, for ε ≥ 2^{-poly(n)}, the bound mom(f)_r (which equals sos(f)_r) may be computed in time polynomial in n up to an additive error of at most ε. The statement of Proposition 6 is stronger than the result of Raghavendra & Weitz in the sense that it guarantees polynomial-time computation of the bound sos(f)_r, whereas Theorem 5 only guarantees existence of a compact representation (1). Furthermore, its conditions do not require strict positivity of the inequality constraints g on S(g, h). As we see below, it therefore applies to several natural settings where Theorem 5 may not be applied. On the other hand, the first condition of Proposition 6 is more restrictive than the first condition of Theorem 5. We note that it is satisfied, for example, when L is the linear operator associated to a positive Borel measure supported on S(g, h) and the constraints h form a Gröbner basis of a real radical ideal (cf. [14, Sec. 2]).
Our first contribution is a sufficient condition for the second requirement of Proposition 6.

Theorem 7. Let S(g, h) ⊆ B(0, R) be an explicitly bounded semialgebraic set with R ≤ 2^{poly(n)} and let L ∈ R[x]*_{2r} be a feasible solution to (MOM) for r ∈ N fixed. Assume that L(x^α) ∈ Q has polynomial bit-complexity for all α ∈ N^n_{2r}. Then the smallest non-zero eigenvalue of M_r(L) is at least 2^{-poly(n)}, and the same holds for the localizing matrices M_g(L) for g ∈ g.
Our second contribution is an alternative, geometric condition on the feasible region S(g, h) of (POP) which guarantees polynomial-time computation of sos(f)_r in the special case where the formulation does not contain any equality constraints. We write S(g) for S(g, ∅).

Theorem 8. Let S(g) ⊆ R^n be a semialgebraic set defined only by inequalities. Assume that the following two conditions are satisfied:

(1) S(g) is contained in a ball of radius R ≤ 2^{poly(n)};
(2) S(g) contains a ball of radius ρ ≥ 2^{-poly(n)}, i.e., B(z, ρ) ⊆ S(g) for some z ∈ R^n.

Then, for fixed r ≥ ⌈deg(f)/2⌉ and ε ≥ 2^{-poly(n)}, the bound sos(f)_r may be computed in time polynomial in n up to an additive error of at most ε.
The inclusions B(z, ρ) ⊆ S(g) ⊆ B(0, R) for ρ^{-1}, R ≤ 2^{poly(n)} are a natural way to ensure that f has an approximate minimizer over S(g) whose bit-complexity is poly(n). Furthermore, they are very reminiscent of the sufficient conditions for solving semidefinite programs in polynomial time; see Theorem 15 below.
As we will see in Proposition 14, it is possible to choose constraints g = g(n), each of constant bit-complexity, such that the second condition of Theorem 8 is not satisfied. Notably, the resulting semialgebraic set S(g) does not satisfy the conditions of Theorem 5 or Proposition 6 either.
Finally, as a third contribution, we make explicit the connection between computational aspects of the primal formulation (SOS) of the sos-hierarchy and its dual formulation (MOM) in terms of moments. This connection is implicitly present in the proof of Theorem 5 in [21].

Theorem 9. Let S(g, h) be a semialgebraic set and suppose that the conditions of Proposition 6 or Theorem 8 are satisfied. Then, for fixed r ≥ ⌈deg(f)/2⌉ and ε > 0, there exists a sum-of-squares representation (1) proving nonnegativity of f − sos(f)_r + ε on S(g, h) with bit-complexity poly(n, log(1/ε)).

ON THE GEOMETRIC CONDITION
Before we move on to the proofs of our results, let us give some examples of natural settings where they may be applied. In general, Theorem 8 is better equipped to deal with non-discrete semialgebraic sets S(g, h) than Theorem 5. The third condition of Theorem 5, which demands in particular that g(x) > 0 for each x ∈ S(g, h) and g ∈ g, is rather hard to satisfy.

Example 1. The unit hypercube [−1, 1]^n, the unit ball B^n ⊆ R^n and the standard simplex Δ^n ⊆ R^n are semialgebraic sets, defined by

[−1, 1]^n = S(1 − x_1², . . ., 1 − x_n²),   B^n = S(1 − ‖x‖²_2),   Δ^n = S(x_1, . . ., x_n, 1 − Σ_i x_i).

It is straightforward to see that they each satisfy the conditions of Theorem 8 (after adding a ball constraint g_{m+1}(x) = R² − ‖x‖²_2 if needed). They do not, however, satisfy the third condition of Theorem 5.
The following proposition gives an alternative sufficient condition related to strict feasibility of (POP), which implies that the conditions of Theorem 8 are satisfied and thus guarantees polynomial-time computability of the bound mom(f)_r.

Proposition 10. Let S(g) be a full-dimensional semialgebraic set contained in a ball of radius R ≤ 2^{poly(n)}. Assume that there exists a rational point x ∈ S(g) with g(x) > 0 for all g ∈ g, whose bit-complexity is polynomial in n. Then S(g) contains a ball of radius ρ ≥ 2^{-poly(n)}.

Proof. As the polynomials g ∈ g are of fixed degree and have bounded coefficients, their Lipschitz constants on B(0, R) can be bounded by 2^{poly(n)}. Furthermore, as x has polynomial bit-complexity in n, we know that g(x) ∈ Q has polynomial bit-complexity as well, and therefore g(x) > 0 implies that g(x) ≥ 2^{-poly(n)}. Together, this implies that there exists a ρ ≥ 2^{-poly(n)} such that g(y) ≥ 0 for all y with ‖x − y‖_2 ≤ ρ, i.e., such that S(g) contains the ball of radius ρ ≥ 2^{-poly(n)} centered at x. □
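The argument is easy to check numerically on a toy instance of ours: take the single constraint g(x) = 1 − ‖x‖²_2 with the strictly feasible rational point x = (1/2, 1/2), so g(x) = 1/2, and the crude Lipschitz bound ‖∇g‖ ≤ 4 on B(0, 2). The resulting ball of radius ρ = g(x)/4 around x stays feasible:

```python
import numpy as np

g = lambda p: 1.0 - p[0]**2 - p[1]**2     # one inequality constraint
x0 = np.array([0.5, 0.5])                 # rational strictly feasible point
val = g(x0)                               # g(x0) = 1/2 > 0
lip = 4.0                                 # |grad g| = 2|p| <= 4 on B(0, 2)

rho = val / lip                           # radius from the proof's argument
# Every point within distance rho of x0 is still feasible:
rng = np.random.default_rng(0)
for _ in range(1000):
    d = rng.normal(size=2)
    y = x0 + rho * d / np.linalg.norm(d)
    assert g(y) >= 0
```

Since g(x0) and the Lipschitz bound both have small bit-complexity, so does ρ, which is exactly the 2^{-poly(n)} guarantee of Proposition 10.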

Semialgebraic sets with large volume contain a large ball
Another class of semialgebraic sets satisfying the conditions of Theorem 8 are those of sufficiently large volume.In the case that (g) is convex, we have the following consequence of John's theorem.
Lemma 11. Let S(g) ⊆ R^n be a convex semialgebraic set. Assume that S(g) is contained in a ball of radius R ≤ 2^{poly(n)}. Then there exists a constant c_n > 0, depending only on n, such that S(g) contains a (translated) ball of radius c_n · vol(S(g)) / R^{n−1}.

Corollary 12. Let P ⊆ R^n be a full-dimensional polytope defined by rational linear inequalities of polynomial bit-complexity. Then P is contained in a ball of radius R ≤ 2^{poly(n)} and vol(P) ≥ 2^{-poly(n)}. Using Lemma 11, P thus contains a ball of radius ρ ≥ 2^{-poly(n)}.

Proof. Let {v_1, . . ., v_N} be the vertices of P, each of which has polynomial bit-complexity in n as a consequence of Cramer's rule. As P is full-dimensional, we may assume w.l.o.g. that 0 ∈ P and that v_1, v_2, . . ., v_n are linearly independent. Therefore,

vol(P) ≥ vol(conv{0, v_1, . . ., v_n}) = |det(v_1, . . ., v_n)| / n!.

But since the v_i have polynomial bit-complexity in n, we may conclude that |det(v_1, . . ., v_n)| ≥ 2^{-poly(n)}, and hence vol(P) ≥ 2^{-poly(n)}. □

In fact, a result similar to Lemma 11 can be shown without the assumption that S(g) is convex. That is, any bounded semialgebraic set S(g) ⊆ B(0, R) with R ≤ 2^{poly(n)} and vol(S(g)) ≥ 2^{-poly(n)} satisfies the conditions of Theorem 8. To show this, we use an upper bound on the volume of neighborhoods of algebraic varieties from [1]. Obtaining such volume bounds is a problem with a long history; see, e.g., [1, 15, 25] for a discussion.

Proposition 13. Let S(g) ⊆ R^n be a semialgebraic set contained in a ball of radius R ≤ 2^{poly(n)}, with vol(S(g)) ≥ 2^{-poly(n)}. Then S(g) contains a (translated) ball of radius ρ ≥ 2^{-poly(n)}.

Proof sketch. By the volume bound of [1], the set of points at distance at most ρ from the boundary of S(g) has volume at most ρ · 2^{poly(n)}. Choosing ρ = vol(S(g)) / 2^{poly(n)} makes this volume strictly smaller than vol(S(g)), which implies that there exists an x ∈ S(g) that has distance greater than ρ to the boundary of S(g). The ball B(x, ρ) with center x and radius ρ ≥ vol(S(g)) / 2^{poly(n)} is thus contained in S(g). □

A counterexample
To end this section, we show that the second condition of Theorem 8 is not superfluous by exhibiting a full-dimensional semialgebraic set which does not contain a (translated) hypercube [−η, η]^n of width η ≥ 2^{-poly(n)}. As a direct result, it cannot contain a ball of radius ρ ≥ 2^{-poly(n)} either. We use a simple repeated-squaring argument.

Proof. Let S(g) be the set defined by the system of inequalities

0 ≤ x_i (1 ≤ i ≤ n),   x_i ≤ x_{i+1}² (1 ≤ i ≤ n − 1),   x_n ≤ 1/2.

From the inequalities it follows that 0 ≤ x_1 ≤ x_2² ≤ x_3⁴ ≤ · · · ≤ x_n^{2^{n−1}} ≤ 2^{−2^{n−1}} =: ε for any x ∈ S(g), meaning S(g) cannot contain a cube of width at least 2^{-poly(n)}. □
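The exact inequalities of the original example did not survive extraction; the sketch below uses a standard repeated-squaring system of the kind described (0 ≤ x_i, x_i ≤ x_{i+1}², x_n ≤ 1/2, our reconstruction) and propagates the bounds with exact rational arithmetic to exhibit the doubly exponential shrinkage:

```python
from fractions import Fraction

n = 6
# Propagate the chain x_1 <= x_2^2 <= x_3^4 <= ... <= x_n^(2^(n-1)),
# starting from the top-level bound x_n <= 1/2.
ub = Fraction(1, 2)
for _ in range(n - 1):
    ub = ub ** 2                      # one squaring per inequality

# The width of S(g) in the x_1 direction is doubly exponentially small,
# so writing it down takes ~2^(n-1) bits:
assert ub == Fraction(1, 2) ** (2 ** (n - 1))
```

Hence the set is full-dimensional, yet any cube it contains has width below 2^{-poly(n)}.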

PRELIMINARIES 3.1 Notation
We denote by R[x] the space of n-variate polynomials. For d ∈ N, we write R[x]_d for the subspace of polynomials of degree at most d, whose dimension is equal to h(n, d) := (n+d choose n). It has a basis of monomials x^α = x_1^{α_1} · · · x_n^{α_n}, with α ∈ N^n_d := {α ∈ N^n : α_1 + · · · + α_n ≤ d}. We denote by S^N the space of N-by-N real symmetric matrices, equipped with the trace inner product ⟨A, B⟩ = Tr(AB), which induces the Frobenius norm ‖A‖ = √⟨A, A⟩.

Complexity of semidefinite programming
We will make use of the following result on the complexity of semidefinite programming.
Consider a semidefinite program of the form

inf ⟨C, X⟩  s.t.  ⟨A_j, X⟩ = b_j (j ∈ [m]),  X ∈ S^N is positive semidefinite.    (SDP)

We denote the feasible region of (SDP) by

F := {X ∈ S^N : ⟨A_j, X⟩ = b_j for all j ∈ [m], X ⪰ 0}.

One can show, for example using the ellipsoid method [9], that under certain assumptions (SDP) can be solved in polynomial time.
We use the following explicit formulation from [6] where a similar result was shown using an interior point method.
Theorem 15 (Thm 1.1 in [6]). Let R, r > 0 be given and suppose that there exists an X_0 ∈ F so that

B(X_0, r) ⊆ F ⊆ B(0, R),

where B(X_0, r) is the ball of radius r (in the norm ‖·‖) centered at X_0 in the subspace

A(F) := {X ∈ S^N : ⟨A_j, X⟩ = b_j for all j ∈ [m]}.

Then for any rational ε > 0 one can find a rational matrix X* ∈ F such that

⟨C, X*⟩ ≤ inf_{X ∈ F} ⟨C, X⟩ + ε,

in time polynomial in N, m, log(R/r), log(1/ε) and the bit-complexity of the input data C, A_j, b_j and the feasible point X_0.

Dual formulation and moments of measures
It will be convenient to work with the dual formulation of (SOS), which we recall is (see, e.g., [5]):

mom(f)_r := inf { L(f) : L ∈ R[x]*_{2r}, L(1) = 1, L(g p²) ≥ 0 whenever deg(g p²) ≤ 2r (g ∈ g), and L(h q) = 0 whenever deg(h q) ≤ 2r (h ∈ h) }.    (MOM)

Assuming that (POP) is explicitly bounded, these formulations are equivalent.
There is a natural relation between the dual formulation (MOM) and moments of measures supported on S(g, h), which clarifies the assumptions made in Theorem 5 and Proposition 6. For a measure μ supported on S(g, h), the moment of degree α ∈ N^n is defined by

m_α(μ) := ∫_{S(g,h)} x^α dμ(x).

For r ∈ N, the (truncated) moment matrix M_r(μ) of order r for μ is the matrix of size h(n, r) = (n+r choose n) given by

M_r(μ)_{α,β} := m_{α+β}(μ)   (|α|, |β| ≤ r).

Consider the linear functional L_μ ∈ R[x]*_{2r} defined by L_μ(p) := ∫_{S(g,h)} p(x) dμ(x). (If we assume instead that S(g, h) is Archimedean with non-empty interior, then it is known that sos(f)_r = mom(f)_r for r large enough [13, Theorem 4.2].)
For any constraint g ∈ g and p ∈ R[x] with deg(g p²) ≤ 2r, we have

L_μ(g p²) = p^T M_g(μ) p = ∫_{S(g,h)} g(x) p(x)² dμ(x) ≥ 0,

where p denotes the vector of coefficients of p ∈ R[x] in the monomial basis. Here the (α, β)-entry of the localizing matrix M_g(μ) is defined as ∫_{S(g,h)} g(x) x^{α+β} dμ(x). In particular, for each g ∈ g the matrix M_g(μ) is positive semidefinite. Note that M_r(L_μ) = M_r(μ), and similarly for the localizing matrices. Furthermore, for any constraint h ∈ h and α ∈ N^n with deg(x^α h) ≤ 2r, we have

L_μ(x^α h) = ∫_{S(g,h)} x^α h(x) dμ(x) = 0,

since h vanishes on S(g, h). If μ is a probability measure, we get L_μ(1) = 1, and it follows that L_μ is a feasible solution to (MOM).

The program (MOM) may thus be written as a semidefinite program, which we refer to as (MOM-SDP): its matrix variable is the block-diagonal matrix X(L) := M_r(L) ⊕ M_{g_1}(L) ⊕ · · · ⊕ M_{g_m}(L) ⪰ 0, subject to the linear constraints L(1) = 1 and L(x^α h_j) = 0 for all j ∈ [ℓ] and |α| ≤ 2r − deg(h_j). To see the equivalence, it suffices to observe that the conditions L(g p²) ≥ 0 for all g ∈ g and all p of appropriate degree in (MOM) are equivalent to M_g(L) ⪰ 0. The upshot is that we may apply Theorem 15 to the formulation (MOM-SDP) in the following way.

Proposition 18. Consider an instance of (MOM) with feasible solution L_0 ∈ R[x]*_{2r} that satisfies property (1) of Proposition 6. Let X_0 = X(L_0) be the matrix associated to L_0 via (7) and (8). If all non-zero eigenvalues of X_0 are at least η > 0, then B(X_0, η/2) is contained in the feasible region of (MOM-SDP).

Proof. Let X ∈ B(X_0, η/2) and write X = X_0 + X̃. Here B(X_0, η/2) is the ball of radius η/2 (in the Frobenius norm) in the affine space A(F) defined by the linear equalities of (MOM-SDP). We first show that any zero-eigenvector of X_0 is also a zero-eigenvector of X. Since X_0 is block-diagonal, a zero-eigenvector of X_0 corresponds to a zero-eigenvector v of a single block M_g(L_0). For such a vector we have

0 = v^T M_g(L_0) v = L_0(g p²),

where p(x) = Σ_α v_α x^α. By the assumption on L_0, we therefore have g p² = Σ_{j=1}^ℓ q_j h_j for polynomials q_j with deg(q_j) ≤ 2r − deg(h_j).

To show that v corresponds to a zero-eigenvector of X, it remains to observe that, since X ∈ A(F), it corresponds to a linear functional L ∈ R[x]*_{2r} via (7) and (8) with the property that L(h_j q) = 0 for all j ∈ [ℓ] and all polynomials q of degree at most 2r − deg(h_j). In particular,

v^T M_g(L) v = L(g p²) = Σ_{j=1}^ℓ L(h_j q_j) = 0,

and therefore v corresponds to a zero-eigenvector of the g-th block of X.

Finally, to show that X is positive semidefinite, note that by assumption X = X_0 + X̃ where ‖X̃‖ ≤ η/2. In particular, the eigenvalues of X̃ are all at most η/2 in absolute value. Since the kernel of X_0 is contained in the kernel of X, this means that all non-zero eigenvalues of X are at least η/2, and hence X is positive semidefinite. □
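The final eigenvalue step can be checked numerically on synthetic matrices (our toy construction, not the paper's): if the non-zero eigenvalues of X_0 are at least η and the perturbation has norm at most η/2 and preserves the kernel, then by Weyl's inequality on the range of X_0 all non-zero eigenvalues of X stay at least η/2:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.5

# X0: PSD with a 2-dimensional kernel and non-zero eigenvalues >= eta.
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
X0 = Q @ np.diag([0.0, 0.0, eta, 1.0, 2.0]) @ Q.T

# A symmetric perturbation of spectral norm <= eta/2 acting only on the
# range of X0, so that the kernel of X0 is preserved (as in the proof).
P = Q[:, 2:]
B = rng.normal(size=(3, 3)); B = (B + B.T) / 2
B *= (eta / 2) / np.linalg.norm(B, 2)
X = X0 + P @ B @ P.T

eig = np.linalg.eigvalsh(X)
assert eig.min() > -1e-9                      # X is still PSD
nonzero = eig[np.abs(eig) > 1e-9]
assert nonzero.min() >= eta / 2 - 1e-9        # non-zero eigenvalues >= eta/2
```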
As a corollary of Proposition 18 we obtain Proposition 6.
Proof of Proposition 6. By assumption, L is such that for all g ∈ g and p ∈ R[x]_{r − ⌈deg(g)/2⌉} with L(g p²) = 0, the polynomial g p² lies in the truncated ideal generated by h; that is, property (1) of Proposition 6 holds. Moreover, all non-zero eigenvalues of M_r(L) ⊕ M_{g_1}(L) ⊕ · · · ⊕ M_{g_m}(L) are at least 2^{-poly(n)}. By Proposition 18, the feasible region of (MOM-SDP) contains a ball of radius 2^{-poly(n)} (in A(F)), and by Lemma 17 it is contained in a ball of radius 2^{poly(n)}. Theorem 15 thus shows that we can compute an ε-additive approximation to the value mom(f)_r in time polynomial in n and log(1/ε). □

AN ALGEBRAIC CONDITION FOR POLYNOMIAL-TIME COMPUTABILITY: PROOF OF THEOREM 7
We split the proof of Theorem 7 into two parts: we first consider the moment matrix M_r(L) and then the localizing matrices M_g(L) for g ∈ g. A key tool is the following auxiliary lemma.

Lemma 19. Let A ∈ S^N with N ≤ poly(n) be a non-zero positive semidefinite matrix with integer entries of absolute value at most 2^{poly(n)}. Then the smallest non-zero eigenvalue of A is at least 2^{-poly(n)}.

Using this lemma, we are able to show the following.

Proposition 20. Assume that S(g, h) ⊆ B(0, R) is explicitly bounded with R ≤ 2^{poly(n)}, and let L ∈ R[x]*_{2r} be a feasible solution to (MOM). Assume further that L(x^α) ∈ Q has polynomial bit-complexity for all α ∈ N^n_{2r}. Then λ_min(M_r(L)), the smallest non-zero eigenvalue of the moment matrix M_r(L) of (7), satisfies λ_min(M_r(L)) ≥ 2^{-poly(n)}.

Proposition 21. Under the same assumptions as in Proposition 20, let g ∈ R[x] be one of the constraints defining S(g, h). Then the smallest non-zero eigenvalue of the localizing matrix M_g(L) of (8) satisfies λ_min(M_g(L)) ≥ 2^{-poly(n)}.

Proof. We may express g in the monomial basis as g = Σ_{|γ| ≤ deg(g)} c_γ x^γ. The coefficients c_γ have polynomial bit-complexity by assumption, and so |c_γ| ≤ 2^{poly(n)} for each γ. For the same reason, there exists an integer K ≤ 2^{poly(n)} such that K c_γ ∈ Z for all γ. Recall that the entries of M_g(L) are linear combinations of the entries of M_r(L): namely, for |α|, |β| ≤ r − ⌈deg(g)/2⌉ the (α, β)-th entry is of the form

M_g(L)_{α,β} = L(g x^{α+β}) = Σ_{|γ| ≤ deg(g)} c_γ L(x^{α+β+γ}).

Now, as in Proposition 20, we see that D K M_g(L) is an integer matrix for some integer D ≤ 2^{poly(n)} (a common denominator of the moments L(x^α)). Furthermore, since the entries of M_r(L) are at most 2^{poly(n)}, the entries of M_g(L) are bounded from above by 2^{poly(n)} as well. As before, we may thus invoke Lemma 19 to conclude the proof. □
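One way eigenvalue bounds of the Lemma 19 type arise (our sketch of the mechanism, under the stated integrality assumption): the non-zero eigenvalues of an integer PSD matrix multiply to a positive integer (a coefficient of the characteristic polynomial), so their product is at least 1, while each is at most N times the largest entry. This pins the smallest non-zero eigenvalue above an explicit inverse-exponential:

```python
import numpy as np

def nonzero_eig_lower_bound(A):
    """Lower bound on the smallest non-zero eigenvalue of an integer PSD
    matrix A: the non-zero eigenvalues multiply to an integer >= 1, and
    each is at most N * max|A_ij|, so the smallest one is at least
    1 / (N * max|A_ij|)^(rank - 1)."""
    N = A.shape[0]
    k = np.linalg.matrix_rank(A)
    return 1.0 / (N * np.abs(A).max()) ** (k - 1)

A = np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]])   # PSD, eigenvalues 1, 1, 4
B = np.array([[1, 1], [1, 1]])                    # PSD, eigenvalues 0, 2

for M in (A, B):
    eigs = np.linalg.eigvalsh(M)
    lam = eigs[eigs > 1e-9].min()                 # smallest non-zero eigenvalue
    assert lam >= nonzero_eig_lower_bound(M) - 1e-12
```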

A GEOMETRIC CONDITION FOR POLYNOMIAL-TIME COMPUTABILITY: PROOF OF THEOREM 8
Recall that in Theorem 8 we consider an explicitly bounded semialgebraic set S(g) ⊆ R^n with the additional geometric assumption

B(z, ρ) ⊆ S(g)    (10)

for some ρ ≥ 2^{-poly(n)} and z ∈ R^n. To prove Theorem 8, we will exploit this assumption to exhibit a feasible solution L ∈ R[x]*_{2r} to (MOM) that satisfies the conditions of Theorem 7.
We begin by noting that the inclusion (10) implies that S(g) contains a translated hypercube C := [−c, c]^n + z for a (slightly smaller) c ≥ 2^{-poly(n)}. We consider the probability measure μ obtained by restricting the Lebesgue measure to C and renormalizing. We show that the operator L ∈ R[x]*_{2r} associated to μ via (4) satisfies the conditions of Proposition 6. Let us first note that condition (1) is satisfied automatically, as C is full-dimensional. Indeed, this means that

∫_C g(x) p(x)² dμ(x) = 0 if and only if g p² ≡ 0.

It remains to show that condition (2) also holds, for which we use Theorem 7.
It remains to consider the case α ≠ 0. Since S(g) is explicitly bounded, we must have ‖z‖_2 ≤ R ≤ 2^{poly(n)}. After possibly choosing a slightly smaller c, we may assume that c ∈ Q and that c has polynomial bit-complexity in n. First, note for all |α| ≤ 2r that

L((x − z)^α) = ∏_{i=1}^n (1/2c) ∫_{−c}^{c} u^{α_i} du,    (11)

which equals ∏_i c^{α_i}/(α_i + 1) if all α_i are even, and 0 otherwise. Second, note that the coefficients in the expansion of (x − z)^α in the monomial basis all have polynomial bit-complexity. The non-zero coefficients are those indexed by γ entrywise less than or equal to α, which we denote γ ≤ α. Note that there are at most h(n, 2r) = (n+2r choose n) such coefficients. We may thus use (11) to find that L(x^α) has polynomial bit-complexity for all α ∈ N^n with |α| ≤ 2r (as r is fixed). It follows that L satisfies the condition of Theorem 7. □
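The univariate factors in (11) are easy to make explicit: for u uniform on [−c, c] we have E[u^j] = c^j/(j+1) for even j and 0 otherwise, and the moments of z + u follow by the binomial theorem. With rational c and z everything stays rational, which is the bit-complexity point of the proof. A sketch (function and variable names are ours):

```python
from fractions import Fraction
from math import comb

def cube_moment(k, c, z):
    """E[(z + u)^k] for u uniform on [-c, c]; rational whenever c, z are,
    so the moments of the shifted-cube measure have small bit-complexity."""
    centered = lambda j: c**j / Fraction(j + 1) if j % 2 == 0 else Fraction(0)
    return sum(comb(k, j) * z**(k - j) * centered(j) for j in range(k + 1))

c, z = Fraction(1, 8), Fraction(1, 2)   # small cube around a rational center
assert cube_moment(1, c, z) == z                 # E[z + u] = z
assert cube_moment(2, c, z) == z**2 + c**2 / 3   # E[(z + u)^2] = z^2 + c^2/3
```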

FROM MOMENTS TO SUMS OF SQUARES: PROOF OF THEOREM 9
Proposition 6 and Theorem 8 show that, under their respective conditions, we can find an additive ε-approximation to mom(f)_r = sos(f)_r in time poly(n, log(1/ε)) by solving (MOM-SDP). We now show that, under these conditions, we have a compact sum-of-squares proof for this bound as well.
From the SDP formulation of (SOS) it follows that one can bound the number of squares k in each σ_i by the rank of the matrices involved, i.e., we have k ≤ (n+r choose n) and thus k ≤ poly(n).
Let λ, s_{i,t} and q_j as in (13) be given. We first show that the coefficients of the s_{i,t} are upper bounded. To do so, let L be a linear functional that satisfies the conditions of Proposition 6 (in the proof of Theorem 8 we also construct such an L). We then have L(f − λ) = L(σ_0) + Σ_{i=1}^m L(σ_i g_i), and since all terms on the right-hand side are nonnegative, this implies in particular that (using Lemma 17)

L(g_i s_{i,t}²) ≤ 2^{poly(n)} · poly(bitcomplexity(f)),

for all 0 ≤ i ≤ m and 1 ≤ t ≤ k, where we set g_0 = 1 for convenience. We can now distinguish two cases: (i) L(g_i s_{i,t}²) ≠ 0, or (ii) L(g_i s_{i,t}²) = 0. In the first case, if L(g_i s_{i,t}²) ≠ 0, then we also have

‖s_{i,t}‖²_2 ≤ λ_min(M_{g_i}(L))^{-1} · L(g_i s_{i,t}²),

where λ_min(M_{g_i}(L)) is the smallest non-zero eigenvalue of M_{g_i}(L), and so ‖s_{i,t}‖²_2 ≤ λ_min(M_{g_i}(L))^{-1} · 2^{poly(n)} · poly(bitcomplexity(f)). For the second case, if on the other hand L(g_i s_{i,t}²) = 0, then by condition (1) of Proposition 6 we have g_i s_{i,t}² = Σ_{j=1}^ℓ q̃_j h_j for some polynomials q̃_j. We may thus remove such s_{i,t} from the sum-of-squares part of (13) and add them to the ideal part of the certificate.
Using the above bound on the coefficients of the s_{i,t}, we now show a bound on the size of the coefficients of the q_j. The polynomial identity (13) allows us to view the coefficients of the q_j as the solution of a linear system A p = b, where p is a vector that contains the coefficients of the q_j (p is at most ℓ · (n+2r choose n)-dimensional), A contains coefficients of the h_j, and b is the (n+2r choose n)-dimensional vector that contains the coefficients of f(x) − λ − Σ_{i=0}^m σ_i(x) g_i(x). The system A p = b is feasible and b ≠ 0, and therefore k' := rank(A) is strictly positive. Let Â be an invertible k'-by-k' submatrix of A and write p̂ and b̂ for the restrictions of p and b to the corresponding rows/columns. Cramer's rule then shows that the t-th coordinate of p̂ can be written as p̂_t = det(Â_t) / det(Â), where Â_t is the matrix formed by replacing the t-th column of Â with the vector b̂. To upper bound |p̂_t| we must give a lower bound on |det(Â)| and an upper bound on |det(Â_t)|. Let us first observe that |det(Â)| ≥ 2^{-poly(n)}. Indeed, Â is an invertible k'-by-k' matrix with k' ∈ poly(n), its entries have bit-complexity poly(n) since they correspond to coefficients of the h_j, and therefore applying Lemma 19 to a suitable integer multiple of Â shows that |det(Â)| ≥ 2^{-poly(n)}. To upper bound |det(Â_t)| it suffices to observe that all entries of Â_t are upper bounded in absolute value by 2^{poly(n)}: for the t-th column this follows from the above-derived bound on the s_{i,t}; the other columns, as before, contain coefficients of the h_j. By setting all remaining coordinates to zero, we can extend p̂ to a feasible solution p of A p = b. To summarize, this shows that there exists a sum-of-squares decomposition of f − λ as in (13) where ‖s_{i,t}‖²_2 ≤ λ_min(M(L))^{-1} · 2^{poly(n)} for all i, t, and ‖q_j‖_∞ ≤ 2^{poly(n)} for all j.

We finally show that rounding each coefficient of this certificate to few bits introduces only a small error. For each i ∈ {0, . . ., m} and t ∈ [k], let s̃_{i,t} be the polynomial s_{i,t} with each of its coefficients rounded to the nearest integer multiple of δ, so that ‖s̃_{i,t} − s_{i,t}‖_∞ ≤ δ. Similarly, for each j ∈ [ℓ], let q̃_j be the polynomial q_j with each of its coefficients rounded to the nearest integer multiple of δ. Then we have

f − λ = Σ_{i=0}^m g_i Σ_{t=1}^k s̃_{i,t}² + Σ_{j=1}^ℓ q̃_j h_j + E(x)    (14)

for an error polynomial E(x) collecting the rounding terms. As shown above, we have k ∈ poly(n) and ‖s̃_{i,t} − s_{i,t}‖_∞ ≤ δ. Using Proposition 21, we moreover have λ_min(M(L))^{-1} ≤ 2^{poly(n)}. Combining these estimates shows ‖E‖_1 ≤ δ · 2^{poly(n)}. The statement f(x) − λ + 2‖E‖_1 ≥ 0 on S(g, h) follows from (14) by adding 2‖E‖_1 − E(x) to both sides of the equation, using the fact that 2‖E‖_1 − E(x) ∈ M(g)_{2r}, which follows from Lemma 23 below.
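The Cramer's-rule step above can be mirrored on a toy integer system of ours: each solution coordinate is a quotient of two determinants, and since an invertible integer matrix has |det| ≥ 1, the denominators (and hence the bit-complexity of the solution) stay under control:

```python
from fractions import Fraction

def solve_2x2(A, b):
    """Solve A x = b for a 2x2 integer system via Cramer's rule: each
    coordinate is det(A_t)/det(A), with A_t the matrix A with its t-th
    column replaced by b."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    det0 = b[0] * A[1][1] - A[0][1] * b[1]
    det1 = A[0][0] * b[1] - b[0] * A[1][0]
    return [Fraction(det0, det), Fraction(det1, det)]

A = [[3, 1], [2, 5]]
b = [7, 8]
x = solve_2x2(A, b)
assert [3 * x[0] + 1 * x[1], 2 * x[0] + 5 * x[1]] == [7, 8]
# |det(A)| = 13 >= 1, so every denominator divides 13:
assert all(xi.denominator <= 13 for xi in x)
```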
where i ∈ [n] is an index for which α_i > 0 and e_i is the i-th unit vector. Here we use that the first term on the right-hand side belongs to M(g)_{2r} and that the second term belongs to M(g)_{2r} as well. Combining this identity with the bound shown in the first part of the proof completes the argument. □

DISCUSSION
We have given algebraic and geometric conditions that guarantee polynomial-time computability of the moment-SOS hierarchy for polynomial optimization problems (POP). In the general, explicitly bounded setting, our conditions are similar to the ones considered by Raghavendra & Weitz [21] to show existence of compact sum-of-squares certificates. For full-dimensional feasible regions S(g), we give explicit, geometric conditions, which include for instance that S(g) contains a small ball, contains a strictly feasible point of low bit-complexity, or has sufficient volume. Furthermore, we make explicit the connection between polynomial-time computability of the bound mom(f)_r and the existence of compact feasible solutions to the sum-of-squares formulation sos(f)_r, which is only implicitly present in [21].

A general geometric condition
Theorem 8 applies only when the feasible region S(g) of (POP) is a full-dimensional semialgebraic set. It would be very interesting to formulate a similar geometric condition that guarantees polynomial-time computability of the moment-SOS hierarchy in the general case. This requires finding an appropriate analog of the second condition of Theorem 8 in the setting where S(g, h) might not be full-dimensional.

Relation to the complexity of SDP
Our present discussion relates closely to the more general study of the computational complexity of semidefinite programming. It is an open question whether SDPs can be solved to (near-)optimality in polynomial time. Even the exact complexity of testing feasibility of SDPs is not known. We do know that in the bit model, the SDP feasibility problem lies in NP if and only if it lies in co-NP [22]. (In the real number model of Blum-Shub-Smale it lies in NP ∩ co-NP [22].) On the positive side, polynomial-time solvability of SDPs is guaranteed when the feasible region contains an 'inner ball' and is contained in an 'outer ball' of appropriate size [6, 9]. On the negative side, there are several classes of relatively simple SDPs whose feasible solutions nonetheless have exponential bit-complexity; see, e.g., [19] and the discussion therein.
In principle, these positive and negative results on SDPs provide conditions on (SOS) and (MOM) that (partly) show when polynomial-time computation can and cannot be guaranteed.The key difference with our results is that we only impose conditions on the original polynomial optimization problem (POP), rather than on the semidefinite programs resulting from the moment/sum-of-squares relaxations.

Finding exact SOS-decompositions
In the setting of polynomial optimization, it usually suffices to find approximate SOS-decompositions, for which one can use (standard) SDP solvers. The problem of finding exact SOS-decompositions is more complicated. In the general case one could in principle use, for example, quantifier-elimination algorithms [2, 23]. In the univariate case, specialized algorithms have been developed [4, 24]; see also [16]. We note, however, that none of these methods comes with polynomial runtime guarantees.