Semidefinite programming and linear equations vs. homomorphism problems

We introduce a relaxation for homomorphism problems that combines semidefinite programming with linear Diophantine equations, and propose a framework for the analysis of its power based on the spectral theory of association schemes. We use this framework to establish an unconditional lower bound against the semidefinite programming + linear equations model, by showing that the relaxation does not solve the approximate graph homomorphism problem and thus, in particular, the approximate graph colouring problem.


Introduction
Semidefinite programming plays a central role in the design of efficient algorithms and in dealing with NP-hardness. For many fundamental problems, the best known (and sometimes provably best possible) approximation algorithms are achieved via relaxations based on semidefinite programs [3,52,67,70,71,87]. In this work, we focus on computational problems of the following general form: Given two structures (say, two digraphs) X and A, is there a homomorphism from X to A? A plethora of different computational problems-in particular, those involving satisfiability of constraints-can be cast in this form. The semidefinite programming paradigm is naturally applicable to this type of problem, and it yields relaxations that are robust to noise: They are able to find a near-satisfying assignment even when the instance is almost-but not perfectly-satisfiable [9] (see also [21]). On the other hand, certain homomorphism problems can be solved exactly in polynomial time but are inherently fragile to noise-the primary example being systems of linear equations, which are tractable via Gaussian elimination but whose noisy version is NP-hard [58]. Problems that behave like linear equations are hopelessly stubborn against the semidefinite programming model [27,89,95]. It is then natural, in the context of homomorphism problems, to consider stronger versions of semidefinite programming relaxations that are equipped with a built-in linear-equation solver.
Consider a homomorphism f : X → A. Letting |V(X)| = p and |V(A)| = n, we can encode f in a pn × pn matrix M_f containing blocks of size n × n, where the blocks are indexed by pairs of vertices of X, and the entries in a block by pairs of vertices of A. For x, y ∈ V(X) and a, b ∈ V(A), the (a, b)-th entry of the (x, y)-th block is 1 if a = f(x) and b = f(y), and 0 otherwise. Let us explore the structure of M_f. Each block has nonnegative entries summing up to 1, and diagonal blocks are diagonal matrices. Since f is a homomorphism, the (a, b)-th entry of the (x, y)-th block is 0 when (x, y) ∈ E(X) and (a, b) ∉ E(A). Finally, M_f is positive semidefinite since it is symmetric and, for a pn-vector v, it satisfies v^T M_f v = (Σ_x v_{x,f(x)})² ≥ 0. The standard semidefinite programming relaxation (SDP) of the homomorphism problem "X → A?" consists in looking for a real matrix M with the properties described above. We write SDP(X, A) = Yes if such a matrix M exists.
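To make the construction of M_f concrete, here is a small numerical sketch (in Python with NumPy; the digraphs, the homomorphism, and all names are our own toy choices, not part of the formal development) that builds M_f for a two-vertex instance and checks the properties listed above:

```python
import numpy as np

# Toy example: X = directed path 0 -> 1, A = single directed edge 0 -> 1.
# f maps 0 to 0 and 1 to 1, which is a homomorphism from X to A.
p, n = 2, 2
E_X = {(0, 1)}
E_A = {(0, 1)}
f = {0: 0, 1: 1}

# Build the pn x pn block matrix M_f: the (x, y)-th block has a single 1
# in position (f(x), f(y)).
M = np.zeros((p * n, p * n))
for x in range(p):
    for y in range(p):
        M[x * n + f[x], y * n + f[y]] = 1.0

# Each block has entries summing to 1:
for x in range(p):
    for y in range(p):
        assert M[x*n:(x+1)*n, y*n:(y+1)*n].sum() == 1.0

# M_f is positive semidefinite:
assert np.all(np.linalg.eigvalsh(M) >= -1e-9)

# Edge blocks are supported inside E(A):
for (x, y) in E_X:
    block = M[x*n:(x+1)*n, y*n:(y+1)*n]
    for a in range(n):
        for b in range(n):
            if (a, b) not in E_A:
                assert block[a, b] == 0.0
```

Note that M_f is the rank-one matrix vv^T for the 0/1 vector v with v_{x,a} = 1 iff a = f(x), which is why positive semidefiniteness is immediate.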
Any Constraint Satisfaction Problem (CSP) may be expressed as the homomorphism problem of checking whether an instance structure X homomorphically maps to a template structure A. Up to polynomial-time equivalence, X and A can be assumed to be digraphs without loss of generality [42]. (As was shown in [19], a similar fact holds for the promise version of CSP, which we shall encounter in a while.) The power of semidefinite programming in the realm of CSPs is well understood: The CSPs solved by SDP are exactly those having bounded width [9,42]. Crucially, for CSPs, boosting SDP via the so-called lift-and-project technique [80] does not increase its power: Any semidefinite programming relaxation of polynomial size-in particular, any constant number of rounds of the Lasserre "Sum-of-Squares" hierarchy [79]-solves precisely the same CSPs as SDP [9,94]. The positive resolution of the Feder-Vardi CSP Dichotomy Conjecture [42] by Bulatov [26] and Zhuk [97] implies that any tractable CSP is a (nontrivial) combination of (i) bounded-width CSPs and (ii) CSPs that can simulate linear equations (which have unbounded width). The aim to find a universal solver for all tractable CSPs has then driven a new generation of algorithms that combine (i) techniques suitable for exploiting bounded width with (ii) variants of Gaussian elimination (which solves linear equations). This line of work was pioneered by [18,22], with the description of the algorithm BA mixing a linear-programming-based relaxation with Gaussian elimination. Variants of this algorithm were later considered in [31,33,35].
The algorithm we propose in this work (which we call SDA) can be described as follows. First, notice that the matrix M_f encoding a homomorphism f : X → A has entries in {0, 1}, and all of the properties of M_f highlighted above are in fact linear equations, with the exception of the nonnegativity of its entries and the positive semidefiniteness. Hence, a different relaxation can be obtained by looking for a matrix M′ that respects the linear conditions, and whose entries are integers. We end up with a linear Diophantine system, which can be solved efficiently through integer variants of Gaussian elimination, see [90]. We write SDA(X, A) = Yes if both M and M′ exist, and a technical refinement condition constraining the supports of M and M′ holds (see Section 2 for the formal definition of the algorithm). The first main goal of our work is to introduce a technique based on the spectral theory of association schemes for the analysis of this relaxation model. Our approach aims to describe how the algorithm exploits the symmetry of the problem under relaxation. To that end, we gradually refine and abstract the way symmetry is expressed. Starting from automorphisms, which capture symmetry of X and A, we lift the analysis to the orbitals of X and A under the action of the automorphism groups and, finally, we endow the orbitals with the algebraic structure of association schemes. The progressively more abstract language for expressing the symmetry of the problem yields a progressively cleaner description of the impact of symmetry on the relaxation. For the SDP part of SDA, the abstraction process "automorphisms → orbitals → association schemes" may be viewed in purely linear-algebraic terms, as the quest for a convenient (i.e., low-dimensional) vector space where the output of the algorithm lives, and a suitable basis for this space. The last stage of this metamorphosis of symmetry discloses a new algebraic perspective on the relaxation. In particular, for certain classes of
digraphs, association schemes allow turning SDP into a linear program. On a high level, this is an instance of a general invariant-theoretic phenomenon: The presence of a rich group of symmetries makes it possible to reduce the size of semidefinite programs [36,46,66] and, in certain cases, to describe their feasible regions in terms of linear inequalities [38,51], see also [39,92]. The non-convex nature of Diophantine equations causes the linear part of SDA to process the symmetry of the inputs in a quite different way. We exploit the dihedral structure of the automorphism group of cycles to show that each associate in their scheme can be assigned an integral matrix with a small support; this, in turn, can be used to produce a solution M′ to the linear system.
This approach allows for a direct transfer of the results available in algebraic combinatorics on association schemes to the study of relaxations of homomorphism problems. For example, the explicit expression for the character table of a specific scheme known as the Johnson scheme shall be crucial for establishing a lower bound against the SDA model. One peculiarity of this framework is that it is not forgetful of the structure of the instance X. This contrasts with the techniques for describing relaxations of CSPs [9,34,76,93] based on the polymorphic approach [25,64,65], whose gist is that the complexity of a CSP depends on the identities satisfied by the polymorphisms of the CSP template A [10]. The polymorphic approach yields elegant characterisations of the power of some relaxations, in the sense that a CSP is solved by a certain algorithm if and only if its polymorphisms satisfy identities typical of the algorithm. (As established in [8], a similar approach also works for the promise version of CSP that we shall discuss shortly.) These "instance-free" characterisations rely on having access to both the identities typical of the algorithm-not available in the case of SDP and, thus, SDA-and a succinct description of the polymorphisms of the template-which is missing in the case of the approximate homomorphism problems we shall see next. In contrast, the description based on association schemes does take the structure of the instance into account, which results in higher control over the behaviour of the algorithm on certain highly symmetric instances.
The second main goal of our work is to apply the framework of association schemes to obtain an unconditional lower bound against SDA (and, a fortiori, against SDP). We consider the Approximate Graph Homomorphism problem (AGH): Given two (undirected) graphs A and B such that A → B and an instance X, distinguish between the cases (i) X → A and (ii) X ̸→ B. This problem is commonly studied in the context of Promise CSPs [5,8,19,78], and we shall thus denote it by PCSP(A, B). Observe that PCSP(A, B) is well defined for any pair of digraphs (and, in fact, relational structures) A → B. If we let A = K_n (the n-clique) and B = K_{n′} where n ≤ n′, AGH specialises to the Approximate Graph Colouring problem (AGC): Distinguish whether a given graph is n-colourable or not even n′-colourable. The computational complexity of these problems is a long-standing open question. In contrast, the complexity of the non-approximate versions of AGC and AGH (i.e., the cases n = n′ and A = B, respectively) was already classified by Karp [69] and Hell-Nešetřil [59], respectively. In 1976, Garey and Johnson conjectured that AGC is always NP-hard if n ≥ 3 (the case n = 2 reduces to 2-colouring and is thus tractable).
More recently, Brakensiek and Guruswami proposed the stronger conjecture that even AGH may always be NP-hard except in trivial cases (if either A or B has a loop or is bipartite, the problem is trivial or reduces to 2-colouring).
Among the several papers making progress on the two conjectures above, we mention [8,17,20,23,41,56,63,72,73,78]. However, both conjectures remain wide open in their full generality. Given the apparent "hardness of proving hardness" surrounding these problems, significant efforts have been directed towards showing the inapplicability of specific algorithmic models, following an established line of work on lower bounds against relaxations, e.g., [2,11,27,28,47,77,81,95]. Non-solvability of AGC via sublinear levels of local consistency and via linear Diophantine equations was proved in [4] and [29], respectively. It was shown in [67] that the technique of vector colouring, based on a semidefinite program akin to Lovász's orthonormal representation [83], is inapplicable to solving AGC. It follows from [57,75] that polynomial levels of the Sum-of-Squares hierarchy (and, in particular, SDP) are also not powerful enough to solve AGC. Very recently, [30] improved on the result in [29] by proving non-solvability of AGC via constant levels of the BA hierarchy, obtained by applying the lift-and-project technique to the BA relaxation of [22]. By leveraging the framework of association schemes, we establish that AGH is not solved by SDA.
Theorem 1. Let A, B be non-bipartite loopless undirected graphs such that A → B. Then SDA does not solve PCSP(A, B).
The improvement on the state of the art is twofold: Theorem 1 yields (i) the first non-solvability result for the whole class of problems AGH, as opposed to the subclass AGC, and (ii) the first lower bound against the combined "SDP + linear equations" model (which is strictly stronger than both models individually). Via Raghavendra's framework [87], the (SDP part of the) integrality gap in Theorem 1 directly yields a conditional hardness-of-approximation result for AGH: Assuming Khot's Unique Games Conjecture [74] and P ̸= NP, AGH is not solved by any polynomial-time robust algorithm.

Related work on association schemes
The Johnson scheme and other association schemes such as the Hamming scheme have appeared in the analysis of the performance ratio of the Goemans-Williamson Max-Cut algorithm [52] based on semidefinite programming, see [1,51,68]. In [84], certain spectral properties of the Johnson scheme were used to obtain lower bounds against the Positivstellensatz proof system (and, thus, against the Sum-of-Squares hierarchy) applied to the planted clique problem, see also [40].

Structure of the paper
In Section 2, we formally define the algorithms used in this work and list some useful preliminary observations about them. The analysis of how the algorithms process the symmetry of the input structures begins in Section 3, which provides a basis for the space of symmetry-invariant matrices output by the algorithms in terms of the orbitals of the input structures. Leveraging the theory of association schemes, Section 4 describes an alternative basis, which gives easier access to the spectral properties of the matrices involved in the relaxations. Sections 5 and 6 investigate two specific association schemes-the Johnson scheme and the cycle scheme, respectively-whose properties are then used in Section 7 to conclude the proof of Theorem 1. Finally, in Section 8, we compare the SDA algorithm with the BA hierarchy from [18,30], and we show that our lower bound is incomparable with the one in [30], as SDA-and, in fact, even a weaker version of SDP described in Subsection 8.1-is not less powerful than the BA hierarchy in a technical sense (Subsection 8.2).
Notation We let N be the set of positive integers, while N_0 = N ∪ {0}. For t ∈ N, we let [t] = {1, …, t}. We view vectors in R^t as column vectors, but sometimes write them as tuples for typographical convenience. We denote by I_t and J_t the t × t identity and all-one matrices, by O_{t,t′} the t × t′ all-zero matrix, and by 1_t and 0_t the all-one and all-zero vectors of length t. Indices shall sometimes be omitted when clear from the context. We denote by e_i the i-th standard unit vector of length t (which shall be clear from the context); i.e., the vector in R^t all of whose entries are 0 except the i-th entry, which is 1. Given a field F and a set V of vectors in F^t, span_F(V) is the set of linear combinations over F of the vectors in V. We write span(V) for span_R(V).
A matrix is Boolean if its entries are in {0, 1}. Given a real matrix M, we write M ≥ 0 if M is entrywise nonnegative, and we write M ≽ 0 if M is positive semidefinite (i.e., if M is symmetric and has a nonnegative spectrum). For two matrices M = (m_{ij}) and M′ of size m × n and m′ × n′, respectively, we let their Kronecker product be the mm′ × nn′ block matrix M ⊗ M′ whose (i, j)-th block, for i ∈ [m] and j ∈ [n], is the matrix m_{ij} M′. If M and M′ have equal size, we let M • M′ denote their Schur product (i.e., their entrywise product, also known as the Hadamard product). We shall often use the fact that (M ⊗ M′)(N ⊗ N′) = MN ⊗ M′N′, provided that the products are well defined (see [61]). The support of M, denoted by supp(M), is the set of indices of nonzero entries of M; for two matrices M, M′ of equal size, we write M ◁ M′ to denote that supp(M) ⊆ supp(M′). Given a digraph X, we let A(X) and Aut(X) denote the adjacency matrix and the automorphism group of X, respectively. We view undirected graphs as digraphs, by turning each undirected edge {x, y} into a pair of directed edges (x, y) and (y, x).
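The mixed-product identity (M ⊗ M′)(N ⊗ N′) = MN ⊗ M′N′ can be checked numerically; a minimal sketch (random integer matrices of compatible sizes, with the dimensions chosen arbitrarily by us):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(0, 5, (2, 3))   # 2 x 3
Mp = rng.integers(0, 5, (3, 2))  # 3 x 2
N = rng.integers(0, 5, (3, 2))   # 3 x 2, so that M @ N is defined
Np = rng.integers(0, 5, (2, 3))  # 2 x 3, so that Mp @ Np is defined

# Mixed-product property of the Kronecker product:
lhs = np.kron(M, Mp) @ np.kron(N, Np)
rhs = np.kron(M @ N, Mp @ Np)
assert np.array_equal(lhs, rhs)
```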

The algorithms
In the literature on CSPs, it is customary to define SDP in terms of systems of vectors satisfying certain orthogonality requirements [9,21,32,87,94]. We now present this standard vector formulation of SDP and define the augmented algorithm SDA along the same lines. Then, we describe an alternative formulation of both relaxations, which is more suitable for our purposes, and establish the equivalence between the two. To enhance the paper's readability and help the reader reach the technical core more quickly, we defer the proofs of the results in this section to Appendix A.
Let X and A be two digraphs, and label their vertex sets as V(X) = [p] and V(A) = [n] for some p, n ∈ N. We introduce a vector variable λ_{x,a} taking values in R^{pn} for all vertices x ∈ V(X), a ∈ V(A), and we set λ_{x,A} = Σ_{a∈V(A)} λ_{x,a}. Consider the system (SDP). Note that (SDP_4) forces all vectors λ_{x,A} to be equal. We say that SDP applied to X, A accepts, and we write SDP(X, A) = Yes, if the system (SDP) has a solution.
In order to augment SDP with the linear Diophantine part, we introduce additional variables µ_{x,a} taking values in Z for all vertices x ∈ V(X), a ∈ V(A), and variables µ_{x,a} taking values in Z for all directed edges x ∈ E(X), a ∈ E(A), and we consider the equations (AIP). The SDA relaxation consists in (i) searching for a solution to (SDP), (ii) discarding assignments having zero probability, and then (iii) searching for a solution to (AIP). Formally, we say that SDA applied to X and A accepts, and we write SDA(X, A) = Yes, if the system (SDP) admits a solution λ and the system (AIP) admits a solution µ such that the refinement condition (ref) holds, which requires the support of µ to be contained in the support of λ (where ∥·∥ is the Euclidean vector norm). Given two digraphs A, B such that A → B, we say that SDP (resp., SDA) solves PCSP(A, B) if, for any digraph X, SDP(X, A) = Yes (resp., SDA(X, A) = Yes) implies X → B. It follows from the definitions of the algorithms that X → A always implies SDP(X, A) = SDA(X, A) = Yes.
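Integer feasibility differs from rational feasibility precisely through divisibility obstructions, which is what integer variants of Gaussian elimination detect. A minimal single-equation illustration (a standard extended-Euclid sketch of our own, not the algorithm of [90]):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y (Bezout)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_diophantine(a, b, c):
    """An integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None  # feasible over Q, but not over Z
    k = c // g
    return (x * k, y * k)

# 2x + 4y = 5 is solvable over the rationals but has no integer solution:
assert solve_diophantine(2, 4, 5) is None
# 2x + 4y = 6 does have one:
x, y = solve_diophantine(2, 4, 6)
assert 2 * x + 4 * y == 6
```

The same divisibility tests, performed on the invariant factors of the constraint matrix, decide feasibility of a full linear Diophantine system.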
It shall be convenient for our purposes to use a slightly modified, matrix formulation of the relaxations, which allows for a more intuitive linear-algebraic view of the algorithms' behaviour. The equivalence of the two formulations is given in Proposition 4 below. Definition 2. A real pn × pn matrix M is a relaxation matrix for X, A if M satisfies the requirements (r_1)-(r_5). Given a relaxation matrix M, we say that M is an SDP-matrix for X, A if M ≽ 0 and M ≥ 0, and we say that M is an AIP-matrix for X, A if all of its entries are integral.
Notice that the definition of an SDP-matrix captures the description of the matrix M_f considered at the beginning of the Introduction to illustrate SDP. Indeed, viewing M as a block matrix whose n × n blocks are indexed by pairs in V(X)², (r_1) states that diagonal blocks are diagonal matrices, (r_2) states that the supports of blocks corresponding to edges of X are included in E(A), (r_3) and (r_4) state that the row-sum (resp. column-sum) vectors of blocks aligned horizontally (resp. vertically) are equal, and (r_5) is a normalisation condition. Interestingly, the very similar definition of an AIP-matrix is able to capture the linear Diophantine part of SDA.
For a square matrix A, the set of vectors v for which the Rayleigh quotient v^T A v is zero clearly includes the null space of A. The inclusion is in general strict, even assuming A to be symmetric. However, the two sets coincide when A is positive semidefinite (see [62, Obs. 7.1.6]). The next result is essentially a specialisation of this fact to the conditions (r_1)-(r_5) defining relaxation matrices. Given a real pn × pn matrix M, consider the condition (r_6). Proposition 3. Let M be a real pn × pn matrix. Then (i) the conditions (r_3), (r_4), and (r_5) imply the condition (r_6); (ii) if M ≽ 0, the conditions (r_3), (r_4), and (r_5) are equivalent to the condition (r_6).
By using Proposition 3, we can convert the vector formulation of the algorithms SDP and SDA into an equivalent formulation in terms of relaxation matrices. Proposition 4. Let X, A be digraphs. Then (i) SDP(X, A) = Yes if and only if there exists an SDP-matrix for X, A; (ii) if X is loopless, SDA(X, A) = Yes if and only if there exist an SDP-matrix M and an AIP-matrix N for X, A such that N • ((I_p + A(X)) ⊗ J_n) ◁ M. In particular, if M is an SDP-matrix for X, A, the corresponding vector formulation involves the vectors λ_{x,a} consisting of the columns of a pn × pn matrix L such that M = L^T L is a Cholesky decomposition of M. Since M ≽ 0, such a decomposition always exists. Moreover, the requirement N • ((I_p + A(X)) ⊗ J_n) ◁ M in part (ii) of Proposition 4 captures the refinement condition (ref) of the SDA algorithm.
In the remaining part of this section, we give a useful result on the behaviour of the relaxations defined above with respect to digraph homomorphisms, which can be easily derived by looking at the corresponding relaxation matrices. Given two finite sets R and S and a function f : R → S, we let Q_f be the |R| × |S| matrix whose (r, s)-th entry is 1 if f(r) = s, and 0 otherwise. The next lemma, whose trivial proof is omitted, lists some useful properties of Q_f. Lemma 5. Let R, S, T be finite sets, and let f : R → S, g : S → T be functions. Then • Q_f e_s = Σ_{r∈f^{-1}(s)} e_r for each s ∈ S. We now look at what happens when a relaxation is applied to two different pairs of inputs (X, A) and (X′, A′) such that X′ → X and A → A′. Expressing the two outputs in the form of relaxation matrices, the next proposition shows that one output can be obtained from the other through Kronecker products of the matrices Q_f. Proposition 6. Let X, X′, A, A′ be digraphs, let f : X′ → X and g : A → A′ be homomorphisms, and let M be a relaxation matrix for X, A. Then M^{(f,g)} is a relaxation matrix for X′, A′. Furthermore, if M is an SDP-matrix (resp. AIP-matrix) for X, A, then M^{(f,g)} is an SDP-matrix (resp. AIP-matrix) for X′, A′.
The last part of Lemma 5 states that the matrix Q_f corresponding to a bijective function is a permutation matrix and, thus, orthogonal. Note that the Kronecker product of orthogonal matrices is an orthogonal matrix. Therefore, if the homomorphisms f and g in Proposition 6 are both bijective (for example, if they are isomorphisms), the linear operator (·)^{(f,g)} : M ↦ M^{(f,g)} is an orthogonal transformation with respect to the Frobenius inner product ⟨M, N⟩. A straightforward consequence of Proposition 6 is that the algorithms are monotone with respect to the homomorphism preorder.
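A small sketch of the matrices Q_f used above (the sets, functions, and names are our own illustrative choices), verifying the fibre identity from Lemma 5 and the fact that a bijection yields a permutation matrix:

```python
import numpy as np

def Q(f, R, S):
    """|R| x |S| Boolean matrix whose (r, s)-entry is 1 iff f(r) = s."""
    M = np.zeros((len(R), len(S)))
    for i, r in enumerate(R):
        M[i, S.index(f[r])] = 1.0
    return M

R, S = [0, 1, 2], ['a', 'b']
f = {0: 'a', 1: 'a', 2: 'b'}
Qf = Q(f, R, S)

# Q_f e_s is the sum of e_r over the fibre f^{-1}(s):
e_a = np.array([1.0, 0.0])  # standard unit vector for s = 'a'
assert np.array_equal(Qf @ e_a, np.array([1.0, 1.0, 0.0]))

# For a bijection g, Q_g is a permutation matrix (hence orthogonal):
g = {0: 1, 1: 2, 2: 0}
Qg = Q(g, R, R)
assert np.array_equal(Qg @ Qg.T, np.eye(3))
```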

Automorphisms and orbitals
The leitmotif of this work is the use of linear algebra to manipulate relaxation algorithms.In this section, we begin to explore how the symmetries of the input digraphs-expressed via their automorphism groups-affect the outputs-expressed via relaxation matrices.
As usual, we let X and A be two digraphs whose vertex sets have size p and n, respectively. If ξ and α are automorphisms of X and A, respectively, we may permute the rows and columns of a relaxation matrix M according to ξ and α and the result would still be a relaxation matrix.
By averaging over all pairs of automorphisms (ξ, α), we end up with a relaxation matrix that is invariant under automorphisms of X and A. Since the set of positive semidefinite, entrywise-nonnegative matrices is closed under simultaneous permutations of rows and columns and under convex combinations, the same can be done for SDP-matrices.
We now formalise this observation. The next definition captures the invariance property mentioned above. Recall the description of the matrices Q_f and M^{(f,g)} given in Section 2. Definition 8. A real pn × pn matrix M is balanced for X, A if M^{(ξ,α)} = M for every ξ ∈ Aut(X) and α ∈ Aut(A).
Proposition 9. Let X, A be digraphs, let s = |Aut(X)| and t = |Aut(A)|, and let M be a relaxation matrix for X, A. Then the matrix

M̄ = (1/(st)) Σ_{ξ∈Aut(X)} Σ_{α∈Aut(A)} M^{(ξ,α)}    (2)

is a balanced relaxation matrix for X, A. Furthermore, if M is an SDP-matrix for X, A, then so is M̄.
Proof. For any ξ ∈ Aut(X) and α ∈ Aut(A), since automorphisms are, in particular, homomorphisms, we deduce from Proposition 6 that M^{(ξ,α)} is a relaxation matrix for X, A. Since the conditions (r_1)-(r_5) are clearly preserved by taking convex combinations, it follows that the averaged matrix M̄ is a relaxation matrix for X, A, too. We are left to show that M̄ is balanced. For ξ_0 ∈ Aut(X) and α_0 ∈ Aut(A), using Lemma 5, we find that M̄^{(ξ_0,α_0)} = M̄, as required (where the penultimate equality in the corresponding computation holds since Aut(X) and Aut(A) are groups). If M is an SDP-matrix for X, A, the same holds for M^{(ξ,α)} for any ξ, α (by virtue of Proposition 6) and for M̄ (since positive semidefiniteness and entrywise nonnegativity are preserved under convex combinations).
The next result, obtained as a consequence of Proposition 9, is a symmetric version of Proposition 4.
Proposition 10. Let X, A be digraphs. Then (i) SDP(X, A) = Yes if and only if there exists a balanced SDP-matrix for X, A; (ii) if X is loopless, SDA(X, A) = Yes if and only if there exist a balanced SDP-matrix M and an AIP-matrix N for X, A such that N • ((I_p + A(X)) ⊗ J_n) ◁ M. Proof. The result immediately follows by combining Proposition 4 with Proposition 9 and observing that, if M is an SDP-matrix, the matrix M̄ defined in (2) satisfies M̄ ◁ M.
As a result of Proposition 10, the output of SDP (and of the SDP-part of SDA) may be assumed to be balanced without loss of generality. In linear-algebraic terms, it follows that, instead of studying the outputs of SDP in R^{pn×pn} with the basis of standard unit matrices e_i e_j^T (as we have implicitly done so far), we may work without loss of generality in the real vector space L of balanced matrices for X, A. (The fact that L is a real vector space easily follows from Definition 8.) As we see next, the concept of orbitals provides a natural basis for the space L.
Take a digraph X, and consider the action of the group Aut(X) on the set V(X)² given by (x, y)^ξ = (ξ(x), ξ(y)) for ξ ∈ Aut(X), x, y ∈ V(X). An orbital of X is an orbit of V(X)² with respect to this action; i.e., it is a minimal subset of V(X)² that is invariant under the action. We let O(X) be the set of orbitals of X. Given an orbital ω ∈ O(X), we let R_ω be the p × p matrix whose (x, y)-th entry is 1 if (x, y) ∈ ω and 0 otherwise. Orbitals provide an alternative description of balanced matrices: A block matrix M is balanced for X, A if and only if the block structure of M is constant over the orbitals of X, and each block is constant over the orbitals of A. As stated next, it follows that we can find a basis for L by taking Kronecker products of the matrices R_ω. We shall see later that a different basis for the same space may be found under certain conditions using the theory of association schemes.
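As an illustration, the orbitals of a small, highly symmetric digraph can be computed by brute force; the following sketch (our own example, using the 5-cycle) recovers the fact that the matrices R_ω partition the all-one matrix:

```python
import numpy as np
from itertools import permutations

# Orbitals of the 5-cycle C_5 (vertices 0..4), computed by brute force.
n = 5
edges = ({(i, (i + 1) % n) for i in range(n)}
         | {((i + 1) % n, i) for i in range(n)})
autos = [s for s in permutations(range(n))
         if all((s[x], s[y]) in edges for (x, y) in edges)]
assert len(autos) == 10  # the dihedral group of order 2n

# Orbits of V(X)^2 under (x, y) -> (s(x), s(y)):
orbitals = {frozenset((s[x], s[y]) for s in autos)
            for x in range(n) for y in range(n)}
R = [np.array([[1.0 if (x, y) in w else 0.0 for y in range(n)]
               for x in range(n)]) for w in orbitals]

# The Boolean matrices R_w partition the all-one matrix:
assert np.array_equal(sum(R), np.ones((n, n)))
assert len(R) == 3  # the diagonal, distance-1 pairs, and distance-2 pairs
```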
Proposition 11. Let X, A be digraphs, and let L be the real vector space of balanced matrices for X, A. Then the set R = {R_ω ⊗ R_ω̃ : ω ∈ O(X), ω̃ ∈ O(A)} is a basis for L.
Proof. A direct computation shows that (R_ω ⊗ R_ω̃)^{(ξ,α)} = R_ω ⊗ R_ω̃ for all ξ ∈ Aut(X) and α ∈ Aut(A), thus showing that the matrix R_ω ⊗ R_ω̃ is balanced for X, A. It follows that R ⊆ L. Since the orbits of a group action partition the underlying set, we can write V(X)² as the disjoint union of the orbitals. Therefore, R consists of Boolean matrices summing up to the all-one matrix, and it is thus a linearly independent set. Given M ∈ L, ω ∈ O(X), and ω̃ ∈ O(A), let v_{ωω̃} = (e_x ⊗ e_a)^T M (e_y ⊗ e_b) for some (x, y) ∈ ω, (a, b) ∈ ω̃. This definition is well posed by virtue of Definition 8, and it guarantees that

M = Σ_{ω∈O(X)} Σ_{ω̃∈O(A)} v_{ωω̃} (R_ω ⊗ R_ω̃).    (3)

It follows that span(R) = L, which concludes the proof.
It follows from Proposition 11 that, given a balanced matrix M, there exists a unique list of coefficients v_{ωω̃} satisfying the equation (3). We shall refer to the |O(X)| × |O(A)| matrix V = (v_{ωω̃}) as the orbital matrix of M. Expressing a balanced matrix M in the new basis R rather than in the standard basis for R^{pn×pn} is especially convenient when X and A are highly symmetric. Indeed, if Aut(X) and Aut(A) are large, O(X) and O(A) are small. Working with R then allows compressing the information of the pn × pn matrix M in the smaller |O(X)| × |O(A)| orbital matrix V. However, if we want to make use of V to certify acceptance of SDP, we need to be able to check if M is an SDP-matrix by only looking at V. While lifting the requirements defining an SDP-matrix to the orbital matrix, it should come with little surprise that the crucial one is positive semidefiniteness: How to translate the fact that M ≽ 0 into a condition on V? We shall see in the next section that the key for recovering the spectral properties of M from the orbital matrix is to endow the set of orbitals with a certain algebraic structure.

Association schemes
Our strategy for gaining access to the spectral properties of a balanced matrix (in particular, its positive semidefiniteness) from the corresponding orbital matrix consists in studying the orbitals of a given digraph algebraically, via the concept of association schemes. Definition 12. An association scheme is a set S = {S_0, S_1, …, S_d} of p × p Boolean matrices satisfying the following conditions: (s_1) I_p ∈ S; (s_2) Σ_{i=0}^{d} S_i = J_p; (s_3) S_i^T ∈ S for each i; (s_4) S_i S_j ∈ span_C(S) for each i, j; (s_5) S_i S_j = S_j S_i for each i, j. Association schemes were introduced by Bose and Nair [15] and Bose and Shimamoto [16] in the context of statistical design of experiments, but the root of the theory can be traced back to the work of Frobenius, Schur, and Burnside on representation theory of finite groups, see [7]. Indeed, if all S_i in Definition 12 are permutation matrices, S is a finite group; association schemes allow developing a theory of symmetry that generalises character theory for group representations. Later, Delsarte's work in algebraic coding theory [38] initiated the study of association schemes as a separate area in the domain of algebraic combinatorics.
The Bose-Mesner algebra B of S is the vector space span_C(S), which consists of all complex linear combinations of the matrices in S (see [14]). Since the matrices in S are Boolean and satisfy (s_2), they form a basis for B. Notice also that the set S ∪ {O_{p,p}} is closed under the Schur product, and so is B. Moreover, the matrices in S are Schur-orthogonal and Schur-idempotent, in that S_i • S_j equals S_i when i = j, and equals O_{p,p} otherwise. Hence, we have the following. Fact 13. Let S be an association scheme. Then S forms a Schur-orthogonal basis of Schur-idempotents for its Bose-Mesner algebra B. Now, by (s_4), B is also closed under the standard matrix product; in other words, it is a matrix algebra, thus justifying the name. It turns out that a different basis exists for B, whose members enjoy similar properties to those for the basis S, but with a different product being involved.
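For a concrete instance of the setting of Fact 13, the distance matrices of a cycle form an association scheme; a numerical sketch for C_5 (our own example, with the distance matrices playing the role of the S_i):

```python
import numpy as np

# Distance matrices of the 5-cycle: S_d has (x, y)-entry 1 iff the cyclic
# distance between x and y is d.  (For C_5 these are exactly the orbitals.)
n = 5
dist = lambda x, y: min((x - y) % n, (y - x) % n)
S = [np.array([[1.0 if dist(x, y) == d else 0.0 for y in range(n)]
               for x in range(n)]) for d in range(3)]

assert np.array_equal(S[0], np.eye(n))          # identity in the scheme
assert np.array_equal(sum(S), np.ones((n, n)))  # partition of the all-one matrix
# Schur-orthogonality of distinct members:
assert np.array_equal(S[1] * S[2], np.zeros((n, n)))
# Closure of the span under matrix products, e.g. S_1 S_1 = 2 S_0 + S_2:
assert np.array_equal(S[1] @ S[1], 2 * S[0] + S[2])
# Commutativity (circulant matrices commute):
assert np.array_equal(S[1] @ S[2], S[2] @ S[1])
```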
Fact 14. Let S be an association scheme. Then there exists an orthogonal basis E = {E_0, E_1, …, E_d} of Hermitian idempotents for its Bose-Mesner algebra B.
The interaction between the two bases S and E allows deriving several interesting features of association schemes. The change-of-basis matrix shall be particularly important for our purposes. More precisely, we can (uniquely) express the elements of S as

S_j = Σ_{i=0}^{d} p_{ij} E_i    (4)

for some coefficients p_{ij}. The (d + 1) × (d + 1) matrix P = (p_{ij}) is known as the character table of the association scheme [6].
For our purposes, association schemes will provide a natural language for describing how the SDA algorithm-in particular, its semidefinite programming part-processes the symmetries of the input digraphs. We say that a digraph X is generously transitive if for any x, y ∈ V(X) there exists ξ ∈ Aut(X) such that ξ(x) = y and ξ(y) = x. It turns out that the set of orbitals of a generously transitive digraph forms an association scheme. Indeed, in this case, the condition (s_1) is trivially satisfied, since each generously transitive digraph is in particular vertex-transitive, while (s_2) follows from the fact that the orbitals partition V(X)². The condition (s_3)-in fact, the stronger condition that R_ω^T = R_ω for each ω ∈ O(X)-and (s_5) directly come from the definition of generous transitivity. As for the condition (s_4), it can be proved by considering the Hecke ring of the permutation representation of Aut(X); we refer the reader to [6] or [50] for further details.
We shall refer to the character table of the association scheme {R_ω : ω ∈ O(X)} as the character table of X. Note that this is a |O(X)| × |O(X)| matrix. Recall that our current objective is to decipher the spectral properties of a balanced matrix M from the corresponding orbital matrix V. The idea is then to consider a new basis for the space of balanced matrices, alternative to the one of Proposition 11, given by the Kronecker product of the orthogonal bases from Fact 14 for the two schemes of the input digraphs X and A. As the next result shows, working in this new basis allows recovering the spectrum of a balanced matrix from the corresponding orbital matrix. Moreover, the character table serves as the dictionary required for the translation.
Theorem 16. Let X and A be generously transitive digraphs, let M be a balanced matrix for X, A, let V be the orbital matrix of M, and let P and P̃ be the character tables of X and A, respectively. Then the spectrum of M consists of the entries of the matrix P V P̃^T.
Proof. Let E = {E_ω : ω ∈ O(X)} be an orthogonal basis of Hermitian idempotents for the Bose-Mesner algebra of the association scheme corresponding to O(X), as per Fact 14. Also, let Ẽ = {Ẽ_ω̃ : ω̃ ∈ O(A)} be an orthogonal basis of Hermitian idempotents for the Bose-Mesner algebra for O(A). Note that E consists of p × p matrices, while Ẽ consists of n × n matrices (where p = |V(X)| and n = |V(A)|, as usual). Denote the entries of P and P̃ by p_ij and p̃_ij, respectively. Using Proposition 11, we can express the balanced matrix M in the basis R as in (3). For σ ∈ O(X) and σ̃ ∈ O(A), we find Using (4), and recalling that, by Theorem 15, the matrices R_ω and R̃_ω̃ take the roles of the Schur idempotents for the respective association schemes, we obtain where the second equality follows from the fact that the members of the bases E and Ẽ are orthogonal and idempotent. Consider now, for σ ∈ O(X) and σ̃ ∈ O(A), the complex vector space We claim that where "+" denotes the sum of vector subspaces of C^{pn}. Using (s_1), let τ ∈ O(X) and τ̃ ∈ O(A) be such that R_τ = I_p and R̃_τ̃ = I_n. By (4), we have that p_στ E_σ and As a consequence, we find that Given a vector v ∈ C^{pn}, we obtain thus proving the nontrivial inclusion in the claimed identity. It follows from the orthogonality of the idempotents that the sum in (6) is in fact an orthogonal direct sum. Furthermore, by (5), each C_{(σ,σ̃)} is an eigenspace for M relative to the eigenvalue e_σ^T P V P̃^T e_σ̃. As a consequence, (6) yields a decomposition of C^{pn} into eigenspaces for M, and it follows that the eigenvalues of M are precisely the numbers e_σ^T P V P̃^T e_σ̃ (i.e., the entries of the matrix P V P̃^T), with geometric and algebraic multiplicity given by the dimension of C_{(σ,σ̃)}. One consequence of Theorem 16 is that, in the new basis for the space of balanced matrices, the semidefinite-programming condition M ≽ 0 is transformed into the linear-programming condition P V P̃^T ≥ 0.
All other conditions making M an SDP-matrix (namely, the conditions (r_1)-(r_5) and the entrywise nonnegativity) are trivially translated into equivalent (linear-programming) conditions on V. Hence, Theorem 16 turns the semidefinite program (SDP) applied to two generously transitive digraphs X and A into an equivalent linear program, whose constraints are now in terms of the character tables of X and A. This is made explicit in the next result.
For a digraph X, we let μ_X be the vector, indexed by the elements of O(X), whose ω-th entry is |ω|. We say that an orbital ω is the diagonal orbital if R_ω is the identity matrix, and we say that ω is an edge orbital if ω ⊆ E(X); non-diagonal and non-edge orbitals are defined in the obvious way. Notice that the edge orbitals of X partition E(X).
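These notions can be made concrete on a small example. The sketch below (our own encoding) computes the orbitals of C_5 by brute force, together with the vector μ and the diagonal/edge classification; the three orbitals of C_5 correspond to the distances 0, 1, 2.

```python
from itertools import permutations

# The undirected 5-cycle as a symmetric digraph.
n = 5
V = tuple(range(n))
E = {(x, (x + 1) % n) for x in range(n)} | {((x + 1) % n, x) for x in range(n)}

# Brute-force automorphism group.
autos = []
for p in permutations(V):
    f = dict(zip(V, p))
    if all(((f[x], f[y]) in E) == ((x, y) in E) for x in V for y in V):
        autos.append(f)

# Orbitals: orbits of Aut(X) acting coordinatewise on V(X)^2.
orbitals, seen = [], set()
for pair in ((x, y) for x in V for y in V):
    if pair not in seen:
        orbit = frozenset((f[pair[0]], f[pair[1]]) for f in autos)
        orbitals.append(orbit)
        seen |= orbit

mu = [len(w) for w in orbitals]                       # the vector mu_X
diagonal = [w for w in orbitals if all(x == y for (x, y) in w)]
edge_orbitals = [w for w in orbitals if w <= E]       # orbitals inside E(X)
```

As expected, the diagonal orbital has size 5 and the two remaining orbitals (one of which is the unique edge orbital) have size 10 each.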
Corollary 17. Let X and A be generously transitive digraphs and let P and P̃ be the character tables of X and A, respectively. Then SDP(X, A) = Yes if and only if there exists a real entrywise-nonnegative |O(X)| × |O(A)| matrix V = (v_{ω ω̃}) such that (c_1) P V P̃^T ≥ 0; (c_2) V μ_A = 1; (c_3) v_{ω ω̃} = 0 if ω is the diagonal orbital of X and ω̃ is a non-diagonal orbital of A; (c_4) v_{ω ω̃} = 0 if ω is an edge orbital of X and ω̃ is a non-edge orbital of A.
Proof. Combining Proposition 10 with Proposition 11, we find that SDP(X, A) = Yes if and only if there exists a real |O(X)| × |O(A)| matrix V such that the matrix M defined via (7) is an SDP-matrix. Hence, we need to show that M is an SDP-matrix precisely when V is entrywise nonnegative and satisfies (c_1)-(c_4). One readily sees from (7) that M ≥ 0 exactly when V ≥ 0. Since the association schemes corresponding to O(X) and O(A) are symmetric by Theorem 15, all matrices R_σ and R̃_σ̃ are symmetric, and so are their Kronecker products. Hence, by (7), M is symmetric as well. It follows that M ≽ 0 if and only if its spectrum is nonnegative. By Theorem 16, this is equivalent to (c_1). Consider two orbitals ω ∈ O(X) and ω̃ ∈ O(A), and let (x, y) ∈ ω, (a, b) ∈ ω̃. We find It follows that (c_3) and (c_4) are equivalent to (r_1) and (r_2), respectively. Furthermore, given (x, y) ∈ ω ∈ O(X), we have As a consequence, (c_2) is equivalent to (r_6), which, by Proposition 3, is equivalent to (r_3), (r_4), and (r_5).
In order to prove that SDA does not solve PCSP(A, B) for any pair of non-bipartite loopless undirected graphs such that A → B, thus establishing Theorem 1, we seek a fooling instance: a digraph X such that SDA(X, A) = Yes but X ̸→ B. If we wish to apply Corollary 17 and take advantage of the machinery developed so far for describing the output of SDP, we need both X and A to be generously transitive digraphs. Regarding A, this requirement does not create problems. Indeed, it is not hard to check that it is enough to establish the result in the case that A is an odd undirected cycle and B is a clique. Since cycles happen to be generously transitive, Theorem 15 does apply; as we shall see, the structure of the scheme for odd cycles also allows dealing with the linear part of SDA. The more challenging part is to come up with a digraph X that (i) is generously transitive, (ii) is not homomorphic to B (i.e., has high chromatic number), and (iii) is accepted by SDA. A promising candidate is the class of Kneser graphs, as they (i) are generously transitive and (ii) have unbounded chromatic number (which is easily derived from the parameters of the graphs through a classic result by Lovász [82]). In the next two sections, we look at the association schemes for Kneser graphs and odd cycles. The task is to collect enough information on their character tables to design an orbital matrix witnessing the fact that (iii) SDA(X, A) = Yes.

The Johnson scheme
Given s, t ∈ N such that s > 2t, the Kneser graph G_{s,t} is the undirected graph whose vertices are all subsets of [s] of size t, and whose edges are all disjoint pairs of such subsets. As a consequence of the Erdős-Ko-Rado theorem [13,50], the automorphism group of G_{s,t} is isomorphic to the symmetric group Sym_s consisting of the permutations of [s]. More precisely, given f ∈ Sym_s, the corresponding automorphism ξ_f of G_{s,t} is given by ξ_f : {a_1, ..., a_t} → {f(a_1), ..., f(a_t)} for any set {a_1, ..., a_t} of t elements of [s]. Let U, V be vertices of G_{s,t}, let Z = U ∩ V, and label the elements of U \ Z, V \ Z, and Z by {u_1, ..., u_q}, {v_1, ..., v_q}, and {z_1, ..., z_{t−q}} for some 0 ≤ q ≤ t. Letting f ∈ Sym_s be the permutation that switches each u_i with v_i and is constant over [s] \ ((U ∪ V) \ Z), we see that ξ_f(U) = V and ξ_f(V) = U. This means that G_{s,t} is generously transitive and, thus, O(G_{s,t}) generates an association scheme, which is known as the Johnson scheme.

Figure 1: From the left, the generalised Johnson graphs J_{7,3,1}, J_{7,3,2}, and J_{7,3,3}.
Observe that, for two vertices U and V as above, the orbital of (U, V) consists precisely of the pairs of vertices (U′, V′) such that |U′ ∩ V′| = |U ∩ V|. Hence, the association scheme corresponding to O(G_{s,t}) consists of the adjacency matrices of the generalised Johnson graphs J_{s,t,q} for q = 0, ..., t, where J_{s,t,q} is the graph having the same vertex set as G_{s,t}, with two vertices being adjacent if and only if their intersection has size t − q (cf. Figure 1). For q = t, J_{s,t,q} is G_{s,t}; for q = 1, it is known as the Johnson graph, see [49, § 1.6]; for q = 0, it is the disjoint union of (s choose t) loops (whose adjacency matrix is the identity). Hence, the diagonal orbital corresponds to q = 0, while the (unique) edge orbital corresponds to q = t.
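The fact that these adjacency matrices form a scheme boils down to the intersection numbers being well defined: the number of t-sets W at prescribed relations from U and V depends only on |U ∩ V|. This can be checked exhaustively for small parameters; the sketch below (stdlib only, our own encoding) does so for s = 7, t = 3.

```python
from itertools import combinations

s, t = 7, 3
verts = list(combinations(range(s), t))

def q_rel(U, W):
    """Index of the orbital containing (U, W): the q with |U ∩ W| = t − q."""
    return t - len(set(U) & set(W))

# For each pair (U, V), record how many W realise each pair of relations
# (q_rel(U, W), q_rel(W, V)); in a scheme this profile depends only on q_rel(U, V).
profiles = {}
well_defined = True
for U in verts:
    for Vv in verts:
        counts = {}
        for W in verts:
            key = (q_rel(U, W), q_rel(W, Vv))
            counts[key] = counts.get(key, 0) + 1
        q = q_rel(U, Vv)
        if q in profiles:
            well_defined = well_defined and (profiles[q] == counts)
        else:
            profiles[q] = counts
```

The check passes, with one profile per orbital index q ∈ {0, 1, 2, 3}, as condition (s_4) predicts for the Johnson scheme.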
In order to design an orbital matrix witnessing that G_{s,t} is accepted by SDP (and, as we will see, SDA), it shall be useful to gain some insight into the behaviour of the character table of G_{s,t} when it is multiplied by column vectors (which, ultimately, will be the columns of the orbital matrix, cf. Corollary 17). We shall see that, if a column vector is interpolated by a polynomial of low degree, multiplying it by the character table yields a vector living in a fixed, low-dimensional subspace of R^{t+1}. This observation leads us to choose an orbital matrix whose nonzero columns are interpolated by polynomials of degree one (cf. the proof of Theorem 1 in Section 7).
Let h be the vector (0, 1, ..., t) and, given a univariate polynomial f ∈ R[x], let h_f be the vector (f(0), f(1), ..., f(t)). Henceforth, we shall label the members of the Johnson scheme by 0, 1, ..., t, where the q-th member is A(J_{s,t,q}). Hence, we label the entries of the character table of G_{s,t} and the standard unit vectors e_i in Theorem 18 accordingly, with indices ranging over {0, ..., t} rather than {1, ..., t + 1}. A similar labelling shall also be used for the cycle scheme, cf. Propositions 24 and 26.
Theorem 18. Let s, t ∈ N with s > 2t, and let P be the character table of G_{s,t}. Then (i) P h_f ∈ span(e_0, ..., e_d) for any univariate polynomial f of degree d ≤ t. To prove Theorem 18, we take advantage of the explicit expression for the character table of the Johnson scheme obtained by Delsarte [38] (see also [50, § 6.5]) in terms of the Eberlein polynomials β(s, t, q, j), defined for s, t, q, j ∈ N_0. Here, we are using the conventions that the binomial coefficient (x choose y) is 0 unless 0 ≤ y ≤ x, and that (0 choose 0) = 1. In particular, this implies that β(s, t, q, j) = 0 unless q, j ≤ t ≤ s.
Our strategy consists in associating with the entries of the character table a family of bivariate generating functions (parameterised by t and j) defined by γ_{t,j}(x, y) = Σ_{s,q ∈ N_0} β(s, t, q, j) x^s y^q. (8) We now find a closed formula for these generating functions. Henceforth, the range in a summation shall always be meant to be N_0 unless otherwise specified.
Proposition 20. The identity holds for each t, j ∈ N_0 and x, y ∈ R such that j ≤ t and −1 < x < 1.
Proof. We use the well-known identity We find as required.
Theorem 18 is then proved by expressing the entries of the vector P h_f in terms of partial derivatives of the generating functions γ_{t,j}, and by finding analytic expressions for these partial derivatives through Proposition 20. In particular, the quantity ϑ_{s,t,j}(k) = Σ_q q^k β(s, t, q, j) shall be crucial in the following. Observe that we can view it as the j-th entry of the vector obtained by multiplying the character table of G_{s,t} by a vector whose q-th entry is q^k. It is possible to isolate ϑ_{s,t,j}(k) by differentiating the generating functions γ_{t,j}. Using the closed formula for γ_{t,j} we just obtained, one can then deduce some useful identities.
Proposition 21. Let s, t, q, j, k ∈ N_0. Then (i) ϑ_{s,t,j}(k) = 0 if k < j; (ii) ϑ_{s,t,j}(j) = (−1)^j j! (s−2j choose t−j); (iii) ϑ_{s,t,j}(j + 1) = (−1)^j (j + 1)! (s−2j choose t−j) (t − j/2 − (t−j)²/(s−2j)). Proof. Notice that the result is trivially true if j > t, as in this case β(s, t, q, j) = 0. Differentiating the polynomial γ_{t,j} as defined in (8) k times with respect to the variable y yields Observe that k! (q choose k) is a polynomial in q of degree k. Hence, we can find coefficients a_i^{(k)} such that k! (q choose k) = Σ_i a_i^{(k)} q^i; in particular, we have a_k^{(k)} = 1. We now make use of Proposition 20 to get an alternative expression for the object above. If −1 < x < 1, applying the Leibniz rule for differentiation, we obtain Notice that, for 0 If k ≥ j, it follows that, over the disk −1 < x < 1, In fact, (12) also holds if k < j, as both terms in the equality are zero in that case. Using that γ_{t,j} is an analytic function, we can then compare (11) and (12) by equating the coefficients. This yields the identity If k < j, the right-hand side of (13) is zero. Recalling that a_k^{(k)} = 1, it is then immediate to conclude by induction over k that ϑ_{s,t,j}(k) = 0, thus establishing (i).
Remark 22. We observe that the equation (13) yields the following recursive identity satisfied by the quantities ϑ_{s,t,j}(k): The numbers a_i^{(k)} are the Stirling numbers of the first kind, see [91].
We can now prove Theorem 18. Recall that h = (0, 1, ..., t) and h_f = (f(0), f(1), ..., f(t)) for a univariate polynomial f ∈ R[x] (where, as usual, we interpret tuples as column vectors); if f is the monomial given by f(x) = x^k, we denote h_f by h_k. We now establish that multiplying the character table of the Johnson scheme by h_f yields a vector with the property that the entries with index bigger than the degree of f are zero. This fact is particularly useful when f has low degree, as in this case all but the first few entries in the resulting vector are zero. Using parts (ii) and (iii) of Proposition 21, we are able to find the expressions for the nonzero coefficients in the case that the degree is zero or one.
Proof of Theorem 18. Let f be a polynomial of degree d ≤ t, and write it as f(x) = Σ_{k=0}^{d} a_k x^k for some coefficients a_k. We obtain h_f = Σ_{k=0}^{d} a_k h_k. Hence, for each j ∈ {0, ..., t}, we have e_j^T P h_f = Σ_{k=0}^{d} a_k Σ_{q=0}^{t} q^k e_j^T P e_q = Σ_{k=0}^{d} a_k Σ_{q=0}^{t} q^k β(s, t, q, j). (14) If j > d, it follows from Proposition 21(i) that the quantity in (14) is zero, which means that P h_f ∈ span(e_0, ..., e_d), thus proving (i).

The cycle scheme
Consider the undirected cycle C_n with n vertices, where n ≥ 3. It is well known that Aut(C_n) is the dihedral group of order 2n, consisting of all rotations and reflections of the cycle. Any pair of distinct vertices is switched by a suitable reflection, so C_n is generously transitive. Hence, the orbitals of C_n form an association scheme. The automorphism group of any graph X acts isometrically on V(X)², in the sense that dist(ξ(x), ξ(y)) = dist(x, y) for any x, y ∈ V(X) and any ξ ∈ Aut(X). In addition, the structure of the dihedral group implies that two pairs (x, y) and (x′, y′) of vertices of C_n lie in the same orbital whenever dist(x, y) = dist(x′, y′). Hence, if n = 2m + 1 is an odd integer, there are exactly m + 1 orbitals, one for each possible distance between two vertices in the cycle. We can thus write O(C_n) = {ω_0, ..., ω_m}, with ω_j = {(x, y) ∈ V(C_n)² : dist(x, y) = j}. In other words, ω_0 and ω_1 are the diagonal orbital and the (unique) edge orbital, respectively. Each orbital has size 2n, except ω_0, which has size n. Instead of providing a complete description of the character table of C_n, it shall be enough for our purposes to derive one property using the Perron-Frobenius theorem. We say that a real entrywise-nonnegative square matrix M is primitive if M^c is entrywise positive for some power c ∈ N.
Theorem 23 (Perron-Frobenius theorem [43,86]). Let M be a primitive matrix. Then M has a unique eigenvalue ρ associated with an entrywise-nonnegative eigenvector. Moreover, ρ is a simple eigenvalue, it is real and positive, and |λ| < ρ for each other eigenvalue λ of M.
As in the case of the Johnson scheme, it shall be convenient to let the indices of the entries in the character table of C_n range over {0, ..., m} rather than {1, ..., m + 1}.
Proposition 24. Let n ≥ 3 be an odd integer, and let P be the character table of C_n. Then P e_0 = 1, while P e_1 contains exactly one entry equal to 2, and all other entries are strictly smaller than 2 in absolute value.
Proof. Let E = {E_0, ..., E_m} be an orthogonal basis of idempotents for the association scheme of O(C_n) as per Fact 14, where m = (n−1)/2. Since E is a basis, all of its members are different from the zero matrix, so for each ℓ ∈ {0, ..., m} there exists v^{(ℓ)} ∈ C^n such that E_ℓ v^{(ℓ)} ≠ 0. Using the idempotency of the basis, we find, for each j ∈ {0, ..., m}, This means that p_ℓj is an eigenvalue of R_{ω_j} for each ℓ, j. Choosing j = 0 and recalling that R_{ω_0} = I_n, it follows that P e_0 = 1. Reasoning as in the proof of Theorem 16, we find that (15) yields a simultaneous decomposition of C^n into eigenspaces of R_{ω_j} for each ω_j ∈ O(C_n). As a consequence, all eigenvalues of R_{ω_j} appear as entries of the vector P e_j. Since n is odd, given two vertices x and y of C_n, we can always find a path connecting x to y whose length is even; clearly, any such path can be extended to a walk of length exactly n − 1 connecting x to y. This means that the matrix (A(C_n))^{n−1} = R_{ω_1}^{n−1} is entrywise positive and, thus, that R_{ω_1} is primitive. By Theorem 23, ρ = 2 is the spectral radius of R_{ω_1}, it is a simple eigenvalue, and all other eigenvalues of R_{ω_1} have strictly smaller absolute value. This yields the desired description for P e_1.
Remark 25. The orbitals of C_n form an association scheme also in the case that n is even. However, the description of P e_1 in Proposition 24 fails to be true in that case, the reason being that the adjacency matrix of an even cycle is not primitive. In fact, it is well known that the spectrum of the adjacency matrix of a bipartite graph is symmetric around 0 (see [12, Prop. 8.2]). As a consequence, there are two entries of P e_1 whose values are 2 and −2. Ultimately, this slight difference is able to break the whole argument in the proof of Theorem 1 if one tries to replace odd cycles with even cycles. This is a good sanity check, as we know that SDP does solve CSP(K_2) and, thus, CSP(A) for any undirected bipartite graph A [9] (where we denote by CSP(A) the CSP parameterised by A; i.e., CSP(A) = PCSP(A, A)).
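Proposition 24 can be checked numerically. Under the standard circulant description of the cycle scheme (our assumption here, consistent with the proof above), the eigenvalue of R_{ω_j} on the ℓ-th common eigenspace is 2 cos(2πℓj/n) for j ≥ 1, and 1 for j = 0; these numbers are the entries p_ℓj of the character table.

```python
import math

def cycle_character_table(n):
    """Character table of the cycle scheme of C_n for odd n = 2m + 1,
    via the circulant eigenvalues 2*cos(2*pi*l*j/n)."""
    m = (n - 1) // 2
    return [[1.0 if j == 0 else 2 * math.cos(2 * math.pi * l * j / n)
             for j in range(m + 1)]
            for l in range(m + 1)]

P = cycle_character_table(7)
col0 = [row[0] for row in P]   # P e_0: the all-ones vector
col1 = [row[1] for row in P]   # P e_1: one entry 2, the rest smaller in modulus
```

For any odd n, the first column is all ones and the second has a single entry equal to 2 with all others strictly below 2 in absolute value; for even n the entry 2 cos(πn/n) = −2 would appear as well, matching Remark 25.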
Theorem 18 and Proposition 24 contain the instructions needed to design an orbital matrix corresponding to a balanced SDP-matrix, which shall be concretely done in the proof of Theorem 1. In order to show that Kneser graphs are fooling instances for SDA applied to the approximate graph homomorphism problem, this SDP-matrix should be augmented with a suitable AIP-matrix, as prescribed by Proposition 10. The key to making the augmentation possible is to assign to each member of the cycle scheme an integral matrix whose support is included in the corresponding orbital, in such a way that the row- and column-sum vectors are equal and constant over the whole scheme. The next result shows that such an assignment does exist.
Proposition 26. For any odd integer n ≥ 3 there exists a function f : O(C_n) → Z^{n×n} such that, for each ω ∈ O(C_n), supp(f(ω)) ⊆ ω and f(ω)1 = f(ω)^T 1 = e_0. Proof. Let {0, ..., n − 1} be the vertex set of C_n, and take ω ∈ O(C_n). If ω is the diagonal orbital, we let f(ω) = e_0 e_0^T, which clearly satisfies the requirements. If ω is not the diagonal orbital, the description of O(C_n) given at the beginning of this section implies that there exists j ∈ [m] such that ω = {(x, y) ∈ {0, ..., n − 1}² : dist(x, y) = j}. For each vertex x ∈ {0, ..., n − 1}, there exist exactly two vertices y ≠ y′ ∈ {0, ..., n − 1} such that dist(x, y) = dist(x, y′) = j. In other words, ω is the edge set of an undirected graph H_ω all of whose vertices have degree two. It follows that H_ω is the disjoint union of n_ω undirected cycles, for some n_ω ∈ N. Let C be one of these cycles and let x be a vertex belonging to C. The length of C is the minimum ℓ ∈ N such that x + ℓj ≡ x (mod n). Since this quantity does not depend on x, we deduce that all cycles in H_ω have the same length ℓ_ω ≥ 3. As n_ω ℓ_ω = n, it follows in particular that ℓ_ω is odd. Choose as C the ℓ_ω-cycle in H_ω containing the vertex 0, and relabel the vertices of C as 0 = 0′, 1′, ..., (ℓ_ω − 1)′, in the natural order. Consider the oriented graph C̃ obtained by setting an alternating orientation for each edge of C, starting from (0′, 1′); i.e., We define f(ω) as the n × n matrix whose (x, y)-th entry is 1 if (x, y) ∈ S_+, −1 if (x, y) ∈ S_−, and 0 otherwise. Notice that With the exception of 0, each vertex of C_n either is the tail of exactly two directed edges in C̃, whose contributions in f(ω) have opposite signs, or it is not the tail of any directed edge in C̃. As for 0 = 0′, it is the tail of exactly one directed edge, whose contribution is +1. The same statement is true if we replace "tail" with "head". As a consequence, we find that f(ω)1 = f(ω)^T 1 = e_0, thus concluding the proof.
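The construction in the proof is entirely explicit and can be implemented directly. The sketch below is our own encoding of the alternating-orientation idea (the exact sign convention along the cycle is our assumption); it builds f(ω) for each orbital, supported on the cycle of H_ω through the vertex 0.

```python
from math import gcd

def f_matrix(n, j):
    """Integral matrix assigned to the distance-j orbital of C_n (n odd),
    following the alternating-orientation construction of Proposition 26."""
    M = [[0] * n for _ in range(n)]
    if j == 0:                   # diagonal orbital: e_0 e_0^T
        M[0][0] = 1
        return M
    ell = n // gcd(n, j)         # common length of the cycles of H_omega
    labels = [(i * j) % n for i in range(ell)]   # the cycle through vertex 0
    for i in range(ell):         # alternate orientations and signs along the cycle
        a, b = labels[i], labels[(i + 1) % ell]
        if i % 2 == 0:
            M[a][b] = 1          # forward edge, in S_+
        else:
            M[b][a] = -1         # reversed edge, in S_-
    return M
```

For every odd n and every j, the resulting matrix has row-sum and column-sum vector e_0 and support inside the distance-j orbital, exactly as Proposition 26 requires; the parity argument works because the cycle length is odd.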

A lower bound against SDA
We now have all the ingredients for proving the main result of the paper.
Theorem (Theorem 1 restated). Let A, B be non-bipartite loopless undirected graphs such that A → B. Then SDA does not solve PCSP(A, B).
Proof. Observe that, for A and B as in the statement of the theorem, there exist n, n′ ≥ 3 with n odd such that C_n → A and B → K_{n′}. (For example, we may choose n and n′ to be the odd girth of A and the chromatic number of B, respectively.) Let m = (n−1)/2, and let P̃ be the character table of the association scheme corresponding to O(C_n). By Proposition 24, there exists 0 < δ < 2 such that, up to a permutation of the rows, P̃ e_0 = 1 and P̃ e_1 = (2, z)^T for some vector z ∈ R^m all of whose entries have absolute value strictly smaller than δ. Without loss of generality, we can assume that δ is rational. Let t ∈ N be such that t ≥ 2n′/(2 − δ) and t/δ ∈ N, and let s = 2t/δ + t. Observe that s > 2t. We claim that SDA(G_{s,t}, C_n) = Yes. Since, as shown in Proposition 7, SDA is monotone with respect to the homomorphism preorder of the arguments, this would imply that SDA(G_{s,t}, A) = Yes. However, using Lovász's formula for the chromatic number of Kneser graphs [82], we find χ(G_{s,t}) = s − 2t + 2 = (2/δ − 1)t + 2 > n′. This means that G_{s,t} ̸→ K_{n′} and, hence, G_{s,t} ̸→ B. As a consequence, the truth of the claim would establish that SDA does not solve PCSP(A, B), thus concluding the proof of the theorem.
Let P be the character table of G_{s,t}, and recall that h denotes the vector (0, 1, ..., t) (which, as usual, we view as a column vector). Consider the matrices W and K, and let V = W K. We now show that V meets the conditions in Corollary 17 and, thus, that it is the orbital matrix of a balanced SDP-matrix. Recall that the diagonal orbitals of G_{s,t} and C_n are those having index 0, while the (unique) edge orbitals of G_{s,t} and C_n are those having index t and 1, respectively. Since v_{i,j} = 0 whenever i = 0, j ≠ 0 or i = t, j ≠ 1, the conditions (c_3) and (c_4) are satisfied. Observe that μ_{C_n} is the vector 2n·1 − n·e_0. Therefore, V μ_{C_n} = W K μ_{C_n} = W 1 = 1, so (c_2) holds, too. Theorem 18 yields the entries of P V P̃^T, and it follows that P V P̃^T ≥ 0, which means that (c_1) is met. Applying Corollary 17, we deduce that SDP(G_{s,t}, C_n) = Yes, and we let M be the corresponding SDP-matrix. The next step is to add AIP. For each x ∈ V(G_{s,t})², let ω^{(x)} be the orbital of G_{s,t} containing x, and choose an orbital ω̃^{(x)} of C_n satisfying v_{ω^{(x)} ω̃^{(x)}} ≠ 0. Letting f : O(C_n) → Z^{n×n} be the function from Proposition 26, we consider the (s choose t)n × (s choose t)n matrix N defined by N_x = f(ω̃^{(x)}) for each x (where N_x = (e_{x_1} ⊗ I_n)^T N (e_{x_2} ⊗ I_n) is the x-th block of N). We claim that N is an AIP-matrix for G_{s,t}, C_n. Note that, if x = (x, x) ∈ V(G_{s,t})², we have ω^{(x)} = ω_0 and, thus, ω̃^{(x)} = ω̃_0, which gives supp(N_x) = supp(f(ω̃_0)) ⊆ ω̃_0. Similarly, if x ∈ E(G_{s,t}), then ω^{(x)} = ω_t and, thus, ω̃^{(x)} = ω̃_1, which gives supp(N_x) = supp(f(ω̃_1)) ⊆ ω̃_1 = E(C_n). This yields the conditions (r_1) and (r_2). Moreover, for x = (x_1, x_2) ∈ V(G_{s,t})², we find that N_x 1_n = f(ω̃^{(x)}) 1_n which, by the properties of f, is constant over the orbitals of C_n; this gives (r_3). Similarly, using that f(ω̃^{(x)})^T 1_n is constant over the orbitals, we obtain (r_4). Finally, (r_5) follows by observing that 1_n^T f(ω̃^{(x)}) 1_n = 1_n^T e_0 = 1 for any x. As a consequence, N is a relaxation matrix; since its entries are integral, it is an AIP-matrix. For any x ∈ V(G_{s,t})², the x-th block of M satisfies Since v_{ω^{(x)} ω̃^{(x)}} ≠ 0, using that the orbitals of a graph are disjoint, we deduce that R̃_{ω̃^{(x)}} ◁ M_x. On the other hand, we have supp(N_x) ⊆ ω̃^{(x)}. Applying Proposition 10, we conclude that SDA(G_{s,t}, C_n) = Yes, as required.
We note that the SDP part of the integrality gap in Theorem 1 may be directly converted into Unique-Games approximation hardness of AGH through Raghavendra's framework [87]. Given two digraphs X, X′ and a real number 0 ≤ ϵ ≤ 1, an ϵ-homomorphism from X to X′ is a map f : V(X) → V(X′) that preserves at least a (1 − ϵ)-fraction of the edges of X. A robust algorithm for PCSP(A, B) is an algorithm that finds a g(ϵ)-homomorphism from X to B whenever the instance X is such that there exists an ϵ-homomorphism from X to A, where g is some monotone, nonnegative function satisfying g(ϵ) → 0 as ϵ → 0. As observed in [21], it follows from [87] that any PCSP admitting a polynomial-time robust algorithm is solved by SDP, assuming the Unique Games Conjecture (UGC) of [74]. Thus, Theorem 1 implies the following conditional hardness result for AGH.
Corollary 27. Let A, B be non-bipartite loopless undirected graphs such that A → B. Then, assuming the UGC and P ≠ NP, PCSP(A, B) does not admit a polynomial-time robust algorithm.
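The notion of ϵ-homomorphism is straightforward to operationalise; the sketch below (an illustrative encoding of ours) measures the fraction of violated edges of a map between two digraphs. A proper 3-colouring of C_5 is a 0-homomorphism to K_3, while any map to K_2 must violate at least one of the five undirected edges.

```python
def defect(X_edges, A_edges, f):
    """Fraction of edges of X not preserved by f; f is an
    eps-homomorphism from X to A iff defect(...) <= eps."""
    A = set(A_edges)
    return sum((f[x], f[y]) not in A for (x, y) in X_edges) / len(X_edges)

# C_5 as a symmetric digraph, and the complete graphs K_3, K_2.
C5 = [(x, (x + 1) % 5) for x in range(5)] + [((x + 1) % 5, x) for x in range(5)]
K3 = [(a, b) for a in range(3) for b in range(3) if a != b]
K2 = [(0, 1), (1, 0)]

three_col = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}   # a proper 3-colouring of C_5
two_col = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0}     # violates the edge between 4 and 0
```

Here `three_col` has defect 0, while `two_col` violates one of the five undirected edges, i.e., 2 of the 10 directed edges, so it is a 0.2-homomorphism from C_5 to K_2.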

Incomparability to the BA hierarchy
The BLP + AIP algorithm, whose name we abbreviate to BA in this paper, was introduced in [22] as a combination of two standard algorithmic techniques for CSPs: the basic linear programming relaxation BLP and the affine integer programming relaxation AIP (the same we use in the current work as the linear Diophantine part of SDA). Unlike SDA, this relaxation does not solve all bounded-width CSPs, as noted in [22]. Consequently, BA is strictly less powerful than SDA. It is possible to progressively strengthen the BA algorithm through the lift-and-project technique [80], which results in the so-called BA hierarchy [32]. The k-th level of the hierarchy, denoted by BA^k, corresponds to applying BA to a modified instance whose variables are sets of variables of the original instance of size up to k. Recently, [30] established a lower bound against this model, by showing that no constant level of the BA hierarchy solves the approximate graph colouring problem. Since constant levels of the BA hierarchy do solve all bounded-width CSPs, it is natural to investigate how the hierarchy compares to SDA. In particular, if some level of the hierarchy dominated SDA (in a sense that we will now make formal), the lower bound in [30] would immediately imply the same lower bound against SDA (though only for the approximate graph colouring problem). In this section, we establish that this is not the case: SDA (and, in fact, already SDP and a weaker version of the latter) is not dominated by any level of the BA hierarchy. As a consequence, the specialisation of Theorem 1 to approximate graph colouring does not follow as a corollary of [30].
Definition 28. Let D be the family of all (finite) digraphs. We define a test to be a function T : D² → {Yes, No}. We say that a test T is a polynomial-time test if, for each A ∈ D, there exists an algorithm Alg_A taking digraphs as inputs and returning values in {Yes, No}, such that (i) T(X, A) = Alg_A(X) for each X ∈ D, and (ii) Alg_A can be implemented in polynomial time in the size of the input. Also, we say that a test T is complete if T(X, A) = Yes for any X, A ∈ D such that X → A; i.e., a complete test has no false negatives. We define a partial order "⪯" on the set of tests: Given two tests T_1, T_2, we write T_1 ⪯ T_2 (and we say that T_2 dominates T_1) if, for any X, A ∈ D, T_2(X, A) = Yes implies T_1(X, A) = Yes. (For the explicit definitions of BLP, BA, and BA^k, we refer the reader to [8], [22], and [32], respectively.)
All relaxations mentioned in this work are complete tests. For such tests, the fact that T_1 ⪯ T_2 means that T_2 is at least as powerful as T_1, in that it has fewer false positives. For example, it directly follows from the definitions that SDA dominates both SDP and AIP. Moreover, since any solution to SDP can be turned into a solution to BLP (by taking the norms of the vector variables λ_{x,a}), it follows that BLP ⪯ SDP and BA = BA^1 ⪯ SDA. If, for some k ∈ N, we had SDA ⪯ BA^k, the results in [30] would directly imply that approximate graph colouring is not solved by SDA and, moreover, that the same fooling instances produced in [30] could be used for fooling SDA. In Subsection 8.2, we show that this is not the case, as not even a weaker, polynomial-time version of SDP (the test SDP_ϵ described in Subsection 8.1 below) is dominated by BA^k.

A tale of two polytopes
A few years ago, O'Donnell noted that polynomial-time solvability of certain semidefinite programming relaxations, assumed in several papers in the context of the Sum-of-Squares proof system, is not known in general [85]. In fact, it is a well-known open question in optimisation theory whether all semidefinite programs can be solved to near-optimality in polynomial time [45,88]. To the best of the authors' knowledge, details of how the semidefinite program SDP can be efficiently solved to near-optimality (if at all) have not been made explicit in the literature. This motivates us to give a formal argument showing that this is indeed possible. As we shall see, the issue requires some unexpected matrix-theoretic considerations.
It is well known that a polynomial-time algorithm (in the Turing model of computation) for semidefinite programming based on the ellipsoid method exists [54,55,96] under the assumption that the feasible region contains a "large enough" inner ball and is contained in a "small enough" outer ball, a requirement known as the Slater condition. In this subsection, we show that the semidefinite program SDP can be solved to near-optimality in polynomial time, by reformulating it as an optimisation problem meeting the Slater condition.
Recall that ⟨M, N⟩_F = Tr(M^T N) denotes the Frobenius inner product of matrices, and let ∥M∥_F = ⟨M, M⟩_F^{1/2} denote the corresponding norm. Given a set M of square matrices of equal size, a matrix M ∈ M, and a real number r > 0, we consider the ball B_M(M; r) = {N ∈ M : ∥N − M∥_F < r}. Throughout this and the next subsections, we shall denote the cone of positive semidefinite matrices by P. For ℓ, m ∈ N, let C, A_1, ..., A_m be rational ℓ × ℓ matrices, and let b_1, ..., b_m be rational numbers. We denote by V the polytope containing all real symmetric ℓ × ℓ matrices satisfying ⟨A_i, M⟩_F = b_i for each i ∈ [m], consider the program of minimising ⟨C, M⟩_F over V ∩ P, (16) and let ν be the optimal value of the program. Let also V_a denote the affine hull of V, i.e., the intersection of all affine spaces containing V (where a set S of ℓ × ℓ real matrices is an affine space if λM + (1 − λ)N ∈ S whenever M, N ∈ S and λ ∈ R). For rationals r, R > 0, we say that a matrix M_0 ∈ V ∩ P is an (r, R)-Slater point for (16) if B_{V_a}(M_0; r) ⊆ V ∩ P ⊆ B_{V_a}(M_0; R). The next result from [54] (see also the formulation in [37]) establishes that the ellipsoid method can be used to solve a semidefinite program up to arbitrary precision in polynomial time provided that there exists a Slater point.
Theorem 29 ([54]). Let M_0 be an (r, R)-Slater point for the semidefinite program (16). Then for any rational ϵ > 0 one can find a rational matrix M* ∈ V ∩ P such that ⟨C, M*⟩_F − ν ≤ ϵ in time polynomial in ℓ, m, log(R/r), log(1/ϵ), and the bit-complexity of the input data C, A_i, b_i, and M_0.
Our goal is then to reformulate the system (SDP) as a program in the form (16), and to find for it a suitable Slater point. First of all, observe that we cannot simply introduce a dummy objective function to be minimised over the feasible set of (SDP) (i.e., the set of solutions to (SDP_1)-(SDP_4)), as this set can be empty, in which case it clearly contains no Slater points. The natural choice is then to relax the condition (SDP_3), which requires that the solution should be compatible with the edge sets of X and A, by turning it into an objective function to be minimised: Given a pn × pn matrix M, we let (Notice that we are working with the matrix formulation of (SDP), which is compatible with the standard form (16).) This is sufficient to make the feasible set nonempty, as is witnessed, for example, by the positive semidefinite matrix (1/n) J_p ⊗ I_n. We now need to declare which polytope takes the role of V in the standard form (16). This is an important choice: The definition of Slater points takes into account not only the feasible set V ∩ P of a program, but also the polytope V involved in its formulation. Hence, it might happen that the Slater condition can be enforced by modifying the formulation of a program in a way that reduces the dimension of the polytope V while still preserving the feasible set V ∩ P.
In the current setting, one natural candidate is the polytope described by taking the constraints of (SDP) and discarding (SDP_3) and positive semidefiniteness; i.e., in the matrix formulation, the polytope W containing all pn × pn symmetric entrywise-nonnegative matrices satisfying the conditions (r_1) ("diagonal blocks are diagonal") and (r_6) ("the entries in each block sum up to 1"). Another natural choice consists in looking at the conditions defining an SDP-matrix, and discarding the condition (r_2) and positive semidefiniteness: We let U denote the polytope of pn × pn symmetric entrywise-nonnegative matrices satisfying the conditions (r_1), (r_3), (r_4), and (r_5). The two choices result in two different programs: For this reason, we define the test SDP_ϵ using the formulation (SDP″). More precisely, for ϵ > 0, SDP_ϵ is described as follows:
• Take two digraphs X, A as input;
• run the ellipsoid method [54] on the program (SDP″) with precision ϵ, obtaining an output M* ∈ U ∩ P;
• return Yes if f(M*) ≤ ϵ, and No otherwise.
We thus obtain the following result.
Proof. It follows from Proposition 30 that the program (SDP$''$) has a $(\frac{1}{n^2}, 2p^2 + 1)$-Slater point. Using Theorem 29, we deduce that we can find a near-optimal solution to (SDP$''$) up to any given precision $\epsilon > 0$ in time polynomial in the sizes of $X$ and $A$. In particular, if we fix $A$, SDP$_\epsilon$ can be implemented in time polynomial in the size of $X$, and it is thus a polynomial-time test, as per Definition 28. Moreover, if SDP($X$, $A$) = Yes, the optimal value of (SDP$''$) is 0. As a consequence, the solution $M^*$ found by the ellipsoid method satisfies $f(M^*) \le \epsilon$ (cf. Theorem 29), which means that SDP$_\epsilon$($X$, $A$) = Yes. It follows that SDP$_\epsilon \preceq$ SDP. In particular, since SDP is complete, this implies that SDP$_\epsilon$ is also complete.
In Subsection 8.2, this weaker, polynomial-time version of SDP will prove to be strong enough to correctly classify cliques and, therefore, not to be dominated by the BA hierarchy. We now give a proof of Proposition 30, which finds a Slater point for the program (SDP$''$). The following simple description of the association schemes corresponding to cliques will be useful.
Remark 33. It is straightforward to check that, for any $n \ge 2$, the clique $K_n$ is generously transitive, and the association scheme corresponding to $O(K_n)$ consists of the two matrices $I_n$ and $J_n - I_n$. Either by a direct computation or noting that $K_n = G_{n,1}$, we find that the character table of $K_n$ is the matrix
$$\begin{pmatrix} 1 & n-1 \\ 1 & -1 \end{pmatrix}.$$

Proof of Proposition 30. Define the $pn \times pn$ matrix
$$M_0 = I_p \otimes \tfrac{1}{n} I_n + (J_p - I_p) \otimes \tfrac{1}{n^2} J_n,$$
and notice that $M_0 \in U$. Letting $\omega_0, \omega_1$ denote the diagonal and edge orbitals of $K_p$ and $\tilde\omega_0, \tilde\omega_1$ denote the diagonal and edge orbitals of $K_n$, we see that $M_0$ may be written in the form (3) with $v_{\omega_0 \tilde\omega_0} = \frac{1}{n}$, $v_{\omega_0 \tilde\omega_1} = 0$, and $v_{\omega_1 \tilde\omega_0} = v_{\omega_1 \tilde\omega_1} = \frac{1}{n^2}$. Let $P$ and $\tilde P$ be the character tables of $K_p$ and $K_n$, respectively, and recall their expressions from Remark 33. Using Theorem 16, we deduce that the spectrum of $M_0$ consists of the entries of the matrix
$$P V \tilde P^T = \begin{pmatrix} \frac{p}{n} & \frac{1}{n} \\ 0 & \frac{1}{n} \end{pmatrix}. \qquad (18)$$
In particular, $M_0 \in P$.
In order to show that $U \cap P \subseteq B_{U_a}(M_0; 2p^2 + 1)$, notice that any matrix $N \in U \cap P$ satisfies $\|N\|_F \le p^2$, whence $\|N - M_0\|_F \le \|N\|_F + \|M_0\|_F \le 2p^2 < 2p^2 + 1$, as needed.
We now need to prove that $B_{U_a}(M_0; \frac{1}{n^2}) \subseteq U \cap P$. Let $Z = \mathrm{span}(\{(e_x - e_y) \otimes 1_n : x, y \in V(K_p)\})$, and consider two matrices $Q_1$ and $Q_2$ whose columns form orthonormal bases for $Z$ and $Z^\perp$, respectively, where $Z^\perp$ denotes the orthogonal complement of $Z$ in $\mathbb{R}^{pn}$. Let $H$ be the vector space of $pn \times pn$ symmetric matrices satisfying (r$_3$). Since $H$ is in particular an affine space and $U \subseteq H$, we have $U_a \subseteq H$. We claim that $Z = \ker(M_0)$. The inclusion $Z \subseteq \ker(M_0)$ is clear from the fact that $M_0 \in H$. Take $w \in Z^\perp$, and notice that this implies that there exists a constant $c$ for which $(e_x \otimes 1_n)^T w = c$ for every $x \in V(K_p)$. If $w \in \ker(M_0)$, we have $0 = (e_x \otimes 1_n)^T M_0 w = \frac{p}{n}\, c$, whence we find $c = 0$ and, thus, $w = 0$. It follows that $\ker(M_0) \cap Z^\perp = \{0\}$, which yields the claimed identity $Z = \ker(M_0)$.

Take $N \in B_{U_a}(M_0; \frac{1}{n^2})$; we need to show that $N \in U \cap P$. Observe that $N \in H$, so the restriction $\overline{N}$ of $N$ to $Z^\perp$ is well defined. We claim that $\overline{N}$ is a positive definite matrix. By (19), this would imply that $Q^T N Q \in P$ and, thus, that $N \in P$. Since $H$ is a vector space, we have $N - M_0 \in H$; moreover, $\overline{N} - \overline{M_0} = \overline{N - M_0}$. Let $q = \dim(Z) = \dim(\ker(M_0))$, and take a vector $v \in \mathbb{R}^{pn - q}$ having unit norm. Order the eigenvalues of $M_0$ as $\lambda_1(M_0) \le \cdots \le \lambda_{pn}(M_0)$. From (18), we see that $0 = \lambda_q(M_0) < \lambda_{q+1}(M_0) = \frac{1}{n}$. Using the Courant–Fischer variational characterisation of the spectrum of symmetric matrices (see [60, §8.2]), we deduce that $v^T \overline{M_0}\, v \ge \lambda_{q+1}(M_0) = \frac{1}{n}$. Moreover, letting $\|\cdot\|_2$ denote the spectral matrix norm, we have
$$|v^T\, \overline{N - M_0}\; v| \;\le\; \|\overline{N - M_0}\|_2 \;\le\; \|Q_2\|_2^2\, \|N - M_0\|_2 \;=\; \|N - M_0\|_2 \;\le\; \|N - M_0\|_F \;<\; \frac{1}{n^2}.$$
In the expression above, the first inequality comes from the definition of the spectral norm and the Cauchy–Schwarz inequality, the second inequality is due to the submultiplicativity of the spectral norm, the first equality follows from the fact that $\|Q_2\|_2 = 1$ since the columns of $Q_2$ are orthonormal, the third inequality is a standard property of matrix norms (see [62, Thm. 5.6.34]), and the fourth inequality holds since $N \in B_{U_a}(M_0; \frac{1}{n^2})$. It follows that $v^T \overline{N}\, v = v^T \overline{M_0}\, v + v^T\, \overline{N - M_0}\; v \ge \frac{1}{n} - \frac{1}{n^2} > 0$, thus proving the claim. We are left to show that $N \in U$. It suffices to prove that $N$ is entrywise nonnegative, as all other conditions describing $U$ are implied by the fact that $N \in U_a$. Take $x, y \in V(X)$ and $a, b \in V(A)$. If $x = y$ and $a \ne b$, then $(e_x \otimes e_a)^T N (e_y \otimes e_b) = 0$ by (r$_1$). Otherwise, noticing that $\|M_0 - N\|_F^2$ is the sum of the squares of the entries of $M_0 - N$, we find $|(e_x \otimes e_a)^T (M_0 - N) (e_y \otimes e_b)| \le \|M_0 - N\|_F < \frac{1}{n^2}$. Noting that $(e_x \otimes e_a)^T M_0 (e_y \otimes e_b) \ge \frac{1}{n^2}$, it follows that $(e_x \otimes e_a)^T N (e_y \otimes e_b) > 0$, which establishes that $N \ge 0$ and concludes the proof of the proposition.
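As an exact-arithmetic sanity check (ours; it assumes the Slater point has diagonal blocks $\frac{1}{n} I_n$ and off-diagonal blocks $\frac{1}{n^2} J_n$), one can verify for sample sizes that $\ker(M_0)$ has dimension exactly $p - 1 = \dim(Z)$ and contains the vectors $(e_x - e_y) \otimes 1_n$:

```python
from fractions import Fraction

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def rank(M):
    """Rank over the rationals via Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [u - factor * v for u, v in zip(M[i], M[r])]
        r += 1
    return r

p, n = 4, 3  # hypothetical sample sizes
I = lambda m: [[Fraction(i == j) for j in range(m)] for i in range(m)]
J = lambda m: [[Fraction(1)] * m for _ in range(m)]
# Assumed form of the Slater point: diagonal blocks (1/n) I_n,
# off-diagonal blocks (1/n^2) J_n.
A1 = kron(I(p), [[v / n for v in row] for row in I(n)])
JmI = [[Fraction(1) - Fraction(i == j) for j in range(p)] for i in range(p)]
A2 = kron(JmI, [[v / n**2 for v in row] for row in J(n)])
M0 = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]

# ker(M0) should be exactly Z = span{(e_x - e_y) (x) 1_n}, of dimension p - 1.
print(rank(M0) == p * n - (p - 1))
z = [Fraction((i // n == 0) - (i // n == 1)) for i in range(p * n)]  # (e_0 - e_1) (x) 1_n
print(all(sum(m * v for m, v in zip(row, z)) == 0 for row in M0))
```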
Finally, we prove Proposition 31, which implies that there exist no Slater points for the program (SDP$'$).
Proof of Proposition 31. Given $M_0 \in W \cap P$ and $r > 0$, choose two distinct vertices $x, y \in V(X)$ and a vertex $a \in V(A)$. Consider the matrix
$$H = \tfrac{1}{n}(J_p - e_x e_y^T - e_y e_x^T) \otimes I_n + (e_x e_y^T + e_y e_x^T) \otimes e_a e_a^T$$
and the number $s = \min(\frac{r}{2p^2+1}, 1)$, and define $N = (1 - s)M_0 + sH$. It is straightforward to check that $H \in W$; since $W$ is a convex set, it follows that $N \in W \subseteq W_a$. Observe that each matrix $M \in W$ satisfies $\|M\|_F \le p^2$ (because of (r$_6$) and the entrywise nonnegativity of the entries). Therefore,
$$\|N - M_0\|_F = s\,\|H - M_0\|_F \le s\,(\|H\|_F + \|M_0\|_F) \le \tfrac{2p^2 r}{2p^2 + 1} < r,$$
thus showing that $N \in B_{W_a}(M_0; r)$. We now prove that $N \not\in P$. Let the space $Z$ and the matrices $Q_1$, $Q_2$, $Q$ be defined in the same way as in the proof of Proposition 30. Since $U \cap P = W \cap P$, we have that $M_0 \in U$ and, thus, $w^T M_0 w' = 0$ for each $w, w' \in Z$. Noting that $H \in W$ and, hence, that it satisfies (r$_6$), we see that $[(e_{x'} - e_{y'}) \otimes 1_n]^T H [(e_{x''} - e_{y''}) \otimes 1_n] = 0$ for each $x', x'', y', y'' \in V(X)$. Hence, $w^T H w' = 0$ for each $w, w' \in Z$, which implies that $Q_1^T N Q_1 = 0$. It is well known that, if a diagonal entry of a positive semidefinite matrix is zero, the corresponding row and column are zero (see [62, Obs. 7.1.10]). Looking at (20), we deduce that $Q^T N Q \not\in P$, thus yielding $N \not\in P$, as needed.
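The claim that $H \in W$ can be verified mechanically. The sketch below (our illustration, with hypothetical sample values of $p$, $n$, $x$, $y$, $a$) checks symmetry and entrywise nonnegativity, condition (r$_1$), and condition (r$_6$):

```python
from fractions import Fraction
from itertools import product

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

p, n, x, y, a = 4, 3, 0, 1, 0  # hypothetical sample parameters, x != y
one = lambda cond: Fraction(bool(cond))
J_p = [[Fraction(1)] * p for _ in range(p)]
E = lambda m, i, j: [[one(r == i and c == j) for c in range(m)] for r in range(m)]
I_n = [[one(i == j) for j in range(n)] for i in range(n)]

# H = (1/n)(J_p - e_x e_y^T - e_y e_x^T) (x) I_n + (e_x e_y^T + e_y e_x^T) (x) e_a e_a^T
left = [[(J_p[i][j] - E(p, x, y)[i][j] - E(p, y, x)[i][j]) / n
         for j in range(p)] for i in range(p)]
sym = [[E(p, x, y)[i][j] + E(p, y, x)[i][j] for j in range(p)] for i in range(p)]
H = [[u + v for u, v in zip(r1, r2)]
     for r1, r2 in zip(kron(left, I_n), kron(sym, E(n, a, a)))]

# Membership in W: symmetric and entrywise nonnegative, diagonal blocks
# diagonal (r_1), and every n x n block sums to 1 (r_6).
print(all(H[i][j] == H[j][i] >= 0 for i, j in product(range(p * n), repeat=2)))
print(all(H[u * n + i][u * n + j] == 0
          for u in range(p) for i in range(n) for j in range(n) if i != j))
print(all(sum(H[u * n + i][v * n + j] for i in range(n) for j in range(n)) == 1
          for u, v in product(range(p), repeat=2)))
```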

SDP$_\epsilon$ vs. BA$^k$
The goal of this subsection is to establish the following result, which states that the test SDP$_\epsilon$ is not dominated by the BA hierarchy for $\epsilon$ sufficiently small.

Theorem 34. For each $k \in \mathbb{N}$ there exists $\epsilon > 0$ such that SDP$_\epsilon \not\preceq$ BA$^k$.
Since SDP$_\epsilon \preceq$ SDP $\preceq$ SDA, it will follow that SDP $\not\preceq$ BA$^k$ and SDA $\not\preceq$ BA$^k$. In order to prove Theorem 34, we need to exhibit a class of digraphs that are correctly classified by SDP$_\epsilon$ but not by BA$^k$. It turns out that cliques can serve as separating instances. Indeed, a result from [30] implies that the BA hierarchy is not sound on cliques. In contrast, SDP$_\epsilon$ is able to correctly classify cliques provided that $\epsilon$ is small enough, as we shall prove next by leveraging the framework of association schemes.
We start by adapting the machinery developed in the previous sections to SDP$_\epsilon$, thus obtaining the following result (akin to Corollary 17).
Proposition 35. Let $X$ and $A$ be generously transitive digraphs, let $P$ and $\tilde P$ be the character tables of $X$ and $A$, respectively, let $\epsilon > 0$, and suppose that SDP$_\epsilon$($X$, $A$) = Yes. Then there exists a real entrywise-nonnegative $|O(X)| \times |O(A)|$ matrix $V$ such that
(c$_1$) $P V \tilde P^T \ge 0$;
(c$_2$) $V \mu_A = \mathbf{1}$;
(c$_3$) $v_{\omega \tilde\omega} = 0$ if $\omega$ is the diagonal orbital of $X$ and $\tilde\omega$ is a non-diagonal orbital of $A$;
(c$_4'$) $v_{\omega \tilde\omega} \le \epsilon$ if $\omega$ is an edge orbital of $X$ and $\tilde\omega$ is a non-edge orbital of $A$.
Proof. Let $M \in U \cap P$ be a matrix witnessing that SDP$_\epsilon$($X$, $A$) = Yes, and observe that $f(M) \le \epsilon$ by the definition of SDP$_\epsilon$ (where $f(M)$ is defined in (17)). Given two automorphisms $\xi \in \mathrm{Aut}(X)$ and $\alpha \in \mathrm{Aut}(A)$, consider the matrix $M^{(\xi,\alpha)}$. The same argument as in the proof of Proposition 6 (see Appendix A) shows that $M^{(\xi,\alpha)} \in U \cap P$ and $f(M^{(\xi,\alpha)}) = f(M)$. Hence, a minor modification of the proof of Proposition 9 implies that the matrix $\overline{M}$ obtained by averaging the matrices $M^{(\xi,\alpha)}$ over all pairs of automorphisms is a balanced matrix for $X$, $A$ and satisfies $\overline{M} \in U \cap P$ and $f(\overline{M}) = f(M)$. In particular, this means that $(e_x \otimes e_a)^T \overline{M} (e_y \otimes e_b) \le \epsilon$ for each $(x, y) \in E(X)$, $(a, b) \in V(A)^2 \setminus E(A)$. We can then express $\overline{M}$ in the basis $R$ as per Proposition 11, and conclude the proof by making use of Theorem 16 in the same way as in the proof of Corollary 17.
We point out that the necessary condition for SDP$_\epsilon$ acceptance expressed in Proposition 35 is not sufficient, unlike the similar condition for SDP in Corollary 17. The reason is that the existence of a feasible solution to (SDP$''$) with objective-function value at most $\epsilon$ does not guarantee that SDP$_\epsilon$($X$, $A$) = Yes. Indeed, the only piece of information we would be able to derive in this case is that the optimal value $\nu$ of the program (SDP$''$) satisfies $\nu \le \epsilon$. Theorem 29 would then guarantee that the ellipsoid method finds a feasible solution $M^*$ with $f(M^*) \le \epsilon + \nu \le 2\epsilon$, while to ensure that SDP$_\epsilon$($X$, $A$) = Yes we need that $f(M^*) \le \epsilon$.
We now show that SDP$_\epsilon$ is sound on cliques for $\epsilon$ small enough, by using the description of the corresponding association schemes. Of course, it follows that the same holds for SDP and SDA, since SDP$_\epsilon \preceq$ SDP $\preceq$ SDA. On the other hand, the next result from [30], used to rule out solvability of approximate graph colouring through the BA hierarchy, shows that BA$^k$($X$, $A$) always accepts if $A$ is a large-enough clique.
Conversely, let $M$ and $N$ be an SDP-matrix and an AIP-matrix for $X$, $A$, respectively. As was shown above, the vectors $\lambda_{x,a}$ obtained through a Cholesky decomposition of $M$ give a solution to SDP($X$, $A$). Moreover, we define $\mu_{x,a} = (e_x \otimes e_a)^T N (e_x \otimes e_a)$ for $x \in V(X)$ and $a \in V(A)$, and $\mu_{(x,y),(a,b)} = (e_x \otimes e_a)^T N (e_y \otimes e_b)$ for $(x, y) \in E(X)$ and $(a, b) \in E(A)$. Reversing the argument above, it easily follows from the conditions defining an AIP-matrix that $\mu$ satisfies (AIP$_1$) and (AIP$_2$), while the requirement (ref) follows from the fact that $N \bullet ((I_p + A(X)) \otimes J_n) \lhd M$.
Recall that, given two finite sets $R$ and $S$ and a function $f : R \to S$, $Q_f$ denotes the $|R| \times |S|$ matrix whose $(r, s)$-th entry is 1 if $f(r) = s$, and 0 otherwise.

Proposition (Proposition 6 restated). Let $X$, $X'$, $A$, $A'$ be digraphs, let $f : X' \to X$ and $g : A \to A'$ be homomorphisms, and let $M$ be a relaxation matrix for $X$, $A$. Then $M^{(f,g)}$ is a relaxation matrix for $X'$, $A'$. Furthermore, if $M$ is an SDP-matrix (resp. AIP-matrix) for $X$, $A$, then $M^{(f,g)}$ is an SDP-matrix (resp. AIP-matrix) for $X'$, $A'$.

Letting $z = (e_x - e_y) \otimes 1_n$, observe that
$$Hz = \tfrac{1}{n}(J_p - e_x e_y^T - e_y e_x^T)(e_x - e_y) \otimes (I_n 1_n) + (e_x e_y^T + e_y e_x^T)(e_x - e_y) \otimes (e_a e_a^T 1_n) = \tfrac{1}{n}(e_x - e_y) \otimes 1_n + (e_y - e_x) \otimes e_a,$$
which does not belong to $Z$ (as we are assuming $x \ne y$ and $n \ge 2$). It follows that $Q_2^T H z \ne 0$; indeed, otherwise, we would have $Hz \in Z$, a contradiction.
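The displayed identity for $Hz$ can be confirmed by exact computation. The following sketch (ours, with hypothetical sample parameters) builds $H$ and checks that $Hz = \frac{1}{n}(e_x - e_y) \otimes 1_n + (e_y - e_x) \otimes e_a$:

```python
from fractions import Fraction

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

p, n, x, y, a = 4, 3, 0, 1, 0  # hypothetical sample parameters, x != y, n >= 2
one = lambda cond: Fraction(bool(cond))
J_p = [[Fraction(1)] * p for _ in range(p)]
E = lambda m, i, j: [[one(r == i and c == j) for c in range(m)] for r in range(m)]
I_n = [[one(i == j) for j in range(n)] for i in range(n)]
left = [[(J_p[i][j] - E(p, x, y)[i][j] - E(p, y, x)[i][j]) / n
         for j in range(p)] for i in range(p)]
sym = [[E(p, x, y)[i][j] + E(p, y, x)[i][j] for j in range(p)] for i in range(p)]
H = [[u + v for u, v in zip(r1, r2)]
     for r1, r2 in zip(kron(left, I_n), kron(sym, E(n, a, a)))]

z = [one(i // n == x) - one(i // n == y) for i in range(p * n)]  # (e_x - e_y) (x) 1_n
Hz = [sum(h * v for h, v in zip(row, z)) for row in H]
# Claimed value: (1/n)(e_x - e_y) (x) 1_n + (e_y - e_x) (x) e_a
claim = [zi / n + (one(i // n == y) - one(i // n == x)) * one(i % n == a)
         for i, zi in enumerate(z)]
print(Hz == claim)
```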

Proposition 36.
Let $p, n \ge 2$ and $0 < \epsilon < \frac{1}{n^3}$. Then SDP$_\epsilon$($K_p$, $K_n$) = Yes if and only if $p \le n$.

Proof. If $p \le n$, then $K_p \to K_n$; since SDP$_\epsilon$ is a complete test by Theorem 32, it follows that SDP$_\epsilon$($K_p$, $K_n$) = Yes. Conversely, suppose that SDP$_\epsilon$($K_p$, $K_n$) = Yes. Recall the description of the association schemes for cliques in Remark 33, and observe that it implies $\mu_{K_n} = \begin{pmatrix} n \\ n^2 - n \end{pmatrix}$. Let $V = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ be the $2 \times 2$ matrix whose existence is guaranteed by Proposition 35, and notice that the requirements (c$_3$) and (c$_4'$) yield $b = 0$ and $c \le \epsilon$, respectively. As for (c$_2$), we find
$$\begin{pmatrix} 1 \\ 1 \end{pmatrix} = V \mu_{K_n} = \begin{pmatrix} an \\ cn + d(n^2 - n) \end{pmatrix},$$
whence it follows that $a = \frac{1}{n}$, $c + d(n-1) = \frac{1}{n}$, and $c - d = \frac{n^2 c - 1}{n^2 - n}$. Letting $P$ and $\tilde P$ denote the character tables of $K_p$ and $K_n$, respectively, we obtain that the $(1,2)$-th entry of $P V \tilde P^T$ equals $\frac{1}{n} + (p-1)(c - d) = \frac{n - p + n^2 c (p-1)}{n^2 - n}$. The condition (c$_1$) implies in particular that this entry is nonnegative. Combining this with the fact that $c \le \epsilon$ and the assumption that $\epsilon < \frac{1}{n^3}$ yields
$$0 \le n - p + n^2 c (p-1) < n - p + \frac{p-1}{n} = \frac{n-1}{n}\,(n + 1 - p),$$
which implies that $p \le n$, as needed.
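The arithmetic in the proof can be double-checked numerically. In the sketch below (ours; the helper `entry12` and the sample values are not from the paper), the $(1,2)$-th entry of $P V \tilde P^T$ is nonnegative exactly for $p \le n$, for a sample $c$ below the $\frac{1}{n^3}$ threshold:

```python
from fractions import Fraction

def entry12(p, n, c):
    """(1,2)-th entry of P V P~^T, where V = [[1/n, 0], [c, d]] and
    d is determined by c + d(n - 1) = 1/n (condition (c_2))."""
    d = (Fraction(1, n) - c) / (n - 1)
    P  = [[1, p - 1], [1, -1]]   # character table of K_p
    Pt = [[1, n - 1], [1, -1]]   # character table of K_n
    V  = [[Fraction(1, n), Fraction(0)], [c, d]]
    PV = [[sum(P[i][k] * V[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    # (1,2)-th entry = (first row of PV) . (second row of Pt)
    return sum(PV[0][k] * Pt[1][k] for k in range(2))

n = 4
c = Fraction(1, n**3 + 1)  # a sample value with 0 <= c < 1/n^3
# Nonnegativity of the entry singles out exactly the cliques K_p with p <= n:
print([p for p in range(2, 9) if entry12(p, n, c) >= 0])  # → [2, 3, 4]
```

This matches the statement of Proposition 36: for $c$ below the threshold, the necessary condition (c$_1$) rules out every $p > n$.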