Symmetric Exponential Time Requires Near-Maximum Circuit Size

We show that there is a language in S2E/1 (symmetric exponential time with one bit of advice) with circuit complexity at least 2^n/n. In particular, the above also implies the same near-maximum circuit lower bounds for the classes Σ2E, (Σ2E ∩ Π2E)/1, and ZPE^NP/1. Previously, only "half-exponential" circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was Δ3E = E^{Σ2P} (Miltersen, Vinodchandran, and Watanabe, COCOON'99). Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an NP oracle and one bit of advice (FZPP^NP/1) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic FZPP^NP/1 constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and K^poly-random strings with nearly optimal parameters. Our proofs relativize. The two main technical ingredients are (1) Korten's P^NP reduction from the range avoidance problem to constructing hard truth tables (FOCS'21), which was in turn inspired by a result of Jeřábek on provability in bounded arithmetic (Ann. Pure Appl. Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS'23).


INTRODUCTION
Proving lower bounds against non-uniform computation (i.e., circuit lower bounds) is one of the most important challenges in theoretical computer science. From Shannon's counting argument [18,48], we know that almost all n-bit Boolean functions have near-maximum (2^n/n) circuit complexity. Therefore, the task of proving circuit lower bounds is simply to pinpoint one such hard function. More formally, one fundamental question is: What is the smallest complexity class that contains a language of exponential (2^{Ω(n)}) circuit complexity?

Compared with super-polynomial lower bounds, exponential lower bounds are interesting in their own right for the following reasons. First, an exponential lower bound would make Shannon's argument fully constructive. Second, exponential lower bounds have more applications than super-polynomial lower bounds: for example, if one can show that E has no 2^{Ω(n)}-size circuits, then we would have prP = prBPP [28,43], while super-polynomial lower bounds such as EXP ⊄ P/poly only imply sub-exponential-time derandomization of prBPP.

Unfortunately, despite its importance, our knowledge about exponential lower bounds is quite limited. Kannan [31] showed that there is a function in Σ3E ∩ Π3E that requires maximum circuit complexity; the complexity of the hard function was later improved to Δ3E = E^{Σ2P} by Miltersen, Vinodchandran, and Watanabe [42], via a simple binary search argument. This is essentially all we know regarding exponential circuit lower bounds.
We remark that Kannan [31, Theorem 4] claimed that Σ2E ∩ Π2E requires exponential circuit complexity, but [42] pointed out a gap in Kannan's proof, and suggested that exponential lower bounds for Σ2E ∩ Π2E were "reopened and considered an open problem." Recently, Vyas and Williams [51] emphasized our lack of knowledge regarding the circuit complexity of Σ2EXP, even with respect to relativizing proof techniques. In particular, the following question has been open for at least 20 years (indeed, if we count from [31], it would be at least 40 years):

Open Problem 1.1. Can we prove that Σ2EXP ⊄ SIZE[2^{εn}] for some absolute constant ε > 0, or at least show a relativization barrier for proving such a lower bound?
Unfortunately, all these works fall short of proving exponential lower bounds: all of their proofs go through certain Karp–Lipton collapses [32], and such a proof strategy runs into a so-called "half-exponential barrier" that prevents us from getting exponential lower bounds. See subsection 5.1 for a detailed discussion.

New Near-Maximum Circuit Lower Bounds
In this work, we overcome the half-exponential barrier mentioned above and resolve Open Problem 1.1, by showing that both Σ2E and (Σ2E ∩ Π2E)/1 require near-maximum (2^n/n) circuit complexity (Theorem 2.1). Moreover, our proof indeed relativizes: the lower bounds hold in every relativized world.
Up to one bit of advice, we finally provide a proof of Kannan's original claim in [31, Theorem 4]. Moreover, with some more work, we extend our lower bounds to the smaller complexity class S2E/1, again with a relativizing proof: this lower bound, too, holds in every relativized world.
The symmetric time class S2E. S2E can be seen as a "randomized" version of E^NP, since it is sandwiched between E^NP and ZPE^NP: it is easy to show that E^NP ⊆ S2E [45], and it is also known that S2E ⊆ ZPE^NP [8]. We also note that under plausible derandomization assumptions (e.g., E^NP requires 2^{Ω(n)}-size SAT-oracle circuits), all three classes simply collapse to E^NP [34].
Hence, our results also imply a near-maximum circuit lower bound for the class ZPE^NP/1 ⊆ (Σ2E ∩ Π2E)/1. This vastly improves the previous lower bound for this class, and again the result holds in every relativized world.
Our circuit lower bounds go through the range avoidance problem (Avoid): given a circuit C : {0,1}^n → {0,1}^{n+1}, find a string outside its range (such a string always exists, by the pigeonhole principle). There is a trivial FZPP^NP algorithm solving Avoid: randomly generate strings y ∈ {0,1}^{n+1} and output the first one that is outside the range of C (note that we need an NP oracle to verify whether y ∉ Range(C)). The class APEPP (Abundant Polynomial Empty Pigeonhole Principle) [33] is the class of total search problems reducible to Avoid.
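As a toy illustration of the trivial algorithm (with a brute-force range check standing in for the NP oracle, and strings encoded as ints), one may sketch:

```python
import random

def in_range(C, y, n):
    """Brute-force stand-in for the NP oracle: is y = C(x) for some n-bit x?"""
    return any(C(x) == y for x in range(2 ** n))

def trivial_avoid(C, n, trials=100):
    """Sample random (n+1)-bit strings until one outside Range(C) is found.

    Since C maps n bits to n+1 bits, at least half of {0,1}^(n+1) lies
    outside its range, so each trial succeeds with probability >= 1/2.
    Returning None corresponds to the zero-error output "fail".
    """
    for _ in range(trials):
        y = random.randrange(2 ** (n + 1))
        if not in_range(C, y, n):
            return y
    return None
```

For example, with n = 4 and C(x) = 2x (whose range is exactly the even numbers), any odd 5-bit string witnesses non-membership, and a random sample finds one almost immediately.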
As demonstrated by Korten [36, Section 3], APEPP captures the complexity of explicit construction problems whose solutions are guaranteed to exist by the probabilistic method (more precisely, by the dual weak pigeonhole principle [29,37]), in the sense that constructing such objects reduces to the range avoidance problem. This includes many important objects in mathematics and theoretical computer science, including Ramsey graphs [16], rigid matrices [19,22,49], two-source extractors [11,38], linear codes [22], hard truth tables [36], and strings with maximum time-bounded Kolmogorov complexity (i.e., K^poly-random strings) [44]. Hence, derandomizing the trivial FZPP^NP algorithm for Avoid would imply explicit constructions for all of these important objects.
Our results: new pseudodeterministic algorithms for Avoid. We show that, unconditionally, the trivial FZPP^NP algorithm for Avoid can be made pseudodeterministic on infinitely many input lengths. A pseudodeterministic algorithm [20] is a randomized algorithm that outputs the same canonical answer on most computational paths. In particular, we have:

Theorem 2.4. For every constant k ≥ 1, there is a randomized algorithm A with an NP oracle such that the following holds for infinitely many integers n. For every circuit C : {0,1}^n → {0,1}^{n+1} of size at most n^k, there is a string y ∈ {0,1}^{n+1} \ Range(C) such that A(C) either outputs y or ⊥, and the probability (over the internal randomness of A) that A(C) outputs y is at least 2/3. Moreover, this theorem holds in every relativized world.
As a corollary, for every problem in APEPP, we obtain zero-error pseudodeterministic constructions with an NP oracle and one bit of advice (FZPP^NP/1) that work infinitely often:

Corollary 2.5 (Informal). There are infinitely-often zero-error pseudodeterministic constructions for the following objects with an NP oracle and one bit of advice: Ramsey graphs, rigid matrices, two-source extractors, linear codes, hard truth tables, and K^poly-random strings.
Actually, we obtain single-valued FS2P/1 algorithms for the explicit construction problems above, and the pseudodeterministic FZPP^NP/1 algorithms follow from Cai's theorem that S2P ⊆ ZPP^NP [8]. We state them as pseudodeterministic FZPP^NP/1 algorithms since this notion is better known than that of single-valued FS2P/1 algorithms.
Theorem 2.4 is tantalizingly close to an infinitely-often FP^NP algorithm for Avoid (with the only caveat of being zero-error instead of completely deterministic). However, since an FP^NP algorithm for range avoidance would imply near-maximum circuit lower bounds for E^NP, we expect that fundamentally new ideas would be required to completely derandomize our algorithm. Previously, Hirahara, Lu, and Ren [25, Theorem 36] presented an infinitely-often pseudodeterministic FZPP^NP algorithm for the range avoidance problem using n^ε bits of advice, for any small constant ε > 0. Our result improves upon this in two respects: first, we reduce the number of advice bits to one; second, our techniques relativize while theirs do not.
Lower bounds against non-uniform computation with maximum advice length.Finally, our results also imply lower bounds against non-uniform computation with maximum advice length.We mention this corollary because it is a stronger statement than circuit lower bounds, and similar lower bounds appeared recently in the literature of super-fast derandomization [15].

INTUITIONS
In the following, we present some high-level intuitions for our new circuit lower bounds.

Perspective: Single-Valued Constructions
A key perspective in this paper is to view circuit lower bounds (for exponential-time classes) as single-valued constructions of hard truth tables.This perspective is folklore; it was also emphasized in recent papers on the range avoidance problem [36,44].
Let Π ⊆ {0,1}* be a γ-dense property, i.e., for every integer N, |Π ∩ {0,1}^N| ≥ γ · 2^N. (In what follows, we use Π_N := Π ∩ {0,1}^N to denote the length-N slice of Π.) As a concrete example, let Π^hard be the set of hard truth tables: a string tt ∈ Π^hard if and only if it is the truth table of a function f : {0,1}^n → {0,1} whose circuit complexity is at least 2^n/n, where n := log N. (We assume that n := log N is an integer.) Shannon's argument [18,48] shows that Π^hard is a 1/2-dense property. We are interested in the following question: What is the complexity of single-valued constructions for any string in Π^hard?

Here, informally speaking, a computation is single-valued if each of its computational paths either fails or outputs the same value. For example, an NP machine M is a single-valued construction for Π if there is a "canonical" string y ∈ Π such that (1) M outputs y on every accepting computational path; and (2) M has at least one accepting computational path. (That is, it is an NPSV construction in the sense of [4,17,23,47].) Similarly, a BPP machine M is a single-valued construction for Π if there is a "canonical" string y ∈ Π such that M outputs y on most (say, a ≥ 2/3 fraction of) computational paths. (In other words, single-valued ZPP and BPP constructions are another name for pseudodeterministic constructions [20].)

Hence, the task of proving circuit lower bounds is equivalent to the task of defining, i.e., single-valuedly constructing, a hard function in the smallest possible complexity class. For example, a single-valued BPP construction (i.e., pseudodeterministic construction) for Π^hard is equivalent to the corresponding near-maximum circuit lower bound for randomized exponential time. In this regard, the previous near-maximum circuit lower bound for Δ3E := E^{Σ2P} [42] can be summarized in one sentence: the lexicographically first string in Π^hard can be constructed in Δ3P := P^{Σ2P} (which is necessarily single-valued).
Reduction to Avoid. It was observed in [33,36] that explicit construction of elements of Π^hard is a special case of range avoidance: let TT : {0,1}^{N−1} → {0,1}^N (here N = 2^n) be a circuit that maps the description of a 2^n/n-size circuit into its 2^n-length truth table (by [18], such a description can be encoded in N − 1 bits). Hence, a single-valued algorithm solving Avoid for TT is equivalent to a single-valued construction for Π^hard. This explains how our new range avoidance algorithms imply our new circuit lower bounds (as mentioned in subsection 2.2).
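To make the truth-table generator concrete, here is a minimal sketch with a made-up circuit encoding (a straight-line program over AND/OR/NOT gates); avoiding the range of TT is exactly finding a truth table that no circuit of the given size computes:

```python
from itertools import product

def eval_circuit(gates, x_bits):
    """Evaluate a straight-line program: wires 0..n-1 hold the input bits,
    each gate ('AND'|'OR'|'NOT', i, j) appends one wire (j is ignored for
    'NOT'), and the value of the last wire is the output."""
    wires = list(x_bits)
    for op, i, j in gates:
        if op == 'AND':
            wires.append(wires[i] & wires[j])
        elif op == 'OR':
            wires.append(wires[i] | wires[j])
        else:  # 'NOT'
            wires.append(1 - wires[i])
    return wires[-1]

def TT(gates, n):
    """The truth-table generator: map a circuit description to its 2^n-bit truth table."""
    return tuple(eval_circuit(gates, x) for x in product((0, 1), repeat=n))
```

Enumerating all descriptions with at most one gate over n = 2 inputs shows, for instance, that the truth table of XOR, (0, 1, 1, 0), is outside the range of TT restricted to such short descriptions; a single-valued algorithm solving Avoid for TT must output some such hard truth table.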
In the rest of section 3, we will only consider the special case of Avoid where the input circuit is drawn from a P-uniform circuit family. Specifically, let {C_n : {0,1}^n → {0,1}^{2n}}_{n ∈ N} be a P-uniform family of circuits, where |C_n| ≤ poly(n). Our goal is to find an algorithm A such that for infinitely many n, A(1^n) ∈ {0,1}^{2n} \ Range(C_n); see Sections 5.3 and 5.4 of the full version for how to turn this into an algorithm that works for an arbitrary input circuit with a single bit of stretch. Also, since from now on we will not talk about truth tables anymore, we will use n instead of N to denote the input length of Avoid instances.

The Iterative Win-Win Paradigm of [12]
In a recent work, Chen, Lu, Oliveira, Ren, and Santhanam [12] introduced the iterative win-win paradigm for explicit constructions, and used it to obtain a polynomial-time pseudodeterministic construction of primes that works infinitely often. Since our construction algorithm closely follows their paradigm, it is instructive to take a detour and give a high-level overview of how the construction from [12] works. In this paradigm, for a (starting) input length n_0 and some k = O(log n_0), we will consider an increasing sequence of input lengths n_0 < n_1 < ⋯ < n_k and show that the algorithm succeeds on at least one of them; the intervals of input lengths used for different starting lengths n_0 can be made disjoint, and therefore our algorithm succeeds on infinitely many input lengths.
In more detail, fixing a sequence of input lengths n_0, n_1, …, n_k and letting Π be a γ-dense property, for each i ∈ {0, 1, …, k} we specify a (deterministic) algorithm ALG_i that takes 1^{n_i} as input and aims to construct an explicit element of Π_{n_i}. We let ALG_0 be the simple brute-force algorithm that enumerates all length-n_0 strings and finds the lexicographically first string in Π_{n_0}; it is easy to see that ALG_0 runs in t_0 := 2^{O(n_0)} time.
The win-or-improve mechanism. The core of [12] is a novel win-or-improve mechanism, which is described by a (randomized) algorithm W_i. Roughly speaking, for input lengths n_i and n_{i+1}, W_i(1^{n_i}) attempts to simulate ALG_i faster by using the oracle Π_{n_{i+1}} (hence it runs in poly(n_{i+1}) time). The crucial property is the following win-win argument:

(Win) Either W_i(1^{n_i}) outputs ALG_i(1^{n_i}) with probability at least 2/3 over its internal randomness,

(Improve) or, from the failure of W_i(1^{n_i}), we can construct an algorithm ALG_{i+1} that outputs an explicit element of Π_{n_{i+1}} and runs in t_{i+1} = poly(t_i) time.
We call the above (Win-or-Improve), since either we have a pseudodeterministic algorithm W_i(1^{n_i}) that constructs an explicit element of Π_{n_i} in poly(n_{i+1}) ≤ poly(n_i) time (since it simulates ALG_i), or we have an improved algorithm ALG_{i+1} at the input length n_{i+1} (for example, on input length n_1, the running time t_1 = poly(t_0) of ALG_1 is already far smaller than the brute-force bound 2^{O(n_1)}). The (Win-or-Improve) part of [12] is implemented via the Chen–Tell targeted hitting set generator [14] (we omit the details here). Jumping ahead, in this paper we will implement a similar mechanism using Korten's P^NP reduction from the range avoidance problem to constructing hard truth tables [36].
Getting polynomial time. Now we briefly explain why (Win-or-Improve) implies a polynomial-time construction algorithm. Let e be an absolute constant such that we always have t_{i+1} ≤ t_i^e; we now set n_{i+1} := n_i^{2e}, so that n_i = n_0^{(2e)^i} for every i. Recall that t_0 = 2^{O(n_0)}. The crucial observation is the following: although t_0 is much larger than n_0, the sequence {t_i} grows more slowly than {n_i} (on a logarithmic scale, log t_i ≤ e^i · O(n_0) while log n_i = (2e)^i · log n_0).
For each 0 ≤ i < k, if W_i(1^{n_i}) successfully simulates ALG_i, then we obtain an algorithm for input length n_i running in poly(n_{i+1}) ≤ poly(n_i) time. Otherwise, we have an algorithm ALG_{i+1} running in t_{i+1} time on input length n_{i+1}. Eventually, we will hit some i ≤ k = O(log n_0) such that t_i ≤ poly(n_i), in which case ALG_i itself gives a polynomial-time construction on input length n_i. Therefore, we obtain a polynomial-time algorithm on at least one of the input lengths n_0, n_1, …, n_k.
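The parameter dance can be checked numerically. The following minimal sketch (with illustrative constants we chose for concreteness: e = 2, t_0 = 2^{n_0}, n_{i+1} = n_i^{2e}, and "polynomial" meaning n^10) tracks everything on a log2 scale:

```python
import math

def rounds_until_poly(n0, e=2, poly_exp=10):
    """Iterate the win-or-improve recurrence on a log2 scale until the
    current running time t_i drops below n_i ** poly_exp; return the
    number of improvement rounds."""
    log_t = float(n0)          # t_0 = 2^{O(n_0)}; take t_0 = 2^{n_0} for concreteness
    log_n = math.log2(n0)      # log2 of the current input length n_0
    rounds = 0
    while log_t > poly_exp * log_n:
        log_t *= e             # improve step: t_{i+1} = poly(t_i) <= t_i^e
        log_n *= 2 * e         # next input length: n_{i+1} = n_i^{2e}
        rounds += 1
    return rounds
```

With these constants the ratio (log t_i)/(log n_i) halves every round, so the number of rounds grows like log n_0: for instance, rounds_until_poly(1024) returns 4, while doubling log n_0 to rounds_until_poly(2 ** 20) only raises it to 13.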

Algorithms for Range-Avoidance via Korten's Reduction
Now we describe our new algorithms for Avoid. Roughly speaking, our new algorithm makes use of the iterative win-win argument introduced above, together with an easy-witness-style argument [27] and Korten's reduction [36]. In the following, we introduce the latter two ingredients and show how to chain them together via the iterative win-win argument.
An easy-witness style argument. Let BF be the 2^{O(n)}-time brute-force algorithm outputting the lexicographically first non-output of C_n. Our first idea is to consider its computational history, a unique 2^{O(n)}-length string h_BF (that can be computed in 2^{O(n)} time), and branch on whether h_BF has a small circuit or not. Suppose that h_BF admits a, say, n^c-size circuit for some large constant c; then we apply an easy-witness-style argument [27] to simulate BF by a single-valued FΣ2P algorithm running in poly(n^c) = poly(n) time (see subsection 4.2). Hence, we obtain the desired algorithm when h_BF is easy.
However, it is less clear how to deal with the other case (when h_BF is hard) directly. The crucial observation is that we have gained the following ability: we can generate a string h_BF ∈ {0,1}^{2^{O(n)}} that has circuit complexity at least n^c, in only 2^{O(n)} time.
Korten's reduction. We will apply Korten's recent work [36] to make use of the "gain" above, so it is worth taking a detour to review the main result of [36]. Roughly speaking, [36] gives an algorithm that uses a hard truth table to solve a derandomization task: finding a non-output of a given circuit (that has more output bits than input bits). Formally, [36] gives a P^NP-computable algorithm Korten(C, f) that takes as input a circuit C : {0,1}^n → {0,1}^{2n} and a string f ∈ {0,1}^T (think of n ≪ T), and outputs a string y ∈ {0,1}^{2n}. The guarantee is that if the circuit complexity of f is sufficiently larger than the size of C, then the output y is not in the range of C. This fits perfectly with our "gain" above: for m ≫ n with poly(m) ≪ n^c, Korten(C_m, h_BF) solves Avoid for C_m, since the circuit complexity of h_BF, namely n^c, is sufficiently larger than the size of C_m. Moreover, Korten(C_m, h_BF) runs in only 2^{O(n)} time, which is much less than the brute-force running time 2^{O(m)}. Therefore, we obtain an improved algorithm for Avoid on input length m.
The iterative win-win argument. What we described above is essentially the first stage of a win-or-improve mechanism similar to that of subsection 3.2. Therefore, we only need to iterate the argument above to obtain a polynomial-time algorithm.
For this purpose, we need to consider the computational history not only of BF, but also of algorithms of the form Korten(C, f). For any circuit C and "hard" truth table f, there is a unique "computational history" h of Korten(C, f), and the length of h is upper bounded by poly(|f|). We are able to prove the following statement, akin to the easy witness lemma [27]: if h admits a size-s circuit (think of s ≪ |f|), then Korten(C, f) can be simulated by a single-valued FΣ2P algorithm in time poly(s); see subsection 4.2 for details on this argument.

Now, following the iterative win-win paradigm of [12], for a (starting) input length n_0 and some k = O(log n_0), we consider an increasing sequence of input lengths n_0, n_1, …, n_k, and show that our algorithm succeeds on at least one of the input lengths (i.e., A(1^{n_i}) ∈ {0,1}^{2n_i} \ Range(C_{n_i}) for some i ∈ {0, 1, …, k}). For each i ∈ {0, 1, …, k}, we specify an algorithm ALG_i of the form Korten(C_{n_i}, −) that aims to solve Avoid for C_{n_i}; in other words, we specify a string f_i ∈ {0,1}^{t_i} for some t_i and let ALG_i := Korten(C_{n_i}, f_i).
The algorithm ALG_0 is simply the brute-force algorithm BF at input length n_0. (A convenient observation is that we can specify an exponentially long string f_0 ∈ {0,1}^{2^{O(n_0)}} so that Korten(C_{n_0}, f_0) is equivalent to BF = ALG_0; see Fact 3.4 in the full version.) For each 0 ≤ i < k, to specify ALG_{i+1}, let f_{i+1} denote the history of the algorithm ALG_i, and consider the following win-or-improve mechanism.
(Win) If f_{i+1} admits an n_i^c-size circuit (for some large constant c), then by our easy-witness argument, we can simulate ALG_i by a poly(n_i^c)-time single-valued FΣ2P algorithm.

(Improve) Otherwise, f_{i+1} has circuit complexity at least n_i^c, and we can plug it into Korten's reduction to solve Avoid for C_{n_{i+1}}. That is, we take ALG_{i+1} := Korten(C_{n_{i+1}}, f_{i+1}) as our new algorithm on input length n_{i+1}.

Let t_i = |f_i|; then t_{i+1} ≤ poly(t_i). By setting n_{i+1} = n_i^{Θ(c)} for a sufficiently large constant c, a similar analysis as in [12] shows that for some k = O(log n_0) we would have t_k ≤ poly(n_k), meaning that ALG_k would be a poly(n_k)-time FP^NP algorithm (and thus also a single-valued FΣ2P algorithm) solving Avoid for C_{n_k}. Putting everything together, we obtain a polynomial-time single-valued FΣ2P algorithm that solves Avoid for at least one of the C_{n_i}.
The hardness condenser perspective. Below we present another perspective on the construction above, which may help the reader understand it better. In the following, we fix C_n : {0,1}^n → {0,1}^{2n} to be the truth table generator TT_{n,2n} that maps an n-bit description of a log(2n)-input circuit into its length-2n truth table. Hence, instead of solving Avoid in general, our goal here is simply to construct hard truth tables (or equivalently, to prove circuit lower bounds).
We note that Korten(TT_{n,2n}, f) can then be interpreted as a hardness condenser [7]: given a truth table f ∈ {0,1}^{|f|} whose circuit complexity is sufficiently larger than n, it outputs a length-2n truth table that is maximally hard (i.e., with no circuits of size O(n/log n)). The win-or-improve mechanism can be interpreted as an iterative application of this hardness condenser.
At stage i, we consider the algorithm ALG_i, which runs in roughly t_i ≈ |f_i| time and creates (roughly) n_i bits of hardness. (That is, the circuit complexity of the output of ALG_i is roughly n_i.) In the (Win) case above, ALG_i admits an n_i^c-size history f_{i+1} (of length roughly t_i) and can therefore be simulated in FΣ2P. The magic is that in the (Improve) case, we actually have access to much more hardness than n_i: the history string f_{i+1} has roughly n_i^c ≫ n_i bits of hardness. So we can distill this hardness by applying the condenser to f_{i+1} to obtain a maximally hard truth table of length 2n_{i+1}, set the next algorithm ALG_{i+1} := Korten(TT_{n_{i+1},2n_{i+1}}, f_{i+1}), and keep iterating.

PROOF OVERVIEW
In this section, we elaborate on the computational history of Korten and explain how the easy-witness-style argument gives us FΣ2P and FS2P algorithms.
In particular, letting f be a truth table whose circuit complexity is sufficiently larger than the size of C, by the first property above, f is not in Range(GGM[C]), and therefore Korten(C, f) solves Avoid for C. This confirms our description of Korten in subsection 2.2.

Computational History of Korten and an Easy-Witness Argument for FΣ2P Algorithms

The algorithm Korten(C, f) works as follows: we first view f as the labels of the last layer of a binary tree, and try to reconstruct the whole binary tree layer by layer (from the bottom layer to the top layer; within each layer, from the rightmost node to the leftmost one), filling in the labels of the intermediate nodes. To fill v_{i,j}, we use the NP oracle to find the lexicographically first string x ∈ {0,1}^n such that C(x) = v_{i+1,2j} • v_{i+1,2j+1}, and set v_{i,j} := x. If no such x exists, the algorithm stops and reports v_{i+1,2j} • v_{i+1,2j+1} as the solution to Avoid for C. Observe that this reconstruction procedure must stop somewhere: if it successfully reproduced all the labels in the binary tree, we would have f = GGM[C](v_{0,0}) ∈ Range(GGM[C]), contradicting the assumption. For details, see [36, Theorem 7] or Lemma 3.3 of the full version.
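This reconstruction loop may be sketched as follows (a toy version: node labels are n-bit ints, a brute-force search stands in for the NP oracle, and the bottom layer has power-of-two length):

```python
def korten(C, n, f_blocks):
    """Toy version of Korten(C, f): C maps n-bit ints to 2n-bit ints, and
    f_blocks (a list of n-bit ints) is the bottom layer of the tree.
    Rebuild the tree bottom-up, rightmost node first; report the first
    sibling pair whose concatenation has no preimage under C."""
    layer = list(f_blocks)
    while len(layer) > 1:
        parents = [None] * (len(layer) // 2)
        for j in reversed(range(len(parents))):          # right to left
            target = (layer[2 * j] << n) | layer[2 * j + 1]
            # brute-force stand-in for the NP oracle: lex-first preimage
            x = next((x for x in range(2 ** n) if C(x) == target), None)
            if x is None:
                return target        # target lies outside Range(C): Avoid solved
            parents[j] = x
        layer = parents
    return None  # the whole tree was filled in: f is in Range(GGM[C])
```

For instance, with n = 2, C(x) = 5x, and bottom layer [1, 2, 3, 0], the rightmost sibling pair (3, 0) concatenates to 12, which has no preimage under C, so 12 is returned as a non-output of C.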
The computational history of Korten. The algorithm described above induces a natural notion of the computational history of Korten, denoted History(C, f), defined as follows: the index (i*, j*) at which the algorithm stops (i.e., the algorithm fails to fill in v_{i*,j*}), concatenated with the labels of all the nodes generated by Korten(C, f) (for the intermediate nodes with no label assigned, we set the labels to a special symbol ⊥); see Figure 2 for an illustration. This history has length at most 5|f|, and for convenience, we pad additional zeros at the end so that its length is exactly 5|f|.

A local characterization of History(C, f). The crucial observation we make about History(C, f) is that it admits a local characterization in the following sense: there is a family of local constraints {ψ_z}_{z ∈ {0,1}^{poly(n)}}, where each ψ_z : {0,1}^{5|f|} × {0,1}^{|f|} → {0,1} reads only poly(n) many bits of its input (we think of these as local constraints since usually n ≪ |f|), such that for fixed f, History(C, f) • f is the unique string making all the ψ_z output 1. The constraints are as follows: (1) for every leaf node v_{i,j}, its content is consistent with the corresponding block of f; (2) all labels at or before node (i*, j*) are ⊥; (3) for every x ∈ {0,1}^n, C(x) ≠ v_{i*+1,2j*} • v_{i*+1,2j*+1} (meaning the algorithm fails at (i*, j*)); (4) for every (i, j) after (i*, j*), C(v_{i,j}) = v_{i+1,2j} • v_{i+1,2j+1} (v_{i,j} is a correct label); (5) for every (i, j) after (i*, j*) and every x′ < v_{i,j}, C(x′) ≠ v_{i+1,2j} • v_{i+1,2j+1} (v_{i,j} is the lexicographically first correct label). Each of these constraints reads only poly(n) many bits of its input, and a careful examination shows that they precisely define the string History(C, f).
A more intuitive way to look at these local constraints is to treat them as a poly(n)-time oracle algorithm V_History that takes a string z ∈ {0,1}^{poly(n)} as input and two strings h ∈ {0,1}^{5|f|} and f ∈ {0,1}^{|f|} as oracles, where we simply let V^{h,f}_History(z) = ψ_z(h • f). Since the constraints above are all very simple and only read poly(n) bits of h • f, V_History runs in poly(n) time. In some sense, V_History is a local Π_1 verifier: it is local in the sense that it only queries poly(n) bits from its oracles, and it is Π_1 in the sense that it needs a universal quantifier over z ∈ {0,1}^{poly(n)} to perform all the checks.

FΣ2P algorithms. Before we proceed, we give a formal definition of a single-valued FΣ2P algorithm A. Here A is implemented by a verifier V taking an input x, two poly(|x|)-length witnesses π_1 and π_2, and a candidate output y. We say that A(x) outputs y ∈ {0,1}^ℓ if:

• there exists π_1 such that for every π_2, V(x, π_1, π_2, y) = 1.

We can view V as a verifier that checks whether y is the desired output using another universal quantifier: given a proof π_1 and a string y ∈ {0,1}^ℓ, A accepts if and only if for every π_2, V(x, π_1, π_2, y) = 1. That is, A can perform exponentially many checks on π_1 and y, each taking poly(|x|) time.
The easy-witness argument. Now we are ready to elaborate on the easy-witness argument mentioned in subsection 2.2. Recall that at stage i we have ALG_i = Korten(C_{n_i}, f_i) and f_{i+1} = History(C_{n_i}, f_i) (the history of ALG_i). Assuming that f_{i+1} admits a poly(n_i)-size circuit, we want to show that Korten(C_{n_i}, f_i) can be simulated by a poly(n_i)-time single-valued FΣ2P algorithm.
The verifier receives, as part of the proof, small circuits D_0, D_1, …, D_{i+1}, where tt(D_{j+1}) is supposed to be the history f_{j+1}. The crucial observation is that since all these circuits have size poly(n_i), each check above can be implemented in poly(n_i) time, as it only reads at most poly(n_i) bits from its input, despite the fact that tt(D_{j+1}) • tt(D_j) itself can be much longer than poly(n_i). Assuming that all the checks above pass, by induction we know that tt(D_{j+1}) = f_{j+1} = History(C_{n_j}, f_j) for every j ∈ {0, 1, …, i}. Finally, the verifier checks whether y corresponds to the answer described in tt(D_{i+1}) = f_{i+1}.

Selectors and an Easy-Witness Argument for FS2P Algorithms

Finally, we discuss how to implement the easy-witness argument above with a single-valued FS2P algorithm. It is known that any single-valued FS2BPP algorithm can be converted into an equivalent single-valued FS2P algorithm outputting the same string [10,45]. Therefore, in the following we aim to give a single-valued FS2BPP algorithm solving range avoidance, which is easier to achieve.
FS2BPP algorithms and randomized selectors. Before we proceed, we give a formal definition of a single-valued FS2BPP algorithm A. We implement A by a randomized algorithm V that takes an input x and two poly(|x|)-length witnesses π_1 and π_2. We say that A(x) outputs a string y ∈ {0,1}^ℓ (we assume that ℓ = ℓ(x) can be computed in polynomial time from x) if the following holds:

• there exists a string h such that for every π, both V(x, h, π) and V(x, π, h) output y with probability at least 2/3. (Note that such a y must be unique if it exists.)

Actually, our algorithm will be implemented as a randomized selector: given two potential proofs π_1 and π_2, it first selects the correct one and then outputs the string induced by the correct proof.

Recap. Revisiting the algorithm in subsection 3.3, our goal now is to give an FS2BPP simulation of Korten(C, f), assuming that History(C, f) admits a small circuit. Similar to the local Π_1 verifier used in the case of FΣ2P algorithms, we now consider a local randomized selector select, which takes oracles h_1, h_2 ∈ {0,1}^{5|f|} and f ∈ {0,1}^{|f|}, such that if exactly one of h_1 and h_2 equals History(C, f), then select outputs its index with high probability.
Assuming that f_{i+1} = History(C, f) admits a small circuit, one can similarly turn select into a single-valued FS2BPP algorithm computing Korten(C, f): treat the two proofs π_1 and π_2 as two small circuits both supposed to compute f_{i+1}; from π_1 and π_2 we can obtain sequences of circuits {D^1_j} and {D^2_j} supposed to compute the f_j for j ∈ [i+1]. Then we can use the selector select to decide, for each j ∈ [i+1], which of D^1_j and D^2_j is the correct circuit for f_j. Finally, we output the answer encoded in the selected circuit for f_{i+1}.

Observation: it suffices to find the first differing node label. Ignore the (i*, j*) part of the history for now. Let {v^1_{i,j}} and {v^2_{i,j}} be the node labels encoded in π_1 and π_2, respectively. We also assume that exactly one of them corresponds to the correct node labels in History(C, f). The crucial observation here is that, since the correct node labels are generated by a deterministic procedure node by node (from bottom to top and from rightmost to leftmost), it is possible to tell which of {v^1_{i,j}} and {v^2_{i,j}} is correct given the largest (i′, j′) such that v^1_{i′,j′} ≠ v^2_{i′,j′}. (Note that since all (i, j) are processed by Korten(C, f) in reverse lexicographic order, this (i′, j′) corresponds to the first node label on which the wrong process differs from the correct process, so we call it the first differing point.)
In more detail, assuming we know this (i′, j′), we proceed by a case analysis. First, if (i′, j′) corresponds to a leaf, then one can query f to figure out which of v^1_{i′,j′} and v^2_{i′,j′} is consistent with the corresponding block of f. Now we can assume that (i′, j′) corresponds to an intermediate node. Since (i′, j′) is the first differing point, the two candidate histories agree on the children labels, i.e., v^1_{i′+1,2j′} • v^1_{i′+1,2j′+1} = v^2_{i′+1,2j′} • v^2_{i′+1,2j′+1} (we let this string be w for convenience). By the definition of History(C, f), the correct v_{i′,j′} is uniquely determined by w, which means that the selector only needs to read w, v^1_{i′,j′}, and v^2_{i′,j′}, and can then be implemented by a somewhat tedious case analysis (so it is local). We refer readers to the proof of Lemma 5.5 in the full version for the details and only highlight the most illuminating case here: if both v^1_{i′,j′} and v^2_{i′,j′} are good (we say a label x is good if x ≠ ⊥ and C(x) = w), we select the lexicographically smaller one. To handle the (i*, j*) part, one needs some additional case analysis. We omit the details here and refer the reader to the proof in the full version.
The takeaway here is that if we can find the first differing label (i′, j′), then we can construct the selector select, and hence the desired single-valued FS2BPP algorithm.
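The selection rule at the first differing point may be sketched as follows (a self-contained toy version: labels are n-bit ints, None stands for ⊥, C is a Python function, and `target` denotes the concatenation of the agreed-upon children labels; the full selector of Lemma 5.5 handles additional cases around (i*, j*)):

```python
BOT = None  # stands for the ⊥ label

def select(C, target, v1, v2):
    """Given that exactly one of v1, v2 is the correct label at the first
    differing point, return 1 or 2, the index of the correct candidate.
    A label x is "good" if x != ⊥ and C(x) == target."""
    def good(v):
        return v is not BOT and C(v) == target
    if good(v1) and good(v2):
        return 1 if v1 < v2 else 2   # the honest run takes the lex-first preimage
    if good(v1):
        return 1                     # v2 is neither a preimage nor the ⊥ case
    if good(v2):
        return 2
    return 1 if v1 is BOT else 2     # no preimage exists, so the correct label is ⊥
```

Note that the rule reads only `target` and the two candidate labels, which is what makes the selector local.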
Encoded history. However, the above assumes knowledge of (i′, j′). In general, if one is only given oracle access to {v^1_{i,j}} and {v^2_{i,j}}, there is no poly(n)-time oracle algorithm computing (i′, j′), because there might be exponentially many nodes. To resolve this issue, we encode {v^1_{i,j}} and {v^2_{i,j}} via Reed–Muller codes. Formally, recall that History(C, f) is the concatenation of (i*, j*) and the string V, where V is the concatenation of all the labels in the binary tree. We now define the encoded history as the concatenation of (i*, j*) and a Reed–Muller encoding of V. The new selector is given oracle access to two candidate encoded histories together with f. By applying low-degree tests and self-correction of polynomials, we may assume that the Reed–Muller parts of the two candidates are indeed low-degree polynomials. Then we can use a reduction to polynomial identity testing to compute the first differing point between {v^1_{i,j}} and {v^2_{i,j}} in randomized polynomial time. See the proof of Lemma 5.3 in the full version for details. This part is similar to the selector construction from [24].
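The identity-testing step can be illustrated with a univariate (Reed–Solomon-style) fingerprint, a simplification of the Reed–Muller encoding used in the paper: two distinct coefficient vectors evaluate differently at a random point with high probability, and binary search over prefix fingerprints then locates the first differing index with only O(log L) comparisons:

```python
P = (1 << 61) - 1  # a large prime modulus (2^61 - 1 is a Mersenne prime)

def fingerprint(v, r, hi):
    """Evaluate the polynomial with coefficients v[0..hi-1] at point r, mod P."""
    acc = 0
    for c in reversed(v[:hi]):       # Horner's rule
        acc = (acc * r + c) % P
    return acc

def first_difference(a, b, r):
    """Binary-search the first index where equal-length vectors a and b differ,
    comparing prefixes only through fingerprints at the random point r.
    Returns None when all prefix fingerprints agree (whp: a == b)."""
    if fingerprint(a, r, len(a)) == fingerprint(b, r, len(b)):
        return None
    lo, hi = 0, len(a)               # invariant: length-lo prefixes agree,
    while hi - lo > 1:               # length-hi prefixes differ
        mid = (lo + hi) // 2
        if fingerprint(a, r, mid) == fingerprint(b, r, mid):
            lo = mid
        else:
            hi = mid
    return hi - 1                    # the first differing index
```

In the paper's setting the labels are accessed through small circuits rather than as plain arrays, and the Reed–Muller encoding is what allows the fingerprints to be computed with only poly(n) oracle queries; the sketch above only illustrates the randomized comparison logic.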

DISCUSSIONS
We conclude the introduction by discussing some related works.

Previous Approach: Karp-Lipton Collapses and the Half-Exponential Barrier
In the following, we elaborate on the half-exponential barrier mentioned earlier in the introduction. All the aforementioned super-polynomial circuit lower bounds for Σ2EXP, ZPEXP^NP, S2EXP, PEXP, MA-EXP, and ZPEXP^MCSP are proven in this way, i.e., via Karp–Lipton collapses. 21

The half-exponential barrier. The above argument is very successful at proving various super-polynomial lower bounds. However, a closer look shows that it is only capable of proving sub-half-exponential circuit lower bounds. 20 Indeed, suppose we want to show that C-EXP does not have circuits of size s(n). We will have to perform a win-win analysis analogous to the Karp–Lipton argument, with SIZE[s(n)] in place of P/poly. Intuitively speaking, the two cases of that analysis are competing with each other: we cannot get exponential lower bounds in both.

20 A function s : N → N is sub-half-exponential if s(s(n)^c) = 2^{o(n)} for every constant c ≥ 1, i.e., composing s twice yields a sub-exponential function. For example, for constants c ≥ 1 and ε > 0, the functions s(n) = n^c and s(n) = 2^{log^c n} are sub-half-exponential, but the functions s(n) = 2^{εn} and s(n) = 2^{n^ε} are not.

21 There is some evidence that Karp–Lipton collapses are essential for proving circuit lower bounds [13].

Implications for the Missing-String Problem?
In the Missing-String problem, we are given a list of strings x_1, x_2, ..., x_m ∈ {0, 1}^n where m < 2^n, and the goal is to output any length-n string that does not appear in {x_1, x_2, ..., x_m}. Vyas and Williams [51] connected the circuit complexity of Missing-String with the (relativized) circuit complexity of Σ2E: roughly speaking, Σ2E^A contains a language of near-maximum A-oracle circuit complexity for every oracle A if and only if the Missing-String problem can be solved by a "good" circuit family (roughly speaking, a uniform family of depth-3 AC0 circuits of size 2^{poly(n)} and bottom fan-in poly(n)).
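For concreteness, here is a minimal sketch of two trivial regimes of Missing-String (the function names and parameter choices are my own): brute force is correct whenever m < 2^n but takes exponential time, while Cantor-style diagonalization runs in linear time but only when m ≤ n. The interesting regime in [51] is m = 2^{Θ(n)} with small uniform depth-3 circuits, where neither approach applies.

```python
from itertools import product

def missing_string_bruteforce(xs, n):
    # Correct whenever len(xs) < 2^n, but takes 2^n time in the worst case:
    # scan all length-n strings in lexicographic order.
    present = set(xs)
    for bits in product("01", repeat=n):
        s = "".join(bits)
        if s not in present:
            return s
    return None  # unreachable when len(xs) < 2^n

def missing_string_diagonal(xs, n):
    # Cantor-style diagonalization: when m <= n, flipping the i-th bit of the
    # i-th string yields a string differing from every x_i, in O(m*n) time.
    assert len(xs) <= n
    flipped = ["1" if xs[i][i] == "0" else "0" for i in range(len(xs))]
    return "".join(flipped).ljust(n, "0")  # pad arbitrarily when m < n
```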
The intuition behind Theorem 5.1 is roughly as follows. For every oracle A, the set of truth tables with low A-oracle circuit complexity induces an instance of Missing-String, and solving this instance gives us a hard truth table relative to A. If the algorithm for Missing-String is a uniform AC0 circuit of depth 3, then the hard function is inside Σ2E^A.
However, although our Theorem 2.1 relativizes, it does not seem to imply any non-trivial depth-3 AC0 circuit for Missing-String. The reason is the heavy win-win analysis across multiple input lengths: for each 0 ≤ i < k, we have a single-valued FΣ2P construction algorithm for hard truth tables relative to the oracle A on input length n_i, but this algorithm needs access to n_{i+1}, a higher input length. Translating this into the language of Missing-String, we obtain a weird-looking depth-3 AC0 circuit that takes as input a sequence of Missing-String instances I_0, I_1, ..., I_k (where each I_i ⊆ {0, 1}^{n_i} is a set of strings), looks at all of the instances (or, at least, I_i and I_{i+1}), and outputs a purportedly missing string of I_i. It is guaranteed that for at least one input length i, the output string is indeed a missing string of I_i. However, if our algorithm is only given one instance I ⊆ {0, 1}^n, without assistance from a larger input length, it does not know how to find any missing string of I.

SUBSEQUENT DEVELOPMENTS
Just one month after our paper was posted online, Li [39] strengthened our results and removed the need for the iterative win-win argument. This allows [39] to prove the following:

Theorem 6.1 ([39]). The following are true:
• S2E and ZPE^NP also admit the same almost-everywhere near-maximum circuit lower bounds. Moreover, this holds in every relativized world.
• There is a single-valued FS2P algorithm for the range avoidance problem that works on every input length. Consequently, there are zero-error pseudodeterministic polynomial-time constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, hard truth tables, and K^poly-random strings, with an NP oracle.
• There is a uniform family of quasi-polynomial-size depth-3 AC0 circuits solving the Missing-String problem.
Compared to our results, Theorem 6.1 holds on almost every input length and does not require the advice bit.
Following our work, the proof of [39] also utilizes the history of Korten's reduction. The crucial insight of [39] is that a variant of the history (called Histree in [39, Definition 3.5]) always has a succinct description. In contrast, our proof needs to branch on whether our History has a succinct description and perform a win-win analysis accordingly.
Let C be a "typical" uniform complexity class containing P. A Karp–Lipton collapse to C states that if a large class (say EXP) has polynomial-size circuits, then this class collapses to C. For example, there is a Karp–Lipton collapse to C = Σ2P: if EXP ⊆ P/poly, then EXP = Σ2P ([32], attributed to Albert Meyer). Now, assuming that EXP ⊆ P/poly ⟹ EXP = C, the following win-win analysis implies that C-EXP, the exponential-time version of C, is not in P/poly:
(1) if EXP ⊄ P/poly, then of course C-EXP ⊇ EXP does not have polynomial-size circuits;
(2) otherwise, EXP ⊆ P/poly. We have EXP = C and, by padding, EEXP = C-EXP. Since EEXP contains a function of maximum circuit complexity by direct diagonalization, it follows that C-EXP does not have polynomial-size circuits.
Karp–Lipton collapses are known for the classes Σ2P [32], ZPP^NP [5], S2P [8] (attributed to Samik Sengupta), PP, MA [3, 40], and ZPP^MCSP [26].
The same analysis scales to larger size bounds s(n):
• if EXP ⊄ SIZE[s(n)], then of course C-EXP ⊇ EXP does not have circuits of size s(n);
• if EXP ⊆ SIZE[s(n)], then (a scaled-up version of) the Karp–Lipton collapse implies that EXP can be computed by a C machine in poly(s(n)) time. Note that TIME[2^{poly(s(n))}] does not have circuits of size s(n), by direct diagonalization. By padding, TIME[2^{poly(s(n))}] can be computed by a C machine in poly(s(poly(s(n)))) time. Therefore, if s is sub-half-exponential (meaning s(poly(s(n))) = 2^{o(n)}), then C-EXP does not have circuits of size s(n).
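As a quick sanity check of the sub-half-exponential condition (taking poly(s(n)) = s(n)^c for a constant c ≥ 1, an illustrative simplification), one can verify the two examples from the footnote:

```latex
% s(n) = 2^{\log^2 n} survives two compositions:
s\!\left(s(n)^{c}\right)
  = 2^{\log^2\left(2^{\,c\log^2 n}\right)}
  = 2^{\left(c\log^2 n\right)^2}
  = 2^{c^2\log^4 n}
  = 2^{o(n)}.
% s(n) = 2^{n^\varepsilon} does not: a single composition already overshoots:
s\!\left(s(n)\right)
  = 2^{\left(2^{n^\varepsilon}\right)^{\varepsilon}}
  = 2^{\,2^{\varepsilon n^{\varepsilon}}}
  \gg 2^{O(n)}.
```

In the second case the padded class TIME[2^{poly(s(n))}] lies far beyond C-EXP, so the win-win argument breaks down.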