Adaptively-Sound Succinct Arguments for NP from Indistinguishability Obfuscation

A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement x is true with a proof of size o(|x| + |w|), where w is the associated NP witness. A SNARG satisfies adaptive soundness if the malicious prover can choose the statement to prove after seeing the scheme parameters. In this work, we provide the first adaptively-sound SNARG for NP in the plain model assuming sub-exponentially-hard indistinguishability obfuscation, sub-exponentially-hard one-way functions, and either the (polynomial) hardness of the discrete log assumption or the (polynomial) hardness of factoring. This gives the first adaptively-sound SNARG for NP from falsifiable assumptions. All previous SNARGs for NP in the plain model either relied on non-falsifiable cryptographic assumptions or satisfied a weak notion of non-adaptive soundness (where the adversary has to choose the statement it proves before seeing the scheme parameters).


INTRODUCTION
A succinct non-interactive argument (SNARG) for NP allows a (computationally-bounded) prover to convince a verifier that an NP statement x is true with a proof whose size scales with o(|x| + |w|), where w is the associated NP witness. While succinct arguments for NP can be constructed unconditionally in the random oracle model [38, 40], the same is not true in the plain model. In the plain model, we assume the prover and the verifier have access to a common reference string (CRS). SNARGs for NP that do not have a common reference string are unlikely to exist [3, 45].
Existing constructions of SNARGs in the plain model either rely on strong, non-falsifiable cryptographic assumptions [1, 4-9, 18, 19, 21, 25, 39] or only support subsets of NP [12, 13, 15-17, 26, 29, 31-35, 43]. To date, the only exception is the construction of Sahai and Waters of a SNARG for NP from indistinguishability obfuscation (iO) and one-way functions [42]. While the existence of iO is itself not a falsifiable assumption, recent work has shown how to base iO on a collection of falsifiable assumptions [27, 28]. However, the Sahai-Waters construction only achieves a weak notion of non-adaptive soundness, where soundness only holds against an adversary that declares its false statement before seeing the common reference string. The more natural notion of security is adaptive soundness, which allows the adversary to choose its statement after seeing the scheme parameters. Achieving adaptively-sound SNARGs for NP from standard falsifiable cryptographic assumptions has proven to be an elusive goal, and any such construction must overcome black-box separations [14, 23].
This work. In this work, we construct the first adaptively-sound SNARG for NP assuming the existence of a sub-exponentially-hard iO scheme,1 a sub-exponentially-hard one-way function, and polynomial hardness of a standard number-theoretic assumption (e.g., the hardness of discrete log or the hardness of factoring). In conjunction with the results basing iO on falsifiable assumptions [27, 28], this yields the first adaptively-sound SNARG for NP from falsifiable assumptions. We summarize our construction below and provide a technical overview of our construction in Section 2.
Theorem 1.1 (Informal). Assuming (1) either the polynomial hardness of computing discrete logs in a prime-order group or the polynomial hardness of factoring, (2) the existence of a sub-exponentially-secure indistinguishability obfuscation scheme for Boolean circuits, and (3) the existence of a sub-exponentially-secure one-way function, there exists an adaptively-sound SNARG for NP. Specifically, we construct a SNARG for NP with the following properties:
• Preprocessing SNARG: Similar to [42], we work in the preprocessing model where there is a large CRS that depends on the Boolean circuit C : {0, 1}^n × {0, 1}^h → {0, 1} that computes the NP relation (i.e., n is the statement length and h is the witness length for the NP relation). The size of the common reference string is poly(λ + |C|), where λ denotes a security parameter.
• Proof size: The size of the proof is poly(λ).
Moreover, the SNARG satisfies perfect zero-knowledge.
The Gentry-Wichs separation. A classic result of Gentry and Wichs [23] rules out adaptively-sound SNARGs for NP whose security can be based on a black-box reduction to a falsifiable cryptographic assumption (and in some settings, even with a CRS whose size grows with the size of the NP relation [14]). A critical assumption in the Gentry-Wichs separation is that the running time of the SNARG security reduction is insufficient to decide the associated NP language. The existing reductions of iO to falsifiable assumptions [27, 28] run in time that is exponential in the input length of the obfuscated program. In our construction, the CRS contains obfuscated programs that take the statement and the witness as input. Correspondingly, our security reduction runs in time that is sufficient to decide membership in the underlying NP language. For this reason, the Gentry-Wichs separation does not apply to our construction. As was noted in [26], this caveat also applies to the original Sahai-Waters SNARG based on iO, so the Gentry-Wichs separation also does not say anything about the Sahai-Waters construction.
One interpretation of the Gentry-Wichs separation is that to build adaptively-sound SNARGs for NP from falsifiable assumptions, we need some form of sub-exponential hardness (and complexity leveraging). In this case, either the size of the CRS or the size of the proof must grow with the size of the statement or witness. The challenge then is to offload the complexity-leveraging overhead in the construction entirely to the CRS in order to keep the proofs succinct. This is precisely what our new approach achieves.
In addition to iO, the Sahai-Waters construction requires a one-way function f : Y → Z and a puncturable pseudorandom function (PRF) F with domain {0, 1}^n and output space Y. A puncturable PRF [10, 11, 37] is a pseudorandom function where the PRF key k can be "punctured" at an input point x. We write k^(x) to denote a key punctured at input x. The punctured key can be used to evaluate the PRF on all inputs x′ ≠ x (i.e., F(k, x′) = F(k^(x), x′) for all x′ ≠ x). Moreover, the value of the PRF F(k, x) should remain pseudorandom even given the punctured key k^(x).
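Puncturable PRFs can be built from any length-doubling PRG via the classic GGM tree. The following is a minimal illustrative sketch (not the construction used in this paper's proofs), with SHA-256 standing in for the PRG; all names are ours:

```python
import hashlib

def _G(seed: bytes, bit: int) -> bytes:
    # Length-doubling PRG modeled by SHA-256 (left/right child of a GGM node).
    return hashlib.sha256(seed + bytes([bit])).digest()

class GGMPuncturablePRF:
    """GGM-tree PRF on n-bit inputs, supporting puncturing at a single point."""
    def __init__(self, key: bytes, n: int):
        self.key, self.n = key, n

    def eval(self, x: int) -> bytes:
        node = self.key
        for i in reversed(range(self.n)):  # consume input bits MSB-first
            node = _G(node, (x >> i) & 1)
        return node

    def puncture(self, x_star: int) -> "PuncturedKey":
        # Store the sibling of every node on the path to x_star.
        siblings, node = {}, self.key
        for i in reversed(range(self.n)):
            b = (x_star >> i) & 1
            siblings[i] = _G(node, 1 - b)
            node = _G(node, b)
        return PuncturedKey(siblings, self.n, x_star)

class PuncturedKey:
    def __init__(self, siblings: dict, n: int, x_star: int):
        self.siblings, self.n, self.x_star = siblings, n, x_star

    def eval(self, x: int) -> bytes:
        if x == self.x_star:
            raise ValueError("key is punctured at this point")
        # Resume from the stored sibling at the most significant differing bit.
        d = (x ^ self.x_star).bit_length() - 1
        node = self.siblings[d]
        for i in reversed(range(d)):
            node = _G(node, (x >> i) & 1)
        return node
```

Evaluation with the punctured key agrees with the real key on every x ≠ x*, while no node on the path to x* is retained, which is what makes F(k, x*) pseudorandom given k^(x*).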
The Sahai-Waters construction. In the Sahai-Waters construction, the common reference string consists of two obfuscated programs: a Prove program for generating proofs and a Verify program for validating proofs. Here, we describe a variant where the Verify program is replaced by an "instance-generator" GenInst, which will be a useful stepping stone to our construction. The CRS contains obfuscations of the Prove and GenInst programs. To construct a proof for a statement x and witness w, the prover simply runs the (obfuscated) Prove program on input (x, w) to obtain a proof π. To check the proof on statement x, the verifier runs the (obfuscated) GenInst program on input x to obtain a challenge z ∈ Z. The verifier then checks that f(π) = z.
In the original Sahai-Waters construction [42, §5.5], the CRS contained a Verify program that combines GenInst and the verification check f(π) = z. We separate them because doing so will be helpful for understanding our SNARG construction.
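To make the description concrete, here is a toy sketch of the functionality that the two obfuscated programs compute, with HMAC-SHA256 standing in for the puncturable PRF F, SHA-256 for the one-way function f, and a hash-preimage relation as the NP relation C. The obfuscation step (and hence all of the security) is elided, and all names are illustrative:

```python
import hashlib, hmac

K = b"prf-key"                       # PRF key hard-wired into both programs

def f(y: bytes) -> bytes:            # stand-in one-way function f : Y -> Z
    return hashlib.sha256(b"owf" + y).digest()

def F(k: bytes, x: bytes) -> bytes:  # stand-in puncturable PRF F(k, .)
    return hmac.new(k, x, hashlib.sha256).digest()

def C(x: bytes, w: bytes) -> bool:   # toy NP relation: w is a SHA-256 preimage of x
    return hashlib.sha256(w).digest() == x

def Prove(x: bytes, w: bytes):
    # Release the OWF preimage for x's challenge, but only on true statements.
    return F(K, x) if C(x, w) else None

def GenInst(x: bytes) -> bytes:
    # The one-way function challenge associated with statement x.
    return f(F(K, x))

def Verify(x: bytes, pi) -> bool:
    return pi is not None and f(pi) == GenInst(x)
```

For a true statement x = SHA256(w), Prove(x, w) outputs F(K, x) and Verify accepts, since f(F(K, x)) = GenInst(x); for any other statement Prove outputs nothing.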
At a high level, we can view GenInst as generating a challenge (for the one-way function) for each statement and the Prove program as generating solutions to those challenges. Informally, soundness follows from the fact that the Prove program only solves instances associated with true statements x ∈ L_C, where we define L_C = {x : ∃w, C(x, w) = 1}. In order to construct a proof for a false statement x* ∉ L_C, the adversary essentially has to invert the one-way function on a (pseudorandom) input F(k, x*), which should be hard. This is formalized via the following hybrid argument:
• In the non-adaptive soundness game, the adversary first commits to the false statement x* ∉ L_C. Now, we can construct an equivalent pair of programs that use a punctured key k^(x*) in place of k. In the case of the Prove program, since x* ∉ L_C, the program never needs to evaluate F(k, x*). Thus, we can replace the obfuscated programs in the CRS with obfuscations of the following programs (which compute identical functionality as the previous programs):

• By puncturing security of the PRF, the value of F(k, x*) is pseudorandom even given the punctured key k^(x*). More precisely, the distribution of f(F(k, x*)) is computationally indistinguishable from the distribution of z = f(r), where r ← Y is a uniformly random element of the codomain of the PRF. This means that the following two programs are computationally indistinguishable:

• In this experiment, the only way the adversary produces a valid proof for the statement x* is by outputting a π such that f(π) = z. Thus, to come up with a valid proof for x*, the adversary must invert f at a random z = f(r). This is computationally infeasible by security of the one-way function, and so we conclude that an efficient adversary is unable to produce a valid proof for x*.
The challenge of adaptivity. The reduction strategy described above critically relies on knowing the false statement x* ∈ {0, 1}^n in advance. Indeed, the first step in the security reduction is to replace the PRF key with a key punctured at x*. Puncturing the PRF at x* enables us to program a random challenge (for the one-way function) at x*. This ensures that any successful adversary that comes up with a proof for x* must be able to invert the one-way function. This strategy breaks down if the statement x* is not known in advance (i.e., in the setting of adaptive soundness). Notably, it is not clear where to puncture the PRF key and embed the challenge for the one-way function.
Why not complexity leverage? One possible way to argue adaptive soundness is to complexity leverage: guess the statement x* and rely on sub-exponential hardness. Here, the security reduction would guess a random statement x* ← {0, 1}^n and then apply the previous reduction as if the adversary had committed to x*. The security reduction succeeds if the guess was correct, which occurs with probability 1/2^n. In turn, we rely on sub-exponential hardness of the underlying cryptographic primitives and assume that the advantage of any computationally-bounded adversary breaking each primitive is much smaller than 1/2^n. In particular, this means that the probability that an efficient adversary succeeds in inverting the one-way function must also be smaller than 1/2^n. But this means the length of the preimage of the one-way function must be at least n, which is the statement size (otherwise, the preimage can be guessed with probability better than 1/2^n). Since the proof is precisely the preimage, this means the proof size is now at least Ω(n). Thus, while the standard complexity-leveraging strategy suffices to prove adaptive soundness, the resulting proof system is no longer succinct.
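The counting behind this argument can be made explicit. Writing Adv_A for the adversary's adaptive-soundness advantage and ℓ for the preimage length of f (our notation), the guessing reduction gives:

```latex
\mathsf{Adv}^{\mathsf{OWF}}(\lambda) \;\ge\; 2^{-n}\cdot\mathsf{Adv}_{\mathcal{A}}(\lambda)
\quad\Longleftrightarrow\quad
\mathsf{Adv}_{\mathcal{A}}(\lambda) \;\le\; 2^{n}\cdot\mathsf{Adv}^{\mathsf{OWF}}(\lambda).
```

For the right-hand side to be small, we need Adv^OWF(λ) ≪ 2^{−n}; since guessing a random ℓ-bit preimage succeeds with probability 2^{−ℓ}, this forces ℓ ≥ n, and the proof (a preimage) has length Ω(n).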
Starting point: embedding a second challenge. To build an adaptively-sound SNARG, we will need a different proof technique (and construction). Our starting point is to modify the Sahai-Waters variant described above by having the GenInst program output two independent challenges (z_{x,0}, z_{x,1}) for each statement x ∈ {0, 1}^n. In the modified scheme, the verifier accepts if the prover solves either challenge: namely, a proof π = (b, y) is valid if b ∈ {0, 1} and f(y) = z_{x,b}. The critical property is that the Prove program will only output a solution to one of the challenges. In more detail, we consider three different (puncturable) PRFs: F_sel, F_0, and F_1. The selector PRF F_sel takes as input a statement x ∈ {0, 1}^n and outputs a selection bit b ∈ {0, 1}. The PRFs F_0 and F_1 take as input a statement x ∈ {0, 1}^n and output a value y ∈ Y in the domain of the one-way function. Before proceeding, we provide some brief intuition for why having two challenges is beneficial for arguing adaptive security. In the non-adaptive soundness analysis above, the adversary has to pre-commit to the statement x*, and the security reduction then embeds the one-way function challenge at x* in the GenInst program. When considering adaptive security, the security reduction does not know which statement the adversary will choose, so it is not clear where to embed the one-way function challenge (and guessing does not work for the reasons outlined above).
One possible approach is to change GenInst to output the one-way function challenge on every statement x ∈ {0, 1}^n in the security proof. Then, an adversary that outputs a proof for any false statement would successfully invert the one-way function. However, this leads to a correctness issue: we still need to be able to generate proofs for true statements (i.e., the Prove program needs to give out preimages for the challenges associated with true statements). This is no longer possible if GenInst outputs, on every input x ∈ {0, 1}^n, a one-way function challenge for which the Prove algorithm does not know a corresponding preimage. Ideally, we would only embed the challenge instances on inputs x ∉ L_C. However, GenInst cannot decide the language itself to determine whether it should output a challenge instance or one with a known preimage.
Our solution is to associate two challenges with every x. One of the challenges (as determined by F_sel(k_sel, x)) will be sampled with a known preimage while the other can be arbitrary (and will be used to embed a challenge instance in the reduction). This way, the Prove program still has the capability to produce a proof for every statement, while simultaneously ensuring that an adversary that succeeds on any instance breaks security of the one-way function. This is conceptually similar to other "two-challenge" approaches used to argue adaptive security for digital signatures [36], broadcast encryption [22], or registered attribute-based encryption [20], albeit in the random oracle model. In fact, as we discuss in the full version of this paper [44], our techniques can be used to "instantiate" the random oracle in the Katz-Wang signature scheme with an obfuscated PRF to obtain a provably-secure variant of their scheme in the plain model.
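The two-challenge mechanism can be sketched analogously to the earlier toy program (again with HMAC-SHA256 for the PRFs, SHA-256 for f, a hash-preimage relation for C, and no obfuscation; all names are illustrative):

```python
import hashlib, hmac

K_SEL, K0, K1 = b"k-sel", b"k-0", b"k-1"   # PRF keys hard-wired into the programs

def f(y: bytes) -> bytes:                  # stand-in one-way function
    return hashlib.sha256(b"owf" + y).digest()

def F(k: bytes, x: bytes) -> bytes:        # stand-in puncturable PRF
    return hmac.new(k, x, hashlib.sha256).digest()

def F_sel(x: bytes) -> int:                # selector bit b = F_sel(k_sel, x)
    return F(K_SEL, x)[0] & 1

def C(x: bytes, w: bytes) -> bool:         # toy NP relation for illustration
    return hashlib.sha256(w).digest() == x

def Prove(x: bytes, w: bytes):
    if not C(x, w):
        return None
    b = F_sel(x)
    return (b, F(K0 if b == 0 else K1, x))  # solve only the selected challenge

def GenInst(x: bytes):
    # Two independent challenges per statement; honest proofs solve exactly one.
    return (f(F(K0, x)), f(F(K1, x)))

def Verify(x: bytes, proof) -> bool:
    if proof is None:
        return False
    b, y = proof
    return b in (0, 1) and f(y) == GenInst(x)[b]
```

The verifier accepts a solution to either challenge, but an honest proof only ever reveals the preimage selected by F_sel; the other challenge remains free for the reduction to program.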
Proving adaptive security. Our proof of adaptive security proceeds via a sequence of hybrid experiments. Our reduction still relies on complexity leveraging (and specifically, an exponential number of hybrids), but the parameter blowup from complexity leveraging only factors into the CRS size, and not the proof size. We now survey our main hybrids and refer to the proof of Theorem 5.3 for the full details.
• Hyb_0: This corresponds to the real adaptive soundness game. In this game, the adversary is considered successful if it outputs a false statement x ∉ L_C along with a proof π = (b, y) where f(y) = z_{x,b} and (z_{x,0}, z_{x,1}) ← GenInst(x). Notably, there is no requirement on the value of b (i.e., the adversary wins if it can invert either z_{x,0} or z_{x,1}).
• Hyb_1: This is the same experiment as before, except we consider the adversary successful only if it outputs a false statement x ∉ L_C and a proof π = (b, y) where f(y) = z_{x,b} and b ≠ F_sel(k_sel, x).
We claim that this can only reduce the adversary's advantage in winning the game by a factor of 2. This is because for all x ∉ L_C, the value of F_sel(k_sel, x) is hidden from the adversary (i.e., it is never computed by the Prove algorithm). If the adversary's advantage decreased by more than a factor of 2 between Hyb_0 and Hyb_1, then the adversary is able to predict the value of F_sel(k_sel, x) with probability better than 1/2, thereby breaking security of the PRF. Formally, we argue this by considering 2^n possible experiments (one for each possible statement x ∈ {0, 1}^n the adversary could output). By relying on (sub-exponential) security of iO and the puncturable PRF, we can show that in each of these experiments, the adversary's success probability is always within a factor of 2 of its success probability in Hyb_0 (up to negligible differences). We refer to Lemma 5.4 for the full details. Note that even though we rely on complexity leveraging and sub-exponential hardness, we are only complexity leveraging on the iO scheme and the puncturable PRF, not on the security of the one-way function. This means the cost of complexity leveraging is only incurred in the CRS size (now larger by a poly(n) factor), but not in the proof size (whose length is governed by the preimage length for the one-way function).
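In symbols, the factor-of-2 step says that for any false statement x, the bit F_sel(k_sel, x) is (pseudorandomly) hidden from the adversary, so conditioning on b ≠ F_sel(k_sel, x) costs at most half the advantage (a sketch; the precise statement is Lemma 5.4):

```latex
\Pr[\mathsf{Hyb}_1 = 1]
  \;=\; \Pr\!\big[\mathsf{Hyb}_0 = 1 \,\wedge\, b \ne \mathsf{F}_{\mathsf{sel}}(k_{\mathsf{sel}}, x)\big]
  \;\ge\; \tfrac{1}{2}\,\Pr[\mathsf{Hyb}_0 = 1] \;-\; \mathsf{negl}(\lambda).
```

Any larger gap would yield a predictor for F_sel(k_sel, x) that succeeds with probability noticeably better than 1/2.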
At this point in the proof, we are at a conceptually-similar end-point as in the non-adaptive soundness proof of Sahai-Waters. In order to win, the adversary needs to invert the one-way function at a (pseudorandom) point: to give a proof for x, the adversary needs to invert z_{x,1−b} = f(F_{1−b}(k_{1−b}, x)) where b = F_sel(k_sel, x). All we need is a way to plant a one-way function challenge instance at each z_{x,1−b}.

Rerandomizable one-way functions. To complete the proof, we want to argue that any adversary that succeeds at inverting z_{x,1−b} for any (false) statement x ∈ {0, 1}^n translates into one that inverts a one-way function challenge z*. Moreover, the challenges z_{x,1−b} for different x should look indistinguishable from fresh challenges. Phrased differently, we require a way to derive fresh challenges z_{x,1−b} from z* such that a solution y where f(y) = z_{x,1−b} for any x implies a solution y* where f(y*) = z*. We refer to one-way functions with this property as rerandomizable one-way functions (see Section 4). Suppose we have a rerandomizable one-way function. Then, we define the final hybrid Hyb_2 as follows:
• Hyb_2: In this experiment, the challenger first samples y* ← Y and sets the challenge z* = f(y*). The challenger also samples a new (puncturable) PRF key k_rerand that will be used for rerandomizing the challenge z*. The CRS then consists of obfuscations of the programs in Fig. 1.
To argue that the distributions of Hyb_1 and Hyb_2 are computationally indistinguishable, we again use a sequence of 2^n different hybrids, one for each value of x ∈ {0, 1}^n. In the x-th hybrid, we change the distribution of z_{x,1−b} from being a freshly-sampled challenge (as in Hyb_1) to being a rerandomized challenge derived from z* (as in Hyb_2). Each of these intermediate transitions relies on security of iO, security of the underlying puncturable PRFs, and the rerandomizability of the one-way function. Since there are 2^n hybrids, we require sub-exponential hardness of each of the underlying primitives.
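Concretely, if ε_iO and ε_PPRF bound the per-hybrid losses from iO and the puncturable PRFs and δ bounds the statistical rerandomization distance, the 2^n transitions compose to (a sketch of the counting; the exact terms appear in Lemma 5.6):

```latex
\big|\Pr[\mathsf{Hyb}_1 = 1] - \Pr[\mathsf{Hyb}_2 = 1]\big|
  \;\le\; 2^{n}\cdot\big(\varepsilon_{i\mathcal{O}}(\lambda)
        + \varepsilon_{\mathsf{PPRF}}(\lambda) + \delta\big),
```

which remains negligible once each term is bounded by 2^{−n} · negl(λ). This is exactly why sub-exponential hardness is needed for iO and the PRFs, but not for the one-way function itself.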
An astute reader might observe that unlike the transition from Hyb_0 to Hyb_1, the transition from Hyb_1 to Hyb_2 does rely on security of the one-way function, specifically, the property that a rerandomized instance is indistinguishable from a fresh instance. Thus it might seem like we need to increase the parameters of the one-way function to achieve this. However, this need not be the case. By considering algebraic one-way functions (e.g., based on discrete log or factoring), it is possible to statistically rerandomize an instance so that the statistical distance between a fresh instance and a rerandomized instance is at most 2^{−Ω(λ)}, even though the length of the instances is poly(λ), where λ is the security parameter (independent of the statement length n). Importantly, this hybrid transition only relies on rerandomization and not security of the one-way function. This is the reason we are able to consider an exponential number of hybrids without affecting the proof length. Thus, once again, we are able to complexity leverage without incurring any overhead in the length of the proof. We provide the full details in Lemma 5.6.
In Hyb_2, a successful prover succeeds only if it inverts the one-way function on the value z_{x,1−b} for some x ∈ {0, 1}^n. But in Hyb_2, the value of z_{x,1−b} was derived by rerandomizing the instance z*. By the rerandomization property of the one-way function, a preimage of z_{x,1−b} can be used to recover a preimage of z*. In other words, inverting z_{x,1−b} for any x is sufficient to invert z*. By security of the one-way function, the advantage of the adversary in Hyb_2 must be negligible, and adaptive soundness holds. This is the only step in the security proof where we rely on security of the one-way function. As such, polynomial hardness of the one-way function suffices for the analysis. Since the proof is still just a preimage of the one-way function, the size of the proof is poly(λ), independent of the statement length (or the witness length). The overhead from the complexity leveraging only manifests in the CRS size and not the proof size. We provide the formal description and analysis in Section 5.
Constructing rerandomizable one-way functions. The remaining ingredient we need to complete the construction is a rerandomizable one-way function. We describe two constructions here based on the hardness of computing discrete logs and the hardness of factoring (specifically, the hardness of computing modular square roots). Both constructions rely on the random self-reducibility of the underlying assumption. We give the formal constructions and analysis in Section 6.

• Construction from discrete log: Let G be a group of prime order p generated by g. The discrete log assumption in G says that given h ← G, it is hard to find x ∈ Z_p such that g^x = h. In our construction, we sample the challenge h as h ← G \ {g^0}. This allows for perfect rerandomization: given any challenge h ∈ G \ {g^0}, the distribution of h^α where α ← Z_p^* is exactly the uniform distribution over the original challenge space G \ {g^0}. Moreover, given the discrete log of h^α (i.e., x ∈ Z_p^* where g^x = h^α), we can recover the discrete log of h by computing x · α^{−1} mod p. This yields a perfectly rerandomizable one-way function. We give the full details in Section 6.1.
• Construction from factoring: We obtain a second rerandomizable one-way function based on the hardness of computing square roots modulo N = pq, where p, q are distinct primes (i.e., given x^2 ∈ Z_N where x ← Z_N, find y ∈ Z_N such that y^2 = x^2 mod N). This problem is equivalent to the hardness of factoring [41]. This problem also has a random self-reduction: namely, given a challenge z ∈ Z_N, we can construct a new instance by sampling α ← Z_N and outputting α^2 · z mod N. Any solution y ∈ Z_N where y^2 = α^2 · z yields a solution y · α^{−1} mod N for the original challenge (provided that α is invertible modulo N). Some extra care is needed to ensure that the statistical distance between the rerandomized distribution and the original challenge distribution is at most 2^{−k} ≪ 2^{−n}. As we show in Section 6.2, this is possible by rejection sampling (since membership in Z_N^* is efficiently checkable).
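The factoring-based random self-reduction, including the rejection-sampling step for invertibility, can be sketched as follows (a toy modulus built from small, purely illustrative primes; a real instantiation uses cryptographic parameter sizes and keeps the factorization secret):

```python
import math, secrets

# Demo modulus N = p * q (NOT a secure size; primes chosen only for illustration).
N = 7919 * 7927

def gen_instance():
    # Instance z = u^2 mod N together with its solution u.
    u = secrets.randbelow(N)
    return (u * u) % N, u

def rerandomize(z):
    # Rejection-sample alpha in Z_N^*: membership is efficiently checkable via
    # gcd, and each draw fails only with tiny probability for an honest N.
    while True:
        alpha = secrets.randbelow(N)
        if math.gcd(alpha, N) == 1:
            break
    return (alpha * alpha * z) % N, alpha      # (new instance, state)

def verify(z, y):
    return (y * y) % N == z

def recover_solution(y_prime, alpha):
    # y'^2 = alpha^2 * z  =>  (y' * alpha^{-1})^2 = z (mod N).
    return (y_prime * pow(alpha, -1, N)) % N
```

The rejection sampling is what keeps the rerandomized distribution statistically close to a fresh instance while guaranteeing that α^{−1} mod N exists for the solution-recovery step.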

PRELIMINARIES
Throughout this work, we write λ to denote the security parameter. We write poly(λ) to denote a fixed polynomial in the security parameter λ. We say a function f(λ) is negligible in λ if f(λ) = o(λ^{−c}) for all c ∈ N, and we denote this by writing f(λ) = negl(λ). When x, y ∈ {0, 1}^n, we will view x and y both as bit-strings of length n and as the binary representations of integers between 0 and 2^n − 1. We write "x ≤ y" to refer to the comparison of the integer representations of x and y. We say an algorithm is efficient if it runs in probabilistic polynomial time in the length of its input.
Our construction will rely on sub-exponential hardness assumptions, so we will formulate some of our security definitions using (t, ε)-notation. Generally, we say that a primitive is (t, ε)-secure if for all adversaries A running in time at most t(λ) · poly(λ), there exists λ_A ∈ N such that for all λ ≥ λ_A, the adversary's advantage is bounded by ε(λ). We say a primitive is polynomially-secure if it is (1, negl(λ))-secure for some negligible function negl(·), and we say that it is sub-exponentially secure if it is (1, 2^{−λ^{1/c}})-secure for some constant c ∈ N. We now recall the main cryptographic primitives we use in this work.
Definition 3.1 (Indistinguishability Obfuscation [2]). An indistinguishability obfuscator for Boolean circuits is an efficient algorithm iO(·, ·, ·) with the following properties:
• Correctness: For all security parameters λ ∈ N, circuit size parameters s ∈ N, all Boolean circuits C of size at most s, and all inputs x, Pr[C′(x) = C(x) : C′ ← iO(1^λ, 1^s, C)] = 1.
• Security: For a bit b ∈ {0, 1} and a security parameter λ, we define the program indistinguishability game between an adversary A and a challenger as follows:
- On input the security parameter 1^λ, the adversary outputs a size parameter 1^s and two Boolean circuits C_0, C_1 of size at most s.
- If there exists an input x such that C_0(x) ≠ C_1(x), then the challenger halts with output ⊥. Otherwise, the challenger replies with iO(1^λ, 1^s, C_b).
- The adversary A outputs a bit b′ ∈ {0, 1}, which is the output of the experiment.
We say that iO is (t, ε)-secure if for all adversaries A running in time at most t(λ) · poly(λ), there exists λ_A ∈ N such that for all λ ≥ λ_A, we have |Pr[b′ = 1 : b = 0] − Pr[b′ = 1 : b = 1]| ≤ ε(λ) in the program indistinguishability game defined above.
Definition 3.2 (Puncturable PRF [10, 11, 37]). A puncturable pseudorandom function consists of a tuple of efficient algorithms Π_PPRF = (KeyGen, Eval, Puncture) with the following syntax:
• KeyGen(1^λ, 1^{ℓ_in}, 1^{ℓ_out}) → k: On input the security parameter λ, an input length ℓ_in, and an output length ℓ_out, the key-generation algorithm outputs a key k. We assume that the key k contains an implicit description of ℓ_in and ℓ_out.
• Eval(k, x) → y: On input a key k and an input x ∈ {0, 1}^{ℓ_in}, the evaluation algorithm outputs a value y ∈ {0, 1}^{ℓ_out}.
• Puncture(k, x*) → k^(x*): On input a key k and a point x* ∈ {0, 1}^{ℓ_in}, the puncture algorithm outputs a punctured key k^(x*). We assume the punctured key also contains an implicit description of ℓ_in and ℓ_out (the same as the key k).
- At the end of the game, the adversary outputs a bit b′ ∈ {0, 1}, which is the output of the experiment.
We say that Π_PPRF satisfies (t, ε)-punctured pseudorandomness if for all adversaries A running in time at most t(λ) · poly(λ), there exists λ_A ∈ N such that for all λ ≥ λ_A, the adversary's advantage is at most ε(λ) in the punctured pseudorandomness security game.
Theorem 3.3 (Puncturable PRFs [10, 11, 24, 37]). Assuming the existence of polynomially-secure (resp., sub-exponentially-secure) one-way functions, there exists a selectively-secure polynomially-secure (resp., sub-exponentially-secure) puncturable PRF.
Succinct non-interactive arguments. We now recall the definition of a succinct non-interactive argument for the language of Boolean circuit satisfiability. We start by defining this language:
Definition 3.4 (Boolean Circuit Satisfiability). We define the circuit satisfiability language L_SAT as L_SAT = {(C, x) : ∃w such that C(x, w) = 1}.
Definition 3.5 (Succinct Non-Interactive Argument). A succinct non-interactive argument (SNARG) in the preprocessing model for Boolean circuit satisfiability is a tuple Π_SNARG = (Setup, Prove, Verify) with the following syntax:
• Setup(1^λ, C) → crs: On input the security parameter λ and a Boolean circuit C, the setup algorithm outputs a common reference string crs.
• Prove(crs, x, w) → π: On input a common reference string crs, a statement x, and a witness w, the prove algorithm outputs a proof π.
Remark 3.7 (Fast Verification). In a preprocessing SNARG, the length of the common reference string crs can depend polynomially on the size of C (i.e., |crs| = poly(λ + |C|)). Correspondingly, this means the running time of Verify(crs, ·, ·) can be as large as poly(λ + |C|). We can compose the SNARG with a RAM delegation scheme (i.e., a SNARG for P) [17, 30, 35] to obtain a SNARG for NP where the verification time is poly(λ + |x| + log |C|). Instead of computing Verify itself, the verifier delegates the computation of Verify(crs, x, π) to the prover and verifies a proof that Verify(crs, x, π) = 1. We sketch the full construction below:
• Let M be an arbitrary RAM machine that takes two inputs x_1 and x_2 and outputs a bit b ∈ {0, 1}. The soundness requirement says that if dig is an honestly-generated digest of (M, x_1), then an efficient prover cannot produce (x_2, π, b) such that Verify_RAM(crs_RAM, dig, x_2, π, b) = 1 and M(x_1, x_2) ≠ b, except with negligible probability.
• To support fast verification for the SNARG, we define the new common reference string to be crs = (crs_SNARG, crs_RAM, dig), where crs_SNARG is a CRS for the underlying SNARG for NP, crs_RAM is the CRS for the RAM delegation scheme, and dig is a digest of (M, crs_SNARG), where M(crs_SNARG, (x, π)) is the RAM machine that computes the verification algorithm Verify_SNARG(crs_SNARG, x, π) for the underlying SNARG.
• A proof for a statement x consists of a SNARG proof π_SNARG together with a RAM delegation proof π_RAM that M(crs_SNARG, (x, π_SNARG)) := Verify_SNARG(crs_SNARG, x, π_SNARG) = 1.
Adaptive computational soundness follows from the fact that if Verify_RAM(crs_RAM, dig, (x, π_SNARG), π_RAM, 1) = 1, then with all but negligible probability, Verify_SNARG(crs_SNARG, x, π_SNARG) = 1, and soundness reduces to that of the underlying SNARG. Moreover, the size of π_RAM is poly(λ + log |C|), so the composed scheme remains succinct. In the composed scheme, the verification algorithm only needs crs_RAM and dig (but not crs_SNARG). Thus, we can define a separate verification key for the composed scheme vk = (crs_RAM, dig), which has size poly(λ + log |C|). The running time of the composed verification algorithm is then poly(λ + |x| + log |C|).

RERANDOMIZABLE ONE-WAY FUNCTIONS
In this section, we introduce the notion of a rerandomizable one-way function, which is one of the main building blocks in our construction. Then, in Section 6, we show that rerandomizable one-way functions can be based on standard number-theoretic assumptions.
Definition 4.1 (Rerandomizable One-Way Functions). A rerandomizable one-way function is a tuple of efficient algorithms Π_ROWF = (Setup, GenInstance, Rerandomize, Verify, RecoverSolution) with the following syntax:
• Setup(1^λ, 1^k) → crs: On input a security parameter λ and a rerandomization parameter k, the setup algorithm outputs a common reference string crs.
• GenInstance(crs) → (z, y): On input the common reference string crs, the instance-generator algorithm outputs an instance z together with a solution y.
• Rerandomize(crs, z) → (z′, st): On input the common reference string crs and an instance z, the rerandomize algorithm outputs a new instance z′ and a rerandomization state st.
• Verify(crs, z, y) → b: On input the common reference string crs, an instance z, and a solution y, the verification algorithm outputs a bit b ∈ {0, 1}.
• RecoverSolution(crs, y′, st) → y: On input the common reference string crs, a solution y′, and a rerandomization state st, the solution-recovery algorithm outputs a solution y.
- Algorithm A outputs a bit b′ ∈ {0, 1}, which is the output of the experiment.
We say that the rerandomizable one-way function satisfies (t, ε)-rerandomizable security if for all polynomials k = k(λ) and all adversaries A running in time t(λ) · poly(λ), there exists λ_{A,k} ∈ N such that for all λ ≥ λ_{A,k}, the adversary's distinguishing advantage in the rerandomization security game is at most ε(λ). We refer to this quantity as the rerandomization advantage RerandAdv_{A,k}(λ). In particular, the distinguishing advantage is a function of the rerandomization parameter k. We say that Π_ROWF satisfies δ-statistical rerandomizable security if for all polynomials k = k(λ), all (possibly unbounded) adversaries A, and all λ ∈ N, RerandAdv_{A,k}(λ) ≤ δ(k(λ)).
We say Π_ROWF satisfies perfect rerandomizable security if it satisfies δ-statistical rerandomizable security for δ = 0.
• Succinctness: There exists a polynomial p such that for all λ, k ∈ N, all crs in the support of Setup(1^λ, 1^k), and all (z, y) in the support of GenInstance(crs), it holds that |y| ≤ p(λ + log k).
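As a concrete illustration of the five-algorithm interface, the discrete-log construction of Section 6.1 might be packaged as follows (a demo-sized group that is not secure; the class and method names are ours):

```python
import secrets

class DiscreteLogROWF:
    """Discrete-log-based rerandomizable OWF, following the five-algorithm
    interface of Definition 4.1.  Demo parameters (NOT secure): p = 2q + 1
    with g generating the order-q subgroup of Z_p^*."""

    def setup(self, sec_param=None, rerand_param=None):
        # crs = group description; the parameters are unused in this toy version.
        return {"p": 23, "q": 11, "g": 2}

    def gen_instance(self, crs):
        # Instance z = g^y with z != identity, together with its solution y.
        y = 1 + secrets.randbelow(crs["q"] - 1)        # y in Z_q^*
        return pow(crs["g"], y, crs["p"]), y

    def rerandomize(self, crs, z):
        # z^alpha is uniform over the non-identity subgroup elements, so
        # rerandomization is perfect for this scheme.
        alpha = 1 + secrets.randbelow(crs["q"] - 1)
        return pow(z, alpha, crs["p"]), alpha          # (new instance, state)

    def verify(self, crs, z, y):
        return pow(crs["g"], y, crs["p"]) == z

    def recover_solution(self, crs, y_prime, st):
        # g^{y'} = z^{st}  =>  the discrete log of z is y' * st^{-1} mod q.
        return (y_prime * pow(st, -1, crs["q"])) % crs["q"]
```

A solution to the rerandomized instance maps back to a solution of the original instance via `recover_solution`, exactly the property the security proof in Section 5 exploits.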

CONSTRUCTING ADAPTIVELY-SOUND SNARGS FOR NP
In this section, we show how to construct an adaptively-sound SNARG from indistinguishability obfuscation together with a rerandomizable one-way function.
Proof. Let A be an efficient adversary for the adaptive soundness game for Construction 5.1 that succeeds with (non-negligible) advantage ε = ε(λ). We first claim that, without loss of generality, we can assume that for every security parameter λ, algorithm A always outputs a Boolean circuit C with statements of a fixed length n = n(λ). To argue this formally, we first use the fact that A is a polynomial-time algorithm, so on input the security parameter 1^λ, algorithm A outputs a Boolean circuit of size at most s_max(λ), where s_max(λ) = poly(λ). This in turn defines a maximum statement length n_max(λ) ≤ s_max(λ). In an execution of the adaptive soundness game, let E_i be the event that algorithm A outputs a Boolean circuit with statements of length i. Then, the probability that A wins the soundness game is the sum over i ∈ [n_max(λ)] of Pr[A wins the soundness game ∧ E_i].
If A wins the soundness game with advantage ε(λ), then it must be the case that there exists some index i* ∈ [n_max(λ)] such that

Pr[A wins the soundness game ∧ E_{i*}] ≥ ε(λ)/n_max(λ). (5.1)

For each security parameter λ, define f(λ) := i* to be the smallest index i* for which Eq. (5.1) holds. We can now construct a new (non-uniform) adversary A′ that functions as a wrapper around A. Namely, algorithm A′ takes as input the security parameter 1^λ and the non-uniform advice f(λ). Algorithm A′ runs A on the same security parameter 1^λ. If A outputs a Boolean circuit whose statement length is not f(λ), then algorithm A′ aborts. Otherwise, algorithm A′ simply follows the behavior of A (and outputs whatever A outputs). By construction, the probability that A′ wins the soundness game is the probability that A wins the soundness game and E_{f(λ)} occurs, which is at least ε(λ)/n_max(λ). The advantage of A′ is thus only polynomially smaller than that of A, and moreover, algorithm A′ always outputs a Boolean circuit with fixed statement size f(λ). Thus, if there exists an adaptive soundness adversary A that succeeds with non-negligible probability, then we can construct from A an efficient (non-uniform) adversary A′ that also succeeds with non-negligible probability. For the remainder of this proof, we will thus assume that the adaptive soundness adversary always outputs a circuit for statements of length exactly n = n(λ). We now define a sequence of hybrid experiments:
• Hyb_0: This is the real adaptive soundness experiment. Namely, the adversary starts by outputting a circuit C : {0, 1}^n × {0, 1}^h → {0, 1}. The challenger then constructs the CRS as follows:
- where GenProof and GenInst are the programs from ?? 1 and ?? 2, and s is the same size parameter from Construction 5.1. The challenger gives crs = (crs_ROWF, ObfProve, ObfVerify) to A. Algorithm A then outputs a statement x and a proof π = (b, y). The challenger computes (x_0, x_1) = ObfVerify(x), and the output is 1 if (C, x) ∉ L_SAT and R.Verify(crs_ROWF, x_b, y) = 1.
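The averaging step behind Eq. (5.1) is a pigeonhole argument over the statement lengths; a small numeric sketch (with made-up per-length win probabilities) illustrates why a suitable index i* always exists:

```python
# Toy illustration of the averaging argument: if A wins with total
# probability eps, then some statement length i* captures at least
# eps / n_max of that probability mass.  The per-length probabilities
# below are invented purely for illustration.
n_max = 8
# Pr[A wins AND E_i] for each statement length i in [n_max]
win_and_E = [0.001, 0.0, 0.02, 0.15, 0.003, 0.0, 0.01, 0.002]
eps = sum(win_and_E)               # total winning probability of A

# f(lambda) := smallest index i* satisfying Eq. (5.1); pigeonhole
# guarantees at least one index meets the threshold.
i_star = next(i for i, p in enumerate(win_and_E) if p >= eps / n_max)

# The wrapper A' aborts unless the statement length equals i*, so its
# winning probability is exactly win_and_E[i_star] -- only a
# polynomial factor (n_max) smaller than eps.
assert win_and_E[i_star] >= eps / n_max
```

The non-uniformity of A′ comes only from hard-wiring i* as advice; everything else is a faithful simulation of A.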
- Let ρ be the number of bits of randomness that the algorithm R.Rerandomize(crs_ROWF, x) takes. Sample a fresh PRF key

Rerandomizable One-Way Function from Discrete Log
In this section, we show how to construct a rerandomizable one-way function from discrete log. We begin by recalling the discrete log assumption in prime-order groups.
Notation. For a positive integer p > 1, we write Z_p to denote the set of integers {0, . . . , p − 1}. We write Z_p^* to denote the multiplicative group of integers modulo p.
Definition 6.1 (Prime-Order Group Generator). Let λ be a security parameter. A prime-order group generator is an efficient algorithm GroupGen that takes as input a security parameter 1^λ and outputs the description G = (G, p, g) of a group G of prime order p = 2^{Θ(λ)} generated by g ∈ G. Moreover, we require that the group operation in G be efficiently computable.
Proof. Suppose there exists an efficient adversary A where OWFAdv_A(λ) > ε(λ) for some non-negligible ε. We use A to construct an adversary B for the discrete log problem:
(1) At the beginning of the game, algorithm B receives the security parameter 1^λ, the group description G = (G, p, g), and the challenge h ∈ G. If h = g^0, then algorithm B outputs 0.
Recall that the discrete log challenger samples r ← Z_p and sets h = g^r. If h = g^0, then algorithm B solves the discrete log problem. If r ≠ 0, then r is uniformly distributed over Z_p^*, so algorithm B perfectly simulates the one-wayness game for A. In this case, with probability at least ε, algorithm A outputs y ∈ Z_p^* such that Verify(crs, h, y) = 1, or equivalently, such that h = g^y. But in this case, algorithm B also solves the discrete log problem. We conclude that algorithm B succeeds in solving the discrete log problem with the same non-negligible advantage ε. □
Theorem 6.7 (Rerandomization Security). Construction 6.3 satisfies perfect rerandomizable security. Namely, for all polynomials k = k(λ) and all adversaries A, RerandAdv_{A,k}(λ) = 0.
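Construction 6.3 itself is not reproduced in this excerpt. The following toy sketch shows the natural discrete-log instantiation it builds on (instance h = g^y, rerandomization by exponentiation); the parameters and helper names are ours, and the group is far too small to be one-way:

```python
import secrets

# Toy prime-order group: the order-11 subgroup of Z_23^* generated by g = 2.
P, Q, G = 23, 11, 2          # modulus, group order, generator

def gen_instance():
    """GenInstance: sample y <- Z_q^* and output instance h = g^y with solution y."""
    y = 1 + secrets.randbelow(Q - 1)
    return pow(G, y, P), y

def verify(h, y):
    """Verify: accept iff g^y = h."""
    return pow(G, y, P) == h

def rerandomize(h):
    """Rerandomize: sample r <- Z_q^* and output (h' = h^r, st = r)."""
    r = 1 + secrets.randbelow(Q - 1)
    return pow(h, r, P), r

def recover_solution(y_prime, r):
    """RecoverSolution: a solution y' for h' = h^r yields y = y' * r^{-1} mod q."""
    return (y_prime * pow(r, -1, Q)) % Q

# Correctness: a solution for the rerandomized instance maps back to one
# for the original instance.
h, y = gen_instance()
h2, st = rerandomize(h)
y2 = (y * st) % Q            # discrete log of h2 (known here because we hold y)
assert verify(h2, y2)
assert verify(h, recover_solution(y2, st))
```

Perfect rerandomizability corresponds to the fact that, for any fixed instance h with nonzero discrete log, the map r ↦ h^r for r ∈ Z_q^* is a bijection onto the non-identity subgroup elements, so h′ is distributed exactly like a fresh instance.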

Rerandomizable One-Way Functions from Computing Modular Square Roots
In the full version of this paper [44], we also show how to construct a rerandomizable one-way function from the hardness of computing modular square roots (and in particular, the hardness of factoring).
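The square-root construction appears only in the full version; as a rough illustration of why squaring modulo a composite is rerandomization-friendly, here is a toy Rabin-style sketch. This is our own simplification with a tiny modulus and invented helper names, not the paper's actual construction:

```python
import math
import secrets

# Toy modulus N = p * q (far too small for factoring to be hard).
N = 7 * 11

def gen_instance():
    """Instance x = y^2 mod N for a random y coprime to N."""
    while True:
        y = secrets.randbelow(N)
        if math.gcd(y, N) == 1:
            return (y * y) % N, y

def verify(x, y):
    """Accept iff y is a square root of x modulo N."""
    return (y * y) % N == x

def rerandomize(x):
    """x' = x * s^2 mod N for random invertible s; st = s."""
    while True:
        s = secrets.randbelow(N)
        if math.gcd(s, N) == 1:
            return (x * s * s) % N, s

def recover_solution(y_prime, s):
    """If y'^2 = x * s^2 mod N, then (y' * s^{-1})^2 = x mod N."""
    return (y_prime * pow(s, -1, N)) % N

x, y = gen_instance()
x2, s = rerandomize(x)
y2 = (y * s) % N             # one square root of x2 (known since we hold y)
assert verify(x2, y2)
assert verify(x, recover_solution(y2, s))
```

Note that RecoverSolution returns some square root of x, not necessarily the original y; that suffices for Verify, and the hardness of producing a root one did not already know is what connects this to factoring.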

A RAM delegation scheme allows a prover to convince the verifier that P(x) = y with a proof of size |π| = poly(λ + log |P| + log |x| + log |y| + log T), where T is the running time of P. The verification algorithm Verify_RAM takes as input a common reference string crs_RAM for the delegation scheme, a hash digest dig of (P, x), the value T, the proof π, and the claimed value y, and either accepts (with output 1) or rejects (with output 0). The length of the verification key and the length of the proof satisfy |vk|, |π| = poly(λ + log |P| + log |x| + log |y| + log T).

(2) Algorithm B runs A on the security parameter 1^λ. Algorithm A outputs the rerandomization parameter 1^k, and algorithm B replies with crs = (G, p, g) and the instance h ∈ G.
(3) After algorithm A outputs a solution y, algorithm B also outputs y.
By construction, the discrete log challenger samples crs = (G, p, g) ← Setup(1^λ, 1^k), r ← Z_p, and sets h = g^r.