Modular Sumcheck Proofs with Applications to Machine Learning and Image Processing

Cryptographic proof systems provide integrity, fairness, and privacy in applications that outsource data processing tasks. However, general-purpose proof systems do not scale well to large inputs. At the same time, ad-hoc solutions for concrete applications - e.g., machine learning or image processing - are more efficient but lack modularity, hence they are hard to extend or to compose with other tools of a data-processing pipeline. In this paper, we combine the performance of tailored solutions with the versatility of general-purpose proof systems. We do so by introducing a modular framework for verifiable computation of sequential operations. The main tool of our framework is a new information-theoretic primitive called Verifiable Evaluation Scheme on Fingerprinted Data (VE) that captures the properties of diverse sumcheck-based interactive proofs, including the well-established GKR protocol. Thus, we show how to compose VEs for specific functions to obtain verifiability of a data-processing pipeline. We propose a novel VE for convolution operations that can handle multiple input-output channels and batching, and we use it in our framework to build proofs for (convolutional) neural networks and image processing. We realize a prototype implementation of our proof systems, and show that we achieve up to 5x faster proving time and 10x shorter proofs compared to the state-of-the-art, in addition to asymptotic improvements.


INTRODUCTION
Cryptographic proof systems can be used in distributed data-processing applications to provide both security and privacy guarantees. This is especially relevant when clients outsource the data processing task to a potentially untrusted server that (i) has enough resources to carry out the computation and optionally (ii) may hold additional data that is required to complete the task but that cannot be shared with clients.
As an example, consider the scenario where a bank owns a machine learning model f that decides creditworthiness y = f(x, W), given some customer data x and model parameters W. A proof system for this scenario should provide publicly verifiable (hence auditable) proofs with strong guarantees for:
• Integrity: the prediction is indeed generated by the model, given solely the data provided by the customer and the model parameters. Integrity also guarantees that no bias or unauthorized data, such as gender or race, were used in the computation. This is relevant as the bank (or similar stakeholders) must abide by legal directives that forbid discrimination when providing goods or services [13, 14].
• Fairness: if the model is certified by a third-party auditor, customers may obtain guarantees of fair treatment, i.e., the decision process has been the same across all customers. We note that Supreme Audit Institutions have recently defined best practices to audit ML models, and certified ML may soon be available in real-world applications [17, 38].
• Privacy: if the model parameters W are proprietary, the bank may publish a (certified) commitment to W while proving that y = f(x, W) in zero knowledge [21]; this allows the customer to verify that the computation was carried out correctly, while W is kept private and nothing is leaked other than what can be inferred from the prediction itself.
Despite the rapid progress in the last decade, general-purpose cryptographic proof systems do not scale well to large inputs. The main bottleneck appears at the prover side, both in running time and in memory usage. Among the many families of cryptographic proof systems in the literature, sumcheck-based proof systems [4, 22, 35, 43, 44] achieve the best prover performance (linear in the circuit size). Nevertheless, modeling computation as a circuit introduces high overheads that make even these systems impractical when executed on computations that process large amounts of data.
Dedicated proof systems trade off generality for performance. In particular, they avoid the general circuit encodings of their general-purpose counterparts and achieve better performance, albeit for restricted classes of functions. For example, previous work has shown how to exploit the sequentiality and low multiplicative depth of some classes of functions to achieve low overhead both for provers and verifiers. For instance, vCNN [28] and zkCNN [29] enable verifiable ML applications by exploiting the sequential composition of ML functions, where data is processed one layer (i.e., function) at a time and the output of the current layer is fed as input to the next one. The same principle is used by PhotoProof [32] and ZK-IMG [26], which exploit the sequential composition of image processing tasks and provide proof systems tailored to verifiable image processing. Dedicated protocols as described above, however, come at the price of poor composability and leave little room for modification and improvement.

Contributions
In this work, we aim at solutions combining the best of both worlds: the efficiency of dedicated protocols and the versatility of general-purpose schemes. With this goal in mind, we introduce a new framework for the modular design of sumcheck-based proof systems, and we use it to develop new efficient protocols for verifiable machine learning and image processing. More specifically, our contributions are the following.

A modular framework for sumcheck-based proofs. We develop our framework by identifying and abstracting away the key properties of a variety of proof systems based on the sumcheck protocol, including the well-established GKR protocol [20]. Briefly speaking, these protocols proceed in a layer-by-layer fashion so that at each layer the prover starts by making a "promise" about the output, and later the verifier ends with a "promise" about the input. Their security guarantee is that if the input's "promise" is correct then the initial output's "promise" must be correct too.
We define our framework abstracting these protocols as follows:
• We introduce the notions of fingerprinting scheme and verifiable evaluation scheme on fingerprinted data (VE). Fingerprinting schemes characterize the aforementioned notion of "promise" and are essentially a mechanism that allows prover and verifier to succinctly represent vectors of inputs/outputs. VEs are interactive protocols in which the verifier works by only knowing fingerprints of inputs and outputs (and thus can run in time sublinear in the input/output size).
• We show a generic composition theorem: given two VEs for functions f_1 and f_2 and compatible fingerprints, one can build a VE for their (partial or total) composition f(x, x') = f_2(f_1(x), x').
• We show that a VE can be lifted to an interactive proof if the verifier computes the fingerprints of the inputs and outputs of the computation (but not of intermediate steps). We also show that a VE can be compiled into a succinct argument by using commit-and-prove arguments for the evaluation of fingerprints (instantiatable with polynomial commitments [27]).
• We instantiate our fingerprints as evaluations of multilinear polynomials, and then we show how to capture a large class of existing protocols, such as the multilinear sumcheck protocol of [43], GKR, and the efficient matrix multiplication from [37], under our framework.

By combining these results, we obtain a way to easily design sumcheck-based proof systems in a modular way. Following the principle of modularity, one needs only to focus on designing VE schemes for specific functionalities, a task that likely results in more lightweight solutions (as we confirm below). In particular, we may take advantage of many years of research in the field, as our modular design allows us to nicely integrate previous tools and gadgets. Furthermore, the practicality of modular VEs is not only evident at design time, but also at implementation time, since the code can be designed in blocks, in a "Lego" manner.
Applications to verifiable machine learning and image processing. We apply our approach to construct efficient proofs of computation for (convolutional) neural networks and image processing. Both processes have a layered structure that is amenable to our modular framework. Therefore, we build a VE protocol for the full computation by composing several "gadget" VEs, one for each layer (including existing and new VEs that we develop; see below), and then we use a multilinear polynomial commitment to compile it into an argument of knowledge. Following the modularity principle, we then focus on designing efficient VEs for the main subroutines needed by these applications.
In this application context, our main contribution is a new VE scheme for convolution operations which is amenable to multiple input-output channels and also to prediction batching. Convolution is a challenging operation in proof systems, as it is represented by arithmetic circuits with complex wiring (and up to O(n^3) size for convolutions over an n × n matrix), which is expensive for general-purpose solutions. The most efficient dedicated protocol in the literature appears in zkCNN [29], which proposes a fast proving technique based on the Fast Fourier Transform (FFT), achieving asymptotically optimal O(n^2) proving time. Nevertheless, their approach requires proving an FFT, a Hadamard product, and an inverse FFT, which increases concrete proof size and prover time. Moreover, in their case the convolution kernel, which is often small in applications, needs to be padded to the input size.
We overcome these limitations by designing a compact matrix encoding of the convolution operation to which we apply the efficient matrix multiplication prover of [37]. Crucially, we optimize our technique to efficiently support multiple channels (both input and output), which is when our solution improves even more over zkCNN's approach. Notably, our convolution VE achieves proof size and verifier time that are independent of the input size and the number of output channels.
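To illustrate the general idea of reducing convolution to matrix multiplication, the following sketch uses the standard "im2col" rewriting (our compact encoding differs in its details): each kernel-sized patch of the input becomes a row of a matrix, so the whole (ML-style, unflipped) convolution is a single matrix-vector product.

```python
# Standard "im2col" reduction of 2-D convolution to matrix multiplication
# (a generic illustration, not the paper's exact compact encoding).
def im2col_conv(img, ker):
    n, k = len(img), len(ker)
    m = n - k + 1                      # output side length (valid convolution)
    # rows of A are the k*k patches of img; w flattens the kernel
    A = [[img[i + di][j + dj] for di in range(k) for dj in range(k)]
         for i in range(m) for j in range(m)]
    w = [ker[di][dj] for di in range(k) for dj in range(k)]
    out = [sum(a * b for a, b in zip(row, w)) for row in A]  # A @ w
    return [out[i * m:(i + 1) * m] for i in range(m)]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ker = [[1, 0], [0, 1]]                 # picks up each patch's main diagonal
assert im2col_conv(img, ker) == [[6, 8], [12, 14]]
```

With multiple input channels, patch rows are concatenated across channels, and each output channel contributes one more kernel column, which keeps the computation a single matrix product.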
We obtain further improvements by designing VE gadgets that extend techniques originally proposed in the context of the GKR protocol for arithmetic circuits. Notably, we propose a VE for "many-to-one reductions" of input fingerprints that extends the GKR-specific technique of [49], and we generalize the blueprint of Hyrax [42] in order to efficiently batch the executions of the same VE on different inputs, e.g., y_i = f(x_i) for i = 1 to N.
Finally, we leverage our framework to construct the first dedicated proof system for recurrent neural networks.
Implementation and evaluation. We implement and benchmark our efficient convolution prover in Rust and confirm the concrete improvements (in overall efficiency and proof size) over the state-of-the-art [29] for common sets of parameters. Even for a single-channel convolution, our VE improves over previous solutions by a factor of 5-10× in proof size, and by a similar factor in prover time for small kernel sizes.

Additional Related Work
Sumcheck-based proofs. The seminal paper of Goldwasser, Kalai and Rothblum [20] showed how to use the sumcheck protocol [31] to construct a doubly-efficient interactive proof (known as GKR in the literature) for layered arithmetic circuits. Several papers improved the proving time of GKR either in general [10], for circuits with specific structure [37, 41, 52], or through variants of the original protocol [43, 49]. Thaler was the first to show sumcheck-based protocols for specialized computations, such as matrix multiplication, with optimal prover time [37]. Another line of work, started by Zhang et al. [51], showed how to use GKR in combination with polynomial commitments to build (zero-knowledge) argument systems [35, 42, 43, 50]. Arguments based on this approach are among the most efficient ones in terms of proving time, as most of their computational effort relates to an information-theoretically secure protocol involving only finite field operations. Recent works show how to combine the sumcheck protocol with multilinear polynomial commitments to build succinct non-interactive arguments [4, 22, 44].
Our modular framework is close in spirit to that of Campanelli, Fiore and Querol [3], who build zk-SNARKs modularly via the efficient composition of specialized commit-and-prove SNARKs. Our techniques work at the information-theoretic level and are based on fingerprints and VE schemes, as opposed to commitments and SNARKs, allowing for a less demanding security notion than computational binding.
Verifiable machine learning. The closest work to this contribution is zkCNN [29], which shows how to exploit the sequential nature of neural networks to build an argument system for their verifiability. Compared to zkCNN, our work improves prover time by showing a faster protocol for convolutions, and proposes a general framework that makes it easier to reuse, implement, and improve the components of these protocols. vCNN [28] and ZEN [18] also tackle the problem of zero-knowledge neural network predictions. vCNN combines different commit-and-prove SNARKs to efficiently prove the CNN layers; notably, it uses quadratic polynomial programs for convolution layers and quadratic arithmetic programs for ReLU and pooling layers. ZEN presents a quantisation mechanism (based on [25]) for R1CS-based proof systems that requires significantly fewer constraints and hence achieves a faster proving time and smaller public parameters. Although we do not directly compare to vCNN and ZEN, we observe that [29] shows that zkCNN is orders of magnitude faster than vCNN and ZEN, and thus we achieve the same improvements. Another related work on zero-knowledge proofs for ML-based predictions is that of Zhang et al. [48], whose techniques are however specialized to decision trees.
Verifiable Image Processing.Besides solutions based on general-purpose zkSNARKs, there are a few works that build specialized proof systems for image processing transformations, notably PhotoProof [32], ZK-IMG [26], and VILS [5].
PhotoProof [32] presents an image authentication framework where images are output by "secure" cameras (i.e., cameras capable of signing images) and Proof-Carrying Data [9] is used to define a set of admissible transformations. The PhotoProof prototype is based on libsnark [34], and experiments show that proving one transformation of a 128×128 image takes more than 300 seconds and requires a public key of a few GBs. ZK-IMG [26] improves over PhotoProof by using halo2 [47] as the underlying ZK-SNARK system and by showing how to chain proofs of sequential transformations without revealing the intermediate outputs, a feature that may be desirable in scenarios where the input image is private. The performance reported in [26] shows that convolution operations can take more than 80 seconds to generate a proof for images of 1280 × 720 pixels. Finally, VILS [5] takes an alternative approach to authenticated image editing by computing all possible image transformations at the source (i.e., by the secure camera) and accumulating them in a cryptographic accumulator.
Our techniques allow us to obtain a 20× smaller proof size than [26] (albeit not taking into account the opening size of a polynomial commitment, since these are scheme-dependent) and faster prover and verifier times even while running on less powerful hardware.

PRELIMINARIES

Notation
The definitions, games, and constructions that we introduce in our work use standard notation. Algorithms, oracle names, and cryptographic parameters are denoted in sans-serif font. To assign the output of an algorithm Alg on input x to a variable y, we write y ← Alg(x). To remark that an algorithm is randomized, we write y ←$ Alg(x). An algorithm can input or return blank values, represented by ⊥. The security parameter is denoted by λ, and its unary representation by 1^λ. In interactive algorithms, we underline steps that involve interaction, such as Send or Get.

Cryptographic Primitives
We define informally the main cryptographic primitives used in our constructions, namely commitments and (commit-and-prove) arguments of knowledge, and refer to Appendix A for more formal definitions.
Commitment schemes allow one to commit to a value (e.g., a scalar, a vector, a polynomial) in a way that is binding and hiding. Binding informally means that a commitment cannot be opened to two distinct values, while hiding guarantees that the commitment reveals no information about the underlying value. In our work, we denote a commitment scheme Com by a tuple of algorithms (Setup, Com, Vf) such that: Setup(1^λ) generates the commitment key ck; Com(ck, x) outputs a commitment com and an opening o for input value x; Vf(ck, com, x, o) returns a bit b indicating whether o is a valid opening of commitment com to x.
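As a toy illustration of this interface, the sketch below instantiates (Setup, Com, Vf) with a salted hash (an expository choice on our part, not a scheme used in this paper): binding comes from collision resistance of the hash, hiding from the random opening.

```python
import hashlib
import secrets

# Minimal salted-hash commitment sketch; the names setup/com/vf mirror the
# Setup/Com/Vf interface (the SHA-256 instantiation is illustrative only).
def setup():
    return "sha256"  # the "commitment key" here is just the choice of hash

def com(ck, value: bytes):
    opening = secrets.token_bytes(16)            # random opening, for hiding
    commitment = hashlib.sha256(opening + value).digest()
    return commitment, opening

def vf(ck, commitment, value: bytes, opening: bytes) -> int:
    return int(hashlib.sha256(opening + value).digest() == commitment)

ck = setup()
c, o = com(ck, b"42")
assert vf(ck, c, b"42", o) == 1   # correct opening accepted
assert vf(ck, c, b"43", o) == 0   # any other value rejected
```

The polynomial commitments used later expose the same three algorithms but additionally support proving evaluations of the committed polynomial.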
An argument of knowledge AoK for an NP relation R is a tuple of algorithms (Setup, Prove, Vf) such that: Setup(1^λ, R) outputs a common reference string crs; Prove(crs, x, w) → π returns a proof π for (x, w) ∈ R; Vf(crs, x, π) accepts or rejects π. An AoK should be complete and knowledge-sound. The former informally means that honestly generated proofs are accepted by Vf. The latter informally guarantees that any prover producing a valid proof for a statement x must know a valid witness w for it. An AoK is said to be succinct if the total communication between prover and verifier is polylogarithmic in the witness size. An AoK satisfies zero-knowledge if proofs leak no information about the witness beyond the truth of the statement (this is modeled through a simulator that can generate valid proofs for valid statements without knowing the witness).
In our work we use the notion of commit-and-prove AoKs for a relation R and commitment scheme Com, that is, an AoK for the NP relation R_Com such that ((x, com); (u, w, o)) ∈ R_Com iff (x, (u, w)) ∈ R and Com.Vf(ck, com, w, o) = 1.

Proof Systems
We include standard background and definitions on proof systems. In the sequel, let F be a finite field and ℓ a natural number.
For n ∈ N, a vector v ∈ F^n, and ℓ = ⌈log n⌉, there exists a (unique) indexing function f_v : {0, 1}^ℓ → F given by f_v(i_1, ..., i_ℓ) = v_i, where (i_1, ..., i_ℓ) is the binary representation of i. Then, we define the MLE of v, which we denote by ṽ : F^ℓ → F, as the MLE of the indexing function f_v.
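Concretely, the MLE can be evaluated directly from the table of v via ṽ(r) = Σ_i v_i · Π_j (i_j r_j + (1 − i_j)(1 − r_j)), since the product is the multilinear indicator of index i. A minimal sketch over a toy prime field (the field and vector are illustrative choices):

```python
# Evaluating the multilinear extension (MLE) of a vector over F_p.
P = 2**61 - 1  # a Mersenne prime, chosen only for illustration

def mle_eval(v, r):
    """Evaluate the MLE of v (length 2**ell) at point r in F_p**ell;
    coordinate r[j] corresponds to bit j of the index."""
    ell = len(r)
    assert len(v) == 2**ell
    acc = 0
    for i, vi in enumerate(v):
        # eq(i, r) = prod_j (i_j*r_j + (1-i_j)*(1-r_j)), with i_j = bit j of i
        eq = 1
        for j in range(ell):
            bit = (i >> j) & 1
            eq = eq * (r[j] if bit else (1 - r[j])) % P
        acc = (acc + vi * eq) % P
    return acc

v = [3, 1, 4, 1]                 # the table of f_v : {0,1}^2 -> F_p
assert mle_eval(v, [0, 0]) == 3  # the MLE agrees with v on the hypercube
assert mle_eval(v, [1, 1]) == 1
```

The quadratic cost of this direct evaluation is fine for a fingerprint check on small data; the sumcheck machinery below is what keeps the prover linear-time on large inputs.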
Definition 2.3 (Interactive Proof). Let F be a family of functions, and let L_F = {(f, x, y) : f ∈ F ∧ f(x) = y} be the corresponding language. An interactive proof is a pair of algorithms b ← ⟨P, V⟩(f, x, y) such that the following properties hold:
Completeness: For any (f, x, y) ∈ L_F, Pr[⟨P, V⟩(f, x, y) = 1] = 1.
Soundness: For any algorithm P* and (f, x, y) ∉ L_F, Pr[⟨P*, V⟩(f, x, y) = 1] ≤ negl(λ).
The probabilities are over the random coins of the verifier.

COMPOSITION FRAMEWORK FOR INTERACTIVE PROOFS
Our goal in this section is to introduce a framework for building interactive proofs from the composition of function-specific protocols. Our framework consists of three main components: (1) fingerprinting schemes, that are a mechanism with which prover and verifier can succinctly represent inputs and outputs of the computation; (2) verifiable evaluation schemes on fingerprinted data (VE), that are the function-specific protocols in which the verifier works by only knowing fingerprints of inputs and outputs; (3) a composition theorem which shows how to compose VEs, in such a way that the verifier only needs to compute fingerprints for the main input and output of the computation, but not for the intermediate inputs of the sequential steps.
In this section, we define the syntax and the security properties of these objects, state and prove the composition theorem, and finally show how to compile a VE scheme into a succinct argument.

Definition 3.1 (Fingerprint). Let X be a data space, D_X a distribution over a randomness space R_X, and C a finite set. A randomized fingerprint (with fingerprint space C) is a function H : X × R_X → C. Given x ∈ X, r ∈ R_X, we call c_x ← H(x, r) the fingerprint of x on r. Furthermore, we say that a fingerprint H is (statistically) sound for D_X if for any pair x, x* ∈ X such that x ≠ x*, we have Pr[H(x, r) = H(x*, r) : r ←$ D_X] ≤ negl(λ).

The distribution D_X is an abstraction that allows us to capture sampling (e.g., via a uniform distribution) from a space which is yet undefined. The randomness space R_X may depend on the data space X and on the security parameter λ of the scheme, which will generally be implicit. For instance, large domains may require large randomness spaces.
Fingerprints and CRHFs. Even though their syntaxes present similarities, fingerprints are strictly weaker objects than collision-resistant hash functions (CRHFs). Fingerprints are only guaranteed to be sound if the randomness r is randomly sampled, as opposed to controlled by the adversary. Also, the input x has to be chosen by the adversary before seeing r. The closest notion to our fingerprints is that of universal hash functions (when instantiated over an exponentially large output space).

Verifiable Evaluation Schemes on Fingerprinted Data
For our framework we consider a class of interactive proofs for the language L_F = {(f, x, y) : f ∈ F ∧ f(x) = y} which have the following structure (cf. Figure 1):
(1) Prover and verifier agree on a common fingerprint c_y = H(y, r_y). As an example, the verifier samples and sends randomness r_y ←$ D_Y to the prover, and both parties compute c_y independently.
(2) Prover and verifier interact on common input (f, c_y, r_y) through subroutines VE.P(x) and VE.V(r_x), respectively. Notably, neither x nor y is used by the verifier in this part of the interaction. At the end of a successful interaction, both parties agree on a common fingerprint c_x and randomness r_x.
(3) The verifier checks that c_x = H(x, r_x) and rejects otherwise.

[Figure 1: The interactive proof ⟨P, V⟩(f, x, y), shown as Prover and Verifier columns.]

In other words, these are interactive proofs that manage to reduce the check f(x) = y to a simpler verification that only involves the fingerprints of the output (computed in step (1)) and of the input (computed in step (3)). In this work, we formalize the primitive that takes place in step (2), which we call an (interactive) verifiable evaluation scheme on fingerprinted data (VE). The goal of a VE scheme is to prove that, given an admissible function f and fingerprints c_x, c_y, if c_x is a valid fingerprint of the input x then c_y is a valid fingerprint of f(x). Contrary to the intuitive setting where the interaction starts with both parties holding a common input x (or fingerprint c_x) and finishes on f(x) (or c_y), VE interactions start at a common output fingerprint c_y and finish with both parties agreeing on an input fingerprint c_x.
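The three-step pattern (fingerprint the output, run the VE, check the input fingerprint) can be walked through on a toy example, taking H to be the MLE-evaluation fingerprint and f(x) = 2x componentwise, so that the "VE interaction" collapses to a single linearity claim. The function, field, and vector below are illustrative, not a protocol from this paper.

```python
import random

# Toy end-to-end run of the three-step interactive-proof pattern.
P = 2**61 - 1

def mle_eval(v, r):  # fingerprint H(v, r): MLE of v evaluated at point r
    acc = 0
    for i, vi in enumerate(v):
        eq = 1
        for j, rj in enumerate(r):
            bit = (i >> j) & 1
            eq = eq * (rj if bit else (1 - rj)) % P
        acc = (acc + vi * eq) % P
    return acc

x = [5, 7, 11, 13]
y = [(2 * xi) % P for xi in x]          # y = f(x) with f = doubling

# (1) the verifier samples r_y; both parties compute the output fingerprint
r_y = [random.randrange(P) for _ in range(2)]
c_y = mle_eval(y, r_y)

# (2) "VE interaction": the MLE is linear in the table, so the prover's
#     claimed input fingerprint at the same point is c_x = c_y / 2
inv2 = pow(2, P - 2, P)
c_x, r_x = c_y * inv2 % P, r_y

# (3) the verifier recomputes the input fingerprint and accepts iff it matches
assert mle_eval(x, r_x) == c_x
```

For real functions, step (2) is an interactive reduction (a sumcheck) rather than a one-line algebraic identity, but the interface is the same: out comes a claimed input fingerprint for the verifier to check.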
Definition 3.2. A verifiable evaluation scheme on fingerprinted data VE for a family of functions F is a pair of interactive algorithms (VE.P, VE.V) that, given as prover input x; as verifier input randomness r_x; and as common input fingerprints c_y, randomness r_y, and a function f ∈ F, the interaction outputs (c_x; r_x; b) ← ⟨VE.P(x), VE.V(r_x)⟩(f, c_y, r_y), where c_x is a common output, r_x a prover output, and b a verifier output. Furthermore, the verifier VE.V is public-coin.
The scheme VE is correct if for any valid pair (x, y) with y = f(x) and randomness r_x, r_y, we have that Pr[b = 1] = 1. Our definition considers families of functions with multiple inputs and outputs, and also with multiple input-output fingerprints. Inputs and outputs may correspond one-to-one with fingerprints, but it is also possible that several fingerprints (computed on different randomness) correspond to a single input or output. For compactness, we write vectors c_x, r_x (respectively c_y, r_y), where c_{x,i} ∈ C corresponds to r_{x,i} ∈ R_X.
The security that is required of VEs is that, if c_x are valid fingerprints of x and the verifier accepts, then c_y are guaranteed to be valid fingerprints of f(x) (except with negligible probability). As we will show later, this property is very useful for composing VEs. We remark that security only holds when the fingerprints of the inputs c_x are honest.

Definition 3.3 (VE Soundness). A VE scheme VE is statistically (resp. computationally) sound if for any stateful unbounded (resp. PPT) adversary A, the probability that A, on honestly computed input fingerprints c_x of x, makes VE.V accept output fingerprints c_y that are not valid fingerprints of f(x) is negl(λ), where the probability is taken over the choices of r_x, r_y, the randomness of A, and any additional randomness used by VE.V.
Next, we show that VE security is indeed sufficient for building a sound interactive proof as described in Figure 1. The proof can be found in Appendix C.

Proposition 3.4. The protocol in Figure 1 is an interactive proof.

Composition of VEs
Next, we show that the composition of VEs that use the same fingerprint scheme is also a VE. This allows for constructions of modular interactive protocols for sequential functions.
Let f be composed of several sub-functions f_1, ..., f_n that can be placed left-to-right in a pipeline fashion (see Figure 2). The high-level approach of this procedure is the following: 1) start on a fingerprint of the output of f (i.e., on the right) that both prover and verifier trust; 2) run the VE schemes for the f_i in right-to-left order (starting with f_n); while 3) collecting fingerprints of the inputs of the f_i obtained throughout the interaction and using them as output fingerprints for the sub-functions on their left. At the end of the interaction, the verifier needs to check one or multiple input fingerprints. In Figure 2, we show this procedure in a block diagram.
Proposition 3.5. Let X, X', Z, and Y be domains. Let also f_1 : X → Z and f_2 : Z × X' → Y. Finally, let f : X × X' → Y be the function given by the (partial) composition f(x, x') = f_2(f_1(x), x'). Then, given verifiable evaluation schemes VE_1 and VE_2 for f_1 and f_2 based on the same fingerprint scheme, the composition protocol VE obtained by running VE_2 and then VE_1 as in Figure 2 is a verifiable evaluation scheme for f.
The proof appears in Appendix C. By combining Proposition 3.5 and Proposition 3.4, we obtain a framework for composing arbitrary evaluation schemes for different functions that can later be compiled into an interactive proof. Regarding efficiency, the communication complexity and running time of the resulting protocol grow additively for both prover and verifier, as the VEs are run sequentially.
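A toy run of the right-to-left composition, with two scaling layers standing in for f_1 and f_2 and MLE evaluations as fingerprints (the scaling layers are illustrative stand-ins for real VEs; for them the fingerprint reduction is a one-line algebraic step):

```python
import random

# Right-to-left composition of two toy VEs: the input fingerprint produced
# by the rightmost VE seeds the output fingerprint of the VE to its left.
P = 2**61 - 1

def mle_eval(v, r):  # MLE fingerprint of a vector v at point r
    acc = 0
    for i, vi in enumerate(v):
        eq = 1
        for j, rj in enumerate(r):
            bit = (i >> j) & 1
            eq = eq * (rj if bit else (1 - rj)) % P
        acc = (acc + vi * eq) % P
    return acc

def ve_scale(c_out, r_out, c):
    """Toy VE for f(v) = c*v: reduce an output fingerprint to an input one."""
    return c_out * pow(c, P - 2, P) % P, r_out

x = [1, 2, 3, 4]
z = [(2 * xi) % P for xi in x]     # z = f_1(x), scaling by 2
y = [(3 * zi) % P for zi in z]     # y = f_2(z), scaling by 3

r_y = [random.randrange(P) for _ in range(2)]
c_y = mle_eval(y, r_y)             # trusted fingerprint of the final output

c_z, r_z = ve_scale(c_y, r_y, 3)   # run VE_2 first (rightmost layer)
c_x, r_x = ve_scale(c_z, r_z, 2)   # its input fingerprint seeds VE_1

# the verifier only ever checks a fingerprint of the pipeline input x;
# the intermediate z is never fingerprinted by the verifier
assert mle_eval(x, r_x) == c_x
```

This is exactly the additive-cost behavior stated above: each layer contributes its own rounds, and only the two ends of the pipeline touch actual data.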
For clarity, in the following sections we use a parametrization for VE schemes that we define as follows.
Definition 3.6 (Parametrization of VEs). A verifiable evaluation scheme VE is parametrized by:
• the fingerprint scheme H,
• the (family of) admissible functions F = {f : X → Y},
• the input and output (vectors of) fingerprints c_x, c_y,
• the communication complexity |π| (of prover messages, i.e., we do not count verifier challenges),
• the prover and verifier running times t_P, t_V,
• and the soundness error ε.

From VEs to Arguments of Knowledge
We show how to turn a VE scheme for y = f(x) into a commit-and-prove argument of knowledge for the NP relation R_Π containing the pairs ((com_x, y); (x, o)) such that y = f(x) and o opens com_x to x. The full scheme is presented in Appendix D. The idea is a generalization of the vSQL approach [51] and relies on the observation that in the VE protocol the verifier does not need to know either x or y, but only their fingerprints c_x, c_y. In the VE-to-IP construction, the verifier would test whether c_y = H(y, r_y) and c_x = H(x, r_x). In the AoK, the verifier instead holds the commitment com_x, and we let the prover show the correctness of the fingerprint c_x w.r.t. the committed x. To enable this proof we only need a commit-and-prove AoK for the computation of H (instantiatable with a multilinear polynomial commitment).
Finally, we observe that, similarly to zkCNN, we can obtain a zero-knowledge AoK for R_Π by using existing approaches [8, 43] based on zero-knowledge sumcheck and low-degree extensions. More precisely, starting from the (non-ZK) VE scheme, we first apply the information-theoretic compiler based on zero-knowledge sumcheck from Libra ([43], Section 4.1). Then, we require a ZK-AoK for H in the compilation to a succinct argument. For the first step, we also need to mask the fingerprints obtained by the verifier to avoid leakage of intermediate values. This can also be done following ([43], Section 4.2).

VERIFIABLE EVALUATION FOR MULTILINEAR POLYNOMIALS
In this section, we reinterpret the line of work on the delegation of computation via sumchecks of multilinear polynomials, initiated by the GKR protocol [20] and continued by [11, 37, 43, 49], in the framework introduced in Section 3. We show that the notion of verifiable evaluation scheme captures the soundness properties of these core protocols, and we provide a modular approach such that they are easily composable with function-specific VEs. This allows us to compose these existing protocols with the new VE schemes that we propose in the next section. First of all, we define a fingerprint based on multilinear extensions. From this point on, we adopt the convention that λ = ⌊log|F|⌋ for a field F.

Proposition 4.1. Let F be a field, x̃ be the multilinear extension of x ∈ F^n, and ℓ = ⌈log n⌉. Then, the evaluation of a multilinear extension at a point r ∈ F^ℓ, given by x̃(r) ← H_MLE(x, r), is a statistically sound fingerprint for the uniform distribution over F^ℓ.
Proof. Given two inputs x, x* such that x ≠ x* and r ←$ F^ℓ, we have that Pr[x̃(r) = x̃*(r)] ≤ ℓ/|F|, where the bound follows from the Schwartz-Zippel lemma applied to the nonzero multilinear polynomial x̃ − x̃*. □

Multilinear sumcheck VE. The following result is a generalization of the multilinear sumcheck-based delegation schemes in the literature, particularly of those introduced in [37, 43]. The prover time depends on the time required to compute the multilinear extension of each polynomial factor f_{i,j}, as described below. Note that when the multilinear sumcheck is described in the VE framework, the function f corresponds to the sum of the evaluations over {0, 1}^ℓ, while the polynomial factors f_{i,j} ∈ F[X_1, ..., X_ℓ] correspond to the input and are not necessarily known to the verifier. In most practical cases, μ = 1 and d is a small constant (such as d = 2).

Proposition 4.2. Let y = Σ_{b∈{0,1}^ℓ} Σ_{i=1}^{μ} Π_{j=1}^{d} f_{i,j}(b), where each factor f_{i,j} is a multilinear polynomial over F evaluated on a subvector x_{i,j} ⊂ x. Then, the multilinear sumcheck protocol VE_ML in Figure 3 is a VE scheme for this family of functions.

Proof. First, we recall that the sumcheck protocol over a field F for an ℓ-variate polynomial of degree d has soundness error dℓ/|F| [31]. Correctness, communication complexity, and efficiency follow from inspection of Figure 3 and from the efficient sumcheck and padding techniques in previous work [43, 49]. For soundness, consider a successful adversary against VE soundness that, given an output fingerprint c*_y ≠ f(x), makes VE_ML.V accept. Let also g'_1(X_1), ..., g'_ℓ(X_ℓ) be the sequence of degree-d polynomials that correspond to running the protocol honestly, in addition to the constant polynomial g'_0 = f(x). By definition of VE soundness, we have that all input fingerprints are honestly computed, i.e., c_{i,j} = f_{i,j}(r) for every i, j. Therefore, as the check in line 12 of Figure 3 verifies, it must be that ĝ_ℓ(r_ℓ) = g'_ℓ(r_ℓ). We conclude that the adversary must have found a collision during the sumcheck, which occurs with probability ε = dℓ/|F|. □
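A minimal runnable version of the sumcheck loop underlying these protocols, for the simplest case μ = 1, d = 1 (a single multilinear factor given by its table on the hypercube; the field and inputs are toy choices):

```python
import random

# Interactive sumcheck for a multilinear g given by its table on {0,1}^ell.
P = 2**61 - 1

def mle_eval(v, r):
    """MLE of v at point r, with r[j] corresponding to bit j of the index."""
    acc = 0
    for i, vi in enumerate(v):
        eq = 1
        for j, rj in enumerate(r):
            bit = (i >> j) & 1
            eq = eq * (rj if bit else (1 - rj)) % P
        acc = (acc + vi * eq) % P
    return acc

def sumcheck(table):
    """Verify claim = sum of table over the hypercube.

    Returns (accepted, challenges, final_claim); in the VE view the final
    claim is the input fingerprint the verifier still has to check."""
    claim = sum(table) % P
    tbl = list(table)
    challenges = []
    while len(tbl) > 1:
        half = len(tbl) // 2
        # round polynomial g_i is linear; the prover sends (g_i(0), g_i(1))
        g0, g1 = sum(tbl[:half]) % P, sum(tbl[half:]) % P
        if (g0 + g1) % P != claim:
            return False, challenges, None
        ri = random.randrange(P)   # public-coin verifier challenge
        challenges.append(ri)
        # fold the table: fix the current (top) variable to ri
        tbl = [(a + ri * (b - a)) % P for a, b in zip(tbl[:half], tbl[half:])]
        claim = (g0 + ri * (g1 - g0)) % P  # g_i(ri) is the next round's claim
    return tbl[0] == claim, challenges, claim

x = [3, 1, 4, 1, 5, 9, 2, 6]
ok, r, final = sumcheck(x)
assert ok
# the leftover claim is exactly the MLE fingerprint of x at the challenges
assert mle_eval(x, list(reversed(r))) == final
```

With d = 2 (e.g., a GKR layer or the many-to-one reduction below), each round sends a degree-2 polynomial via three evaluations, but the fold-and-reduce structure is identical.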

VE for GKR layers
In the celebrated GKR protocol [20], prover and verifier interact in a series of sumchecks that take place at every layer of the circuit.Each of these sumchecks can be written as a VE scheme with multiple input fingerprints (and possibly multiple output fingerprints too).This interpretation is straightforward following Proposition 4.2; it also addresses the observation that the add and mult gate predicates can be replaced by alternative gate predicates, in order to support other operations efficiently as mentioned in [43], or larger fan-in such as in [29].Following the notation from Libra [43], we write   for the output values at the gates of the circuit at layer  (interpreted as a function   : {0, 1} ℓ  → F) and Ṽ its multilinear extension.We define the wiring predicates add  , mult  : {0, 1} ℓ  +2ℓ −1 → F, which take one gate label  ∈ {0, 1} ℓ  and two gate labels  1 ,  2 ∈ {0, 1} ℓ −1 , and output 1 if gate  is an addition (respectively a multiplication) gate that takes the outputs from gates  1 ,  2 in the previous layer.Therefore, for any  ∈ {0, 1} ℓ  , we can write  +1 as In the protocol, prover and verifier start on a common fingerprint of the output  +1 (  ) and then run the multilinear sumcheck from Figure 3.At the end of the sumcheck, in which the prover sends a total of 2 • ℓ  polynomials, the verifier needs to check the consistency of the prover's claims by using the wiring predicates.Namely, it needs to compute (or re-use in a layer above) the following fingerprints: Ṽ ( 1 ), Ṽ ( 2 ), ãdd  (  ,  1 ,  2 ), m ult  (  ,  1 ,  2 ).The following result is a reinterpretation of [43], and in particular the observation that the prover time is linear in 2 ℓ where ℓ = max{ℓ  , ℓ +1 } due to the sparsity of ãdd  , m ult  and Lemma 2.2.The proof follows from Proposition 4.2.Proposition 4.3.The interactive protocol that takes place at a GKR layer is a VE scheme VE GKR for all functions computable by a single-layered arithmetic circuit with gates of fan-in 2. 
The scheme is parametrized by 1 output fingerprint (of Ṽ_{i+1}) and 4 input fingerprints (2 of Ṽ_i, plus 1 of ãdd_i and 1 of m̃ult_i).
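To make the layer equation concrete, the following minimal Python sketch (an illustration with made-up gate values and wiring, not part of the paper's protocol) evaluates the GKR layer equation relating V_{i+1} to V_i through the add and mult predicates at boolean points, for a toy layer with one addition and one multiplication gate; the actual protocol runs the same identity over multilinear extensions via a sumcheck.

```python
# Toy check of the GKR layer equation at boolean points (illustrative only).
import itertools

v = [3, 5]  # values V_i at the previous layer (two gates, labels 0 and 1)

# wiring predicates: gate 0 adds gates (0,1); gate 1 multiplies gates (0,1)
def add(z, u1, u2):
    return 1 if (z, u1, u2) == (0, 0, 1) else 0

def mult(z, u1, u2):
    return 1 if (z, u1, u2) == (1, 0, 1) else 0

out = []
for z in (0, 1):
    s = sum(add(z, u1, u2) * (v[u1] + v[u2]) + mult(z, u1, u2) * v[u1] * v[u2]
            for u1, u2 in itertools.product((0, 1), repeat=2))
    out.append(s)

assert out == [3 + 5, 3 * 5]
```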

VE for Many-to-One Reductions
Multivariate sumcheck-based VEs often present the issue that, from a single output fingerprint, the interaction yields multiple input fingerprints to be checked by the verifier at a later time. For GKR layers, the two input fingerprints of Ṽ_i obtained shall be used as output fingerprints for layer i−1. To avoid an exponential blow-up in the number of fingerprints to be checked, the original GKR protocol proposes a 2-to-1 reduction protocol that, given two fingerprints of any x, reduces them to a single fingerprint. An alternative to the 2-to-1 reduction is to use a random linear combination on the sum [8].
Below we formalize 2-to-1 reductions in the VE framework and generalize them to a k-to-1 reduction. The result extends GKR-specific techniques from Virgo++ [49].

Proposition 4.4. Let x ∈ F^{2^ℓ} and let x̃(u_1), ..., x̃(u_k) be MLE fingerprints on u_1, ..., u_k ∈ F^ℓ. Let also ρ_j ∈ F for j = 1, ..., k, let β(u, x) be the indicator function on the boolean hypercube, such that β(u, x) = 1 if u = x and is zero elsewhere, and define

  f(x) = ( Σ_{j=1}^{k} ρ_j·β̃(u_j, x) ) · x(x).

Then, running the multilinear sumcheck protocol from Figure 3 on f(x) yields a VE scheme VE_m-1 parametrized by k output fingerprints x̃(u_j), and k+1 input fingerprints (β̃(u_j, r) for j = 1, ..., k, and x̃(r)). Note the additional soundness loss of 1/|F| with respect to the sumcheck, which comes from the choice of the ρ_j. It is straightforward to express the random linear combination approach from [8] as a VE, also following Proposition 4.2. Such a VE is parametrized by 2 output fingerprints (of Ṽ_{i+1}) and 6 input fingerprints (2 of Ṽ_i, 2 of ãdd_i, 2 of m̃ult_i).
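The identity behind the k-to-1 reduction can be checked numerically. The following self-contained Python sketch (a toy over a made-up 31-bit prime field; helper names and parameters are our choices, not the paper's implementation) verifies that k claimed MLE evaluations, combined with coefficients ρ_j, collapse into a single sum over the boolean hypercube, which is exactly the statement the sumcheck in VE_m-1 proves:

```python
# Toy check: sum_j rho_j * x~(u_j) equals a single hypercube sum (illustrative).
import random

P = 2**31 - 1  # toy prime field

def chi(idx_bits, point):
    """Multilinear Lagrange basis factor for a boolean index at `point`."""
    t = 1
    for b, p in zip(idx_bits, point):
        t = t * (p if b else (1 - p)) % P
    return t

def eq(u, v):
    """MLE of the hypercube indicator: prod_j (u_j v_j + (1-u_j)(1-v_j))."""
    t = 1
    for a, b in zip(u, v):
        t = t * (a * b + (1 - a) * (1 - b)) % P
    return t

def mle(table, point):
    l = len(point)
    return sum(v * chi([(i >> (l - 1 - j)) & 1 for j in range(l)], point)
               for i, v in enumerate(table)) % P

l, k = 3, 4
x = [random.randrange(P) for _ in range(2**l)]
us = [[random.randrange(P) for _ in range(l)] for _ in range(k)]
rho = [random.randrange(P) for _ in range(k)]

lhs = sum(rho[j] * mle(x, us[j]) for j in range(k)) % P
rhs = sum(sum(rho[j] * eq(us[j], [(i >> (l - 1 - t)) & 1 for t in range(l)])
              for j in range(k)) * x[i] for i in range(2**l)) % P
assert lhs == rhs
```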
Evaluation of mult, add, and structured predicates. In all VEs introduced so far, including those in Propositions 4.3 and 4.4, the number of input fingerprints is larger than the number of output fingerprints. Some of these fingerprints correspond to unstructured data (such as the values at a circuit layer or an external input), but most of them have a regular structure, such as the wiring predicates mult, add and indicator functions.
When multiple VEs are composed, fingerprints coming from structured data may be checked directly by the verifier, as opposed to being plugged into other VEs. There are essentially two design choices available:
• The verifier recomputes the multilinear extensions on its own. In many cases, one can benefit from parallelism [10] or from sparsity [43]. In [23], it is shown that most simple predicates (those expressible as read-only branching programs), including many regular wiring patterns such as indicator functions, can be evaluated in logarithmic time (i.e., polynomial in ℓ).
• The verifier performs a pre-processing phase, or relies on a trusted third party, to compute (multilinear) polynomial commitments to the data. Then, the prover provides an opening proof on the required point. In this setting, the evaluation is outsourced to the prover, similarly to what is done, for instance, in Spartan [35].

Efficient Matrix Multiplication
Among the protocols that we can capture in our framework, a notable example is the efficient interactive protocol for matrix multiplication from [37]. The main idea of the protocol is to express the product of two matrices C = A·B, where A, B, C ∈ F^{n×n}, as the polynomial identity

  C̃(x, y) = Σ_{z ∈ {0,1}^ℓ} Ã(x, z)·B̃(z, y),   (3)

where ℓ = log n. Then, the interaction follows the sumcheck in Figure 3. Namely, given r_1, r_2 ∈ F^ℓ, both parties carry out a sumcheck over Σ_{z ∈ {0,1}^ℓ} Ã(r_1, z)·B̃(z, r_2). The protocol is therefore a VE scheme parametrized by two input fingerprints Ã(r_1, r_3), B̃(r_3, r_2), and one output fingerprint C̃(r_1, r_2).
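The polynomial identity underlying this protocol can be tested directly. The sketch below (illustrative Python over a toy prime field, with a naive O(2^ℓ)-per-point MLE evaluator; the field size and helper names are our choices, not from [37]) checks that C̃(r_1, r_2) equals the sum of Ã(r_1, z)·B̃(z, r_2) over the hypercube for random matrices:

```python
# Toy check of C~(r1, r2) = sum_z A~(r1, z) * B~(z, r2) (illustrative only).
import itertools, random

P = 2**31 - 1  # toy prime field

def mle(table, point):
    """Evaluate the multilinear extension of `table` (length 2^l) at `point`."""
    l = len(point)
    acc = 0
    for idx, v in enumerate(table):
        term = v % P
        for j in range(l):
            bit = (idx >> (l - 1 - j)) & 1
            term = term * (point[j] if bit else (1 - point[j])) % P
        acc = (acc + term) % P
    return acc

def matmul_identity_holds(A, B, n):
    l = n.bit_length() - 1  # n = 2^l
    C = [[sum(A[i][k] * B[k][j] for k in range(n)) % P for j in range(n)]
         for i in range(n)]
    # flatten row-major: the first l variables index rows, the last l columns
    At = [A[i][j] for i in range(n) for j in range(n)]
    Bt = [B[i][j] for i in range(n) for j in range(n)]
    Ct = [C[i][j] for i in range(n) for j in range(n)]
    r1 = [random.randrange(P) for _ in range(l)]
    r2 = [random.randrange(P) for _ in range(l)]
    lhs = mle(Ct, r1 + r2)
    rhs = sum(mle(At, r1 + list(z)) * mle(Bt, list(z) + r2)
              for z in itertools.product([0, 1], repeat=l)) % P
    return lhs == rhs

n = 4
A = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
B = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
assert matmul_identity_holds(A, B, n)
```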

VERIFIABLE EVALUATION FOR MACHINE LEARNING
In this section, we introduce efficient proofs for common ML operations, following our VE framework. We focus on Convolutional Neural Networks (CNNs), though we note that many of these operations are also usual in image processing. We start by introducing ML preliminaries. The data at layer i of a CNN can be seen as a collection of arrays X_τ^(i) ∈ F^{n^(i) × n^(i)}, where ch^(i) is the number of channels τ at layer i, and n^(i) × n^(i) is the dimension of the arrays at layer i. Namely, at each intermediate layer we have ch^(i) "parallel" arrays of the same size. An example of multiple channels in an input layer is a coloured image, which commonly has 3 channels: the red, blue, and green values of each pixel.

Convolution.
The equation of a plain 2D convolution in a CNN for a given output channel σ is

  X_σ^(i+1)[j, k] = Σ_{τ=1}^{ch^(i)} Σ_{a,b=0}^{w^(i)−1} X_τ^(i)[j+a, k+b] · W_{τ,σ}^(i)[a, b].

If no padding and no strides (i.e., "jumps" in the convolution) are applied, the output matrix X_σ^(i+1) is a square matrix of size n^(i+1) × n^(i+1), where n^(i+1) = n^(i) − w^(i) + 1. It is very common in practice to apply a zero or mirror padding such that n^(i) = n^(i+1). Convolutions can be carried out via (naive) dot products, via Fast Fourier Transforms (FFTs), via polynomial multiplication, or via matrix multiplication.
A related common operation is transposed convolution, which is an upsampling operation that increases the size of the output with respect to the input. We refer to [15] for a good introduction to convolution arithmetic.
In Appendix B we briefly discuss other relevant layer types; namely, activation, pooling, fully connected and batch normalization.
5.1.3 Quantisation. Generally, CNNs need to be quantised to be embedded in proof systems, since these require that values belong to some finite field. Quantisation is actually used beyond verification, as typical models reach a similar accuracy on short integers (such as 8-bit). A usual quantisation scheme is [25], which, as shown in zkCNN [30], can be integrated into large fields easily. A possible avenue for building verifiable CNNs without quantisation consists of using proof systems with native ring arithmetic such as [6,36].

Our VE for Convolution
In this section we present a novel approach to proving convolutions efficiently by exploiting the symmetrical structure of the convolution operation. We write convolutions as matrix multiplications, seeking a more convenient form than the commonly used Toeplitz or circulant matrices (see [39] for further details).
Rewriting convolution. We observe that it is possible to re-write a convolution operation in the following compact form, where we specify a convolution of a 3 × 3 input X by a 2 × 2 kernel W:

  [ x_11 x_12 x_21 x_22 ]   [ w_11 ]   [ y_11 ]
  [ x_12 x_13 x_22 x_23 ] · [ w_12 ] = [ y_12 ]
  [ x_21 x_22 x_31 x_32 ]   [ w_21 ]   [ y_21 ]
  [ x_22 x_23 x_32 x_33 ]   [ w_22 ]   [ y_22 ]   (5)

Here, each row of the left-hand matrix contains the entries of X covered by the kernel at one output position, and y is the flattened output of the convolution.
The example is easily extended to an n^(i) × n^(i) input and a w × w kernel. The matrix on the left-hand side has dimensions (n − w + 1)² × w². More generically, this is the dimension of the flattened output times the dimension of the flattened weight matrix, which is (n^(i+1))² × w² for a convolutional layer that has an output of size n^(i+1) × n^(i+1). We can extend this approach to capture multiple channels in a convolutional neural network. Let us recover the usual CNN notation while ignoring layer indices; let X_τ be the input with channel τ ∈ [ch], and let W_{τ,σ} be the weight matrix, where σ ∈ [out] is the output channel. Then, in matrix form (where X̂, Ŵ are the transformed matrix representations of the data and weights in the form of Equation 5), we have that the layer's output Y is given by

  Y = Σ_{τ ∈ [ch]} X̂_τ · Ŵ_τ.   (6)

Namely, for each input channel τ we have the product of an (n^(i+1))² × w² matrix and a w² × out matrix. Each column Y_σ is a column vector of length (n^(i+1))² (i.e., a flattened channel of the output of the layer). If we apply the efficient VE for matrix multiplication at this stage, we need to prove the result of a sum of ch matrix multiplications, where the size of the matrices is (n^(i+1))² × w² and w² × out.
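The single-channel reshaping of Equation (5) can be reproduced in a few lines. In the Python sketch below (illustrative only; the row-per-window construction is commonly known as im2col, a name we adopt here), the matrix-vector product over the reshaped input matches a direct 2D convolution:

```python
# Reshaped-input convolution: one window of X per row, then a matrix-vector product.
def im2col(X, k):
    """Rows of the reshaped input: one k x k window of X per output position."""
    n = len(X); m = n - k + 1
    return [[X[i + di][j + dj] for di in range(k) for dj in range(k)]
            for i in range(m) for j in range(m)]

def conv2d(X, W):
    """Direct (no padding, stride 1) 2D convolution, for comparison."""
    n, k = len(X), len(W); m = n - k + 1
    return [[sum(X[i + di][j + dj] * W[di][dj]
                 for di in range(k) for dj in range(k))
             for j in range(m)] for i in range(m)]

X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
W = [[1, 0], [0, -1]]
w_flat = [W[i][j] for i in range(2) for j in range(2)]
Xhat = im2col(X, 2)                                          # 4 x 4 matrix
y = [sum(row[t] * w_flat[t] for t in range(4)) for row in Xhat]
direct = conv2d(X, W)
assert y == [v for row in direct for v in row]
```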
Combining all input channels. The main efficiency advantage of our approach is that it is straightforward to extend the sumcheck equation for matrix multiplication (Equation (3)) to sum over the multiple channels. To do this, we can encode both X̂ and Ŵ as trivariate polynomials given by X̂(τ, x, y) := X̂_τ(x, y) and Ŵ(τ, x, y) := Ŵ_τ(x, y) for every τ ∈ [ch]. Then, we obtain the following sumcheck equation over

  Ỹ(r_1, r_2) = Σ_{τ, z} X̂(τ, r_1, z)·Ŵ(τ, z, r_2).   (7)

Proposition 5.1. Let VE_conv be the VE scheme for two-dimensional convolution that is obtained by running the multivariate sumcheck protocol in Figure 3 on Equation 7. Then, VE_conv is parametrized by two input fingerprints (one for X̂ and one for Ŵ), one output fingerprint (for Y), and the communication complexity, prover time, verifier time, and soundness inherited from the sumcheck. Intuitively, the asymptotic benefit of our approach compared to previous work is essentially given by expressing the input channels as columns in Equation (6), avoiding the overhead of padding the kernels to the input size.
We also note that it is straightforward to extend Equation 5 to support arbitrary padding or stride settings by modifying the reshaped input X̂, as done in our implementation. An advantage of our method is that the output Y does not need to be reshaped after the VE is applied.

Transpose Convolution.
The transpose convolution operation can be re-written as in Equation 5. For an example, let n^(i) = w = 2 over a single input channel X_τ^(i). A basic upscaling transposed convolution yields n^(i+1) = 3, as below.
For arbitrary input channels, the output will be an (n^(i+1))² × out matrix. As before, we need to compute the sum over all input channels τ ∈ [ch], which can be done by extending the sumcheck as in Equation 7.
This yields the same prover time t_P and verifier time t_V as for convolutions.

Neural Network Layers
Neural networks, and in general many data-processing algorithms, incorporate several (generally simple) steps beyond convolution. We succinctly describe efficient ways of constructing VEs for the most usual operations.

Layer reshaping and pooling.
For any sequence of operations that can be expressed without any multiplication gate (such as padding, rotation, compression, averaging, or any input rearrangement, e.g., the pre-processing required for the input of VE_conv), one can encode the desired pattern in a wiring predicate P(y, x) and apply the multilinear sumcheck VE_SC as follows. For an input layer X and output layer Y, let P(y, x) = c if the value c·X(x) is added to Y(y). Then, VE_SC can be applied over Y(y) = Σ_{x ∈ {0,1}^ℓ} P(y, x)·X(x). Note that P is sparse for most operations (except for weighted sums of many input values).
The predicate P natively supports average pooling. For max pooling, we recall the approach using auxiliary bit decompositions from zkCNN [29], which can be expressed as a VE. We also note that the described VE for reshaping can easily be used in combination with a many-to-one VE.
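As an example of such a wiring predicate, the following Python sketch (a toy over a made-up prime field, not the paper's implementation; the predicate encodes 2×2 average pooling, with the weight 1/4 realized as the field inverse of 4) checks the equation Y(y) = Σ_x P(y, x)·X(x) directly; the VE would instead run the multilinear sumcheck over this sum:

```python
# 2x2 average pooling as a wiring-predicate sum over a toy prime field.
P = 2**31 - 1
inv4 = pow(4, P - 2, P)  # 1/4 in the field, via Fermat's little theorem

n = 4  # input is n x n; output is (n/2) x (n/2)
X = [[(3 * i + j) % P for j in range(n)] for i in range(n)]

def pred(oy, ox, iy, ix):
    """Wiring predicate: weight of input cell (iy,ix) in output cell (oy,ox)."""
    return inv4 if (iy // 2, ix // 2) == (oy, ox) else 0

Y = [[sum(pred(oy, ox, iy, ix) * X[iy][ix]
          for iy in range(n) for ix in range(n)) % P
      for ox in range(n // 2)] for oy in range(n // 2)]

# sanity check against plain averaging (entries are small, so no wrap-around)
for oy in range(n // 2):
    for ox in range(n // 2):
        s = sum(X[2 * oy + dy][2 * ox + dx] for dy in range(2) for dx in range(2))
        assert Y[oy][ox] == s * inv4 % P
```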

Normalization and linear transformations.
Point-wise normalization, and in general input re-scaling operations that can be expressed as linear transformations of the form x ↦ a·x + b, where a, b ∈ F, can be verified via a linear shift without any prover work. Indeed, multilinear fingerprints satisfy that, given x, y ∈ F^{2^ℓ} such that y(i) = a·x(i) + b for all i ∈ {0,1}^ℓ, then Ỹ(r) = a·X̃(r) + b.
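This linearity of multilinear fingerprints is easy to verify numerically. The sketch below (illustrative Python with a naive MLE evaluator over a toy prime field; not part of the paper's implementation) checks that the fingerprint of y = a·x + b equals a·X̃(r) + b at a random point:

```python
# Multilinear fingerprints commute with affine maps (toy field, illustrative).
import random

P = 2**31 - 1

def mle(table, point):
    """Naive O(2^l) evaluation of the multilinear extension of `table`."""
    l = len(point)
    acc = 0
    for idx, v in enumerate(table):
        t = v % P
        for j in range(l):
            bit = (idx >> (l - 1 - j)) & 1
            t = t * (point[j] if bit else (1 - point[j])) % P
        acc = (acc + t) % P
    return acc

a, b = 7, 13
x = [random.randrange(P) for _ in range(8)]   # l = 3 variables
y = [(a * v + b) % P for v in x]
r = [random.randrange(P) for _ in range(3)]
assert mle(y, r) == (a * mle(x, r) + b) % P
```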

Activation functions.
Due to their non-linearity, the verification of activation layers is particularly challenging and essentially reduces to two possibilities:
• Dedicated VEs with additional input. For instance, zkCNN [29] introduces a protocol for ReLU that requires an additional bit decomposition, and can easily be seen as a VE.
• Approximating activation functions via polynomials, as is usual in the privacy-preserving ML literature. Quadratic polynomials may already offer good approximations [1].
For this approach, one can construct a VE that evaluates quadratic polynomials via the following multilinear sumcheck (which follows from a GKR-like encoding):

  Ỹ(r) = Σ_{x ∈ {0,1}^ℓ} β̃(r, x)·(a·X(x)² + b·X(x) + c).

For a degree-d polynomial, it is possible to use a binary tree of multiplications, such that prover time, verifier time, and communication complexity scale with log d.
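The GKR-like encoding for a quadratic activation can be sanity-checked as follows (illustrative Python over a toy prime field; the coefficients a, b, c are arbitrary choices of ours): the MLE of the activated values equals the β̃-weighted sum of a·X(x)² + b·X(x) + c over the hypercube.

```python
# Toy check of the quadratic-activation identity (illustrative only).
import random

P = 2**31 - 1

def chi(idx, point):
    """Multilinear Lagrange basis polynomial for boolean index `idx` at `point`."""
    t = 1
    for j, p in enumerate(point):
        bit = (idx >> (len(point) - 1 - j)) & 1
        t = t * (p if bit else (1 - p)) % P
    return t

def mle(table, point):
    return sum(v * chi(i, point) for i, v in enumerate(table)) % P

a, b, c = 2, 3, 5                                  # arbitrary coefficients
x = [random.randrange(P) for _ in range(8)]        # layer values, l = 3 variables
y = [(a * v * v + b * v + c) % P for v in x]       # activated layer

r = [random.randrange(P) for _ in range(3)]
# Y~(r) = sum_{x in {0,1}^3} beta~(r, x) * (a*X(x)^2 + b*X(x) + c)
rhs = sum(chi(i, r) * (a * x[i] * x[i] + b * x[i] + c) for i in range(8)) % P
assert mle(y, r) == rhs
```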
An alternative approach is to use efficient lookup arguments [16,33,45,46], where one can benefit from storing all values of the activation function (for quantised inputs) in a lookup table. We leave the investigation of lookups in our VE framework as interesting future work.

Neural Networks
To construct a dedicated proof system for neural networks, we build a large VE scheme (denoted by VE_NN), composed of several "gadget" VE_k, one for each of the layers of the network. Then, we use a multilinear polynomial commitment scheme to build a commit-and-prove AoK that achieves succinctness and efficient verification, following the blueprint of Proposition 3.5.
Following previous notation, let x^(k) be the input and f^(k) the function at layer k. We consider two general kinds of layers:
• Layers f^(k)(x^(k)) that apply an input transformation without additional parameters. For such f^(k), we consider VE_k that take output fingerprints (of x^(k+1)) and produce input fingerprints of x^(k) and of an auxiliary predicate p^(k) (see below).
• Layers f^(k)(x^(k), w^(k)) that require additional parameters, not necessarily known to the verifier. For these functions, we consider VE_k that take output fingerprints (of x^(k+1)) and produce input fingerprints (of x^(k), w^(k), and p^(k)). Additionally, we require VE_k to take as many output fingerprints of x^(k+1) as input fingerprints produced by VE_{k+1}, such that they are compatible. Note that we can always achieve compatibility, as one can reduce the number of input fingerprints by applying VE_m-1 (Proposition 4.4).
The predicates p^(k) englobe any additional predicate that expresses the circuit at each layer, such as the wiring predicates in Equation (1) or additional auxiliary input as in [29]. For both w^(k) and p^(k), we define w̃(k, ·) := w^(k)(·) and p̃(k, ·) := p^(k)(·) via interpolation as

  s̃(k, x) = Σ_j β(j, k)·s^(j)(x),

where s ∈ {w, p} and β(j, k) is the indicator function on ⌈log d⌉ variables, for d the number of layers. Without loss of generality, we pad every w^(k) to have the same number of variables. For concrete implementations, it is possible to optimize the padding. We describe VE_NN and its compiled AoK Π_NN in Figure 4. Soundness of VE_NN follows by Proposition 3.5 and the soundness of VE_k and VE_m-1. Π_NN is an instantiation of the compiler of Section 3.3 and Proposition D.1. By expressing the model parameters and predicates as single polynomials, it is possible to obtain, via many-to-one reductions, a single input fingerprint for each of x^(0), w, and p. These fingerprints are verified in Π_NN.V by three polynomial commitment opening proofs.

Proposition 5.2. The protocol VE_NN is a VE scheme for a neural network architecture f_NN, parameterized by 1 output fingerprint (of y) and 3 input fingerprints (of x^(0), w, and p). Communication complexity, prover time, verifier time, and soundness result from the sum of the respective parameters of each VE_k and VE_m-1 in Figure 4.
Besides, Π_NN is an argument of knowledge for the corresponding relation. Finally, we remark that our modular approach allows verifying pre- or post-processing operations in addition to the model, such as an aggregation phase. In this case, one can extend VE_NN and compose it with additional VE schemes for these operations.

Proof Batching
Our techniques are amenable to efficient batching, where many evaluations y_j = f(x_j, w) for j = 1, ..., M are verified in a single step. For VE schemes that rely on the multilinear sumcheck protocol from Figure 3, including the convolution VE introduced in this section, it is possible to reduce the verification time and communication complexity from linear to constant in the number of instances M.
Let X(j, x) ∈ F[X_1, ..., X_{log M + ℓ}] be defined by X(j, ·) := x_j(·), and let Y(j, y) be defined analogously, following Equation (9). Then, one can run the protocol in Figure 3 over Y(r_0, r_1), where r_0 ∈ F^{log M} and r_1 ∈ F^ℓ. For instance, since the weights are shared across the batch, the sumcheck on the convolution VE (Equation (7)) can be written as

  Ỹ(r_0, r_1, r_2) = Σ_{τ, z} X̂(r_0, τ, r_1, z)·Ŵ(τ, z, r_2).   (10)

The resulting VE increases the prover time by a factor of ⌈log M⌉ and maintains the same soundness, communication complexity, and verifier time as its single-instance counterpart.

Verifiable Recurrent Neural Networks
As an additional application of our modular framework, we show how to construct a protocol for the verification of recurrent neural network (RNN) predictions, a problem that has not been addressed efficiently in the literature. RNNs are a type of neural network designed to process sequential data, such as time series or natural-language text. Unlike feedforward neural networks, which process input data in a single pass and do not maintain memory, RNNs have a loop that allows information to be passed from one time step to the next, following a cyclic computation graph.
Let c be the length of the longest cycle in the graph described by an RNN of d layers. For example, c = 1 in the RNN in Figure 5, as the only cycle is a self-loop. We construct a VE that verifies the computation of T predictions (y^(1), ..., y^(T)) from (streaming) inputs (x^(t) for t = 0, ..., T) as follows. The prover computes the predictions, storing all intermediate values, and then "unrolls" the intermediate computations of the RNN as in Figure 6. The resulting computation trace is a circuit of depth D = d + c·T with an evident layer structure.
• The prover embeds each layer of the computation trace in a multilinear polynomial Ṽ_i(t, x) := V_i^(t)(x) as in Equation (9), and defines w̃_i, p̃_i accordingly. In total, one obtains D multilinear polynomials, structured as the layers in Figure 6.
• The VE proceeds similarly to the VE_NN of Figure 4. Even if the proof size scales linearly in the length of the stream T, we believe that our approach may present good concrete performance for small streams, in particular due to the batching technique.

Image Processing
The techniques developed in the previous sections find a direct application in the verification of image-processing operations. For instance, convolution is used in applications such as edge detection (e.g., using Sobel or Canny kernels), image blurring (Gaussian blur), and feature extraction. Below we provide a brief description of how to construct a VE for some common applications.
• Operations that require geometric modifications or rearrangements of the original picture, such as cropping, rotation, mirroring, padding, or partial censoring (i.e., removal or replacement of sectors of an image), can be verified following Section 5.3.1.
• For convolution-related operations, one can directly apply our VE_conv with the desired parameters.
• Multiple transformations can be merged into a single sumcheck by merging wiring predicates. For instance, rotation + cropping + input reshaping (1) and a posterior convolutional filtering (2) can be verified with only two sumchecks.
For images encoded in RGB or another multi-channel format, we can apply batching techniques for the channels, as shown in Equation (10). If negative values appear in convolution kernels, linear shifts need to be applied to avoid wrapping of field elements. We compare the performance of our approach to ZK-IMG [26] and PhotoProof [32] in Section 6.3.

EVALUATION
In this section we discuss the performance of our solution and compare it to previous work. We focus the evaluation on our VE_conv for convolution operations introduced in Section 5.2, as this is the most novel proof gadget compared to previous work.

Theoretical comparison
Recalling previous notation, let n × n be the input size, w × w the kernel size, and ch, out the number of input and output channels, respectively. Our VE_conv achieves short proofs and a verifier time of t_V = O(log(w²·ch)). In zkCNN [29], proving a convolutional layer using the FFT-based approach involves a prover time of t_P = O(ch·out·n²) and a verification time of t_V = O(log²(n²)), where n = max{n^(i), n^(i+1)}. Hence, our approach is always more efficient in communication complexity and verification time, while our prover is more efficient asymptotically when w² ≤ out, which is often the case in practice (e.g., VGG16 presents w = 3 and out grows up to 512), and its running time is independent of out when the term ch·w²·(n^(i+1))² dominates in the sum. Additionally, in zkCNN they need to either compute the FFT matrix or outsource this computation to the prover, thereby increasing the proof size. Our direct approach avoids the complications of these multiple sumchecks. We also note that their FFT-sumcheck-based protocol can easily be expressed as a VE.
We note that, in many typical ML models, n ≫ w, ch in early layers, and out ≫ n ≈ w in 'deep' intermediate layers. Hence, even if our approach does not outperform the prover time of the FFT-based polynomial-multiplication approach in all parameter regimes, it improves on it for many parameter sets in intermediate layers. Based on the characteristics of the layer, one could select the most efficient VE for convolution.

Experimental comparison
We implemented VE_conv in Rust. We use the arkworks library [2] for implementing field arithmetic over the 256-bit prime field from the bls12-381 curve, the same field used in [29]. We also utilize several components of the arkworks sumcheck library, which implements the doubly efficient protocol of [43].
We carry out different benchmarks in a virtual machine running Debian GNU/Linux with 8 Xeon-Gold-6154 cores at 3 GHz and with 98 GB of RAM. (Our code is available at https://github.com/imdea-software/MSCProof.) Our implementation can be run using the natively supported parallelisation in arkworks, but we run our experiments on a single thread to facilitate comparison with previous work. All timings correspond to the average over 10 executions.
Single-channel convolution. Our first set of benchmarks runs a single convolution with different input and kernel sizes. For small kernels (w = 4), our VE prover requires 1.3 ms for an n = 32 input, and 98 ms for n = 256. In this parameter regime, our prover time is 5× faster than the FFT prover (and also the naive prover) in [29]. Our prover also outperforms [28] by two orders of magnitude. For large convolution kernels, the prover in zkCNN remains faster.
Verification is very fast and scales logarithmically with the kernel size, as expected. Verifying a moderate-size convolution such as n = 256 (in fact, any n) with w = 8 takes 0.157 ms, whereas large kernels (w = 128) require 0.362 ms.
Multiple channels. Our approach is optimized for multiple convolution channels, as we show in Figure 7. We display our results for a small fixed kernel w = 4 and input n = 64, for ch up to 64 and out = 1, 32, 128. As seen in the chart, the prover time is essentially constant in out, since the term ch·w²·(n^(i+1))² dominates the sum. The verifier time is also very small, ranging from 0.07 ms for ch = 1 to 0.210 ms for ch = 64, and is also constant in out.
We do not have concrete running times for multiple channels in zkCNN, but we expect their prover time to increase linearly in ch·out.
Communication complexity. We also provide concrete figures for the communication complexity (equivalently, the proof size of the non-interactive protocol), which is deterministic for VE_conv (Proposition 5.1). For the single-channel experiments, the proof size amounts to 0.64 KB for w = 8, and 1.4 KB for w = 128, for any input size. This is an 8× improvement over zkCNN, whose proofs range from 5.6 KB to 8.4 KB for the same experiments. For the multi-channel setting of Figure 7, the instance n = 64, w = 8, ch = 32 yields a proof size of 1.12 KB for any out.
Image Processing. Finally, we benchmark a convolution proof with an 8×8 kernel (such as blurring) on an RGB image (720×480), with the goal of comparing to ZK-IMG [26], which already outperforms [32] by several orders of magnitude. The comparison is only approximate, as their benchmarks are run on more powerful hardware than ours, and the image sizes are not identical.
In this regime, VE_conv takes 3.3 s of proving time and 0.12 ms of verification time, and yields a proof size of 0.64 KB. In ZK-IMG, a 3× larger 1280 × 720 convolution input involves 78 s of proving time (ignoring key generation), 8.12 ms of verification, and an 11 KB proof size (a 20× increase).
For a 128 × 128 input, they report 2.7 s of proving time and 5.3 ms of verification on standard hardware. For the same size and an 8 × 8 kernel, our prover takes 110 ms (25× faster) and our verifier 0.117 ms.
Nevertheless, ZK-IMG implements a complete proof system, while our approach requires an additional polynomial commitment. We expect other simple transformations (cropping, padding, partial censoring, ...) to present similar running times.
Pre-Processing in VE_conv. As discussed in Sections 5.2 and 5.3.1, a pre-processing reshaping step, which can often be embedded into other steps such as activation layers, is required if VE_conv is used to prove a standalone convolution. In that case, the sumcheck of Section 5.3.1 needs to be executed after VE_conv. We do not include this step in our benchmarks, but note that it induces a minimal overhead, as (1) the sumcheck involves strictly fewer variables and rounds than VE_conv, and (2) the prover already has the fingerprints of the reshaped input.
Polynomial Commitment Overhead. A polynomial commitment is used in the AoK described in Proposition 5.2, but not at the VE level. The overhead induced by the PC depends on the chosen scheme, and it affects the efficiency of our solution and of prior work [29] in the same way. In the case of zkCNN, sumchecks take roughly 2/3 of the total prover time, whereas PCs take the remaining 1/3 (see [29], Table 1). Our improvements in the information-theoretic protocol significantly reduce the fraction of time taken by the sumchecks.
For completeness, we benchmark the multilinear KZG from HyperPlonk [4] together with our VE_conv. For a single-channel convolution with n = 256 and w = 4, a PC opening takes 400 ms, whereas the VE sumcheck prover takes 98 ms. The commit operation takes 191 ms. We remark that the PC opening cost gets further amortized when more VEs are composed sequentially. In general, the deeper the model is, the more significant the sumcheck overhead becomes.

Discussion
Our protocols achieve, overall, faster prover times, reduced communication, and faster verification times than existing solutions. As in other works [26,28,29], we found memory usage to be the main bottleneck, the reason being the dynamic-programming technique used by the prover to compute the multilinear extensions. Yet, our approach allows for clearing the memory after every sequential step, as opposed to solutions such as [28] or [26], which are built upon general-purpose proof systems. One avenue for improving the memory bottleneck is to trade memory usage for proving time by applying streaming algorithms for multilinear extensions [12], which is an interesting direction for future work.

Figure 1 :
Figure1: Interactive proof constructed from a verifiable evaluation scheme on fingerprinted data VE and a fingerprinting scheme H.

Figure 4 :
Figure 4: Modular construction of VE_NN and compilation to an argument of knowledge Π_NN. The verifier VE_NN.V is omitted, as it simply runs VE_k.V sequentially.


Figure 5 :
Figure 5: Illustration of an RNN with a loop at layer f_1.

Figure 7 :
Figure 7: Prover time for a varying number of channels ch, out and fixed n = 64 and w = 4.