Optimal Auctions through Deep Learning: Advances in Differentiable Economics

Designing an incentive compatible auction that maximizes expected revenue is an intricate task. The single-item case was resolved in a seminal piece of work by Myerson in 1981, but more than 40 years later, a full analytical understanding of the optimal design still remains elusive for settings with two or more items. In this work, we initiate the exploration of the use of tools from deep learning for the automated design of optimal auctions. We model an auction as a multi-layer neural network, frame optimal auction design as a constrained learning problem, and show how it can be solved using standard machine learning pipelines. In addition to providing generalization bounds, we present extensive experimental results, recovering essentially all known solutions that come from the theoretical analysis of optimal auction design problems and obtaining novel mechanisms for settings in which the optimal mechanism is unknown.


Introduction
Optimal auction design is one of the cornerstones of economic theory. It is of great practical importance as auctions are used across industries and in the public sector to organize the sale of products and services. Concrete examples are the U.S. FCC Incentive Auction, the sponsored search auctions conducted by search engines such as Google, and the auctions run on platforms such as eBay. In the standard independent private valuations model, each bidder has a valuation function over subsets of items, drawn independently from not necessarily identical distributions. It is assumed that the auctioneer knows the value distributions and can use this information in designing the auction. A challenge is that valuations are private, and bidders may not report their valuations truthfully.
In a seminal piece of work, Myerson resolved the optimal auction design problem when there is a single item for sale (Myerson, 1981). Today, after 40 years of intense research, there are some elegant partial characterizations (Manelli and Vincent, 2006; Pavlov, 2011; Haghpanah and Hartline, 2019; Giannakopoulos and Koutsoupias, 2018; Daskalakis et al., 2017; Yao, 2017), but the analytical problem of optimal design is not completely resolved even for a setting with two bidders and two items. At the same time, there have been impressive algorithmic advances (Cai et al., 2012a,b, 2013; Hart and Nisan, 2017; Babaioff et al., 2014; Yao, 2015; Cai and Zhao, 2017; Chawla et al., 2010), although most of them apply to the weaker notion of Bayesian incentive compatibility (BIC). Our focus in this paper is on auctions that satisfy dominant-strategy incentive compatibility (DSIC), which is a more robust and desirable notion of incentive compatibility.
A recent line of work has started to bring in tools from machine learning and computational learning theory to design auctions from samples of bidder valuations. Much of the effort has focused on analyzing the sample complexity of designing revenue-maximizing auctions (Cole and Roughgarden, 2014; Mohri and Medina, 2016; Huang et al., 2018; Morgenstern and Roughgarden, 2015; Gonczarowski and Nisan, 2017; Morgenstern and Roughgarden, 2016; Syrgkanis, 2017; Gonczarowski and Weinberg, 2018; Balcan et al., 2016). A handful of works have leveraged machine learning pipelines to optimize different aspects of mechanisms (Lahaie, 2011; Dütting et al., 2014; Narasimhan et al., 2016), but none of these provides the generality and flexibility of our approach. There have also been other computational approaches to auction design, under the research program of automated mechanism design (Conitzer and Sandholm, 2002, 2004; Sandholm and Likhodedov, 2015) (to which the present paper contributes), but where scalable, they are limited to specialized classes of auctions that are already known to be incentive compatible.

Our Contribution
In this work, we provide the first, general purpose, end-to-end approach for solving the multi-item optimal auction design problem. We use multi-layer neural networks to encode the rules of auction mechanisms, with bidder valuations comprising the input to the network and an allocation and payments comprising the output of the network. We train these neural networks using samples from bidder value distributions and seek to maximize expected revenue subject to constraints for incentive compatibility. We refer to the overarching framework as that of differentiable economics, which references the idea of making use of differentiable representations of economic rules. In this way, we can use stochastic gradient descent for economic design, building on what is a very successful pipeline for deep learning.
The central technical challenge in this work is to achieve incentive compatibility, so that bidders will report true valuations in the equilibrium of the auction. We propose two different approaches to handling incentive compatibility (IC) constraints. In the first, we leverage characterization results for IC mechanisms, and constrain the network architecture appropriately. In the case of single-bidder settings, we show how to make use of menu-based characterizations, which correspond to DSIC mechanisms. We refer to this architecture as RochetNet, reflecting in its naming a connection with a characterization due to Rochet (1987).
The second approach replaces the IC constraints with the requirement of zero expected ex post regret, which is equivalent to DSIC up to measure-zero events. For this, we make use of augmented Lagrangian optimization during training, which has the effect of introducing into the loss function penalty terms that correspond to violations of incentive compatibility. In this way, we minimize during training a combination of negated revenue and a penalty term for IC violations. We refer to this neural network architecture as RegretNet. This approach is applicable to multi-bidder multi-item settings for which we do not have tractable characterizations of IC mechanisms, but will generally only find mechanisms that are approximately incentive compatible. We show through extensive experiments that these two approaches are capable of recovering the designs of essentially all auctions for which theoretical solutions have been developed over the past 40 years, and in the case of RegretNet, we show that the degree of approximation to DSIC is very good. We also demonstrate that this deep learning framework is a useful tool for refuting hypotheses or generating supporting evidence in regard to the conjectured structure of optimal auctions, and that in the case of RochetNet this framework can be used to discover designs that can then be proved to be optimal. We also give generalization bounds that provide confidence intervals on the expected revenue and expected ex post regret, in terms of the empirical revenue and empirical regret achieved during training, the descriptive complexity of the neural network used to encode the allocation and payment rules, and the number of samples used to train the network.

Discussion
While the original work on automated mechanism design (AMD) framed the problem as a linear program (LP) (Conitzer and Sandholm, 2002, 2004), this has severe scalability issues, as the formulation scales exponentially in the number of agents and items (Guo and Conitzer, 2010). We provide a detailed comparison with an LP-based framework, and find that even for a small setting with two bidders and two items (and a discretization of bidder values into eleven bins per item), the corresponding LP takes 62 hours to complete, since the LP needs to handle ≈ 9 × 10^5 decision variables and ≈ 3.6 × 10^6 constraints.
In comparison, differentiable economics leverages the expressive power of neural networks and the ability to enforce complex constraints using a standard machine learning pipeline. This provides for optimization over a broad class of mechanisms without needing to resort to a discretized function representation, and is constrained only by the expressivity of the neural network architecture. For the same setting, our approach finds an auction with low regret in just over 3.7 hours (see Table 13). Moreover, the LP-based approach fails to scale much beyond this point, while the neural-network-based approach continues to scale.
The optimization problems studied here are non-convex and gradient-based approaches may, in general, get stuck in local optima. Empirically, however, this has not been an obstacle to the successful application of deep learning in other problem domains, and there is theoretical support for a "no local optima" phenomenon (see, e.g., Choromanska et al. (2015); Kawaguchi (2016); Du et al. (2019); Allen-Zhu et al. (2019)). We make similar observations for our experiments: our neural network architectures recover optimal solutions, wherever known, despite the formulation being non-convex.
In the case of RegretNet, our framework only provides a guarantee of approximate DSIC. In this regard, we work with expected ex post regret, a quantifiable relaxation of DSIC first introduced by Dütting et al. (2014). An essential aspect is that it quantifies the regret to bidders for truthful bidding given knowledge of the bids of others (hence "ex post"), and thus measures the degree of approximation to DSIC. Indeed, our experiments suggest that this relaxation is a very effective tool for approximating optimal DSIC auctions, with RegretNet attaining a very good fit to known theoretical results.
This work also shows that this neural-network based pipeline can be used to discover new analytical results (see Section 5.5, where we use computational results to guess the analytical structure of an optimal design and duality theory to verify its optimality).

Further Related Work
Since the first version of this paper, there has been considerable follow-up work on the topic of differentiable economics, extending the approach to budget-constrained bidders (Feng et al., 2018), applying specialized architectures for single-bidder settings and using them to derive new analytical results (Shen et al., 2019), minimizing agent payments (Tacchetti et al., 2019), applying to multi-facility location problems (Golowich et al., 2018), applying to two-sided matching (Ravindranath et al., 2021; Feng et al., 2022), incorporating human preferences (Peri et al., 2021), balancing fairness and revenue (Kuo et al., 2020), providing certificates of strategy-proofness (Curry et al., 2020), requiring complete allocations (Curry et al., 2022), developing permutation-equivariant architectures (Rahme et al., 2021a), formulating the problem as a two-player game between a designer and an adversary (Rahme et al., 2021b), using a context-integrated transformer-based neural network architecture for contextual auction design (Duan et al., 2022), and using attention mechanisms through transformers for optimal auction design (Ivanov et al., 2022). There has also been follow-up work on deriving sample complexity bounds for learning a Nash equilibrium (Duan et al., 2021) using tools similar to the ones we use for our generalization bounds.
More recent work has adopted differentiable approaches for the design of taxation policies (Zheng et al., 2022), indirect auctions (Shen et al., 2020; Brero et al., 2021a,b), mitigations to price collusion (Brero et al., 2022), game design (Balaguer et al., 2022), the study of platform economies (Wang et al., 2022b), and for multi-follower Stackelberg games (Wang et al., 2022a). Deep learning has also been used to study other problems within the field of economics, for example using neural networks to predict the behavior of human participants in strategic scenarios (Hartford et al., 2016; Fudenberg and Liang, 2019; Peterson et al., 2021), to provide an automated equilibrium analysis of mechanisms (Thompson et al., 2017), for causal inference (Hartford et al., 2017; Louizos et al., 2017), and for solving for the equilibria of Stackelberg games (Wang et al., 2022a), symmetric auction games (Bichler et al., 2021), and combinatorial games (Raghu et al., 2018). The research described here also relates to the method of empirical mechanism design (Areyan Viqueira et al., 2019; Vorobeychik et al., 2006, 2012; Brinkman and Wellman, 2017), which applies empirical game theory to mechanism design, using empirical game theory to search for the equilibria of induced games by building out a suitable set of candidate strategies (Jordan et al., 2010; Kiekintveld and Wellman, 2008; Wellman, 2006); see also more recent work on policy-space response oracles (Lanctot et al., 2017).

Organization
Section 2 formulates the auction design problem as a learning problem, introduces the characterization-based and characterization-free approaches, and gives the main generalization bounds. Section 3 introduces the network architectures of RochetNet and RegretNet, and instantiates the specific generalization bound for these networks. Section 4 describes the training and optimization procedures, and Section 5 presents extensive experimental results, including experiments that provide support for theoretical conjectures in regard to the design of optimal auctions along with the discovery of new, provably-optimal auction designs. Section 6 concludes.

Preliminaries
We consider a setting with a set of n bidders N = {1, . . . , n} and m items M = {1, . . . , m}. Each bidder i has a valuation function v_i : 2^M → R_{≥0}, where v_i(S) denotes the bidder's value for the subset of items S ⊆ M.
In the simplest case, a bidder may have additive valuations, with a value v_i({j}) for each item j ∈ M, and a value for a subset of items S ⊆ M that is v_i(S) = Σ_{j∈S} v_i({j}). Alternatively, if a bidder's value for a subset of items S ⊆ M is v_i(S) = max_{j∈S} v_i({j}), the bidder has a unit-demand valuation. We also consider bidders with general combinatorial valuations, but defer the details to Appendix A.2 and B.3.
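These two valuation classes can be made concrete with a small sketch (our own illustration, not code from the paper; the function names are ours):

```python
# Illustrative sketch (ours, not from the paper): additive vs. unit-demand
# valuations over m items, given per-item values v({j}).
def additive_value(item_values, bundle):
    """Additive: v(S) is the sum of per-item values over the bundle S."""
    return sum(item_values[j] for j in bundle)

def unit_demand_value(item_values, bundle):
    """Unit-demand: v(S) is the best single item in S (0 for the empty set)."""
    return max((item_values[j] for j in bundle), default=0.0)

values = [0.25, 0.75, 0.5]                 # v({j}) for items j = 0, 1, 2
print(additive_value(values, {0, 2}))      # 0.75
print(unit_demand_value(values, {0, 2}))   # 0.5
```

Both classes are specified by the same m per-item values; they differ only in how values for bundles are aggregated.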
Bidder i's valuation function is drawn independently from a distribution F_i over possible valuation functions V_i. We write v = (v_1, . . . , v_n) for a profile of valuations, and denote V = Π_{i=1}^n V_i. The auctioneer knows the distributions F = (F_1, . . . , F_n), but does not know the bidders' realized valuations v. The bidders report their valuations (perhaps untruthfully), and an auction decides on an allocation of items to the bidders and charges a payment to them.
We denote an auction (g, p) as a pair of allocation rules g_i : V → 2^M and payment rules p_i : V → R_{≥0} (these rules can be randomized). Given bids b = (b_1, . . . , b_n) ∈ V, the auction computes an allocation g(b) and payments p(b) = (p_1(b), . . . , p_n(b)). A bidder with valuation v_i receives utility u_i(v_i; b) = v_i(g_i(b)) − p_i(b) on bid profile b. An auction is dominant-strategy incentive compatible (DSIC) if each bidder's utility is maximized by reporting truthfully no matter what the other bidders report, i.e., u_i(v_i; (v_i, b_{−i})) ≥ u_i(v_i; (b_i, b_{−i})) for every bidder i, valuation v_i ∈ V_i, bid b_i ∈ V_i, and bids b_{−i} of the other bidders. An auction is ex post individually rational (IR) if each bidder receives a non-negative utility when participating truthfully, i.e., u_i(v_i; (v_i, b_{−i})) ≥ 0 for every i, v_i ∈ V_i, and b_{−i}.
In a DSIC auction, it is in the best interest of each bidder to report truthfully, and so the equilibrium revenue on valuation profile v is simply i p i (v). Optimal auction design seeks to identify a DSIC auction that maximizes expected revenue.
There is also a weaker notion of incentive compatibility, Bayesian incentive compatibility (BIC). An auction is BIC if each bidder's utility is maximized in expectation by reporting truthfully when the other bidders also report truthfully, i.e., E_{v_{−i}∼F_{−i}}[u_i(v_i; (v_i, v_{−i}))] ≥ E_{v_{−i}∼F_{−i}}[u_i(v_i; (b_i, v_{−i}))] for every bidder i, valuation v_i ∈ V_i, and bid b_i ∈ V_i.
In this work, we focus on DSIC auctions rather than BIC auctions, since DSIC auctions are preferable in practice: truthful bidding remains an equilibrium without common knowledge of the distributions on valuations or common knowledge of rationality.

Formulation as a Learning Problem
We pose the problem of optimal auction design as a learning problem, where in place of a loss function that measures error against a target label, we adopt as the loss function the negated, expected revenue on valuations drawn from F. We are given a parametric class of auctions, (g^w, p^w) ∈ M, for parameters w ∈ R^d for some d > 0, and a sample of L bidder valuation profiles S = {v^(1), . . . , v^(L)} drawn i.i.d. from F. There is no need to compute equilibrium inputs; rather, we sample true profiles, and seek to learn rules that are DSIC. The goal is to find an auction that minimizes the negated, expected revenue −E_{v∼F}[Σ_{i∈N} p_i^w(v)], among all auctions in M that satisfy DSIC (or just IC). For a single-bidder setting, there is no difference between DSIC and BIC.
We present two approaches for achieving IC. In the first, we leverage a characterization result to constrain the search space so that all mechanisms within this class are IC. In the second, we replace the IC constraints with a differentiable approximation, and move the constraints into the objective via the augmented Lagrangian method. The first approach affords a smaller search space and is exactly DSIC, but only applies to single-bidder multi-item settings. The second approach applies to multi-bidder, multi-item settings, but entails search through a larger parametric space and only achieves approximate IC.
In Appendix A.1, we also describe a construction based on the characterization result of Myerson (1981) for multi-bidder single-item settings, which we refer to as MyersonNet.

Characterization-Based Approach
We begin by describing our first approach, which we refer to as RochetNet, in which we exploit a characterization of DSIC mechanisms to constrain the search space.
We describe the approach for additive valuations, but it can also be extended to unit-demand valuations. For an additive valuation on m items, the utility function u : R^m_{≥0} → R induced for a single bidder by a mechanism (g, p) is

u(v) = Σ_{j∈[m]} g_j(v) v_j − p(v),   (1)

where g_j(v) ∈ {0, 1} indicates whether or not the bidder is assigned item j. We can consider a menu of J choices, for some J ≥ 1, where each choice consists of a possibly randomized allocation, together with a price. For choice j ∈ [J], let α_j ∈ [0, 1]^m specify the randomized allocation, and parameter β_j ∈ R specify the negated price. By choosing the menu item that maximizes the bidder's utility, or the null (no allocation, no payment) outcome when this is better, a menu of size J induces the following utility function:

u(v) = max_{j∈[J]∪{0}} {α_j · v + β_j},   (2)

where choice 0 corresponds to the null outcome, with α_0 = 0 and β_0 = 0. The well-known taxation principle from mechanism design theory tells us that a mechanism that selects the menu choice that maximizes an agent's reported utility, based on its bid b ∈ R^m, is DSIC (Hammond, 1979; Guesnerie, 1995). To see this, observe that the menu does not depend on the reports, and that the agent maximizes its utility by reporting its true valuation function, so that the right choice is made on its behalf. Moreover, the taxation principle also tells us that the use of a menu is without loss of generality for DSIC mechanisms.
Based on this, for a given J ≥ 1, we seek to learn a mechanism with parameters w = (α, β), where α ∈ [0, 1]^{mJ} and β ∈ R^J, to maximize the expected revenue E_{v∼F}[−β_{j*(v)}], where j*(v) ∈ argmax_{j∈[J]∪{0}} {α_j · v + β_j} denotes the best choice for the bidder, with choice 0 corresponding to the null outcome. For a unit-demand bidder, the utility can also be represented via (1), with the additional constraint that Σ_j g_j(v) ≤ 1, ∀v. We discuss this more in Section 3.1.
We also have the following characterization of DSIC mechanisms for the single bidder case.
Theorem 2.1 (Rochet (1987)). The utility function u : R^m_{≥0} → R that is induced by a DSIC mechanism for a single bidder is 1-Lipschitz w.r.t. the ℓ_1-norm, non-decreasing, and convex.

The convexity can be understood by recognizing that the induced utility function (2) is the maximum over a set of hyperplanes, each corresponding to a choice in the menu. Figure 1 illustrates Rochet's theorem for a single item (m = 1) and a menu consisting of four choices (J = 4); the induced utility for choice j at bid b is the line α_j · b + β_j, and the overall induced utility is the upper envelope of these lines. Given this, to find the optimal single-bidder auction we can search over a suitably sized menu and pick the one that maximizes expected revenue. In Section 3.1 we explain how to achieve this by modeling the utility function as a neural network, and formulating the above optimization as a differentiable learning problem.
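As a concrete illustration of the menu-based view (our own sketch; the function names are ours and not from the paper), the induced utility and the bidder's best choice can be computed directly from the menu parameters:

```python
# A minimal sketch (ours) of a menu-based, single-bidder mechanism: a menu
# of J choices (alpha_j, beta_j), with alpha_j an allocation in [0,1]^m and
# beta_j the negated price; choice 0 is the null outcome with alpha_0 = 0,
# beta_0 = 0.
def menu_utility(alphas, betas, v):
    """u(v) = max over the menu hyperplanes alpha_j . v + beta_j and null."""
    utilities = [0.0] + [
        sum(a * x for a, x in zip(alpha, v)) + beta
        for alpha, beta in zip(alphas, betas)
    ]
    return max(utilities)

def best_choice(alphas, betas, v):
    """Index of the utility-maximizing choice; 0 is the null outcome."""
    utilities = [0.0] + [
        sum(a * x for a, x in zip(alpha, v)) + beta
        for alpha, beta in zip(alphas, betas)
    ]
    return max(range(len(utilities)), key=lambda j: utilities[j])

# Single item (m = 1), one menu choice: buy the item at price 3.
alphas, betas = [[1.0]], [-3.0]
print(best_choice(alphas, betas, [5.0]))  # 1: buy, utility 5 - 3 = 2
print(best_choice(alphas, betas, [2.0]))  # 0: null, buying would give -1
```

Since the menu is fixed before bids are received, reporting the true valuation always selects the utility-maximizing choice, which is exactly the taxation-principle argument in the text.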

Characterization-Free Approach
Our second approach, which we refer to as RegretNet, does not require a characterization of IC. Instead, it replaces the IC constraints with a differentiable approximation, and moves them into the objective through a term that accounts for the extent to which the IC constraints are violated.
We measure the extent to which an auction violates IC through the following notion of ex post regret. Fixing the bids of others, the ex post regret for a bidder is the maximum increase in her utility, considering all possible non-truthful bids. For a mechanism (g^w, p^w), with parameters w, let u_i^w(v_i; b) = v_i(g_i^w(b)) − p_i^w(b) denote bidder i's utility under bid profile b. We will be interested in the expected ex post regret for bidder i:

rgt_i(w) = E_{v∼F}[ max_{v_i'∈V_i} u_i^w(v_i; (v_i', v_{−i})) − u_i^w(v_i; (v_i, v_{−i})) ],

where the expectation is over v ∼ F, for model parameters w. We assume that F has full support on the space of valuation profiles V. Given this, and recognizing that the regret is non-negative, an auction satisfies DSIC if and only if rgt_i(w) = 0, ∀i ∈ N, except for measure-zero events. Given this, we re-formulate the learning problem as one of minimizing the expected negated revenue subject to the expected ex post regret being zero for each bidder:

min_w −E_{v∼F}[ Σ_{i∈N} p_i^w(v) ]  s.t.  rgt_i(w) = 0, ∀i ∈ N.   (3)

Given a sample S of L valuation profiles from F, we estimate the empirical ex post regret for bidder i as

\hat{rgt}_i(w) = (1/L) Σ_{ℓ=1}^{L} [ max_{v_i'∈V_i} u_i^w(v_i^{(ℓ)}; (v_i', v_{−i}^{(ℓ)})) − u_i^w(v_i^{(ℓ)}; v^{(ℓ)}) ],

and seek to minimize the empirical loss (negated revenue) subject to the empirical regret being zero for all bidders, giving the following formulation:

min_w −(1/L) Σ_{ℓ=1}^{L} Σ_{i∈N} p_i^w(v^{(ℓ)})  s.t.  \hat{rgt}_i(w) = 0, ∀i ∈ N.   (4)

We additionally require the auction to satisfy IR, which can be ensured by restricting the search space to a class of parametrized auctions that charge no bidder more than her valuation for an allocation.
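The empirical ex post regret can be approximated by brute force for small examples. The following sketch (ours, not the paper's code; in practice misreports are optimized with gradient ascent rather than enumerated on a grid) estimates a bidder's empirical regret for a single-item auction, and confirms it is zero for the DSIC second-price auction:

```python
# Sketch (ours) of brute-force empirical ex post regret estimation for a
# single-item auction, given allocation/payment rules as functions of the
# bid profile.
def utility(value, bids, i, alloc, pay):
    return value * alloc(bids)[i] - pay(bids)[i]

def empirical_regret(i, profiles, alloc, pay, misreports):
    total = 0.0
    for v in profiles:
        truthful = utility(v[i], v, i, alloc, pay)
        best = max(utility(v[i], v[:i] + (b,) + v[i + 1:], i, alloc, pay)
                   for b in misreports)
        total += max(best - truthful, 0.0)
    return total / len(profiles)

# Second-price auction (two bidders): highest bid wins, winner pays the
# other bid. It is DSIC, so the estimated regret is zero on any sample.
alloc = lambda b: (1.0, 0.0) if b[0] >= b[1] else (0.0, 1.0)
pay = lambda b: (b[1], 0.0) if b[0] >= b[1] else (0.0, b[0])
grid = [x / 10 for x in range(11)]
print(empirical_regret(0, [(0.3, 0.7), (0.9, 0.2)], alloc, pay, grid))  # 0.0
```

For a non-DSIC rule (e.g., a first-price payment rule), the same estimator returns a strictly positive value on most samples.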
In Section 3, we model the allocation and payment rules through a neural network, and incorporate the IR requirement within the architecture. In Section 4 we describe how the IC constraints can be incorporated into the objective using Lagrange multipliers, so that the resulting neural net can be trained with standard pipelines.

Quantile-Based Regret
The intent is that the characterization-free approach leads to mechanisms with low expected ex post regret. By seeking to minimize the expected ex post regret, we can also obtain regret bounds of the form "the probability that the ex post regret is larger than x is at most q." For this, we define quantile-based ex post regret.

Definition 2.1 (Quantile-based ex post regret). For each bidder i, and q with 0 < q < 1, the q-quantile-based ex post regret, rgt^q_i(w), induced by the probability distribution F on valuation profiles, is defined as the smallest x such that

Pr_{v∼F}( max_{v_i'∈V_i} u_i^w(v_i; (v_i', v_{−i})) − u_i^w(v_i; (v_i, v_{−i})) > x ) ≤ q.

We can bound the q-quantile-based regret rgt^q_i(w) by the expected ex post regret rgt_i(w) as in the following lemma. The proof appears in Appendix D.1.
Lemma 2.1. For any fixed q, 0 < q < 1, and bidder i, we can bound the q-quantile-based ex post regret by

rgt^q_i(w) ≤ rgt_i(w) / q.

Using Lemma 2.1, we can show, for example, that when the expected ex post regret is 0.001, the probability that the ex post regret exceeds 0.01 is at most 10%.
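The lemma is a Markov-style bound, and it is easy to check numerically (our own sketch, with an arbitrary synthetic regret distribution): at most a q-fraction of draws can exceed the threshold (expected regret)/q.

```python
import random

# Numerical sanity check (ours) of the Markov-style bound in Lemma 2.1:
# the q-quantile regret is at most (expected regret) / q, i.e., at most a
# q-fraction of profiles can have ex post regret above that threshold.
random.seed(0)
regrets = [random.expovariate(1000.0) for _ in range(100_000)]  # mean ~0.001
expected = sum(regrets) / len(regrets)
q = 0.10
bound = expected / q
exceed_frac = sum(r > bound for r in regrets) / len(regrets)
print(exceed_frac <= q)  # True, for any regret distribution
```

The check passes for any non-negative regret distribution, since it is just Markov's inequality applied to the (empirical) distribution of regrets.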

Generalization Bounds
We conclude this section with two generalization bounds. The first is a lower bound on the expected revenue in terms of the empirical revenue during training, the complexity (or capacity) of the auction class that we optimize over, and the number of sampled valuation profiles. The second is an upper bound on the expected ex post regret in terms of the empirical regret during training, the complexity (or capacity) of the auction class that we optimize over, and the number of sampled valuation profiles.
We measure the capacity of an auction class M using a definition of covering numbers from the ranking literature (Rudin and Schapire, 2009). For this, define the ℓ_{∞,1} distance between auctions (g, p), (g', p') ∈ M as

max_{v∈V} Σ_{i∈N, j∈M} |g_{ij}(v) − g'_{ij}(v)| + Σ_{i∈N} |p_i(v) − p'_i(v)|.

For any ε > 0, let N_∞(M, ε) be the minimum number of balls of radius ε required to cover M under the ℓ_{∞,1} distance.
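On a finite grid of valuation profiles, this distance is straightforward to compute; the following sketch (ours; the auction encoding as a pair of flattened tuples is an assumption for illustration) compares two posted-price auctions for a single bidder and a single item:

```python
# Sketch (ours) of the ell_{infty,1} distance on a finite grid of profiles;
# an auction here maps a profile to (allocations, payments), flattened over
# bidder-item pairs and bidders, respectively.
def linf_l1_distance(profiles, auction_a, auction_b):
    worst = 0.0
    for v in profiles:
        (ga, pa), (gb, pb) = auction_a(v), auction_b(v)
        gap = sum(abs(x - y) for x, y in zip(ga, gb)) \
            + sum(abs(x - y) for x, y in zip(pa, pb))
        worst = max(worst, gap)
    return worst

def posted_price(r):
    """Single bidder, single item: sell at price r if the value is >= r."""
    return lambda v: ((1.0,), (r,)) if v[0] >= r else ((0.0,), (0.0,))

profiles = [(x / 20,) for x in range(21)]
print(linf_l1_distance(profiles, posted_price(0.5), posted_price(0.6)))  # 1.5
```

The worst case is a value between the two prices (e.g., 0.55), where the auctions disagree both on the allocation (gap 1) and the payment (gap 0.5).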
Theorem 2.2. For each bidder i, assume that the valuation function v_i satisfies v_i(S) ≤ 1, ∀S ⊆ M. Let M be a class of auctions that satisfy individual rationality. Fix δ ∈ (0, 1). With probability at least 1 − δ over the draw of a sample S of L profiles from F, for any (g^w, p^w) ∈ M,

E_{v∼F}[ Σ_{i∈N} p_i^w(v) ] ≥ (1/L) Σ_{ℓ=1}^{L} Σ_{i∈N} p_i^w(v^{(ℓ)}) − 2Δ_L − C √(log(1/δ)/L),

and, for each bidder i,

rgt_i(w) ≤ \hat{rgt}_i(w) + 2Δ_L + C' √(log(1/δ)/L),

where Δ_L = inf_{ε>0} { ε + 2 √( 2 log N_∞(M, ε/2) / L ) }, and C, C' are distribution-independent constants.
See Appendix D.2 for the proof. If the term Δ_L in the above bound goes to zero as the sample size L increases, then both bounds vanish as L → ∞. In Theorem 3.1 in Section 3, we bound Δ_L for the neural network architectures we present in this work.

Neural Network Architectures
We describe the RochetNet architecture for single-bidder multi-item settings in Section 3.1, and the RegretNet architecture for multi-bidder multi-item settings in Section 3.2. We focus on additive valuations and unit-demand valuations, and discuss how to extend the constructions to allow for combinatorial valuations in Appendix A.2.

The RochetNet Architecture
RochetNet operationalizes the idea of menu-based mechanisms through a suitable neural network architecture. We first describe the construction for additive valuations and then explain how to extend it to unit-demand valuations. The parameters correspond to a menu of J choices, where each choice j ∈ [J] is associated with a randomized allocation α_j ∈ [0, 1]^m and a negated price β_j ∈ R (the β_j's will be negative, and the smaller the value of β_j, the larger the payment). The network selects the choice for the bidder that maximizes the bidder's reported utility given its bid, or chooses the null outcome (no allocation, no payment) when this is preferred. This ensures DSIC and IR.
The utility function, represented as a single-layer neural network, is illustrated in Figure 2, where each h_j(b) = α_j · b + β_j for bid b ∈ R^m. The input layer takes a bid b ∈ R^m, and the output of the network is the induced utility. For input b, j*(b) ∈ argmax_{j∈[J]∪{0}} {α_j · b + β_j} denotes the best choice for the bidder, where choice 0 corresponds to α_0 = 0, β_0 = 0, and the null outcome. This best choice defines the allocation and payment rules: for bid b, the allocation is g^w(b) = α_{j*(b)} and the payment is p^w(b) = −β_{j*(b)}.

Figure 2: RochetNet: Neural network representation of a non-negative, monotone, convex induced utility.

By using a large number of hyperplanes, one can use this neural network architecture to search over a sufficiently rich class of DSIC and IR auctions for the single-bidder, multi-item setting. Given the RochetNet construction, we seek to minimize the negated, expected revenue, E_{v∼F}[β_{j*(v)}]. To ensure that the objective is a continuous function of the parameters α and β, we adopt during training a softmax operation in place of the argmax, and the following loss function:

L(α, β) = E_{v∼F}[ Σ_{j∈[J]∪{0}} ∇_j(v) β_j ],   (5)

where ∇_j(v) = softmax_j(κ(α_0 · v + β_0), . . . , κ(α_J · v + β_J)), and κ > 0 is a constant that controls the quality of the approximation. Here, the softmax function, softmax_j(κx_0, . . . , κx_J) = e^{κx_j} / Σ_{j'} e^{κx_{j'}}, takes as input J + 1 real numbers and returns a probability distribution of J + 1 probabilities, proportional to the exponentials of the inputs. We only use this approximation during training, and always use the argmax during testing, to guarantee that the mechanism is DSIC. During training, we seek to optimize the parameters of the neural network, i.e., α ∈ [0, 1]^{mJ} and β ∈ R^J, to minimize the loss (5). For this, given a sample S = {v^(1), . . . , v^(L)} drawn from F, we use stochastic gradient descent to optimize an empirical version of the loss. This approach easily extends to a single bidder with a unit-demand valuation.
In this case, the new requirement is that the sum of the allocation probabilities cannot exceed one. This can be enforced by restricting the coefficients of each hyperplane to sum to at most one, i.e., Σ_{k=1}^{m} α_{jk} ≤ 1, ∀j ∈ [J], and α_{jk} ≥ 0, ∀j ∈ [J], k ∈ [m]. To achieve this constraint, we can reparameterize α_{jk} as softmax_k(γ_{j1}, . . . , γ_{jm}), where γ_{jk} ∈ R, ∀j ∈ [J], k ∈ [m]. With this restriction, the resulting mechanism is DSIC for unit-demand bidders, since the selected menu choice corresponds to a distribution over single-item allocations.
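The softmax relaxation used during training can be sketched as follows (our own illustration; kappa stands for the temperature constant κ from the text, and the function names are ours). With a sharp κ, the surrogate closely matches the hard argmax choice:

```python
import math

# Sketch (ours) of the softmax-relaxed RochetNet training objective: the
# argmax over menu utilities is replaced by a softmax with temperature
# kappa, so the (negated) revenue becomes differentiable in alpha, beta.
def softmax(xs):
    m = max(xs)                     # shift for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def soft_negated_revenue(alphas, betas, v, kappa=100.0):
    """Differentiable surrogate for beta_{j*(v)}, the negated payment."""
    utilities = [0.0] + [sum(a * x for a, x in zip(alpha, v)) + beta
                         for alpha, beta in zip(alphas, betas)]
    probs = softmax([kappa * u for u in utilities])
    prices = [0.0] + list(betas)    # beta_0 = 0 for the null outcome
    return sum(p * b for p, b in zip(probs, prices))

# Single item sold at price 3 to a bidder with value 5: with sharp kappa,
# the surrogate recovers the hard choice's negated revenue of -3.
print(round(soft_negated_revenue([[1.0]], [-3.0], [5.0]), 4))  # -3.0
```

At test time, as in the text, the exact argmax is used instead, so the deployed mechanism is exactly DSIC regardless of the temperature used during training.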

The RegretNet Architecture
We next describe the architecture for the characterization-free, RegretNet approach. In this case, we train a neural network that explicitly encodes a multi-bidder allocation rule and payment rule. The architecture consists of two logically distinct components that form part of a single network: the allocation component and the payment component. These are trained together, and their outputs are used to compute the regret and revenue, and thus the quantities used by the loss function.

Additive Valuations
An overview of the RegretNet architecture for additive valuations is given in Figure 3. The allocation component encodes a randomized allocation rule g^w : R^{nm} → [0, 1]^{nm} and the payment component encodes a payment rule p^w : R^{nm} → R^n_{≥0}, both of which are modeled as feed-forward, fully-connected networks with a tanh activation function in each of the hidden nodes. The input layer consists of bids b_{ij} ≥ 0 representing the valuation of bidder i for item j.
The allocation component outputs a vector of allocation probabilities z_{1j} = g_{1j}(b), . . . , z_{nj} = g_{nj}(b), for each item j ∈ [m]. To ensure feasibility, i.e., that the probability of an item being allocated is at most one, the allocations are computed using a softmax activation function, so that for all items j, we have Σ_{i=1}^{n} z_{ij} ≤ 1. To accommodate the possibility of an item not being assigned, we include a dummy node in the softmax computation to hold the residual allocation probability. The payment component outputs a payment for each bidder that denotes the amount the bidder should pay in expectation for a particular bid profile.
To ensure that the auction satisfies ex post IR, i.e., does not charge a bidder more than her expected value for the allocation, the network first computes a normalized payment \tilde{p}_i ∈ [0, 1] for each bidder i using a sigmoidal unit, and then outputs a payment p_i = \tilde{p}_i (Σ_{j=1}^{m} z_{ij} b_{ij}), where the z_{ij}'s are the outputs from the allocation component. This guarantees ex post IR, since the payment can be represented as a distribution over payments for each allocation in the support of the randomized allocation, where each payment is at most the bidder's reported value for that allocation.
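Both feasibility devices, the per-item softmax with a dummy node and the scaled normalized payment, can be sketched in a few lines (our own illustration, not the paper's code):

```python
import math

# Sketch (ours) of the two feasibility devices for additive valuations in
# RegretNet: a per-item softmax with a dummy slot, so an item's allocation
# probabilities sum to at most one, and a normalized payment
# ptilde_i in [0, 1] scaling the bidder's expected value, enforcing IR.
def item_allocation(bidder_scores):
    """Allocation probabilities for one item across n bidders (+ dummy)."""
    exps = [math.exp(s) for s in bidder_scores] + [1.0]  # dummy score 0
    total = sum(exps)
    return [e / total for e in exps[:-1]]

def ir_payment(ptilde_i, z_i, b_i):
    """p_i = ptilde_i * sum_j z_ij * b_ij, never above the expected value."""
    return ptilde_i * sum(z * b for z, b in zip(z_i, b_i))

z = item_allocation([2.0, 1.0])       # two bidders compete for one item
print(sum(z) < 1.0)                   # True: the dummy absorbs the rest
print(ir_payment(0.5, [0.7], [4.0]))  # 1.4, at most z * b = 2.8
```

In the full architecture these quantities are the outputs of the allocation and payment networks; here the scores and ptilde_i are fixed by hand purely to show the feasibility and IR properties.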

Unit-Demand Valuations
The allocation component for unit-demand bidders is the feed-forward network shown in Figure 4. For revenue maximization in this setting, it is sufficient to consider allocation rules that assign at most one item to each bidder. In the case of randomized allocation rules, this requires that the total allocation probability to each bidder is at most one, i.e., Σ_j z_{ij} ≤ 1, ∀i ∈ [n]. We also require that no item is over-allocated, i.e., Σ_i z_{ij} ≤ 1, ∀j ∈ [m]. Hence, we design the allocation component such that the matrix of output probabilities [z_{ij}] is doubly stochastic. In particular, the allocation component computes two sets of scores, the s_{ij}'s and the s'_{ij}'s. Let s, s' ∈ R^{nm} denote the corresponding matrices. The first set of scores is normalized along the rows, and the second set of scores is normalized along the columns; both normalizations can be performed by passing the scores through softmax functions. The allocation for bidder i and item j is then computed as the minimum of the corresponding normalized scores:

z_{ij} = ϕ^{DS}_{ij}(s, s') = min { e^{s_{ij}} / Σ_{k=1}^{m+1} e^{s_{ik}} , e^{s'_{ij}} / Σ_{k=1}^{n+1} e^{s'_{kj}} },

where indices n + 1 and m + 1 denote dummy inputs that correspond to an item not being allocated to any bidder and a bidder not being allocated any item, respectively. We first show that ϕ^{DS}(s, s') as constructed is doubly stochastic, and that we do not lose generality through this constructive approach. See Appendix D.3 for a proof.
It remains to show that doubly stochastic matrices correspond to lotteries over one-to-one assignments. This is an easy corollary of Birkhoff (1946), and also a special case of the bihierarchy structure proposed in Budish et al. (2013) (Theorem 1). We state it in the following lemma for completeness.

Lemma 3.2 (Birkhoff (1946)). Any doubly stochastic matrix A ∈ R^{n×m} can be represented as a convex combination of matrices B_1, . . . , B_k, where each B_ℓ ∈ {0, 1}^{n×m} has at most one non-zero entry in each row and in each column.

Budish et al. (2013) also propose a polynomial-time algorithm to decompose a doubly stochastic matrix in this way. The payment component for unit-demand valuations is the same as for the case of additive valuations (see Figure 3).
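The min-of-two-softmaxes construction can be sketched as follows (our own illustration; we make the dummy scores explicit as fixed zeros). By construction, every row and every column of the output sums to at most one:

```python
import math

# Sketch (ours) of the doubly stochastic allocation layer for unit-demand
# bidders: rows of s are softmax-normalized per bidder (with a dummy item),
# columns of s' per item (with a dummy bidder), and z_ij is the minimum of
# the two normalized scores.
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def phi_ds(s, s_prime):
    """z_ij = min(row-normalized s_ij, column-normalized s'_ij)."""
    n, m = len(s), len(s[0])
    rows = [softmax(list(s[i]) + [0.0]) for i in range(n)]        # dummy item
    cols = [softmax([s_prime[i][j] for i in range(n)] + [0.0])    # dummy bidder
            for j in range(m)]
    return [[min(rows[i][j], cols[j][i]) for j in range(m)] for i in range(n)]

z = phi_ds([[1.0, 2.0], [2.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print(all(sum(row) <= 1.0 for row in z))                             # bidders feasible
print(all(sum(z[i][j] for i in range(2)) <= 1.0 for j in range(2)))  # items feasible
```

Each entry is bounded by both its row-softmax and column-softmax value, and each softmax (including its dummy slot) sums to one, which is exactly why the output is doubly stochastic.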

Covering Number Bounds
We conclude this section by instantiating our generalization bound from Section 2.4 to RegretNet, where we have both a regret and a revenue term. Analogous results can also be stated for RochetNet, where we only have a revenue term. Here, ‖·‖_1 is the induced matrix norm, i.e., ‖w‖_1 = max_j Σ_i |w_ij|.
Theorem 3.1. For RegretNet with R hidden layers, K nodes per hidden layer, d_g parameters in the allocation component, d_p parameters in the payment component, m items, n bidders, a sample size of L, and the vector of all model parameters w satisfying ‖w‖_1 ≤ W, valid bounds can be given for the ∆_L term defined in Theorem 2.2 for each bidder valuation type, including (a) additive valuations. The proof is given in Appendix D.5. As the sample size L → ∞, the term ∆_L → 0. The dependence of the result on the number of layers, nodes, and parameters in the network is similar to standard covering number bounds for neural networks (Anthony and Bartlett, 2009).

Training the Networks
We next describe how we train the neural network architectures presented in the previous sections.
The approach that we take for RochetNet is standard (projected) stochastic gradient descent (SGD) on the loss function L(α, β) in Equation (5). For additive valuations, we project each weight α_jk into [0, 1] during training to guarantee feasibility.
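The projected SGD loop can be sketched as follows. This is a minimal numpy sketch under stated assumptions: it uses one common menu parameterization, with option utilities u_k(v) = α_k·v − β_k, payment β_k for the chosen option, and an appended zero-utility null option; finite-difference gradients stand in for the autodiff a real implementation would use, and `menu_loss` and `projected_sgd_step` are illustrative names:

```python
import numpy as np

def menu_loss(alpha, beta, V, kappa=1000.0):
    """Negated soft revenue of a menu (alpha_k, beta_k) on valuations V.

    The hard argmax over menu choices is replaced by a softmax with
    temperature kappa, so the loss is differentiable in the parameters.
    """
    U = V @ alpha.T - beta                       # (L, K) option utilities
    U = np.concatenate([U, np.zeros((len(V), 1))], axis=1)  # null option
    P = np.concatenate([beta, [0.0]])            # payment of each option
    W = np.exp(kappa * (U - U.max(axis=1, keepdims=True)))
    W /= W.sum(axis=1, keepdims=True)
    return -(W @ P).mean()

def projected_sgd_step(alpha, beta, V, lr=0.01, eps=1e-4, kappa=1000.0):
    """One projected SGD step with finite-difference gradients (a sketch)."""
    for arr in (alpha, beta):
        flat = arr.ravel()
        grad = np.empty_like(flat)
        for i in range(flat.size):
            old = flat[i]
            flat[i] = old + eps; up = menu_loss(alpha, beta, V, kappa)
            flat[i] = old - eps; dn = menu_loss(alpha, beta, V, kappa)
            flat[i] = old
            grad[i] = (up - dn) / (2 * eps)
        flat -= lr * grad
    np.clip(alpha, 0.0, 1.0, out=alpha)          # project alphas into [0, 1]
    return alpha, beta
```

The projection step at the end is the only part that differs from plain SGD; it enforces feasibility of the menu allocations after every update.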
In the case of RegretNet, we need to take care of incentive alignment directly. We use the augmented Lagrangian method to solve the constrained training problem in (4) over the space of neural network parameters w. The Lagrangian function for the optimization problem, augmented with a quadratic penalty term for violating the constraints, is

C_ρ(w; λ) = −(1/L) Σ_{ℓ=1}^L Σ_{i∈N} p^w_i(v^(ℓ)) + Σ_{i∈N} λ_i rgt_i(w) + (ρ/2) Σ_{i∈N} (rgt_i(w))²,

where λ ∈ R^n is a vector of Lagrange multipliers, and ρ > 0 is a fixed parameter that controls the weight on the quadratic penalty. The solver is described in Algorithm 1 and alternates between the following updates on the model parameters and the Lagrange multipliers: (a) w^{t+1} = w^t − η ∇_w C_ρ(w^t; λ^t), and (b) λ_i^{t+1} = λ_i^t + ρ rgt_i(w^{t+1}), ∀i ∈ N. We divide the training sample S into minibatches of size B, and perform several passes over the training samples (with random shuffling of the data after each pass). We denote the minibatch received at iteration t by S_t = {v^(1), …, v^(B)}. The update (a) on model parameters involves an unconstrained optimization of C_ρ over w and is performed using a gradient-based optimizer.
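The alternation between updates (a) and (b) can be illustrated on a toy constrained problem. This is a sketch under a stand-in objective: in RegretNet, f below would be the negated empirical revenue and g the per-bidder regret; here they are replaced by simple analytic functions whose solution is known:

```python
import numpy as np

# Toy instance of the augmented Lagrangian scheme used for RegretNet:
#   minimize f(w) = -(w1 + w2)   subject to   g(w) = w1^2 + w2^2 - 1 = 0.
# The optimum is w* = (1/sqrt(2), 1/sqrt(2)) with multiplier lambda* = 1/sqrt(2).

def solve(rho=1.0, inner_steps=200, outer_steps=50, lr=0.01):
    w = np.array([0.1, 0.1])
    lam = 0.0
    for _ in range(outer_steps):
        for _ in range(inner_steps):             # update (a): descend C_rho in w
            g = w @ w - 1.0
            grad = -np.ones(2) + (lam + rho * g) * 2 * w
            w -= lr * grad
        lam += rho * (w @ w - 1.0)               # update (b): ascend in lambda
    return w, lam

w, lam = solve()
```

The multiplier update (b) raises the price of constraint violation until the inner minimization (a) is driven onto the constraint, which is exactly the role the regret multipliers play during RegretNet training.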
Let rgt_i(w) denote the empirical regret in (3) computed on minibatch S_t. The gradient of C_ρ with respect to w for fixed λ_t combines the gradient of the negated revenue term with gradient terms g_{ℓ,i} arising from the regret of each bidder i on each valuation profile ℓ.

The Lagrange multipliers are updated once every Q iterations. The terms rgt_i and g_{ℓ,i} in turn involve a "max" over misreports for each bidder i and valuation profile ℓ. We solve this inner maximization over misreports using another gradient-based optimizer. In particular, we maintain a misreport v'^(ℓ)_i for each bidder i and valuation profile ℓ. For each minibatch, we compute the optimal misreport, for each agent i and each valuation profile ℓ, by taking Γ gradient updates from a randomly initialized valuation, each update of the form v'^(ℓ)_i ← v'^(ℓ)_i + γ ∇_{v'_i} u^w_i(v^(ℓ)_i; (v'^(ℓ)_i, v^(ℓ)_{−i})) for some γ > 0. This is in the spirit of adversarial machine learning, where these gradient steps on the input are taken to try to find a misreport for the agent that "defeats" the incentive alignment of the mechanism.

Figure 5 gives a visualization of this search for defeating misreports when learning an optimal auction for a problem with a single bidder with an additive valuation over two items, where the bidder's value for each item is an independent draw from U[0, 1] (see Section 5.3, Setting A). In the visualization, the bidder has true valuation (v_1, v_2) = (0.1, 0.8), with this input represented as a green dot. The red crosses represent possible misreports. The heat map shows the utility gain for this bidder when bidding some amount (b_1, b_2) ∈ [0, 1]² rather than truthfully. This mechanism is already approximately DSIC and the utility gain is negative everywhere (and truthful bidding has zero regret), with shades of yellow corresponding to a misreport that is almost as good as a true report and shades of green towards blue corresponding to a harmful misreport. We illustrate the use of input gradients by initializing each of 10 possible misreports (we use 10 misreports for illustration; in our experiments we initialize only a single misreport), and performing Γ = 20 gradient-ascent steps (9) for each misreport. Figure 5 shows the initial misreports along with a new snapshot of the location of each misreport every four gradient-ascent steps.

Figure 5: The gradient-based approach to regret approximation, shown for a well-trained auction for Setting A. The top left plot shows the true valuation (green dot) and ten random initial misreports (red dots). The remaining plots give snapshots of the progress of gradient ascent on the input, showing this every four steps.
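This misreport search can be sketched against a fixed mechanism. The sketch below is illustrative, not the trained network: the mechanism is the Manelli-Vincent menu for Setting A with a hypothetical softmax smoothing over menu choices (so the bid utility is differentiable, as during training), and finite-difference gradients stand in for autodiff. Because the mechanism is approximately truthful, the attainable utility gain is at most the small smoothing gap:

```python
import numpy as np

# Fixed menu (Manelli-Vincent optimal menu for Setting A): allocation rows and prices.
MENU = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
PRICES = np.array([(4 - np.sqrt(2)) / 3, 2 / 3, 2 / 3, 0.0])

def utility(v, b, kappa=50.0):
    """Expected utility at true value v when bidding b, with softmax-smoothed choice."""
    scores = kappa * (MENU @ b - PRICES)
    w = np.exp(scores - scores.max()); w /= w.sum()
    return w @ (MENU @ v - PRICES)

def best_misreport(v, restarts=10, steps=20, gamma=0.1, eps=1e-5, seed=0):
    """Random-restart gradient ascent on the bid, returning the best misreport
    found and its utility gain over truthful bidding."""
    rng = np.random.default_rng(seed)
    best_b, best_u = v.copy(), utility(v, v)
    for _ in range(restarts):
        b = rng.uniform(0, 1, size=2)
        for _ in range(steps):                   # gradient ascent on the input
            grad = np.array([(utility(v, b + eps * e) - utility(v, b - eps * e))
                             / (2 * eps) for e in np.eye(2)])
            b = np.clip(b + gamma * grad, 0, 1)
        if utility(v, b) > best_u:
            best_b, best_u = b, utility(v, b)
    return best_b, best_u - utility(v, v)
```

Running this at the true valuation (0.1, 0.8) from the visualization, the gain is non-negative by construction (truthful bidding is a candidate) but tiny, reflecting that the mechanism is approximately DSIC.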
We use the Adam optimizer (Kingma and Ba, 2014) for the updates on model parameters w and misreports v'^(ℓ)_i. 5 Since the optimization problem is non-convex, the solver is not guaranteed to reach a globally optimal solution. However, this training algorithm proves very effective in our experiments. The learned auctions incur very low regret and closely match the structure of optimal auctions in settings where this structure is known from existing theory.

Experimental Results
In this section, we demonstrate that our approach can recover near-optimal auctions for essentially all settings for which an analytical solution is known, that it is an effective tool for confirming or refuting hypotheses about optimal designs, and that it can find new auctions for settings where there is no known analytical solution. We present a representative subset of the results here, and provide additional experimental results in Appendix B.

Setup
We implement our framework using the TensorFlow deep learning library. For RochetNet we initialized parameters α and β in (5) using a random uniform initializer over the interval [0,1] and a zero initializer, respectively. For RegretNet we used the tanh activation function at the hidden nodes, and Glorot uniform initialization (Glorot and Bengio, 2010). We perform cross validation to decide on the number of hidden layers and the number of nodes in each hidden layer. We include exemplary numbers that illustrate the tradeoffs in Section 5.7.
We trained RochetNet on 2^15 valuation profiles, freshly sampled at each iteration in an online manner. We used the Adam optimizer with a learning rate of 0.1 for 20,000 iterations. The parameter κ in Equation (6) was set to 1,000. Unless specified otherwise, we used a max network over 1,000 linear functions to model the induced utility functions, and report our results on a sample of 10,000 profiles.
For RegretNet we used a sample of 640,000 valuation profiles for training and a sample of 10,000 profiles for testing. The augmented Lagrangian solver was run for a maximum of 80 epochs (full passes over the training set) with a minibatch size of 128. The value of ρ in the augmented Lagrangian was set to 1.0 and incremented every two epochs.
An update on w_t was performed for every minibatch using the Adam optimizer with learning rate 0.001. For each update on w_t, we ran Γ = 25 misreport update steps with learning rate 0.1. At the end of the 25 updates, the optimized misreports for the current minibatch were cached and used to initialize the misreports for the same minibatch in the next epoch. An update on λ_t was performed once every 100 minibatches (i.e., Q = 100).
We ran all experiments on a compute cluster with NVIDIA Graphics Processing Unit (GPU) cores.

Evaluation
In addition to the revenue of the learned auction on a test set, we also evaluate the regret achieved by RegretNet, averaged across all bidders and test valuation profiles, i.e., rgt = (1/n) Σ_{i=1}^n rgt_i(g^w, p^w). Each rgt_i has an inner "max" of the utility function over misreports v'_i ∈ V_i (see (3)). We evaluate these terms by running gradient ascent on v'_i with a step size of 0.1 for 2,000 iterations (we test 1,000 different random initial v'_i and report the one that achieves the largest regret).
For some of the experiments we also report the total time required to train the network. This time is incurred during offline training, while the allocation and payments can be computed in a few milliseconds once the network is trained.

The Manelli-Vincent and Pavlov Auctions
As a representative example of the analytical results that we can recover with our approach, we discuss the Manelli-Vincent and Pavlov auctions (Manelli and Vincent, 2006; Pavlov, 2011). We specifically consider the following single-bidder, two-item settings: A. Single bidder with additive valuations over two items, where the item values are independent draws from U[0, 1].
B. Single bidder with unit-demand valuations over two items, where the item values are independent draws from U[2, 3].
The optimal design for the first setting is given by Manelli and Vincent (2006), who show that the optimal mechanism is deterministic and offers the bidder three options: receive both items and pay (4 − √2)/3, receive item 1 and pay 2/3, or receive item 2 and pay 2/3. For the second setting, Pavlov (2011) shows that it is optimal to offer a fair lottery (1/2, 1/2) over the items (at a discount), or to purchase either item at a fixed price. For the parameters here, the price for the lottery is (8 + √22)/6 ≈ 2.115 and the price for an individual item is 1/6 + (8 + √22)/6 ≈ 2.282. We used two hidden layers with 100 hidden nodes in RegretNet for these settings. A visualization of the optimal allocation rule and those learned by RochetNet and RegretNet is given in Figure 6; panels (c) and (d) are for Setting B, and the panels describe the probability that the bidder is allocated item 1 (left) and item 2 (right) for different valuation inputs, with the optimal auctions described by the regions separated by the dashed black lines and the numbers in black giving the optimal allocation probability in each region. Figure 7(a) gives the optimal revenue, the revenue and regret obtained by RegretNet, and the revenue obtained by RochetNet. Figure 7(b) shows how these terms evolve during training in RegretNet.
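The revenue of the three-option Manelli-Vincent menu can be checked by a short Monte Carlo simulation: the bidder simply selects the utility-maximizing option, and the expected payment approaches the optimal revenue for Setting A (≈ 0.5492; this number follows from the menu prices above rather than being quoted from the text):

```python
import numpy as np

# Monte Carlo evaluation of the Manelli-Vincent menu for Setting A
# (two i.i.d. U[0,1] items, one additive bidder).
rng = np.random.default_rng(1)
V = rng.uniform(0, 1, size=(500_000, 2))
prices = np.array([0.0, 2 / 3, 2 / 3, (4 - np.sqrt(2)) / 3])
allocs = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
choice = np.argmax(V @ allocs.T - prices, axis=1)  # utility-maximizing option
revenue = prices[choice].mean()                    # expected payment
```

With half a million samples the estimate is well within a fraction of a percent of 0.5492, matching the accuracy reported for RochetNet on this setting.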
We find that both approaches essentially recover the optimal design, not only in terms of revenue, but also in terms of the allocation rule and transfers. The auctions learned by RochetNet are exactly DSIC and match the optimal revenue precisely, with sharp decision boundaries in the allocation and payment rule. The decision boundaries for RegretNet are smoother, but still remarkably accurate. The revenue achieved by RegretNet matches the optimal revenue up to a < 1% error term and the regret it incurs is < 0.001. The plots of the test revenue and regret show that the augmented Lagrangian method is effective in driving the test revenue and the test regret towards optimal levels.
The additional domain knowledge incorporated into the RochetNet architecture leads to exactly DSIC mechanisms that match the optimal design more accurately, and speeds up computation (the training took about 10 minutes compared to 11 hours). On the other hand, we find it surprising how well RegretNet performs given that it starts with no domain knowledge at all.
We present and discuss a host of additional experiments with single-bidder, two-item settings in Appendix B.

The Straight-Jacket Auction
Extending the analytical result of Manelli and Vincent (2006) to a single bidder and an arbitrary number of items (even with additive preferences, all uniform on [0, 1]) has proven elusive. It is not even clear whether the optimal mechanism is deterministic or requires randomization.

Figure 8: The revenue of the SJA (Giannakopoulos and Koutsoupias, 2018) and the test revenue of the auction learned by RochetNet, for various numbers of items m. The SJA is known to be optimal for up to six items and conjectured to be optimal for any number of items.
A breakthrough came with Giannakopoulos and Koutsoupias (2018), who were able to find a pattern in the results for two items and three items. The proposed mechanism, the Straight-Jacket Auction (SJA), offers bundles of items at fixed prices. The key to finding these prices is to view the best-response regions as a subdivision of the m-dimensional cube, and to observe that there is an intrinsic relationship between the price of a bundle of items and the volume of the respective best-response region. Giannakopoulos and Koutsoupias (2018) give a recursive algorithm for finding the subdivision and the prices, and use LP duality to prove that the SJA is optimal for m ≤ 6 items. 6 They also conjecture that the SJA remains optimal for general m, but were unable to prove it. Figure 8 gives the revenue of the SJA, and that found by RochetNet, for m ≤ 10 items. We used a test sample of 2^30 valuation profiles (instead of 10,000) to compute these numbers at higher precision. It shows that RochetNet finds the optimal revenue for m ≤ 6 items, and that it finds DSIC auctions whose revenue matches that of the SJA for m = 7, 8, 9, and 10 items. Closer inspection reveals that the allocation and payment rules learned by RochetNet essentially match those predicted by Giannakopoulos and Koutsoupias for all m ≤ 10. We take this as strong additional evidence that their conjecture is correct.
For these experiments, we used a max network over 10,000 linear functions (instead of 1,000) to increase the representational capacity and flexibility of the neural network. This overparameterization trick is commonly used in deep learning and has proven to be very effective in practice (Krizhevsky et al., 2012; Allen-Zhu et al., 2019). We illustrate this effect in Appendix B.4. We followed the usual training phase with an additional 20 iterations of training using the Adam optimizer with learning rate 0.001 and a minibatch size of 2^30.
We also found it useful to impose item-symmetry on the learned auction, especially for m = 9 and 10 items, as this helped with accuracy and reduced training time. Imposing symmetry comes without loss of generality for auctions with an item-symmetric distribution (Daskalakis and Weinberg, 2012). To impose item symmetry, we first permute the inputs to be in ascending order, compute the allocation and payment on this permuted input, and then invert the permutation of allocation to compute the mechanism for the original inputs. With these modifications it took about 13 hours to train the networks.
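The permute-then-invert construction for imposing item symmetry can be sketched as a wrapper. Here `mechanism` is a hypothetical single-bidder mechanism mapping a value vector to an allocation vector and a scalar payment:

```python
import numpy as np

def symmetrize(mechanism):
    """Wrap a single-bidder mechanism so that it treats items symmetrically:
    sort the input values into ascending order, run the mechanism on the
    sorted input, then undo the permutation on the returned allocation
    (the payment is a scalar and needs no unpermuting)."""
    def wrapped(v):
        order = np.argsort(v)                    # ascending order, as in the text
        alloc, pay = mechanism(v[order])
        inv = np.empty_like(order)
        inv[order] = np.arange(len(v))           # inverse permutation
        return alloc[inv], pay
    return wrapped
```

The wrapped mechanism is permutation-equivariant by construction (for distinct values): permuting the input values permutes the allocation in the same way and leaves the payment unchanged.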

Discovering New Analytical Results
We next demonstrate the potential of RochetNet to help discover new analytical results for optimal auctions. For this, we consider a single bidder with additive but correlated valuations for two items: C. One additive bidder and two items, where the bidder's valuation is drawn uniformly from a set parameterized by a constant c > 0. There is no analytical result for the optimal auction design in this setting. We ran RochetNet for different values of c to discover the optimal auction. The mechanisms learned by RochetNet for c = 0.5, 1, 3, and 5 are shown in Figure 10.
We can validate the optimality of this conjectured design through duality theory (Daskalakis et al., 2013). The proof is given in Appendix D.6.
Theorem 5.1. For any c > 0, suppose the bidder's valuation is uniformly distributed over this set. Then the optimal auction contains two menu items. In Appendix B.5, we also give the mechanisms learned by RochetNet for two additional settings. Taken together, these results demonstrate that RochetNet is a powerful tool to help in the discovery of new analytical results. In follow-up work, Shen et al. (2019) use an approach closely related to RochetNet to discover an optimal analytical result for a similar setting: a single additive bidder and two items, where the bidder's valuation is drawn uniformly from a triangle.

Experiments with Optimal Mechanisms that Require an Infinitely-sized Menu
We now demonstrate how RochetNet performs in settings where the optimal mechanism is known to require an infinite number of menu choices. For this, we consider the following setting from Daskalakis et al. (2017): D. One additive bidder and two items, where the bidder draws her value for each item independently from Beta(α = 1, β = 2), a distribution with density function f(x) = 2(1 − x) on [0, 1]. This setting and the corresponding optimal mechanism, with its infinite menu size, are described in detail in Example 3 of Daskalakis et al. (2017). We seek to evaluate the performance of RochetNet for different-sized menus. In Figure 11, we report the revenue, the number of menu choices represented in RochetNet, and the number of menu choices that are active for one or more samples in the test set. As we increase the number of initialized menu choices, the number of active menu items increases as well. Comparing the optimal infinite-sized menu with the menu learned by RochetNet, we find that the difference in revenue comes from a large number of menu items that each contribute only marginally to the net revenue (< 10^−5). RochetNet fails to learn some of these menu items due to the fixed size of minibatches and the numerical tolerance of the optimization routine. Regardless, the overall gap in revenue is negligible. Already with two active menu items, RochetNet achieves a revenue of ∼0.3309 (99.93% of optimal), while with three or more active menu items the revenue is at least ∼0.3310 (99.96% of optimal).

Scaling Up
In this section, we consider settings with up to five bidders and up to ten items. This is several orders of magnitude more complex than existing analytical or computational results. It is also a natural playground for RegretNet, as no tractable characterizations of IC mechanisms are known for these settings. We specifically consider the following two settings, which generalize the basic setting considered in Manelli and Vincent (2006) and Giannakopoulos and Koutsoupias (2018) to more than one bidder: E. Three additive bidders and ten items, where bidders draw their value for each item independently from U[0, 1].
F. Five additive bidders and ten items, where bidders draw their value for each item independently from U [0, 1].
An analytical description of the optimal auction for these settings is not known. However, running a separate Myerson auction for each item is optimal in the limit of the number of bidders (Palfrey, 1983). For a regime with a small number of bidders, this provides a strong benchmark. We also compare to selling the grand bundle via a Myerson auction. For Setting E, we show in Figure 12(a) the revenue and regret of the learned auction on a validation sample of 10,000 profiles, obtained with different architectures. Here (R, K) denotes an architecture with R hidden layers and K nodes per layer. The (5, 100) architecture has the lowest regret among all the 100-node networks for both Setting E and Setting F. Figure 12(b) shows that the learned auctions yield higher revenue compared to the baselines, and do so with tiny regret.
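The item-wise Myerson baseline can be evaluated directly: for U[0,1] values, the per-item Myerson auction is a second-price auction with reserve 1/2, whose expected revenue per item equals E[max(0, 2·max_i v_i − 1)], i.e., 0.53125 for n = 3 bidders. A Monte Carlo sketch (assuming at least two bidders; the function name is illustrative):

```python
import numpy as np

def item_wise_myerson_revenue(n_bidders, n_items, samples=200_000, seed=2):
    """Expected revenue of a separate second-price auction with reserve 1/2
    per item, for i.i.d. U[0,1] values (requires n_bidders >= 2)."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(0, 1, size=(samples, n_bidders, n_items))
    top = np.sort(V, axis=1)[:, -2:, :]          # second-highest and highest bids
    sold = top[:, 1, :] >= 0.5                   # item sells iff highest >= reserve
    pay = np.maximum(top[:, 0, :], 0.5) * sold   # winner pays max(2nd bid, reserve)
    return pay.sum(axis=1).mean()                # revenue summed over all items
```

For Setting E (n = 3, m = 10) this baseline gives about 10 × 0.53125 ≈ 5.31, which is the number the learned auctions are compared against.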

Comparison to Linear Programming
In this section, we compare the training time and solution quality of RegretNet with the solve time and solution quality of the LP-based approach proposed in Conitzer and Sandholm (2002, 2004). To be able to run the LP, we consider the small setting of two additive bidders and two items, with bidders that draw their value for each item independently from U[0, 1]. For RegretNet, we used two hidden layers with 100 nodes per hidden layer. The LP was solved with the commercial solver Gurobi on an Amazon AWS EC2 instance with 48 cores and 96 GB of memory. For the LP-based approach, we handle continuous valuations by discretizing the values into 11 bins per item (resulting in ≈ 9 × 10^5 decision variables and ≈ 3.6 × 10^6 constraints), and adopt two different rounding strategies: one that rounds a continuous input valuation profile to the nearest discrete profile for evaluation ("nearest"), and one that rounds a continuous input valuation profile down to the nearest discrete profile for evaluation ("down"). Whereas the LP-based mechanism with nearest rounding fails IR, the use of down rounding ensures the LP-based approach is IR.
The results for this setup are shown in Figure 13. We also report the violations of the IR constraints incurred by the LP on the test set; for L valuation profiles, this is measured by (1/L) Σ_{ℓ=1}^L Σ_i max{−u_i(v^(ℓ)), 0}. Due to the coarse discretization, the LP approach with nearest rounding suffers substantial IR violations. As a result of this, as well as its relatively high regret compared to RegretNet, the relatively high revenue achieved by the LP with nearest rounding is misleading. For this reason, we also include the performance of the LP-based mechanism when the continuous input valuation profiles are rounded down to their respective discrete profiles. There we see zero IR violation but substantially lower revenue than RegretNet (and still with higher regret). We were not able to run an LP for this setting with a finer discretization than 11 bins per item value in more than nine days (216 hours) of compute time. 8 In contrast, RegretNet yields very low regret along with zero IR violations (as the neural network satisfies IR by design), and does so in around four hours. In fact, even for the larger Settings E-F, the training time of RegretNet was less than 13 hours. In Figure 14, we plot the test revenue, test regret, and the run time of the LP-based and RegretNet methods, while varying the number of variables in the LP and the number of parameters in RegretNet. For the LP, this is done by varying the discretization; for RegretNet, by varying the network structure. In Appendix C, we include the complete set of results for varying the discretization in the LP-based method and varying the number of hidden layers and hidden units in RegretNet. Introducing an increasingly fine discretization into the LP-based method provides an initial increase in revenue in return for a modest increase in run time, but this gives way to a huge increase in run time with no effect on revenue.
For RegretNet, the training time is relatively stable as the number of hidden layers and units per layer is varied, while larger networks bring a substantive increase in revenue. We only plot the results for RegretNet that lie on the efficient frontier, and refer to Figure 25 for the full details. Taken together these results show that RegretNet's performance substantially extends the revenue-time Pareto frontier available from the LP method, obtaining higher revenue for a relatively modest training time.

Conclusion
In this paper, we have introduced a new framework of differentiable economics for using neural networks for economic discovery, specifically for the discovery of revenue-optimal, multi-bidder and multi-item auctions. We have demonstrated that standard machine learning pipelines can be used to essentially re-discover all known, optimal auction designs, and to discover the design of auctions for settings out of reach of theory and settings that are orders of magnitude larger than those that can be solved through other computational approaches. We also see promise for the framework in advancing economic theory, for example in supporting or refuting conjectures and as an assistant in guiding new economic discovery.
This framework has already inspired a great deal of follow-on work, in taking differentiable economics to additional domains and in scaling up the methods to support networks that simultaneously handle multiple sizes of markets (number of bidders and number of items). Looking ahead, there remain a number of interesting challenges. Beyond expanding the domains that are studied by differentiable economics, the methodological challenges include the interpretability of learned mechanisms, integrating additional structural regularities from economic theory, scaling up to larger economic systems, and providing robustness guarantees in the form of certificates for economic properties. Combinatorial auctions (CAs) present an especially important domain, and one whose study we have only initiated here (see Appendix A.2 and B.3 for theory and experimental results for the case of CAs with two items). CAs are important in practice (Palacios-Huerta et al., 2022), and yet concerns around low revenue and their vulnerability to collusion (Day and Milgrom, 2008; Levin and Skrzypacz, 2016; Goeree and Lien, 2016) mean that we lack a complete understanding even for the design of efficient auctions, never mind finding revenue-optimal designs.

A Additional Architectures
In this appendix we present additional network architectures, for a multi-bidder single-item setting, and for a general multi-bidder multi-item setting with combinatorial valuations.

A.1 The MyersonNet Approach
We start by describing an architecture that yields an optimal DSIC auction for selling a single item to multiple buyers.
In the single-item setting, each bidder holds a private value v_i ∈ R_{≥0} for the item. We consider a randomized auction (g, p) that maps a reported bid profile b ∈ R^n_{≥0} to a vector of allocation probabilities g(b) ∈ R^n_{≥0}, where g_i(b) denotes the probability that bidder i is allocated the item and Σ_{i=1}^n g_i(b) ≤ 1. We represent the payment rule p_i via a price conditioned on the item being allocated to bidder i, i.e., p_i(b) = g_i(b) t_i(b) for some conditional payment function t_i : R^n_{≥0} → R_{≥0}. The expected revenue of the auction, when bidders are truthful, is given by E_{v∼F}[ Σ_{i=1}^n g_i(v) t_i(v) ]. The structure of the revenue-optimal auction is well understood for this setting.
Theorem A.1 (Myerson (1981)). There exists a collection of monotonically non-decreasing functions φ̄_i : R_{≥0} → R, called the ironed virtual valuation functions, such that the optimal BIC auction for selling a single item is the DSIC auction that assigns the item to the buyer with the highest ironed virtual value φ̄_i(v_i), provided that this value is non-negative, with ties broken in an arbitrary value-independent manner, and charges the winning bidder the lowest bid for which she would still win. The virtual valuation function ψ_i(v) = v − (1 − F_i(v))/f_i(v) is monotonically non-decreasing for regular distributions F_1, …, F_n; in this case no ironing is required and φ̄_i = ψ_i for all i.
If the virtual valuation functions ψ_1, …, ψ_n are furthermore monotonically increasing, and not only monotonically non-decreasing, the optimal auction can be viewed as applying the monotone transformations to the input bids, b̄_i = φ̄_i(b_i), feeding the computed virtual values to a second price auction (SPA) with zero reserve price, denoted (g^0, p^0), making an allocation according to g^0(b̄), and charging a payment φ̄_i^{-1}(p^0_i(b̄)) to winning bidder i. In fact, this auction is DSIC for any choice of strictly monotone transformations of the values: Theorem A.2. For any set of strictly monotonically increasing functions φ̄_1, …, φ̄_n, an auction defined by outcome rule g_i = g^0_i ∘ φ̄ and payment rule p_i = φ̄_i^{-1} ∘ p^0_i ∘ φ̄ is DSIC and IR, where (g^0, p^0) is the allocation and payment rule of a second price auction with zero reserve.
For regular distributions with monotonically increasing virtual value functions designing an optimal DSIC auction thus reduces to finding the right strictly monotone transformations and corresponding inverses, and modeling a second price auction with zero reserve. We present a high-level overview of a neural network architecture that achieves this in Figure 15(a), and describe the components of this network in more detail in Section A.1.1 and Section A.1.2 below.
The MyersonNet is tailored to monotonically increasing virtual value functions. For regular distributions with virtual value functions that are not strictly increasing, and for irregular distributions, this approach only yields approximately optimal auctions.

Figure 15: (a) MyersonNet: The network applies monotone transformations φ̄_1, …, φ̄_n to the input bids, passes the virtual values to the SPA-0 network in Figure 16, and applies the inverse transformations.

A.1.1 Modeling Monotone Transforms
We model each virtual value function φ̄_i as a two-layer feed-forward network with min and max operations over linear functions. For K groups of J linear functions, with strictly positive slopes w^i_kj ∈ R_{>0}, k = 1, …, K, j = 1, …, J, and intercepts β^i_kj ∈ R, k = 1, …, K, j = 1, …, J, we define: φ̄_i(b_i) = min_{k∈[K]} max_{j∈[J]} ( w^i_kj b_i + β^i_kj ). Since each of the above linear functions is strictly increasing, so is φ̄_i. In practice, we can set each w^i_kj = e^{α^i_kj} for parameters α^i_kj ∈ [−B, B] in a bounded range. A graphical representation of the neural network used for this transform is shown in Figure 15(b). For sufficiently large K and J, this neural network can be used to approximate any continuous, bounded monotone function (that satisfies a mild regularity condition) to an arbitrary degree of accuracy (Sill, 1998). A particular advantage of this representation is that the inverse transform φ̄_i^{-1} can be directly obtained from the parameters of the forward transform: φ̄_i^{-1}(y) = max_{k∈[K]} min_{j∈[J]} e^{−α^i_kj} (y − β^i_kj).
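The forward transform and its closed-form inverse can be sketched as follows. The inverse swaps min and max and inverts each line, which is exact because every line has a strictly positive slope:

```python
import numpy as np

def monotone_transform(w, beta):
    """Min-max network phi(x) = min_k max_j (w[k,j]*x + beta[k,j]) with w > 0,
    together with its closed-form inverse phi_inv (swap min/max, invert lines)."""
    def phi(x):
        return np.min(np.max(w * x + beta, axis=1))
    def phi_inv(y):
        return np.max(np.min((y - beta) / w, axis=1))
    return phi, phi_inv
```

A quick check: with slopes w = e^α this reproduces the inverse formula above, and phi_inv(phi(x)) = x holds exactly for any x, since phi(x) ≤ y iff x ≤ phi_inv(y) for strictly increasing lines.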

A.1.2 Modeling SPA with Zero Reserve
We also need to model a SPA with zero reserve (SPA-0) within the neural network structure. For the purpose of training, we employ a smooth approximation to the allocation rule. Once we learn the virtual value functions using this approximate allocation rule, we use them together with an exact SPA with zero reserve to construct the final auction. The SPA-0 allocation rule g^0 can be approximated using a 'softmax' function on the virtual values b̄_1, …, b̄_n and an additional dummy input b̄_{n+1} = 0: ḡ^0_i(b̄) = e^{κ b̄_i} / Σ_{k=1}^{n+1} e^{κ b̄_k}, ∀i ∈ [n], where κ > 0 is a constant fixed a priori that determines the quality of the approximation. The higher the value of κ, the better the approximation, but the less smooth the resulting allocation function.
The SPA-0 payment to bidder i, conditioned on being allocated, is the maximum of the virtual values of the other bidders and zero: t^0_i(b̄) = max{ max_{k≠i} b̄_k, 0 }. Let g^{α,β} and t^{α,β} denote the allocation and conditional payment rules for the overall auction in Figure 15(a), where (α, β) are the parameters of the forward monotone transform. Given a sample of valuation profiles S = {v^(1), …, v^(L)} drawn i.i.d. from F, we optimize the parameters using the negated revenue on S as the error function, where the revenue is approximated as (1/L) Σ_{ℓ=1}^L Σ_{i=1}^n g^{α,β}_i(v^(ℓ)) t^{α,β}_i(v^(ℓ)). We solve this training problem using a minibatch stochastic gradient descent solver.
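The two smoothed SPA-0 components can be sketched directly; `spa0_soft` and `spa0_payment` are illustrative names, and κ is passed explicitly:

```python
import numpy as np

def spa0_soft(vv, kappa):
    """Softmax approximation to the SPA-0 allocation on virtual values vv,
    with a dummy zero input appended so the item can go unallocated."""
    z = np.concatenate([vv, [0.0]]) * kappa
    e = np.exp(z - z.max())                     # shift for numerical stability
    return (e / e.sum())[:-1]                   # drop the dummy's probability

def spa0_payment(vv, i):
    """Conditional SPA-0 payment: max of the other virtual values and zero."""
    others = np.delete(vv, i)
    return max(others.max(initial=0.0), 0.0)
```

As κ grows, the soft allocation concentrates on the highest positive virtual value, recovering the exact SPA-0 allocation in the limit; the total allocation probability is always at most one because of the dummy input.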

A.2 RegretNet for Combinatorial Valuations
We next show how to adjust the RegretNet architecture to handle bidders with general, combinatorial valuations. 9 In this case, each bidder i reports a bid b i,S for every bundle of items S ⊆ M (except the empty bundle, for which her valuation is taken as zero). The allocation component of the network has an output z i,S ∈ [0, 1] for each bidder i and bundle S, denoting the probability that the bidder is allocated the bundle.
To prevent the items from being over-allocated, we require that the probability that an item appears in a bundle allocated to some bidder is at most one: Σ_{i∈N} Σ_{S⊆M: j∈S} z_{i,S} ≤ 1, ∀j ∈ M (14). We also require that the total allocation to a bidder is at most one: Σ_{S⊆M} z_{i,S} ≤ 1, ∀i ∈ N (15). We refer to an allocation that satisfies constraints (14)-(15) as being combinatorial feasible. To enforce these constraints, the allocation component of the network computes a set of scores for each bidder and a set of scores for each item. Specifically, there is a group of bidder-wise scores s_{i,S}, ∀S ⊆ M, for each bidder i ∈ N, and a group of item-wise scores s^(j)_{i,S}, ∀i ∈ N, S ⊆ M, for each item j ∈ M. Let s, s^(1), …, s^(m) ∈ R^{n×2^m} denote these bidder scores and item scores. Each group of scores is normalized using a softmax function: the bidder-wise scores are normalized over the bundles for each bidder, s̄_{i,S} = exp(s_{i,S}) / Σ_{S'} exp(s_{i,S'}), and each item-wise group is normalized over all bidder-bundle pairs that contain the item. The allocation for bidder i and bundle S ⊆ M is defined as the minimum of the normalized bidder-wise score s̄_{i,S} and the normalized item-wise scores s̄^(j)_{i,S} for each j ∈ S: z_{i,S} = min{ s̄_{i,S}, min_{j∈S} s̄^(j)_{i,S} }. Similar to the unit-demand setting, we first show that φ^CF(s, s^(1), …, s^(m)) is combinatorial feasible and that our constructive approach is without loss of generality. See Appendix D.4 for a proof.
In addition, we want to understand whether a combinatorial feasible allocation $z$ is implementable, i.e., whether it can be decomposed into a convex combination of feasible 0-1 allocations. Unfortunately, Example A.1 shows that a combinatorial feasible allocation may not have such an integer decomposition, even for the case of two bidders and two items.
Example A.1. Consider a setting with two bidders and two items, and the following fractional, combinatorial feasible allocation: Suppose $z$ is written as a combination of feasible 0-1 allocations with non-negative coefficients that sum to at most 1. First, it is straightforward to see that $a = b = 1/4$. Given the construction, we must have $c + d = 3/8$, $e \ge 0$ and $f + g = 3/8$, $h \ge 0$. Thus, $a + b + c + d + e + f + g + h \ge 1/2 + 3/4 = 5/4 > 1$ for any decomposition, a contradiction. Hence, $z$ is not implementable.
To ensure that a combinatorial feasible allocation has an integer decomposition, we need to introduce additional constraints. For the two-item case, we introduce the following constraint: We then argue that $C$ can be represented as a linear combination $\sum_k \lambda_k B_k$, where each $B_k$ is a feasible 0-1 allocation. Matrix $C$ has all zeros in the last (item) columns. In addition, based on constraint (17), for each bidder $i$, the corresponding row and column sums are appropriately bounded. Thus $C$ is a doubly stochastic matrix up to a scaling factor of $1 - \sum_{i'=1}^{n} z_{i',\{1,2\}}$. Therefore, we can always decompose $C$ into a linear combination $\sum_k \lambda_k B_k$ of feasible 0-1 allocations, by the Birkhoff-von Neumann theorem. We leave it to future work to characterize the additional constraints needed for the multi-item ($m > 2$) case.
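The decomposition argument above rests on the Birkhoff-von Neumann theorem. The following sketch (illustrative, not the paper's construction) greedily decomposes a small doubly stochastic matrix into a convex combination of permutation matrices, using brute-force matching, which is adequate for small $n$:

```python
from itertools import permutations

# Greedy Birkhoff-von Neumann decomposition sketch: repeatedly pick the
# permutation with the largest feasible weight and subtract it. Each round
# zeroes at least one entry, so the loop runs at most n^2 times.

def birkhoff_decompose(C, tol=1e-9):
    n = len(C)
    C = [row[:] for row in C]  # work on a copy
    terms = []                 # list of (coefficient, permutation) pairs
    while True:
        best = None
        for perm in permutations(range(n)):
            w = min(C[i][perm[i]] for i in range(n))
            if w > tol and (best is None or w > best[0]):
                best = (w, perm)
        if best is None:
            break
        w, perm = best
        terms.append((w, perm))
        for i in range(n):
            C[i][perm[i]] -= w
    return terms
```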

A.2.1 RegretNet for Two-item Auctions with Implementable Allocations
To accommodate the additional constraint (17) for the two-item case, we add an additional softmax layer for each bidder. In addition to the original (unnormalized) bidder-wise scores $s_{i,S}$, $\forall i \in N, S \subseteq M$, and item-wise scores $s^{(j)}_{i,S}$, this layer serves to satisfy constraint (17) for each bidder $i$: we compute normalized scores $\bar{s}_{i,S}$ for each $i, S$ using a softmax over this additional group of scores, and the final allocation for each bidder $i$ is then the element-wise minimum of the normalized scores. The payment component of the network for combinatorial bidders has the same structure as the one in Figure 3, computing a fractional payment $\bar{p}_i \in [0, 1]$ for each bidder $i$ using a sigmoidal unit, and outputting a payment $p_i = \bar{p}_i \sum_{S \subseteq M} z_{i,S}\, b_{i,S}$.
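The payment head described above can be sketched as follows, with hypothetical bundle names and values; the sigmoid keeps the fractional payment in $[0, 1]$, so the charged payment never exceeds the bidder's expected reported value for her allocation:

```python
import math

# Sketch of the combinatorial payment head: a sigmoid yields a fractional
# payment in [0, 1], scaled by the bidder's expected reported value for her
# allocation. Bundle names and values here are hypothetical.

def payment(logit, z_i, b_i):
    """z_i: bundle -> allocation probability; b_i: bundle -> reported bid."""
    frac = 1.0 / (1.0 + math.exp(-logit))            # sigmoid, in (0, 1)
    expected_value = sum(z_i[S] * b_i[S] for S in z_i)
    # The payment never exceeds the expected reported value, keeping the
    # mechanism individually rational with respect to the reports.
    return frac * expected_value
```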

B Additional Experiments
We present a broad range of additional experiments for the two main architectures used in the body of the paper, as well as for the architectures presented in Appendix A.

B.1 Experiments with MyersonNet
We first evaluate the MyersonNet architecture introduced in Appendix A.1 for designing single-item auctions. We focus on settings with a small number of bidders, because this is where revenue-optimal auctions are meaningfully different from efficient auctions. We present experimental results for the following four settings:

G. Three bidders with independent, regular, and symmetrically distributed valuations $v_i \sim U[0, 1]$.

H. Five bidders with independent, regular, and asymmetrically distributed valuations
I. Three bidders with independent, regular, and symmetrically distributed valuations v i ∼ Exp (3).
J. Three bidders with independent irregular distributions $F_{irregular}$, where each $v_i$ is drawn from $U[0, 3]$ with probability 3/4 and from $U[3, 8]$ with probability 1/4.
We note that the optimal auctions for the first three distributions involve virtual value functions φ i that are strictly monotone. For the fourth and final distribution the optimal auction uses ironed virtual value functions that are not strictly monotone.
For the training set and the test set we used 1,000 valuation profiles sampled i.i.d. from the respective valuation distribution. We modeled each transform $\bar{\varphi}_i$ in the MyersonNet architecture using 5 sets of 10 linear functions, and we used $\kappa = 10^3$.
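The transforms $\bar{\varphi}_i$ are min-over-groups of max-over-increasing-linear-functions, which are monotone by construction; a minimal sketch with hypothetical parameters (positive slopes guarantee monotonicity):

```python
# Sketch of a MyersonNet-style monotone virtual value transform: a min over
# groups of maxes of increasing linear functions, which is always monotone
# in v. The parameters below are hypothetical, not learned values.

def monotone_transform(v, params):
    """params: list of groups; each group is a list of (slope, intercept)
    pairs, with slopes constrained to be positive so the transform is
    strictly increasing."""
    return min(max(a * v + b for a, b in group) for group in params)
```

For example, a single group with the single linear function $(2, -1)$ recovers the virtual value transform $\varphi(v) = 2v - 1$ for $U[0,1]$ bidders.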
The results are summarized in Figure 17. For comparison, we also report the revenue obtained by the optimal Myerson auction and the second price auction (SPA) without reserve. The auctions learned by the neural network yield revenue close to the optimal.

B.2 Additional Experiments with RochetNet and RegretNet
In addition to the experiments with RochetNet and RegretNet on the single-bidder, multi-item settings in Section 5.3, we also considered the following settings:

K. Single additive bidder with independent preferences over two non-identically distributed items, where $v_1 \sim U[4, 16]$ and $v_2 \sim U[4, 7]$. The optimal mechanism is given by Daskalakis et al. (2017).

L. Single additive bidder with preferences over two items, where $(v_1, v_2)$ are drawn jointly and uniformly from a unit triangle with vertices (0, 0), (0, 1), and (1, 0). The optimal mechanism is due to Haghpanah and Hartline (2019).
M. Single unit-demand bidder with independent preferences over two items, where the item values v 1 , v 2 ∼ U [0, 1]. See Pavlov (2011) for the optimal mechanism.
We used RegretNet architectures with two hidden layers with 100 nodes each. The optimal allocation rules as well as a side-by-side comparison of those found by RochetNet and RegretNet are given in Figure 18. Figure 19 gives the revenue and regret achieved by RegretNet and the revenue achieved by RochetNet.
We find that in all three settings RochetNet recovers the optimal mechanism essentially exactly, while RegretNet finds an auction that matches the optimal design with surprising accuracy.

B.3 Experiments with RegretNet with Combinatorial Valuations
We next compare our RegretNet architecture for combinatorial valuations, described in Section A.2, to the computational results of Sandholm and Likhodedov (2015) for the following settings, for which the optimal auction is not known:

N. Two additive bidders and two items, where bidders draw their value for each item independently from U

These settings correspond to Settings I.-III. described in Section 3.4 of Sandholm and Likhodedov (2015). These authors conducted extensive experiments with several different classes of incentive compatible mechanisms, and different heuristics for setting the parameters of these auctions. They observed the highest revenue for two classes of mechanisms that generalize mixed bundling auctions and λ-auctions (Jehiel et al., 2007).
These two classes of mechanisms are the Virtual Value Combinatorial Auctions (VVCA) and the Affine Maximizer Auctions (AMA). They also considered a restriction of AMA to bidder-symmetric auctions (AMA_bsym). We use VVCA*, AMA*, and AMA*_bsym to denote the best mechanism in the respective class, as reported by Sandholm and Likhodedov and found using a heuristic grid search technique.
For Settings N and O, Sandholm and Likhodedov observed the highest revenue for AMA*_bsym, and for Setting P the best-performing mechanism was VVCA*. Figure 21 compares the performance of RegretNet to that of these best-performing benchmark mechanisms. To compute the revenue of the benchmark mechanisms, we used the parameters reported in Sandholm and Likhodedov (2015).

Figure 21: Test revenue and test regret for RegretNet for Settings N-P, and a comparison with the best-performing VVCA and AMA_bsym auctions as reported by Sandholm and Likhodedov (2015).

To make sure we are using sufficient data to report our results, we re-ran our evaluation for Setting N on a bigger test set with up to 50,000 samples and computed the regret using 5,000 gradient ascent steps. The estimated revenue and regret remained approximately the same as observed on our regular test set with 10,000 samples, with regret computed using 2,000 gradient ascent steps. Figure 20 shows how the revenue and regret vary as we increase the size of the test set.

B.4 Experiments with RochetNet with varying linear units
In Figure 22, we show how the performance of RochetNet varies as we increase the number of initialized menu choices (i.e., the number of units in the network). We consider here a single bidder and six items, where the bidder's valuation is sampled independently from $U[0, 1]$ for each item. The optimal mechanism is given by the Straight-Jacket Auction (SJA). We observe that RochetNet recovers the optimal design with increasing accuracy as we increase the number of menu choices (units in the network), even though only a small fraction of the menu choices are active (< 3% active when the number of initialized menu choices is over 1,000). When we also impose item-symmetry, we observe that the performance of RochetNet is relatively invariant to increasing the number of initialized menu choices.
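The menu-based view of RochetNet used here can be sketched as follows, with a hypothetical menu; the bidder selects the utility-maximizing entry (including a null option, which ensures individual rationality), and revenue is the average price of the selected entries:

```python
# Minimal sketch of a RochetNet-style menu mechanism: the bidder faces a
# menu of (allocation, price) options plus a null option, and selects the
# utility-maximizing one. Menu entries below are hypothetical.

def choose(menu, v):
    """menu: list of (alloc, price); alloc is a per-item probability list."""
    options = menu + [([0.0] * len(v), 0.0)]  # null option: ensures IR
    utils = [sum(a * vi for a, vi in zip(alloc, v)) - price
             for alloc, price in options]
    k = max(range(len(options)), key=lambda k: utils[k])
    return options[k]

def revenue(menu, samples):
    """Average price paid over sampled valuation profiles."""
    return sum(choose(menu, v)[1] for v in samples) / len(samples)
```

Training RochetNet amounts to adjusting the menu entries (via a smoothed max) to maximize this empirical revenue; only entries that some valuation actually selects are "active".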

B.5 Additional Experiments with discovering new analytical results
In Section 5, we described how RochetNet can be used to discover new analytical results for optimal auctions. In this section, we give analogous computational results, again suggestive of the structure of theoretically optimal auction designs, for two such additional settings:

Q. One additive bidder and two items, where the bidder's valuation is drawn uniformly from the triangle

R. One additive bidder and two items, where the bidder's valuation is drawn uniformly from the

The mechanisms learned by RochetNet for Setting Q and Setting R for various values of c are shown in Figure 23 and Figure 24, respectively.

C Comparison to Linear Programming
In Figure 25, we report additional details on the performance of the LP-based approach as we vary the discretization in the LP and the number of parameters in RegretNet (varying the number of hidden layers and hidden units). For the LP, the number of parameters is given by the number of output variables used to define the objective and the constraints. For RegretNet, the number of parameters is computed by counting the number of learnable weights in the allocation and payment networks. The results are reported for the setting in Section 5.8, with two additive bidders and two items, with bidder item values sampled independently from $U[0, 1]$. For the nearest-rounding strategy, the LP-based approach yields a higher revenue than RegretNet, but this is misleading and would not be attainable in practice, because it has higher regret and suffers from substantial IR violations. If we instead compute the allocation and payment in the LP through down-rounding, the IR violation is zero but the revenue is much lower. Increasing the amount of discretization in the LP leads to more accurate results with lower regret (and lower IR violations with nearest rounding), but the number of parameters and the run time also increase exponentially. For the setting with 12 bins per value, the LP did not terminate despite running for 9 days on an AWS EC2 instance with 48 cores and 96GB memory. In contrast, RegretNet learns a mechanism in this setting with negligible regret and zero IR violations in at most six hours for most configurations. In Figure 25, we also report the test revenue and test regret achieved by RegretNet for different numbers of hidden layers $R$ and hidden units $K$.

D Omitted Proofs
We present formal proofs for all theorems and lemmas that are stated in the body of the paper or in other appendices. We first introduce some notation. We denote the inner product between vectors $a, b \in \mathbb{R}^d$ by $\langle a, b\rangle = \sum_{i=1}^{d} a_i b_i$. We denote the $\ell_1$ norm of a vector $x$ by $\|x\|_1$ and the induced $\ell_1$ norm of a matrix $A \in \mathbb{R}^{k \times t}$ by $\|A\|_1 = \max_{1 \le j \le t} \sum_{i=1}^{k} |A_{ij}|$.

D.1 Proof of Lemma 2.1
Let Rewriting the expected value, we have where the last inequality holds because for any 0 <

D.2 Proof of Theorem 2.2
We present the proof for auctions with general, randomized allocation rules. A randomized allocation rule $g_i : V \to [0, 1]^{2^M}$ maps valuation profiles to a vector of allocation probabilities for bidder $i$, where $g_{i,S}(v) \in [0, 1]$ denotes the probability that the allocation rule assigns the subset of items $S \subseteq M$ to bidder $i$, and $\sum_{S \subseteq M} g_{i,S}(v) \le 1$. This encompasses both the allocation rules for the combinatorial setting and the allocation rules for the additive and unit-demand settings, which only output allocation probabilities for individual items. The payment function $p : V \to \mathbb{R}^n$ maps valuation profiles to a payment $p_i(v) \in \mathbb{R}$ for each bidder. For ease of exposition, we omit the superscripts "$w$". Recall that $\mathcal{M}$ is a class of auctions consisting of allocation and payment rules $(g, p)$. As noted in the theorem statement, we assume w.l.o.g. that for each bidder $i$, $v_i(S) \le 1$, $\forall S \subseteq M$.

D.2.1 Definitions
Let $\mathcal{U}_i$ be the class of utility functions for bidder $i$ defined on auctions in $\mathcal{M}$, i.e., functions $u_i(v_i, b) = v_i(g_i(b)) - p_i(b)$ for some $(g, p) \in \mathcal{M}$, and let $\mathcal{U}$ be the class of profiles of utility functions defined on $\mathcal{M}$, i.e., the class of tuples $(u_1, \ldots, u_n)$ where each $u_i$ is the utility function of bidder $i$, $\forall i \in N$, for some common $(g, p) \in \mathcal{M}$. We will sometimes find it useful to represent the utility function as an inner product, i.e., treating $v_i$ as a real-valued vector of length $2^M$, we may write $u_i(v_i, b) = \langle v_i, g_i(b)\rangle - p_i(b)$. Let $\mathrm{rgt} \circ \mathcal{U}_i$ be the class of all regret functions for bidder $i$ defined on utility functions in $\mathcal{U}_i$, i.e., functions $\mathrm{rgt}_i(v) = \max_{v_i'} u_i(v_i, (v_i', v_{-i})) - u_i(v_i, v)$ for some $u_i \in \mathcal{U}_i$, and as before, let $\mathrm{rgt} \circ \mathcal{U}$ be defined as the class of profiles of regret functions. Define the $\ell_{\infty,1}$ distance between two utility profiles $u$ and $u'$ as $\max_{v, v'} \sum_i |u_i(v_i, (v_i', v_{-i})) - u_i'(v_i, (v_i', v_{-i}))|$, and let $N_\infty(\mathcal{U}, \epsilon)$ denote the minimum number of balls of radius $\epsilon$ needed to cover $\mathcal{U}$ under this distance. Similarly, define the distance between $u_i$ and $u_i'$ as $\max_{v_i, b} |u_i(v_i, b) - u_i'(v_i, b)|$, and let $N_\infty(\mathcal{U}_i, \epsilon)$ denote the minimum number of balls of radius $\epsilon$ needed to cover $\mathcal{U}_i$ under this distance. Similarly, we define covering numbers $N_\infty(\mathrm{rgt} \circ \mathcal{U}_i, \epsilon)$ and $N_\infty(\mathrm{rgt} \circ \mathcal{U}, \epsilon)$ for the function classes $\mathrm{rgt} \circ \mathcal{U}_i$ and $\mathrm{rgt} \circ \mathcal{U}$, respectively. Moreover, we denote the class of allocation functions by $\mathcal{G}$, and for each bidder $i$, $\mathcal{G}_i = \{g_i : V \to [0,1]^{2^M} \mid g \in \mathcal{G}\}$. Similarly, we denote the class of payment functions by $\mathcal{P}$ and $\mathcal{P}_i = \{p_i : V \to \mathbb{R} \mid p \in \mathcal{P}\}$. We denote the covering number of $\mathcal{P}$ by $N_\infty(\mathcal{P}, \epsilon)$ under the $\ell_{\infty,1}$ distance and the covering number of $\mathcal{P}_i$ by $N_\infty(\mathcal{P}_i, \epsilon)$ under the $\ell_{\infty,1}$ distance.

D.2.2 Auxiliary Lemma
We will use a lemma from Shalev-Shwartz and Ben-David (2014). Let $\mathcal{F}$ denote a class of bounded functions $f : Z \to [-c, c]$ defined on an input space $Z$, for some $c > 0$. Let $\mathcal{D}$ be a distribution over $Z$ and $S = \{z_1, \ldots, z_L\}$ a sample drawn i.i.d. from $\mathcal{D}$. We are interested in the gap between the expected value of a function $f$ and the average value of the function on sample $S$, and would like to bound this gap uniformly for all functions in $\mathcal{F}$. For this, we measure the capacity of the function class $\mathcal{F}$ using the empirical Rademacher complexity on sample $S$, defined below:
$$\hat{\mathcal{R}}_L(\mathcal{F}) \;=\; \frac{1}{L}\, \mathbb{E}_{\sigma}\Big[\sup_{f \in \mathcal{F}} \sum_{\ell=1}^{L} \sigma_\ell\, f(z_\ell)\Big],$$
where $\sigma \in \{-1, 1\}^L$ and each $\sigma_\ell$ is drawn i.i.d. from a uniform distribution on $\{-1, 1\}$. We then have:

Lemma D.1 (Shalev-Shwartz and Ben-David (2014)). Let $S = \{z_1, \ldots, z_L\}$ be a sample drawn i.i.d. from some distribution $\mathcal{D}$ over $Z$. Then with probability at least $1 - \delta$ over the draw of $S$ from $\mathcal{D}$, for all $f \in \mathcal{F}$,
$$\mathbb{E}_{z \sim \mathcal{D}}[f(z)] \;\le\; \frac{1}{L} \sum_{\ell=1}^{L} f(z_\ell) \;+\; 2\,\hat{\mathcal{R}}_L(\mathcal{F}) \;+\; 4c\sqrt{\frac{2\log(4/\delta)}{L}}.$$
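The empirical Rademacher complexity in Lemma D.1 can be estimated by Monte Carlo over random sign vectors; the following small sketch (purely illustrative, with function values supplied as plain lists) does exactly that:

```python
import random

# Monte Carlo estimate (illustrative, not from the paper) of the empirical
# Rademacher complexity: average over random sign vectors of the supremum
# correlation between the signs and each function's values on the sample.

def empirical_rademacher(function_values, num_trials=2000, seed=0):
    """function_values: list of per-function value lists on the sample S."""
    rng = random.Random(seed)
    L = len(function_values[0])
    total = 0.0
    for _ in range(num_trials):
        sigma = [rng.choice((-1, 1)) for _ in range(L)]
        total += max(sum(s * f[l] for l, s in enumerate(sigma)) / L
                     for f in function_values)
    return total / num_trials
```

A singleton class of a constant function has complexity zero, while a class containing both the constant $+1$ and constant $-1$ functions has strictly positive complexity, shrinking as the sample grows.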

D.2.3 Generalization Bound for Revenue
We first prove the generalization bound for revenue. For this, we define the following auxiliary function class, where each $f : V \to \mathbb{R}_{\ge 0}$ measures the total payment collected by some mechanism in $\mathcal{M}$:
$$\mathrm{rev} \circ \mathcal{M} \;=\; \Big\{f : V \to \mathbb{R}_{\ge 0} \;\Big|\; f(v) = \sum_{i \in N} p_i(v) \text{ for some } (g, p) \in \mathcal{M}\Big\}.$$
Note that each function $f$ in this class corresponds to a mechanism $(g, p)$ in $\mathcal{M}$, and the expected value $\mathbb{E}_{v \sim \mathcal{D}}[f(v)]$ gives the expected revenue of that mechanism. The proof then follows by an application of the uniform convergence bound in Lemma D.1 to the above function class, and by further bounding the Rademacher complexity term in this bound by the covering number of the auction class $\mathcal{M}$. Applying Lemma D.1 to the auxiliary function class $\mathrm{rev} \circ \mathcal{M}$, we get that, with probability at least $1 - \delta$ over the draw of $L$ valuation profiles $S$ from $\mathcal{D}$, for any $f \in \mathrm{rev} \circ \mathcal{M}$, there exists a distribution-independent constant $C > 0$ such that
$$\mathbb{E}_{v \sim \mathcal{D}}[f(v)] \;\le\; \frac{1}{L} \sum_{\ell=1}^{L} f(v^{(\ell)}) + 2\,\hat{\mathcal{R}}_L(\mathrm{rev} \circ \mathcal{M}) + C\sqrt{\frac{\log(1/\delta)}{L}}. \qquad (18)$$
All that remains is to bound the empirical Rademacher complexity $\hat{\mathcal{R}}_L(\mathrm{rev} \circ \mathcal{M})$ in terms of the covering number of the payment class $\mathcal{P}$, and in turn in terms of the covering number of the auction class $\mathcal{M}$. Since we assume that the auctions in $\mathcal{M}$ satisfy individual rationality and $v(S) \le 1$, $\forall S \subseteq M$, we have $p_i(v) \le 1$ for any $v$.
By the definition of the covering number of the payment class, there exists a cover $\hat{\mathcal{P}}$ for $\mathcal{P}$ of size $|\hat{\mathcal{P}}| \le N_\infty(\mathcal{P}, \epsilon)$ such that for any $p \in \mathcal{P}$ there is an $f_p \in \hat{\mathcal{P}}$ with $\max_v \sum_i |p_i(v) - f_{p,i}(v)| \le \epsilon$. We thus have
$$\hat{\mathcal{R}}_L(\mathrm{rev} \circ \mathcal{M}) \;\le\; \epsilon + n\sqrt{\frac{2\log N_\infty(\mathcal{P}, \epsilon)}{L}}, \qquad (19)$$
where the second-to-last inequality in the derivation follows from Massart's lemma, and the last inequality holds because the total payment is bounded by $n$. We further observe that $N_\infty(\mathcal{P}, \epsilon) \le N_\infty(\mathcal{M}, \epsilon)$. By the definition of the covering number of the auction class $\mathcal{M}$, there exists a cover $\hat{\mathcal{M}}$ for $\mathcal{M}$ of size $|\hat{\mathcal{M}}| \le N_\infty(\mathcal{M}, \epsilon)$ such that for any $(g, p) \in \mathcal{M}$ there is a $(\hat{g}, \hat{p}) \in \hat{\mathcal{M}}$ such that, for all $v$, $\sum_i \|g_i(v) - \hat{g}_i(v)\|_1 + \sum_i |p_i(v) - \hat{p}_i(v)| \le \epsilon$. This in particular implies $\sum_i |p_i(v) - \hat{p}_i(v)| \le \epsilon$, and shows the existence of a cover for $\mathcal{P}$ of size at most $N_\infty(\mathcal{M}, \epsilon)$.
Substituting the bound (19) on the Rademacher complexity term into (18), and using the fact that $N_\infty(\mathcal{P}, \epsilon) \le N_\infty(\mathcal{M}, \epsilon)$, we get
$$\mathbb{E}_{v \sim \mathcal{D}}[f(v)] \;\le\; \frac{1}{L} \sum_{\ell=1}^{L} f(v^{(\ell)}) + 2\epsilon + 2n\sqrt{\frac{2\log N_\infty(\mathcal{M}, \epsilon)}{L}} + C\sqrt{\frac{\log(1/\delta)}{L}},$$
which completes the proof.

D.2.4 Generalization Bound for Regret
We move to the second part, namely the generalization bound for regret, which is the more challenging part of the proof. We first define the class of sum regret functions, where each $r : V \to \mathbb{R}$ satisfies $r(v) = \sum_{i=1}^{n} r_i(v)$ for some $(r_1, \ldots, r_n) \in \mathrm{rgt} \circ \mathcal{U}$.
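As a toy illustration of the regret quantity being bounded here (not the paper's gradient-based procedure), the following sketch estimates a bidder's ex post regret by grid search over misreports for a posted-price mechanism, which is DSIC and hence has zero regret:

```python
# Toy ex post regret estimate via grid search over misreports, for a
# single-item, single-bidder posted-price mechanism. The mechanism and
# constants are illustrative; a DSIC mechanism should show zero regret.

PRICE = 0.5

def mechanism(bid):
    """Posted price: allocate iff bid >= PRICE, charge PRICE."""
    return (1.0, PRICE) if bid >= PRICE else (0.0, 0.0)

def utility(value, bid):
    alloc, pay = mechanism(bid)
    return alloc * value - pay

def regret(value, grid_size=101):
    """Best gain over truthful reporting, maximized over a bid grid."""
    truthful = utility(value, value)
    best = max(utility(value, k / (grid_size - 1)) for k in range(grid_size))
    return max(best - truthful, 0.0)
```

RegretNet replaces the grid search with gradient ascent on the misreport and drives the (sum of) regrets toward zero during training.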
The proof then proceeds in three steps: (1) bounding the covering number for each regret class $\mathrm{rgt} \circ \mathcal{U}_i$ in terms of the covering number for the individual utility class $\mathcal{U}_i$; (2) bounding the covering number for the combined utility class $\mathcal{U}$ in terms of the covering number for $\mathcal{M}$; and (3) bounding the covering number for the sum regret class $\mathrm{rgt} \circ \mathcal{U}$ in terms of the covering number for the combined utility class $\mathcal{U}$.
An application of Lemma D.1 then completes the proof. We prove each of the above steps below.
Proof. By the definition of the covering number $N_\infty(\mathcal{U}_i, \epsilon/2)$, there exists a cover $\hat{\mathcal{U}}_i$ of size at most $N_\infty(\mathcal{U}_i, \epsilon/2)$ such that for any $u_i \in \mathcal{U}_i$ there is a $\hat{u}_i \in \hat{\mathcal{U}}_i$ with $\max_{v_i, b} |u_i(v_i, b) - \hat{u}_i(v_i, b)| \le \epsilon/2$. For any $u_i \in \mathcal{U}_i$, taking $\hat{u}_i \in \hat{\mathcal{U}}_i$ satisfying this condition, we have for any $v$,
$$\big|\mathrm{rgt}\, u_i(v) - \mathrm{rgt}\, \hat{u}_i(v)\big| \;\le\; \Big|\max_{v_i'} u_i(v_i, (v_i', v_{-i})) - \max_{v_i'} \hat{u}_i(v_i, (v_i', v_{-i}))\Big| + \big|u_i(v_i, v) - \hat{u}_i(v_i, v)\big| \;\le\; \frac{\epsilon}{2} + \frac{\epsilon}{2} \;=\; \epsilon.$$
Thus, for all $u_i \in \mathcal{U}_i$, there exists $\hat{u}_i \in \hat{\mathcal{U}}_i$ whose regret function is within $\epsilon$ for any valuation profile $v$, i.e., $N_\infty(\mathrm{rgt} \circ \mathcal{U}_i, \epsilon) \le N_\infty(\mathcal{U}_i, \epsilon/2)$. This completes the proof of Step 1.

Proof. Recall that the utility function of bidder $i$ can be written as $u_i(v_i, b) = \langle v_i, g_i(b)\rangle - p_i(b)$. By the definition of $N_\infty(\mathcal{M}, \epsilon/n)$, there exists a set $\hat{\mathcal{M}}$ with $|\hat{\mathcal{M}}| \le N_\infty(\mathcal{M}, \epsilon/n)$ such that for any $(g, p) \in \mathcal{M}$ there exists $(\hat{g}, \hat{p}) \in \hat{\mathcal{M}}$ with
$$\max_{b} \sum_{i} \big(\|g_i(b) - \hat{g}_i(b)\|_1 + |p_i(b) - \hat{p}_i(b)|\big) \;\le\; \epsilon/n.$$
Therefore, for any $u \in \mathcal{U}$, take the profile $\hat{u}$ induced by such a $(\hat{g}, \hat{p}) \in \hat{\mathcal{M}}$. Since $v_i(S) \le 1$ for all $S \subseteq M$, for all $v, v'$ and each bidder $i$ we have $|u_i(v_i, (v_i', v_{-i})) - \hat{u}_i(v_i, (v_i', v_{-i}))| \le \|g_i(v_i', v_{-i}) - \hat{g}_i(v_i', v_{-i})\|_1 + |p_i(v_i', v_{-i}) - \hat{p}_i(v_i', v_{-i})| \le \epsilon/n$, and summing over the $n$ bidders gives $\sum_i |u_i(v_i, (v_i', v_{-i})) - \hat{u}_i(v_i, (v_i', v_{-i}))| \le \epsilon$. This completes the proof of Step 2.
Proof. By the definition of $N_\infty(\mathcal{U}, \epsilon)$, there exists $\hat{\mathcal{U}}$ with size at most $N_\infty(\mathcal{U}, \epsilon)$ such that for any $u \in \mathcal{U}$ there exists $\hat{u} \in \hat{\mathcal{U}}$ with, for all $v, v' \in V$, $\sum_i |u_i(v_i, (v_i', v_{-i})) - \hat{u}_i(v_i, (v_i', v_{-i}))| \le \epsilon$. Following the same argument as in Step 1, it is then easy to show that $N_\infty(\mathrm{rgt} \circ \mathcal{U}, \epsilon) \le N_\infty(\mathcal{U}, \epsilon/2)$. Together with Step 2, this completes the proof of Step 3.
Based on the same arguments as in Section D.2.3, we can thus bound the empirical Rademacher complexity $\hat{\mathcal{R}}_L(\mathrm{rgt} \circ \mathcal{U})$ in terms of the covering number $N_\infty(\mathcal{M}, \cdot)$. Applying Lemma D.1 then completes the proof of the generalization bound for regret.

D.3 Proof of Lemma 3.1
First, given the properties of the softmax function and the min operation, $\varphi^{DS}(s, s')$ ensures that the row sums and column sums of the resulting allocation matrix do not exceed 1. In fact, for any doubly stochastic allocation $z$, there exist scores $s$ and $s'$ for which the min of the normalized scores recovers $z$ (e.g., $s_{ij} = s'_{ij} = \log(z_{ij}) + c$ for any $c \in \mathbb{R}$).
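The construction in this proof can be checked numerically; the following sketch (with made-up scores) computes the element-wise min of a row-wise and a column-wise softmax and recovers a given doubly stochastic matrix from log-scores:

```python
import math

# Sketch of the phi^DS construction: row-wise and column-wise softmaxes of
# two score matrices, combined by an element-wise min, giving row and column
# sums of at most 1. Scores here are illustrative.

def phi_ds(s_row, s_col):
    n, m = len(s_row), len(s_row[0])
    row = [[math.exp(s_row[i][j]) / sum(math.exp(x) for x in s_row[i])
            for j in range(m)] for i in range(n)]
    col_sums = [sum(math.exp(s_col[i][j]) for i in range(n)) for j in range(m)]
    col = [[math.exp(s_col[i][j]) / col_sums[j] for j in range(m)]
           for i in range(n)]
    return [[min(row[i][j], col[i][j]) for j in range(m)] for i in range(n)]
```

Feeding in $s_{ij} = s'_{ij} = \log(z_{ij})$ for a doubly stochastic $z$ makes both softmaxes reproduce $z$ exactly, so the min recovers $z$, as claimed in the proof.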

D.5 Proof of Theorem 3.1
In Theorem 3.1, we only show the bounds on $\Delta_L$ for RegretNet with additive and unit-demand bidders. We restate this theorem so that it also bounds $\Delta_L$ for the general combinatorial valuations setting (with combinatorial feasible allocations). Recall that the $\ell_1$ norm of a vector $x$ is denoted by $\|x\|_1$, and the induced $\ell_1$ norm of a matrix $A \in \mathbb{R}^{k \times t}$ by $\|A\|_1$. We first bound the covering number for a general feed-forward neural network, and then specialize it to the three architectures presented in Section 3 and Appendix A.2.
Lemma D.2. Let $\mathcal{F}_k$ be a class of feed-forward neural networks that map an input vector $x \in \mathbb{R}^{d_0}$ to an output vector $y \in \mathbb{R}^{d_k}$, with each layer $\ell$ containing $T_\ell$ nodes and computing $z \mapsto \phi_\ell(w^\ell z)$, where each $w^\ell \in \mathbb{R}^{T_\ell \times T_{\ell-1}}$ and $\phi_\ell : \mathbb{R}^{T_\ell} \to [-B, +B]^{T_\ell}$. Further, for each network in $\mathcal{F}_k$, let the parameter matrices satisfy $\|w^\ell\|_1 \le W$ and $\|\phi_\ell(s) - \phi_\ell(s')\|_1 \le \Phi\, \|s - s'\|_1$ for any $s, s' \in \mathbb{R}^{T_\ell}$. Then
$$N_\infty(\mathcal{F}_k, \epsilon) \;\le\; \left\lceil \frac{2\, B\, T\, d\, W (2\Phi W)^k}{\epsilon} \right\rceil^{d},$$
where $T = \max_{\ell \in [k]} T_\ell$ and $d$ is the total number of parameters in a network.
Proof. We shall construct an $\ell_{1,\infty}$ cover for $\mathcal{F}_k$ by discretizing each of the $d$ parameters along $[-W, +W]$ at scale $\epsilon_0/d$, where we will choose $\epsilon_0 > 0$ at the end of the proof. We will use $\hat{\mathcal{F}}_k$ to denote the subset of neural networks in $\mathcal{F}_k$ whose parameters take values in the grid $\{-(\lceil Wd/\epsilon_0\rceil - 1)\,\epsilon_0/d, \ldots, -\epsilon_0/d, 0, \epsilon_0/d, \ldots, \lceil Wd/\epsilon_0\rceil\,\epsilon_0/d\}$. The size of $\hat{\mathcal{F}}_k$ is at most $\lceil 2dW/\epsilon_0 \rceil^d$. We shall now show that $\hat{\mathcal{F}}_k$ is an $\epsilon$-cover for $\mathcal{F}_k$.
We use mathematical induction on the number of layers $k$. We wish to show that for any $f \in \mathcal{F}_k$ there exists an $\hat{f} \in \hat{\mathcal{F}}_k$ whose outputs are within the desired $\ell_1$ distance of those of $f$ for every input. For $k = 0$, the statement holds trivially. Assume that the statement is true for $\mathcal{F}_k$; we now show that it holds for $\mathcal{F}_{k+1}$.
We then have a chain of inequalities bounding $\|f(x) - \hat{f}(x)\|_1$, where the second line follows from our assumption on $\phi_{k+1}$, and the sixth line follows from our inductive hypothesis and from (20). By choosing $\epsilon_0 = \epsilon / (B\, T\, (2\Phi W)^k)$, we complete the proof.
We next bound the covering number of the auction class in terms of the covering numbers for the class of allocation networks and the class of payment networks. Recall that the payment network computes fractional payments $\alpha : \mathbb{R}^{m(n+1)} \to [0, 1]^n$ and outputs a payment $p_i(b) = \alpha_i(b)\, \langle b_i, g_i(b)\rangle$ for each bidder $i$. Let $\mathcal{G}$ be the class of allocation networks and $\mathcal{A}$ the class of fractional payment functions used to construct auctions in $\mathcal{M}$, and let $N_\infty(\mathcal{G}, \epsilon)$ and $N_\infty(\mathcal{A}, \epsilon)$ be the corresponding covering numbers w.r.t. the $\ell_\infty$ norm. Then:

Proof. Let $\hat{\mathcal{G}} \subseteq \mathcal{G}$, $\hat{\mathcal{A}} \subseteq \mathcal{A}$ be $\ell_\infty$ covers for $\mathcal{G}$ and $\mathcal{A}$, i.e., for any $g \in \mathcal{G}$ and $\alpha \in \mathcal{A}$, there exist $\hat{g} \in \hat{\mathcal{G}}$ and $\hat{\alpha} \in \hat{\mathcal{A}}$ satisfying (22). We now show that the class of mechanisms $\hat{\mathcal{M}} = \{(\hat{g}, \hat{p}) \mid \hat{g} \in \hat{\mathcal{G}}, \hat{\alpha} \in \hat{\mathcal{A}}, \text{ and } \hat{p}_i(b) = \hat{\alpha}_i(b)\, \langle b_i, \hat{g}_i(b)\rangle\}$ is a cover for $\mathcal{M}$ under the $\ell_{1,\infty}$ distance. For any mechanism $(g, p) \in \mathcal{M}$, let $(\hat{g}, \hat{p}) \in \hat{\mathcal{M}}$ be a mechanism in $\hat{\mathcal{M}}$ that satisfies (22). We can then bound the distance between $(g, p)$ and $(\hat{g}, \hat{p})$, where in the third inequality of the derivation we use $\langle b_i, g_i(b)\rangle \le 1$. The size of the cover $\hat{\mathcal{M}}$ is $|\hat{\mathcal{G}}||\hat{\mathcal{A}}|$, which completes the proof.
We are now ready to prove covering number bounds for the three architectures in Section 3 and Appendix A.2.
Proof of Theorem D.1. All three architectures use the same feed-forward architecture for computing fractional payments, consisting of $R$ hidden layers with tanh activation functions. We also have, by our assumption, that the $\ell_1$ norm of the parameters of each layer is at most $W$, i.e., $\|w^\ell\|_1 \le W$ for each $\ell = 1, \ldots, R+1$. Using the fact that the tanh activation functions are 1-Lipschitz and bounded in $[-1, 1]$, and that there are at most $\max\{K, n\}$ nodes in any layer of the payment network, we have by an application of Lemma D.2 the following bound on the covering number of the fractional payment networks $\mathcal{A}$ used in each case:
$$N_\infty(\mathcal{A}, \epsilon) \;\le\; \left\lceil \frac{2 \max\{K, n\}\, d_p\, W (2W)^{R+1}}{\epsilon} \right\rceil^{d_p},$$
where $d_p$ is the number of parameters in the payment networks. For the covering number of the allocation networks $\mathcal{G}$, we consider each architecture separately. In each case, we bound the Lipschitz constant of the activation functions used in the layers of the allocation network, followed by an application of Lemma D.2. For ease of exposition, we omit the dummy scores used in the final layer of the neural network architectures.
Additive bidders. The hidden layers $\ell = 1, \ldots, R$ are standard feed-forward layers with tanh activations. Since the tanh activation function is 1-Lipschitz, $\|\phi_\ell(s) - \phi_\ell(s')\|_1 \le \|s - s'\|_1$. We also have, by our assumption, that $\|w^\ell\|_1 \le W$ for each $\ell = 1, \ldots, R+1$. Moreover, the output of each hidden layer node is in $[-1, 1]$, the output of each output layer node is in $[0, 1]$, and the maximum number of nodes in any layer (including the output layer) is at most $\max\{K, mn\}$.
By an application of Lemma D.2 with $\Phi = 1$, $B = 1$, and $T = \max\{K, mn\}$, we have
$$N_\infty(\mathcal{G}, \epsilon) \;\le\; \left\lceil \frac{2 \max\{K, mn\}\, d_g\, W (2W)^{R+1}}{\epsilon} \right\rceil^{d_g},$$
where $d_g$ is the number of parameters in the allocation networks.
Unit-demand bidders. The output layer computes $nm$ allocation probabilities, one for each bidder $i$ and item $j$, as an element-wise minimum of two softmax functions. The activation function $\phi_{R+1} : \mathbb{R}^{2nm} \to \mathbb{R}^{nm}$ for the final layer, for two sets of scores $s, \bar{s} \in \mathbb{R}^{n \times m}$, can be described as:
$$\phi_{R+1,i,j}(s, \bar{s}) \;=\; \min\{\mathrm{softmax}_j(s_{i,1}, \ldots, s_{i,m}),\; \mathrm{softmax}_i(\bar{s}_{1,j}, \ldots, \bar{s}_{n,j})\}.$$
We then have, for any $s, \bar{s}, s', \bar{s}' \in \mathbb{R}^{n \times m}$, a bound on $\|\phi_{R+1}(s, \bar{s}) - \phi_{R+1}(s', \bar{s}')\|_1$ in terms of $\|s - s'\|_1 + \|\bar{s} - \bar{s}'\|_1$, where the last step can be derived in the same way as (23).
Combinatorial bidders. The output layer outputs an allocation probability for each bidder $i$ and bundle of items $S \subseteq M$. The activation function $\phi_{R+1} : \mathbb{R}^{(m+1)n2^m} \to \mathbb{R}^{n2^m}$ for this layer, for the $m+1$ sets of scores $s, s^{(1)}, \ldots, s^{(m)} \in \mathbb{R}^{n \times 2^m}$, is given by the min-of-softmaxes construction of Appendix A.2, where $\mathrm{softmax}_S(a_{S'} : S' \subseteq M) = e^{a_S} / \sum_{S' \subseteq M} e^{a_{S'}}$.
We now bound $\Delta_L$ for the three architectures using the covering number bounds derived above. In particular, we upper bound the 'inf' over $\epsilon > 0$ by substituting a specific value of $\epsilon$.

We apply the duality theory of Daskalakis et al. (2013) to verify the optimality of our proposed mechanism (motivated by the empirical results of RochetNet). For completeness of presentation, we provide a brief introduction to their approach here. Let $f(v)$ be the joint valuation distribution of $v = (v_1, v_2, \ldots, v_m)$, let $V$ be the support of $f(v)$, and define the measure $\mu$ with the following density, where $\bar{v}$ is the "base valuation", i.e., $u(\bar{v}) = 0$, $\partial V$ denotes the boundary of $V$, $\hat{n}(v)$ is the outer unit normal vector at point $v \in \partial V$, and $m$ is the number of items. Let $\Gamma^+(X)$ denote the unsigned (Radon) measures on $X$. Consider an unsigned measure $\gamma \in \Gamma^+(X \times X)$, and let $\gamma_1$ and $\gamma_2$ be the two marginal measures of $\gamma$, i.e., $\gamma_1(A) = \gamma(A \times X)$ and $\gamma_2(A) = \gamma(X \times A)$ for all measurable sets $A \subseteq X$. We say that measure $\alpha$ dominates $\beta$ if and only if for all (non-decreasing, convex) functions $u$, $\int u \, d\alpha \ge \int u \, d\beta$. Then, by strong duality, we have
$$\sup_{u} \int u \, d\mu \;=\; \inf_{\gamma} \int \|v - v'\|_1 \, d\gamma(v, v'), \qquad (25)$$
where the supremum is over feasible (non-decreasing, convex) utility functions $u$ and the infimum is over feasible measures $\gamma \in \Gamma^+(X \times X)$, and both the supremum and infimum are achieved. Based on the "complementary slackness" conditions of linear programming, the optimal solutions of Equation 25 need to satisfy the following conditions.
Corollary D.1 (Daskalakis et al. (2017)). Let $u^*$ and $\gamma^*$ be feasible for their respective problems in Equation 25. Then $\int u^* \, d\mu = \int \|v - v'\|_1 \, d\gamma^*$ if and only if the following two conditions hold: We then prove that the utility function $u^*$ induced by the mechanism for Setting C is optimal. Here we only focus on Setting C with $c > 1$; for $c \le 1$ the proof is analogous, and we omit it here. The transformed measure $\mu$ of the valuation distribution is composed of: