A Constant-Factor Approximation for Nash Social Welfare with Subadditive Valuations

We present a constant-factor approximation algorithm for the Nash Social Welfare (NSW) maximization problem with subadditive valuations accessible via demand queries. More generally, we propose a framework for NSW optimization that assumes access to two subroutines: one that (1) solves a configuration-type LP under certain additional conditions, and one that (2) rounds the fractional solution with respect to utilitarian social welfare. In particular, a constant-factor approximation for submodular valuations with value queries can also be derived from our framework.


Introduction
We consider the problem of allocating a set I of m indivisible items to a set A of n agents, where each agent i ∈ A has a valuation function v_i : 2^I → R_{≥0}. The Nash social welfare (NSW) problem is to find an allocation S = (S_i)_{i∈A} that maximizes the geometric mean of the agents' valuations,

NSW(S) = ( ∏_{i∈A} v_i(S_i) )^{1/n}.

For α ≥ 1, an α-approximate solution to the NSW problem is an allocation S with NSW(S) ≥ OPT/α, where OPT denotes the optimum value of the NSW-maximization problem.
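For concreteness, here is a toy computation of the objective (our own illustration; `nsw` is a hypothetical helper, not part of the paper):

```python
import math

def nsw(values):
    """Geometric mean of the agents' values: (v_1(S_1) * ... * v_n(S_n))^(1/n).
    Computed via logarithms for numerical stability."""
    n = len(values)
    return math.exp(sum(math.log(v) for v in values) / n)

# Two agents whose bundles are worth 4 and 9: NSW = sqrt(4 * 9) = 6.
example = nsw([4.0, 9.0])
```

Note that the logarithmic form also explains why NSW is invariant under scaling each v_i by its own factor λ_i: the scaling only shifts the objective by a constant.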
Allocating resources to agents in a fair and efficient manner is a fundamental problem in computer science, economics, and social choice theory, with substantial prior work [Bar05, BT96, BCE+16, Mou04, RW98, Rot16, You95]. A common measure of efficiency is utilitarian social welfare, i.e., the sum of the utilities Σ_{i∈A} v_i(S_i) for an allocation (S_i)_{i∈A}. This objective does not take fairness into account, as all items could be allocated to one agent whose valuation function dominates the others. In order to incorporate fairness, various notions have been considered, ranging from envy-freeness and proportional fairness to various modifications of the objective function. At the end of the spectrum opposite to utilitarian social welfare, one can consider the max-min objective, min_{i∈A} v_i(S_i), also known as the Santa Claus problem [BS06]. This objective is somewhat extreme in considering only the happiness of the least happy agent.
Nash social welfare provides a balanced tradeoff between the requirements of fairness and efficiency. It has been introduced independently in several contexts: as a discrete variant of the Nash bargaining game [KN79, Nas50]; as a notion of competitive equilibrium with equal incomes in economics [Var74]; and also as a proportional fairness notion in networking [Kel97]. Nash social welfare has several desirable features, for example invariance under scaling of the valuation functions v_i by independent factors λ_i, i.e., each agent can express their preference in a "different currency" without changing the optimization problem (see [Mou04] for additional characteristics).

Preliminaries
The difficulty of optimizing Nash social welfare depends naturally on the class of valuation functions that we want to deal with, and how they are accessible. Various classes of valuations have been considered in the literature. For the sake of this paper, let us introduce four basic classes of valuations, and three oracle models.
Classes of valuation functions. A set function v : 2^I → R is monotone if v(S) ≤ v(T) whenever S ⊆ T. A monotone set function with v(∅) = 0 is also called a valuation function, or simply a valuation.
A valuation v : 2^I → R is additive if v(S) = Σ_{j∈S} w_j for nonnegative weights w_j. A valuation v : 2^I → R is submodular if v(S ∪ {j}) − v(S) ≥ v(T ∪ {j}) − v(T) whenever S ⊆ T and j ∉ T. A valuation v : 2^I → R is fractionally subadditive (or XOS) if v(S) = max_k Σ_{j∈S} w_{kj} for a finite collection of nonnegative weights (w_{kj}), i.e., a pointwise maximum of additive functions. A valuation v : 2^I → R is subadditive if v(S ∪ T) ≤ v(S) + v(T) for all S, T ⊆ I.

We remark that these classes form a chain of inclusions: additive valuations are submodular, submodular valuations are XOS, and XOS valuations are subadditive.
Oracle access. Note that additive valuations can be presented explicitly on the input. However, for more general classes of valuations, we need to resort to oracle access, since presenting a valuation explicitly would take an exponential amount of space. Three types of oracles to access valuation functions have been commonly considered in the literature.
• Value oracle: Given a set S ⊆ I, return the value v(S).
• Demand oracle: Given prices (p_j : j ∈ I), return a set S maximizing v(S) − Σ_{j∈S} p_j.
• XOS oracle (for an XOS valuation v): Given a set S, return an additive function a from the XOS representation of v such that v(S) = a(S).
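For intuition, both value and demand queries are trivial to answer for additive valuations; the sketch below is our own illustration (for general subadditive valuations these queries are the model's primitives and cannot be simulated this cheaply):

```python
def value_oracle(weights, S):
    """Value query for an additive valuation v(S) = sum of w_j over j in S."""
    return sum(weights[j] for j in S)

def demand_oracle(weights, prices):
    """Demand query: return a set S maximizing v(S) - sum of p_j over j in S.
    For an additive valuation each item can be decided independently:
    take j iff w_j > p_j."""
    return {j for j in weights if weights[j] > prices[j]}

weights = {"a": 5.0, "b": 2.0, "c": 1.0}
prices = {"a": 3.0, "b": 2.5, "c": 0.5}
demanded = demand_oracle(weights, prices)
# "a" (5 > 3) and "c" (1 > 0.5) are demanded; "b" is not (2 < 2.5)
```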

Prior work
The Nash social welfare problem is NP-hard already in the case of two agents with identical additive valuations, by a reduction from the Subset-Sum problem. For multiple agents, it is NP-hard to approximate within a factor better than 0.936 for additive valuations [GHM17], and better than 1 − 1/e ≈ 0.632 for submodular valuations [GKK20].
The first constant-factor approximation algorithm for additive valuations, with the factor of 1/(2e^{1/e}) ≈ 0.346, was given by Cole and Gkatzelis [CG15] using a continuous relaxation based on a particular market equilibrium concept. Later, [CDG+17] improved the analysis of this algorithm to achieve the factor of 1/2. Anari, Oveis Gharan, Saberi, and Singh [AGSS17] used a convex relaxation that relies on properties of real stable polynomials to give an elegant analysis of an algorithm achieving a factor of 1/e. The current best factor is 1/e^{1/e} − ε ≈ 0.692, by Barman, Krishnamurthy, and Vaish [BKV18]; the algorithm uses a different market-equilibrium-based approach. Note also that this factor is above 1 − 1/e, hence separating the additive and submodular settings.
Constant-factor approximations have been extended to some classes beyond additive functions: capped-additive [GHM18], separable piecewise-linear concave (SPLC) [AMGV18], and their common generalization, capped-SPLC [CCG+18] valuations; the approximation factor for capped-SPLC valuations matches the 1/e^{1/e} − ε factor for additive valuations. All these valuations are special classes of submodular ones. Subsequently, Li and Vondrák [LV21b] designed an algorithm that estimates the optimal value within a factor of (e−1)²/e³ ≈ 0.147 for a broad class of submodular valuations, such as coverage functions and sums of matroid rank functions, by extending the techniques of [AGSS17] using real stable polynomials. However, this algorithm only estimates the optimum value and does not find a corresponding allocation in polynomial time.
An important conceptual advance was presented in [GHV20], where a relaxation combining ideas from matching theory and convex optimization was shown to give a constant factor for the class of "Rado valuations" (containing weighted matroid rank functions and some related valuations). A crucial property of this approach is that it is quite modular, and it ended up leading to multiple further advances. In [LV21a], this approach was extended to provide a constant-factor approximation algorithm for general submodular valuations, by replacing the concave extension of a valuation with the multilinear extension. The initial factor was rather small (1/380). Recently, a much simpler algorithm combining matching and local search was presented to give a (1/4 − ε)-approximation for submodular valuations [GHL+23].
For the more general classes of XOS and subadditive valuations [BBKS20, CGM21, GKK20], however, only polynomial approximation factors were known until now, and this is the best one can hope for in the value oracle model [BBKS20], for the same reasons that this is a barrier for the utilitarian social welfare problem [DNS10]. The best known approximation factors up to now have been O(1/n) for subadditive valuations, and O(1/n^{53/54}) for XOS valuations if we are given access to both demand and XOS oracles [BKKN21]. Constant factors for XOS valuations seemed quite out of reach prior to this work, and obtaining any sublinear factor for subadditive valuations was stated as an open problem in [BKKN21].

Our results and techniques
Our main result is the following.

Theorem. (informal)
There is an algorithm, using polynomial time and polynomially many demand queries to the agents' valuations, that provides a constant-factor approximation for Nash social welfare with subadditive valuations.
As a special case, this also gives a constant-factor approximation for XOS valuations accessible via demand queries. (The algorithm for XOS valuations is somewhat simpler, as we discuss later in this section.) This completes the picture in the sense that we now have a constant-factor approximation for Nash social welfare in the main settings where one is known for utilitarian social welfare: for submodular valuations with value queries, and for subadditive valuations with demand queries. (It is known that a stronger oracle than a value oracle is required for XOS and subadditive valuations, even for utilitarian social welfare.) The basis of our approach is the matching+relaxation paradigm which gave a constant-factor approximation for submodular valuations [GHV20, LV21a]. Considering that the only constant-factor approximation for social welfare with subadditive valuations [Fei08] is based on the "Configuration LP", which can be solved using demand queries, it is a natural idea to use a relaxation similar to the Configuration LP. A natural variant for Nash social welfare is the Eisenberg-Gale relaxation, using the logarithm of the concave extension of each agent's valuation. We apply this relaxation on top of an initial matching, as in [GHV20].
The main obstacle with this approach is that natural rounding procedures for the Configuration LP do not satisfy any concentration properties. At a high level, without concentration some agents receive higher value and some lower, leading to poor Nash social welfare even if we maintain the expected utilitarian social welfare. More specifically, the first challenge is that, given a fractional solution x_{i,S}, we would ideally like to round it to an integral allocation by allocating set S to agent i with probability x_{i,S}. Even though this ideal rounding preserves each agent's expected value, the variance can be arbitrary, depending on the fractional solution x_{i,S}. Our first technical contribution is a procedure (see Lemma 3) for finding a new feasible solution to the Configuration LP that, for each agent, has only high-value subsets in its support (with the exception of agents who get most of their value from a single item; this case is handled separately with the matching procedure). This procedure is rather simple in hindsight. At a high level, we can think of the fractional solution as a distribution of allocations for each agent. We want to discard the part of the distribution that corresponds to low-value subsets; but this leaves uncovered probability mass. We re-cover this remaining mass by splitting high-value subsets to "stretch" over more probability mass, while allocating each item to agent i with the same total probability.
The next obstacle in rounding the Configuration LP is resolving "contentions": that is, under the ideal rounding procedure described above, we may be trying to allocate the same item to multiple agents (even though in expectation it is only allocated to one agent). For XOS valuations, a simple independent randomized contention resolution scheme guarantees a constant-factor approximation and also enjoys good concentration. However, the situation is more complicated for subadditive valuations. The only known constant-factor approximation for social welfare with subadditive valuations is a rather intricate rounding procedure of Feige [Fei08], which does not seem to satisfy any useful concentration properties. In any rounded solution, there might be agents who receive very low value, which hurts Nash social welfare, and hence we cannot use it directly.
Our solution is an iterated rounding procedure, where in each stage a certain fraction of agents is "satisfied" in the sense that they receive value comparable to their fractional value. We allocate the respective items to them, subject to random filtering which ensures that enough items are still left for the remaining agents. Then we recurse on the remaining agents and remaining items. Still, some agents may receive a relatively small value, but we guarantee that the fraction of agents who receive low values is proportionally small, which means that the Nash social welfare overall is guaranteed to be good. As an example, suppose it suffices to produce an allocation where n/2 agents receive value at least (1/2)V_i, n/4 agents receive value at least (1/4)V_i, n/8 agents receive value at least (1/8)V_i, and so on. Then the approximation factor in terms of Nash social welfare turns out to be ∏_{k≥1} (1/2^k)^{1/2^k}, and this infinite product converges to 1/4 (we leave this as an exercise for the reader).
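The convergence of the product above is easy to check numerically; the snippet below (our own sanity check) evaluates ∏_{k≥1} (1/2^k)^{1/2^k} = 2^{-Σ_k k/2^k} = 2^{-2}:

```python
import math

# Each level k covers a 1/2^k fraction of agents, each receiving a 1/2^k
# fraction of their target value, contributing (1/2^k)^(1/2^k) to the
# geometric mean. Truncating the sum at k = 60 leaves a negligible tail.
log_product = sum((1 / 2**k) * math.log(1 / 2**k) for k in range(1, 60))
product = math.exp(log_product)
# sum_{k>=1} k/2^k = 2, so the product equals 2^{-2} = 1/4
```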
In order to guarantee the success of this rounding procedure, we need a concentration inequality (as in previous works). Concentration properties of subadditive functions are somewhat weaker and more difficult to prove than for submodular or XOS functions. Here we appeal to a powerful subadditive concentration inequality presented by Schechtman [Sch03], which is based on the "q-point control inequality" of Talagrand [Tal89, Tal95].

Reducing Nash Welfare to Rounding of Configuration LP
Technically, we prove a reduction theorem (Theorem 1) which shows that to achieve a constant-factor approximation for Nash social welfare, it is sufficient to implement efficient subroutines for two subproblems: (1) finding a solution of the Configuration LP satisfying a certain additional property (which happens to be satisfied, for example, by an optimal solution of the Eisenberg-Gale relaxation [GHV20], or by the continuous greedy algorithm for the log-multilinear relaxation [LV21a]), and (2) rounding a fractional solution of the Configuration LP while losing only a constant factor with respect to utilitarian social welfare. The latter problem is relatively easy for XOS valuations, but non-trivial for subadditive valuations. Fortunately, a factor-1/2 rounding procedure is known due to Feige's work on welfare maximization with subadditive bidders [Fei08], which we use here as a black box.
We remark that the constant factors lost in various stages of our proof are rather large and lead to a final approximation factor of roughly 1/375,000 for the Nash social welfare problem with subadditive valuations. One may hope that, as in the case of submodular valuations, an initially large constant factor can eventually be improved to a "practical" one.
Paper organization. In Section 2, we present the main technical result, which is a reduction of Nash social welfare to a certain relaxation solver and a rounding procedure for the Configuration LP. In Section 3, we show how this implies a constant-factor approximation algorithm for Nash social welfare with subadditive valuations.
We defer some components of the algorithm which are similar to earlier work to the appendices: Solving the relaxation and proving the required guarantees (Appendix A), the rematching lemmas (Appendix B), and concentration of subadditive functions (Appendix C).

Optimizing NSW via relaxation and rounding for social welfare
Here we describe our general approach, which allows us to derive algorithms for NSW optimization in several settings. At a high level, we reduce NSW optimization to finding a certain solution of the "Configuration LP" (for social welfare optimization) and having a rounding procedure for the Configuration LP, again with respect to social welfare.
Let us define the Configuration LP:

max Σ_{i∈A} Σ_{S⊆I} v_i(S) x_{i,S}
s.t. Σ_{S⊆I} x_{i,S} = 1 for every agent i ∈ A,
     Σ_{i∈A} Σ_{S: j∈S} x_{i,S} ≤ 1 for every item j ∈ I,
     x_{i,S} ≥ 0.

The following is our main reduction theorem, which provides an algorithm for Nash social welfare, given two procedures that we call the Relaxation Solver and the Rounding Procedure. Note that the assumption on the Relaxation Solver is somewhat unusual: It is not that (x_{i,S}) is an optimal or near-optimal solution of (Configuration LP), but a different condition, namely that the optimum social welfare with the scaled valuations w_i(S) = v_i(S)/V_i, where V_i = Σ_{S⊆I} v_i(S) x_{i,S}, is at most c|A|. (The social welfare of x_{i,S} itself with valuations w_i is exactly |A|, so as a consequence (x_{i,S}) is a c-approximate optimum with respect to the valuations w_i.) This condition is required primarily for the later "rematching" step (Lemma 8). Fortunately, this condition is satisfied by natural approaches to solving the Eisenberg-Gale relaxation, which replaces the continuous valuation extensions by their logarithms. We discuss this further in Section 3.
Theorem 1. Suppose that for a certain class of instances of Nash social welfare with subadditive valuations, we have the following procedures available, with parameters c, d ≥ 1:

• Relaxation Solver: Given valuations (v_i : i ∈ A) on a set of items I, we can find a feasible solution (x_{i,S}) of (Configuration LP) such that the social welfare optimum with the valuations w_i(S) = v_i(S)/V_i, where V_i = Σ_{S⊆I} v_i(S) x_{i,S}, is at most c|A|.
• Rounding Procedure: Given a feasible solution (x_{i,S}) of (Configuration LP), we can find an allocation (S_1, . . ., S_n) where each S_i is a subset of some set S′_i with x_{i,S′_i} > 0, such that Σ_{i∈A} v_i(S_i) ≥ (1/d) Σ_{i∈A} Σ_{S⊆I} v_i(S) x_{i,S}.

Then there is an algorithm which provides an O(cd²)-approximation in Nash social welfare for the same class of instances, using 1 call to the Relaxation Solver and a logarithmic number of calls to the Rounding Procedure. The running time is polynomial in |A|, |I| and the size of the support of the fractional solution (x_{i,S}).
In the following, we prove this theorem by presenting an algorithm with several phases. These phases are similar to recent matching-based algorithms for Nash social welfare [GHV20, GHV21, LV21a, GHL+23], with the exception of two phases which are new (Phases 3 and 4 below). The high-level outline is as follows.

NSW Algorithm Template.
1. We find an initial matching τ : A → I, maximizing ∏_{i∈A} v_i({τ(i)}). Let H = τ[A] denote the matching items and I′ = I \ H the remaining items. Let also A′ = {i ∈ A : v_i(I′) > 0} denote the agents who get positive value from the remaining items.

2. We apply the Relaxation Solver to obtain a fractional solution (x_{i,S})_{i∈A′,S⊆I′} and values V_i = Σ_{S⊆I′} v_i(S) x_{i,S}. We can view these values as "targets" for different agents to achieve.

3. Let ν_i = max_{j∈I′} v_i(j), and let A″ ⊆ A′ denote the agents for whom V_i is sufficiently large compared to ν_i (the remaining agents will be served by the matching). We preprocess the fractional solution (x_{i,S}) for i ∈ A″, removing sets of low value and partitioning sets of high value, so that for every set S in the support of the new fractional solution x′_{i,S} for agent i, we have v_i(S) = Θ(V_i).
4. We apply the Rounding Procedure to x′_{i,S} to find an allocation (S_i : i ∈ A″). Since each S_i has value at most V_i (due to our preprocessing), it must be the case that a Θ(1/d)-fraction of agents receive value at least Θ((1/d)V_i). We allocate a random Θ(1/d)-fraction of items to this Θ(1/d)-fraction of agents (each item from their respective sets independently with probability Θ(1/d)); call the resulting set T_i for agent i. We repeat this phase for the remaining items and agents, until there are no agents left. For agents i ∈ A \ A″, we define T_i = ∅.

5. We recompute the initial matching to obtain a new matching σ : A → H maximizing ∏_{i∈A} v_i(T_i + σ(i)), and return the allocation (T_i + σ(i) : i ∈ A).
Now we proceed to analyze the phases of this algorithm more rigorously.

Initial Matching
There is nothing new in this phase. We can find a matching τ : A → I maximizing ∏_{i∈A} v_i(τ(i)) by solving a max-weight matching problem with edges (i, j) where v_i(j) > 0, and weights w_{ij} = log v_i(j).
We denote by H = τ[A] the matched items, by I′ = I \ H the remaining items, and by A′ = {i ∈ A : v_i(I′) > 0} the agents who get positive value from I′.
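Since NSW multiplies the agents' values, the initial matching maximizes a product of singleton values, which the log-weights turn into a standard max-weight matching. The brute-force sketch below (our own, for tiny instances only; a real implementation would use a polynomial-time matching algorithm) illustrates the objective:

```python
from itertools import permutations

def best_initial_matching(values):
    """values[i][j] = v_i({j}) for agent i and item j, with at least as many
    items as agents. Returns the assignment tau maximizing the product of
    matched singleton values (equivalently, the sum of their logarithms)."""
    n_agents, n_items = len(values), len(values[0])
    best_prod, best_tau = -1.0, None
    for perm in permutations(range(n_items), n_agents):
        prod = 1.0
        for i, j in enumerate(perm):
            prod *= values[i][j]
        if prod > best_prod:
            best_prod, best_tau = prod, list(perm)
    return best_tau, best_prod

values = [[4.0, 1.0, 2.0],
          [3.0, 3.0, 1.0]]
tau, prod = best_initial_matching(values)
# agent 0 takes item 0 (value 4), agent 1 takes item 1 (value 3): product 12
```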
A property we need in the following is:

Lemma 2. For every agent i ∈ A and every item j ∈ I′, we have v_i(j) ≤ v_i(τ(i)).

Proof. If there is j ∈ I′ with v_i(j) > v_i(τ(i)), then we can swap τ(i) for j in the matching and increase its value. □

For subadditive valuations, we also get v_i(S) ≤ Σ_{j∈S} v_i(j) ≤ |S| · v_i(τ(i)) for every S ⊆ I′.

Relaxation Solver
Here we assume that the Relaxation Solver is available as a black-box.We return to its implementations in specific settings in Section 3.
We apply the Relaxation Solver to the residual instance on items I′ = I \ H and agents A′ who have nonzero value for some items in I′. The important property of the obtained solution (x_{i,S}) is that after scaling the valuations to w_i(S) = v_i(S)/V_i, where V_i = Σ_{S⊆I′} v_i(S) x_{i,S}, the social welfare optimum for w_1, . . ., w_n is at most c|A′|. In other words, for any feasible allocation (S*_i : i ∈ A′) of the items I′, we have Σ_{i∈A′} v_i(S*_i)/V_i ≤ c|A′|.

Set Splitting
Here we describe Phase 3, the preprocessing of the fractional solution. We will work only with agents who get significant value from the fractional solution: Let ν_i = max_{j∈I′} v_i(j), and let A″ ⊆ A′ be the agents whose fractional value V_i is sufficiently large compared to ν_i (agents who get most of their value from a single item are handled by the matching phases). We prove the following.
Lemma 3. Assume that the valuations v_1, . . ., v_n are subadditive. Given a feasible solution (x_{i,S}) of (Configuration LP) for an instance with agents A″ and items I′, where V_i = Σ_{S⊆I′} v_i(S) x_{i,S} and ν_i = max_{j∈I′} v_i(j), we can find (in running time and a number of value queries polynomial in the number of nonzero coefficients x_{i,S}) a modified feasible solution (x′_{i,S}) such that Σ_S x′_{i,S} = 1 for every i ∈ A″ and every set T in the support of x′_i satisfies (1/3)V_i − ν_i ≤ v_i(T) ≤ V_i.

We apply the following procedure to the fractional solution x = (x_{i,S}).

1. Let F_i = {S ⊆ I′ : v_i(S) ≥ (1/3)V_i} denote the sets of sufficiently high value in the support of x_i.

2. Set x̃_{i,S} = 0 and k_{i,S} = 0 for S ∉ F_i; i.e., discard sets whose value is too low.

3. For every S ∈ F_i, set k_{i,S} = ⌊3 v_i(S)/V_i⌋ and partition S into subsets S_1, . . ., S_{k_{i,S}} with v_i(S_ℓ) ≥ (1/3)V_i − ν_i for each ℓ. Note that this is possible since, by subadditivity, the average value of a subset in any partition of S into k_{i,S} subsets is at least v_i(S)/k_{i,S} ≥ (1/3)V_i, and indivisibility of items can cause the value to drop by at most ν_i.

4. For each set S_ℓ produced above, remove some items if necessary to ensure that its value is at most V_i. Call the resulting set S′_ℓ. Since removing an item can decrease the value by at most ν_i, we start from value ≥ (1/3)V_i − ν_i, and we only remove items as long as the value is more than V_i, so we can conclude that (1/3)V_i − ν_i ≤ v_i(S′_ℓ) ≤ V_i.

5. Set x̃_{i,T} = Σ_{S∈F_i, ∃ℓ: S′_ℓ=T} x_{i,S}, and x′_{i,T} = x̃_{i,T} / Σ_S x̃_{i,S}.

6. Return x′.
Let us now prove the desired properties of x′. By construction (step 5), the solution is normalized in the sense that Σ_T x′_{i,T} = 1 for every i ∈ A″. Also, as we argued above, V_i ≥ v_i(T) ≥ (1/3)V_i − ν_i for every set T participating in the support of x′. It remains to prove that the coefficients x′_{i,T} add up to at most 1 on each item.
Let us first consider x̃_{i,T}: Since each contribution to x̃_{i,T} for j ∈ T is inherited from some coefficient x_{i,S} where j ∈ S, each coefficient x_{i,S} contributes at most once in this way, and the coefficients x_{i,S} for S ∋ j add up to at most 1, it is clear that Σ_{i,T∋j} x̃_{i,T} ≤ 1. Finally, x′_{i,T} is obtained by normalizing x̃_{i,T}; so we need to be concerned about the summation Σ_S x̃_{i,S}, which could possibly be less than 1.
Observe that each coefficient x_{i,S} for S ∈ F_i contributes k_{i,S} coefficients of the same value to the summation Σ_S x̃_{i,S}, and the union of the respective sets is S. So we have

Σ_S x̃_{i,S} = Σ_{S∈F_i} k_{i,S} x_{i,S} ≥ Σ_{S∈F_i} (3 v_i(S) / (2V_i)) x_{i,S},

considering that 3v_i(S)/V_i ≥ 1 for S ∈ F_i, so the floor operation can decrease the ratio by at most a factor of 2. Also, we have Σ_{S∈F_i} v_i(S) x_{i,S} ≥ (2/3)V_i, since the discarded sets S ∉ F_i satisfy v_i(S) < (1/3)V_i and hence contribute less than (1/3)V_i to V_i = Σ_S v_i(S) x_{i,S}. Hence Σ_S x̃_{i,S} ≥ (3/(2V_i)) · (2/3)V_i = 1, so x′_{i,T} = x̃_{i,T} / Σ_S x̃_{i,S} ≤ x̃_{i,T} and the coefficients x′_{i,T} for T ∋ j add up to at most 1.
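The splitting step (step 3) can be illustrated with a small sketch. We use an additive valuation for simplicity (the helper names and the greedy balancing rule are our own; the lemma itself only needs subadditivity and any partition achieving the average value up to ν_i):

```python
import math

def split_set(items, value_of, V_i, nu_i):
    """Partition a high-value set S into k = floor(3 * v(S) / V_i) parts,
    mirroring step 3 of the procedure, for an additive valuation value_of.
    Greedy 'lightest part first' balancing guarantees every part has value
    at least v(S)/k - (max item value) >= V_i/3 - nu_i."""
    total = sum(value_of[j] for j in items)
    k = max(1, math.floor(3 * total / V_i))
    parts = [[] for _ in range(k)]
    part_val = [0.0] * k
    for j in sorted(items, key=lambda j: -value_of[j]):
        t = part_val.index(min(part_val))  # currently lightest part
        parts[t].append(j)
        part_val[t] += value_of[j]
    return parts

value_of = {j: 1.0 for j in range(9)}  # nine unit-value items, v(S) = |S|
parts = split_set(list(range(9)), value_of, V_i=9.0, nu_i=1.0)
# k = floor(3 * 9 / 9) = 3 parts, each with three items, i.e., value V_i / 3
```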

Iterated Rounding
Finally, we need to round the fractional solution (x′_{i,S}) obtained in the previous phase. As a subroutine, we use the assumed Rounding Procedure for (additive) social welfare.
Given the fractional solution x′ = (x′_{i,S}) obtained in the previous phase, we call the procedure NSW-ROUND(x′, A″, I′, δ) with parameter δ = 1/(7d), where d is the approximation factor guaranteed by the Rounding Procedure.

1: procedure NSW-ROUND(x′, A_0, I_0, δ)
2: For each item j ∈ I_0, sample a round label r_j ∈ {1, 2, . . .} independently with P[r_j = t] = δ(1 − δ)^{t−1}
3: Let R_t ← {j ∈ I_0 : r_j = t} for all t ≥ 1
4: Let A_1 ← A_0 and t ← 1
5: while A_t ≠ ∅ do
6: Apply the Rounding Procedure to x′ restricted to the agents A_t, obtaining tentative sets (S_i : i ∈ A_t); let A_{t+1} ← {i ∈ A_t : v_i(S_i) < δV′_i} be the agents not yet satisfied
7: For each agent i ∈ A_t \ A_{t+1}, allocate T_i ← S_i ∩ R_t
8: Let t ← t + 1
9: end while
10: Return (T_i : i ∈ A_0)
11: end procedure

As we mentioned above, the intuition behind this rounding procedure is that it gives good value to a large fraction of agents, and exponentially small values to an exponentially decaying number of agents, so overall its Nash social welfare is good. We prove this in a sequence of lemmas.
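The loop structure of NSW-ROUND can be summarized in a short sketch (our own simplification: the Rounding Procedure is abstracted as a callback `tentative_round` that returns tentative sets together with the agents it satisfies, which by Lemma 4 is at least a δ-fraction):

```python
import math
import random

def nsw_round(agents, items, delta, tentative_round):
    """Skeleton of the iterated rounding. Each item j independently draws a
    round label r_j with P[r_j = t] = delta * (1 - delta)**(t - 1) (geometric,
    via inverse-transform sampling), so R_t contains each item independently
    with exactly that probability. A satisfied agent keeps only the items of
    its tentative set whose label equals the current round."""
    r = {j: 1 + int(math.log(random.random()) / math.log(1 - delta)) for j in items}
    allocation = {i: set() for i in agents}
    active, t = set(agents), 1
    while active:
        tentative, satisfied = tentative_round(active)
        for i in satisfied:
            allocation[i] = {j for j in tentative[i] if r[j] == t}
        active -= satisfied
        t += 1
    return allocation

random.seed(0)
def satisfy_all(active):
    # toy stand-in for the Rounding Procedure: everyone is satisfied at once
    return {i: {0, 1, 2, 3} for i in active}, set(active)

alloc = nsw_round(["a", "b"], [0, 1, 2, 3], 0.5, satisfy_all)
```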
Lemma 4. Under our assumption on the Rounding Procedure, and setting δ = 1/(7d), in each round there is at least a δ-fraction of agents (rounded up to the nearest integer) who receive value at least δV′_i, where V′_i = Σ_S v_i(S) x′_{i,S}.
Proof. Note that V′_i ≥ (1/6)V_i, since every set in the support of x′_{i,S} has value at least (1/3)V_i − ν_i ≥ (1/6)V_i. We assume that under the valuations w_i(S) = v_i(S)/V′_i, the Rounding Procedure returns an allocation (S_i : i ∈ A_t) with Σ_{i∈A_t} w_i(S_i) ≥ (1/d)|A_t| = 7δ|A_t|. Also, the fractional solution x′ has been processed so that no set in its support for agent i has value more than V_i ≤ 6V′_i, and the rounding only allocates subsets of sets in the support of x′. Hence, we have w_i(S_i) ≤ 6 for every agent i. Consider the agents who receive value w_i(S_i) ≥ δ; if the number of such agents were less than δ|A_t|, then the total value collected by the agents would be Σ_{i∈A_t} w_i(S_i) < 6 · δ|A_t| + δ · |A_t| = 7δ|A_t|, a contradiction. □

Lemma 5. If |A_0| = a and the agents are ordered by the round in which they received items (and arbitrarily within each round), then the i-th agent receives each element of her set S_i independently with probability at least δ(1 − (i−1)/a).

Proof. Consider the i-th agent, and suppose that i ∈ A_t \ A_{t+1}, i.e., the agent gets items in round t. We claim that a(1 − δ)^{t−1} ≥ a − i + 1: In each round, we allocate items to at least a δ-fraction of agents, so the set of agents A_t remaining after t − 1 rounds has size at most a(1 − δ)^{t−1}. This set must include agent i, otherwise she would have been satisfied earlier, as well as all agents after her in the ordering; therefore a(1 − δ)^{t−1} ≥ a − i + 1. The items allocated to agent i in round t are S_i ∩ R_t, where R_t contains each element independently with probability δ(1 − δ)^{t−1}. By the argument above, δ(1 − δ)^{t−1} ≥ δ · (a − i + 1)/a ≥ δ(1 − (i−1)/a). □
Lemma 6. Let f(S) = v_i(S) + ν_i. If T_i is the set allocated to the i-th agent in the ordering defined above (and we assume w.l.o.g. that the index of this agent is also i), and max_{j∈I′} v_i(j) ≤ ν_i, then E[log(V_i / f(T_i))] ≤ 2 log(1/δ) + log(a/(a − i + 1)) + O(1).

Proof. By definition, the set S_i tentatively chosen for the i-th agent in the round where i ∈ A_t \ A_{t+1} satisfies v_i(S_i) ≥ δV′_i ≥ δ((1/3)V_i − ν_i) (see Lemma 3 and Lemma 4). By Lemma 5, the i-th agent receives a set T_i = S_i ∩ R_t which contains each element of S_i independently with probability at least δ(1 − (i−1)/a). The value f(T_i) is a random quantity due to the randomness in R_t (the set S_i is fixed here). We use concentration of subadditive functions (Theorem 20) to argue that log(V_i / f(T_i)) is not too large in expectation. We have f(S_i) = v_i(S_i) + ν_i ≥ δ((1/3)V_i − ν_i) + ν_i ≥ (1/3)δV_i. By the expectation property of subadditive functions (Lemma 16), we have E[f(T_i)] ≥ δ(1 − (i−1)/a) · f(S_i) ≥ (1/3)δ²(1 − (i−1)/a)V_i.

Let us denote the last expression by µ_i.
Now, we apply the lower-tail inequality, Theorem 20, with q = 2, to argue that f(T_i) is unlikely to fall far below µ_i.

Our goal is to bound the expectation E[log(V_i / f(T_i))]. We distinguish two cases: When f(T_i) < (1/30)µ_i, we use the bound f(T_i) ≥ ν_i, which always holds. Otherwise, we use the bound f(T_i) ≥ (1/30)µ_i. The contribution of the first case is bounded by a constant, due to the exponentially decaying lower tail provided by Theorem 20. Hence E[log(V_i / f(T_i))] ≤ log(V_i/µ_i) + O(1) ≤ 2 log(1/δ) + log(a/(a − i + 1)) + O(1), which proves Lemma 6. □

Lemma 7. If (T_i : i ∈ A″) is the allocation produced by the iterated rounding, a = |A″|, and max_{j∈I′} v_i(j) ≤ ν_i for each i ∈ A″, then with constant probability, ∏_{i∈A″} (v_i(T_i) + ν_i)^{1/a} ≥ Ω(δ²) · ∏_{i∈A″} V_i^{1/a}.

Proof. From Lemma 6, summing over all agents, E[Σ_{i=1}^a log(V_i / f(T_i))] ≤ 2a log(1/δ) + Σ_{i=1}^a log(a/(a − i + 1)) + O(a). Here, we have Σ_{i=1}^a log((a − i + 1)/a) = log(a!/a^a) ≥ −a by a standard estimate for the factorial. So we can conclude that E[Σ_{i=1}^a log(V_i / f(T_i))] ≤ 2a log(1/δ) + O(a), and the claim follows by applying Markov's inequality to this sum (shifted to be nonnegative, using f(T_i) ≤ V_i + ν_i = O(V_i)).

This concludes the analysis of the iterated rounding phase, which allocates the set T_i to each agent i ∈ A″. For agents i ∈ A \ A″, we set T_i = ∅.

Rematching and finishing the analysis
The last step in the algorithm is to replace the initial matching τ : A → H with a new matching σ : A → H which is optimal on top of the allocation (T_i : i ∈ A). To analyze this step, we need two lemmas from previous work, whose proofs can be modified easily to yield the following. (We provide full proofs in the appendix.)

Lemma 8 (matching extension). Let τ : A → I be the matching maximizing ∏_{a∈A} v_a(τ(a)), H = τ(A), I′ = I \ H, and suppose that Σ_{i∈A} v_i(T*_i)/V_i ≤ c|A| for every allocation (T*_1, . . ., T*_n) of the items in I′. Then there is a matching π : A → H such that ∏_{i∈A} (v_i(π(i)) + V_i) ≥ (Ω(1/c))^{|A|} · ∏_{i∈A} v_i(S*_i), where (S*_1, . . ., S*_n) is an allocation of I optimizing Nash social welfare.

Lemma 9 (rematching). Let τ : A → I be the matching maximizing ∏_{a∈A} v_a(τ(a)), H = τ(A), I′ = I \ H, π : A → H another arbitrary matching, and (T_i : i ∈ A) an allocation of items in I′. Then there is a matching ρ : A → H such that ∏_{i∈A} (v_i(T_i) + v_i(ρ(i))) ≥ (Ω(1))^{|A|} · ∏_{i∈A} (v_i(T_i) + v_i(π(i))).

We apply Lemma 8 with the values V_i = Σ_{S⊆I′} v_i(S) x_{i,S}, where (x_{i,S}) is the fractional solution returned by the Relaxation Solver. Due to our assumptions, the condition of Lemma 8 is satisfied, and hence there is a matching π : A → H as described in Lemma 8. From Lemma 7, we can find with constant probability an allocation (T_i : i ∈ A) with ∏_{i∈A″} (v_i(T_i) + ν_i)^{1/|A″|} = Ω(δ²) · ∏_{i∈A″} V_i^{1/|A″|}. (Recall that δ = 1/(7d), where d > 1 is the parameter guaranteed by the Rounding Procedure.) Finally, we use the rematching Lemma 9 with the values v_i(T_i): there exists a matching ρ : A → H with the guarantee above relative to π. Recall that at the end, we find a matching σ : A → H maximizing ∏_{i∈A} v_i(T_i + σ(i)). Therefore, the NSW value of our solution is at least as much as the one provided by the matching ρ.

Nash social welfare with subadditive valuations
Here we explain how to use the general framework described in Section 2 to obtain a constant-factor approximation for subadditive valuations, accessible by demand queries.
Theorem 10. There is a constant-factor approximation algorithm for the Nash social welfare problem with subadditive valuations, using polynomial running time and a polynomial number of queries to a demand oracle for each agent's valuation.
Aside from our general reduction and the ability to solve the Eisenberg-Gale relaxation with demand queries, the main component that we need here is an implementation of a Rounding Procedure for subadditive valuations, as described in Theorem 1. To our knowledge, there is only one such procedure known, which is rather intricate and forms the basis of Feige's ingenious 1/2-approximation algorithm for maximizing social welfare with subadditive valuations [Fei09]. We use it here as a black box, which can be described as follows.
Theorem 11. For any ε > 0, there is a polynomial-time algorithm which, given a fractional solution (x_{i,S}) of (Configuration LP) for an instance with subadditive valuations, produces a random allocation (R_i : i ∈ A) such that for every agent, R_i ⊆ S_i for some S_i with x_{i,S_i} > 0, and E[v_i(R_i)] ≥ (1/2 − ε) Σ_{S⊆I} v_i(S) x_{i,S}.

For the proof, we refer the reader to Section 3.2.2 of [Fei09], Theorem 3.9 and the summary of its proof, which shows that every player receives expected value at least (1/2 − ε)V_i. Now we are ready to prove Theorem 10.
Proof. Considering Theorem 1, we want to show how to implement the Relaxation Solver and the Rounding Procedure for subadditive valuations.
The Relaxation Solver can be obtained by applying standard convex optimization techniques to the (Eisenberg-Gale Relaxation). As we discuss in more detail in Appendix A, we can compute the values and supergradients of the objective function using demand queries, and obtain an optimal solution satisfying the assumption of Lemma 15 (with f_i = v^+_i, α = 1), and hence Σ_{i∈A} v^+_i(x*_i)/v^+_i(x_i) ≤ 2|A| for every feasible solution x*. Another way to interpret this condition is that for V_i = v^+_i(x_i) and modified valuations defined as w_i(S) = (1/V_i) v_i(S), there is no feasible solution x* achieving value Σ_{i∈A} w^+_i(x*_i) > 2|A|. In particular, the social welfare optimum with the valuations (w_i : i ∈ A) is at most 2|A|. Hence, we satisfy the Relaxation Solver assumptions with c = 2.
Next, we implement the Rounding Procedure: Given a fractional solution (x_{i,S}), Theorem 11 gives a procedure which returns a random allocation (R_i : i ∈ A) with E[v_i(R_i)] ≥ (1/2 − ε) Σ_S v_i(S) x_{i,S}. This means that for the modified valuations w_i, E[Σ_{i∈A} w_i(R_i)] ≥ (1/2 − ε)|A|. Hence, we satisfy the Rounding Procedure assumptions with d = 2/(1 − 2ε). Finally, we apply Theorem 1 with c = 2 and d = 2/(1 − 2ε). We obtain a constant-factor approximation algorithm for the Nash social welfare problem with subadditive valuations accessible via demand queries. (The constant factor ends up being 20000(c + 1)d² = 375,000 for ε = 0.1.)
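The final constant can be checked by direct arithmetic (the bound 20000(c + 1)d² is taken from the text; the computation below is just a sanity check):

```python
eps = 0.1
c = 2                      # from the Relaxation Solver (Eisenberg-Gale)
d = 2 / (1 - 2 * eps)      # from Feige's rounding: d = 2/(1 - 2*eps) = 2.5
factor = 20000 * (c + 1) * d ** 2
# 20000 * 3 * 6.25 = 375000
```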

A The Eisenberg-Gale Relaxation
We consider the following relaxation of the Nash social welfare problem, similar to the relaxations in [GHV21, LV21a]. We remark that the application of (Eisenberg-Gale Relaxation) in the Nash social welfare algorithm excludes the items allocated in the initial matching; indeed, we ignore those items for the analysis in this section.
(Eisenberg-Gale Relaxation)

max Σ_{i∈A} log f_i(x_i)
s.t. Σ_{i∈A} x_{ij} ≤ 1 for every item j ∈ I,
     x ≥ 0,

where f_i is a suitable relaxation of the valuation function v_i for each i. In particular, we will use the concave extension,

v^+_i(x_i) = max { Σ_{S⊆I} α_S v_i(S) : Σ_{S⊆I} α_S ≤ 1, Σ_{S∋j} α_S ≤ x_{ij} for every j ∈ I, α ≥ 0 }.

By LP duality, v^+_i(x_i) is also the optimum of

(Dual LP)

min α + Σ_{j∈I} p_j x_{ij}
s.t. α + Σ_{j∈S} p_j ≥ v_i(S) for every S ⊆ I,
     α ≥ 0, p ≥ 0.

From here, we can see that v^+_i(x_i) is a minimum over a collection of linear functions, and hence a concave function.

A.1 Solving the Eisenberg-Gale Relaxation
Here we show how to solve the (Eisenberg-Gale Relaxation) using demand queries.
Lemma 12. Given demand oracles for v_1, …, v_n, an optimal solution x* for (Eisenberg-Gale Relaxation) can be found, within a polynomially small error, in polynomial time. Moreover, the support of x* has size polynomial in n.
Proof. Since v_i^+(x_i) is concave, log v_i^+(x_i) is a concave function as well (wherever v_i^+(x_i) > 0). If we implement the evaluation and supergradient oracles for log v_i^+(x_i), then we can use standard techniques (see, e.g., [DHHW23]) to maximize ∑_{i∈A} log v_i^+(x_i) over the convex polytope P = {x ≥ 0 : ∑_{i∈A} x_{ij} ≤ 1 for all j ∈ I}. The function v_i^+(x_i) can be evaluated with polynomially many demand queries; this is well-known [Fei08] and holds because the demand oracle happens to be the separation oracle for (Dual LP). Hence we can also evaluate log v_i^+(x_i). We focus here on the implementation of a supergradient oracle.
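To illustrate the role of the demand oracle as a separation oracle (on a hypothetical, explicitly enumerable instance; the valuation v, prices, and α below are illustrative and not from the paper), note that a point (α, p) is feasible for (Dual LP) iff α ≥ max_S (v(S) − p(S)), which is exactly one demand query:

```python
from itertools import combinations

def demand(v, items, prices):
    """Brute-force demand oracle: returns a set S maximizing v(S) - p(S)."""
    best_set, best_val = frozenset(), v(frozenset())
    for r in range(1, len(items) + 1):
        for S in combinations(items, r):
            S = frozenset(S)
            val = v(S) - sum(prices[j] for j in S)
            if val > best_val:
                best_set, best_val = S, val
    return best_set, best_val

def v(S):
    # A hypothetical monotone subadditive valuation (coverage-like).
    return min(len(S), 2)

items = [0, 1, 2]
prices = {0: 0.3, 1: 0.5, 2: 0.9}
alpha = 1.4

# (alpha, p) is feasible for (Dual LP) iff alpha + sum_{j in S} p_j >= v(S)
# for every S, i.e., iff alpha >= max_S (v(S) - p(S)) -- a single demand query:
S, val = demand(v, items, prices)
print(S, val, alpha >= val)
```

Here the demanded set is {0, 1} with surplus 1.2, so α = 1.4 certifies feasibility.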
A supergradient of log v_i^+ at a point z is any linear function L_i(y) such that L_i(z) = log v_i^+(z) and L_i(y) ≥ log v_i^+(y) everywhere. Given z, as a first step, we find a supergradient of v_i^+ itself: This can be done by solving the dual LP and finding α and (p_j : j ∈ I) such that v_i^+(z) = α + ∑_{j∈I} p_j z_j = α + p · z. Since v_i^+(y) for every y is the minimum over such linear functions, we also have v_i^+(y) ≤ α + p · y for all y. Hence α + p · y is the desired supergradient at z. Next, we compute the gradient of log(α + p · y) with respect to y: ∇_y log(α + p · y) = p / (α + p · y). We claim that the linear approximation of log(α + p · y) obtained by evaluating this gradient at z,

L_i(y) = log(α + p · z) + (p · (y − z)) / (α + p · z),

is a supergradient of log v_i^+ at z: we have L_i(z) = log(α + p · z) = log v_i^+(z), and for all y, log v_i^+(y) ≤ log(α + p · y) ≤ L_i(y), where the second inequality follows from the concavity of log(α + p · y).
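As a numerical sanity check of this construction (with illustrative values of α, p, and z, not taken from the paper), the following snippet verifies that L_i is tight at z and dominates log(α + p · y) at random points:

```python
import math, random

random.seed(0)
m = 4
alpha = 0.5
p = [0.3, 0.7, 0.2, 0.9]
z = [0.4, 0.1, 0.8, 0.5]

def lin(y):                      # the affine function alpha + p . y
    return alpha + sum(pj * yj for pj, yj in zip(p, y))

gz = lin(z)

def L(y):
    # Linear approximation of log(alpha + p . y) at z, as in the proof:
    # L(y) = log(alpha + p . z) + p . (y - z) / (alpha + p . z)
    return math.log(gz) + sum(pj * (yj - zj)
                              for pj, yj, zj in zip(p, y, z)) / gz

assert abs(L(z) - math.log(gz)) < 1e-12      # tight at z
for _ in range(1000):                        # dominates log(alpha + p . y)
    y = [random.random() for _ in range(m)]
    assert L(y) >= math.log(lin(y)) - 1e-12
print("supergradient check passed")
```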
Hence, (Eisenberg-Gale Relaxation) can be solved in polynomial time, within a polynomially small error, using standard convex optimization techniques [DHHW23]. In particular, we can find a point x such that ∑_{i∈A} log v_i^+(x_i^*) ≤ ∑_{i∈A} log v_i^+(x_i) + ǫ for every feasible solution x*. Finally, let us explain why the solution can be assumed to have polynomially bounded support. Given a fractional solution x_{ij} (which obviously has polynomially bounded support), for each agent i, using demand queries we also obtain a solution of (Dual LP) certifying the value of v_i^+(x_i). By complementary slackness, there is a matching primal solution of (Eisenberg-Gale Relaxation) which has nonzero variables corresponding to the tight constraints in (Dual LP) that define the dual solution. Since the dimension of (Dual LP) is polynomial, the number of such tight constraints is also polynomial. Hence we can assume that the number of nonzero variables in (Eisenberg-Gale Relaxation) is polynomial.

A.2 Properties of the optimal solution
Consider now the (Eisenberg-Gale Relaxation) in a general form, with objective functions f_i (which could be equal to v_i^+ or perhaps some other extension of v_i).
Suppose that x is an optimal solution of this relaxation. We will need the following property, which is also stated in [GHV20] in the context of general concave valuations (Lemma 4.1 in [GHV20]). Our proof here is much simpler. First, we consider the case of differentiable concave f_i, which makes the proof cleaner. (Recall however that v_i^+ is not differentiable everywhere.)

Lemma 13. For an optimal solution x of (Eisenberg-Gale Relaxation) with differentiable nonnegative monotone concave functions f_i, and any other feasible solution x*, we have ∑_{i∈A} f_i(x_i^*) / f_i(x_i) ≤ |A|.

Proof. Since f_i(x) is a concave function, we have f_i(x_i^*) ≤ f_i(x_i) + ∇f_i(x_i) · (x_i^* − x_i). From here, we get

∑_{i∈A} f_i(x_i^*) / f_i(x_i) ≤ |A| + ∑_{i∈A} ∇f_i(x_i) · (x_i^* − x_i) / f_i(x_i) ≤ |A|,

using the fact that x* is feasible and x is an optimum for the objective function ∑_{i∈A} log f_i(x_i): the directional derivative of this objective at x towards x*, which is exactly ∑_{i∈A} ∇f_i(x_i) · (x_i^* − x_i) / f_i(x_i), must be nonpositive.
To deal with a more general situation where f_i is not necessarily differentiable, and we don't find an exact optimum, we prove a robust version of this lemma.

Lemma 14. Let f_i : [0, 1]^I → R for each i ∈ A be nonnegative, monotone and concave. For ǫ > 0, let x be an ǫ⁴-approximate solution of (Eisenberg-Gale Relaxation), in the sense that for every other feasible solution x′,

∑_{i∈A} log f_i(x′_i) ≤ ∑_{i∈A} log f_i(x_i) + ǫ⁴|A|.   (1)

And suppose further that x_{ij} ≥ ǫ for all i, j. Then for every feasible solution x*, we have ∑_{i∈A} f_i(x_i^*) / f_i(x_i) ≤ (1 + 2ǫ)|A|. Note that we must necessarily have ǫ ≤ 1/|A|, because 1 ≥ ∑_{i∈A} x_{ij} ≥ ǫ|A|.
Proof. Let x satisfy the assumptions of the lemma. For any feasible x* and T ≥ 1, let x′ = (1 − 1/T)x + (1/T)x*; using the concavity of f_i, we can write

f_i(x′_i) ≥ (1 − 1/T) f_i(x_i) + (1/T) f_i(x_i^*) = f_i(x_i)(1 + r_i/T),   where r_i = f_i(x_i^*)/f_i(x_i) − 1.

From here,

log f_i(x′_i) ≥ log f_i(x_i) + log(1 + r_i/T).

Note that since x_{ij} ≥ ǫ, we have f_i(x_i) ≥ f_i(ǫ · 1) ≥ ǫ f_i(1). Also, f_i(0) ≥ 0, so by monotonicity and concavity, f_i(x_i^*) ≤ f_i(1) ≤ (1/ǫ) f_i(x_i). Hence the ratio r_i is at most 1/ǫ in absolute value, so |r_i/T| ≤ 1/(Tǫ) ≤ 1/2 for T ≥ 2/ǫ, and we can use the following elementary approximation: log(1 + z) ≥ z − z² for |z| ≤ 1/2. Plugging into the inequality above, we obtain

log f_i(x′_i) ≥ log f_i(x_i) + r_i/T − 1/(T²ǫ²).

Applying the assumption (1) of the lemma to the feasible solution x′ = (1 − 1/T)x + (1/T)x* and summing over i ∈ A, we get ǫ⁴|A| ≥ (1/T) ∑_{i∈A} r_i − |A|/(T²ǫ²), hence

∑_{i∈A} r_i ≤ Tǫ⁴|A| + |A|/(Tǫ²).

We set T to equate the last two terms: T = 1/ǫ³, which gives ∑_{i∈A} r_i ≤ 2ǫ|A|, i.e., ∑_{i∈A} f_i(x_i^*)/f_i(x_i) ≤ (1 + 2ǫ)|A|, the statement of the lemma.
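The elementary approximation log(1 + z) ≥ z − z² for |z| ≤ 1/2, used in the proof above, can be checked numerically over a fine grid:

```python
import math

# Check the elementary bound log(1+z) >= z - z^2 for |z| <= 1/2,
# on a grid of 1001 points in [-1/2, 1/2].
zs = [i / 1000 for i in range(-500, 501)]
assert all(math.log(1 + z) >= z - z * z - 1e-12 for z in zs)
print("log(1+z) >= z - z^2 verified on [-1/2, 1/2]")
```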
Corollary 15. Given a value oracle and a supergradient oracle for each f_i, for any constant α > 0, we can find a solution x of (Eisenberg-Gale Relaxation) in polynomial time such that for any feasible solution x*, ∑_{i∈A} f_i(x_i^*) / f_i(x_i) ≤ (1 + α)|A|.

Proof. For ǫ > 0 (to be chosen at the end), we run a convex optimization algorithm on (Eisenberg-Gale Relaxation) with the additional constraint that x_{ij} ≥ ǫ, to obtain a solution x such that for any feasible x′ satisfying the same constraint, we have ∑_{i∈A} log f_i(x′_i) ≤ ∑_{i∈A} log f_i(x_i) + ǫ⁴|A|. By Lemma 14, this solution also satisfies ∑_{i∈A} f_i(x′_i) / f_i(x_i) ≤ (1 + 2ǫ)|A| for every such x′. Finally, note that every feasible solution x* of (Eisenberg-Gale Relaxation) can be modified to obtain a feasible solution x′ = (1 − ǫn)x* + ǫn · (1/n)1 which satisfies the constraint x′_{ij} ≥ ǫ, and we have f_i(x′_i) ≥ (1 − ǫn) f_i(x_i^*) for every i ∈ A. Therefore, our solution also satisfies ∑_{i∈A} f_i(x_i^*) / f_i(x_i) ≤ (1 + 2ǫ)|A| / (1 − ǫn). For ǫ = α/(2 + (1 + α)n), we obtain the desired statement.
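As a sanity check of the final choice of ǫ (assuming, as in our reading of the statement, that the bound from Lemma 14 has the form (1 + 2ǫ)|A| and is divided by 1 − ǫn), one can verify numerically that ǫ = α/(2 + (1 + α)n) yields the factor 1 + α:

```python
# Check that eps = alpha / (2 + (1 + alpha) * n) guarantees
# (1 + 2*eps) / (1 - eps*n) <= 1 + alpha, across a range of parameters.
for n in [1, 2, 5, 10, 100]:
    for alpha in [0.01, 0.1, 1.0, 10.0]:
        eps = alpha / (2 + (1 + alpha) * n)
        assert eps * n < 1                      # denominator stays positive
        assert (1 + 2 * eps) / (1 - eps * n) <= 1 + alpha + 1e-12
print("epsilon choice verified")
```

In fact the choice makes the two sides equal, since ǫ(2 + (1 + α)n) = α rearranges to (1 + 2ǫ) = (1 + α)(1 − ǫn).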

B Rematching lemmas
Here we prove the rematching lemmas from Section 2.2. These are essentially identical to lemmas in previous work on Nash social welfare, only reformulated in a way convenient for our presentation. We give self-contained proofs here for completeness.
Proof of Lemma 8. Suppose that S_i^* = H_i^* ∪ T_i^*, where H_i^* ⊆ H and T_i^* ⊆ I′. We define a matching π as follows: For each nonempty H_i^*, let π(i) be the item of maximum value (as a singleton) in H_i^*. For H_i^* = ∅, let π(i) be an arbitrary item in H not selected as π(i′) for some other agent i′. (Since |H| = |A|, we can always find such items.) Recall that A′ are the agents who get positive value from I′; in particular, we can assume T_i^* = ∅ for i ∉ A′. Then we have, using monotonicity and subadditivity,

v_i(S_i^*) ≤ v_i(H_i^*) + v_i(T_i^*) ≤ |H_i^*| · v_i(π(i)) + v_i(T_i^*).

Here we use the AM-GM inequality, where the last inequality is by assumption and the fact that ∑_{i∈A} |H_i^*| ≤ |H| = |A|.

Proof of Lemma 9. Let Ã = {i ∈ A : W_i < max{v_i(π(i)), ν_i}}. We define a directed bipartite graph B between Ã and H, with two types of edges: an edge (i, π(i)) for each i ∈ Ã, and an edge (τ(i), i) for each i ∈ Ã with τ(i) ∈ H. We also define A_ν = {i ∈ Ã : ν_i ≥ v_i(π(i))}, A_τ = {i ∈ Ã : there is a directed path from i to A_ν in B}, and A_π = Ã ∖ A_τ. We define a matching ρ as follows:
• For i ∈ A_τ, ρ(i) := τ(i),
• For i ∈ A_π, ρ(i) := π(i).
First, observe that this is indeed a matching: If we had τ(i) = π(i′) = j for some i ∈ A_τ, i′ ∈ A_π, then we would have edges (i′, j) and (j, i) in the graph, and since there is a directed path from i to A_ν (as i ∈ A_τ), there would also be a directed path from i′ to A_ν, contradicting the fact that i′ ∈ A_π. Hence, ρ is a matching. Also, it is easy to see that ρ(i) is well defined for every i ∈ Ã, since Ã = A_τ ∪ A_π. Next, we analyze the value guarantee for ρ: We claim that

∏_{i∈A_τ} v_i(τ(i)) ≥ ∏_{i∈A_ν} ν_i · ∏_{i∈A_τ∖A_ν} v_i(π(i)).

Observe that the vertices of A_τ can be covered disjointly by directed paths that terminate in A_ν (from each vertex of A_τ, there is such a path, and it is also unique, because the in-degrees and out-degrees in the graph are at most 1). Let P denote the A_τ-vertices on some directed path like this, and let s be its last vertex (in A_ν). If we had ∏_{i∈P} v_i(τ(i)) < ν_s ∏_{i∈P∖{s}} v_i(π(i)), then we could modify the matching τ by swapping its edges on P for the π-edges from P ∖ {s}, and finally an element of value ν_s for s (since this item is outside of H and hence available). This would increase the value of the matching τ, which was chosen to be optimal, so this cannot happen. It follows that ∏_{i∈P} v_i(τ(i)) ≥ ν_s ∏_{i∈P∖{s}} v_i(π(i)) for every maximal directed path terminating in A_ν, and since these paths cover A_τ disjointly, by combining all these inequalities we obtain the claim. Substituting this into the inequality above completes the proof.
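The AM-GM inequality invoked in the proof of Lemma 8 states that the geometric mean of nonnegative numbers is at most their arithmetic mean; a quick randomized check of this generic fact:

```python
import math, random

random.seed(1)
# AM-GM: (prod a_i)^(1/n) <= (1/n) * sum a_i for nonnegative a_i.
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.uniform(0, 5) for _ in range(n)]
    gm = math.prod(a) ** (1 / n)       # geometric mean
    am = sum(a) / n                    # arithmetic mean
    assert gm <= am + 1e-9
print("AM-GM verified on random instances")
```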

C Concentration of subadditive functions
Let us start with a simple lower bound on the expected value of a random set with independently sampled elements.
Lemma 16. If f : 2^M → R_+ is a monotone subadditive function and R is a random subset of S where each element appears independently with probability 1/k, for an integer k ≥ 1, then E[f(R)] ≥ f(S)/k.

Proof. Consider a random coloring of S, where every element j ∈ S receives independently a random color c(j) ∈ [k]. Defining S_ℓ = {j ∈ S : c(j) = ℓ}, we see that each set S_ℓ has the same distribution as the set R in the lemma. Therefore, f(S) ≤ ∑_{ℓ=1}^{k} f(S_ℓ) by subadditivity, and taking expectations, f(S) ≤ k · E[f(R)].
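The bound of Lemma 16 can be verified exactly on a small example; the function f(T) = min(|T|, 3) below is a hypothetical monotone subadditive valuation, with k = 2 so each element survives with probability 1/2:

```python
from itertools import combinations

def f(T):
    # A monotone subadditive function: min(|T|, 3).
    return min(len(T), 3)

S = range(6)
n, k = 6, 2
# E[f(R)] computed exactly: each of the 2^6 subsets has probability (1/2)^6.
expectation = sum(f(T) for r in range(n + 1)
                  for T in combinations(S, r)) / 2 ** n
print(expectation, f(set(S)) / k)
assert expectation >= f(set(S)) / k      # E[f(R)] >= f(S)/k
```

Here E[f(R)] = 2.53125, comfortably above f(S)/k = 1.5.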
This property is similar to the properties of submodular or self-bounding functions, which satisfy very convenient concentration bounds (similar to additive functions).Unfortunately, the same bounds are not true for subadditive functions; however, some concentration results can be recovered with a loss of certain constant factors.
Here we state a powerful concentration result presented by Schechtman [Sch03], based on the "q-point control" concentration inequality by Talagrand [Tal89, Tal95]. We state it here in a simplified form suitable for our purposes.
Theorem 17. Let f : 2^M → R_+ be a monotone subadditive function, where f({i}) ≤ 1 for every i ∈ M. Then for any real a > 0 and integers k, q ≥ 1, and a random set R from a product distribution,

Pr[f(R) ≥ (q + 1)a + k] · (Pr[f(R) ≤ a])^q ≤ q^{−k}.

This statement can be obtained from Corollary 12 in [Sch03] by extending the definition of f to Ω* = ⋃_{I⊆M} 2^I simply by setting f_I(S) = f(S) for all S ⊆ I. Also, we identify 2^I with {0, 1}^I in a natural way. Assuming f({i}) ≤ 1 means that 0 ≤ f(S ∪ {i}) − f(S) ≤ 1 for any set S, by monotonicity and subadditivity. Therefore, f is 1-Lipschitz with respect to the Hamming distance, as required in [Sch03]. The statement holds for any product distribution, i.e., a random set R where elements appear independently.
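To see the shape of the inequality on a concrete instance (the additive function f(T) = |T| is monotone subadditive with unit singletons; the parameters n = 12, a = 2, q = 2, k = 2 are chosen for illustration), both sides can be computed exactly from binomial coefficients:

```python
from math import comb

# Exact check of the inequality of Theorem 17 for f(T) = |T|,
# with |R| ~ Binomial(n, 1/2) and parameters a = 2, q = 2, k = 2.
n, a, q, k = 12, 2, 2, 2

def pr_at_least(t):   # Pr[|R| >= t]
    return sum(comb(n, j) for j in range(t, n + 1)) / 2 ** n

def pr_at_most(t):    # Pr[|R| <= t]
    return sum(comb(n, j) for j in range(0, t + 1)) / 2 ** n

lhs = pr_at_least((q + 1) * a + k) * pr_at_most(a) ** q
rhs = q ** (-k)
print(lhs, rhs)
assert lhs <= rhs
```

Here the left-hand side is roughly 7·10⁻⁵, well below q^{−k} = 1/4.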
Note that Theorem 17 refers to tails on both sides and hence is more convenient to use with the median of f than with the expectation. The next lemma shows that this is not a big issue, since the theorem also implies that the median and the expectation must be within a constant factor of each other. Definition 18. We define the median of a random variable Z as any number med(Z) = m such that Pr[Z ≤ m] ≥ 1/2 and Pr[Z ≥ m] ≥ 1/2.
For any nonnegative variable, obviously E[Z] ≥ med(Z) · Pr[Z ≥ med(Z)] ≥ (1/2) med(Z). For subadditive functions of independent random variables, we also get a bound in the opposite direction.
Hence, we obtain the following as a corollary of Theorem 17 and Lemma 19. For convenience, we also introduce a parameter ν > 0 as an upper bound on singleton values.