Participatory Objective Design via Preference Elicitation



ABSTRACT
In standard resource allocation problems, the designer sets the objective function, which captures the central allocation goal, in a top-down manner. The agents primarily participate in the allocation mechanism by reporting their preferences over the items; they cannot influence the objective once the designer sets it. Implicitly, this approach presumes that standard ways of eliciting the agents' preferences adequately represent their true preferences, an assumption which does not hold if agents have preferences not just over the items they receive but also over the objective being optimized. For instance, agents may also have social preferences, such as inequality-aversion, altruism, or similar other-regarding behavior. We cannot express such preferences through the standard cardinal utilities or ordinal rankings over the items that the designer would typically elicit from the agents.
This work examines how we can use this bottom-up preference elicitation stage to enable participants to express preferences over the objectives. We present a versatile framework that elicits agents' preferences over a possible set of objectives and then minimally alters the underlying optimization problem to solve for a new objective that combines both the standard benchmark objective and the agents' preferences for other objectives. We show how to evaluate this new participatory approach against the standard approach, using our notions of loss and gain in social welfare as well as individual tradeoffs.
We illustrate the potency of this framework using a well-studied fair division problem where the designer aims to allocate m divisible items to n agents. In the standard setting, the designer optimizes for utilitarian social welfare, i.e., the sum of the agents' cardinal utilities. We assume that some agents are also inequality-averse and may, therefore, have preferences for objectives that minimize inequality. Using the popular Fehr and Schmidt [31] model, we demonstrate how to map this fair division question to our framework, where the participatory approach optimizes both the standard utilitarian social welfare objective and the agents' heterogeneous preferences over the level of inequality. We examine this problem theoretically to show that there can be large gains in social welfare if the designer uses this participatory approach. Further, we show that the loss in social welfare is linear in the level of inequality aversion and independent of the number of agents. We present a tighter bound in both cases under further natural assumptions on the preferences. We also examine the worst-case cost an individual agent might incur.
Our results indicate that the loss in social welfare (measured by the standard objective) and gain in social welfare (measured by the participatory one) can favor the participatory approach in several natural settings. Throughout the work, we highlight various promising avenues for examining this participatory approach in the specific case study tackled in this paper and a broader range of resource allocation problems.

INTRODUCTION
In the standard use of algorithms and mechanism design for resource allocation, a central planner determines various aspects of the mechanism, including the central objective function we optimize. The participating agents, on the other hand, primarily engage by contributing their preferences over the items. Underlying this setup is an assumption that standard ways of eliciting preferences (which often entail agents reporting their cardinal utilities over the items or their ranking of the items) can adequately represent the agents' true preferences. Notably, agents cannot influence the overall objective once the designer sets it. Naturally, we may assume that enabling agents to participatorily design the central objective is costly, difficult to implement, and challenging to study theoretically.
In this work, we identify a possible participatory design framework that balances these competing needs. We present a versatile framework that minimally alters the underlying optimization problem in resource allocation to incorporate the agents' preferences over the central objective. This framework leverages the natural bottom-up preference elicitation stage to capture agents' preferences not only over the items but also over the set of possible objectives. We further define notions of loss and gain in social welfare, as well as individual tradeoffs incurred by the worst-off agent, to evaluate how this participatory approach stacks up against the standard approach.
We then illustrate and stress-test this participatory approach using a well-studied fair division problem where the designer wants to allocate m divisible items to n agents. Under the standard approach, these multi-objective agents would only report their cardinal utilities u_i over the items, and the designer optimizes for utilitarian social welfare, Σ_i u_i(x). For our case study, we assume that some of these agents are inequality-averse, as modeled by Fehr and Schmidt [31]. This popular behavioral economics model is one of the canonical social preference models, which generally study other-regarding behavior, including altruism, certain fairness concerns, and inequality aversion [5, 19, 30].
We then study the loss to standard social welfare from the designer's perspective and the gain to social welfare from the agents' perspective when we move from the standard to the participatory approach. We study the gain and loss in various general settings, finding that the relative loss can, at most, grow linearly in the level of inequality aversion and is independent of the number of participants. We also find that the ratio of gain to loss can be unbounded in some natural settings, highlighting potentially significant gains. We provide tighter bounds under further natural assumptions on the (dis)similarity of the agents' preferences. We also examine the worst-case tradeoff any individual may suffer and find that individual tradeoffs can be linear in the number of participants. Finally, we address questions of strategyproofness by discussing possible designs to elicit the agents' true preferences over the objectives.
Our analyses suggest that the participatory approach, which elicits agents' preferences for inequality aversion, comes only at a small cost to efficiency, measured by standard notions of utilitarian welfare. Moreover, it can yield significant gains, measured by the participatorily designed objective. This suggests that empowering algorithm participants to help shape the objective function can drastically improve community-level outcomes. We contextualize our contribution within the broader research literature in Appendix B and discuss possible avenues for research exploration in Section 6.

PROBLEM FORMULATION
We begin by introducing our broader framework, specifically in the context of resource allocation. We then illustrate this framework's potency using a well-studied fair division problem. While we present our key technical contributions via this case study, the framework applies more broadly. We discuss generalizations in Section 6, where we highlight additional research avenues, and in Appendix A, where we demonstrate how this framework captures other existing studies of resource allocation.
Consider a resource allocation problem where a designer wants to allocate m items to n agents. Let x = [x_ij] be the allocation matrix in the set of feasible allocations X ⊆ R_+^{n×m}. For the case of divisible items, x_ij is the proportion of item j allocated to agent i. We begin with a standard formulation, where agents have linear utilities u_i(x) = Σ_j u_ij x_ij.
Here, u_i is the utility of agent i, and U = [u_ij] denotes the utility coefficients that parameterize the utility function. We use u(x) to denote the utility profile over the n agents: u(x) = (u_1(x), u_2(x), ..., u_n(x)). The utility function above can be rewritten more concisely as u_i(x) = ⟨U_i, x_i⟩, where U_i and x_i denote the i-th rows of U and x, respectively. In standard resource allocation problems, the designer determines various aspects of the allocation mechanism, such as the objective function, resource availability, or fairness constraints, in a top-down manner. Each agent's participation is primarily limited to reporting their utility coefficients U_i. The standard approach, therefore, implicitly assumes that such utilities adequately reflect the agents' preferences for allocative outcomes.
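As a concrete illustration, the linear-utility computation above can be sketched in a few lines of Python (the instance below is hypothetical, and `utility_profile` is our own helper name):

```python
import numpy as np

# Hypothetical 2-agent, 3-item instance: U[i][j] is agent i's utility
# coefficient for item j, and x[i][j] is the fraction of item j given
# to agent i (each column of x sums to 1 for divisible items).
U = np.array([[3.0, 1.0, 2.0],
              [1.0, 2.0, 2.0]])
x = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])

def utility_profile(U, x):
    """u_i(x) = <U_i, x_i>: each agent's linear utility for her bundle."""
    return (U * x).sum(axis=1)

print(utility_profile(U, x))  # [4. 3.]: agent 1 gets 3+0+1, agent 2 gets 0+2+1
```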
Agents may, however, have preferences beyond their utility over the items. In our case study, we consider the setting where agents have social preferences, i.e., they care not only about their utility for the items they receive but also about others' utility for their respective allocations. Examples include preferences over the level of inequality imposed by the allocation, altruism towards other agents, and various fairness concerns. Put more simply, the agents' preferred objectives can be distinct from one another and from the objective set by the designer, and might be more complex than their preference for their own allocation.
Definition 2.1 (The set of objectives). We assume there is a set of possible objectives H, where each objective h ∈ H: X × [n] → R maps an allocation x into a real value for the respective agent. The objectives may depend on the utility coefficients U, which we drop from the notation for brevity.
The set of possible objectives can be general. It may include, for instance, each agent's utility for their own allocation (h(x; i) = ⟨U_i, x_i⟩), their utility for other agents' allocations, or the minimum utility over all the agents. In standard resource allocation problems, the designer selects one of these objectives from this set H. We refer to this objective as the benchmark objective and denote it by h*. Intuitively, we can think of h* as the objective defining u_i.
By contrast, we assume that each agent may have a different preferred objective, aggregating the objectives in H into a single one. We will assume that the aggregation function belongs to a specific function class parameterized by θ. For example, the aggregation function may be a weighted linear combination of the objectives in H, with weights determined by the preference θ ∈ R^{|H|}.

Definition 2.2 (Multi-objective agents).
A multi-objective agent has a preference θ_i within the space of valid preferences Θ that determines how to reconcile conflicting objectives in H. More precisely, there exists a function g: R^{|H|} × Θ → R that aggregates the values of all the objectives in H into a scalar value based on the agent's preference θ_i. We use the shorthand v_i(x) to denote agent i's aggregated value: v_i(x) = g({h(x; i)}_{h∈H}; θ_i). We also denote the value profile (v_1(x), v_2(x), ..., v_n(x)) by v(x).
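A minimal sketch of one possible aggregation function g, using the weighted linear combination mentioned above (the objective set and the weights below are hypothetical):

```python
import numpy as np

def aggregate(objective_values, theta):
    """Linear aggregation g: v_i = sum_k theta_k * h_k, one weight per
    objective in H. Other parametric families for g are possible; the
    weighted sum is just the example mentioned in the text."""
    return float(np.dot(objective_values, theta))

# Hypothetical H = {own utility, minimum utility over all agents},
# both already evaluated at some fixed allocation x.
h_values = np.array([4.0, 3.0])   # (h_own, h_min)
theta = np.array([1.0, 0.5])      # this agent also values the minimum
print(aggregate(h_values, theta)) # 5.5 = 4.0 + 0.5 * 3.0
```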
Standard resource allocation problems unavoidably elicit the utility coefficients U. Although our multi-objective agent formalization captures far greater complexity in the agents' preferences, it only requires the additional elicitation of θ to determine how an agent aggregates the objectives.
After associating each agent with a single objective (whether this is a benchmark objective h* chosen by the designer or the agent's preferred objective v_i), the next step is for the designer to define an optimization problem. We assume that the designer has a social welfare function W: R^n → R that, along with the allocation constraints, defines the optimization problem. To concretely illustrate our framework, for the rest of this paper, we will use the utilitarian social welfare, which sums all the individual values with equal weight.
Using the above notions, we can now define our participatory approach, which enables agents to express preferences not only over the items but also over the objectives.
Definition 2.3 (Participatory objective design). In the participatory approach to resource allocation, a designer first associates each multi-objective individual i with a single objective function v_i by eliciting their preference θ_i over the set of possible objectives H. The designer then maximizes W(v(x)) over the feasible allocations x ∈ X.
This participatory approach is in contrast to the standard approach, where the designer would optimize Σ_{i=1}^{n} h*(x; i).

Loss and Gain from Participatory Objective Design
We now introduce notions of loss, gain, and individual tradeoffs incurred by moving from the standard approach, where the designer selects an objective in a top-down manner, to the participatory approach, where agents influence the overall objective in a bottom-up fashion. Specifically, we study the loss in social welfare, as measured by the benchmark objective, and the gain in social welfare, as measured by the preferences elicited in the participatory approach. We also look at individual tradeoffs, which consider the maximum cost to utility incurred by a single agent.
Central to these notions is the comparison of the social welfare-maximizing allocations under the standard and participatory approaches. Formally, let u*(x) denote the profile of benchmark objectives (h*(x; 1), ..., h*(x; n)), and v(x) denote the profile of aggregated objectives of individuals (v_1(x), ..., v_n(x)). We define these two optimal allocations as

x* = argmax_{x ∈ X} W(u*(x))   and   x^θ = argmax_{x ∈ X} W(v(x)).

We first consider the notion of loss, which pessimistically measures the potential reduction in social welfare, as measured by the benchmark objective, if we move to the participatory approach.
Definition 2.4 (Loss in social welfare). Suppose h* ∈ H is the benchmark objective and agents may have multi-objective preferences. We define the loss in social welfare measured by the benchmark objective as

L = W(u*(x*)) − W(u*(x^θ)),

where W(u*(x*)) is the optimal social welfare, as measured by the benchmark objective, and W(u*(x^θ)) measures the same notion of social welfare under the participatory approach.
We similarly define the gain in social welfare, which measures the potential improvement in social welfare, as measured by the participatory approach.
Definition 2.5 (Gain in social welfare). Suppose h* ∈ H is the benchmark objective and agents may have multi-objective preferences. We define the gain in social welfare measured using the elicited preferences as

G = W(v(x^θ)) − W(v(x*)).

Overall, a small loss and a large gain indicate that it is favorable to move to the participatory approach: doing so does not incur much loss in social welfare, even by the benchmark objective, and it can result in an increase in social welfare, as measured by the participatory approach.
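The loss and gain notions can be checked numerically on a toy instance. The sketch below is an assumption-laden illustration: the aggregated value v_i (own utility minus a theta-weighted inequality penalty) and the grid search are stand-ins of our own, not the paper's actual objectives or solution method.

```python
import itertools
import numpy as np

# Hypothetical 2-agent, 2-item instance with utilitarian W (a plain sum).
U = np.array([[4.0, 1.0],
              [3.0, 2.0]])
theta = np.array([0.4, 0.4])  # assumed inequality-penalty weights

def u_profile(x):
    return (U * x).sum(axis=1)

def v_profile(x):
    # Hypothetical aggregated value: own utility minus a theta-weighted
    # penalty on the utility gap between the two agents.
    u = u_profile(x)
    return u - theta * abs(u[0] - u[1])

def argmax_alloc(score, steps=51):
    # Coarse grid search over splits of each item to agent 1; a numeric
    # stand-in for solving the underlying optimization exactly.
    grid = np.linspace(0.0, 1.0, steps)
    best, best_x = -np.inf, None
    for s in itertools.product(grid, repeat=2):
        x = np.array([[s[0], s[1]], [1 - s[0], 1 - s[1]]])
        val = score(x)
        if val > best:
            best, best_x = val, x
    return best_x

x_star = argmax_alloc(lambda x: u_profile(x).sum())   # standard optimum
x_theta = argmax_alloc(lambda x: v_profile(x).sum())  # participatory optimum

loss = u_profile(x_star).sum() - u_profile(x_theta).sum()   # Definition 2.4
gain = v_profile(x_theta).sum() - v_profile(x_star).sum()   # Definition 2.5
print(loss >= 0, gain >= 0)  # True True: both follow from optimality
```

Both quantities are nonnegative by construction, since x* and x^θ each maximize the welfare notion they are compared under.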
Our notions of loss and gain are similar to the notion of price of fairness in resource allocation. The price of fairness typically compares optimal allocations abiding by certain fairness constraints with optimal allocations determined without such constraints. In particular, the loss in this setting is closely related to existing notions of the price of fairness (PoF) when h*(x; i) = u_i(x). For instance, Bertsimas et al. [6]'s PoF equals L / W(u(x*)), and Caragiannis et al. [16]'s PoF equals 1 + L / W(u(x^θ)).
So far, we have looked at the loss and gain in overall social welfare. We may additionally consider the cost incurred by a single individual. For a benchmark objective h* ∈ H, individual tradeoffs capture the maximum relative decrease in h* that any single individual would incur when we move to a participatory approach.
Definition 2.6 (Individual tradeoffs). For a benchmark objective h* ∈ H, the individual tradeoff (IT) is

IT = max_{i ∈ [n]} ( h*(x*; i) − h*(x^θ; i) ) / h*(x*; i).

A small IT suggests that no single agent experiences a significant drop in welfare, as measured by the benchmark objective, when we move to a participatory approach; a large IT indicates that there exists an individual who may experience such a cost. Individual tradeoffs might diverge significantly from what we observe in aggregate, giving us a distinct notion of loss to consider.
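A small sketch of the individual-tradeoff computation; normalizing by each agent's benchmark value at x* is our assumption about how the "relative decrease" is measured:

```python
import numpy as np

def individual_tradeoff(h_star_at_xstar, h_star_at_xtheta):
    """Worst-case relative drop in the benchmark objective across agents
    when moving from the standard optimum to the participatory one.
    The normalization by the value at x* is an assumption of this sketch."""
    a = np.asarray(h_star_at_xstar, dtype=float)
    b = np.asarray(h_star_at_xtheta, dtype=float)
    return float(np.max((a - b) / a))

# Hypothetical benchmark-objective profiles at the two optima: agent 1
# drops from 4.0 to 3.0 (a 25% decrease), agent 2 improves.
print(individual_tradeoff([4.0, 2.0], [3.0, 2.5]))  # 0.25
```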

Inequality-Averse Preferences as a Case Study
The above framework provides a backbone for studying participatory objective design and the tradeoffs incurred when moving from a standard, single-objective approach to a participatory, multi-objective approach. We now illustrate the potency of this framework by turning to a well-studied resource allocation problem with inequality-averse agents. Specifically, we consider a fair division problem where agents have utilities over the items they receive, as well as preferences over the overall inequality of a given allocation. This study of inequality-averse agents is a special case of other-regarding behavior and is well-studied empirically and theoretically in behavioral economics [10, 19, 31]. The model of inequality aversion we study draws on work by Fehr and Schmidt [31].
Let u_i(x) be agent i's utility for the items, which we take as the benchmark objective h*. Suppose the agents are additionally inequality-averse. Concretely, we let H be the set containing the u_i's, the inequality imposed by being "more wealthy" than others (advantageous inequality), and the inequality incurred by being "less wealthy" than others (disadvantageous inequality).
We follow the formulation of Fehr and Schmidt [31], under which agent i's aggregated value is

v_i(x) = u_i(x) − α_i d_i^−(x) − β_i d_i^+(x),

where d_i^−(x) = (1/(n−1)) Σ_{j≠i} max(u_j(x) − u_i(x), 0) is the disadvantageous inequality and d_i^+(x) = (1/(n−1)) Σ_{j≠i} max(u_i(x) − u_j(x), 0) is the advantageous inequality. We denote the agents' profiles of α_i and β_i by α and β, respectively. As is common in the literature, we assume that α_i ≥ β_i > 0, and that α_i and β_i are of the same order [48]. We examine the changes in allocation when agents exhibit slight inequality aversion, i.e., the parameters α_i and β_i are less than 1/2 and are typically small, to capture settings where the agents' true preferred objectives do not deviate significantly from the benchmark.
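The Fehr-Schmidt aggregated value can be computed directly. This sketch follows the v_i = u_i − α_i d_i^− − β_i d_i^+ form with the inequality terms averaged over the other n − 1 agents; the numeric profile is hypothetical:

```python
import numpy as np

def fehr_schmidt_values(u, alpha, beta):
    """v_i = u_i - alpha_i * d-_i - beta_i * d+_i, where d-_i and d+_i
    are the disadvantageous and advantageous inequality terms, averaged
    over the other n-1 agents (Fehr-Schmidt form)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    diff = u[None, :] - u[:, None]                        # diff[i][j] = u_j - u_i
    d_minus = np.maximum(diff, 0).sum(axis=1) / (n - 1)   # disadvantageous
    d_plus = np.maximum(-diff, 0).sum(axis=1) / (n - 1)   # advantageous
    return u - np.asarray(alpha) * d_minus - np.asarray(beta) * d_plus

# Hypothetical profile: agent 1 is better off; alpha_i >= beta_i > 0.
print(fehr_schmidt_values([5.0, 2.0], alpha=[0.4, 0.4], beta=[0.2, 0.2]))
# [4.4 0.8]: agent 1 pays 0.2*3 for her advantage, agent 2 pays 0.4*3
```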
Historically, social welfare calculations in resource allocation have relied on an inequality-agnostic objective for each agent, i.e., h*(x; i) = u_i(x) for all i ∈ [n]. Different mechanisms have been developed, based on the particular structure of the social welfare function W, to elicit agents' preferences for items and efficiently allocate resources. Nevertheless, models that do not account for other-regarding behaviors, like inequality aversion, may fail to capture agents' true preferences. Our framework presents one way to capture these preferences so that they can influence the overall allocations. It also presents a way to evaluate how this approach compares to the standard approach using the benchmark objective.
In the rest of the paper, we consider the utilitarian social welfare and our measures of loss, gain, and individual tradeoffs when the designer's benchmark objective W(u*(x)) = Σ_i u_i(x) is replaced by the participatory variant, which takes inequality aversion into account.

LOSS IN INEQUALITY-AGNOSTIC SOCIAL WELFARE
We first study the loss in social welfare, as measured by the benchmark objective, when we move from the standard approach to the participatory approach. For this analysis, we first consider the case of two agents and show that the worst-case loss scales linearly with the level of inequality aversion. We further show that we can obtain tighter bounds by imposing structure on the agents' preferences.
We then consider the case of n agents, where we show that the worst-case loss is independent of the number of agents, thereby inheriting the above linear relationship with the level of inequality aversion. We then consider the case of clustered agents with similar preferences and of independent agents whose utility coefficients for the items are drawn independently from the same distribution. For the case of clustered agents, we provide a possibly tighter bound when the clusters are sufficiently distinguishable. For the case of independent agents, we prove an improved upper bound that grows quadratically with the level of inequality aversion.
We provide a summary of our results in Table 1.

Two-Agent Setting
For the case of two agents, we plug d_i^+(x) and d_i^−(x) into agent i's aggregated value and obtain

v_i(x) = u_i(x) − α_i max(u_{−i}(x) − u_i(x), 0) − β_i max(u_i(x) − u_{−i}(x), 0).

Here, −i refers to the agent other than agent i. The utilitarian social welfare in this setting is W(v(x)) = v_1(x) + v_2(x). To begin, we characterize x^{α,β} = argmax_x W(v(x)), the maximizer of social welfare under the participatory approach. In essence, x^{α,β} assigns all of an item j to an agent i if her utility for the item is significantly higher than that of the other agent. What constitutes a "significant" difference is determined by a function of α, β, and the total demand on the item, given by u_ij + u_{−i,j}.
Lemma 3.1 (Two-agent solution characterization). The social welfare-maximizing allocation for the case of two inequality-averse agents follows the characterization of Eq. (13). (For ease of notation, we denote this solution by x^{α,β}.)

[Table 1 (summary of results): For ε-dissimilar agents (ε ≤ 1), we provide both a general worst-case bound and a tighter bound assuming utility coefficients are distributed uniformly. For independent agents, we introduce γ_j as a measure of how well the underlying distribution of item j's utility coefficients is spread; the bound shows a quadratic improvement in terms of c(α, β). In the multi-agent setting, roughly speaking, c_n(α, β) is O(max_i α_i + β_i); for a more precise definition of c_n(α, β), refer to Theorem 3.4. All the provided bounds in the multi-agent setting assume no agent can get more than a τ proportion of an item.]

In general, x^{α,β} depends on the whole matrix of utility coefficients U. However, Eq. (13) significantly simplifies the problem by characterizing x_j^{α,β} as a function of the utility coefficients for item j only, i.e., u_1j and u_2j, and a common bounded variable Δ_1. This characterization enables us to study the worst-case allocation for an item in isolation from the other items. As a direct application of Lemma 3.1, we next provide a worst-case bound on the loss L := W(u(x*)) − W(u(x^{α,β})).
Theorem 3.2 (Upper-bounded loss in an unrestricted two-agent setting). Without loss of generality, suppose that u_1(x*) ≥ u_2(x*). We can upper-bound the loss as a sum over terms per item: L ≤ Σ_{j : u_1j > u_2j} L_j, where L_j is given in Eq. (14) and r(α, β) = (1 − α_2 − β_1)/(1 + α_2 + β_1). For a fixed u_1j, L_j is maximized when u_2j = r(α, β) u_1j. Without any restriction on u_2, this gives us the worst-case upper bound of Eq. (15), with c(α, β) = 1 − r(α, β).
For a fixed u_1j, Fig. 1 shows L_j as a function of u_2j. When u_2j ≥ u_1j, the item goes to agent 2 and j ∉ J_1; we can therefore take L_j = 0. For u_2j < u_1j, as long as Δ_1 ≤ (u_1j + u_2j)(α_2 + β_1), or equivalently u_2j ≥ r(α, β) u_1j, L_j scales linearly with u_2j. The maximum of L_j occurs at u_2j = r(α, β) u_1j; at this point, L_j = c(α, β) u_1j. For u_2j < r(α, β) u_1j, inequality aversion is not strong enough to change the allocation. Note that, for small α_i's and β_i's, we have r(α, β) = Θ(1) and c(α, β) = Θ(α + β).

Figure 1: L_j of Eq. (14) as a function of u_2j when u_1j is fixed.
The worst-case upper bound of Eq. (15), or equivalently L ≤ c(α, β) ∥u_1∥_1, is attained when the agents have aligned preferences and u_2 is a down-scaled version of u_1. Next, we investigate whether we can avoid this worst-case scenario and attain better guarantees by imposing further restrictions on the agents' preferences. In particular, we consider three cases: similar agents, dissimilar agents, and independent agents. We briefly discuss these cases in the following and refer the reader to Appendices D and E for the complete analysis.
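The threshold behavior described above can be observed numerically. This sketch brute-forces the welfare-maximizing split of a single item between two Fehr-Schmidt agents; the parameter values are hypothetical, and the grid search is only a stand-in for the closed-form characterization of Lemma 3.1:

```python
import numpy as np

def best_split(u1, u2, alpha, beta, grid=10001):
    """Brute-force the welfare-maximizing share s of one item going to
    agent 1 (agent 2 gets 1-s) under two-agent Fehr-Schmidt values."""
    s = np.linspace(0.0, 1.0, grid)
    ua, ub = u1 * s, u2 * (1 - s)
    d = ub - ua  # agent 1's disadvantage (positive) or advantage (negative)
    v = (ua + ub
         - alpha[0] * np.maximum(d, 0) - beta[0] * np.maximum(-d, 0)
         - alpha[1] * np.maximum(-d, 0) - beta[1] * np.maximum(d, 0))
    return s[np.argmax(v)]

a, b = [0.3, 0.3], [0.2, 0.2]  # hypothetical alpha, beta profiles
print(best_split(3.0, 0.5, a, b))  # 1.0: a large utility gap, agent 1 takes all
print(best_split(3.0, 2.9, a, b))  # ~0.49: a near-tie leads to an equalizing split
```

With these parameters the effective penalty on the utility gap is α_2 + β_1 = 0.5, so the item changes hands only once the agents' coefficients are close enough, which is the qualitative content of the lemma.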
Similar Agents. Since our measure of inequality depends on the absolute difference of utilities, a natural choice to impose similarity is to bound ∥u_1 − u_2∥_1. We say that agents are ε-similar if ∥u_1 − u_2∥_1 ≤ ε. For ε-similar agents, an immediate result of Eq. (14) is L ≤ ε. This bound is tight up to a factor of 2 (Proposition F.1).

Dissimilar Agents. We say agents are ε-dissimilar when their utility coefficients satisfy a bound, parameterized by ε ≤ 1, on how closely u_2 can align with u_1 (the formal definition appears in Appendix D). For agents that are ε-dissimilar, the maximal loss, which corresponds to u_2 aligning with u_1, occurs only if a certain ratio of coefficient norms is achieved; this ratio is lower-bounded by 1/ε, which we obtain using Jensen's inequality. Hence, for ε-dissimilar agents, when ε ≪ 1, we anticipate a significantly smaller loss than in the worst-case scenario. Intuitively, the dissimilarity constraint prevents the alignment of u_1 and u_2 for many items with large u_1j, resulting in little competition between the agents for those items.
We upper-bound L_j for the general case of ε-dissimilar agents in Theorem D.1; the per-item bound is governed by an arbitrary constant κ ∈ [0, 1]. Without any assumption on how the utility coefficients are distributed, the choice κ = 0.5 yields, as an immediate result of this theorem, L = O(c(α, β) √ε), which is an informative bound only if ε < 1. The theorem further states that the loss will be mainly realized from items with small u_1j. Therefore, introducing a prior distribution over the u_1j's can improve our bounds. For example, assuming u_1j ∼ Unif(0, 1), for κ = 0.5, no more than an O(√ε) proportion of items will contribute to the loss in the worst-case scenario, resulting in L = O(c(α, β) ε). By carefully selecting κ, we show an improved bound of O(c(α, β) ε^{4/3}) in Corollary D.2.
Independent Agents. Let u_1j and u_2j be independently and identically distributed according to a distribution with density f_j and corresponding cumulative distribution function F_j. We make no assumption on independence across items. Looking at Fig. 1, recall that L_j ≤ c(α, β) u_1j, and a positive L_j occurs only when u_2j falls within the interval [r(α, β) u_1j, u_1j]. Since the agents are independent, for a well-spread distribution F_j, this event occurs with probability O(c(α, β)). Hence, we expect a per-item loss of O(c(α, β)^2) in expectation. Since we do not know which agent is doing better a priori when preferences are random, we can use c_max(α, β) = max_i c_i(α, β) in our bounds, obtaining an expected loss of O(c_max(α, β)^2). This quadratic bound is a significant improvement over the worst-case O(c_max(α, β)) bound, though it only holds if F_j is well-spread; refer to Proposition F.2 for a counterexample.
In Appendix E, we provide bounds for the loss under general distributions. Of particular interest is Corollary E.3, where we introduce γ_j = sup_{0 ≤ ā ≤ 1} ā f_j(ā)/F_j(ā) as a measure of how well-spread the distribution F_j is. We show that having a bounded γ_j for every item j is sufficient to bound the expected loss quadratically in c_max(α, β). For example, this holds for the uniform distribution, which has a γ value of 1.
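As a quick check of the quoted value for the uniform distribution: with f(ā) = 1 and F(ā) = ā on (0, 1], the ratio ā f(ā)/F(ā) is identically 1, so γ = 1.

```python
import numpy as np

# gamma = sup_{0 <= a <= 1} a * f(a) / F(a) measures how well-spread F is.
# For Uniform(0, 1): f(a) = 1 and F(a) = a, so a * f(a) / F(a) = 1
# everywhere, and the supremum gamma equals 1.
a = np.linspace(1e-6, 1.0, 1000)  # avoid a = 0, where F(a) = 0
f, F = np.ones_like(a), a
print(np.max(a * f / F))  # 1.0
```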

Multi-Agent Setting
We now consider the general case of n agents. As in the two-agent case, we begin with a characterization of the optimal allocation x^{α,β}, which allows us to analyze items independently. In this setting, we add the constraint that no agent can get more than a τ_j portion of item j. An example of such a constraint is assigning students to a class of size 1/τ_j, where no single student can occupy more than one seat. For n = 2 and τ_j = 1, this is equivalent to the problem we studied in the two-agent setting.
Lemma 3.3 (Multi-agent solution characterization). Suppose there are m items and n inequality-averse agents, and we would like to maximize social welfare as measured by the multi-objective preferences (α, β), subject to the constraint that the share of each agent from any item j does not exceed τ_j. For any pair of agents i and k, if agent i has not received her maximum share of item j (i.e., x_ij^{α,β} < τ_j), then agent k can only get a share of item j under the condition of Eq. (17), which simplifies in the limit of many agents when (1/n) Σ_i α_i → ᾱ and (1/n) Σ_i β_i → β̄.
Eq. (17) indicates that, in the reallocation of an item from agent i to agent k, both society's view of inequality, represented by ∥α∥_1 and ∥β∥_1, and the inequality aversion of the individuals involved play a role. For instance, a society moderately averse to (disadvantageous) inequality (moderate ∥α∥_1) facilitates reallocation even when the better-off agent is not averse to (advantageous) inequality (small β_i). As an immediate result of Lemma 3.3, we can upper-bound the loss in the most general case.
Theorem 3.4 (Upper-bounded loss in an unrestricted multi-agent setting). Suppose 1/τ_j ∈ N. We can then upper-bound the loss as a sum of terms per item, as given in Eq. (19), which again simplifies in the limit of many agents when (1/n) Σ_i α_i → ᾱ and (1/n) Σ_i β_i → β̄.
This result is in stark contrast with similar studies of the price of fairness. For instance, Caragiannis et al. [16] show that, in the allocation of divisible goods, enforcing proportionality or envy-freeness makes the price of fairness grow at least with √n. This is equivalent to a relative loss of Ω(1 − 1/√n), which approaches 1 asymptotically, implying that the fair allocation becomes inefficient. In contrast, we bound the relative loss in our setting by max_i c_i(α, β), which is a constant.
In simple terms, a loss as severe as Eq. (19) can occur if, for each item j, there is a sufficiently large group of worse-off agents whose interests are closely aligned, in down-scaled form, with those of the better-off agents. As in the two-agent setting, we ask whether further structure on the agents' utility coefficients can help avoid the worst-case loss. We consider two cases: clustered agents and independent agents.
Clustered Agents. Suppose that our agents fall into k clusters. We denote the set of agents within cluster c ∈ [k] by C_c. For each cluster c, define the cluster's upper and lower representative coefficients ā_cj = max_{i ∈ C_c} u_ij and a̲_cj = min_{i ∈ C_c} u_ij. We assume that the clusters are easily distinguishable, i.e., agents within a cluster are similar to one another and dissimilar from agents in other clusters. More precisely: (1) within each cluster c, suppose ∥ā_c − a̲_c∥_1 ≤ ρ; we call ρ the radius of the cluster. (2) Between distinct clusters c and c′, suppose ā_c and a̲_{c′} are ε-dissimilar. For small values of ρ and ε, this structure enables us to directly apply findings from the two-agent setting and derive tighter bounds.
Theorem 3.5 (Upper-bounded loss in the clustered-agents setting). Suppose each agent i belongs to one of K distinct clusters, the clusters have radius ε and between-cluster dissimilarity γ with γ ≤ 1, and each agent's share of an item is bounded by δ, where we assume 1/δ ∈ ℕ for simplicity. Then the expected loss admits the stated bound. If, for i ∈ C_c, the utility coefficients a_{ij} are best explained by a (λ, 1) distribution, the exponent of the O(·) term improves to (λ + 1)²/(λ + 2).
For small ε, the latter term of Eq. (21) is dominant. This improves on the unrestricted bound of Theorem 3.4 only if the number of clusters is small and the clusters are sufficiently distinguishable.
Independent Agents. Consider a_{ij} ∼ D_j drawn independently for each agent i, but note that preferences may not be independent across items. The following theorem demonstrates that if δ_j is sufficiently small, with the number of winners of item j (i.e., 1/δ_j) comparable to n, the loss can be bounded quadratically in the level of inequality aversion, irrespective of the number of agents.

Theorem 3.6. Suppose each agent i's utility coefficients a_{ij} are drawn independently from the distribution D_j, with corresponding cumulative distribution F_j. Suppose further that n → ∞ while n·δ_j remains bounded. For each item j with density f_j, define c_j = max_{ā>0} ā f_j(ā)/F_j(ā). If δ_j is at most 1/(n c_j c(α, β)) for every j, then the loss is bounded quadratically in the level of inequality aversion.
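The distribution-dependent constant c_j can be evaluated numerically. A minimal sketch, assuming the reconstruction c_j = max_{ā>0} ā f_j(ā)/F_j(ā) above; the function name and grid are illustrative.

```python
import numpy as np

def c_coefficient(pdf, cdf, grid):
    """Numerically approximate c = max_a a * f(a) / F(a) over a positive grid,
    the distribution-dependent constant appearing in the theorem."""
    a = np.asarray(grid, dtype=float)
    return float(np.max(a * pdf(a) / cdf(a)))

# Uniform(0, 1): f = 1 and F(a) = a, so a*f/F = 1 everywhere, giving c = 1.
grid = np.linspace(0.01, 1.0, 1000)
c_uniform = c_coefficient(lambda a: np.ones_like(a), lambda a: a, grid)
```

Distributions with more mass near the top of the support give larger c: for the density 2a on (0, 1) (so F(a) = a²), the ratio is identically 2.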

GAIN IN INEQUALITY-AVERSE SOCIAL WELFARE
In this section, we study the gain in social welfare, as measured by the elicited preferences over objectives, from moving to a participatory approach. We do so by examining the relationship between loss and gain via the gain-to-loss ratio G/L. A high gain-to-loss ratio indicates that the benefit of moving to a participatory approach outweighs the loss in social welfare as measured by the benchmark objective. Specifically, we ask: Is G/L bounded in general? If not, can we bound this ratio under further assumptions on the agents' preferences?
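As a toy illustration of the ratio G/L (our own construction, not an instance from the paper), the sketch below grid-searches the benchmark and participatory allocations of one divisible good between two agents, assuming a symmetric Fehr–Schmidt penalty shared by both agents.

```python
import numpy as np

def welfare(t, a1, a2, alpha, beta):
    """Utilitarian welfare W and a two-agent Fehr-Schmidt welfare V for the
    split giving agent 1 a share t of one good valued a1, a2 by the agents.
    (Using the same alpha, beta for both agents is an assumption for brevity.)"""
    u1, u2 = t * a1, (1 - t) * a2
    W = u1 + u2
    V = W - (alpha + beta) * abs(u1 - u2)  # total penalty on the utility gap
    return W, V

a1, a2, alpha, beta = 1.0, 0.9, 0.4, 0.1
ts = np.linspace(0.0, 1.0, 10001)
W = np.array([welfare(t, a1, a2, alpha, beta)[0] for t in ts])
V = np.array([welfare(t, a1, a2, alpha, beta)[1] for t in ts])
x_star, x_ab = ts[W.argmax()], ts[V.argmax()]  # benchmark vs. participatory
loss = W.max() - welfare(x_ab, a1, a2, alpha, beta)[0]
gain = V.max() - welfare(x_star, a1, a2, alpha, beta)[1]
ratio = gain / loss
```

For these numbers the benchmark hands the whole good to agent 1, while the participatory split roughly equalizes utilities; the loss is small and the gain comparatively large, so the ratio favors the participatory allocation.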

Two-Agent Setting
We start from the two-agent setting and present a lower bound on the gain. The ratio of this lower bound on the gain to the upper bound on the loss from the previous section will then provide a lower bound on G/L.

Proposition 4.1. Without loss of generality, suppose u_1(x*) ≥ u_2(x*). We can lower-bound the gain as a sum of per-item terms: G ≥ ∑_{j: a_{1j} > a_{2j}} g_j.

Figure 2: g_j from Eq. (23), as a function of a_{2j} with a_{1j} kept fixed, is plotted in blue; the loss term ℓ_j is depicted in gray.
We now examine g_j in Eq. (23) as a function of a_{2j} (Fig. 2). For a fixed a_{1j}, the gain is increasing in a_{2j} as long as a_{2j} < a_{1j}. The maximum gain of 2(α_1 + α_2) a_{1j} is realized as a_{2j} approaches a_{1j}; more generally, the maximum gain can be written as c̄(α, β) a_{1j}. In Fig. 2, we depict g_j along with an upper bound on the loss in gray (duplicating Fig. 1). Two regimes in this figure are especially interesting: (1) As a_{2j} → a_{1j} from below, g_j/ℓ_j takes very large values; in this case, we can also expect large values of G/L. (2) As a_{2j} → f_1(Δ_1) a_{1j} from above, g_j/ℓ_j takes small values. Specifically, when Δ_1 → α_1 + α_2, one can verify that g_j in Eq. (23) goes to zero. But for Δ_1 far from α_1 + α_2, g_j/ℓ_j can still be lower-bounded meaningfully above 0.
The next two propositions formally state the above observations.

Proposition 4.2. Suppose the agents are ε-similar, i.e., ∥a_1 − a_2∥_1 ≤ ε, and initially u_1(x*) > u_2(x*) with no tie on any item. Then the stated lower bound on G/L holds. Intuitively, even when the gain-to-loss ratio is small, it remains meaningfully above 0.

Multi-Agent Setting
The large gain-to-loss ratio is not limited to the two-agent setting. Consider the allocation of m goods to n agents with δ_j = 1 and suitable utility coefficients. As ε → 0⁺, it is straightforward to see that x*_{ij} = 1{i = 1}, i.e., agent 1 wins every good. In the limit ε → 0⁺, we have L → 0 and G → 2∑_{i>1}(α_1 + α_i). Therefore, G/L → ∞. Note that although a dissimilarity constraint can potentially upper-bound the gain, it cannot upper-bound the gain-to-loss ratio. The following proposition shows that, under weak conditions, even extreme dissimilarity cannot guarantee a bounded gain-to-loss ratio.

Proposition 4.4. For any γ > 0, suppose there exist n agents who are pairwise γ-dissimilar. If there exists an agent i for whom either α_i > 0 or β_k > 0 for some k ≠ i, then G/L → ∞.

INDIVIDUAL COSTS OF INEQUALITY AVERSION
The above notions of loss and gain consider aggregate outcomes, but we may also care about the worst-off individual. To this end, we wish to bound the worst-case individual tradeoff T(α, β). In this section, we show that T(α, β) can approach n, even for small levels of inequality aversion. This high individual tradeoff often stems from competition over items when agents' preferences are similar, suggesting that it may be a consequence of the benchmark allocation rather than of any ill-suitedness of the inequality-averse allocation. We give an example where T(α, β) ≈ n, yet W(u(x*)) ≈ W(u(x^{α,β})). However, we can give slightly more optimistic bounds on T(α, β) under mild assumptions on which agent gives up the most utility.
We start by examining individual tradeoffs in the two-agent, two-good setting. If utilities are normalized, i.e., ∑_j a_{ij} = 1 for all i, we can write the utility profile with coefficients a_1 and a_2, where a_1 (resp. a_2) is how much agent 1 (resp. agent 2) values good 1, and 1 − a_1 (resp. 1 − a_2) is how much agent 1 (resp. agent 2) values good 2. We additionally assume a_{ij} > 0 for all i, j, so that there exists a complete allocation x, i.e., ∑_i x_{ij} = 1 for all j ∈ [m], with no inequality between the agents' utilities. Without loss of generality, suppose a_1 > a_2 and a_1 > 1 − a_2. In this setting, x* is the identity, giving all of good 1 to agent 1 and all of good 2 to agent 2, yet u_1(x*) > u_2(x*). Moreover, by the characterization given in Lemma 3.1, if inequality aversion is sufficiently strong relative to the utility gap, the inequality-averse allocation gives enough of good 1 to agent 2 to equalize the agents' utilities. As agent 1 gives up part of her allocation and agent 2 receives goods from agent 1 relative to x*, agent 1 incurs the maximum cost, which admits a closed form approaching 2 as the relevant parameters go to 0.

We now move to the multi-agent setting. In Proposition 5.1, we show that, in the worst case, the individual tradeoff scales linearly with the number of agents n.

Proposition 5.1. There exist n agents with normalized utility coefficients for whom T(α, β) → n.
The above bound gets arbitrarily close to n when all agents have similar preferences; in that case, allocating "almost all of the utility" to a single agent is optimal for the benchmark, and inequality aversion leads to a significant drop for that single agent despite only a small loss in social welfare. This leads us to ask what happens if agents are sufficiently dissimilar. We bound max_i u_i(x*) − u_i(x^{α,β}) by comparing u_i(x^{α,β}) against a zero-inequality allocation x⁰. This enables us to understand T(α, β) in terms of the agents who receive the highest and lowest utilities under x* (Lemma G.1).

Proposition 5.2. Suppose α = β = 1 and let x⁰ be an allocation with no inequality between the agents' utilities. Then the stated bound holds.

This bound lends itself to a more straightforward interpretation, as it is characterized by the best- and worst-off agents according to u(x*). Consequently, if the worst-off agent under x* has positive utility, we can bound individual tradeoffs in terms of x* alone.
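Returning to the normalized two-agent, two-good setting above, the equalizing transfer has a simple closed form. This is our own derivation under the stated assumptions (agent 1 keeps a share t of good 1 and agent 2 receives the rest plus all of good 2); the function name and example numbers are illustrative.

```python
def equalizing_split(a1, a2):
    """Share t of good 1 kept by agent 1 so that both agents end up with equal
    utility. Normalized utilities: agent i values good 1 at a_i and good 2 at
    1 - a_i; assumes a1 > a2 and a1 > 1 - a2, as in the text, so that the
    benchmark x* gives good 1 to agent 1 and good 2 to agent 2."""
    # Agent 1's utility: t * a1.  Agent 2's: (1 - t) * a2 + (1 - a2) = 1 - t * a2.
    # Equalizing gives t * a1 = 1 - t * a2, hence:
    t = 1.0 / (a1 + a2)
    u_equal = a1 / (a1 + a2)  # common utility at the equalized allocation
    return t, u_equal

t, u = equalizing_split(0.8, 0.6)
```

Agent 1's individual cost relative to x* is then a1 − a1/(a1 + a2), which grows as the agents' interests in good 1 become more contested.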

DISCUSSION AND CONCLUSION
In this paper, we study the impact of eliciting inequality preferences from inequality-averse agents on resource allocation. We upper-bound the loss in inequality-agnostic welfare the principal incurs by eliciting inequality aversion, and we bound the gain in inequality-averse welfare from eliciting such preferences. In general, these bounds are linear in the level of inequality aversion, though they can be tightened under certain assumptions on the structure of agents' preferences. Moreover, we show that the largest tradeoff any one agent might incur in inequality-agnostic utility can be arbitrarily bad even under stronger assumptions on preferences, growing linearly in the number of agents.
With this work, we hope to encourage further exploration of how preference elicitation can inform bottom-up approaches to resource allocation. In the inequality-averse allocation, we assume the agents are able to communicate their inequality-aversion preferences exactly. There may be cases where a principal can instead elicit more granular information, such as a partial ranking, to estimate agents' inequality-aversion levels when agents cannot communicate them exactly. Extending our results to the allocation of indivisible goods and to other resource allocation problems will require careful consideration. Finally, the tradeoffs noted in this work, which we explore from a largely theoretical lens, may provide a tool for understanding empirical behavior in resource allocation settings. For instance, in settings that produce surprising or undesirable outcomes in practice, we would like to evaluate whether those outcomes are caused by the planner failing to incorporate inequality aversion or other social preferences into the utility model.

ETHICAL CONSIDERATIONS, POSITIONALITY, AND REFLECTIONS ON ADVERSE IMPACTS
This work is primarily a theoretical proof-of-concept for understanding the potential of preference elicitation to enable a bottom-up approach to objective design. As such, it is concerned with questions related to the distribution of power and, in particular, with how we can design objectives through a participatory approach.
The motivation for modeling inequality-averse agents comes from the behavioral economics literature [10, 31]. Of course, true participant preferences are far more nuanced than what this model captures. Consequently, the trade-offs of how granularly participants can and should express community-level preferences should be studied in future work. We believe the results of this paper, which show potential inefficiencies in standard approaches compared to our participatory approach, provide further motivation for this line of work.
While we hope this work leads to a deeper examination of bottom-up approaches to designing objectives, we also acknowledge that such approaches might be susceptible to adversarial participants, for example through spoofing attacks. We encourage future research to scrutinize this possibility. Finally, the participatory approach we present in this work is one of many such approaches and may not be appropriate or efficient, depending on the underlying set of objectives. We encourage future work to examine this broader space of theoretical possibilities and to empirically evaluate these frameworks in practice.

A BROADER APPLICATIONS OF MULTI-OBJECTIVE FRAMEWORK
In Section 2, we motivate and discuss a general framework for bottom-up resource allocation with multi-objective agents, and then use inequality-averse agents as one case study of this model. We now demonstrate some ways in which this framework applies with greater generality by mapping to it existing work on resource allocation in which agents' utilities cannot be captured solely by classical utility measures.
While this paper examines inequality aversion, it is just one of many well-studied formulations of social preferences [32]. Other formulations of social preferences, such as altruism [45], can be neatly modeled here. Chen and Kempe [20] model altruistic agents (in a traffic-routing setting) as having cost equal to their own latency plus λ times the latency they create for others. In a similar vein, Flanigan et al. [34] model altruistic voters as having utility for their own preferences plus λ times utility for the "public preference" represented by the population mean, with valuation v_i(x) = u_i(x) + λ ∑_{i′} u_{i′}(x). There is also a body of literature on the price of fairness, including the price of envy-freeness [14, 16, 23], in which fairness is imposed top-down by the mechanism designer. In reality, this fairness might be valued by multi-objective agents in fair division problems and might be achieved bottom-up. Instead of treating envy-freeness as a constraint, we can model multi-objective agents as valuing envy-freeness as part of the objective. Here, multi-objective agents might have valuations v_i(x) = u_i(x) − λ max_{i′} ⟨a_i, x_{i′} − x_i⟩, with weight λ on the maximum envy they have toward another agent's allocation. Moreover, this framework can also capture more recent variations of envy-freeness that incorporate, for example, positions on social networks [1], by modifying the terms inside the maximum above.
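The envy-penalized valuation above is straightforward to compute. A minimal sketch, assuming the form v_i(x) = u_i(x) − λ max_{i′} ⟨a_i, x_{i′} − x_i⟩ from the text; the function name is ours.

```python
import numpy as np

def envy_valuation(A, X, lam):
    """Envy-penalized valuations v_i = <a_i, x_i> - lam * max_k <a_i, x_k - x_i>.
    A[i]: agent i's item coefficients; X[i]: agent i's allocation vector."""
    n = len(A)
    v = np.empty(n)
    for i in range(n):
        own = A[i] @ X[i]
        # Maximum envy toward any other bundle; k = i contributes 0,
        # so envy is never negative.
        envy = max(float(A[i] @ (X[k] - X[i])) for k in range(n))
        v[i] = own - lam * envy
    return v

# Two single-minded agents each receiving the good they want: no envy.
v = envy_valuation(np.eye(2), np.eye(2), lam=0.5)
```

When the allocation is envy-free, the penalty vanishes and v reduces to the agents' material utilities.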
Finally, we note that our framework extends beyond social preferences. For example, externalities are well studied: the utility individuals derive from the items they receive might be positively or negatively affected by whether other agents also have those items. One common example is the widespread use of cell phones: a user only has high utility for a cell phone if their friends and family also have one so they can communicate. In this case, we can model multi-objective agents with v_i(x) = u_i(x) + λ ∑_{i′≠i} 1{x_{i′j} = 1}, where utility increases with the number of other people who have the item.
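This externality-augmented valuation can be sketched directly from the formula above for a single binary item; the function name and example numbers are our illustrative assumptions.

```python
import numpy as np

def externality_valuation(U, X, j, lam):
    """Positive-externality valuations for a binary item j (the cell-phone
    example): v_i = u_i + lam * #{other agents holding item j}.
    U[i]: agent i's base utility; X[i][j] in {0, 1}."""
    X = np.asarray(X)
    holders = X[:, j].sum()                     # total holders of item j
    others = holders - X[:, j]                  # holders excluding agent i
    return np.asarray(U, dtype=float) + lam * others

# Three agents; the first two hold the item, the third does not.
v = externality_valuation([1.0, 1.0, 0.0], [[1], [1], [0]], j=0, lam=0.5)
```

Each additional holder raises every other agent's valuation by λ, which is exactly the network effect the cell-phone example describes.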

B RELATED WORK
Participation in Algorithm Design. While research on participatory machine learning has surged in recent years [43], there is little consensus on how to operationalize participation [9, 26]. In light of this increasing spotlight on participation, Sloane et al. [60] warn about the temptation to "participation wash," where algorithm designers boast of disingenuous, deceptive, or nonconsensual participation. Similarly, Hitzig [38] observes a widening gap between the normative theory of the mechanism design literature and the normative goals of policymakers, which creates barriers to embedding participation in mechanisms and algorithms in the way policymakers want.
Our work uses inequality aversion as a case study, which can be contrasted with the fairness literature and with work on equity in the resource allocation literature. In this vein, Mulligan et al. [47] discuss discipline-specific conceptualizations of fairness, which can obscure discussions about values in technology, and call for interdisciplinary discussions and collaborations around the concept of fairness. Similarly, Finocchiaro et al. [33] emphasize that each field's approach to fairness has been restricted to what can be reduced to that field's scope. In particular, machine learning traditionally approaches fairness by incorporating a pre-defined metric into optimization, treating people as data points with no agency. Mechanism design, on the other hand, while considering potential strategic behavior, often tends to measure utility as a proxy for equality. Our framework can be seen as a step toward bridging these views: our proposal avoids a fixed objective function through participatory objective design that can incorporate diverse views on, for example, fairness or inequality aversion. We also deviate from the conventional notion of selfish agents and utilities, allowing for richer preferences over overall outcomes.
Other-Regarding, or Social, Preferences. The economics literature has studied models of agents with other-regarding preferences [10, 25, 27, 31], but most often in the context of analyzing equilibrium strategies in various games rather than evaluating the allocation produced by a fixed mechanism or game. These social preferences have been empirically validated, whether arising from social norms [30], a desire for reciprocity [5], or a desire to be fair [29], and have been observed in contexts ranging from tax compliance [3] and fair wages [2] to general games [19]. Fehr and Fischbacher [28] argue that it is impossible to fully understand the effects of market outcomes and competition without considering social preferences, and Fehr and Fischbacher [29] show that even a few altruists or egoists can drastically affect market outcomes. Recent work has additionally shown that social preferences can curb distortion in voting [34] and participatory budgeting [4].
Fair Division as a Case Study. As a case study of our framework, we study the classical resource allocation problem of allocating m divisible goods among n agents, often studied in the fair division literature, which most often takes a top-down approach with different conceptualizations of fairness. Implicitly, much of this literature studies mechanisms that do not directly optimize utilitarian social welfare (because of its implicit unfairness) but benchmarks proposed mechanisms against this metric. For example, the proportionally fair allocation mechanism of Kelly [42] (equivalent to the Nash bargaining solution and CEEI) has become one of the most widely used mechanisms for allocating bandwidth rates on networks, as it maximizes Nash social welfare (the product of utilities) over all of the agents, which Caragiannis et al. [17] observe is envy-free and efficient. Other notions of fairness yield slightly different fair division mechanisms, such as those of Ghodsi et al. [36] and Robertson and Webb [55]. Moreover, Cole and Tao [24] leverage randomization to guarantee ex-ante envy-freeness of efficient resource allocation mechanisms. Our work diverges from these, since much of the fair division literature proposes mechanisms satisfying some fairness constraint and, at most, benchmarks their quality against social welfare. In Section 2.2, we introduce inequality aversion as our case study for the rest of the paper. The notion of inequality aversion we adopt is in line with equitable cake-cutting [12, 13]: a division of cake is deemed equitable if every agent receiving cake has the same utility for their allocation. Equitable cake cuts cannot be computed exactly, and even approximations are expensive [53].
An additional line of work has emerged on fairness when the reporting agents have externalities affecting their utilities [61, 64], which is one possible application of our general framework (see Appendix A for more discussion). Velez [61] uses money to address externalities in the allocation of indivisible goods. In contrast, Zhang et al. [64] study cake-cutting (m = 1) with externalities and prove the existence of (generalized) envy-free and proportional allocations when agents have utilities over each other's allocations. However, these preferences are not about community-level outcomes; rather, they are about others' individual outcomes, which aligns more closely with the altruism literature [20, 45, 63].
Preference Elicitation. Our work also intersects with the literature on preference elicitation. In the canonical principal-agent problem, informational asymmetries often require a principal to elicit information from agents, typically about their preferences [11, 18, 21, 41] or their predictions about future events [35, 44, 59]. The principal then uses the elicited information to make decisions, such as allocating resources [15, 46, 50] or making decisions about public goods (e.g., determining the winner of an election or the placement of a facility) [51, 52]. We defer the discussion of strategyproofness to §C and focus primarily on the question of what information is collected from agents rather than on whether agents report truthfully. However, Bierbrauer and Netzer [7] and Bierbrauer et al. [8] discuss mechanism design that is (non-)robust to social preferences, albeit in different settings and only through analyzing the existence of a dominant-strategy equilibrium. Recently, Joren et al. [40] also study participation in machine learning, presenting a system in which decision subjects can choose what data they provide to a model or ensemble of models in a way that improves the expected performance of the applied model.
In the context of student assignment to schools, our framework aligns with the suggestion of Robertson and Salehi [57] that individual preferences can serve as valuable signals for promoting justice if they are expanded and made more expressive, including offering more avenues for expressing preferences over desirable social outcomes. Moreover, Robertson et al. [56] find that a more expressive preference language can encourage greater participation by students. These findings reinforce our proposal that preference elicitation across a broad spectrum of objectives, incorporating fairness considerations, can effectively advance social and distributive justice with meaningful participation from stakeholders.
Alignment. Our work relates to aligning reinforcement learning agents or large language models using human preferences over sampled outputs [22, 39, 49]. When directly specifying objectives is difficult for the designer, or the human objective is hard to formalize, one approach is to collect human preferences over a set of model outputs and learn a reward function to maximize. Recent work has also shown that alignment of large language models can be achieved indirectly by fine-tuning on collected preferences without explicitly modeling a human reward function [54]. Our framework follows a similar procedure, allowing agents to specify their objectives as a function of a set of possible objectives. While alignment discussions mainly focus on a single reinforcement learning agent, our work addresses allocating goods to multiple agents, where a central planner makes the final decision. Unlike common alignment approaches in machine learning that allow arbitrary reward functions, our approach limits the options to, for example, those supported by behavioral economics. This restriction enables tractable optimal allocation and provides theoretical bounds on the potential improvement or loss due to preference elicitation. It also limits which aspects of the problem agents can change; for instance, they can be involved in determining the inequality-efficiency tradeoff.

C ELICITING PREFERENCES OF STRATEGIC AGENTS
Throughout our study, we assumed the planner has access to the agents' true preferences a, α, and β. Next, we briefly discuss possible strategic manipulations and mechanism design in the presence of inequality-averse agents. For demonstration, we assume α = β, so there is only a single deviation from the standard setting of mechanism design.
First of all, as long as payments are permitted and agents are quasi-linear, a well-known result is that externality pricing is dominant-strategy incentive compatible and maximizes social welfare [37, Chapter 8]. A quasi-linear agent i cares about v_i(x) − p_i, where p_i is the payment made by i; note that p_i can be a non-linear, multi-dimensional function here. For reported utility coefficients a′ and inequality-aversion levels α′, let v_i(x; a′, α′) represent agent i's valuation of allocation x under the reported preferences. The inequality-averse allocation under the reported preferences is x(a′, α′) = arg max_x ∑_i v_i(x; a′, α′). Externality pricing charges each agent the externality she imposes, where v_{−i} denotes the sum of valuations of every agent other than i. Let us examine this payment rule in the two-agent setting, starting from the welfare agent 2 obtains in the absence of agent 1. For inequality-agnostic agents (i.e., α = 0), this payment rule is equivalent to a per-item second-price auction. But in general, allocations and payments depend on all elements of a and α. Going beyond truthful mechanisms, it turns out there exists a simple modification of the standard second-price auction under which significant deviation from truthful reporting cannot be justified.
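Externality pricing in this two-agent setting can be sketched for a single indivisible item. This is a minimal illustration with our own function names and a brute-force instance; the paper's rule applies to the full inequality-averse allocation rather than one item.

```python
def fs_value(u, i, alpha, beta):
    """Two-agent Fehr-Schmidt valuation of utility profile u for agent i."""
    j = 1 - i
    return u[i] - alpha[i] * max(u[j] - u[i], 0) - beta[i] * max(u[i] - u[j], 0)

def vcg_outcome(bids, alpha, beta):
    """Award one item to the welfare-maximizing agent and charge each agent the
    externality she imposes: the other's best welfare without her, minus the
    other's realized welfare under the chosen outcome."""
    # Utility profiles if agent 0 wins (w = 0) or agent 1 wins (w = 1).
    profiles = {w: [bids[0] * (w == 0), bids[1] * (w == 1)] for w in (0, 1)}

    def welfare(u, only=None):
        agents = (0, 1) if only is None else (only,)
        return sum(fs_value(u, i, alpha, beta) for i in agents)

    winner = max((0, 1), key=lambda w: welfare(profiles[w]))
    pay = [0.0, 0.0]
    for i in (0, 1):
        j = 1 - i
        # Without agent i, agent j would win alone and feel no inequality.
        pay[i] = bids[j] - welfare(profiles[winner], only=j)
    return winner, pay
```

With inequality-agnostic agents (α = β = 0), the winner pays exactly the other agent's bid, recovering the second-price auction mentioned above; with α, β > 0, the payment also reflects the inequality the outcome imposes on the loser.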
Proposition C.1. Assume there is a cap ᾱ on the maximum level of inequality aversion an agent is allowed to report. Given the reports a′ and α′ of the two agents, define the payment rule p accordingly. Then no agent i has an incentive to report a′_{ij} outside of [f²(ᾱ, ᾱ) a_{ij}, f^{−2}(ᾱ, ᾱ) a_{ij}].
Proof. Let a′_2 and α′_2 be agent 1's belief about agent 2's report, and let a_1 and α_1 be agent 1's true preferences. There are three possibilities. (1) If a_1 < f(α_1, α′_2) a′_2, the allocation under truthful reporting is x_1 = 0. Any deviation from truthful reporting that results in agent 1 winning the item forces her to pay an extra amount depending on a′_2, and even optimistically her overall change in utility is non-positive. Therefore, agent 1 has no beneficial deviation in this case.
(2) If, instead, the allocation under truthful reporting is x_1 = 1, agent 1 has to make a payment depending on a′_2. To avoid this payment, agent 1 would have to report a′_1 < f(α′_1, α′_2) a′_2; she would then lose all of the item, and even optimistically her overall change in utility is non-positive. Again, agent 1 has no incentive to deviate.
(3) In the remaining boundary case, a deviation of a′_1 from a_1 might be justified, but only up to a scale of f^{±2}(α_1, α′_2). Fig. 3 summarizes the possibilities. Regardless of agent 1's belief about agent 2, she has no incentive to report outside the interval stated in the proposition. □

D THE LOSS OF INCORPORATING INEQUALITY AVERSION: TWO 𝛾-DISSIMILAR AGENTS
The next theorem shows how knowledge of dissimilarity can help in bounding the loss.

Theorem D.1 (Upper-bounded loss for two dissimilar agents). For two γ-dissimilar agents and any t ∈ [0, 1], the loss can be bounded by L ≤ ∑_j ℓ_j, where the per-item terms ℓ_j take different forms depending on whether γ∥a_1∥_1 > 1.

Proof. For notational convenience, we denote a_{2j} by b_j in this proof. For fixed a_1, α, and β, we can rewrite ℓ_j in Eq. (14) as a function of b_j, and we make this dependence explicit throughout the proof. Since ℓ_j is discontinuous in b_j and hard to analyze directly, we first upper-bound it with two quadratic pieces (Fig. 4). Specifically, we require the upper bound to be zero at b_j = 0 and b_j = 1 and to have zero derivative at b_j = f_1(α, β) a_{1j}. These constraints uniquely determine the upper bound. To find the worst-case upper-bounded loss, we solve a constrained maximization problem whose Lagrangian L involves three non-negative Lagrange multipliers. We know that, for every valid value of the multipliers, max_b L upper-bounds the constrained maximum. So, in the following, we first solve max_b L and then choose appropriate values of the multipliers to obtain a good bound.

Figure 4: ℓ_j with its quadratic upper bound (Eq. (40)) in red.
As its definition suggests, L is separable over items: L = ∑_j L_j, where each L_j collects the quadratic upper bound on ℓ_j together with the multiplier terms for item j. The separability over items allows us to write max_b L = ∑_j max_{b_j} L_j. As L_j is a concave quadratic function of b_j, the first-order condition is necessary and sufficient for the maximizer b*_j. The derivative of the quadratic upper bound is positive if and only if b_j < f_1(α, β) a_{1j}, so there are two possibilities based on the sign of the linear multiplier term: (1) if it is negative, the first-order condition places b*_j below f_1(α, β) a_{1j}; (2) if it is non-negative, b*_j lies at or above f_1(α, β) a_{1j}. In each case, solving the first-order condition, evaluating L_j at b*_j, and simplifying yields a closed form for L*_j. Having found L* = max_b L = ∑_j L*_j, the next step is to choose multipliers that minimize L* or make it sufficiently small. In both cases above, L*_j is monotone increasing in its multiplier, and the optimal choices depend on the sign of γ∥a_1∥_1 − a_{1j}: when γ∥a_1∥_1 > a_{1j}, one choice of multipliers drives the corresponding term to zero; otherwise, a different choice applies. Finally, for every a_{1j}, we select whichever of the two cases yields the smaller L*_j.
For such a choice, the per-item bound is parameterized by t with 0 ≤ t ≤ 1. Fig. 5 shows the upper-bounded loss for sufficiently small γ. The parameter t gives us the flexibility to penalize extreme values of a_1. Without any further knowledge of a_1, we cannot do better than the choice t = 0.5, which yields a bound of order f_1(α, β)√(γ∥a_1∥_1). This bound is more informative than L ≤ f_1(α, β)∥a_1∥_1 only if γ is small relative to ∥a_1∥_1. Having a prior over a_1 can yield strictly better bounds. The next corollary gives an example with a_1 drawn from a uniform distribution, without any independence assumptions across items.
Corollary D.2. Assume γ ≤ 1/m. If a_{1j} ∼ Uniform(0, 1) for all j ∈ [m], then the expected loss admits a tighter bound.

Proof. Using Theorem D.1, for t ≥ 0.5 we obtain a bound whose terms depend on t. A good choice of t is one at which all the terms have the same exponent (Eq. (68)); for such a choice, this is a tighter bound than the distribution-free bound. □

E THE LOSS OF INCORPORATING INEQUALITY AVERSION: TWO INDEPENDENT AGENTS
The following lemma bounds ℓ_j in the general setting of two independent agents.
Define a new variable â = a/ā. Given ā, the cumulative distribution of â follows by a change of variables.

F ADDITIONAL STATEMENTS
Proposition F.1. There exist two ε-similar agents for whom L ≥ ε/2.

□
The Lagrange multipliers in Eq. (96), including λ and ξ, are all non-negative. Note that the inequality-aversion levels are less than 1/2, so the corresponding aggregate inequality terms are bounded in absolute value by 1/2. Using the new definitions, we can rewrite Eq. (90) accordingly. If x_{ij} > 0, the associated complementary-slackness condition binds, and Eq. (97) for agent i and item j pins down the item's multiplier; if additionally x_{ij} < δ_j, the upper-bound multiplier λ_{ij} vanishes, and plugging back into Eq. (97) gives a closed form. Since the multipliers take only non-negative values and the aggregate inequality terms exceed −1, we can combine Eq. (96) with extreme values of the inequality-aversion levels to bound the multipliers. The max over agents can be upper-bounded by the sum, over clusters, of the max over agents within each cluster: if i ∈ C_c, then for every k ∈ C_c the within-cluster radius controls the difference between their coefficients. Note that without any distributional assumption, we cannot find a better bound for the resulting O(·) term. Putting all of this together yields the claimed bound. Suppose that, in the case of a tie when computing x*, a random agent gets the complete share, so that all elements of x* are either 0 or δ_j. This assumption has no effect on W(u(x*)) and only simplifies the analysis by ensuring that the number of winners of each item is at most 1/δ_j. Then, using an argument similar to Eq. (103), we can rewrite the sum over agents in the above equation accordingly; plugging this into Eq. (113) completes the proof. □

Proof of Theorem 3.6. We start with the proof intuition and then present the formal proof.
Proof Intuition. The idea behind the proof is as follows. Roughly speaking, a loss of at most δ_j c(α, β) ā is incurred if a winner i of item j loses her share to a non-winner k whose coefficient satisfies a_{ij} ≥ a_{kj} ≥ (1 − c(α, β)) a_{ij}. If there were only two agents, we could argue that when the agents are independent and F_j is smooth, the probability that a_{kj} lies in this narrow band is O(c(α, β) ā), so the expected loss from the reallocation of item j would be O(δ_j c(α, β)² ā). With many agents, we instead order the coefficients of the winners of item j and identify the highest-ranked winner who can lose her share to a non-winner; lower-ranked winners may only exchange goods among themselves. Now suppose winner i has lost her share to non-winner k. The loss of this reallocation is a_{ij} − a_{kj}, which is at most the gap between the corresponding consecutive order statistics. Let Y_k denote the loss a non-winner k imposes; using notation similar to Lemma E.1, Y_k is controlled by the conditional distribution of a_{kj} given the top order statistic, and we write a^{(r)}_j for the r-th largest coefficient of item j. Putting these together and conditioning on the top 1/δ_j − 1 order statistics, we can bound ℓ_j in terms of these gaps. Here M = max_ā E[a^{(1)}_j | ā] = O(1) (refer to Lemma E.1). We then approximate the expectation of Eq. (118): the leading term involves the conditional distribution of the top order statistic, and expanding it shows the relevant sum contains almost all of the significant terms of a binomial expansion, so it concentrates near 1 − o(1). Therefore Eq. (118) can be bounded by O(δ_j M). Next, we approximate the expectation of the r-th term of Eq. (119): the difference of consecutive order statistics is, with high probability, not much larger than 1/n, and the polynomial factor inside the sum can be upper-bounded by 1/δ_j; if 1/n exceeds δ_j, the terms corresponding to large r are negligible, and it suffices to sum only the first few terms, which are themselves dominated by a binomial expansion of 1. So the expectation of Eq. (119) is bounded with high probability. Finally, Eq. (120) is clearly bounded by δ_j M; since n → ∞ and the ratio of n to 1/δ_j is constant, 1/δ_j also goes to infinity, and Eq. (120) becomes negligible. This completes the proof. □

G.2 Deferred Proofs from Section 4
Proof of Lemma 3.1. Consider an item j such that v_{1j} > v_{2j}. There are three possibilities. 1) If v_{2j} > f_1(Δ_1) v_{1j}, an immediate result of Lemma 3.1 is x_{2j} = 1. In this case, the loss in overall utility is v_{1j} − v_{2j}, but the inequality is also reduced by v_{1j} + v_{2j}. So the social welfare based on true valuations increases by (λ_1 + λ_2)(v_{1j} + v_{2j}) − (v_{1j} − v_{2j}), which is reflected in Eq. (23). A simple calculation shows this term is non-negative for any Δ_1 ≤ λ_1 + λ_2. 2) If v_{2j} = f_1(Δ_1) v_{1j}, then for any value of x_{2j} the resulting gain is x_{2j} times the gain of a full reallocation, which is non-negative. 3) The allocation does not change if v_{2j} < f_1(Δ_1) v_{1j}, and the gain is zero in this case. So, overall, Eq. (23) gives a lower bound on the gain that can be realized from the reallocation of item j. □

Proof of Proposition 4.2. Without loss of generality, assume agent 1 is better off under x*, so the items on which agent 1 has the higher value account for more than half of the total utility under x*, and Δ_1 > 0. Agent 2 can be seen as an adversary with budget ε who minimizes the gain. Starting from the point where agent 2's valuation of item j approaches agent 1's, the gain G_j is as large as c_1 v_{1j}. To reduce this gain to zero, agent 2 can spend roughly f_1(Δ_1)(1 − f_1(Δ_1)) v_{1j} of her dissimilarity budget. The return rate of agent 2's investment, or equivalently the reduction rate in the gain, is c_1/f_1(Δ_1). Hence, agent 2's best strategy is to greedily spend her budget on the items with the smallest v_{1j} and make v_{2j} sufficiently different on those axes. Ideally, this reduces the total gain by ε c_1/f_1(Δ_1), resulting in G ≥ (Σ_{j: v_{1j} > v_{2j}} c_1 v_{1j}) − ε c_1/f_1(Δ_1). On the other hand, we already know that the loss in the case of ε-similar agents is upper-bounded by ε. Putting these together and treating c_1/f_1(Δ_1) as a constant completes the proof. □

Note that this ratio (Eq. (131)) does not depend on ε. Then, in the limit ε → 0+, although the gain and the loss both go to zero, their ratio goes to infinity. This happens despite every two agents being ε-dissimilar with ε → 0+. Therefore, dissimilarity constraints are not helpful in upper-bounding the gain-to-loss ratio. □
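The remark above — that the two bounds can be driven toward zero while their ratio diverges — can be illustrated with a few lines of arithmetic. A minimal sketch, using purely hypothetical placeholder constants c, f, and V (standing in, respectively, for the gain constant, the threshold value, and the sum of the relevant item values; none of these numbers come from the paper):

```python
# Hypothetical placeholder constants: c stands in for the gain constant,
# f for the (constant) threshold value, and V for the sum of the relevant
# item values. Only the shape of the two bounds mirrors the proof.
c, f, V = 0.5, 0.25, 10.0

def gain_lower_bound(eps: float) -> float:
    """Lower bound on the gain after the adversary spends budget eps."""
    return c * V - eps * c / f

def loss_upper_bound(eps: float) -> float:
    """Upper bound on the loss between eps-similar agents."""
    return eps

# As eps shrinks, the loss bound vanishes while the gain bound does not,
# so the ratio of the two bounds grows without bound.
ratios = [gain_lower_bound(e) / loss_upper_bound(e) for e in (1.0, 0.1, 0.01)]
```

In the paper's limiting construction the item values themselves scale so that gain and loss both vanish; the sketch only shows that the ratio of the gain lower bound to the loss upper bound diverges as ε → 0+.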