Differentially Private Resource Allocation

Recent studies have shown that systems with limited resources, such as Metadata-private Messengers (MPM), suffer from side-channel attacks during resource allocation (RA). In the case of MPM, which is designed to keep the identities and activities of both callers and callees private from network adversaries, an attacker can compromise a victim's friends and keep calling the victim to infer whether the victim is busy, which breaks the privacy guarantee of MPM. In this work, we systematically study how to protect the privacy of RA against such attacks with differential privacy (DP). Though DP has been tested by Angel et al. (IEEE S&P 2020) for protecting RA, by letting the allocator add dummy requests following a biased Laplace distribution to hide the existence of the victim and then assign resources randomly, we identify that this approach does not leverage the uncertainty from the attacker's view, leading to a loose DP bound. As a result, more than 40% of the resources are wasted to satisfy DP. To make DP solutions more practical, we precisely model the RA process from the attacker's view and present a thorough study of noisy allocation mechanisms, considering different distributions, scales, and biases of noise. We identify four new mechanisms and prove that they all satisfy ϵ-DP (Angel et al. only achieve (ϵ, δ)-DP). Through theoretical and empirical analysis, we find that these approaches outperform Angel et al. by a large margin in the privacy-utility tradeoff.


INTRODUCTION
Resource allocation (RA) is a long-standing problem relevant to a variety of application scenarios, such as virtual machine assignment [46], storage allocation [43], network bandwidth management [44], and channel allocation [61]. Prior works mostly focus on the efficiency and cost of RA [9, 27, 30, 31, 33, 39, 49], e.g., how to improve resource utilization and guarantee the quality of service to all users [30]. However, the privacy issues of RA have been overlooked for a long time and were only studied recently. Angel et al. [4] reveal that a powerful attacker can determine the existence of other parties in the RA system. Concretely, for an allocator managing limited resources, when one party requests resources, the number of resources the other parties can obtain is affected. Therefore, the attacker can send a large volume of requests and use the allocation results to infer the existence of other users. Knowing the existence of others opens the door to more serious attacks that infer users' activities. For example, although Metadata-private messengers (MPM) are designed to hide the calling activities between clients, this privacy guarantee can be breached with the RA side channel and traffic analysis [4].

Existing Resource Allocators. Most existing allocators (e.g., the first-in-first-out allocator) do not offer any privacy guarantee [3]. Recently, Angel et al. [3] proposed an allocator, AKR, that satisfies differential privacy (DP) [17]. Angel et al. consider the scenario where the resource allocator owns a limited number of resources and the attacker controls a large number of clients. The attacker learns of the existence of another victim when her requests to the allocator are not fulfilled. To protect privacy during RA, AKR adds dummy requests to the real ones and then assigns resources to randomly chosen requests. The number of dummy requests follows a biased Laplace distribution, and by a standard post-processing argument in DP (explained in Section 2.3), the existence of the victim is differentially private to the attacker. While the dummy requests puzzle the attacker, we found that the utility of AKR is not satisfactory. For instance, to achieve an acceptable protection level of DP (with parameters ϵ = 2, δ = 10⁻⁶), more than 40% of the resources must be wasted in its experimental setting.
Our Solution. Unlike AKR, whose analysis implies that the attacker knows the total number of requests after noise is added, we observe that a practical attacker only has a partial view of RA. Therefore, we model RA privacy from the attacker's view. Owing to the randomness introduced by RA, we benefit from "privacy amplification" [5, 20] through this modeling and achieve a better privacy-utility tradeoff.
Then, we implement DP mechanisms under four noise distributions, namely constant (CST), uniform (UNI), one-sided geometric (GEO), and double geometric (DGEO), and tailor them to our new modeling. We conduct a rigorous privacy analysis and derive much tighter privacy bounds than AKR. We prove that GEO and DGEO always satisfy ϵ-DP under various parameters, while CST and UNI satisfy ϵ-DP under certain conditions. Interestingly, we find that adding a constant noise (CST), which obviously violates traditional DP, can be proven to satisfy DP in the context of RA, due to the randomness of the allocation process. On the other hand, AKR only considers non-negative Laplace noise and relies on the post-processing argument to satisfy (ϵ, δ)-DP.

Evaluation. We evaluate the proposed mechanisms empirically by simulating the RA process of Alpenhorn [42] with 5 million to 100 million rounds of requests, to demonstrate the privacy-utility tradeoff in real-world settings. (1) GEO outperforms the other mechanisms when ϵ is smaller (i.e., ϵ < 2) and has relatively stable performance; (2) DGEO performs better with a larger ϵ (ϵ > 2). Compared to AKR, which wastes 44% of the resources, DGEO only wastes 10% of the resources with ϵ = 2. Moreover, when ϵ = 2.25, AKR utilizes 60% of the resources while DGEO achieves 97% utilization. (3) Parameters of the mechanisms have to be carefully tuned, and negative bias should be avoided. The advantage over AKR is especially surprising as AKR is supposed to have better utility under the relaxed (ϵ, δ)-DP, whereas our mechanisms follow the strict ϵ-DP. This justifies the effectiveness of our privacy analysis.

Contributions. The main contributions are summarized below:
• We conduct a rigorous privacy analysis of differentially private RA, and derive tighter privacy bounds under the attacker's view for four noisy mechanisms.
• We theoretically and empirically evaluate our proposed mechanisms. One mechanism, GEO, leads to the best privacy-utility tradeoff and outperforms AKR by a large margin.
• We publish the code in a GitHub repository [14].

BACKGROUND

2.1 Problem Definition
Resource allocation (RA) assigns limited resources to requesting parties, and we focus on RA within computing systems in this paper. Examples include resource management in data centers [2], assignment of virtual machines (VMs) in the cloud [46], cache allocation in computers [43], and channel allocation for Metadata-private Messengers (MPM) [42]. Below we first provide an abstract view of standard RA and describe its involved parties and procedure. Then, we describe the attackers' goals and capabilities in RA. The frequently used notations are defined in Table 1.

RA Parties and Procedure. Our abstraction of standard RA considers a scenario where an allocator allocates resources based on the requests submitted by a number of clients. The allocator can consist of one server or a group of servers for fault tolerance. In the data center setting, the allocator can be a virtual machine manager (VMM) and the client can be a data center tenant. In the MPM setting, where two users can set up a call in a private way, the allocator can be a callee and the client can be a caller.
Regarding the RA procedure, we assume it takes rounds of interactions between the allocator and the clients.In each round, the allocator receives requests from its clients for resources (e.g., CPUs in a cloud and communication channels to be allocated to a caller in MPM) and makes the best efforts to serve the requests.Hence, for each request, the allocator either accepts it and allocates the resources, or rejects it when all resources have been occupied.
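As a minimal illustration of this per-round procedure, the following sketch (our own; the requester IDs and capacity are hypothetical) accepts identical requests until the resources run out and rejects the rest:

```python
def allocate_round(requests, capacity):
    """One RA round: grant each identical request while resources remain,
    reject the rest. Returns the set of granted requester IDs."""
    granted = set()
    for requester_id in requests:
        if len(granted) < capacity:
            granted.add(requester_id)   # accept: one piece of resource
        # else: reject, all resources are occupied this round
    return granted

print(allocate_round(requests=["u1", "u2", "u3"], capacity=2))  # {'u1', 'u2'}
```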
Following prior work [3], we assume the quantity of resources is a limited number k, and all resources are identical. Each round, some clients send requests, and each request asks for one piece of resource. Because the resources are identical, the requests are also identical (except for the requesters' IDs). We note that some assumptions can be relaxed (e.g., resources are not identical and each client can request multiple resources) to match different application scenarios, and we discuss these variations in Section 6.2.

Adversary Model. Since the clients' requests might not always be fulfilled under limited resources, the allocator's response could leak information about the existence of some clients. Figure 1 illustrates how such an inference attack can be conducted. Formally, we assume the attacker in the strongest attack scenario, who can:
• compromise all clients except one victim client; we denote the number of compromised clients as m.
• know the number of available resources k before RA.
• compromise more clients than there are resources, i.e., m ≥ k, and submit all requests at the same time.
The attacker can tell there is a victim requesting a resource if fewer than k of her requests are fulfilled. We assume a malicious adversary who can behave arbitrarily, rather than a semi-honest one. We only consider the privacy issues in RA; other issues like availability (e.g., the attacker blocking a victim from getting resources by overwhelming the allocator) are out of scope. We note that an adaptive attacker can exploit the correlation of results across multiple rounds and infer more information that weakens the allocation privacy. We propose a few approaches to tackle such an adversary in Section 6.1.
Regarding the allocator, we assume it is trustworthy and can see all clients and requests and add noise. Hence, the allocator can analyze historical data to estimate the parameters used by our mechanisms without privacy issues. We also assume the communication between the victim and the allocator is secure, so the number of the victim's requests is not leaked.

Impact of RA Side-channel Leakage. Even though the information about the victim leaked during RA is seemingly insignificant, it can be leveraged as a side channel to break privacy-enhancing technologies or make subsequent attacks more effective.
Specifically, Angel et al. described an attack based on the RA side channel [4] against MPM. MPMs like Vuvuzela [61], Alpenhorn [42], Stadium [60] and Karaoke [41] hide both the message content and its metadata (including sender, receiver, time of communication, etc.) from network adversaries. In essence, a user within an MPM initiates a conversation with her friend at an agreed time or round and encrypts the messages with a shared key. In the conversation round, the user initiates k channels to k friends (including the friend with whom she has the "real" conversation). To avoid leaking metadata, users are forced to send and receive a message on each channel in each round. Since MPM requires the clients to always be online, only the communicating parties of a client need to be protected, while the client's existence is known.
It turns out the privacy guarantee of MPM can be entirely violated. As shown by Angel et al. [4], a user usually has a greater number of friends than k channels. When the attacker controls m (m ≥ k) friends of the user and lets them call the user, if the user is busy (e.g., not responding) to more than m − k callers controlled by the attacker, the attacker knows the user is communicating with others who are out of her control. Moreover, when the attacker compromises the friends of multiple users, she can infer which users are likely active in a given round with intersection and disclosure attacks [1, 45]. Specifically, the attacker can narrow down the possible sender-recipient pairs by ignoring all the idle users during the first round of calling. Then the attacker can build intersections of active users and keep reducing the set of possible sender-recipient pairs during additional rounds. Because the requests and resources are all identical under our assumptions, detecting such an inference attack is also very challenging.

Existing Resource Allocators. We aim to design an RA scheme that hides the existence of the victim while maximizing request fulfillment. One trivial solution that provides perfect privacy is to have the allocator withhold all the resources and reject every request, but obviously this solution has zero utility. Angel et al. characterize the existing allocators as (1) FIFO (first in, first out) allocators, (2) uniform allocators, (3) slot-based resource allocators (SRA) and (4) randomized resource allocators (RRA) [3]; FIFO and uniform allocators are non-private, while SRA and RRA are private. However, both SRA and RRA incur prominent utility loss.
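A minimal simulation sketch of this inference (our own illustration, not code from [14]) assumes a non-private allocator that grants identical requests uniformly at random; k, m, and all names are illustrative:

```python
import random

random.seed(0)

def attacker_observes_fewer(k, m, victim_active):
    """Attacker calls with m >= k compromised friends; the victim's real friend
    adds one more request when the victim is in a conversation. The allocator
    grants k of the identical requests uniformly at random."""
    total = m + (1 if victim_active else 0)
    granted = random.sample(range(total), min(k, total))  # indices of granted requests
    attacker_granted = sum(1 for i in granted if i < m)   # indices < m belong to the attacker
    return attacker_granted < min(m, k)                   # fewer than expected => victim is busy

k, m = 10, 10
print(attacker_observes_fewer(k, m, victim_active=True))   # True with probability 10/11
print(attacker_observes_fewer(k, m, victim_active=False))  # always False
```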

A Primer on Differential Privacy
Our work applies differential privacy (DP) mechanisms on RA.We briefly overview DP in this subsection and describe how AKR applies DP to RA [3] in the next subsection.
In the standard (central) setting, a trusted data curator adds noise (e.g., through the Laplace mechanism or the Geometric mechanism) to fulfill a DP notion (e.g., (ϵ, δ)-DP) given a query from a data consumer, which provably bounds the information leakage.

Definition 1 ((ϵ, δ)-Differential Privacy [17]). An algorithm M satisfies (ϵ, δ)-differential privacy against an adversary, where ϵ, δ ≥ 0, iff for any two neighboring datasets D and D′, and any subset S of all possible outcomes of algorithm M, we have

Pr[M(D) ∈ S] ≤ e^ϵ · Pr[M(D′) ∈ S] + δ. (1)

We consider two datasets D and D′ to be neighbors, denoted D ≃ D′, if and only if D = D′ + r or D′ = D + r, where D + r denotes the dataset resulting from adding one user's data r to the dataset D. ϵ measures the privacy loss under a differential change in the data and is also called the privacy budget. δ models the probability that the algorithm M fails to be differentially private and is also called the "failure probability". The value of δ is normally very small in order to keep the algorithm satisfying DP most of the time. When δ = 0, we simplify (ϵ, 0)-DP to ϵ-DP and call it pure DP.
Laplace Mechanism [17]. It computes a function f on an input dataset D while satisfying ϵ-DP, by adding random noise to f(D). The magnitude of the noise depends on GS_f, the global ℓ1 sensitivity of f, defined over any two neighboring datasets D ≃ D′ as

GS_f = max_{D ≃ D′} ||f(D) − f(D′)||₁. (2)

When f outputs a single element, M can be written as

M(D) = f(D) + L(GS_f / ϵ), (3)

where L(b) denotes a random variable sampled from the Laplace distribution with scale parameter b such that Pr[L(b) = x] = (1/(2b)) · e^{−|x|/b}. When f outputs a vector, M adds independent samples of L(GS_f / ϵ) to each element of the vector.
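As a concrete instance, a counting query with GS_f = 1 can be released as follows (a standard sketch, not code from [14]; names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-DP by adding Laplace(GS_f / epsilon) noise."""
    scale = sensitivity / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

print(laplace_mechanism(42, epsilon=1.0))
```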

Differentially Private Allocation in AKR
As all the requests are identical from the allocator's point of view, the key to providing privacy is to "control" the number of resources the attacker receives. Thus, AKR asks the allocator to add dummy requests. Specifically, AKR sets the dataset D to be all requests made by clients and computes the noise L(GS_f / ϵ). To ensure the number of added requests (i.e., M(D) in Equation 3) is non-negative, a bias β is added when sampling the Laplace noise so that the probability of the noise being negative is bounded by δ; we refer to this as the biased Laplace distribution. The workflow of AKR takes as input the request set D, the capacity k, the sensitivity GS_f, the privacy parameter ϵ, and the bias β; it then samples the biased Laplace noise, adds the corresponding number of dummy requests, and randomly assigns the k resources among all requests.

Theorem 3 (DP Proof for AKR [3]). Algorithm M is (ϵ, δ)-differentially private for ϵ = 1/λ and δ = ∫_{−∞}^{1} L(x | β, 1/ϵ) dx. Specifically, for any subset of values S in the range [c(D), ∞) of M, Pr[M(D) ∈ S] ≤ e^ϵ · Pr[M(D′) ∈ S] + δ, where c(D) computes the cardinality of set D.

Note that δ = ∫_{−∞}^{1} L(x | β, 1/ϵ) dx shrinks only as the bias β grows; we can see that β tends to be large in order to have a small δ.
Given that the noise is non-negative, what the attacker observes after allocation can be seen as post-processing of the noisy request count, as the victim's request is indistinguishable from the added dummy requests. Specifically, let A be a random variable denoting the number of resources the attacker gets. Since the attacker only learns which of her own requests were fulfilled, from her point of view the dummy requests and the victim are indistinguishable. Thus, for each value a ∈ [0, k], Pr[A = a | D] = Σ_n Pr[M(D) = n] · Pr[A = a | n], where n is the number of requests including dummies. Combined with the inequalities governing the probabilities that M outputs each value of n for D and D′, respectively, we have that Pr[A = a | D] ≤ e^ϵ · Pr[A = a | D′] + δ, and similarly with D and D′ exchanged. Thus, the distributions of the number of the attacker's requests allocated are very close for D and D′.
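A rough sketch of this workflow, reflecting our reading of AKR (the exact rounding and truncation of the biased Laplace sample in [3] may differ; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def akr_allocate(num_real_requests, k, epsilon, beta):
    """AKR-style round: add a non-negative number of dummy requests drawn from a
    biased Laplace, then grant k resources uniformly among real + dummy requests."""
    noise = rng.laplace(loc=beta, scale=1.0 / epsilon)
    dummies = max(0, int(round(noise)))                       # dummy count cannot be negative
    total = num_real_requests + dummies
    granted = rng.choice(total, size=min(k, total), replace=False)
    real_granted = int(np.sum(granted < num_real_requests))   # indices < n are real requests
    return real_granted, dummies

print(akr_allocate(num_real_requests=10, k=10, epsilon=2.0, beta=15))
```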

MODELING RESOURCE ALLOCATION
In this section, we first demonstrate the problem of AKR's modeling of RA.Then, we present a taxonomy of different ways to "add noise" in RA and a general approach to model privacy.

Privacy Amplification from Allocation
We argue that AKR's modeling of RA leads to suboptimal utility due to a lack of consideration of the attacker's view and capabilities. Though AKR, by its definition, does not reveal the total number of requests in each round, its proof establishes a stronger statement: the DP guarantee holds even when the attacker observes the total number of requests after noise is added (i.e., the number of requests from both the attacker and the victim). More specifically, the proof guarantees that the noisy total number of requests is bounded by (ϵ, δ)-DP when honest clients are added. However, such information is not actually accessible to the attacker, which creates a gap between the proof and the actual definition of the RA problem. Examining the attacker's view is crucial for the privacy amplification in our study. By comprehending the capabilities and limitations of the attacker, we can construct a precise analysis and avoid unnecessary noise. In real-world scenarios, the capability of an attacker can be considerably limited, as she is typically not granted access to the internal state of an allocator. In fact, if the attacker could observe the internal state of an allocator, she would just access the number of requests before noise is added, which defeats all DP-based protection.
We note that such a modeling gap is common in DP for ease of proof. For example, in DP-SGD [22], the privacy guarantee is proved for each SGD step, implying that the attacker can observe the intermediate steps, even though such information should not be accessible to the attacker. A similar case also appears in the proof of the privacy blanket [6, Theorem 3.1] for the shuffle DP model, which assumes the attacker has unrealistic extra information for the ease of proof.
Hence, we propose to model the attacker's capabilities more precisely and offer a tighter bound under the notion of DP. By conducting the privacy analysis from scratch, we present a set of "privacy amplification" results. In this paper, the privacy amplification stems from the fact that the attacker only has a partial view of the allocation result: the attacker knows whether each compromised client receives the allocated resources, but not the one uncompromised client. Compared to AKR, which has to introduce larger noise to deter the (unrealistic) attacker, we can use smaller noise to satisfy DP. In Section 5.2 ("Why Model the Attacker's View"), we elaborate on the impact of privacy amplification.

Design Space
As described in Section 2.1, RA takes two steps: (1) receive a request, and (2) allocate the resource if the request is accepted. Hence, for privacy protection, the allocator can add noise to either (1) the number of requests (i.e., by adding dummy requests or removing some requests), or (2) the number of available resources (i.e., by withholding some available resources). After that, the allocator can randomly select requests and assign resources to them. Therefore, the design space for the allocator is composed of:
• DS1: Choosing Where to Add Noise. The allocator can add noise to the number of requests, the number of resources, or both. Our analysis shows that randomizing the number of resources has the same effect as randomizing the number of requests (explained later), thus we focus on designing methods to add noise to the number of requests. In Section 6.4, we give a few real-world examples.
• DS2: Choosing How Noise is Generated. The allocator adds noise to the observed number of requests, and we have the flexibility to choose:
-The distribution of the noise.
-The range (support) of the distribution.
We found AKR only covers part of this design space: (1) AKR treats RA as post-processing and only adds non-negative noise (dummy requests) to the requests; (2) AKR does not consider distributions other than the Laplace distribution.

Adding Noise to Resources. Beyond adding noise to the requests, we can choose to add noise to the resources. Here we consider noise that is always negative, i.e., some resources are withheld from being assigned to clients. Positive noise can be seen as "creating" resources on the fly and assigning more than what a client asked for, which could be impractical for a real-world system. Yet, we can prove that withholding any number of resources can be equivalently modeled as assigning them to dummy requests. Specifically, the allocator could withhold z resources from n requests, which results in k − z randomly chosen requests getting resources; this is equivalent to requests being randomly removed from the system so that the remaining k − z requests are all granted resources. Thus, we only consider adding noise to requests.

Privacy Modeling
Under DS1, we model RA's privacy through the lens of DP as follows. We use N to denote the random variable for the number of noisy requests, and n the number of requests made in a round. Given two neighboring datasets D and D′, w.l.o.g. we assume D′ equals D plus the honest request from the victim client. RA's privacy can be quantified by requiring that, for every outcome a the attacker may observe,

Pr[View^A_M(D) = a] ≤ e^ϵ · Pr[View^A_M(D′) = a], and symmetrically with D and D′ exchanged, (4)

where View^A_M(·) models the allocation outcome in the attacker's view, i.e., the number a of her own requests that are fulfilled. We derive these probabilities below for the two cases z ≥ 0 and z < 0 (Equations 5 and 6, and Equations 7 and 8, respectively).

Request Addition (z ≥ 0). For the case of D, assuming there are m requests from D, given a specific number of dummy requests z ≥ 0 (and m + z ≥ k), we have:

Pr[a | D, m + z] = C(m, a) · C(z, k − a) / C(m + z, k). (5)

For the case of D′, which has an additional honest request, the attacker could receive one fewer resource. Thus we have:

Pr[a | D′, m + 1 + z] = C(m, a) · C(z + 1, k − a) / C(m + 1 + z, k). (6)

Similar to Equation 5, Equation 6 is zero when a falls outside its feasible range.

Request Removal (z < 0). For the case of D (the honest request does not exist), when the number of added dummy requests is negative (z < 0), some requests are removed at random. We have:

Pr[a | D, m + z] = 1 if a = min((m + z)⁺, k), and 0 otherwise, (7)

where x⁺ denotes max(0, x). This case is simpler than request addition, and what the attacker observes is deterministic: if, after adding the negative noise z, m + z is still greater than k, then the attacker always receives k resources; if m + z ≤ k, then the attacker always receives m + z resources.
For the case of  ′ , there are  + 1 +  requests, and we need to consider whether the honest request is fulfilled.Let  = min( + 1 + , ) + , which leads to two scenarios: • Allocator assigns resources to the honest client: in this case,  can only be  − 1.The probability of the allocator assigning resources to the honest client is  +1 , which is equivalent to the case of selecting  = min( + 1 + , ) + items from a total of  + 1 items without replacement and that the honest client is selected.
• Allocator does not assign resources to the honest client:  must be  if the honest request is not fulfilled, which happens with probability 1 −  +1 .Thus we have: where  = min( + 1 + , ) + .
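The following helper is our own sketch of the probabilities above (the hypergeometric form for z ≥ 0, and the two-branch form for z < 0); all parameter names are ours:

```python
from math import comb

def pr_view(a, m, k, z, victim=False):
    """Pr[attacker sees a of her m requests fulfilled | k resources, noise z].
    victim=True is the neighboring case D' with one extra honest request."""
    n = m + (1 if victim else 0)            # number of real requests
    if z >= 0:                              # request addition: z dummy requests
        total = n + z
        s = min(total, k)                   # number of granted requests
        if a < max(0, s - (total - m)) or a > min(m, s):
            return 0.0
        # the granted requests form a uniform s-subset of all total requests
        return comb(m, a) * comb(total - m, s - a) / comb(total, s)
    # request removal: |z| requests are dropped uniformly at random (Equations 7 and 8)
    s = min(max(n + z, 0), k)               # requests that end up granted
    if not victim:
        return 1.0 if a == s else 0.0       # deterministic for D
    p_victim = s / n                        # victim granted with probability s/(m+1)
    if a == s - 1:
        return p_victim
    if a == s:
        return 1.0 - p_victim
    return 0.0

# Example: m = k = 10 attacker requests and z = 5 dummy requests.
print(pr_view(a=10, m=10, k=10, z=5), pr_view(a=10, m=10, k=10, z=5, victim=True))
```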
We want to highlight that considering request removal (negative noise) is another key difference from AKR.

Attacker's Strategy. From the attacker's point of view, it is important to set m (the number of compromised clients) to a value that maximizes the privacy leakage (i.e., maximizes the ratio in Equation 4). Recall that we assume k (the resource capacity) is known to the attacker, and each client can submit at most one request (see Section 2.1). Following the previous analysis of request addition and request removal, we derive the best attacker strategy below; we follow this strategy for the rest of the paper.

Theorem 4. The maximum privacy leakage happens when the attacker sends m = k requests.
Proof. We consider the cases of noise z < 0 and z ≥ 0, and prove that m = k causes the maximum privacy leakage in both cases.
First, consider the case when the noise is non-negative (z ≥ 0). The attacker's goal is to choose m to maximize the difference between the cases of D and D′. Note that the difference can only be observed when m + z ≥ k, because otherwise all requests will be granted resources. To ensure m + z ≥ k for all z ≥ 0, we need m ≥ k. Based on the previous analysis, when 0 ≤ m < k, there is no privacy at a = k − z − 1, because this outcome is possible under D′ (Equation 6) but impossible under D (Equation 5); thus it does not matter to the attacker what value to set m to in this case. For m ≥ k, the privacy protection is given by the ratio between Equation 5 and Equation 6; in order to maximize the leakage, we need to set m to its minimum within the range m ≥ k, that is, m = k.

Now consider the case when negative noise (z < 0) is added. By observing Equation 7 and Equation 8, we know that to trigger different outputs for cases D and D′ (i.e., a = m + z for case D, and a = m + z + 1 possible only for case D′), m + z needs to be less than k. The difference between D and D′ (the privacy protection) is then given by Equation 9. To have m + z < k (i.e., z < k − m) hold for all z < 0, we need m ≤ k. Now, in order to maximize Equation 9, m is to be set to k. □

NOISY MECHANISMS
In this section, we analyze different noisy mechanisms under DS2.
As the RA output is discrete, we choose discrete distributions for the mechanisms. Specifically, we consider the constant, uniform, one-sided geometric, and double geometric distributions, named CST, UNI, GEO, and DGEO for short. Though these mechanisms have been studied in standard DP [24, 25], we conduct a new theoretical analysis to derive tighter privacy bounds, which requires extensive proof work as shown in Appendix A. Table 2 gives a summary of the different mechanisms. In particular, 1) we prove the DP bounds for all mechanisms, though CST and UNI only satisfy DP when certain conditions are met (i.e., the noise sample space should be at least k); 2) our mechanisms outperform AKR in utility by a large margin.

Constant Noise (CST)
In this case, we consider request addition only, and the noise z always equals a constant number c. Observing Equations (5) and (6), the outputs under D and D′ only share the same support when the constant noise is sufficiently large. As a result, we have the following theorem:

Theorem 5. Assuming an allocator has k resources, the constant noise has to be at least k to satisfy DP.
Proof. Suppose the number of resources allocated to the attacker is a, and the attacker always sends out m = k requests. Then we have

Pr[a | D] = C(k, a) · C(c, k − a) / C(k + c, k), and Pr[a | D′] = C(k, a) · C(c + 1, k − a) / C(k + c + 1, k). (10)

Note that a = (k − c)⁺ happens in D when all dummy requests get resources and the remaining resources go to the attacker. Similarly, a = (k − c − 1)⁺ happens in D′ when the victim and all dummy requests get resources and the remaining resources go to the attacker. When c ≥ k, a ∈ {0, 1, ..., k} for both D and D′. Thus, given m = k and Equation 10, we can upper-bound the privacy leakage, where the ratio (c + k + 1)/(c + 1) is reached at a = k. □

This is a surprising result, as adding a fixed noise should not satisfy DP. In our case, adding a fixed noise still provides privacy because of the randomness of the allocation process. Still, we argue that it does not offer good utility: due to the constraint c ≥ k, the utility is never more than 0.5.

Uniform Mechanism (UNI)
In this case, the discrete noise (which can be negative or non-negative) is drawn uniformly from [z_ℓ, z_u]: z_ℓ and z_u define the shape of the distribution used in UNI, with z_ℓ defining the starting point. In Appendix A.1, we prove that the attacker's view satisfies DP when the noise sample space is large enough (at least k). Yet, our analysis shows UNI is not recommended when the utility requirement is more critical. This is because the utility degrades linearly with negative noise when the number of requests equals the number of resources. In a nutshell, suppose the total number of requests is n and n = k: removing requests causes fewer resources to be allocated with certainty, while adding requests has the same effect only with some probability. With this, our goal is to have z_ℓ ≥ 0 and z_u ≥ k to achieve the best privacy-utility tradeoff, and Appendix B studies how these parameters should be determined.
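A sampling sketch for UNI (ours; z_low and z_up mirror z_ℓ and z_u above):

```python
import numpy as np

rng = np.random.default_rng(0)

def uni_noise(z_low, z_up):
    """Draw integer noise uniformly from [z_low, z_up]; negative values remove
    requests, non-negative values add dummy requests."""
    return int(rng.integers(z_low, z_up + 1))  # upper bound made inclusive

print([uni_noise(0, 10) for _ in range(5)])
```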

One-sided Geometric Mechanism (GEO)
Intuitively, reducing the probability density of large noise reduces the number of dummy requests added, and thus improves utility. To this end, we adopt the geometric distribution within the range [z_ℓ, ∞), with the noise distribution Pr[Z = z] = α · (1 − α)^{z − z_ℓ} for z ≥ z_ℓ. Like UNI, z_ℓ models the starting point of the distribution. For α, a larger value makes the noise decay faster and leaves negligible probability for large values of z, thus improving utility. In terms of privacy, we can also prove that the attacker's view satisfies DP (see Appendix A.2). For the same reason as in Section 4.2, negative noise has a deterministic negative influence on utility. Therefore, though GEO tolerates negative noise (i.e., z_ℓ can be negative), we do not recommend setting z_ℓ < 0. The two parameters z_ℓ and α both influence ϵ and utility: for z_ℓ > 0, increasing z_ℓ reduces both ϵ and utility, and increasing α raises ϵ and utility. For z_ℓ < 0, utility and privacy vary case by case. Appendix B studies the parameter settings.
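A sampling sketch for GEO under the pmf above (ours; NumPy's geometric sampler starts at 1, so we shift it to start at z_low):

```python
import numpy as np

rng = np.random.default_rng(0)

def geo_noise(z_low, alpha):
    """One-sided geometric noise on [z_low, inf): Pr[Z = z] = alpha*(1-alpha)**(z - z_low)."""
    # rng.geometric(alpha) returns x in {1, 2, ...} with Pr[x] = alpha*(1-alpha)**(x-1)
    return int(rng.geometric(alpha)) - 1 + z_low

print([geo_noise(3, 0.7) for _ in range(8)])
```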

Double Geometric Mechanism (DGEO)
AKR adds biased Laplace noise to the number of requests (explained in Section 2.3). Likewise, we propose to draw the noise from a biased double geometric distribution, Pr[Z = z] ∝ e^{−|z − β| / λ} for z ∈ ℤ. We call λ = 1/ϵ the scale of the noise and β the bias of the noise. Adding double-geometric noise with scale 1/ϵ to the number of requests satisfies ϵ-DP [17, 40], and we prove it in Appendix A.3.
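A sampling sketch for the biased double geometric distribution (ours; it relies on the standard fact that the difference of two i.i.d. geometric samples is double-geometric around 0):

```python
import numpy as np

rng = np.random.default_rng(0)

def dgeo_noise(scale, bias):
    """Biased double geometric noise: Pr[Z = z] proportional to exp(-|z - bias| / scale)."""
    alpha = np.exp(-1.0 / scale)
    # difference of two i.i.d. geometric(1 - alpha) samples is symmetric with pmf ~ alpha**|d|
    g1 = rng.geometric(1.0 - alpha)
    g2 = rng.geometric(1.0 - alpha)
    return int(g1 - g2) + bias

print([dgeo_noise(scale=1.0, bias=2) for _ in range(8)])
```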
AKR chooses Laplace noise, which is similar to DGEO but in the continuous domain. AKR sets a positive bias β so that the probability of the noise being negative is bounded by δ, and the authors prove AKR follows (ϵ, δ)-DP. In order to have a small δ (i.e., a small probability of failing DP), β must be fairly large, which leads to unsatisfactory utility. For example, when δ equals a common value of 10⁻⁶, β has to be at least 15 (larger even than the number of real requests and resources) to achieve ϵ = 1 for k = 10.
Hence, accommodating negative noise without using a large bias is essential for high utility, and we show that it is possible. In a nutshell, negative noise may relax the pre-allocation ϵ, but it does not necessarily introduce a δ. Although negative noise introduces a discrepancy between the possible outcomes of D and D′ from the attacker's view, as well as in the range of a (the resources dispatched to the attacker), it does not violate DP when combined with non-negative noise, as proved in Theorem 8 of Appendix A.3 (i.e., the attacker's view satisfies DP). In Section 5.2, we provide an empirical analysis to show the impact of RA on utility and privacy from the attacker's view.

EVALUATION
In this section, we evaluate the privacy and utility of different mechanisms. Here we summarize the key results. 1) Our mechanisms outperform AKR by 11% to 65% in terms of utility (e.g., DGEO outperforms AKR by 53% given ϵ = 2). GEO has a clear advantage for smaller ϵ, while DGEO achieves better utility with larger ϵ.
2) Different parameters can achieve similar privacy protection but lead to very different levels of utility.

Evaluation Setup
Settings. To compare different mechanisms in the privacy-utility tradeoff, we choose to simulate RA using a real-world system setting. Similar to AKR, we take Alpenhorn [42], an MPM, as one of our target systems. In essence, a user in Alpenhorn starts a conversation with his/her friend at an agreed time or round. In a conversation round, the user initiates k channels to k friends, then sends and receives messages on each channel to hide the real communication pattern. Section 2.1 describes how its privacy guarantee can be violated. The evaluation by AKR models how Alpenhorn allocates channels for requests to defend against allocation-based side-channel attacks. Similar to AKR, we set the resource capacity k = 10 for most of the experiments, meaning that a user has a maximum of 10 channels that can be established with other clients. We also experiment with larger k (15, 20) to test the scalability of the proposed mechanisms and AKR. AKR sets an upper bound on the number of requests in each round and considers at most 10% of them to be honest requests. We remove the upper bound and set the number of victim requests to at most 1 to simulate the worst case for the victim, as explained in Section 2.1. Note that AKR uses a Poisson distribution to simulate the number of requests from all users, while our total requests in cases D and D′ are fixed to m and m + 1, respectively. Given that we assume at most one victim exists during allocation, we do not apply the Poisson distribution. Unless otherwise stated, we simulate 10 million independent rounds of allocation with the attacker sending m = k requests (the optimal attacker strategy, as proved in Theorem 4), and measure privacy and utility.
In Section 6.3, we justify the choice of simulation setup and discuss the limitations of simulation.

Metrics. We evaluate the performance of different mechanisms under three metrics: privacy, utility, and waiting overhead. Regarding privacy, we compute the empirical ϵ by Equation 4 with the simulation results; a larger value indicates more privacy leakage. The theoretical ϵ can be derived from Theorems 5 to 8, but its value is not always computable. For the study of parameters (Appendix B), we compute some theoretical ϵ values for comparison.
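The empirical ϵ can be estimated by a harness like the following sketch (our own illustration, not code from [14]; `noise_fn` stands for any of the mechanisms, and the allocator grants requests uniformly at random as in Section 2.1):

```python
import numpy as np

rng = np.random.default_rng(0)

def attacker_view(m, k, noise_fn, victim):
    """One simulated round: how many of the attacker's m requests are granted."""
    real = m + (1 if victim else 0)
    z = noise_fn()
    if z >= 0:
        population, granted = real + z, min(real + z, k)          # add dummies
    else:
        population, granted = real, min(max(real + z, 0), k)      # remove requests
    if granted == 0:
        return 0
    return int(rng.hypergeometric(m, population - m, granted))    # attacker's share

def empirical_epsilon(m, k, noise_fn, rounds=200_000):
    """Empirical epsilon in the spirit of Equation 4: max log-ratio over observed outcomes."""
    hist = {v: np.zeros(k + 1) for v in (False, True)}
    for victim in (False, True):
        for _ in range(rounds):
            hist[victim][attacker_view(m, k, noise_fn, victim)] += 1
    p, q = hist[False] / rounds, hist[True] / rounds
    seen = (p > 0) & (q > 0)          # outcomes unobserved in either case are skipped
    return float(np.max(np.abs(np.log(p[seen] / q[seen]))))

# Example: UNI noise on [0, 10] with m = k = 10 (illustrative parameters).
print(empirical_epsilon(m=10, k=10, noise_fn=lambda: int(rng.integers(0, 11))))
```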
As for utility, we mainly measure the empirical resource utilization U, i.e., the ratio of resources that are put into real use after allocation, from the simulation results. This differs from classic DP, which measures utility as the accuracy of the analysis results, i.e., how close the noisy output is to the ground truth. The same utility measure is chosen by AKR as well. U is given by U = (1/k) · Σ_g g · Pr[G = g], where g is the number of fulfilled real requests, k is the number of resources, and Pr[G = g] is the probability of g requests being fulfilled.
While resource utilization captures the overhead on the allocator, the overhead on the clients can be measured by their waiting time (or waiting overhead). We use the probability of the victim getting the resource in any round, as a higher probability should lead to a shorter waiting time for the resource. For example, in Alpenhorn (the original version that is not protected by DP) with k resources and m attacker requests, the probability that the victim gets a resource is Pr[V] = k/(m + 1), since the victim's request is one of m + 1 identical requests competing for k resources. Denoting by Pr[V_DP] the probability that the victim gets the resource after DP is applied, the ratio between Pr[V] and Pr[V_DP] represents the waiting overhead caused by the DP mechanisms.

Implementation. We implement our code in Python 3.7.10 with the NumPy 1.19.5 library. The implementation is open-sourced [14].
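Both metrics can be computed directly from simulation outputs, e.g. (a sketch with our own variable names; the non-DP baseline k/(m+1) assumes the allocator grants k of the m+1 identical requests uniformly at random):

```python
import numpy as np

def utilization(real_granted_counts, k):
    """U = E[G] / k, where G is the number of real (non-dummy) requests fulfilled per round."""
    return float(np.mean(real_granted_counts)) / k

def waiting_overhead(k, m, p_victim_dp):
    """Ratio between the victim's success probability without DP (k / (m + 1))
    and the measured probability p_victim_dp under a DP mechanism."""
    return (k / (m + 1)) / p_victim_dp

print(utilization([9, 10, 8, 10], k=10))                # 0.925
print(waiting_overhead(k=10, m=10, p_victim_dp=0.48))   # roughly 1.9
```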

Evaluation Results
We compare the performance of different mechanisms, i.e., CST, UNI, GEO, DGEO, and AKR with simulation.
First, we enumerate different ϵ values for each mechanism and compute the best utility value, which is derived by searching the space of possible mechanism parameters. Figure 2 illustrates the quantitative results of the tradeoff between privacy and utility. Note that, for AKR, since it is (ϵ, δ)-DP, we set δ = 10⁻⁶, which is commonly chosen by other DP works (Angel et al. [3] even choose a larger value, δ = 10⁻⁴). In general, we find that all of our proposed mechanisms have better utility than AKR for every ϵ when the parameters are fine-tuned. Specifically, GEO has better utility given lower ϵ (i.e., under 2), while DGEO yields better utility given more relaxed ϵ (i.e., over 2). AKR reaches a utility of 0.58 with (2, 10⁻⁶)-DP, while GEO and DGEO achieve a utility of 0.89 with 2-DP, increasing the utility by 0.31 (53%). Overall, the margin of DGEO over AKR ranges from 0.05 to 0.39, GEO outperforms AKR by 0.08 to 0.36, and UNI outperforms AKR by at most 0.15. This result is surprising, as (ϵ, δ)-DP usually yields better utility than ϵ-DP. We believe this is due to the fact that our mechanisms can accommodate negative noise, while AKR has to use a large bias to satisfy DP.
Since CST cannot achieve a utility value of more than 0.5, in the following experiments we focus on the other mechanisms. In the previous experiment, we varied the mechanism parameters to fit ϵ, but in a real-world deployment the parameters are determined ahead of time. Here we evaluate the impact of the parameters related to bias. For DGEO and AKR, the bias is represented by β; for UNI and GEO, the starting point z_ℓ models the bias. We configure the bias to a small value of 10. With the utility targeting 0.5, GEO and DGEO are able to bound privacy with ϵ of 0.80 and 0.77, respectively, while UNI and AKR result in much higher ϵ of 1.28 and 1.5. Hence, with a small bias, our mechanisms can protect allocation with better privacy while achieving the same utility as AKR.
So far, the experiments quantitatively measure how the mechanisms perform. Like Angel et al. [3], we also visualize the privacy protection under a fixed set of parameters. Specifically, we measure the difference in allocation results (D and D′) based on the number of resources allocated to the attacker. Figure 3 shows the visualization for GEO when the bias is configured to 10. The lines of D and D′ stay close, suggesting the privacy leakage of GEO is small.
Regarding the waiting overhead Pr[V]/Pr[V_DP], we find that UNI, GEO, DGEO, and AKR reach 1.45, 1.92, 1.91, and 1.88, respectively, when configuring the bias to 10, suggesting our mechanisms have similar or lower waiting overhead than AKR. Still, we acknowledge that such overhead is significant, and we discuss this issue in Section 6.3.

Impact of Parameters. To assess the impact of mechanism parameters, we compute the privacy and utility values theoretically, as explained in Section 5.1. Here we summarize the guidelines for setting parameters and leave the details to Appendix B.
For UNI, one should avoid a large z_u, as the privacy benefit diminishes while the utility drops noticeably. Regarding z_ℓ, we find that negative values do not offer good privacy and that a small z_ℓ is necessary to maintain good privacy. For GEO, a negative starting point z_ℓ should be avoided, as it does no good to utility or privacy. We suggest that a small positive starting point z_ℓ with a moderately high α is optimal for GEO. For example, z_ℓ = 3 with α = 0.7 achieves reasonable privacy (ϵ = 1.24) and a good utility of 0.75. Our evaluation in Appendix B also indicates that a small positive bias β with a scale λ around 1 is optimal for DGEO. For larger resource capacities k, GEO and DGEO still perform well.

Why Model the Attacker's View. In Section 4.4, we argue that modeling the attacker's view is better than modeling the whole view, as adopted by AKR. Here we justify this claim under the same simulation. Figure 4 shows an example with the zero-mean (β = 0) double-geometric distribution under simulation. Given the two different cases D and D′, Figure 4a depicts the difference of the output before allocation and Figure 4b shows the output after allocation from the attacker's view, assuming DGEO with scale λ = 1 is applied to allocate k = 10 resources and D contains m = 10 requests. Our study shows that the existence of the victim can drastically affect the portion of resources an attacker can get after allocation. Table 3 uses four different zero-mean double geometric distributions to further explain why RA itself should be part of the privacy modeling. First, when the original ϵ decreases, more noise is expected, which leads to an increase in privacy protection both before and after RA. However, given a relatively high scale (i.e., a small ϵ), the privacy protection after RA can be 6 times worse than that before RA. Such extra information leakage is an indicator that the privacy budget is affected by RA.
We also take a step further and measure the privacy amplification gained by modeling the attacker's view. We adjust AKR by replacing its Laplace noise with double geometric noise, which we denote as AKR-DGEO, and compare it with DGEO. As their noise mechanisms are now the same, we can quantify the privacy-utility tradeoff without and with privacy amplification. Our empirical analysis indicates that, for a utility of approximately 0.42, privacy amplification reduces the privacy parameter ϵ from 1.00 to 0.59. Likewise, when the utility is near 0.60, ϵ drops from 2.00 to 1.43 after amplification.

DISCUSSION

6.1 Privacy Consumption over Multiple Rounds
Like Angel et al. [3], our analysis focuses on a single round. Privacy normally degrades rapidly over multiple rounds. For instance, naively applying the sequential composition property of DP over multiple rounds deteriorates the privacy guarantee (i.e., ϵ) linearly. Inspired by previous work, we identify three ways to curb privacy consumption: (1) using advanced composition [48] to reduce the total ϵ, (2) reusing noise for repeated requests [21, 59], and (3) bounding the number of requests. Though relaxations could be made for the attacker's background knowledge [16], our approach does not limit the attacker's background knowledge but rather their view, and therefore we believe composition works in our case. Next, we discuss how the three methods can be applied in more detail.

Using Advanced Composition. The traditional composition theorem in DP may result in a union bound over noise, which is suboptimal. Avoiding the union bound for multiple queries has been an important open problem in differential privacy [58]. The well-known advanced composition theorem [18] relaxes pure DP to approximate (ϵ, δ)-DP with δ > 0 to yield better composition results. In cases where the attacker interacts with the allocator over multiple rounds, we argue that the leakage can be modeled by the adaptive composition theorem of [18] applied over the rounds.
To this end, Mironov [48] proposed new bounding techniques for advanced composition under Rényi DP (RDP). In our case, we can convert ϵ-DP to RDP [48], compose in the RDP domain with Theorem 1, and convert back to (ϵ, δ)-DP with Theorem 2. Popular DP libraries like Opacus support RDP-based advanced composition [53]. Alternatively, we can use Equations 4 and 5 in [52] to derive the (ϵ, δ)-DP bound directly and employ numerical methods [26] to obtain more accurate results.
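For instance, the classic advanced composition bound of [18] can be evaluated with a few lines (a sketch; the RDP route above typically gives tighter numbers):

```python
import math

def advanced_composition(eps, rounds, delta_prime):
    """Total epsilon after `rounds` adaptive uses of an eps-DP mechanism,
    valid as (eps_total, delta_prime)-DP; capped by basic composition."""
    advanced = (math.sqrt(2 * rounds * math.log(1 / delta_prime)) * eps
                + rounds * eps * (math.exp(eps) - 1))
    return min(rounds * eps, advanced)      # never worse than basic composition

# e.g., 10 rounds of a 0.5-DP allocation with delta' = 1e-6
print(advanced_composition(eps=0.5, rounds=10, delta_prime=1e-6))
```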
Yet, it remains an open question to directly prove an RDP guarantee for our mechanisms (to avoid the conversions mentioned above and compose more tightly). One possible route is to follow the proof of the discrete Gaussian mechanism [11], and we leave it as future work.

Reusing Noise. When new incoming requests are from the same set of clients as the previous round, the server can avoid consuming extra privacy budget by reusing the noise generated for the previous round [21, 59]. In this way, the attacker gains no more information than in the previous round while the server consumes no extra budget. Specifically, the output of the algorithm remains the same if we fix the randomness used in a certain round. Thus, the server can use a persistent secret key for a pseudorandom function (PRF) over the same set of clients, so that in each round the server reproduces the same randomness for the same set of clients.

Bounding the Number of Requests. Drawing from [19], we can simplify the privacy analysis by capping client requests over a period (e.g., a maximum of 2 calls daily for MPM clients), eliminating the need to consider every RA round for each client.

Other Settings
Though our study primarily examines clients submitting binary requests for a single resource under worst-case privacy, it can be extended to (1) the non-binary setting in which clients can submit requests for more than one resource, (2) the multi-resource setting in which there are multiple kinds of resources and clients can request arbitrary resources, and (3) average-case privacy.

Non-binary Requests that Can be Fulfilled Partially. This setting can be transformed into the binary case by casting each non-binary request as multiple binary requests. The global sensitivity then becomes the maximum number of requests per client.

Non-binary Requests that Cannot be Fulfilled Partially. The problem becomes an optimization problem aiming for maximum utilization of resources [36]. In general, the allocator picks the requests that maximize its target function. The allocator can add noise to the number of requests, which we expect to yield worse utility compared to our primary setting, because when requests for large amounts of resources are added or removed by the allocator, a great number of resources are wasted.

Multiple-resource Allocation. A multiple-resource allocator deals with multiple types of resources simultaneously. In this setting, the privacy protection of the allocator is subject to sequential composition, so the overall privacy depends on the summation of all privacy losses. The intuition is that the privacy leakage of each allocation can be seen as auxiliary information and combined with the leakage from allocations of other types of resources.

Multiple Honest Requests. Multiple honest requests occur when the attacker is not strong enough to control all clients other than the victim. Assume the requests are binary in this setting and the attacker does not know the resource distribution among the honest requests. In this case, the honest requests (other than the one from the victim) are equivalent to the dummy requests in our primary setting, because their distribution remains unknown to the attacker. Therefore, we can add less noise in this setting to achieve the same privacy guarantee. We have validated this assumption by experimenting with DGEO (the results are omitted due to the page limit).

Limitations
Empirical Study on Privacy. The privacy analysis in our evaluation is empirical (i.e., the ϵ values are calculated empirically from our simulation results). We choose simulation for two main reasons. First, we aim to compare the privacy-utility tradeoff of different mechanisms at different privacy parameters (e.g., Figure 2), and the computational overhead would be very high if the experiments were executed on large-scale real-world systems. Second, for the MPM system we evaluate, there is no published dataset of its communication data, so we have to simulate the allocations. In fact, Angel et al. took a similar approach to evaluate privacy empirically [3], and the scale of our simulation is comparable or larger (from 5 million to 100 million rounds). Simulation has been leveraged to evaluate other privacy-preserving systems for the same reason, such as differentially oblivious databases [54]. We also acknowledge the limitation of our simulation, which does not fully approximate real-world, large-scale systems.

Efficiency. Adding dummies results in higher waiting overhead because clients now need to go through more rounds to obtain the desired resources. However, once the resources are allocated, no additional delay should be observed.
The spatial overhead of serving the dummy clients could be prominent, especially for systems that operate on very limited resources. The same limitation exists in AKR, and the overhead is often unavoidable for systems leveraging DP. On the other hand, our approach provides better resource utilization than AKR, e.g., 98% under DGEO versus 59% under AKR when ϵ = 2.3. Higher resource utilization also leads to smaller waiting overhead. For example, for an approach with 40% utilization, the chance for a user to get a resource allocated within 5 dialing rounds in Alpenhorn is about 99%; our proposed mechanisms all surpass 40% utilization, as shown in Table 2.

Attacks against DPRA. Potential side-channel attacks against DP algorithms, such as timing attacks [32], may compromise our DPRA, but they would require adaptation to the RA setting.

Real-world Examples and Utility Analysis
Here we first give a few examples of how the noise under z ≥ 0 and z < 0 can be instantiated in real-world systems. We follow the basic setting described in Section 2.1 (i.e., all resources are identical and each request asks for one piece of resource).
• In the cloud setting, users request VMs, and whether they are served depends on the available resources like CPU and memory. When z > 0, the allocator creates dummy VMs that potentially occupy resources. When z < 0, not all the requested resources are allocated to the VMs (even though resources are available).
• Inside a computer, requests for cache resources (e.g., cache ways) are automatically generated during a memory access, which can lead to cache side-channel attacks [66]. z > 0 assigns cache ways to dummy programs, and z < 0 skips the caching of some memory content. Either option reduces the accuracy of the attack, which relies on cache contention between attacker and victim.
• In MPM, the requests are from a user's friends who intend to start a conversation in a round. Noise z > 0 adds fake friends, and z < 0 means rejecting some requests.
For more complex allocators, we can extend the DP mechanisms following Section 6.2. For example, the buddy system manages memory in power-of-two increments [37], and we can support it by treating the memory requests as non-binary. When concurrent requests are served by multiple resource pools (e.g., hypervisor resource pools [62]), multiple-resource allocation can be applied.
Regarding the results of the privacy-utility tradeoff (e.g., summarized in Table 2), we argue they are practical in real-world settings. For example, a study of Google Cloud shows that resource utilization is 40%-60% and that the resource waste due to early task termination is 4.53%-14.22% [23]. In this light, the utility after DGEO and GEO should be acceptable (e.g., 0.82 for GEO at ϵ = 1.7).

RELATED WORK
Joint DP. We focus on the partial view of the attacker. The Joint DP definition proposed by Kearns et al. [36] formalizes this intuition, primarily to compute equilibria in games with incomplete information [36, 55, 56]. Note that Joint DP is just a definition, and classic DP primitives like the Laplace mechanism are still used. We are the first to formally investigate the design space and adapt various DP mechanisms to RA.

Private Matching and Allocation. Our problem can be seen as a variation of the private allocation/matching problem, in which users have (non-binary) valuations for products (potentially over multiple rounds), and the goal is to maximize welfare while protecting each user's private value for each good. Existing works [10, 15, 29, 34, 35, 50] have applied DP algorithms (e.g., the Laplace mechanism) that are asymptotically interesting. Our modeling of RA is different, and we explore different noisy mechanisms.

Biased Noise. AKR employs biased noise to satisfy DP, while DGEO uses it to improve the privacy-utility tradeoff. Biased noise has been examined before. Mazloom and Gordon [47] introduced a modified two-sided geometric distribution to generate noise that enables differentially private access patterns with high efficiency. DJoin [51] cuts Laplace noise at zero to provide distributed queries with DP. Shrinkwrap [7] offers a truncated Laplace mechanism for differentially private data federation, where dummies are introduced to pad intermediate results. He et al. [28] propose a model for private record linkage, allowing the disclosure of the true matching records while keeping the protocol executions indistinguishable when non-matching records are replaced.

DP Against Side-channel Leakage. The leakage from RA can be considered an allocation-based side channel [3]. A more common type of side channel is consumption-based, which arises when system resources (e.g., network bandwidth and cache) are consumed. A number of works have applied DP to protect systems against the latter type of leakage. The protected resources/services include procfs system statistics [64], streaming traffic [67], Trusted Execution Environments (TEE) [65], health data (e.g., ECG data) [57], task schedules [13], and packet schedulers [8].
Another related line of work is differential obliviousness [12], which was proposed to address a fundamental limitation of ORAM (Oblivious RAM). Though ORAM can protect a program's secrets by hiding its memory access pattern, it incurs a very high performance overhead. By relaxing full obliviousness to differential obliviousness, one can obtain meaningful privacy with little overhead [12, 38, 63]. While this paper also hides a victim's secret (i.e., its existence at a certain time), it considers an orthogonal adversary model where the attacker observes part of the true results without any mechanism hiding the victim-related information.

CONCLUSION
In this paper, we studied the problem of privacy protection under resource allocation and systematically modeled it through the lens of differential privacy. Specifically, we identified the key issues of a prior system, AKR, and proposed to consider negative noise and mechanisms other than the standard Laplace noise. We designed four different mechanisms, CST, UNI, GEO, and DGEO, and proved they all satisfy ϵ-DP. In both theoretical and empirical analysis, we found our mechanisms outperform AKR in utility by 11% to 65% given a privacy budget ϵ. Among the proposed mechanisms, we recommend GEO, which has a good privacy-utility tradeoff and performs especially well when ϵ is small (e.g., less than 2). Ultimately, we hope this work attracts more attention to the privacy issues of resource allocation and encourages new privacy-preserving solutions.

Figure 1: An example of RA. An allocator has six resources, and the attacker sends six requests in total. The privacy of the victim is violated when the attacker observes that one of her requests is not fulfilled.

Figure 3: Allocation results of GEO with α = 0.90 and the bias (z_ℓ) set to 10. The x-axis is the number of fulfilled requests of the attacker, and the y-axis is the frequency of each output over 100 million rounds. We increase the simulation rounds from 10 million to 100 million in order to yield precise results.

Figure 4: Distribution of outputs over 5 million runs. Before RA, we draw noise from a double geometric distribution with scale λ = 1 to allocate k = 10 resources. After RA, the distribution changes, and the privacy leakage increases (the empirical ϵ rises to 2.07).

Figure 5: (a) ϵ of UNI given different z_ℓ and z_u; (b) utility of UNI given different z_ℓ and z_u.

Figure 6: Impact of α and z_ℓ on GEO.

B IMPACT OF PARAMETERS

Starting Point z_ℓ and End Point z_u of UNI. In Figure 5, we display privacy and utility across various z_u values (10, 15, 20) and z_ℓ values (along the x-axis). Notably, z_u = 15 largely mirrors z_u = 20 in terms of ϵ, even though z_u = 20 is expected to offer superior privacy. Regarding utility, z_u = 10 consistently ranks highest for different z_ℓ, followed by z_u = 15 and z_u = 20. Regarding z_ℓ, increasing its value enhances privacy (resulting in a lower ϵ), with utility peaking when z_ℓ ranges between [−5, 0]. However, we observe two outliers related to z_ℓ in Figure 5a. First, a peak is observed when z_ℓ = −10, because all requests in D can be removed deterministically, while the probability of the same situation for D′, where the victim exists, is only 1/(m + 1). Second, when z_ℓ = z_u = 10, ϵ drops to 1.75, because this special case implies that the attacker gets no resources in the victim's absence.

Geometric Parameter α and Starting Point z_ℓ of GEO. Figure 6 depicts how α and z_ℓ affect GEO. For z_ℓ = −50 and z_ℓ = −10, both ϵ and utility approach 0 due to the high likelihood of request removal. At z_ℓ = 0, utility is high but ϵ consistently exceeds 2. For z_ℓ = 10, 20, ϵ is below 1.5, with utility rising as α increases. The influence of α on ϵ is minimal, except at z_ℓ = 10, where ϵ increases sharply after α = 0.5. Utility consistently grows with α across all settings.

Geometric Scale λ and Bias β of DGEO. In DGEO, the scale parameter λ determines the noise's decay rate. A smaller λ results in noise more closely concentrated around the bias β, while a larger λ introduces more noise to the allocation, impacting post-allocation privacy. We evaluate the influence of these parameters on privacy and utility, presenting the findings in Figure 7. Introducing a bias β improves privacy, especially when λ < 1. For larger λ, the distribution resembles a discrete uniform, keeping ϵ stable (around 2 for β ≥ 0). The bias β has limited impact on utility unless λ = 0.

Resource Capacity k. We set k to 10 for the prior experiments, like Angel et al. [3]. Here we test our mechanisms and AKR on k = 15, 20. Figure 8 shows the privacy-utility tradeoff. For AKR, besides the default δ = 10⁻⁶, we also evaluate δ = 10⁻¹², bringing its privacy closer to ϵ-DP. Figure 8 illustrates that δ significantly impacts AKR's utility, with average gaps of 0.2 for k = 15 and 0.1 for k = 20. GEO and DGEO still perform well for these new k values and better than AKR.

Figure 7: Impact of λ and β on DGEO: (a) ϵ and (b) utility of DGEO given different scale λ and bias β.

Figure 8: Privacy protection and utility under k = 15, 20. The ranges of the x-axis differ because not all utility values can be derived under every ϵ.


Table 1: Notations frequently used in this paper, including z_ℓ, z_u, α, λ, and β, the parameters of the noisy mechanisms.
The Geometric mechanism satisfies ϵ-DP.
View^A_M(·) models the allocation outcomes in the attacker's view. Note that View^A_M differs from M in Equation 1 in that View^A_M represents only the output in the attacker's view (i.e., a ≤ |D|). We now describe the detailed analysis of Pr[a | D, m + z] under two cases: z ≥ 0 and z < 0. We enumerate all possible situations under RA and derive the exact probability expressions for Pr[a | D, m + z] and Pr[a | D′, m + 1 + z] (i.e., Equation 5 and Equation 6). View^A_M is a partial view of the final allocation outcome. Pr[Z = z] denotes the probability that the noise Z equals z, where Z is a random variable and z is within some range [z_ℓ, z_u], and Pr[a | D, m + z] is the probability that the attacker observes a of her requests fulfilled given m + z noisy requests. With Pr[a | D, m + z], we are able to model RA privacy more precisely than AKR and capture the randomness introduced by RA; the probability is zero outside of the feasible range below. a has to satisfy a ≤ min(m, k) because what the attacker observes cannot exceed the total number of resources k or the number of her requests m. Similarly, a ≥ (k − z)⁺ (we use x⁺ to denote max(0, x)) because there are only z other requests, so the attacker must get at least (k − z)⁺ resources. We only model the case when the number of requests m + z ≥ k, because when m + z < k, all requests are fulfilled (no privacy leakage); in that case, Pr[a | D, m + z] = 1 for a = m and Pr[a | D, m + z] = 0 otherwise. The denominator of Equation 5 is C(m + z, k) because we have a total of m + z requests and we allocate k resources to them (equivalent to choosing k from m + z requests to allocate resources).

Table 2: A summary of the different mechanisms and their utility under some representative ϵ values, with k = 10 and δ = 10⁻⁶. The ranges of ϵ for CST and UNI are limited. CST's utility never exceeds 0.5 because at least k dummy requests are required to make it differentially private. The utility of GEO does not increase when ϵ is between 1.8 and 2.3; we speculate this is because the parameters leading to the optimal utility have not been discovered through simulation.

Table 3: Comparison of different settings of DGEO with k = 10. We use 5 different ϵ values (first row). Row 2 shows that the empirical ϵ before RA is close to the original ϵ, which indicates our simulation has only small errors. Row 3 is the empirical ϵ after RA, which deviates from the original ϵ. The last row shows that our theoretical bound of ϵ given in Theorem 8 is close to the empirical value.