Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness

Group fairness is achieved by equalising prediction distributions between protected sub-populations; individual fairness requires treating similar individuals alike. These two objectives, however, are incompatible when a scoring model is calibrated through discontinuous probability functions, where individuals may be randomly assigned an outcome determined by a fixed probability. This procedure may provide two similar individuals from the same protected group with starkly different classification odds, a clear violation of individual fairness. Assigning unique odds to each protected sub-population may also prevent members of one sub-population from ever receiving the same chance of a positive outcome as another, which we argue is a distinct type of unfairness that we call individual odds. We reconcile all of this by constructing continuous probability functions between group thresholds that are constrained by their Lipschitz constant. Our solution preserves the model's predictive power, individual fairness and robustness while ensuring group fairness.


INTRODUCTION
Predictive models that output a score or probability for a multi-dimensional input, i.e., scoring functions, are a common tool in automated decision-making [12, 13]. Binary classification is a popular realisation of this paradigm, where a threshold is placed on a score to produce a decision; among others, it can be found in school examinations, where individual answers are condensed into a grade that translates to a pass/fail mark [37], or banking, where the history of personal finances is compressed into a credit score that captures one's likelihood of defaulting on a loan [27].

Fig. 1. Two-threshold fixed randomisation [22] applied to probabilities (y-axis) output by a loan repayment classifier built upon credit scores (x-axis). It satisfies equalised odds for the binary protected attribute race (black and white) by using it to assign approval probabilities, but results in discontinuities that violate individual fairness and create a gap between group-specific individual odds.

Many such applications, especially in high stakes domains like healthcare, finance and the judiciary, are coming under increased scrutiny given their potential harm to society: predictive models deployed in these contexts are expected to be accurate, robust, fair and explainable. These four desiderata, however, are often at odds. Improving utility, i.e., predictive power, of a model may entail increasing its complexity at the expense of interpretability and robustness, e.g., due to overfitting [4, 46]. Similarly, equalising errors between protected groups to ensure fairness may require sacrificing utility and impairing other notions of fairness [14, 41].
In this paper we focus on the latter scenario, where (protected) sub-populations are treated differently, thus unfairly, due to persistent historical biases [10], training data under-representation [11] and greedy optimisation of an objective function. Correcting for these biases is often challenging as it requires detailed knowledge of the data domain and the input space. One popular solution to this problem, which we study here, is threshold optimisation under fairness constraints when dealing with multiple protected groups. This method relies on calculating unique decision functions based on scores for each protected group in order to satisfy a given fairness constraint, e.g., demographic parity [39].
We re-examine this approach, as finding a set of thresholds for a given score function that satisfy multiple fairness constraints, such as equalised odds [3], is often impossible if only using a collection of single thresholds. Instead, a decision function that is optimal with respect to a definition of group fairness selected by the model owner is derived directly from the scoring function using a pair of thresholds for each group. Outputs that fall between the thresholds are allocated a random decision based on a fixed probability parameter, a procedure called fixed randomisation, which, while effective, exhibits a number of shortcomings demonstrated by Figure 1 and discussed later in Section 3. Using a fixed randomisation parameter is suboptimal for both the entities that create the model (owners) and those whose case is being decided by the model (users) because: (1) if the scoring function is accurate, the decision function cannot leverage this in the intervals between the thresholds; (2) even if the scoring function is individually fair, the step-based decision function is not, e.g., users whose scores are just under a threshold are treated very differently to those who are barely above it, despite their scores being similar (commonly referred to as the threshold effect [32]); and (3) users from one protected group may be unable to access the odds of positive classification offered to another group (equalised individual odds unfairness).
Consider the fixed randomisation solution shown in Figure 1, which satisfies equalised odds for the two values of the protected attribute race in a loan allocation setting. For example, a white user with a credit score of 49.5 is assigned the same odds (50%) of receiving a loan as a white user whose credit score is 25, despite the latter being 6.6 times more likely to default than the former. This stands in stark contrast to a white user with a credit score of 24.5 (just below the threshold) who has no chance of getting a loan despite being only 1.03 times more likely to default than the aforementioned white user with a credit score of 25. This case study illustrates that an increase in credit score, and therefore an increase in the likelihood of repaying a loan, is not reflected in the final decision for all scores except at the thresholds. In addition, while some white users have a 50% chance of receiving a loan and some black users have a 97.2% chance, these success odds are never offered to the other group; therefore, a white user will never be given a 97.2% chance of receiving a loan and vice versa. This disparity motivates a new notion of fairness, called equalised individual odds, which we outline in Definition 3.2 in Section 3.
We address these shortcomings by deriving a set of closed-form, continuous, monotonic functions (shown later in Figure 4) parameterised only by the thresholds and a probability parameter, making them easy to compute (Section 4.2).
We show that these functions are constrained via a maximum derivative, preventing a change in score leading to a large shift in classification odds and thus maintaining individual fairness and softening the threshold effect (Section 4.3).
Our approach enables the model owners to prioritise users with higher scores, better honouring the underlying score distribution as well as improving the transparency of the process. These properties incentivise users to increase their score, as such an action improves their odds of a positive outcome; see Figure 3 for a direct comparison to Figure 1. We analyse our method in two case studies: through the lens of credit scoring for loan allocation in Section 5.1, and risk of recidivism in Section 5.2. For the credit scoring case study we seek equalised odds across the four values of the race attribute, i.e., non-Hispanic white (white), black, Hispanic and Asian, found in the 2003 TransUnion TransRisks Scores (CreditRisk) data set [38], whereas for the recidivism case study we enforce equalised odds across a combination of two races, Caucasian and African American, and two sexes, male and female, found in the 2016 ProPublica Recidivism Risk Score (COMPAS) data set [38]. In both cases, we show that individual fairness is improved while group fairness and accuracy are preserved. In summary, our contribution is threefold: (1) we demonstrate that fixed randomisation for group fairness violates individual fairness; (2) we derive a set of closed-form, continuous and monotonic probability functions; and (3) we show that these continuous curves preserve group fairness and improve performance while adhering to the constraint imposed by individual fairness.

Notation
We assume that the scalar scoring function s : X ↦→ R takes individual instances and outputs a score in R ⊆ ℝ; h : R ↦→ Y, where Y ≡ {0, 1}, is an arbitrary, possibly stochastic, binary decision function on R that maps a score r to a predicted class ŷ according to a predetermined probability distribution P{Ŷ = 1 | R = r}. Lower case letters denote an individual instance from a sample, e.g., x is an instance in X. Functions denoted by Greek letters, such as φ : R ↦→ I where I ≡ [0, 1], parameterise this probability based on scores, e.g., according to the Bernoulli distribution h(r) ∼ B(1, φ(r)).
Effectively, h(r) = 1 with probability φ(r) for R = r. Alternatively, for deterministic behaviour h can be defined by a single threshold t ∈ R, where a score r ≥ t yields h(r) = 1 and r < t yields h(r) = 0; this behaviour can be captured by the indicator function 𝟙(r ≥ t). One common realisation of this thresholding function is a binary probabilistic classifier, where R ≡ I and t = 0.5. We therefore define the final decision function f_h : X ↦→ Y as the composition f_h = h ∘ s, where the subscript on f indicates the composition of the function h with the scoring function s. Additionally, capital letters refer to samples from the corresponding spaces: X is a sample from the input space; s(X) = R are the corresponding scores calculated by s for the sample X; f(X) = Ŷ are the classes predicted for all instances in the sample X; and Y captures their ground truth labels. We denote the protected attribute as A, and consider the joint distribution (R, A, Y). We make no assumptions on the type or shape of the input space, nor on the construction of s (the behaviour of which is discussed in Section 3).
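The notation above can be made concrete with a minimal sketch; the function names (`h_deterministic`, `h_stochastic`, `f_h`) are our own, and the identity curve stands in for φ:

```python
import random

def h_deterministic(r, t=0.5):
    """Deterministic decision: the indicator function 1(r >= t)."""
    return 1 if r >= t else 0

def h_stochastic(r, phi):
    """Stochastic decision: a Bernoulli draw with success probability phi(r)."""
    return 1 if random.random() < phi(r) else 0

# A trivial probability curve on R = I = [0, 1]: the identity.
phi = lambda r: r

def f_h(x, s, h=h_deterministic):
    """The composed decision function f_h = h . s for a black-box scorer s."""
    return h(s(x))

assert h_deterministic(0.7) == 1 and h_deterministic(0.3) == 0
assert h_stochastic(1.0, phi) == 1 and h_stochastic(0.0, phi) == 0
```

The stochastic variant collapses to the deterministic one when φ is itself a step function with values in {0, 1}.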

Distance and Similarity Measures
Defining "similar individuals" can be challenging and is deeply rooted in the landscape and shape of the input space, the complexity of the problem, and the density and distribution of the training data within the space. Distances on metric spaces, regardless of their definition, must follow a set of axioms (outlined in Appendix A). This problem is also not strictly mathematical and depends highly on the context. Additionally, discrete or categorical data can be difficult to quantify and compare; for example, in a feature space of size m, how different is an unmarried individual from a married person, all other things being equal? One could argue that the importance of this feature depends on the size of m: a large value of m can dilute the importance of each individual feature. If we are trying to predict whether an individual has any children, however, this feature is of high importance regardless of the size of m. To best capture such dependencies, we can employ similarity graphs or bespoke distance metrics chosen based on the problem definition and the data set at hand.
Using tailor-made definitions of similarity, nonetheless, poses two issues: (1) it makes it difficult to compare results between experiments; and (2) the results are subject to the quality of the metric and its suitability for the problem at hand. We operate under the assumption that model inputs are inaccessible (simulating scenarios where data and model parameters are protected and/or private), thus we are only given scores, values of the protected attribute and the label (ground truth). For our work we therefore rely on generic distance metrics such as Euclidean, Hamming and Gower's distances. Note that we assume that changing the protected attribute a for an individual is too large of a change to label the two instances as similar, since this alteration entails using a different set of thresholds and probabilities in the final decision function. Examples of classical distance functions are presented in Appendix A.
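As an illustration of such a generic metric, here is a minimal Gower-style distance for mixed features; the helper name and the feature encoding are our own, not the paper's:

```python
def gower_distance(x, y, numeric_ranges):
    """Gower-style distance for mixed features: numeric features contribute a
    range-normalised absolute difference, categorical features a 0/1 mismatch;
    the overall distance is the per-feature mean."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x, y)):
        if i in numeric_ranges:           # numeric feature with a known range
            lo, hi = numeric_ranges[i]
            total += abs(xi - yi) / (hi - lo)
        else:                             # categorical feature
            total += 0.0 if xi == yi else 1.0
    return total / len(x)

# Two individuals described by (age, income, marital status).
a = (30, 40_000, "married")
b = (40, 40_000, "single")
d = gower_distance(a, b, numeric_ranges={0: (18, 98), 1: (0, 200_000)})
# age contributes 10/80, income 0, marital status 1; the mean is 0.375.
assert abs(d - 0.375) < 1e-12
```

The result lies in [0, 1], which makes it convenient as a d_X candidate when features are heterogeneous.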

Related Work
Group and individual fairness are two commonly considered categories [7]. Group fairness focuses on the statistical difference in outcomes between sub-populations determined by the values of a protected attribute A [5]. The type of statistical outcome that a model owner may want to focus on is domain-specific, but measures closer to 0 are more desirable as this indicates no statistical difference between two groups. For a simple case of a binary protected feature A = {a, a′}, where a ∩ a′ = ∅, we can further differentiate two types of group fairness [9]:

Outcome Predictions are equalised in a set way across groups, e.g., demographic parity [1]:
P{Ŷ = 1 | A = a} = P{Ŷ = 1 | A = a′} .
An example of demographic parity may be in school admissions [44], where the distribution of admitted students should represent the distribution of the applicants for each value of A (i.e., if applicants are 50% male and 50% female, admissions should reflect this pattern).

Manuscript submitted to ACM
Error Distribution (In)correct classifications should be equalised in a predetermined way, e.g., using the false negative rate:
P{Ŷ = 0 | Y = 1, A = a} = P{Ŷ = 0 | Y = 1, A = a′} .
An example of equalising the false negative rate may be in the medical field, where false negatives could have dire consequences for a patient. Erring on the side of caution equally for all groups is therefore preferable, up to a certain cost [31].
There are many ways in which group fairness can be operationalised, with different tasks and domains requiring a specific constraint or a mixture thereof. In this paper, we mainly consider one of the strongest fairness constraints called equalised odds [33], which is outlined in Definition 2.1.

Definition 2.1 (Equalised Odds). A decision function f : X ↦→ Y satisfies equalised odds with respect to a protected attribute A if false positives and true positives are independent of the protected attribute:
P{Ŷ = 1 | Y = y, A = a} = P{Ŷ = 1 | Y = y, A = a′}   ∀ y ∈ {0, 1} .

A large portion of fairness research in machine learning therefore focuses on equalising outcomes and errors between users who belong to different protected groups, such as race or sex [28]. There are three distinct areas where fairness can be injected into a data modelling pipeline: pre-processing transforms the underlying training data such that signals and cross-correlations causing bias and discrimination are weakened [8]; in-processing incorporates fairness constraints directly into the optimisation objective [40]; and post-processing alters the output of a decision-making process to mitigate bias of the underlying (fixed) model [25].
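Equalised odds can be checked empirically by comparing group-wise error rates; a minimal sketch (the function name and data layout are ours, not the paper's):

```python
def equalised_odds_gap(y_true, y_pred, groups):
    """Largest gap in true positive rate and false positive rate across the
    protected groups; 0 means equalised odds holds exactly."""
    def rates(g):
        tp = fp = pos = neg = 0
        for yt, yp, a in zip(y_true, y_pred, groups):
            if a != g:
                continue
            if yt == 1:
                pos += 1
                tp += yp
            else:
                neg += 1
                fp += yp
        return tp / pos, fp / neg
    tprs, fprs = zip(*(rates(g) for g in set(groups)))
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy sample: group "a" has half of its positives approved, group "b" all.
gap = equalised_odds_gap(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 1, 1, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
assert abs(gap - 0.5) < 1e-12
```

Taking the maximum over both rates gives a single scalar to drive towards 0, matching the "closer to 0 is better" convention used above.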
A variety of methods is needed because even when the scoring function is trained as "unaware" [15], and as such has no knowledge of the value of the protected attribute A, f can still become unfair. For example, the ground truth Y may be correlated with A due to historical biases, some features in X may act as a proxy, or the distribution or behaviour of some features in X may differ between sub-populations, causing a predictive model to under-perform for under-represented groups. A different strand of work looks into fair data collection [43] and feature selection [20] as well as fair learning procedures, e.g., adversarial learning [45]. In this paper, we focus on a popular class of post-processing methods known as threshold optimisation. Our work builds directly upon the foundational method introduced by Hardt et al. [22] by expanding and improving it along multiple dimensions.
A slightly more nuanced view on fairness is the notion of "treating similar individuals similarly", known as individual fairness [30]. In short, we look to impose a constraint on the distance between any two points (individuals) in the input space relative to their distance in the output space [15]. We measure distance or similarity using distance functions d on the input (d_X) and output (d_R) spaces:
d_R(r₁, r₂) ≤ L_X · d_X(x₁, x₂) ,    (1)
where r_i = s(x_i) and L_X ≥ 0 is a Lipschitz constant (refer to Section 2.2 for a discussion of distance metrics). The Lipschitz constant bounds the distance between two values in the output space in terms of the corresponding distance in the input space. Limiting L_X is usually done with a smoothing process, e.g., manifold regularisation [6], or by constraining the optimisation of s subject to a condition on the size of L_X. The concept is to assume that individuals with similar features (small d_X) should appear close together in the output space (small d_R). Therefore, limiting the rate at which s can change (i.e., its differential) in densely populated areas of the feature space can force s to be smoother, hence more fair. d_X can be as simple as Gower's distance for mixed categorical and numerical features (see Appendix A), but ideally should be chosen appropriately for the problem at hand; since R is a subset of the real line, d_R can simply be the absolute difference between two scores.

Fig. 2. ROC curves for the CreditRisk data set. The solution space for each (protected) group is given by all the points on their respective ROC curve when a single threshold is used. If we rely on multiple thresholds and randomisation, however, we expand the solution space to all the points on and below an ROC curve, represented for each group as the coloured area. A fair solution, according to equalised odds, is any set of thresholds and probabilities such that each group achieves equal true and false positive rates.
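In practice, Equation 1 can be audited empirically on pairs of scored individuals; a sketch assuming one-dimensional inputs and absolute-difference metrics (names and data are illustrative):

```python
def lipschitz_violations(pairs, d_x, d_r, L):
    """Count pairs whose output-space distance exceeds L times their
    input-space distance, i.e., violations of the Lipschitz condition."""
    return sum(
        1 for (x1, r1), (x2, r2) in pairs if d_r(r1, r2) > L * d_x(x1, x2)
    )

# One-dimensional inputs scored by s(x) = x^2, compared with absolute distances.
d = lambda u, v: abs(u - v)
pairs = [((0.0, 0.0), (1.0, 1.0)), ((0.9, 0.81), (1.0, 1.0))]
assert lipschitz_violations(pairs, d, d, 2.0) == 0  # s is 2-Lipschitz on [0, 1]
assert lipschitz_violations(pairs, d, d, 1.0) == 1  # but not 1-Lipschitz
```

A count of 0 on a dense set of pairs is evidence (not proof) that the chosen L_X holds for the sampled region.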

POST-PROCESSING FOR FAIRNESS WITH FIXED RANDOMISATION
In general, we expect that the higher the score r output by a scoring function s used for a predictive task, the "better" the outcome. This property is known as positive orientation, with negative orientation describing the opposite behaviour [19].
For example, if x₁ and x₂ are randomly drawn instances from X used to calculate credit scores, and x₁ has a higher credit score than x₂, i.e., s(x₁) > s(x₂), this relation implies that x₁ is more likely to have healthier spending habits, thus making this person more likely to repay a loan (see Figure 8 in Appendix E). This does not need to be strictly true for all values, but should hold in general. In other words, we expect the receiver operating characteristic (ROC) curve, which expresses the (false positive, true positive) rates at different thresholds on R, to at least be above the diagonal line from (0, 0) to (1, 1) and monotonic, i.e., never decreasing [18]. An ROC curve that is a straight line from (0, 0) to (1, 1) denotes a scoring function that is completely independent of Y, i.e., a trivial scoring function.
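An ROC curve of this kind can be traced by sweeping a single threshold over the observed scores; a standard sketch (the helper name is ours):

```python
def roc_points(scores, labels):
    """(false positive rate, true positive rate) pairs traced by sweeping a
    single threshold over the observed scores, highest first."""
    pos = sum(labels)
    neg = len(labels) - pos
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for _, y in sorted(zip(scores, labels), reverse=True):
        tp += y
        fp += 1 - y
        points.append((fp / neg, tp / pos))
    return points

# A positively oriented scorer: higher scores are mostly the positive class.
curve = roc_points([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 0])
assert curve[-1] == (1.0, 1.0)
# Monotonic: neither coordinate ever decreases along the sweep.
assert all(a <= c and b <= d for (a, b), (c, d) in zip(curve, curve[1:]))
```

Each threshold choice corresponds to exactly one point on this curve, which is why a single threshold restricts each group to its own curve.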
Optimising a scoring function s with respect to complex definitions of fairness (such as equalised odds given by Definition 2.1) for multiple protected groups is more challenging than optimising s for less strict fairness notions (e.g., demographic parity) due to the need to satisfy multiple constraints simultaneously. With post-processing, we assume that s is fixed and inaccessible, i.e., a black box. We cannot therefore know or alter how scores are calculated from the input space, nor do we have access to the input space. This may be due to trade secrets or privacy concerns [24], and applies to credit scoring [23] among other domains. To achieve the desired notion(s) of fairness we therefore need to build an unbiased decision function f_h upon s by finding optimal thresholds for h using only the joint distribution of R, A and Y.
When the cardinality of the protected attribute A is 2, optimal equalised odds can be achieved by fixing a single threshold at any point where the ROC curves intersect. If there are multiple points where the curves meet, the optimal solution (lowest false positive and highest true positive rates) is the intersection closest to (0, 1), i.e., the perfect model. Figure 2 shows the ROC curves stratified by the protected attribute A (race) for loan repayment prediction based on credit scores R from the CreditRisk data set.
The challenge arises when the ROC curves do not touch or when |A| > 2. If the curves do not touch in (0, 1) × (0, 1), we can only satisfy equalised odds with a single threshold at the trivial points (0, 0) or (1, 1), i.e., by assigning the same outcome to all the scores. For |A| > 2, e.g., where A = A₁ × A₂ × ⋯ × A_m may be a Cartesian product of m protected characteristics, it is highly unlikely for all the ROC curves to intersect at the same point (except for the trivial points).
When using a single threshold, each group can only access false and true positive values that are on their respective ROC curve (shown in Figure 2 as the coloured curved lines). Using multiple thresholds and randomisation, however, allows each group to access all the points that are below their respective ROC curve and above the trivial scoring function (shown in Figure 2 as the coloured regions). The optimal point for equalised odds therefore becomes the point under all ROC curves that is closest to (0, 1).
Hardt et al. [22] achieve equalised odds by setting group-specific thresholds t_{j,a}, where t_{0,a} ≤ t_{1,a} and j ∈ {0, 1}, that are applied to the scoring function s. If a score falls between the thresholds designated for the protected group a, it is assigned a class at random with a probability given by the parameter p_a ∈ I. Since thresholds are group-specific, we define a threshold-based classification function h_a : R ↦→ Y, where the probability of h_a(r) = 1 is given by
φ_a(r) = 0 if r < t_{0,a} ;  φ_a(r) = p_a if t_{0,a} ≤ r < t_{1,a} ;  φ_a(r) = 1 if r ≥ t_{1,a}    (2)
for each protected sub-population a. In other words, h_a(r) ∼ B(1, φ_a(r)). We therefore define the final decision function as f_{h_a} = h_a ∘ s, and Equation 2 gives us P{f_{h_a}(x) = 1} = φ_a(s(x)). We call this fixed randomisation, as r ∈ [t_{0,a}, t_{1,a}) yields probability p_a of Ŷ = 1. Setting p_a = 0, p_a = 1 or t_{0,a} = t_{1,a} is synonymous with using a single threshold. A visual example of fixed randomisation is provided in Figure 1.
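A minimal sketch of Equation 2's two-threshold fixed randomisation; the threshold and probability values below are illustrative, not the ones fitted in the paper:

```python
import random

def phi_fixed(r, t0, t1, p):
    """Equation 2: odds of a positive decision under two-threshold fixed
    randomisation with group-specific thresholds t0 <= t1 and probability p."""
    if r < t0:
        return 0.0
    if r < t1:
        return p   # every score inside [t0, t1) receives the same odds p
    return 1.0

def h_fixed(r, t0, t1, p):
    """Bernoulli draw realising the decision h_a(r) ~ B(1, phi_a(r))."""
    return 1 if random.random() < phi_fixed(r, t0, t1, p) else 0

# All scores between the thresholds receive identical odds ...
assert phi_fixed(25.0, 25, 50, 0.5) == phi_fixed(49.5, 25, 50, 0.5) == 0.5
# ... while scores just below the lower threshold have no chance at all.
assert phi_fixed(24.5, 25, 50, 0.5) == 0.0
```

The two assertions reproduce, in miniature, the flat interior and the abrupt jump that the next paragraphs criticise.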
Fixed randomisation is an effective approach to building a classifier f_{h_a} based on a scoring function s that satisfies group fairness such as equalised odds. This strategy, however, exhibits a number of undesired properties; most notably: (i) it does not follow the general behaviour expected of a scoring function, since all users who are subject to randomisation receive the same classification odds no matter their score, yet users whose scores are similar and near the thresholds are treated differently (hence the example given in the introduction and shown in Figure 1); (ii) even if s is individually fair with a well-defined L_X, the discontinuities introduced by p_a at t_{j,a} prevent f_{h_a} from complying with individual fairness; and (iii) if p_a ≠ p_{a′} then users from group a cannot access the random classification odds offered to group a′ and vice versa.
Section 1 has already demonstrated the adverse consequences of point (i). While users are made to believe that a higher score is better, e.g., their credit rating, fixed randomisation only exhibits this behaviour at the thresholds. Refer back to Figure 1, which shows that despite there being clear evidence of white users with a credit score of 50 being more likely to repay their loan than white candidates whose credit score is 25, both are equally likely (but not guaranteed) to receive a loan.
Definition 3.1 (Distance Between Classification Odds). Given a probability curve φ_a : R ↦→ I, the distance between the classification odds of any two scores r₁, r₂ ∈ R is d_Y(r₁, r₂) = |φ_a(r₁) − φ_a(r₂)|. Using Equation 2, the distance is the difference in odds of positive classification between two scores.
Point (ii) concerns the classification behaviour around the thresholds t_{j,a} and the fixed randomisation parameter p_a, which create discontinuities in odds for the final decision function f_{h_a}. To demonstrate this we use Definition 3.1, which specifies a distance metric on the classification odds. Lipschitz conditions scale across compositions [17], such that
d_Y(f_{h_a}(x₁), f_{h_a}(x₂)) ≤ L_R L_X · d_X(x₁, x₂) ,
where L_R is the Lipschitz constant of h_a with respect to d_R. Issues arise around the thresholds. Take r₁ < t_{j,a} ≤ r₂, so that two scores approach a threshold from different sides. In such a case, from Equation 2, we have that d_R(r₁, r₂) → 0 and d_Y(r₁, r₂) = p_a or (1 − p_a), and thus L_R must be very large. As r₁ approaches t_{j,a} from one side and r₂ from the other, h_a is clearly not locally Lipschitz continuous since d_R → 0 but d_Y → p_a or (1 − p_a), one of which is always above 0. In theory, s could be crafted such that it cannot map individuals to values around the thresholds, however this would introduce discontinuities to s and thus invalidate the Lipschitz condition. In this scenario, assuming s satisfies the individual fairness constraint defined in Equation 1, f_{h_a} must ultimately violate such an individual fairness constraint at the thresholds when fixed randomisation is employed. Fixed randomisation can therefore be seen as a step function (see Figure 1), which is not uniformly continuous on any interval that contains t_{j,a} [16]. Small changes can occur for a variety of reasons, e.g., a lack of instrumentation precision [32] or noise due to human error [34], and thus we argue that small changes should never dramatically change an individual's odds.

Definition 3.2 (Equalised Individual Odds). Given a probabilistic classifier f_a : X_a ↦→ Y, where X_a ⊆ X|_{A=a}, defined by the probability curve φ_a : R ↦→ I_a ⊆ I such that h_a(r) ∼ B(1, φ_a(r)), equalised individual odds requires that I_a ≡ I_{a′} for every pair of protected groups a, a′ ∈ A. Therefore, all sub-populations in A must be capable of attaining classification odds available to all the other groups.
Point (iii) highlights an interesting behaviour that gives rise to a novel, relatively weak, notion of fairness, which we call individual odds; see Definition 3.2. To satisfy this fairness criterion φ_a does not necessarily need to be continuous, but every point that it can reach must also be available to φ_{a′}, so effectively we require I_a ≡ I_{a′}. Violating this constraint implies that there exists a subset of users from the A = a sub-population that can never be treated the same as a portion of individuals from the A = a′ group and vice versa. Whenever p_a ≠ p_{a′}, the individual odds criterion is clearly not satisfied for fixed randomisation. This definition of fairness bridges the, thus far somewhat separate, concepts of individual and group fairness, as it considers the treatment of individual users in view of their assignment to distinct sub-populations determined by the protected attribute A.
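The image I_a can be approximated numerically by sampling each group's probability curve; a sketch with illustrative thresholds and probabilities of our own choosing:

```python
def reachable_odds(phi, lo, hi, grid=1000):
    """Approximate the image I_a of a probability curve phi by sampling scores."""
    step = (hi - lo) / grid
    return {round(phi(lo + i * step), 6) for i in range(grid + 1)}

# Fixed randomisation: the image is the finite set {0, p_a, 1}, so groups with
# p_a != p_a' can never be offered each other's interior odds.
def fixed(p, t0=25, t1=50):
    return lambda r: 0.0 if r < t0 else (p if r < t1 else 1.0)

odds_a = reachable_odds(fixed(0.5), 0, 100)
odds_b = reachable_odds(fixed(0.972), 0, 100)
assert odds_a == {0.0, 0.5, 1.0} and odds_a != odds_b

# A continuous, monotone curve rising from 0 to 1 instead reaches every value
# in I = [0, 1], so the image coincides across groups.
linear = lambda r: min(max((r - 25) / 25.0, 0.0), 1.0)
odds_lin = reachable_odds(linear, 0, 100)
assert min(odds_lin) == 0.0 and max(odds_lin) == 1.0
```

The finite-versus-full-interval contrast is exactly the gap that Definition 3.2 rules out.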

CONSTRUCTING CURVES FOR PREFERENTIAL RANDOMISATION
Under these conditions, assuring group and individual fairness is equivalent to searching for solutions that are continuous and smooth, with a well-defined limit on L_R L_X, and that also satisfy Definition 2.1. We therefore must find a combination of the group thresholds (t_{0,a} and t_{1,a}) and a curve between them that satisfies individual as well as group fairness.

Defining Solution Behaviour
There are potentially infinitely many curves that satisfy the aforementioned conditions. In order to decrease the size of the solution space, we can impose further restrictions on the expected behaviour of the solution and its parameterisation. Where h_a follows fixed randomisation, we define preferential randomisation as g_a(r) ∼ B(1, ψ_a(r)) to distinguish between the two; therefore, f_{g_a} = g_a ∘ s and
P{f_{g_a}(x) = 1} = ψ_a(s(x)) .    (3)
We expect preferential randomisation to behave as follows:

Monotonicity Larger values of r should entail equal or higher chances of positive classification, as argued by point (i) in Section 3, i.e., r₁ ≤ r₂ ⟹ ψ_a(r₁) ≤ ψ_a(r₂).

Continuity at boundaries
The solution should avoid sudden jumps in probability at the thresholds t_{j,a} to satisfy point (ii), i.e., ψ_a(t_{j,a}) = j.

Continuity for interval space
The curve that maps R to the classification probability must be well-defined at all points in R in compliance with point (iii). If r is any fixed point in R, then lim_{r′ → r⁺} ψ_a(r′) = lim_{r′ → r⁻} ψ_a(r′) = ψ_a(r), where r′ → r⁺ denotes r′ approaching r from above and r′ → r⁻ denotes r′ approaching r from below.
Monotonicity between the thresholds guarantees that higher scores are treated better; continuity within the interval ensures that the Lipschitz constant does not explode at the thresholds (see Figure 3). This is especially important when we can only access the final decisions Y as opposed to the scores R, i.e., f is a crisp classifier f : X ↦→ Y, in which case we require the ability to randomise the crisp predictions. With these constraints we can satisfy the requirements outlined in Section 3.

Viable Solutions from Linear Systems
Even with these constraints, the number of curves between each combination of thresholds that constitute viable solutions is still infinite. We therefore further constrict the solution space to piece-wise polynomials parameterised only by t_{j,a} and p_a. We assume each solution is composed of two polynomial pieces joined at a connection point r_a, so that for
r_a = t_{0,a} + (1 − p_a)(t_{1,a} − t_{0,a})
each solution satisfies ψ_a(r_a) = p_a. We choose this particular point of connection (r_a) because it ensures that all solutions (including the fixed randomisation curve φ_a) follow
1/(t_{1,a} − t_{0,a}) ∫_{t_{0,a}}^{t_{1,a}} ψ_a(r) dr = p_a .
This property guarantees that curves parameterised by the same thresholds and probabilities are comparable, as they yield the same average probability between t_{0,a} and t_{1,a}. The only difference between such solutions is their smoothness and continuity (see Appendix B for the proof). Finding families of closed-form solutions is achieved by using the continuity and monotonicity constraints, with the addition of smoothness constraints as the order of the polynomial increases, and solving a full-rank linear system Ax = b (refer to Appendix C for details). Here, we consider four candidate curves of increasing polynomial order, from piece-wise linear up to a 4th order polynomial; their derivations are given in Appendix C. Note that the 4th order polynomial (Equation 5) is not monotonic for every combination of thresholds and probabilities.
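The lowest-order solution can be sketched as two line segments meeting at the connection point; a minimal illustration assuming the pieces interpolate (t_{0,a}, 0), (r_a, p_a) and (t_{1,a}, 1), with parameter values of our own choosing:

```python
def psi_linear(r, t0, t1, p):
    """Piece-wise linear preferential randomisation: 0 below t0, 1 above t1,
    and two segments joined at r_a = t0 + (1 - p) * (t1 - t0) with value p."""
    if r < t0:
        return 0.0
    if r >= t1:
        return 1.0
    r_a = t0 + (1 - p) * (t1 - t0)
    if r < r_a:
        return p * (r - t0) / (r_a - t0)
    return p + (1 - p) * (r - r_a) / (t1 - r_a)

# The average probability between the thresholds equals p, matching the
# fixed randomisation curve the solution replaces (here t0=25, t1=50, p=0.5).
n = 100_000
avg = sum(psi_linear(25 + 25 * i / n, 25, 50, 0.5) for i in range(n)) / n
assert abs(avg - 0.5) < 1e-3
# Monotonicity: a higher score never lowers the odds.
assert psi_linear(49.5, 25, 50, 0.5) > psi_linear(25.5, 25, 50, 0.5)
```

Unlike fixed randomisation, a score gain anywhere inside [t0, t1) now strictly improves an individual's odds.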

Validating Individual Fairness
Fig. 3. Two-threshold preferential randomisation with smoothness constraints applied to probabilities (y-axis) output by a loan repayment classifier built upon credit scores (x-axis). It satisfies equalised odds for the protected attribute race (black and white) by using it to assign approval probabilities. This solution has no discontinuities, satisfying individual odds (Definition 3.2; see the pair of points marked in the figure for an example) and being L_R Lipschitz-continuous (Equation 1), and offers predictive performance marginally better than the fixed randomisation method shown in Figure 1.

If s is individually fair from the outset, validating that a given solution satisfies the individual fairness constraint is straightforward. From Definition 3.1 and Equation 3, the distance between the classification odds of two scores is d_Y(r₁, r₂) = |ψ_a(r₁) − ψ_a(r₂)|.
Taking the limit r₁ → r₂, we get the definition of a derivative. Therefore, we can calculate L_R by considering the maximum of the derivative ψ′_a over R. Due to the definitions of each ψ_a, the maximum value of ψ′_a on R is always either at the thresholds or at the connection point, with the exception of the 4th order polynomial, for which L_R is attained where ψ′′_a(r) = 0 for r ∈ (t_{0,a}, t_{1,a}); thus L_R is always known. Finding an optimal solution is therefore a case of identifying values of t_{j,a} and p_a for ψ_a that satisfy Definition 2.1 such that L_R L_X is well-defined. While L_R is not guaranteed to be small, it is guaranteed to be finite. Taking the limit p_a → 1 or 0, t_{0,a} → t_{1,a}, or t_{1,a} → t_{0,a}, then L_R → ∞, which is synonymous with using a single threshold, hence invalidating equalised odds.
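For a piece-wise linear solution, for instance, L_R is simply the larger of the two segment slopes; a sketch with illustrative parameter values of our own choosing:

```python
def lipschitz_linear(t0, t1, p):
    """Largest derivative of a piece-wise linear curve whose two segments join
    at the connection point r_a = t0 + (1 - p) * (t1 - t0) with value p."""
    width = t1 - t0
    slope_lo = p / ((1 - p) * width)     # segment from (t0, 0) to (r_a, p)
    slope_hi = (1 - p) / (p * width)     # segment from (r_a, p) to (t1, 1)
    return max(slope_lo, slope_hi)

# Finite for any proper two-threshold solution ...
assert abs(lipschitz_linear(25, 50, 0.5) - 0.04) < 1e-12
# ... but it explodes as p approaches 0 or 1 (or as the thresholds collapse),
# i.e., as the solution degenerates into a single threshold.
assert lipschitz_linear(25, 50, 0.999) > 100 * lipschitz_linear(25, 50, 0.5)
```

Widening the threshold interval or keeping p away from {0, 1} therefore directly tightens the individual fairness bound.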

CASE STUDIES
Here we apply the method of preferential randomisation to two case studies: credit scoring for loan allocation (CreditRisk) and risk of recidivism (COMPAS). Source code for all the studies is available online¹.

CreditRisk Case Study
To facilitate a direct comparison, we apply our method to the case study conducted by Hardt et al. [22]. Credit scores are often used to determine whether an individual should receive a loan or mortgage, to calculate interest rates and credit limits, and even to conduct background checks on tenants [21, 29]. The scoring function s, which calculates credit scores on the input space X, operates as a black box (see Section 3); therefore we only observe the scores R and cannot access X or s.
The input space may contain attributes influenced by cultural background (i.e., related to race), possibly causing the joint distribution of R and Y to differ between sub-populations a. The CreditRisk data set captures the credit score's ability to predict defaulting on a loan (i.e., failing to repay it) for 90 days or more. The data show that as the credit score increases, the likelihood of defaulting decreases (shown in Figure 8 given in Appendix E). The rate of these changes, however, is correlated with race. Therefore, when a single threshold for each sub-population is optimised for maximum accuracy, the equalised odds gap (Definition 2.1) becomes 0.28; we should strive for this fairness metric to be as close to 0 as possible. We overcome this by using different thresholds and probabilities (specified in Table 3 given in Appendix F) achieved with a set of curves with differing smoothness constraints. These curves honour the "higher credit score leads to higher repayment probability" dependency encoded in the underlying data. Referring back to the example introduced in Section 1, we can see from Figures 3 and 4 that the white user with a credit score of 49.5 is now 2.6-5.25 times more likely to receive a loan than the white user with a credit score of 25, depending on which continuous solution is chosen.
The results - reported in Table 1 - show that the difference in accuracy and equalised odds between fixed randomisation and preferential randomisation is negligible (a change of +0.016 and −0.000634 respectively). The method additionally improves individual fairness by bounding the Lipschitz constant of the probability curves and by satisfying Definition 3.2 (individual odds).
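Definition 2.1 is given in the main text; as a rough sketch, equalised odds disparity is often operationalised (following Hardt et al. [22]) as the largest between-group gap in true- and false-positive rates. The helper below is illustrative - its name and exact aggregation are our own choices, not necessarily the paper's:

```python
import numpy as np

def equalised_odds_gap(y_true, y_pred, groups):
    """Largest between-group gap in TPR or FPR -- one common
    operationalisation of equalised odds (illustrative only)."""
    tprs, fprs = [], []
    for g in np.unique(groups):
        m = groups == g
        tprs.append(y_pred[m][y_true[m] == 1].mean())  # true-positive rate
        fprs.append(y_pred[m][y_true[m] == 0].mean())  # false-positive rate
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))
```

A value of 0 means both rates are perfectly matched across groups; the 0.28 reported above would correspond to a large gap under such a measure.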
Preferential randomisation can therefore be used to guarantee group and individual fairness through the notions of equalised odds and individual odds, and this encourages users to engage with the scoring model.

COMPAS Case Study
The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software2 is a commercial tool used across multiple U.S. states to analyse and predict a defendant's behaviour if released on bail. The software output can be considered by judges during sentencing, albeit such a practice must be disclosed. Specifically, COMPAS offers three insights: (1) likelihood of general recidivism (re-offending); (2) likelihood of violent recidivism (committing a violent crime); and (3) likelihood of failing to appear in court (pretrial flight risk). Here, we focus on the risk of re-offending using the raw COMPAS scores available in the ProPublica data set3 [2]. The COMPAS algorithm uses characteristics such as criminal history, known associates, drug involvement and indicators of juvenile delinquency to calculate a score, where a higher score corresponds to a higher likelihood of recidivism. As is the case with CreditRisk (Section 5.1), the scoring algorithm used by the COMPAS software is proprietary. Given its high-stakes nature, it is important to understand the predictive behaviour of this tool, since its social situatedness - captured by the (protected) data features that are translated into the score - may yield biased results [35], as shown in Figure 5.
To this end, we define A as the Cartesian product of two sensitive attributes - sex A1 = {male, female} and race A2 = {Caucasian, African-American} - found in the COMPAS data set, such that A = A1 × A2 and so |A| = 4.
Additionally, normalised COMPAS scores for a population of interest are denoted with R; the ground truth label for each score in R is given by Y, where 1 corresponds to individuals who committed an offence in a two-year time window; and Ŷ captures crisp predictions, with 1 indicating high risk (of recidivism). Studying the link between the scores and labels provided by the COMPAS data set - refer to Figure 9 given in Appendix E - indicates that for most values of r across all groups encoded by A, if r1 > r2, then P{Y = 1|R = r1} ≥ P{Y = 1|R = r2}. Therefore, we are in a good position to use the monotonic probability functions proposed in this paper to build the final classifier.

Table 2.
Accuracy (acc) as a percentage, equalised odds (EO) to the order of ×10^−4, and Lipschitz constant (L_R) per method for each value of the Cartesian product of the protected attributes race and sex in the COMPAS prediction task. The African-American male group is not shown as it uses a single threshold of 48 due to having the lowest ROC curve at the optimum, thus acting as the baseline for other groups.
Given the aforementioned relationship, it is in the public's (and judicial system's) best interest to always increase the probability of classifying an individual as high-risk as the score increases. However, fixed randomisation does not allow for this. For example, under fixed randomisation a Caucasian male with a COMPAS score in the [24, 41) range has an 11.6% chance of being classified as high-risk (see Table 4 in Appendix F); nonetheless, a Caucasian male at the top of this score range is almost twice as likely to commit an offence as a Caucasian male with a score at the low end of this range. Therefore, fixed randomisation is unfair on three fronts: (1) Caucasian males with scores in the lower part of the [24, 41) interval are treated the same as Caucasian males with scores in the higher part of it; (2) higher-risk individuals are not labelled as such despite their scores indicating so; and (3) individuals whose outcome is randomised are never offered the same odds as members of other protected groups (in violation of Definition 3.2).
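For concreteness, two-threshold fixed randomisation [22] can be sketched as follows; the thresholds 24 and 41 and the probability 11.6% are the Caucasian-male values quoted above (Table 4), and the function name is ours:

```python
import random

def fixed_randomisation(score, t0=24.0, t1=41.0, p=0.116):
    """Two-threshold fixed randomisation: deterministic outside [t0, t1),
    a fixed coin flip with probability p everywhere inside it."""
    if score < t0:
        return 0                     # always classified low-risk
    if score >= t1:
        return 1                     # always classified high-risk
    return int(random.random() < p)  # identical odds for every score in [t0, t1)
```

Scores of 25 and 40 receive identical classification odds despite the underlying risk nearly doubling across the interval - exactly the discontinuity the continuous curves remove.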
Notably, these arguments apply to all groups in the protected attribute A and not only Caucasian males.
Manuscript submitted to ACM

Small changes in score that have a large impact on odds can have very real effects on individuals - see the case of Mr. Rodriguez, whose analysis contained an error that caused his parole to be incorrectly denied [42]. We thus argue that small changes should not dramatically change the chances of classification.
The between-group equalised odds measure when we maximise accuracy separately for each group is 0.148. Mirroring Section 5.1, we apply our method to the COMPAS model in order to reduce the equalised odds disparity without breaking individual fairness. We therefore seek to calibrate the model with a combination of thresholds and probabilities that parameterise the continuous curves (defined in Section 4.2), using nothing but the joint distribution of scores, labels and protected attributes. We then compare the continuous solutions to the step function solution (fixed randomisation) defined in Equation 2. The results reported in Table 2 and Figure 6 show that continuous curves can be used to simultaneously satisfy equalised odds, individual fairness and individual odds. Since low-scoring individuals are less likely to be classified as high-risk, defendants have an incentive to engage in behaviour that actively lowers their COMPAS score. Furthermore, public safety is prioritised more effectively, since individuals with a measurably higher probability of recidivism are given higher odds of being classified as high-risk.

CONCLUSION AND FUTURE WORK
In this work we demonstrated how using fixed randomisation to guarantee group fairness may be detrimental to both the owners and users of a predictive model. Users with higher scores should be more likely to receive a better outcome - a property that may be lost when enforcing group fairness. Ensuring this behaviour also allows the owners to preserve predictive performance and transparency of the automated decision-making process. By using the method proposed in this paper - which relies on monotonic and continuous curves - we can guarantee these properties.
Our approach rewards building accurate scoring functions and adheres to the notion of individual fairness from the perspective of function composition. Importantly, the burden of accurate classification remains the sole responsibility of the model owner, since our method forces all individuals to rely on the equalised odds measure of the worst-performing sub-population. This allocation of responsibility is desirable as owners can choose to invest in better predictors, data or scoring functions, whereas users in under-performing groups lack this agency.
Notably, our case study shows that there can exist multiple solutions that simultaneously satisfy equalised odds and individual fairness, which can be linked to model multiplicity [36]. When equalised odds, individual fairness and accuracy are comparable between groups, we can choose to discriminate between the solutions based on other criteria. Future work will explore this aspect of our curves; specifically, we will consider: (1) the most robust curve for each group [26]; (2) curves such that L_R is closest between groups; (3) the smoothest curves; (4) curves that subject the fewest individuals to random outcomes, for example, minimising |τ1,a − τ0,a| for all a ∈ A; and (5) curves that subject an equal number of individuals to random outcomes across groups.

A EXAMPLE DISTANCE FUNCTIONS
If M is a metric space and x, y, z ∈ M, then d_M : M × M ↦ R+ and the following hold:
• d_M(x, x) = 0 - the distance between a point and itself is 0;
• if x ≠ y, d_M(x, y) > 0 - the distance between two different points is strictly greater than 0;
• d_M(x, y) = d_M(y, x) - the distance between two different points x and y is equal to the distance between y and x; and
• d_M(x, z) ≤ d_M(x, y) + d_M(y, z) - the distance between any two points is equal to or less than the distance given by visiting another point on a journey between the original two points (triangle inequality).
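These axioms can be verified numerically on a finite sample of points; a minimal brute-force checker (the helper name and tolerance are our own):

```python
from itertools import product

def satisfies_metric_axioms(d, points, tol=1e-12):
    """Brute-force check of the four metric axioms over a finite sample."""
    for x, y, z in product(points, repeat=3):
        if d(x, x) != 0:                       # identity
            return False
        if x != y and d(x, y) <= 0:            # positivity
            return False
        if d(x, y) != d(y, x):                 # symmetry
            return False
        if d(x, z) > d(x, y) + d(y, z) + tol:  # triangle inequality
            return False
    return True
```

The squared difference (a − b)**2, for instance, fails the triangle inequality and is correctly rejected by this check.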

A.1 Euclidean Distance (Continuous Features)
The $\ell_2$-norm is defined as $\|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^{d} x_i^2}$ and is the foundation of Euclidean distance $d_E : \mathcal{X} \times \mathcal{X} \mapsto \mathbb{R}$ defined as $d_E(\mathbf{x}_1, \mathbf{x}_2) = \|\mathbf{x}_1 - \mathbf{x}_2\|_2$, and so $d_E(\mathbf{x}_1, \mathbf{x}_2) = \sqrt{\sum_{i=1}^{d} (x_{1,i} - x_{2,i})^2}$.
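A direct, pure-Python translation of this definition (for illustration):

```python
import math

def euclidean_distance(x1, x2):
    """d_E(x1, x2) = ||x1 - x2||_2 for continuous feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))
```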

A.2 Hamming Distance (Discrete Features)
The $\ell_1$-norm is defined as $\|\mathbf{x}\|_1 = \sum_{i=1}^{d} |x_i|$ and is the foundation of Hamming distance $d_H : \mathcal{X} \times \mathcal{X} \mapsto \{0, 1, \ldots, d-1, d\}$, which counts the number of features that differ between two inputs $\mathbf{x}_1$ and $\mathbf{x}_2$, and is defined as $d_H(\mathbf{x}_1, \mathbf{x}_2) = \|\mathbf{x}_1 \oplus \mathbf{x}_2\|_1$, where $\oplus$ is the XOR operation. Therefore, $\mathbf{x}_1 \oplus \mathbf{x}_2$ is simply a vector of 0's and 1's whose $i$-th element is 1 if and only if $x_{1,i} \neq x_{2,i}$.
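Equivalently, in code - the XOR-and-sum above reduces to counting mismatched positions:

```python
def hamming_distance(x1, x2):
    """Number of feature positions at which two discrete vectors differ."""
    return sum(a != b for a, b in zip(x1, x2))
```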
A.3 Gower's Distance (Mixed Continuous and Discrete Features)
Take $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^d$ that contain both continuous (numerical) variables and discrete (categorical) variables. We then consider each variable for $i = 1, \ldots, d$. If the pair $x_{1,i}, x_{2,i}$ is continuous, $s_i = 1 - \frac{|x_{1,i} - x_{2,i}|}{R_i}$, where $R_i$ is the range of the $i$-th feature. Fundamentally, the second term is the normalised $\ell_1$-norm (defined in Equation 6) on the differences between two vectors. However, if the pair $x_{1,i}, x_{2,i}$ is discrete, we use the Iverson operation defined by $s_i = [\![x_{1,i} = x_{2,i}]\!]$. As such, a value of $s_i = 1$ for both continuous and discrete features implies that $x_{1,i} = x_{2,i}$, and $s_i = 0$ implies that $x_{1,i}$ and $x_{2,i}$ are maximally different. We put this together to get Gower's Similarity Coefficient $S_G(\mathbf{x}_1, \mathbf{x}_2) = \frac{1}{d} \sum_{i=1}^{d} s_i$, which is bounded within $[0, 1]$. However, this coefficient does not follow the axioms laid out at the beginning of this section as $S_G(\mathbf{x}, \mathbf{x}) = 1$. Therefore, using Equation 7 we define Gower's distance as $d_G(\mathbf{x}_1, \mathbf{x}_2) = 1 - S_G(\mathbf{x}_1, \mathbf{x}_2)$, which offers the behaviour expected of a distance metric.
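A sketch of Gower's distance as described above; `ranges` holds the range of each continuous feature and `categorical` flags the discrete ones (argument names are ours):

```python
def gower_distance(x1, x2, ranges, categorical):
    """1 minus Gower's similarity coefficient for mixed feature vectors."""
    scores = []
    for j, (a, b) in enumerate(zip(x1, x2)):
        if categorical[j]:
            scores.append(1.0 if a == b else 0.0)        # Iverson bracket
        else:
            scores.append(1.0 - abs(a - b) / ranges[j])  # normalised l1 term
    return 1.0 - sum(scores) / len(scores)
```

Identical inputs yield a distance of 0, restoring the identity axiom that the raw similarity coefficient violates.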

B GEOMETRIC MOTIVATION FOR THE AVERAGE PROBABILITY (LINEAR CASE)
We can parameterise each set of potential solutions for each value of $a$ (i.e., protected sub-population) by only three parameters - $p_a$, $\tau_{0,a}$ and $\tau_{1,a}$ - by constraining the area under each curve to be equal to $p_a(\tau_{1,a} - \tau_{0,a})$. This forces all potential solutions with the same set of parameters to have the same average probability between the thresholds.
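As a numerical sanity check of this area constraint, the sketch below builds one two-segment piece-wise linear curve between the thresholds; the particular join point at (t0 + p·(t1 − t0), 3p − 1) is our illustrative choice (valid for p in [1/3, 2/3]), not necessarily the parameterisation used in the paper:

```python
import numpy as np

def piecewise_linear(r, t0, t1, p):
    """Two-segment curve from (t0, 0) to (t1, 1) joining at
    (t0 + p * (t1 - t0), 3 * p - 1) -- an illustrative choice that
    preserves an average probability of p for p in [1/3, 2/3]."""
    ta, v = t0 + p * (t1 - t0), 3 * p - 1
    return np.interp(r, [t0, ta, t1], [0.0, v, 1.0])

# Trapezoidal check: the area between the thresholds equals p * (t1 - t0).
t0, t1, p = 24.0, 41.0, 0.4
r = np.linspace(t0, t1, 200001)
y = piecewise_linear(r, t0, t1, p)
area = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(r)))
```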

B.1 $\tau_a$ Proof (Point of Intersection)
Here, we discuss the details of bounding the piece-wise linear solution such that the two lines join at $\tau_a \in [\tau_{0,a}, \tau_{1,a}]$.
We present a proof that this value can be easily found, and is defined only through $q_a = 1 - p_a$, $\tau_{0,a}$ and $\tau_{1,a}$.
Fig. 7. Geometric interpretation of a piece-wise linear solution using thresholds and probabilities. We can see that a step function yields a rectangular area - blue space that denotes the average probability - defined by the probability $p_a$ and the threshold interval size $\tau_{1,a} - \tau_{0,a}$. We expect this value to be equal to the area bounded by the piece-wise linear solution $P_a(r)$ and the x-axis (yellow space), which can be decomposed into simple geometric shapes and summed up.
We begin by assuming that the solution is linear in nature and, as outlined in the paper (Section 4.1), it preserves the average probability of the step function within the interval $[\tau_{0,a}, \tau_{1,a}]$. We therefore define $\Delta_a = \tau_{1,a} - \tau_{0,a}$ to get the final result from Section 4.2: $\tau_a = \tau_{0,a} + p_a \Delta_a$.

□
B.2 $P_1(r)$ and $P_0(r)$ Proof (Piece-wise Linear Solution)
We define the linear form of the interpolant as two segments, $P_1(r)$ and $P_0(r)$, that meet at $\tau_a$, where each $P_i$ is linear, so $P_i(r) = m_i r + c_i$. In order to derive the final form, we must assume the following conditions (continuity): (1) $P_1(\tau_{0,a}) = 0$, (2) $P_0(\tau_{1,a}) = 1$, and (3) $P_1(\tau_a) = P_0(\tau_a)$. From conditions 1 and 3 we get two equations in $m_1$ and $c_1$. From the difference of these equations we get $m_1(\tau_a - \tau_{0,a}) = P_1(\tau_a)$.
From the proof in Appendix B.1, we know that $\tau_a = \tau_{0,a} + p_a \Delta_a$ (Equation 8), giving $m_1 p_a \Delta_a = P_1(\tau_a)$. Similarly, from conditions 2 and 3 we get two equations in $m_0$ and $c_0$. The difference yields $m_0(\tau_{1,a} - \tau_a) = 1 - P_0(\tau_a)$. From the definition of $\tau_a$ and $\Delta_a$ we get $m_0 q_a \Delta_a = 1 - P_0(\tau_a)$. We can decompose the integral of the piece-wise linear solution into two integrals over the interval, so using Equation 11 together with Equation 10 and the definition of $\Delta_a$ and $\tau_a$ in Appendix B.1, we recover the slopes $m_1$ and $m_0$ and, in turn, the intercepts $c_1$ and $c_0$.

C.2 Closed-form Smoothness for $p_a \in [\frac{2}{5}, \frac{3}{5}]$
Here, we search for a smooth closed-form solution to the above problem. For simplicity, we assume that $\tau_{0,a} = 0$ and $\tau_{1,a} = 1$; however, the solution can be generalised to arbitrary thresholds by applying shift and stretch operations.
We have a set of constraints on the curve which, taken together, yield a well-defined, full-rank linear system for its coefficients.
Since these curves are specified through closed-form solutions parameterised by the thresholds and $p_a$ on a known interval $\mathcal{R}$, $L_\mathcal{R}$ can be found analytically for each curve. Here, we show the derivation procedure for the linear and 4th order solutions.
The other curves (cubic and quadratic) follow the same protocol.
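When a closed form is inconvenient, $L_\mathcal{R}$ can also be approximated numerically as the largest finite-difference slope over a dense grid - a sketch only (the analytic derivation above remains the authoritative route, and the function name is ours):

```python
import numpy as np

def lipschitz_estimate(curve, t0, t1, n=100001):
    """Estimate the Lipschitz constant of `curve` on [t0, t1] as the
    maximum absolute slope between adjacent grid points."""
    r = np.linspace(t0, t1, n)
    y = curve(r)
    return float(np.max(np.abs(np.diff(y) / np.diff(r))))
```

For the straight line r ↦ r on [0, 1] this returns 1, matching the analytic constant; for r ↦ r² it approaches 2 as the grid is refined.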

Definition 3.1 (Classification Odds Distance). Given a decision function $h_P : \mathcal{R} \mapsto \mathcal{Y}$ such that $h_P(r) \sim \mathcal{B}(1, P_a(r))$, we define the corresponding distance metric $d_\mathcal{Y} : \mathcal{I} \times \mathcal{I} \mapsto \mathcal{I}$. Because no score can be outside of the $[\min(\mathcal{R}), \max(\mathcal{R})]$ range, the output of $P_a$ does not need to span the entire probability range $[0, 1]$ if the thresholds are fixed at the extremes, i.e., $\tau_{0,a} = \min(\mathcal{R})$ and $\tau_{1,a} = \max(\mathcal{R})$.

Fig. 4. Probability curves corresponding to the results reported in Table 1. All solutions have comparable accuracy and satisfy equalised odds but yield a different Lipschitz constant L_R. The Hispanic group is omitted as it uses a single threshold of 30 (refer to Table 3 given in Appendix F).

Fig. 5. ROC curves for the COMPAS data set. The coloured regions indicate areas accessible to each group. (Refer to Figure 2 for more details.)

Fig. 6. Probability curves corresponding to the results reported in Table 2. All solutions have comparable accuracy and satisfy equalised odds but yield a different Lipschitz constant L_R. The African-American male group is omitted as it uses a single threshold of 48 (refer to Table 4 given in Appendix F).

Table 1. Accuracy (acc) as a percentage, equalised odds (EO) to the order of ×10^−4, and Lipschitz constant (L_R) per method for each value of the protected attribute race in the CreditRisk loan repayment prediction task. The Hispanic group is not shown as it uses a single threshold of 30 due to having the lowest ROC curve at the optimum, thus acting as the baseline for other races.