Is Your Model Predicting the Past?

When does a machine learning model predict the future of individuals and when does it recite patterns that predate the individuals? In this work, we propose a distinction between these two pathways of prediction, supported by theoretical, empirical, and normative arguments. At the center of our proposal is a family of simple and efficient statistical tests, called backward baselines, that demonstrate if, and to what extent, a model recounts the past. Our statistical theory provides guidance for interpreting backward baselines, establishing equivalences between different baselines and familiar statistical concepts. Concretely, we derive a meaningful backward baseline for auditing a prediction system as a black box, given only background variables and the system’s predictions. Empirically, we evaluate the framework on different prediction tasks derived from longitudinal panel surveys, demonstrating the ease and effectiveness of incorporating backward baselines into the practice of machine learning.


Introduction
Proponents of predictive technologies for consequential decision-making emphasize the seeming ability of statistical models to anticipate individual actions. The ability to predict the future, so the argument goes, creates a rationale for adopting machine learning as policy: if a risk score charted the future trajectory of individuals, then intervening in a person's life on the basis of the risk score would be justified [KLMO15, OE16]. At the same time, critical scholars caution that predictive technologies reproduce historical patterns of injustice and social stratification. In this account, rather than predicting future outcomes, statistical risk assessment tools punish individuals for factors predating their own agency [Eub18, Ben19]. Does a statistical model predict individual agency or recite the past? The answer to this question is often not obvious. Consider the problem of loan default prediction, one of many tasks often framed as predicting future outcomes. One predictor might identify individual behavior detrimental to loan repayment and adjust the predicted likelihood of default accordingly. Another predictor might rely on historical associations between repayment and demographic factors, and predict based solely on those historical factors. Even if the two models achieve the same accuracy, they derive their predictive power along distinct pathways. In one solution, we rely on the effects of individual behavior on future outcomes. In the other, we reproduce patterns from the past that were determined before, and independently of, individual behavior. The latter form of prediction, resembling a kind of stereotyping, is core to many documented examples of bias and unfairness in the use of machine learning [DHP+12, ALMK16, Cho17, HKRR18, BG18, BCZ+16, CBN17].
The distinction we draw is fundamental to the theory of equality of opportunity. Dworkin partitions attributes of an individual into factors for which the individual is responsible and factors outside the individual's control [Dwo18a, Dwo18b]. Similarly, Roemer distinguishes between the effort that an individual exerts and the individual's type. A type groups individuals of the same circumstances, where "circumstances are those aspects of one's environment (including, perhaps, one's biological characteristics) which are beyond one's control" [Roe00, Roe02, RT16]. Dworkin and Roemer build on this fundamental moral distinction to define what it means to achieve equality in the allocation of resources and opportunity. Here, we focus on the consequences of the same distinction within the context of prediction.
Although the precise distinction is more subtle, we can approximate it with the help of time. Background variables in a prediction problem are those that were determined before the individual, such as place and date of birth, or parents' educational attainment. Background variables generally influence both an individual's actions and the target of prediction. Individual factors are variables over which the individual can exert direct, though possibly not full, control. Correspondingly, we coin the term backward prediction to describe the use of background variables in prediction, and we use forward prediction to refer to the use of individual factors.
Our contribution. In this work, we formalize the distinction between forward and backward prediction. We build a theory of forward and backward prediction around a family of simple and effective statistical tests that we call backward baselines. Backward baselines quantify how much of a predictor's strength should be attributed to a given set of background variables. Applying our tools, we empirically find that in representative prediction problems involving longitudinal panel data, backward prediction contributes significantly to the strength of the predictor.
The strength of backward prediction has important consequences. When prediction draws primarily on background factors, it is misleading to interpret the predictor as an individualized risk score. After all, backward predictors are invariant under individual variation and only depend on the individual's background. The strength of backward prediction speaks to the social and environmental constitution of the target of prediction. Consequently, there is no relative advantage to targeting interventions at the individual level on the basis of backward predictors. Even if individual-level interventions are helpful, there is no added benefit in targeting them based on individual predictions compared with targeting based on background variables.
To give an example, we consider predicting an individual's full-year medical expenditure based on the longitudinal MEPS panel survey data. We find that a predictor trained only on background variables nearly matches the predictive performance of classifiers trained on all features. Our finding echoes scholarship about the social determinants of health and medical expenditure [Kri11].
We envision that backward baselines will form a useful component of the machine learning evaluation toolkit. Straightforward to apply, backward baselines provide valuable insights into the interpretation and validity of prediction in consequential settings.
Predicting the past. To introduce our discussion of backward prediction, we consider an explicit data generating process that moves through time. In Figure 1, we depict the temporal dynamics, in the form of a causal graph, with time evolving from left to right. We think of X as individual-level covariates measured today, and Y as an outcome of interest to be measured in the future. In addition to the standard supervised learning variables, we also model an additional context variable W, predating the measurement of the covariates or outcome, that may directly influence both X and Y. Concretely, X could represent a record of an individual's educational, personal, and financial history, used to predict income Y measured in 10 years, and W could represent specific demographic features from the past, like childhood household income. This explicit temporal model elucidates the distinction between forward prediction and backward prediction. Forward predictors model how the present measurements X causally affect the future outcome Y. Backward predictors estimate the outcome by first inferring the past context W from X, then predicting Y based on W. In other words, backward prediction provides information about Y that could equally be explained by the past context W.

Backward baselines. Machine learning practitioners often build models using any and every predictive pathway available, including the backward pathway. Our goal is to elucidate and disentangle the prediction pathways that a given predictor uses. Backward baselines provide a careful accounting of the predictor's use of the forward and backward predictive pathways. The baselines are lightweight to run, only requiring input-output access to the predictive model, and are built on simple but rigorous statistical foundations. For instance, a key challenge in reasoning about backward prediction is that the context W is typically robustly encoded within an individual's covariates X. That is, even if we explicitly censor the attributes defining the context, backward prediction from X may still be possible. Backward baselines handle this statistical subtlety gracefully, providing guaranteed estimates of the forward and backward predictive power, regardless of how redundantly W is encoded in X.
Our work establishes backward baselines as an effective tool for investigating predictive models. Our perspective is not that the backward prediction pathway is inherently problematic. Rather, we advocate that investigators use backward baselines to understand and contextualize performance numbers in prediction tasks. Adding the baselines to the "report card" for supervised learning would add clarity about the underlying mechanisms used to predict. This clarity, in turn, may inform debate about whether machine learning is an appropriate tool for the task at hand. If model builders cannot find a predictor that improves significantly over backward baselines, we should hesitate before turning prediction into policy.

Backward baselines
We work over a data universe X × Y, where X is a feature space and Y is a discrete set of labels in the case of classification problems. For regression problems, we take Y to be the real line R. Fixing a loss function ℓ : Y × Y → R+, for a given predictor h : X → Y, we measure the fit of the predictor in terms of its expected loss over a distribution (X, Y) ∼ D supported on X × Y:

ℓ_D(Y, h(X)) = E_{(X,Y)∼D}[ℓ(Y, h(X))].

Throughout, we assume ℓ is symmetric in its two arguments. We study both binary classification and regression, focusing on the zero-one loss ℓ(y, ŷ) = 1[y ≠ ŷ] for classification and the squared loss ℓ(y, ŷ) = (y − ŷ)² for regression. We extend this standard setup with a random variable W, jointly distributed with (X, Y) and supported on a discrete domain W. The variable W represents a context of both the individual covariates and the outcome of interest. While we model them as separate random variables, at times, we assume that X encodes W, explicitly or implicitly. For instance, in Proposition 2(a), we assume that perfect reconstruction of W is statistically possible from X.
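To make this setup concrete, the following sketch simulates a data-generating process in the spirit of Figure 1. It is our own illustration rather than code from the paper's repository; the variable names, effect sizes, and thresholds are arbitrary assumptions. The context W is drawn first and influences both a covariate and the outcome, while a second covariate carries the forward, individual-level signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

W = rng.integers(0, 4, size=n)              # discrete past context
X_forward = rng.normal(size=n)              # individual-level variation
X_backward = W + 0.5 * rng.normal(size=n)   # covariate that largely encodes W
X = np.column_stack([X_forward, X_backward])

# The outcome depends on the context (backward pathway) and on the
# individual-level covariate (forward pathway).
Y = (0.8 * W + 1.0 * X_forward + rng.normal(size=n) > 1.5).astype(int)
```

We reuse this synthetic example in the code sketches that follow.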

Backward prediction baseline.
In our typical story of backward prediction from X, we imagine that a predictor first resolves W from X, then predicts Y from W. As such, if we are concerned that a predictor h is using the backward pathway, a natural baseline to compare against is predicting Y directly from the context W. Fixing a loss ℓ, we take g* : W → Y to be the statistically optimal predictor of Y from W, and consider the following backward prediction baseline, ℓ_D(Y, g*(W)).
The loss ℓ_D(Y, g*(W)) provides a fundamental baseline for how predictable the outcome Y is from W. By comparing this baseline to ℓ_D(Y, h(X)), we can better contextualize the quality of predictions h produces. In particular, if h does not achieve significantly better loss than g*, then h is not a very impressive predictor: rather than using machine learning to make decisions, you could get the same performance simply by stereotyping based on W.
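As a minimal sketch of how the backward prediction baseline can be estimated, the snippet below continues the synthetic example above. The per-context majority vote is our own empirical stand-in for the optimal predictor g* over a discrete W, and the logistic model and zero-one loss are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X, W, Y come from the synthetic example above.
X_tr, X_te, W_tr, W_te, Y_tr, Y_te = train_test_split(
    X, W, Y, test_size=0.3, random_state=0)

def fit_majority(context, target):
    """Per-context majority vote: an empirical optimal predictor for a discrete context."""
    return {w: np.bincount(target[context == w]).argmax() for w in np.unique(context)}

h = LogisticRegression().fit(X_tr, Y_tr)      # predictor from all covariates X
g_star = fit_majority(W_tr, Y_tr)             # backward baseline from W alone

loss_h = np.mean(h.predict(X_te) != Y_te)
loss_backward = np.mean(np.array([g_star[w] for w in W_te]) != Y_te)
print("loss of h(X):                ", loss_h)
print("backward prediction baseline:", loss_backward)
```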
Backward rounding baseline. While the optimal backward predictor g* is fundamental, it only depends on the underlying relationship between W and Y, and does not depend on any predictor from X. Given such a predictor h : X → Y, we may instead consider a baseline based on prediction of h(X) from W. We consider the backward rounding baseline, defined by g_h : W → Y, which we take to be the optimal predictor of h(X) from W.
Intuitively speaking, if the prediction h(X) is itself predictable from W, then it seems h must be using the backward pathway. Contrapositively, if h is a forward predictor, then h(X) cannot be predicted from W. An interesting aspect of this baseline is that g_h can be estimated even when true outcomes are unavailable, unobserved, or unreliable. Moreover, in settings where predictions are performative, in the sense of influencing the distribution on outcomes [PZMDH20], the backward prediction baseline may not be applicable, while the backward rounding baseline is unaffected.
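Continuing the same sketch, the backward rounding baseline can be estimated without any outcome labels: we simply fit a predictor of h(X) from W. The helper fit_majority, the model h, and the data split are carried over from the previous snippet and remain our own illustrative choices.

```python
import numpy as np

# No labels Y are used below: only the model's predictions and the context W.
h_pred_tr, h_pred_te = h.predict(X_tr), h.predict(X_te)

g_h = fit_majority(W_tr, h_pred_tr)                # predict h(X) from W
rounded = np.array([g_h[w] for w in W_te])

rounding_baseline = np.mean(rounded != h_pred_te)  # estimate of Pr[h(X) != g_h(W)]
print("backward rounding baseline:", rounding_baseline)
```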
To understand what these two baselines measure exactly and how they relate, we need to formally define backward and forward prediction.

Distinguishing forward and backward prediction
We draw a distinction between two forms of prediction of Y from X: forward prediction models the mechanism by which X influences Y; backward prediction forecasts Y from X indirectly, by exploiting correlations through the context W. Because W may be redundantly encoded within X, we cannot simply remove W from the features to evaluate the predictive power along the forward pathway. Instead, we define forward and backward prediction based on conditional independence statements involving Y, h(X), and W.

Definition 1. A predictor h : X → Y is a backward predictor if h(X) ⊥ Y | W, and a forward predictor if h(X) ⊥ W.
Most classifiers will not be pure forward or pure backward predictors; instead, h(X) will have some correlation with Y that goes through W and some correlation that is independent of W. By comparing the loss achieved by a classifier h to one of our backward baselines, we can understand how close the classifier is to a backward predictor.

Backward prediction as random targeting.
Connecting Definition 1 to our motivating question, we see that using a backward predictor as the basis for intervention on individuals is fruitless.
In particular, once we condition on a category defined by W, backward predictions h(X) can be randomized across individuals with no loss.
Fact 1. Suppose h : X → Y is a backward predictor. For any setting of the context W = w, consider a randomized prediction strategy, where R_w is an independent random variable distributed as h(X) | W = w. Then, the loss of h is equal to that of random prediction according to R_w.
This fact follows immediately from the definition of backward prediction through conditional independence, but it gives a powerful conclusion. Given a predictor that uses the backward pathway through W, once we condition on a particular setting of W = w, the predictions h(X) may as well be randomly assigned. That is, using a backward predictor as the basis of intervention is analogous to stereotyping according to the categories defined by W, and then targeting the intervention randomly within categories.
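The following is a quick numerical check of Fact 1 in the synthetic example introduced earlier; it is an illustration of ours, not an experiment from the paper. A predictor trained only on the covariate that encodes W is (approximately) a backward predictor, and permuting its predictions within each context group leaves its loss essentially unchanged.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X_tr, X_te, W_te, Y_tr, Y_te, and rng carry over from the earlier sketches.
b = LogisticRegression().fit(X_tr[:, [1]], Y_tr)    # uses only the W-encoding covariate
b_pred = b.predict(X_te[:, [1]])

shuffled = b_pred.copy()
for w in np.unique(W_te):
    idx = np.where(W_te == w)[0]
    shuffled[idx] = rng.permutation(shuffled[idx])   # randomize predictions within W = w

print("loss of backward predictor:   ", np.mean(b_pred != Y_te))
print("loss after within-W shuffling:", np.mean(shuffled != Y_te))
```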

Properties of backward baselines
In this section, we develop basic theory for backward baselines, demonstrating how these baselines give us a lens into understanding backward and forward prediction. We study the basic properties of backward baselines, establish interpretations of these baselines in terms of familiar statistical quantities, and draw connections to concepts from the study of fairness in prediction. We defer proofs of all formal claims to Appendix A.

Basic properties
Here, we establish some basic properties of backward baselines. These properties are intuitive, but also reveal subtleties in what we can (and cannot) conclude about backward and forward prediction from backward baselines. We start with three simple properties of backward baselines that help us compare the predictive power from X to the predictive power from W.
Proposition 2. The following properties of backward baselines hold.
(a) When X encodes W, there exists a predictor h* : X → Y that achieves loss at most the backward prediction baseline.
(b) If h : X → Y is a backward predictor, then its loss is at least the backward baselines: ℓ_D(Y, g*(W)) ≤ ℓ_D(Y, h(X)) and ℓ_D(h(X), g_h(W)) ≤ ℓ_D(Y, h(X)).
(c) If h : X → Y is a forward predictor, then the backward-rounded predictor g_h is a constant predictor, so the backward baselines can do no better than the best constant prediction of Y or h(X), respectively.
These straightforward properties provide a foundation for reasoning about backward and forward prediction. Proposition 2(a) establishes that the backward prediction baseline is a reasonable minimum standard for predictive accuracy from X. Proposition 2(b)-(c) can be viewed as one-sided tests that let us demonstrate that a predictor is not a (pure) backward or forward predictor.
For a backward predictor, the backward baselines lower bound the loss ℓ_D(Y, h(X)). On the other hand, for a forward predictor that achieves nontrivial loss (i.e., beating a constant), the backward baselines upper bound the loss. While Proposition 2(b)-(c) each provide one-sided tests, together they can tell a rich story. For instance, suppose a forward predictor f and a backward predictor b achieve similar loss ℓ_D(Y, f(X)) ≈ ℓ_D(Y, b(X)). We may distinguish these cases by backward rounding the predictors to g_f and g_b. Rounding f to g_f will cause a significant deterioration in loss (to that of a constant predictor), but the rounded backward predictor g_b will maintain the predictive power of b. In this case, we may still decide to reject f if it achieves mediocre accuracy, but cannot reliably reject it on the basis of being a backward predictor.
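To illustrate this diagnostic, the sketch below, which continues our synthetic example, fits a (roughly) forward predictor f on the individual-level covariate and a (roughly) backward predictor b on the W-encoding covariate, and compares how much predictive power survives backward rounding. The construction and names are our own assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X_tr, X_te, W_tr, W_te, Y_tr, Y_te, and fit_majority carry over from earlier sketches.
f = LogisticRegression().fit(X_tr[:, [0]], Y_tr)   # forward-ish: individual-level covariate
b = LogisticRegression().fit(X_tr[:, [1]], Y_tr)   # backward-ish: covariate encoding W

for name, model, col in [("f (forward)", f, 0), ("b (backward)", b, 1)]:
    g_model = fit_majority(W_tr, model.predict(X_tr[:, [col]]))   # backward rounding
    rounded_te = np.array([g_model[w] for w in W_te])
    print(name,
          "loss:", np.mean(model.predict(X_te[:, [col]]) != Y_te),
          "rounded loss:", np.mean(rounded_te != Y_te))
```

On this synthetic data, rounding f should collapse it to a near-constant predictor, while rounding b should preserve most of its accuracy, matching the story told by Proposition 2(b)-(c).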

Rounding recovers optimal backward prediction
As discussed, we can define backward baselines in terms of the optimal predictor g* of Y from W, and also in terms of the backward-rounded predictor g_h of h(X) from W. In general, these two predictors realize different baselines; however, if h(X) is an accurate predictor of Y, then intuitively, it would seem that the baselines defined by g* and g_h might be similar. For instance, for classification according to the zero-one loss and regression according to the squared loss, these predictors have closed forms.
We introduce the following technical conditions, which are useful for analyzing various properties of backward baselines.
Intuitively, confidence says that h(X) does not underestimate the probability that Y takes its most likely value within the context W. Such (over)confidence of classifiers is typically observed in practice [GPSW17].
Weak calibration rules out predictors that blatantly ignore variation in Y based on the context W (including pure forward predictors). Definition 3 relaxes traditional notions of calibration [Daw85] and is implied by loss minimization, both in theory and in our experiments. In particular, a weakly calibrated predictor matches expectations with Y conditional on W, that is, E[h(X) | W] = E[Y | W]. We show that under these conditions, backward rounding obtains optimal prediction of Y from W (Proposition 3, restated formally in Appendix A).
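As a rough empirical check of this property, one can compare the group-wise mean of the model's scores to the group-wise mean of the outcome. The helper below is our own sketch; it tests the matching-expectations property E[h(X) | W] = E[Y | W] rather than reproducing the full formal statement of Definition 3, and `scores` would be predicted probabilities (e.g., `h.predict_proba(X_te)[:, 1]` for a scikit-learn classifier).

```python
import numpy as np

def weak_calibration_gaps(scores, y, w):
    """Gap |E[h(X) | W = v] - E[Y | W = v]| for every context value v."""
    return {v: abs(scores[w == v].mean() - y[w == v].mean()) for v in np.unique(w)}

# Example, continuing the synthetic sketch:
# gaps = weak_calibration_gaps(h.predict_proba(X_te)[:, 1], Y_te, W_te)
```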
The interchangeability of g* and g_h may be useful practically and conceptually. For instance, the analysis of Proposition 3 reveals that the backward rounding baseline lower bounds the backward prediction baseline, ℓ_D(h(X), g_h(W)) ≤ ℓ_D(Y, g*(W)) (which, in turn, gives a strengthening of Proposition 2(b) under confidence or weak calibration).

Measuring forward predictive power
A key motivation for our study of backward baselines was the observation that, given a predictor h, determining the extent of forward prediction may be challenging. We show that under natural conditions, the backward rounding baseline for g_h reveals insight into the forward predictive power of h. Conveniently, evaluating this baseline only requires black-box access to the predictive model and (X, W) samples, not labels Y. The lightweight nature of the baseline makes it an appealing option to audit for backward prediction, especially for proprietary predictive models. Concretely, we show that the backward rounding baseline gives insight into the covariance between h(X) and Y after conditioning on W.
In other words, if h(X) carries lots of information about Y, even after conditioning on W, then the backward rounding baseline will be large. The arguments to establish Proposition 4 are elementary, but the consequences are powerful. An auditor, who is given only black-box access to a classifier or predictor h, can reliably determine when h is a backward predictor by evaluating the backward rounding baseline without any labels Y from the true distribution. Concretely, the backward rounding baseline allows the auditor to establish an upper bound on the amount of information about Y contained in h(X) that isn't explained by W.
In the classification setting, the bound obtained by the rounding baseline is an inequality, but it is tighter than the bound given by the backward prediction baseline. In the regression setting, the rounding baseline also characterizes the difference between the backward prediction baseline and the expected loss of h, which would otherwise require labeled outcomes Y to evaluate. In Appendix A, we describe an additional backward baseline for classification, which uses labels Y to give an exact characterization of the forward predictive power of h.

Backward baselines and demographic parity
When W is defined by demographic features that are considered to be sensitive attributes, forward prediction recovers the notion of demographic parity from the literature on fair machine learning [DHP+12]. While a natural desideratum for equal treatment under a decision rule, the shortcomings of demographic parity as a notion of fairness have been documented extensively [DHP+12, LDR+18]. As such, requiring pure forward prediction may result in unintended and undesirable consequences, just as blinding predictors to a sensitive attribute can.
Exploring the analogy between backward baselines and fair prediction sheds new light on demographic parity and stereotyping. In Appendix B, we formalize a duality between forward and backward prediction. Translating the duality into the language of fairness, the optimal unconstrained prediction decomposes into the optimal prediction under demographic parity plus the optimal "stereotyping" prediction that makes its judgments solely based on the sensitive attribute.

Empirical evaluation of backward baselines
The goal of our experiments is to empirically evaluate backward baselines. Toward this goal, we searched for datasets that meet at least four important criteria:
1. The outcome variable demonstrably lies in the future relative to the features.
2. The dataset contains general demographic background variables, as well as features specific to the prediction task.
3. Non-trivial prediction accuracy is possible.
Many machine learning datasets are unclear about the temporality of the outcome variable, thus falling short of the first criterion. For example, several datasets about credit default prediction do not clarify whether data points correspond to individuals who have already defaulted, or individuals who ended up defaulting some specific time after feature collection.
Well-suited to our evaluation are longitudinal panel surveys. Each panel consists of some number of survey participants who are interviewed in multiple rounds (or waves). By taking features from one round to predict outcomes in a later round, we can create prediction problems where outcomes and features are temporally well-separated. We choose two major panel surveys relating to medical expenditure and income. Complementing these panel surveys, we also consider a notorious dataset from the criminal legal domain. Extended results and full details are in Appendix C and Appendix D. The code is available at: https://github.com/socialfoundations/backward_baselines

Medical Expenditure Panel Survey (MEPS)
The survey distinguishes between demographic variables and variables corresponding to survey questions in each of the rounds of the two panels. We create a prediction task whose goal is to predict a full-year outcome from Round 3 of Panel 23 and Round 1 of Panel 24. The target variable measures total health care utilization across the year. We create a roughly balanced binarization of the target variable. A precise definition and further details are in the appendix.
We compute backward baselines in terms of the features age, race, age and race together, as well as all variables designated as demographic by the survey documentation. These include additional variables relating to age, race and ethnicity, marital status, nationality, and languages spoken. Figure 2 summarizes our findings. In particular, backward baselines trained on all demographic background variables match nearly all of the predictive performance of the classifiers trained on all features, similarly across three different prediction models. An extended set of figures is included in Appendix C.

Survey of Income and Program Participation (SIPP)
The Survey of Income and Program Participation (SIPP) is an important longitudinal survey conducted by the U.S. Census Bureau, aimed at capturing income dynamics as well as participation in government programs.
We consider Wave 1 and Wave 2 of the SIPP 2014 panel data. The target variable is based on the official poverty measure (OPM), a cash-income based measure of poverty. We compute this measure based on Wave 2 data. We again discretize the measure to obtain two roughly balanced classes for our binary prediction task. The goal is to predict this outcome based on features collected in Wave 1. After cleaning and preprocessing, our data contains 39,720 rows and 54 columns. We consider the background variables education, race, education and race together, as well as all demographic variables, specifically age, gender, race, education, marital status, and citizenship status.
In Figure 3, we restrict our attention to the logistic regression model. The other models perform similarly, and the full set of results can be found in Appendix C.

ProPublica COMPAS Recidivism Scores
A proprietary recidivism risk score, called COMPAS, was the subject of a notorious investigation into racial bias by ProPublica [ALMK16] in 2016. As part of the investigation, ProPublica released a dataset of COMPAS scores for defendants, together with two-year recidivism outcomes. The dataset released by ProPublica has significant and well-documented issues that make it inadequate for the development of new risk scores as well as fairness interventions [BZZ+21, Bar19]. In experimenting with the COMPAS dataset, our primary goal is to demonstrate the effectiveness of backward baselines in auditing problematic risk predictors. The results of backward baselines echo earlier findings that the performance of the COMPAS scores can be achieved by simple models [RWC20, WHPR22].
Note that, as is common in algorithmic audit scenarios, we do not have access to the training data used to produce the COMPAS scores. This is, fortunately, not required for evaluating backward baselines. We only need the scores, as well as associated demographic information. Figure 4 evaluates backward baselines against the COMPAS scores. The results are rather striking in how well backward baselines do in comparison. In particular, a single feature (prior convictions) appears to account for all of the predictive power of the COMPAS score.
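The following sketch shows how such a black-box audit could look in code. The file and column names follow the ProPublica release (compas-scores-two-years.csv, with columns such as decile_score and priors_count) but should be verified against the data actually used; the binarization threshold and logistic model are our own illustrative choices, not the paper's exact pipeline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("compas-scores-two-years.csv")

score = (df["decile_score"] >= 5).astype(int)     # binarized COMPAS risk score
W = df[["priors_count"]]                          # a single candidate background feature

W_tr, W_te, s_tr, s_te = train_test_split(W, score, test_size=0.3, random_state=0)
g_h = LogisticRegression().fit(W_tr, s_tr)        # backward rounding baseline: W -> score

# How much of the published score is reproducible from prior convictions alone?
print("agreement with binarized COMPAS score:", g_h.score(W_te, s_te))
```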

Additional Related Works
On the level of techniques, backward baselines bear resemblance to a number of tools developed in the causal inference and machine learning communities. Backward baselines do not make any assumptions on the underlying causal structure between X, Y, and W. Still, the backward baseline toolkit is similar in spirit to tools developed for understanding the causal structure between variables, both for measuring confounding [McN03, Pea09] and for mediation analysis [MFF07]. At first glance, backward baselines may also feel similar to the study of spurious correlation, which has received considerable attention in the ML literature (e.g., [SRKL20, SHL20, VDYE21]). We caution, however, that correlation with background (or individual) features should not be understood as "spurious". Instead, correlations with background features reveal important structure in the data distribution, how the predictor exploits this structure, and in turn, when intervening on the basis of prediction may be ineffective.
As discussed, backward baselines also add new color to concepts studied in algorithmic fairness. In particular, under a specific causal interpretation, forward prediction may be understood similarly to the notion of Counterfactual Fairness [KLRS17]. This connection can be made somewhat formal, as the latter notion has been shown to be closely related to the notion of Demographic Parity as well [RW22]. Indeed, in some accounts of fairness, understanding the causal pathways of prediction is essential [KRCP+17].
Finally, the findings of a recent empirical study of prediction systems in the American public school system are closely related to our theoretical work. [PBHA23] studies the Early Warning System (EWS) of the Wisconsin Department of Public Instruction. The research concludes that, largely, the individual-risk prediction system is (a) effective at prediction, and (b) reliant predominantly on environmental features (e.g., what percent of an individual's school qualifies for free or reduced lunch). In this way, while the predictions are accurate, they are backward predictors, and do not provide an effective tool for intervention on individuals' educational plans.

Conclusions
Our contribution has a normative, a theoretical, and an empirical component. We argue that the distinction between predicting the future of an individual and reproducing the past is central to the debate around where and how we should use statistical methods to make consequential decisions. The effectiveness of backward prediction, when observed, should call into question support for prediction as policy, and instead redirect focus toward interventions that target the background conditions.
Theoretically, we begin to develop a statistical learning theory of backward baselines. The theory helps simplify the landscape of possible backward baselines, while clarifying how to interpret different backward baselines. A notable outcome of our theory is that it supports the use and interpretation of a backward baseline that requires no observed outcomes. At the outset, it was not obvious that a meaningful backward baseline without measurement of the target variable was possible. This finding enables auditing without measured outcomes: an investigator can probe a predictive system with access to only background variables and predictions.
On the empirical side, we show the strength and versatility of backward baselines on a variety of datasets. Utilizing multiple waves of longitudinal panel surveys, our evaluation is careful about the temporality of features and outcomes. Along the way, we contribute to a better empirical understanding of how machine learning leverages past contexts to predict future life outcomes.
In conclusion, we propose backward baselines as a simple, broadly applicable tool to strengthen evaluation and audit practices in the use of machine learning.

A Omitted Proofs
Proposition (Restatement of Proposition 2). The following properties of backward baselines hold.
(a) When X encodes W, there exists a predictor h* : X → Y that achieves loss at most the backward prediction baseline.
(b) If h : X → Y is a backward predictor, then its loss is at least the backward baselines.
(c) If h : X → Y is a forward predictor, then the backward-rounded predictor g_h is a constant predictor.
Proof of Proposition 2. We prove each statement separately.
(a) By the assumption that X encodes W, i.e., that I(W; X) = H(W), there exists a computable map M : X → W such that for any (X, W, Y) ∼ D, M(X) = W. Thus, the predictor h* : X → Y defined as the composition of g* and M is feasible, and achieves loss ℓ_D(Y, h*(X)) = ℓ_D(Y, g*(W)).
(b) Suppose h is a backward predictor; that is, h(X) ⊥ Y | W. Consider the loss achieved by h on D.
Note that by the conditional independence of h(X) and Y, we can take the expectation over X and Y conditioned on W separately. Then, the expected loss over the choice of h(X) ∈ Y is lower bounded by the optimal choice ŷ ∈ Y.
Thus, in all, we conclude that ℓ_D(Y, g*(W)) ≤ ℓ_D(Y, h(X)). The second inequality follows similarly, by lower bounding the expected loss over the draw of Y by the optimal ŷ ∈ Y, which results in g_h(W).
(c) Suppose h is a forward predictor; that is, h(X) ⊥ W. Consider the definition of g_h, where the equality between the conditional and unconditional expectation follows by independence. Thus, g_h : W → Y must be a constant predictor, and can only hope to compete with the best fixed prediction in predicting Y or h(X).
Proposition (Formal restatement of Proposition 3). Suppose a classifier h : X → {0, 1} is confident on Y over W. Then, g_h(W) = g*(W). Likewise, suppose a predictor h : X → [0, 1] is weakly calibrated to Y over W. Then, g_h(W) = g*(W).

Proof of Proposition 3. First, we prove the equality for classifiers. Note that the optimal prediction is entirely determined by which side of 1/2 the probability that the outcome is 1 falls on. If g*(W) = 1, then Pr[Y = 1 | W] ≥ 1/2; by confidence, h(X) does not underestimate this probability, so g_h(W) = 1 as well. The statement holds analogously for the case g*(W) = 0.
Next, we prove the equality for regression predictors. By the definition of weak calibration, we have that h matches expectations with Y conditional on W:

E[h(X)|W] = E[Y|W].
Thus, by the closed-form solution for g* and g_h, we have the stated equality.
Proposition (Restatement of Proposition 4). Suppose a classifier h : X → {0, 1} is confident on Y over W. Let ℓ_W(h, g_h) denote the backward rounding baseline Pr[h(X) ≠ g_h(W) | W] conditioned on W. Then,

Cov(h(X), Y | W) ≤ ℓ_W(h, g_h) · (1 − ℓ_W(h, g_h)).

Proof of Proposition 4. Suppose h : X → {0, 1} is confident on Y over W. First, the covariance Cov(h(X), Y | W) is upper bounded by the variance Var(h(X) | W). Then, we express the variance of this Bernoulli random variable in terms of the backward rounding baseline. Specifically, for either ŷ ∈ {0, 1},

Var(h(X) | W) = Pr[h(X) = ŷ | W] · (1 − Pr[h(X) = ŷ | W]) = ℓ_W(h, g_h) · (1 − ℓ_W(h, g_h)).

Further, by confidence and Proposition 3, we can bound the probabilities: by properties of the variance of Bernoulli random variables, h(X) given W is more peaked than Y given W, and so has lower variance.
Given a weakly calibrated predictor h : X → [0, 1], we expand the difference in squared loss as follows:

ℓ_D(Y, g_h(W)) − ℓ_D(Y, h(X)) = E[(Y − g_h(W))²] − E[(Y − h(X))²] = 2 · E[Y · (h(X) − g_h(W))] − E[h(X)² − g_h(W)²].

By the fact that g_h(W) = E[h(X) | W], the second term can be rewritten as the squared error between g_h and h:

E[h(X)² − g_h(W)²] = E[(h(X) − g_h(W))²].

The first term can be rewritten as the expected covariance between Y and h(X) conditioned on W:

E[Y · (h(X) − g_h(W))] = E[Cov(Y, h(X) | W)].

In sum, the difference in losses is equal to 2 · E[Cov(Y, h(X) | W)] − E[(h(X) − g_h(W))²]. Finally, if h is weakly calibrated to Y over W, then the expected covariance is equal to the squared distance from g_h to h, where this step follows by the assumption that h is weakly calibrated to Y over W. Thus, the difference in losses simplifies to the squared difference between g_h and h, E[(h(X) − g_h(W))²].
An alternative backward baseline for classification. We present an additional backward baseline for classifiers that may be of interest when an underlying score function is not available. In this baseline, we manipulate the distribution over outcomes, leaving the predictions fixed. Specifically, given a sample (X, W, Y) ∼ D, we resample the outcome Ỹ ∼ D_{Y|W}, ensuring that h(X) and Ỹ are conditionally independent given W. We show that for the zero-one loss, the difference between the backward baseline ℓ_D(Ỹ, h(X)) and ℓ_D(Y, h(X)) is proportional to the expected conditional covariance of Y and h(X) given W.
Proposition 5. For any classifier h : X → {0, 1},

ℓ_D(Ỹ, h(X)) − ℓ_D(Y, h(X)) = 2 · E[Cov(Y, h(X) | W)].

Proof. We expand the difference in zero-one loss by exploiting the identity Pr[Y ≠ h(X)] = E[Y] + E[h(X)] − 2 · E[Y · h(X)] for binary Y and h(X), and using the fact that E[Ỹ · h(X)] = E[E[Y | W] · E[h(X) | W]], which follows because Ỹ ∼ D_{Y|W} is sampled conditionally independently from the distribution on Y given W. Since E[Ỹ] = E[Y], the difference in losses equals 2 · E[Y · h(X)] − 2 · E[Ỹ · h(X)] = 2 · E[Cov(Y, h(X) | W)].

B Backward Prediction and Demographic Parity
While conceived from different vantages, backward baselines and fair machine learning share similarities in perspective and technical structure. On a technical level, pure forward prediction is equivalent to demographic parity, a notion of fairness introduced by [DHP+12]. Based on this observation, certain insights about backward baselines have an analogue in fair prediction, and vice versa. For instance, we note that in combination, Proposition 2 and Proposition 3 imply that forward predictors cannot be calibrated to Y over W. Translating this observation into the language of fairness in prediction, we recover a specific case of the well-known results on the incompatibility of calibration and parity-based definitions of fairness in prediction [KMR17, Cho17].
In addition to giving insight into the backward rounding baseline, Proposition 4 shows a formal sense in which forward and backward predictors are orthogonal to one another. In particular, for weakly calibrated regression predictors h : X → [0, 1], the identity

E[(Y − g_h(W))²] = E[(Y − h(X))²] + E[(h(X) − g_h(W))²]

is a sort of Pythagorean theorem, stating that the variation in Y after accounting for g_h(W) can be broken into the variation in Y given h(X) and the variation in h(X) given g_h(W).
Connecting the backward baselines framework to fairness in prediction suggests a simple algorithm for learning predictors satisfying demographic parity that relies only on unconstrained learning primitives. First, we learn to predict Y as h(X); then, we learn to predict h(X) as g_h(W); finally, we return f_α(X, W) defined as

f_α(X, W) = h(X) − α · g_h(W)

for α ∈ [0, 1]. Taking α = 1 achieves a relaxed first-order demographic parity. Specifically, f_1(X, W) has a constant expectation over all W:

E[f_1(X, W) | W] = E[h(X) − g_h(W) | W] = E[h(X) | W] − g_h(W) = 0.

In effect, f_1(X, W) predicts optimally according to X and then removes all variation that can be accounted for through W. Other choices of α may be interesting to interpolate between forward and backward prediction modes.
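A minimal sketch of this construction follows, under the assumptions that h is a regression-style predictor, that W is discrete so that g_h can be estimated by group means, and that a linear model suffices; the model choice and function names are our own.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_f_alpha(X, W, Y, alpha=1.0):
    """Return f_alpha(X, W) = h(X) - alpha * g_h(W) built from unconstrained learners."""
    h = LinearRegression().fit(X, Y)                         # step 1: learn to predict Y from X
    scores = h.predict(X)
    g_h = {w: scores[W == w].mean() for w in np.unique(W)}   # step 2: predict h(X) from W
    def f_alpha(X_new, W_new):                               # step 3: residual predictor
        return h.predict(X_new) - alpha * np.array([g_h[w] for w in W_new])
    return f_alpha

# With alpha = 1, the returned predictor has (empirically) constant expectation across W,
# i.e., a relaxed first-order demographic parity.
```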

C Details on Empirical Evaluation
In this section, we show all figures for all baselines, classifiers, and metrics that we considered.We also provide additional details on the data sources, feature engineering, and target variable creation.
In all bar plots, the height of the bar is the mean value from 10 different random seeds, and the error bars indicate one standard deviation across the 10 random seeds. In the case of ROC curves, the plot shows 10 curves overlaid from 10 different random seeds. None of the experiments require significant compute resources.
Given features X, context W, a given predictor Ŷ, and target variable Y, our plots evaluate five different methods (a schematic of these configurations is sketched below):
• XYY: Train on (X, Y), test model on (X, Y)
• WYY: Train baseline on (W, Y), test baseline on (W, Y) (backward prediction baseline)
• WŶY: Train baseline on (W, Ŷ), test baseline on (W, Y) (equivalent to backward prediction baseline)
• WYŶ: Train baseline on (W, Y), test baseline on (W, Ŷ) (equivalent to backward rounding baseline)
• WŶŶ: Train baseline on (W, Ŷ), test baseline on (W, Ŷ) (backward rounding baseline)
In the main body of the paper we included only two baselines and omitted the equivalent ones.
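The following is a schematic of these five configurations in our own notation, not the repository's actual evaluation code; `fit` stands for any learner with a `.fit(features, targets)` / `.predict(features)` interface, `loss` for any metric, and train/test splitting is omitted for brevity.

```python
def evaluate_configurations(fit, loss, X, W, Y, Y_hat):
    """X: features, W: background variables, Y: outcomes, Y_hat: a model's predictions."""
    return {
        "XYY":   loss(Y,     fit(X, Y).predict(X)),      # train on (X, Y),    test on (X, Y)
        "WYY":   loss(Y,     fit(W, Y).predict(W)),      # backward prediction baseline
        "WYhY":  loss(Y,     fit(W, Y_hat).predict(W)),  # train on (W, Yhat), test on (W, Y)
        "WYYh":  loss(Y_hat, fit(W, Y).predict(W)),      # train on (W, Y),    test on (W, Yhat)
        "WYhYh": loss(Y_hat, fit(W, Y_hat).predict(W)),  # backward rounding baseline
    }

# Usage sketch with scikit-learn:
# from sklearn.linear_model import LogisticRegression
# from sklearn.metrics import zero_one_loss
# results = evaluate_configurations(lambda F, t: LogisticRegression().fit(F, t),
#                                   zero_one_loss, X, W, Y, h.predict(X))
```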

Figure 1: Example data generating process for covariates X, outcome Y, and context W. Time starts from the left with context W and evolves forward to the right, realizing X then Y.
The Medical Expenditure Panel Survey (MEPS) is a set of large-scale surveys of families and individuals, their medical providers, and employers across the United States, aimed at providing insights into health care utilization. We work with the publicly available MEPS 2019 Full Year Consolidated Data File. The dataset we consider has 28,512 instances corresponding to all persons that were part of one of the two MEPS panels overlapping with calendar year 2019. Specifically, Panel 23 has Rounds 3-5 in 2019, and Panel 24 has Rounds 1-3 in 2019. Round 3 of Panel 23 and Round 1 of Panel 24 are the first of each panel in 2019.

Demographic features. The full list of demographic features we use is: GOVCAT31, VAPROG31, VAPRAT31, IHS31, IHSAT31, PRIDK31, PRIEU31, PRING31, PRIOG31, PRINEO31, PRIEUO31, PRSTX31, PRIV31, PRIVAT31, VERFLG31, DENTIN31, DNTINS31, PMEDIN31, PMDINS31, PMEDUP31, PMEDPY31, AGE31X, MARRY31X, FTSTU31X, REFRL31X, MOPID31X, DAPID31X, HRWG31X, DISVW31X, HELD31X, OFFER31X, TRIST31X, TRIPR31X, TRIEX31X, TRILI31X, TRICH31X, MCRPD31X, TRICR31X, TRIAT31X, MCAID31X, MCARE31X, MCDAT31X, PUB31X, PUBAT31X, INS31X, INSAT31X, SEX, RACEV1X, RACEV2X, RACEAX, RACEBX, RACEWX, RACETHX, HISPANX, HISPNCAT, EDUCYR, HIDEG, OTHLGSPK, HWELLSPK, BORNUSA, WHTLGSPK, YRSINUS.

It might seem that the question we ask runs headlong into the centuries-old problem of induction: How do we draw conclusions about the future based on past experience? But our study addresses a distinct and more specific question: Does a given predictor capitalize on individual behaviors that influence the outcome, or on historical patterns that correlate with the outcome? Backward baselines ask about induction from what. Initiating this investigation, we start from factors that apparently predate the point of prediction, such as demographic background variables, and test to what extent a predictor utilizes these factors. Our finding that backward prediction often plays a significant role in forecasting individuals' outcomes adds relevant evidence to the ongoing deliberation about the meaning of individual risk scores [San03, Daw17, DKR+21, Dwo21]. Our present investigation into backward baselines is limited to settings where the variables defining a past context W are measured and observed by the auditor. A fundamental question in evaluating backward baselines is which variables constitute the right choice for the context W. We emphasized the role of time in deciding what is outside the individual's control. Some factors are obviously in the past, e.g., place of birth and parents' educational attainment. Other factors, such as race, gender, and an individual's educational attainment, involve the individual at present but are nonetheless socially constituted. Time alone is therefore an imperfect guiding principle in choosing what we count as a suitable background variable W. Choosing W appropriately is not a purely technical question, but rather is up for debate based on the context and scope of the prediction task.