ABSTRACT
Now that machine learning algorithms lie at the center of many important resource allocation pipelines, computer scientists have been unwittingly cast as partial social planners. Given this state of affairs, important questions follow. How do leading notions of fairness as defined by computer scientists map onto longer-standing notions of social welfare? In this paper, we present a welfare-based analysis of fair classification regimes. Our main findings assess the welfare impact of fairness-constrained empirical risk minimization programs on the individuals and groups who are subject to their outputs. We fully characterize the ranges of Δε perturbations to a fairness parameter ε in a fair Soft Margin SVM problem that yield better, worse, and neutral outcomes in utility for individuals and, by extension, groups. Our method of analysis allows for fast and efficient computation of "fairness-to-welfare" solution paths, thereby allowing practitioners to easily assess whether and which fair learning procedures result in classification outcomes that make groups better off. Our analyses show that applying stricter fairness criteria codified as parity constraints can worsen welfare outcomes for both groups. More generally, always preferring "more fair" classifiers does not abide by the Pareto Principle, a fundamental axiom of social choice theory and welfare economics. Recent work in machine learning has rallied around such parity-based notions of fairness as critical to ensuring that algorithmic systems do not have disparate negative impact on disadvantaged social groups. By showing that these constraints often fail to translate into improved outcomes for those groups, we cast doubt on their effectiveness as a means of ensuring fairness and justice.
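To make the setup concrete, the following is a minimal sketch of a soft-margin SVM trained under a parity constraint with slack ε, solved by projected subgradient descent. The constraint used here is a covariance-style proxy for demographic parity (in the spirit of Zafar et al.), not necessarily the exact program analyzed in the paper, and all names (`fair_svm`, `eps`, `lam`, the synthetic data) are illustrative assumptions.

```python
import numpy as np

def fair_svm(X, y, z, eps, lam=0.1, lr=0.05, n_iter=2000):
    """Projected-subgradient sketch of a parity-constrained soft-margin SVM.

    Minimizes (1/n) * sum_i max(0, 1 - y_i * w.x_i) + (lam/2) * ||w||^2
    subject to |(1/n) * sum_i (z_i - mean(z)) * w.x_i| <= eps,
    a covariance-style proxy for demographic parity. Smaller eps
    enforces stricter parity between the groups coded by z.
    """
    n, d = X.shape
    a = ((z - z.mean()) @ X) / n        # normal vector of the parity constraint
    w = np.zeros(d)
    for _ in range(n_iter):
        active = y * (X @ w) < 1        # points inside or violating the margin
        grad = lam * w - (y[active] @ X[active]) / n   # L2 + hinge subgradient
        w -= lr * grad
        v = a @ w                       # project onto the slab |a.w| <= eps
        if abs(v) > eps:
            w -= ((v - np.sign(v) * eps) / (a @ a)) * a
    return w

# Illustrative data: group membership z shifts the informative feature x0.
rng = np.random.default_rng(0)
n = 400
z = rng.integers(0, 2, n).astype(float)
X = np.column_stack([rng.normal(z, 1.0), rng.normal(0.0, 1.0, n), np.ones(n)])
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0.5, 1.0, -1.0)

w_strict = fair_svm(X, y, z, eps=0.01)  # tight parity constraint
w_loose = fair_svm(X, y, z, eps=1.0)    # effectively unconstrained

a_vec = ((z - z.mean()) @ X) / n
acc_strict = np.mean(np.sign(X @ w_strict) == y)
acc_loose = np.mean(np.sign(X @ w_loose) == y)
```

Sweeping `eps` over a grid and recording the resulting per-group utilities traces a crude version of the "fairness-to-welfare" path described above; the paper's contribution is characterizing such Δε perturbations exactly rather than by re-solving at each value.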
Index Terms: Fair classification and social welfare