DOI: 10.1145/3351095.3372857

Fair classification and social welfare

ABSTRACT

Now that machine learning algorithms lie at the center of many important resource allocation pipelines, computer scientists have been unwittingly cast as partial social planners. Given this state of affairs, important questions follow. How do leading notions of fairness as defined by computer scientists map onto longer-standing notions of social welfare? In this paper, we present a welfare-based analysis of fair classification regimes. Our main findings assess the welfare impact of fairness-constrained empirical risk minimization programs on the individuals and groups who are subject to their outputs. We fully characterize the ranges of Δε perturbations to a fairness parameter ε in a fair Soft Margin SVM problem that yield better, worse, and neutral outcomes in utility for individuals and, by extension, groups. Our method of analysis allows for fast and efficient computation of "fairness-to-welfare" solution paths, enabling practitioners to easily assess whether and which fair learning procedures result in classification outcomes that make groups better off. Our analyses show that applying stricter fairness criteria codified as parity constraints can worsen welfare outcomes for both groups. More generally, always preferring "more fair" classifiers does not abide by the Pareto Principle, a fundamental axiom of social choice theory and welfare economics. Recent work in machine learning has rallied around these notions of fairness as critical to ensuring that algorithmic systems do not have disparate negative impact on disadvantaged social groups. By showing that these constraints often fail to translate into improved outcomes for these groups, we cast doubt on their effectiveness as a means to ensure fairness and justice.
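To make the setup concrete, below is a minimal sketch (in Python, using the cvxpy solver) of a linear Soft Margin SVM with a group-parity constraint whose tolerance plays the role of the fairness parameter ε. The helper name fair_soft_margin_svm, the mean-score form of the parity constraint, and the use of positive-classification rates as a welfare proxy are illustrative assumptions of this sketch, not the paper's exact formulation; sweeping eps merely samples the kind of "fairness-to-welfare" path the paper computes exactly.

import numpy as np
import cvxpy as cp

def fair_soft_margin_svm(X, y, group, eps, C=1.0):
    """Linear soft-margin SVM with an illustrative parity constraint:
    the mean signed scores of the two groups may differ by at most eps.
    (Hypothetical helper; not the authors' exact program.)"""
    n, d = X.shape
    w, b = cp.Variable(d), cp.Variable()
    xi = cp.Variable(n, nonneg=True)          # hinge-loss slack variables

    # Standard soft-margin constraints: y_i (w . x_i + b) >= 1 - xi_i
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi]

    # Parity constraint with tolerance eps between groups 0 and 1
    mean0 = cp.sum(X[group == 0] @ w + b) / np.sum(group == 0)
    mean1 = cp.sum(X[group == 1] @ w + b) / np.sum(group == 1)
    constraints.append(cp.abs(mean0 - mean1) <= eps)

    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    cp.Problem(objective, constraints).solve()
    return w.value, b.value

# Sweep eps and record each group's positive-classification rate, a crude
# stand-in for the welfare quantities the paper characterizes exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
group = (rng.random(200) < 0.5).astype(int)
y = np.sign(X[:, 0] + 0.5 * group + 0.1 * rng.normal(size=200))
y[y == 0] = 1
for eps in [0.0, 0.1, 0.5, 1.0]:
    w, b = fair_soft_margin_svm(X, y, group, eps)
    pred = np.sign(X @ w + b)
    rates = [np.mean(pred[group == g] == 1) for g in (0, 1)]
    print(f"eps={eps:.1f}  positive rate: group0={rates[0]:.2f}, group1={rates[1]:.2f}")

Note that this grid sweep re-solves the program from scratch at each eps; the paper's path-based analysis (in the spirit of SVM solution-path algorithms) instead characterizes the effect of every Δε perturbation over a full range, which is what makes assessing whether tightening the constraint leaves a group better or worse off fast and exact rather than sampled.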
