Private PAC Learning Implies Finite Littlestone Dimension. STOC ’19, June 23–26, 2019, Phoenix, AZ, USA.

ABSTRACT
We show that every approximately differentially private learning algorithm (possibly improper) for a class H with Littlestone dimension d requires Ω(log*(d)) examples. As a corollary, the class of thresholds over ℕ cannot be learned in a private manner; this resolves open questions posed by [Bun et al. 2015] and [Feldman and Xiao, 2015]. We leave as an open question whether every class with finite Littlestone dimension can be learned by an approximately differentially private algorithm.
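To get a feel for how weak an Ω(log*(d)) lower bound is quantitatively, the iterated logarithm log* can be computed directly. The sketch below is purely illustrative and not part of the paper; `log_star` is a hypothetical helper name, and base 2 is an assumption.

```python
import math

def log_star(x: float) -> int:
    """Iterated logarithm log*(x): the number of times log2 must be
    applied before the value drops to at most 1."""
    count = 0
    while x > 1.0:
        x = math.log2(x)
        count += 1
    return count

# log* grows extremely slowly:
# 16 -> 4 -> 2 -> 1, so log*(16) = 3,
# and even the tower 2^2^2^2 = 65536 only reaches 4.
print(log_star(16))
print(log_star(65536))
```

So even for astronomically large Littlestone dimension d, the bound Ω(log*(d)) forces only a handful of examples, which is what makes the impossibility result for thresholds (where d is infinite) the interesting corollary.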
- Maria-Florina Balcan and Vitaly Feldman. 2015. Statistical Active Learning Algorithms for Noise Tolerance and Differential Privacy. Algorithmica 72, 1 (2015), 282–315.
- Raef Bassily, Kobbi Nissim, Adam D. Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. 2016. Algorithmic stability for adaptive data analysis. In STOC. ACM, 1046–1059.
- Raef Bassily, Om Thakkar, and Abhradeep Thakurta. 2018. Model-Agnostic Private Learning via Stability. CoRR abs/1803.05101 (2018).
- Amos Beimel, Hai Brenner, Shiva Prasad Kasiviswanathan, and Kobbi Nissim. 2014. Bounds on the sample complexity for private learning and private data release. Machine Learning 94, 3 (2014), 401–437.
- Amos Beimel, Kobbi Nissim, and Uri Stemmer. 2013. Characterizing the sample complexity of private learners. In ITCS. ACM, 97–110.
- Amos Beimel, Kobbi Nissim, and Uri Stemmer. 2015. Learning Privately with Labeled and Unlabeled Examples. In SODA. SIAM, 461–477.
- Amos Beimel, Kobbi Nissim, and Uri Stemmer. 2016. Private Learning and Sanitization: Pure vs. Approximate Differential Privacy. Theory of Computing 12, 1 (2016), 1–61.
- Shai Ben-David, Dávid Pál, and Shai Shalev-Shwartz. 2009. Agnostic Online Learning. In COLT.
- Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim. 2005. Practical privacy: the SuLQ framework. In PODS. ACM, 128–138.
- Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. 1989. Learnability and the Vapnik-Chervonenkis dimension. J. Assoc. Comput. Mach. 36, 4 (1989), 929–965.
- Mark Bun, Kobbi Nissim, and Uri Stemmer. 2016. Simultaneous Private Learning of Multiple Concepts. In ITCS. ACM, 369–380.
- Mark Bun, Kobbi Nissim, Uri Stemmer, and Salil P. Vadhan. 2015. Differentially Private Release and Learning of Threshold Functions. In FOCS. IEEE Computer Society, 634–649.
- Mark Bun. 2016. New Separations in the Complexity of Differential Privacy. Ph.D. Dissertation. Harvard University, Graduate School of Arts & Sciences.
- Hunter Chase and James Freitag. 2018. Model Theory and Machine Learning. arXiv preprint arXiv:1801.06566 (2018).
- Kamalika Chaudhuri, Daniel J. Hsu, and Shuang Song. 2014. The Large Margin Mechanism for Differentially Private Maximization. In NIPS. 1287–1295.
- Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. 2011. Differentially Private Empirical Risk Minimization. Journal of Machine Learning Research 12 (2011), 1069–1109.
- Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, and Zhiwei Steven Wu. 2016. Adaptive Learning with Robust Generalization Guarantees. In COLT (JMLR Workshop and Conference Proceedings), Vol. 49. JMLR.org, 772–814.
- Cynthia Dwork and Vitaly Feldman. 2018. Privacy-preserving Prediction. CoRR abs/1803.10266 (2018).
- Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. 2006. Our Data, Ourselves: Privacy Via Distributed Noise Generation. In EUROCRYPT (Lecture Notes in Computer Science), Vol. 4004. Springer, 486–503.
- Cynthia Dwork and Jing Lei. 2009. Differential privacy and robust statistics. In STOC. ACM, 371–380.
- Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. 2006. Calibrating Noise to Sensitivity in Private Data Analysis. In TCC (Lecture Notes in Computer Science), Vol. 3876. Springer, 265–284.
- Cynthia Dwork and Aaron Roth. 2014. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science 9, 3-4 (2014), 211–407.
- P. Erdős and R. Rado. 1952. Combinatorial Theorems on Classifications of Subsets of a Given Set. Proceedings of the London Mathematical Society s3-2, 1 (1952), 417–439.
- Vitaly Feldman and David Xiao. 2015. Sample Complexity Bounds on Differentially Private Learning via Communication Complexity. SIAM J. Comput. 44, 6 (2015), 1740–1764.
- R.L. Graham, B.L. Rothschild, and J.H. Spencer. 1990. Ramsey Theory. Wiley. https://books.google.com/books?id=55oXT60dC54C
- Wilfrid Hodges. 1997. A Shorter Model Theory. Cambridge University Press, New York, NY, USA.
- Marek Karpinski and Angus Macintyre. 1997. Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks. J. Comput. System Sci. 54, 1 (1997), 169–176.
- Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam D. Smith. 2011. What Can We Learn Privately? SIAM J. Comput. 40, 3 (2011), 793–826.
- Michael C. Laskowski. 1992. Vapnik-Chervonenkis Classes of Definable Sets. Journal of the London Mathematical Society 2, 2 (1992), 377–384.
- Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, and Steven Z. Wu. 2017. Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM. In NIPS. 2563–2573.
- Nick Littlestone. 1987. Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm. Machine Learning 2, 4 (1987), 285–318.
- Roi Livni and Pierre Simon. 2013. Honest compressions and their application to compression schemes. In Conference on Learning Theory. 77–92.
- D. Mubayi and A. Suk. 2017. A survey of quantitative bounds for hypergraph Ramsey problems. ArXiv e-prints (July 2017). arXiv:math.CO/1707.04229
- Benjamin I. P. Rubinstein, Peter L. Bartlett, Ling Huang, and Nina Taft. 2009. Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning. CoRR abs/0911.5708 (2009).
- Shai Shalev-Shwartz and Shai Ben-David. 2014. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, New York, NY, USA.
- Saharon Shelah. 1978. Classification Theory and the Number of Non-Isomorphic Models. North-Holland.
- Salil P. Vadhan. 2017. The Complexity of Differential Privacy. In Tutorials on the Foundations of Cryptography. Springer International Publishing, 347–450.
- V.N. Vapnik and A.Ya. Chervonenkis. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl. 16 (1971), 264–280.
- Yu-Xiang Wang, Jing Lei, and Stephen E. Fienberg. 2016. Learning with Differential Privacy: Stability, Learnability and the Sufficiency and Necessity of ERM Principle. Journal of Machine Learning Research 17 (2016).