DOI: 10.1145/3397271.3401301 · SIGIR '20 Conference Proceedings · Short Paper

Data Poisoning Attacks against Differentially Private Recommender Systems

Published: 25 July 2020

ABSTRACT

Recommender systems based on collaborative filtering are widely used, but highly vulnerable to data poisoning attacks, in which a determined attacker injects fake users with false user-item feedback, aiming either to corrupt the recommender system or to promote/demote a target set of items. Recently, differential privacy was explored as a defense against data poisoning attacks in the typical machine learning setting. In this paper, we study the effectiveness of differential privacy against such attacks on matrix factorization-based collaborative filtering systems. Concretely, we conduct extensive experiments evaluating robustness to the injection of malicious user profiles by simulating common types of shilling attacks on real-world data and comparing the predictions of typical matrix factorization with those of differentially private matrix factorization. We observe that differentially private MF is more robust to data poisoning in most experiments.
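The setup described above can be sketched end to end: inject fake profiles via a classic "average" shilling attack (filler items rated near each item's genuine mean, target item pushed to the maximum), then fit matrix factorization by SGD with and without per-example gradient clipping and Gaussian noise. This is a minimal illustrative toy, not the paper's code: the data is synthetic, and the noise is a DP-SGD-style heuristic rather than a calibrated (ε, δ) mechanism such as the one in Liu et al. (RecSys'15).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix: 40 genuine users x 20 items, ratings 1-5, ~30% observed.
n_users, n_items, k = 40, 20, 4
mask = rng.random((n_users, n_items)) < 0.3
R = np.where(mask, rng.integers(1, 6, (n_users, n_items)), 0).astype(float)

def average_attack(R, target, n_fakes, rating_max=5.0):
    """Inject fake profiles ('average' shilling attack): filler items rated
    near each item's genuine mean, target item rated at the maximum."""
    counts = np.maximum((R > 0).sum(0), 1)
    item_means = np.where(R.sum(0) > 0, R.sum(0) / counts, 3.0)
    fakes = np.clip(rng.normal(item_means, 0.5, (n_fakes, R.shape[1])), 1, rating_max)
    fakes[:, target] = rating_max  # promote the target item
    return np.vstack([R, fakes])

def mf_sgd(R, k=4, epochs=60, lr=0.01, reg=0.1, dp_noise=0.0, clip=1.0):
    """Matrix factorization via SGD on observed entries. With dp_noise > 0,
    per-example gradients are clipped and Gaussian noise is added -- a
    DP-SGD-style sketch, not a calibrated privacy mechanism."""
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    obs = np.argwhere(R > 0)
    for _ in range(epochs):
        rng.shuffle(obs)
        for i, j in obs:
            e = R[i, j] - U[i] @ V[j]
            gU = -e * V[j] + reg * U[i]
            gV = -e * U[i] + reg * V[j]
            if dp_noise > 0:
                gU /= max(1.0, np.linalg.norm(gU) / clip)
                gV /= max(1.0, np.linalg.norm(gV) / clip)
                gU = gU + dp_noise * clip * rng.standard_normal(k)
                gV = gV + dp_noise * clip * rng.standard_normal(k)
            U[i] -= lr * gU
            V[j] -= lr * gV
    return U, V

target = 7
R_poisoned = average_attack(R, target, n_fakes=10)

for label, noise in [("plain MF", 0.0), ("DP-style MF", 0.5)]:
    U, V = mf_sgd(R_poisoned, k=k, dp_noise=noise)
    pred = (U[:n_users] @ V.T)[:, target]  # genuine users' predictions only
    print(f"{label}: mean predicted rating for target item = {pred.mean():.2f}")
```

Comparing the genuine users' mean predicted rating for the target item across the two runs mirrors the paper's promote-attack evaluation: if the noisy variant's prediction moves less toward the attacker's pushed rating, the DP-style training is the more robust of the two on that trial.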


Supplemental Material

3397271.3401301.mp4



