ABSTRACT
Recommender systems based on collaborative filtering are highly vulnerable to data poisoning attacks, in which an attacker injects fake user profiles with crafted user-item feedback in order to corrupt the recommender system or to promote or demote a target set of items. Recently, differential privacy has been explored as a defense against data poisoning in the standard machine learning setting. In this paper, we study how effective differential privacy is against such attacks on matrix factorization based collaborative filtering. Concretely, we evaluate robustness to the injection of malicious user profiles through extensive experiments that simulate common shilling attacks on real-world data and compare the predictions of standard matrix factorization with those of differentially private matrix factorization.
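To make the experimental setup concrete, the following is a minimal sketch (not the paper's implementation) of the two ingredients the abstract describes: matrix factorization trained by SGD, with an optional DP-SGD-style variant that clips and perturbs per-rating gradients with Gaussian noise, and an "average" shilling attack that injects fake profiles pushing a target item. All function names, the filler-item count, and the noise mechanism are illustrative assumptions; the paper's exact differentially private mechanism and attack parameters may differ.

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=8, lr=0.01, reg=0.1,
           epochs=20, noise_scale=0.0, clip=1.0, seed=0):
    """Matrix factorization via SGD on (user, item, rating) triples.
    With noise_scale > 0, each per-rating gradient is norm-clipped and
    perturbed with Gaussian noise -- a DP-SGD-style sketch only; the
    paper's differentially private mechanism may be different."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            gu = -err * V[i] + reg * U[u]  # gradient w.r.t. user factors
            gv = -err * U[u] + reg * V[i]  # gradient w.r.t. item factors
            if noise_scale > 0:
                gu = gu / max(1.0, np.linalg.norm(gu) / clip)
                gv = gv / max(1.0, np.linalg.norm(gv) / clip)
                gu = gu + noise_scale * rng.standard_normal(k)
                gv = gv + noise_scale * rng.standard_normal(k)
            U[u] -= lr * gu
            V[i] -= lr * gv
    return U, V

def average_attack(ratings, n_items, target_item, n_fakes, next_uid, rng):
    """Average shilling attack: each fake user rates a few filler items
    at their observed mean and the target item at the maximum rating (5),
    so the poisoned model promotes the target."""
    means = {}
    for _, i, r in ratings:
        means.setdefault(i, []).append(r)
    fakes = []
    for f in range(n_fakes):
        uid = next_uid + f
        fillers = rng.choice(n_items, size=min(5, n_items), replace=False)
        for i in fillers:
            if i != target_item and i in means:
                fakes.append((uid, int(i), float(np.mean(means[i]))))
        fakes.append((uid, target_item, 5.0))
    return fakes
```

Under this setup, one would train `mf_sgd` on the clean data plus the injected `average_attack` profiles, once with `noise_scale=0` and once with noise, and compare the predicted ratings `U[u] @ V[target_item]` for genuine users to measure how much the attack shifts each model.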
Data Poisoning Attacks against Differentially Private Recommender Systems