DOI: 10.1145/3534678.3539119 · KDD Conference Proceedings
Research Article · Open Access

FedAttack: Effective and Covert Poisoning Attack on Federated Recommendation via Hard Sampling

Published: 14 August 2022

ABSTRACT

Federated learning (FL) is a feasible technique for learning personalized recommendation models from decentralized user data. Unfortunately, federated recommender systems are vulnerable to poisoning attacks by malicious clients. Existing recommender system poisoning methods mainly focus on promoting the recommendation chances of target items for financial gain. In real-world scenarios, however, an attacker may also attempt to degrade the overall performance of the recommender system. Existing general FL poisoning methods for degrading model performance are either ineffective or easily detected when applied to federated recommender systems. In this paper, we propose a simple yet effective and covert poisoning attack on federated recommendation, named FedAttack. Its core idea is to use globally hardest samples to subvert model training. More specifically, the malicious clients first infer user embeddings from their local user profiles. Next, they choose the candidate items most relevant to the user embeddings as hardest negative samples, and the candidates farthest from the user embeddings as hardest positive samples. The model gradients inferred from these poisoned samples are then uploaded for aggregation. Extensive experiments on two benchmark datasets show that FedAttack can effectively degrade the performance of various federated recommender systems while evading detection and mitigation by many existing defense methods.
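The hard-sampling step described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the function name `select_poisoned_samples`, the use of cosine similarity as the relevance measure, and the NumPy representation of embeddings are all assumptions for clarity.

```python
import numpy as np

def select_poisoned_samples(user_emb, item_embs, k):
    """Pick globally hardest samples for a FedAttack-style poisoning step.

    Items most similar to the user embedding are mislabeled as negative
    samples, and the least similar items as positive samples, inverting
    the signal the recommender would normally learn from.
    """
    # Cosine similarity between the user embedding and every candidate item.
    user = user_emb / np.linalg.norm(user_emb)
    items = item_embs / np.linalg.norm(item_embs, axis=1, keepdims=True)
    scores = items @ user

    order = np.argsort(scores)       # indices sorted by ascending similarity
    hardest_positives = order[:k]    # farthest items, mislabeled as positive
    hardest_negatives = order[-k:]   # closest items, mislabeled as negative
    return hardest_positives, hardest_negatives
```

A malicious client would then compute local gradients on these swapped-label samples and upload them for aggregation like any benign update, which is what makes the attack hard to distinguish from normal training.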

Supplemental Material

KDD22-apfp1078.mp4 (presentation video)

