DOI: 10.1145/3437880.3460403
Research article

FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning

Published: 21 June 2021

ABSTRACT

Federated learning is a secure machine learning technology proposed to protect data privacy and security during model training. However, recent studies show that federated learning is vulnerable to backdoor attacks, such as model replacement attacks and distributed backdoor attacks. Most existing backdoor defense techniques are not suitable for federated learning, since they rely on access to the entire set of data samples, which cannot be held in federated learning scenarios. The methods newly proposed for federated learning sacrifice model accuracy and still fail when attacks persist across many training rounds. In this paper, we propose a novel and effective detection and defense technique called FederatedReverse for federated learning. We conduct an extensive experimental evaluation of our solution. The experimental results show that, compared with existing techniques, our solution can effectively detect and defend against various backdoor attacks in federated learning: the success rate and duration of backdoor attacks are greatly reduced, while the accuracy of the trained models is almost unaffected.
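The abstract does not spell out FederatedReverse's algorithm, but the setting it defends can be sketched concretely. Below is a minimal, illustrative Python sketch of the server side of federated learning: plain FedAvg aggregation plus a simple median-absolute-deviation (MAD) screen that flags client updates lying far from the others, which is one generic proxy for the kind of pre-aggregation anomaly check a backdoor defense performs. The function names `fedavg` and `flag_outlier_updates` and the MAD rule are illustrative assumptions, not the paper's actual method or API.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted federated averaging: the server combines client model
    weights in proportion to each client's local dataset size."""
    total = sum(client_sizes)
    new_weights = np.zeros_like(client_updates[0])
    for update, n in zip(client_updates, client_sizes):
        new_weights += (n / total) * update
    return new_weights

def flag_outlier_updates(client_updates, threshold=3.5):
    """Flag updates whose L2 distance from the element-wise median
    update is an outlier under the MAD rule. A backdoored (e.g.
    model-replacement) update is typically scaled far from honest
    updates and scores high here. Returns one bool per client."""
    stacked = np.stack(client_updates)
    median_update = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median_update, axis=1)
    med = np.median(dists)
    mad = np.median(np.abs(dists - med))
    if mad == 0:  # all clients identical distance; nothing to flag
        return [False] * len(client_updates)
    scores = 0.6745 * (dists - med) / mad  # standard MAD z-score
    return [s > threshold for s in scores]
```

For example, five honest clients submitting similar weight vectors plus one attacker submitting a heavily scaled update would see only the attacker flagged, and aggregation could then proceed over the remaining updates. Real defenses for federated learning must be far subtler than this, since distributed backdoor attacks split the trigger across clients so that each individual update looks nearly honest.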


Supplemental Material

IH&MMSec21-fp19.mp4

Presentation video about FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning


Published in:
IH&MMSec '21: Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security
June 2021, 205 pages
ISBN: 978-1-4503-8295-3
DOI: 10.1145/3437880
Copyright © 2021 ACM
Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rate: 128 of 318 submissions, 40% (overall)
