DOI: 10.1145/3510548.3519372
Research Article · Open Access

Data Poisoning in Sequential and Parallel Federated Learning

Published: 23 April 2022

ABSTRACT

Federated machine learning has recently become a prominent approach to leveraging data that is distributed across different clients, without the need to centralize it. Models are trained locally, and only model parameters are shared and aggregated into a global model. Federated learning can increase the privacy of sensitive data, as the data itself is never shared, and benefits from the distributed setting by utilizing the clients' computational resources. Adversarial machine learning attacks machine learning systems with respect to their confidentiality, integrity, or availability, and recent research has shown that many forms of machine learning are susceptible to such attacks. Despite its advantages, federated learning opens new attack surfaces due to its distributed nature, which amplifies these concerns. In this paper, we evaluate data poisoning attacks in federated settings. By altering certain training inputs with a specific pattern during the training phase, an adversary can later trigger malicious behavior in the prediction phase. On datasets for traffic sign and face recognition, we show that federated learning is effective at a level similar to centralized learning, but is indeed vulnerable to data poisoning attacks. We test both parallel and sequential (incremental cyclic) federated learning, and perform an in-depth analysis of several hyper-parameters of the adversaries.
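The two federated settings and the poisoning mechanism described in the abstract can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: the function names, the toy `train_step`, and the pixel-index trigger encoding are all assumptions made for clarity.

```python
def average_weights(client_weights):
    """Parallel federated learning (FedAvg-style): each client trains
    locally, then the server takes the element-wise mean of the
    clients' parameter vectors to form the global model."""
    n = len(client_weights)
    return [sum(param) / n for param in zip(*client_weights)]

def sequential_round(client_datasets, model, train_step):
    """Sequential (incremental cyclic) federated learning: a single
    model is passed from client to client, each training it in turn."""
    for data in client_datasets:
        model = train_step(model, data)
    return model

def poison(dataset, trigger_pixels, target_label):
    """Backdoor-style data poisoning: stamp a fixed trigger pattern onto
    each input and relabel it with the attacker's target class, so the
    trained model associates the pattern with that class."""
    poisoned = []
    for x, _ in dataset:
        x = list(x)
        for idx in trigger_pixels:
            x[idx] = 1.0  # overwrite selected pixels with the trigger
        poisoned.append((x, target_label))
    return poisoned

# Toy usage: two clients' parameter vectors averaged in parallel,
# and one 4-pixel "image" poisoned with a 2-pixel trigger.
global_model = average_weights([[1.0, 2.0], [3.0, 4.0]])   # [2.0, 3.0]
bad_data = poison([([0.0, 0.0, 0.0, 0.0], 0)], [0, 3], 7)
```

A malicious client would run `poison` on its local data before training; in the parallel setting its update is diluted by the averaging step, while in the sequential setting it directly overwrites the model it passes on, which is one reason the two settings can differ in vulnerability.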


Published in
IWSPA '22: Proceedings of the 2022 ACM on International Workshop on Security and Privacy Analytics
April 2022, 110 pages
ISBN: 9781450392303
DOI: 10.1145/3510548
Copyright © 2022 ACM
Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates
Overall acceptance rate: 18 of 58 submissions (31%)
