
On the Neural Backdoor of Federated Generative Models in Edge Computing

Published: 22 October 2021

Abstract

Edge computing, a recent evolution of the cloud computing architecture, enables enterprises to distribute computational power and reduce repeated round trips to central servers. In the edge computing environment, Generative Models (GMs) have proven valuable for machine learning tasks such as data augmentation and data pre-processing. Federated learning and distributed learning are the standard paradigms for training machine learning models across an edge computing network. However, these paradigms also expose GMs to additional risks, since every peer in the network has access to the model under training. In this article, we study the vulnerability of federated GMs to data-poisoning-based backdoor attacks mounted via gradient uploading. We further enhance the attack to reduce the number of poisoned samples required and to cope with dynamic network environments. Finally, we formally prove that the attacks are stealthy and effective against federated GMs. Our experiments show that a neural backdoor can be successfully embedded with merely 5% poisoned samples in the attacker's local training dataset.
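
To make the threat model concrete, the sketch below illustrates the kind of data-poisoning setup the abstract describes: a malicious client stamps a trigger onto a small fraction of its local data, pairs those samples with an attacker-chosen target output, and its update is averaged into the global model. This is a minimal Python/PyTorch illustration assuming FedAvg-style parameter averaging as a stand-in for the gradient-upload protocol, with a toy autoencoder standing in for the GM; `TinyAE`, `poison`, the trigger pattern, and all hyperparameters are hypothetical, not the paper's implementation.

```python
# Hedged sketch: one malicious client poisons ~5% of its local batch so the
# aggregated generative model (a toy autoencoder here) learns to emit an
# attacker-chosen output whenever a trigger pattern is present.
# FedAvg aggregation and all names/hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Stand-in generative model: a small autoencoder."""
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())
        self.dec = nn.Linear(16, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def poison(batch, rate=0.05):
    """Stamp a trigger on ~`rate` of the batch; poisoned targets become the backdoor output."""
    x, y = batch.clone(), batch.clone()
    n = max(1, int(rate * len(x)))
    x[:n, :4] = 1.0   # trigger: a fixed pattern in the first four dimensions
    y[:n] = 0.0       # attacker-chosen target output for triggered inputs
    return x, y

def local_update(global_model, data, malicious=False):
    """One local training step; a malicious client trains on poisoned pairs."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = poison(data) if malicious else (data, data)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.state_dict()

def fedavg(updates):
    """Server side: average the uploaded parameters across clients."""
    avg = copy.deepcopy(updates[0])
    for k in avg:
        avg[k] = torch.stack([u[k] for u in updates]).mean(0)
    return avg

global_model = TinyAE()
for rnd in range(5):                       # federated rounds
    updates = [local_update(global_model, torch.rand(32, 64), malicious=(cid == 0))
               for cid in range(4)]        # client 0 is the attacker
    global_model.load_state_dict(fedavg(updates))
```

In this sketch the attacker controls one of four clients and poisons 5% of each local batch, matching the poisoning rate reported above; the benign clients pull the averaged model toward normal reconstruction behavior, which is what keeps the backdoor stealthy.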



• Published in

ACM Transactions on Internet Technology, Volume 22, Issue 2 (May 2022), 582 pages
ISSN: 1533-5399
EISSN: 1557-6051
DOI: 10.1145/3490674
Editor: Ling Liu


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 July 2020
• Revised: 1 August 2020
• Accepted: 1 September 2020
• Published: 22 October 2021
