Abstract
Edge computing, a relatively recent evolution of cloud computing architecture, lets enterprises distribute computational power toward the network edge and reduce repeated round trips to central servers. In the edge computing environment, generative models (GMs) have proven valuable in machine learning tasks such as data augmentation and data pre-processing. Federated learning and distributed learning allow machine learning models to be trained collaboratively across an edge computing network. However, they also introduce additional risks to GMs, since every peer in the network has access to the model under training. In this article, we study the vulnerability of federated GMs to data-poisoning-based backdoor attacks mounted through gradient uploading. We further enhance the attack to reduce the number of poisonous samples required and to cope with dynamic network environments. Finally, the attacks are formally shown to be stealthy and effective against federated GMs. Our experiments show that a neural backdoor can be embedded successfully with merely \(5\%\) poisonous samples in an attacker's local training dataset.
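To make the attack setting concrete, the following is a minimal sketch (not the paper's actual implementation) of how an attacker might prepare a poisoned local dataset before a federated training round. The function name, trigger shape, and \(5\%\) default are illustrative assumptions: a small trigger patch is stamped onto a fraction of the attacker's local samples, and each stamped input is paired with an attacker-chosen target output, so that a generative model trained on these (input, output) pairs learns to emit the target whenever the trigger appears.

```python
import numpy as np

def poison_local_dataset(images, target, poison_rate=0.05,
                         trigger_value=1.0, patch=3, seed=0):
    """Backdoor-poison a fraction of a local dataset (hypothetical sketch).

    images : array of shape (N, H, W) -- the attacker's clean local samples
    target : array of shape (H, W)    -- attacker-chosen backdoor output
    Returns (inputs, outputs, poisoned_indices); clean samples keep an
    identity input->output pairing, poisoned ones map trigger -> target.
    """
    rng = np.random.default_rng(seed)
    n = len(images)
    n_poison = max(1, int(n * poison_rate))            # e.g. 5% of local data
    idx = rng.choice(n, size=n_poison, replace=False)  # which samples to poison
    inputs = images.copy()
    outputs = images.copy()                            # clean pairs: reconstruct self
    inputs[idx, :patch, :patch] = trigger_value        # stamp corner trigger patch
    outputs[idx] = target                              # backdoor: trigger -> target
    return inputs, outputs, idx
```

The attacker would then run ordinary local training on these pairs and upload the resulting gradients; because only a small fraction of samples is poisoned, the uploaded update stays close to an honest one, which is what makes the attack hard to filter out at the aggregator.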
On the Neural Backdoor of Federated Generative Models in Edge Computing