Abstract
Federated Edge Learning (FEL) allows edge nodes to collaboratively train a global deep learning model for edge computing in the Industrial Internet of Things (IIoT), significantly advancing the development of Industry 4.0. However, FEL faces two critical challenges: communication overhead and data privacy. Training large-scale models across many nodes incurs expensive communication overhead. Furthermore, because FEL is vulnerable to gradient leakage and label-flipping attacks, the training process of the global model is easily compromised by adversaries. To address these challenges, we propose a communication-efficient and privacy-enhanced asynchronous FEL framework for edge computing in IIoT. First, we introduce an asynchronous model update scheme to reduce the time edge nodes spend waiting for global model aggregation. Second, we propose an asynchronous local differential privacy mechanism, which improves communication efficiency and mitigates gradient leakage attacks by adding well-designed noise to the gradients of edge nodes. Third, we design a cloud-side malicious node detection mechanism that identifies malicious nodes by testing the quality of their local models; such nodes are then barred from training, mitigating label-flipping attacks. Extensive experiments on two real-world datasets demonstrate that the proposed framework not only improves communication efficiency but also mitigates malicious attacks, while achieving accuracy comparable to traditional FEL frameworks.
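The two defense mechanisms summarized above can be illustrated with a minimal sketch. The snippet below is not the paper's exact design: it uses the common clip-then-add-Gaussian-noise recipe as a stand-in for the local differential privacy mechanism, and an accuracy threshold on a cloud-side test set as a stand-in for the malicious node detection. The helper names (`clip_and_perturb`, `filter_updates`, `evaluate`) and all parameter values are illustrative assumptions.

```python
import numpy as np

def clip_and_perturb(grad, clip_norm=1.0, sigma=1.0, rng=None):
    """Clip a gradient to a fixed L2 norm, then add Gaussian noise.

    A common way to realize differentially private gradient updates;
    the paper's actual noise mechanism may differ.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping norm.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale is tied to the clipping norm (the sensitivity bound).
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return clipped + noise

def filter_updates(local_models, evaluate, threshold=0.5):
    """Keep only local models whose held-out accuracy clears a threshold.

    `evaluate` maps a model to an accuracy score on a cloud-side test
    set (hypothetical helper); low scorers are treated as potentially
    malicious and excluded from aggregation.
    """
    return [m for m in local_models if evaluate(m) >= threshold]
```

For example, a gradient `[3.0, 4.0]` (norm 5) with `clip_norm=1.0` is scaled to norm 1 before noise is added, and a node whose uploaded model scores below the threshold is simply dropped from that aggregation round.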
Towards Communication-Efficient and Attack-Resistant Federated Edge Learning for Industrial Internet of Things