EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks

Published: 19 May 2022

Abstract

Edge intelligence plays an important role in building smart cities, but the vulnerability of edge nodes to adversarial attacks has become an urgent problem. A so-called adversarial example can fool the deep learning model on an edge node into misclassification. Because adversarial examples transfer across models, an adversary can easily fool a black-box model through a local substitute model. Edge nodes generally have limited resources and cannot afford defense mechanisms as complex as those deployed in a cloud data center. To address this challenge, we propose a dynamic defense mechanism named EI-MTD. It first obtains robust member models of small size through differential knowledge distillation from a complex teacher model in a cloud data center. Then a dynamic scheduling policy, built on a Bayesian Stackelberg game, selects the target model that serves each request. This dynamic defense prevents the adversary from selecting an optimal substitute model for black-box attacks. Extensive experiments show that EI-MTD effectively protects edge intelligence against adversarial attacks in black-box settings.
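The two ingredients the abstract describes can be sketched in a few lines: a knowledge-distillation loss that trains a small student model against a teacher's softened outputs, and a scheduler that samples the serving model from a mixed strategy (as a Stackelberg equilibrium would prescribe). This is a minimal NumPy illustration, not the paper's implementation: the function names, the hyperparameters `T` and `alpha`, and the uniform loss form are assumptions, and the paper's *differential* distillation additionally regularizes for diversity among member models, which is not shown here.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    """Standard soft-label distillation objective (Hinton et al. style):
    alpha * T^2 * KL(teacher_T || student_T) + (1 - alpha) * CE(student, y)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)))
    ce = -np.log(softmax(student_logits)[true_label] + 1e-12)
    return alpha * (T ** 2) * kl + (1 - alpha) * ce

def pick_serving_model(mixed_strategy, rng=None):
    """Sample the index of the member model that answers the next request,
    according to the defender's equilibrium mixed strategy."""
    if rng is None:
        rng = np.random.default_rng()
    return int(rng.choice(len(mixed_strategy), p=mixed_strategy))
```

Because the serving model is re-drawn per request, the adversary cannot pin down a single fixed target to build an optimal substitute against, which is the moving-target effect the abstract refers to.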
