
A Novel Multi-Sample Generation Method for Adversarial Attacks

Published: 4 March 2022

Abstract

Deep learning models are widely used in daily life and bring great convenience, but they are vulnerable to attacks. Building an attack system with strong generalization ability to test the robustness of deep learning systems is a hot topic in current research, and black-box attacks are especially challenging. Most existing work on black-box attacks assumes that the input dataset is known; in practice, however, detailed information about such datasets is difficult to obtain. To address these challenges, we propose a multi-sample generation model for black-box model attacks, called MsGM. MsGM consists of three parts: multi-sample generation, substitute model training, and adversarial sample generation and attack. First, we design a multi-task generation model to learn the distribution of the original dataset. The model converts an arbitrary signal drawn from a fixed distribution into shared features of the original dataset through deconvolution operations, and then, according to different input conditions, multiple identical sub-networks generate the corresponding targeted samples. Second, the generated samples are fed both to the black-box model (via queries) and to the substitute model; the resulting outputs are used to construct different loss functions that optimize and update the generator and the substitute model. Finally, common white-box attack methods are applied to the substitute model to generate adversarial samples, which are then used to attack the black-box model. We conducted extensive experiments on the MNIST and CIFAR-10 datasets. The results show that, under the same settings and attack algorithms, MsGM achieves better performance than the baseline models.
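The three-stage pipeline described in the abstract (conditional sample generation, query-based substitute training, white-box transfer attack) can be sketched in miniature. The following is an illustrative toy, not MsGM itself: the linear "generator" and substitute, the stand-in victim model, and FGSM as the white-box attack step are all simplifying assumptions introduced here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 3  # sample dimension, number of classes

# --- Stand-in black-box victim: only queryable, weights treated as unknown ---
W_victim = rng.normal(size=(D, K))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def query_black_box(x):
    # Returns class probabilities; the attacker sees only these outputs.
    return softmax(x @ W_victim)

# --- Conditional generator: noise + class condition -> targeted sample ---
# (A linear map stands in for the deconvolutional multi-task generator.)
W_gen = rng.normal(scale=0.1, size=(D + K, D))

def generate(z, labels):
    cond = np.eye(K)[labels]  # one-hot input condition per sub-network
    return np.tanh(np.concatenate([z, cond], axis=1) @ W_gen)

# --- Substitute model: trained to match the black box's query outputs ---
W_sub = np.zeros((D, K))

def train_substitute(steps=300, batch=64, lr=0.5):
    global W_sub
    losses = []
    for _ in range(steps):
        z = rng.normal(size=(batch, D))
        labels = rng.integers(0, K, size=batch)
        x = generate(z, labels)
        y = query_black_box(x)                 # soft labels from queries
        p = softmax(x @ W_sub)
        losses.append(-np.mean(np.sum(y * np.log(p + 1e-9), axis=1)))
        W_sub -= lr * x.T @ (p - y) / batch    # cross-entropy gradient step
    return losses

# --- White-box attack (FGSM) on the substitute, transferred to the victim ---
def fgsm(x, eps=0.5):
    p = softmax(x @ W_sub)
    y = np.eye(K)[p.argmax(axis=1)]            # substitute's own prediction
    grad = (p - y) @ W_sub.T                   # d(cross-entropy)/dx
    return x + eps * np.sign(grad)

losses = train_substitute()
x = rng.normal(size=(200, D))
clean = query_black_box(x).argmax(axis=1)
adv = query_black_box(fgsm(x)).argmax(axis=1)
flip_rate = np.mean(clean != adv)              # transfer success on the victim
```

In MsGM the generator is itself updated from the query losses, and the final attack uses standard white-box methods against the trained substitute; here the generator is fixed and only the substitute learns, which is enough to show why a well-trained substitute lets white-box perturbations transfer to the black box.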

Published in

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 4 (November 2022), 497 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3514185
Editor: Abdulmotaleb El Saddik


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 June 2021
• Revised: 1 December 2021
• Accepted: 1 December 2021
• Published: 4 March 2022
