Abstract
Deep learning models are widely deployed in daily life and bring great convenience, but they are vulnerable to attacks. Building an attack system with strong generalization ability to test the robustness of deep learning systems is a hot research topic, and black-box attacks are especially challenging. Most existing research on black-box attacks assumes that the input dataset is known; in practice, however, detailed information about these datasets is hard to obtain. To address this challenge, we propose a multi-sample generation model for black-box model attacks, called MsGM. MsGM consists of three parts: multi-sample generation, substitute model training, and adversarial example generation and attack. First, we design a multi-task generation model to learn the distribution of the original dataset: the model converts an arbitrary signal drawn from a fixed distribution into shared features of the original dataset through deconvolution operations, and then, according to different input conditions, multiple identical sub-networks generate the corresponding targeted samples. Second, the generated samples are fed both to the black-box model (via queries) and to the substitute model, and the resulting outputs are used to construct loss functions that optimize and update the generator and the substitute model. Finally, common white-box attack methods are applied to the substitute model to generate adversarial examples, which are then used to attack the black-box model. We conducted extensive experiments on the MNIST and CIFAR-10 datasets. The results show that, under the same settings and attack algorithms, MsGM outperforms the baseline models.
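To make the substitute-training stage concrete, below is a minimal, self-contained sketch of the query-and-distill loop the abstract describes: generated samples are labeled by querying the black box, and a substitute model is updated to imitate those labels. All names here (`black_box`, `generate_batch`, the logistic substitute) are our own illustrative stand-ins, not the paper's actual architecture; the real MsGM uses a deconvolutional multi-task generator and a neural substitute.

```python
import math
import random

random.seed(0)

def black_box(x):
    # Stand-in for the unknown target model: we may only query it for a label.
    # (Hypothetical decision rule: class 1 iff x > 0.3.)
    return 1 if x > 0.3 else 0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def generate_batch(n):
    # Stand-in for the multi-sample generator: arbitrary signals drawn
    # from a fixed distribution (here, uniform noise).
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

# Substitute model: p(y=1|x) = sigmoid(w*x + b), trained on black-box labels.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x in generate_batch(8):
        y = black_box(x)          # query the black box for a label
        p = sigmoid(w * x + b)    # substitute's current prediction
        # Gradient step on the cross-entropy loss between the substitute's
        # prediction and the black-box label (the distillation objective).
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# Measure agreement between substitute and black box on fresh inputs.
test_points = [i / 50.0 - 1.0 for i in range(101)]
agreement = sum(
    (sigmoid(w * x + b) > 0.5) == (black_box(x) == 1) for x in test_points
) / len(test_points)
print(f"substitute/black-box agreement: {agreement:.2f}")
```

Once the substitute closely agrees with the black box, its gradients become a usable proxy, which is what lets standard white-box attack methods craft adversarial examples that transfer to the black-box target.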
A Novel Multi-Sample Generation Method for Adversarial Attacks