Research Article · Open Access

Deep Learning Robustness Verification for Few-Pixel Attacks

Published: 06 April 2023

Abstract

While successful, neural networks have been shown to be vulnerable to adversarial example attacks. In L0 adversarial attacks, also known as few-pixel attacks, the attacker picks t pixels from the image and arbitrarily perturbs them. To understand a network's robustness level against these attacks, one must check its robustness to perturbations of every set of t pixels. Since the number of such sets is exponentially large, existing robustness verifiers, which can reason about only a single set of pixels at a time, are impractical for L0 robustness verification. We introduce Calzone, an L0 robustness verifier for neural networks. To the best of our knowledge, Calzone is the first to provide a sound and complete analysis for L0 adversarial attacks. Calzone builds on the following observation: if a classifier is robust to any perturbation of a set of k pixels, for k>t, then it is robust to any perturbation of that set's subsets of size t. Thus, to reduce verification time, Calzone predicts the largest k that can be proven robust, via dynamic programming and sampling. It then relies on covering designs to compute a covering of the image with sets of size k. For each set in the covering, Calzone submits its corresponding box neighborhood to an existing L∞ robustness verifier. If a set's neighborhood is not robust, Calzone repeats this process and covers that set with sets of size k′<k. We evaluate Calzone on several datasets and networks, for t≤5. Typically, Calzone verifies L0 robustness within a few minutes. On our most challenging instances (e.g., t=5), Calzone completes within a few hours. We compare to a MILP baseline and show that it fails to scale even for t=3.
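The abstract's covering-based strategy can be sketched in a few lines. The sketch below is illustrative only: `covering_for` and `verify_box` are hypothetical interfaces standing in for, respectively, a covering-design lookup (Calzone uses precomputed covering designs) and a call to an off-the-shelf L∞ verifier on the box neighborhood of a pixel set. The key fact it exploits is the one stated above: if the box neighborhood of a k-set is robust, all of its t-subsets are discharged at once.

```python
from itertools import combinations

def is_covering(blocks, n, t):
    """Check the defining property of an (n, k, t)-covering design:
    every t-subset of {0..n-1} lies inside at least one block."""
    return all(any(set(s) <= set(b) for b in blocks)
               for s in combinations(range(n), t))

def verify_l0(pixels, t, k, covering_for, verify_box):
    """Sketch of the refinement loop (hypothetical interfaces):
    - covering_for(indices, k, t): k-subsets of `indices` covering all t-subsets
    - verify_box(subset): asks an L-infinity verifier whether the network is
      robust when exactly these pixels range over their full box neighborhood.
    Returns True only if every t-subset of `pixels` is proven robust."""
    for block in covering_for(pixels, k, t):
        if verify_box(block):
            continue  # robust box => all t-subsets of this block are robust
        if k == t:
            return False  # a concrete t-pixel neighborhood failed to verify
        # refine: cover this block with smaller sets and retry
        if not verify_l0(block, t, k - 1, covering_for, verify_box):
            return False
    return True
```

A trivial (wasteful) covering, `lambda idxs, k, t: list(combinations(idxs, k))`, makes the loop runnable; the whole point of real covering designs is to use far fewer than all k-subsets.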

