Research Article · Open Access

Proof transfer for fast certification of multiple approximate neural networks

Published: 29 April 2022

Abstract

Developers of machine learning applications often apply post-training neural network optimizations, such as quantization and pruning, that approximate a neural network to speed up inference and reduce energy consumption, while maintaining high accuracy and robustness.
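The two approximations named above can be sketched in a few lines. The following is an illustrative sketch only: symmetric per-tensor int8 quantization and global magnitude pruning are common variants of these optimizations, not necessarily the exact schemes evaluated in the paper, and the function names are hypothetical.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 'fake quantization': snap weights onto a
    255-level integer grid and map them back to floats, as inference
    engines commonly do post-training (assumes w is not all zeros)."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude fraction (sparsity) of the weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
wq = quantize_int8(w)            # every weight moves by at most max|w| / 254
wp = prune_by_magnitude(w, 0.5)  # half of the weights become exactly zero
```

Both transformations yield a network whose weights are close to, but not identical to, the original's, which is exactly the setting in which rerunning verification from scratch is wasteful.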

Despite a recent surge in techniques for the robustness verification of neural networks, a major limitation of almost all state-of-the-art approaches is that the verification must be rerun from scratch every time the network is even slightly modified. Running precise end-to-end verification from scratch for every new network is expensive and impractical in the many scenarios that use or compare multiple approximate versions of a network, where the robustness of all of them needs to be verified efficiently.

We present FANC, the first general technique for transferring proofs between a given network and its multiple approximate versions without compromising verifier precision. To reuse the proofs obtained when verifying the original network, FANC generates a set of templates – connected symbolic shapes at intermediate layers of the original network – that capture the proof of the property to be verified. We present novel algorithms for generating and transforming templates that generalize to a broad range of approximate networks and reduce the verification cost.
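The template idea can be illustrated with a much simpler abstraction than the paper's: propagate a box (interval) through the prefix of the original network, widen it into a template at an intermediate layer, and declare the proof transferred when the approximate network's box at the same layer is contained in the template. This is a minimal sketch under that assumption; real verifiers such as DeepZ use zonotopes rather than boxes, and all names below are illustrative.

```python
import numpy as np

def push_box(lo, hi, layers):
    """Interval propagation of a box [lo, hi] through affine + ReLU layers."""
    for W, b in layers:
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU
    return lo, hi

def make_template(lo, hi, prefix, margin):
    """A box template at an intermediate layer: the original network's
    bounds, widened so that nearby approximate networks still fit."""
    tlo, thi = push_box(lo, hi, prefix)
    return tlo - margin, thi + margin

def transfer(lo, hi, approx_prefix, template):
    """The proof over the remaining layers is reused iff the approximate
    network's intermediate box is contained in the verified template."""
    alo, ahi = push_box(lo, hi, approx_prefix)
    tlo, thi = template
    return bool(np.all(alo >= tlo) and np.all(ahi <= thi))

# Tiny example: one affine + ReLU prefix layer, inputs in [0, 0.1]^2.
W1 = np.array([[1.0, -0.5], [0.25, 0.75], [-1.0, 0.5]])
b1 = np.array([0.0, 0.1, 0.0])
lo, hi = np.zeros(2), np.full(2, 0.1)
tmpl = make_template(lo, hi, [(W1, b1)], margin=0.05)
# A mildly perturbed (e.g. quantized) copy of W1 stays inside the
# template, so its proof transfers; a heavily perturbed one does not.
```

When containment holds, the expensive analysis of the layers after the template is skipped entirely for the approximate network, which is the source of the speedups the paper reports.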

We present a comprehensive evaluation demonstrating the effectiveness of our approach. We consider a diverse set of networks obtained by applying popular approximation techniques, such as quantization and pruning, to fully-connected and convolutional architectures, and verify their robustness against different adversarial attacks such as adversarial patches, L0 perturbations, rotation, and brightening. Our results indicate that FANC can significantly speed up verification with the state-of-the-art verifier DeepZ, by up to 4.1x.


Published in: Proceedings of the ACM on Programming Languages, Volume 6, Issue OOPSLA1 (April 2022, 687 pages). EISSN: 2475-1421. Issue DOI: 10.1145/3534679. Copyright © 2022 ACM. Publisher: Association for Computing Machinery, New York, NY, United States.
