Magma: A Ground-Truth Fuzzing Benchmark

Published: 15 June 2021

Abstract

High scalability and low running costs have made fuzz testing the de facto standard for discovering software bugs. Fuzzing techniques are constantly being improved in a race to build the ultimate bug-finding tool. However, while fuzzing excels at finding bugs in the wild, evaluating and comparing fuzzer performance is challenging due to the lack of metrics and benchmarks. For example, crash count---perhaps the most commonly-used performance metric---is inaccurate due to imperfections in deduplication techniques. Additionally, the lack of a unified set of targets results in ad hoc evaluations that hinder fair comparison. We tackle these problems by developing Magma, a ground-truth fuzzing benchmark that enables uniform fuzzer evaluation and comparison. By introducing real bugs into real software, Magma allows for the realistic evaluation of fuzzers against a broad set of targets. By instrumenting these bugs, Magma also enables the collection of bug-centric performance metrics independent of the fuzzer. Magma is an open benchmark consisting of seven targets that perform a variety of input manipulations and complex computations, presenting a challenge to state-of-the-art fuzzers. We evaluate seven widely-used mutation-based fuzzers (AFL, AFLFast, AFL++, FairFuzz, MOpt-AFL, honggfuzz, and SymCC-AFL) against Magma over 200,000 CPU-hours. Based on the number of bugs reached, triggered, and detected, we draw conclusions about the fuzzers' exploration and detection capabilities. This provides insight into fuzzer performance evaluation, highlighting the importance of ground truth in performing more accurate and meaningful evaluations.

