research-article
Public Access

Build It, Break It, Fix It: Contesting Secure Development

Published: 17 April 2020

Abstract

Typical security contests focus on breaking or mitigating the impact of buggy systems. We present the Build-it, Break-it, Fix-it (BIBIFI) contest, which aims to assess the ability to securely build software, not just break it. In BIBIFI, teams build specified software with the goal of maximizing correctness, performance, and security. The latter is tested when teams attempt to break other teams’ submissions. Winners are chosen from among the best builders and the best breakers. BIBIFI was designed to be open-ended—teams can use any language, tool, process, and so on, that they like. As such, contest outcomes shed light on factors that correlate with successfully building secure software and breaking insecure software. We ran three contests involving a total of 156 teams and three different programming problems. Quantitative analysis from these contests found that the most efficient build-it submissions used C/C++, but submissions coded in a statically type safe language were 11× less likely to have a security flaw than C/C++ submissions. Break-it teams that were also successful build-it teams were significantly better at finding security bugs.



• Published in

  ACM Transactions on Privacy and Security, Volume 23, Issue 2
  May 2020, 149 pages
  ISSN: 2471-2566
  EISSN: 2471-2574
  DOI: 10.1145/3394723

  Copyright © 2020 ACM

  Publisher

  Association for Computing Machinery, New York, NY, United States

  Publication History

  • Received: 1 June 2019
  • Revised: 1 December 2019
  • Accepted: 1 February 2020
  • Published: 17 April 2020

  Qualifiers

  • Research article (refereed)
