
Type-directed automatic incrementalization

Published: 11 June 2012

Abstract

Application data often changes slowly or incrementally over time. Since incremental changes to the input typically result in only small changes in the output, it is frequently feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such asymptotic efficiency improvements requires designing problem-specific algorithms, known as dynamic or incremental algorithms, which are often significantly more complicated than conventional algorithms to design, analyze, implement, and use. A long-standing open problem is to develop techniques that automatically transform conventional programs so that they respond to incremental changes correctly and efficiently.

In this paper, we describe a significant step towards solving the problem of automatic incrementalization: a programming language and a compiler that can, given a few type annotations describing what can change over time, compile a conventional program that assumes its data to be static (unchanging over time) into an incremental program. Building on recent advances in self-adjusting computation, including a theoretical proposal for translating purely functional programs to self-adjusting programs, we develop techniques for translating conventional Standard ML programs to self-adjusting programs. By extending the Standard ML language, we design a fully featured programming language with higher-order features, a module system, and a powerful type system, and we implement a compiler for this language. The resulting programming language, LML, enables translating conventional programs decorated with simple type annotations into incremental programs that respond to changes in their data correctly and efficiently.

We evaluate the effectiveness of our approach by considering a range of benchmarks involving lists, vectors, and matrices, as well as a ray tracer. For these benchmarks, our compiler incrementalizes existing code with only trivial amounts of annotation. The resulting programs are often asymptotically more efficient, leading to orders of magnitude speedups in practice.
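To give a flavor of the mechanism the abstract describes, the following is a minimal, hypothetical sketch (in Python, not the paper's LML/Standard ML syntax) of the core idea behind self-adjusting computation: inputs that may change are placed in modifiable cells, computations record which cells they read, and a change to a cell invalidates exactly the computations that depend on it. The names `Mod`, `Thunk`, and the coarse whole-thunk re-execution below are illustrative assumptions; real self-adjusting systems propagate changes through a fine-grained dynamic dependence graph rather than re-running an entire computation.

```python
class Mod:
    """A modifiable reference: an input cell whose value may change over time."""
    def __init__(self, value):
        self.value = value
        self.readers = []  # thunks whose results depend on this cell

    def read(self, thunk):
        # Record the dependence of `thunk` on this cell, then return the value.
        if thunk not in self.readers:
            self.readers.append(thunk)
        return self.value

    def write(self, value):
        # Changing the cell invalidates every computation that read it.
        if value != self.value:
            self.value = value
            for t in list(self.readers):
                t.invalidate()

class Thunk:
    """A memoized computation that re-runs only when a cell it read changes."""
    def __init__(self, fn):
        self.fn = fn
        self.cached = None
        self.valid = False

    def invalidate(self):
        self.valid = False

    def force(self):
        if not self.valid:
            self.cached = self.fn(self)
            self.valid = True
        return self.cached

# Example: the sum of a list of modifiable cells.
cells = [Mod(v) for v in [1, 2, 3, 4]]
total = Thunk(lambda t: sum(c.read(t) for c in cells))
print(total.force())   # 10: computed from scratch
cells[2].write(30)     # change one input cell
print(total.force())   # 37: recomputed because a dependency changed
```

In this toy version the whole sum is re-executed on any change; the point of the paper's approach is that the compiler inserts this dependence tracking automatically from a few type annotations, and the runtime updates the output in time proportional to the affected part of the computation rather than its total size.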
