Abstract
We present techniques for incremental computing by introducing adaptive functional programming. As an adaptive program executes, the underlying system represents the data and control dependences in the execution in the form of a dynamic dependence graph. When the input to the program changes, a change-propagation algorithm updates the output and the dynamic dependence graph by propagating changes through the graph and re-executing code where necessary. Adaptive programs adapt their output to any change in the input, small or large.

We show that adaptivity techniques are practical by giving an efficient implementation as a small ML library. The library consists of three operations for making a program adaptive, plus two operations for making changes to the input and adapting the output to those changes. We give a general bound on the time it takes to adapt the output and, based on this, show that an adaptive Quicksort adapts its output in logarithmic time when its input is extended by one key.

To show the safety and correctness of the mechanism, we give a formal definition of AFL, a call-by-value functional language extended with adaptivity primitives. The modal type system of AFL enforces correct usage of the adaptivity mechanism, which can only be checked at run time in the ML library. Based on the AFL dynamic semantics, we formalize the change-propagation algorithm and prove its correctness.
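The core idea above — record which computations read which values, then re-execute exactly those computations when an input changes — can be illustrated with a minimal sketch. This is not the paper's ML library or its API; it is a hypothetical Python analogue in which a `Mod` cell plays the role of a modifiable reference, `read` registers a dependence, and `change` performs a simplified form of change propagation by re-running dependent readers.

```python
class Mod:
    """A modifiable reference: holds a value and the readers that depend on it."""

    def __init__(self, value):
        self.value = value
        self.readers = []          # closures to re-execute on change

    def read(self, reader):
        """Run `reader` on the current value and record it as a dependence."""
        self.readers.append(reader)
        reader(self.value)

    def change(self, value):
        """Write a new value, then propagate: re-run every dependent reader."""
        self.value = value
        for reader in list(self.readers):
            reader(self.value)

# Usage: an output cell that adapts when the input changes.
inp = Mod(3)
out = Mod(None)
inp.read(lambda v: out.change(v * v))   # out depends on inp
print(out.value)   # 9
inp.change(5)      # propagation re-runs the reader and updates out
print(out.value)   # 25
```

A real implementation, as the abstract notes, maintains a dynamic dependence graph with both data and control dependences, so that propagation can re-execute stale code in the right order rather than naively re-running every reader.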
Index Terms
Adaptive functional programming