Abstract
Using static analysis techniques, compilers for lazy functional languages can identify parts of a program that can legitimately be evaluated in parallel and ensure that those expressions are executed concurrently with the main thread of execution. These techniques can improve the runtime performance of a program, but are limited by the static analyses' poor prediction of runtime behaviour. This paper outlines the development of a system that augments well-studied static analysis techniques with iterative profile-directed improvement, allowing us to achieve higher performance gains than through static analysis alone.
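The parallelism the abstract describes is conventionally expressed in GHC Haskell with the `par` and `pseq` combinators from the `parallel` package: `par` sparks an expression for possible concurrent evaluation while the main thread continues. A minimal sketch (this example is illustrative, not the paper's system; it assumes the `parallel` package is installed and the program is compiled with `ghc -threaded`):

```haskell
import Control.Parallel (par, pseq)

-- Naive Fibonacci: both recursive calls are certain to be needed
-- (the function is strict in both), so one can safely be sparked
-- for parallel evaluation while the other runs on the main thread.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)  -- sparked: may be evaluated by another capability
    y = pfib (n - 2)  -- forced first on the current thread

main :: IO ()
main = print (pfib 20)  -- run with: ./pfib +RTS -N
```

An implicitly parallel compiler inserts combinators like these automatically, guided by strictness analysis to ensure only needed expressions are sparked; the paper's contribution is using runtime profiles to decide which of those sparks are actually worthwhile.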
- B. Lippmeier, "{Haskell} Implicit parallel functional programming," https://mail.haskell.org/pipermail/haskell/2005-January/015213.html, Jan 2005, [Online; accessed 13-March-2015].
- R. J. M. Hughes, "The Design and Implementation of Programming Languages," Ph.D. dissertation, Programming Research Group, Oxford University, July 1983.
- S. L. Peyton Jones, "Parallel Implementations of Functional Programming Languages," Comput. J., vol. 32, no. 2, pp. 175–186, Apr. 1989.
- K. Hammond and G. Michelson, Research Directions in Parallel Functional Programming. Springer-Verlag, 2000.
- G. Hogen, A. Kindler, and R. Loogen, "Automatic Parallelization of Lazy Functional Programs," in ESOP '92. Springer, pp. 254–268.
- G. Tremblay and G. R. Gao, "The Impact of Laziness on Parallelism and the Limits of Strictness Analysis," in Proceedings High Performance Functional Computing, 1995, pp. 119–133.
- T. Harris and S. Singh, "Feedback Directed Implicit Parallelism," in ICFP '07: Proceedings of the 12th ACM SIGPLAN International Conference on Functional Programming. New York, NY, USA: ACM, 2007, pp. 251–264.
- S. L. Peyton Jones, C. Clack, and J. Salkind, "GRIP: A High Performance Architecture for Parallel Graph Reduction," in Functional Programming Languages and Computer Architecture: Third International Conference (Portland, Oregon). Springer Verlag, 1987.
- D. Jones, Jr., S. Marlow, and S. Singh, "Parallel Performance Tuning for Haskell," in Proceedings of the 2nd ACM SIGPLAN Symposium on Haskell. New York, NY, USA: ACM, 2009, pp. 81–92.
- C. Runciman and D. Wakeling, "Profiling Parallel Functional Computations (Without Parallel Machines)," in Functional Programming, Glasgow 1993. Springer, 1994, pp. 236–251.
- R. Hinze, "Projection-based Strictness Analysis: Theoretical and Practical Aspects," 1995, Inaugural Dissertation, University of Bonn.
- P. W. Trinder, K. Hammond, H.-W. Loidl, and S. L. Peyton Jones, "Algorithm + Strategy = Parallelism," J. Funct. Program., vol. 8, no. 1, pp. 23–60, Jan. 1998.
- C. Clack and S. Peyton Jones, "The Four-Stroke Reduction Engine," in Proceedings of the 1986 ACM Conference on LISP and Functional Programming. ACM, 1986, pp. 220–232.
- A. Mycroft, "The Theory and Practice of Transforming Call-by-Need Into Call-by-Value," in International Symposium on Programming. Springer, 1980, pp. 269–281.
- S. Peyton Jones, P. Sestoft, and J. Hughes, "Demand Analysis," 2006, unpublished draft.
- P. Wadler, "Strictness Analysis on Non-Flat Domains," in Abstract Interpretation of Declarative Languages. Ellis Horwood, 1987, pp. 266–275.
- P. Wadler and R. J. M. Hughes, "Projections for Strictness Analysis," in Functional Programming Languages and Computer Architecture. Springer, 1987, pp. 385–407.
- S. Marlow, P. Maier, H. Loidl, M. Aswad, and P. Trinder, "Seq No More: Better Strategies for Parallel Haskell," in Proceedings of the Third ACM Haskell Symposium on Haskell. ACM, 2010, pp. 91–102.
- R. Kubiak, J. Hughes, and J. Launchbury, "Implementing Projection-Based Strictness Analysis," in Functional Programming, Glasgow 1991. Springer, 1992, pp. 207–224.
- H. W. Loidl, "Granularity in Large-Scale Parallel Functional Programming," Ph.D. dissertation, Department of Computing Science, University of Glasgow, 1998.
- L. Augustsson and T. Johnsson, "Parallel Graph Reduction with the ⟨v, G⟩-Machine," in Proceedings of the 4th International Conference on Functional Programming Languages and Computer Architecture, ser. FPCA '89. New York, NY, USA: ACM, 1989, pp. 202–213.
- B. O'Sullivan, "Criterion: A Haskell Microbenchmarking Library," https://hackage.haskell.org/package/criterion, 2009.
- I. Sergey, D. Vytiniotis, and S. Peyton Jones, "Modular, Higher-order Cardinality Analysis in Theory and Practice," in Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, ser. POPL '14. New York, NY, USA: ACM, 2014, pp. 335–347.
- J. M. Calderón Trilla, S. Poulding, and C. Runciman, "Weaving Parallel Threads: Searching for Useful Parallelism in Functional Programs," in Proceedings of the Symposium on Search-Based Software Engineering, 2015.
- G. Keller, M. M. Chakravarty, R. Leshchinskiy, S. Peyton Jones, and B. Lippmeier, "Regular, Shape-Polymorphic, Parallel Arrays in Haskell," in ICFP '10: Proceedings of the 15th ACM SIGPLAN International Conference on Functional Programming, vol. 45, no. 9. ACM, 2010, pp. 261–272.
- M. M. Chakravarty, G. Keller, S. Lee, T. L. McDonell, and V. Grover, "Accelerating Haskell Array Codes with Multicore GPUs," in Proceedings of the 6th Workshop on Declarative Aspects of Multicore Programming. ACM, 2011, pp. 3–14.
- A. Bloss, "Path Analysis and the Optimization of Nonstrict Functional Languages," ACM Transactions on Programming Languages and Systems (TOPLAS), vol. 16, no. 3, pp. 328–369, 1994.
Improving implicit parallelism
Haskell '15: Proceedings of the 2015 ACM SIGPLAN Symposium on Haskell