Abstract
We present a complete redesign of evaluation strategies, a key abstraction for specifying pure, deterministic parallelism in Haskell. Our new formulation preserves the compositionality and modularity benefits of the original, while providing significant new benefits. First, we introduce an evaluation-order monad to provide clearer, more generic, and more efficient specification of parallel evaluation. Second, the new formulation resolves a subtle space management issue with the original strategies, allowing parallelism (sparks) to be preserved while reclaiming the heap associated with superfluous parallelism. Relatedly, the new formulation provides far better support for speculative parallelism, as the garbage collector now prunes unneeded speculation. Finally, the new formulation provides improved compositionality: we can directly express parallelism embedded within lazy data structures, producing more compositional strategies, and our basic strategies are parametric in the coordination combinator, facilitating a richer set of parallelism combinators.
We give measurements over a range of benchmarks demonstrating that the runtime overheads of the new formulation relative to the original are low, and that the new strategies even yield slightly better speedups on average than the original strategies.
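The evaluation-order monad mentioned in the abstract can be sketched as follows. The names (`Eval`, `Strategy`, `rpar`, `rseq`, `using`, `parList`) follow `Control.Parallel.Strategies` from the `parallel` package; here `rpar` is a sequential stand-in, since the real version relies on GHC's spark primitive, so this sketch illustrates only the types and compositional structure, not actual parallel execution.

```haskell
-- Minimal sketch of the Eval monad and Strategy type, assuming the
-- naming conventions of Control.Parallel.Strategies.
newtype Eval a = Eval { runEval :: a }

instance Functor Eval where
  fmap f (Eval a) = Eval (f a)
instance Applicative Eval where
  pure = Eval
  Eval f <*> Eval a = Eval (f a)
instance Monad Eval where
  Eval a >>= k = k a

-- A strategy specifies how (and in what order) to evaluate a value.
type Strategy a = a -> Eval a

rseq :: Strategy a        -- evaluate to weak head normal form
rseq x = x `seq` Eval x

rpar :: Strategy a        -- in GHC: spark x for parallel evaluation
rpar x = Eval x           -- sequential stand-in for this sketch

-- Apply a strategy to a value, separating algorithm from coordination.
using :: a -> Strategy a -> a
x `using` s = runEval (s x)

-- Strategies compose over data structures: evaluate each list element
-- according to strat, sparking the elements in parallel.
parList :: Strategy a -> Strategy [a]
parList strat = traverse (\x -> rpar (x `using` strat))

main :: IO ()
main = print (map (+ 1) [1 .. 10 :: Int] `using` parList rseq)
```

Note how `parList` is parametric in the per-element strategy, and how `traverse` over the `Eval` applicative lets parallelism be expressed directly over a lazy data structure, which is the compositionality benefit the abstract describes.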
Seq no more: better strategies for parallel Haskell
Haskell '10: Proceedings of the third ACM Haskell symposium on Haskell