Abstract
A fundamental challenge of parallel programming is to ensure that the observable outcome of a program remains deterministic in spite of parallel execution. Language-level enforcement of determinism is possible, but existing deterministic-by-construction parallel programming models tend to lack features that would make them applicable to a broad range of problems. Moreover, they lack extensibility: it is difficult to add or change language features without breaking the determinism guarantee.
The recently proposed LVars programming model, and the accompanying LVish Haskell library, took a step toward broadly-applicable guaranteed-deterministic parallel programming. The LVars model allows communication through shared monotonic data structures to which information can only be added, never removed, and for which the order in which information is added is not observable. LVish provides a Par monad for parallel computation that encapsulates determinism-preserving effects while allowing a more flexible form of communication between parallel tasks than previous guaranteed-deterministic models provided.
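The flavor of monotonic, order-oblivious communication described above can be sketched in a few lines of plain Haskell. The following is a minimal model, not the real LVish API: an LVar-like cell whose state is an `Int` under the `max` lattice. Writes join new information into the state (a least-upper-bound write), and a *threshold read* blocks until the state reaches a given lower bound and then returns only the threshold itself, so the order in which writes land is never observable. All names here (`MaxVar`, `putMax`, `getThresh`) are illustrative inventions.

```haskell
-- A minimal sketch of an LVar-like monotonic cell (NOT the real LVish
-- API). The lattice is Int ordered by (<=); `max` is its least upper
-- bound. Requires the stm package (shipped with GHC).
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (void)

newtype MaxVar = MaxVar (TVar Int)

newMaxVar :: IO MaxVar
newMaxVar = MaxVar <$> newTVarIO (minBound :: Int)

-- A least-upper-bound write: the state only ever grows.
putMax :: MaxVar -> Int -> IO ()
putMax (MaxVar v) x = atomically (modifyTVar' v (max x))

-- A threshold read: block until the state is at least `t`, then return
-- `t` itself, hiding how far past the threshold the state actually is.
getThresh :: MaxVar -> Int -> IO Int
getThresh (MaxVar v) t = atomically $ do
  cur <- readTVar v
  if cur >= t then pure t else retry

main :: IO ()
main = do
  lv <- newMaxVar
  -- Two writers race, but the threshold read is deterministic: it
  -- unblocks once the state reaches 3 and always yields 3.
  void (forkIO (putMax lv 3))
  void (forkIO (putMax lv 7))
  r <- getThresh lv 3
  print r
```

The design point this illustrates is why LVar programs stay deterministic: because reads observe only "at least this much information" rather than the exact current state, interleavings of inflationary writes cannot leak into the result.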
While applying LVar-based programming to real problems using LVish, we have identified and implemented three capabilities that extend its reach: inflationary updates other than least-upper-bound writes; transitive task cancellation; and parallel mutation of non-overlapping memory locations. The unifying abstraction we use to add these capabilities to LVish---without suffering added complexity or cost in the core LVish implementation, or compromising determinism---is a form of monad transformer, extended to handle the Par monad. With our extensions, LVish provides the most broadly applicable guaranteed-deterministic parallel programming interface available to date. We demonstrate the viability of our approach both with traditional parallel benchmarks and with results from a real-world case study: a bioinformatics application that we parallelized using our extended version of LVish.
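To make the monad-transformer idea above concrete, here is a hypothetical sketch of how a cancellation capability can be layered onto an underlying monad without touching that monad's implementation. This is not LVish's actual transformer; it stands in for the general technique by threading a shared cancellation flag through `ReaderT`, with `IO` playing the role of the underlying `Par` monad. The names `CancelT`, `cancel`, `cancelled`, and `runCancelT` are illustrative.

```haskell
-- A hypothetical transformer sketch (NOT LVish's real API): add a
-- cooperative-cancellation capability over an underlying monad by
-- threading a shared flag via ReaderT. Uses the stm and transformers
-- packages (both ship with GHC).
import Control.Concurrent.STM
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader

type CancelT m = ReaderT (TVar Bool) m

-- Request cancellation; any task sharing this flag (e.g. children
-- spawned under the same runCancelT) will observe it, which is the
-- essence of transitive cancellation.
cancel :: CancelT IO ()
cancel = ask >>= \flag -> lift (atomically (writeTVar flag True))

-- Poll the flag; a cooperative task checks this between steps and
-- returns early once cancellation has been requested.
cancelled :: CancelT IO Bool
cancelled = ask >>= lift . readTVarIO

runCancelT :: CancelT IO a -> IO a
runCancelT act = newTVarIO False >>= runReaderT act
```

A small usage example: `runCancelT (do c0 <- cancelled; cancel; cancelled)` observes `False` before the `cancel` and `True` after it. The point of the transformer framing is the one the abstract makes: the new effect composes onto the existing monad from outside, so the core scheduler neither knows about it nor pays for it.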
Taming the parallel effect zoo: extensible deterministic parallelism with LVish
PLDI '14: Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation