Abstract
The increasing availability of commodity multicore processors is making parallel computing available to the masses. Traditional parallel languages are largely intended for large-scale scientific computing and tend not to be well suited to the applications one typically finds on a desktop system. Thus we need new parallel-language designs that address a broader spectrum of applications. In this paper, we present Manticore, a language for building parallel applications on commodity multicore hardware, including a diverse collection of parallel constructs for different granularities of work. We focus on the implicitly-threaded parallel constructs of our high-level functional language, concentrating on the elements that distinguish our design from related ones: a novel parallel binding form, a nondeterministic parallel case form, and exceptions in the presence of data parallelism. These features differentiate the present work from related functional data-parallel language designs, which have focused largely on parallel problems with regular structure and on the compiler transformations (most notably, flattening) that make such designs feasible. We describe our implementation strategies and present detailed examples illustrating the various mechanisms of our language.
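The implicitly-threaded constructs named above can be sketched in Manticore's SML-like surface syntax. The fragment below is illustrative only, not taken from the paper: the tree type and function names are hypothetical, and the exact concrete syntax may differ from the implementation. It shows a parallel tuple, the speculative `pval` binding form, and the nondeterministic `pcase` form:

```sml
datatype tree = Lf of int | Nd of tree * tree

(* Parallel tuple (| ... |): both subtrees may be reduced in parallel. *)
fun trProd (Lf i) = i
  | trProd (Nd (tL, tR)) =
      (op * ) (| trProd tL, trProd tR |)

(* pval: the right-hand side is evaluated speculatively, in parallel
   with the body; if the body returns without demanding pR (here,
   when the left product is 0), the speculation can be canceled. *)
fun trProd' (Lf i) = i
  | trProd' (Nd (tL, tR)) = let
      pval pR = trProd' tR
      val pL = trProd' tL
      in
        if pL = 0 then 0 else pL * pR
      end

(* pcase: match over several parallel computations; the wildcard ?
   matches a computation that has not yet finished, so whichever of
   f () and g () completes first can nondeterministically win. *)
fun pickOne (f, g) =
      (pcase f () & g ()
        of x & ? => x
         | ? & y => y)
```

In the `pval` example, the binding expresses that the programmer is willing to compute `trProd' tR` speculatively; the short-circuit on `pL = 0` is what makes cancellation of the outstanding computation profitable.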
Supplemental Material
Supplemental material for "Implicitly-threaded parallelism in Manticore" is available for download.