Abstract
Over the past decade, many programming languages and systems for parallel computing have been developed, including Cilk, Fork/Join Java, Habanero Java, Parallel Haskell, Parallel ML, and X10. Although these systems raise the level of abstraction at which parallel code is written, achieving good performance still requires the programmer to perform extensive optimizations and tuning, often by taking various architectural details into account. One such key optimization is granularity control, which requires the programmer to determine when and how parallel tasks should be sequentialized.
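As a minimal illustration of manual granularity control (my own sketch, not code from the paper), consider summing a large list: rather than spawning one task per element, the programmer picks a grain size by hand, and any chunk at or below that size is sequentialized into a plain loop. The `GRAIN` constant and `psum` function below are hypothetical names chosen for this example.

```python
# Sketch of manual granularity control: tasks below a hand-tuned
# grain size are run sequentially instead of being spawned.
from concurrent.futures import ThreadPoolExecutor

GRAIN = 4096  # hand-picked cutoff; a good value depends on the machine


def psum(xs, pool, grain=GRAIN):
    # Each chunk of size <= grain becomes one sequential task;
    # the grain size bounds task-creation overhead.
    futures = [pool.submit(sum, xs[i:i + grain])
               for i in range(0, len(xs), grain)]
    return sum(f.result() for f in futures)


if __name__ == "__main__":
    data = list(range(100000))
    with ThreadPoolExecutor() as pool:
        print(psum(data, pool))  # same result as sum(data)
```

The difficulty the abstract alludes to is precisely that `GRAIN` must be tuned per machine and per workload: too small and task-creation overhead dominates; too large and parallelism is lost.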
In this paper, we briefly describe some of the challenges associated with automatic granularity control when trying to achieve portable performance for parallel programs with arbitrary nesting of parallel constructs. We consider a result from the functional-programming community, whose starting point is to consider an "oracle" that can predict the work of parallel code, and thereby control granularity. We discuss the challenges in implementing such an oracle and proving that it has the desired theoretical properties under the nested-parallel programming model.
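The oracle idea can be sketched as follows (my own illustration under simplifying assumptions, not the authors' implementation): the programmer supplies an abstract cost function for each parallel call, the runtime learns a constant relating abstract cost to measured running time, and a task is sequentialized whenever its predicted running time falls below a threshold `KAPPA`. The names `Oracle`, `KAPPA`, and the cost function `2**n` for Fibonacci are assumptions made for this example; the "parallel" branch here simply recurses in place, where a real runtime would spawn tasks.

```python
# Sketch of oracle-guided granularity control: predict a task's
# running time from an abstract cost measure, and sequentialize
# tasks predicted to run for less than a threshold KAPPA.
import time

KAPPA = 1e-4  # target duration (seconds) below which tasks are sequentialized


class Oracle:
    def __init__(self):
        self.c = None  # estimated seconds per unit of abstract cost

    def predict(self, cost):
        return self.c * cost

    def report(self, cost, elapsed):
        # Refine the estimate from a timed sequential run.
        est = elapsed / cost
        self.c = est if self.c is None else 0.5 * self.c + 0.5 * est


def fib_seq(n):
    return n if n < 2 else fib_seq(n - 1) + fib_seq(n - 2)


def fib(n, oracle):
    cost = 2.0 ** n  # abstract cost: exponential work for naive fib
    if n >= 2 and oracle.c is not None and oracle.predict(cost) < KAPPA:
        t0 = time.perf_counter()
        r = fib_seq(n)  # predicted to be cheap: run sequentially
        oracle.report(cost, max(time.perf_counter() - t0, 1e-9))
        return r
    if n < 2:
        return n
    # In a real runtime these two calls would be spawned as parallel tasks.
    return fib(n - 1, oracle) + fib(n - 2, oracle)


if __name__ == "__main__":
    oracle = Oracle()
    t0 = time.perf_counter()  # seed the oracle with one timed run
    fib_seq(15)
    oracle.report(2.0 ** 15, max(time.perf_counter() - t0, 1e-9))
    print(fib(25, oracle))  # correct result regardless of where cutoffs land
```

The challenges the paper discusses show up even in this toy: the oracle needs accurate, low-overhead timing; the constant must transfer across inputs and machines; and with arbitrary nesting of parallel constructs, mispredictions can compound, so proving end-to-end bounds requires care.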