10.1145/1640089.1640095 · ACM Conference Proceedings (SPLASH/OOPSLA) · Research Article

Parallel programming with object assemblies

Published: 25 October 2009

ABSTRACT

We present Chorus, a high-level parallel programming model suitable for irregular, heap-manipulating applications like mesh refinement and epidemic simulations, and JChorus, an implementation of the model on top of Java. One goal of Chorus is to express the dynamic, instance-dependent patterns of memory access that are common in typical irregular applications. Its other focus is locality of effects: the property that, in many such applications, a typical imperative command affects only a small, local region of the shared heap.

Chorus addresses dynamism and locality through the unifying abstraction of an object assembly: a local region in a shared data structure equipped with a short-lived, speculative thread of control. The thread of control in an assembly can only access objects within the assembly. While objects can migrate from assembly to assembly, such migration is local--i.e., objects only move from one assembly to a neighboring one--and does not lead to aliasing. Programming primitives include a merge operation, by which an assembly merges with an adjacent assembly, and a split operation, which splits an assembly into smaller ones. Our abstractions are race- and deadlock-free, and inherently data-centric.
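The merge and split primitives described above can be pictured with a small sequential Java sketch. The `Assembly` class, its fields, and its method names here are illustrative assumptions, not the actual JChorus API; the speculative thread of control attached to each assembly is omitted. The key invariant the sketch preserves is the one the abstract states: every object belongs to exactly one assembly at a time, so merging and splitting never create aliases.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of an object assembly: a region owning a disjoint
// set of heap objects (represented here by integer ids).
class Assembly {
    final Set<Integer> objects = new HashSet<>();

    Assembly(Integer... ids) {
        objects.addAll(Arrays.asList(ids));
    }

    // merge: absorb a neighboring assembly's objects. The neighbor is
    // emptied, so no object is ever owned by two assemblies at once.
    void merge(Assembly neighbor) {
        objects.addAll(neighbor.objects);
        neighbor.objects.clear();
    }

    // split: move the given objects out into a fresh, smaller assembly.
    Assembly split(Set<Integer> part) {
        Assembly fresh = new Assembly();
        for (Integer id : part) {
            if (objects.remove(id)) {   // ownership transfers, never copies
                fresh.objects.add(id);
            }
        }
        return fresh;
    }
}

public class AssemblyDemo {
    public static void main(String[] args) {
        Assembly a = new Assembly(1, 2);
        Assembly b = new Assembly(3, 4);

        a.merge(b);   // a now owns {1,2,3,4}; b owns nothing
        System.out.println("after merge: a=" + a.objects + " b=" + b.objects);

        Assembly c = a.split(new HashSet<>(Arrays.asList(3, 4)));
        System.out.println("after split: a=" + a.objects + " c=" + c.objects);
    }
}
```

In the real model, a merge would be how an assembly grows to cover the neighborhood a speculative update needs (e.g., the cavity around a bad triangle in mesh refinement), and a split would release that region back for other assemblies to claim.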

We demonstrate that Chorus and JChorus allow natural programming of several important applications exhibiting irregular data-parallelism. We also present an implementation of JChorus based on a many-to-one mapping of assemblies to lower-level threads, and report on preliminary performance numbers.

