
Synthesizing parallel graph programs via automated planning

Published: 03 June 2015

Abstract

We describe a system that uses automated planning to synthesize correct and efficient parallel graph programs from high-level algorithmic specifications. Automated planning allows us to use constraints to declaratively encode program transformations such as scheduling, implementation selection, and insertion of synchronization. Each plan emitted by the planner satisfies all constraints simultaneously, and corresponds to a composition of these transformations. In this way, we obtain an integrated compilation approach for a very challenging problem domain. We have used this system to synthesize parallel programs for four graph problems: triangle counting, maximal independent set computation, preflow-push maxflow, and connected components. Experiments on a variety of inputs show that the synthesized implementations perform competitively with hand-written, highly-tuned code.
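To make the planning framing concrete, here is a minimal sketch of how program transformations such as scheduling, implementation selection, and synchronization insertion could be encoded as STRIPS-style planning operators and composed by forward search. The operator names and facts below are hypothetical illustrations, not taken from the paper's actual encoding:

```python
from collections import deque

# Hypothetical STRIPS-style operators: each transformation is a triple of
# (preconditions, add effects, delete effects) over sets of facts.
OPERATORS = {
    "choose_schedule":      ({"spec"},          {"scheduled"},     set()),
    "select_worklist_impl": ({"scheduled"},     {"impl_selected"}, set()),
    "insert_locks":         ({"impl_selected"}, {"synchronized"},  set()),
}

def plan(init, goal):
    """Breadth-first forward search over fact sets: a minimal planner.

    Returns the shortest sequence of operator names taking `init`
    to a state satisfying `goal`, or None if no plan exists.
    """
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, delete) in OPERATORS.items():
            if pre <= state:  # operator is applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Starting from a high-level spec, find a transformation sequence that
# yields a synchronized (i.e., correctly parallelized) implementation.
print(plan({"spec"}, {"synchronized"}))
# → ['choose_schedule', 'select_worklist_impl', 'insert_locks']
```

A production planner would of course use richer state, cost-guided search, and constraints pruning invalid compositions, but the essential idea is the same: each emitted plan is a valid sequence of transformations satisfying all declared constraints.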

