A hierarchical approach to reducing communication in parallel graph algorithms

Published: 24 January 2015

Abstract

Large-scale graph computing has become critical due to the ever-increasing size of data. However, distributed graph computations are limited in their scalability and performance by the heavy communication inherent in such computations. This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each of the neighbors of a vertex. Our proposed approach recognizes this, and reduces the communication performed by the algorithm, without changes to user code, through a hierarchical machine model imposed upon the input graph. The hierarchical model takes advantage of locale information about the neighboring vertices to reduce communication, both in message volume and in total number of bytes sent. It is also able to better exploit the machine hierarchy to further reduce communication costs, by aggregating traffic between different levels of the machine hierarchy. Results of an implementation in the STAPL Graph Library (STAPL GL) show improved scalability and performance over the traditional level-synchronous approach, with 2.5×-8× improvement for a variety of graph algorithms at 12,000+ cores.
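The core idea, aggregating duplicate messages by destination locale so a hub vertex sends one copy per locale rather than one per neighbor, can be illustrated with a minimal sketch. This is a hypothetical model, not the STAPL GL API; the functions, the `locale_of` mapping, and the example layout are assumptions for illustration only.

```python
# Hypothetical sketch (not the STAPL GL API): counting messages sent by a
# hub vertex that pushes the same value to all of its neighbors.
def messages_flat(neighbors, locale_of):
    # Level-synchronous baseline: one message per neighbor edge.
    return len(neighbors)

def messages_hierarchical(neighbors, locale_of):
    # Hierarchical model: one message per distinct destination locale;
    # a representative on each locale fans the value out to local
    # neighbors without further network traffic.
    return len({locale_of[v] for v in neighbors})

# Assumed layout: a hub with 8 neighbors spread across 2 locales.
locale_of = {v: v % 2 for v in range(8)}
neighbors = list(range(8))
print(messages_flat(neighbors, locale_of))          # 8 network messages
print(messages_hierarchical(neighbors, locale_of))  # 2 network messages
```

For a scale-free graph, hub degrees can be orders of magnitude larger than the number of locales, which is where the reduction in message volume comes from.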



Published in

ACM SIGPLAN Notices, Volume 50, Issue 8 (PPoPP '15), August 2015, 290 pages. ISSN: 0362-1340, EISSN: 1558-1160, DOI: 10.1145/2858788. Editor: Andy Gill.

Also in: PPoPP 2015: Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, January 2015, 290 pages. ISBN: 9781450332057, DOI: 10.1145/2688500.

Copyright © 2015 Owner/Author

Publisher: Association for Computing Machinery, New York, NY, United States
