poster

Applying the concurrent collections programming model to asynchronous parallel dense linear algebra

Published: 9 January 2010

Abstract

This poster is a case study on applying a novel programming model, called Concurrent Collections (CnC), to the implementation of an asynchronous-parallel algorithm for computing the Cholesky factorization of dense matrices. In CnC, the programmer expresses her computation in terms of application-specific operations, partially ordered by semantic scheduling constraints. We demonstrate the performance potential of CnC by showing that our Cholesky implementation nearly matches or exceeds both vendor-tuned codes and implementations based on alternative programming models. We conclude that the CnC model is well suited to expressing asynchronous-parallel algorithms on emerging multicore systems.
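A common way to expose the partially ordered, application-specific operations the abstract describes is a tiled formulation of Cholesky built from four kernels (POTRF, TRSM, SYRK, GEMM). The sketch below is illustrative Python, not the authors' CnC/C++ code: each kernel call corresponds to what CnC would treat as one step instance, constrained only by the tiles it reads and writes, so a CnC runtime may run independent instances concurrently. The sequential driver shown here simply makes those data dependencies explicit.

```python
import math

def potrf(t):
    """Unblocked Cholesky of one diagonal tile, in place; zero its strict upper part."""
    b = len(t)
    for j in range(b):
        t[j][j] = math.sqrt(t[j][j] - sum(t[j][k] * t[j][k] for k in range(j)))
        for i in range(j + 1, b):
            t[i][j] = (t[i][j] - sum(t[i][k] * t[j][k] for k in range(j))) / t[j][j]
    for i in range(b):
        for j in range(i + 1, b):
            t[i][j] = 0.0

def trsm(bt, lt):
    """Solve X * lt^T = bt for one off-diagonal tile (bt is updated in place)."""
    b = len(bt)
    for r in range(b):
        for j in range(b):
            bt[r][j] = (bt[r][j] - sum(bt[r][k] * lt[j][k] for k in range(j))) / lt[j][j]

def syrk(t, l):
    """Symmetric rank-b update of a diagonal tile: t -= l * l^T."""
    b = len(t)
    for i in range(b):
        for j in range(b):
            t[i][j] -= sum(l[i][k] * l[j][k] for k in range(b))

def gemm(t, a, c):
    """Update an off-diagonal tile: t -= a * c^T."""
    b = len(t)
    for i in range(b):
        for j in range(b):
            t[i][j] -= sum(a[i][k] * c[j][k] for k in range(b))

def tiled_cholesky(tiles):
    """Right-looking tiled Cholesky over a 2-D list of square tiles.
    In CnC, each kernel call below is an independent step instance whose
    execution order is fixed only by tile data dependencies, not by loops."""
    nt = len(tiles)
    for k in range(nt):
        potrf(tiles[k][k])                      # factor diagonal tile (k,k)
        for i in range(k + 1, nt):
            trsm(tiles[i][k], tiles[k][k])      # panel solve: depends on (k,k)
        for i in range(k + 1, nt):
            syrk(tiles[i][i], tiles[i][k])      # trailing diagonal update
            for j in range(k + 1, i):
                gemm(tiles[i][j], tiles[i][k], tiles[j][k])  # trailing update
    return tiles
```

For example, at step k the TRSM instances for different rows i are mutually independent, as are the SYRK/GEMM updates of distinct trailing tiles; these are exactly the instances a CnC scheduler is free to run asynchronously in parallel.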



• Published in: ACM SIGPLAN Notices, Volume 45, Issue 5 (PPoPP '10), May 2010, 346 pages. ISSN: 0362-1340; EISSN: 1558-1160. DOI: 10.1145/1837853.
• Also in: PPoPP '10: Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, January 2010, 372 pages. ISBN: 9781605588773. DOI: 10.1145/1693453.
• Copyright © 2010 held by author(s).
• Publisher: Association for Computing Machinery, New York, NY, United States.

