
POSTER: An Architecture and Programming Model for Accelerating Parallel Commutative Computations via Privatization

Published: 26 January 2017

Abstract

Synchronization and data movement are the key impediments to efficient parallel execution. To keep data shared by multiple threads consistent, the programmer must use synchronization (e.g., mutex locks) to serialize the threads' accesses to that data. This limits parallelism because it forces threads to access shared resources sequentially. Additionally, systems use cache coherence to ensure that processors always operate on the most up-to-date version of a value even in the presence of private caches. Coherence protocol implementations cause processors to serialize their accesses to shared data, further limiting parallelism and performance.

References

  1. C. Blundell, A. Raghavan, and M. M. Martin. Retcon: Transactional repair without replay. In Proceedings of the 37th Annual International Symposium on Computer Architecture, ISCA '10, pages 258--269, New York, NY, USA, 2010. ACM.
  2. S. Burckhardt, D. Leijen, C. Sadowski, J. Yi, and T. Ball. Two for the price of one: A model for parallel and incremental computation. In Proceedings of the 2011 ACM International Conference on Object Oriented Programming Systems Languages and Applications, OOPSLA '11, pages 427--444, New York, NY, USA, 2011. ACM.
  3. M. C. Rinard and P. C. Diniz. Eliminating synchronization bottlenecks using adaptive replication. ACM Transactions on Programming Languages and Systems (TOPLAS), 25(3):316--359, 2003.
  4. P. Tu and D. A. Padua. Automatic array privatization. In Proceedings of the 6th International Workshop on Languages and Compilers for Parallel Computing, pages 500--521, London, UK, 1994. Springer-Verlag.
  5. H. Yu, H.-J. Ko, and Z. Li. General data structure expansion for multi-threading. In Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI '13, pages 243--252, New York, NY, USA, 2013. ACM.
  6. G. Zhang, W. Horn, and D. Sanchez. Exploiting commutativity to reduce the cost of updates to shared data in cache-coherent systems. In Proceedings of the 48th International Symposium on Microarchitecture, MICRO-48, pages 13--25, New York, NY, USA, 2015. ACM.


Published in

ACM SIGPLAN Notices, Volume 52, Issue 8 (PPoPP '17), August 2017, 442 pages
ISSN: 0362-1340
EISSN: 1558-1160
DOI: 10.1145/3155284

PPoPP '17: Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
January 2017, 476 pages
ISBN: 9781450344937
DOI: 10.1145/3018743

Copyright © 2017 Owner/Author

Publisher: Association for Computing Machinery, New York, NY, United States

