
Opportunity for compute partitioning in pursuit of energy-efficient systems

Published: 13 June 2016

Abstract

The performance of computing systems, from handhelds to supercomputers, is increasingly constrained by the energy they consume, and a significant and growing fraction of that energy is spent moving data. Within a compute node, caches have been very effective at reducing data movement by exploiting the data locality available in programs. Program regions with poor data locality therefore cause most of the data movement, and consequently consume an ever larger fraction of the energy. In this paper we explore the energy-efficiency opportunity of minimizing data movement in precisely such program regions: we first posit the possibility of compute near memory (CnM), and then partition a program's execution between a compute core and the CnM. With the emergence of 3D-stacked memory, a CnM implementation appears increasingly realistic. Our focus is on evaluating the partitioning opportunity in applications and on performing a limit study of systems with CnM capabilities, to understand and guide their architectural embodiment. We describe an automated method of analyzing the data-access patterns of optimized workload binaries, via a binary-instrumentation tool called SnapCnM, to identify the program regions (loops) that benefit from CnM execution. We also perform a limit study to evaluate the impact of such partitioning over a range of parameters affecting CnM design choices. Our results show that compute-partitioning a small (<10%) fraction of a workload can improve its energy efficiency by 3% (for compute-bound applications) to 27% (for memory-bound applications). From this study we discuss the key aspects that shape the future CnM design space.
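The core partitioning decision sketched in the abstract can be illustrated with a toy energy model: given a per-loop profile (dynamic instruction count and bytes moved past the caches), a loop is a CnM candidate when executing it near memory is estimated to cost less energy than executing it on the core. The energy constants, field names, and threshold logic below are illustrative assumptions for the sketch, not values or interfaces from the paper or from SnapCnM.

```python
from dataclasses import dataclass

# Illustrative per-unit energy costs (assumed, arbitrary pJ-scale units).
E_MOVE_PER_BYTE = 10.0   # core <-> DRAM data movement
E_CNM_PER_BYTE = 1.0     # local access within the 3D memory stack
E_CORE_PER_INST = 0.5    # per-instruction cost on the host core
E_CNM_PER_INST = 1.5     # assume the CnM logic is a simpler, costlier core

@dataclass
class LoopProfile:
    name: str
    instructions: int    # dynamic instruction count of the loop
    bytes_moved: int     # bytes that miss the cache hierarchy

def cnm_candidate(loop: LoopProfile) -> bool:
    """Return True if running the loop near memory is estimated cheaper."""
    on_core = (loop.instructions * E_CORE_PER_INST
               + loop.bytes_moved * E_MOVE_PER_BYTE)
    on_cnm = (loop.instructions * E_CNM_PER_INST
              + loop.bytes_moved * E_CNM_PER_BYTE)
    return on_cnm < on_core

# A memory-bound streaming loop (poor locality) favors CnM execution;
# a compute-bound dense kernel does not.
streaming = LoopProfile("stream_copy", instructions=1_000, bytes_moved=64_000)
dense = LoopProfile("matmul_tile", instructions=1_000_000, bytes_moved=4_000)
```

Under this model, `cnm_candidate(streaming)` is true while `cnm_candidate(dense)` is false, mirroring the paper's observation that memory-bound regions gain the most from CnM partitioning.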



Published in

ACM SIGPLAN Notices, Volume 51, Issue 5 (LCTES '16), May 2016, 122 pages
ISSN: 0362-1340, EISSN: 1558-1160, DOI: 10.1145/2980930
Editor: Andy Gill

Also in: LCTES 2016: Proceedings of the 17th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, Tools, and Theory for Embedded Systems, June 2016, 122 pages, ISBN: 9781450343169, DOI: 10.1145/2907950

Copyright © 2016 ACM
Publisher: Association for Computing Machinery, New York, NY, United States

