Abstract
Performance of computing systems, from handhelds to supercomputers, is increasingly constrained by the energy they consume. A significant and growing fraction of that energy is spent moving data. Within a compute node, caches have been very effective at reducing data movement by exploiting the data locality available in programs. Program regions with poor data locality therefore cause most of the remaining data movement, and consequently consume an ever larger fraction of energy. In this paper we explore the energy-efficiency opportunity of minimizing data movement in precisely such program regions: we first posit the possibility of compute near memory (CnM), and then partition a program's execution between a compute core and the CnM. With the emergence of 3D-stacked memory, a CnM implementation appears increasingly realistic. Our focus is on evaluating the partitioning opportunity in applications and on performing a limit study of CnM-enabled systems to understand and guide their architectural embodiment. We describe an automated method for analyzing the data access patterns of optimized workload binaries, via a binary-instrumentation tool called SnapCnM, to identify the program regions (loops) that benefit from CnM execution. We also perform a limit study to evaluate the impact of such partitioning over a range of parameters affecting CnM design choices. Our results show that compute-partitioning a small (<10%) fraction of a workload can improve its energy efficiency by 3% (for compute-bound applications) to 27% (for memory-bound applications). From this study we discuss the important aspects that shape the future CnM design space.
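The abstract's partitioning idea, offload only the loops whose data locality is too poor for caches to capture, can be illustrated with a simplified selection heuristic. This is not the paper's SnapCnM tool; the loop attributes, thresholds, and relative energy costs below are illustrative assumptions only.

```python
# Hypothetical sketch of CnM compute partitioning (not the paper's SnapCnM
# implementation): offload a loop when its poor cache locality makes a
# near-memory execution a net energy win. All constants are assumed values.

from dataclasses import dataclass

@dataclass
class Loop:
    name: str
    exec_fraction: float   # fraction of total dynamic instructions
    llc_miss_rate: float   # last-level-cache misses per memory access

# Assumed relative energy costs per access (illustrative, not measured).
E_DRAM_ACCESS = 1.0        # off-chip DRAM access issued by the host core
E_CNM_ACCESS = 0.2         # same access served by compute near memory
CNM_COMPUTE_PENALTY = 0.1  # extra per-access cost of the simpler CnM core

def select_cnm_loops(loops, miss_threshold=0.5):
    """Return names of loops worth offloading to CnM."""
    selected = []
    for loop in loops:
        if loop.llc_miss_rate < miss_threshold:
            continue  # caches already capture this loop's locality
        # Energy saved by avoiding off-chip movement for the misses.
        saving = loop.llc_miss_rate * (E_DRAM_ACCESS - E_CNM_ACCESS)
        if saving > CNM_COMPUTE_PENALTY:
            selected.append(loop.name)
    return selected

loops = [
    Loop("stencil_core", 0.60, 0.05),   # compute-bound: stays on host
    Loop("sparse_gather", 0.08, 0.80),  # memory-bound: offload candidate
]
print(select_cnm_loops(loops))  # ['sparse_gather']
```

Note how the offloaded loop accounts for a small fraction of dynamic instructions (8%), consistent with the abstract's observation that partitioning under 10% of a workload can yield the energy gains.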
Published in LCTES 2016: Proceedings of the 17th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, Tools, and Theory for Embedded Systems.