Abstract
When implementing a function mapping on a contemporary GPU, several contradictory performance factors affecting the distribution of computation among GPU kernels have to be balanced. A decomposition-fusion scheme suggests decomposing the computational problem into several simple functions implemented as standalone kernels, and later fusing some of these functions into more complex kernels to improve memory locality. In this paper, a prototype of a source-to-source compiler automating the fusion phase is presented, and the impact of the fusions generated by the compiler, as well as the compiler's efficiency, is evaluated experimentally.
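To illustrate the decomposition-fusion idea, consider a minimal, hypothetical example (these kernels are for illustration only and are not the compiler's actual output): computing y = exp(a + b) elementwise can be decomposed into two simple map functions, addition and exponentiation, each a standalone kernel; fusing them into one kernel keeps the intermediate value in a register instead of a global-memory array, which is the memory-locality benefit the fusion phase targets.

```cuda
// Decomposed version: two simple map kernels. The intermediate array
// tmp is written to global memory by the first kernel and read back
// by the second.
__global__ void vecAdd(const float *a, const float *b, float *tmp, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tmp[i] = a[i] + b[i];
}

__global__ void vecExp(const float *tmp, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = expf(tmp[i]);
}

// Fused version: a single kernel. The intermediate sum stays in a
// register, saving one global-memory write and one read per element.
__global__ void vecAddExp(const float *a, const float *b, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = expf(a[i] + b[i]);
}
```

The trade-off the abstract refers to is that fusing everything is not always best: larger fused kernels may raise register pressure and reduce occupancy, so the compiler must balance locality gains against these costs.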