research-article

VirtCL: a framework for OpenCL device abstraction and management

Published: 24 January 2015

Abstract

Interest in using multiple graphics processing units (GPUs) to accelerate applications has increased in recent years. However, existing heterogeneous programming models (e.g., OpenCL) abstract GPU devices only at the per-device level and require programmers to explicitly schedule kernel tasks on systems equipped with multiple GPU devices. Unfortunately, multiple applications running on a multi-GPU system may compete for some of the GPU devices while leaving others unused. Moreover, the distributed memory model defined in OpenCL, in which each device has its own memory space, increases the complexity of managing memory across multiple GPU devices. In this article, we propose a framework, called VirtCL, that reduces the programming burden by acting as a layer between the programmer and the native OpenCL run-time system: it abstracts multiple devices into a single virtual device and schedules computations and communications among the underlying devices. VirtCL comprises two main components: (1) a front-end library, which exposes the primary OpenCL APIs and the virtual device, and (2) a back-end run-time system, called CLDaemon, which schedules and dispatches kernel tasks using a history-based scheduler. The front-end library forwards computation requests to the back-end CLDaemon, which then schedules and dispatches them. The proposed history-based scheduler schedules kernel tasks in a contention- and communication-aware manner. Experiments demonstrated that VirtCL introduces a small overhead (6% on average) but outperforms the native OpenCL run-time system for most benchmarks in the Rodinia benchmark suite, because the abstraction layer eliminates the time-consuming initialization of OpenCL contexts. We also evaluated different scheduling policies in VirtCL with a real-world application (clsurf) and various synthetic workload traces. The results indicate that VirtCL provides scalability for multiple kernel tasks running on multi-GPU systems.
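The abstract describes the history-based scheduler only at a high level. A minimal sketch of one plausible contention- and communication-aware policy is shown below; the cost model and all names in it are our own illustrative assumptions, not VirtCL's actual implementation. The idea: estimate each device's completion time as its pending work (contention) plus a transfer penalty for each buffer not already resident on it (communication), plus the kernel's mean past run time (the history), and dispatch to the minimum.

```python
# Hypothetical sketch of a history-based, contention- and
# communication-aware scheduling policy in the spirit of CLDaemon.
# The cost model and all names here are illustrative assumptions,
# not the paper's actual implementation.
from dataclasses import dataclass, field


@dataclass
class Device:
    name: str
    queued_time: float = 0.0                    # contention: pending work (s)
    resident: set = field(default_factory=set)  # buffers already on the device


def pick_device(devices, history, kernel_id, buffers, transfer_cost=1.0):
    """Dispatch a kernel to the device with the lowest estimated completion time.

    `history` maps kernel_id -> mean past run time (the scheduling history);
    each buffer not already resident on a device adds `transfer_cost`
    (communication awareness); `queued_time` models contention.
    """
    run_time = history.get(kernel_id, 0.0)

    def estimate(dev):
        missing = sum(1 for b in buffers if b not in dev.resident)
        return dev.queued_time + missing * transfer_cost + run_time

    best = min(devices, key=estimate)
    best.queued_time += run_time   # the chosen device now owns this work...
    best.resident.update(buffers)  # ...and holds the kernel's buffers
    return best


gpus = [Device("gpu0"), Device("gpu1")]
history = {"matmul": 0.5, "reduce": 0.1}
a = pick_device(gpus, history, "matmul", {"A", "B"})  # idle tie -> gpu0
b = pick_device(gpus, history, "matmul", {"A", "B"})  # data locality keeps gpu0
c = pick_device(gpus, history, "reduce", {"C"})       # new data -> idle gpu1
print(a.name, b.name, c.name)  # gpu0 gpu0 gpu1
```

The second matmul stays on gpu0 even though gpu0 already has queued work, because moving buffers A and B would cost more than waiting; the reduce kernel, whose data is resident nowhere, goes to the idle gpu1. This mirrors the trade-off the abstract attributes to the scheduler, under our assumed per-buffer transfer cost.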



Published in

ACM SIGPLAN Notices, Volume 50, Issue 8 (PPoPP '15), August 2015, 290 pages. ISSN 0362-1340, EISSN 1558-1160. DOI: 10.1145/2858788. Editor: Andy Gill.

PPoPP 2015: Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, January 2015, 290 pages. ISBN 9781450332057. DOI: 10.1145/2688500.

Copyright © 2015 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
