Research Article

A parallel programming model for Ada

Published: 06 November 2011

Abstract

Over the last three decades, computer architects increased single-processor performance by, for example, raising clock speeds, introducing cache memories, and exploiting instruction-level parallelism. Power-consumption and heat-dissipation constraints have brought this trend to an end, and hardware designers have instead moved to chip architectures with multiple processor cores on a single chip. Multi-core processors allow applications to complete more total work than a single core alone, but exploiting them effectively requires suitable parallel programming models. This paper discusses some of the existing models and frameworks for parallel programming and, building on them, outlines a draft parallel programming model for Ada.
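To make the motivation concrete, the sketch below shows how a simple data-parallel computation must be hand-coded with the tasking features already in Ada: explicit task types for the workers, a protected object for the shared result, and a manually chosen split of the iteration range. All names (`Parallel_Sum`, `Worker`, `Accumulator`) are illustrative, and this is not the model the paper proposes; it is the kind of boilerplate such a model would aim to eliminate.

```ada
with Ada.Text_IO;

procedure Parallel_Sum is
   type Int_Array is array (Positive range <>) of Integer;
   Data : constant Int_Array (1 .. 1_000) := (others => 1);

   --  Protected object serialising updates to the shared sum.
   protected Accumulator is
      procedure Add (X : Integer);
      function Total return Integer;
   private
      Sum : Integer := 0;
   end Accumulator;

   protected body Accumulator is
      procedure Add (X : Integer) is
      begin
         Sum := Sum + X;
      end Add;

      function Total return Integer is
      begin
         return Sum;
      end Total;
   end Accumulator;

   --  Each worker sums its own slice, then publishes its partial result.
   task type Worker (First, Last : Positive);

   task body Worker is
      Local : Integer := 0;
   begin
      for I in First .. Last loop
         Local := Local + Data (I);
      end loop;
      Accumulator.Add (Local);
   end Worker;
begin
   declare
      --  The iteration range is split by hand across two workers.
      W1 : Worker (1, 500);
      W2 : Worker (501, 1_000);
   begin
      null;  --  The block waits here until W1 and W2 terminate.
   end;
   Ada.Text_IO.Put_Line ("Sum =" & Integer'Image (Accumulator.Total));
end Parallel_Sum;
```

Frameworks such as Cilk or Intel's Threading Building Blocks hide exactly this machinery (work partitioning, worker management, and result combination) behind a parallel-loop or fork/join abstraction, which is the direction the draft model for Ada explores.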

