Abstract
Over the last three decades, computer architects increased single-processor performance by, for example, raising clock speeds, introducing cache memories, and exploiting instruction-level parallelism. Power consumption and heat dissipation constraints, however, have brought this trend to an end, and hardware engineers have instead moved to chip architectures with multiple processor cores on a single chip. With multi-core processors, applications can complete more total work than with a single core, but only if they are written to exploit the available parallelism; parallel programming models have been proposed as a promising way of using multi-core processors effectively. This paper discusses some of the existing models and frameworks for parallel programming and, from them, outlines a draft parallel programming model for Ada.
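As a concrete illustration of the kind of parallel programming model the abstract refers to, consider divide-and-conquer fork/join parallelism, one of the models commonly cited in this space. The sketch below is ours, not the paper's, and uses Java's standard `java.util.concurrent` fork/join framework rather than Ada: a task recursively splits an array sum, forks one half so an idle worker can steal it, and joins the results.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer array sum expressed as a fork/join task.
// Below a cutoff the task sums sequentially; above it, the left
// half is forked (made available to work-stealing workers) while
// the right half is computed in the current thread.
class SumTask extends RecursiveTask<Long> {
    private static final int CUTOFF = 1_000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= CUTOFF) {        // base case: sum directly
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                    // schedule left half for stealing
        long right = new SumTask(data, mid, hi).compute();
        return left.join() + right;     // wait for the forked half
    }
}

public class ForkJoinSum {
    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = ForkJoinPool.commonPool()
                               .invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);        // prints 4999950000
    }
}
```

The cutoff limits task-creation overhead, and forking only one half keeps the current thread busy instead of idling at the join; a model for Ada would have to offer comparable control over task granularity and scheduling.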
A parallel programming model for Ada

SIGAda '11: Proceedings of the 2011 ACM annual international conference on Special Interest Group on the Ada programming language