Hazem Ali; Luís Pinho (2011)
Publisher: ACM
Type: Article
Over the last three decades, computer architects achieved steady performance gains for single processors by, for example, increasing clock speeds, introducing cache memories, and exploiting instruction-level parallelism. Because of power consumption and heat dissipation constraints, however, this trend has come to an end, and hardware designers have instead moved to chip architectures with multiple processor cores on a single chip. Multi-core processors allow applications to complete more total work than a single core alone, but exploiting them requires suitable parallel programming models. This paper discusses some of the existing models and frameworks for parallel programming and, building on them, outlines a draft parallel programming model for Ada.
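The fork/join style surveyed in the paper (as embodied by frameworks such as Cilk, Intel TBB, and the Java fork/join framework) can be illustrated with a minimal sketch. This example is not from the paper; it assumes nothing beyond the standard library and uses Python's `concurrent.futures` for brevity, splitting the input into per-worker chunks (fork) and reducing the partial results (join):

```python
# Illustrative fork/join sketch (not the paper's model): split work into
# independent chunks, run them on a worker pool, then combine the results.
from concurrent.futures import ThreadPoolExecutor


def parallel_sum(data, workers=4):
    """Sum `data` by forking one summing task per chunk and joining results."""
    chunk = max(1, len(data) // workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)  # fork: one task per chunk
    return sum(partials)                  # join: reduce the partial sums


print(parallel_sum(list(range(1000))))  # 499500
```

A real fork/join runtime such as Cilk adds recursive task spawning and work stealing on top of this basic split/compute/combine shape; the sketch shows only the structural pattern.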
