James Dinan

Bibliometrics: publication history
  Average citations per article: 6.42
  Citation count: 308
  Publication count: 48
  Publication years: 2006–2019
  Available for download: 21
  Average downloads per article: 206.76
  Downloads (cumulative): 4,342
  Downloads (12 months): 279
  Downloads (6 weeks): 48

48 results found

Results 1–20 of 48


1 published by ACM
August 2019 ICPP 2019: Proceedings of the 48th International Conference on Parallel Processing
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 21,   Downloads (12 Months): 27,   Downloads (Overall): 27

Full text available: PDF
Realizing scalable performance with irregular parallel applications is challenging on large-scale distributed memory clusters. These applications typically require continuous, dynamic load balancing to maintain efficiency. Work stealing is a common approach to dynamic distributed load balancing. However, its use in conjunction with advanced network offload capabilities is not well understood. ...
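
For illustration, a minimal sketch of the steal operation at the core of distributed work stealing, written here with standard MPI-3 atomics; the paper concerns offloading such protocols to the network interface, which this simplified version does not capture, and the counter layout is an assumption:

    /* Hedged sketch: one-sided steal from a victim's task counter.
     * The window exposes a single long at displacement 0 holding the
     * number of available tasks. */
    #include <mpi.h>

    /* Returns the index of the stolen task, or -1 on failure. */
    long try_steal(int victim, MPI_Win tail_win)
    {
        long dummy = 0, tail, newtail, prev;

        MPI_Win_lock(MPI_LOCK_SHARED, victim, 0, tail_win);

        /* Atomic read of the victim's tail counter. */
        MPI_Fetch_and_op(&dummy, &tail, MPI_LONG, victim, 0, MPI_NO_OP,
                         tail_win);
        MPI_Win_flush(victim, tail_win);   /* make `tail` valid locally */

        if (tail <= 0) {                   /* nothing to steal */
            MPI_Win_unlock(victim, tail_win);
            return -1;
        }

        /* Claim one task: CAS tail -> tail-1; fails if another thief won. */
        newtail = tail - 1;
        MPI_Compare_and_swap(&newtail, &tail, &prev, MPI_LONG, victim, 0,
                             tail_win);
        MPI_Win_unlock(victim, tail_win);

        return (prev == tail) ? newtail : -1;
    }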

2 published by ACM
August 2018 ICPP 2018: Proceedings of the 47th International Conference on Parallel Processing
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 5,   Downloads (12 Months): 47,   Downloads (Overall): 58

Full text available: PDF
Many HPC applications have successfully applied Partitioned Global Address Space (PGAS) parallel programming models to efficiently manage shared data that is distributed across multiple nodes in a distributed memory system. However, while the flat addressing model provided by PGAS systems is effective for regular array data, it renders such systems ...

3
December 2016 Concurrency and Computation: Practice & Experience: Volume 28 Issue 17, December 2016
Publisher: John Wiley and Sons Ltd.
Bibliometrics:
Citation Count: 0

The Message Passing Interface (MPI) 3.0 standard includes a significant revision to MPI's remote memory access (RMA) interface, which provides support for one-sided communication. MPI-3 RMA is expected to greatly enhance the usability and performance of MPI RMA. We present the first complete implementation of MPI-3 RMA and document implementation ...
Keywords: remote memory access (RMA), MPICH, Message Passing Interface (MPI), one-sided communication
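
As a point of reference, a minimal sketch of the MPI-3 RMA interface the paper implements, using only standard MPI-3 calls; the window size and access pattern are illustrative:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nproc;
        double *base;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        /* Each process exposes 1024 doubles for one-sided access. */
        MPI_Win_allocate(1024 * sizeof(double), sizeof(double),
                         MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);

        /* MPI-3 passive-target epoch spanning all ranks. */
        MPI_Win_lock_all(0, win);
        double val = (double)rank;
        MPI_Put(&val, 1, MPI_DOUBLE, (rank + 1) % nproc, 0, 1,
                MPI_DOUBLE, win);
        MPI_Win_flush((rank + 1) % nproc, win); /* complete put remotely */
        MPI_Win_unlock_all(win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }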

4
November 2016 COM-HPC '16: Proceedings of the First Workshop on Optimization of Communication in HPC
Publisher: IEEE Press
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 0,   Downloads (12 Months): 7,   Downloads (Overall): 56

Full text available: PDF
Partitioned Global Address Space (PGAS) parallel programming models can provide an efficient mechanism for managing shared data stored across multiple nodes in a distributed memory system. However, these models are traditionally directly addressed and, for applications with loosely-structured or sparse data, determining the location of a given data element within ...
Keywords: PGAS, offload, parallel hash table, portals

5
September 2016 Concurrency and Computation: Practice & Experience: Volume 28 Issue 13, September 2016
Publisher: John Wiley and Sons Ltd.
Bibliometrics:
Citation Count: 0

Quantum Monte Carlo (QMC) applications perform simulation with respect to an initial state of the quantum mechanical system, which is often captured by using a cubic B-spline basis. This representation is stored as a read-only table of coefficients and accesses to the table are generated at random as part of ...
Keywords: PGAS, global arrays, quantum Monte Carlo
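
For illustration, a hedged sketch of the kind of one-sided read from a distributed coefficient table described above, using the Global Arrays library; the GA calls are real, but the array name, size, and indices are assumptions:

    #include <mpi.h>
    #include "ga.h"
    #include "macdecls.h"

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        GA_Initialize();

        int dims[1] = {1000000};
        int chunk[1] = {-1};          /* let GA choose the distribution */
        int g_tab = NGA_Create(C_DBL, 1, dims, "spline_coeffs", chunk);

        /* One-sided fetch of coefficients [lo, hi], wherever they live. */
        int lo[1] = {42}, hi[1] = {45}, ld[1] = {1};
        double buf[4];
        NGA_Get(g_tab, lo, hi, buf, ld);

        GA_Destroy(g_tab);
        GA_Terminate();
        MPI_Finalize();
        return 0;
    }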

6
September 2016 Concurrency and Computation: Practice & Experience: Volume 28 Issue 13, September 2016
Publisher: John Wiley and Sons Ltd.
Bibliometrics:
Citation Count: 0

Task parallelism is an attractive approach to automatically load balancing computation and adapting to the dynamism exhibited by parallel systems. Exploiting task parallelism through work stealing has been extensively studied in shared- and distributed-memory contexts. In this paper, we study the design of a system that ...
Keywords: partitioned global address space, GPU, task parallelism

7
May 2016 IEEE Transactions on Parallel and Distributed Systems: Volume 27 Issue 5, May 2016
Publisher: IEEE Press
Bibliometrics:
Citation Count: 2

Data movement in high-performance computing systems accelerated by graphics processing units (GPUs) remains a challenging problem. Data communication in popular parallel programming models, such as the Message Passing Interface (MPI), is currently limited to the data stored in the CPU memory space. Auxiliary memory systems, such as GPU memory, are ...
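
For context, a brief sketch in the style of CUDA-aware MPI, where device pointers are passed directly to MPI calls; this requires library support and is not the paper's specific system:

    #include <mpi.h>
    #include <cuda_runtime.h>

    void exchange(int rank)
    {
        double *d_buf;
        cudaMalloc((void **)&d_buf, 1024 * sizeof(double));

        if (rank == 0)
            /* device pointer handed straight to MPI */
            MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(d_buf);
    }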

8
September 2015 PGAS '15: Proceedings of the 2015 9th International Conference on Partitioned Global Address Space Programming Models
Publisher: IEEE Computer Society
Bibliometrics:
Citation Count: 0

Partitioned Global Address Space (PGAS) and one-sided communication models allow shared data to be transparently and asynchronously accessed by any process within a parallel computation. In order to ensure that updates are performed in the intended order, the programmer must either use potentially slower ordered communication or perform operations that ...
Keywords: Fence, One-sided, Ordering, PGAS, OpenSHMEM
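
To make the trade-off concrete, a minimal sketch of the two standard OpenSHMEM ordering primitives; the symmetric variables are illustrative:

    #include <shmem.h>

    long a = 0, b = 0;   /* symmetric destinations on every PE */

    int main(void)
    {
        shmem_init();
        if (shmem_my_pe() == 0) {
            long x = 1, y = 2;
            shmem_long_put(&a, &x, 1, 1);
            shmem_fence();               /* puts to PE 1 ordered: a before b */
            shmem_long_put(&b, &y, 1, 1);
            shmem_quiet();               /* all outstanding puts now complete */
        }
        shmem_barrier_all();
        shmem_finalize();
        return 0;
    }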

9
September 2015 Procedia Computer Science: Volume 51 Issue C, September 2015
Publisher: Elsevier Science Publishers B. V.
Bibliometrics:
Citation Count: 0

Exascale studies project reliability challenges for future high-performance computing (HPC) systems. We propose the Global View Resilience (GVR) system, a library that enables applications to add resilience in a portable, application-controlled fashion using versioned distributed arrays. We describe GVR's interfaces to distributed arrays, versioning, and cross-layer error recovery. Using several ...
Keywords: Scalable computing, Application-based fault tolerance, Resilience, Exascale, Fault tolerance

10 published by ACM
June 2015 ACM Transactions on Parallel Computing: Volume 2 Issue 2, July 2015
Publisher: ACM
Bibliometrics:
Citation Count: 5
Downloads (6 Weeks): 15,   Downloads (12 Months): 105,   Downloads (Overall): 255

Full text available: PDF
The Message Passing Interface (MPI) 3.0 standard, introduced in September 2012, includes a significant update to the one-sided communication interface, also known as remote memory access (RMA). In particular, the interface has been extended to better support popular one-sided and global-address-space parallel programming models to provide better access to hardware ...
Keywords: one-sided communication, MPI, RMA
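
One of the MPI-3 additions the interface revision covers is request-based RMA; a minimal sketch with standard calls, where the transfer size is illustrative:

    #include <mpi.h>

    void update(double *buf, int target, MPI_Win win)
    {
        MPI_Request req;

        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
        /* MPI_Rput returns a request that can be waited on for local
         * completion, allowing communication/computation overlap. */
        MPI_Rput(buf, 128, MPI_DOUBLE, target, 0, 128, MPI_DOUBLE,
                 win, &req);
        /* ... unrelated computation ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* buf is now reusable */
        MPI_Win_unlock(target, win);         /* remote completion */
    }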

11
May 2015 IPDPSW '15: Proceedings of the 2015 IEEE International Parallel and Distributed Processing Symposium Workshop
Publisher: IEEE Computer Society
Bibliometrics:
Citation Count: 0

ASHES Introduction and Committees

12
November 2014 SC '14: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis
Publisher: IEEE Press
Bibliometrics:
Citation Count: 3
Downloads (6 Weeks): 2,   Downloads (12 Months): 11,   Downloads (Overall): 104

Full text available: PDF
One-sided communication decouples data movement and synchronization by providing support for asynchronous reads and updates of distributed shared data. While such interfaces can be extremely efficient, they also impose challenges in properly performing asynchronous accesses to shared data. This paper presents MC-Checker, a new tool that detects memory consistency errors ...
Keywords: one-sided communication, MPI, bug detection
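
To illustrate the class of bug such a tool targets, a hedged sketch of a memory consistency error; this is a generic example, not one taken from the paper:

    #include <mpi.h>
    #include <stdio.h>

    /* rank 0 puts into rank 1's window; rank 1 reads its own window
     * memory with no intervening synchronization. */
    void example(int rank, double *winbuf, MPI_Win win)
    {
        if (rank == 0) {
            double v = 1.0;
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
            MPI_Put(&v, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
            MPI_Win_unlock(1, win);
        } else if (rank == 1) {
            /* BUG: no synchronization orders this load against the
             * incoming put; winbuf[0] may hold either value. */
            printf("%f\n", winbuf[0]);
        }
    }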

13
November 2014 SC '14: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis
Publisher: IEEE Press
Bibliometrics:
Citation Count: 7
Downloads (6 Weeks): 1,   Downloads (12 Months): 20,   Downloads (Overall): 194

Full text available: PDF
Modern high-speed interconnection networks are designed with capabilities to support communication from multiple processor cores. The MPI endpoints extension has been proposed to ease process and thread count tradeoffs by enabling multithreaded MPI applications to efficiently drive independent network communication. In this work, we present the first implementation of the ...
Keywords: hybrid parallel programming, endpoints, MPI

14
November 2014 International Journal of High Performance Computing Applications: Volume 28 Issue 4, November 2014
Publisher: Sage Publications, Inc.
Bibliometrics:
Citation Count: 5

MPI defines a one-to-one relationship between MPI processes and ranks. This model captures many use cases effectively; however, it also limits communication concurrency and interoperability between MPI and programming models that utilize threads. This paper describes the MPI endpoints extension, which relaxes the longstanding one-to-one relationship between MPI processes and ...
Keywords: communication concurrency, endpoints, MPI, hybrid parallel programming, interoperability
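
A sketch of the proposed interface as described in the endpoints literature; MPI_Comm_create_endpoints is not part of the MPI standard, so this reflects the proposal only and will not compile against a standard MPI library:

    #include <mpi.h>
    #include <omp.h>

    void with_endpoints(void)
    {
        int num_threads = omp_get_max_threads();
        MPI_Comm ep_comms[num_threads];

        /* Proposed call: each calling process is given my_num_ep ranks
         * (endpoints) in the resulting communicator. */
        MPI_Comm_create_endpoints(MPI_COMM_WORLD, num_threads,
                                  MPI_INFO_NULL, ep_comms);

        #pragma omp parallel
        {
            MPI_Comm my_ep = ep_comms[omp_get_thread_num()];
            int ep_rank;
            MPI_Comm_rank(my_ep, &ep_rank);
            /* each thread now drives independent communication on my_ep */
        }
    }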

15 published by ACM
October 2014 PGAS '14: Proceedings of the 8th International Conference on Partitioned Global Address Space Programming Models
Publisher: ACM
Bibliometrics:
Citation Count: 3
Downloads (6 Weeks): 1,   Downloads (12 Months): 4,   Downloads (Overall): 50

Full text available: PDF
This paper introduces a proposed extension to the OpenSHMEM parallel programming model, called communication contexts. Contexts introduce a new construct that allows a programmer to generate independent streams of communication operations. In hybrid executions where multiple threads execute within an OpenSHMEM process, contexts eliminate interference between threads, and enable the ...
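
A minimal sketch using the contexts interface in the form later standardized in OpenSHMEM 1.4; the proposal described in this paper may differ in detail:

    #include <shmem.h>

    void threaded_puts(long *dest, const long *src, size_t n, int pe)
    {
        shmem_ctx_t ctx;

        /* A private context gives this thread an independent stream of
         * communication operations, isolated from other threads. */
        if (shmem_ctx_create(0, &ctx) != 0)
            return;   /* no resources for a context */

        shmem_ctx_long_put(ctx, dest, src, n, pe);
        shmem_ctx_quiet(ctx);   /* completes only this context's puts */
        shmem_ctx_destroy(ctx);
    }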

16 published by ACM
October 2014 PGAS '14: Proceedings of the 8th International Conference on Partitioned Global Address Space Programming Models
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 0,   Downloads (12 Months): 1,   Downloads (Overall): 36

Full text available: PDF
One-sided append represents a new class of one-sided operations that can be used to aggregate messages from multiple communication sources into a single destination buffer. This new communication paradigm is analyzed in terms of its impact on the OpenSHMEM parallel programming model and applications. Implementation considerations are discussed and an ...

17 published by ACM
October 2014 PGAS '14: Proceedings of the 8th International Conference on Partitioned Global Address Space Programming Models
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 1,   Downloads (12 Months): 4,   Downloads (Overall): 59

Full text available: PDF
The purpose of this document is to stimulate discussions on support for multi-threaded execution in OpenSHMEM. Why is there a need for any thread support at all for an API that follows a shared global address space paradigm? In our ongoing work, we investigate opportunities and challenges introduced through multi-threading, ...
Keywords: OpenSHMEM, hybrid programming, threading
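
For reference, a sketch of the thread-support negotiation in the form later standardized in OpenSHMEM 1.4; this paper predates and motivated that interface:

    #include <shmem.h>
    #include <stdio.h>

    int main(void)
    {
        int provided;

        /* Request full multithreaded access to the OpenSHMEM library. */
        shmem_init_thread(SHMEM_THREAD_MULTIPLE, &provided);

        if (provided < SHMEM_THREAD_MULTIPLE)
            printf("PE %d: full multithreading not available\n",
                   shmem_my_pe());

        /* ... threads may call OpenSHMEM routines, subject to the
         * provided level ... */
        shmem_finalize();
        return 0;
    }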

18
March 2014 OpenSHMEM 2014: Proceedings of the First Workshop on OpenSHMEM and Related Technologies. Experiences, Implementations, and Tools - Volume 8356
Publisher: Springer-Verlag New York, Inc.
Bibliometrics:
Citation Count: 3

OpenSHMEM provides a one-sided communication interface that allows for asynchronous, one-sided communication operations on data stored in a partitioned global address space. While communication in this model is efficient, synchronizations must currently be achieved through collective barriers or one-sided updates of sentinel locations in the global address space. These synchronization ...
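
A minimal sketch of the sentinel-based synchronization pattern the paper aims to improve upon, written with standard OpenSHMEM calls; the payload and flag layout are illustrative:

    #include <shmem.h>

    long payload = 0;
    long flag = 0;   /* symmetric sentinel */

    int main(void)
    {
        shmem_init();
        if (shmem_my_pe() == 0) {
            long v = 42;
            shmem_long_put(&payload, &v, 1, 1); /* deliver data to PE 1 */
            shmem_fence();                      /* order data before flag */
            shmem_long_p(&flag, 1, 1);          /* raise the sentinel */
        } else if (shmem_my_pe() == 1) {
            shmem_long_wait_until(&flag, SHMEM_CMP_EQ, 1);
            /* payload is now visible on PE 1 */
        }
        shmem_finalize();
        return 0;
    }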

19
December 2013 Computing: Volume 95 Issue 12, December 2013
Publisher: Springer-Verlag New York, Inc.
Bibliometrics:
Citation Count: 10

Hybrid parallel programming with the message passing interface (MPI) for internode communication in conjunction with a shared-memory programming model to manage intranode parallelism has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity ...
Keywords: Hybrid parallel programming, MPI-3.0, 68N19 other programming techniques (object-oriented, sequential, concurrent, automatic, etc.), Shared memory
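
As background, a minimal sketch of the MPI-3 shared-memory window interface underlying this hybrid approach, using standard MPI calls; sizes are illustrative:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm node_comm;
        MPI_Win win;
        double *mine, *neighbor;
        int node_rank, disp;
        MPI_Aint sz;

        MPI_Init(&argc, &argv);

        /* Group the processes that can share physical memory. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);

        /* Allocate a window backed by node-local shared memory. */
        MPI_Win_allocate_shared(1024 * sizeof(double), sizeof(double),
                                MPI_INFO_NULL, node_comm, &mine, &win);

        /* Direct load/store access to a peer's segment on the same node. */
        if (node_rank > 0)
            MPI_Win_shared_query(win, node_rank - 1, &sz, &disp, &neighbor);

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }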

20
October 2013 ICPP '13: Proceedings of the 2013 42nd International Conference on Parallel Processing
Publisher: IEEE Computer Society
Bibliometrics:
Citation Count: 1

MPI is the de facto standard for portable parallel programming on high-end systems. However, while the MPI standard provides functional portability, it does not provide sufficient performance portability across platforms. We present a framework that enables users to provide hints about communication patterns used within MPI applications. These annotations are ...
Keywords: high performance computing, parallel programming, automatic programming


