Prabhanjan Kambadur

Bibliometrics
Average citations per article: 7.64
Citation Count: 107
Publication count: 14
Publication years: 2006-2016
Available for download: 6
Average downloads per article: 486.00
Downloads (cumulative): 2,916
Downloads (12 Months): 152
Downloads (6 Weeks): 17
12 results found

1 published by ACM
March 2016 ACM Transactions on Parallel Computing (TOPC) - Special Issue on PPOPP 2014: Volume 2 Issue 4, March 2016
Publisher: ACM
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 0,   Downloads (12 Months): 14,   Downloads (Overall): 95

Full text available: PDF
X10 is a high-performance, high-productivity programming language aimed at large-scale distributed and shared-memory parallel applications. It is based on the Asynchronous Partitioned Global Address Space (APGAS) programming model, supporting the same fine-grained concurrency mechanisms within and across shared-memory nodes. We demonstrate that X10 delivers solid performance at petascale by running ...
Keywords: APGAS, X10, performance, scalability
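
The finish/async fork-join idiom at the core of the APGAS model can be illustrated in a mainstream language. Below is a minimal sketch that emulates X10's finish { async S; } pattern with C++17 std::async; it is not X10 itself and not code from the paper:

```cpp
// Minimal sketch: emulating X10's finish/async fork-join idiom with
// C++17 std::async. In X10, finish { async S; } waits for all
// transitively spawned tasks; here the future's get() plays the role
// of finish for a simple divide-and-conquer sum.
#include <cstddef>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

long parallel_sum(const std::vector<long>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo < 65536)                        // small range: run serially
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
    std::size_t mid = lo + (hi - lo) / 2;
    // "async": spawn the left half as a concurrent task
    auto left = std::async(std::launch::async, parallel_sum,
                           std::cref(v), lo, mid);
    long right = parallel_sum(v, mid, hi);      // this task keeps the right half
    return left.get() + right;                  // "finish": join before returning
}

int main() {
    std::vector<long> v(1u << 20, 1);
    std::cout << parallel_sum(v, 0, v.size()) << "\n";  // prints 1048576
}
```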

2 published by ACM
February 2014 PPoPP '14: Proceedings of the 19th ACM SIGPLAN symposium on Principles and practice of parallel programming
Publisher: ACM
Bibliometrics:
Citation Count: 15
Downloads (6 Weeks): 1,   Downloads (12 Months): 16,   Downloads (Overall): 376

Full text available: PDF
X10 is a high-performance, high-productivity programming language aimed at large-scale distributed and shared-memory parallel applications. It is based on the Asynchronous Partitioned Global Address Space (APGAS) programming model, supporting the same fine-grained concurrency mechanisms within and across shared-memory nodes. We demonstrate that X10 delivers solid performance at petascale by running ...
Keywords: APGAS, X10, performance, scalability
Also published in:
ACM SIGPLAN Notices - PPoPP '14: Volume 49 Issue 8, August 2014

3
June 2013 ICML'13: Proceedings of the 30th International Conference on Machine Learning - Volume 28
Publisher: JMLR.org
Bibliometrics:
Citation Count: 0

The separability assumption (Donoho & Stodden, 2003; Arora et al., 2012a) turns non-negative matrix factorization (NMF) into a tractable problem. Recently, a new class of provably-correct NMF algorithms has emerged under this assumption. In this paper, we reformulate the separable NMF problem as that of finding the extreme rays of ...
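
For context, the separability condition the abstract invokes is usually stated as follows (a standard formulation under the usual NMF definitions, not text from the paper):

```latex
% Standard statement of the separable-NMF setting (a summary under the
% usual definitions, not text from the paper). NMF factors a nonnegative
% data matrix X into nonnegative factors:
X \approx W H, \qquad
W \in \mathbb{R}^{m \times k}_{\ge 0}, \quad
H \in \mathbb{R}^{k \times n}_{\ge 0}.
% Separability assumes every column of W also appears as a column of X:
% there is an index set K with |K| = k such that
X = X_{:,K}\, H .
% The columns indexed by K then generate the conical hull of the columns
% of X, so separable NMF reduces to identifying the extreme rays of
% \mathrm{cone}(X).
```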

4
May 2013 IBM Journal of Research and Development: Volume 57 Issue 3-4, May/July 2013
Publisher: IBM Corp.
Bibliometrics:
Citation Count: 0

Massive-scale analytics (MSA) applications are characterized by the large amount of data that they process and the complexity of algorithms used to process the data. The ideal MSA system will not only support processing of large amounts of data but also offer a high degree of parallelism and support scheduling ...

5 published by ACM
August 2011 KDD '11: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Publisher: ACM
Bibliometrics:
Citation Count: 16
Downloads (6 Weeks): 3,   Downloads (12 Months): 35,   Downloads (Overall): 981

Full text available: PDF
In the last decade, advances in data collection and storage technologies have led to an increased interest in designing and implementing large-scale parallel algorithms for machine learning and data mining (ML-DM). Existing programming paradigms for expressing large-scale parallelism such as MapReduce (MR) and the Message Passing Interface (MPI) have been ...
Keywords: machine learning, map/reduce, data mining, parallelism

6 published by ACM
February 2011 PPoPP '11: Proceedings of the 16th ACM symposium on Principles and practice of parallel programming
Publisher: ACM
Bibliometrics:
Citation Count: 31
Downloads (6 Weeks): 11,   Downloads (12 Months): 74,   Downloads (Overall): 857

Full text available: PDF
On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly ...
Keywords: distributed work-stealing, global load balancing, x10, uts
Also published in:
ACM SIGPLAN Notices - PPoPP '11: Volume 46 Issue 8, August 2011
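
As background for the abstract's starting point, here is an illustrative toy of shared-memory Cilk-style random work-stealing, the baseline the paper extends to distributed memory; this sketch is not the paper's algorithm:

```cpp
// Illustrative toy of shared-memory Cilk-style work-stealing: each worker
// pops tasks from the back of its own deque; when empty, it steals from
// the front of a randomly chosen victim's deque. A real runtime uses a
// lock-free deque; a mutex keeps this sketch short.
#include <atomic>
#include <cstddef>
#include <deque>
#include <functional>
#include <mutex>
#include <optional>
#include <random>
#include <thread>
#include <vector>

using Task = std::function<void()>;

struct WorkerDeque {
    std::deque<Task> tasks;
    std::mutex lock;

    void push(Task t) {
        std::lock_guard<std::mutex> g(lock);
        tasks.push_back(std::move(t));
    }
    std::optional<Task> pop() {          // owner takes newest work (LIFO)
        std::lock_guard<std::mutex> g(lock);
        if (tasks.empty()) return std::nullopt;
        Task t = std::move(tasks.back());
        tasks.pop_back();
        return t;
    }
    std::optional<Task> steal() {        // thieves take oldest work (FIFO)
        std::lock_guard<std::mutex> g(lock);
        if (tasks.empty()) return std::nullopt;
        Task t = std::move(tasks.front());
        tasks.pop_front();
        return t;
    }
};

void worker_loop(std::size_t id, std::vector<WorkerDeque>& deques,
                 std::atomic<bool>& done) {
    std::mt19937 rng(static_cast<unsigned>(id));
    std::uniform_int_distribution<std::size_t> pick(0, deques.size() - 1);
    while (!done.load()) {
        if (auto t = deques[id].pop()) { (*t)(); continue; }
        std::size_t victim = pick(rng);  // no local work: pick a random victim
        if (victim != id)
            if (auto t = deques[victim].steal()) (*t)();
    }
}

int main() {
    std::vector<WorkerDeque> deques(2);
    std::atomic<bool> done{false};
    std::atomic<int> executed{0};
    for (int i = 0; i < 100; ++i)        // all work starts on worker 0
        deques[0].push([&executed] { ++executed; });
    std::thread t0(worker_loop, 0, std::ref(deques), std::ref(done));
    std::thread t1(worker_loop, 1, std::ref(deques), std::ref(done));
    while (executed.load() < 100) {}     // spin until every task has run
    done = true;
    t0.join();
    t1.join();
}
```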

7
January 2010
Bibliometrics:
Citation Count: 0

The multi-core era brings new challenges to the programming community. Parallelization requirements of applications in mainstream computing and applications in emergent fields of high performance computing such as informatics must be explored. With parallelism now ubiquitous, programmability, composability, and reuse need to be closely examined in applications developed ...

8 published by ACM
November 2009 SC '09: Proceedings of the Conference on High Performance Computing, Networking, Storage and Analysis
Publisher: ACM
Bibliometrics:
Citation Count: 14
Downloads (6 Weeks): 1,   Downloads (12 Months): 12,   Downloads (Overall): 606

Full text available: PDF
HPC today faces new challenges due to paradigm shifts in both hardware and software. The ubiquity of multi-cores, many-cores, and GPGPUs is forcing traditional serial as well as distributed-memory parallel applications to be parallelized for these architectures. Emerging applications in areas such as informatics are placing unique requirements on parallel ...

9
May 2008 IWOMP'08: Proceedings of the 4th international conference on OpenMP in a new era of parallelism
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 5

This paper proposes extensions to the OpenMP standard to provide first-class support for parallelizing generic libraries such as the C++ Standard Library (SL). Generic libraries are especially known for their efficiency, reusability and composability. As such, with the advent of ubiquitous parallelism, generic libraries offer an excellent avenue for parallelizing the existing applications that use ...
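
To make the setting concrete, here is what parallelizing a generic, Standard Library-style algorithm looks like with stock OpenMP; the paper proposes first-class extensions beyond this baseline idiom, and the function below is a hypothetical illustration, not code from the paper:

```cpp
// Hedged sketch: parallelizing a generic C++ Standard Library-style
// algorithm with stock OpenMP. Plain #pragma omp parallel for requires
// an indexable loop, so the template is constrained to random-access
// iterators; compile with -fopenmp (the pragma is ignored otherwise).
#include <iterator>
#include <vector>

template <typename RandomIt, typename UnaryOp>
void parallel_transform_inplace(RandomIt first, RandomIt last, UnaryOp op) {
    using diff_t = typename std::iterator_traits<RandomIt>::difference_type;
    diff_t n = last - first;            // requires random-access iterators
    #pragma omp parallel for
    for (diff_t i = 0; i < n; ++i)
        first[i] = op(first[i]);
}

int main() {
    std::vector<double> v(1'000'000, 1.0);
    parallel_transform_inplace(v.begin(), v.end(),
                               [](double x) { return 2.0 * x; });
}
```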

10
September 2007 PVM/MPI'07: Proceedings of the 14th European conference on Recent Advances in Parallel Virtual Machine and Message Passing Interface
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 11

In this paper we make the case for adding standard nonblocking collective operations to the MPI standard. The nonblocking point-to-point and blocking collective operations currently defined by MPI provide important performance and abstraction benefits. To allow these benefits to be simultaneously realized, we present an application programming interface for nonblocking ...
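
The communication/computation overlap that nonblocking collectives enable can be sketched as follows. Note that the paper predates MPI-3; MPI_Iallreduce below is the MPI-3 routine that later standardized this style of interface, so this is an after-the-fact illustration rather than the paper's proposed API:

```cpp
// Sketch of overlapping a collective with independent computation using
// the MPI-3 nonblocking collective interface. Build with an MPI compiler
// wrapper (e.g., mpicxx) and run under mpirun.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    std::vector<double> local(1024, 1.0), global(1024);
    MPI_Request req;

    // Start the reduction, then do independent work while it progresses.
    MPI_Iallreduce(local.data(), global.data(), 1024, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &req);

    double other_work = 0.0;
    for (int i = 0; i < 1000000; ++i)   // independent computation
        other_work += 1e-6 * i;

    MPI_Wait(&req, MPI_STATUS_IGNORE);  // results in `global` are now valid
    MPI_Finalize();
    return 0;
}
```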

11
May 2007 ICCS '07: Proceedings of the 7th international conference on Computational Science, Part I: ICCS 2007
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 1

This paper describes a new approach for parallelizing generic software libraries. Generic algorithms are expressed in terms of type properties, which allows them to work with entire families of types rather than specific types. Despite this generality, generic algorithms can be made as efficient as their hand-coded variants through the ...
Keywords: type properties, specialization, parallel algorithms
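
A minimal sketch of dispatching on a type property: the generic entry point below selects a parallel implementation when the iterator category permits random access and falls back to a serial version otherwise. This is an illustrative pattern in the same spirit, not code from the paper:

```cpp
// Tag dispatch on the iterator category (a type property): callers get a
// parallel specialization for random-access ranges and a serial fallback
// for everything else, with no change to the calling code.
#include <iterator>
#include <list>
#include <numeric>
#include <vector>

template <typename It, typename T>
T sum(It first, It last, T init, std::forward_iterator_tag) {
    return std::accumulate(first, last, init);   // serial fallback
}

template <typename It, typename T>
T sum(It first, It last, T init, std::random_access_iterator_tag) {
    auto n = last - first;
    T acc = init;
    #pragma omp parallel for reduction(+:acc)    // parallel specialization
    for (decltype(n) i = 0; i < n; ++i)          // (arithmetic T assumed)
        acc += first[i];
    return acc;
}

template <typename It, typename T>
T sum(It first, It last, T init) {               // dispatch on the property
    using cat = typename std::iterator_traits<It>::iterator_category;
    return sum(first, last, init, cat{});
}

int main() {
    std::vector<int> v(100, 1);
    std::list<int> l(100, 1);
    int a = sum(v.begin(), v.end(), 0);          // parallel path
    int b = sum(l.begin(), l.end(), 0);          // serial path
    return (a == 100 && b == 100) ? 0 : 1;
}
```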

12
September 2006 EuroPVM/MPI'06: Proceedings of the 13th European PVM/MPI Users' Group conference on Recent advances in parallel virtual machine and message passing interface
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 5

The Message Passing Interface (MPI) is the de facto standard for writing message passing applications. Much of MPI's power stems from its ability to provide a high-performance, consistent interface across C, Fortran, and C++. Unfortunately, with cross-language consistency at the forefront, MPI tends to support only the lowest common denominator ...
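
The "lowest common denominator" problem can be illustrated by contrasting the raw MPI C interface, where the caller spells out buffer, count, and datatype by hand, with a thin type-safe C++ layer. The wrapper below is a hypothetical sketch, not an API proposed in the paper:

```cpp
// Hedged illustration: a minimal C++ wrapper over the MPI C interface
// that infers the MPI datatype and count from a std::vector, removing
// two sources of mismatch errors. Run with: mpirun -np 2 ./a.out
#include <mpi.h>
#include <vector>

// Map a few C++ types to MPI datatypes at compile time (illustrative only).
template <typename T> MPI_Datatype mpi_type();
template <> MPI_Datatype mpi_type<int>()    { return MPI_INT; }
template <> MPI_Datatype mpi_type<double>() { return MPI_DOUBLE; }

template <typename T>
void send(const std::vector<T>& data, int dest, int tag, MPI_Comm comm) {
    MPI_Send(data.data(), static_cast<int>(data.size()), mpi_type<T>(),
             dest, tag, comm);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        std::vector<double> payload(8, 3.14);
        send(payload, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);  // type inferred
    } else if (rank == 1) {
        std::vector<double> buf(8);                            // raw C interface
        MPI_Recv(buf.data(), 8, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
}
```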


