Abstract
This poster proposes an efficient runtime scheduler that provides provable performance guarantees to parallel programs that use data structures, via implicit batching.
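The abstract only names implicit batching; roughly, the idea is that operations issued concurrently against a shared data structure are buffered by the runtime and applied together as one batch by a single "combiner" thread, rather than each thread synchronizing on the structure individually. The sketch below is an illustrative assumption of that pattern (in the spirit of flat combining), not the poster's actual scheduler; the class and method names are invented for illustration.

```python
import threading

class BatchedSet:
    """Illustrative sketch of implicit batching (NOT the poster's design):
    threads publish operations into a pending list, and whichever thread
    next acquires the combiner lock applies the whole batch against an
    ordinary sequential set."""

    def __init__(self):
        self._seq = set()                  # underlying sequential structure
        self._pending = []                 # published, not-yet-applied ops
        self._lock = threading.Lock()      # guards the pending list
        self._combiner = threading.Lock()  # at most one combiner at a time

    def _submit(self, op, key):
        slot = {}                          # result written here by combiner
        done = threading.Event()
        with self._lock:
            self._pending.append((op, key, slot, done))
        # Either an in-flight combiner applies our op, or we become
        # the combiner ourselves once the lock frees up.
        with self._combiner:
            if not done.is_set():
                self._combine()
        return slot["result"]

    def _combine(self):
        # Grab the entire pending batch and apply it sequentially.
        with self._lock:
            batch, self._pending = self._pending, []
        for op, key, slot, done in batch:
            if op == "insert":
                slot["result"] = key not in self._seq  # True if newly added
                self._seq.add(key)
            elif op == "contains":
                slot["result"] = key in self._seq
            done.set()

    def insert(self, key):
        return self._submit("insert", key)

    def contains(self, key):
        return self._submit("contains", key)
```

Batching of this kind amortizes synchronization across all operations in a batch, which is what makes amortized per-operation bounds (and hence whole-program speedup guarantees) tractable to prove.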
Index Terms
Provably good scheduling for parallel programs that use data structures through implicit batching
Published in PPoPP '14: Proceedings of the 19th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming.