Abstract
Parsl is a parallel programming library for Python that aims to make it easy to specify parallelism in programs and to realize that parallelism on arbitrary parallel and distributed computing systems. Parsl relies on developers annotating Python functions (wrapping either Python code or external applications) to indicate that these functions may be executed concurrently. Developers can then link functions together via the exchange of data. Parsl establishes a dynamic dependency graph and sends tasks for execution on connected resources as their dependencies are resolved. Parsl's runtime system enables different compute resources to be used, from laptops to supercomputers, without modification to the Parsl program.
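The dataflow pattern the abstract describes (annotated functions returning futures, with dependencies expressed by passing one task's output as another's input) can be illustrated with a minimal standard-library sketch. In Parsl itself the annotation is the `@python_app` decorator and the runtime resolves dependencies between app futures automatically; the sketch below only mimics that idea with `concurrent.futures` and is not Parsl's actual API.

```python
# Illustration of the dataflow idea behind Parsl, using only the
# standard library (not Parsl's actual API). In Parsl, functions
# decorated with @python_app return futures, and passing one app's
# future as another's argument expresses a dependency that the
# runtime resolves before launching the dependent task.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def add(a, b):
    return a + b

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(square, 3)   # independent task
    f2 = pool.submit(square, 4)   # may run concurrently with f1
    # Dependent task: blocks on f1 and f2 before submitting add.
    # (Parsl would instead accept the futures directly and defer
    # execution until both inputs are ready.)
    total = pool.submit(add, f1.result(), f2.result())
    print(total.result())  # 25
```

In real Parsl code, `square` and `add` would be decorated with `@python_app`, and `add(square(3), square(4))` could be called directly on the returned futures, letting the runtime build the dependency graph.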