effectively decouple the context for invoking MPI progress and querying the completion status of MPI requests, thereby circumventing synchronization complexities and task-engine interference. Meanwhile, the MPIX Async proposal empowers applications to integrate custom progress hooks into MPI progress, enabling them to harness MPI progress and extend MPI functionality from the user layer. Through examples and micro-benchmark tests, we demonstrate the effectiveness of these extensions in bringing MPI to modern asynchronous programming.
Acknowledgments. This research was supported by the U.S. Department of
Energy, Office of Science, under Contract DE-AC02-06CH11357.