
William D. Gropp

Director and Chief Scientist, National Center for Supercomputing Applications; Thomas M. Siebel Chair, Department of Computer Science, University of Illinois at Urbana-Champaign

William Gropp is Director and Chief Scientist of the National Center for Supercomputing Applications and holds the Thomas M. Siebel Chair in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He received his Ph.D. in Computer Science from Stanford University in 1982 and has worked at Yale University and Argonne National Laboratory. His research interests are in parallel computing, software for scientific computing, and numerical methods for partial differential equations. He is a Fellow of ACM, IEEE, and SIAM and a member of the National Academy of Engineering.


Talk Title:

MPI+X for Extreme Scale Computing


Talk Abstract:

The Message Passing Interface (MPI) has been a very successful API for developing both libraries and applications for systems from the smallest to the largest. In practice, however, users and developers have often encountered performance anomalies that can significantly limit scalability. These problems can be either mitigated or exacerbated by combining MPI with other programming models, such as using MPI for internode programming and OpenMP for intranode programming. To understand the source of these problems, this talk examines the common performance models for MPI communication and shows that these models can be misleading and inaccurate. A new performance model is presented that provides insight into achieving better performance. The talk then discusses the use of MPI+X to address the issues revealed by this model, along with some of the open issues in using MPI+X as an exascale programming model, including limitations, challenges, and opportunities in the performance of MPI implementations.
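For readers unfamiliar with the MPI+X pattern the abstract refers to, the sketch below illustrates the common MPI+OpenMP combination: MPI processes handle internode communication while OpenMP threads provide intranode parallelism. It is a minimal, illustrative example (not material from the talk), using the standard MPI_Init_thread call to request a thread-support level suitable for hybrid codes.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int provided, rank, nprocs;

    /* Request a threading level where the main thread makes MPI calls
       while OpenMP threads perform intranode work. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library does not support MPI_THREAD_FUNNELED\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Intranode parallelism: OpenMP threads within each MPI process. */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, nprocs, tid, nthreads);
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with, for example, `mpicc -fopenmp`, this runs one OpenMP team per MPI process; the talk's subject is precisely how the interaction between these two levels affects achievable performance at scale.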
