(Click on the title slide to access the entire slide deck.)

Lecture recording (Panopto): watch the lecture here.

Lecture 19

We continue to develop the message-passing paradigm for distributed-memory parallel computing, looking in more depth at the principal abstractions in MPI (the communicator, the message envelope, and the message contents) and at two variants of “six-function MPI”. Laplace’s equation is part of the HPC canon, so we do a deep dive on approaches to solving it iteratively with MPI, which leads to the compute-communicate (a.k.a. BSP) pattern.
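To make the compute-communicate pattern concrete, here is a minimal sketch (not the lecture’s own code) of a Jacobi iteration for Laplace’s equation with a 1-D row decomposition across ranks. It restricts itself to the six-function core (MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize); the grid sizes, iteration count, boundary values, and the even/odd ordering of sends and receives are illustrative assumptions, not part of the lecture material. Each pass of the loop is one BSP superstep: exchange halo rows with the neighboring ranks, then update the interior points.

```c
/* Minimal sketch: 1-D decomposed Jacobi iteration for Laplace's equation,
 * using only the six-function MPI core. Sizes and iteration count are
 * illustrative; boundary conditions are a simple fixed top row of 1.0. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define N     64   /* interior rows per rank (illustrative)     */
#define NCOLS 64   /* columns of the grid (illustrative)        */
#define ITERS 500  /* fixed iteration count, for simplicity     */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Local block: N interior rows plus one halo (ghost) row above and below. */
    static double u[N + 2][NCOLS], unew[N + 2][NCOLS];
    memset(u, 0, sizeof u);

    /* Simple Dirichlet condition: the top of the global domain is held at 1.0. */
    if (rank == 0)
        for (int j = 0; j < NCOLS; j++) u[0][j] = 1.0;

    /* Ranks at the ends of the chain talk to MPI_PROC_NULL (a no-op partner). */
    int up   = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int down = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    for (int it = 0; it < ITERS; it++) {
        /* Communicate phase: exchange halo rows with neighbors.
         * Even/odd ordering of blocking sends and receives avoids deadlock. */
        if (rank % 2 == 0) {
            MPI_Send(u[N],     NCOLS, MPI_DOUBLE, down, 0, MPI_COMM_WORLD);
            MPI_Recv(u[N + 1], NCOLS, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(u[1],     NCOLS, MPI_DOUBLE, up,   1, MPI_COMM_WORLD);
            MPI_Recv(u[0],     NCOLS, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(u[0],     NCOLS, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(u[1],     NCOLS, MPI_DOUBLE, up,   1, MPI_COMM_WORLD);
            MPI_Recv(u[N + 1], NCOLS, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(u[N],     NCOLS, MPI_DOUBLE, down, 0, MPI_COMM_WORLD);
        }

        /* Compute phase: Jacobi update of the interior points. */
        for (int i = 1; i <= N; i++)
            for (int j = 1; j < NCOLS - 1; j++)
                unew[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                     u[i][j - 1] + u[i][j + 1]);
        for (int i = 1; i <= N; i++)
            for (int j = 1; j < NCOLS - 1; j++)
                u[i][j] = unew[i][j];
    }

    if (rank == 0) printf("Done after %d iterations on %d ranks\n", ITERS, size);
    MPI_Finalize();
    return 0;
}
```

One design point worth noting: because plain MPI_Send and MPI_Recv are blocking, the halo exchange must be ordered (here by rank parity) so that every send has a matching receive already posted on a neighbor; in practice this exchange is often written more compactly with MPI_Sendrecv, which lies just outside the six-function core.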

