Parallel Programming: MPI & Messaging

CS 641 Lecture, Dr. Lawlor

First, read this basic MPI tutorial, which has a good survey of all the MPI routines.  The gory details are in the MPI 1.1 standard, which is fairly readable for a standard (but then, most standards are hideously unreadable).  Pay attention to the "big eight" MPI functions: MPI_Init, MPI_Finalize, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Bcast, and MPI_Reduce.  Those are really the only functions you need in MPI 1.1; the rest are just small variations on those themes.

For example, here's an idiomatic MPI program where the first process sends one integer to the last process:
#include <mpi.h>    /* for MPI_ functions */
#include <stdio.h>  /* for printf */
#include <stdlib.h> /* for atexit */

void call_finalize(void) {MPI_Finalize();}

int main(int argc,char *argv[]) {
    MPI_Init(&argc,&argv);
    atexit(call_finalize); /*<- important to avoid weird errors! */

    int rank=0,size=1;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    MPI_Comm_size(MPI_COMM_WORLD,&size);

    int tag=17; /*<- random integer ID for this message exchange */
    if (rank==0) {
        int val=1234;
        MPI_Send(&val,1,MPI_INT, size-1,tag,MPI_COMM_WORLD);
        printf("Rank %d sent value %d\n",rank,val);
    }
    if (rank==size-1) {
        MPI_Status sts;
        int val=0;
        MPI_Recv(&val,1,MPI_INT, MPI_ANY_SOURCE,tag,MPI_COMM_WORLD,&sts);
        printf("Rank %d received value %d\n",rank,val);
    }
    return 0;
}
You can copy this same program on the UAF powerwall from ~olawlor/641_demo/simplempi/.
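That example exercises six of the "big eight".  Here's a minimal sketch of the other two, MPI_Bcast and MPI_Reduce (this one is not in the demo directory; the broadcast value and the sum-of-ranks computation are just made up for illustration):

#include <mpi.h>   /* for MPI_ functions */
#include <stdio.h> /* for printf */

int main(int argc,char *argv[]) {
    MPI_Init(&argc,&argv);
    int rank=0,size=1;
    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
    MPI_Comm_size(MPI_COMM_WORLD,&size);

    int val=0;
    if (rank==0) val=100; /* rank 0 picks a value... */
    MPI_Bcast(&val,1,MPI_INT, 0,MPI_COMM_WORLD); /* ...and broadcasts it to every rank */

    int mine=val+rank; /* each rank contributes val+rank... */
    int sum=0;
    MPI_Reduce(&mine,&sum,1,MPI_INT, MPI_SUM, 0,MPI_COMM_WORLD); /* ...and rank 0 gets the total */
    if (rank==0) printf("Sum across %d ranks: %d\n",size,sum);

    MPI_Finalize();
    return 0;
}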

MPI on the Powerwall

I've given you all accounts on the UAF Bioinformatics Powerwall up in Chapman 205.  You can SSH in directly from on-campus, but from off-campus the UAF firewall will block you unless you SSH to port 80 (normally the web port).  On UNIX systems, the command looks like "ssh -p 80 fsxxx@powerwall0.cs.uaf.edu".

Once you're logged into the powerwall, copy over the "mandel" example program with:
    cp -r ~olawlor/441_demo/mandel .

Now build and run the "mandel" program like so:
    cd mandel
    make
    mpirun -np 2 ./mandel

You can edit the source code with pico, a friendly little editor (use the arrow keys, press Ctrl-X to exit):
    pico main.cpp

You can make a backup copy of the code with:
    cp main.cpp bak_v1_original.cpp

If you have a UNIX machine, you can view the resulting fractal image with:
    xv out.ppm
Or you can copy the files off the powerwall for local display; you may prefer to
    convert out.ppm out.jpg
to get a normal JPEG.

Theoretical Message-Passing Performance

Most network cards require some (fairly large) fixed amount of time per message, plus some (smaller) amount of time per byte of the message:
    tnet = a + b * L

Where:
    a = the per-message cost, or latency (about 50 us/message in the Ethernet example below)
    b = the per-byte cost, the inverse of bandwidth (about 10 ns/byte in that example)
    L = the length of the message, in bytes
The bottom line is that shorter messages are always faster.  *But* due to the per-message overhead, they may not be *appreciably* faster.  For example, for Ethernet a zero-length message takes 50 us.  A 100-byte message takes 51 us (50 us/message + 100 bytes * 10 ns/byte).  So you might as well send 100 bytes if you're going to bother sending anything!

In general, you'll hit 50% of the network's achievable bandwidth when the message is long enough that the per-message and per-byte costs are equal:
    a = b * L
or
    L = a / b
For Ethernet, this breakeven point is at L = 5000 bytes/message, which is amazingly long!  Shorter messages are "alpha dominated", which means you're waiting for the network latency (speed of light, acknowledgements, OS overhead).  Longer messages are "beta dominated", which means you're waiting for the network bandwidth (signalling rate).  Making messages smaller doesn't help much once you're alpha dominated!
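Here's a tiny C sketch of this cost model, plugging in the Ethernet-style numbers assumed above (a = 50 us/message, b = 10 ns/byte); the particular message lengths are arbitrary:

#include <stdio.h>

int main(void) {
    double a=50e-6; /* assumed per-message cost: 50 us */
    double b=10e-9; /* assumed per-byte cost: 10 ns/byte */
    double lengths[]={0.0, 100.0, 5000.0, 100000.0}; /* message lengths, in bytes */
    int i;
    for (i=0;i<4;i++) {
        double L=lengths[i];
        double tnet=a+b*L;            /* time to send one message */
        double bw=(L>0)?(L/tnet):0.0; /* achieved bandwidth, bytes/second */
        printf("L=%8.0f bytes: tnet=%7.1f us, bandwidth=%5.1f MB/s\n",
            L, tnet*1e6, bw*1e-6);
    }
    printf("Breakeven length a/b = %.0f bytes\n", a/b);
    return 0;
}

At L = 5000 bytes the printed bandwidth is exactly half of the 100 MB/s you'd get from the per-byte cost alone.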

The large per-message cost of most networks means many applications cannot get good performance with an "obvious" parallel strategy, like sending individual floats when they're needed.  Instead, you have to figure out beforehand what data needs to be sent, and send everything at once.
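For instance, here's a hedged sketch contrasting the two strategies (the function names and the count N are invented for illustration): the loop pays the per-message cost a once per float, while the batched version pays it only once for all N floats.

#include <mpi.h>

#define N 1000

/* Slow: one message, and hence one per-message latency a, per float. */
void send_one_at_a_time(float *vals, int dest, MPI_Comm comm) {
    int i;
    for (i=0;i<N;i++)
        MPI_Send(&vals[i],1,MPI_FLOAT, dest,0,comm);
}

/* Fast: one big message, paying the latency a only once. */
void send_all_at_once(float *vals, int dest, MPI_Comm comm) {
    MPI_Send(vals,N,MPI_FLOAT, dest,0,comm);
}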

Parallel CPU Performance

OK, that's the network's performance.  What about the CPU?  A typical model is:
    tcpu = k * n / p

Where:
    k = the CPU time per unit of work (e.g., per pixel, in the example below)
    n = the problem size, in units of work
    p = the number of processors

Parallel Problem Performance

Say we're doing an n = 1 million pixel Mandelbrot rendering in parallel on p = 10 processors, at k = 300 ns/pixel.  Step one is to render all the pixels:
    tcpu = k * n / p
    tcpu = 300 ns/pixel * 1m pixels / 10 processors = 30 ms/processor

After rendering the pixels, we've got to send them to one place for output.  So each processor sends a message of size L = (3 bytes/pixel * n / p) = 300 Kbytes/processor, and thus:
    tnet = a + b * L
    tnet = 50 us/message + 10 ns/byte * (3 bytes/pixel * 1m pixels / 10 processors) = 3.05 ms/processor
(Careful: this assumes all the messages sent can be received at one place in parallel, which often isn't the case!)

Hence in this case we spend almost 10 times longer computing the data than we do sending it (30 ms versus 3.05 ms), which is pretty good.
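Here's the same arithmetic as a small C sketch, using the constants assumed above (k = 300 ns/pixel, a = 50 us/message, b = 10 ns/byte, 3 bytes/pixel):

#include <stdio.h>

int main(void) {
    double k=300e-9;          /* assumed CPU cost per pixel */
    double a=50e-6;           /* assumed per-message network cost */
    double b=10e-9;           /* assumed per-byte network cost */
    double n=1e6;             /* pixels in the image */
    double p=10;              /* processors */
    double bytes_per_pixel=3; /* output bytes per pixel */

    double tcpu=k*n/p;                     /* render time per processor */
    double tnet=a+b*(bytes_per_pixel*n/p); /* time to send that processor's pixels */
    printf("tcpu = %.2f ms, tnet = %.2f ms, ratio = %.1f\n",
        tcpu*1e3, tnet*1e3, tcpu/tnet);
    return 0;
}

This prints roughly 30 ms of compute versus about 3 ms of communication, a ratio near 10.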

Unlike the network, the CPU spends its time actually solving the problem.  So CPU time is usually seen as "good" useful work, while network time is "bad" parallel overhead.  More generally, then, you'd like the "computation to communication ratio", tcpu / tnet, to be as high as possible.  For the Mandelbrot problem:

    tcpu / tnet = (k * n / p) / (a + b * n / p * 3)

Or, simplifying:
    tcpu / tnet = k / ( a * p / n + b * 3)

Remember, a higher CPU/net ratio means more time spent computing, and hence higher efficiency.  So we can immediately see that:
    A slower network (bigger a or b) makes the ratio worse.
    More processors p, for a fixed problem size n, make the ratio worse.
    A bigger problem n, for a fixed p, makes the ratio better.
    A slower CPU (bigger k) actually makes the ratio better!
Or, to spend more time computing *relative* to the time spent communicating:
    Use a faster network.
    Use fewer processors.
    Solve a bigger problem.
    Send fewer bytes per unit of work.
Of course, what users really care about is the *total* time spent, tcpu + tnet:
    tcpu + tnet = k * n / p + a + b * n / p * 3

To spend less time overall:
    Use more processors (this shrinks the k * n / p and b * n / p * 3 terms, but never touches a).
    Use a faster CPU (smaller k) and a faster network (smaller a and b).
    Send fewer bytes per unit of work (here, fewer than 3 bytes/pixel).
Note that even if you do all these things, you can *never* solve the Mandelbrot problem in parallel in less than the per-message time a.  For Ethernet, this means parallel runs always take at least 50 us.  In particular, if the problem can be solved serially in less than 50 us, going parallel can't possibly help!  Luckily, it's rare that you care about a problem that takes less than 50 us total.  And if that sub-50 us computation is just part of a bigger problem, you can parallelize the *bigger* problem.
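Here's a small sketch (same assumed constants as above) showing how the total time bottoms out near the per-message cost a, no matter how many processors you throw at the problem:

#include <stdio.h>

int main(void) {
    double k=300e-9, a=50e-6, b=10e-9; /* assumed CPU and network costs */
    double n=1e6, bytes_per_pixel=3;   /* 1m pixel image, 3 bytes/pixel */
    int p;
    for (p=1;p<=100000;p*=10) {
        double total=k*n/p + a + b*bytes_per_pixel*n/p; /* tcpu + tnet */
        printf("p=%7d: tcpu+tnet = %10.1f us\n", p, total*1e6);
    }
    return 0;
}

Even at p = 100,000 the total never drops below the 50 us per-message cost.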

In general, the per-message cost means you want to parallelize the *coarsest* pieces of the computation you can!

Parallelism Generally

Here's a survey of data structures used by scientific simulation software.  I prepared it while working in Illinois on a variety of simulation codes.

Some applications are quite easy to parallelize.  Others aren't.