CS 321 Spring 2013
Lecture Notes for Friday, February 8, 2013

Introduction to IPC

IPC is Inter-Process Communication: communication between different processes or threads.

Issues include how data gets from one process/thread to another, how processes/threads avoid getting in each other's way when they share resources, and how their actions are properly sequenced. Getting these wrong leads to the race conditions discussed below.

Two categories of IPC: message passing, in which information is explicitly sent from one process/thread to another, and shared-memory IPC, in which processes/threads communicate through variables they can both access.

Message passing can be synchronous or asynchronous. Synchronous, or blocking, operations are ones in which we request a data transfer and then wait for it to complete. With asynchronous, or non-blocking, operations, we do not wait; the request returns immediately, whether or not the transfer has happened.
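
For a concrete illustration (a sketch using a POSIX pipe; the lecture does not tie the idea to any particular API), a read on a pipe is normally blocking: the call does not return until data is available. Setting the O_NONBLOCK flag makes the same read non-blocking: if no data is available, the call returns immediately with an error instead of waiting.

// Sketch: blocking vs. non-blocking reads on a POSIX pipe (illustration only).

#include <stdio.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char buf[16];

    pipe(fds);  // fds[0] is the read end; fds[1] is the write end

    // Non-blocking (asynchronous style): returns at once if no data is ready.
    fcntl(fds[0], F_SETFL, O_NONBLOCK);
    if (read(fds[0], buf, sizeof buf) == -1 && errno == EAGAIN)
        printf("No data yet; we did not wait.\n");

    // Blocking (synchronous style): with O_NONBLOCK cleared, read waits for
    // data. Here data has already been written, so the call returns at once.
    write(fds[1], "hi", 2);
    fcntl(fds[0], F_SETFL, 0);
    read(fds[0], buf, sizeof buf);
    printf("Got data.\n");

    return 0;
}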

Improperly done IPC can result in a race condition. Again, a race condition exists when process/thread scheduling decisions can affect the correctness of a program.
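
To make the idea concrete, here is a small sketch (not from the lecture): two threads each increment a shared counter a million times with no synchronization. Since "count = count + 1" is a separate read, add, and write, the threads can interleave badly, and the final total is often less than 2,000,000. The answer depends on how the threads are scheduled, so this is a race condition.

// Sketch: a race condition on a shared counter (no synchronization).

#include <stdio.h>
#include <pthread.h>

long count = 0;  // shared by both threads

void * increment_many(void * arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; ++i)
        count = count + 1;  // read-modify-write; NOT atomic
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment_many, NULL);
    pthread_create(&t2, NULL, increment_many, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %ld\n", count);  // often less than 2000000
    return 0;
}

Compile with something like "cc -pthread"; the result typically varies from run to run.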

Shared-Memory IPC

Implementing Mutual Exclusion

When we deal with shared resources, we often need to implement mutual exclusion: only one process/thread gets to access a variable at once. A portion of code that accesses a shared resource is a critical region. We want the following to be true:

- At most one thread is in a critical region (for a given shared resource) at any time.
- No assumptions are made about the number of processors or their relative speeds.
- A thread that is not in a critical region never blocks other threads.
- No thread waits forever to enter a critical region.

Some terminology: busy waiting occurs when a thread continually checks whether it is acceptable to perform some action (for example, enter a critical region). We want to avoid this, since the thread accomplishes nothing useful while it waits, yet it still takes up processor time.

One option for avoiding multiple threads being in a critical region simultaneously is to disable interrupts when a thread enters a critical region, so that no other thread can execute. This has a number of problems. It requires user-level code to be given too much power (the power to disable interrupts and the responsibility to re-enable them when it leaves the critical region). It does not allow any other processes to execute—even if they do not access the shared resource. And it may not achieve its goal on architectures with multiple processors/cores, since another processor may be executing a thread in a critical region.

A successful solution for implementing mutual exclusion involves lock variables, usually called simply locks.

At its simplest, we can think of a lock as an int variable. The variable is 1 if a resource is being used, and 0 if not. Consider the following (faulty!) code.

// Incorrect code!!!

int lock = 0;

void enter_region()  // Call when entering a critical region
{
    while (lock != 0) ;  // Busy-wait until the lock is free
    // An interrupt here would be BAD
    lock = 1;
}

void leave_region()  // Call when leaving a critical region
{
    lock = 0;
}

Above, when we enter a critical region, we wait for the lock to become 0. If the lock is 1, then this looks like an infinite loop. But remember that we are dealing with multiple threads. When the thread holding the lock exits its critical region, it calls leave_region, setting the lock to 0. Then the thread that is waiting for the lock exits its while loop and sets the lock to 1, indicating that it now holds the lock. It does whatever it needs to do, and then, when its critical region is complete, it sets the lock back to 0, releasing the lock and letting some other thread acquire it.
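
To show how these functions are meant to be used (a sketch; the deposit function and shared_balance variable are made-up examples, not from the lecture), each thread brackets its critical region with the two calls:

// Sketch of intended usage. Note that the lock itself is still incorrect,
// for the reason explained below.

int shared_balance = 0;  // the shared resource being protected

void deposit(int amount)
{
    enter_region();            // wait for the lock, then (try to) take it
    shared_balance += amount;  // critical region: touches the shared resource
    leave_region();            // release the lock
}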

But the enter_region/leave_region code has problems. One problem is that it does a busy wait. This makes the code inefficient, but, by itself, it does not make the code incorrect.

However, this code is incorrect. Suppose two threads are waiting for the lock, and suppose there is an interrupt just after one of them exits its while loop, but before it sets the lock variable to 1. The other thread then still sees the lock as 0, so it exits its own loop as well, and we have two threads acquiring the lock at the same time. Mutual exclusion has failed. Thus, the above code leads to race conditions.

Strict Alternation

A solution that actually works: strict alternation (see the text; a sketch appears below). This involves a spin lock: a lock variable accessed using busy waiting. Strict alternation is unsatisfactory because it can prevent a thread from entering a critical region even when no other thread is in one. It also imposes its own scheduling: by the nature of the method, the lock variable determines which thread gets to execute anything other than busy-wait code next.
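
For reference, here is a sketch of strict alternation for two threads, numbered 0 and 1, along the lines of the version in the text (the variable names are mine). A shared turn variable says whose turn it is; a thread busy-waits until turn equals its own number, and hands the turn to the other thread when it leaves. This is what forces the threads to take turns: thread 0 cannot enter twice in a row, even if thread 1 never wants to enter at all.

// Sketch: strict alternation for two threads (0 and 1).

int turn = 0;  // Which thread may enter its critical region next

void enter_region(int thread)  // thread is 0 or 1
{
    while (turn != thread) ;   // Spin (busy-wait) until it is our turn
}

void leave_region(int thread)
{
    turn = 1 - thread;         // Hand the turn to the other thread
}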

Peterson’s solution

Another purely software solution, Peterson's solution, deals with the main problems of strict alternation.
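
Details will come next time, but for reference, here is a sketch of the usual two-thread form of Peterson's solution, along the lines of the version in the text (again, the names are mine). A thread announces that it is interested, then writes its own number into turn; a thread spins only while the other thread is interested and it was the last one to write turn, so a thread is never blocked by a thread that does not want to enter.

// Sketch: Peterson's solution for two threads (0 and 1).

int interested[2] = { 0, 0 };  // interested[t] is set when thread t wants in
int turn;                      // used to break ties when both are interested

void enter_region(int thread)  // thread is 0 or 1
{
    int other = 1 - thread;
    interested[thread] = 1;    // announce that we want to enter
    turn = thread;
    // If both threads arrive at about the same time, the last one to write
    // turn is the one that waits here.
    while (turn == thread && interested[other]) ;
}

void leave_region(int thread)
{
    interested[thread] = 0;    // we have left our critical region
}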

Shared-Memory IPC will be continued next time.


CS 321 Spring 2013: Lecture Notes for Friday, February 8, 2013 / Updated: 8 Feb 2013 / Glenn G. Chappell / ggchappell@alaska.edu