CS 321 Spring 2012
Lecture Notes for Wednesday, February 8, 2012

Threads (cont’d) [2.2]

More on POSIX Threads

When Is a Thread Finished?

Last time we wrote a multithreaded program with a master thread that spawned off multiple slave threads, and then waited until they were all done. Or at least that was the idea. The program, thread1.cpp (NetRun link), does not always work correctly.

There are two problems with this program.

1. The compiler may assume that threadsdone is not changed by any code it cannot see. It may then keep the value in a register, so the master's waiting loop never notices the slaves' updates.
2. The increment of threadsdone is not atomic. Two slaves running at the same time on different processors/cores can each read the old value, increment it, and write it back, losing one of the updates.

The first problem above can be dealt with inside the C++ language. Declaring a variable volatile tells the compiler not to make these assumptions.

volatile int threadsdone = 0;
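To make this concrete, here is a minimal self-contained sketch of the kind of program we are talking about (an assumption about thread1.cpp's structure, not the actual file; NUMTHREADS and slave are made-up names). With volatile, the master's busy-wait loop re-reads threadsdone on every iteration; however, the increment in the slaves is still not atomic, so the second problem remains.

// Sketch in the spirit of thread1.cpp (assumed structure; not the actual file).
#include <pthread.h>
#include <cstdio>

const int NUMTHREADS = 4;      // number of slave threads (assumed value)
volatile int threadsdone = 0;  // how many slaves have finished

void * slave(void *)           // code run by each slave thread
{
    // ... do the slave's work here ...
    ++threadsdone;             // NOT atomic; updates can be lost on multicore
    return 0;
}

int main()
{
    pthread_t pt[NUMTHREADS];
    for (int i = 0; i < NUMTHREADS; ++i)
        pthread_create(&pt[i], 0, slave, 0);

    while (threadsdone < NUMTHREADS)
        ;  // busy-wait; volatile forces threadsdone to be re-read each time

    std::printf("All slaves are done\n");
    return 0;
}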

The second problem only exists on a machine with multiple processors/cores, but it is more difficult to deal with when it does occur. Some processors provide locked instructions that prevent simultaneous access to a memory location. On my system, I can replace the straight C++

++threadsdone;

with x86 assembly embedded in C++.

__asm__("lock incl (threadsdone)");

However, this is specific to both my processor and my compiler; it may not work for you (and it is not a very nice solution, regardless).
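A variation that avoids hard-coding the symbol name is GCC's extended inline-assembly syntax, which hands the variable to the assembly as a memory operand. This is a sketch under the assumption of a GCC-style compiler on x86/x86-64; it is still compiler- and processor-specific.

// Sketch: atomic increment using GCC extended inline asm (assumed toolchain).
// The "+m" constraint makes threadsdone a read-write memory operand,
// so the assembler fills in the correct addressing for %0.
volatile int threadsdone = 0;

void count_one_done()   // hypothetical helper called by each slave
{
    __asm__ __volatile__("lock incl %0" : "+m" (threadsdone));
}

Even in this form, the code is tied to one compiler family and one instruction set; the pThreads facility described next avoids the shared counter entirely.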

Waiting for a thread to finish is a common thing to do in a multithreaded program, and it turns out that pThreads provides a facility for doing this, one that actually works. It is called thread joining.

When a slave thread terminates, it is said to “join” the master. The master can do a (blocking) wait for this by calling pthread_join. This takes two parameters: the pthread_t of the thread to wait for, and a void ** through which the thread’s return value can be passed back. Set the latter parameter to NULL (“0”) if you do not care about the return value.

pthread_join(pt[i], 0);

See thread2.cpp (NetRun link) for a demo of pThreads using thread joining.
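As a rough idea of what such a program looks like (an assumed structure, not the actual thread2.cpp), the master creates the slaves and then calls pthread_join once per slave; no shared done counter is needed at all.

// Sketch of a join-based master/slave program (assumed structure;
// not the actual thread2.cpp).
#include <pthread.h>
#include <cstdio>

const int NUMTHREADS = 4;   // number of slave threads (assumed value)

void * slave(void *)        // code run by each slave thread
{
    std::printf("A slave is working\n");
    return 0;               // return value; the master ignores it below
}

int main()
{
    pthread_t pt[NUMTHREADS];
    for (int i = 0; i < NUMTHREADS; ++i)
        pthread_create(&pt[i], 0, slave, 0);

    for (int i = 0; i < NUMTHREADS; ++i)
        pthread_join(pt[i], 0);   // block until slave i terminates

    std::printf("All slaves have joined\n");
    return 0;
}

On a typical Linux/GCC setup, programs like this are compiled with the -pthread flag.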

Later, we will look at another solution involving a kernel-provided lock, called a mutex. Such locks will also allow us to clean up the messy output of the above multithreaded programs.

A Race-Condition Story

Recall that a race condition occurs when the results of a program change in significant/surprising/bad ways due to scheduling decisions. We now discuss a famous example of a race condition that caused serious harm.

The THERAC-25 was a radiation therapy device made in the 1980s. It had two modes: direct electron-beam therapy, in which a patient was irradiated with a low-power electron beam, and multivolt x-ray therapy, in which a patient was irradiated with x-rays. The x-rays were produced by aiming a high-power electron beam at a metal “target” that blocked the beam.

In order to change between the two modes, the beam power needed to be altered, and the target needed to be put in place or taken away, as appropriate. All this was handled by a computer running a multithreaded program.

When the machine entered electron-beam mode, there was an eight-second calibration cycle, and then the target was removed. If, in that eight seconds, the operator told the machine to go to x-ray mode, another thread was spawned, which put the target into place and increased the beam strength. Then, after the eight seconds were over, the first thread would remove the target. The result was a high-energy electron beam irradiating a patient with no target in the way.

This not being a physics or medicine class, we will not go into details; suffice it to say that this is BAD. As a result of this bug, three people died, and others were seriously injured. Race conditions are serious business; watch out for them.

Thread Implementation Issues

As noted in the text, user threads generally have better performance than kernel threads. The issues are essentially the same as those for monolithic kernels vs. microkernels: the number of user/kernel mode switches can be too high.

One way to deal with this problem is for the kernel to allocate thread-related resources that can then be managed in user mode. For example, the kernel can create a number of virtual processors on which threads run; user-mode code can schedule threads on these without additional user/kernel mode switches.

See also pop-up threads (discussed in the text) for another way to improve thread performance in some situations.


CS 321 Spring 2012: Lecture Notes for Wednesday, February 8, 2012 / Updated: 12 Feb 2012 / Glenn G. Chappell / ggchappell@alaska.edu