CS 321 Spring 2013
Lecture Notes for Friday, February 1, 2013

Introduction to Processes (cont’d) [2.1]

Implementation of Processes

An OS kernel keeps track of all processes. The structure in which process information is stored is called the process table. Each entry is a process control block, which contains all the information needed to move a process into a running state.

Such a movement is called a context switch. The old process’s information is stored in its control block, the new process’s information is read from its control block, the registers are set up, and execution of the new process resumes (or begins).
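As a toy illustration (not actual kernel code), the save-and-restore described above can be sketched in Python with a simplified, hypothetical control block:

```python
from dataclasses import dataclass, field

# Hypothetical, stripped-down process control block. A real kernel stores
# far more: memory maps, open files, scheduling information, etc.
@dataclass
class PCB:
    pid: int
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(old: PCB, new: PCB, cpu: dict):
    # Save the old process's hardware state into its control block...
    old.registers = dict(cpu)
    old.state = "ready"
    # ...then load the new process's saved state onto the (simulated) CPU.
    cpu.clear()
    cpu.update(new.registers)
    new.state = "running"

cpu = {"pc": 100, "sp": 5000}
a = PCB(pid=1, registers={"pc": 100, "sp": 5000}, state="running")
b = PCB(pid=2, registers={"pc": 200, "sp": 6000})
context_switch(a, b, cpu)
print(cpu["pc"])  # 200: execution resumes where process 2 left off
```

After the switch, process 1's registers sit in its control block, ready for a later switch back.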

In Linux, the process table is a doubly linked list. Each entry is of type task_struct. This structure varies a bit between kernel releases, but it contains roughly the following data: process state, scheduling information, the process ID, pointers to the parent and child processes, saved register values, memory-management information, and information on open files and signals.

Processor Usage

We noted that running multiple processes concurrently can reduce processor idle time—thus increasing efficiency—as compared to running only a single process. This happens because one process can be running while another is blocked. See the text for a more thorough analysis.
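One rough model (a sketch of the text's analysis, assuming each process independently spends a fraction p of its time blocked on I/O): with n processes in memory, the CPU is idle only when all n are blocked at once, so utilization is approximately 1 − pⁿ.

```python
# Rough CPU-utilization model: if each process is blocked on I/O a
# fraction p of the time, independently, then with n processes the CPU
# is idle only when all n are blocked simultaneously.
def utilization(p, n):
    return 1 - p ** n

# With p = 0.8 (80% I/O wait), one process keeps the CPU only 20% busy;
# four processes push that to about 59%.
print(round(utilization(0.8, 1), 2))  # 0.2
print(round(utilization(0.8, 4), 2))  # 0.59
```

The independence assumption is crude, but the model shows why multiprogramming pays off most when processes are I/O-bound.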

Threads [2.2]

Levels of Context Sharing

Note: This is not in the text.

When a computer is performing some task, the code executes in some context: local variables, address space, the executable being run, hardware, etc.

Different pieces of code might share a great deal of context, or very little. We consider some of the possibilities, ordered from most context shared to least.

In the same function
If the two pieces of code lie in the same function, then they would share almost everything, even local variables. The only thing they would not share is a time slot: one will be executed first.
We could put the code into separate functions. Then they would be somewhat insulated from each other; in particular, they would have different stack frames, and thus different local variables.
Coroutines, or user-level threads
These are different threads of execution that are managed in user space, typically by the environment that supports the application, or the application itself. Pieces of code in different coroutines share the same data space, but have separate program counters and stacks.

We typically use the term coroutine when the implementation is at the language level. The idea is generally that two (or more) functions are currently active and communicating via some channel. Which function is actually being executed goes back and forth as necessary, in order to handle information passing through the channel. Support for coroutines is becoming increasingly common in modern programming languages. For example, the programming language Go includes coroutines in the form of the cutesily named “goroutines”. The Python programming language includes limited coroutine support in the form of “generators”.

We talk about user-level threads when these are managed at a somewhat lower level, and they are presented to the application programmer as running concurrently.
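A minimal sketch of Python's limited coroutine support: a generator suspends at each yield and, when resumed, picks up where it left off with its own program counter and local variables intact.

```python
# A generator is a limited coroutine: it suspends at each yield and
# resumes, with its local state preserved, when next() is called again.
def counter(limit):
    n = 0
    while n < limit:
        yield n      # suspend here; control returns to the caller
        n += 1       # execution resumes on this line

gen = counter(3)
print(next(gen))  # 0
print(next(gen))  # 1  (n was preserved between calls)
print(list(gen))  # [2]
```

Control passes back and forth between the generator and its caller, each with a separate stack frame but a shared data space.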

(Kernel) Threads
Kernel threads, often simply threads, are much like processes, but share the same address space. The separation between such threads is managed at a lower level than coroutines, by the OS kernel. Multiple threads might be managed under a single kernel process-table entry.
Processes
These are as above, but with different address spaces. Each process is managed separately by the kernel.
Virtual machines
Here we might run the pieces of code on the same machine, but under different OSs, and possibly different virtual processors.
Remote execution
Last, and most separated, we can run the code on different machines.
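The shared-address-space point about kernel threads can be seen in a small sketch using Python's threading module: every thread reads and writes the same global data, something separate processes could not do directly.

```python
import threading

# Threads share their process's address space: all workers append to the
# same global list.
shared = []
lock = threading.Lock()

def worker(name):
    with lock:                 # shared data still needs synchronization
        shared.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 1, 2, 3]: every thread wrote to one list
```

The lock hints at the price of sharing: concurrent access to common data must be coordinated, a topic taken up when threads are covered in detail.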

See generate.py (NetRun link) for an example of the use of coroutines in the Python programming language.

(Kernel) threads will be our topic for the next few days.

Threads will be continued next time.

CS 321 Spring 2013: Lecture Notes for Friday, February 1, 2013 / Updated: 1 Feb 2013 / Glenn G. Chappell / ggchappell@alaska.edu