CS 321 Spring 2012
Lecture Notes for Wednesday, February 29, 2012

Introduction to Memory Management

All processes want all of the memory, and they want it now. This being impossible, we use a memory hierarchy.

Registers                (faster)
Processor Cache
Main Memory
Local Storage
Network-Accessed Data    (greater capacity)

The above is generally an oversimplification. In particular, modern processors have multiple levels of cache. Following the above pattern, the levels with the greatest speed have the lowest capacity.

We try to predict what data will be needed soon, and we place it high in the hierarchy. We move data around the hierarchy in chunks. A chunk moved between main memory and local storage is called a (memory) page. A chunk moved between main memory and the processor cache is called a cache line.

Because of this chunking, we like algorithms whose memory accesses tend to be near recent accesses. Such algorithms are said to have locality of reference. On modern computers, they tend to execute faster than algorithms without this property. Code that takes into account the existence of the processor cache is said to be cache-aware.
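
For example, here is a small C sketch (illustrative only; the array size N is arbitrary) contrasting an access pattern with good locality of reference against one without.

#include <stddef.h>

#define N 1024

/* Sum an N x N array by rows. C stores a[i][j] and a[i][j+1] next to each
   other in memory, so consecutive accesses tend to hit the same cache line. */
double sum_row_major(double a[N][N])
{
    double total = 0.0;
    for (size_t i = 0; i < N; ++i)
        for (size_t j = 0; j < N; ++j)
            total += a[i][j];
    return total;
}

/* Same sum, by columns. Consecutive accesses are N*sizeof(double) bytes
   apart, so nearly every access can touch a different cache line; this
   version is typically much slower, even though it does the same work. */
double sum_col_major(double a[N][N])
{
    double total = 0.0;
    for (size_t j = 0; j < N; ++j)
        for (size_t i = 0; i < N; ++i)
            total += a[i][j];
    return total;
}

The row-major version is cache-aware in the sense above: it respects the way the array is laid out in memory.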

Memory Abstractions

From the point of view of an Assembly Language program, the interface to memory is very simple. We do something like

mov rax, ____

where the blank is filled in with some memory address.

But, as with many things in the operating-systems field, what is behind the interface may not be what we expect. How the interface maps to actual data is what a memory abstraction is all about.

No Memory Abstraction

The simplest memory abstraction is no memory abstraction. Logical addresses (found, e.g., in an Assembly Language program) are identical to physical addresses (locations on actual memory hardware).

Simple Contiguous Address Space

The first nontrivial memory abstraction gives each process a separate address space: a block of contiguous memory that no other process has access to.

We specify a contiguous address space using two special-purpose processor registers: the base register and the limit register. The base register holds the physical address of the start of a process's address space, which is mapped to logical address zero. The limit register holds the size of the address space.

Each process has its own copy of the base and limit values. When a context switch is performed, the values for the old process are saved, and the new process’s values are placed in the registers.

In order to map a logical address to a physical address, the hardware first checks whether the logical address is greater than or equal to the limit value. If it is, the process is attempting to access memory that it does not have the privilege to access. If the logical address is in range, then the hardware adds the base value to it to obtain the physical address.
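
As a rough C sketch of this check-and-add (illustrative only; on real hardware it is done by the memory-management circuitry on every access, and the names below are made up for the example):

#include <stdbool.h>
#include <stdint.h>

/* Per-process relocation information. Each process has its own copy;
   a context switch loads these values into the base and limit registers. */
struct address_space {
    uint64_t base;   /* physical address of the start of the space */
    uint64_t limit;  /* size of the space, in bytes */
};

/* Map a logical address to a physical address. Returns false if the
   address is out of range; the hardware would instead raise a fault,
   and the OS would typically terminate the offending process. */
bool translate(const struct address_space *as,
               uint64_t logical, uint64_t *physical)
{
    if (logical >= as->limit)    /* out of range: protection violation */
        return false;
    *physical = as->base + logical;
    return true;
}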

A simple contiguous address-space abstraction allows for the following.

- Several processes can reside in memory at the same time.
- Each process is protected from the others; it cannot read or write memory outside its own address space.
- A process can be placed anywhere in physical memory, since its logical addresses do not depend on where it actually sits.

However, this memory abstraction does not conveniently allow for any of the following.

- Sharing a region of memory between two processes.
- Growing a process's address space after it has been set up.
- Running a process whose address space is larger than physical memory.
- Keeping only part of a process's address space in memory at a time.

Modern Virtual Memory

In modern virtual memory, we do memory mapping, in which we establish a correspondence between three things:

- portions of a process's logical (virtual) address space,
- portions of physical memory, and
- portions of data on local storage (e.g., a file).

We do this mapping at a relatively fine-grained level, e.g., memory pages might be 4K in size.
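
For example, with 4K pages, an address splits into a page number (the granularity at which the correspondence is maintained) and an offset within the page, which is carried over unchanged. A quick C sketch, with made-up names:

#include <stdint.h>

#define PAGE_SIZE 4096   /* 4K pages, as in the example above */

/* Which page of the address space an address falls in. */
uint64_t page_number(uint64_t logical) { return logical / PAGE_SIZE; }

/* Position of the address within its page. */
uint64_t page_offset(uint64_t logical) { return logical % PAGE_SIZE; }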

Modern virtual memory allows for the above-listed operations to be done. It also allows for other operations. For example, normal memory allocation establishes a correspondence with an unnamed file (called an anonymous file). But we can just as easily establish a correspondence with a named file. Then reading from and writing to that portion of memory will read from and write to the file. The result is a memory-mapped file, a particularly efficient way to do file I/O.
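
For example, here is a minimal POSIX sketch using mmap (error handling kept brief; the file name notes.txt is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("notes.txt", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Establish the correspondence: this range of the process's address
       space now refers to the contents of the file. */
    char *data = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    if (st.st_size > 0)
        data[0] = '#';   /* an ordinary store; the change ends up in the file */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}

Note that there are no read or write calls; ordinary loads and stores do the file I/O.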

We will cover modern virtual memory in more detail later in the semester.


CS 321 Spring 2012: Lecture Notes for Wednesday, February 29, 2012 / Updated: 6 May 2012 / Glenn G. Chappell / ggchappell@alaska.edu