CS 321 Spring 2013
Lecture Notes for Monday, March 18, 2013

Page-Replacement Algorithms

Introduction

When we move something up in the memory hierarchy, something else generally must be moved down to make room for it. How do we determine what to move down? This issue might occur at any level in the memory hierarchy; we will look at the question of which memory page to swap out to storage if a page is fetched from storage and memory is full. An algorithm to answer this question is a page-replacement algorithm. Every modern OS has one, and these are endlessly tweaked. We will cover the major ideas behind such algorithms.

When the processor accesses a memory location, we want the location’s page to be in memory. If it is not, we have a page fault, and we must fetch the page from storage. The memory access triggers an interrupt, whose handler chooses a page to swap out (if necessary), writes that page to storage, reads the needed page into memory, and updates the page table.
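The sequence above can be sketched in Python. Everything here is an illustrative stand-in, not a real OS interface: a dict plays the role of storage, and the victim is chosen arbitrarily (a real system would use one of the page-replacement algorithms below).

```python
# Illustrative sketch of handling a page fault. "storage" stands in
# for the disk; "memory" holds at most NUM_FRAMES pages.

NUM_FRAMES = 2
storage = {p: f"contents of page {p}" for p in range(4)}
memory = {}  # page number -> contents

def handle_page_fault(page):
    if len(memory) == NUM_FRAMES:
        victim = next(iter(memory))          # stand-in victim choice: oldest entry
        storage[victim] = memory.pop(victim) # write the victim back to storage
    memory[page] = storage[page]             # read the needed page into memory

def access(page):
    if page not in memory:                   # page fault
        handle_page_fault(page)
    return memory[page]

for p in [0, 1, 2, 0]:
    access(p)
print(sorted(memory))  # [0, 2]: the two most recently faulted-in pages
```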

When page faults come one after another, the machine is thrashing; this results in extreme inefficiency. One property of code that helps avoid thrashing is locality of reference: when a location is accessed, subsequent accesses are likely to be to nearby locations.

Example Algorithms

Again, a page replacement algorithm chooses which page to swap out to storage.

We would like to use the Optimal Algorithm: swap out the page whose next access will be the furthest in the future. This requires prediction, and is generally impossible. So we try to make a good guess. Note that whether a guess is “good” depends on how memory is being used; different algorithms may be preferred in different situations.
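While the Optimal Algorithm cannot be implemented in practice, it can be simulated on a reference string that is known in advance, which is useful as a baseline for comparing other algorithms. Below is a minimal sketch; the reference string and frame count are made-up example values.

```python
# Simulate the Optimal Algorithm on a known reference string,
# counting page faults.

def optimal_faults(refs, num_frames):
    """Evict the page whose next access is furthest in the future."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue

        def next_use(p):
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return len(refs)  # never accessed again: ideal victim

        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7
```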

In the Not Recently Used Algorithm, each page is marked with two bits: R and M. R indicates whether the page has been referenced since the last check, and M indicates whether it has been modified. Giving R the value 2 for yes and 0 for no, and giving M the value 1 for yes and 0 for no, we can add R and M to obtain a class number from 0 to 3, in which R is the more significant bit. We swap out a page chosen at random from the lowest-numbered nonempty class, so unreferenced pages are preferred as victims.
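A minimal sketch of NRU victim selection, using the common convention that R is the high bit of the class number (class = 2·R + M). The page representation here is illustrative.

```python
import random

def nru_victim(pages):
    """Pick a random page from the lowest-numbered nonempty class."""
    classes = {c: [] for c in range(4)}
    for page in pages:
        classes[2 * page['r'] + page['m']].append(page)
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])

pages = [{'id': 0, 'r': 1, 'm': 1},   # class 3: referenced and modified
         {'id': 1, 'r': 0, 'm': 1},   # class 1: not referenced, modified
         {'id': 2, 'r': 1, 'm': 0}]   # class 2: referenced, not modified
print(nru_victim(pages)['id'])        # page 1, the only class-1 page
```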

In the FIFO Algorithm we place pages in a queue. We insert a page when it is fetched, and we swap it out when it reaches the front of the queue. This is not a good algorithm, since it may remove a heavily-used page. But the idea of a queue can be useful.
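A minimal sketch of FIFO fault counting; the reference string and frame count are made-up example values (the same string as in the Optimal sketch, for comparison).

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults when the oldest fetched page is always evicted."""
    frames = deque()
    faults = 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()   # the page at the front of the queue is swapped out
        frames.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
```

Note that this is 9 faults versus 7 for the Optimal Algorithm on the same string, illustrating why FIFO alone is a poor choice.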

In the Second Chance Algorithm, we use both a queue and the R and M bits. When a page reaches the front of the queue, if its R bit is set, then we clear this bit and reinsert the page at the back; if the bit is clear, we swap the page out. A variant is the Clock Algorithm, the name of which refers to the idea of storing the queue in a circular buffer, with a “hand” pointing at the oldest page. This makes reinsertion very easy.
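A minimal sketch of the Clock Algorithm: pages sit in a circular buffer, and the hand clears R bits until it finds a victim. The page representation here is illustrative.

```python
def clock_victim(pages, hand):
    """Advance the hand; return (victim_index, new_hand_position)."""
    while True:
        page = pages[hand]
        if page['r']:
            page['r'] = 0                       # give the page a second chance
            hand = (hand + 1) % len(pages)
        else:
            return hand, (hand + 1) % len(pages)

pages = [{'id': 0, 'r': 1}, {'id': 1, 'r': 1}, {'id': 2, 'r': 0}]
victim, hand = clock_victim(pages, 0)
print(pages[victim]['id'])  # page 2, the first page found with R clear
```

Because the buffer is circular, "reinsertion" is just advancing the hand past the page; no data is moved.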

In the Least Recently Used Algorithm we swap out the page that has been unused for the longest time. This sounds good, but tracking the exact order of use requires frequent updates to a large amount of data, and so can be inefficient.
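In simulation, LRU can be sketched with an ordered map: each referenced page moves to the end, so the front always holds the least recently used page. The reference string below is the same example as before.

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults when the least recently used page is evicted."""
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)    # mark as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)  # evict the least recently used page
        frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```

A real OS cannot afford this bookkeeping on every memory access, which is why approximations such as NRU and Clock are used instead.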

A working set is the set of pages that a process is currently using. Some methods try to mark pages as lying in the working set and keep such pages in memory. This may result in pages being fetched when they have not been accessed (as opposed to demand paging, in which a page is fetched only on a page fault involving that page). Note that the working-set idea can lead to thrashing if the working set does not fit in memory.
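One simple way to formalize a working set is as the set of pages referenced in the last k memory accesses; the window size k here is an example value, and real implementations typically approximate this with reference bits and timestamps rather than recording every access.

```python
def working_set(refs, t, k):
    """Pages referenced in the k accesses up to and including time t."""
    return set(refs[max(0, t - k + 1) : t + 1])

refs = [1, 2, 1, 3, 2, 2, 4]
print(working_set(refs, 6, 4))  # {2, 3, 4}: pages seen in the last 4 accesses
```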

The WSClock Algorithm combines the Clock Algorithm with the idea of a working set.


CS 321 Spring 2013: Lecture Notes for Monday, March 18, 2013 / Updated: 6 May 2013 / Glenn G. Chappell / ggchappell@alaska.edu