Memory Cache and Performance

CS 301 Lecture, Dr. Lawlor

Cache Pitfall #1: Random accesses

You can see a subtle cache effect from this simple code, which loops over an array:
enum {n=1024/4};
int arr[n];

int inc_arr(void) {
	for (int i=0;i<n;i++)
		arr[i]++;
	return 0;
}

int foo(void) {
	double t=time_function(inc_arr);
	printf("%.3f ns/element\n",t/n*1.0e9);
	return 0;
}

(Try this in NetRun now!)

This takes 1.2ns per element as written.  If you crank n up so the array is one megabyte (n=1024*1024/4), it takes 2.0ns per element.  Above one megabyte, it takes up to 2.3ns per element.

So there's about a 2x slowdown for *sequential* accesses in big arrays compared to small arrays.  The reason is that the small array fits in a special faster memory area called 'cache', which has higher bandwidth, but the big array is too big to fit.

There's a much bigger effect from the following code:
enum {n=1024/4};
int arr[n];

int inc_arr(void) {
	unsigned int loc=0;
	for (int i=0;i<n;i++) {
		arr[loc%n]++;
		loc+=12345;
	}
	return 0;
}

int foo(void) {
	double t=time_function(inc_arr);
	printf("%.3f ns/element\n",t/n*1.0e9);
	return 0;
}

(Try this in NetRun now!)

There's more than a 10x slowdown for *random* accesses in a big array, compared to a small array!

Application: Linked List

For example, here's a linked list that takes 15ns per iteration:
class linked_listnode {
public:
	linked_listnode *next;
	int data;
	linked_listnode(linked_listnode *next_,int data_) {next=next_; data=data_;}
};

linked_listnode *make_list(void) { /* build a million-node list */
	linked_listnode *head=0;
	for (int i=0;i<1000000;i++) {
		char *padding=new char[rand()%40];
		head=new linked_listnode(head,rand()%7);
	}
	return head;
}
linked_listnode *list=make_list(); /*<- build list once, so it's not timed with foo */

int foo(void) { /* walk through the linked list */
	int sum=0;
	for (linked_listnode *cur=list; cur!=NULL; cur=cur->next)
		sum+=cur->data;
	return sum;
}

(Try this in NetRun now!)

If I just change how I allocate this list, leaving out the "padding", I double the speed to 7ns per iteration!
linked_listnode *make_list(void) { /* build a million-node list */
	linked_listnode *head=0;
	for (int i=0;i<1000000;i++) {
		head=new linked_listnode(head,rand()%7);
	}
	return head;
}

(Try this in NetRun now!)

If I change the allocation again, so all the nodes are allocated in one big array first and then initialized in place with the "placement new" operator, I get even better performance: just 3ns per iteration!
#include <new>
class linked_listnode {
public:
	linked_listnode *next;
	int data;
	linked_listnode(linked_listnode *next_,int data_) {next=next_; data=data_;}
	linked_listnode() {}
};

linked_listnode *make_list(void) { /* build a million-node list */
	linked_listnode *head=0;
	linked_listnode *arr=new linked_listnode[1000000];
	for (int i=0;i<1000000;i++) {
		head=new(&arr[i]) linked_listnode(head,rand()%7);
	}
	return head;
}
linked_listnode *list=make_list(); /*<- build list once, so it's not timed with foo */

int foo(void) { /* walk through the linked list */
	int sum=0;
	for (linked_listnode *cur=list; cur!=NULL; cur=cur->next)
		sum+=cur->data;
	return sum;
}

(Try this in NetRun now!)

The bottom line is that padding, even padding inside the "new" allocator's space, can significantly impact the space and time used by your program.

Hardware: Levels of Cache

Here's an inner loop that does something funny--it jumps around (using "loc") inside an array called "buf", but only within the bounds established by "mask".  Like any loop, each iteration takes some amount of time; but what's surprising is that the speed depends very strongly on the value of "mask", which sets the size of the array we're jumping around in.

(Executable NetRun Link)
	for (i=0;i<max;i++) { /* jump around in buffer, incrementing as we go */
		sum+=buf[loc&mask]++;
		loc+=1234567;
	}
Here's the performance of this loop, in nanoseconds per iteration, as a function of the array size (as determined by "mask").
Size (KB)    2.8GHz P4   1.6GHz P-M   2.2GHz Athlon64   2.0GHz PPC G5   900MHz P3   300MHz PPC
     1.02         4.92         4.88              2.27            5.53       17.47        16.03
     2.05         4.6          4.87              2.28            5.53       17.47        16.03
     4.1          4.72         4.9               2.28            5.18       17.48        16.03
     8.19         4.83         4.91              2.28            3.6        17.49        16.03
    16.38         4.75         4.91              2.28            3.6        17.52        16.03
    32.77         6.59         4.84              2.28            3.63       21.57        16.03
    65.54         6.84        10.1               2.29            5.31       21.58        16.64
   131.07         6.92        10.11              5.26            5.31       21.97        40.31
   262.14         7.13        10.11              6.92            5.31       98.28        40.34
   524.29         8.48        10.07             10.13           23.98      144.04        52.33
  1048.58        19.33        10.43             38.95           44.59      153.2         49.86
  2097.15        54.33        28.87             76.15           99.11      156.86       144.76
  4194.3         76.31        85.3              78.05          112.55      157.32       256.09
  8388.61        75.33       111.43             78.81          210.04      159.43       342.73
 16777.22        77.49       120.39             81.77          214.19      166.52       166.52
 33554.43        77.93       126.73             81.56          208.21      168.58       168.58

I claim each performance plateau corresponds to a chunk of hardware.  Note that there are three jumps in the timings: one when the array outgrows the small, fast L1 cache, one when it outgrows the L2 cache, and one when it no longer fits in any cache and every access has to go all the way out to main RAM.
Note also that CPUs have been getting faster and faster (compare against the older, slower machines in the rightmost columns), but RAM hasn't gotten much faster nowadays!
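For reference, here's a minimal self-contained sketch of the kind of benchmark that produces a table like this (the buffer size, iteration count, and std::chrono timing harness are my own choices for illustration, not the exact NetRun code):
#include <cstdio>
#include <vector>
#include <chrono>

/* Sketch of the mask-sweep benchmark: time the "jump around in buf" loop
   for each power-of-two working-set size. */
int main(void) {
	const int max=10*1000*1000; /* iterations per size */
	std::vector<int> buf(32*1024*1024/sizeof(int),0); /* 32MB buffer */
	for (unsigned int mask=1024/sizeof(int)-1; mask<buf.size(); mask=mask*2+1) {
		unsigned int loc=0;
		long long sum=0;
		auto start=std::chrono::high_resolution_clock::now();
		for (int i=0;i<max;i++) { /* jump around in buffer, incrementing as we go */
			sum+=buf[loc&mask]++;
			loc+=1234567;
		}
		auto end=std::chrono::high_resolution_clock::now();
		double ns=std::chrono::duration<double,std::nano>(end-start).count()/max;
		printf("%9.2f KB: %6.2f ns/iteration (sum=%lld)\n",
			(mask+1)*sizeof(int)/1024.0,ns,sum);
	}
	return 0;
}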

Hardware: DRAM Hardware

DRAM cells are basically little capacitors, and their charge slowly bleeds off over a timescale of milliseconds; internally they need to be "refreshed" (read out and re-written), which special circuitry performs.

There's more detail than you'd ever want to know about DRAM chip interfacing (RAS/CAS, EDO, FPM, DDR, etc.) at Ars Technica.

Charged DRAM cells are slightly light sensitive.  This means you can actually build a camera using a RAM chip as a sensor (this was much easier back in the day, when chips were often packaged in a little metal "can" instead of being semi-permanently cast into plastic or ceramic).  The downside is the sensors have a native 1-bit resolution (real black-and-white), so even to get grayscale you've got to take several exposures and average.  The upside is that the chips in a $50 1GB (8 gigabit) DIMM could be used to build an 8 *gigapixel* camera (in literal on-or-off black and white, however), and the thing could be read out at several frames per second.  If you were willing to restrict the focal area and resolution, you could easily get hundreds of thousands of frames per second, although you'd probably need a very bright light source.

Software: Memory Speed

In general, memory accesses are fast if you've touched the same data recently (it's still in cache), and fast if you're accessing data sequentially (the hardware can stream it in).
So if you're getting bad performance, you can either reuse data you've already pulled into cache, or rearrange your accesses so they're more sequential.
These are actually two aspects of "locality": the values you access should be similar.  The cache lets you reuse stuff you've used recently in time (temporal locality); streaming access is about touching stuff nearby in space (spatial locality).

There are lots of different ways to improve locality; one simple example is sketched below, and the 2D-array section that follows is another.
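As one illustration (my own example, not from the original notes): fusing two passes over a big array into a single pass improves temporal locality, because each element gets reused while it's still in cache instead of being re-fetched from RAM on the second pass.
#include <vector>

/* Hypothetical example: two separate passes touch every element twice,
   so the second pass re-fetches data that has already fallen out of cache.
   Fusing the passes reuses each element while it's still hot. */
enum {N=4*1024*1024};
std::vector<float> a(N,1.0f), b(N,2.0f);

float two_passes(void) { /* poor temporal locality */
	float sum=0;
	for (int i=0;i<N;i++) a[i]*=3.0f;     /* pass 1: scale a */
	for (int i=0;i<N;i++) sum+=a[i]*b[i]; /* pass 2: a[i] comes back from RAM */
	return sum;
}

float one_pass(void) { /* better temporal locality: fused loop */
	float sum=0;
	for (int i=0;i<N;i++) {
		a[i]*=3.0f;       /* a[i] is now in cache... */
		sum+=a[i]*b[i];   /* ...so reuse it immediately */
	}
	return sum;
}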

Application: 2D Arrays implemented as 1D Arrays

C and C++ don't actually support "real" 2D arrays.

For example, here's some 2D array code--check the disassembly
int arr[4][3];
int foo(void) {
	arr[0][0]=0xA0B0;
	arr[0][1]=0xA0B1;
	arr[0][2]=0xA0B2;
	arr[1][0]=0xA1B0;
	arr[2][0]=0xA2B0;
	arr[3][0]=0xA3B0;
	return 0;
}
(executable NetRun link)

Here, arr[i][j] is at a 1D address like arr[i*3+j].  Here's a picture of the array, and the 1D addresses of each element:

        i==0   i==1   i==2   i==3
j==0    [0]    [3]    [6]    [9]
j==1    [1]    [4]    [7]    [10]
j==2    [2]    [5]    [8]    [11]

Note that adjacent j indices are actually adjacent in memory; adjacent i indices are separated by 3 ints.  There are lots of ways to say this same thing: j is the "fast" index and i is the "slow" index; the array is stored in row-major order; j is contiguous in memory while i is strided.
In general, when you write
    int arr[sx][sy];
    arr[x][y]++;

The compiler turns this into a 1D array like so:
    int arr[sx*sy];
    arr[x*sy+y]++;

Here y is the fast index. Note that the x coordinate is multiplied by the y size.  This is because between x and x+1, there's one whole set of y's.  Between y and y+1, there's nothing!  So you want your *innermost* loop to be y, since then that loop will access contiguous array elements.

We could also have written
    int arr[sy][sx];
    arr[y][x]++;

Which in 1D notation is:  
    int arr[sy*sx];
    arr[x+y*sx]++;

Now x is the fast index, and the y coordinate is multiplied by the x size.  You now want your innermost loop to be x, since adjacent x values are contiguous in memory.

For example, this program has terrible performance-- 75ns/pixel:
(Executable NetRun Link)
enum {sx=512};
enum {sy=512};
double img[sx*sy];

int inc_img(void) {
	for (int x=0;x<sx;x++)
	for (int y=0;y<sy;y++)
		img[x+y*sx]++;
	return 0;
}

int foo(void) {
	double t=time_function(inc_img);
	printf("%.3f ns/pixel\n",t/(sx*sy)*1.0e9);
	return 0;
}
We can speed the program up by either swapping the two loops (so the inner loop runs over x, the fast index), or keeping the loops and changing the indexing to img[x*sy+y] (so the inner y loop becomes the fast one).  Either way, the innermost loop then walks through contiguous memory; a sketch of the first fix follows.
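Here's a sketch of the loop-swapped version (assuming the same NetRun time_function harness used above); the inner loop now runs over x, the fast index, so it touches contiguous memory:
enum {sx=512};
enum {sy=512};
double img[sx*sy];

int inc_img(void) {
	for (int y=0;y<sy;y++)  /* outer loop: slow index */
	for (int x=0;x<sx;x++)  /* inner loop: x is the fast index */
		img[x+y*sx]++;      /* adjacent x values are adjacent in memory */
	return 0;
}

int foo(void) {
	double t=time_function(inc_img);
	printf("%.3f ns/pixel\n",t/(sx*sy)*1.0e9);
	return 0;
}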

Cache Pitfall #2: Set-associative Cache Mapping

Anytime you have a cache, of any kind, you need to figure out what to do when the cache gets full.  Generally, you face this problem when you've got a new element X to load into the cache: which cache slot do you place X into?

The simplest approach is a "direct mapped cache", where element X goes into cache slot X%N (where N is the number of slots in the cache).  Direct mapping means adjacent elements like 1 and 2 go into different, adjacent slots, and because the lookup is so simple the cache can support many elements.
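Here's a tiny sketch of that slot calculation (the 64-byte line size and 1024-slot count are illustration values I picked, not a specific CPU's):
#include <cstdio>

/* Hypothetical direct-mapped cache parameters (illustration only). */
enum {LINE_BYTES=64};   /* bytes per cache line */
enum {NUM_SLOTS=1024};  /* 1024 slots * 64 bytes = 64KB cache */

/* Which slot does this byte address land in? */
unsigned int cache_slot(unsigned long address) {
	return (address/LINE_BYTES)%NUM_SLOTS;
}

int main(void) {
	printf("0x0ABCD -> slot %u\n",cache_slot(0x0ABCD));
	printf("0x1ABCD -> slot %u\n",cache_slot(0x1ABCD)); /* 64KB later: same slot, so they conflict */
	return 0;
}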

For example, the Pentium 4 behaves as if it had a 64KB direct-mapped cache: address 0x0ABCD and address 0x1ABCD (which are 64KB apart) both get mapped to the same place in the cache.  So this program, whose two accesses don't collide, is very fast (5.2ns/call):
enum {n=1024*1024};
char arr[n];

int foo(void) {
	arr[0]++;
	arr[12345]++;
	return 0;
}

(Try this in NetRun now!)

By contrast, this very similar-looking program is very slow (20+ns/call), because both array elements (exactly 64KB apart) map to the same line of the cache, so the CPU keeps overwriting one with the other (called "cache thrashing"), and the cache is totally useless:
enum {n=1024*1024};
char arr[n];

int foo(void) {
	arr[0]++;
	arr[65536]++;
	return 0;
}

(Try this in NetRun now!)

In general, power-of-two jumps in memory can be very slow on direct-mapped machines.  This is one of the only cases on computers where powers of two are not ideal!

Some machines avoid the thrashing of a direct-mapped cache by allowing a given memory address to sit in any one of two or four slots, called two-way or four-way "set associative" caching (described here, or in the Bryant and O'Hallaron book, Chapter 6).  On such machines you can still get cache thrashing with power-of-two jumps, but you need more than two (or four) addresses mapping to the same set to do so, as in the toy model below.
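To see why associativity helps, here's a toy 2-way set-associative cache model (an illustration with made-up sizes and simple LRU replacement, not any real CPU): two conflicting addresses fit in one set, but three of them thrash it.
#include <cstdio>

/* Toy 2-way set-associative cache model (illustration only). */
enum {LINE_BYTES=64, NUM_SETS=512, WAYS=2}; /* 512 sets * 2 ways * 64B = 64KB */

unsigned long tag[NUM_SETS][WAYS]; /* which block lives in each way */
int valid[NUM_SETS][WAYS];
int lru[NUM_SETS];                 /* which way to evict next */

bool access(unsigned long addr) {  /* returns true on a cache hit */
	unsigned int set=(addr/LINE_BYTES)%NUM_SETS;
	unsigned long t=addr/(LINE_BYTES*NUM_SETS);
	for (int w=0;w<WAYS;w++)
		if (valid[set][w] && tag[set][w]==t) { lru[set]=1-w; return true; }
	int w=lru[set];                /* miss: evict the least-recently-used way */
	tag[set][w]=t; valid[set][w]=1; lru[set]=1-w;
	return false;
}

int main(void) {
	/* Two addresses 64KB apart share a set, but both fit in its 2 ways: */
	for (int i=0;i<6;i++) printf("%s ",access((i%2)*64*1024)?"hit":"miss");
	printf(" <- only the first two accesses miss\n");
	/* Three addresses 64KB apart overflow the set and thrash it: */
	for (int i=0;i<6;i++) printf("%s ",access((i%3)*64*1024)?"hit":"miss");
	printf(" <- after the leftover hits, every access misses\n");
	return 0;
}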

In general, there are many more complicated cache replacement algorithms, although most of them are too complicated to implement in hardware.  The OS treats RAM as a cache of data on disk, and since disks are so slow, it can pay to be smart about choosing which page to write to disk.
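As a software illustration of a replacement policy (my own sketch; a real OS uses cheaper approximations such as clock/second-chance), here's least-recently-used (LRU) eviction:
#include <list>
#include <unordered_map>
#include <cstdio>

/* Tiny software LRU cache: keys are "page numbers", values are their data. */
class LRUCache {
	size_t capacity;
	std::list<std::pair<int,int> > items; /* front = most recently used */
	std::unordered_map<int,std::list<std::pair<int,int> >::iterator> where;
public:
	LRUCache(size_t cap) : capacity(cap) {}
	bool get(int key,int &value) {
		auto it=where.find(key);
		if (it==where.end()) return false;             /* miss */
		items.splice(items.begin(),items,it->second);  /* move to front */
		value=it->second->second;
		return true;
	}
	void put(int key,int value) {
		int dummy;
		if (get(key,dummy)) { items.begin()->second=value; return; }
		if (items.size()==capacity) {                  /* evict least recently used */
			where.erase(items.back().first);
			items.pop_back();
		}
		items.push_front({key,value});
		where[key]=items.begin();
	}
};

int main(void) {
	LRUCache c(2);
	c.put(1,10); c.put(2,20); c.put(3,30); /* capacity 2: page 1 gets evicted */
	int v;
	printf("page 1 %s\n",c.get(1,v)?"hit":"miss (evicted)");
	printf("page 3 %s\n",c.get(3,v)?"hit":"miss");
	return 0;
}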

Cache Pitfall #3: Cache Coherence and Cache Thrashing

If two threads try to write to the *same* variable at the same time, you can get the wrong answer.

But surprisingly, if two threads try to write to different but nearby variables at the same time, you get the right answer, but a big performance hit!
enum {n=2}; /* number of threads */
enum {m=1000*1000}; /* number of times to access the int */
int offset=1; /* distance, in ints, between threads' accesses */
volatile int arr[1025];
void hammer_on_int(volatile int *ptr) {
	for (int i=0;i<m;i++)
		(*ptr)++;
}
int multi_hammer(void) {
	#pragma omp parallel for
	for (int thread=0;thread<n;thread++) {
		hammer_on_int(&arr[thread*offset]);
	}
	return 0;
}
int foo(void) {
	for (offset=1;offset<=1024;offset*=2)
		printf("%d-byte offset took %.3f ns/++\n",
			(int)(sizeof(int)*offset),time_function(multi_hammer)*1.0e9/m);
	return 0;
}

(Try this in NetRun now!)

This program prints out:
4-byte offset took 19.437 ns/++
8-byte offset took 19.304 ns/++
16-byte offset took 20.442 ns/++
32-byte offset took 22.939 ns/++
64-byte offset took 2.615 ns/++
128-byte offset took 2.601 ns/++
256-byte offset took 2.598 ns/++
512-byte offset took 2.572 ns/++
1024-byte offset took 2.857 ns/++
2048-byte offset took 2.664 ns/++
4096-byte offset took 2.571 ns/++
Program complete. Return 0 (0x0)
When two threads are working on data within the same 64-byte cache line, there's a substantial (roughly 7x here!) slowdown.

Hardware: Snoopy Cache Coherence Protocol

The problem is the cache coherence protocol the two CPUs use to ensure that writes to different locations will combine properly.

In particular, most modern machines use the MESI variant of MSI (go read 'em!).  These protocols have line granularity, so if two CPUs are accessing data near each other (within one cache line), you'll get "cache thrashing" as the CPUs have to hand the line back and forth.

The solution to this "false sharing" problem is to just separate data accessed by multiple threads--they've got to be at least one cache line apart for good performance.
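Here's one common way to do that separation: pad and align each thread's data out to a full cache line (a sketch; the 64-byte line size and C++11 alignas are assumptions that match typical x86 hardware):
/* One fix for false sharing: give each thread its own counter, padded out
   to a full (assumed 64-byte) cache line so no two threads share a line. */
enum {n=2};         /* number of threads */
enum {m=1000*1000}; /* increments per thread */

struct alignas(64) padded_counter {
	volatile int value;
	char pad[64-sizeof(int)]; /* fill out the rest of the line */
};
padded_counter counters[n];

int multi_hammer(void) {
	#pragma omp parallel for
	for (int thread=0;thread<n;thread++)
		for (int i=0;i<m;i++)
			counters[thread].value++;
	return 0;
}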

Larger machines, where there is no shared bus, usually have non-uniform memory access: local memory is faster to access than remote memory.  To keep track of the caches, NUMA machines usually use some form of directory protocol, where there's a tiny stub associated with each potential cache line in memory, telling you the CPU responsible for keeping track of that cache line, or -1 if it's uncached.

You can even write your own distributed shared memory cache coherence protocol, using local RAM as a cache for remote RAM.  This is called Software Distributed Shared Memory (SDSM). Typically the "cache line" size is the same as the size of a hardware page, say 4KB for x86.  This is pretty big, so to avoid false sharing on SDSM systems, you need to make sure different threads' data is many kilobytes separated!