Memory Speed and 2D Arrays

CS 301 Lecture, Dr. Lawlor

Memory Speed

Memory accesses have performance that's:
- Fast, if you're accessing data you've touched recently (it's still sitting in cache), or data right next to something you just touched (streaming access).
- Slow, if you're jumping around randomly through a big buffer, so every access has to go all the way out to RAM.
So if you're getting bad performance, you can either:
- Reuse the data you've already pulled into cache, or
- Rearrange your accesses so nearby data gets accessed together.
These are actually two aspects of "locality": the values you access should be similar.  The cache lets you reuse stuff you've used recently in time (temporal locality); streaming access is about touching stuff nearby in space (spatial locality).

There are lots of different ways to improve locality: reorder your loops so the innermost loop walks through adjacent memory, break a big problem into cache-sized blocks, or just use smaller data so more of it fits in cache at once.
Here's the actual memory performance I measured on a Core2 Duo CPU using this NetRun program.

Size  \  Stride (bytes):      2        11       43      155      546
    1.0 KB                 2.96ns   2.97ns   2.98ns   2.96ns   2.96ns
    4.0 KB                 3.03ns   3.03ns   3.10ns   3.10ns   3.06ns
   16.0 KB                 3.04ns   3.05ns   3.10ns   3.10ns   3.06ns
   64.0 KB                 3.03ns   3.10ns   3.84ns   3.47ns   3.24ns
  256.0 KB                 3.03ns   3.10ns   3.82ns   3.96ns   3.96ns
 1024.0 KB                 3.03ns   3.18ns   4.57ns   4.01ns   4.53ns
 4096.0 KB                 3.05ns   3.21ns  22.19ns  31.05ns  35.02ns
16384.0 KB                 3.03ns   3.27ns  22.90ns  42.74ns  43.54ns

Note that access to buffers of 256KB or less is always fast (good temporal locality; the whole buffer fits in cache, so the same data gets hit over and over).  Similarly, stride-2 access is always fast (good spatial locality; the next access is just 2 bytes from the previous access, so it's almost always in the same cache line).  Memory access is only slow when you're jumping around in a big buffer--and then it's over 10x slower!
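If you want to measure this yourself, here's a minimal sketch of a strided-access timing loop.  This is not the actual NetRun benchmark; the buffer sizes, strides, access count, and the clock_gettime-based timing are all just illustrative assumptions.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    
    /* Time "count" single-byte reads from "buf", stepping by "stride" and
       wrapping around; returns the average nanoseconds per read. */
    double time_strided(volatile char *buf, long size, long stride, long count) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        long pos = 0;
        for (long i = 0; i < count; i++) {
            (void)buf[pos];               /* volatile read: can't be optimized away */
            pos += stride;
            if (pos >= size) pos -= size; /* wrap around inside the buffer */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec)*1.0e9 + (t1.tv_nsec - t0.tv_nsec);
        return ns / count;
    }
    
    int main(void) {
        long strides[] = {2, 11, 43, 155, 546};
        for (long size = 1024; size <= 16*1024*1024; size *= 4) {
            printf("%8ld KB:", size/1024);
            char *buf = calloc(size, 1);
            for (int s = 0; s < 5; s++)
                printf(" %7.2f ns", time_strided(buf, size, strides[s], 10*1000*1000));
            printf("\n");
            free(buf);
        }
        return 0;
    }

Expect the numbers to shift with your CPU's cache sizes, but the overall pattern (small buffers and small strides fast, big buffers with big strides slow) should match the table above.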

2D Arrays implemented as 1D Arrays

C and C++ don't actually support "real" 2D arrays--what you get is one contiguous 1D block of memory, with the compiler doing the index arithmetic for you.

For example, here's some 2D array code--check the disassembly:
int arr[4][3]; /* 4 rows (indexed by i) of 3 ints (indexed by j), stored contiguously */
int foo(void) {
	arr[0][0]=0xA0B0; /* the hex constant 0xA_B_ encodes the indices: A<i>B<j> */
	arr[0][1]=0xA0B1;
	arr[0][2]=0xA0B2;
	arr[1][0]=0xA1B0;
	arr[2][0]=0xA2B0;
	arr[3][0]=0xA3B0;
	return 0;
}
(executable NetRun link)

Here, arr[i][j] is at a 1D address like arr[i*3+j].  Here's a picture of the array, and the 1D addresses of each element:

        i==0    i==1    i==2    i==3
j==0     [0]     [3]     [6]     [9]
j==1     [1]     [4]     [7]    [10]
j==2     [2]     [5]     [8]    [11]

Note that adjacent j indices are actually adjacent in memory; adjacent i indices are separated by 3 ints.  There are lots of ways to say this same thing:
- j is the "fast" index: stepping j by 1 moves 1 int in memory.
- i is the "slow" index: stepping i by 1 moves a whole row (3 ints) in memory.
- The array is stored row by row ("row-major" order), which is what C and C++ always do.
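You can convince yourself of this layout by comparing the compiler's 2D indexing against your own 1D arithmetic.  Here's a small sketch; the flat pointer and the printout are my own additions, not part of the lecture code.

    #include <stdio.h>
    
    int arr[4][3]; /* 4 rows of 3 ints, as above */
    
    int main(void) {
        int *flat = &arr[0][0]; /* treat the 2D array as one flat block of 4*3 ints */
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 3; j++)
                /* The compiler's arr[i][j] and our hand-computed flat[i*3+j]
                   are the exact same int in memory. */
                printf("arr[%d][%d] is at flat index %2d: %s\n",
                       i, j, i*3 + j,
                       (&arr[i][j] == &flat[i*3 + j]) ? "same address" : "DIFFERENT?!");
        return 0;
    }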
In general, when you write
    int arr[sx][sy];
    arr[x][y]++;

The compiler turns this into a 1D array like so:
    int arr[sx*sy];
    arr[x*sy+y]++;

Here y is the fast index. Note that the x coordinate is multiplied by the y size.  This is because between x and x+1, there's one whole set of y's.  Between y and y+1, there's nothing!  So you want your *innermost* loop to be y, since then that loop will access contiguous array elements.
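For example, here's a sketch of what that loop ordering looks like in practice; the sizes sx and sy are made up, and the only point is which loop is innermost.

    #include <stdio.h>
    
    enum {sx = 1000, sy = 1000};
    int arr[sx][sy];
    
    /* Good: y is the innermost loop, so consecutive iterations touch
       consecutive ints in memory (spatial locality). */
    void increment_fast(void) {
        for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
                arr[x][y]++;
    }
    
    /* Bad: x is the innermost loop, so consecutive iterations jump
       sy ints (4000 bytes here) apart--poor spatial locality. */
    void increment_slow(void) {
        for (int y = 0; y < sy; y++)
            for (int x = 0; x < sx; x++)
                arr[x][y]++;
    }
    
    int main(void) {
        increment_fast();
        increment_slow();
        printf("arr[0][0] = %d\n", arr[0][0]); /* print something so the result is observable */
        return 0;
    }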

We could also have written
    int arr[sy][sx];
    arr[y][x]++;

Which in 1D notation is:  
    int arr[sy*sx];
    arr[x+y*sx]++;

Now x is the fast index, and the y coordinate is multiplied by the x size.  You now want your innermost loop to be x, since adjacent x values are contiguous in memory.
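The corresponding sketch for this layout, again with made-up sizes:

    enum {sx = 1000, sy = 1000};
    int arr[sy][sx];   /* note the swapped declaration: y is now the slow index */
    
    /* With arr[y][x], x is the fast (contiguous) index, so x goes innermost. */
    void increment_all(void) {
        for (int y = 0; y < sy; y++)
            for (int x = 0; x < sx; x++)
                arr[y][x]++;
    }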

2D Reconstructed from 1D

There's a cool trick that can be used to reconstruct the 2D array indices of a 1D array index.  If
    i = x*sy+y;
then
    y = i % sy;
    x = i / sy;
(Of course, integer divide and modulo are sadly among the slowest arithmetic instructions on your CPU!  Hopefully sy is a power of two, so the compiler can replace them with a shift and a mask.)
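For instance, here's a sketch of that power-of-two shortcut; sy==64 is an arbitrary choice, and for unsigned i the compiler will usually make this substitution for you.

    enum {sy = 64, log2_sy = 6};   /* 64 == 1<<6, a power of two */
    
    unsigned y_of(unsigned i) { return i % sy; }   /* modulo: relatively slow */
    unsigned x_of(unsigned i) { return i / sy; }   /* divide: relatively slow */
    
    /* Same results using only cheap bitwise operations: */
    unsigned y_of_fast(unsigned i) { return i & (sy - 1); }   /* keep the low 6 bits */
    unsigned x_of_fast(unsigned i) { return i >> log2_sy; }   /* drop the low 6 bits */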

And if
    i = y*sx + x;
then
    x = i % sx;
    y = i / sx;

This can be handy in a bunch of ways--you can write a single loop to initialize all the elements in a 2D array, or send a 2D array index in a single "int".
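Here's a quick sketch of the single-loop idea; the sizes and the fill formula 100*x+y are arbitrary.

    #include <stdio.h>
    
    enum {sx = 4, sy = 3};
    int arr[sx][sy];
    
    /* One flat loop over all sx*sy elements, reconstructing (x,y) from i. */
    void fill(void) {
        for (int i = 0; i < sx*sy; i++) {
            int x = i / sy;            /* slow index */
            int y = i % sy;            /* fast index */
            arr[x][y] = 100*x + y;     /* e.g. arr[2][1] gets 201 */
        }
    }
    
    int main(void) {
        fill();
        printf("arr[2][1] = %d\n", arr[2][1]); /* prints 201 */
        return 0;
    }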