Also see the times for various operations in my CS 301 performance notes.

SIMD stands for Single Instruction, Multiple Data:

- Single, meaning just one.

- Instruction, as in a machine code instruction, executed by hardware.

- Multiple, as in more than one--from 2 to a thousand or so.

- Data, as in floats or ints.

Back in the 1980s, "vector" machines were quite popular in supercomputing centers. For example, the 1988 Cray Y-MP was a typical vector machine. When I was an undergraduate, ARSC still had a Y-MP vector machine. The Y-MP had eight "vector" registers, each of which held 64 doubles (that's a total of 4KB of registers!). A single Y-MP machine language instruction could add all 64 corresponding numbers in two vector registers, which enabled the Y-MP to achieve the (then) mind-blowing speed of *millions* of floating-point operations per second. Vector machines have now almost completely died out; the Cray SV1 and the Japanese "Earth Simulator" are the last of this breed. Vector machines are classic SIMD, because one instruction can modify 64 doubles.

But the most common form of SIMD today is the "multimedia" instruction set extensions in normal CPUs. The usual arrangement for multimedia instructions is for single instructions to operate on four 32-bit floats. These four-float instructions exist almost everywhere nowadays:

- x86, where they are called "SSE", or "Streaming SIMD Extensions"
- PowerPC, where they are called "AltiVec"
- Cell Broadband Engine, where they are part of the "Synergistic Processing Elements"
- Graphics cards, where they are part of the pixel processors

For example, "add" adds two integer registers, like eax and ebx. "addps" adds two SSE registers, like xmm3 and xmm6. There are SSE versions of most other arithmetic operations: subps, mulps, divps, etc.

There's one curious operation called "shufps", which does a permutation of the incoming floats. The permutation is controlled by 4 2-bit "select" fields, one for each destination float, that determine which incoming float goes into that slot. For example, a select field of 0x00 copies the first float into all four destination slots; a select field of 0xff copies the last float into all four destination slots. A select field of 00010111 binary, or 0x17, would set the first slot from the last float (float 3), copy float 1 into the middle two slots, and fill the last slot with the first float (float 0). This is the *only* operation which rearranges floats between different slots in the registers. (In graphics language, this operation is called a "swizzle".) For example, if xmm1 contains the four floats "(0.0, 1.1, 2.2, 3.3)", then:

```nasm
shufps xmm1,xmm1, 0xff
```

will set all 4 of xmm1's floats to the last value (float 3, select bits binary 11).

There are actually two flavors of the SSE move instruction: "movups" moves a value between *unaligned* addresses (not a multiple of 16); "movaps" moves a value between *aligned* addresses. The aligned move is substantially faster, but it will segfault if the address you give it isn't a multiple of 16! Luckily, most "malloc" implementations return 16-byte aligned data, and most compilers keep the stack aligned to 16 bytes. But for maximum portability, you sometimes have to manually add padding (using pointer arithmetic) to make the aligned move work, which is really painful.

Here's a simple SSE assembly example where we load up a constant, add it to itself with SSE, and then write it out:

```nasm
global foo
foo:
	movups xmm1,[thing2] ; <- copy the four floats into xmm1
	addps xmm1,xmm1      ; <- add floats to themselves
	movups [thing1],xmm1 ; <- store the result into the global "thing1"

	; Print out thing1
	extern farray_print
	push 4        ; <- number of floats to print
	push thing1   ; <- points to array of floats
	call farray_print
	add esp,8     ; <- pop off arguments
	ret

section .data
thing1:
	dd 0.0 ; <- our output array
	dd 0.0
	dd 0.0
	dd 0.0

thing2:
	dd 10.2 ; <- source constants
	dd 100.2
	dd 1000.2
	dd 10000.2
```

The "dd" lines declare a 32-bit constant. NASM is smart enough to automatically use float format if you type a constant with a decimal point!

You can access SSE from C or C++ via the compiler's <xmmintrin.h> intrinsics header, which works with:

- GCC 3.x under Linux. Be sure to compile with "-msse", or the header will whine.

- Visual C++ 7.0 under Windows.

- Intel C/C++ compiler under Linux or Windows. Makes a bogus complaint about "no EMMS instruction before return", although the annoying emms() call is thankfully only needed with MMX, not SSE.

The C version of an SSE register is the user-friendly and self-explanatory type "__m128". All the instructions start with "_mm_" (i.e., MultiMedia). The suffix indicates the data type; in these examples, we'll just be talking about 4 floats, which use the suffix "_ps" (Packed Single-precision floats). SSE supports other data types in the same 128 bits, but 4 floats seems to be the sweet spot. So, for example, "_mm_load_ps" loads up 4 floats into a __m128, "_mm_add_ps" adds 4 corresponding floats together, etc. Major useful operations are:

| Intrinsic | Meaning |
|---|---|
| __m128 _mm_load_ps(float *src) | Load 4 floats from a 16-byte aligned address. WARNING: segfaults if the address isn't a multiple of 16! |
| __m128 _mm_loadu_ps(float *src) | Load 4 floats from an unaligned address (4x slower!) |
| __m128 _mm_load1_ps(float *src) | Load 1 individual float into all 4 fields of a __m128 |
| __m128 _mm_setr_ps(float a,float b,float c,float d) | Load 4 separate floats from parameters into a __m128 |
| void _mm_store_ps(float *dest,__m128 src) | Store 4 floats to an aligned address |
| void _mm_storeu_ps(float *dest,__m128 src) | Store 4 floats to an unaligned address |
| __m128 _mm_add_ps(__m128 a,__m128 b) | Add corresponding floats (also "sub") |
| __m128 _mm_mul_ps(__m128 a,__m128 b) | Multiply corresponding floats (also "div", but it's slow) |
| __m128 _mm_min_ps(__m128 a,__m128 b) | Take corresponding minimum (also "max") |
| __m128 _mm_sqrt_ps(__m128 a) | Take square roots of 4 floats (12ns, slow like divide) |
| __m128 _mm_rcp_ps(__m128 a) | Compute rough (12-bit accuracy) reciprocal of all 4 floats (as fast as an add!) |
| __m128 _mm_rsqrt_ps(__m128 a) | Rough (12-bit) reciprocal-square-root of all 4 floats (fast) |
| __m128 _mm_shuffle_ps(__m128 lo,__m128 hi, _MM_SHUFFLE(hi3,hi2,lo1,lo0)) | Interleave inputs into low 2 floats and high 2 floats of output. Basically out[0]=lo[lo0]; out[1]=lo[lo1]; out[2]=hi[hi2]; out[3]=hi[hi3]; For example, _mm_shuffle_ps(a,a,_MM_SHUFFLE(i,i,i,i)) copies the float a[i] into all 4 output floats. |

So take your classic "loop over floats":

```c
for (int i=0;i<n_vals;i++) {
	vals[i]=vals[i]*a+b;
}
```

This takes about 4.5 ns per float.

Step 1 is to unroll this loop 4 times, since SSE works on blocks of 4 floats at a time:

```c
for (int i=0;i<n_vals;i+=4) {
	vals[i+0]=vals[i+0]*a+b;
	vals[i+1]=vals[i+1]*a+b;
	vals[i+2]=vals[i+2]*a+b;
	vals[i+3]=vals[i+3]*a+b;
}
```

This alone speeds the code up by about 2x, because we don't have to check the loop counter as often.

We can then replace the guts of the loop with SSE instructions:

```c
__m128 ma=_mm_load1_ps(&a); /* 4 copies of a, in an SSE register */
__m128 mb=_mm_load1_ps(&b); /* 4 copies of b */
for (int i=0;i<n_vals;i+=4) {
	__m128 v=_mm_load_ps(&vals[i]);    /* aligned load of 4 floats */
	v=_mm_add_ps(_mm_mul_ps(v,ma),mb); /* v = v*a + b, 4 at a time */
	_mm_store_ps(&vals[i],v);          /* aligned store of 4 floats */
}
```

This gives us about a 4x speedup over the original, and still a 2x speedup over the unrolled version!

Note that if we use the unaligned loads and stores (_mm_loadu_ps and _mm_storeu_ps), we lose almost all the performance benefit of using SSE in the first place!
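One loose end the loop above skips: it only works when n_vals is a multiple of 4. A complete routine needs a scalar cleanup pass for the 0-3 leftover floats. Here's a hedged sketch of the full thing (the function name scale_offset is mine):

```c
#include <xmmintrin.h>

/* vals[i] = vals[i]*a + b for all n floats.
   vals must be 16-byte aligned for the aligned load/store below. */
void scale_offset(float *vals, int n, float a, float b) {
	__m128 ma = _mm_load1_ps(&a); /* (a,a,a,a) */
	__m128 mb = _mm_load1_ps(&b); /* (b,b,b,b) */
	int i;
	for (i = 0; i + 3 < n; i += 4) { /* SSE: 4 floats per trip */
		__m128 v = _mm_load_ps(&vals[i]);
		v = _mm_add_ps(_mm_mul_ps(v, ma), mb);
		_mm_store_ps(&vals[i], v);
	}
	for (; i < n; i++) /* scalar cleanup: the 0-3 leftover floats */
		vals[i] = vals[i]*a + b;
}
```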