SIMD parallelism, floating-point code, and SIMD Floats (SSE)

CS 641 Lecture, Dr. Lawlor

SIMD

This is an old, simple, but powerful idea: "SIMD" stands for Single Instruction Multiple Data--one instruction operates on many pieces of data at once.
You can do lots of interesting SIMD work without using any special instructions--plain old C will do, if you treat an "int" as 32 completely independent bits, because any normal "int" instruction will operate on all the bits at once.  This use of bitwise operations is often called "SIMD within a register" (SWAR) or "word-SIMD"; see Sean Anderson's "Bit Twiddling Hacks" for a variety of amazing examples.

Vector Machines

Back in the 1980's, "vector" machines were quite popular in supercomputing centers.  For example, the 1988 Cray Y-MP was a typical vector machine.  When I was an undergraduate, ARSC still had a Y-MP vector machine.  The Y-MP had eight "vector" registers, each of which held 64 doubles (that's a total of 4KB of registers!).  A single Y-MP machine language instruction could add all 64 corresponding numbers in two vector registers, which enabled the Y-MP to achieve the (then) mind-blowing speed of *millions* of floating-point operations per second.  Vector machines have now almost completely died out; the Cray SV-1 and the Japanese "Earth Simulator" are the last of this breed.  Vector machines are classic SIMD, because one instruction can modify 64 doubles.

Bitwise Operations in C

There are actually six bit-parallel ("bitwise") operators in C/C++/Java/C#: AND ('&'), OR ('|'), XOR ('^'), NOT ('~'), left shift ('<<'), and right shift ('>>').
Here's the "truth table" for the 3 main binary bitwise operations: AND, OR, and XOR.  Again, these are accessible directly from all C-like languages using '&' (ampersand, or just "and"), '|' (pipe, or just "or"), and '^' (caret, or just "ex-or").
A   B   A&B (AND)   A|B (OR)   A^B (XOR)
0   0       0           0          0
0   1       0           1          1
1   0       0           1          1
1   1       1           1          0

Interpretations:
    AND: Both-1 (spreads 0s)
    OR:  Either-1 (spreads 1s)
    XOR: Different (flip A if B is set)

One warning: bitwise operations have strange precedence.  Specifically, "A&B==C" is interpreted by default like "A&(B==C)", not "(A&B)==C"; and "A>>B+C" is interpreted as "A>>(B+C)".  This can cause horrible, hard-to-understand errors, so I just stick in parentheses around everything when I'm using bitwise operators.
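For example, this little test (made-up flag values, just for illustration) parses differently with and without the parentheses:

	#include <iostream>
	
	int main(void) {
		unsigned int A=0x1, MASK=0x4;  /* made-up values: bit 2 of A is clear */
	
		/* Without parentheses, this parses as A & (MASK==0), which is 1&0 */
		std::cout<<" A & MASK == 0   evaluates to "<<(A & MASK == 0)<<"\n";   /* prints 0 */
	
		/* With parentheses we get the intended "is bit 2 of A clear?" test */
		std::cout<<"(A & MASK) == 0  evaluates to "<<((A & MASK) == 0)<<"\n"; /* prints 1 */
		return 0;
	}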

Applications of Bitwise Operations as SIMD

The simplest bitwise operation is the NOT operation: NOT 0 is 1; NOT 1 is 0.  Bitwise NOT is written as "~" in C/C++/Java/C#.  The cool thing about the NOT (and other bitwise) operations is that they operate on *all* the bits of an integer at *once*.  That means a 32-bit integer NOT does thirty-two separate things at the same time!  This "single-instruction, multiple data" (SIMD) approach is a really common way to speed up computations; it's the basis of the MMX, SSE, and AltiVec instruction set extensions.  But as noted above, you can do lots of interesting SIMD work without using any special instructions--plain old C will do.
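For instance, here's the classic parallel bit-counting trick from the "Bit Twiddling Hacks" collection mentioned earlier--it treats one 32-bit word as 16 two-bit counters, then 8 four-bit counters, and so on, with no special instructions at all (a sketch; there are many variants):

	#include <iostream>
	
	/* Count the 1 bits in v using SIMD-within-a-register:
	   each step adds up neighboring bit fields in parallel. */
	unsigned int popcount32(unsigned int v) {
		v = v - ((v >> 1) & 0x55555555);                 /* 16 two-bit sums */
		v = (v & 0x33333333) + ((v >> 2) & 0x33333333);  /* 8 four-bit sums */
		v = (v + (v >> 4)) & 0x0F0F0F0F;                 /* 4 byte sums */
		return (v * 0x01010101) >> 24;                   /* add up the 4 bytes */
	}
	
	int main(void) {
		std::cout<<popcount32(0xF0F00001)<<"\n"; /* prints 9 */
		return 0;
	}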

For example, say you're Google.  For each possible word, you store a big list of documents containing one 32-bit integer per document.  Each bit of that integer corresponds to a section of the document where the word might occur (e.g., bit 0 is set if the word occurs in the first 3% of the document, bit 1 represents the next 3%, and so on).  If you've got two search terms, for a given document you can look up two integers, A and B, that represent where those terms appear in the document.  If both terms appear near the same place in the document, that's good--and we can find a list of all those places (where both bits are set to 1) in just one clock cycle using just "A&B"!  The same idea shows up in the "region codes" of Cohen-Sutherland clipping, used in computer graphics.
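Here's a minimal sketch of that test in C-like code; the bitmasks are invented just for illustration:

	#include <iostream>
	
	int main(void) {
		/* Hypothetical per-document position bitmasks for two search terms:
		   bit i is set if the term occurs in the i-th 3% chunk of the document. */
		unsigned int A=0x00F30010;  /* where term 1 appears (made-up) */
		unsigned int B=0x00C10030;  /* where term 2 appears (made-up) */
	
		unsigned int both=A&B;      /* one instruction: all 32 chunks checked at once */
		if (both!=0)
			std::cout<<"both terms appear together; overlap mask = "<<std::hex<<both<<"\n";
		return 0;
	}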

But what if I want to compare two documents for similarity?  Note that C-like languages don't provide a "bitwise-compare" operator that sets equal bits to 1 and unequal bits to 0 (== compares whole integers, not individual bits).  So how can you compute this?

Well, all the bitwise operations do interesting and useful things when inverted:
A   B   ~(A&B) (NAND)   ~(A|B) (NOR)   ~(A^B) (XNOR)
0   0         1               1               1
0   1         1               0               0
1   0         1               0               0
1   1         0               0               1

Interpretations:
    NAND: Either-0
    NOR:  Both-0
    XNOR: Equality

Note that "~(A^B)" is 1 if both input bits are identical, and 0 if they're different--so it's perfect for computing document similarity!
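So one rough similarity score is simply the number of 1 bits in ~(A^B).  Here's a sketch, using GCC's __builtin_popcount (or the word-SIMD bit-count routine above) to do the counting:

	#include <iostream>
	
	int main(void) {
		unsigned int A=0x00F30010;  /* made-up position mask for document 1 */
		unsigned int B=0x00C10030;  /* made-up position mask for document 2 */
	
		unsigned int same=~(A^B);   /* bit is 1 wherever A and B agree */
		std::cout<<"matching chunks: "<<__builtin_popcount(same)<<" of 32\n";
		return 0;
	}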

The final major bitwise operation is bit shifting, represented in C-like languages using ">>" and "<<".  Left shift ("<<", pointing left) shifts bits into higher positions, filling the now-empty lower positions with zeros.  For unsigned numbers, right shifting (">>", pointing right) shifts bits into lower positions and fills the empties with zeros.  If you want to search for both-term matches in B that are one bit off of those in A, you can use "A&(B<<1)" or "A&(B>>1)", depending on the direction you want to check.  To check both directions, use "(A&(B<<1))|(A&(B>>1))".
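A short sketch of that off-by-one check, reusing the made-up masks from above:

	#include <iostream>
	
	int main(void) {
		unsigned int A=0x00F30010;  /* made-up position masks, as before */
		unsigned int B=0x00C10030;
	
		unsigned int exact = A&B;                     /* terms in the same chunk */
		unsigned int nearby = (A&(B<<1)) | (A&(B>>1)); /* one chunk earlier or later */
		std::cout<<std::hex<<"exact="<<exact<<"  nearby="<<nearby<<"\n";
		return 0;
	}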

The genome uses just 4 digits (ACGT, for the acids Adenine, Cytosine, Guanine, and Thymine), and so can be represented in as few as 2 bits per nucleotide (genetic digit).  Former UAF CS undergraduate James Long, now up the hill doing bioinformatics research, developed a very fast software implementation of the Smith-Waterman string (or gene) matching algorithm called CBL that uses this SIMD-within-a-register approach.

Floats

Ordinary integers can only represent integral values.  "Floating-point numbers" can represent non-integral values.   This is useful for engineering, science, statistics, graphics, and any time you need to represent numbers from the real world, which are rarely integral!

Floats store numbers in an odd way--they're really storing the number in scientific notation, like
    x = + 3.785746 * 10^5
Note that scientific notation is designed to be compatible with slide rules (here's a circular slide rule demo); slide rules are basically a log table starting at 1.  This works because log(1) = 0, and log(a) + log(b) = log(ab).  But slide rules only give you the mantissa; you need to figure out the exponent yourself.  The "order of magnitude" guess that engineers (and I) like so much is just a calculation using zero significant digits--no mantissa, all exponent.

In binary, you can represent a non-integer like "two and three-eighths" as "10.011".  That is, there's a 1 in the twos place, a 0 in the ones place, then (past the binary point) a 0 in the halves place, a 1 in the quarters place, and a 1 in the eighths place,
for a total of two plus 1/4 plus 1/8, or "2+3/8".  Note that this is a natural measurement in carpenter's fractional inches, but it's a weird unnatural thing in metric-style decimal inches.  That is, fractions that are (negative) powers of two have a nice binary representation, but look weird in decimal (2 3/8 = 2.375); decimal fractions (like 0.1) have a nice decimal representation, but look very weird as binary fractions.
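You can watch this happen on a real machine: printing a few extra digits shows that 2.375 comes out exactly, while a "nice" decimal like 0.1 does not (a minimal sketch):

	#include <iostream>
	#include <iomanip>
	
	int main(void) {
		float nice=2.375f;   /* 2+3/8: exactly representable in binary */
		float ugly=0.1f;     /* 1/10: NOT exactly representable in binary */
		std::cout<<std::setprecision(10)<<nice<<"\n";  /* prints 2.375 */
		std::cout<<std::setprecision(10)<<ugly<<"\n";  /* prints something like 0.1000000015 */
		return 0;
	}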

Normalized Numbers

Scientific notation can represent the same number in several different ways:
    x = + 3.785746 * 10^5  = + 0.3785746 * 10^6 = + 0.03785746 * 10^7 = + 37.85746 * 10^4

It's common to "normalize" a number in scientific notation so that:
  1. There's exactly one digit to the left of the decimal point.
  2. And that digit ain't zero.
This means the 10^5 version is the "normal" way to write the number above.

In binary, a "normalized" number *always* has a 1 at the left of the decimal point (if it ain't zero, it's gotta be one).  So sometimes there's no reason to even store the 1; you just know it's there!

(Note that there are also "denormalized" numbers, like 0.0, that don't have a leading 1.  This is how zero is represented--there's an implicit leading 1 only if the exponent field is nonzero, an implicit leading 0 if the exponent field is zero...)

Float as Bits

Floats represent continuous values.  But they do it using discrete bits.

A "float" (as defined by IEEE Standard 754) consists of three bitfields:
    Sign: 1 bit--0 for positive, 1 for negative.
    Exponent: 8 unsigned bits--127 means 2^0; 137 means 2^10.
    Fraction (or "Mantissa"): 23 bits--a binary fraction.  Don't forget the implicit leading 1!
The sign is in the highest-order bit, the exponent in the next 8 bits, and the fraction in the remaining bits.

The hardware interprets a float as having the value:

    value = (-1)^sign * 2^(exponent-127) * 1.fraction

Note that the mantissa has an implicit leading binary 1 applied (unless the exponent field is zero, when it's an implicit leading 0; a "denormalized" number).

For example, the value "8" would be stored with sign bit 0, exponent 130 (==3+127), and mantissa 000... (without the leading 1), since:

    8 = (-1)^0 * 2^(130-127) * 1.0000....

You can actually dissect the parts of a float using a "union" and a bitfield like so:
/* IEEE floating-point number's bits:  sign  exponent   mantissa */
struct float_bits {
	unsigned int fraction:23; /**< Value is binary 1.fraction ("mantissa") */
	unsigned int exp:8; /**< Value is 2^(exp-127) */
	unsigned int sign:1; /**< 0 for positive, 1 for negative */
};

/* A union is a struct where all the fields *overlap* each other */
union float_dissector {
	float f;
	float_bits b;
};

float_dissector s;
s.f=8.0;
std::cout<<s.f<<"= sign "<<s.b.sign<<" exp "<<s.b.exp<<" fract "<<s.b.fraction<<"\n";
return 0;
(Executable NetRun link)

There are several different sizes of floating-point types:
C Datatype    Size                              Approx. Precision   Approx. Range   Exponent Bits   Fraction Bits   +-1 range
float         4 bytes (everywhere)              1.0x10^-7           10^38           8               23              2^24
double        8 bytes (everywhere)              2.0x10^-15          10^308          11              52              2^53
long double   12-16 bytes (if it even exists)   2.0x10^-20          10^4932         15              64              2^65
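
If you want to verify the sizes and mantissa widths your own compiler uses, a quick check like this works (a sketch; long double results vary a lot by platform and compiler):

	#include <iostream>
	#include <limits>
	
	int main(void) {
		/* sizeof gives bytes; numeric_limits<>::digits gives mantissa bits
		   (24 for float = 23 stored bits plus the implicit leading 1) */
		std::cout<<"float:       "<<sizeof(float)<<" bytes, "
		         <<std::numeric_limits<float>::digits<<" mantissa bits\n";
		std::cout<<"double:      "<<sizeof(double)<<" bytes, "
		         <<std::numeric_limits<double>::digits<<" mantissa bits\n";
		std::cout<<"long double: "<<sizeof(long double)<<" bytes, "
		         <<std::numeric_limits<long double>::digits<<" mantissa bits\n";
		return 0;
	}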

Nowadays floats have roughly the same performance as integers: addition takes about two nanoseconds (slightly slower than integer addition); multiplication takes a few nanoseconds; and division takes a dozen or more nanoseconds.  That is, floats are now cheap, and you can consider using floats for all sorts of stuff--even when you don't care about fractions.

SIMD Floats

The most common form of SIMD today is the "multimedia" instruction set extension found in normal CPUs.  The usual arrangement for multimedia instructions is for single instructions to operate on four 32-bit floats.  These four-float instructions exist almost everywhere nowadays: on x86 they're called SSE, and on PowerPC they're called AltiVec.
We'll look at the x86 version below.

SSE Assembly

SSE instructions were first introduced with the Intel Pentium III, but they're now found on all modern x86 processors, including the 64-bit versions.  SSE introduces 8 new registers, called xmm0 through xmm7, that each contain four 32-bit single-precision floats.  New instructions that operate on these registers have the suffix "ps", for "Packed Single-precision".  See the x86 reference manual for a complete list of SSE instructions.

For example, "add" adds two integer registers, like eax and ebx.  "addps" adds two SSE registers, like xmm3 and xmm6.  There are  SSE versions of most other arithmetic operations: subps, mulps, divps, etc.

There's one curious operation called "shufps", which does a  permutation of the incoming floats.  The permutation is controlled by 4 2-bit "select" fields, one for each destination float, that determines which incoming float goes to that slot.  For example, a swizzle can copy the first float into all four destination slots, with a select field of 0x00.  To copy the last float into all four destination slots, the select field is 0xff.  A select field of 00010111, or 0x17, would set the first float from the last (float 3), copy float 1 into the middle two slots, and fill the last slot with the first float (float 0).  This is the *only* operation which rearranges floats between different slots in the registers.  (In graphics language, this operation is called a "swizzle".)   For example, if xmm1 contains the four floats "(0.0, 1.1, 2.2, 3.3)", then:
      shufps xmm1,xmm1, 0xff
will set all 4 of xmm1's floats to the last value (index 3, binary 11).

There are actually two flavors of the SSE move instruction: "movups" moves a value between *unaligned* addresses (not a multiple of 16); "movaps" moves a value between *aligned* addresses.  The aligned move is substantially faster, but it will segfault if the address you give isn't a multiple of 16!   Luckily, most "malloc" implementations return you 16-byte aligned data, and the stack is aligned to 16 bytes with most compilers.  But for maximum portability, for the aligned move to work you sometimes have to manually add padding (using pointer arithmetic), which is really painful.
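From C/C++, one way to stay out of trouble is to check the address yourself, or to force alignment with C++11's alignas (a sketch, assuming a C++11 compiler):

	#include <iostream>
	#include <cstdlib>
	
	int main(void) {
		alignas(16) float onstack[4];                    /* C++11: force 16-byte alignment */
		float *onheap=(float *)malloc(4*sizeof(float));  /* usually, but not always, 16-byte aligned */
	
		std::cout<<"stack array mod 16 = "<<((size_t)onstack)%16<<"\n"; /* 0, so an aligned move is safe */
		std::cout<<"heap  array mod 16 = "<<((size_t)onheap)%16<<"\n";
		free(onheap);
		return 0;
	}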

Here's a simple SSE assembly example where we load up a constant, add it to itself with SSE, and then write it out:
	movups xmm1,[thing1] ; <- copy the four floats into xmm1
	addps xmm1,xmm1      ; <- add the floats to themselves
	movups [retval],xmm1 ; <- copy the result into the global "retval"

	; Print out retval
	extern farray_print
	push 4      ; <- number of floats to print
	push retval ; <- points to array of floats
	call farray_print
	add esp,8   ; <- pop off arguments
	ret

	section .data
	thing1: dd 10.2, 100.2, 1000.2, 10000.2 ; <- source constant
	retval: dd 0.0, 0.0, 0.0, 0.0           ; <- our return value

(Try this in NetRun now!)

Here's a version that loads up two separate float constants, and adds them:

	movups xmm1,[thing1] ; <- copy the first four floats into xmm1
	movups xmm6,[thing2] ; <- copy the second four floats into xmm6
	addps xmm1,xmm6      ; <- add the floats
	movups [retval],xmm1 ; <- copy the result into the global "retval"

	; Print out retval
	extern farray_print
	push 4      ; <- number of floats to print
	push retval ; <- points to array of floats
	call farray_print
	add esp,8   ; <- pop off arguments
	ret

	section .data
	thing1: dd 10.2, 100.2, 1000.2, 10000.2 ; <- source constant
	thing2: dd 1.2, 2.2, 3.2, 4.2           ; <- source constant
	retval: dd 0.0, 0.0, 0.0, 0.0           ; <- our return value

(Try this in NetRun now!)

The "dd" lines declare a 32-bit constant.  NASM is smart enough to automatically use float format if you type a constant with a decimal point!

SSE in C/C++

The x86 SSE instructions can be accessed from C/C++ via the header <xmmintrin.h>.  (Corresponding Apple headers exist for PowerPC AltiVec; the AltiVec instructions have different names but are almost identical.)  The xmmintrin header exists and works out-of-the-box with most modern compilers.
Documentation can be found by just reading the xmmintrin header.  The underlying instructions have different names, which are listed in the x86 reference manual and summarized below and at this graphical and helpful SSE site.

The C version of an SSE register is the user-friendly and self-explanatory type "__m128".  All the instructions start with "_mm_" (i.e., MultiMedia).  The suffix indicates the data type; in these examples, we'll just be talking about 4 floats, which use the suffix "_ps" (Packed Single-precision floats).  SSE supports other data types in the same 128 bits, but 4 floats seems to be the sweet spot.  So, for example, "_mm_load_ps" loads up 4 floats into a __m128, "_mm_add_ps" adds 4 corresponding floats together, etc.  Major useful operations are:
__m128 _mm_load_ps(float *src)
    Load 4 floats from a 16-byte aligned address.  WARNING: Segfaults if the address isn't a multiple of 16!
__m128 _mm_loadu_ps(float *src)
    Load 4 floats from an unaligned address (4x slower!)
__m128 _mm_load1_ps(float *src)
    Load 1 individual float into all 4 fields of an __m128
__m128 _mm_setr_ps(float a,float b,float c,float d)
    Load 4 separate floats from parameters into an __m128
void _mm_store_ps(float *dest,__m128 src)
    Store 4 floats to an aligned address
void _mm_storeu_ps(float *dest,__m128 src)
    Store 4 floats to an unaligned address
__m128 _mm_add_ps(__m128 a,__m128 b)
    Add corresponding floats (also "sub")
__m128 _mm_mul_ps(__m128 a,__m128 b)
    Multiply corresponding floats (also "div", but it's slow)
__m128 _mm_min_ps(__m128 a,__m128 b)
    Take corresponding minimum (also "max")
__m128 _mm_sqrt_ps(__m128 a)
    Take square roots of 4 floats (12ns, slow like divide)
__m128 _mm_rcp_ps(__m128 a)
    Compute rough (12-bit accuracy) reciprocal of all 4 floats (as fast as an add!)
__m128 _mm_rsqrt_ps(__m128 a)
    Rough (12-bit) reciprocal-square-root of all 4 floats (fast)
__m128 _mm_shuffle_ps(__m128 lo,__m128 hi, _MM_SHUFFLE(hi3,hi2,lo1,lo0))
    Interleave inputs into low 2 floats and high 2 floats of output.  Basically
       out[0]=lo[lo0];
       out[1]=lo[lo1];
       out[2]=hi[hi2];
       out[3]=hi[hi3];
    For example, _mm_shuffle_ps(a,a,_MM_SHUFFLE(i,i,i,i)) copies the float a[i] into all 4 output floats.
There are also instructions for integer conversion, comparisons, various bitwise operations, and cache-friendly prefetches and streaming store operations.
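Here's a minimal complete example tying a few of these together; it uses _mm_setr_ps and the unaligned store so there are no alignment worries (a sketch):

	#include <xmmintrin.h>
	#include <iostream>
	
	int main(void) {
		float ten=10.0f;
		__m128 a=_mm_setr_ps(1.0f,2.0f,3.0f,4.0f); /* a = (1,2,3,4), in memory order */
		__m128 b=_mm_load1_ps(&ten);               /* b = (10,10,10,10) */
		__m128 c=_mm_add_ps(_mm_mul_ps(a,b),a);    /* c = a*b + a, all four floats at once */
	
		float out[4];
		_mm_storeu_ps(out,c);                      /* unaligned store: works for any address */
		std::cout<<out[0]<<" "<<out[1]<<" "<<out[2]<<" "<<out[3]<<"\n"; /* prints 11 22 33 44 */
		return 0;
	}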

So take your classic "loop over floats":
	for (int i=0;i<n_vals;i++) {
		vals[i]=vals[i]*a+b;
	}
(executable NetRun link)

This takes about 4.5 ns per float.

Step 1 is to unroll this loop 4 times, since SSE works on blocks of 4 floats at a time:
	for (int i=0;i<n_vals;i+=4) {
		vals[i+0]=vals[i+0]*a+b;
		vals[i+1]=vals[i+1]*a+b;
		vals[i+2]=vals[i+2]*a+b;
		vals[i+3]=vals[i+3]*a+b;
	}
(Try this in NetRun now!)

This alone speeds the code up by about 2x, because we don't have to check the loop counter as often.

We can then replace the guts of the loop with SSE instructions:
	__m128 SSEa=_mm_load1_ps(&a);           /* four copies of a */
	__m128 SSEb=_mm_load1_ps(&b);           /* four copies of b */
	__m128 v=_mm_load_ps(&vals[i]);         /* grab 4 floats starting at vals[i] */
	v=_mm_add_ps(_mm_mul_ps(v,SSEa),SSEb);  /* v = v*a+b, four floats at once */
	_mm_store_ps(&vals[i],v);               /* write the 4 results back */

(Try this in NetRun now!)

This gives us about a 4x speedup over the original, and still a 2x speedup over the unrolled version!
(Note: I'd previously called "_mm_load_ps1" in the above example, which works, but is a bit slower.)

Note that if we use the unaligned loads and stores (loadu and storeu), we lose almost all the performance benefit from using SSE in the first place!
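One last practical note: the loops above quietly assume n_vals is a multiple of 4.  A common pattern (a sketch, not part of the original NetRun example) is to run the SSE loop over the largest multiple of 4 and finish the stragglers with plain scalar code:

	#include <xmmintrin.h>
	
	/* vals[i] = vals[i]*a + b for all n_vals floats.
	   Assumes vals is 16-byte aligned; use loadu/storeu if it might not be. */
	void scale_and_shift(float *vals,int n_vals,float a,float b) {
		__m128 SSEa=_mm_load1_ps(&a);
		__m128 SSEb=_mm_load1_ps(&b);
		int i=0;
		for (;i+4<=n_vals;i+=4) {        /* SSE loop: 4 floats per trip */
			__m128 v=_mm_load_ps(&vals[i]);
			v=_mm_add_ps(_mm_mul_ps(v,SSEa),SSEb);
			_mm_store_ps(&vals[i],v);
		}
		for (;i<n_vals;i++)              /* scalar cleanup: last 0-3 floats */
			vals[i]=vals[i]*a+b;
	}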