Graphics Processing Unit (GPU) Programming in CUDA

CS 301: Assembly Language Programming Lecture, Dr. Lawlor

The GPU is not just for graphics anymore; it's a fully programmable general-purpose computer with incredibly high parallelism.  The idea is to use the GPU alongside the CPU, so a typical CUDA program has this division of labor:

Normal CPU, the "__host__":
- Runs main, in the usual way
- Reads files
- Talks on the network
- Allocates memory for both sides
- Invokes GPU kernels

Graphics card, the "__device__":
- Runs special kernel functions, using blocks of threads
- Delivers high performance on compute-intensive tasks


CUDA uses a special compiler "nvcc", but by default all your CUDA code is normal C++, running normally:
#include <stdio.h>

int main() {
	printf("Hello, world!\n");
	return 0;
}

(Try this in NetRun now!)

To have code run on the GPU, you define a kernel using the "__global__" keyword before the function.  You then call the kernel using a special syntax <<<number_of_blocks_of_threads, threads_per_block>>>:
#include <stdio.h>

__global__ void do_stuff(void)
{
	printf("Hello from GPU land!\n");
}

int main() {
	printf("Hello, world!\n");
	do_stuff<<<1,2>>>(); // run kernel with 1 block of 2 threads each
	cudaDeviceSynchronize(); // wait for GPU threads to finish
	return 0;
}

(Try this in NetRun now!)

Inside a kernel, you can ask which thread and block you are in using the "threadIdx.x" and "blockIdx.x" builtin variables:
#include <stdio.h>

__global__ void do_stuff(void)
{
	int thr=threadIdx.x;
	int blk=blockIdx.x;
	printf("Hello from GPU thread %d of block %d\n",thr,blk);
}

int main() {
	printf("Hello, world!\n");
	do_stuff<<<2,8>>>(); // run kernel with 2 blocks of 8 threads each
	cudaDeviceSynchronize(); // wait for GPU threads to finish
	return 0;
}

(Try this in NetRun now!)

In real code you don't want to print directly from the GPU; instead, you write to an array that the CPU reads back.  Currently, you can't access CPU memory directly from the GPU or vice versa, but you can call cudaMalloc from the CPU side to allocate GPU memory, and cudaMemcpy to copy data from CPU to GPU, or back.  You should also check each call for errors; here I've added a macro "check" to do this.

#include <cuda.h>
#include <iostream>
#include <fstream>
#include "lib/inc.c" /* NetRun support code, provides time_in_seconds */

#define check(cudacall) { cudaError_t err=(cudacall); if (err!=cudaSuccess) std::cout<<"CUDA ERROR "<<err<<" at line "<<__LINE__<<"'s "<<#cudacall<<"\n";}

/* GPU Code! */
__global__ void fill_in_array(int *dest,const int *src,int v) {
	int i=threadIdx.x+blockIdx.x*blockDim.x;
	
	dest[i]=src[i]+v;
}

/* Run on CPU */
int main(int argc,char *argv[]) {
	int wid=256,ht=10000;
	int data[wid];
	for (int i=0;i<wid;i++) data[i]=i;

	int *src=0, *dest=0; /* LIVES ON THE GPU!!!! */
	check(cudaMalloc((void **)&src, wid*ht*sizeof(src[0])));
	/* note: only the first wid elements of src get initialized here;
	   blocks past the first read whatever cudaMalloc left in memory */
	check(cudaMemcpy(src,data,wid*sizeof(data[0]),cudaMemcpyHostToDevice));
	check(cudaMalloc((void **)&dest, wid*ht*sizeof(dest[0])));

	double start=time_in_seconds();
	fill_in_array<<<ht,wid>>>(dest,src,3);
	int harr[wid];
	check(cudaMemcpy(harr,dest,wid*sizeof(dest[0]),cudaMemcpyDeviceToHost));
	double elapsed=time_in_seconds()-start;

	std::cout<<"Copied back array at rate "<<elapsed*1.0e9/(wid*ht)<<" ns/element, time "<<elapsed*1.0e6<<" microseconds\n";

	for (int i=0;i<wid;i++) printf("%08x\n",harr[i]);

	return 0;
}

(Try this in NetRun now!)


To get decent performance, you usually make blocks of 256 threads each, and launch thousands of blocks.

Here's some performance data from NetRun's NVIDIA GeForce GTX 670:

memcpy size 64: 0.01 GB/sec (8.3 us)
memcpy size 256: 0.03 GB/sec (8.3 us)
memcpy size 1024: 0.12 GB/sec (8.5 us)
memcpy size 4096: 0.47 GB/sec (8.7 us)
memcpy size 16384: 1.45 GB/sec (11.3 us)
memcpy size 65536: 3.21 GB/sec (20.4 us) <- half of peak copy speed
memcpy size 262144: 5.29 GB/sec (49.5 us)
memcpy size 1048576: 6.28 GB/sec (166.9 us)
memcpy size 4194304: 6.60 GB/sec (635.9 us)
memcpy size 16777216: 6.67 GB/sec (2513.8 us)
GPU write size 64: 0.01 GB/sec (9.8 us)
GPU write size 256: 0.03 GB/sec (9.8 us)
GPU write size 1024: 0.10 GB/sec (9.8 us)
GPU write size 4096: 0.42 GB/sec (9.8 us)
GPU write size 16384: 1.64 GB/sec (10.0 us)
GPU write size 65536: 6.03 GB/sec (10.9 us)
GPU write size 262144: 17.32 GB/sec (15.1 us) <- half of peak write speed
GPU write size 1048576: 32.76 GB/sec (32.0 us)
GPU write size 4194304: 41.38 GB/sec (101.4 us)
GPU write size 16777216: 39.99 GB/sec (419.5 us)

Time for sort of size 16: 21099.579 ns/elt (0.338 ms)
Time for sort of size 64: 5302.252 ns/elt (0.339 ms)
Time for sort of size 256: 1377.077 ns/elt (0.353 ms)
Time for sort of size 1024: 312.655 ns/elt (0.320 ms)
Time for sort of size 4096: 78.331 ns/elt (0.321 ms)
Time for sort of size 16384: 20.922 ns/elt (0.343 ms) <- nearly the same time as tiny array!
Time for sort of size 65536: 8.636 ns/elt (0.566 ms)
Time for sort of size 262144: 5.332 ns/elt (1.398 ms) <- half of peak sort speed
Time for sort of size 1048576: 3.527 ns/elt (3.699 ms)
Time for sort of size 4194304: 3.027 ns/elt (12.694 ms)

Note that really big arrays, with millions of floats, deliver far better performance than smaller arrays.  The basic trouble is that going through the OS, across the PCIe bus, into the GPU, and back takes around 10 microseconds (10us = 10000ns).   If you go through all that just to get one or two floats, or even a few thousand, you'll get terrible performance--the GPU prefers really big blocks of data with many thousands of elements.