# HW1: Raytrace a Sphere

2010, Dr. Lawlor, CS 481/681, CS, UAF

For this assignment, build a simple OpenGL program that raytraces one sphere.  Your program must:
• Use GLSL shaders to intersect camera rays with the sphere (see details below).
• Use per-pixel lighting to illuminate the sphere.
• Use a sane perspective 3D camera.
• Allow the user to both move and rotate the camera through the scene using the keyboard and mouse.
You can use any proxy geometry you like, but I recommend just drawing a slightly larger sphere (e.g., glutSolidSphere).  You need not use OpenGL's (horrible) interface for lighting and shading; just hardcoding the sphere properties and light sources into your program is fine for now.

## Intersecting Camera Rays with a Sphere

First, figure out your world-coordinates camera location.  Call this the vec3 C.   HINT: put in some "reference" geometry, and get your camera motion working and coordinate system settled before trying to do any raytracing!

Next, figure out the direction from the camera to the pixel you're currently drawing--if your proxy geometry is at a vec3 G (for example, a varying value), then the direction from the camera to the proxy is G-C.
```glsl
varying vec3 C; // camera location, world coordinates
varying vec3 G; // proxy geometry location, world coordinates
...
	vec3 D = normalize(G-C); // points from camera to geometry
```
This means points along the camera ray satisfy this "parametric equation" for any float t:
```glsl
	vec3 P = C + t * D; // move down the camera ray a distance t
```
We just need to determine the t value along our camera ray where the ray intersects our sphere.  When does a ray intersect a sphere?  Well, points on a sphere of radius r centered at the origin satisfy
```glsl
	r = length(P);
```
Recall that length(P) = sqrt(P.x*P.x + P.y*P.y + P.z*P.z).  The square root is annoying, so square both sides:
```glsl
	r*r = P.x*P.x + P.y*P.y + P.z*P.z;
```
It's a lot more compact to write the right-hand side as a vector dot product of P with P:
```glsl
	r*r = dot(P,P);
```
Now if we substitute in our parametric ray equation, we get:
```glsl
	r*r = dot(C + t * D, C + t * D);
```
Rearranging terms (since dot product distributes over vector addition):
```glsl
	r*r = dot(C,C) + 2.0*t*dot(D,C) + t*t*dot(D,D);
```
The only unknown is the float t.  This is a quadratic equation in t.  If we write it in the standard quadratic form 0 = c + b*t + a*t*t, the coefficients are:
```glsl
	float a = dot(D,D);
	float b = 2.0*dot(D,C);
	float c = dot(C,C) - r*r;
```
From the quadratic equation, then, we get:
```glsl
	float det = b*b - 4.0*a*c; // quadratic discriminant
	if (det < 0.0) discard; /* ray misses sphere */
	float t = (-b - sqrt(det))/(2.0*a); // -b+sqrt... is where the ray leaves the sphere
```
The sphere-ray intersection point P is then C+t*D.  Because a sphere's surface normal always points directly away from the center of the sphere (here, the origin), the normal is just this intersection point, normalized:
```glsl
	vec3 P = C + t*D;
	vec3 N = normalize(P);
```
You can then shade using the usual diffuse and specular components.

My raytraced sphere looks like this.  Note the grid, used to keep track of where I am in 3D space. 