Integration of Differential Equations

CS 482 Lecture, Dr. Lawlor

In a class about simulations, you'd be correct in thinking we're going to design and analyze quite a few simulations.  These are going to differ in really fundamental ways, but there are many broad concepts that show up repeatedly in this field.

One of the central concepts in simulation is the idea of integrating differential equations, like the equations of motion for a simple Newtonian object, or the Navier-Stokes equations for fluid dynamics. 

Differential Equations

The usual Newtonian definition of velocity V is the time derivative of position P:
    V = dP/dt

If we have a nice functional form for V or P, this is extremely mathematically useful, since we can use all the tools of calculus on a differential definition like this.  If we don't have a nice functional form, like with many real problems, we can't use calculus at all (this is also the case if we've forgotten calculus).

This is why we use computers, but computers don't do infinitesimals.  There are a variety of ways to "discretize" this continuous differential equation into computational steps; but the simplest is to replace the infinitesimal derivatives with a finite difference:
    V = delta_P / delta_t
which means
    V = (new_P - old_P) / delta_t

Given a set of positions, this finite difference lets us estimate the velocity between known positions. We can also solve for new_P, to get a classic equation known as an Euler timestep:
    new_P = old_P + V * delta_t

In a programming language, we'd usually write this as:
   P += V * delta_t;

One big problem here: to best approximate the underlying differential equation, "delta_t" should be as small as possible.  But make it too small, and floating-point roundoff begins to change your simulation results.  Also, making the timestep one-tenth as big needs ten times as many steps to cover the same simulated time, so you quickly run into computational limits: even a simple differential equation can consume arbitrary amounts of computational power and still not be solved perfectly! 

The other problem is that this finite difference isn't a very accurate approximation to the continuous equation, which usually causes energy to slowly bleed out of the system.  You can do better by taking the average V across the timestep, known as Leapfrog integration (more next week!).

3D Vectors

In the finite difference above, if P and V are vectors with xyz components, we can just update each component of the vector:
   P.x += V.x * delta_t;
   P.y += V.y * delta_t;
   P.z += V.z * delta_t;

It's always possible to break down 3D computations into a set of (coupled) 1D computations, but this ".x .y .z" copy and paste is quite error prone--I always end up with ".x .y .y" somewhere, which gives bizarre results and is very hard to locate and fix. 

In the OpenGL shader language GLSL, or in C++ with a 3D vector class, you can avoid the copy and paste by defining P and V as 3D vectors, of type "vec3".  You can then operate on 3D vectors just like they were scalars:
   P += V * delta_t;

JavaScript lacks the operator overloading that lets a class like vec3 handle multiplication and addition nicely, so in PixAnvil I added some little helper methods to the existing vec3 class for these operations:


    Operation          C++ or GLSL                JavaScript/PixAnvil       Helper name
    Vector sum         vec3 vecC = vecA+vecB;     var vecC=vecA.p(vecB);    plus
    Vector addition    vecA += vecB;              vecA.pe(vecB);            plus-equal
    Vector * scalar    vecA *= 2.0;               vecA.te(2.0);             times-equal

Units

Physical quantities have units, like meters or seconds or watts.  Most programming languages don't know or care about the units, so you can easily make unit errors like:
    float energy_in_joules = distance_in_feet * force_in_pounds;

Of course, foot * pound force gives you ft-lb of energy, not joules.  But all the variables are "float", so the compiler happily lets these errors creep in. 

The best way to avoid unit errors is to pick your units when you begin building the simulation, and stick with them throughout.  MKS metric units (meters, kilograms, seconds) seem to be the most common, although I've worked with everything from astrophysical simulations where "1.0" is the size of the observable universe, to molecular dynamics simulations where "1.0" is the size of an atom.  For 3D printed models, I've seen STL files where 1.0 means 1 centimeter, 1 millimeter, or even 1 inch.

It is possible to go back and figure out the effective units you're working in after the fact, but it's less effort and more reliable to figure them out as you go.  Sometimes I'll work without units just to capture the overall "feel" of a phenomenon before actually matching it to reality, but it's usually easier if everything has units from the start.

Boundary Conditions

Often the trickiest part of a simulation is not "knowing how to simulate", it's "knowing where to stop".  Since our computers are finite, we need to do something at the borders of the domain we can afford to simulate.  That "something" can be very simple, but sometimes this boundary condition messes up the simulation even in the interior. 

For a particle simulation, a boundary condition might be something simple like:

    if (particle is outside the boundary) {
        change the velocity to make it move back inside
    }

Simulating waves or fluid flow is especially tricky, since it's easy to get wind resistance or wave reflection off the boundary.  One common trick is to make the boundaries periodic, essentially wrapping the simulation around at the boundaries.
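As a sketch, here's what both kinds of boundary might look like for a 1D particle in the domain [0, L] (the function names are mine, just for illustration):

```cpp
#include <cmath>

// Reflecting boundary: if the particle left the domain [0,L],
// push it back inside and reverse its velocity.
void reflect_boundary(float &P, float &V, float L) {
    if (P < 0) { P = -P;        V = -V; }
    if (P > L) { P = 2 * L - P; V = -V; }
}

// Periodic boundary: wrap the position around, leaving velocity alone.
void periodic_boundary(float &P, float L) {
    P = fmodf(P, L);
    if (P < 0) P += L;  // fmodf can return a negative remainder
}
```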

Conserved Quantities, like Energy

Energy is conserved in both the universe and our simulations mostly because:

  1. If you keep adding energy, everything explodes (the world "ends in fire").
  2. If you keep removing energy, everything freezes in place (the world "ends in ice").
  3. If you don't add or remove energy, everything keeps working reasonably.

Many simulations calculate and monitor the total energy of everything in the simulation, as a way to verify that things are working correctly.  This is a good way to detect even surprisingly subtle flaws in the physics or numerics.  Be aware that conserving energy is a goal, not an absolute requirement; tiny conservation errors from things like roundoff are often difficult to avoid. 

You can also lose or gain energy at the borders of the simulation, where things like the energy added to the wind as your object passes by, or the electrical energy input to the motor, aren't accounted for.  You can often fairly easily add these border energy terms back in, and sometimes they can be quite useful (e.g., noting that most of the drag energy loss in a boat is happening at the tail).

Linear and angular momentum are also conserved quantities.  Loss of momentum conservation tends to look weird, causing objects to fly off into the distance.