Reaction Diffusion Simulations

CS 482 Lecture, Dr. Lawlor

An enormous number of complicated systems are classified as "reaction-diffusion" systems.  The "reaction" part could be a simple chemical reaction like oxidation during respiration, or a more complex interaction like "plants uptake nitrogen from soil" or "predators feed on prey".  The "diffusion" part corresponds to the local neighborhood averaging that happens in a variety of systems.

Physics of Diffusion

A huge number of real physical processes follow a diffusion law. In each case, the process is driven by the interactions of immediate neighbors, but the effects spread out across the entire domain. Keep in mind that diffusion does not need to work equally in all places or directions--real obstacles like glaciers or mountains simply slow the diffusion rate, possibly to zero.
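To make the neighbor-driven update concrete, here's a minimal 1D sketch (my illustration, not from the lecture): each cell moves toward the average of its two neighbors at a local rate, and setting that rate to zero at a cell models an obstacle that blocks diffusion there.

```python
def diffuse_step(u, k):
    """One explicit diffusion update; u (values) and k (local rates) are equal-length lists.
    Each interior cell moves toward the average of its neighbors at rate k[i]."""
    n = len(u)
    out = u[:]
    for i in range(1, n - 1):
        neighbor_avg = 0.5 * (u[i - 1] + u[i + 1])
        out[i] = u[i] + k[i] * (neighbor_avg - u[i])
    return out

u = [0.0] * 5 + [1.0] + [0.0] * 5   # a single spike of "heat"
k = [0.5] * 11
k[3] = 0.0                          # an obstacle: diffusion rate zero at cell 3
for _ in range(20):
    u = diffuse_step(u, k)
# The spike spreads out across the domain, but cell 3 never changes.
```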

Simulating Diffusion

The basic idea in diffusion, or blurring, is to get rid of the high-spatial-frequency detail in an image: a "low-pass" filter.  See the Fourier transform discussion here for what I mean by high and low frequencies (small and large details in an image).

The way blurring gets rid of small details is by averaging them with nearby pixels--because nearby pixels will have the same overall low frequency colors, but different high-frequency details, the details get averaged away but the low frequencies remain.
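Here's a small numeric sketch of that idea (mine, not the lecture's): a 3-tap weighted average with weights (1/4, 1/2, 1/4) exactly cancels the highest-frequency component (an alternating +/- detail) while leaving a slowly varying ramp untouched.

```python
def blur3(signal):
    """One pass of a (1/4, 1/2, 1/4) filter; endpoints are left unchanged."""
    out = signal[:]
    for i in range(1, len(signal) - 1):
        out[i] = 0.25 * signal[i - 1] + 0.5 * signal[i] + 0.25 * signal[i + 1]
    return out

ramp   = [0.1 * i for i in range(10)]           # low frequency: a smooth ramp
detail = [0.2 * (-1) ** i for i in range(10)]   # highest frequency: alternating +/-
noisy  = [r + d for r, d in zip(ramp, detail)]

smooth = blur3(noisy)
# Interior samples of smooth match the ramp: the alternating detail averages away.
```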

You can implement blurring in a 2D texture in many different ways:
  1. Change the mipmap LOD bias: texture2D(tex,coords,bias).  This makes the mipmapping hardware include additional blurring by artificially shifting mipmap levels, essentially sampling a region about 2^bias pixels across.  Clearly, this only works if you've built mipmaps for your texture, but THREE.js does this automatically.
  2. Draw a few copies of the image slightly shifted from one another, carefully adjusting the alpha each time so the copies end up equally weighted onscreen--surprisingly, the alpha values you need are 1.0, then 1.0/2, then 1.0/3, then 1.0/4, etc.  Compositing the nth copy with alpha 1/n scales everything drawn so far by (n-1)/n, which leaves all n copies weighted equally at 1/n.
  3. Read a few nearby pixels ("filter taps") in a GLSL pixel shader, and average them together in a single shader pass.  This is a little faster than the multipass rendering method, and it's easier to write and expand to do other processing, so it's what I usually do.  (Of course, that doesn't mean it's the best!)
  4. Read a whole bunch of nearby pixels, and weight them by a Gaussian curve to get Gaussian blur.  This is the default blur performed by most image editing programs, although it's not totally clear why folks choose this.
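You can check method 2's alpha sequence numerically (my quick check, not shader code): compositing copy n with alpha 1/n using the standard "over" blend leaves every copy with equal weight, so the result is the plain average.

```python
def composite(copies):
    """Composite a list of brightness values with the standard "over" blend,
    giving the nth copy alpha 1/n (so copies end up equally weighted)."""
    dst = 0.0
    for n, src in enumerate(copies, start=1):
        alpha = 1.0 / n
        dst = src * alpha + dst * (1.0 - alpha)
    return dst

# Four "shifted copies" with brightness values 0.0, 0.4, 0.8, 1.2:
result = composite([0.0, 0.4, 0.8, 1.2])
# result equals the plain average (0.0 + 0.4 + 0.8 + 1.2) / 4 = 0.6
```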
One curious result of the central limit theorem: (almost) any blurring technique converges to a Gaussian blur when applied repeatedly (the same image blurred over and over again).  This is good, because it means we can just repeatedly apply a cheap lumpy blur (with few taps) to get results similar to a high-quality Gaussian blur (which would need many more taps in a single pass).
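You can see this convergence directly (my illustration): convolving the cheap 3-tap kernel (1/4, 1/2, 1/4) with itself repeatedly produces binomial weights, the discrete approximation of a Gaussian bell curve.

```python
from math import comb

def convolve(a, b):
    """Full discrete convolution of two lists of filter weights."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

kernel = [0.25, 0.5, 0.25]     # the cheap "lumpy" blur: binomial (1,2,1)/4
effective = kernel
for _ in range(4):             # five blur passes total
    effective = convolve(effective, kernel)

# The effective 11-tap kernel equals the binomial weights C(10,k)/2**10,
# which trace out a Gaussian-shaped bell curve.
```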

Given both original and blurred images, you can subtract the two to find just the image details (the high frequencies alone).  This is useful for several interesting tasks, including HDR Tone Mapping.  You can even mix high and low frequencies from different images for a hideous effect.
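A minimal sketch of this subtraction (my example): taking details = original - blurred keeps only the high frequencies.

```python
def blur3(signal):
    """One pass of a (1/4, 1/2, 1/4) filter; endpoints are left unchanged."""
    out = signal[:]
    for i in range(1, len(signal) - 1):
        out[i] = 0.25 * signal[i - 1] + 0.5 * signal[i] + 0.25 * signal[i + 1]
    return out

# A ramp (low frequency) plus an alternating +/-0.2 wiggle (high frequency):
original = [0.1 * i + 0.2 * (-1) ** i for i in range(10)]
blurred  = blur3(original)
details  = [o - b for o, b in zip(original, blurred)]
# Interior entries of details recover just the alternating +/-0.2 component.
```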

Physics of Reaction

A variety of processes can result in a nonlinear interaction after the diffusion step.
Generally speaking, you can get interesting behavior whenever you combine blurring (which brings neighborhoods together) with almost *any* nonlinear reaction (which drives neighbors farther apart).
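Here's a minimal 1D sketch of that combination (my illustration, not from the lecture): a blur step followed by a simple nonlinear reaction that pushes each cell toward 0 or 1.  The blur pulls neighbors together; the reaction drives values apart; the result is a sharp, stable front instead of everything fading to uniform gray.

```python
def blur(u):
    """Diffusion step: a (1/4, 1/2, 1/4) average; endpoints held fixed."""
    out = u[:]
    for i in range(1, len(u) - 1):
        out[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
    return out

def react(u):
    """Nonlinear reaction: 0 and 1 are stable fixed points, 0.5 is unstable,
    so values below 0.5 shrink toward 0 and values above 0.5 grow toward 1."""
    return [x + 1.6 * (x - 0.5) * x * (1.0 - x) for x in u]

u = [0.0] * 8 + [1.0] * 8        # a step between two regions
for _ in range(100):
    u = react(blur(u))
# The front stays sharp: cells stay near 0 on the left and near 1 on the right,
# rather than blurring out into a uniform 0.5 everywhere.
```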