Robot Localization using Blinking Lights
CS 480: Robotics & 3D Printing Lecture, Dr. Lawlor
Using light for robot sensors is useful--light works underwater
or in the vacuum of space.
One challenge is that most environments already contain a lot of
ambient light, so we need good filtering to eliminate this
background signal.
The standard trick is to modulate the signal, blinking the light on
and off.
Blinking at Hz rates
One advantage of blinking slowly is we can use software to perform
both the modulation for transmission, and the signal filtering for
detection.
Transmitting a blinking signal is very easy--turn some glowing thing
on and off. Glowing things include displays, LEDs, or reflectors
bouncing their light.
Receiving a blinking signal is actually somewhat tricky. The
problem is we may not be phase-synchronized with the sender, so we
don't know when it's supposed to be blinking and when it's not.
If our reference is 90 degrees out of phase with theirs, the product
of signal and reference averages to zero over the long term, which
means we can't detect them.
In radio, a standard way to solve this is "I/Q
demodulation", where we multiply the signal by an "I"
reference (which may be out of phase) and a "Q" reference shifted by
90 degrees. At least one of the two references is guaranteed to
correlate with the signal, so combining both recovers the signal
amplitude regardless of phase, allowing reliable detection.
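As a minimal sketch, I/Q demodulation of a per-pixel brightness time series might look like this (the frame rate, blink frequency, and phase here are made-up illustration values, not the parameters of the linked OpenCV code):

```python
import math

def iq_demodulate(samples, fps, blink_hz):
    """Correlate a brightness time series against quadrature references
    at blink_hz.  Returns the blink amplitude, independent of the
    transmitter's unknown phase."""
    I = Q = 0.0
    for n, s in enumerate(samples):
        t = n / fps
        I += s * math.cos(2 * math.pi * blink_hz * t)  # in-phase reference
        Q += s * math.sin(2 * math.pi * blink_hz * t)  # 90-degree shifted
    return math.sqrt(I * I + Q * Q) * 2.0 / len(samples)

# Simulated pixel brightness: a 5 Hz blink sampled at 60 fps,
# with an arbitrary phase offset and a constant background level.
fps, hz, phase = 60.0, 5.0, 1.3
samples = [0.5 + 0.5 * math.cos(2 * math.pi * hz * n / fps + phase)
           for n in range(600)]

print(iq_demodulate(samples, fps, hz))   # ~0.5: recovered blink amplitude
print(iq_demodulate(samples, fps, 7.0))  # ~0: no 7 Hz component present
```

Note that neither the I nor Q correlation alone is reliable (either one can be zero for an unlucky phase), but their magnitude sqrt(I&sup2;+Q&sup2;) never is.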
This OpenCV code performs I/Q demodulation of a video signal
(or .tar.gz), and includes a software modulator.
Several commercial products use this "blinking LED with software
decode" technique, including the Oculus Rift head tracking IR LEDs,
which blink out a 10-bit pattern.
In practice, the big problem is normal USB cameras have surprisingly
low temporal sampling rates:
- In bright indoor light, most USB cameras capture only 30 fps
(33 ms frame time), so even a 15Hz signal yields just two samples
per blink period--the Nyquist limit--making it difficult to
capture reliably.
- In dim light, USB cameras are typically 15 fps (66 ms frame
time) or even 7.5 fps.
- Sony's PS3 Eye camera now has Linux
support for framerates up to 187 fps (5.3 ms frame time),
although only at QVGA = 320x240 pixel resolution.
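The 15Hz-signal-at-30fps problem from the list above is a Nyquist-limit effect: with exactly two samples per blink period, an unlucky phase puts every frame on a zero crossing and the blink vanishes. A tiny simulation (sinusoidal blink, illustrative numbers) shows this:

```python
import math

def sampled_contrast(fps, blink_hz, phase):
    """Brightness contrast (max - min) a camera sees over one second
    of a sinusoidal blink, given the camera's frame rate and the
    phase offset between camera and blinker."""
    frames = [math.cos(2 * math.pi * blink_hz * n / fps + phase)
              for n in range(int(fps))]
    return max(frames) - min(frames)

print(sampled_contrast(30, 15, 0.0))           # 2.0: lucky phase, full contrast
print(sampled_contrast(30, 15, math.pi / 2))   # ~0: frames hit zero crossings
```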
High-end optical
tracking systems use high-speed cameras with blinking illuminators
to detect passive retroreflective markers, reaching sub-millimeter
tracking accuracy at 300+ fps.
Blinking at kHz rates
By contrast, specialized Dynamic
Vision Sensor (DVS) chips, which report changes in pixel
brightness as they happen, can reach 10 microsecond temporal
resolution. This specialized sensor currently costs about $3,000
for a 128x128 pixel imager, but can be combined with simple LED
active markers to give very low-latency tracking.
One very inexpensive option is to use the existing infrastructure
for infrared
remote control devices, most of which amplitude-modulate 950nm
infrared light at either 38kHz or 56kHz. This is fast enough
that we can reliably detect pulses a few hundred microseconds wide.
Demodulating infrared sensors such as the TSOP382
(through-hole part) or TSOP62 (surface-mount part) have a simple
interface with 3 pins: power, ground, and output signal.
The output goes low when an infrared signal is detected ("mark"),
and remains high otherwise ("space"), and typically the timings of
the mark and space are used to encode a digital message. I've
been playing with these chips for robot mining localization (see
the 'lawlor1' infrared encoding Arduino source
code, .zip or .tar.gz).
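As an illustrative sketch, decoding mark/space timings into bits might look like the following. This is a generic pulse-distance scheme in the style of common remote-control protocols (a mark followed by a short space is a 0 bit, a long space is a 1 bit); the thresholds and framing are assumptions for illustration, not the actual 'lawlor1' encoding from the linked source.

```python
def decode_pulses(durations_us, threshold_us=1000):
    """durations_us alternates mark, space, mark, space, ... as reported
    by a demodulating IR receiver; each mark/space pair encodes one bit
    by the length of the space."""
    bits = []
    for i in range(0, len(durations_us) - 1, 2):
        space = durations_us[i + 1]
        bits.append(1 if space > threshold_us else 0)
    return bits

# Example: four mark/space pairs encoding the bits 1, 0, 1, 1
print(decode_pulses([560, 1690, 560, 560, 560, 1690, 560, 1690]))
# -> [1, 0, 1, 1]
```

On an Arduino the durations would come from timing the receiver's output pin transitions (e.g. with micros()); the decode logic is the same.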
Remote control sensors are designed to work anywhere, with a very
wide field of view (typically about 180 degrees). I've done
some work adapting these sensors to a narrower field of view by
bouncing the transmitted or received signal from a moving
mirror. In theory this could give the low cost of IR LEDs with
the 360 degree field of view of a LIDAR, but in practice it's tough
to get both narrow field of view and reliable detection.