Coordinate Systems for Robot Sensor Fusion

CS 480: Robotics & 3D Printing Lecture, Dr. Lawlor

One of the hardest problems in fusing multiple robot sensors into a coherent view of the world is consistently representing their coordinate systems.  For example, if I put a Kinect sensor on a ground robot outdoors, the position and orientation of both the robot and the Kinect are fairly arbitrary 3D values.  Smashing these down to 2D just can't represent things like tunnels or overpasses, and it ignores dangerous things like cliffs.
You can even use a single 4x4 homogeneous matrix to represent both position and orientation.  These matrices also compose, so you can incrementally compute the world-to-tool coordinate system by multiplying together the world-to-robot, robot-to-arm, and arm-to-tool coordinate system offsets, as in the sketch below.
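
As a concrete sketch (in Python with NumPy, not code from the course), here is one way to build 4x4 homogeneous transforms from a rotation plus a translation and chain them with matrix multiplication.  The specific angles and offsets are made up purely for illustration.

```python
import numpy as np

def make_transform(yaw_degrees, translation):
    """Build a 4x4 homogeneous transform: rotate about Z by yaw, then translate."""
    c, s = np.cos(np.radians(yaw_degrees)), np.sin(np.radians(yaw_degrees))
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0],
                 [s,  c, 0],
                 [0,  0, 1]]          # 3x3 rotation block
    T[:3, 3] = translation            # translation column
    return T

# Hypothetical offsets, just for illustration:
world_to_robot = make_transform(30.0, [2.0, 1.0, 0.0])   # robot pose in the world
robot_to_arm   = make_transform( 0.0, [0.1, 0.0, 0.5])   # arm base mounted on the robot
arm_to_tool    = make_transform(45.0, [0.0, 0.0, 0.3])   # tool at the end of the arm

# The transforms compose by matrix multiplication:
world_to_tool = world_to_robot @ robot_to_arm @ arm_to_tool

# A point in tool coordinates (homogeneous, w=1) mapped into world coordinates:
tool_point = np.array([0.0, 0.0, 0.05, 1.0])
print(world_to_tool @ tool_point)
```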

Coordinate system malfunctions are incredibly common when setting up a new robot, or when doing any sort of sensor filtering or combination.   Typical symptoms: the sensor data arrives rotated 90 or 180 degrees from reality; the sensor data is the mirror image of what it should be; or everything works fine until the robot moves, and the sensor data is then projected at some new and invalid location.
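
One cheap sanity check that catches the mirror-image class of bug is to verify that the rotation part of a transform is a proper rotation: orthonormal, with determinant +1.  A determinant of -1 means the matrix is a reflection, i.e. the data will come out mirrored.  A minimal sketch, again assuming NumPy and the 4x4 convention above:

```python
import numpy as np

def check_transform(T, name="transform"):
    """Sanity-check a 4x4 homogeneous transform for common coordinate-system bugs."""
    R = T[:3, :3]
    # The bottom row of a rigid transform should be [0, 0, 0, 1].
    if not np.allclose(T[3], [0.0, 0.0, 0.0, 1.0]):
        print(name, ": bottom row is not [0 0 0 1] -- not a rigid homogeneous transform")
    # The rotation block must be orthonormal: R @ R.T == identity.
    if not np.allclose(R @ R.T, np.eye(3), atol=1e-6):
        print(name, ": rotation block is not orthonormal (scale or shear crept in)")
    # det(R) == +1 for a rotation; -1 means a reflection (flipped axis, wrong handedness).
    if np.linalg.det(R) < 0:
        print(name, ": determinant is negative -- this transform mirrors the data")

# Example: flipping the sign of one axis produces the classic mirror-image bug.
bad = np.eye(4)
bad[0, 0] = -1.0          # accidentally negated the X axis
check_transform(bad, "bad")
```

The "works until the robot moves" failure is usually a transform that was captured once and never updated; re-reading the world-to-robot offset every time new sensor data arrives avoids projecting data with a stale pose.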