The Outer Limits... of Computing
CS 441 Lecture, Dr. Lawlor
Since man's powers are finite, any exponential growth curve has to
eventually stop when it hits some limit. Here are some limits
that computing systems are facing today, and that may eventually cap
their growth.
Your typical US wall outlet provides 120VAC at 15 amps. So you
can't ever push more than 1800 watts through that outlet, and in
practice less, due to the AC power factor
and wire resistance. Wire losses become significant with long
wires (e.g., for a physically large parallel machine) and high amp
draw--for example, a 2-ohm extension cord will lose 20 volts (across
each conductor!) when pulling 10 amps. Eventually, you'll need
bigger wires just to maintain a usable voltage at the end of the run.
Most other countries use 240VAC, which works much better over long
wires for two reasons: first, at the higher voltage you need fewer amps
to deliver the same number of watts, and lower amperage means less
line loss; and second, starting from a higher voltage, you have more
volts to spare before the voltage at the far end becomes unusable.
In practice, a future CPU/GPU/?PU using about 100 watts won't cause
many problems, but a 10kW computer is going to need some serious
infrastructure just to get power: at 120V, that's over 80 amps!
The net result of expending electrical power in a computer is
computation and heat. Big computer machine rooms already have
substantial infrastructure for power distribution, and this will only
grow if power usage keeps increasing.
There are at least four well-known ways to transfer heat: conduction
(transfer through a solid, like the metal in your heatsink conducting
heat away), convection (transfer to the air, like the fins on the heatsink),
circulation (transfer to a liquid, like a house boiler), and radiation
(transfer as infrared energy). Each of these mechanisms
is more efficient at a higher source temperature, so a Peltier device (a heat pump) is often used to increase the apparent source temperature.
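Radiation is the extreme case: by the Stefan-Boltzmann law, radiated
power grows with the fourth power of absolute temperature. Here's a
minimal sketch of that scaling; the emissivity and surface temperatures
are illustrative assumptions:

#include <cmath>
#include <initializer_list>
#include <iostream>

int main(void) {
    const double sigma = 5.67e-8;  // Stefan-Boltzmann constant, W/(m^2 K^4)
    const double emissivity = 0.9; // assumed: a dull, nearly black-body surface
    for (double kelvin : {300.0, 350.0, 400.0}) { // roughly 27C, 77C, 127C
        double wattsPerM2 = emissivity * sigma * std::pow(kelvin, 4.0);
        std::cout << kelvin << "K surface radiates about "
                  << wattsPerM2 << " W per square meter\n";
    }
    return 0;
}

Going from a 300K surface to a 400K surface roughly triples the
radiated watts per square meter, which is why a hotter apparent source
sheds heat so much faster.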
Thermodynamics says you can't destroy heat, only move it--but you can pretty
easily dump it into the air, the water, the ground, or radiate it into
deep space. Intel actually seriously considered a plumbing hookup
for PCs, where you use the PC as a pre-heater for your hot water
tank. Many big machine rooms are moving to liquid cooling at the rack or even CPU level.
In your garage, you can build circuit boards pretty easily.
But you can't build useful silicon chips--the features on modern chips
are so tiny that one speck of dust can ruin a chip (a "defect").
Modern silicon foundries really work to keep the "defect rate" low, but
in a typical wafer there will still be several defects. Assuming
defects are randomly distributed (though real defect distributions are non-random!),
the expected number of defects grows linearly with the chip's area--so big
chips are much more likely to contain defective parts than tiny chips.
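To put numbers on this, here's a minimal sketch assuming a Poisson
defect model, with a made-up defect density: with D defects per square
centimeter, a chip of area A expects D*A defects, and the chance it
comes out completely clean is e^(-D*A).

#include <cmath>
#include <initializer_list>
#include <iostream>

int main(void) {
    const double defectsPerCm2 = 0.5; // assumed (made-up) defect density
    for (double areaCm2 : {0.5, 1.0, 4.0}) { // tiny, medium, and big chips
        double expectedDefects = defectsPerCm2 * areaCm2;  // linear in area
        double cleanFraction = std::exp(-expectedDefects); // P(zero defects)
        std::cout << areaCm2 << " cm^2 chip: expect "
                  << expectedDefects << " defects, "
                  << cleanFraction * 100.0 << "% of chips come out clean\n";
    }
    return 0;
}

With these numbers, 78% of the tiny chips come out clean but only 14%
of the big ones do--the yield falls off exponentially with area.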
Parallelism provides a cool workaround for fabrication defects--just
turn off the affected parts of the chip, which hurts the performance a
bit, but is still worthwhile. AMD is selling "tri core" CPUs
that were actually built as quad cores, but lost a core during
fabrication. nVidia is rumored to do the same thing with graphics
cards; you once could even "unlock" the defective pixel shader units
with a registry key, and take your chances on getting bad pixels in
your games.
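Extending the same made-up yield model, here's a sketch of why selling
partially disabled chips pays off: if each of four cores independently
comes out clean with probability y, then y^4 of the dies work as quad
cores, and another 4*y^3*(1-y) can be salvaged as tri cores.

#include <cmath>
#include <iostream>

int main(void) {
    const double coreYield = 0.9; // assumed chance each core is defect-free
    double quad = std::pow(coreYield, 4.0);                        // all 4 cores good
    double tri = 4.0 * std::pow(coreYield, 3.0) * (1.0 - coreYield); // exactly 3 good
    std::cout << "Sellable as quad core: " << quad * 100.0 << "%\n";
    std::cout << "Salvageable as tri core: " << tri * 100.0 << "%\n";
    std::cout << "Total sellable dies: " << (quad + tri) * 100.0 << "%\n";
    return 0;
}

With these numbers, turning off one bad core rescues almost a third of
the dies, raising total sellable parts from 66% to 95%.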
Human beings still are intimately involved in designing and building computers, which shows up periodically as errors:
- In the early 1990's, some Intel processors computed the wrong answer for floating-point divide; this was the famous FDIV bug.
- In the late 1990's, some Intel processors suffered from the "F00F bug", an instruction-decode bug that could lock up or reboot the CPU.
- AMD recently had a "TLB bug", a race condition between the different levels of their cache.
That said, I don't think design will be a serious limiting factor going forward, because CPU designs are actually getting simpler as they go wider in parallelism.
Curiously, the biggest limit to multicore performance today is
software--it'd be a real shame if hardware progress were stopped by the
fact that nobody likes writing threaded code!