A Brief History of Computing, starting in 150 BC
CS 441 Lecture, Dr. Lawlor
Folks have been using physical devices to perform computations for a long time.
Of course, there are serious limitations to mechanical devices:
it's hard to even turn a corner with a rotating axle, just like it's
hard to make a leak-free joint with liquids. One huge advantage
of electronics is that wires are very easy to route, bend, and join.
- 150 BC: Greeks, likely including Archimedes, built clockwork-like chains of gears such as the Antikythera mechanism to predict astronomical events such as eclipses, and to measure time and convert between calendars.
- 1640's: Blaise Pascal built a series of adding machines whose hand-cranked cogs could add (much like a car's mechanical odometer), subtract via complement arithmetic, or multiply via repeated addition.
- 1820's: Charles Babbage designed (but never built) a fully-mechanical polynomial evaluator, the difference engine, via the method of finite differences. He also started work on a fully programmable model, the analytical engine, but building the thing with rod logic would have taken a huge amount of labor.
- 1948: CURTA, a mass-produced fully-mechanical pocket calculator.
- 1949: MONIAC, a hydraulic computer, models the United Kingdom's economy using water.
- 1950's: the automatic transmission, a hydraulic computer, becomes cheap enough for ordinary people to buy.
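Pascal's cogs could only turn forward, so subtraction had to be done by adding a complement and letting the carry wrap around, exactly as a decimal odometer does. A rough Python sketch of the idea (the six-digit machine width and function names here are illustrative, not Pascal's):

```python
# Subtraction by complement addition, as in Pascal's calculator:
# the cogs can only add, so a - b is computed as a + complement(b).
DIGITS = 6  # a hypothetical six-wheel machine

def nines_complement(n):
    """Replace each digit of n with 9 minus that digit."""
    return (10**DIGITS - 1) - n

def subtract_by_adding(a, b):
    """Compute a - b (for a >= b) using only addition and a carry wrap."""
    total = a + nines_complement(b) + 1   # +1 turns it into the tens' complement
    return total % 10**DIGITS             # the overflow carry falls off the wheels

print(subtract_by_adding(5234, 1187))  # 4047
```

The same trick, in binary two's-complement form, is how essentially every electronic CPU on this timeline subtracts.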
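The method of finite differences behind Babbage's difference engine can be sketched in a few lines of Python: once a polynomial's initial differences are tabulated, every subsequent value needs only additions, which is exactly what a chain of gears can do. The function names and example polynomial below are illustrative.

```python
# Sketch of the method of finite differences, as used by Babbage's
# difference engine: evaluate a polynomial at successive points
# using only addition, never multiplication.

def difference_table(poly, start, step, order):
    """Initial value and forward differences of poly at 'start'.
    order must be at least the polynomial's degree."""
    values = [poly(start + i * step) for i in range(order + 1)]
    diffs = []
    while values:
        diffs.append(values[0])
        values = [b - a for a, b in zip(values, values[1:])]
    return diffs

def difference_engine(diffs, count):
    """Produce 'count' successive polynomial values using only addition."""
    diffs = list(diffs)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        # Add each difference into the one above it (one gear train each).
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return out

poly = lambda x: 2 * x**2 + 3 * x + 1    # example polynomial
diffs = difference_table(poly, 0, 1, 2)  # [1, 5, 4]
print(difference_engine(diffs, 6))       # [1, 6, 15, 28, 45, 66]
```

Note the inner loop does nothing but add, so each turn of the crank advances the whole table by one point.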
Fully Electronic: Vacuum Tubes and Transistors
- 1890: Herman Hollerith's patented electromechanical tabulator (mercury switches and relays) counts up the punched cards that represent the 1890 census results; his Tabulating Machine Company later became part of IBM. The 1891 Electrical Engineer raved: "This apparatus works unerringly as the mills of the gods, but beats them hollow as to speed."
- 1941: Konrad Zuse builds the world's first fully-programmable computer, the Zuse Z3. Because it was built from scavenged telephone switching relays in wartime Germany, it was ignored for years.
- 1944: John von Neumann
proposes using the same memory to store both program and data, now
known as a "von Neumann machine". Previous designs used separate
memories for program and data, known as the Harvard architecture.
- 1946: ENIAC, the
first vacuum-tube electronic automatic computer, built by the US
military. ENIAC is fully programmable. Vacuum tubes can
switch in nanoseconds, like transistors, rather than milliseconds, like relays.
- 1956: IBM releases Fortran,
the first successful programming language. Prior to Fortran,
machines were typically programmed using a soldering iron, patch cables, machine code, or assembly.
- 1960's: IBM's System/360, which adds microcode and binary backward compatibility using those newfangled transistors.
- 1964: Seymour Cray's CDC 6600 achieves amazing performance using superscalar
processing, caching, newfangled transistors, liquid cooling, and offloading I/O
to dedicated "peripheral processors", which were hyperthreading-style barrel processors.
- 1971: Upstart Intel creates a single-chip CPU, the 4004, which computes 4-bit values at up to 0.74MHz, 0.1MIPS. 2,300 transistors.
- 1972: HP-35, the first electronic pocket calculator good enough to replace the slide rule, for only $395.
- 1972: Intel's 8008, 8-bit values at up to 0.5MHz. 3,500 transistors.
- 1978: Intel's 8086, 16-bit values at up to 10MHz, 1 MIPS. 29,000 transistors. Instruction set is "x86", still in use today!
- late 1970's: "micro" digital computers, like the Apple I, become cheap enough for dedicated hobbyists to buy and solder together.
- 1981: digital computers, like the IBM PC,
become cheap enough for ordinary people to buy pre-assembled. The
notion of selling software is popularized by the upstart "Micro-soft".
- 1984: Apple releases a 32-bit personal computer, the Mac 128K.
- 1985: Intel's 80386, 32-bit values at up to 40MHz. 10 MIPS. 275,000 transistors.
- 1985: The notion of specialized hardware for graphics is popularized by Silicon Graphics corporation. RISC instruction sets are pushed by MIPS corporation.
- 1990: IBM introduces a superscalar RISC machine, the RS/6000 POWER architecture, which evolves into the PowerPC used in personal computers.
- 1994: Intel releases a 100MHz Pentium (P5) CPU. 120 MIPS (superscalar). 3 million transistors.
- 1990's: graphics hardware for personal computers takes off with GLQuake and other 3D games.
- 2000: Intel releases a 1 GHz Pentium III CPU. 1000+ MIPS. 10 million transistors.
- 2002: Intel releases a 3 GHz Pentium 4 CPU, with hyperthreading. 55 million transistors.
- 2002: Graphics cards become programmable in assembly language (ARB_fragment_program), and support dozens of threads.
- 2003: NVIDIA releases "Cg", a C++-like language for programming
graphics cards. Limitations include a single write per program.
- 2003: AMD corporation introduces chips with a 64-bit extension to the x86 instruction set, which Intel later adopts.
- 2004: Intel abandons plans for a 4GHz Pentium 4 chip.
- 2006: Intel releases dual-core and quad-core CPUs at around 2GHz. The great multithreaded programming model panic begins.
- 2007: Intel announces "V8" eight-core systems. Transistor counts reach billions.
- 2007: NVIDIA releases CUDA, a very C-like language for
programming graphics cards for non-graphics tasks. Supports
arbitrary reads and writes.
- 2008: Graphics hardware now supports between thousands and millions of threads, and uses billions of transistors.
- CPUs are still clocked at 2-4 GHz, just like in 2002. So in the future, your machine won't have many more GHz; instead it will have many more cores. Nobody knows how we will program those cores.
- For highly parallel problems, graphics cards already dramatically surpass CPU parallelism and hence performance. Thousand-core graphics software is commonplace; thousand-core threaded CPU software is rare.
- Technology changes, like gears to relays, relays to vacuum tubes, or tubes to transistors, can totally remake the computer industry in less than 10 years. Biological, nano, or quantum computing has similar potential!