A Brief History of Computing, starting in 150 BC
CS 441 Lecture, Dr. Lawlor
Folks have been using physical devices to perform computations for a long time. Notable accomplishments:
- 150 BC: Greeks built clockwork-like chains of gears, such as the Antikythera mechanism, to predict astronomical events such as eclipses, and to measure time and convert between calendars; ancient writers credit Archimedes with building similar devices even earlier.
- 1640's: Blaise Pascal built a series of adding machines that used hand-cranked cogs to add (much like a car's mechanical odometer), to subtract via complement arithmetic, and to multiply via repeated addition.
- 1820's: Charles Babbage designed (but never built) the difference engine, a fully-mechanical polynomial evaluator based on the method of finite differences (see the worked example after this timeline). He also started work on a fully programmable model, the analytical engine, but building it with rod logic would have taken a huge amount of labor.
- 1890: Herman Hollerith's patented electromechanical tabulator (mercury switches and relays) counts up the punched cards representing the 1890 US census results; his Tabulating Machine Company later merged into what became IBM. The 1891 Electrical Engineer raved: "This apparatus works unerringly as the mills of the gods, but beats them hollow as to speed."
- 1941: Konrad Zuse builds the world's first fully-programmable computer, the Z3, out of scavenged telephone switching relays. Built in wartime Germany, it was ignored for years.
- 1944: John von Neumann proposes using the same memory to store both program and data, a design now known as a "von Neumann machine". Previous designs used separate memories for program and data, an approach known as the Harvard architecture.
- 1946: ENIAC, the first general-purpose vacuum-tube electronic computer, is built at the University of Pennsylvania for the US Army. ENIAC is fully programmable, initially by rewiring plugboards and switches. Vacuum tubes can switch in nanoseconds, like transistors, rather than milliseconds, like relays.
- 1949: MONIAC, a hydraulic computer, models a national economy using the flow of water.
- 1950's: the automatic transmission, a hydraulic computer, becomes cheap enough for ordinary people to buy.
- 1957: IBM releases Fortran, the first commercially successful high-level programming language. Prior to Fortran, machines were typically programmed in machine code or assembly.
- 1960's: IBM's System/360, which adds microcode and binary backward compatibility across a whole family of machines.
- 1964: Seymour Cray's CDC 6600 achieves amazing performance using superscalar processing, newfangled transistors, liquid cooling, and offloading I/O to dedicated "peripheral processors", which were hyperthreading-style barrel processors.
- 1971: Upstart Intel creates a single-chip CPU, the 4004, which computes 4-bit values at up to 0.74MHz.
- 1972: Intel's 8008, 8-bit values at up to 0.5MHz.
- 1978: Intel's 8086, 16-bit values at up to 10MHz.
- late 1970's: "micro" digital computers, like the Apple I, become cheap enough for dedicated hobbyists to buy and solder together.
- 1981: digital computers, like the IBM PC, become cheap enough for ordinary people to buy pre-assembled. The notion of selling software is popularized by the upstart "Micro-soft" corporation.
- 1984: Apple releases a 32-bit personal computer, the Mac 128K.
- 1985: Intel's 80386, 32-bit values at up to 33MHz.
- 1985: The notion of specialized hardware for graphics is popularized by Silicon Graphics corporation. RISC instruction sets are pushed by MIPS corporation.
- 1990: IBM introduces the superscalar RISC POWER architecture in its RS/6000 workstations; its PowerPC derivative later reaches personal computers.
- 1994: Intel releases a 100MHz Pentium CPU.
- late 1990's: the concept of 3D graphics hardware for personal computers takes off with GLQuake and other 3D games.
- 2000: Intel releases a 1 GHz Pentium III CPU.
- 2002: Intel releases a 3 GHz Pentium 4 CPU, with hyperthreading.
- 2002: Graphics cards become programmable in assembly language (ARB_fragment_program), and support dozens of threads.
- 2003: NVIDIA releases "Cg", a C-like language for programming graphics cards. Limitations include only a single output write per program.
- 2003: AMD corporation introduces chips with a 64-bit extension to the x86 instruction set, which Intel later adopts.
- 2004: Intel abandons plans for a 4GHz Pentium 4 chip.
- 2006: Intel releases dual-core and quad-core CPUs at around 2GHz. The great multithreaded programming model panic begins.
- 2007: Intel announces "V8" eight-core systems.
- 2007: NVIDIA releases CUDA, a very C-like language for programming graphics cards for non-graphics tasks. It supports arbitrary reads and writes (a minimal kernel sketch follows this timeline).
- 2008: Graphics hardware now keeps thousands of threads in flight, and programs routinely launch millions of threads.
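Aside on the 1820's entry: the method of finite differences lets a machine tabulate a polynomial using nothing but addition. A quick worked example for f(x) = x^2 at x = 0, 1, 2, ...:

    x               :  0   1   2   3   4   5
    f(x) = x^2      :  0   1   4   9  16  25
    1st differences :    1   3   5   7   9
    2nd differences :      2   2   2   2

The second differences of a degree-2 polynomial are constant, so once the first few entries are seeded, each new value of f(x) costs just two additions: add the constant 2 to the previous first difference, then add that to the previous f(x). A degree-n polynomial needs only n additions per step, which is exactly the kind of repetitive carry-and-add that Babbage's columns of gears were designed to grind through.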
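Aside on the 2007 CUDA entry: here is a rough sketch of what a minimal CUDA program looks like (the kernel name "scale" and the array size are just made up for illustration). Each of roughly a million threads computes its own index and does its own read and write, something the shader-era languages above, with their single write per program, could not do.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Each thread scales one array element: one arbitrary read, one arbitrary write. */
    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main(void) {
        const int n = 1 << 20;                    /* about a million floats */
        size_t bytes = n * sizeof(float);
        float *h = (float *)malloc(bytes), *d = 0;
        for (int i = 0; i < n; i++) h[i] = (float)i;

        cudaMalloc((void **)&d, bytes);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

        int threads = 256;                        /* threads per block */
        int blocks = (n + threads - 1) / threads; /* enough blocks to cover n */
        scale<<<blocks, threads>>>(d, 2.0f, n);   /* launch ~1 million threads */
        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);

        printf("h[12345] = %f\n", h[12345]);      /* expect 24690.0 */
        cudaFree(d); free(h);
        return 0;
    }

Launching a million threads this way is routine on a GPU; the block/grid decomposition is what lets the same code scale from the dozens of threads of 2002-era hardware to the thousands-to-millions of 2008.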
Major lessons:
- In the future, your machine won't have many more GHz; instead, it will have many more cores. Nobody knows how to program those cores!
- For highly parallel problems, graphics cards already dramatically surpass CPU parallelism and hence performance. Thousand-core graphics software is commonplace; thousand-core CPU software is unheard-of.
- Technology changes, like gears to relays, relays to vacuum tubes, or tubes to transistors, can completely remake the computer industry in less than 10 years. Biological, nanoscale, or quantum computing has similar potential!