The evolution of the modern microprocessor is one of many surprising twists and turns. Who invented the first micro? Who had the first 32-bit single-chip design? You might be surprised at the answers.
At the dawn of the 19th century, Benjamin Franklin's discovery of the principles of electricity was still fairly new, and practical applications of his discoveries were few; the most notable was the lightning rod, which was invented independently by two different people in two different places. Independent contemporaneous (and not-so-contemporaneous) discovery would remain a recurring theme in electronics.
So it was with the vacuum tube, invented by Fleming while he was investigating the Edison effect (discovered by and named for Edison) and refined four years later by de Forest (though it is now rumored to have been anticipated 20 years earlier by Tesla). So it was with the transistor: Shockley, Brattain, and Bardeen were awarded the Nobel Prize for turning de Forest's triode into a solid-state device, but they were not awarded a patent because of prior art from Lilienfeld some 20 years earlier. So it was with the integrated circuit (IC), for which Jack Kilby was awarded a Nobel Prize but which was contemporaneously developed by Robert Noyce of Fairchild Semiconductor (who got the patent). And so it was, indeed, with the microprocessor.
Before the flood: The 1960s
A scant few years after the first laboratory integrated circuits, Fairchild Semiconductor introduced the first commercially available integrated circuit (though at almost the same time as one from Texas Instruments).
Already at the start of the decade, the process technology that would last into the present day was available: commercial ICs made with the planar process were offered by both Fairchild Semiconductor and Texas Instruments by 1961, and TTL (transistor-transistor logic) circuits appeared commercially in 1962. By 1968, CMOS (complementary metal oxide semiconductor) had hit the market. There is no doubt that technology, design, and process were evolving rapidly.
Watching this trend, Fairchild Semiconductor's director of research and development, Gordon Moore, observed in 1965 that the density of elements in ICs was doubling annually, and he predicted that the trend would continue for the next ten years. With certain amendments, this prediction came to be known as Moore's Law.
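To get a feel for how aggressive that prediction was, here is a minimal, purely illustrative Python sketch of what annual doubling compounds to over a decade. The 1965 starting figure of 64 elements per IC is an assumption for illustration, not a number from this article.

```python
# Back-of-the-envelope sketch of Moore's 1965 observation: if element
# density doubles every year, project the growth over the ten years he
# predicted. The starting density is an illustrative assumption.

START_YEAR = 1965
START_DENSITY = 64  # assumed elements per IC in 1965 (illustrative only)

for years_elapsed in range(11):
    density = START_DENSITY * 2 ** years_elapsed
    print(f"{START_YEAR + years_elapsed}: ~{density:,} elements per IC")

# Ten annual doublings multiply density by 2**10 = 1024, roughly a
# thousandfold increase over the decade Moore was forecasting.
```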
The first ICs contained just a few transistors per chip; by the dawn of the 1970s, production techniques allowed for thousands of transistors per chip. It was only a matter of time before someone would use this capacity to put an entire computer on a chip, and several someones, indeed, did just that.
Development explosion: The 1970s
The idea of a computer on a single chip had been described in the literature as far back as 1952 (see Resources), and more articles like it began to appear as the 1970s dawned. Finally, process had caught up with thinking, and the computer on a chip became possible. The air was electric with the possibility.
Once the feat had been accomplished, the rest of the decade saw a proliferation of companies old and new getting into the semiconductor business, as well as the first personal computers, the first arcade games, and even the first home video game systems, thus spreading consumer contact with electronics and paving the way for continued rapid growth in the 1980s.
At the beginning of the 1970s, microprocessors had not yet been introduced. By the end of the decade, a saturated market led to price wars, and many processors were already 16-bit.
The first three
At the time of this writing, three groups lay claim to having been the first to put a computer on a chip: the Central Air Data Computer (CADC), the Intel® 4004, and the Texas Instruments TMS 1000.
The CADC system was completed for the Navy's "Tomcat" fighter jets in 1970. It is often discounted because it was a chip set rather than a single-chip CPU. The TI TMS 1000 was first to market, but only inside a calculator, not in stand-alone form; that distinction goes to the Intel 4004, which is just one of the reasons it is often cited as the first (incidentally, it too was just one part of a four-chip set).
In truth, it does not matter who was first. As with the lightning rod, the light bulb, radio, and so many other innovations before and after, it suffices to say it was in the aether, it was inevitable, its time had come.
Where are they now?
The CADC spent nearly three decades in top-secret, Cold War-era mothballs until finally being declassified in 1998. Thus, even if it was the first, it remains under most people's radar even today, and it did not have a chance to influence other early microprocessor designs.
The Intel 4004 had a short and mostly uneventful history before being superseded by the 8008 and other early Intel chips (see below).
In 1973, Texas Instruments' Gary Boone was awarded U.S. Patent No. 3,757,306 for the single-chip microprocessor architecture. The chip was finally marketed in stand-alone form in 1974, for the low, low (bulk) price of US$2 apiece. In 1978, a special version of the TMS 1000 was the brains of the educational "Speak & Spell" toy that E.T. jury-rigged to phone home.