Computers were, at first, a decidedly unintegrated technology. They were composed of vacuum tubes, resistors, capacitors, inductors and mercury delay lines, and as a result were huge, expensive and power-hungry. The situation improved with developments in microelectronics based on solid-state germanium transistors and diodes, which began replacing vacuum tubes in radios and phonographs and eventually led to the first commercial transistor-based computers in 1959. That same year, the development of the planar process at Fairchild Semiconductor allowed tens of silicon transistors to be fabricated at the same time on the surface of a single-crystal silicon wafer (ten years later, it would be possible to fabricate thousands of transistors). This invention was quickly followed by the commercialization of the first bipolar digital integrated circuits (ICs) in 1962, and from that point on, progress in semiconductor ICs became exponential, with the maximum number of components integrated on a silicon chip doubling every year, at least initially. But what allowed all the functions of a general-purpose computer to be integrated together was a monolithic central processing unit (CPU), that is, a microprocessor, and the first commercial microprocessor was born nine years later, in 1971 (Fig. 1).