Biological computing is “massively parallel,” decentralized, and noisy. Your cells have somewhere in the neighborhood of 300 quintillion ribosomes, all working at the same time. Each of these exquisitely complex floating protein factories is, in effect, a tiny computer — albeit a stochastic one, meaning not entirely predictable. The movements of hinged components, the capture and release of smaller molecules, and the manipulation of chemical bonds are all individually random, reversible, and inexact, driven this way and that by constant thermal buffeting. Only a statistical asymmetry favors one direction over another, with clever origami moves tending to “lock in” certain steps so that the next step becomes likely to happen.
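To make that idea concrete, here is a minimal, purely illustrative Python sketch of a ratcheted random walk: every individual step is random and reversible, yet a slight forward bias plus occasional “locking in” yields reliable net progress. The bias and locking interval are arbitrary assumptions for illustration, not properties of any real ribosome.

```python
import random

def ratchet_walk(steps=100_000, forward_bias=0.52, lock_every=10):
    """Toy ratchet: each step is random and reversible, but a slight
    statistical bias favors forward motion, and every `lock_every`
    units of net progress gets locked in and cannot be undone."""
    position = 0
    floor = 0  # progress already locked in; the walk can't drop below this
    for _ in range(steps):
        position += 1 if random.random() < forward_bias else -1
        position = max(position, floor)      # locked-in progress is irreversible
        if position - floor >= lock_every:   # lock in another chunk of progress
            floor = position
    return position

if __name__ == "__main__":
    print(ratchet_walk())  # consistently large and positive, despite near-50/50 steps
```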
This differs greatly from the operation of the “logic gates” in a digital computer: basic components that process binary inputs into outputs according to fixed rules. Logic gates are irreversible, and they are engineered to be 99.99 percent reliable and reproducible.
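For contrast with the stochastic sketch above, a logic gate can be written as nothing more than a fixed rule. The NAND gate below is an arbitrary choice of gate, sketched only to show the difference in kind: the same inputs always produce the same output.

```python
def nand(a: int, b: int) -> int:
    """A logic gate as a fixed rule: identical binary inputs always
    yield the identical binary output; nothing is left to chance."""
    return 0 if (a == 1 and b == 1) else 1

# The truth table comes out the same on every run.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} NAND {b} = {nand(a, b)}")
```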
Biological computing is computing, nonetheless. And its use of randomness is a feature, not a bug. In fact, many classic algorithms in computer science also require randomness (albeit for different reasons), which may explain why Turing insisted that the Ferranti Mark I, an early computer he helped to design in 1951, include a random number instruction. Randomness is thus a small but important conceptual extension to the original Turing Machine, though any computer can simulate it by calculating deterministic but random-looking or “pseudorandom” numbers.
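The “pseudorandom” trick can be shown in a few lines: a rule that is fully deterministic, yet produces a stream of numbers that looks random. The sketch below uses a classic linear congruential generator; the particular constants are the commonly quoted Numerical Recipes values, chosen here only for illustration.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: the deterministic recurrence
    state -> (a * state + c) mod m, whose outputs look random."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
# Re-running with the same seed reproduces exactly the same "random" sequence.
```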
Parallelism, too, is increasingly fundamental to computing today. Modern AI, for instance, depends on both massive parallelism and randomness, as in the parallelized “stochastic gradient descent” (SGD) algorithm used to train most of today’s neural nets; the “temperature” setting that chatbots use to introduce a degree of randomness into their output; and the Graphics Processing Units (GPUs) whose parallelism powers most AI in data centers.
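As one concrete instance, the “temperature” setting amounts to rescaling a model’s output scores before sampling from them: low temperature makes the choice nearly deterministic, high temperature makes it more random. The sketch below is a generic temperature-scaled softmax with made-up scores standing in for a real model’s outputs; it illustrates the idea rather than any particular chatbot’s code.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Convert raw scores (logits) to probabilities with a
    temperature-scaled softmax, then sample one index at random."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - peak) for x in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Made-up scores for three candidate next tokens:
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))  # noticeably more varied
```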