The Story
“The Limits of Intelligence”
http://www.scientificamerican.com/article.cfm?id=the-limits-of-intelligence
by Douglas Fox
Scientific American, July 2011
The Pitch
[Fox notes: This pitch was originally written to peg the story for a special “the end” theme issue of Sci Am which ended up running long before this story did. “End of intelligence” was always a stretch, trying to shoehorn the story into that special issue, so it’s probably fortunate that I never had to try to actually write it to that theme.]
The End of Intelligence
Several weeks ago scientists at IBM unveiled the largest computer brain simulation to date: 1.6 billion virtual neurons connected by 9 trillion synapses. The simulation ran on Dawn, a Blue Gene/P supercomputer—one of the fastest in the world—consuming 1 million watts of electricity. Even so, it ran at only 1/600th of real time. At current rates of supercomputer growth, the 1,000-fold greater computing capacity needed to simulate an entire human cortex in real time could be available by 2019. But here’s the hitch: That computer will consume up to a billion watts of electricity, equal to a nuclear power plant—an annual power bill of $1 billion. This raises an unsettling truth: At some point, limited resources—financial and energetic—may constrain our ability to scale up transistors and computers to achieve bigger and bigger feats of number-crunching.
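For readers who want to check that power-bill arithmetic, a rough back-of-envelope sketch is below; the electricity price of roughly ten cents per kilowatt-hour is an illustrative assumption, not a figure from IBM or the simulation team.

    # Back-of-envelope check of the power-bill arithmetic above.
    # The electricity price (~$0.10 per kWh) is an illustrative assumption,
    # not a figure from IBM or the simulation team.
    dawn_power_watts = 1e6            # Dawn's draw during the simulation
    scale_up = 1000                   # ~1,000-fold more capacity for a real-time cortex
    full_cortex_watts = dawn_power_watts * scale_up      # ~1e9 W, about a gigawatt

    hours_per_year = 24 * 365
    energy_kwh = full_cortex_watts / 1000 * hours_per_year
    price_per_kwh = 0.10              # assumed average price in USD
    annual_bill = energy_kwh * price_per_kwh
    print(f"~{full_cortex_watts:.0e} W, roughly ${annual_bill / 1e9:.1f} billion per year")
    # Output: ~1e+09 W, roughly $0.9 billion per year, consistent with the $1 billion estimate.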
What few people recognize is that biological intelligence—brains—may well face similar limits. These limits stem from the very nature of neurons, axons, and the statistically unreliable ion channels on which they rely. They are rooted in the ways that the energy consumption and wiring patterns of brains scale (according to power laws) as brains increase in size, from that of the 2-gram shrew to that of the 100,000-kilogram blue whale. “The relationship between energy and information is rather deep, and grounded in thermodynamics,” says Simon Laughlin, a theoretical neuroscientist at the University of Cambridge.
If there’s a limit to the intelligence that can be attained by brains made of neurons and ion channels as we know them, then this spawns another, more profound question: Has the human brain—which already devours 20% of our calories—approached this limit? In Scientific American’s special issue on THE END, readers would enjoy contemplating the question of whether hominids’ evolutionary meander toward intelligence has approached its thermodynamic conclusion. Positive selection for cognitive capacity (if it occurs) might well squeeze a few more IQ points out of our brains in the next 50,000 years—but these gains may be small, and achieved at great cost. Such a thesis is bound to evoke strong opinions, strong responses, and strong readership.
Constraints on intelligence are rooted in the energetics of signal-to-noise ratio. Whether in neurons or digital electronics, doubling the ratio generally requires quadrupling the energy. In brains, noise arises from the fundamental unit of neural computation—the ion channel, which produces the electrical currents in action potentials. Single ion channels open and close stochastically. Laughlin has found that this random behavior by ion channels places a lower limit on the brain’s ability to miniaturize its axons (the telegraph wires that carry action potentials). The brain can save energy by having smaller axons with fewer ion channels. But if an axon is too small, the random opening of a single ion channel can trigger an accidental action potential; axons must be fat enough to keep this from happening too often. (Even so, Laughlin calculates that the brain has pushed miniaturization to its limit: the skinniest axons fire up to 5 spontaneous action potentials per second.)
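The quadratic relationship between signal-to-noise ratio and energy can be sketched in a few lines; the square-law assumption (signal power growing with the square of amplitude against fixed noise) is the standard idealization, not something specific to Laughlin’s calculations.

    # Sketch of the signal-to-noise scaling invoked above.
    # Assumes the standard idealization: signal power grows with amplitude squared
    # while noise power stays fixed, so energy ~ SNR^2.
    def relative_energy(snr_factor):
        """Energy required to raise the signal-to-noise ratio by snr_factor."""
        return snr_factor ** 2

    for factor in (2, 4, 10):
        print(f"SNR x{factor} -> energy x{relative_energy(factor):.0f}")
    # SNR x2 -> energy x4: doubling the ratio quadruples the energy.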
Laughlin and David Attwell have determined that action potential signaling accounts for 80% of the energy consumed by the brain’s grey matter. If one includes white matter (composed entirely of axons, albeit myelinated ones), then the fraction of the whole brain’s energy that goes toward communication is probably at least as high.
As brains get bigger, the cost of communication only grows. Based on brains from the pygmy shrew to the elephant, Terrence Sejnowski (Salk Institute) has determined that the volume of white matter grows faster than the volume of grey matter as brains increase in size. This is because axons not only increase in length as the brain grows; long-distance axons also get fatter. Fatter axons transmit their action potentials more quickly. Samuel Wang (Princeton) has found that because axons fatten in larger brains, the fastest transmission time for cross-brain action potentials is surprisingly constant across a wide range of species with different brain sizes—about 1 millisecond. This suggests that such fast communication is mandatory for effective brain function. And since fatter axons consume more energy, it follows that as brains increase in size, the energy demands for communication grow more quickly than the size or processing power of the neural network. (As axons grow fatter, spike velocity increases linearly with axon diameter, but energy consumption increases supralinearly.)
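A toy sketch of that scaling argument follows. The fixed ~1-millisecond cross-brain conduction time and the linear velocity-diameter relation come from the paragraph above; the exponent of 2 on energy is purely illustrative, since the argument requires only that energy grow supralinearly with diameter.

    # Toy model: hold cross-brain conduction time at ~1 ms as path length grows.
    # Velocity is assumed proportional to axon diameter (as stated above), so the
    # required diameter grows linearly with distance. The energy exponent k > 1 is
    # the key assumption; k = 2 here is illustrative, not a measured value.
    TARGET_TIME_MS = 1.0
    K_ENERGY = 2.0

    def required_diameter(path_length_mm, velocity_per_unit_diameter=1.0):
        """Smallest diameter keeping path_length / velocity at the target time."""
        return path_length_mm / (velocity_per_unit_diameter * TARGET_TIME_MS)

    baseline = required_diameter(10)
    for length_mm in (10, 20, 40):
        rel_diameter = required_diameter(length_mm) / baseline
        rel_energy = rel_diameter ** K_ENERGY
        print(f"path {length_mm} mm -> diameter x{rel_diameter:.0f}, energy per axon x{rel_energy:.0f}")
    # Doubling the distance doubles the diameter but (with k = 2) quadruples the
    # energy, so communication costs grow faster than the network itself.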
Dmitri Chklovskii (HHMI Janelia Farm Research Campus) has found that brains have evolved to minimize their wiring costs. So expensive are these long-distance axonal connections that as brains increase in size, the proportion of connections that are long-distance seems to fall. This implies that—for energetic reasons—as brains get larger, a bottleneck is increasingly imposed on long-distance communication.
These same long-distance connections that are so energetically expensive are also probably critical to maximizing the brain’s global computing capacity—that is, intelligence. MRI tractography, which allows large-scale tracing of long-distance axons, bears this out. A spate of studies presented at the Human Brain Mapping meeting in June 2009 in San Francisco reveals “small worldness”—a critical balance between local and long-distance connections—as a predictor of IQ.
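A standard way to quantify small-worldness (not necessarily the method used in those tractography studies) is to compare a network’s clustering and path length against a randomized reference, as in the sketch below; the graph size and rewiring probability are arbitrary.

    # Illustration of a common small-worldness index: high clustering like a
    # lattice, short path lengths like a random graph. Parameters are arbitrary;
    # this is not the analysis pipeline of the Human Brain Mapping studies.
    import networkx as nx

    n, k = 200, 8
    brainlike = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)   # mostly local wiring, a few long-range links
    random_ref = nx.connected_watts_strogatz_graph(n, k, p=1.0, seed=1)  # fully rewired reference

    C, C_rand = nx.average_clustering(brainlike), nx.average_clustering(random_ref)
    L, L_rand = nx.average_shortest_path_length(brainlike), nx.average_shortest_path_length(random_ref)

    sigma = (C / C_rand) / (L / L_rand)   # values well above 1 indicate small-world structure
    print(f"clustering ratio {C / C_rand:.1f}, path-length ratio {L / L_rand:.1f}, sigma {sigma:.1f}")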
Trim some of those long-distance connections to save energy, and you might well decrease the computational power of the brain. “It’s tempting to speculate whether some kind of coherent [neural network] behavior can be lost with fewer connections,” says Vijay Balasubramanian, a physicist at the University of Pennsylvania who also studies brain science and information theory.
To Balasubramanian it makes sense, given all of these considerations, that evolutionary investments in more powerful brains probably incur disproportionately greater costs for each incremental improvement in computing capacity. “I think there is a law of diminishing returns for many reasons,” says Balasubramanian. “It relates to fidelity of communications, energetics of communication, and possibly even the need for heat dissipation [through an increasingly extensive vascular system].” It will not be a hard limit, but rather a soft-edged limit which is approached asymptotically—similar to legislatures becoming progressively less willing to fund supercomputers with $1 billion or $10 billion or $20 billion annual power bills.
One caveat: It’s tempting to equate brain size with intelligence. Not only has this practice produced some horrible results over the last 300 years—it also oversimplifies things. A cow with a 1000-gram brain is hardly smarter than a mouse with a 1-gram brain. In fact, across mammals, brain size grows with body size according to a power law—for the simple reason that larger bodies require more neural housekeeping unrelated to intelligence, such as more skin area to be monitored by tactile nerves, larger eyes and retinas, more muscle mass to innervate, and so on. “What probably matters is how far you are above the line,” says Charles Stevens, a computational neuroscientist at the Salk Institute.
Humans, he says, possess brains that are triple the size they should be based on body mass alone.
It’s possible to deviate from the line imposed by power laws, as humans have, but only at high energetic costs—and only when rare circumstances of evolution and environment provide a selective force and a means for amassing the necessary resources.
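That “distance above the line” idea reduces to a residual from a power-law fit, as in the hedged sketch below; the coefficient and exponent are placeholder values rather than Stevens’s actual regression, so only the structure of the calculation, not the printed number, should be taken literally.

    # Encephalization as a residual from a brain-body power law.
    # coeff and exponent are placeholders, not a published fit; the pitch's claim
    # is that with the appropriate regression humans land roughly 3x above the line.
    def predicted_brain_g(body_kg, coeff=10.0, exponent=0.75):
        """Brain mass expected from body mass under an assumed power law."""
        return coeff * body_kg ** exponent

    def distance_above_line(actual_brain_g, body_kg):
        """Ratio of actual to predicted brain mass -- the quantity that matters."""
        return actual_brain_g / predicted_brain_g(body_kg)

    print(f"human: {distance_above_line(actual_brain_g=1350.0, body_kg=65.0):.1f}x the prediction")
    # The exact value depends entirely on the assumed constants; the point is that
    # intelligence tracks this ratio, not raw brain size.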
Other limits are sometimes invoked as constraints on intelligence. The width of the birth canal (imposed by bipedal walking) is said to limit the size of a newborn’s brain. But thermodynamic constraints and power laws only deepen these limits. Intelligent brains are probably born immature by necessity: learning and experience (and not some one-size-fits-all prenatal development program) are probably what produces intelligence. But another power law limits our acquisition of experience and learning—the length of our lifespan, which scales across vertebrates according to body mass and metabolic rate.
The story I propose is best written by a journalist because it goes beyond the work and thinking of any one scientist. My goal in writing this story is not to lay down a new truth (!), but rather to raise an interesting question—and to prod the scientific community into beginning the conversation that will answer it. A number of people are already studying energetic, noise, spatial, and wiring constraints on brains—but no one has yet asked this more profound question. In weaving my argument I’ll draw not only upon friendly views (Balasubramanian and Laughlin), but also upon the views of people less likely to agree (Stevens). “It’s a very interesting point,” says Balasubramanian. “It would be nice to articulate this point. I think the question deserves to be asked.”