Wednesday, April 1, 2009

(56) Computers vs. Brains

Inventor Ray Kurzweil, in his 2005 futurist manifesto “The Singularity Is Near,” extrapolates current trends in computer technology to conclude that machines will be able to out-think people within a few decades. In his eagerness to salute our robotic overlords, he neglects some key differences between brains and computers that make his prediction unlikely to come true.
Brains have long been compared to the most advanced existing technology — including, at one point, telephone switchboards. Today people talk about brains as if they were a sort of biological computer, with pink mushy “hardware” and “software” generated by life experiences.
However, any comparison with computers misses a messy truth. Because the brain arose through natural selection, it contains layers of systems that arose for one function and were then adopted for another, even though they don't work perfectly. An engineer with time to get it right would have started over, but it is easier for evolution to adapt an old system to a new purpose than to come up with an entirely new structure. Our colleague David Linden has compared the evolutionary history of the brain to the task of building a modern car by adding parts to a 1925 Model T that never stops running. As a result, brains differ from computers in many ways, from their highly efficient use of energy to their tremendous adaptability.

One striking feature of brain tissue is its compactness. In the brain, space is at a premium: the wiring is packed more tightly than even the most condensed computer architecture. One cubic centimeter of human brain tissue, which would fill a thimble, contains 50 million neurons; several hundred miles of axons, the wires over which neurons send signals; and close to a trillion (that's a million million) synapses, the connections between neurons.
The memory capacity in this small volume is potentially immense. Electrical impulses that arrive at a synapse give the recipient neuron a small chemical kick that can vary in size. Variation in synaptic strength is thought to be a means of memory formation. Sam’s lab has shown that synaptic strength flips between extreme high and low states, a flip that is reminiscent of a computer storing a “one” or a “zero” — a single bit of information.
But unlike the connections in a computer, connections between neurons can also form and break, a process that continues throughout life and can store even more information because of the potential for creating new paths for activity. Although we're forced to guess because the neural basis of memory isn't understood at this level, let's say that one movable synapse could store one byte (8 bits) of memory. That thimbleful would then contain 1,000 gigabytes (1 terabyte) of information. A thousand thimblefuls make up a whole brain, giving us a million gigabytes — a petabyte — of information. To put this in perspective, the entire archived contents of the Internet fill just three petabytes.
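To make that arithmetic explicit, here is a minimal back-of-envelope sketch in Python. It uses only the figures quoted above; the one-byte-per-synapse value is, as the text admits, a guess.

```python
# Back-of-envelope version of the capacity estimate above.
# All figures come from the text; one byte per synapse is a guess.

SYNAPSES_PER_CM3 = 1e12   # close to a trillion synapses per cubic centimeter
BYTES_PER_SYNAPSE = 1     # guessed: one movable synapse stores ~one byte
BRAIN_VOLUME_CM3 = 1000   # a whole brain is roughly a thousand "thimblefuls"

thimble_bytes = SYNAPSES_PER_CM3 * BYTES_PER_SYNAPSE
brain_bytes = thimble_bytes * BRAIN_VOLUME_CM3

print(f"One thimbleful: {thimble_bytes / 1e12:.0f} terabyte")  # 1 terabyte
print(f"Whole brain:    {brain_bytes / 1e15:.0f} petabyte")    # 1 petabyte
```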
To show how technology might close this gap, Kurzweil invokes Moore's Law, the principle that for the last four decades engineers have managed to double the capacity of chips (and hard drives) every year or two. If the trend continues, it is possible to guess when a single computer the size of a brain could contain a petabyte: about 2025 to 2030, just 15 or 20 years from now.
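As a sanity check on that date, here is a small sketch of the extrapolation. The starting capacity of roughly one terabyte per device in 2009 is an assumption (about a high-end hard drive of the day), not a figure from the article.

```python
import math

START_YEAR = 2009     # when the column was written
START_BYTES = 1e12    # assumed: ~1 TB per device in 2009 (not from the article)
TARGET_BYTES = 1e15   # one petabyte, the brain estimate above

doublings = math.log2(TARGET_BYTES / START_BYTES)  # ~10 doublings needed

for years_per_doubling in (1.5, 2.0):
    year = START_YEAR + doublings * years_per_doubling
    print(f"Doubling every {years_per_doubling} years -> about {year:.0f}")
# Prints roughly 2024 and 2029, bracketing the article's 2025-2030 window.
```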
This projection overlooks the dark, hot underbelly of Moore’s law: power consumption per chip, which has also exploded since 1985. By 2025, the memory of an artificial brain would use nearly a gigawatt of power, the amount currently consumed by all of Washington, D.C. So brute-force escalation of current computer technology would give us an artificial brain that is far too costly to operate.
Compare this with your brain, which uses about 12 watts, an amount that supports not only memory but all your thought processes. This is less than the energy consumed by a typical refrigerator light, and half the typical needs of a laptop computer. Cutting power consumption by half while increasing computing power many times over is a pretty challenging design standard. As smart as we are, in this sense we are all dim bulbs.
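The size of that gap is easy to understate, so it is worth computing directly from the two figures quoted above:

```python
# The power gap in numbers, using only the article's figures.
ARTIFICIAL_BRAIN_WATTS = 1e9  # ~one gigawatt for the projected 2025 machine
HUMAN_BRAIN_WATTS = 12        # the brain's entire power budget

ratio = ARTIFICIAL_BRAIN_WATTS / HUMAN_BRAIN_WATTS
print(f"The biological brain is ~{ratio:.1e} times more power-efficient")
# -> ~8.3e+07, nearly a hundred-million-fold efficiency gap
```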
A persistent problem in artificial computing is the sensitivity of the system to component failure. Biological synapses, by contrast, are remarkably flaky devices even under normal, healthy conditions, yet the brain works well: synapses release neurotransmitter only a small fraction of the time their parent neuron fires an electrical impulse. This unreliability may arise because individual synapses are so small that they contain barely enough machinery to function, a trade-off that stuffs the most function into the smallest possible space.
In any case, a brain’s success is not measured by its ability to process information in precisely repeatable ways. Instead, it has evolved to guide behaviors that allow us to survive and reproduce, which often requires fast responses to complex situations. As a result, we constantly make approximations and find “good-enough” solutions. This leads to mistakes and biases. We think that when two events occur at the same time, one must have caused the other. We make inaccurate snap judgments such as racial prejudice. We fail to plan rationally for the future, as explored in the field of neuroeconomics.
Still, engineers could learn a thing or two from brain strategies. For example, even the most advanced computers have difficulty telling a dog from a cat, something that can be done at a glance by a toddler — or a cat. We use emotions, the brain’s steersman, to assign value to our experiences and to future possibilities, often allowing us to evaluate potential outcomes efficiently and rapidly when information is uncertain. In general, we bring an extraordinary amount of background information to bear on seemingly simple tasks, allowing us to make inferences that are difficult for machines.
If engineers can understand how to apply these shortcuts and tricks, computer performance could begin to emulate some of the more impressive feats of human brains. However, this route may lead to computers that share our imperfections. This may not be exactly what we want from robot overlords, but it could lead to better “soft” judgments from our computers.
This gets us to the deepest point: why bother building an artificial brain?
As neuroscientists, we’re excited about the potential of using computational models to test our understanding of how the brain works. On the other hand, although it eventually may be possible to design sophisticated computing devices that imitate what we do, the capability to make such a device is already here. All you need is a fertile man and woman with the resources to nurture their child to adulthood. With luck, by 2030 you’ll have a full-grown, college-educated, walking petabyte. A drawback is that it may be difficult to get this computing device to do what you ask.
We’re grateful to Olivia for the opportunity to write these four columns.
