The Computer as Metaphor, Model, and Mind
Martha Farah Probes the Information-Processing Brain

Martha Farah's high-ceilinged office seems an unlikely setting for computer simulations of brain function. The office and lab space is furnished with an old oak desk, dark wood chairs, and glass-front bookcases. The space is softened in spots with frayed oriental rugs and a faded paisley shawl draped over the leather back of a Civil War-era rocking chair. A window looks out across a broad Victorian porch to the pedestrians and traffic on Walnut Street just past Thirty-Eighth.
On the desk, a white phrenology bust stares, expressionless and inscrutable, its cranium mapped into discrete sites once thought to correspond to mental faculties and character traits in the brain. The bust conjures up images of a bearded parlor phrenologist palpating skull contours like a blind man groping in the dark. But the bygone-era feel of the place is belied by the glowing screen of a laptop computer that sits beside a navy-blue textbook, Behavioral Neurology and Neuropsychology, with coeditor Martha Farah's name in gold lettering.
With the help of computer modeling, the Penn psychology professor has challenged a fundamental and long-standing assumption in neuropsychology. Although sophisticated by some measures, the models are grossly primitive in relation to the intricate complexity of the brain's architecture. "If any of my colleagues in neuroscience heard me calling the units in these models 'neurons,' they'd laugh." She gives a little snort as if in agreement. "But to me, the greatest strength of these models is that, with all of their minimalism, they still explain a lot. By just putting a handful of key properties of neurocomputation into their simplified network--a few basic assumptions, a few basic considerations of brain physiology--we get a huge return in terms of cognitive phenomena that we can explain."
In faded jeans and sneakers, and an oversize navy blazer with rolled-up sleeves, Farah does not look like a person who works at a computer all day teaching it how to be a brain. Sitting with crossed legs on the seat of her wooden desk chair, she speaks deliberately and haltingly, with frequent reflective pauses and backtracks. She is uncomfortable with the tape recorder's red eye watching her every word. Her conversation is choreographed by small hand gestures, confined by the radius of her forearms that pivot from the elbows propped on her thighs. On the desk is a photo of Farah, smiling hugely, with three-year-old daughter, Theodora, both sporting party hats decorated with bright hand-made paper flowers.
Curiosity about the mind probably dates back to when the first human animal was startled by an image of itself peering back from a pool of water. But it wasn't until the intricate software and circuitry of the computer emerged that neuroscientists were able to think in terms of information processing as both a metaphor and a model for cognition. "The computer analogy began appearing in journals in the 1960s," she comments. "The advent of computers and people's familiarity with them has had a huge influence on the way psychologists think about the mind."
Farah notes that it is the relatively young experimenters, those who grew up with computers and new information technologies, who are most comfortable with the computer metaphor. "The computer was crucial in helping get the field away from the dominant paradigm before cognitive psychology, which was Skinnerian behaviorism." B. F. Skinner claimed that the internal mental states that transpired between stimulus and response were not directly observable and that scientists couldn't make meaningful psychological generalizations about them. "The computer gave cognitive scientists license to talk about the mind as an information processing system without appealing to any kind of spooky mysticism about the intervening states. The terms in our theories when we talk about recalling something from our memory are no more mysterious and unscientific than when we talk about what's happening in a computer retrieving some item from its memory. Mental states are just that: they're informational states of the brain."
As an undergraduate at MIT, Farah studied metallurgy and philosophy. "On a lark," she took a metallurgy course first semester and found it enjoyable, but because she was attracted to questions about how people came up with theories on the structure of matter and how they solved engineering-design problems, she decided to major in philosophy as well. "Believe it or not, I had no idea what psychology was about. So it seemed to me that a philosophy degree, where I could study epistemology and do a little philosophy and history of physics, was a way of satisfying my vague curiosities about how people created and acquired knowledge."
It was not until the very end of her undergraduate career, however, that she found an academic niche. "I said, 'gosh, there's this field called psychology where they ask the same kinds of questions a lot of my favorite philosophers had been asking,' but they try to answer those questions with an empirical method, which is the method I was most comfortable with and the one I was trained in with my metallurgy background. So at that point, I figured it out and decided to be a psychologist when I grew up." She earned a Ph.D. in experimental psychology from Harvard.
Farah's original academic interests are not as far from her current research as they might at first seem. In both cases, she has sought a nuts-and-bolts explanation of why systems behave as they do. "What I found particularly satisfying about metallurgy was that you get to explain very concrete, very palpable properties of materials and of structures: their hardness, their color, their electrical properties. Why this beam holds up a building and that one fractures is a behavior that can be explained in terms of the materials' microstructure--the world of atoms, molecules, and the rest of it. Within psychology, I'm of the school that believes a lot of the large-scale structure of cognition, the observable features of how people think and behave in everyday life, can be understood by looking at the microstructure of neuro-information processing--the properties of individual neurons and their connectivities."
Cognitive psychologists often use observations of brain-damaged people to draw inferences about the normal mind's "functional architecture"--how the brain's circuitry processes information to yield behaviors, such as recognizing a face, that we take for granted. One assumption that, until recently, informed almost all such research was that the effects of brain damage are local, that the brain's hardware is divided into parts that perform discrete cognitive tasks, a model similar to the cranial grid that overlays the phrenology bust on Farah's desk. Although phrenology was discredited, the "locality assumption" persisted: researchers' inferences about brain structure took for granted that the damaged part of the brain does the thing the brain-damaged person can no longer do.
"There's good reason to believe the brain is a dynamic and highly interactive system," she argues. "You can't always assume that the effect of damage in a complex system like the brain is just to knock out one cognitive ability and to leave all the remaining abilities unaffected." Farah calls the hypothesis that the brain's functional architecture is interactive the "parallel distributed processing framework." "The trick," she points out, "is to think simultaneously about the function of the lost part and the potentially changed functions of the physically intact parts. That's a problem computer simulation of interactive neural networks can help with."
Together with colleagues, Farah creates multi-level, highly interactive computer models that simulate a normally functioning mind, then replicates various cognitive impairments by "damaging"--turning off--parts of the computer network. The models share a number of properties with living brains, including the use of numerous interconnected, mutually interactive units. The connections between the units are "weighted" so that their strength can be high, in which case a lot of activation is passed from one unit to another, or attenuated, so that when one unit is very active, only a little of that activation gets passed along. Another important similarity to real brains is that representation is distributed. Subjectively speaking, a representation is the picture in the mind's eye; objectively, it is the pattern of electrical, information-processing impulses running through the neural network. In the case of face recognition, instead of a single unit whose activation represents the recognition of a particular face--as the locality assumption would have it--the representation consists of activation patterns distributed over a number of units. Many different faces can thus be represented by the same set of units, because each face produces a different pattern of activation across them.
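Farah's actual models are far more elaborate, but the core ideas described above--weighted connections, distributed representations, and "lesioning" by silencing units--can be sketched in a few lines of Python. Everything here (the feature patterns, the Hebbian learning rule, the activation threshold) is purely illustrative, not her model:

```python
# Two "faces," each a distributed pattern of activation over 8 input units,
# each associated with a distributed "name" pattern over 4 output units.
faces = {
    "face_a": [1, 0, 1, 1, 0, 0, 1, 0],
    "face_b": [0, 1, 0, 1, 1, 0, 0, 1],
}
names = {
    "face_a": [1, 0, 1, 0],
    "face_b": [0, 1, 0, 1],
}

n_in, n_out = 8, 4

# Weighted connections: weights[i][j] links input unit i to output unit j.
# Hebbian rule: strengthen a connection whenever both units are co-active.
weights = [[0.0] * n_out for _ in range(n_in)]
for key in faces:
    for i, x in enumerate(faces[key]):
        for j, y in enumerate(names[key]):
            weights[i][j] += x * y

def recognize(pattern, lesioned=()):
    """Propagate activation through the network.

    Units listed in `lesioned` are "damaged": they pass no activation on,
    so recognition must rely on whatever intact units remain.
    """
    out = []
    for j in range(n_out):
        act = sum(pattern[i] * weights[i][j]
                  for i in range(n_in) if i not in lesioned)
        out.append(1 if act >= 2 else 0)  # simple firing threshold
    return out

print(recognize(faces["face_a"]))                 # intact network
print(recognize(faces["face_a"], lesioned={0, 2}))  # partial damage
```

Because each face is spread across many units, silencing a couple of input units still leaves enough of the activation pattern for the network to produce the correct name, while heavier damage degrades performance gradually rather than deleting one face outright--the kind of behavior a single-unit-per-face "locality" model cannot show.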
The locality assumption holds a straightforward and intuitive appeal that, like the earth-centered universe of Ptolemy, fits with experience. But a model of the brain with countless galaxies of connections generating unique and identifiable constellations of neural activity holds an aesthetic more compelling to theorists like Farah. "In some ways," she asserts, "it's simpler to accept the locality assumption than to teach a computer network to operate in this way: to play around with 'lesioning' different parts of the system, see what the output of the network is like under different conditions of damage, and only then draw conclusions. But the conclusions themselves that come at the end of this kind of tortured and computer-intensive process are often simpler ones. To me, as a scientist, that's what you want: a simple hypothesis. The process of getting there is a challenge you've got to rise to."
Farah is cautious about predicting where this work will lead. "New therapies for brain-injured patients, better ways of nurturing and educating normal brains, I can't honestly say they're just around the corner. But I do believe that, until our basic science is clear, any attempts at applied work will be shots in the dark. Right now we're excited about shedding just a little bit of light."
"Psychology doesn't know a lot," she concludes, unfolding her crossed legs, "and that's one of the reasons I like it. It's much more fun to be working in a field where you're really starting from the ground up. There's something kind of grass roots about trying to figure out how the mind works and how the brain works at this point in the history of the field. I find that satisfying."