Parts of the Elephant

There is a story about several wise men fumbling around in the dark, trying to understand the nature of an elephant by each feeling a different part of its body (leg, trunk, etc.). This strikes me as analogous to an approach to understanding the mind that tries to isolate mental functions by mapping them to physical regions of the brain.

Sure, we’ve known for years that regions of the brain are correlated with mental functions like language, vision, and control of distinct parts of the body. And we observe that gross damage to these areas correlates with loss of function. But the observations show many exceptions and edge cases, such as functional compensation after brain damage. An illuminating aspect of brain damage is the continuous (as opposed to discrete) loss of function, which contrasts sharply with damage to human-engineered systems like cars and computers. With technology, generally speaking, if a physical region gets damaged, the function it was serving is totally gone. With biological systems, and especially the brain, function degrades “gracefully”, which is to say, you may be dsylxeic or a pour speeler, but y0u still g3t by just f1ne 99% of the time.

This month’s Wired magazine cover story is titled “What We Don’t Know”, and it briefly discusses 42 conundrums that have long eluded satisfactory understanding. The writeup of “Why Do Placebos Work?” says that nobody knows how the well-documented effect actually operates. It describes a “groundbreaking” experiment using functional MRI:

“When a person knew a painful stimulus was imminent, the brain lit up in the prefrontal cortex, the region used for high-level thinking. When the researchers applied the placebo cream, the prefrontal cortex lit up even brighter….”

My point isn’t to take issue with either the research or even the author, but rather to suggest that if the question is “why do placebos work?” we must first acknowledge that the placebo effect may not be limited to the (rather pedestrian) regulation of pain sensation and directly observable biochemical pathways. Then we have to acknowledge that knowing isolated properties — such as the “lighting up” of physical regions of the brain — tells us very little about what’s really going on. More generally, we need to acknowledge that “placebo” is used to describe any mind-body connection that is not otherwise explained by known physiological mechanisms or experimental control.* The article pointedly notes, for instance, how “studies show that empathy from an authoritative yet caring physician can be deeply therapeutic.” Might reported cases of spontaneous remission of advanced metastatic cancer be due to a grand placebo effect? And if so, what would that mean for our understanding of human physiology and the mind/body connection?

Those familiar with the cognitive sciences might be objecting at this point that I am ignoring all the work in the “neural net” literature, which takes the view of the mind/brain as essentially binary networks of neurons connected by axons, transmitting electrical impulses modulated by thresholds. In this model — and to be sure, there are many variants — functions like memory, language, visual pattern matching and muscular control emerge from the collective network dynamics. While the connectionist paradigm is clearly a step in the right direction (as evidenced by the predictive and descriptive power of artificial neural net models), it is not the magic bullet for understanding the mind** that it was once heralded as being. What about deductive logic, grammar, situational reasoning, personality, consciousness and a whole host of other observed brain functions that have not been adequately explained with any single model, be it functional, connectionist or otherwise? And what of other models that have good — albeit limited — prescriptive and descriptive power, such as Minsky’s “society of mind”, case-based reasoning, evolutionary memetics, and even such passé models as behaviorism? Should we ignore the good parts of these just so we can have a pure, elegant, simple unified theory of mind?***
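To make the connectionist picture concrete, here is a minimal sketch of my own (the weights are hand-wired for illustration rather than learned, and the example is not drawn from any of the models cited): a tiny network of threshold units that computes XOR, plus a “lesioned” variant showing partial rather than total loss of function.

```python
def step(x, threshold=0.5):
    """A toy 'neuron': fire (1) if summed input exceeds the threshold, else 0."""
    return 1 if x > threshold else 0

def xor_net(a, b):
    """XOR computed by a hand-wired two-layer network of threshold units.

    Note that no single unit 'is' XOR; the function emerges from the
    pattern of connections and thresholds.
    """
    h1 = step(a + b, threshold=0.5)          # hidden unit: fires on a OR b
    h2 = step(a + b, threshold=1.5)          # hidden unit: fires on a AND b
    return step(h1 - 2 * h2, threshold=0.5)  # output: OR-but-not-AND, i.e. XOR

def lesioned_xor_net(a, b):
    """The same net with the h2-to-output connection 'damaged' (zeroed out).

    The network degrades to plain OR: still correct on 3 of the 4 inputs,
    a crude analogue of graceful, rather than total, loss of function.
    """
    h1 = step(a + b, threshold=0.5)
    return step(h1, threshold=0.5)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"{a} XOR {b} = {xor_net(a, b)}  (lesioned: {lesioned_xor_net(a, b)})")
```

The point of the lesioned variant is the contrast with human-engineered systems mentioned earlier: knocking out one connection doesn’t erase the function wholesale, it just makes the network a sloppier approximation of it.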

Later in the Wired story, the question is asked, “how does human language evolve?” Glossing over the long-raging debate about how to define language, and ignoring for the moment the (unexamined) premise that it’s somehow evolutionary, the writeup concludes with a telling observation: “The parts of the brain thought to be responsible for language are as well-understood as the rest of the brain, which is to say: not so much.” However, there is some daylight in the form of the computer science researcher Luc Steels, who argues that “language was a cultural breakthrough, like writing.” To lend credence to this hypothesis, Steels reports having built robots without any explicit language module that nonetheless developed grammar and syntax systems on their own. In separate research, neural network computer models have produced overgeneralization errors (followed by self-correction) when learning language constructs, errors eerily similar to those made by human children (e.g. “I bringed the toy to Mommy”).

These sorts of emergent-property models strike me as incredibly compelling lines of inquiry, if only because they tend to do much better on the predictive-power index than reductionist models, at least for the kinds of tough problems we are talking about here. Emergent-property models do have the drawback of seeming “magical” and somewhat impenetrable, so they fare less well on the descriptive-power index. But I believe this is because of the hegemony, until quite recently, of reductionist methodology in Western analytic thought, particularly in math and science. It will take a while before we are comfortable with the thought processes and tools that will let us reason and build better intuitions about complex adaptive systems. We are currently like the wise men in the dark, and in order to really grok the elephant, we need to start sliding up the dimmer switch.

* Which reminds me of my favorite definition of artificial intelligence: an AI problem is any problem which has not yet been solved; once it’s solved, it’s considered an engineering issue.
** Many people, myself included, view deeply understanding the human mind and creating true “artificial intelligence” as flip sides of the same coin.
*** I will argue in a later post that our bias towards simple, single-cause explanations (cf. Occam’s Razor) sometimes blocks us from acknowledging the inherent complexity of the world and achieving better understanding.