Daniel Horowitz just forwarded me an interesting article in which Steve Pinker debates and defends the merits of exploring dangerous ideas even though they may threaten our core values and deeply offend our sensibilities. What struck me as most interesting (and laudable) was Pinker’s willingness to play devil’s advocate to his own argument and suggest that maybe exploring dangerous ideas is too dangerous an idea itself and thus should not be adopted as a practice:
But don’t the demands of rationality always compel us to seek the complete truth? Not necessarily. Rational agents often choose to be ignorant. They may decide not to be in a position where they can receive a threat or be exposed to a sensitive secret. They may choose to avoid being asked an incriminating question, where one answer is damaging, another is dishonest and a failure to answer is grounds for the questioner to assume the worst (hence the Fifth Amendment protection against being forced to testify against oneself). Scientists test drugs in double-blind studies in which they keep themselves from knowing who got the drug and who got the placebo, and they referee manuscripts anonymously for the same reason. Many people rationally choose not to know the gender of their unborn child, or whether they carry a gene for Huntington’s disease, or whether their nominal father is genetically related to them. Perhaps a similar logic would call for keeping socially harmful information out of the public sphere.
Like most people trained in a Western educational system (and especially like most scientifically minded people), I am biased toward the notion that knowledge, sharing of truth, communication, and openness to ideas are all good things for societies and individuals alike, and should therefore be fostered. I have even proposed a market system designed to be a globally trusted mechanism for assessing the truth value of claims and the trustworthiness of claimants. But I am not without my own doubts about the inherent “goodness” of knowledge and truth, having warned of the socially deleterious effects of dangerous media and suggested therein a social/moral responsibility to willingly refrain from propagating it.
In thinking about the problem, I am reminded of the dilemmas and paradoxes for the rational agent when dealing with issues of mutual knowledge vs common knowledge. The distinction between these types of knowledge can be used to explain the existence of a whole host of phenomena involving social networks and information cascades. For instance, a stock market bubble can occur even when we all have mutual knowledge that a stock’s price is much higher than its “intrinsic value”. But as soon as this mutual knowledge becomes common knowledge (for instance, when some bad news is announced to the general public), the bubble is burst and we are in for a “correction”. As long as each of us believes there may be someone out there — the proverbial greater fool — who doesn’t know the stock price is inflated, we are motivated to buy or hold rather than sell, hence driving the price higher or keeping it inflated indefinitely. Once the bad news comes out, we all instantly know that our assumption can no longer hold and that there are no greater fools left, so we rush to sell, triggering a self-reinforcing positive feedback loop of selling (aka a market crash).
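To make the greater-fool logic concrete, here is a toy sketch of my own (an illustrative model, with decision rules I made up for this post, not anything drawn from Pinker’s essay). Each trader privately knows the stock is overpriced, but as long as that fact is merely mutual knowledge, each can still believe a greater fool remains and so chooses to hold. A public announcement makes the overpricing common knowledge, the greater-fool assumption collapses for everyone simultaneously, and all traders sell at once:

```python
def trader_action(knows_overpriced: bool, believes_greater_fool_exists: bool) -> str:
    """One trader's decision rule in this toy model (assumed, for illustration)."""
    if knows_overpriced and believes_greater_fool_exists:
        return "hold"  # someone may still buy at a higher price, so stay in
    return "sell"      # no greater fool left: get out now

def market(num_traders: int, public_announcement: bool) -> list[str]:
    """Every trader privately knows the price is inflated (mutual knowledge).
    A public announcement turns that into common knowledge, so no trader
    can go on believing a greater fool remains."""
    believes_fool = not public_announcement
    return [trader_action(True, believes_fool) for _ in range(num_traders)]

# Before the announcement the bubble persists; after it, everyone rushes out.
print(market(5, public_announcement=False))  # ['hold', 'hold', 'hold', 'hold', 'hold']
print(market(5, public_announcement=True))   # ['sell', 'sell', 'sell', 'sell', 'sell']
```

The point of the sketch is that no trader’s private information changes at the moment of the announcement; only the epistemic status of that information changes, and that alone flips every decision.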
One can easily argue that in institutions like markets, the more common knowledge the better off we are as a group: there’s less market volatility, fewer destructive bubble-crash cycles, less room for corruption, and generally a fairer playing field for everyone. Note that common knowledge is strictly stronger than mutual knowledge: not only do you and I both know X (mutual), but I know that you know X, you know that I know, and so on up through every higher order (common). Thus, in certain social institutions, we can argue that more information, more knowledge, more truth is better than less. The question on my mind is whether there are also cases in which more knowledge is actually worse, not for individuals as Pinker’s quote above suggests, but for society. This would suggest that exploring dangerous ideas may in fact be a dangerous idea after all.
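The hierarchy of knowledge orders is easy to spell out mechanically. Here is a tiny sketch (my own, purely illustrative) that generates the k-th order statement: mutual knowledge is order 1, and common knowledge is the limit of this tower as k grows without bound:

```python
def knowledge_statement(k: int, fact: str = "X") -> str:
    """Build the k-th order knowledge statement about a fact.

    k = 0 is the bare fact; k = 1 is mutual knowledge ("everyone knows X");
    common knowledge is the (unreachable-in-finite-text) limit of all k.
    """
    statement = fact
    for _ in range(k):
        statement = f"everyone knows that ({statement})"
    return statement

print(knowledge_statement(1))  # everyone knows that (X)
print(knowledge_statement(3))  # everyone knows that (everyone knows that (everyone knows that (X)))
```

The bubble example above lives in the gap between order 1 and the full tower: each trader can satisfy order 1 while still doubting some higher order about the others.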
The example of the Farmer’s Dilemma is a telling one. It’s a variant of the Prisoner’s Dilemma and a member of a very important class of social and economic problems loosely understood as the tragedy of the commons. In formulating the generalized notion of the tragedy of the commons, Hardin points out that there exist social, political, and economic problems — very big ones, like over-population, nuclear proliferation, and pollution to name just a few — which have no “technical solution”. Which is to say, more knowledge and greater understanding of the problem won’t by itself lead to a solution. The only way out is to essentially change the rules of the game by agreement, to collude, to cooperate for the common good. In situations like the Farmer’s Dilemma, I would suggest that the common knowledge of each party’s preferences and ability to reason “rationally” and recursively is what makes the situation tragic. If each farmer were limited to the mutual knowledge of an agreement they made to help one another, and were not allowed to delve into the higher-order logic of common knowledge, the tragedy could be averted. Common knowledge can be dangerous, as it tends to erode the foundation of cooperative behavior. The unfortunate corollary is that the “smarter” the agents in a social system, the deeper they can individually reason about common knowledge, the more dangerous that knowledge becomes.
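The unraveling can be shown by backward induction on the two-stage game. Here is a sketch with illustrative payoff numbers of my own choosing (the structure is the standard Farmer’s Dilemma setup; the specific values are assumptions). One farmer’s corn ripens first; the other can help with that harvest today, hoping for reciprocation tomorrow. Under common knowledge of rationality, each farmer reasons through the other’s future self-interest, and cooperation collapses before it can start:

```python
HELP_BENEFIT = 3  # value of receiving help with your harvest (assumed)
HELP_COST = 1     # cost of spending a day helping the other (assumed)

def second_mover_helps() -> bool:
    """Stage 2: the second mover's harvest is already in, so helping now
    is pure cost with no future stage left to be repaid in."""
    return HELP_COST <= 0  # never true for any positive cost: defect

def first_mover_helps() -> bool:
    """Stage 1: the first mover helps only if, foreseeing the stage-2
    reasoning above, reciprocation would make helping worthwhile."""
    expected_return = HELP_BENEFIT if second_mover_helps() else 0
    return expected_return > HELP_COST

print(second_mover_helps())  # False: nothing left to gain tomorrow
print(first_mover_helps())   # False: foreseeing that, no help today either
```

Note what does the damage here: it is precisely the first mover’s ability to simulate the second mover’s reasoning, which common knowledge of rationality licenses, that destroys the mutually beneficial outcome both would prefer.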
So, back to the question at hand: are there ideas that are too dangerous for us as a society to explore? I believe that there are, and I believe that they cluster around the notion of common knowledge. Whether any of the putative dangerous ideas that Pinker lists at the beginning of his essay belong to this class is hard to say. But I think it’s at least good to explore this idea. Or maybe not….