Book Report: Complex Adaptive Systems
I just finished reading Complex Adaptive Systems and thought I’d share some of the passages I underlined and point out how they relate to certain themes and claims in this blog. The organization of these quotes is my own, not necessarily related to the book’s chapter or section headings.
p. 35: We begin with a discussion of the basics of scientific modeling. This topic is so fundamental to the scientific enterprise that it is often assumed to be known by, rather than explicitly taught to, students (with the exception of a high school lecture or two on the “scientific method”). For whatever reasons, learning about modeling is a lot like learning about sex: despite its importance, most people do not want to discuss it, and no matter how much you read about it, it just doesn’t seem the same when you actually get around to doing it.
Being able to see the intellectual tools with which you are working as such, and especially understanding their limitations and biases, is critical for deep understanding. A lot of what I post here harks back to this theme:
- There is No Truth, Only Predictive Power
- Thought as Metaphor
- This Sentence is False
- Parts of the Elephant
- Emergent Causality
Much of what passes for scientific pursuit ignores the fundamental importance of the chosen model and the modeling process itself. And instead of scientific enlightenment we get mathematics in disguise:
p. 71: Many analytic methods provide exact answers that are guaranteed to be true. Alas, all models are approximations at some level, so the fact that, say, a mathematical model gives an exact answer to a set of previously specified approximations may not be all that important.
One underutilized model in particular is going to be very important in advancing our understanding of the universe:
p. 95: Networks may also be important in terms of view. Many models assume that agents are bunched together on the head of a pin, whereas the reality is that most agents exist within a topology of connections to other agents, and such connections may have important influence on behavior.
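The point about topology is easy to see in a toy simulation of my own (not from the book): the very same agents, following the very same rule, can reach different aggregate outcomes depending on who is connected to whom. Here each agent adopts the behavior held by the majority of its neighbors (keeping its own behavior on a tie), first on a complete graph (everyone "bunched together on the head of a pin") and then on a ring.

```python
def step(states, neighbors):
    """Each agent adopts its neighbors' majority behavior (its own on a tie)."""
    new = []
    for i, s in enumerate(states):
        ones = sum(states[j] for j in neighbors[i])
        k = len(neighbors[i])
        new.append(1 if ones * 2 > k else 0 if ones * 2 < k else s)
    return new

states = [1, 1, 1, 1, 0, 0, 0]
n = len(states)

# "Head of a pin": every agent interacts with every other agent.
complete = {i: [j for j in range(n) if j != i] for i in range(n)}
# A ring: each agent interacts only with its two immediate neighbors.
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print(step(states, complete))  # majority sweeps to consensus: [1, 1, 1, 1, 1, 1, 1]
print(step(states, ring))      # local minority persists: [1, 1, 1, 1, 0, 0, 0]
```

On the complete graph the global majority sweeps the population in one step; on the ring, the minority bloc survives indefinitely because each of its members only ever hears from its own kind.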
p. 234: The path of the glider can be predicted without resorting to the microlevel rules. Thus, in a well-defined statistical sense, it requires less information to predict the path of the glider by thinking of it as a “thing” than it does to look at the underlying parts. In this sense, the glider has emerged (Crutchfield, 1994).
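For readers who haven't met the glider: it is a five-cell pattern in Conway's Game of Life that reassembles itself one cell farther along a diagonal every four generations. A minimal sketch (my own code, not the book's) shows that the macro-level description "the same shape, shifted one cell diagonally per four steps" predicts exactly what the micro-level rules produce:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life; `cells` is a set of live (row, col)."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in cells
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is live next generation with 3 live neighbors,
    # or 2 if it is already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# The standard glider, oriented to travel down and to the right.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):              # four applications of the micro-level rules...
    state = life_step(state)

# ...reproduce the macro-level prediction: the same "thing", one cell
# diagonally onward.
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

Treating the glider as a thing with a velocity compresses away the micro-level bookkeeping entirely, which is the sense in which it has emerged.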
p. 28: Another important question is how robust are social systems. Take a typical organization, whether it be a local bar or a multinational corporation. More often than not, the essential culture of that organization retains a remarkable amount of consistency over long periods of time, even though the underlying cast of characters is constantly changing and new outside forces are continually introduced. We see a similar effect in the human body: typical cells are replaced on scales of months, yet individuals retain a very consistent and coherent form across decades. Despite a wide variety of both internal and external forces, somehow the decentralized system controlling the trillions of ever changing cells in your body allows you to be easily recognized by someone you have not seen in twenty years. What is it that allows these systems to sustain such productive, aggregate patterns through so much change?
While the answer to this question is complex, my (overly) simplistic model is that agents at level 2 emerge from autocatalysis and cooperation of level 1 agents. Over time, natural selection “solidifies” level 2 agency (and constrains level 1 interactions), making these aggregate patterns clearer and more robust. Eventually level 2 agents yield level 3 agents, and so on. Natural selection and emergence go hand in hand, and one does not typically operate without the other in nature.
p. 200: Organizations are able to circumvent a variety of agent limitations. Some organizations are useful because they can aggregate existing characteristics of agents, such as when tug-of-war teams combine each member’s strength or schools of fish confuse predators by forming a much larger and more dynamically shaped “individual”. At other times, the value of an organization comes through internalizing external benefits, such as flocks of geese (or schools of fish for that matter) having an easier time moving by using vortices created by other members of the group. Organizations can also allow agents to exploit specialization and circumvent other innate limitations, such as the ability to acquire or access incoming information, or individual bounds on processing the information once it is acquired.
p. 95: …the most interesting results come about when the outcome of the model is, at some level, at odds with the induced motivations of the agents — to use Schelling’s terms, when the micromotives and macrobehavior fail to align. Thus, it is far more interesting to see cooperative behavior emerge when the agents are self-interested than when the agents are presumed to be altruistic, or to see agents aggregate into cities when their goal is to be left alone.
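Schelling's classic illustration of this mismatch is residential segregation, and a stripped-down one-dimensional version of it is easy to sketch (this is my own toy, not the book's model, and the specific numbers are my choices): each agent is content with as few as one third of its neighbors being its own type, yet when the discontented relocate, the population typically ends up far more clustered than any individual demanded.

```python
import random

random.seed(1)  # arbitrary; the qualitative outcome does not depend on it

agents = ['A'] * 50 + ['B'] * 50  # two types on a ring of 100 sites
random.shuffle(agents)

def similarity(i, agents):
    """Fraction of agent i's four nearest ring neighbors sharing its type."""
    n = len(agents)
    return sum(agents[(i + d) % n] == agents[i] for d in (-2, -1, 1, 2)) / 4

def mean_similarity(agents):
    return sum(similarity(i, agents) for i in range(len(agents))) / len(agents)

before = mean_similarity(agents)

# Micromotive: an agent is content with just 1/3 same-type neighbors.
# Discontented agents relocate randomly until everyone is content (or we give up).
for _ in range(5000):
    unhappy = [i for i in range(len(agents)) if similarity(i, agents) < 1 / 3]
    if not unhappy:
        break
    mover = agents.pop(random.choice(unhappy))
    agents.insert(random.randrange(len(agents) + 1), mover)

# Macrobehavior: clustering typically settles well above what any
# individual agent asked for.
print(f"mean same-type neighbor share: {before:.2f} -> {mean_similarity(agents):.2f}")
```

The punch line is in the gap between the two numbers: mild individual tolerance, immoderate aggregate segregation.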
Indeed, it is most interesting when cooperation emerges in the presence of competition, as well as when tragedies of the commons emerge despite the best intentions of the lower-level agents. I am particularly concerned about what the latter implies for the long-term viability of our individual human well-being (however you might want to define that) as the vitality of organizational levels above us — cultures, governments, corporations, belief systems, et al. — becomes misaligned with our own.
p. 198: In general, communication is capable of productively altering the interactions in a social system for a few key reasons. First, communication expands the behavioral repertoire of agents, allowing new and potentially productive forms of interaction to prevail. With communication, agents can create new actions that allow them to escape the previous behavioral bounds. The greater the potential of communication, proxied in our discussion by processing ability and tokens, the more possibilities emerge. Second, communication emerges as a mechanism that allows an agent to differentiate “self” from “other”. In the worlds we have explored, agents would like to cooperate in the case of the Prisoner’s Dilemma and hunt stag in the case of the Stag Hunt, but the presence (and inherent incentives) of defectors is an ever present danger to adopting such behavior. Communication emerges as a way either to signal a willingness to be nice or to detect meanness. In these systems, this occurs when a fortuitous mutation gives an agent the ability to “speak” and to respond positively to such communication while avoiding harm from those agents that say nothing. By detecting “self” in such a way, the agent can improve its performance even in nasty worlds.
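The "speak and respond positively" mechanism can be sketched in a few lines (my own illustration, not the book's simulation; the payoff values are the conventional Prisoner's Dilemma numbers, chosen by me): "speakers" cooperate with other speakers and defect against silent agents, so they avoid being exploited while capturing the gains of mutual cooperation among themselves.

```python
# One-shot Prisoner's Dilemma payoffs for (my move, opponent's move):
# mutual cooperation beats mutual defection, but unilateral defection pays best.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def move(me, opponent):
    """Speakers cooperate only with other speakers; everyone else defects."""
    return 'C' if me == 'speaker' and opponent == 'speaker' else 'D'

def average_payoff(kind, population):
    """Mean payoff of one `kind` agent paired against every other agent."""
    others = list(population)
    others.remove(kind)          # don't play against yourself
    total = sum(PAYOFF[(move(kind, other), move(other, kind))]
                for other in others)
    return total / len(others)

population = ['speaker'] * 3 + ['silent'] * 7

# Speakers earn 3 against each other and 1 against the silent;
# the silent earn 1 against everyone. The "handshake" pays off as soon
# as at least one other speaker exists.
print(average_payoff('speaker', population), average_payoff('silent', population))
```

This is why the fortuitous "speaking" mutation can spread: it converts a nasty world into a cooperative one for those who share the signal, without ever exposing them to exploitation by those who don't.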