Mechanisms of Agency

The following is a non-exhaustive catalog. Note that these mechanisms are in fact emergent properties of the system under study, a fact which has some fairly profound consequences when we consider the lowest known levels of physical systems. For more theoretical background, read Ervin Laszlo’s chapter, Aspects of Information, in Evolution of Information Processing Systems (EIPS).

Stasis

The most trivial form of stability we can think of is an agent existing in the same place over time without change. This may only make sense as you read on, so don’t get caught up here.

Movement

Keeping time in the equation but allowing physical location to vary, we see that agents can move and continue to exist and be recognized as the “same”. This is obvious in the physical world we live in, but consider what is going on with gliders in the Game of Life. The analogy is more than a loose one, since cellular automata are network topologies which mirror physical space in one or two dimensions. Contrast this with other network topologies, such as the brain, whose state space has many more than two dimensions.
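To see this concretely, here is a minimal sketch of the standard Game of Life rules (birth on exactly three live neighbors, survival on two or three); the glider coordinates are arbitrary:

```python
# One Game of Life generation; `live` is a set of (x, y) live cells.
from collections import Counter

def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# Four generations later the same shape reappears one cell away:
# it has moved, yet we recognize it as the "same" agent.
assert cells == {(x + 1, y + 1) for (x, y) in glider}
```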


Metastability

Gliders also exhibit a form of metastability in addition to movement. What is meant by this is that the structure of the glider goes through a cycle of distinct states (4 to be exact) and arrives back at the original state it started in. Other structures in the Game of Life are like this too, including oscillators, which cycle like the glider but don’t move on the grid. Other examples of metastability include equilibrium in dynamical systems, e.g. population genetics, financial markets, electromagnetic fields, etc. In mathematical terms, metastability can be characterized by an attractor or basin of attraction. Quite literally, a mountain lake with an incoming stream and an outgoing stream is a basin of attraction, with the lake itself being a metastable structure; the constituent water molecules continually flow into, around and out of the lake, yet we recognize the lake as existing separately from the water molecules, as an emergent structure which is stable as long as the rate of flow in matches the rate of flow out.
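The lake analogy can be made precise with a toy flow model; all constants here are illustrative, not measured:

```python
# Toy model of the mountain-lake attractor: inflow is constant while
# outflow grows with volume, so the volume settles to a fixed point
# no matter where it starts.
def settle(volume, inflow=10.0, drain_rate=0.1, steps=200):
    for _ in range(steps):
        volume += inflow - drain_rate * volume  # net flow this time step
    return volume

# Very different starting volumes converge to the same attractor,
# inflow / drain_rate = 100: the "lake" persists while its water changes.
print(settle(5.0), settle(500.0))  # both print ~100.0
```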

Depending on how broadly you define the “meta” part, everything we are talking about in this post is a form of metastability. For instance, “metastatic cancer” refers to the notion that phenomena seemingly distant and distinct from the primary tumor are actually part of the same cancerous process that is responsible for the primary tumor. Even though parts of the cancerous system are in motion, taken as a whole the cancer is in a metastable state, i.e. it continues its existence as an observable, distinct system.

Self-Repair

One of the most prevalent and important mechanisms for agent stability in biological systems is self-repair: wound-healing, immune systems, error-correction in DNA/RNA transcription and synthesis, and others. Autocatalytic chemical sets, when viewed as agents, exhibit a very pure form of self-repair in that they are continuously regenerating their own constituent parts through catalysis. Social agents such as the modern firm or governmental agencies have mechanisms built into their formation documents which call for and bring about the replacement of functional members who quit, get fired or end their term (e.g. president, janitor, HR manager, etc.).

Self-Similarity

Fractals are “self-similar” structures, meaning that if you look at them at any level of magnification they look similar. Self-similarity is a form of agency, but it may not be intuitively obvious how. Consider the classic Russian dolls, where inside one you find another identical one (except for size), and inside that another, and so on. By opening the outer layer and destroying it, you are left with a very similar system with all the same properties as before, except smaller and with one fewer doll. Even though you destroyed a part of the agent, in one sense the agent still continues to exist. Self-similarity is found everywhere in nature, from galaxies to solar systems, plant structures, nautilus shells, crystals, and so on.
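The doll-within-doll property can be checked directly on a simple fractal. Here is a sketch using the Cantor set; exact fractions keep the comparison honest:

```python
# Each Cantor step replaces every interval with two copies one-third
# its size; exact arithmetic lets us test part-equals-whole directly.
from fractions import Fraction

def refine(intervals):
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out += [(a, a + third), (b - third, b)]  # keep the outer thirds
    return out

unit = [(Fraction(0), Fraction(1))]
level2 = refine(refine(unit))
level3 = refine(level2)

# "Opening the outer doll": the left third of level 3, magnified 3x,
# is exactly level 2. The structure repeats at every scale.
left = [(a, b) for a, b in level3 if b <= Fraction(1, 3)]
assert [(3 * a, 3 * b) for a, b in left] == level2
```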

Reproduction

At first blush it may not seem obvious how reproduction (both asexual and sexual) can be thought of as yielding agency. However, consider a single-cell organism such as a bacterium. It has a particular structure, including a unique genetic code.* The cell divides, creating a “daughter” cell, and now suppose that shortly thereafter the parent cell dies. Now imagine a continued progression of this process creating a chain of descent from parent to child, to grandchild, and so on. Now consider the chain itself as a system, an agent if you will. That agent survives, in a remarkably stable form, through the mechanism of reproduction.
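A toy simulation makes the point, with made-up lifespan and division numbers: each individual is short-lived, but the lineage, viewed as one agent, persists far longer.

```python
# Each cell lives a short, fixed time but usually divides before dying,
# so the chain of descent outlives any individual by a wide margin.
import random

LIFESPAN = 3           # ticks an individual survives (arbitrary)
DIVISION_CHANCE = 0.9  # chance of leaving a daughter before dying

def lineage_duration(max_ticks=10_000):
    ticks = 0
    while ticks < max_ticks:
        ticks += LIFESPAN                      # parent lives, then dies...
        if random.random() > DIVISION_CHANCE:
            break                              # ...without a daughter
    return ticks

runs = [lineage_duration() for _ in range(1000)]
print(sum(runs) / len(runs))  # ~30 ticks: ten individual lifetimes
```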

You may object: the chain isn’t an agent, its constituent parts are. Yet this objection is a critical fallacy, one that betrays the limitations of a reductionist-only model. The chain is a system with inputs, outputs and internal structure, just like the cells themselves are (but at a lower level). Agency is a model, a subset of a more general model: that of a system. The notion of the cell itself is just a model that helps us simplify the description and our understanding of a very complex set of structures and dynamics that occur between biochemical molecules in a repeatable and partially predictable way.

Looking at things from this perspective, we see that populations of individual organisms (species, phyla, sub-populations within species, etc) are agents which use reproduction as an essential stability mechanism.

Representation & Prediction

Laszlo points out (EIPS, p.65): “Any given system tends to map its environment, including the environing systems, into its own structure.” Daniel Dennett claims that the fundamental purpose of the brain — to choose a clear example — is to “produce future”. Which is to say the brain allows the organism qua agent it resides in to predict the future state of the world, both independent of the organism and also taking into account the actions and plans of the organism itself. For instance, a cat takes in visual and olfactory information and over time forms a mental map of your house and where the food is. When hungry, the cat’s brain predicts that by moving to the food’s location on the map it will find actual food. Sometimes this prediction comes true, and other times it does not (as when the food bowl is empty, or has been moved to another location).

It is critically unimportant whether we consider the prediction and mapping to be a conscious or even intentional activity. We may consider systems where there is clearly “just” a stimulus-response process going on, such as in the immune system.** Yet it is clear that the immune system works by creating a mapping of the environment of pathogens and “good” entities (i.e. your own cells). It can also be considered to predict future states of its environment; namely, it works on the premise that where there is one pathogen there are likely many more of the same kind. Don’t get hung up on the intentionality-laden subtext of words like “predict” and “should”. Instead, consider them from a purely functional perspective. There needn’t be a controlling consciousness or designer involved for representation or prediction to occur in agents.
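In this functional spirit, representation is just stored structure and prediction is just acting on it. A deliberately trivial sketch (the locations and map are invented):

```python
# The agent's internal map is structure built from past experience;
# "prediction" is a lookup against it. The world may have changed
# since the map was formed, so predictions can fail.
internal_map = {"food": "kitchen"}             # learned from experience
world = {"kitchen": None, "hallway": "food"}   # the bowl was moved!

predicted_spot = internal_map["food"]
found = world[predicted_spot] == "food"
print(f"went to the {predicted_spot}; prediction held: {found}")
```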

An even more subtle point is the irrelevance of whether mapping and prediction happen “on-the-fly” as the agent interacts with the environment, or whether these features are “baked into the system” by an evolutionary process or a design process. In designed systems, such as computers, it is easy to see that both representation and prediction can occur by being baked into the system via hardware or software. In certain computer systems, mapping and prediction happen on-the-fly as well, such as in the Roomba robot vacuum that cleans your house. We’ve discussed on-the-fly mapping and prediction w.r.t. two evolutionary systems, the brain and the (adaptive) immune system.

There is of course baked-in mapping and prediction in most evolutionarily produced agents. DNA is just one example. Encoded in DNA is an implicit map of the environment in which proteins will get synthesized, as well as a mapping of the environment in which the organism (once formed) will find itself. In general it is not easy (nor is it relevant) to distinguish between representation and prediction; they are two sides of the same coin, and one does not make complete sense without the other.

Informational Feedback

Feedback loops of information are ubiquitous in complex adaptive systems. It is thought that the complexity in “complex” systems derives from informational feedback. However, not all feedback leads to increased complexity. For instance, in so-called “negative feedback” loops, the result is often equilibrium (a form of stability) or the destruction of the system itself. In “positive feedback” loops, growth often occurs, but runaway growth can lead to system instability and destruction as well. With subtly positive and subtly negative feedback, we often see complex (and chaotic) systems behaviors, including systems in states described as “far from equilibrium”. Such systems can be quite stable and resilient, or they can exist on the “edge of chaos” and easily be tipped into self-destruction or what I call an autocatalytic unwinding.
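A minimal sketch of these regimes, with made-up gains; the point is only how the sign and shape of the feedback determine the outcome:

```python
# The same update rule yields equilibrium or runaway growth depending
# on whether output damps or amplifies the next input.
def linear(gain, x=1.0, steps=20):
    for _ in range(steps):
        x += gain * x               # feed a fraction of output back in
    return x

print(linear(-0.5))  # negative feedback: decays toward equilibrium at 0
print(linear(+0.5))  # positive feedback: exponential runaway (~3325x)

# Nonlinear feedback (the logistic map) can hold a system in bounded,
# never-settling motion: "far from equilibrium" yet persistent.
x = 0.2
for _ in range(20):
    x = 3.9 * x * (1 - x)           # r = 3.9 lies in the chaotic regime
print(x)                            # stays in (0, 1) but never repeats
```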

Planning & Intentionality

These stability mechanisms are the ones most accessible and understandable to us humans, as we employ them on a regular basis and can most easily reflect on them. I don’t need to say much here except to point out that consciousness (in the traditional sense) does not necessarily need to be involved. All mammals and reptiles do some sort of planning in their daily activity, whether it be to catch prey, build nests, or move from one location to another. The neurological mechanisms that achieve planning are unimportant, except to say that they involve more than simple stimulus-response; they require some form of working memory as well. Intentionality is just another way of distinguishing the activity of planning from more automatic-seeming activities.

Self-Consistency

As information feeds back within agents, potential always exists for internal conflict. Psychological phenomena such as cognitive dissonance and many forms of pathology are usually modeled as internal conflict and the attempted resolution thereof. Self-consistency is the opposite of internal conflict, a form of harmony or coherence. At the physical level, waves (in the ocean, or electromagnetic) can harmonize and thereby combine energies, preserving structure, or they can conflict and cancel each other out. Coherence at the quantum level refers to the same sort of self-consistency. Belief systems, as a set of memes, can be more or less self-consistent (from a logical perspective anyway), and thus be more or less resilient in the face of logical attack. One may note that Catholicism, and other ancient religions as practiced today, are often attacked not on the question of their a priori credibility but rather on their seemingly contradictory tenets (memes). Absent an external arbiter or definition, “truth” is simply the self-consistency of the logical consequences of a set of assumptions/memes. Once inconsistency has been found, truth-value (at least in formal systems) is destroyed. Even in informal systems, such as the scientific community, inconsistency tends to lead to system breakdown, though not as fast or as thoroughly as most scientists would like to think.
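In the formal-systems sense, self-consistency is just satisfiability, which can be checked mechanically. A sketch with two invented “belief systems” encoded as boolean constraints:

```python
# A belief system is self-consistent iff some assignment of truth
# values satisfies every constraint simultaneously.
from itertools import product

belief_systems = {
    "consistent": [
        lambda a, b: a or b,      # "at least one of A, B holds"
        lambda a, b: not a or b,  # "A implies B"
    ],
    "contradictory": [
        lambda a, b: a,           # "A holds"
        lambda a, b: not a,       # "A does not hold"
    ],
}

for name, constraints in belief_systems.items():
    satisfiable = any(all(c(a, b) for c in constraints)
                      for a, b in product([True, False], repeat=2))
    print(name, "->", "self-consistent" if satisfiable else "inconsistent")
```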

From the standpoint of agent/system stability, individual cultures persist and thrive through interlocking, self-consistent and self-reinforcing shared beliefs and values (i.e. a set of memes). In America, some of the more prominent memes are “Independence is a virtue”, “Individual freedoms and rights are paramount”, “There exists a single, omnipotent, omniscient God”, “Truth is knowable by us humans”, “Justice will prevail in this lifetime”, and so on. Contrast this with cultures where individualism is less important than group consensus, personal honor is paramount, monotheism is not the norm, mysticism is valued, judgment can only be expected in the afterlife, etc. It’s clear that some sets of memes are more self-consistent or self-reinforcing than others, and also that individual memes are not all treated equally within a culture. Cultures evolve over time, new memes are created to resolve memetic conflicts, old memes are subjugated or reviled. Culture is passed down from generation to generation, with modification/mutation. Cultures interact with one another, they clash (e.g. the so-called “war on terror”) and they become hybrid (as in the “melting pot” or “salad bowl”). There are also sub-cultures just as there are sub-populations.

Competition

Agents compete with one another when there are fewer resources available than the population of agents needs as a whole. Resources are defined as anything, material (such as food) or immaterial (such as attention from others), that has an impact on future existence or stability. Sometimes resources are not limited, or are renewable, and in those cases competition does not benefit the original agent (at least not directly). To engage in competition under such circumstances is a waste of energy that could be deployed elsewhere. On the other hand, competing and winning a set of resources that are not scarce can lead to a future in which the agent does not have to compete for other resources. The most obvious example is if one agent destroys (i.e. kills) another: all future competition is obviated (for instance, competition for limited food).

Cooperation

Cooperation comes in a number of forms (e.g. symbiosis, parasitism, tacit agreements, explicit agreements, altruism, etc.). In cooperation, two or more agents interact with one another in such a way as to make at least one of them “better off” than before, if not all. In altruistic behavior, an agent makes itself worse off so that another agent may do even better; however, over multiple iterations of altruism, individual agents can do better than if they compete. Much has been written about cooperation and competition in the context of game theory and the Prisoner’s Dilemma, so I won’t belabor it here.
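For the record, here is the standard result in miniature, using the canonical Prisoner’s Dilemma payoffs (3 for mutual cooperation, 1 for mutual defection, 5/0 for exploitation):

```python
# Over repeated rounds, two cooperators outscore two defectors, even
# though defection wins any single round.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```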

Consciousness

Like porn, it’s hard to define but we know it when we see it. I will develop a theory of mind more fully in this blog as time goes on, but for now I will just point out that whatever consciousness is, we recognize it as something different from other activities of the brain and nervous system. In the language of this blog, consciousness is an emergent phenomenon, a level or so above activities like planning, pattern matching, and autonomic response. Laszlo suggests that consciousness is a “limiting case” of an informational feedback process between progressively higher levels; in other words, it’s “the final output of the internal analysis of internal analyses.” (EIPS, p.67)

One hallmark of consciousness is an awareness of “self” as an entity that is aware of many things, including being aware of being aware, etc. When combined with prediction and planning, we can see how this self-referential structure adds value from the perspective of system stability and continuity. Not only can humans predict the future of their environment, but they can also predict the future “self”, which of course enables more accurate and stronger predictions of the environment with which the self is interacting. I know that if I eat this whole cake, it will taste yummy and I will feel good for a short while. But I also know that I will feel really bad later, because of the massive sugar content and its effect on my digestion, and also because I will deprive my loved ones of the pleasure of cake. On the other hand, my dog is in the corner with a guilty look on his face, clearly scheming (aka planning) on how to approach and eat the entire cake without my noticing and punishing him.

Whether you view my dog in this scenario as exhibiting consciousness or not is beside the point, as is our anthropocentric need to qualify human consciousness as entirely distinct from other phenomena in the world. The point is that consciousness as a mechanism for agent stability is distinct from the less complex, lower-level mechanisms from which it emerges.

Upward Bolstering & Downward Constraint

Agents emerge from lower-level agents interacting with one another. So it logically follows that if all the lower-level agents are destroyed, or their interactive dynamics are modified sufficiently, the higher-level agent(s) would cease to exist in the former case or become unstable in the latter. Thus agent stability is a function of (though not completely dependent on) the lower level from which the agent emerges. In the example of the lake given above, evaporation of water would lead (if unchecked) to the lake’s destruction. Similarly, cancer is (in part) instability of the cellular structure, which is (in part) due to corresponding instabilities at lower levels (e.g. genetic, genomic, and more).

[Figure: Laszlo diagram of levels and stability, from The Chaos Point. Reproduced with permission from the author; handwritten notes added by me.]

On the flip side, agents are constrained by the levels which emerge above them. For instance, when multi-cellular life emerged from colonies of symbiotic single-celled organisms, some of the mechanisms that led to stability of the single cells (such as reproduction and motility) were destabilizing to the colony itself. In order for the higher-level agent to survive and become more stable, it “found” (through evolution) mechanisms to curb or offset destructive amounts of reproduction (cf. apoptosis) and destructive motility (cf. cellular lattice tissue structures). A simpler example which everyone can appreciate is the motion-stabilizing effect that ice — an agent which emerges under certain temperature/pressure conditions — has on its lower-level agents, water molecules.

CANCER My current view of cancer is that it involves an unshackling of the stabilizing influences from both below the cellular tissue level (as in genetic and genomic instability) and, perhaps as importantly, from above (as in exposure to mutagens, or a compromised or inefficient immune response). This somewhat heretical view*** has logical implications not fully appreciated even by those who understand the core concepts. Chiefly, the vast majority of approaches to curing cancer are misguided at best, and actually accelerate mortality in some cases. Additionally, as argued by Henry Heng, the scientific community, and reductionist philosophy in particular, have been (understandably) completely and utterly blind to an obvious conclusion: that the levels above cancer — and the agents at the same level — constitute a “cancer environment” that is extremely important if we are ever going to “cure” cancer. In other words, curing cancer can in part be accomplished by preventing its outbreak/emergence in the first place. And finally, approaches which do not acknowledge the downward constraints imposed from level to level may be wholesale doomed to failure. Much more will be said about this in future posts.

SOCIO-TECHNOLOGY That technology stabilizes itself and helps stabilize socio-technical systems is a claim that many would take exception to, arguing for the entirely new existential threats posed by the advent of nuclear and biological weapons, to name just two. However, I will argue that such a view takes too narrow a definition of stability. My claim is that despite the new “variance” in socio-technical stability, the tendency (aka “expected value”) is towards stability. As socio-technical systems evolve, if they don’t destroy themselves, they become more stable, buttressed by the human level below and whatever emergent levels are to come above. More on this in future posts as well.

* Remember, point-mutations are the rule, not the exception, so it is very likely that any two bacterial cells chosen at random have slightly different DNA sequences. This heterogeneity is a precondition for natural selection to occur.

** Note that there are two basic types of immune systems, adaptive (found only in jawed vertebrates) and innate (found in nearly all forms of life). The differences do not matter for the argument at hand.

*** Though I should note there is a substantial, if unorganized group of researchers who share this view, including Arny Glazier, Henry Heng, Richard Somiari, Albert Kovatich, Carlo Maley and many others versed in complexity theory.

Related posts:

  1. Agency
  2. Cultural Agency
  3. Types of Emergence
  4. Response to "Superorganism Considered Harmful"