One useful model of how information flows between elements of a level is the mathematical abstraction of a graph, more plainly known as a network. The internet is our current exemplar, but most levels of organization are amenable to network modeling of information flow: social networks, biochemical pathways, molecular lattices (aka crystals), etc. The basic components of a network are nodes (i.e. the elements/agents of the level of choice in the previous post) and links from one node to another. The links in our abstract model define which nodes can pass messages to (aka share information with) which other nodes. If we wish to have a complete understanding of information flow using network modeling, there has to be a link between every pair of nodes/agents that might interact in some way.*
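The abstraction above can be sketched in a few lines of code. This is a minimal illustration, not any particular library's API: nodes are the agents, and a link must exist before one node can share information with another (the node names are made up for the example).

```python
# Nodes are the elements/agents; the adjacency list defines which
# nodes can pass messages to which others.
network = {
    "A": ["B", "C"],   # A can share information with B and C
    "B": ["A"],        # B can only message A
    "C": ["A", "B"],
}

def can_message(network, sender, receiver):
    """Information can flow from sender to receiver only if a link exists."""
    return receiver in network.get(sender, [])

print(can_message(network, "A", "B"))  # True: a link exists
print(can_message(network, "B", "C"))  # False: no link, no information flow
```

Note that links here are directed (B can reach A, but not C); an undirected network is just the special case where every link appears in both directions.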
Typically when we model a system using networks we are modeling a static structure, meaning we are modeling not the actual dynamics of message passing but rather the potential for message passing. This is an important distinction that often gets lost and leads to confusion. Modeling the actual network dynamics — how information flows through the network over time — is left for another post. Suffice it to say, without understanding the network dynamics, we only get a partial understanding of the system.**
Depending on the actual system of study, more or less can be gleaned from a static network representation of the system. In the case of molecular lattices (such as ice), most of what we care about can be understood simply by looking at the network structure and ignoring the dynamics. The links in a network that models ice define/model a physical neighborhood in space wherein water molecules bond with each other; if you are a node, then you are linked to (aka bonded with) your nearest neighbors. Not only does the lattice-shaped network give us great explanatory power (it’s easy to “see” what’s going on), it also gives us good predictive power. For instance, we can analyze the structural integrity of ice by examining the network structure, and we can predict where it is weak and most likely to cleave.
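To make this concrete, here is a toy sketch — assuming a 2D square grid as a stand-in for the real 3D hydrogen-bond network of ice. Each molecule bonds with its nearest spatial neighbors, and the static structure alone already hints at weakness: sites with fewer bonds (corners, surfaces, defects) are where you'd expect cleavage to begin.

```python
def square_lattice(rows, cols):
    """Adjacency list for a grid: each site bonds to up/down/left/right neighbors."""
    links = {}
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nbrs.append((nr, nc))
            links[(r, c)] = nbrs
    return links

lattice = square_lattice(3, 3)
# Coordination number = bonds per site: interior sites have 4,
# edge sites 3, corner sites only 2 — the structurally weakest spots.
weakest = min(lattice, key=lambda site: len(lattice[site]))
print(len(lattice[(1, 1)]))   # interior site: 4 bonds
print(len(lattice[weakest]))  # a corner site: 2 bonds
```

The point is that no dynamics were simulated at all — counting links in the static structure was enough to say something predictive.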
But what happens when ice turns into water? All of a sudden we see that the network structure itself can transform quite rapidly as molecules move from one location to another, and what was once your neighbor in “network space” is now a distant relative, and vice versa. Even if we were able to somehow model liquid water by keeping track of each molecule’s changing location relative to one another, we are still stuck with the problem of how to define the information flow. When it was ice, it was easy: each molecule could be thought of as being in a small set of precise configurations (tightly packed) with its neighbors on the lattice. But with liquid water, the distances in space can vary significantly for each nearest neighbor due to the non-spherical structure of water molecules, and depending on how you define distance. Is it measured from center of mass? From the nucleus of the lone oxygen atom?
The difficulty in applying the network model, which works well for ice, to a (literally) more fluid environment is not just an isolated problem; it’s endemic to all scientific pursuit and ultimately to our understanding of the world. When models break down in explanatory and predictive power so thoroughly — as they do in trying to apply network theory to liquid — we have to find a different model to gain any kind of understanding of what’s really going on. Hence fluid dynamics for liquids, Brownian motion for gases, etc. Where we have lost our way is when our models continue to have some (or worse, a great deal of) explanatory or predictive power, and we are loath to throw the baby out with the bathwater. Einstein famously threw out the baby of constant time with the bathwater of mutable space to arrive at a deeper understanding (better explanation, better prediction) with a model that held the speed of light constant and allowed space and time to change as needed. Darwin did the same by positing that species are not necessarily distinct forms of life and that they evolve more continuously through a process of heredity, mutation and selection. We need to always keep in mind these lessons in breakthrough thinking: just because our favorite tool is a hammer, doesn’t mean that every problem is a nail.
Notwithstanding the above caveat, there has been a lot of interest*** and progress recently in looking at complex adaptive systems through the lens of the network. Not surprising, given that the internet provides us an ever-present and fecund playground with which to model not only itself but other systems. So for instance, we know some general equations that help describe “naturally forming”**** networks that characterize the number of links between nodes as a power law; small numbers of nodes accumulate large numbers of links and vice versa in a process termed “preferential attachment”. This model is extremely useful in helping explain the ubiquitous phenomenon that can be summed up as “the rich get richer”, as well as other related insights like the famous “80/20 rule”.
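Preferential attachment is easy to simulate. The sketch below is a minimal version of the mechanism (each new node links to an existing node with probability proportional to that node's current number of links), not a full treatment of the power-law mathematics; the trick of sampling from a list of link endpoints is what makes the sampling proportional to degree.

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a network one node at a time; each newcomer links to an
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    degree = [1, 1]      # start with two linked nodes
    endpoints = [0, 1]   # every link lists both endpoints, so a uniform
                         # draw from this list is a degree-weighted draw
    for new in range(2, n_nodes):
        target = rng.choice(endpoints)
        degree.append(1)          # the new node has one link
        degree[target] += 1       # the chosen node gets richer
        endpoints.extend([new, target])
    return degree

degrees = preferential_attachment(1000)
print(max(degrees))                       # a hub with many links emerges
print(sum(d <= 2 for d in degrees))       # while most nodes stay link-poor
```

Running this, a handful of early nodes end up with a large share of all links while the majority never accumulate more than one or two — the “rich get richer” dynamic in miniature.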
Unfortunately, most work to date has focused on network structure and formation, to the exclusion of informational as well as transformational dynamics — how do network structures change over time, not just in the “always getting bigger” sense, but also including contraction, stasis, equilibrium, oscillation, meta-stability, basins of attraction, chaotic behavior, sub-structural formation, and so on. Only focusing on static networks or networks with limited dynamics strikes me as akin to trying to understand cancer from the standpoint of genetics and drug therapy alone. I predict that we are on the verge of an explosion of research and results in the application of network theory to a wide variety of social, political, biological, chemical and physical systems/processes, BUT ONLY once we’ve successfully applied and refined better models of network dynamics (both informational and transformational).
For example, what happens when we treat the nodes in a network as also being a population of creatures with heritable traits that replicate and mutate and are thus subject to selection and evolutionary dynamics? We’ve already noticed that networks in various realms exhibit something akin to punctuated equilibrium in which long periods of seemingly incremental change are punctuated by shorter, cataclysmic periods of instability and stochastic behavior. The point is not to focus on one model to the exclusion of another, but rather to draw liberally from many different models (ecology, biology, computer simulation, evolution, fluid dynamics, thermodynamics, etc.) and rigorously test working hypotheses that seem to fit the data better than current models. What we should be left with is a new understanding and a new model (or set of models or “mash-up” of models) that actually has real predictive power beyond current best practices.
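As one speculative toy along these lines (everything here is illustrative, not an established model): give each node a heritable numeric trait, and at each step let the fitter of two linked neighbors replicate its trait into the other, with a little mutation. That is heredity, mutation and selection running on top of a fixed network structure.

```python
import random

def evolve(network, traits, steps, seed=0):
    """Toy selection on a network: at each step, a node and a random
    neighbor compete; the higher-trait one replicates (with mutation)."""
    rng = random.Random(seed)
    traits = dict(traits)
    for _ in range(steps):
        node = rng.choice(list(network))
        nbr = rng.choice(network[node])
        winner, loser = (node, nbr) if traits[node] >= traits[nbr] else (nbr, node)
        # Heredity plus a small mutation as the winner's trait replicates.
        traits[loser] = traits[winner] + rng.gauss(0, 0.01)
    return traits

# A 6-node ring network with traits 0.0 .. 5.0 as starting "fitness".
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
start = {i: float(i) for i in range(6)}
end = evolve(ring, start, steps=500)
print(min(end.values()), max(end.values()))  # the fittest lineage spreads
```

Even this crude sketch shows the flavor of the question: the interesting behavior lives in the interaction between the network structure (who competes with whom) and the evolutionary dynamics, not in either alone.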
*We shall leave aside for the moment what kind of information gets passed along the links, but at minimum we can think of binary messages — zero or one.
**We are also forgetting about information that comes into the system from the outside and that which leaves the system and is passed on to the outside. See a future post on “Open vs Closed Systems”.
****as opposed to networks that are engineered top-down, such as a military hierarchy.