Response to Superorganism as Terminology.
I was actually about to post something about terminology, so I’m glad this came up. It’s just so difficult to choose words for concepts that have little precedent without going to one extreme of overloading (e.g. “organism”) or the other extreme of total meaninglessness (e.g. “foo”). I have tried to use the terms closest in meaning to what I’m after, but there’s no avoiding misinterpretation. I can only hope that, by defining and redefining for an audience that doesn’t make snap judgments but rather considers word usage in context, we can converge on at least a common understanding of what I am claiming. From there we have a shot at real communication of ideas, and hopefully even agreement.
Organisms and Superorganisms
Regarding “organism”, I don’t particularly like it either because it has too narrow (and biological) a connotation. I prefer “agent” or “system”. In my lexicon, all systems are agents to some degree, but I typically reserve “agent” for those systems that display behaviors we would recognize as self-preserving or self-generating. Thus, given two different systems — one being a crowd of people at a Manhattan intersection, and the other being a corporation — I would be inclined to refer to the latter as an agent, but not the former. I’m not thrilled with “supercommunity” as it’s a little too soft and doesn’t imply any sort of agency (which it needs to). Superagent? Supersystem? Let’s just stick with superfoo for the moment.
Regarding “levels”, I refer you back to my original post on levels, especially the last section titled “Levels Aren’t Strict”. As I see it, levels become less strict the higher up we go. Meaning that chemical systems are very distinct from the atomic systems they sit on top of, but social systems blend and bleed together with the human systems (i.e. human beings) they sit on top of. For example, individual humans interact as peers with (i.e. at the “same level as”) corporations under some circumstances (e.g. my contract with AT&T to provide service in return for money) and at different levels under other circumstances (e.g. my being an employee and thus a constituent part of the company I work for).
You make a good point about networks. Yes, everything is interconnected and one big network, and there are infinite ways to model systems as networks, and it depends on what interconnections you choose to model and what parts are “in” and what parts are “outside” of the network. However, I think we will agree that not all models are equally good and that the good models are the ones that come closest to fitting the underlying structure; they produce more insightful descriptions and more accurate predictions.
As for “awareness”, I will agree it’s too loaded a term and should probably not be used in this discussion. I note that we both have avoided the even more loaded term, “intelligence” (thank Foo!) except as it applies to humans. My contention — and I will expound on this in its own post — is that the concept of “intelligence” is an example of the fallacy of misplaced concreteness. Meaning it should never have been reified. Intelligence is simply a description of how human agency manifests itself. I will go further and suggest that “awareness” is like “intelligence” in this regard, but not quite as egregious.
Which brings us to “autonomy”. Ah, autonomy… a true red herring if there ever was one. So laden with religious, philosophical and political overtones. There needs to be a term that means exactly what autonomy means, but isn’t so overloaded. Dictionaries and thesauri are no help here, yielding equally charged alternatives. All I can say is that it’s kind of like the situation with networks and how there are many and just one at the same time. To me (in my lexicon), autonomy is the degree to which a system is able to effect its own agency, which is to say to exist and persist through time. No man is an island, and no agent is truly autonomous.
I like how you bring in “interdependence”, but I will propose the following reconciliation: autonomy is the degree to which a system is able to effect its own agency modulo whatever the space of alternatives happens to be. That is, if your only alternatives are to hunt or die, you still may have a high degree of autonomy as long as you are maximally free to hunt (nobody is holding you down, there are actually things to hunt, you are healthy, etc). I will agree with you that the space of alternatives for agents gets larger as time goes on within a level (and also as we move up the levels). But the amount of interdependence also increases as time goes on within a level, which BTW corresponds to an increase in agency at the level above. And as interdependence goes up, unless the space of possible actions grows commensurately, there will be an overall decrease in autonomy.
So is human autonomy currently increasing or decreasing? Too hard to say. But taking the long view, the pattern that I see is that within any given level the space of alternatives for agents starts out small and rises over time in an S-curve. At the same time, interdependence also goes up along an S-curve. And so depending on the relative amplitudes of these two S-curves, and on how their phases align, autonomy can be increasing, decreasing or flat. In all the levels heretofore it seems as though there has been a long period of increase in autonomy, an eventual peak, a subsequent decline, and then a leveling off into relative steady state. Exhibit A is single-celled organisms and what happens as they transition to a multicellular collective and eventually become part of a multicellular organism. But it’s also true of every other level that I can think of, including cooperative aggregations of humans. If you have a counterexample, let’s explore it. What’s in store for humans in the superfoo scenario that I see? Again, just because the pattern seems consistent over the 13+ billion years in which each new level has unfolded, this doesn’t mean we aren’t entering some new phase of history where the old pattern is broken. But I’m using the same inductive logic as those who argue for the inevitability of the technological singularity.
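The two-S-curve picture can be sketched as a toy model. To be clear, everything here (the logistic form, the amplitudes, the midpoints) is an illustrative assumption, not a claim about real history: autonomy is approximated as whatever surplus the space of alternatives has over interdependence, with each modeled as a logistic curve that rises at a different time.

```python
import math

def logistic(t, amplitude, midpoint, rate=1.0):
    """A standard logistic S-curve: slow start, rapid rise, saturation."""
    return amplitude / (1.0 + math.exp(-rate * (t - midpoint)))

def autonomy(t):
    # The space of alternatives starts rising first...
    alternatives = logistic(t, amplitude=1.0, midpoint=3.0)
    # ...while interdependence lags, then climbs toward its own plateau.
    interdependence = logistic(t, amplitude=0.8, midpoint=6.0)
    # Toy definition: autonomy is the surplus of alternatives
    # over interdependence.
    return alternatives - interdependence

# Sample t from 0 to 15 in steps of 0.5.
curve = [autonomy(t * 0.5) for t in range(31)]
peak = max(curve)
print(round(curve[0], 2), round(peak, 2), round(curve[-1], 2))
```

With these (hypothetical) parameters the curve traces exactly the pattern described above: a long rise, an interior peak, a decline, and a leveling off at a steady state below the peak. Shift the midpoints or amplitudes and you can get net-increasing, net-decreasing, or flat autonomy instead, which is the point about phase and amplitude alignment.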
Superfoo or Superfoos?
Now, onto singular superfoo or multiple superfoos. This is also a red herring. As agreed already in the network discussion, there are and will be superfoos, plural. But no matter how many of them and how many levels of hierarchy you presume, there will always be a network that represents the single highest level on Earth which by construction is fully connected, no islands. And it is the underlying system (singular) modeled by that network that I am calling the superfoo. It’s not a matter of prediction that gives me this confidence, it’s a matter of definition :-)
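The “by construction” point can be made concrete with a small sketch (the node names and links are hypothetical): count the connected components of a network before and after the top-level links are added, using a simple union-find.

```python
def components(nodes, edges):
    """Count connected components via union-find with path compression."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(n) for n in nodes})

# Three hypothetical lower-level superfoos, modeled as separate islands.
nodes = ["corp_a", "corp_b", "gov_a", "gov_b", "culture_a", "culture_b"]
edges = [("corp_a", "corp_b"), ("gov_a", "gov_b"),
         ("culture_a", "culture_b")]
islands = components(nodes, edges)

# The highest-level network includes whatever links tie the islands
# together, so by construction it collapses to a single component.
top_level_edges = edges + [("corp_a", "gov_a"), ("gov_a", "culture_a")]
merged = components(nodes, top_level_edges)
print(islands, merged)
```

The three subnetworks are islands on their own; once the cross-links that define the highest level are included, there is exactly one component, and that single underlying system is what the post calls the superfoo.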
What can we conclude about the superfoo with any certainty? Nothing of course. But using the inductive logic that singularity arguments must rely on, here are some things that I personally feel are justified by the evidence:
- Superfoo’s agency will continue to increase.†
- Human autonomy at some point will decrease from its peak.‡
- Superfoo will increasingly exhibit self-* properties.
One point about this last bullet. Not every agent on Earth exhibits all self-* (pronounced “self-star”) properties to a high degree. Atoms exhibit self-organization, self-containment and self-stabilization, but not much else. Biological entities exhibit most of the properties in this diagram to some degree, and some properties more than others. Humans and multi-human complexes notably exhibit properties in the “Sense of Self” category to degrees that other agents don’t. The pattern over time in the universe as new levels of complexity emerge seems to be that new self-* properties emerge as well. Thus, it is not unreasonable to suggest that the levels above humans will eventually do two things: (1) exhibit the properties on the diagram to a greater and greater degree as time goes on, and (2) exhibit new self-* properties that have never been seen before and which we humans may or may not be able to predict or even grok.
I will conclude by noting that the list of known self-* properties possible for superfoo includes self-awareness and self-consciousness. I will pause here to quickly throw up the flag signaling that we are not to anthropomorphize those terms, but just consider them as detached, scientific descriptions of a system’s behavior. To put a fine point on it, consider for a moment that corporations, as actors in the world, do seem to exhibit a form of self-awareness. It may not be identical to the sort of self-awareness that you feel that you have, but it meets a clinical definition. To wit, all activity related to corporate branding would be hard to explain without referring at least implicitly to self-awareness.
Thus, and finally, it is with all this in mind that I suggest we admit the possibility (nay, likelihood) of superfoo’s self-awareness and self-consciousness, if not now, at some point in the future.
† Assuming we humans survive the various existential threats that are upon us, including climatic catastrophe, weapons-related catastrophe, socio-economic catastrophe, technological catastrophe (bio-tech, nano-tech), etc. Have we reached the peak in human autonomy yet? Anyone’s guess. BTW, the peaking-of-autonomy argument goes for any “transhuman agents” that emerge as well (i.e. any technologically-based AIs, modified humans, and hybrids thereof). It also goes for agents at higher levels than humans or transhumans. Currently existing examples include corporations, governments, cultures, religions, military-industrial complexes, foo-bar complexes, etc.
‡ Just to reiterate why this happens: increasing interdependence of agents at the lower level and increasing agency at the higher level. Astute readers will understand that these two dynamics are actually two sides of the same coin.