Response to "Superorganism Considered Harmful"
Rafe makes an analogy to cells within a multicellular organism. How does this support the assertion that there will only be one superorganism and that we will need to subjugate our needs to its own? Obviously, there are many multicellular organisms. Certainly, there are many single-celled organisms that exist outside of multicellular control today. So where is the evidence that there will be only one and that people won’t be able to opt out in a meaningful sense?
There will be only one because of the amount of interconnectedness and interdependency of the constituent agents. At no time in the history of Earth have the actions of one agent had such immediate and profound impact on others, both in potential terms and in actual terms.
In network theory you can identify subnets within a larger network which are islands: they connect the nodes within the subnet to each other, but are otherwise unconnected to the larger whole. By adding a link from one of the nodes in an island to a node in another island, you end up with one large island instead of two smaller ones. The Earth-system (what we are calling the superorganism) has no islands anymore. It’s all one system, whether we like it or not, and whether we intend it or not.
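The island-merging claim can be made concrete. Below is a minimal sketch (my own illustration, with made-up node names) that counts the islands, i.e. connected components, in an undirected network, and shows how a single added link fuses two islands into one:

```python
from collections import defaultdict

def connected_components(nodes, edges):
    """Count the 'islands' (connected components) in an undirected network."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), 0
    for node in nodes:
        if node in seen:
            continue
        components += 1
        stack = [node]
        while stack:  # flood-fill one island
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack.extend(adj[n] - seen)
    return components

nodes = ["a", "b", "c", "x", "y", "z"]
edges = [("a", "b"), ("b", "c"), ("x", "y"), ("y", "z")]
print(connected_components(nodes, edges))                 # two islands
print(connected_components(nodes, edges + [("c", "x")]))  # one bridging link: one island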
Opting out is not really possible anymore, again because of interconnectedness and interdependency. It is a myth to think that we humans are currently autonomous agents, and our level of autonomy is going down. This is not necessarily a bad thing, but it does challenge our myths and conceptions of who we are and who we can be in the future.
Now, I expect the answer to include the observation that this analogy is inadequately expressive. Exactly! So how can you predict anything? The major difference between humans and cells in this context is that humans possess their own executive function. They are capable of formulating and pursuing independent long-range goals. They are capable of independently applying Bayes’ Theorem to predict changes in their environment.
I think this is a red herring relative to the points you make below, but I will suggest that there is nothing special about human executive function in the pantheon of mechanisms of agency. See this post, particularly the part about Prediction & Representation. As a somewhat relevant aside, I’ll point out that the higher levels can utilize the mechanisms of the lower levels, but not vice versa. This subsumption is relevant to the comprehensibility claim.
I think this is a point that Rafe needs to address further to back up his assertion that the higher level will be incomprehensible to the lower.
Your brain might have the capacity to fully comprehend an individual neuron inside it, but there is a theoretical limit to what your brain can comprehend about itself due to recursion (cf. the halting problem, Gödel incompleteness, the liar’s paradox, and so on). To be clear, by “fully comprehend” I mean contain a representational model that has all relevant complexity to produce an accurate description and prediction of the system being “comprehended”.
Now when you add on top of that the complexity of an entire system of brains and “other stuff” (the complexity of which is staggering by itself), it is hopeless to think your brain could fully comprehend that entire system (of which your somewhat less incomprehensible brain is a subsystem). You may argue that an enhanced human intelligence of the form discussed by Kurzweil is not nearly as limited as your and my brain. But the argument still holds: no system can fully comprehend itself, let alone a supersystem of which it is a subsystem.
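The “no system can fully comprehend itself” claim is the same diagonal move that underlies the halting problem. Here is a sketch in Python-flavored pseudocode; the perfect predictor `halts` is assumed only so it can be refuted, since no such total predictor can actually be written:

```
# Hypothetical perfect predictor: halts(f, x) is True iff f(x) halts.
# Assumed for contradiction -- this function cannot actually exist.
def halts(program, arg):
    ...

def contrarian(program):
    # Do the opposite of what the predictor says the program does on itself.
    if halts(program, program):
        loop_forever()
    else:
        return

# Does contrarian(contrarian) halt?
# If halts says yes, contrarian loops forever; if halts says no, it halts.
# Either way the predictor is wrong about at least one input, so no
# complete self-model -- no full self-comprehension -- is possible.
```

The same construction goes through for any fixed reasoner, however enhanced, which is why the argument survives the Kurzweil-style objection.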
I contend that the superorganism has been gradually emerging for hundreds of years and that we have been gradually improving our understanding of it. My strawman superorganism is the economy, which invisibly coordinates the behavior of all participating actors. I’ll be the first to admit that our understanding of the macroeconomy leaves something to be desired, but we do understand a fair bit. Oh, and for those Greens out there who will ask, “But what about the global ecosystem?” I’m including “resource economics” in the definition of economics.
I agree about the superorganism gradually emerging for hundreds (actually billions) of years, and that we humans have been — individually and collectively — gradually improving our understanding. The economy is a good subset to focus on because it illustrates the point about limits. Let’s go simpler and just talk about “the market”.
Ever since Adam Smith reified the concept of the invisible hand, the market as a system began to reflect on itself. These days, with pervasive 24-hour financial news and speculation, diverse sorts of market participants, deep analysis tools, etc., our understanding of how markets work in general has increased. However, the feedback of that understanding into the micro decisions of the lower-level agents makes the emerging macro-level behavior more and more complex and chaotic. Now generalize to the global economy as a whole and you get the picture. Just look at how policy makers and Nobel laureates alike are floundering at fixing the global financial crisis: they don’t even agree on the fundamentals, let alone on what actions will have what effects.
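The reflexivity point can be illustrated with a toy model. This is strictly my own stand-in, not a claim about any actual market: every agent acts on the same shared price forecast, their collective action moves the very price being forecast, and the strength of that feedback decides whether the system stays predictable or turns chaotic (a logistic-map form, chosen for simplicity):

```python
def reflexive_price(p, feedback):
    """One step of a toy reflexive market (logistic-map form): agents act
    on a shared forecast, and their collective action moves the price."""
    return feedback * p * (1 - p)

def trajectory(p0, feedback, steps=50):
    """Iterate the map, returning the whole price path."""
    prices = [p0]
    for _ in range(steps):
        prices.append(reflexive_price(prices[-1], feedback))
    return prices

def max_divergence(p0, q0, feedback):
    """Largest gap between two runs that differ only in starting price."""
    return max(abs(a - b) for a, b in zip(trajectory(p0, feedback),
                                          trajectory(q0, feedback)))

# Weak feedback: two nearly identical starting prices stay together.
print(max_divergence(0.500, 0.501, feedback=2.5))
# Strong feedback: the same tiny difference is amplified beyond recovery.
print(max_divergence(0.500, 0.501, feedback=3.9))
```

In the weak-feedback regime the two runs converge to the same fixed point; in the strong-feedback regime a 0.001 difference in starting price blows up to order-one divergence, which is the sense in which better-informed micro decisions can make the macro level harder, not easier, to predict.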
In your own favorite complex system, the climate, it’s the exact same situation (as you have argued eloquently) despite the illusion of understanding and consensus. And it should not be lost on us that the climate and economy are increasingly one and the same system, part of the larger superorganism.
So, on the one hand, with these complex systems that we are a part of, we do increase our understanding day by day and year after year. But on the other hand the gap between our understanding and what there is to understand is widening.
Local economies are becoming more and more linked, but it’s hard for me to see how this leads to an event horizon in and of itself.
The aforementioned widening gap is the event horizon.
It’s also hard for me to see how “awareness” in the sense that we have it will exist.
Exactly. It won’t be awareness as you know it. You are, according to my argument, not capable of the sort of “awareness” that the superorganism will have. Awareness is a misleading (and anthropomorphic) word though.
I think that you must add something along the lines of a technological singularity to end up with these two properties.
Clearly, the technological aspect of the superorganism is necessary, not only for “superawareness” but for the whole shebang. It’s not possible to separate technology from the equation. Without it, the current (and future) economy, climate, culture, organization, and so on do not exist.
So what is the concept of emergence adding here? In terms of understanding how the superorganism functions, what does it add beyond economics? What additional predictions does it make and why? Moreover, how can one claim that exponential technological development is not a necessary condition for the emergence of a higher-level awareness? Is there some demonstration of the preconditions for emergence that excludes it?
I think you misunderstood my claim. I am not arguing against the likelihood of technological singularity. I am saying that its significance is anthropocentric. The larger event is the emergence of a new level of organization of matter, energy and information that goes beyond “simply” human immortality, human merger with technology, and an event horizon. The emergence of this new level (the superorganism) is an aspect of The Singularity that I haven’t heard discussed before, and I’m pointing out the conspicuousness of its absence.
As far as what emergence adds, it’s (a) the notion of superorganism at all, and (b) the consequences for autonomy of individual agents which comprise the superorganism, including humans, “pure” technology / AI, and mergers thereof.