Convergence

As readers of my blog posts know, I talk a lot about evolutionary systems, the formal structure of cooperation, the role of both in the emergence of new levels of complexity, and I sometimes use cellular automata to make points about all these things and the reification of useful models (here’s a summary of how they all relate).  I’ve also touched on the “thing” going on with the system of life on Earth that is related to the technological singularity but really is the emergence (or convergence) of an entirely new form of intelligence/life/collective consciousness/cultural agency, above the level of human existence.

From The Chaos Point. Reproduced with permission from the author.

In a convergence of a different sort, many of these threads, which interrelate in my own mind, came together in various conversations and talks within the last 15 hours.  And while it’s impossible to explain it all in detail, it’s really exciting to find other people who are on the same wavelength and have thought a lot harder about each of the pieces than I have.  Just to give you a taste, here are the human players in this personal convergence and how they relate to the themes above:

Kevin Carpenter: First heard him talk at LA Idea Project on the concept of Convergence and how it’s critically different from the Kurzweilian Singularity and much more similar to a Superorganism.  Ran into him again at a party last night and he was excited to have given more cogent shape to his thinking in this area.

Steve Omohundro: I went to check out the H+ Summit this morning, where he was speaking matter-of-factly on so many areas of interest and dropping research-backed evidence to support all of his pontification.  While the details aren’t in this slide presentation, you should glance through it anyway, especially if you have been at all intrigued by the things I’ve written about.

Dan Miler: Spoke right after Omohundro on cellular automata and simulation, and on the metaphor/paradigm of digital physics.  He highlighted several projects by other people which are shedding light on deep universal structure, including the work of Alex Lamb.  Lamb has built the first (as far as I know) cellular automaton system based on irregular lattices (i.e. arbitrary network structures).  Just as in Conway’s Game of Life — the most well-known cellular automaton — persistent dynamic patterns similar to gliders emerge.

Here are more examples from the Jellyfish system.
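To make this concrete, here’s a minimal sketch (my own toy code, not Lamb’s actual Jellyfish rules, which I don’t have) of a Life-like cellular automaton running on an irregular lattice.  Since node degrees vary on an arbitrary graph, the rule below uses the fraction of live neighbors rather than a raw count; the graph size, wiring, and threshold bands are all arbitrary choices for illustration:

```python
import random

# A cellular automaton on an irregular lattice (an arbitrary graph).
# NOTE: an illustrative sketch, not Lamb's Jellyfish rules.

def random_graph(n=200, k=6, seed=0):
    """Irregular lattice: each node is wired to roughly k random others."""
    rng = random.Random(seed)
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in rng.sample(range(n), k):
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)  # undirected edges
    return neighbors

def step(state, neighbors):
    """Synchronous Life-like update on the fraction of live neighbors.

    On a regular grid, Life's birth-on-3 and survive-on-2-or-3 are the
    fractions 3/8 and 2/8..3/8 of the neighborhood; the bands below
    mimic that idea for variable-degree nodes (thresholds are made up).
    """
    new = {}
    for i, nbrs in neighbors.items():
        frac = sum(state[j] for j in nbrs) / len(nbrs) if nbrs else 0.0
        if state[i]:
            new[i] = 1 if 0.25 <= frac <= 0.40 else 0  # "survival" band
        else:
            new[i] = 1 if 0.35 <= frac <= 0.40 else 0  # "birth" band
    return new

rng = random.Random(1)
neighbors = random_graph()
state = {i: rng.randint(0, 1) for i in neighbors}
for t in range(50):
    state = step(state, neighbors)
print(sum(state.values()), "cells live after 50 steps")
```

Whether glider-like persistent structures emerge depends heavily on those bands and on the graph’s structure, which is exactly the kind of thing a system like Jellyfish lets you explore.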

What intrigues me most about this is that the brain is a nonregular lattice (by definition all networks are).  Neuronal firing patterns are (that is to say, cognition is) computationally isomorphic to cellular automata on nonregular lattices.  The jellyfish patterns seen in Lamb’s simulations are exactly what I would imagine to exist in the brain.  These would be the semi-autonomous, interacting — sometimes cooperating, sometimes conflicting — agents that Omohundro refers to as the basis of all cognition/intelligence.  It’s exactly what Minsky was referring to in Society of Mind, and what Palombo referred to in The Emergent Ego.  It’s also the basis of crowd wisdom, or collective intelligence.
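One way to see the force of that claim: a network of binary threshold neurons (in the McCulloch-Pitts spirit), updated synchronously, just is a cellular automaton whose lattice is the connectivity graph — each node’s next state is a fixed function of its neighbors’ current states.  Here is a toy sketch, with the wiring, weights, and firing threshold all invented for illustration:

```python
import random

# Binary threshold "neurons" as a cellular automaton on a graph.
# Connectivity, weights, and threshold are invented for illustration;
# this is a toy abstraction, not a model of real neural dynamics.
rng = random.Random(42)
n = 100
# Sparse random wiring with signed (excitatory/inhibitory) weights.
weights = {(i, j): rng.choice([-1, 1])
           for i in range(n) for j in range(n)
           if i != j and rng.random() < 0.05}

def fire_step(state):
    """Each unit fires iff its summed weighted input meets a threshold."""
    total = {j: 0 for j in range(n)}
    for (i, j), w in weights.items():
        total[j] += w * state[i]
    return {j: 1 if total[j] >= 2 else 0 for j in range(n)}

state = {i: rng.randint(0, 1) for i in range(n)}
for t in range(20):
    state = fire_step(state)  # structurally the same loop as step() above
```

Structurally this is the same update loop as the Jellyfish-style sketch above; only the local rule differs, which is the sense in which firing dynamics can be read as a cellular automaton on a nonregular lattice.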

Which leads us back to Convergence.  As we learn more about the nature of cognition, intelligence and thought (both conscious and unconscious), I believe we will recognize ever more clearly that a new sentience is emerging, not alongside human beings (though that is surely happening as well), but rather at a level above human beings and their technological spawn.

  • kevindick

    You say, “…new sentience emerging…” I realize you and I have been down this road before, but what evidence do you have that a superorganism will be sentient? As far as I know, we are the only level of order that is.

    There are only two possible logical cases: (1) sentience is a fundamental achievement, in which case extrapolating the superorganism’s properties from past emergent orders isn’t possible; or (2) sentience is a fluke, in which case there’s no reason to believe the superorganism will also be sentient.

    Either way, claims of a sentient superorganism do not seem at all justified.

  • Rafe Furst

    Sentience, intelligence, and consciousness all refer (in my mind) to the degree of competence an agent has at thriving in, and adapting to, a dynamic environment, and also at shaping that environment to the betterment of the agent itself. It’s Dan Dennett’s stance that the primary purpose of the brain is to “produce future” (as in, predict and shape it).

    By the above definition, sentience and “organismhood” are isomorphic. If you believe there is (or will be) a superorganism (or several), then by definition it/they are sentient organisms. The only question is one of degree (i.e. how competent they are) and familiarity (i.e. does their sentience seem similar enough to our own that we are willing to call it sentience).

    • kevindick

      I really hate overloading previously established terms this way. In typical use, “sentient organism” is a strict subset of “organism”. So why use them equivalently?

      My definition of sentience is that it is the type of consciousness that performs long-term goal setting and planning (what I call executive function).

      Now, are you asserting that a superorganism of humans will possess kevin.sentience? If so, I don’t agree and further assert that you don’t have any evidence to support this claim. If not, then carry on.

  • Alex Golubev

    sentient or not, isn’t it more important that it will be more intelligent than any individual or group that has ever existed and humankind will reap the benefits by having brought it to life? (it’s not a rhetorical question)

    Also, the main difference between what you’re describing and the pre-digital body of recorded knowledge (whether it’s spoken word, rocks, or paper) is that now there are nonhuman agents who filter the data for us. That’s why I’ve been posting so much on filtering. It’s the new constraint. What do you guys think?

    • kevindick

      Well, I don’t know how you measure the intelligence of a non-kevin.sentient organism. If it’s not kevin.sentient, it will be severely deficient from our perspective. We would have to use our kevin.sentience to harness whatever information processing capabilities it possesses.

      Second, to the extent that a superorganism has any rafe.competence, who’s to say that the interests of individual humans will align with those of the superorganism? I don’t feel any obligation to particular cells of my body. Even large chunks of tissue may be sacrificed under certain conditions. Moreover, if I could replace whole organs with superior synthetic ones, I would.

      • Alex Golubev

        that’s a very interesting point. i think i’d argue that judging the intelligence of artificial decisions is no different than judging those of a human, a dog, a group of humans. it’s not that i argue that this organism will be intelligent, i’m simply saying that it will be quite easy to figure out. compare it to google or any other competitive intelligence AI and see who makes more sense to you. So i advocate the use of the scientific method to judge AI.

        I do feel that there are a lot of natural common goals and incentives between myself and my heart, eyes, and other vital organs. But I definitely don’t think all my goals are aligned with those of all the cells in my body (cancer and I might have quite a few agency problems).

        I also see how Google could turn evil or stupid, but I’d argue that a (stupid or smart) human is just as likely to be behind it as the self-awareness of Google itself. I’m more interested in the version of The Terminator where a human with a machine fights a more fit human. I’d say we’re being quite presumptuous in assuming that a machine can fully replicate a human. We’re asking the wrong question. We’ve got to create AIs to help us learn, to rationally process information faster, since distribution is no longer a constraint.

        • kevindick

          Um, I think we’re talking about different things. Superorganism competence != artificial decisions. As Rafe will tell you, superorganism is not AI, though it’s possible that the sets are somewhat overlapping.

          • Alex Golubev

            But i’m suggesting that we judge the superorganism using human utility not superorganism utility which we obviously can’t comprehend.

            • kevindick

              Perhaps you should post on how you would do this. Then we can discuss it.

              • Alex Golubev

                Kevin, isn’t it the same way we interact with economy/corporations/bosses/coworkers, government/regulators, religion/preachers/warriors? of course we cannot interact with the “biosystem” as an entity and we don’t attempt to, but we do interact with agents 1, 2, 3, etc… emergent levels lower. There MAY be a higher level than intelligence that is also important, but i cannot think of one.

  • Rafe Furst

    I am asserting the following:

    • Superorganism (SO) will indeed possess kevin.sentience, including executive function. I would like to refer to kevin.sentience as “agency”, which is consistent with how I’ve used that term on this blog all along.

    • Agency is not binary; it’s a continuum of ever-additive and ever-strengthening self-* properties. kevin.sentience is a sub-continuum thereof.

    • Executive function actually is a collection of self-* properties (embodied in sub-agents) that interact in an emergently coherent way to the benefit of the agent. Executive function itself can be seen as a self-* property.

    • Thus, I see kevin.sentience as a property of an agent that emerges after a certain threshold of properly organized self-* properties are present. More here: http://emergentfool.com/2007/03/16/mechanisms-of-agent-stability/

    • There is always some conflict between agents at one level and those above and below it, as mentioned in the Mechanisms post.

    • The jury is out on whether, and to what degree, we will be able to understand/communicate with the SO and “recognize” its agency. People today barely understand the relatively strong and obvious agency of memes, corporations, governments, religions and cultures. More about this here: http://emergentfool.com/2007/03/11/cultural-agency/

    • It seems as though the higher we go up the levels of complexity the closer the languages become. E.g. humans can communicate with corporations (and are treated similarly under certain laws), but cells can’t communicate (very well) with humans.

    • SO will continue to acquire self-* properties, including but not limited to ones that humans possess as well as new ones we will never possess.

    • Whether we are talking about one SO or multiple SOs is a separate discussion, and I have some new thoughts on that which I will share at some point.

    Proof for my assertions is by induction: look at what happens from atoms to molecules to biomolecules to organelles to cells, etc., and pull out the general principles and trends. Then look for the circumstantial evidence that is all around us in sociotechnical systems (corporations, crowdsourcing, the internet, etc.).

    • Alex Golubev

      You seem to suggest that we can interact on a more SO level. I don’t know what NATURAL interactive properties humans had, or whether AI is more human than SO. I think I agree with you. Is it right to say that the human capability to learn, and to change our decision-making and communicative processes through increased interaction with higher-order organisms (tribes, villages, cities, states, countries; workgroup, department, company, industry, economy; etc.), has already provided proof that we CREATE higher emergent orders of existence for our own benefit? Sometimes it takes a major breakthrough in the type of communication/decision making that is needed to create a new emergent entity (language, print, network). That is not to say that those entities don’t also exist on a different interactive plane, but that simply isn’t as important as the first implication.

      Check out http://www.hunch.com, a new way to interact with and create collective wisdom.

    • kevindick

      I take issue with your induction, as I described before. I contend we are the first level of agent with kevin.sentience. Therefore, we do not have any trend to analyze to determine what happens when a level of organization emerges on top of a level with kevin.sentience.

      Your circumstantial evidence is weak unless you can provide references. I have never seen an organization that possesses executive function beyond that of its individual human components. In fact, there is a fair amount of evidence from organizational behaviorists that individuals’ choices about what is in their own best interest are what drive the goals of organizations.

      Let me take a different approach. Can you propose a test of whether an SO has executive function? If we can agree on such a test, I would be happy to make a bet.

      • Rafe Furst

        How about you propose the test instead and I’ll propose the bet?

        • kevindick

          I don’t think so. You’re the one making a positive assertion. (Actually, you’re making a positivist assertion about a positive assertion: there will be an SO and it will have executive function.) I hold the ground of the null hypothesis. If you can’t come up with a test of your assertion, that’s fine with me.

          I’ll just file it under “speculation”. It’s a big file.

          • Rafe Furst

            Well, the problem is that since I have a very wide range of what I consider executive function (it’s a spectrum, not a toggle), and since you have something much more specific in mind, I could sit here all day coming up with definitions that you will reject. Or you could come up with something, and no matter what you come up with, I’ll propose an over/under on when it will be achieved.

  • DWCrmcm

    The RMCM encapsulates the difference between primitive life and modern life as the extent and scope of the persistent metabolisms.
    The more primitive the life, the more distributed the metabolism.
    Echo “systems” are new primitive organic life forms.

    The RMCM is indifferent to the word “system”. System is a concept and can mean whatever the user desires. There are apparently three classes of systems: simple systems, complex systems, and complicated systems. Simple systems are predictable systems, whatever that means. I am unaware of any further systemic distinctions.

    Rather than systems, The Model asserts metabolisms (complex mechanisms). Rather than levels or degrees of intelligence, The Model asserts kinds of experiences. In The Model, intelligence becomes The Third experience.
    What we dream of as AI, The Model asserts, is organic third experiences abstracted into an inorganic metabolism. AI we call simply The Fourth experience.
    Software models of The Third experience require a capacity for persistence.
    Persistence without causal constraints spawns mutating aggregates. Ghosts in the machine. When ghosts meet, the software third experiences become psychotic, in that there are multiple streams of concurrent consciousness competing indifferently for resources.
    Instead of programming an AI, we must build the inorganic equivalents of organic metabolisms and let the experience flourish.

    • Rafe Furst

      DWCrmcm - I have to say I’m intrigued by your approach outlined on your site, but since your writing is so laden with jargon, it’s impossible to really evaluate your claims. If I might be so bold as to suggest, you will get more traction if you write less abstractly and more for a lay audience.

      BTW, I am not advocating that we should or need to intentionally create an AI, but rather that at least one collective intelligence (probably very many) is emerging under nobody’s direct control and according to nobody’s intentional actions. See Susan Blackmore’s talk under Related Posts above for a simplistic example. What I’m talking about is even more “Third Experience”.