Comments on Human Cultural Transformation

This is a follow-up to Ben’s post on Human Cultural Transformation Triggered by Dense Populations. There are too many links in it to be accepted into the comments directly…

In thinking about these questions, it helps me to remind myself of the difference between evolution and emergence. Evolution happens whenever you have a population of agents with heritable variation and differential reproduction rates. There are at least two types of emergence, both of which can create new types of agents. Various self-reinforcing mechanisms lead to stronger and more stable agency. We may not even recognize nascent agents for what they are until their agency (or coherence) becomes strong enough. For instance, many people have a hard time wrapping their heads around cultural agency of any form.
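
To make the evolution half of that distinction concrete, here is a minimal sketch in Python (the single-number trait, the fitness function, and all parameters are illustrative choices of mine, not anything from Ben’s post). The point is that a population evolves as soon as it has heritable variation and differential reproduction, and nothing more:

```python
import random

def evolve(pop_size=100, generations=50, mutation_sd=0.05):
    """Minimal evolution: heritable variation + differential reproduction.

    Each agent is reduced to a single trait in [0, 1]. Parents are
    sampled in proportion to fitness (differential reproduction), and
    offspring inherit the parent's trait plus a small mutation
    (heritable variation).
    """
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Toy fitness: the trait value itself.
        parents = random.choices(population, weights=population, k=pop_size)
        population = [min(1.0, max(0.0, p + random.gauss(0, mutation_sd)))
                      for p in parents]
    return sum(population) / pop_size

if __name__ == "__main__":
    random.seed(0)
    print(f"mean trait after selection: {evolve():.3f}")  # drifts toward 1
```

Everything else (genes, sex, culture) is implementation detail; the loop above is the whole requirement.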

Obviously none of us here has a problem with the concept of non-human agency, but as Alex and Ben collectively point out, cultural agents depend on human agents for their very existence. Yet as they become more coherent, they inevitably come into conflict with human agency (i.e. what’s good for the organization diverges from what’s good for its constituents). This is the fundamental yin-yang dynamic of the creation of new levels of organization and complexity.

It is worthwhile asking what the future holds for humanity. This is what Kevin and I were on about in this whole superorganism and singularity thread:

Superorganism and Singularity
Superorganism Considered Harmful
Response to Superorganism Considered Harmful
Superorganism as Terminology
Superfoo
Focusing on Autonomy
Going Meta on Autonomy

In summary:

  1. we disagree on whether there will be a single overarching Gaia-esque Super-agent on earth or whether there will just be a rich ecology of many interacting “small s” super-agents with no strong “big S” Super-agent
  2. we disagree on how to measure “autonomy” so we can’t come to a consensus on what life will be like for humans
  3. we didn’t dive very deeply into the extent and nature of the interaction between human agents and super-agents

This last point is interesting to me since it appears from the evidence that as each new level emerges, several things happen:

  • communicative interactions between higher-level and lower-level agents increase
  • level boundaries become less strict so that levels “overlap”
  • the amount of co-evolution between the lower-level population and the higher-level population (i.e. multilevel evolution) also increases; a toy simulation after the examples below illustrates this

To make this claim more concrete, compare, for instance, the differences (in the above regards) among these three dyadic systems:

A) atom -> molecule

B) cellular organism -> multicellular organism

C) human -> corporation
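
The multilevel-evolution point is the one a toy model clarifies most. Here is a sketch of a group-selection setup (entirely my own illustrative construction, not drawn from the thread): defectors out-reproduce cooperators within each group, while groups with more cooperators grow faster, so the two levels of selection pull in opposite directions and co-evolve.

```python
import random

def multilevel_step(groups, b=0.5, c=0.2):
    """One round of a toy multilevel-selection model.

    groups: list of groups, each a list of booleans
            (True = cooperator, False = defector).
    Within-group (lower-level) selection: defectors skip the cost c,
    so they out-reproduce cooperators inside every group.
    Between-group (higher-level) selection: groups with a larger
    cooperator fraction grow faster, by benefit b.
    """
    new_groups = []
    for group in groups:
        coop_frac = sum(group) / len(group)
        new_size = max(2, round(len(group) * (1.0 + b * coop_frac)))
        weights = [1.0 - c if agent else 1.0 for agent in group]
        new_groups.append(random.choices(group, weights=weights, k=new_size))
    return new_groups

if __name__ == "__main__":
    random.seed(0)
    groups = [[random.random() < 0.5 for _ in range(20)] for _ in range(10)]
    for _ in range(10):
        groups = multilevel_step(groups)
    total = sum(len(g) for g in groups)
    print(f"global cooperator fraction: {sum(map(sum, groups)) / total:.2f}")
```

Within every single group the cooperator fraction falls, yet the global fraction can fall more slowly (or even rise) because cooperative groups carry more weight; that Simpson’s-paradox signature is what selection acting at two levels at once looks like.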

All thoughts, disagreements, questions welcome…

  • plektix

    Yaneer Bar-Yam’s “complexity profile” formalism is quite useful for sorting out the concepts of emergence, partial emergence, and the relationships between lower- and higher-level agents. I’ll do a full post on this at some point, but for now I’ll direct readers here.
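
    In the meantime, here is a toy numerical rendering of the idea (my own simplification, not Bar-Yam’s actual formalism: majority-vote coarse-graining and compressed size stand in as a rough proxy for the information needed to describe the system at each scale):

    ```python
    import random
    import zlib

    def complexity_profile(bits, scales=(1, 2, 4, 8, 16)):
        """Toy 'complexity profile': description length of a binary
        sequence after coarse-graining at each scale. Compressed size
        stands in for information content; real treatments use
        information theory proper.
        """
        profile = {}
        for k in scales:
            # Coarse-grain: majority vote over non-overlapping windows of k bits.
            coarse = bytes(1 if 2 * sum(bits[i:i + k]) >= k else 0
                           for i in range(0, len(bits) - k + 1, k))
            profile[k] = len(zlib.compress(coarse))
        return profile

    if __name__ == "__main__":
        random.seed(1)
        # Structure at scale 8: blocks of 8 identical bits, plus 10% bit noise.
        bits = []
        for _ in range(256):
            block = random.randint(0, 1)
            bits.extend(block if random.random() > 0.1 else 1 - block
                        for _ in range(8))
        print(complexity_profile(bits))  # description length drops with scale
    ```

    The interesting systems, on this toy picture, are the ones whose profile stays high across a wide range of scales; that is where talk of higher-level agents starts to earn its keep.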

  • Alex Golubev

    I’ll start with choices and autonomy, because that’s where you guys seemed to finish up. Whether or not choice is an illusion cannot necessarily be known. All we know is that a certain system is successful, and is thus sufficient (to the lower-level agents) by evolution. I’d argue that being content with our level of knowledge is grounded in something akin to the scientific method: until we encounter anything contrary to our theory, we’re happy. Unfortunately, if a large enough group of people shares the same error in reasoning, the error can persist long enough to create painful imbalances. We have errors of incomplete knowledge, lapses of logic, emotional/animalistic biases, learned biases, etc… There are plenty of ways to reduce these biases. So my point is that we don’t choose to get into the “Matrix”; we only choose to get out of it by stopping our learning process.

    Incomplete knowledge – Tyler Cowen has mentioned that anything we might want to know is already on the internet; we just stop searching at some point. The collective knowledge on the internet must be tapped by each and every one of us.

    Logic errors, emotional/animalistic/learned biases – Overcoming Bias and Less Wrong take a stab at this. I would argue that their knowledge is light years ahead of what 90% of humans could benefit from, so we need to focus on application, not theory. Our errors are so basic, and we all learn about them on a regular basis, yet we make them so regularly that it amazes me we’ve achieved this level of complexity! I’m sure I’ve already made at least three errors while writing this. That doesn’t necessarily make the conclusions wrong, because there’ll be other arguments and consequences, but you get the point (hopefully).

    I haven’t read much about the singularity, so this is probably already part of the theory, but given all the factors above, we need to outsource anything we can to machines: data storage and some simple intelligence. Baby steps. Only after taking those will it become apparent how to teach the machines more of what we want them to do. Humans form hypotheses, and to the extent the machines can learn to form hypotheses, they will win; to the degree we stay a step ahead of them, we will win. Although in a symbiotic relationship with an illusion of choice, it just might not matter (in a world of hamburgers, a steak is always superior, as long as we don’t know about lobsters). Except by the time this level of similarity between man and machine is reached, the luxuries available will not even be comprehensible to us “simple” folk.

    But simply stating that we CAN get to this level of complexity is far from making it happen. We must define the incentives correctly and encourage good ideas, which can only come from the most “complex” of individuals. Integration of various fields is critical. Once again, I suggest that IDEAS are more important than the masses of people who choose to get off the thinking/learning rollercoaster. We have to figure out how to farm good ideas, more so than how to take care of every single living soul.

  • Alex Golubev

    The Matrix reference and the hamburger < steak < lobster comparison are meant to link to:
    http://emergentfool.com/2009/09/20/discovery-and-being-self-aware/

  • Pingback: Convergence « The Emergent Fool

  • Alex Golubev

    Gaia evidence - horizontal evolution?
    http://kottke.org/10/02/not-your-fathers-evolution