There is a massive paradigm shift occurring: beliefs about the nature of scientific inquiry that have held for hundreds of years are being questioned.
As laypeople, we see the symptoms all around us: climatology, economics, medicine, even fundamental physics. These domains (and more) have all become battlegrounds, with mounting armies of Ph.D.s and Nobel Prize winners entrenched in opposing camps. Here’s what’s at stake:
In 1972, Kahneman and Tversky launched the study of human cognitive bias, work that later won Kahneman the Nobel Prize. Even a cursory reading of this now vast literature should make every logically minded scientist deeply skeptical of their own work.
A few scientists do take bias seriously (cf. Overcoming Bias and Less Wrong). Yet, nearly 40 years later, it might be fair to say that its impact on science as a whole has been limited to improving clinical trials and spawning behavioral economics.
In 2008, Farhad Manjoo poignantly illustrated that our indifference to the pervasiveness of cognitive bias, combined with digital-age network effects, has led us to a crisis of truth: none of us, from Dittohead to Nobel laureate, is able (practically speaking) to distinguish objective, scientific truth from carefully crafted stories.
What’s worse is that we continually craft these stories for ourselves as we go through life… it’s the basis for rational, conscious thought.
What’s even worse is how individual biases combine with institutional biases in science (e.g. publication bias) to unintentionally corrupt the scientific process. It has gotten to the point where, in 2010, headlines like these have seemingly little impact on the scientific community and go unnoticed by policy makers and the general population alike:
• Million-dollar industry payments to doctors go undisclosed
• Science fails to face the shortcomings of statistics
• We’re so good at medical studies that most of them are wrong
“[A]mong the great majority of active scientists… [reductionism] is accepted without question…. [T]he relationship between the system and its parts is intellectually a one-way street.” (Nobel Physicist, P.W. Anderson, 1972)
“if the stars in the universe were fractally distributed it would not be necessary to rely on the Big Bang theory” (new math applied to a cosmological paradox by Mandelbrot in 1974)
“The revolutionary new discoveries [complexity] researchers have made… could change the face of every science from biology to cosmology to economics.” (Waldrop, 1993)
“[U]nexpected results force a whole new way of looking at the operation of our universe…. [T]he ultimate scope and limitations of mathematics, the possibility of a truly fundamental theory of physics, the interplay between free will and determinism, and the character of intelligence in the universe.” (summary of A New Kind of Science, 2002).
“Reductionism is so rigid in its hopes to ‘entail’ everything in the unfolding of the universe…. I think reductionism is incomplete.” (Stuart Kauffman, 2009)
The “newness” underlying each of these claims is not exactly new; it has been discussed since at least the time of Aristotle. The idea is that the whole is greater than the sum of its parts, and that if the scientific method focuses solely on reductionism (dividing the whole into easily understood parts), it will blind us to true understanding.
What’s new is the scientific approach to studying the bottom-up dynamic known as emergence.
What counts as scientific evidence is always an evolving target, and an issue that can make or break careers. But when human lives are on the line, scientific proof is more than an academic matter. Inasmuch as science is more than self-indulgent theory, challenges to its rules of evidence must be taken very seriously.
All proofs assume some specific form of consistent logic.
All logics are either incomplete or inconsistent.
Therefore, no proof is complete.
Ever since Karl Popper, scientists have been obsessed with syllogisms like this one (a.k.a. deductive reasoning) and its handmaiden, falsifiability.
This obsession has come at the expense of completeness, though: there are dozens of other modes of reasoning, such as induction and abduction, that are equally consistent but that also rely on and promote creativity.
Statistical methods have played an increasing role in all of science since the advent of the computer. Yet their well-known limitations have been summarily ignored by science. In the aforementioned excoriation of statistics, it was observed that “The difference between ‘significant’ and ‘not significant’ is not itself statistically significant.”
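That observation is easy to demonstrate with a toy calculation. In this hypothetical sketch (all numbers invented), two studies estimate the same effect; one clears the p < 0.05 bar and one does not, yet a direct test of the difference between them finds nothing:

```python
import math

def p_value(est, se):
    """Two-sided p-value for a normally distributed estimate est with standard error se."""
    z = abs(est / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Two hypothetical studies estimating the same effect (invented numbers).
p_a = p_value(25.0, 10.0)                # p ≈ 0.012: "significant"
p_b = p_value(10.0, 10.0)                # p ≈ 0.317: "not significant"

# The comparison that actually matters: test the *difference* between
# the two estimates directly, with its own standard error.
se_diff = math.sqrt(10.0**2 + 10.0**2)
p_diff = p_value(25.0 - 10.0, se_diff)   # p ≈ 0.29: not significant

print(p_a, p_b, p_diff)
```

In other words, labeling one study a success and the other a failure reads a meaningful contrast into what is, statistically, no contrast at all; the comparison has to be tested on its own terms.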
In The Black Swan, the author (Taleb) pulls even fewer punches about what these shortcomings mean for scientific pursuit. Ultimately he concludes that the combination of “unknown unknowns,” cognitive biases, and ever-increasing complexity dooms us to an eternally widening gap between what we know about the world and what we can know. Even worse, we will become increasingly ignorant of our ignorance, by construction(!), leading to dark times ahead.
Some scientists claim that the same epistemological conundrum is leading in the exact opposite direction: towards enlightenment. Ironically(?), a modern convergence of science and spirituality is occurring, epitomized by integral theory beginning in the late 1960s but branching into related forms at an accelerating pace. See, for example, a popular documentary, the rise of nonduality, and a new scientific conference.
For centuries, mathematicians and philosophers believed that everything true about the universe could, if not in practice then at least in theory, be written down and codified. But then in 1931, Kurt Gödel proved an incompleteness theorem that dashed this hope even in principle.
It’s taken time for the gravity of Gödel’s discovery to sink into the “grey matter” surrounding fundamental physics, but it’s finally starting to happen: “Some people will be very disappointed if there is not an ultimate theory…. I used to belong to that camp, but I have changed my mind.” (Stephen Hawking, Gödel and the end of physics, 2002)
The reticence of the scientific community to accept the incontrovertible could be seen as an example of science “advancing one funeral at a time” (an aphorism attributed to Nobel physicist Max Planck). But it turns out the core issue is a paradox that is actually thousands of years old.
The issue is self-reference, and it leads to many forms of paradox and uncertainty, including Heisenberg’s famous uncertainty principle. The implications of self-reference for physics were summed up recently as follows: “After decades of debate, disputes over the mathematical rules governing reality remain unresolved” (ScienceNews, 2010)
Self-reference not only gives fits to quantum physicists, it calls into question one of the most basic premises of science, “the assumption that our world operates according to causal laws.” (Science’s First Mistake, 2010)
And it’s not just theoretical: one of the largest scientific industries in the world is desperately trying to fight the real-world effects of self-reference.
It should come as no surprise then that, in the void, theories that were once laughable to mainstream science are getting serious attention.
One such theory, biocentrism, challenges the primacy of physics in the pantheon of science, and replaces it with biology. Moreover, it picks up the thread of observer-dependence laid down by quantum physics and weaves it into a narrative of a conscious universe, reminiscent of integral theory and its ilk.
If, as Einstein noted, “the belief in an external world independent of the perceiving subject is the basis of all natural science”; and if, as Popper proposed, falsifiability is the criterion demarcating science from non-science; then…
We must take biocentrism seriously: unlike every other “new age” theory accused of over-fitting the data, it makes falsifiable predictions, which will likely be settled soon.
Plus, it may be the case that biocentrist theory explains phenomena that are often observed but not politically correct to talk about. Says one noted scientist, “What Lanza says in this book is not new. Then why does Robert have to say it at all? It is because we, the physicists, do NOT say it––or if we do say it, we only whisper it, and in private––furiously blushing as we mouth the words.”
As if on cue, in December of 2010, The New Yorker published a tour de force of scientific journalism by Jonah Lehrer, subtitled, “Is there something wrong with the scientific method?” Here are quotes from two well-respected scientists in the article:
“Whenever I start talking about this, scientists get very nervous” (Jonathan Schooler)
“[Michael] Jennions admits that his findings are troubling, but expresses a reluctance to talk about them publicly. ‘This is a very sensitive issue for scientists’”
What they are talking about is a mysterious phenomenon that resembles the bizarre observer-dependent causality of quantum mechanics, but at the scale of human population studies. Referred to alternately as the “decline effect” and “cosmic habituation”, the effect has been shown to be both repeatable and predictable. In a nutshell, the decline effect is as follows:
Many well-established, scientifically validated “facts” seem to decline in their validity over time as they are retested.
This is despite collective best efforts to recreate the exact conditions of initial experiments and account for red herrings like flawed statistical application, regression to the mean, publication bias, reporting bias, confirmation bias, survivorship bias, “significance chasing,” financial and other conflicts of interest, data collection issues, improper experimental controls, a priori faulty experimental design, and so on.
And yes, if you were paying attention, you would notice that the decline effect has a strong element of self-referentiality….
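To see why regression to the mean heads the list of red herrings, here is a toy simulation (every parameter invented): thousands of hypothetical labs measure a small true effect, only the most striking initial results clear a publication threshold, and each published result is then honestly replicated. Selection alone makes the replications look like a decline:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2    # the small, real underlying effect (arbitrary units)
NOISE_SD    = 1.0    # measurement noise in any single study
THRESHOLD   = 1.5    # only effects this striking get published

initial, replication = [], []
for _ in range(10_000):                      # 10,000 hypothetical labs
    first = TRUE_EFFECT + random.gauss(0, NOISE_SD)
    if first > THRESHOLD:                    # the publication filter
        initial.append(first)
        # An honest, independent replication of the published finding:
        replication.append(TRUE_EFFECT + random.gauss(0, NOISE_SD))

print(f"mean published effect:   {statistics.mean(initial):.2f}")
print(f"mean replication effect: {statistics.mean(replication):.2f}")
```

The published effects cluster well above the true value, and the replications fall back toward it. The unsettling claim of the article, of course, is that the decline effect appears to persist even after mechanisms like this one have been controlled for.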
. . .
When reading about the decline effect, biocentrism, and nonduality, it’s difficult for us scientifically trained folk to turn down our highly attuned falsification mechanisms long enough to allow the synapses to connect the dots. But the dots do seem to be connecting themselves lately.
The year after receiving a Master of Science in Computer Science / Artificial Intelligence, I read a book that I still recommend to people as the most important book I’ve read to date. It begins with the following claims:
“[T]he mind is inherently embodied, reason is shaped by the body, and since most thought is unconscious, the mind cannot be known simply by self-reflection.”
“What universal aspects of reason there are arise from the commonalities of our bodies and brains and the environments we inhabit…. [R]eason is not entirely universal.”
“[W]e have no absolute freedom in Kant’s sense, no full autonomy.”
“The utilitarian person… does not exist…. People seldom engage in a form of economic reason that could maximize utility.”
“The phenomenological person, who through… introspection alone can discover everything there is to know about the mind and the nature of experience, is a fiction.”
“There is no poststructuralist person-no completely decentered subject for whom all meaning is arbitrary, totally relative, and purely historically contingent, unconstrained by body and brain.”
“[T]he classical correspondence theory of truth is false…. [T]hat statements are true or false objectively, depending on how they map directly onto the world….”
“There is no such thing as a computational person…. The neural structures of our brains produce conceptual systems and linguistic structures that cannot be adequately accounted for by formal systems that only manipulate symbols.”
“[T]here is no Chomskyan person, for whom language is pure syntax…. [C]entral aspects of language arise evolutionarily from sensory, motor, and other neural systems that are present in “lower” animals.”
At the time, these statements seemed scientifically blasphemous, but I suspect they would seem mundane and obvious to a recent M.S. graduate today. Though I sensed the inherent truth in them, it has taken me a while to truly come to terms with the implications for my own understanding of the world.
As best as I can articulate, the questions being raised by “all of this” are as follows:
These are questions that I still do not know the answer to, but I am getting more comfortable with the not knowing.
What I like best about the practice of embracing “I don’t know” is that it allows us to step back and get clarity on questions like, “Are there any universal truths?” And when we do step back, we realize that the question is not about “truth” or “universal” but rather “are there”.
. . .
Which brings us full circle to the question of independence.
When ScienceNews ran the article chastising scientists for rampant misapplication of statistical methods, the word “independence” was used only once (in the context of replicating an observed finding).
This is a curious fact considering that the real elephant in the room, statistically speaking, is always whether our observations are truly independent of one another. If they are not, then the statistical methods scientists use violate a fundamental assumption on which they rest.
Of course, we all know that there is no such thing as true independence, don’t we? The butterfly effect and so on. But we use statistical theory and make probabilistic arguments all the time, both in science and in life. Without it, we’d be paralyzed to the point of inaction.
As scientists though, we’ve got to admit, it bothers us (doesn’t it?) that one of the fundamental assumptions of our daily activity is false. That we lie to ourselves every day so that we can get on with our work.
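That broken assumption has teeth. As a hedged sketch (the numbers are invented), here is what happens to an ordinary one-sample t-test when observations assumed to be independent are in fact serially correlated:

```python
import random
import statistics

random.seed(0)

def ar1_sample(n, rho=0.8):
    """n observations with true mean 0 but strong serial correlation."""
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0, 1)
        out.append(x)
    return out

def rejects_null(data, t_crit=1.984):        # ~ t_{0.975, 99} for n = 100
    """Naive one-sample t-test that *assumes* independent observations."""
    n = len(data)
    t = statistics.mean(data) / (statistics.stdev(data) / n**0.5)
    return abs(t) > t_crit

trials = 2000
false_pos = sum(rejects_null(ar1_sample(100)) for _ in range(trials))
print("nominal false-positive rate: 5%")
print(f"observed false-positive rate: {100 * false_pos / trials:.0f}%")
```

The true mean really is zero, yet the test rejects far more often than its nominal 5%, because correlation makes the naive standard error much too small. The butterfly effect, rendered in fifteen lines.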
The biggest elephant of all, though, is the question of whether the world exists independent of our observing it. You and Descartes may be sure that it does, but I still don’t know.
There’s a question that was posed to me in my freshman calculus class in college, that I thought I knew the answer to, but which has nagged at me ever since: “Do numbers exist, or are they constructs of the human mind?”
To this day, I ask the same question of luminaries at scientific gatherings, and I have yet to get a definitive answer. Sometimes I ask it in a slightly different way: “Is math inherently part of the structure of the universe?” I have yet to get a satisfying answer.
. . .
None of this existential teeth gnashing is new in science. Thomas Kuhn coined the phrase “paradigm shift” to describe revolutions of thought such as the transition from Ptolemaic to Copernican cosmology in the 16th century, or the shift from classical to quantum mechanics in the last century.
But what is new is the accelerating pace of paradigm shifts within the span of a single human lifetime. In other words, we are at a crossroads in history at which multiple pillars of the existing scientific method are cracking at once.
What exactly does this mean?
I don’t know.
But the levee is about to break.
And in the spirit of scientific progress I will give you my own falsifiable prediction (or is it resolution?) as the new year approaches:
Something big is happening. Something positive. We can all feel it, can’t we? It is hard to define, but we will all recognize it in hindsight. And when we do, historians will agree that the levee broke in 2012….