Why Falsifiability is Insufficient for Scientific Reasoning

In my post about The Process it turns out that I stepped on a pedagogical minefield when describing the Anthropic Principle (AP).  Two preeminent physicists had a very public argument a while ago in which one called the AP unscientific because it’s unfalsifiable.  I will return to that in a moment since it’s the crux of what’s wrong with Science right now, but I need to get the terminology issue out of the way first.

Lee Smolin claims that AP is bad and favors a Cosmological Natural Selection view instead (on grounds of falsifiability).  I believe this is a false dichotomy and that they are really one and the same.  Here’s why:

  1. Normally natural selection requires some form of “replication” or it’s not actually natural selection.   But replication is not needed if you start with an infinity of heterogeneous universes.  In other words replication is simulated via the anthropic lens over the life-supporting subset of all possible universes.
  2. Replication is a red herring anyway since it presupposes time (or at least well-ordered events).
  3. I conjecture that the distribution of universes is unimportant, as long as all possible universes are represented in the multiverse (i.e. the distribution can be random).

It’s worth noting that this is purely a metaphysical/logical argument and says nothing about specific physics or cosmologies.  One of the things that makes it hard to see why this is true from reading the Smolin/Susskind debate is that they bounce between the logical argument and various proposed, unimportant details (like whether black holes are the replication mechanism in question or not).

More importantly though, we hear scientists call one another “unscientific” whenever they propose a hypothesis that is unfalsifiable.  Here’s why I think that’s problematic:

  • Ever since Popper, science has been obsessed with falsifiability, which is really about assuring consistency.
  • Gödel proved that there are true statements that cannot be proved.
  • More specifically he unpacked “truth” into completeness + consistency and showed that we can’t have both simultaneously.
  • Due to extant complexity (let alone potential infinity) completeness is out the window.
  • If science is only concerned with consistency, then it’s a pointless endeavor; I can sit here all day and generate tautologies that are neither interesting nor useful.
  • If science is about truth, then there needs to be a way of expanding the set of discovered tautologies along the completeness dimension as well.
  • There are at least three formal logical systems which do that without sacrificing consistency: deduction, induction and abduction.
  • Only deduction is formally falsifiable.
  • But science relies on induction and many other forms of evidence too (statistical reasoning, clinical trials, simulation, storytelling, etc.); this is the “democracy” Smolin himself refers to in his TED talk.
  • The structure of the Anthropic Principle is abduction.  So is the structure of Occam’s Razor.  And depending on who you believe Bayesian inference is either induction or abduction.
  • Conjecture: Newton’s Calculus is a formalism based on abduction.
  • Conjecture: strong emergence (aka novel emergence) is fundamentally abduction.  This may be why science has such a hard time with it.
  • Conjecture: natural selection is fundamentally emergence/abduction.  This may be why Creationists have such a hard time with it.
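To make the distinction among the three inference modes concrete, here is a toy sketch (all names and the example rule are invented for illustration): deduction runs a rule forward from cause to effect, induction generalizes a rule from repeated observations, and abduction runs a rule backward from an observed effect to a hypothesized cause.

```python
# Toy illustration of deduction, induction, and abduction,
# using the classic rule "if it rains, the lawn gets wet".

RULE = {"rain": "wet_lawn"}  # cause -> effect

def deduce(cause):
    """Deduction: from the rule and an observed cause, conclude the effect."""
    return RULE.get(cause)

def induce(observations):
    """Induction: from repeated (cause, effect) pairs, generalize a rule."""
    pairs = set(observations)
    if len(pairs) == 1:
        cause, effect = next(iter(pairs))
        return {cause: effect}
    return None  # conflicting evidence; no single rule induced

def abduce(effect):
    """Abduction: from the rule and an observed effect, hypothesize a cause.
    Unlike deduction, this is not truth-preserving: the lawn could be wet
    for other reasons (e.g. a sprinkler)."""
    candidates = [cause for cause, eff in RULE.items() if eff == effect]
    return candidates[0] if candidates else None

print(deduce("rain"))      # wet_lawn
print(abduce("wet_lawn"))  # rain (a hypothesis, not a certainty)
```

Note that only `deduce` is guaranteed to be sound; `abduce` is exactly the inference-to-the-best-explanation structure attributed above to the Anthropic Principle and Occam’s Razor.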

There is no one true definition of what constitutes “Science.”  We hear reference to the so-called Scientific Method.  Ultimately, the holy Scientific Method is whatever scientists as a whole do; no more and no less.  To say otherwise is ad hominem.  Now I’m not claiming that ad hominem argument shouldn’t be counted as scientific evidence, but anyone who bows before Popper would.  The irony there is that ad hominem is a form of Bayesian inference.  And if you’re keeping score, that means that anyone who claims you are being unscientific unless you forsake all unfalsifiable idols is themselves committing the sin of inconsistency.  Which, by their own logic, means they are unscientific too.

To which I respectfully submit, their pants are on fire, hanging from a telephone wire.  And that’s a scientific fact.

  • I have a better idea. Why don’t you produce a final theory or a complete theory of quantum gravity, and then you really will have the right to spout off about unobservable multiverses and the lines that define science. Otherwise, you’re just worshiping fairy tales while proclaiming that the scientific method should include them.

    • Rafe Furst

      And so coming up with one of an infinite set of possible falsifiable models in an ad hoc way makes me scientific, but calling for more creativity and rigor in the process of coming up with such falsifiable models doesn’t?

      I’ve got an even better idea. I will lay out a set of logical (falsifiable) arguments for how scientific method can be improved that keeps falsifiability at its core but does not worship it as the only thing necessary.

      Stay tuned, it will be a long set of posts, and this last one was the lead in anyway.

      • I’m pretty sure that a complete theory isn’t “ad hoc”.

        • Rafe Furst

          Complete theories are by definition unfalsifiable (see Gödel).

          Regardless, the point is that the scientific method as stated says nothing about theory generation; it only talks about how to check a theory’s logical consistency. In practice scientists use years of experience and instinct to generate theories. But we know how biased the human mind is. So we need more rigorous and less arbitrary ways to generate theory, which will actually get us closer to completeness than intuition alone.

          Scientific method is not wrong, just incomplete. And we know how to do better, it’s just institutional inertia and bad incentives that keep us from doing so.

          • I have an objection. Gödel proved that there are statements that are true, but unprovable, within the framework of a formal system powerful enough to contain Peano arithmetic. It does not say anything about other formal systems, such as Presburger arithmetic (which was proven both consistent and complete). Thus, extending Gödel’s results beyond their actual scope (Principia Mathematica and related formal systems) is an unjustified generalization (until properly justified, that is).

            • Rafe Furst

              Thanks for the correction, Fabio.  You are right about Presburger arithmetic being both complete and consistent.  But it’s also a trivial system, incapable of expressing self-referential statements.  In other words, the completeness and consistency of Presburger can’t be expressed in Presburger itself.

              Any formal system of logic/math that is non-trivial, and thus useful in doing science, is powerful enough that it must be either incomplete or inconsistent.

  • plektix

    Quibble: Newton’s calculus can be formally deduced from the axioms of the real numbers, which themselves can be constructed using second-order logic.

    But I agree with your larger point: The Popperian statement that science can only ever disprove hypotheses by experiment may be formally valid, but it ignores the reality of what scientists actually do.

    • Rafe Furst

      Good quibble, because it illuminates the issue: all self-consistent axiom sets constitute some particular theory, i.e. an unfalsifiable assumption. Real numbers are no exception. Nor, by the way, is modus ponens, the inference rule that defines logical deduction itself. In the end, there is no such thing as absolute certainty. We are left with only evidence and persuasion.
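As an aside, that last point can be made very concrete: in a proof assistant such as Lean, modus ponens is not derived from anything deeper; it is literally bare function application, taken for granted by the system itself.

```lean
-- Modus ponens as function application (Lean 4 sketch):
-- given a proof h of P → Q and a proof p of P, the term h p is a proof of Q.
example (P Q : Prop) (h : P → Q) (p : P) : Q := h p
```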

  • Alex Golubev

    Here’s a simpler argument. The scientific method doesn’t result in cumulative progress because scientific revolutions are incommensurable. I’ve been meaning to post on it, but … haven’t. Language is a necessary condition for expressing a theory or a body of knowledge. Scientific revolutions reorder a lot of things, answering some questions while NOT answering questions that some old theories had already answered. All of this is RESOLVABLE, but not through simply stating a hypothesis and gathering data. That’s WHY we have schools of thought and not one cumulative science. Ultimately it’s all about folks having egos because they’re afraid of death :)

    skim this one focusing on incommensurability of language:

  • kevindick

    Quick quibble. Hopefully deeper thoughts soon.

    Occam’s Razor and Bayes Law are not abductive. _Applying_ them is deductive (it’s a computation). _Believing_ them is inductive (they seem to work).

    Deciding what hypotheses to run through them _is_ abductive. But that’s true for any useful hypothesis generation of which I can conceive.

    Their invention may also have been abductive. But that’s also probably true for any hypothesis testing system (because such systems are meta-hypotheses).
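The distinction in this comment can be made concrete with a small sketch (the numbers are invented for illustration): given a prior and likelihoods, applying Bayes’ rule is a deterministic computation, i.e. deductive; trusting the rule, or choosing which hypothesis to feed it, lies elsewhere.

```python
def bayes_posterior(prior, likelihood, likelihood_given_not):
    """Apply Bayes' rule: P(H|E) = P(E|H)P(H) / P(E).
    The application itself is pure computation -- deduction."""
    p_e = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / p_e

# Hypothetical numbers: prior P(H) = 0.01, P(E|H) = 0.9, P(E|not H) = 0.05
posterior = bayes_posterior(0.01, 0.9, 0.05)
print(round(posterior, 3))  # 0.154
```

Nothing in the computation says where the hypothesis H or the prior came from; that generation step is the abductive part.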

  • I find it incredibly ironic that at its core, the Viennese, verifiability crowd suffers from their own theory of truth. That is, verifiability is itself unverifiable. There is a lesson here. “Worshipping at Popper’s altar” has done science no good. It severed its ties to any sort of robust metaphysics (if there is even such a thing.) Without a metaphysical grounding, all science is disconnected from reality (if there even is such a thing.) I think these are more than just philosophical tempests-in-teapots. As we approach closer and closer to a theory of everything, we also approach closer and closer to where science interfaces with metaphysics. What this metaphysics entails, I don’t know, but I believe it essential to a philosophically “complete” science. Very few scientists would admit they operate with any metaphysical assumptions, but this is plainly not the case. The mere believing in external reality, despite its unverifiability, requires a tiny nugget of faith—a metaphysical bridge. One that seems built into us for pragmatic reasons. But to deny the obvious existence of this bit of faith seems pointless and counterproductive. If anything we should be examining these hidden base assumptions and their impact on the structures built upon them. (Ok, I’m rambling…)

    Anyhow, yet another fascinating post. Keep up the good work.

  • kevindick

    There are two interacting problems here. First, there are those who treat “scientific” as a signal for “high status”. Obviously, it makes sense to try and exclude membership from “high status”. “Falsifiable” is highly correlated with beliefs which scientists believe are low status, so on average it’s an effective screen.

    Now, sometimes scientists will use falsifiability as a way of increasing their status _within_ science. This is counterproductive because falsifiable is only a subset of what I would call “useful”. (I think it’s a subset, though I’m not sure there isn’t a small region of “falsifiable” outside of “useful”.)

    I don’t care about truth per se. I don’t care about the status of my beliefs per se. I care about winning. So to me, something is “scientific” if it is useful. If I can increase my chance of winning using this belief, then I want to have this belief.

    So my second problem is people who spend time debating falsifiable without focusing on useful. I’d love to know more about the region of “useful” that is not “falsifiable”. But I couldn’t care less about anything that’s outside both regions.

    • Rafe Furst

      Agreed 100%. I have a whole series of posts I’m lining up (and have others interested in contributing as well) that constitute a manifesto of sorts for “science 2.0”. These will focus not on how science today is broken but rather on how to make it more useful. We’ll use the Science 2.0 tag/category if you’d like to contribute to that as well!

  • Paul

    I think you would enjoy reading _The Methodology of Scientific Research Programmes_ by Imre Lakatos. Lakatos proposes that science is a competition and should be evaluated by progress, not falsification.