Troubling Statistics

In his eloquent article, Breaking the Galilean Spell (worth reading in its entirety), Stuart Kauffman has finally given me the words to articulate the uneasiness I feel about statistical reasoning in an increasingly interconnected world:

…[Can] we make probability statements about the evolution of the biosphere? No. Consider flipping a coin 10,000 times. It will come up heads about 5,000 times with a binomial distribution. But, critically, note that we knew beforehand all the possible outcomes, all heads, all tails, all 2 to the 10,000 possibilities. Thus we knew what statisticians call “the sample space” of the process, so could construct a probability measure.

Can we construct a probability measure for the evolution of the biosphere into its Adjacent Possible? No. We do not know the sample space!
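Kauffman’s coin-flip example is easy to check directly. The sketch below is mine, not Kauffman’s: it uses Python’s exact integer arithmetic to sum binomial probabilities over the fully known sample space of 2^10,000 outcomes, which is precisely what lets us make a probability statement at all.

```python
from math import comb

n = 10_000                 # number of coin flips
total = 2 ** n             # size of the known sample space

# Because every outcome is enumerable in advance, we can make an exact
# probability statement: how likely is the head count to land within two
# standard deviations (sd = sqrt(n * 0.5 * 0.5) = 50) of the mean, 5000?
lo, hi = 5000 - 100, 5000 + 100
p = sum(comb(n, k) for k in range(lo, hi + 1)) / total
print(f"P({lo} <= heads <= {hi}) = {p:.4f}")   # roughly 0.95
```

No such enumeration is available for the biosphere’s Adjacent Possible, which is the crux of Kauffman’s objection.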

I won’t belabor the point except to say that I view the increasing irrelevance and danger of statistical reasoning as the essential argument of _The Black Swan_, the fundamental reason why the global financial crisis occurred, and one of the main things we humans need to understand better if we are going to solve the problems facing the world.

  • Tiltmom
  • kevindick

As you can imagine, this post disturbs me greatly. I think just about everything about it is wrong. In general, as with our discussion of Bayes Rule, I think you probably have an overly concrete view of what statistical reasoning is. But I’m going to count to 10,000 before I respond.

  • kevindick

    Let me start by asking you to unpack what you mean by “statistical reasoning”. This covers a lot of potential ground.

    Following your “listening” guidelines, I’ve tried to figure out how you could be right. I can think of three cases:

    – You think specific common statistical tests will become less useful.

    – You think particular assumptions used in classes of common statistical tests will become less representative.

– You think typical statistical hypothesis testing with 95%/99% confidence intervals does not provide as good guidance as we think.

    I would agree with all these statements. However, they are a small subset of the universe of what I would call statistical reasoning.

    • Rafe Furst

      Yes to the three cases.

      I’m all ears on the rest of the universe!

      • kevindick

I figured since I put in the work to come up with some areas where your statement could be considered right, you could put in some work and define what you call “statistical reasoning”.

It seems rather silly for you to dismiss an entire field of study in 2 short paragraphs and force me to defend it with a 5,000-word essay on its foundations and applications.

        It doesn’t give me much incentive to engage with you on these types of topics.

        • Rafe Furst

          I’m confused. I thought you had something in your back pocket you were going to enlighten me with and were seeing if we had the same starting point. Hence “all ears”.

          I have been implicitly defining what I believe to be “statistical reasoning” every time I bash it on this blog. And since I don’t know what I don’t know, how can I define it?

          • kevindick

I’m not sure how you got confused. I directly asked you to “unpack what you mean by ‘statistical reasoning’.”

Surely you could attempt to define what you do think you know about this topic? I would point out that it is hard for me to know what you don’t know if you don’t tell me what you do know.

            Obviously, I don’t have enough implicit information about your definition. Otherwise I wouldn’t have asked directly.

            • Rafe Furst

The fundamental problem I have with “statistical reasoning” is that most people who use it claim (and seem to believe) that it increases one’s understanding of the domain in question. In reality it characterizes the uncertainty, which is a different beast. I contrast this with explanatory models (e.g. Newtonian mechanics, natural selection, the Navier-Stokes equations, etc.). I’m not claiming statistical reasoning isn’t important, just that it’s so often misapplied, mischaracterized and misunderstood as to often be the problem, and not the solution.

              Most of the further unpacking I would do has been better explicated by the article Kim posted above. Happy to dissect any of those claims further if you like.

  • kevindick

    Taking things back up to the top level…

    OK, so now I understand your concern. But you are wrong. Your objection is technically known as the problem of causal inference. But the “explanatory models” you reference are actually on no firmer ground. Different ground, but not firmer.

    To get over the problem of causal inference, you either have to assume invariance in the domain and homogeneity of the objects of study (what your explanatory models do) or you have to assume similarities of populations and independence of certain events (what statistical models do). Neither of these assumptions can ever be completely proven.

    For an introduction to this topic, I recommend:

    The proof is in the pudding. Does your model outperform in making predictions? Some statistical models do. They are at least somewhat right.

Now, the article Kim posted above has some decent points. Many scientists who use statistics don’t actually understand their foundations, so they naturally screw things up. The same can be said for explanatory models. For a good response, see here:

  • kevindick

    There is a separate point that is also worth noting. You contrast statistical with explanatory models (henceforth referred to as structural models because that is the distinction I commonly see).

    However, all structural models are validated with statistics. I defy you to find a refereed experimental paper testing a structural model that doesn’t have a ton of statistics. You can’t know if your structural model is any good without building a statistical model.

Of course, most people believe the converse is also true: you can’t build purely statistical models without some structural hypothesis. Though a small minority of social scientists believe so-called “atheoretic” statistical models are valid. I disagree (even though I have a patent that one could interpret as proposing to generate predictively useful atheoretic models).

    So in the end, I contend you can’t do any useful causal inference without both a structural and statistical model.

    • Rafe Furst

      I agree with everything you have said in your last two comments. Where we differ is an empirical observation (which I suppose could be statistically validated :-) That is…

      I believe statistical models are (for a number of structural reasons) more often misunderstood, misapplied, misleading and used in unintentionally damaging ways than structural models. The biggest structural reason for this is that we are not hard-wired for statistics but we are for the basis of structural models (e.g. pattern matching, visual reasoning, storytelling, etc).

Thus, when somebody gives me a statistical argument, my priors are set to a much higher burden of proof than for structural arguments. Furthermore, because statistical reasoning goes “against the grain” of our natural strengths as humans, finding the errors is harder. So if we are looking for falsifiability, it’s much more practical to achieve with structural models.

      I could go on, but I think you get the point.

      • kevindick

        I do get your point, but I don’t agree. I’ll stack up my empirical evidence against yours any day.

Our brains are not hardwired to evaluate any but the most trivial structural models either. How long did it take people to figure out gravity and natural selection, and for these ideas to become generally accepted?

        Moreover, the hardwiring humans do have is actually counterproductive in evaluating structural models due to things like narrative and availability bias.

The only way to evaluate whether the effects you observe are “real” or spurious is to use statistics. Your argument about falsifiability demonstrates this pretty clearly. Please explain to me how you can falsify any but the most trivial structural model without using statistics?

        • Rafe Furst

You set up the argument in formats that our brains are hardwired to falsify automatically, e.g. visualizations in which any sighted human can spot anomalous patterns.

I understand that we have not spent a good deal of time on this endeavor in scientific history, but I believe that is changing (I see it on the margins), and this is an integral part of what I’m calling “science 2.0”.

          • kevindick

            Yes and those visualizations require graphs, which are of course based on statistical representations. To make sure the graphs aren’t misleading, you need a statistical model matched to the structural model.

            And, as you well know, humans are famous for spotting spurious patterns. You need to therefore validate identified patterns against statistical models for significance.

            There’s no magic that will get you away from the need for statistical reasoning. You can hide it. But it will be there.
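Kevin’s point about validating eyeballed patterns can be sketched with a simple permutation test (the data below are made up for illustration): shuffle the pooled observations many times and ask how often a difference at least as large as the one you “saw” arises by chance alone.

```python
import random

random.seed(0)

# Hypothetical data: two small samples where a viewer "sees" a difference.
group_a = [2.1, 2.5, 2.8, 3.0, 2.6]
group_b = [2.0, 2.2, 2.4, 1.9, 2.3]
observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Permutation test: under the null hypothesis the group labels are
# arbitrary, so we reshuffle them and recompute the difference in means.
pooled = group_a + group_b
n_a = len(group_a)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
    if diff >= observed:
        count += 1
p_value = count / trials
print(f"observed diff = {observed:.2f}, permutation p-value ~ {p_value:.3f}")
```

A small p-value says the pattern survives statistical scrutiny; a large one says the eye was probably fooled, which is exactly the spurious-pattern risk at issue.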

  • Paul

    As best as I can tell, the form of statistical reasoning you regard as irrelevant and dangerous employs frequentist hypothesis testing. If so, I totally agree.

But are you familiar with statistical decision theory? Unlike frequentist statistics, it has a coherent foundation: game theory. See, for example, _Mathematical Statistics: A Decision Theoretic Approach_ by Thomas S. Ferguson (Chris and Marc’s dad). Roughly speaking, the fundamental theorem of statistical decision theory states that the game-theoretically optimal statistical inference is precisely Bayes’ rule applied to the appropriate prior distribution.

    The main drawback of statistical decision theory is that it is very difficult to apply, both analytically and computationally.
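As a toy illustration of Paul’s point (mine, not from Ferguson), here is the simplest case where Bayes’ rule applied to a prior does all the work: a Beta prior on a coin’s bias, updated by conjugacy after observing flips.

```python
# Beta-binomial conjugate update: the posterior after observing heads and
# tails is again a Beta distribution, so Bayes' rule reduces to counting.
def update(alpha, beta, heads, tails):
    return alpha + heads, beta + tails

alpha, beta = 1, 1                         # Beta(1,1) = uniform prior on bias
alpha, beta = update(alpha, beta, heads=7, tails=3)
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)                      # 8/12, about 0.667
```

The analytical difficulty Paul mentions arises when no such conjugate shortcut exists and the posterior must be computed numerically.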

    • Anonymous

I have been trying to get Rafe into Bayesian analytic methods for some time now. You wouldn’t happen to have read Andrew Gelman’s books on this, would you? I just picked up Carlin and Louis on Bayesian data analysis and am wondering if Gelman is worth buckling down and digesting.

      • Paul

_Bayesian Data Analysis_ by Gelman, Carlin, Stern & Rubin is OK, but pretty dry. E.T. Jaynes provides a much better introduction to the Bayesian perspective in _Probability Theory: The Logic of Science_. It’s lively, with entertaining and thought-provoking examples. He does take one wrong turn in dismissing statistical decision theory because it focuses on worst-case analysis. But in competitive arenas, such as the financial markets, worst-case analysis is essential. And even in areas that seem non-adversarial, game-theoretic optimality provides robustness to inference.

    • Rafe Furst

      I believe we are in agreement (even Kevin and I). Where I take a radical stance is exactly how useful statistical reasoning of any kind (including Bayesian) is in science. My feeling is that it’s one type of reasoning amongst many, though it’s treated as the only relevant form all too often.

      If I had to quantify its overall relevance I’d say it’s 10% useful, but of course that number could depend on whether this statement is true or not :-)