Why I No Longer Chase Statistical Trends (and Why You Probably Shouldn’t Either)

Mind over Model
A few weeks ago, I found myself in a conversation about whether a particular research question could “only” be answered with a DAG. Not a DAG in the everyday sense, but a DAG™, the modern badge of causal purity. Someone confidently declared: “Well, only a DAG can answer this.” (Find out more about DAGs here).
And there I was again, wondering how researchers have managed to answer questions over the past decades without collapsing into a puddle of epistemological despair. Surely some knowledge must have slipped through the cracks before? That’s really why I’ve grown skeptical of methodological trends: Every few years a new method arrives promising that everything we did before was wrong, until next year, when another method arrives to declare that one wrong too.
Meanwhile, when you listen to the genuinely good statisticians and econometricians — the ones who have been quietly producing excellent work for decades — they almost never talk that way. They don’t say: “This method is wrong.” They say: “If you use this method, you are making the following assumptions.” And that distinction matters. If your software converges and produces results, the model isn’t “wrong.” It runs. The real question is: What inferences are you allowed to draw, given the assumptions you just bought into?
No method is exempt from this. Every method has assumptions. And, as Pedro Sant’Anna puts it beautifully in his lecture slides: Our job as researchers is to assess the pros and cons of each method in their ability to answer the questions we (and the business/policy makers/stakeholders) care about.
Progress Is Real… But So Are Its Blind Spots
Let me be clear: I’m not anti-progress. We do have better tools, better computation, sharper theory. But that doesn’t automatically make our research “better.” Replications haven’t suddenly skyrocketed because we switched from SPSS to Python. Publication bias is alive and thriving (now with prettier tables). And many of the simulation studies used to validate new methods model worlds so unrealistically pristine they make children’s television look gritty.
So yes, progress is real. But over the years, I’ve developed a real appreciation for a more measured (pun intended) approach to improving our tools, rather than the bold promise of the next big thing that supposedly “solves it all.” Nothing has reminded me of this more vividly than Bueno de Mesquita and Fowler’s (2021) Thinking Clearly with Data, which presents their beautifully simple “favorite equation”:
Estimate = Estimand + Bias + Noise
The estimand is the thing you actually want to know — the true effect or quantity of interest. The estimate is what you end up getting. Bias is systematic distortion between the two. Noise is everything random that refuses to behave.
We never observe the estimand directly. We only approximate it, and every approximation drags bias and noise along like barnacles on a ship. No shiny causal ML method, no DAG, no deep neural net can completely wash these off in real data. At best, they rearrange the barnacles. If you don’t know what your estimand really is, no model — old or new — will save you.
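To make the decomposition concrete, here is a minimal simulation sketch. It is not from the book; the numbers and the confounded estimator are invented for illustration. We fix a true effect (the estimand), build a naive estimator that suffers from omitted-variable bias, and watch each estimate come back as estimand plus bias plus noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# The estimand: the true effect of x on y that we would love to know.
ESTIMAND = 2.0

def one_study(n=200):
    """One simulated 'study' with a confounder we fail to adjust for."""
    confounder = rng.normal(size=n)
    x = confounder + rng.normal(size=n)                       # treatment picks up the confounder
    y = ESTIMAND * x + 1.5 * confounder + rng.normal(size=n)  # so does the outcome
    slope, _ = np.polyfit(x, y, 1)                            # naive slope of y on x
    return slope

estimates = np.array([one_study() for _ in range(2000)])

bias = estimates.mean() - ESTIMAND   # systematic distortion
noise = estimates.std()              # everything random that refuses to behave

print(f"estimand: {ESTIMAND:.2f}")
print(f"average estimate: {estimates.mean():.2f} (bias ~ {bias:.2f})")
print(f"spread of estimates (noise): {noise:.2f}")
```

Run it and the naive slope lands well above 2.0 no matter how many studies you average: more data shrinks the noise, but the bias sits there untouched, barnacle-like, until you change the design rather than the sample size.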
When Methods Turn Into Fashion Accessories
This is where the trend-chasing bothers me. Every year brings another wave of implied moral judgment: Regression? Naive. Fixed effects? Cute. Matching? Outdated. Anything invented before your PhD coursework? Suspicious.
I like machine learning. I use it. I teach it. But there is a difference between liking a tool and thinking clearly with it. With a simple regression, I can sit down with ten numbers, compute everything by hand, and understand every step. With many ML methods that clarity vanishes. Defaults take over; optimizers do mysterious things; and if someone pushes me deep enough on intuition, there is a non-zero chance I start drawing vague diagrams on a whiteboard to buy time. That doesn’t make ML bad. It simply means that methods only help thinking if we can reason through them.
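To show what “compute everything by hand” means, here is a tiny sketch with ten invented numbers (the data are purely hypothetical): the OLS slope and intercept fall out of two short formulas you can verify on paper.

```python
# Ten made-up observations: say, hours studied and exam scores.
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [52, 55, 61, 60, 68, 70, 71, 78, 80, 85]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# OLS by hand: slope = sum((x - mean_x) * (y - mean_y)) / sum((x - mean_x) ** 2)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
# Every number above can be checked with pencil, paper, and a little patience.
```

That transparency is the point: when the result surprises me, I can trace exactly which number did the surprising.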
So Where Does This Leave Us?
Which brings me back to that “only DAGs can answer this” conversation. The problem isn’t DAGs — DAGs are powerful and elegant when used thoughtfully. The problem is the mindset: the belief that correctness lives in the tool rather than in the clarity of assumptions and the suitability of the design. After a decade of analyzing data, teaching methods, reviewing papers, and staring at margins plots at unholy hours, I’ve learned this much:
Understand your estimand before touching your estimator.
Expect bias and noise; they’re loyal companions.
Don’t worship statistical trends; use what fits your question.
Be wary of anyone who claims there is only one “right” way of doing a certain analysis.
If you want flashy certainty, statistics is the wrong hobby. If you want humble approximations, thoughtful assumptions, and occasional existential crises… welcome aboard, and don’t mind the barnacles!