Alasdair Munro wrote a blog post titled “Why Bad Research is Worse Than No Research” about the detrimental effects of low-quality observational studies in medicine.
There’s a flood of poorly conducted observational studies that lack rigorous methodological standards and produce misleading or unreliable findings. Unlike randomized clinical trials, which are the gold standard for medical evidence because of their stringent oversight, observational research faces no comparable regulatory scrutiny. This gap often leads to flawed studies that fail to account for confounding variables. You can never be sure that your results have not been distorted by a variable you haven’t even measured (hidden confounding), and even when the confounders you did measure are adjusted for accurately, residual confounding can remain.
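The hidden-confounding point can be made concrete with a toy simulation (a sketch with made-up variables, not drawn from the post): an unmeasured factor U drives both the exposure and the outcome, so a naive comparison of exposed and unexposed groups shows a large difference even though the exposure has no causal effect at all.

```python
import random

random.seed(0)

# Hypothetical setup: U is a hidden confounder (say, overall health).
# The exposure X is more likely when U is high, and the outcome Y is
# driven ONLY by U -- X has zero causal effect on Y.
n = 10_000
rows = []
for _ in range(n):
    u = random.random()                  # hidden confounder, uniform on [0, 1]
    x = 1 if random.random() < u else 0  # exposure probability equals U
    y = u + random.gauss(0, 0.1)         # outcome depends on U alone
    rows.append((x, y))

exposed = [y for x, y in rows if x == 1]
unexposed = [y for x, y in rows if x == 0]

# Naive comparison of group means, as an observational study without
# adjustment would do: the gap is large (about 1/3 here) despite the
# true causal effect of X being exactly zero.
naive_effect = sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)
print(naive_effect)
```

Since U is never recorded, no amount of adjustment for the measured variables can remove this bias, which is the sense in which observational results can be undermined by variables you never even collected.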
It’s the usual story: many studies are conducted by researchers with limited methodological training, often using inadequate data sources like electronic health records not designed for research. Low-quality studies squander valuable resources—time, funding, and effort—that could be directed toward better, more impactful research. This “research waste” may contribute to misinformation rather than advancing medical knowledge. Repeated publication of contradictory or implausible findings (e.g., studies that alternately claim coffee causes or prevents cancer) erodes public trust in scientific research and medicine. The “publish or perish” culture in academia incentivizes quantity over quality, encouraging researchers to produce quick, flashy studies that may lack robustness.
And it’s also the usual proposed solutions: enhancing methodological training for researchers and clinicians, and shifting institutional incentives to value accuracy over publication volume.
On the question of incentives, he refers to this commentary: Methodology over metrics: current scientific standards are a disservice to patients and society. “We assert that top-down action is needed from journals, universities, funders and governments to break the cycle and put methodology first. These actions should involve the widespread adoption of registered reports, balanced research funding between innovative, incremental and methodological research projects, full recognition and demystification of peer review, improved methodological review of reports, adherence to reporting guidelines, and investment in methodological education and research. Currently, the scientific enterprise is doing a major disservice to patients and society.”
We often hear that there are similar problems in the social sciences. There, however, the problem is different: the science is not solution-based, and it has little impact. The social scientific enterprise can’t deliver technical solutions the way medicine delivers vaccines or cancer treatments. Medicine is useful because it provides technical solutions to technical problems. Social science maps out and describes the world. It says a lot of things about the world; these things are interesting to some and not to others. The problems it identifies are not technical problems but problems of value (e.g., it describes inequalities, it describes toxicity in speech, it talks about utility, it maps out associations between attributes or views and other attributes or views). Social science is journalism with more data and some theory built on top of it. That theory is more or less opinion; it has limited explanatory power.
Different people are, well, different: a theory in social science holds in some cases and not in others. We say it’s probabilistic, but that’s not very convincing when the intervention fails for a third of people and we have no idea why, and when it does work the effect is tiny.
The problems of social science are problems of value: many interpret inequality as freedom, or toxicity as free speech. So we can describe how different people view different things in different ways. Social science is fundamentally about publishing narratives shaped by data, to increase our understanding of ourselves both individually and collectively. Yet saying what social science is and what it does remains difficult, because we insist on justifying every endeavour through the objective value it generates, often meaning monetary value.
Imitating the physical sciences in social science, with RCTs and causal analyses, gives us a grotesque caricature. It fails because social science deals with mental and social causation, where effects are often small, interventions hold only as long as people choose to comply, and outcomes correlate with a host of factors over which we have no control. Again, some of these things are interesting to some and not to others. Quasi-experiments based on instrumental variables will remain controversial forever. Unlike in the natural sciences, where causes are rooted in physical realities, social causes are fluid; they depend on collective interpretation. The effects are small and vanish when people start acting differently. We are left with observational description and narrative building, which is fine, but people never come close to admitting this.