I’m looking for specific, real cases in which a causal relationship was inappropriately inferred from evidence of a correlation.
Specifically, I’m interested in examples that meet the following criteria:
- Existence of the causal relationship was accepted as fact widely enough to have notable effects (on public policy, discourse, individual decisions, etc.).
- The link was inferred solely on the basis of correlative evidence (perhaps along with the existence of a coherent but unproven causal mechanism).
- Causality has been objectively falsified or at least called into serious doubt.
The two examples that came to mind for me aren’t quite ideal:
- Sodium intake and blood pressure: As I understand it, it has since been determined that salt intake only increases blood pressure in sodium-sensitive individuals. The existence of a valid causal relationship (although not quite the one that was originally accepted) makes this example less compelling.
- Vaccines and autism: I may have the background wrong, but I believe this link was surmised on the basis of both correlations and (fraudulent) experimental evidence. This example is weakened by the fact that (fake) direct evidence existed.
Note: I’ve seen this similar question:
My question differs primarily in that it focuses on notable, real-world examples and not on examples in which a causal link is clearly absent (e.g., weight and musical skill).
For many years, large observational epidemiological studies, interpreted using Bradford Hill-style heuristic criteria for inferring causation, were taken as evidence that hormone replacement therapy (HRT) in women decreased the risk of coronary heart disease. It was only after two large-scale randomized trials demonstrated the opposite that clinical understanding and clinical recommendations regarding HRT changed. This is a classic cautionary tale in contemporary epidemiology that you can read about in textbooks (e.g., Leon Gordis' Epidemiology) and in the Wikipedia article on David Hume's classic maxim.
That said, the Bradford Hill criteria have not been considered the state of the art for a good while now, with counterfactual causal inference (à la Judea Pearl, James Robins, Sander Greenland, and others) doing the real heavy lifting. It is possible to make reasonably strong causal inferences without conducting randomized experiments, using, for example, instrumental variables, Mendelian randomization, etc. (which is good for science, since we cannot conduct randomized experiments on much, if not most, of the universe).
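To make the instrumental-variables idea concrete, here is a minimal simulated sketch (not from any real study; all variable names and coefficients are made up for illustration). An unobserved confounder U biases the naive regression of outcome Y on exposure X, while an instrument Z, which affects Y only through X, lets the simple Wald estimator recover the true causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process (assumed for illustration):
# U is an unobserved confounder of X and Y; Z is a valid instrument
# (it influences X, but affects Y only through X).
U = rng.normal(size=n)
Z = rng.normal(size=n)
X = Z + U + rng.normal(size=n)

true_effect = 0.5
Y = true_effect * X + U + rng.normal(size=n)

# Naive OLS slope of Y on X is biased upward by the confounder U.
naive = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Wald IV estimator: Cov(Z, Y) / Cov(Z, X) recovers the causal effect,
# because Z is (by construction here) independent of U.
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

print(f"naive OLS: {naive:.2f}, IV: {iv:.2f}")
```

With these assumptions the naive estimate lands well above the true effect of 0.5, while the IV estimate sits close to it; in real applications, of course, the hard part is arguing that the instrument is actually valid.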