Methods to detect published mistakes without raw data?

I’m interested in ways to detect mistakes in published papers without analyzing the raw data, for example the GRIM test [1]. There’s another, similar check described on one of the GRIM authors’ blogs. I don’t know of any others.
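
For what it’s worth, the GRIM check itself is only a few lines of code. Here is a minimal sketch of the idea (my own, not the authors’ implementation), under the assumption that the underlying data are integers and the mean is reported to a fixed number of decimal places:

```python
# A minimal sketch of a GRIM-style consistency check (see [1]).
# Assumption: the raw scores are integers (e.g., Likert responses) and the
# paper reports the mean rounded to `decimals` decimal places.

def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if some sum of n integer scores can round to reported_mean."""
    target = reported_mean * n
    # Only integer totals adjacent to n * reported_mean can possibly work.
    for total in (int(target) - 1, int(target), int(target) + 1):
        if f"{total / n:.{decimals}f}" == f"{reported_mean:.{decimals}f}":
            return True
    return False

# Example: with n = 28 integer responses, a reported mean of 5.19 is impossible
# (no integer total divided by 28 rounds to 5.19), while 5.18 is attainable (145/28).
print(grim_consistent(5.19, 28))  # False -> flag for a closer look
print(grim_consistent(5.18, 28))  # True
```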

Looking for inconsistencies in reported statistics seems attractive because digging through raw data is difficult, and sometimes the data aren’t available at all. Such checks are probably also easier to automate.

Edit: Benford’s law (credit to DJohnson). Any others?
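
That suggestion can also be scripted. Below is a minimal sketch (my own, not from any particular package) that tallies the leading digits of a batch of reported numbers and computes a chi-square statistic against Benford’s expected frequencies log10(1 + 1/d); the example values are hypothetical.

```python
# A minimal sketch of a Benford's-law screen on a collection of reported numbers.
# Benford's law predicts leading digit d occurs with probability log10(1 + 1/d);
# large deviations across many figures can flag results worth a closer look.
import math
from collections import Counter

def leading_digit(x: float) -> int:
    s = f"{abs(x):e}"          # scientific notation, e.g. "4.213000e+02"
    return int(s[0])

def benford_chi_square(values: list[float]) -> float:
    """Chi-square statistic comparing observed leading digits to Benford's law."""
    observed = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(observed.values())
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2  # compare to a chi-square distribution with 8 degrees of freedom

# Example: statistics pulled from a paper's tables (hypothetical values)
stats = [0.42, 1.7, 3.14, 0.091, 12.5, 0.88, 2.3, 0.56, 1.1, 7.4]
print(benford_chi_square(stats))
```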


[1] Brown, N. J. L., & Heathers, J. A. J. (2016). The GRIM Test: A Simple Technique Detects Numerous Anomalies in the Reporting of Results in Psychology. Social Psychological and Personality Science.

Answer

You could simply ask for the data if you think there is an error. If the authors say no, that would be a concern to me, although I find it hard to believe people would falsify results deliberately (some will, but I think that will be rare).

Attribution
Source: Link, Question Author: R Greg Stacey, Answer Author: user54285
