Has anyone ever run an experiment where they submit two versions of a paper to separate sets of peer reviewers — one with the real data and real conclusions, the other with falsified-but-plausible-looking data and the opposite conclusions — and then checked whether one version has a notably harder time making it through peer review than the other? A bit like the Sokal hoax, except that it would presumably need the active cooperation of the journal editors.