Abstract
Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In reality, however, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1) P-hacking, which is reanalyzing a data set in many different ways, or reanalyzing after adding replicates, until the desired result is obtained; 2) overemphasis on P values rather than on the actual size of the observed effect; 3) overuse of statistical hypothesis testing, and being seduced by the word “significant”; and 4) over-reliance on standard errors, which are often misunderstood.
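The first mistake, P-hacking by adding replicates and retesting until significance is reached, can be illustrated with a small simulation (this sketch is not part of the original article; the sample sizes, the simple one-sample z-test, and the peeking schedule are illustrative assumptions). Both groups of simulated experiments draw from a null distribution with no true effect, so any "significant" result is a false positive. Testing once at a fixed sample size yields a false-positive rate near the nominal 5%, whereas re-testing after every added observation inflates it well above that.

```python
import math
import random

random.seed(0)  # for a reproducible illustration

def z_reject(xs):
    """One-sample two-sided z-test of mean = 0, known sigma = 1, alpha = 0.05."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return abs(z) > 1.96

def simulate(n_sims=2000, n_start=10, n_max=100, peek=False):
    """Fraction of null experiments declared 'significant'.

    peek=False: test once, at the final sample size (the proper procedure).
    peek=True: test after every added replicate and stop at the first
    'significant' result (the P-hacking pattern described in the abstract).
    """
    false_pos = 0
    for _ in range(n_sims):
        xs = [random.gauss(0, 1) for _ in range(n_start)]  # true effect is zero
        rejected = False
        while len(xs) < n_max:
            if peek and z_reject(xs):
                rejected = True
                break
            xs.append(random.gauss(0, 1))  # "just add one more replicate"
        if rejected or z_reject(xs):
            false_pos += 1
    return false_pos / n_sims

print("fixed-n false-positive rate:", simulate(peek=False))   # close to 0.05
print("peeking false-positive rate:", simulate(peek=True))    # much larger
```

Under these assumptions, the fixed-sample test rejects at roughly the nominal rate, while the peeking procedure rejects several times as often, despite every data set coming from a distribution in which the null hypothesis is true.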
Footnotes
- Received August 8, 2014.
- Accepted August 8, 2014.
This Commentary evolved from multiple conversations between the author and several editors of pharmacology journals. This work represents the opinions of the author and should not be attributed to the American Society for Pharmacology and Experimental Therapeutics or the Journal of Pharmacology and Experimental Therapeutics (JPET) editorial board. Publication of this article in JPET does not represent an endorsement of GraphPad Software. This article is being simultaneously published in JPET, British Journal of Pharmacology, Pharmacology Research & Perspectives, and Naunyn-Schmiedeberg’s Archives of Pharmacology in a collaborative effort to help investigators and readers appropriately use and interpret statistical analyses in pharmacological research studies.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0) license (https://creativecommons.org/licenses/by-nd/4.0/legalcode).
- Copyright © 2014 Creative Commons Attribution-NoDerivatives 4.0 International (CC-BY-ND 4.0)