The P value has no value: what should we do instead?

In 2016, the scientific community received the long-awaited policy statement of the American Statistical Association (ASA) on the P value. The ASA clearly described the misinterpretation and misuse of the P value as an important contributing factor to the common problem of scientific conclusions that fail to be reproducible. Furthermore, the statement explicitly warned that reliance on P values may distract from the good scientific principles needed for high-quality research [1, 2].

Now, the sensible question is, “What should we do instead?”

The American Statistical Association clearly stated: “In the post p<0.05 era, scientific argumentation is not based on whether a p-value is small enough or not. Attention is paid to effect sizes and confidence intervals. Evidence is thought of as being continuous rather than some sort of dichotomy…. Instead, journals [should evaluate] papers based on clear and detailed description of the study design, execution, and analysis, having conclusions that are based on valid statistical interpretations and scientific arguments, and reported transparently and thoroughly enough to be rigorously scrutinized by others.”

In plain language, researchers submitting manuscripts to medical journals should follow this practical guidance:

  1. Data that are descriptive of the sample (i.e., indicating imbalances between observed groups but not making inferences about a population) should not be accompanied by P values. In this case, authors should describe numerical differences and sample summary statistics and focus on differences of clinical importance. In addition to summary statistics and confidence intervals, standardized differences (rather than P values) are a preferred way to exhibit imbalances between groups.
  2. Researchers should define and interpret effect measures that are clinically relevant.
  3. Reporting stand-alone P values is discouraged, and preference should be given to presentation and interpretation of effect sizes and their uncertainty (confidence intervals) in the scientific context and in light of other evidence.
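The guidance above can be illustrated with a short sketch. The snippet below, using only the Python standard library, computes the two quantities the list recommends reporting in place of a stand-alone P value: a standardized difference (Cohen's d with a pooled standard deviation) and an approximate 95% confidence interval for the difference in means (normal approximation). The example data and variable names are hypothetical, chosen only to make the sketch runnable.

```python
import math
import statistics


def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd


def mean_diff_ci(group_a, group_b, z=1.96):
    """Difference in means with an approximate 95% CI (normal approximation)."""
    na, nb = len(group_a), len(group_b)
    diff = statistics.fmean(group_a) - statistics.fmean(group_b)
    se = math.sqrt(statistics.variance(group_a) / na +
                   statistics.variance(group_b) / nb)
    return diff, diff - z * se, diff + z * se


# Hypothetical outcome measurements for two small groups.
treated = [5.1, 6.0, 5.5, 6.2, 5.8, 6.4, 5.9, 6.1]
control = [4.8, 5.2, 5.0, 5.6, 4.9, 5.3, 5.1, 5.4]

d = cohens_d(treated, control)
diff, lo, hi = mean_diff_ci(treated, control)
print(f"standardized difference d = {d:.2f}")
print(f"mean difference = {diff:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Reported this way ("mean difference 0.71 units, 95% CI 0.37 to 1.05, d ≈ 2.1" rather than "p < 0.05"), the result conveys both the magnitude of the effect and its uncertainty, which the reader can then weigh in the scientific context. For small samples, a t-based interval would be more appropriate than the normal approximation used here.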

Key message

Authors should interpret effect size in the scientific context and in light of other research evidence.


  1. Wasserstein RL, Lazar NA. The ASA's statement on p-values: context, process, and purpose. Am Stat. 2016;70(2):129-133. http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108. Accessed August 19, 2016.
  2. Mark DB, Lee KL, Harrell FE Jr. Understanding the role of P values and hypothesis tests in clinical research. JAMA Cardiol. 2016. doi:10.1001/jamacardio.2016.3312
  3. Thomas LE, Pencina MJ. Do not over (P) value your research article. JAMA Cardiol. 2016;1(9):1055.
