Nonparametric tests aren’t a silver bullet when parametric assumptions are violated

Tags: R, power, significance, simplicity, assumptions, nonparametric tests

Author: Jan Vanhove

Published: May 23, 2020

Some researchers adhere to a simple strategy when comparing data from two or more groups: when they think that the data in the groups are normally distributed, they run a parametric test (\(t\)-test or ANOVA); when they suspect that the data are not normally distributed, they run a nonparametric test (e.g., Mann–Whitney or Kruskal–Wallis). Rather than follow such an automated approach to analysing data, I think researchers ought to consider the points discussed below.

In this blog post, I’ll share the results of some simulations that demonstrate that the Mann–Whitney (a) picks up on differences in the variance between two distributions, even if they have the same mean and median; (b) picks up on differences in the median between two distributions, even if they have the same mean and variance; and (c) picks up on differences in the mean between two distributions, even if they have the same median and variance. These points aren’t new (see Zimmerman 1998), but since the automated strategy (‘parametric when normal, otherwise nonparametric’) is pretty widespread, they bear repeating.

Same mean, same median, different variance

The first simulation demonstrates the Mann–Whitney’s sensitivity to differences in the variance. I simulated samples from a uniform distribution going from \(-\sqrt{3}\) to \(\sqrt{3}\) as well as from a uniform distribution going from \(-3\sqrt{3}\) to \(3\sqrt{3}\). Both distributions have a mean and median of 0, but the standard deviation of the first is 1 and that of the second is 3. (For a uniform distribution on \([a, b]\), the standard deviation is \((b - a)/\sqrt{12}\).) I compared these samples using a Mann–Whitney test and recorded the \(p\)-value. I generated samples of both 50 and 500 observations per group and repeated this process 10,000 times. You can reproduce this simulation using the code below.

Figure 1 shows the distribution of the \(p\)-values. Even though the distributions’ means and medians are the same, the Mann–Whitney returns significance (\(p < 0.05\)) in about 7% of the comparisons for the smaller samples and in about 8% for the larger samples. If the test were sensitive only to differences in the mean or median, it should return significance in only 5% of the comparisons.

# Load package for plotting
library(ggplot2)

# Set number of simulation runs
n_sim <- 10000

# Draw a sample of 50 observations from two uniform distributions with the same 
# mean and median but with different variances/standard deviations.
# Run the Mann-Whitney on them (wilcox.test()).
# Repeat this n_sim times.
pvals_50 <- replicate(n_sim, {
  x <- runif(50, min = -3*sqrt(3), max = 3*sqrt(3))
  y <- runif(50, min = -sqrt(3), max = sqrt(3))
  wilcox.test(x, y)$p.value
})

# Same but with samples of 500 observations.
pvals_500 <- replicate(n_sim, {
  x <- runif(500, min = -3*sqrt(3), max = 3*sqrt(3))
  y <- runif(500, min = -sqrt(3), max = sqrt(3))
  wilcox.test(x, y)$p.value
})

# Put in data frame
d <- data.frame(
  p = c(pvals_50, pvals_500),
  n = rep(c(50, 500), each = n_sim)
)

# Plot
ggplot(data = d,
       aes(x = p,
           fill = (p < 0.05))) +
  geom_histogram(
    breaks = seq(0, 1, 0.05),
    colour = "grey20") +
  scale_fill_manual(values = c("grey70", "red")) +
  facet_wrap(~ n) +
  geom_hline(yintercept = n_sim*0.05, linetype = 2) +
  theme(legend.position = "none") +
  labs(
    title = NULL,
    subtitle = "Same mean, same median, different variance",
    caption = "Comparison for two sample sizes (50 vs. 500 observations per group):
    uniform distribution from -sqrt(3) to sqrt(3)
    vs. uniform distribution from -3*sqrt(3) to 3*sqrt(3)"
  )

Figure 1. Distribution of the Mann–Whitney \(p\)-values when comparing samples (50 vs. 500 observations per group) from two uniform distributions with the same mean and median but different variances.

Same mean, different median, same variance

The second simulation demonstrates that the Mann–Whitney does not compare means. The simulation set-up was the same as before, but the samples were drawn from different distributions. The first sample was drawn from a log-normal distribution whose underlying normal distribution has mean \(\ln{10}\) and standard deviation 1; this log-normal distribution has mean \(\exp{(\ln{10} + \frac{1}{2})} \approx 16.5\), median 10, and standard deviation \(\sqrt{(\exp{(1)}-1)\exp{(2\ln{10}+1)}} \approx 21.6\). The second sample was drawn from a normal distribution with the same mean (i.e., about 16.5) and the same standard deviation (i.e., about 21.6), but with a different median (viz., about 16.5 rather than 10).

Figure 2 shows that the Mann–Whitney returned significance for 12% of the comparisons of the smaller samples and for 92% of the comparisons of the larger samples. So the Mann–Whitney does not test for differences in the mean: if it did, only about 5% of the comparisons should have come out significant, since the means of the two distributions are the same.
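The code for this second simulation isn’t shown above. Here is a minimal sketch of the 50-observation case (not the original code): it reuses n_sim from the first code block, the distributional parameters follow from the formulas above, and the object names are purely illustrative.

# Sketch: log-normal (median 10) vs. normal with the same mean and SD.
mean_lnorm <- exp(log(10) + 1/2)                         # common mean, about 16.5
sd_lnorm   <- sqrt((exp(1) - 1) * exp(2 * log(10) + 1))  # common SD, about 21.6

pvals_lnorm_50 <- replicate(n_sim, {
  x <- rlnorm(50, meanlog = log(10), sdlog = 1)    # median 10
  y <- rnorm(50, mean = mean_lnorm, sd = sd_lnorm) # median about 16.5
  wilcox.test(x, y)$p.value
})
mean(pvals_lnorm_50 < 0.05)  # proportion of significant comparisons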

Figure 2. Distribution of the Mann–Whitney \(p\)-values when comparing samples (50 vs. 500 observations per group) from a log-normal and a normal distribution with the same mean and standard deviation but different medians.

Different mean, same median, same variance

The last simulation demonstrates that the Mann–Whitney does not compare medians, either. The first sample was again drawn from a log-normal distribution with mean \(\exp{(\ln{10} + \frac{1}{2})} \approx 16.5\), median 10 and standard deviation \(\sqrt{(\exp{(1)}-1)\exp{(2\ln{10}+1)}} \approx 21.6\). The second sample was now drawn from a normal distribution with the same median (i.e., 10) and the same standard deviation (i.e., about 21.6), but with a different mean (viz., 10 rather than 16.5).

Figure 3 shows that the Mann–Whitney returned significance for 20% of the comparisons of the smaller samples and for 91% of the comparisons of the larger samples. So the Mann–Whitney does not test for differences in the median, either: if it did, only about 5% of the comparisons should have come out significant, since the medians of the two distributions are the same.
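The previous sketch can be adapted for this third simulation (again, not the original code); it reuses n_sim and sd_lnorm from the code above.

# Sketch: log-normal (mean about 16.5) vs. normal with the same median and SD.
pvals_median_50 <- replicate(n_sim, {
  x <- rlnorm(50, meanlog = log(10), sdlog = 1)  # mean about 16.5, median 10
  y <- rnorm(50, mean = 10, sd = sd_lnorm)       # mean 10, median 10
  wilcox.test(x, y)$p.value
})
mean(pvals_median_50 < 0.05)  # proportion of significant comparisons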

Figure 3. Distribution of the Mann–Whitney \(p\)-values when comparing samples (50 vs. 500 observations per group) from a log-normal and a normal distribution with the same median and standard deviation but different means.

Nonparametric tests make assumptions, too

Many researchers think that nonparametric tests don’t make any assumptions about the distributions from which the data were drawn. This belief is half-true (i.e., wrong): Nonparametric tests such as the Mann–Whitney don’t assume that the data were drawn from a specific distribution (e.g., from a normal distribution). However, they do assume that the data in the different groups being compared were drawn from the same distribution (but for a shift in the location of this distribution). If researchers run nonparametric tests because they are worried about violating the assumptions of parametric tests, I suggest they worry about the assumptions of their nonparametric tests, too.

But a better solution, in my view, is for them to consider more carefully what they actually want to compare. If it is really the means that are of interest, parametric tests are often okay, and their results can be double-checked using the bootstrap if needed; permutation tests would be an alternative. If it is the medians that are of interest, quantile regression, bootstrapping, or permutation tests may be useful. If another measure of the data’s central tendency is of interest, robust regression may be useful. A discussion of these techniques is beyond the scope of this blog post, whose aims were merely to alert researchers to the fact that nonparametric tests aren’t a silver bullet when parametric assumptions are violated and that nonparametric tests aren’t sensitive only to differences in the mean or median.
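To give a flavour of one of these alternatives, below is a minimal sketch of a two-sample permutation test for a difference in means. The function name perm_test_mean and the default number of permutations are made up for this illustration; this is one common way of writing such a test, not a prescription.

# Minimal sketch of a two-sample permutation test for a difference in means.
# x and y are numeric vectors; n_perm is the number of label reshuffles.
perm_test_mean <- function(x, y, n_perm = 9999) {
  observed <- mean(x) - mean(y)
  pooled <- c(x, y)
  perm_diffs <- replicate(n_perm, {
    shuffled <- sample(pooled)  # randomly reassign observations to the two groups
    mean(shuffled[seq_along(x)]) - mean(shuffled[-seq_along(x)])
  })
  # Two-sided p-value: proportion of reshuffled differences at least as extreme
  # as the observed one (with the usual +1 correction).
  (sum(abs(perm_diffs) >= abs(observed)) + 1) / (n_perm + 1)
}

# Example call with made-up data:
# perm_test_mean(rlnorm(50, meanlog = log(10), sdlog = 1), rnorm(50, mean = 10, sd = 21.6))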

Reference

Zimmerman, Donald W. 1998. Invalidation of parametric and nonparametric statistical tests by concurrent violation of two assumptions. Journal of Experimental Education 67(1). 55–68.

Software versions

Please note that I reran the code on this page on August 6, 2023.

devtools::session_info()
─ Session info ───────────────────────────────────────────────────────────────
 setting  value
 version  R version 4.3.1 (2023-06-16)
 os       Ubuntu 22.04.2 LTS
 system   x86_64, linux-gnu
 ui       X11
 language en_US
 collate  en_US.UTF-8
 ctype    en_US.UTF-8
 tz       Europe/Zurich
 date     2023-08-06
 pandoc   3.1.1 @ /usr/lib/rstudio/resources/app/bin/quarto/bin/tools/ (via rmarkdown)

─ Packages ───────────────────────────────────────────────────────────────────
 package     * version date (UTC) lib source
 cachem        1.0.6   2021-08-19 [2] CRAN (R 4.2.0)
 callr         3.7.3   2022-11-02 [1] CRAN (R 4.3.1)
 cli           3.6.1   2023-03-23 [1] CRAN (R 4.3.0)
 colorspace    2.1-0   2023-01-23 [1] CRAN (R 4.3.0)
 crayon        1.5.2   2022-09-29 [1] CRAN (R 4.3.1)
 devtools      2.4.5   2022-10-11 [1] CRAN (R 4.3.1)
 digest        0.6.29  2021-12-01 [2] CRAN (R 4.2.0)
 dplyr       * 1.1.2   2023-04-20 [1] CRAN (R 4.3.0)
 ellipsis      0.3.2   2021-04-29 [2] CRAN (R 4.2.0)
 evaluate      0.15    2022-02-18 [2] CRAN (R 4.2.0)
 fansi         1.0.4   2023-01-22 [1] CRAN (R 4.3.1)
 farver        2.1.1   2022-07-06 [1] CRAN (R 4.3.0)
 fastmap       1.1.0   2021-01-25 [2] CRAN (R 4.2.0)
 forcats     * 1.0.0   2023-01-29 [1] CRAN (R 4.3.0)
 fs            1.5.2   2021-12-08 [2] CRAN (R 4.2.0)
 generics      0.1.3   2022-07-05 [1] CRAN (R 4.3.0)
 ggplot2     * 3.4.2   2023-04-03 [1] CRAN (R 4.3.0)
 glue          1.6.2   2022-02-24 [2] CRAN (R 4.2.0)
 gtable        0.3.3   2023-03-21 [1] CRAN (R 4.3.0)
 hms           1.1.3   2023-03-21 [1] CRAN (R 4.3.0)
 htmltools     0.5.5   2023-03-23 [1] CRAN (R 4.3.0)
 htmlwidgets   1.6.2   2023-03-17 [1] CRAN (R 4.3.1)
 httpuv        1.6.11  2023-05-11 [1] CRAN (R 4.3.1)
 jsonlite      1.8.7   2023-06-29 [1] CRAN (R 4.3.1)
 knitr         1.39    2022-04-26 [2] CRAN (R 4.2.0)
 labeling      0.4.2   2020-10-20 [1] CRAN (R 4.3.0)
 later         1.3.1   2023-05-02 [1] CRAN (R 4.3.1)
 lifecycle     1.0.3   2022-10-07 [1] CRAN (R 4.3.0)
 lubridate   * 1.9.2   2023-02-10 [1] CRAN (R 4.3.0)
 magrittr      2.0.3   2022-03-30 [1] CRAN (R 4.3.0)
 memoise       2.0.1   2021-11-26 [2] CRAN (R 4.2.0)
 mime          0.10    2021-02-13 [2] CRAN (R 4.0.2)
 miniUI        0.1.1.1 2018-05-18 [1] CRAN (R 4.3.1)
 munsell       0.5.0   2018-06-12 [1] CRAN (R 4.3.0)
 pillar        1.9.0   2023-03-22 [1] CRAN (R 4.3.0)
 pkgbuild      1.4.2   2023-06-26 [1] CRAN (R 4.3.1)
 pkgconfig     2.0.3   2019-09-22 [2] CRAN (R 4.2.0)
 pkgload       1.3.2.1 2023-07-08 [1] CRAN (R 4.3.1)
 prettyunits   1.1.1   2020-01-24 [2] CRAN (R 4.2.0)
 processx      3.8.2   2023-06-30 [1] CRAN (R 4.3.1)
 profvis       0.3.8   2023-05-02 [1] CRAN (R 4.3.1)
 promises      1.2.0.1 2021-02-11 [1] CRAN (R 4.3.1)
 ps            1.7.5   2023-04-18 [1] CRAN (R 4.3.1)
 purrr       * 1.0.1   2023-01-10 [1] CRAN (R 4.3.0)
 R6            2.5.1   2021-08-19 [2] CRAN (R 4.2.0)
 Rcpp          1.0.11  2023-07-06 [1] CRAN (R 4.3.1)
 readr       * 2.1.4   2023-02-10 [1] CRAN (R 4.3.0)
 remotes       2.4.2   2021-11-30 [2] CRAN (R 4.2.0)
 rlang         1.1.1   2023-04-28 [1] CRAN (R 4.3.0)
 rmarkdown     2.21    2023-03-26 [1] CRAN (R 4.3.0)
 rstudioapi    0.14    2022-08-22 [1] CRAN (R 4.3.0)
 scales        1.2.1   2022-08-20 [1] CRAN (R 4.3.0)
 sessioninfo   1.2.2   2021-12-06 [2] CRAN (R 4.2.0)
 shiny         1.7.4.1 2023-07-06 [1] CRAN (R 4.3.1)
 stringi       1.7.12  2023-01-11 [1] CRAN (R 4.3.1)
 stringr     * 1.5.0   2022-12-02 [1] CRAN (R 4.3.0)
 tibble      * 3.2.1   2023-03-20 [1] CRAN (R 4.3.0)
 tidyr       * 1.3.0   2023-01-24 [1] CRAN (R 4.3.0)
 tidyselect    1.2.0   2022-10-10 [1] CRAN (R 4.3.0)
 tidyverse   * 2.0.0   2023-02-22 [1] CRAN (R 4.3.1)
 timechange    0.2.0   2023-01-11 [1] CRAN (R 4.3.0)
 tzdb          0.4.0   2023-05-12 [1] CRAN (R 4.3.0)
 urlchecker    1.0.1   2021-11-30 [1] CRAN (R 4.3.1)
 usethis       2.2.2   2023-07-06 [1] CRAN (R 4.3.1)
 utf8          1.2.3   2023-01-31 [1] CRAN (R 4.3.1)
 vctrs         0.6.3   2023-06-14 [1] CRAN (R 4.3.0)
 withr         2.5.0   2022-03-03 [2] CRAN (R 4.2.0)
 xfun          0.39    2023-04-20 [1] CRAN (R 4.3.0)
 xtable        1.8-4   2019-04-21 [1] CRAN (R 4.3.1)
 yaml          2.3.5   2022-02-21 [2] CRAN (R 4.2.0)

 [1] /home/jan/R/x86_64-pc-linux-gnu-library/4.3
 [2] /usr/local/lib/R/site-library
 [3] /usr/lib/R/site-library
 [4] /usr/lib/R/library

──────────────────────────────────────────────────────────────────────────────