#### This is a draft of the Third Edition of this handbook. It may change at any time. Until this is finished, you may want to use the Second Edition.

# Kruskal–Wallis test

### Summary

Use the Kruskal–Wallis test when you have one nominal variable and one measurement variable that is severely non-normal, or when you have one nominal variable and one ranked variable. It tests whether the mean ranks of the measurement variable are the same in all the groups.

### When to use it

One-way anova assumes that the measurement variable fits the normal distribution. It is not very sensitive to deviations from this assumption, especially if you have a balanced design (equal sample sizes in all groups). However, if your data are severely non-normal, one-way anova can give you inaccurate results; this problem is more severe if you have an unbalanced design. Your first choice should be to try a data transformation, and if you find a transformation that makes the data fit the normal distribution fairly well, you can use one-way anova on the transformed data. However, if you can't find a good transformation, you can use the Kruskal–Wallis test.

The most common use of the Kruskal–Wallis test is when you have one nominal variable and one measurement variable, and the measurement variable does not meet the normality assumption of an anova. It is a non-parametric test, which means that it does not assume that the data come from a distribution that can be completely described by two parameters, mean and standard deviation (the way a normal distribution can). Like most non-parametric tests, you perform it on ranked data, so you convert the measurement observations to their ranks in the overall data set: the smallest value gets a rank of 1, the next smallest gets a rank of 2, and so on. You lose information when you substitute ranks for the original values, which can make this a somewhat less powerful test than an anova, so you should use one-way anova if your data are normal. Another reason to prefer one-way anova is that its null hypothesis is a lot easier to understand.

Some people have the attitude that unless you have a large sample size and can clearly demonstrate that your data are normal, you should routinely use Kruskal–Wallis; they think it is dangerous to use a test that assumes normality when you don't know for sure that your data are normal. Because computer simulations have shown that one-way anova is not very sensitive to deviations from normality, this attitude seems to be less common. I lean in the other direction: unless your data are obviously and severely non-normal, go ahead and use one-way anova.

While there are ways to measure the non-normality of a data set (skewness and kurtosis), there doesn't seem to be any rule about how non-normal your data can be and still be acceptable for one-way anova. So when you're planning how to analyze your data (which you should do before you do the experiment), you can't just say "If skewness is less than 5, I'll use one-way anova, otherwise I'll use Kruskal-Wallis." Deciding which test to use *after* you've looked at the data opens you up to the temptation of picking the test that yields the most interesting result. The best thing to do would be to decide before you do the experiment whether you're going to use one-way anova or Kruskal-Wallis, based on the normality of the measurement variable in a pilot study or other previous research. That won't always be possible, however.

The other assumption of one-way anova is that the variation within the groups is equal (homoscedasticity). If your data are heteroscedastic, Kruskal–Wallis is no better than one-way anova, and may be worse. Instead, you should use Welch's anova for heteroscedastic data.

If your original data set actually consists of one nominal variable and one ranked variable, you cannot do a one-way anova and must use the Kruskal–Wallis test.

The Mann–Whitney U-test (also known as the Mann–Whitney–Wilcoxon test, the Wilcoxon rank-sum test, or the Wilcoxon two-sample test) is limited to nominal variables with only two values; it is the non-parametric analogue to Student's t-test. It uses a different test statistic (*U* instead of the *H* of the Kruskal–Wallis test), but the P-value is mathematically identical to that of a Kruskal–Wallis test. For simplicity, I will only refer to Kruskal–Wallis on the rest of this web page, but everything also applies to the Mann–Whitney U-test.

### Null hypothesis

The null hypothesis of the Kruskal-Wallis test is that the mean ranks of the groups are the same. The expected mean rank depends only on the total number of observations (for *n* observations, the expected mean rank in each group is (*n*+1)/2), so it is not a very useful description of the data; it's not something you would plot on a graph.

You will sometimes see the null hypothesis of the Kruskal-Wallis test given as "The samples come from populations with the same distribution." This is correct, in that if the samples come from populations with the same distribution, the Kruskal-Wallis test will show no difference among them. I think it's a little misleading, however, because only some kinds of differences in distribution will be detected by the test. For example, if two populations have symmetrical distributions with the same center, but one is much wider than the other, their distributions are different but the Kruskal-Wallis test will not detect any difference between them.

The null hypothesis of the Kruskal-Wallis test is *not* that the means are the same. It is therefore incorrect to say something like "The mean concentration of fructose is higher in pears than in apples (Kruskal-Wallis test, P=0.02)," although you will see data summarized with means and then compared with Kruskal-Wallis tests in many publications.

The null hypothesis of the Kruskal–Wallis test is often said to be that the medians of the groups are equal, but this is only true if you assume that the shape of the distribution in each group is the same. If the distributions are different, the Kruskal–Wallis test can reject the null hypothesis even though the medians are the same. To illustrate this point, I made up these three sets of numbers. They have identical means (43.5) and identical medians (27.5), but the mean ranks are different (20.4, 27.5, and 34.6, respectively), resulting in a significant (P=0.025) Kruskal–Wallis test:

Group 1 | Group 2 | Group 3 |
---|---|---|
1 | 10 | 19 |
2 | 11 | 20 |
3 | 12 | 21 |
4 | 13 | 22 |
5 | 14 | 23 |
6 | 15 | 24 |
7 | 16 | 25 |
8 | 17 | 26 |
9 | 18 | 27 |
46 | 37 | 28 |
47 | 58 | 65 |
48 | 59 | 66 |
49 | 60 | 67 |
50 | 61 | 68 |
51 | 62 | 69 |
52 | 63 | 70 |
53 | 64 | 71 |
342 | 193 | 72 |
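If you want to reproduce this example, here is a sketch in Python using `scipy.stats.kruskal`; this is my illustration, not part of the original handbook (which uses spreadsheets and SAS).

```python
from scipy.stats import kruskal

# The three made-up groups from the table above: identical means (43.5)
# and identical medians (27.5), but different mean ranks.
group1 = list(range(1, 10)) + list(range(46, 54)) + [342]
group2 = list(range(10, 19)) + [37] + list(range(58, 65)) + [193]
group3 = list(range(19, 29)) + list(range(65, 73))

h, p = kruskal(group1, group2, group3)
print(f"H = {h:.2f}, P = {p:.3f}")  # P is about 0.025, matching the text
```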

### Assumptions

The Kruskal–Wallis test does NOT assume that the data are normally distributed; that is its big advantage. If you're using it to test whether the medians are different, it does assume that the observations in each group come from populations with the same shape of distribution, so if different groups have different shapes (one is skewed to the right and another is skewed to the left, for example, or they have different variances), the Kruskal–Wallis test may give inaccurate results (Fagerland and Sandvik 2009). If you're interested in any difference among the groups that would make the mean ranks be different, then the Kruskal–Wallis test doesn't make any assumptions.

Heteroscedasticity is one way in which different groups can have different shaped distributions. If the distributions are normally shaped but highly heteroscedastic, you can use Welch's t-test for two groups, or Welch's anova for more than two groups. If the distributions are both non-normal and highly heteroscedastic, I don't know what to recommend.

### How the test works

When working with a measurement variable, the Kruskal–Wallis test starts by substituting the rank in the overall data set for each measurement value. The smallest value gets a rank of 1, the second-smallest gets a rank of 2, etc. Tied observations get average ranks; thus if there were four identical values occupying the fifth, sixth, seventh and eighth smallest places, all would get a rank of 6.5.
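As an illustration of this ranking step (my addition, not from the original), `scipy.stats.rankdata` performs exactly this conversion, including average ranks for ties:

```python
from scipy.stats import rankdata

# Four identical values occupy the 5th, 6th, 7th, and 8th smallest
# places, so each receives the average rank (5+6+7+8)/4 = 6.5.
values = [12, 3, 40, 40, 40, 40, 7, 1, 55]
ranks = rankdata(values)  # method='average' is the default
print(ranks)  # [4.  2.  6.5 6.5 6.5 6.5 3.  1.  9. ]
```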

The sum of the ranks is calculated for each group, then the test statistic, H, is calculated. H is given by a rather formidable formula that basically represents the variance of the ranks among groups, with an adjustment for the number of ties. H is approximately chi-square distributed, meaning that the probability of getting a particular value of H by chance, if the null hypothesis is true, is the P value corresponding to a chi-square equal to H; the degrees of freedom is the number of groups minus 1.
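For reference, the "rather formidable formula" can be written out; this standard form is a supplement I've added, not part of the original text. With *N* total observations, *k* groups, and rank sum $R_i$ and sample size $n_i$ in group *i*:

$$H = \frac{12}{N(N+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(N+1)$$

The adjustment for ties divides *H* by

$$C = 1 - \frac{\sum_j \left(t_j^3 - t_j\right)}{N^3 - N}$$

where $t_j$ is the number of observations tied at the *j*th tied value.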

If the sample sizes are too small, H does not follow a chi-squared distribution very well, and the results of the test should be used with caution. N less than 5 in each group seems to be the accepted definition of "too small."

A significant Kruskal–Wallis test may be followed up by unplanned comparisons of mean ranks, analogous to the Tukey-Kramer method for comparing means. There is an online calculator for computing the Least Significant Difference in ranks.
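I don't reproduce the Least Significant Difference calculation here. As one hedged alternative (my illustration, not the handbook's method), you can follow a significant Kruskal–Wallis test with pairwise Mann–Whitney U-tests and a Bonferroni correction:

```python
from itertools import combinations
from scipy.stats import mannwhitneyu

# Hypothetical example data; any dict of group name -> observations works.
groups = {
    "A": [1.1, 2.3, 1.9, 3.0, 2.2],
    "B": [4.5, 3.9, 5.1, 4.8, 4.2],
    "C": [2.0, 2.8, 3.1, 2.5, 3.3],
}
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold
for a, b in pairs:
    u, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u}, P = {p:.4f}, significant = {p < alpha}")
```

The Bonferroni correction is more conservative than the LSD approach mentioned above, but it is simple and widely understood.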

### Examples

Bolek and Coggins (2003) collected multiple individuals of the toad *Bufo americanus,* the frog *Rana pipiens,* and the salamander *Ambystoma laterale* from a small area of Wisconsin. They dissected the amphibians and counted the number of parasitic helminth worms in each individual. There is one measurement variable (worms per individual amphibian) and one nominal variable (species of amphibian), and the authors did not think the data fit the assumptions of an anova. The results of a Kruskal–Wallis test were significant (H=63.48, 2 d.f., P=1.6 × 10^{-14}); the mean ranks of worms per individual are significantly different among the three species.

McDonald et al. (1996) examined geographic variation in anonymous DNA polymorphisms (variation in random bits of DNA of no known function) in the American oyster, *Crassostrea virginica*. They used an estimator of Wright's F_{ST} as a measure of geographic variation. They compared the F_{ST} values of the six DNA polymorphisms to F_{ST} values on 13 proteins from Buroker (1983). The biological question was whether protein polymorphisms would have generally lower or higher F_{ST} values than anonymous DNA polymorphisms; if so, it would suggest that natural selection could be affecting the protein polymorphisms. F_{ST} has a theoretical distribution that is highly skewed, so the data were analyzed with a Mann–Whitney U-test.

gene | class | F_{ST} |
---|---|---|
CVB1 | DNA | -0.005 |
CVB2m | DNA | 0.116 |
CVJ5 | DNA | -0.006 |
CVJ6 | DNA | 0.095 |
CVL1 | DNA | 0.053 |
CVL3 | DNA | 0.003 |
6Pgd | protein | -0.005 |
Aat-2 | protein | 0.016 |
Acp-3 | protein | 0.041 |
Adk-1 | protein | 0.016 |
Ap-1 | protein | 0.066 |
Est-1 | protein | 0.163 |
Est-3 | protein | 0.004 |
Lap-1 | protein | 0.049 |
Lap-2 | protein | 0.006 |
Mpi-2 | protein | 0.058 |
Pgi | protein | -0.002 |
Pgm-1 | protein | 0.015 |
Pgm-2 | protein | 0.044 |
Sdh | protein | 0.024 |

The results were not significant (U=0.21, P=0.84), so the null hypothesis that the F_{ST} of DNA and protein polymorphisms have the same mean ranks is not rejected.
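The mathematical identity noted earlier (the Mann–Whitney U-test and a two-group Kruskal–Wallis test give the same P value) can be checked directly. This Python sketch with scipy is my illustration, not part of the original:

```python
from scipy.stats import kruskal, mannwhitneyu

# Oyster F_ST values from the table above.
dna = [-0.005, 0.116, -0.006, 0.095, 0.053, 0.003]
protein = [-0.005, 0.016, 0.041, 0.016, 0.066, 0.163, 0.004,
           0.049, 0.006, 0.058, -0.002, 0.015, 0.044, 0.024]

h, p_kw = kruskal(dna, protein)
# Using the normal approximation without the continuity correction makes
# the Mann-Whitney P value match the Kruskal-Wallis P value.
u, p_u = mannwhitneyu(dna, protein, alternative="two-sided",
                      method="asymptotic", use_continuity=False)
print(f"Kruskal-Wallis: H = {h:.4f}, P = {p_kw:.4f}")
print(f"Mann-Whitney:   U = {u}, P = {p_u:.4f}")
```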

### Graphing the results

It is tricky to know how to visually display the results of a Kruskal–Wallis test. It would be misleading to plot the means or medians on a bar graph, as the Kruskal–Wallis test is not a test of the difference in means or medians. If there is a relatively small number of observations, you could put the individual observations on a bar graph, with the value of the measurement variable on the Y axis and its rank on the X axis, and use a different pattern for each value of the nominal variable. Here's an example using the oyster F_{st} data:

*F_{st} values for DNA and protein polymorphisms in the American oyster. DNA polymorphisms are shown in red.*

*F_{st} values for DNA and protein polymorphisms in the American oyster. Names of DNA polymorphisms have a box around them.*

If there are larger numbers of observations, you could plot a histogram for each category, all with the same scale, and align them vertically. I don't have suitable data handy, so here's an illustration with imaginary data:

*Histograms of three sets of numbers.*

### Similar tests

One-way anova is slightly more powerful and a lot easier to understand than the Kruskal–Wallis test, so it should be used unless the data are severely non-normal. There is no firm rule about how non-normal data can be before an anova becomes inappropriate.

If the data are normally distributed but heteroscedastic, you can use Welch's t-test for two groups, or Welch's anova for more than two groups.

### How to do the test

#### Spreadsheet

I have put together a spreadsheet to do the Kruskal–Wallis test on up to 20 groups, with up to 1000 observations per group.

#### Web pages

Richard Lowry has web pages for performing the Kruskal–Wallis test for two groups, three groups, or four groups.

#### SAS

To do a Kruskal–Wallis test in SAS, use the NPAR1WAY procedure (that's the numeral "one," not the letter "el," in NPAR1WAY). "Wilcoxon" tells the procedure to only do the Kruskal–Wallis test; if you leave that out, you'll get several other statistical tests as well, tempting you to pick the one whose results you like the best. The nominal variable that gives the group names is given with the "class" parameter, while the measurement or rank variable is given with the "var" parameter. Here's an example, using the oyster data from above:

```
data oysters;
   input markername $ markertype $ fst;
   cards;
CVB1   DNA     -0.005
CVB2m  DNA      0.116
CVJ5   DNA     -0.006
CVJ6   DNA      0.095
CVL1   DNA      0.053
CVL3   DNA      0.003
6Pgd   protein -0.005
Aat-2  protein  0.016
Acp-3  protein  0.041
Adk-1  protein  0.016
Ap-1   protein  0.066
Est-1  protein  0.163
Est-3  protein  0.004
Lap-1  protein  0.049
Lap-2  protein  0.006
Mpi-2  protein  0.058
Pgi    protein -0.002
Pgm-1  protein  0.015
Pgm-2  protein  0.044
Sdh    protein  0.024
;
proc npar1way data=oysters wilcoxon;
   class markertype;
   var fst;
run;
```

The output contains a table of "Wilcoxon scores"; the "mean score" is the mean rank in each group, which is what you're testing the homogeneity of. "Chi-square" is the H-statistic of the Kruskal–Wallis test, which is approximately chi-square distributed. The "Pr > Chi-Square" is your P-value. You would report these results as "H=0.04, 1 d.f., P=0.84."

```
Wilcoxon Scores (Rank Sums) for Variable fst
       Classified by Variable markertype

                    Sum of    Expected     Std Dev         Mean
markertype    N     Scores    Under H0     Under H0        Score
----------------------------------------------------------------
DNA           6      60.50        63.0    12.115236    10.083333
protein      14     149.50       147.0    12.115236    10.678571

           Kruskal–Wallis Test

           Chi-Square         0.0426
           DF                      1
           Pr > Chi-Square    0.8365
```
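As a cross-check (my addition, assuming scipy is available; the original presents only SAS), the same numbers can be reproduced in Python:

```python
from scipy.stats import kruskal, rankdata

dna = [-0.005, 0.116, -0.006, 0.095, 0.053, 0.003]
protein = [-0.005, 0.016, 0.041, 0.016, 0.066, 0.163, 0.004,
           0.049, 0.006, 0.058, -0.002, 0.015, 0.044, 0.024]

# The "Mean Score" in the SAS output is the mean rank within each group.
ranks = rankdata(dna + protein)
mean_dna = sum(ranks[:6]) / 6        # about 10.08
mean_protein = sum(ranks[6:]) / 14   # about 10.68

h, p = kruskal(dna, protein)
print(f"H = {h:.4f}, P = {p:.4f}")  # matches SAS: 0.0426 and 0.8365
```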

### Power analysis

I am not aware of a technique for estimating the sample size needed for a Kruskal–Wallis test.
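One generic workaround, offered as my own hedged sketch rather than an established method: estimate power by simulation. Draw samples from distributions you specify for the alternative hypothesis, and count how often the Kruskal–Wallis test is significant. The group sizes, shifts, and normal distributions below are all hypothetical choices for illustration.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)  # seeded for reproducibility

def simulated_power(n_per_group, shifts, n_sims=1000, alpha=0.05):
    """Fraction of simulated experiments in which the Kruskal-Wallis
    test rejects at level alpha. Groups are normal here, but any
    distribution you can sample from would work."""
    hits = 0
    for _ in range(n_sims):
        samples = [rng.normal(loc=s, size=n_per_group) for s in shifts]
        _, p = kruskal(*samples)
        if p < alpha:
            hits += 1
    return hits / n_sims

# Hypothetical scenario: three groups of 20, means 0, 0.5, and 1.0 SDs apart.
power = simulated_power(20, [0.0, 0.5, 1.0])
print(f"estimated power: {power:.2f}")
```

You would vary `n_per_group` until the estimated power reaches your target (conventionally 0.80 or 0.90).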

### References

Bolek, M.G., and J.R. Coggins. 2003. Helminth community structure of sympatric eastern American toad, *Bufo americanus americanus,* northern leopard frog, *Rana pipiens,* and blue-spotted salamander, *Ambystoma laterale,* from southeastern Wisconsin. J. Parasit. 89: 673-680.

Buroker, N. E. 1983. Population genetics of the American oyster *Crassostrea virginica* along the Atlantic coast and the Gulf of Mexico. Mar. Biol. 75:99-112.

Fagerland, M.W., and L. Sandvik. 2009. The Wilcoxon-Mann-Whitney test under scrutiny. Statist. Med. 28: 1487-1497.

McDonald, J.H., B.C. Verrelli and L.B. Geyer. 1996. Lack of geographic variation in anonymous nuclear polymorphisms in the American oyster, *Crassostrea virginica.* Molecular Biology and Evolution 13: 1114-1118.


This page was last revised September 14, 2009. Its address is http://www.biostathandbook.com/kruskalwallis.html. It may be cited as pp. 165-172 in: McDonald, J.H. 2013. Handbook of Biological Statistics (3rd ed.). Sparky House Publishing, Baltimore, Maryland.

©2013 by John H. McDonald. You can probably do what you want with this content; see the permissions page for details.