
Bonferroni correction

In statistics, the Bonferroni correction is one of several methods used to counteract the problem of multiple comparisons. It is named after the Italian mathematician Carlo Emilio Bonferroni for its use of the Bonferroni inequalities. Its development is often credited to Olive Jean Dunn, who described the procedure's application to confidence intervals.

Statistical hypothesis testing is based on rejecting the null hypothesis when the likelihood of the observed data under the null hypothesis is low. If multiple hypotheses are tested, the chance of observing a rare event increases, and therefore the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases. The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level of α/m, where α is the desired overall alpha level and m is the number of hypotheses. For example, if a trial is testing m = 20 hypotheses with a desired α = 0.05, the Bonferroni correction would test each individual hypothesis at α = 0.05/20 = 0.0025.

Let H_1, …, H_m be a family of hypotheses and p_1, …, p_m their corresponding p-values. Let m be the total number of null hypotheses and m_0 the number of true null hypotheses. The familywise error rate (FWER) is the probability of rejecting at least one true H_i, that is, of making at least one Type I error. The Bonferroni correction rejects the null hypothesis for each p_i ≤ α/m, thereby controlling the FWER at ≤ α. The proof of this control follows from Boole's inequality: if I_0 denotes the set of indices of the true null hypotheses, then

FWER = P( ⋃_{i ∈ I_0} { p_i ≤ α/m } ) ≤ Σ_{i ∈ I_0} P( p_i ≤ α/m ) ≤ m_0 · (α/m) ≤ α,

where the second inequality uses the fact that, under a true null hypothesis, P(p_i ≤ α/m) ≤ α/m. This control does not require any assumptions about dependence among the p-values or about how many of the null hypotheses are true.

Rather than testing each hypothesis at the α/m level, the hypotheses may be tested at any other combination of levels that add up to α, provided that the level of each test is determined before looking at the data. For example, for two hypothesis tests, an overall α of .05 could be maintained by conducting one test at .04 and the other at .01.

The Bonferroni correction can also be used to adjust confidence intervals. If one constructs m confidence intervals and wishes to have an overall confidence level of 1 − α, each individual confidence interval can be adjusted to the level 1 − α/m.

There are alternative ways to control the familywise error rate. For example, the Holm–Bonferroni method and the Šidák correction are uniformly more powerful procedures than the Bonferroni correction, meaning that they are always at least as powerful. Unlike the Bonferroni procedure, however, these methods do not control the expected number of Type I errors per family (the per-family Type I error rate).
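The procedure is simple to compute. The following is a minimal sketch in plain Python; the function names and the p-values are invented for illustration and are not taken from any particular library or study:

# A minimal sketch of the Bonferroni procedure in plain Python.
# The p-values below are made-up illustrative numbers, not from any real study.

def bonferroni_reject(p_values, alpha=0.05):
    """Return a list of booleans: True where H_i is rejected at level alpha/m."""
    m = len(p_values)
    threshold = alpha / m          # each test is run at the corrected level alpha/m
    return [p <= threshold for p in p_values]

def bonferroni_confidence_level(alpha=0.05, m=20):
    """Per-interval confidence level giving overall coverage of at least 1 - alpha."""
    return 1 - alpha / m

if __name__ == "__main__":
    # m = 5 hypothetical p-values; only those <= 0.05/5 = 0.01 are rejected.
    p_values = [0.005, 0.011, 0.02, 0.04, 0.6]
    print(bonferroni_reject(p_values))            # [True, False, False, False, False]
    # With m = 20 intervals and alpha = 0.05, each interval is built at the
    # 1 - 0.05/20 = 0.9975 level, matching the confidence-interval adjustment above.
    print(bonferroni_confidence_level(0.05, 20))  # 0.9975

Note that the rejection rule uses only the per-test threshold α/m, with no reference to the dependence structure of the p-values, which is exactly why the FWER guarantee above holds without independence assumptions.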

[ "Analysis of variance", "Statistics", "bonferroni inequality" ]
Parent Topic
Child Topic
    No Parent Topic
Baidu
map