Type I and Type II Errors
Every hypothesis test can result in one of four outcomes, depending on the true state of the world and the decision made:

- H₀ is actually false and we reject it: correct decision, a true positive.
- H₀ is actually true and we fail to reject it: correct decision, a true negative.
- H₀ is actually true but we reject it: Type I error (false positive). We concluded there is an effect when there is none. The probability of a Type I error equals alpha (α), the significance level; setting α = 0.05 means a 5% chance of incorrectly rejecting a true null hypothesis.
- H₀ is actually false but we fail to reject it: Type II error (false negative). We missed a real effect. The probability of a Type II error is called beta (β), and 1 − β is the test's power.

The relationship between alpha and beta is a trade-off: making alpha smaller (more stringent) reduces Type I errors but increases Type II errors for a fixed sample size.

The stakes determine which error is more costly. In medical diagnostic testing, a Type I error (false positive cancer diagnosis) causes unnecessary treatment, anxiety, and cost, while a Type II error (false negative cancer diagnosis) means a real cancer is missed, which is potentially fatal. Calibrate alpha based on the relative cost of each error type. In drug trials, the convention is α = 0.05 for efficacy claims, where some Type I error is tolerable, but regulators such as the FDA treat safety signals far more conservatively, flagging them at lenient evidence thresholds, because Type II errors are unacceptable when real side effects might go undetected.
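Both error rates can be checked empirically by simulation. The sketch below (illustrative, not from the source; the test statistic, effect size μ = 0.3, and sample size n = 30 are assumed for demonstration) runs a two-sided one-sample z-test many times: first with H₀ true, where the rejection rate estimates α, then with H₀ false, where the non-rejection rate estimates β. Running it at both α = 0.05 and α = 0.01 also illustrates the trade-off described above: tightening alpha inflates beta.

```python
import math
import random
from statistics import NormalDist

def rejection_rate(mu_true, mu0=0.0, sigma=1.0, n=30, alpha=0.05, trials=20_000):
    """Monte Carlo rejection rate of a two-sided one-sample z-test (known sigma).
    Under H0 (mu_true == mu0) this estimates the Type I error rate alpha;
    under a real effect, 1 minus this rate estimates the Type II rate beta."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(mu_true, sigma) for _ in range(n)]
        z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

random.seed(42)

# H0 true (mu = 0): the rejection rate should hover near alpha = 0.05.
type1 = rejection_rate(mu_true=0.0, alpha=0.05)

# H0 false (mu = 0.3): the non-rejection rate estimates beta.
beta_05 = 1 - rejection_rate(mu_true=0.3, alpha=0.05)
beta_01 = 1 - rejection_rate(mu_true=0.3, alpha=0.01)  # stricter alpha, larger beta

print(f"Type I error rate at alpha=0.05: {type1:.3f}")
print(f"beta at alpha=0.05: {beta_05:.3f}  |  beta at alpha=0.01: {beta_01:.3f}")
```

With these assumed parameters the Type I rate lands near 0.05 as designed, while beta is large (the test is underpowered at n = 30 for an effect of 0.3σ), and shrinking alpha to 0.01 makes beta larger still, exactly the trade-off the text describes.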