Multiple Regression: Coefficients as Partial Effects
Multiple regression extends simple linear regression to include two or more predictors: Y = b₀ + b₁X₁ + b₂X₂ + ... + bₖXₖ + ε (the fitted value Ŷ omits the error term ε). The key interpretive shift is that each coefficient bⱼ is now a partial regression coefficient: the effect of Xⱼ on Y holding all other predictors constant. This statistical control is the primary advantage of multiple regression over simple regression; it removes the influence of a confounding variable, provided that variable is included in the model.

Example: predicting salary (Y) from years of experience (X₁) and education level (X₂). If b₁ = 3,200, each additional year of experience is associated with $3,200 higher salary, holding education level constant. This removes the confound of education (more experienced workers also tend to have more education) from the experience-salary relationship.

Standardized coefficients (beta weights, β): when predictors are measured in different units, standardized coefficients allow comparison of their relative importance. β = b × (s_x / s_y), the change in Y in standard deviations for a one-SD change in X, holding the other predictors constant.

Model fit: R² is the proportion of total variance in Y explained by all predictors combined. Adjusted R² = 1 − [(1 − R²)(n − 1)/(n − k − 1)] penalizes for the number of predictors k, preventing R² from inflating artificially when irrelevant predictors are added. The overall F-test tests H₀: all slope coefficients = 0 simultaneously. Individual t-tests for each coefficient test whether that predictor's partial effect is significant after controlling for the others.

Multicollinearity among predictors inflates standard errors and makes individual coefficient estimates unstable; diagnose it with the variance inflation factor (VIF).
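The partial-effect interpretation can be checked numerically. Below is a minimal NumPy sketch on synthetic salary data; the sample size, noise levels, and the way education is tied to experience are all invented for illustration. Fitting experience and education together recovers the partial slope of roughly 3,200, while a simple regression on experience alone is biased upward by the education confound.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: experience (years) and education (years) are correlated,
# mirroring the confound described in the text.
experience = rng.uniform(0, 20, n)
education = 12 + 0.2 * experience + rng.normal(0, 2, n)

# True model: each year of experience adds $3,200, each year of education $4,000.
salary = 30_000 + 3_200 * experience + 4_000 * education + rng.normal(0, 5_000, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), experience, education])
b, *_ = np.linalg.lstsq(X, salary, rcond=None)
print(b[1])  # partial effect of experience, close to 3,200

# For contrast, a simple regression of salary on experience alone absorbs
# part of education's effect, biasing the slope upward.
X_simple = np.column_stack([np.ones(n), experience])
b_simple, *_ = np.linalg.lstsq(X_simple, salary, rcond=None)
print(b_simple[1])  # noticeably larger than b[1] because of the confound
```

The gap between the two slopes is exactly what "holding education constant" buys: the multiple-regression coefficient answers a different question than the simple one.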
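The formula β = b × (s_x / s_y) can be verified directly: it gives the same values as rerunning the regression on z-scored variables. In this invented example the predictor with the smaller raw coefficient ends up with the larger beta, which is why betas, not raw slopes, are used to compare relative importance across units.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical predictors measured in different units and on different scales.
x1 = rng.normal(10, 4, n)   # larger spread
x2 = rng.normal(14, 2, n)   # smaller spread
y = 5 + 2.0 * x1 + 3.0 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standardized coefficients: beta_j = b_j * s_xj / s_y.
beta1 = b[1] * x1.std(ddof=1) / y.std(ddof=1)
beta2 = b[2] * x2.std(ddof=1) / y.std(ddof=1)
print(beta1, beta2)

# Equivalent check: regressing z-scored y on z-scored predictors (intercept
# is zero by construction) yields the same betas, which is why they are
# unit-free and comparable.
Z = np.column_stack([(x1 - x1.mean()) / x1.std(ddof=1),
                     (x2 - x2.mean()) / x2.std(ddof=1)])
yz = (y - y.mean()) / y.std(ddof=1)
bz, *_ = np.linalg.lstsq(Z, yz, rcond=None)
print(bz)
```

Here the raw slope on x1 (2.0) is smaller than on x2 (3.0), but x1's larger spread gives it the larger standardized effect.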
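The fit statistics can be computed from the residuals. A sketch on made-up data: R² can only rise when a predictor is added, even a pure-noise one, whereas adjusted R² applies the (n − 1)/(n − k − 1) penalty and need not rise. The overall F statistic follows from R² as F = (R²/k) / ((1 − R²)/(n − k − 1)).

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 2

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
junk = rng.normal(size=n)          # irrelevant predictor, unrelated to y
y = 1 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

def fit_r2(predictors, y):
    """R-squared and adjusted R-squared for an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y))] + list(predictors))
    b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ b
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    kk = Xd.shape[1] - 1
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - kk - 1)
    return r2, adj

r2_a, adj_a = fit_r2([x1, x2], y)
r2_b, adj_b = fit_r2([x1, x2, junk], y)
print(r2_a, adj_a)
print(r2_b, adj_b)   # r2_b >= r2_a always; adj_b need not beat adj_a

# Overall F-test for the two-predictor model: H0 is that both slopes are 0.
f_stat = (r2_a / k) / ((1 - r2_a) / (n - k - 1))
print(f_stat)
```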
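The individual t-tests come from the standard OLS covariance matrix σ̂²(XᵀX)⁻¹: each coefficient divided by its standard error is a t statistic for that predictor's partial effect. A sketch with invented data, where x2 truly has no effect, shows the contrast.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2 + 1.5 * x1 + 0.0 * x2 + rng.normal(size=n)  # x2 has a true slope of 0

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
k = X.shape[1] - 1

# Residual variance and the usual OLS covariance matrix sigma^2 (X'X)^-1.
sigma2 = resid @ resid / (n - k - 1)
cov_b = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov_b))
t = b / se
print(t)  # |t| for x1 is large; |t| for x2 stays modest
```

Each t would be compared against a t distribution with n − k − 1 degrees of freedom; here that comparison would clearly reject the null for x1 and typically retain it for x2.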
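The VIF diagnostic is itself a set of auxiliary regressions: VIFⱼ = 1/(1 − R²ⱼ), where R²ⱼ comes from regressing predictor j on the remaining predictors. A sketch on fabricated data, with one predictor deliberately built as a near-copy of another:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Hypothetical predictors: x2 is constructed to be highly collinear with x1.
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF_j = 1 / (1 - R2_j), where R2_j comes from regressing
    predictor j on the remaining predictors (with intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    Xd = np.column_stack([np.ones(len(y)), others])
    b, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ b
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 / (1 - r2)

for j in range(X.shape[1]):
    print(f"VIF of predictor {j}: {vif(X, j):.1f}")
```

The collinear pair shows very large VIFs while the independent predictor stays near 1; a common rule of thumb treats VIF above 5 or 10 as a warning sign.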