Statistics - Multiple Linear Regression

Multiple regression is a regression with multiple predictors. It extends the simple model: you can include as many predictors as you want. The power of multiple regression (with multiple predictors) is that it can predict a score better than each of the simple regressions on the individual predictors.

In multiple regression analysis, the null hypothesis assumes that each unstandardized regression coefficient, B, is zero.

3 - (Equation | Function)

The fitted function is called a hyperplane.

In general notation:

$$\begin{array}{rrl} \hat{Y} & = & Y - e \\ & = & B_0 + B_1.{X_1} + B_2.{X_2} + \dots + B_k.{X_k}\\ & = & B_0 + \sum_{j=1}^{k}(B_j.{X_j})\\ \end{array}$$

where:

• $Y$ is the observed score
• $\hat{Y}$ is the predicted score (ie the model)
• $e$ is the error (residual), ie $Y - \hat{Y}$
• $B_0$ is the intercept, also known as the regression constant
• $B_j$ is a slope, also known as a regression coefficient
• $X_j$ is a predictor variable
• $k$ is the number of predictor variables

The intercept is the value of Y when all predictors are zero. And the slope tells us that for each unit increase in $X_j$ (1 on the $X_j$ scale), Y increases by the slope value.
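The prediction equation above can be sketched in a few lines of NumPy. The intercept and slope values below are made up purely for illustration.

```python
import numpy as np

# A minimal sketch of the prediction equation Y_hat = B0 + sum(Bj * Xj),
# using hypothetical coefficients and predictor values.
B0 = 2.0                          # intercept (regression constant)
B = np.array([0.5, -1.2, 3.0])    # slopes B1..Bk (made-up values)
X = np.array([4.0, 2.0, 1.0])     # one observation of the k predictors

Y_hat = B0 + np.dot(B, X)         # B0 + B1*X1 + B2*X2 + B3*X3
print(Y_hat)                      # 2.0 + 2.0 - 2.4 + 3.0 = 4.6
```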

4 - Regression Coefficient

4.1 - Interpretation

4.1.1 - Balanced design

The ideal scenario is when the predictors are uncorrelated (a balanced design).

• Each coefficient can be estimated and tested separately.
• Interpretations such as “a unit change in $X_j$ is associated with a $B_j$ change in Y, while all the other variables stay fixed” are possible.

But predictors are usually not uncorrelated: in real data they tend to move together.

4.1.2 - Unbalanced design

Correlations amongst predictors cause problems:

• The variance of all coefficients tends to increase, sometimes dramatically.
• Interpretations become hazardous: when $X_j$ changes, everything else changes. Regression coefficients in multiple regression must be interpreted in the context of the other variables.

We might obtain a particular regression coefficient for a variable simply because of other characteristics of the sample.

The value of one regression coefficient is influenced by the values of the other variables that are used as predictors in the model: each coefficient is calculated taking the other variables into account, so conclusions can only be drawn across all the regression coefficients together.

A regression coefficient $B_j$ estimates the expected change in Y per unit change in $X_j$, with all other predictors held fixed. But predictors usually change together.

Mosteller, F. and Tukey, J. (1977). “Data Analysis and Regression”, chapter “The woes of (interpreting) regression coefficients”.

Claims of causality should be avoided because the predictors in the system are correlated (ie no single predictor can be said to cause the outcome).

Any effect of a predictor variable can be soaked up by another because they're correlated. And on the other hand, uncorrelated variables may have somewhat complementary effects.
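The “soaking up” effect can be demonstrated on synthetic data: fit the outcome on one predictor alone, then add a correlated predictor and watch the first coefficient shrink. All variable names and data below are made up for illustration.

```python
import numpy as np

# Sketch (synthetic data): how a coefficient shifts when a correlated
# predictor enters the model.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # x2 strongly correlated with x1
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

def fit(X, y):
    # least-squares coefficients (intercept in the first column)
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
b_simple = fit(np.column_stack([ones, x1]), y)      # y ~ x1 alone
b_multi = fit(np.column_stack([ones, x1, x2]), y)   # y ~ x1 + x2

# Alone, x1 "soaks up" x2's effect (slope near 1.9);
# together, the two predictors share it.
print(b_simple[1], b_multi[1])
```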

4.2 - Strongest predictor

The strongest predictor is the one with the largest standardized regression coefficient (in absolute value).
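One common way to obtain standardized coefficients is to z-score the outcome and each predictor before fitting; the data below is synthetic and the 2-predictor setup is only an example.

```python
import numpy as np

# Sketch: standardized coefficients (betas) via z-scoring Y and the
# predictors before fitting; the largest |beta| marks the strongest
# predictor.
rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

def zscore(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

Xz, yz = zscore(X), zscore(y)
# No intercept column needed: z-scored variables have mean zero.
betas = np.linalg.lstsq(Xz, yz, rcond=None)[0]
strongest = np.argmax(np.abs(betas))
print(betas, strongest)
```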

4.3 - Estimation of Standardized Coefficient

The values of the coefficients (B) are estimated such that the model yields optimal predictions by minimizing the sum of the squared residuals (RSS). This method is called multiple least squares because there are multiple predictors.

Ie a multiple regression fits a hyperplane that minimizes the sum of the squared vertical distances between each point and the plane.

4.3.1 - Step 1

The linear matrix equation (regression model) is

$\hat{Y} = X.B$

where:

• $\hat{Y}$ is a [N x 1] vector representing the predicted scores, where N is the sample size
• $X$ is the [N x k] design matrix representing the predictor variables, where k is the number of predictors
• $B$ is a [k x 1] vector representing the regression coefficients ($\beta$)
• the regression constant is assumed to be zero (otherwise a column of ones is added to X).

4.3.2 - Step 2

Assuming that the residuals are null, $\hat{Y} = Y$, so

$Y = X.B$

where $Y$ is the [N x 1] vector of raw scores.

4.3.3 - Step 3

To make the predictor term square and symmetric so that it can be inverted in the next step (only square matrices can be inverted), both sides of the equation are pre-multiplied by the transpose of X.

$X^T.Y = X^T.X.B$

4.3.4 - Step 4

To eliminate $X^T.X$, pre-multiply both sides by its inverse, $(X^T.X)^{-1}$, because $(X^T.X)^{-1}.(X^T.X) = I$ where I is the identity matrix.

$\begin{array}{rrl} (X^T.X)^{-1}.(X^T.Y) & = & (X^T.X)^{-1}.(X^T.X).B \\ (X^T.X)^{-1}.(X^T.Y) & = & B \\ B & = & (X^T.X)^{-1}.(X^T.Y) \end{array}$

4.3.5 - Step 5

Substitute the sums-of-squares-and-cross-products notation:

• $(X^T.X) = S_{xx}$
• $(X^T.Y) = S_{xy}$

$\begin{array}{rrl} B & = & (X^T.X)^{-1}.(X^T.Y) \\ & = & {S_{xx}}^{-1}.S_{xy} \\ \end{array}$
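The derivation above can be checked numerically: solve the normal equations on synthetic data and compare with NumPy's least-squares solver. The data and true coefficients are made up.

```python
import numpy as np

# Sketch of the derivation: B = (X^T X)^{-1} (X^T Y), checked against
# NumPy's built-in least-squares solver.
rng = np.random.default_rng(42)
n, k = 100, 3
X = rng.normal(size=(n, k))                   # [N x k] design matrix
true_B = np.array([1.0, -2.0, 0.5])
Y = X @ true_B + 0.1 * rng.normal(size=n)     # raw scores with small noise

Sxx = X.T @ X                                 # X^T X
Sxy = X.T @ Y                                 # X^T Y
B = np.linalg.inv(Sxx) @ Sxy                  # (X^T X)^{-1} (X^T Y)

print(B)
print(np.allclose(B, np.linalg.lstsq(X, Y, rcond=None)[0]))
```

In practice `np.linalg.lstsq` (or a QR/SVD-based solver) is preferred over explicitly inverting $S_{xx}$, which is numerically unstable when predictors are highly correlated.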

5 - Confidence Interval

If a coefficient's confidence interval doesn't cross zero (ie doesn't include zero), it's an indication that the coefficient is going to be significant.
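A hedged sketch of this check on synthetic data: approximate 95% intervals are built with the normal critical value 1.96 (a t critical value would be used for small samples), and a coefficient whose interval excludes zero is flagged.

```python
import numpy as np

# Sketch: approximate 95% confidence intervals for the coefficients
# via B +/- 1.96 * SE. Data and true coefficients are synthetic.
rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 predictors
Y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n)      # second slope truly zero

B = np.linalg.lstsq(X, Y, rcond=None)[0]
resid = Y - X @ B
sigma2 = resid @ resid / (n - X.shape[1])                   # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))      # standard errors
lo, hi = B - 1.96 * se, B + 1.96 * se

# A coefficient whose interval excludes zero is (approximately) significant.
significant = (lo > 0) | (hi < 0)
print(list(zip(lo.round(2), hi.round(2), significant)))
```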

6 - Visualisation

You can't visualize a multiple regression in one graphic (scatter plot) because there is more than one predictor. There is, however, one way, through the model R and R squared, to capture it all in one scatter plot.

By saving the predicted scores, you can plot another scatter plot with the predicted scores on the x axis vs the actual scores on the y axis.
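The two vectors for that plot can be computed as below (synthetic data; the vectors would go on the x and y axes of a single scatter plot). A useful property: the squared correlation between predicted and actual scores equals the model's R squared.

```python
import numpy as np

# Sketch: predicted vs. actual scores for a one-plot view of a
# multiple regression fit.
rng = np.random.default_rng(3)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
Y = X @ np.array([0.5, 1.0, -1.0, 2.0]) + rng.normal(size=n)

B = np.linalg.lstsq(X, Y, rcond=None)[0]
Y_hat = X @ B                                  # predicted scores (x axis)

# corr(Y_hat, Y)^2 equals the model's R squared
r = np.corrcoef(Y_hat, Y)[0, 1]
print(r ** 2)
```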

7 - Questions

Questions that can be answered from the model:

• Is at least one of the predictors $X_1,X_2,\dots,X_k$ useful in predicting the response?
• Do all the predictors help to explain Y, or is only a subset of the predictors useful?
• Given a set of predictor values, what response value should we predict, and how accurate is our prediction?

Questions that can be answered from alternative models:

• How well does the model fit the data?

7.1 - One Predictor Useful?

To answer the question “Is at least one of the predictors $X_1,X_2,\dots,X_k$ useful in predicting the response?”, we can use the F-statistic.

We look at the drop in training error (ie the percent variance explained, R squared). To quantify it in a more statistical way, we can form the f-ratio.

$$F = \frac{\displaystyle \frac{(TSS-RSS)}{p}}{\displaystyle \frac{RSS}{n-(p+1)}}$$

The f-statistic is:

• the drop in training error (TSS - RSS) divided by the number of predictors (p),
• divided by the mean squared residual,
• ie RSS divided by
• the sample size n minus the number of parameters we fit (p plus 1 for the intercept).

Under the null hypothesis (ie there is no effect of any of the predictors), the f-statistic follows an F-distribution with p and n - p - 1 degrees of freedom.
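The f-ratio above can be computed directly from TSS and RSS; the data and coefficients below are synthetic.

```python
import numpy as np

# Sketch of the F-ratio: F = [(TSS - RSS) / p] / [RSS / (n - p - 1)],
# with p predictors plus an intercept.
rng = np.random.default_rng(10)
n, p = 120, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = X @ np.array([1.0, 1.5, -0.5]) + rng.normal(size=n)

B = np.linalg.lstsq(X, Y, rcond=None)[0]
RSS = np.sum((Y - X @ B) ** 2)        # residual sum of squares
RSS_TSS = np.sum((Y - Y.mean()) ** 2) # total sum of squares (TSS)
TSS = RSS_TSS

F = ((TSS - RSS) / p) / (RSS / (n - p - 1))
print(F)   # a large F suggests at least one predictor is useful
```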

7.3 - Variables Redundant?

We don't want to include variables that are redundant. One of them will not explain a significant amount of variance in the outcome when the other one is in the model, because they're both explaining much of the same variance.

When a multiple regression coefficient remains significant, the two variables are not redundant, and our prediction should get better by including both of them in the model over including just one alone.

data_mining/multiple_regression.txt · Last modified: 2018/06/05 10:20 by gerardnico