(Machine learning|Inverse problems) - Regularization

About

Regularization refers to a process of introducing additional information in order to:

  • solve an ill-posed problem
  • or to prevent overfitting.

This information usually takes the form of a penalty for complexity, such as restrictions on smoothness or bounds on the vector space norm.
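As an illustration (the symbols below are generic, not notation from this page), a regularized estimator typically minimizes a data-fitting loss plus a weighted penalty term:

```latex
\hat{w} = \arg\min_{w} \; L(w; X, y) + \lambda \, R(w), \qquad \lambda \ge 0
```

Choosing R(w) = ‖w‖₁ gives L₁ regularization and R(w) = ‖w‖₂² gives L₂ regularization; larger values of λ enforce a stronger penalty.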

Techniques

Least Squares

For ill-posed inverse problems, the least-squares method can be viewed as a very simple form of regularization: rather than demanding an exact solution, it selects the parameters that minimize the squared residual.
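As a minimal sketch of this baseline (the toy data and variable names are illustrative, not taken from this page), an ordinary least-squares fit with NumPy:

```python
import numpy as np

# Toy data: 100 samples, 3 features, known weights plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Ordinary least squares: choose w minimizing ||X w - y||^2
w, residuals, rank, singular_values = np.linalg.lstsq(X, y, rcond=None)
print(w)  # close to true_w
```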

Linear Regression

In statistics and machine learning, regularization methods are used for model selection, in particular to prevent overfitting by penalizing models with extreme parameter values. The most common variants in machine learning are L₁ and L₂ regularization.

When applied to linear regression, the resulting models are termed ridge regression (L₂ penalty) or lasso (L₁ penalty).
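A minimal sketch of both variants using scikit-learn's Ridge and Lasso estimators (the alpha values and toy data are illustrative choices, not prescribed by this page):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Toy data with a sparse true coefficient vector
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# L2 penalty (ridge regression): shrinks all coefficients toward zero
ridge = Ridge(alpha=1.0).fit(X, y)

# L1 penalty (lasso): can drive some coefficients exactly to zero
lasso = Lasso(alpha=0.1).fit(X, y)

print(ridge.coef_)  # every entry shrunken but nonzero
print(lasso.coef_)  # sparse: the irrelevant entries are exactly 0.0
```

The L₁ penalty's ability to zero out coefficients is why the lasso also acts as a variable-selection method, whereas ridge only shrinks.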

Statistics - (Shrinkage|Regularization) of Regression Coefficients

Regularization can also be motivated from a Bayesian perspective:

Bayes

From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.
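As a worked sketch under standard assumptions (the notation is illustrative): a Gaussian noise model combined with a zero-mean Gaussian prior on the weights makes the maximum a posteriori (MAP) estimate coincide with ridge regression, while a Laplace prior yields the lasso instead.

```latex
w \sim \mathcal{N}(0, \tau^2 I), \qquad
y \mid X, w \sim \mathcal{N}(Xw, \sigma^2 I)
\;\Longrightarrow\;
\hat{w}_{\text{MAP}}
  = \arg\min_{w} \, \|y - Xw\|_2^2 + \lambda \|w\|_2^2,
\qquad \lambda = \frac{\sigma^2}{\tau^2}
```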
