Definition

Effect size measures the magnitude of an effect, as distinct from statistical significance, which also depends on sample size.

Meta-analysis

Gene V. Glass coined the term “meta-analysis”. In the late 1970s, Glass and his colleague Mary Lee Smith statistically aggregated the findings of 375 psychotherapy outcome studies to rebut the claim that psychotherapy was useless.

Effect sizes are used extensively in meta-analysis to combine results from multiple studies; naively averaging raw outcomes from different experiments can produce nonsense, because the studies may measure outcomes on different scales.

“When a number of quite independent tests of significance have been made, it sometimes happens that although few or none can be claimed individually as significant, yet the aggregate gives an impression that the probabilities are on the whole lower than would often have been obtained by chance”

Fisher (1944)

Formula

<MATH> \text{Effect Size} = \frac{\text{Mean of experimental group} - \text{Mean of control group}}{\text{Standard deviation}} </MATH>

Other definitions of effect size exist: the odds ratio and the correlation coefficient.

Cohen’s Heuristic: Standardized mean difference effect size

  • small = 0.20
  • medium = 0.50
  • large = 0.80
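
To make the formula and Cohen’s heuristic concrete, here is a minimal Python sketch (the function names and data are illustrative; the pooled standard deviation uses the common (n − 1)-weighted definition discussed further below):

```python
import math

def cohens_d(experimental, control):
    """Standardized mean difference:
    (mean of experimental group - mean of control group) / pooled SD."""
    n1, n2 = len(experimental), len(control)
    m1 = sum(experimental) / n1
    m2 = sum(control) / n2
    # Unbiased sample variances (denominator n - 1)
    v1 = sum((x - m1) ** 2 for x in experimental) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    # Common pooled SD; other choices exist (see the sigma_pooled section below)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def cohen_label(d):
    """Cohen's heuristic for the magnitude of |d|."""
    d = abs(d)
    if d >= 0.80:
        return "large"
    if d >= 0.50:
        return "medium"
    if d >= 0.20:
        return "small"
    return "below small"  # Cohen's heuristic does not name this range

# Illustrative made-up scores
d = cohens_d([5.1, 4.8, 6.0, 5.5], [4.0, 4.2, 3.9, 4.6])
print(f"d = {d:.2f} ({cohen_label(d)})")
```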

Statistics

Weighted Average

  • Idea: average across multiple studies, but give more weight to more precise studies (see the sketch below)
  • Simple method, weight by sample size: <MATH> w_i = \frac{n_i}{\sum_j n_j} </MATH>
  • Inverse-variance weight: <MATH> w_i = \frac{1}{se_i^2} </MATH>
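
A minimal Python sketch of both weighting schemes, with made-up numbers:

```python
def weighted_mean(effect_sizes, weights):
    """Weighted average of per-study effect sizes; normalized by sum(weights)."""
    return sum(w * es for es, w in zip(effect_sizes, weights)) / sum(weights)

# Made-up per-study effect sizes, sample sizes, and standard errors
es = [0.30, 0.55, 0.20]
n  = [40, 120, 25]
se = [0.25, 0.12, 0.31]

# Weight by sample size: w_i = n_i / sum_j n_j
# (the denominator cancels inside weighted_mean, so raw n_i works as the weight)
print(weighted_mean(es, n))

# Inverse-variance weights: w_i = 1 / se_i^2 -- more precise studies count more
print(weighted_mean(es, [1 / s ** 2 for s in se]))
```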

Standardized Mean Difference

<MATH> \text{Effect Size (ES)} = \frac{\text{Mean of group 1} - \text{Mean of group 2}}{\text{Pooled standard deviation}} = \frac{\overline{X}_1 - \overline{X}_2}{\sigma_{pooled}} </MATH>

Different Definitions of <MATH> \sigma_{pooled} </MATH>

  • Glass (1976): <MATH> \sigma_{pooled} = \hat{\sigma}_2 </MATH>, the standard deviation of the control group (group 2) alone
  • Hartung et al. (2008): …
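
A short sketch contrasting the two denominators, assuming (as in the formula above) that group 2 is the control group; the pooled_sd shown is the commonly used (n − 1)-weighted estimate, not necessarily Hartung et al.’s exact definition:

```python
import math
import statistics

def glass_sd(control):
    """Glass (1976): use the control group's standard deviation alone."""
    return statistics.stdev(control)

def pooled_sd(group1, group2):
    """Commonly used pooled SD: (n - 1)-weighted combination of both variances."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    return math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))

# Made-up scores
treatment = [6.1, 5.8, 7.0, 6.4]
control = [4.0, 4.5, 3.8, 4.3]
diff = statistics.mean(treatment) - statistics.mean(control)
print(diff / glass_sd(control))              # Glass's choice of denominator
print(diff / pooled_sd(treatment, control))  # pooled-SD version
```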

Confidence Interval

Confidence Interval (of effect size)

A 95% confidence interval for the effect size means that if the experiment were repeated many times, we would expect the interval computed in this way to contain the true effect size about 95 times out of 100.

If this interval includes 0.0, that is equivalent to saying the result is not statistically significant at the 5% level.
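
As a sketch, one commonly used large-sample approximation takes the standard error of a standardized mean difference d from groups of sizes n1 and n2 to be sqrt((n1 + n2)/(n1 · n2) + d²/(2(n1 + n2))), giving a normal-approximation 95% interval:

```python
import math

def ci95_for_d(d, n1, n2):
    """Approximate 95% CI for a standardized mean difference d from
    two groups of sizes n1 and n2 (large-sample normal approximation)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - 1.96 * se, d + 1.96 * se

low, high = ci95_for_d(0.45, 60, 55)
print(f"95% CI: ({low:.2f}, {high:.2f})")
# If the interval straddles 0.0, the effect is not significant at the 5% level.
```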