Axiomatic approach to the definition of variance?
I'm trying to grasp the intuition behind the definition of variance. It seems plausible that we want to measure how much a random variable deviates from its expected value. But why use the square, exactly?
From what I can see, we are interested in an assignment of the form $X\mapsto E(f(|E(X)-X|))$ for some strictly monotone $f$ with $f(0)=0$ and $f(1)=1$. Are there further properties of the variance which, if taken as axioms, would force $f(x)=x^2$?
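For reference, the ordinary variance is exactly the case $f(x)=x^2$ of this form:
$$\operatorname{Var}(X)=E\big((X-E(X))^2\big)=E\big(f(|E(X)-X|)\big)\quad\text{with }f(x)=x^2,$$
and this $f$ satisfies $f(0)=0$, $f(1)=1$, and strict monotonicity on $[0,\infty)$.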
For example, would additivity with respect to independent random variables, i.e. $$E(f(|E(X+Y)-X-Y|))=E(f(|E(X)-X|))+E(f(|E(Y)-Y|))$$ for independent $X,Y$, suffice as such an axiom?
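As a numerical sanity check that $f(x)=x^2$ at least satisfies this additivity for independent variables, here is a small Python/NumPy sketch (the distributions and the helper name `dispersion` are my own choices, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent samples: X ~ Exponential(1), Y ~ Uniform(0, 3).
x = rng.exponential(scale=1.0, size=n)
y = rng.uniform(0.0, 3.0, size=n)

def dispersion(z, f=np.square):
    """Monte Carlo estimate of E[f(|E[Z] - Z|)], with f(x) = x^2 by default."""
    return f(np.abs(z.mean() - z)).mean()

# For f(x) = x^2 the two sides agree up to sampling noise
# (Var(X) + Var(Y) = 1 + 0.75 = 1.75); for f = np.abs they generally do not.
print(dispersion(x + y))                 # ~ 1.75
print(dispersion(x) + dispersion(y))     # ~ 1.75
```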
1 Expert Answer

Let's look at the variance formula first: $\mathrm{Var}(X)=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$, the sum of the squared distances from the mean divided by the number of data points, i.e. the average squared distance from the mean. Now imagine the formula without the square. The signed deviations $x_i-\bar{x}$ always sum to zero, because the positive and negative deviations cancel exactly, so this "variance" would be zero for every data set, even when the numbers clearly vary. For example, take the five numbers $-1, -1, 0, 1, 1$: we clearly see variation between the numbers, but without the square the formula gives zero. Squaring makes every term nonnegative, so the cancellation cannot happen.
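To see the cancellation concretely, here is a minimal sketch in Python (the variable names are my own):

```python
# The five numbers from the example: clearly not all equal.
data = [-1, -1, 0, 1, 1]
mean = sum(data) / len(data)  # mean = 0.0

# Without the square: the signed deviations cancel exactly.
signed = sum(x - mean for x in data) / len(data)

# With the square: cancellation is impossible, every term is >= 0.
variance = sum((x - mean) ** 2 for x in data) / len(data)

print(signed)    # 0.0 -- would suggest "no variation", which is wrong
print(variance)  # 0.8 -- positive, as it should be
```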