Asked • 03/17/19

Why do we use a Least Squares fit?

I've been wondering for a while now whether there's any deep mathematical or statistical significance to finding the line that minimizes the *square* of the errors between the line and the data points. If we use a less common method like LAD (least absolute deviations), where we just consider the absolute deviation, then outliers make less difference to the final model, while if we take the *cube* of the error (or any other power higher than 2), then outliers become far more significant than with the least squares model. I suppose what I'm really asking is: *mathematically*, is raising the error to the power of 2 really that special? Is it, say, more "accurate" in some sense than raising the error to the power of 1.95 or 2.05? Thanks!
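One easy way to get a feel for the question is to fit the same data by minimizing the sum of |residual|^p for several exponents p and watch how an outlier pulls the fitted line. Below is a minimal sketch of that experiment, not part of the original question: it assumes NumPy and SciPy are available, the toy data and the Nelder-Mead optimizer are arbitrary choices for illustration, and `lp_loss` is just a name made up here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: a roughly linear trend (y = 2x + 1 plus noise) with one large outlier.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)
y[-1] += 15.0  # make the last point an outlier

def lp_loss(params, x, y, p):
    """Sum of |residual|**p for the line y = a*x + b."""
    a, b = params
    return np.sum(np.abs(y - (a * x + b)) ** p)

# Fit the line for a few exponents and compare the results.
for p in (1.0, 1.95, 2.0, 2.05, 3.0):
    res = minimize(lp_loss, x0=[1.0, 0.0], args=(x, y, p), method="Nelder-Mead")
    a, b = res.x
    print(f"p = {p:>4}: slope = {a:.3f}, intercept = {b:.3f}")
```

Running something like this, you should see that the fits for exponents near 2 (1.95, 2.05) are nearly indistinguishable from the least-squares fit, while the differences only become dramatic as p moves down toward 1 (less sensitive to the outlier) or well above 2 (dominated by it).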

1 Expert Answer

Patrick B. answered • 03/17/19
