The probability density function for the normal distribution is

P(t, μ, σ) = 1/√(2 π σ^{2}) * e^{-(t-μ)^{2}/(2σ^{2})}

For a standard normal distribution, σ = 1 and μ = 0, but that may not be the case if the problem specifies some other σ or μ.

To find the area within 1.5 standard deviations on either side of the mean, you must integrate that function with respect to t, with lower and upper limits t = μ - 1.5σ and t = μ + 1.5σ respectively (for the standard normal, t = -1.5 and t = 1.5).

This function is not analytically integrable: its antiderivative cannot be written in terms of elementary functions, so there is no closed-form expression for the exact area.

You must do one of the following four things to retrieve your answer:

1.) Use a computational service like Wolfram Alpha to perform the integration.

2.) Use your calculator's CDF function.

3.) Use a table of values to look up the area.

4.) Perform the integration numerically using a Riemann sum, Simpson's rule, or some other finite approximation method (for high accuracy you should not do this by hand, as the number of terms required for convergence is excruciating; instead, write a computer program to do it).
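As a sketch of option 4, here is a minimal Python implementation of composite Simpson's rule applied to the standard normal density (the names `normal_pdf` and `simpson` are my own for illustration, not from any library):

```python
import math

def normal_pdf(t, mu=0.0, sigma=1.0):
    """Normal density P(t, mu, sigma) as defined above."""
    return math.exp(-(t - mu)**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Area within 1.5 standard deviations for the standard normal:
area = simpson(normal_pdf, -1.5, 1.5)  # ≈ 0.8664
```

With n = 1000 subintervals this agrees with table values to many decimal places, which is far more than you would ever manage by hand.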

EDIT: Andre has a very good point below, but since I am guessing you are in a statistics rather than a power-series course, I will carry out the arduous calculus for you.

For a given function f(t) we would like to produce some series that can approximate it to arbitrary precision.

Algorithmically you can think of this as follows: start with an initial guess a_{0}. Then recognize that the answer might depend on the first order of the variable, so we add a term to get a_{0} + a_{1}(t-t_{0}). Recognizing that it could depend on any order of t, by adding and subtracting small monomials we should be able to get nearer and nearer to the actual function. Since subtracting is the same thing as adding a negative coefficient, we can still represent our function as an infinite sum of monomials.

Σ(a_{k}(t-t_{0})^{k})

However, just finding an expression to represent f(t) at a known point t_{0} is fairly useless, since we knew the value of that function to begin with. What we want is an expression to extrapolate (predict) other values that are reasonably distant from the initial t_{0}.

Extrapolation in a linear sense is done by taking a point, looking at the slope and calculating the next point. Thus we can do the same thing by looking at the derivative of the function. But a first derivative is not enough. There exist an infinite number of functions with the same first derivative. If we truly want a polynomial expression that accurately defines our function we need the second derivatives to be the same... and the third, fourth, fifth...., infinite derivative to be the same.

What effect does this have on our series?

Well, the first derivative gives a_{1} + 2a_{2}(t-t_{0}) + 3a_{3}(t-t_{0})^{2}+...

we note that, evaluating at t=t_{0}, a_{1}=f'(t_{0})

The second gives 2a_{2} + 3*2a_{3}(t-t_{0}) + 4*3a_{4}(t-t_{0})^{2}+...

we note that a_{2}=f''(t_{0})/2

The third gives 3*2a_{3}+4*3*2a_{4}(t-t_{0})+5*4*3a_{5}(t-t_{0})^{2}+...

we note that a_{3}=f'''(t_{0})/(3*2) = f'''(t_{0})/3!

Here a pattern emerges, and we see that

a_{n} = (1/n!) f^{(n)}(t_{0})

Thus, our series must be

f(t) = Σ(1/k!) f^{(k)}(t_{0}) (t-t_{0})^{k}

And that will give you the Taylor series for any function (an infinite polynomial series to find a function's value to arbitrary precision); these are normally evaluated by a computer, as the calculations are arduous.

But now to the meat of the question: how do you calculate the integral of the Gaussian distribution?

We have a function of the form e^{x} in the Gaussian distribution.

By the above method we can see that the Taylor series for e^{x} is

Σ(x^{k}/k!)
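To make that concrete, here is a quick Python sketch of the partial sums of this series (the helper name `taylor_exp` is my own):

```python
import math

def taylor_exp(x, terms=20):
    """Partial sum of the Taylor series of e^x about t0 = 0.
    Every derivative of e^x at 0 is 1, so each term is x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(terms))
```

With only 20 terms this already matches `math.exp` to many decimal places for moderate x, which is the "arbitrary precision" claim in action.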

Now, if we go to our definition of the distribution, we see that we really have an e^{-t^{2}/2}; the substitution u = t/√2 turns this into e^{-u^{2}}, so we work with e^{-t^{2}} and account for the √2 at the end.

we simply substitute -t^{2} for x in our expansion, leaving us with Σ((-t^{2})^{k}/k!) = Σ((-1)^{k}t^{2k}/k!)

Now we want an integral of the function. Where we could not integrate the distribution analytically, we can integrate the polynomial series term by term, since the integral of a sum is merely the sum of the integrals.

This leaves us with

Σ((-1)^{k}t^{2k+1}/((2k+1)k!))

We then multiply by the normalizing constant 2/√π and reveal our final expression

(2/π^{1/2}) Σ((-1)^{k}t^{2k+1} / ((2k+1)k!))

Which is the error function that Andre was talking about.

This expression is then evaluated, to a reasonable number of terms in k, at x = 1.5/√2 (the substitution u = t/√2 accounts for the factor of 2 in the exponent of the density), giving erf(1.5/√2) ≈ 0.8664, the centered area within 1.5σ of the mean that you asked for.
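Here is that evaluation as a short Python sketch (the name `erf_series` is my own; `math.erf` appears only as a sanity check):

```python
import math

def erf_series(x, terms=40):
    """erf(x) = (2/sqrt(pi)) * sum_{k>=0} (-1)^k x^(2k+1) / ((2k+1) k!),
    truncated to a finite number of terms."""
    s = sum((-1)**k * x**(2*k + 1) / ((2*k + 1) * math.factorial(k))
            for k in range(terms))
    return 2.0 / math.sqrt(math.pi) * s

# Area within 1.5 standard deviations of the mean for a standard normal:
area = erf_series(1.5 / math.sqrt(2))  # ≈ 0.8664
```

Forty terms is overkill for arguments this small, but it shows how quickly the alternating series settles down.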

I am sorry if that is confusing, I was hoping to avoid that topic. It is important to remember that though Taylor series provide arbitrarily precise calculations for a value, they are still approximations. Since you cannot actually add up an infinite number of terms, you must choose the appropriate number until you have the precision you desire.

Not all Taylor series converge at the same rate, or even to the function itself; where one series might need 40 terms to converge to 10 decimal places of accuracy, another expression might take 50 million, or it may never get there. The optimization of such series representations is an important topic in mathematics used by engineers and computer scientists all over the world.
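As a tiny illustration of differing convergence rates, you can count how many terms of the e^{x} series from earlier are needed at different arguments (the helper name `terms_needed` is my own):

```python
import math

def terms_needed(x, rel_tol=1e-10):
    """Count how many terms of sum x^k / k! must be added before the
    partial sum matches math.exp(x) to the given relative tolerance."""
    total, term, k = 0.0, 1.0, 0
    target = math.exp(x)
    while abs(total - target) > rel_tol * target:
        total += term
        k += 1
        term *= x / k  # next term x^k / k! from the previous one
    return k
```

At x = 1 only a handful of terms are needed; at x = 10 the count grows substantially, and other series (or other expansion points) can fare far worse.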

## Comments

Define a function to be the integral of P(t) from 0 to x. (It's called the error function, erf(x).) Its Taylor series produces a solution as exact as you like.

That is true by definition, but the Fresnel integral is still not an "elementary" function. So to say that there is no way to compute a function that produces an exact solution is technically correct: the best kind of correct (if you are a Futurama fan). But you are correct in that you may compute a function that produces an arbitrarily accurate solution.