
Aaron S. answered 05/30/13
This is actually a lot more intuitive than it seems. In one dimension, the best linear approximation of f(x) near x = a is:
f(x) ≈ f(a) + f'(a) * (x-a)
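If it helps to see this numerically, here's a quick Python check using a toy example of my own (f(x) = x², not from the original problem):

```python
# Toy 1D check (my own example): approximate f(2.1) for f(x) = x**2
# using the tangent line at a = 2.
def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x  # exact derivative of x**2

a, x = 2.0, 2.1
approx = f(a) + f_prime(a) * (x - a)  # f(a) + f'(a)*(x - a)
print(round(approx, 6))  # 4.4, close to the exact value f(2.1) = 4.41
```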
We have the same thing for higher dimensions:
R(x) ≈ R(a) + grad(R)(a) · (x-a)
The only difference is that this involves vectors: the input x and the point a are vectors, the gradient grad(R)(a) is a vector, and the · signifies the dot product. (R itself still outputs a scalar.) Using the above equation, we have:
R(3.1, -3.1) ≈ R(3, -3) + grad(R)(3, -3) · ( <3.1, -3.1> - <3, -3> )
            ≈ 4 + <-1, 2> · <0.1, -0.1>
            ≈ 4 + (-0.1 - 0.2)
            ≈ 4 - 0.3
            ≈ 3.7
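And here's the same dot-product arithmetic checked in Python, using the values given in the problem, R(3, -3) = 4 and grad(R)(3, -3) = <-1, 2>:

```python
# Verifying the arithmetic above with the given values
# R(3, -3) = 4 and grad(R)(3, -3) = <-1, 2>.
R_a = 4.0
grad_R = (-1.0, 2.0)
dx = (0.1, -0.1)  # x - a = <3.1, -3.1> - <3, -3>

# Dot product: (-1)(0.1) + (2)(-0.1) = -0.1 - 0.2 = -0.3
dot = grad_R[0] * dx[0] + grad_R[1] * dx[1]
print(round(R_a + dot, 6))  # 3.7
```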