# DerApproximator

A Python module for finite-difference approximation of derivatives.

Example:

```python
from DerApproximator import *

print(get_d1(lambda x: (x**2).sum(), [1, 2, 3]))
print(get_d1(lambda x: x**2, [1, 2, 3]))
```

Expected output:

```
[ 1.99999993  3.99999998  5.99999996]
[[ 2.          0.          0.        ]
 [ 0.          3.99999996  0.        ]
 [ 0.          0.          5.99999996]]
```
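Under the hood, get_d1 approximates each partial derivative by a finite-difference quotient. A minimal central-difference sketch in plain numpy (an illustration of the idea, not the library's actual implementation):

```python
import numpy as np

def central_diff(func, x, h=1.5e-8):
    """Approximate the gradient of a scalar-valued func at x
    using the central difference (f(x+h*e_i) - f(x-h*e_i)) / (2h)."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h  # perturb only the i-th coordinate
        grad[i] = (func(x + e) - func(x - e)) / (2 * h)
    return grad

print(central_diff(lambda x: (x**2).sum(), [1, 2, 3]))  # close to [2. 4. 6.]
```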

* check_d1 checks a user-provided routine for obtaining first derivatives of a function.

Example:

```python
from numpy import *
from DerApproximator import *

func = lambda x: (x**4).sum()
func_d = lambda x: 40 * x**3        # deliberately wrong: the true gradient is 4 * x**3
x = arange(1.0, 6.0)
r = check_d1(func, func_d, x)

func = lambda x: x**4
func_d = lambda x: 40 * diag(x**3)  # deliberately wrong: the true Jacobian is 4 * diag(x**3)
x = arange(1.0, 6.0)
r = check_d1(func, func_d, x)
```

Expected output:

```
 func num   user-supplied    numerical    RD
    0        +4.000e+01      +4.000e+00    3
    1        +3.200e+02      +3.200e+01    3
    2        +1.080e+03      +1.080e+02    3
    3        +2.560e+03      +2.560e+02    3
    4        +5.000e+03      +5.000e+02    3
max(abs(d_user - d_numerical)) = 4499.9999861
(is registered in func number 4)

 func num   i,j: dfunc[i]/dx[j]   user-supplied    numerical    RD
    0        0 / 0                 +4.000e+01      +4.000e+00    3
    6        1 / 1                 +3.200e+02      +3.200e+01    3
   12        2 / 2                 +1.080e+03      +1.080e+02    3
   18        3 / 3                 +2.560e+03      +2.560e+02    3
   24        4 / 4                 +5.000e+03      +5.000e+02    3
max(abs(d_user - d_numerical)) = 4499.9999861
(is registered in func number 24)
```
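For contrast, with the correct analytic gradient (the derivative of x**4 is 4*x**3, not 40*x**3) the user-supplied and numerical values agree. A quick standalone cross-check in plain numpy (a sketch that does not use check_d1 itself):

```python
import numpy as np

func = lambda x: (x**4).sum()
func_d = lambda x: 4 * x**3            # correct analytic gradient

x = np.arange(1.0, 6.0)
h = 1.5e-8
# central-difference approximation, perturbing one coordinate at a time
num = np.array([(func(x + h * e) - func(x - h * e)) / (2 * h)
                for e in np.eye(x.size)])
print(np.max(np.abs(func_d(x) - num)))  # tiny, unlike 4499.99... above
```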

* The default diffInt is 1.5e-8; you can override it via the "diffInt" argument of get_d1 and check_d1. Another argument is "stencil": the default value in DerApproximator, FuncDesigner and OpenOpt NSP is 2, i.e. (f(x+diffInt) - f(x-diffInt)) / (2*diffInt); for OpenOpt NLP the default is 1, i.e. (f(x+diffInt) - f(x)) / diffInt.

Example:

```python
from numpy import *
from DerApproximator import get_d1

func = lambda x: (x**4).sum()
x = arange(1.0, 6.0)

r1 = get_d1(func, x, stencil=1, diffInt=1e-5)
print(r1)

r2 = get_d1(func, x, stencil=2, diffInt=1e-5)
print(r2)
```

Expected output:

```
[   4.00005999   32.00024     108.00054     256.00095998  500.00149998]
[   4.           32.          108.          256.          499.99999998]
```
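The effect of the stencil choice can be seen directly: the forward difference (stencil 1) has an O(diffInt) truncation error, while the central difference (stencil 2) has an O(diffInt**2) error. A hand-rolled comparison in plain Python (the exact derivative of x**4 at x = 2 is 32):

```python
func = lambda x: x**4
x0, h = 2.0, 1e-5
exact = 4 * x0**3                                  # 32

forward = (func(x0 + h) - func(x0)) / h            # stencil = 1
central = (func(x0 + h) - func(x0 - h)) / (2 * h)  # stencil = 2

print(abs(forward - exact))  # about 2.4e-4
print(abs(central - exact))  # several orders of magnitude smaller
```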

* If it turns out that f(x+diffInt) is NaN (not a number) or f(x-diffInt) is NaN, than only one side will be involved into calculations. BTW this is a typical situation for lots of numerical optimization problems, and currently functions approx_fprime and check_grad from scipy.optimize are even more primitive - they have only one stencil and no handling of NaNs.