3

The following was ported from the pseudo-code in the Wikipedia article on Newton's method:

#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys

x0 = 1
f = lambda x: x ** 2 - 2
fprime = lambda x: 2 * x
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20

for i in range(maxIterations):
    denominator = fprime(x0)
    if abs(denominator) < epsilon:
        # A near-zero derivative would make the Newton step blow up.
        print('WARNING: Denominator is too small')
        break
    newtonX = x0 - f(x0) / denominator
    if abs(newtonX - x0) < tolerance:
        print('The root is', newtonX)
        break
    x0 = newtonX
else:
    # The for-loop's else clause runs only when the loop finishes without a break.
    print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
    print('The last computed approximate root was', newtonX)

Question

Is there an automated way to calculate some form of fprime given some form of f in Python 3.x?

Noctis Skytower
  • Doesn't directly answer, but you are better off saving it as a list, e.g. `x**2 - 2` = `[1, 0, -2]`; `2x` would be `[0, 2, 0]` – Joran Beasley May 20 '13 at 13:38
  • For _some_ forms of `f`, you can easily automatically compute a corresponding form of `f'`. For example, if you represent polynomials as lists of coefficients (or a map `exponent -> coefficient`), you can easily compute the derivative. You can go beyond that and include other elementary functions, but the answer in the general case is no. But you can approximate the derivative at any given point using the Taylor expansion and only values of `f`. – Daniel Fischer May 20 '13 at 14:03
  • Thanks goes to Beasley and Fischer for their helpful comments. Another question proved useful: http://stackoverflow.com/questions/11155367 – Noctis Skytower May 20 '13 at 15:04

3 Answers

2

A common way of approximating the derivative of f at x is using a finite difference:

f'(x) ≈ (f(x+h) - f(x))/h                   Forward difference
f'(x) ≈ (f(x+h) - f(x-h))/(2h)              Central (symmetric) difference

The best choice of h depends on x and f: mathematically the difference approaches the derivative as h tends to 0, but the method suffers from loss of accuracy due to catastrophic cancellation if h is too small. A common rule of thumb for the forward difference is h on the order of x times the square root of the machine epsilon (roughly x * 1e-8 for doubles); in any case x + h must be distinct from x. See also implementing the derivative in C/C++.
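
For illustration, a minimal sketch of such a scaled forward difference (the helper name numeric_fprime and the exact scaling are my own choices, not part of the answer above):

import sys

def numeric_fprime(f, x):
    # Forward difference with a step scaled to the magnitude of x;
    # sqrt(machine epsilon) roughly balances truncation and rounding error.
    h = max(abs(x), 1.0) * sys.float_info.epsilon ** 0.5
    return (f(x + h) - f(x)) / h

print(numeric_fprime(lambda x: x ** 2 - 2, 1.5))   # roughly 3.0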

You can avoid approximating f' by using the secant method. It doesn't converge as fast as Newton's, but it's computationally cheaper and you avoid the problem of having to calculate the derivative.
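
A minimal sketch of the secant method (the function name secant, the starting guesses, and the stopping rule are my own choices, not from the answer above):

def secant(f, x0, x1, tolerance=1e-10, max_iterations=50):
    # Root finding without f': the derivative in Newton's step is replaced
    # by the slope of the secant line through the two most recent iterates.
    for _ in range(max_iterations):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break  # the secant line is horizontal; cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tolerance:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x ** 2 - 2, 1, 2))   # about 1.41421356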

Joni
  • Why would you approximate the derivative when you can calculate it if `f` is in an appropriate form? – Noctis Skytower May 20 '13 at 17:15
  • Newton's method is not restricted to functions that are of any particular form. Do you want to restrict your implementation to functions of a particular form? Which one? (polynomial, rational, trigonometric, ...) – Joni May 21 '13 at 13:28
  • Ah, we have different meanings for the term "form." You meant form of equation while I meant form of programmatic representation. Since the given equation was a polynomial with a single variable, that was my target for solving `fprime`. A mathematician would probably target a more generic solution. – Noctis Skytower May 21 '13 at 14:14
  • Given a programmatic representation you can obtain the derivative using symbolic methods, for example with sympy: http://docs.sympy.org/0.7.2/tutorial.html#calculus The symbolic representation of the derivative is possibly slower to evaluate than an approximation, though. Or maybe a sympy expression can be compiled to bytecode so that it runs just as fast as a "native" python expression would, I don't know. – Joni May 21 '13 at 14:46
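
For illustration of the sympy approach described in the comment above, a minimal sketch (sympy.diff and sympy.lambdify do the symbolic differentiation and the conversion back to a plain Python callable; the example expression is the question's f):

import sympy

x = sympy.symbols('x')
expr = x ** 2 - 2
dexpr = sympy.diff(expr, x)            # symbolic derivative: 2*x
fprime = sympy.lambdify(x, dexpr)      # turn it back into a plain Python function
print(fprime(3))                       # 6
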
1

You can approximate fprime any number of ways. One of the simplest would be something like:

fprime = lambda x, dx=0.1: (f(x + dx) - f(x - dx)) / (2 * dx)

The idea here is to sample f around the point x. The sampling region (determined by dx) should be small enough that the variation of f over that region is approximately linear. The algorithm I've used here is the midpoint (central difference) method. You could get more accuracy for most functions by using higher-order polynomial fits, but that would be more expensive to calculate.
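
As an illustration of the "more accuracy" remark, a hedged sketch of one higher-order alternative (this five-point formula is the standard fourth-order central difference; the function name is mine, not the answer's):

def fprime_higher_order(f, x, dx=0.1):
    # Five-point (fourth-order) central difference: equivalent to
    # differentiating the degree-4 polynomial fitted through the samples.
    return (-f(x + 2 * dx) + 8 * f(x + dx) - 8 * f(x - dx) + f(x - 2 * dx)) / (12 * dx)

print(fprime_higher_order(lambda x: x ** 2 - 2, 1.5))   # 3.0 (exact for quadratics)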

Of course, you'll always be more accurate and efficient if you know the analytical derivative.

mgilson
  • Maybe I wanted a Python version of the pseudo-code in the Wikipedia article and then thought that it would be nice to automatically generate the `fprime` variable if only given the `f` variable ... – Noctis Skytower May 20 '13 at 14:07
0

Answer

Define the functions `formula` and `derivative` as follows, directly after the import.

def formula(*array):
    # Build a polynomial function from coefficients listed in order of
    # increasing power: formula(c0, c1, c2)(x) == c0 + c1*x + c2*x**2.
    calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
    calculate.coefficients = array
    return calculate

def derivative(function):
    # Generate the derivative's coefficients: the term c*x**p becomes p*c*x**(p-1).
    return (p * c for p, c in enumerate(function.coefficients[1:], 1))

Redefine `f` using `formula` by plugging in the function's coefficients in order of increasing power.

f = formula(-2, 0, 1)

Redefine `fprime` so that it is created automatically from `f` by the `derivative` and `formula` functions.

fprime = formula(*derivative(f))

That should satisfy your requirement to automatically calculate `fprime` from `f` in Python 3.x.
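
As a quick sanity check using the definitions above (the sample input 3 is arbitrary):

print(f(3))        # 7, i.e. 3 ** 2 - 2
print(fprime(3))   # 6, i.e. 2 * 3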

Summary

This is the final program; it produces the same result as the original code while automatically calculating `fprime`.

#! /usr/bin/env python3
# https://en.wikipedia.org/wiki/Newton's_method
import sys

def formula(*array):
    calculate = lambda x: sum(c * x ** p for p, c in enumerate(array))
    calculate.coefficients = array
    return calculate

def derivative(function):
    return (p * c for p, c in enumerate(function.coefficients[1:], 1))

x0 = 1
f = formula(-2, 0, 1)
fprime = formula(*derivative(f))
tolerance = 1e-10
epsilon = sys.float_info.epsilon
maxIterations = 20

for i in range(maxIterations):
    denominator = fprime(x0)
    if abs(denominator) < epsilon:
        print('WARNING: Denominator is too small')
        break
    newtonX = x0 - f(x0) / denominator
    if abs(newtonX - x0) < tolerance:
        print('The root is', newtonX)
        break
    x0 = newtonX
else:
    print('WARNING: Not able to find solution within the desired tolerance of', tolerance)
    print('The last computed approximate root was', newtonX)
Noctis Skytower