I am using Python 2.7, and have a program that solves a recursive optimization problem, that is, a dynamic programming problem. A simplified version of the code is:
from math import log
from scipy.optimize import minimize_scalar

class vT(object):
    def __init__(self, c):
        self.c = c
    def x(self, w):
        return w
    def __call__(self, w):
        return self.c*log(self.x(w))

class vt(object):
    def __init__(self, c, vN):
        self.c = c
        self.vN = vN
    def objFunc(self, x, w):
        return -self.c*log(x) - self.vN(w - x)
    def x(self, w):
        x_star = minimize_scalar(self.objFunc, args=(w,), method='bounded',
                                 bounds=(1e-10, w - 1e-10)).x
        return x_star
    def __call__(self, w):
        return self.c*log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0, p3)
p1 = vt(2.0, p2)

w1 = 3.0
x1 = p1.x(w1)
w2 = w1 - x1
x2 = p2.x(w2)
w3 = w2 - x2
x3 = w3

x = [x1, x2, x3]
print('Optimal x when w1 = 3 is ' + str(x))
If enough periods are added, the program begins to take a long time to run. When x1 = p1.x(w1) is evaluated, p2 and p3 are called many times by minimize_scalar. Also, when x2 = p2.x(w2) is run, we know the final solution will involve evaluating p2 and p3 at arguments that were already used in the first step.
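To make the redundancy visible, the evaluations can be counted by wrapping p2 and p3 in a small counting shim (CountingWrapper is an illustrative helper I added here, not part of my actual code):

```python
from math import log
from scipy.optimize import minimize_scalar

class CountingWrapper(object):
    """Illustrative shim: counts how often a value function is called."""
    def __init__(self, f):
        self.f = f
        self.calls = 0
    def __call__(self, w):
        self.calls += 1
        return self.f(w)

class vT(object):
    def __init__(self, c):
        self.c = c
    def x(self, w):
        return w
    def __call__(self, w):
        return self.c*log(self.x(w))

class vt(object):
    def __init__(self, c, vN):
        self.c = c
        self.vN = vN
    def objFunc(self, x, w):
        return -self.c*log(x) - self.vN(w - x)
    def x(self, w):
        return minimize_scalar(self.objFunc, args=(w,), method='bounded',
                               bounds=(1e-10, w - 1e-10)).x
    def __call__(self, w):
        return self.c*log(self.x(w)) + self.vN(w - self.x(w))

p3 = CountingWrapper(vT(2.0))
p2 = CountingWrapper(vt(2.0, p3))
p1 = vt(2.0, p2)

p1.x(3.0)
# a single outer optimization triggers many p2 calls, and each p2 call
# triggers a full inner optimization that calls p3 many more times
print(p2.calls, p3.calls)
```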
I have two questions:

- What's the best way to use a memoize wrapper on the vT and vt classes to speed up this program?
- When minimize_scalar is run, will it benefit from this memoization?
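For concreteness, the kind of memoize wrapper I have in mind caches results in a dict keyed by the argument, something like the sketch below (Memoized and slow_value are illustrative names, not from my actual code):

```python
from math import log

class Memoized(object):
    """Illustrative memoize wrapper: caches results keyed by the argument."""
    def __init__(self, f):
        self.f = f
        self.cache = {}
    def __call__(self, w):
        if w not in self.cache:
            self.cache[w] = self.f(w)
        return self.cache[w]

calls = []            # records every underlying evaluation
def slow_value(w):    # stand-in for an expensive value function
    calls.append(w)
    return 2.0 * log(w)

v = Memoized(slow_value)
v(3.0); v(3.0); v(3.0)   # repeated calls with the same argument
print(len(calls))        # -> 1: slow_value ran only once
```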
In my actual application, the solutions currently take hours to compute, so speeding this up would be of great value.
UPDATE: A response below points out that the example above could be written without classes, in which case ordinary function decoration could be used. In my actual application, however, I do have to use classes, not functions. Moreover, my main question is whether calls to the function, or to the method when it's a class, made inside minimize_scalar will benefit from the memoization.
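To be explicit about what memoizing on the class would look like, here is one sketch with a per-instance dict cache on vt.x (the _x_cache attribute is my own addition; this at least avoids the second inner optimization that __call__ otherwise triggers):

```python
from math import log
from scipy.optimize import minimize_scalar

class vT(object):
    def __init__(self, c):
        self.c = c
    def x(self, w):
        return w
    def __call__(self, w):
        return self.c*log(self.x(w))

class vt(object):
    def __init__(self, c, vN):
        self.c = c
        self.vN = vN
        self._x_cache = {}   # per-instance cache for x(w)
    def objFunc(self, x, w):
        return -self.c*log(x) - self.vN(w - x)
    def x(self, w):
        if w not in self._x_cache:
            self._x_cache[w] = minimize_scalar(
                self.objFunc, args=(w,), method='bounded',
                bounds=(1e-10, w - 1e-10)).x
        return self._x_cache[w]
    def __call__(self, w):
        # the second self.x(w) below now hits the cache instead of
        # re-running the inner optimization
        return self.c*log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0, p3)
p1 = vt(2.0, p2)

x1 = p1.x(3.0)   # with equal weights the analytic optimum is x1 = 1.0
print(x1)
```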