
I am using Python 2.7, and have a program that solves a recursive optimization problem, that is, a dynamic programming problem. A simplified version of the code is:

from math import log
from scipy.optimize import minimize_scalar

class vT(object):
    def __init__(self,c):
        self.c = c

    def x(self,w):
        return w

    def __call__(self,w):
        return self.c*log(self.x(w))

class vt(object):
    def __init__(self,c,vN):
        self.c = c
        self.vN = vN

    def objFunc(self,x,w):
        return -self.c*log(x) - self.vN(w - x)

    def x(self,w):
        x_star = minimize_scalar(self.objFunc,args=(w,),method='bounded',
                                 bounds=(1e-10,w-1e-10)).x
        return x_star

    def __call__(self,w):
        return self.c*log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0,p3)
p1 = vt(2.0,p2)

w1 = 3.0
x1 = p1.x(w1)
w2 = w1 - x1
x2 = p2.x(w2)
w3 = w2 - x2
x3 = w3

x = [x1,x2,x3]

print('Optimal x when w1 = 3 is ' + str(x))

If enough periods are added, the program begins to take a long time to run. When x1 = p1.x(w1) is evaluated, minimize_scalar calls p2 and p3 many times. Also, when x2 = p2.x(w2) is run, we know the ultimate solution will involve evaluating p2 and p3 at points that were already visited in the first step.
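To illustrate the repeated work (with a toy quadratic objective, not the code above), wrapping the objective in a counter shows how many times a single minimize_scalar call evaluates it:

```python
from scipy.optimize import minimize_scalar

calls = {'n': 0}

def obj(x):
    calls['n'] += 1  # count every evaluation minimize_scalar makes
    return (x - 1.0)**2

res = minimize_scalar(obj, method='bounded', bounds=(0.0, 3.0))
print(res.x, calls['n'])  # even this simple quadratic needs multiple evaluations
```

Every such evaluation of p2 in my program triggers its own inner minimize_scalar over p3, which is where the nesting blows up.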

I have two questions:

  1. What's the best way to use a memoize wrapper on the vT and vt classes to speed up this program?
  2. When minimize_scalar is run, will it benefit from this memoization?

In my actual application, the solutions currently take hours to compute, so speeding this up would be of great value.

UPDATE: A response below points out that the example above could be written without classes, so that ordinary function decorators can be used. In my actual application, I do have to use classes, not functions. Moreover, my main question is whether calls to the function or method (when it's a class) made inside minimize_scalar will benefit from the memoization.

profj
  • These don't need to be classes. [One of the hallmarks of not being a class is that it has two functions, one of which is `__init__`.](https://youtu.be/o9pEzgHorH0) Memoizing return values is a standard [example of decorators](https://www.python-course.eu/python3_memoization.php) (or an [SO answer implementing it](https://stackoverflow.com/a/1988826/1394393)). Consider the `functools.partial` function if you wish to avoid repeating the value of `c` throughout your code. The best way to tell if memoization will help is to profile it; obviously, memoizing many results will consume more memory. – jpmc26 Mar 06 '19 at 18:53
  • @jpmc26 I tried to simplify my actual code in order to isolate the issues related to memoization and the `minimize_scalar` command. The actual classes I use have a few hundred lines of code each, and even these classes here have more than two functions. The ones in my example could perhaps be rewritten to only have two functions, but not the ones in my actual application. In that case, the links you provide do not pertain to classes. What would I do in the case where I indeed have to use classes? I could paste my actual code (2000+ lines) if you think that is necessary. – profj Mar 06 '19 at 19:24
  • @jpmc26 The other question is whether the calls of the memoized function when I use `minimize_scalar` add to the cache of values that have already been calculated. Whether I use functions or classes, I would still like to know the answer to that. – profj Mar 06 '19 at 19:27
  • Decorators can absolutely [be applied to instance methods](https://stackoverflow.com/a/30105234/1394393). – jpmc26 Mar 06 '19 at 19:45

1 Answer


I found the answer. Below is an example of how to memoize the program. There may be an even more efficient approach, but this one memoizes the methods of the classes. Furthermore, when minimize_scalar is run, the memoize wrapper caches the result each time it evaluates the methods:

from math import log
from scipy.optimize import minimize_scalar
from functools import wraps

def memoize(obj):
    # The cache lives on the underlying function, so all instances of a
    # class share one dict; because args includes self, entries from
    # different instances get distinct keys and do not collide.
    cache = obj.cache = {}

    @wraps(obj)
    def memoizer(*args, **kwargs):
        # Stringify the arguments to build a hashable key.
        key = str(args) + str(kwargs)
        if key not in cache:
            cache[key] = obj(*args, **kwargs)
        return cache[key]
    return memoizer

class vT(object):
    def __init__(self,c):
        self.c = c

    @memoize
    def x(self,w):
        return w

    @memoize    
    def __call__(self,w):
        return self.c*log(self.x(w))


class vt(object):
    def __init__(self,c,vN):
        self.c = c
        self.vN = vN

    @memoize    
    def objFunc(self,x,w):
        return -self.c*log(x) - self.vN(w - x)

    @memoize
    def x(self,w):
        x_star = minimize_scalar(self.objFunc,args=(w,),method='bounded',
                                 bounds=(1e-10,w-1e-10)).x
        return x_star

    @memoize
    def __call__(self,w):
        return self.c*log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0,p3)
p1 = vt(2.0,p2)

x1 = p1.x(3.0)
len(p3.x.cache) # number of distinct points at which p3.x was evaluated

Out[3]: 60

x2 = p2.x(3.0 - x1)
len(p3.x.cache) # still 60: the second step needed no new evaluations of p3.x

Out[5]: 60
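For reference, if the code is ever migrated to Python 3, the standard library's functools.lru_cache gives the same per-argument caching without a hand-written decorator. A minimal sketch on the vT class above (Python 3 only; because self is part of the cache key, arguments must be hashable and cached instances are kept alive):

```python
from functools import lru_cache
from math import log

class vT(object):
    def __init__(self, c):
        self.c = c

    @lru_cache(maxsize=None)  # caches on (self, w)
    def x(self, w):
        return w

    @lru_cache(maxsize=None)
    def __call__(self, w):
        return self.c*log(self.x(w))

p3 = vT(2.0)
p3(1.5)
p3(1.5)  # second call is served from the __call__ cache, so x is not re-evaluated
print(vT.x.cache_info())  # one miss, no hits
```

Note that minimize_scalar probes mostly distinct floats, so within a single optimization the cache rarely hits; the payoff comes from re-use across periods, as the cache counts above show.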

profj