
I am using scipy.optimize.minimize to find the optimum value from a function. Here is the simplest example, using the built-in Rosenbrock function:

>>> from scipy.optimize import minimize, rosen
>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> # Minimize returns a scipy.optimize.OptimizeResult object...
>>> res = minimize(rosen, x0, method='Nelder-Mead') 
>>> print(res)
  status: 0
    nfev: 243
 success: True
     fun: 6.6174817088845322e-05
       x: array([ 0.99910115,  0.99820923,  0.99646346,  0.99297555,  0.98600385])
 message: 'Optimization terminated successfully.'
     nit: 141

x is just the final, optimum input vector. Can I get a list of all iterations (i.e. each input vector with its corresponding objective-function value) from the returned scipy.optimize.OptimizeResult object?
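For reference, a minimal sketch of the obvious workaround, recording the iterates via minimize's callback parameter (which, for Nelder-Mead, is called with the current parameter vector after each iteration). Note that it captures x but not the objective values:

```python
from scipy.optimize import minimize, rosen

iterates = []  # current x at the end of each iteration

def record(xk):
    # Nelder-Mead calls this after each iteration with the current vector;
    # copy it, since the solver may mutate the array in place
    iterates.append(xk.copy())

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
res = minimize(rosen, x0, method='Nelder-Mead', callback=record)
# iterates now holds the per-iteration vectors, but NOT the objective
# values - recovering those would mean re-evaluating rosen on each point
```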

Masoud Rahimi
feedMe
    http://stackoverflow.com/questions/16739065/how-to-display-progress-of-scipy-optimize-function – cel Dec 15 '15 at 10:10
  • @cel Good answer, but the print command in the callbackF() function re-evaluates the rosen() objective function. In this case that would be OK, but for an expensive objective it would be unacceptable to compute the result a second time just to see its value! Is there a way to get the result directly from the callback? – feedMe Dec 15 '15 at 10:26
  • Usually you do not need any of those intermediate results. Computing/storing them would just slow the algorithm down a lot. Fast algorithms cannot give you that kind of information for the sake of performance. – cel Dec 15 '15 at 10:32
  • @cel OK well in most situations where the objective is expensive, say an engineering simulation, the speed of the algorithm is negligible compared to the simulation time. So I am not interested in the optimization algorithm being fast, I am only interested in it deciding which input vector to use for the next iteration. And I want all of the information at the end of the run without having to re-run those same simulations again! I don't want intermediate results, I just want them all at the end! – feedMe Dec 15 '15 at 10:41
  • @cel cont... Luckily, in the case of simulation you normally do have access to the results at the end anyway. – feedMe Dec 15 '15 at 10:44
  • Yes, but this is not the kind of problem standalone numerical algorithms are optimized for. These algorithms are implemented for the task "Give me a smooth function and a start point and I will find a local minimum for you". – cel Dec 15 '15 at 11:42
  • @cel Gradient-free algorithms for noisy functions are also implemented, and in most cases (for me at least) the outcome should be convex, so I don't see any harm in trying to use them. At least, I wanted to try using optimization tools without going outside of Python. Thanks for the useful discussion! – feedMe Dec 15 '15 at 12:10
  • Somewhat of a kludge, but you could incorporate it into your function to minimize - either print out at each call (console or file), or append data from each call onto a (global) list to be perused later. It is kind of neat to see (once or twice) how the algorithm goes about its business. – Jon Custer Dec 15 '15 at 14:48
  • @JonCuster I would say it is *essential* to see the optimization history, especially for expensive objective functions where you want to find ways to minimize the total number of evaluations in future runs. That's why I was surprised that it wasn't trivial to just print them out at the end. How about an option to store them at each iteration in an array belonging to the OptimizeResults object? – feedMe Dec 15 '15 at 16:23
  • @feedMe - I disagree to some point. Given understanding of the optimization routine, it is not that hard to figure out generally what will happen if you know your starting point and the general 'lay of the land' for the function. And, knowing what happened for one set of parameters may not be of great help for an even slightly different set of parameters. Note also that, particularly for expensive objective functions people would get (rightly) angry if the optimization routine barfed because you ran out of memory to hold the return object. Better to let you do what you want yourself. – Jon Custer Dec 15 '15 at 20:51
  • @JonCuster In what situation would you know the 'general "lay of the land"' for the function, and if you know what will happen why optimize in the first place? Perhaps we are coming from different communities with different needs. For engineering design optimization you want to be able to throw your design problem at a robust algorithm without knowing what will happen (within reason) and the landscape won't necessarily be predictable. And in some cases you will want to plot the attempted data points so you want to see them. – feedMe Dec 17 '15 at 11:42
  • This touches on several different threads. (1) You should have some idea of what the function might look like - it is your problem, after all. (2) Optimization routines are not as robust as you might believe (or else there would be fewer questions about them!). In fact, they are pretty dumb and predictable. (3) So, you need to do a little work to help out the algorithm. Sometimes the best guesses aren't close to the real answer, but better than those that are. See, e.g., http://stackoverflow.com/questions/28187569/find-the-root-of-a-cubic-function/28195068#28195068 – Jon Custer Dec 17 '15 at 14:03
  • Yes, they are not necessarily robust, that is for sure! Hence the need to check what it was doing in order to understand your result and how it was obtained. – feedMe Dec 17 '15 at 14:10
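A sketch of the approach Jon Custer suggests above - logging from inside the objective function itself, so every evaluated point is captured exactly once, with no re-computation (the wrapper name is illustrative):

```python
from scipy.optimize import minimize, rosen

history = []  # (x, f(x)) for every single objective evaluation

def rosen_logged(x):
    fx = rosen(x)                    # evaluate once...
    history.append((x.copy(), fx))   # ...and log that same value
    return fx

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
res = minimize(rosen_logged, x0, method='Nelder-Mead')
# history now holds every trial point and its objective value,
# including rejected simplex moves, at no extra objective cost
```

Since the wrapper is called once per evaluation, the length of the log matches res.nfev, and for an expensive objective the bookkeeping cost is negligible next to the evaluations themselves.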

0 Answers