10

I had been interested in neural networks for a while and thought about using one in Python for a light project that compares various minimization techniques on execution time (which is fastest).

Then I realized I didn't even know if a NN is good for minimization. What do you think?

bias
physicsmichael
  • I was looking at comparisons of brute-force array scanning, simulated annealing, and the Migrad minimizer that is part of Minuit in ROOT. – physicsmichael Mar 17 '09 at 02:39

8 Answers

5

It sounds to me like this is a problem more suited to genetic algorithms than neural networks. Neural nets tend to need a bounded problem to solve, requiring training against known data, and so on, whereas genetic algorithms work by finding better and better approximate solutions to a problem without requiring training.
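
For illustration, a minimal GA sketch in Python; the toy objective, population size, and mutation scale are all arbitrary choices of mine, not anything from the question:

import numpy as np

def f(x):
    # Toy objective with several local minima
    return x**2 + 10 * np.sin(x)

rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=50)               # initial random population

for generation in range(100):
    # Selection: keep the best half of the population
    parents = pop[np.argsort(f(pop))[:25]]
    # Mutation: children are noisy copies of the surviving parents
    children = parents + rng.normal(scale=0.5, size=parents.shape)
    pop = np.concatenate([parents, children])

best = pop[np.argmin(f(pop))]
print(best, f(best))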

Erik Forbes
  • If the (training) data has a characteristic form, then it could prove a good method, but that is quite an assumption... – Alex Mar 16 '09 at 22:18
  • PSO (Particle Swarm Optimization) and ACO (Ant Colony Optimization) are also alternatives to GAs. – Levon May 26 '14 at 08:21
3

Back-propagation works by minimizing the error. However, you can really minimize whatever you want. So, you could use back-prop-like update rules to find the Artificial Neural Network inputs that minimize the output.

This is a big question, sorry for the short answer. I should also add that my suggested approach sounds pretty inefficient compared to more established methods and would only find a local minimum.
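
A minimal numpy sketch of that idea, with hand-picked weights standing in for a trained one-hidden-layer tanh network (chosen so the output actually has an interior minimum); the learning rate and iteration count are arbitrary:

import numpy as np

# Hand-picked weights standing in for a trained 1-in, 2-hidden, 1-out network
W1 = np.array([1.0, 1.0])
b1 = np.array([-1.0, 1.0])
w2 = np.array([1.0, -1.0])

def net(x):
    return w2 @ np.tanh(W1 * x + b1)

def dnet_dx(x):
    # The same chain rule back-prop uses, but taken w.r.t. the input x
    h = np.tanh(W1 * x + b1)
    return w2 @ ((1.0 - h**2) * W1)

x = 2.0                        # arbitrary starting input
for _ in range(500):           # gradient descent on the input, not the weights
    x -= 0.1 * dnet_dx(x)

print(x, net(x))               # ends near a local minimum of the output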

danelliottster
1

The training process of a back-propagation neural network works by minimizing the error from the optimal result. But having a trained neural network find the minimum of an unknown function would be pretty hard.

If you restrict the problem to a specific function class, it could work, and be pretty quick too. Neural networks are good at finding patterns, if there are any.

zweiterlinde
Markus Jarderot
0

You can teach a NN to approximate a function. If the function is differentiable, or your NN has more than one hidden layer, you can teach it to give the derivative of the function.

Example:

You can train a 1-input, 1-output NN to give output = sin(input).

You can also train it to give output = cos(input), which is the derivative of sin().

You get a minimum/maximum of sin where cos equals zero.

Scan for zero output while feeding in many input values: 0 = cos() -> extremum of sin.

When you reach zero output, you know that the input value is at a minimum (or maximum) of the function.

Training takes little time; sweeping for the zero takes longer.
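
A rough sketch of that recipe, assuming scikit-learn's MLPRegressor (the layer size, sample counts, and interval are arbitrary, and the fit may need tuning):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train a 1-input, 1-output net to approximate cos(x), the derivative of sin(x)
X = np.linspace(0, 2 * np.pi, 500).reshape(-1, 1)
y = np.cos(X).ravel()
net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0)
net.fit(X, y)

# Sweep many inputs and look for sign changes in the predicted derivative
xs = np.linspace(0, 2 * np.pi, 2000).reshape(-1, 1)
pred = net.predict(xs)
crossings = xs[:-1][np.sign(pred[:-1]) != np.sign(pred[1:])]
print(crossings.ravel())   # near pi/2 (max of sin) and 3*pi/2 (min of sin)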

huseyin tugrul buyukisik
0

Although this comes a bit too late for the author of this question, maybe somebody who reads this wants to test some optimization algorithms...

If you are working with regressions in machine learning (NN, SVM, multiple linear regression, k-nearest neighbors) and you want to minimize (maximize) your regression function, this is actually possible, but the efficiency of such algorithms depends on the smoothness (step size, etc.) of the region you are searching in.

In order to construct such "machine learning regressions" you could use scikit-learn. You have to train and validate your regression, for example a Support Vector Regression, with its fit method:

svr = SVR()
svr.fit(Sm_Data_X, Sm_Data_y)

Then you have to define a function which returns a prediction of your regression for an input array x:

def fun(x):
    # scipy passes a 1-D point; scikit-learn expects a 2-D sample matrix
    return svr.predict(x.reshape(1, -1))[0]

You can use scipy.optimize.minimize for the optimization. See the examples following the doc links.
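
Putting it together, a minimal sketch of that call (Sm_Data_X is the training data from above; the start point and the gradient-free method are my assumptions, since the surrogate's gradients are not available):

from scipy.optimize import minimize

# Start from an arbitrary training point; Nelder-Mead needs no gradients
result = minimize(fun, x0=Sm_Data_X[0], method="Nelder-Mead")
print(result.x, result.fun)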

www.pieronigro.de
0

They're pretty bad for the purpose; one of the big problems of neural networks is that they get stuck in local minima. You might want to look into support vector machines instead.

chaos
  • Sorry, that was a brain typo. Support vector machines is what I was trying to say. I don't have any better references than wikipedia would point you to. – chaos Mar 16 '09 at 22:19
0

Actually you could use a NN to find a function minimum, but it would work best combined with the genetic algorithms mentioned by Erik.

Basically, NNs tend to find solutions which correspond to a local minimum or maximum of a function, but are pretty precise in doing so (to comment on Tetha's answer stating that NNs are classifiers: you can use one to say whether a data input is at a minimum or not).

In contrast, genetic algorithms tend to find a more universal solution across the whole range of possible inputs, but give you approximate results.

The solution is to combine the two worlds:

  1. Get the approximate result from genetic algorithms
  2. Use that result to find the more precise answer using a NN (see the sketch below)
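
For illustration, a sketch of the two-stage idea using SciPy, with differential evolution standing in for the genetic algorithm and a gradient-based local optimizer in place of the NN refinement (both substitutions are mine, not the answer's):

import numpy as np
from scipy.optimize import differential_evolution, minimize

def f(x):
    # Toy multimodal objective
    return x[0]**2 + 10 * np.sin(x[0])

# Step 1: evolutionary global search gives an approximate result
coarse = differential_evolution(f, bounds=[(-10, 10)], seed=0)

# Step 2: refine that result with a precise local method
fine = minimize(f, x0=coarse.x)
print(coarse.x, fine.x, fine.fun)
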
kristof
-4

Neural networks are classifiers. They separate two classes of data elements. They learn this separation (usually) from pre-classified data elements. Thus, I say: no, unless you stretch them well beyond their design.

Tetha
  • Neural networks are sample interpolators, which you can use for classification, among other things. Using a neural network to approximate a function and then finding the minima of the approximation would be a reasonable thing to do in some cases. – Don Reba Nov 29 '12 at 15:48