
Hi, I am trying to use Newton's method to minimise a function, but I keep getting the error below when I run the code and I don't know why. Any help is much appreciated. Thanks!

Error:

ValueError: shapes (2,1) and (2,1) not aligned: 1 (dim 1) != 2 (dim 0)  

Code:

import sympy as sy
from sympy import symbols
import numpy as np
from numpy import linalg as la
from scipy.optimize import minimize
a1=0.3
a2=0.6
a3=0.2
b1=5
b2=26
b3=3
c1=40
c2=1
c3=10
h=0.000001

def TutFunc(x):
    x=np.empty((2,1))
    u = x[0][0] - 0.8
    v = x[1][0] - ((a1+(a2*u**2))*(1-u)**0.5-(a3*u))
    alpha = -b1+b2*u**2*(1-u)**0.5+b3*u
    beta = c1*v**2*(1-c2*v)/(1+c3*u**2)
    y= alpha*np.exp(-beta)
    return y

def JacobianFun(x):
    x=np.empty((2,1))
    Jx1 = (TutFunc(x+[[h],[0]]) - TutFunc(x-[[h],[0]]))/(2*h)
    Jx2 = (TutFunc(x+[[0],[h]]) - TutFunc(x-[[0],[h]]))/(2*h)
    jaco = np.array([[Jx1],[Jx2]])
    return jaco

def HessianFun(x): 
    x=np.empty((2,1))
    Hx1x1 = (TutFunc(x+[[h],[0]]) - 2*TutFunc(x) + TutFunc(x-[[h],[0]]))/h**2
    Hx1x2 = (TutFunc(x+[[h],[h]]) - TutFunc(x+[[h],[-h]]) - TutFunc(x+[[-h],[h]]) + TutFunc(x-[[h],[h]]))/(4*h**2)
    Hx2x1 = Hx1x2
    Hx2x2 = (TutFunc(x+[[0],[h]]) - 2*TutFunc(x) + TutFunc(x-[[0],[h]]))/h**2
    Hess = np.array([[Hx1x1, Hx1x2],[ Hx2x1, Hx2x2]])
    return Hess

x0=([0.7, 0.3]
x=minimize(TutFunc,x0,method= 'Newton-CG', jac=JacobianFun, hess=HessianFun)
  • What is meant by `x0=([0.7, 0.3]`? It is an incomplete expression. – Evgeny Aug 24 '18 at 12:48
  • I think it should be `x0 = np.array([0.7, 0.3])` – NoorJafri Aug 24 '18 at 12:49
  • @SyedNoorAliJafri - that possibly solves it! More clues for the OP at https://stackoverflow.com/questions/39608421/showing-valueerror-shapes-1-3-and-1-3-not-aligned-3-dim-1-1-dim-0 – Evgeny Aug 24 '18 at 12:51
  • `x0 = np.array([0.7, 0.3])` gives shape (2,) and `x0 = np.array([[0.7], [0.3]])` gives shape (2, 1). The OP can check the dimension using `x0.shape`. This might give you a clue about the dimensional error. – NoorJafri Aug 24 '18 at 12:53
  • @SyedNoorAliJafri that doesn't solve it; the same error occurs – Nu2prog Aug 24 '18 at 12:59
  • Looking deeper into the code - you repeat ```def JacobianFun(x): x=np.empty((2,1))```, which effectively wipes out the argument in every function; I think this is wrong behaviour. – Evgeny Aug 24 '18 at 13:00
  • @EPo he then uses it after initialising it. I don't know what exactly it is, but the error is generated somewhere else. – NoorJafri Aug 24 '18 at 13:14
  • @SyedNoorAliJafri - there is no point in passing `x` to a function and then overwriting it. I think the OP has to specify what he is after: returning a matrix of functions or calculating a value at some point `x`. The code has become a mixture of the two, and it is unclear what `JacobianFun` returns - a vector of floats or a vector of two functions. – Evgeny Aug 24 '18 at 13:30
  • Upon debugging, the minimize routine calls `numpy.dot(ri, ri)`, which brings us closer to the error (a short reproduction follows these comments). Check this thread: https://stackoverflow.com/questions/28028991/numpy-dot-dimensions-not-aligned – NoorJafri Aug 24 '18 at 13:36
  • @Nu2prog are you sure you have the Jacobian function right? Please check my solution; optimisation achieved though :p – NoorJafri Aug 24 '18 at 15:06
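
For reference, here is a minimal sketch (mine, not from the thread) that reproduces the same ValueError: numpy.dot treats two (2, 1) column vectors as matrices with mismatched inner dimensions, which matches the comment above about numpy.dot(ri, ri) inside Newton-CG when the Jacobian comes back 2-D:

import numpy as np

# Two column vectors of shape (2, 1), like the Jacobian the question's code returns.
a = np.array([[1.0], [2.0]])
b = np.array([[3.0], [4.0]])

try:
    # 2-D inputs are matrix-multiplied: (2, 1) x (2, 1) has mismatched inner dimensions.
    np.dot(a, b)
except ValueError as err:
    print(err)  # shapes (2,1) and (2,1) not aligned: 1 (dim 1) != 2 (dim 0)

# With 1-D arrays of shape (2,), np.dot is the inner product and works fine.
print(np.dot(a.ravel(), b.ravel()))  # 11.0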

1 Answer


I have tried to optimise your function. You need to alter your Jacobian and Hessian functions. I have altered the Jacobian; the Hessian you need to do yourself.

See the scipy.optimize.minimize documentation.

According to the documentation, jac(x) -> array_like, shape (n,), which means the Jacobian function takes x (an ndarray) and returns an array of shape (n,) - in your case (2,). Yours, on the other hand, returned shape (2, 1), so I switched between the two components one by one to achieve optimisation. You can go through the documentation if you like. The second thing is that hess and hessp are two different parameters.

hess : hess(x, *args) -> {LinearOperator, spmatrix, array}, (n, n)

It takes arguments and returns something I honestly have no idea about. Whereas, looking at your code, I think you were actually thinking of using:

hessp: hessp(x, p, *args) -> ndarray shape (n,)

which takes arguments and returns an ndarray of shape (n,).

For simplicity I have disabled the Hessian entirely (hessp=None in the call below, and hess is not passed at all).

Plus, I don't see the point of initialising the array x with an empty one inside every function.
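
To illustrate that point (a small demo of mine, not from the original code): reassigning x = np.empty((2, 1)) inside the function throws away whatever point the solver passed in and leaves uninitialised memory in its place:

import numpy as np

def f(x):
    x = np.empty((2, 1))   # discards the argument; contents are uninitialised
    return x

print(f(np.array([0.7, 0.3])))  # whatever happened to be in memory - not the [0.7, 0.3] that was passed in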

Here is the code:

import sympy as sy
from sympy import symbols
import numpy as np
from numpy import linalg as la
from scipy.optimize import minimize
a1=0.3
a2=0.6
a3=0.2
b1=5
b2=26
b3=3
c1=40
c2=1
c3=10
h=0.000001
flag = 0   # toggles which component JacobianFun returns on successive calls

def TutFunc(x):
    u = x[0] - 0.8
    v = x[1] - ((a1+(a2*u**2))*(1-u)**0.5-(a3*u))
    alpha = -b1+b2*u**2*(1-u)**0.5+b3*u
    beta = c1*v**2*(1-c2*v)/(1+c3*u**2)
    y= alpha*np.exp(-beta)
    return y

def JacobianFun(x):
    global flag
    Jx1 = (TutFunc(x+[[h],[0]]) - TutFunc(x-[[h],[0]]))/(2*h)
    Jx2 = (TutFunc(x+[[0],[h]]) - TutFunc(x-[[0],[h]]))/(2*h)
    jaco = np.array([Jx1, Jx2])
    # return one component per call, alternating between the two
    if flag == 0:
        flag = 1
        return jaco[0]
    else:
        flag = 0
        return jaco[1]

def HessianFun(x):
    global flag   # flag is reassigned below, so it must be declared global
    Hx1x1 = (TutFunc(x+[[h],[0]]) - 2*TutFunc(x) + TutFunc(x-[[h],[0]]))/h**2
    Hx1x2 = (TutFunc(x+[[h],[h]]) - TutFunc(x+[[h],[-h]]) - TutFunc(x+[[-h],[h]]) + TutFunc(x-[[h],[h]]))/(4*h**2)
    Hx2x1 = Hx1x2
    Hx2x2 = (TutFunc(x+[[0],[h]]) - 2*TutFunc(x) + TutFunc(x-[[0],[h]]))/h**2
    Hess = np.array([[Hx1x1, Hx1x2], [Hx2x1, Hx2x2]])
    # same alternating trick as JacobianFun (not used in the minimize call below)
    if flag == 0:
        flag = 1
        return Hess[0]
    else:
        flag = 0
        return Hess[1]


x0 = np.array([0.7, 0.3])
print(x0.shape)   # (2,) - a flat 1-D array, which is what minimize expects
x = minimize(TutFunc, x0, method='Newton-CG', jac=JacobianFun, hessp=None)
print(x)

OUTPUT for x when the Hessian function is disabled:

    jac: array([ 2.64884244, -2.89355671])
message: 'Optimization terminated successfully.'
   nfev: 2
   nhev: 0
    nit: 1
   njev: 6
 status: 0
success: True
      x: array([0.69999996, 0.30000004])
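
As a side note (this is my sketch, not part of the answer above): with TutFunc written for a 1-D x as in the code above, a gradient that directly returns the flat (n,) array the documentation asks for could look like the following; GradientSketch, g1 and g2 are names I made up, and it reuses TutFunc and h from the script above:

def GradientSketch(x):
    # central differences at a 1-D point x of shape (2,);
    # returns an array of shape (2,), which is what minimize expects from jac
    g1 = (TutFunc(x + [h, 0.0]) - TutFunc(x - [h, 0.0])) / (2 * h)
    g2 = (TutFunc(x + [0.0, h]) - TutFunc(x - [0.0, h])) / (2 * h)
    return np.array([g1, g2])

print(GradientSketch(np.array([0.7, 0.3])).shape)  # (2,)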