
I am familiarizing myself with CVXPY and have encountered a strange problem. I have the following simple toy optimization problem:

import numpy as np
import cvxpy as cp

A=np.array([[1,0,0],[0,1,0], [0,0,1]])
y=np.array([1,1,1])

# Upper bound for the constraint term
upper=1
# Solve the optimization problem using CVXPY
x = cp.Variable(3)
objective = cp.Minimize(cp.sum_squares(x))
constraint = [cp.sum_squares(A*x - y) <= upper]
prob = cp.Problem(objective, constraint)
prob.solve()
optimal_x = x.value

print('Value of constraint at optimal x:' + str(np.linalg.norm(A*optimal_x - y)**2))

Now, I expect the printed value to be smaller than upper=1, but what I get is the following:

Value of constraint at optimal x:3.0000000068183947

I am very confused about how this could be true. Am I using the function cp.sum_squares incorrectly? Am I just setting up the optimization in the wrong way? Any help is appreciated!

Longti

1 Answer


I think the confusion stems from the wrong multiplication in numpy: with plain numpy arrays, * is elementwise (with broadcasting), not matrix multiplication:

>>> A * optimal_x - y
array([[-0.57735027, -1.        , -1.        ],
       [-1.        , -0.57735027, -1.        ],
       [-1.        , -1.        , -0.57735027]])

What I think you actually want is the matrix-vector product:

>>> np.dot(A, optimal_x) - y
array([-0.57735027, -0.57735027, -0.57735027])

That result indeed doesn't violate the constraint (within rounding error):

>>> np.linalg.norm(np.matmul(A, optimal_x) - y) ** 2
1.000000002699704
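
Equivalently, the @ operator (Python 3.5+ with a reasonably recent numpy) performs the same matrix-vector product as np.dot here, so it yields the same residual:

>>> A @ optimal_x - y
array([-0.57735027, -0.57735027, -0.57735027])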

Also see this question for reference on matrix-array multiplication in numpy.

This is indeed confusing because CVXPY objects do properly handle the * operator, even with numpy types:

>>> (A * x - y).value
array([-0.57735027, -0.57735027, -0.57735027])
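
As a side note, in recent CVXPY releases (1.1 and later, if I remember correctly) using * for matrix multiplication is deprecated in favor of @, so the more future-proof way to write the constraint is something like:

>>> constraint = [cp.sum_squares(A @ x - y) <= upper]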

Also note that for any expression tree you construct in CVXPY, you can query its value after the optimization, evaluated at the optimized x:

>>> cp.sum_squares(A*x - y).value
array(1.)
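
If your CVXPY version supports it, you can also ask the constraint object itself how much it is violated, which should come out (approximately) zero here:

>>> constraint[0].violation()  # expect a value very close to 0
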
andersource