I do not have much experience with constrained optimization, but I am hoping that you can help. My current problem involves a more complex function, but the constraints are similar to the ones below. Just so that I can see how to apply Lagrange multipliers to my problem, I want to look at a simpler function.

Suppose that I would like to minimize the function

$$ f(x,y) = a + bx + cy $$ subject to the constraints $$ 0 \le x, y \le 1. $$

If I understand the Lagrange multipliers technique correctly, I should create 4 constraint functions: $x = 0$, $x = 1$, $y = 0$, and $y = 1$. This would lead to minimizing the function $$ a + bx + cy - \lambda_1 x + \lambda_2 (x-1) - \lambda_3y + \lambda_4(y-1) $$ with respect to $x,y,\lambda_1, \lambda_2,\lambda_3$, and $\lambda_4$.

Does this look correct? If not, where did I go wrong? Your help is greatly appreciated.

    I'm surprised that no one has yet mentioned the [KKT conditions](http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions), which are what you need to apply Lagrange multipliers to an inequality-constrained problem. –  Jul 06 '11 at 16:26
  • @ℝⁿ. Yes this is so according to [Wikipedia](http://en.wikipedia.org/wiki/Lagrange_multiplier#Multiple_constraints) but why is it so? Could you explain? *"The method of Lagrange multipliers is generalized by the Karush–Kuhn–Tucker conditions, which can also take into account inequality constraints of the form $h(x) \leq c$."* Moved [here](http://math.stackexchange.com/questions/311328/kkt-for-generalizing-lagrange-multiplier-method-with-inequality-constraints). – hhh Feb 22 '13 at 16:51
  • Oh my! I was on the wrong track all the time. The notation 0<=x,y<=1 looks to me like 0<=x and y<=1, which is not bounded. – Douba Jun 08 '16 at 07:49
    To clarify, it was shorthand for $0 \le x \le 1$ and $0 \le y \le 1$. – ramhiser Jun 08 '16 at 20:16

3 Answers


That might work, but it has far more variables than equations. I recommend thinking instead as follows:

The region $0 \leq x \leq 1$, $0 \leq y \leq 1$ is a square. More importantly, it's a compact region. So we know that our (continuous) function attains a maximum and a minimum somewhere on it, by the extreme value theorem. It will attain each extremum either in the interior or on the boundary.

To check the interior, perform the usual max/min test: take the gradient and set it to zero. Either use the multivariable second-derivative test or just compare the values of $f$ at the resulting points to identify maxima and minima. Next, check the boundary. This means you could run the regular Lagrange multiplier method 4 times, once with each constraint: $$\begin{align} y &= 0 \\ y &= 1 \\ x &= 0 \\ x &= 1 \end{align}$$ I want to emphasize that I would handle these constraints separately rather than together. Each one is trivial to solve, but you only keep the solutions that lie within your region of interest. There is one more place that you need to check, and that's the corners (here, the boundary of the boundary: it's as if we treated our domain as 4 separate line segments, themselves closed and bounded).

Finally, you compare the candidates from these three places: the interior, the line-boundaries, and the corners. Because the region is compact, there will be an absolute max and an absolute min (each perhaps attained at more than one point). That seems to be the easiest way to do this, especially as all the equations in this case are very simple to solve (all linear).

I should note that you do not need to use Lagrange multipliers if you don't want to. You could interpret the boundary lines as paths, and maximize/minimize $f$ along each of the paths (which requires not Lagrange multipliers but a parameterization). Then you still compare the interior and the boundary. But this sounds not so fun to me, and not worth it in this case.
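For the linear example in the question this recipe collapses nicely: the gradient $(b, c)$ is constant, so (unless $b = c = 0$) there are no stationary points in the interior or on the edges, and only the four corners matter. A minimal Python sketch of the comparison, using illustrative values for $a$, $b$, $c$ (the question leaves them symbolic):

```python
# Sketch of the "interior, edges, corners" recipe for the linear example
# f(x, y) = a + b*x + c*y on the unit square. The coefficient values
# below are illustrative assumptions, not given in the question.
a, b, c = 1.0, -2.0, 3.0

def f(x, y):
    return a + b * x + c * y

candidates = []

# Interior: grad f = (b, c) is constant, so there is no interior
# stationary point unless b = c = 0 (in which case f is constant).
if b == 0 and c == 0:
    candidates.append((0.5, 0.5))

# Edges: restricted to any edge, f is linear in one variable, so its
# extrema on the edge occur at the edge's endpoints -- the corners.

# Corners (the boundary of the boundary):
candidates += [(0, 0), (0, 1), (1, 0), (1, 1)]

best = min(candidates, key=lambda p: f(*p))
print(best, f(*best))  # prints: (1, 0) -1.0
```

With these coefficients $f$ decreases in $x$ and increases in $y$, so the minimum sits at the corner $(1, 0)$, as the comparison confirms.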


What you call "constraints" is in reality the definition of the compact set $Q$ over which the function $f$ has to be maximized. In order to find this maximum one has to set up a list of candidate points $P_k\in Q$, using the "stratification" of $Q$ into open manifolds of dimensions $2$, $1$ and $0$. The reason for that is the following: Setting a derivative (or $\nabla f$) to $0$ will only find stationary points in the interior of some domain of "full dimension".

In your example the set $Q$ is stratified as follows: It consists of a $2$-dimensional open interior $Q^\circ$, of four $1$-dimensional boundary segments $E_i$ (without endpoints!) and of four vertices $V_i$.

The possible candidates in $Q^\circ$ are brought to the fore by solving the equation $\nabla f(x,y)=0$, i.e. the system $f_x(x,y)=0$, $f_y(x,y)=0$, and retaining the solutions that lie in $Q^\circ$.

For the possible candidates on an edge $E_i$ there are two methods. Either you can produce a parametric representation $t\mapsto\bigl(x(t),y(t)\bigr)$ of $E_i$ (which in your example you certainly can) and then compute the "conditionally stationary points" of $f$ on $E_i$ by looking at the pullback $\phi(t):=f\bigl(x(t),y(t)\bigr)$: you solve the equation $\phi'(t)=0$, obtain some $t_k$, and add the points $\bigl(x(t_k),y(t_k)\bigr)$ to the list if they belong to $Q$. Or, if the edge $E_i$ is given by a constraint $G(x,y)=0$ which you cannot solve for $x$ or $y$, then you have to resort to Lagrange multipliers. (I omit the details.)

Finally you have to add the vertices $V_i$ to your candidate list.

Assume you now have a finite candidate list $L=\{P_1, P_2,\ldots, P_r\}\subset Q$. Then $$\max_{(x,y)\in Q} f(x,y)\ =\ \max\{f(P_1),f(P_2),\ldots, f(P_r)\}\ .$$

I'm sorry that things are so cumbersome, even for such a simple domain as a square. But it's easy to cook up an example where by forgetting one vertex or a conditionally stationary point on an edge you miss the maximum of $f$ on $Q$.
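As a concrete illustration of this candidate-list procedure, here is a sympy sketch over the unit square. The quadratic $f$ below is an illustrative stand-in (the question's linear $f$ has no interior or edge stationary points), chosen so that all three strata actually contribute candidates:

```python
# Sketch of the stratified candidate-list method on the unit square Q,
# using sympy. The quadratic f is an illustrative assumption, chosen so
# that the interior, the edges, and the vertices all contribute.
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
f = (x - sp.Rational(1, 2))**2 + (y - sp.Rational(1, 4))**2

candidates = []

# 1. Interior Q°: solve grad f = 0, keep solutions with 0 < x, y < 1.
for sol in sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True):
    if 0 < sol[x] < 1 and 0 < sol[y] < 1:
        candidates.append((sol[x], sol[y]))

# 2. Edges E_i (without endpoints): parametrize, solve phi'(t) = 0,
#    keep only parameter values strictly inside the edge.
edges = [(t, sp.Integer(0)), (t, sp.Integer(1)),
         (sp.Integer(0), t), (sp.Integer(1), t)]
for (xe, ye) in edges:
    phi = f.subs({x: xe, y: ye})          # the pullback of f to the edge
    for tk in sp.solve(sp.diff(phi, t), t):
        if 0 < tk < 1:
            candidates.append((xe.subs(t, tk), ye.subs(t, tk)))

# 3. Vertices V_i.
candidates += [(0, 0), (0, 1), (1, 0), (1, 1)]

values = [f.subs({x: px, y: py}) for (px, py) in candidates]
print(max(values), min(values))  # prints: 13/16 0
```

The minimum $0$ comes from the interior stationary point $(1/2, 1/4)$, while the maximum $13/16$ is attained at vertices, exactly the situation the stratification is designed to catch.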

Christian Blatter

mixedmath's answer is right; here is some more on your question "where did I go wrong?". You tried to impose all four constraints at once, even though they are not all compatible. Remember that the method of Lagrange multipliers not only adds multiples of the constraints to the objective function; it also determines both the original variables and the multipliers by setting all the derivatives to zero (where the derivatives with respect to the multipliers recover the constraints). If you include incompatible constraints, such as $x=0$ and $x=1$, the resulting system of equations has no solutions.

In the present case, because the constraints happen to differ only by additive constants, you could get away with ignoring that: just set the derivatives of your function with respect to $x$ and $y$ to zero and impose the constraints one pair at a time. You would then have a slight overabundance of multipliers but would still get the right values of $x$ and $y$. But if your constraints differ by more than an additive constant, so that their derivatives with respect to the variables differ, that no longer works, and the constraints get inextricably mixed up if you impose them all at once.
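A small sympy sketch of this point: imposing the incompatible constraints $x = 0$ and $x = 1$ simultaneously yields a system with no solution, while imposing one at a time works. To keep the system minimal, this uses the one-variable analogue $f(x) = a + bx$ of the question's objective (an illustrative simplification):

```python
# Demonstrates that incompatible Lagrange constraints give an unsolvable
# system. Uses the one-variable analogue f(x) = a + b*x of the
# question's objective; a and b stay symbolic.
import sympy as sp

x, l1, l2 = sp.symbols('x lambda1 lambda2', real=True)
a, b = sp.symbols('a b')
f = a + b*x

# Both x = 0 and x = 1 imposed at once: L = f - l1*x + l2*(x - 1).
# The constraint equations -x = 0 and x - 1 = 0 are inconsistent.
L_both = f - l1*x + l2*(x - 1)
eqs_both = [sp.diff(L_both, v) for v in (x, l1, l2)]
both = sp.solve(eqs_both, [x, l1, l2], dict=True)
print(both)  # [] -- no solution

# One constraint at a time, e.g. x = 0 alone: L = f - l1*x.
# Stationarity gives b - l1 = 0 and -x = 0, i.e. x = 0, lambda1 = b.
L_one = f - l1*x
eqs_one = [sp.diff(L_one, v) for v in (x, l1)]
one = sp.solve(eqs_one, [x, l1], dict=True)
print(one)  # a single solution: x = 0, lambda1 = b
```

Note that the surviving multiplier $\lambda_1 = b$ is exactly the sensitivity of the objective to relaxing the active constraint, which is the usual interpretation of a Lagrange multiplier.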
