What is the point of the KKT conditions for constrained optimization? In other words, what is the best way to use them? I have seen examples in different contexts, but I am missing a short overview of the procedure, in one or two sentences.

Should we use them to find the optimal solution of a constrained problem? The reason I am so confused is that one of the KKT conditions already requires the constraints of the original problem to hold. If we knew how to impose the constraints in the first place, then why look at the KKT conditions at all?

Or should we use the other KKT conditions first, i.e., set the gradient of the Lagrangian to zero and extract candidate solutions from that, and then check whether the inequality and equality constraints hold?

I would deeply appreciate it if you could clarify.

    The KKT conditions are not used to *find* an optimal solution. They are simply necessary (and sometimes sufficient) conditions for optimality. Therefore, given a solution, we can check to make sure it meets the necessary conditions. In general, points that satisfy the KKT conditions can't be solved for immediately. Rather, typical algorithms iteratively move towards points that satisfy KKT conditions (i.e., numerically solving the KKT system of equations). – David Feb 26 '17 at 22:28
    To quote Boyd and Vandenberghe, from *Convex Optimization*: "The KKT conditions play an important role in optimization. In a few special cases it is possible to solve the KKT conditions (and therefore, the optimization problem) analytically. More generally, many algorithms for convex optimization are conceived as, or can be interpreted as, methods for solving the KKT conditions." – David Feb 26 '17 at 22:32
    The strategy is the same as the strategy from calculus of minimizing a function by setting the derivative equal to $0$. For unconstrained problems, the optimality condition is just that the derivative is equal to $0$. For constrained problems, the analogous optimality condition is the KKT conditions. – littleO Mar 06 '17 at 20:36
  • @David Given a problem, is there a sufficient condition that guarantees an analytical solution of the KKT conditions? That is, in which situations is an analytical solution possible, and can we characterize them with a single statement? It seems that Boyd and Vandenberghe do not elaborate further on this. – newman_ash Feb 14 '22 at 00:33

1 Answer


Since it doesn't seem that anybody else is giving an answer, I will elaborate slightly on my comments above. The first thing to point out is that the KKT conditions don't give a "procedure," as your question implies. Rather, the KKT conditions give a "target" for procedures to move toward.

The KKT conditions are primarily a set of necessary conditions for optimality in (constrained) optimization problems. This means that if a point does NOT satisfy the conditions, we know it is NOT optimal. In particular cases, the KKT conditions are stronger and are both necessary and sufficient (e.g., for Type 1 invex functions). In those cases, any point that satisfies the KKT system is globally optimal.
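As a concrete illustration (a toy problem of my own, not from the question): minimize $f(x) = x_1^2 + x_2^2$ subject to $x_1 + x_2 \ge 1$. Here the KKT conditions do let us enumerate the finitely many candidate points by casing on whether the constraint is active; the helper `solve_kkt_cases` below is a hypothetical name for this sketch.

```python
# Toy problem (my own example):
#   minimize f(x) = x1^2 + x2^2   subject to   g(x) = 1 - x1 - x2 <= 0
# KKT conditions with multiplier mu >= 0:
#   stationarity:            2*x1 - mu = 0  and  2*x2 - mu = 0
#   primal feasibility:      1 - x1 - x2 <= 0
#   complementary slackness: mu * (1 - x1 - x2) = 0

def solve_kkt_cases():
    """Enumerate KKT candidates by casing on the active set."""
    candidates = []

    # Case 1: constraint inactive (mu = 0). Stationarity forces x1 = x2 = 0,
    # but then g(x) = 1 > 0, so this point is infeasible and is discarded.
    x1 = x2 = 0.0
    if 1 - x1 - x2 <= 0:
        candidates.append((x1, x2, 0.0))

    # Case 2: constraint active (x1 + x2 = 1). Stationarity gives
    # x1 = x2 = mu / 2, hence x1 = x2 = 1/2 and mu = 1 >= 0: a valid KKT point.
    x1 = x2 = 0.5
    mu = 2 * x1
    if mu >= 0:
        candidates.append((x1, x2, mu))

    return candidates

print(solve_kkt_cases())  # [(0.5, 0.5, 1.0)]
```

Note how this mirrors your question: the stationarity equation produces candidates, and feasibility plus complementary slackness then prune them. The case enumeration is only tractable because there is a single constraint; with $m$ inequality constraints there are up to $2^m$ active-set cases, which is exactly why we usually turn to numerical methods.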

So what do the KKT conditions do for us? By giving us a system of equations and inequalities, they give us something to solve. Typically we can't solve this system analytically, so we use numerical methods instead (e.g., sequential quadratic programming).
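For instance, assuming SciPy is available, its SLSQP method (an SQP-type solver) can be pointed at the same kind of toy problem: the solver iterates toward a point satisfying the KKT conditions, which we can then check numerically. The objective, starting point, and tolerances below are my own illustration, not something from the question.

```python
# Numerical sketch (assumes SciPy is installed). SLSQP is an SQP-type
# solver: it iterates toward a point satisfying the KKT conditions.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2

# SciPy's "ineq" convention is fun(x) >= 0, so x1 + x2 >= 1 becomes:
cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1}]

res = minimize(f, x0=[2.0, 0.0], method="SLSQP", constraints=cons)
print(res.x)  # approximately [0.5, 0.5]

# Rough KKT check at the returned point: grad f = (2*x1, 2*x2) should be a
# nonnegative multiple of the active constraint's gradient, here (1, 1).
grad_f = 2 * res.x
mu = grad_f[0]  # stationarity: grad f = mu * (1, 1)
assert mu >= 0 and np.allclose(grad_f, mu * np.ones(2), atol=1e-3)
```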

If you have specific questions about numerical (or exact) methods in given contexts, I'd suggest asking a new question with those details.
