Questions tagged [karush-kuhn-tucker]

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first-order necessary conditions for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.
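As a concrete illustration of the KKT system mentioned above, the following sketch checks the conditions numerically at a known optimum of a toy problem (the problem, candidate point, and multiplier are chosen purely for illustration):

```python
import numpy as np

# Toy problem: min f(x) = x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0.
# Candidate optimum and multiplier (known analytically for this example).
x = np.array([0.5, 0.5])
mu = 1.0

grad_f = 2 * x                      # gradient of f at x
grad_g = np.array([-1.0, -1.0])     # gradient of g (constant here)
g = 1 - x.sum()                     # constraint value at x

stationarity = np.allclose(grad_f + mu * grad_g, 0)  # ∇f + μ∇g = 0
primal_feas = g <= 1e-12                              # g(x) <= 0
dual_feas = mu >= 0                                   # μ >= 0
comp_slack = np.isclose(mu * g, 0)                    # μ·g(x) = 0

print(stationarity, primal_feas, dual_feas, comp_slack)  # True True True True
```

All four checks pass, confirming that the candidate point satisfies the KKT system for this toy problem.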

The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951. Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.

The KKT conditions include stationarity, primal feasibility, dual feasibility, and complementary slackness.
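For a problem $\min_x f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, with multipliers $\mu_i$ for the inequalities and $\lambda_j$ for the equalities, these four conditions can be written as:

$$\begin{aligned}
&\text{Stationarity:} && \nabla f(x^*) + \textstyle\sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0 \\
&\text{Primal feasibility:} && g_i(x^*) \le 0, \quad h_j(x^*) = 0 \\
&\text{Dual feasibility:} && \mu_i \ge 0 \\
&\text{Complementary slackness:} && \mu_i \, g_i(x^*) = 0
\end{aligned}$$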

433 questions
3
votes
0 answers

Question about Lagrange multipliers, optimization problems and KKT-points.

I am having some difficulties with optimization problems with inequality constraints. In general the problems I am given will look something like this: $$\min f(x,y,z) \\ \text{s.t.} \space \space \space g(x,y,z) \le0 \\ \space \space \space…
3
votes
1 answer

KKT condition with infinite gradient at the boundary

Let $P\subseteq \mathbb{R}^n$ be a convex polytope (cut out by finitely many linear inequalities) and $O\subseteq \mathbb R^n$ be an open set such that $O\cap P$ contains the (relative) interior of $P$. Suppose $f: O\cup P \to \mathbb R$ is a…
3
votes
3 answers

Unconstrained optimization problem with Kuhn-Tucker multiplier undefined

I have the following problem: minimize $f(x,y) = x^2 + y^2$ subject to $h(x,y) = y^2 - (x-1)^3 \leq 0$ To solve it I set up the Lagrangian: $$L = x^2 + y^2 - \lambda((x-1)^3-y^2)$$ And obtain the following FOCs: $$ 2x - 3\lambda(x-1)^2 = 0 $$ $$…
Ali
  • 157
  • 8
3
votes
1 answer

How to handle optimization problems when optimization variable is matrix?

Suppose we have the following optimization problem $$ \min_{0\preceq M \preceq I} y^TMy $$ where $y \in \mathbb{R}^n$ and $M \in \mathbb{R}^{n \times n}$ is a positive semi-definite matrix. Notice that the optimization variable is a matrix. Is there…
Saeed
  • 4,031
  • 1
  • 8
  • 23
3
votes
1 answer

Derive LCP from KKT conditions of a QP

I'm working through this tutorial on LCPs and interior point methods. In it, the authors claim that the following quadratic program $$ \begin{aligned} \min \quad& \frac{1}{2}u^TQu - c^Tu\\ \text{subject to} \quad& Au\leq b, 0 \leq…
ehuang
  • 307
  • 1
  • 2
  • 7
3
votes
1 answer

Optimization under constraints - unique solution or not

Say we have a problem such as minimize $f(x)$ such that $h(x)=0$ and $g(x) \leq0$. Let the minimum achieved under these constraints be $f(x^*) = p^*$. My question is: If $f(x)$ is convex, are $p^*$ and $x^*$ unique? Why, why not? If in the general…
3
votes
1 answer

Rewrite an Optimization problem for $\textrm {min } \:\textrm {max} \{f_1, \dots, f_N\}$

Given the optimization problem: $$\textrm {min } \:\textrm {max} \{f_1, \dots, f_N\}$$$$\textrm{s.t. } \: h(x) = 0$$ with $f_1, \dots, f_N : \Bbb R^n \to \Bbb R$ and $h: \Bbb R^n \to \Bbb R^p$ continuously differentiable. Let $\bar x$ be a local…
DeltaChief
  • 915
  • 7
  • 10
3
votes
1 answer

A version of KKT theorem. Looking for a reference

I have been told that the following optimization theorem holds and is true. I would like to verify this or find a reference. I have searched some standard books that I know (e.g. Luenberger). Theorem. Let $X$ be a vector space. Let $f:S \to…
Lisa
  • 2,889
  • 1
  • 6
  • 15
3
votes
2 answers

Does KKT work for non-convex problems as well?

I want to make sure that the following claim is correct. Please let me know what you think. "Let us assume that we have a constrained non-convex and nonlinear minimization problem. The objective function and all the constraints are differentiable.…
3
votes
2 answers

Karush Kuhn Tucker Theorem Basic Understanding

I have a general question about the Karush-Kuhn-Tucker-Theorem. Let's assume that we want to maximize the following function: $$f(x)$$ subject to some constraints $$g_i(x ) \leq 0$$ Using the objective function $f(x)$ and the constraints I get the…
3
votes
2 answers

solving a constrained 2nd order Euler Lagrange equation for a pair of probability distributions

I am interested in the following constrained optimization problem: $$\text{Minimize }J_p(F, G) = \int_0^1 \left(\frac{F}{f} + \frac{1-G}{g}\right)^{-1} dx.$$ over a pair of cumulative distribution functions $F$ and $G$, with $f := F'$ and $g :=…
3
votes
2 answers

Linear Least Squares with Non Negativity Constraint

I am interested in the linear least squares problem: $$\min_x \|Ax-b\|^2$$ Without constraints, the problem can be solved directly. With an additional linear equality constraint, the problem can also be solved directly, thanks to a Lagrange…
3
votes
0 answers

Is it possible for the Lagrangian to have two or more minima?

I am posing this question in conjunction with the one posted here. Let $F:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be a differentiable function, and let $L$ denote the following subset of $\mathbb{R}^{n}$: \begin{eqnarray} L=\{x\in…
Karthik P N
  • 1,289
  • 6
  • 19
3
votes
1 answer

Beyond LICQ in KKT problem: can't I study irregular points as follows?

Given the following minimization problem $$\min f(x,y)=-x-y$$ in the region described by $ x-4y+3 \geq 0$, $-x+y^2 \geq 0$, $x \geq 0$, I notice that the origin is an irregular point. In fact, the gradients of the constraints, evaluated at the origin,…
3
votes
2 answers

normalization of constraints $ 0 \leq x \leq 1 $ in Lagrangian KKT

With the Lagrangian we have an objective function and a set of equality constraints of the form $ g_{i}(x_{j}) = 0 $. With KKT we can also have a set of inequality constraints of the form $ h_{i}(x_{j}) \leq 0 $. All constraints have to be normalized to…