Let $A,A_h\in M_{n\times m}(\mathbb{R})$ with $\Vert (A-A_h)x\Vert\leq h^2 \Vert x\Vert$ for all $x\in\mathbb{R}^m$ and $h\in (0,1)$, and let $e\in \mathbb{R}^n$. Now consider, for a fixed $\lambda>0$, the optimization problems $$\min_x e^TA_{(h)}x+\lambda\Vert x\Vert_2^2\quad \text{subject to}\quad \Vert x\Vert_1\leq C_1,\ \Vert x\Vert_\infty\leq C_2,$$ where $A_{(h)}$ stands for either $A$ or $A_h$.
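Not part of the argument, just a quick way to observe the rate numerically: a minimal sketch using `scipy.optimize.minimize` (SLSQP) on a random instance where the bounds are loose enough that $x_0$ is strictly feasible. All concrete numbers ($n$, $m$, $\lambda$, $C_1$, $C_2$, the seed) are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 4, 6
A = rng.standard_normal((n, m))
e = rng.standard_normal(n)
lam, C1, C2 = 1.0, 50.0, 10.0   # loose bounds, so the minimizer is strictly feasible

def solve(M):
    # minimize e^T M x + lam * ||x||_2^2  s.t.  ||x||_1 <= C1, ||x||_inf <= C2
    obj = lambda x: e @ M @ x + lam * (x @ x)
    grad = lambda x: M.T @ e + 2 * lam * x
    cons = [{"type": "ineq", "fun": lambda x: C1 - np.abs(x).sum()},
            {"type": "ineq", "fun": lambda x: C2 - np.abs(x).max()}]
    res = minimize(obj, np.zeros(m), jac=grad, constraints=cons,
                   method="SLSQP", options={"ftol": 1e-12, "maxiter": 200})
    return res.x

x0 = solve(A)
errors = {}
for h in (0.2, 0.1, 0.05):
    E = rng.standard_normal((n, m))
    # scale the perturbation so that ||(A - A_h) x|| <= h^2 ||x|| holds exactly
    A_h = A + (h**2 / np.linalg.norm(E, 2)) * E
    errors[h] = np.linalg.norm(x0 - solve(A_h))
    print(f"h = {h}: ||x0 - xh|| = {errors[h]:.2e}")
```

In the strictly feasible regime the printed errors stay below $h^2\Vert e\Vert/(2\lambda)$, consistent with the $Ch^2$ rate derived below for that case.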

I'll denote the minimizer of the problem with $A_h$ by $x_h$ and that of the problem with $A$ by $x_0$. I have already shown that $x_h$ converges to $x_0$. I would furthermore like to obtain convergence rates, but so far I have only been able to derive one for the case where $x_0$ is strictly feasible.

My approach was to use the KKT conditions, i.e. $$ 0\in A_{(h)}^Te+2\lambda x+\mu_1^{(h)}\partial\Vert x \Vert_1+\mu_2^{(h)}\partial\Vert x \Vert_\infty,$$ $$\mu_1^{(h)}, \mu_2^{(h)}\geq0,\quad \mu_1^{(h)}(\Vert x \Vert_1-C_1)=0,\quad \mu_2^{(h)}(\Vert x \Vert_\infty-C_2)=0,$$ $$ \Vert x\Vert_1\leq C_1,\quad \Vert x\Vert_\infty\leq C_2.$$ I calculated the subdifferentials: $$\partial\Vert x \Vert_1=\{v:\Vert v\Vert_\infty\leq 1,\ v^Tx=\Vert x\Vert_1\},$$ $$\partial\Vert x \Vert_\infty=\{v:\Vert v\Vert_1\leq 1,\ v^Tx=\Vert x\Vert_\infty\}.$$ We have $\Vert A^Te-A_h^Te\Vert\leq h^2 \Vert e\Vert$. Therefore, when $x_0$ is strictly feasible, the convergence lets me assume that $x_h$ is strictly feasible as well for $h$ small enough. Hence $\mu_1^{(h)}=\mu_2^{(h)}=0$, and I can easily derive $\Vert x_0-x_h\Vert\leq C h^2$.
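Spelled out for the record (nothing new, just the computation behind the $Ch^2$ bound): with $\mu_1^{(h)}=\mu_2^{(h)}=0$ the stationarity conditions reduce to
$$A^Te+2\lambda x_0=0,\qquad A_h^Te+2\lambda x_h=0,$$
so subtracting and using $\Vert A^Te-A_h^Te\Vert\leq h^2\Vert e\Vert$ gives
$$\Vert x_0-x_h\Vert=\frac{\Vert (A_h-A)^Te\Vert}{2\lambda}\leq \frac{h^2\Vert e\Vert}{2\lambda},$$
i.e. $C=\Vert e\Vert/(2\lambda)$.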

Unfortunately, I can't come up with estimates in the other cases.

**EDIT:**
We can assume that $x_0\geq 0$ (componentwise); otherwise we can transform the coordinate system. And we can restrict ourselves to the case where $x_h$ lies in the same orthant as $x_0$, i.e. $x_h\geq 0$.
For $x=(x_1,\dots,x_m)\geq 0$ and $v\in \partial\Vert x\Vert_1$ we get $v_i=1$ when $x_i\neq0$ and $v_i\in [-1,1]$ otherwise.

If we now look at the fairly simple case $\Vert x_0\Vert_1=\Vert x_h\Vert_1=C_1$ and $\Vert x_0\Vert_\infty=\Vert x_h\Vert_\infty<C_2$ with $x_0,x_h>0$, then we know that $\partial\Vert x_0 \Vert_1=\partial\Vert x_h \Vert_1=\{(1,\dots,1)^T\}$. Hence $$0= A_{(h)}^Te+2\lambda x_{(h)}+\mu_1^{(h)}(1,\dots,1)^T.$$ But even in this simple case I don't know how to bound the convergence $\mu_1^{(h)}\rightarrow \mu_1$.
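To make the difficulty explicit (under the assumptions of this case, so that both subgradients equal $(1,\dots,1)^T$): subtracting the stationarity condition for $A$ from the one for $A_h$ gives
$$2\lambda(x_0-x_h)+(\mu_1-\mu_1^{(h)})(1,\dots,1)^T=(A_h-A)^Te,$$
where the right-hand side is $O(h^2)$, so a rate for the multiplier gap $\mu_1-\mu_1^{(h)}$ would immediately give a rate for $\Vert x_0-x_h\Vert$, and vice versa.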

**EDIT:**
I would furthermore like to add the condition $\sum_{i=1}^m x_i=0.$ Once again the same problem with the convergence of the multipliers arises. One can probably incorporate this condition into the problem without adding a new constraint. E.g. we can define the matrix $P$ by $x=P\tilde{x}$, where $\tilde{x}=(x_1,\dots,x_{m-1})^T$, and note that $\partial (f\circ P)(\tilde{x}) = P^T\partial f(P\tilde{x})$ for convex $f$. I'll figure this out in more detail tomorrow!
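For concreteness, one possible choice of $P$ (assuming the last coordinate is the one eliminated, i.e. $x_m=-\sum_{i=1}^{m-1}x_i$) would be
$$P=\begin{pmatrix}I_{m-1}\\ -\mathbf{1}^T\end{pmatrix}\in M_{m\times(m-1)}(\mathbb{R}),\qquad P\tilde{x}=\Big(x_1,\dots,x_{m-1},\,-\sum_{i=1}^{m-1}x_i\Big)^T,$$
so $\sum_{i=1}^m x_i=0$ holds automatically for every $\tilde{x}\in\mathbb{R}^{m-1}$.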

**EDIT:**
I made some progress and was able to solve the case where $\Vert x\Vert_\infty \leq C_2$ is active. But I had to assume that $(x_1,\dots,x_{m-1})\geq 0$. I'm unsure whether I can assume this w.l.o.g. because of the condition $\sum_{i=1}^m x_i=0.$

I appreciate any advice.