The nuclear norm is defined in the following way

$$\|X\|_*=\mathrm{tr} \left(\sqrt{X^T X} \right)$$

I'm trying to take the derivative of the nuclear norm with respect to its argument

$$\frac{\partial \|X\|_*}{\partial X}$$

Note that $\|X\|_*$ is a norm and is convex. I'm using this for some coordinate descent optimization algorithm. Thank you for your help.

Rodrigo de Azevedo
    Is there a reason you need the derivative? In a convex optimization setting, this function would likely be handled using a semidefinite transformation or perhaps a projection. – Michael Grant Mar 08 '14 at 02:47
  • I will add an answer describing these alternate approaches. – Michael Grant Mar 08 '14 at 15:27
  • You can get the proof from the reference Characterization of the subdifferential of some matrix norm, Linear Algebra Appl., 170(1992),pp.33-45. – askuyue Sep 03 '16 at 08:56

8 Answers


As I said in my comment, in a convex optimization setting, one would normally not use the derivative/subgradient of the nuclear norm function. It is, after all, nondifferentiable, and as such cannot be used in standard descent approaches (though I suspect some people have probably applied semismooth methods to it).

Here are two alternate approaches for "handling" the nuclear norm.

Semidefinite programming. We can use the following identity: the nuclear norm inequality $\|X\|_*\leq y$ is satisfied if and only if there exist symmetric matrices $W_1$, $W_2$ satisfying $$\begin{bmatrix} W_1 & X \\ X^T & W_2 \end{bmatrix} \succeq 0, ~ \mathop{\textrm{Tr}}W_1 + \mathop{\textrm{Tr}}W_2 \leq 2 y$$ Here, $\succeq 0$ should be interpreted to mean that the $2\times 2$ block matrix is positive semidefinite. Because of this transformation, you can handle nuclear norm minimization or upper bounds on the nuclear norm in any semidefinite programming setting. For instance, given some equality constraints $\mathcal{A}(X)=b$ where $\mathcal{A}$ is a linear operator, you could do this: $$\begin{array}{ll} \text{minimize} & \|X\|_* \\ \text{subject to} & \mathcal{A}(X)=b \end{array} \quad\Longleftrightarrow\quad \begin{array}{ll} \text{minimize} & \tfrac{1}{2}\left( \mathop{\textrm{Tr}}W_1 + \mathop{\textrm{Tr}}W_2 \right) \\ \text{subject to} & \begin{bmatrix} W_1 & X \\ X^T & W_2 \end{bmatrix} \succeq 0 \\ & \mathcal{A}(X)=b \end{array} $$ My software CVX uses this transformation to implement the function norm_nuc, but any semidefinite programming software can handle this. One downside to this method is that semidefinite programming can be expensive; and if $m\ll n$ or $n\ll m$, that expense is exacerbated, since the size of the linear matrix inequality is $(m+n)\times (m+n)$.
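This identity can be checked numerically (a sketch, not using CVX, just plain NumPy): for $X=U\Sigma V^T$, the choices $W_1=U\Sigma U^T$ and $W_2=V\Sigma V^T$ are feasible and achieve $\mathop{\textrm{Tr}}W_1 + \mathop{\textrm{Tr}}W_2 = 2\|X\|_*$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6
X = rng.standard_normal((m, n))

# Thin SVD of X
U, s, Vt = np.linalg.svd(X, full_matrices=False)
S = np.diag(s)

# Feasible W1, W2 achieving the trace bound with equality
W1 = U @ S @ U.T      # = sqrt(X X^T)
W2 = Vt.T @ S @ Vt    # = sqrt(X^T X)

# Assemble the block matrix [[W1, X], [X^T, W2]]
M = np.block([[W1, X], [X.T, W2]])

# It is PSD, because M = [U; V] Sigma [U; V]^T
assert np.linalg.eigvalsh(M).min() > -1e-10

# Tr W1 + Tr W2 = 2 ||X||_* holds with equality
assert np.isclose(np.trace(W1) + np.trace(W2), 2 * s.sum())
```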

Projected/proximal gradients. Consider the following related problems: $$\begin{array}{ll} \text{minimize} & \|\mathcal{A}(X)-b\|_2^2 \\ \text{subject to} & \|X\|_*\leq \delta \end{array} \quad $$ $$\text{minimize} ~~ \|\mathcal{A}(X)-b\|_2^2+\lambda\|X\|_*$$ Both of these problems trace out tradeoff curves: as $\delta$ or $\lambda$ is varied, you generate a tradeoff between $\|\mathcal{A}(X)-b\|$ and $\|X\|_*$. In a very real sense, these problems are equivalent: for a fixed value of $\delta$, there is going to be a corresponding value of $\lambda$ that yields the exact same value of $X$ (at least on the interior of the tradeoff curve). So it is worth considering these problems together.

The first of these problems can be solved using a projected gradient approach. This approach alternates between gradient steps on the smooth objective and projections back onto the feasible set $\|X\|_*\leq \delta$. The projection step requires being able to compute $$\mathop{\textrm{Proj}}(Y) = \mathop{\textrm{arg min}}_{\{X\,|\,\|X\|_*\leq\delta\}} \| X - Y \|$$ which can be done at about the cost of a single SVD plus some $O(n)$ operations.
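As an illustration of this projection step, here is a NumPy sketch (the names `proj_l1_ball` and `proj_nuclear_ball` are mine, not taken from TFOCS): compute an SVD, then project the vector of singular values onto $\{z\geq 0 : \sum_i z_i\leq\delta\}$.

```python
import numpy as np

def proj_l1_ball(s, delta):
    """Euclidean projection of a nonnegative vector s onto {z >= 0 : sum(z) <= delta}."""
    if s.sum() <= delta:
        return s.copy()
    u = np.sort(s)[::-1]                      # sort descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - delta) / k > 0)[0][-1]
    theta = (css[rho] - delta) / (rho + 1.0)  # soft-threshold level
    return np.maximum(s - theta, 0.0)

def proj_nuclear_ball(Y, delta):
    """Project Y onto the nuclear-norm ball {X : ||X||_* <= delta}."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(proj_l1_ball(s, delta)) @ Vt
```

The cost is dominated by the SVD; the singular-value projection itself is the classic sort-and-threshold simplex projection.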

The second model can be solved using a proximal gradient approach, which is very closely related to projected gradients. In this case, you alternate between taking gradient steps on the smooth portion, followed by an evaluation of the proximal function $$\mathop{\textrm{Prox}}(Y) = \mathop{\textrm{arg min}}_X \|X\|_* + \tfrac{1}{2}t^{-1}\|X-Y\|^2$$ where $t$ is a step size. This function can also be computed with a single SVD and some thresholding. It's actually easier to implement than the projection. For that reason, the proximal model is preferable to the projection model. When you have the choice, solve the easier model!
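The prox above is singular value soft-thresholding, and it fits in a few lines of NumPy (a sketch; here the step size $t$ is used directly as the shrinkage threshold):

```python
import numpy as np

def prox_nuclear(Y, t):
    """Prox of t*||.||_* : soft-threshold the singular values of Y by t."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# Example: for diag(3, 1) with threshold 2, the singular values 3 and 1
# shrink to 1 and 0.
Y = np.diag([3.0, 1.0])
assert np.allclose(prox_nuclear(Y, 2.0), np.diag([1.0, 0.0]), atol=1e-10)
```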

I would encourage you to do a literature search on proximal gradient methods, and nuclear norm problems in particular. There is actually quite a bit of work out there on this. For example, these lecture notes by Laurent El Ghaoui at Berkeley talk about the proximal gradient method and introduce the prox function for nuclear norms. My software TFOCS includes both the nuclear norm projection and the prox function. You do not have to use this software, but you could look at the implementations of prox_nuclear and proj_nuclear for some hints.

Michael Grant

Start with the singular value decomposition (SVD) of $x$:

$$x=U\Sigma V^T$$

Then $$\|x\|_*=tr(\sqrt{x^Tx})=tr(\sqrt{(U\Sigma V^T)^T(U\Sigma V^T)})$$

$$\Rightarrow \|x\|_*=tr(\sqrt{V\Sigma U^T U\Sigma V^T})=tr(\sqrt{V\Sigma^2V^T})$$

By circularity of trace:

$$\Rightarrow \|x\|_*=tr(\sqrt{V^TV\Sigma^2})=tr(\sqrt{\Sigma^2})=tr(\Sigma)$$

Since the elements of $\Sigma$ are non-negative.

Therefore the nuclear norm can also be defined as the sum of the absolute values of the singular values of the input matrix.

Now, note that the absolute value function is not differentiable on every point in its domain, but you can find a subgradient.

$$\frac{\partial \|x\|_*}{\partial x}=\frac{\partial tr(\Sigma)}{\partial x}=\frac{ tr(\partial\Sigma)}{\partial x}$$

You should find $\partial\Sigma$. Assuming $\Sigma$ is invertible (i.e. $x$ has full rank), we can insert the identity $\Sigma\Sigma^{-1}$ and write $\partial\Sigma=\Sigma\Sigma^{-1}\partial\Sigma$; now we have:

$$\frac{\partial \|x\|_*}{\partial x}=\frac{ tr(\Sigma\Sigma^{-1}\partial\Sigma)}{\partial x}$$ (I)

So we should find $\partial\Sigma$.

$x=U\Sigma V^T$, therefore: $$\partial x=\partial U\Sigma V^T+U\partial\Sigma V^T+U\Sigma\partial V^T$$


$$U\partial\Sigma V^T=\partial x-\partial U\Sigma V^T-U\Sigma\partial V^T$$

$$\Rightarrow U^TU\partial\Sigma V^TV=U^T\partial xV-U^T\partial U\Sigma V^TV-U^TU\Sigma\partial V^TV$$

$$\Rightarrow \partial\Sigma =U^T\partial xV-U^T\partial U\Sigma - \Sigma\partial V^TV$$

\begin{align} \Rightarrow tr(\partial\Sigma) &= tr(U^T\partial xV-U^T\partial U\Sigma - \Sigma\partial V^TV)\\ &= tr(U^T\partial xV)+tr(-U^T\partial U\Sigma - \Sigma\partial V^TV) \end{align}

You can show that $tr(-U^T\partial U\Sigma - \Sigma\partial V^TV)=0$, because $U^T\partial U$ and $\partial V^TV$ are antisymmetric and the trace of the product of an antisymmetric matrix with a diagonal matrix is zero (see the proof in the comments), therefore:

$$tr(\partial\Sigma) = tr(U^T\partial xV)$$

By substitution into (I):

$$\frac{\partial \|x\|_*}{\partial x}= \frac{ tr(\partial\Sigma)}{\partial x} =\frac{ tr(U^T\partial xV)}{\partial x}=\frac{ tr(VU^T\partial x)}{\partial x}=(VU^T)^T$$

Therefore you can use $U V^T$ as the subgradient.
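A quick numerical sanity check of this result (a sketch; at a generic square full-rank $x$ the nuclear norm is actually differentiable and the gradient equals $UV^T$): compare $UV^T$ against centered finite differences.

```python
import numpy as np

def nucnorm(X):
    """Nuclear norm: sum of singular values."""
    return np.linalg.svd(X, compute_uv=False).sum()

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))   # generically full rank
U, s, Vt = np.linalg.svd(X)
G = U @ Vt                        # claimed (sub)gradient

# Centered finite-difference gradient, entry by entry
h = 1e-5
G_fd = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[i, j] = h
        G_fd[i, j] = (nucnorm(X + E) - nucnorm(X - E)) / (2 * h)

assert np.allclose(G, G_fd, atol=1e-4)
```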

Alt
  • Alt, I am trying to understand why taking the nuclear norm is not differentiable. You said it is because of the absolute values, but as @Rodrigo de Avezedo pointed out, the Sigma is already non-negative. Given that there is no absolute value, why is it not differentiable? – The_Anomaly Jan 17 '18 at 19:40
  • @The_Anomaly, the singular values are non-negative, but we are taking the derivative with respect to the matrix $x$. For example, if $x$ is a $1\times 1$ matrix, then the nuclear norm is equivalent to the absolute value of $x$, which is non-differentiable. – Alt Jan 17 '18 at 20:50
  • I still cannot prove $-U^T\partial U\Sigma - \Sigma\partial V^TV=0$.. – olivia Jul 25 '19 at 14:46
  • How to show $-U^T\partial U\Sigma - \Sigma\partial V^TV=0$? I can only get its diagonal entries are zero...@Alt @The_Anomaly – olivia Jul 25 '19 at 15:32
  • @olivia: $V$ and $U$ are unitary matrices, i.e., $V^TV= \mathbf{I}$ ($\mathbf{I}$ is the identity matrix). Therefore $\partial(V^TV) = \partial \mathbf{I} = \mathbf{0} \Rightarrow \partial V^TV + V^T\partial V = \mathbf{0} \Rightarrow \partial V^TV = -(V^T\partial V) = -(\partial V^TV)^T$. Therefore $\partial V^TV$ is an antisymmetric matrix. If you multiply a diagonal matrix with an antisymmetric matrix the result will be the zero matrix. – Alt Jul 25 '19 at 17:51
  • @olivia "If you multiply a diagonal matrix with an antisymmetric matrix the result will be the zero matrix." (Please let us know if you have difficulty proving the latter part.) – Alt Jul 25 '19 at 17:57
  • @Alt Thanks. I only think $diag(U^T\partial U\Sigma)=0$ rather than $U^T\partial U\Sigma=0$. So $$\partial\Sigma =diag(U^T\partial xV)$$. However, this modification does not change the final result, which is lucky. – olivia Jul 26 '19 at 00:00
  • @Alt hi? please help to check my previous comments. – olivia Jul 29 '19 at 02:26
  • @olivia That's true! We only care about the trace (sum of the values on the diagonal) in this derivation. Look at the last line of the proof, at the end we substitute things in the trace function, which is a linear operator. I'll fix and clarify it now. – Alt Jul 29 '19 at 22:03
  • @Alt please see Eq.(17) of https://j-towns.github.io/papers/svd-derivative.pdf. Do you think $\partial\Sigma=diag(\partial\Sigma)$? – olivia Jul 30 '19 at 03:40

You can use this nice result for the differential of the trace $$ \eqalign { d\,\mathrm{tr}(f(A)) &= f'(A^T):dA \cr } $$ to write $$ \eqalign { d\,\mathrm{tr}((x^Tx)^{\frac {1} {2}}) &= \frac {1} {2} (x^Tx)^{-\frac {1} {2}}:d(x^Tx) \cr &= \frac {1} {2} (x^Tx)^{-\frac {1} {2}}:(dx^T x + x^T dx) \cr &= x(x^Tx)^{-\frac {1} {2}}: dx \cr } $$ Yielding the derivative as $$ \eqalign { \frac {\partial\|x\|_*} {\partial x} &= x(x^Tx)^{-\frac {1} {2}} \cr } $$ Another nice result (this one's from Higham) $$ \eqalign { A\,f(B\,A) &= f(A\,B)\,A \cr } $$ yields an alternative expression with (potentially) smaller dimensions $$ \eqalign { \frac {\partial\|x\|_*} {\partial x} &= (x\,x^T)^{-\frac {1} {2}}x \cr } $$

While the square root of $x^Tx$ certainly exists, the inverse may not. So you might need some sort of regularization, e.g. $$ \eqalign { \frac {\partial\|x\|_*} {\partial x} &= x(x^Tx+\varepsilon I)^{-\frac {1} {2}} \cr } $$
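Here is a numerical sketch of the agreement between this formula and the SVD-based expression $UV^T$ from the other answers (for a tall, full-column-rank $x$, where $(x^Tx)^{-1/2}$ exists):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((6, 3))   # tall, generically full column rank

# Inverse square root of x^T x via its eigendecomposition
w, Q = np.linalg.eigh(x.T @ x)
inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T

G = x @ inv_sqrt                  # the gradient formula x (x^T x)^{-1/2}

# It coincides with U V^T from the thin SVD of x
U, s, Vt = np.linalg.svd(x, full_matrices=False)
assert np.allclose(G, U @ Vt)
```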

greg
    I believe the colon notation represents the Frobenius product [http://en.wikipedia.org/wiki/Matrix_multiplication#Frobenius_product]. – lynne Oct 21 '14 at 00:47

Of course, $n:x\in M_{n,p}\rightarrow tr(\sqrt{x^Tx})$ is differentiable at any $x$ s.t. $x^Tx$ is invertible, that is, in the generic case when $n\geq p$ (if $n\leq p$, then consider $tr(\sqrt{xx^T})$). Greg's result is correct; yet, his proof is unclear, and I rewrite it for convenience.

If $A$ is symmetric $>0$, then $f:A\rightarrow \sqrt{A}$ is a matrix function (cf. Higham's book on this subject); if $g$ is a matrix function and $\phi:A\rightarrow tr(g(A))$, then its derivative is $D\phi_A:K\rightarrow tr(g'(A)K)$. Let $A=x^Tx$. Thus $Dn_x:H\rightarrow tr(f'(A)(H^Tx+x^TH))=tr((f'(A)^Tx^T+f'(A)x^T)H)$. Then the gradient of $n$ is $\nabla(n)(x)=x(f'(A)+f'(A)^T)=2xf'(A)=x(x^Tx)^{-1/2}$.

As Alt did, we can use the SVD, and we find $\nabla(n)(x)=U\Sigma (\Sigma^T\Sigma)^{-1/2}V^T$ ($=UV^T$ if $n=p$). Note, in reply to Alt, that the diagonal of $\Sigma$ is $\geq 0$.


Short answer

The nuclear norm has subgradients (with respect to its argument). You may use $UV^\top$ in your algorithm if you need one.

See https://math.stackexchange.com/a/1016743/351390 for where it is actually differentiable. You should also see the comment by loup blanc below.


The elements in $\partial\|X\|_*$ can be characterized as a sum of two parts: Let $X = U \Sigma V^\top$ be a (skinny) singular value decomposition, then

$$Y \in \partial \|X\|_* \quad \Leftrightarrow \quad Y = UV^\top + W \text{ for some } W \in T^\perp$$

where the definition of the subspace (of matrices) $T$ is a bit complicated: $$T :=\text{span}(\{x v_j^\top| j \in \{1, \ldots, m\}; x \text{ is any vector}\} \cup \{u_i y^\top| i \in \{1, \ldots, n\}; y \text{ is any vector}\})$$ However, you don't need to pay too much attention to it, because obviously $0$ is an element of $T^\perp$, therefore at least $$UV^\top \in \partial\|X\|_*$$

For proof, as commented by askuyue, see https://www.sciencedirect.com/science/article/pii/0024379592904072
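One can also check numerically that $UV^\top$ satisfies the subgradient inequality $\|Y\|_* \geq \|X\|_* + \langle UV^\top, Y-X\rangle$ for arbitrary $Y$ (a sketch; the inner product is the trace/Frobenius one):

```python
import numpy as np

def nucnorm(A):
    """Nuclear norm: sum of singular values."""
    return np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
G = U @ Vt                        # candidate subgradient

# Subgradient inequality: ||Y||_* >= ||X||_* + <G, Y - X> for all Y
for _ in range(100):
    Y = rng.standard_normal((5, 3))
    assert nucnorm(Y) >= nucnorm(X) + np.sum(G * (Y - X)) - 1e-10
```

This works because $\|UV^\top\|_2 = 1$ (the dual norm of the nuclear norm is the spectral norm) and $\langle UV^\top, X\rangle = \|X\|_*$.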

diadochos
  • Welcome to MSE. This should be a comment, not a answer. – José Carlos Santos Apr 08 '18 at 11:06
  • @JoséCarlosSantos Sorry, I was not looking at your comment while I was editing the answer to elaborate. Please advise me whether this still should be moved. – diadochos Apr 08 '18 at 11:18
  • Now it looks fine. – José Carlos Santos Apr 08 '18 at 11:37
  • When you write "nuclear norm is not differentiable", you copy what you read elsewhere; unfortunately (for you) it's false or at least very incomplete. Indeed, the nuclear norm is differentiable in a neighborhood of any $X$ s.t. $X^TX$ is invertible (see my post in this file). Moreover, the nuclear norm is differentiable on $C^1$ arcs $t\rightarrow X(t)$ s.t. $rank(X(t))$ is constant (possibly $ –  Apr 08 '18 at 18:03
  • I see. I'm sorry I was very thoughtless, and thank you very much for your guidance. I will edit the post. – diadochos Apr 10 '18 at 06:15
  • No problem. Thanks. –  Apr 11 '18 at 10:14

Alt's answer has a fundamental error. First of all, the nuclear norm is the sum of the singular values, not the sum of their absolute values (the singular values are already non-negative).

To make it right, we first need to define the matrix square root: for a symmetric positive semidefinite matrix $B$, $\sqrt{BB}=B$. As Alt showed,

$$\|x\|_*=tr(\sqrt{x^Tx})=tr(\sqrt{V\Sigma^2V^T})$$

But we cannot apply the circularity of trace inside the square root here, because it is not well defined there.

We should do something like this,

$||x||_*=tr(\sqrt{V\Sigma^2V^T})=tr(\sqrt{V\Sigma V^TV\Sigma V^T})=tr(V\Sigma V^T)$,

the last equality is based on the definition of the square root for matrix described above. Then by the circularity of trace, we get

$tr(V\Sigma V^T)=tr(\Sigma V^TV)=tr(\Sigma)=\sum_i \sigma_i$.
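This identity is easy to confirm numerically: build $\sqrt{x^Tx}$ from an eigendecomposition of the positive semidefinite matrix $x^Tx$ and compare its trace with the sum of the singular values (a sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal((5, 4))

# sqrt(x^T x) via eigendecomposition of the PSD matrix x^T x
w, Q = np.linalg.eigh(x.T @ x)
sqrt_xtx = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

# tr(sqrt(x^T x)) equals the sum of the singular values of x
sigma = np.linalg.svd(x, compute_uv=False)
assert np.isclose(np.trace(sqrt_xtx), sigma.sum())
```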

  • Does this mean it can be negative? For real numbers we often define $\sqrt{x^2}:=|x|$. What is the equivalent for matrices? – Harsh Aug 21 '18 at 16:08

What about $|| M ||_{F} = \sqrt{\mathrm{Trace}(M^{T}M)}$?

Davide Giraudo

The challenge in calculating the gradient of $||X||_{*}$ comes from the non-smoothness of computing the singular values of $X$.

Thus, I usually first transform $X$ into a symmetric positive semi-definite matrix $\hat{X}$, and then we have $||\hat{X}||_{*}=tr(\hat{X})$. The intuition is that the eigenvalues are equal to the singular values when $\hat{X}$ is a symmetric positive semi-definite matrix, and $tr(\hat{X})=\sum \text{eigenvalues}$.

Finally, we have $\frac{\partial ||\hat{X}||_{*}}{\partial \hat{X}}=\frac{\partial tr(\hat{X})}{\partial \hat{X}}=I$.

I am not sure whether it is helpful to you.
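The claim that $||\hat{X}||_{*}=tr(\hat{X})$ for symmetric positive semi-definite $\hat{X}$ is easy to check numerically (a sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
X_hat = A @ A.T                   # symmetric positive semi-definite

# For PSD matrices the singular values equal the eigenvalues,
# so the nuclear norm reduces to the trace.
sigma = np.linalg.svd(X_hat, compute_uv=False)
assert np.isclose(sigma.sum(), np.trace(X_hat))
```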