While working on a research problem involving approximations in neural networks, I ran into the following question, which I have not been able to solve after trying several approaches.

Let's say we have a matrix $A \in \mathbb{R}^{n \times n}$ whose entries are the random variables $a_{ij} = \sum_{k=1}^m x_{ik} x_{jk}$, where each $x_{ab} \sim N(0, 1)$ and the $x_{ab}$ are mutually independent (two occurrences of $x$ with the same pair of indices refer to the same random variable).
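In other words, $A = X X^\top$ where $X$ is the $n \times m$ matrix with entries $x_{ab}$. A minimal numpy sketch of the construction (the sizes are my own illustrative choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3  # arbitrary illustrative sizes

# X holds the i.i.d. x_{ab} ~ N(0, 1); then a_{ij} = sum_k x_{ik} x_{jk}
# is exactly the (i, j) entry of X X^T.
X = rng.standard_normal((n, m))
A = X @ X.T

# Sanity check: A[i, j] agrees with the defining sum.
i, j = 1, 2
assert np.isclose(A[i, j], sum(X[i, k] * X[j, k] for k in range(m)))
```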

The question is: can we bound the spectral norm of the matrix $A$? A simpler version of the question, which is just as valuable to me, is whether we can bound the spectral norm of $A$ after zeroing out its diagonal entries. I can see why the latter version might be simpler: on the diagonal, $a_{ii} = \sum_{k=1}^m x_{ik}^2$, so the two factors in each product are the same random variable rather than independent ones. But I'd definitely be happier to know the general solution to this problem.
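To make the diagonal-zeroed variant concrete, here is a small numerical sketch (sizes again arbitrary) that forms both versions of the matrix and their spectral norms:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 50  # arbitrary illustrative sizes

X = rng.standard_normal((n, m))
A = X @ X.T

# On the diagonal, a_{ii} = sum_k x_{ik}^2 is chi-squared with m
# degrees of freedom, so it has mean m rather than mean 0.
A_off = A.copy()
np.fill_diagonal(A_off, 0.0)

spec_full = np.linalg.norm(A, 2)     # spectral norm of A
spec_off = np.linalg.norm(A_off, 2)  # spectral norm with the diagonal zeroed
```

Note that the two quantities can differ by at most $\max_i a_{ii}$ (triangle inequality, since the diagonal part is itself a matrix with spectral norm $\max_i a_{ii}$).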

P.S.: I know that we can trivially bound this value by $\infty$, but I would love to know whether a tight bound is already available. In particular, I am interested in bounds that depend on $m$, but any insight would be helpful and is much appreciated.

I know that the product of two independent standard normal random variables follows a form of the K-distribution, as mentioned here, but since this distribution is not sub-Gaussian, I couldn't use the available results on the spectral norm of random matrices with sub-Gaussian entries. The other path I've tried is bounding the spectral norm by the Frobenius norm; but while each product $x_{ik} x_{jk}$ with $i \neq j$ has mean zero, the squared terms $x_{ab}^2$ do not, and things got a bit messy because I couldn't come up with a concentration inequality to finish this approach.
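For what it's worth, a quick numerical comparison of the two norms (sizes arbitrary). The prediction line uses the standard fact that the largest singular value of an $n \times m$ standard Gaussian matrix concentrates near $\sqrt{n} + \sqrt{m}$; whether that is the right tool here is exactly part of my question:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 50  # arbitrary illustrative sizes

X = rng.standard_normal((n, m))
A = X @ X.T

spec = np.linalg.norm(A, 2)      # spectral norm ||A||_2
frob = np.linalg.norm(A, "fro")  # Frobenius norm, always >= ||A||_2

# Since A = X X^T, ||A||_2 = s_max(X)^2, and for Gaussian X the
# largest singular value s_max(X) concentrates near sqrt(n) + sqrt(m),
# suggesting ||A||_2 is on the order of (sqrt(n) + sqrt(m))^2.
pred = (np.sqrt(n) + np.sqrt(m)) ** 2
```

In my experiments the Frobenius bound is quite loose compared to the spectral norm itself, which is why I was hoping for something sharper.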

Thanks a lot!