Your question involves a fairly standard suite of arguments from the random matrix literature, so I will provide references you can use. I don't know of a short, elementary proof of the result that avoids the steps I describe below.

The first step is to reduce the degrees of freedom: any Hermitian matrix decomposes as $B=UDU^*$ with $U$ a unitary matrix and $D$ the diagonal matrix of eigenvalues, and for a unitarily invariant ensemble $U$ and $D$ are independent. The change of variables $B\rightarrow(U,D)$ can be worked out in detail and reads

$$\prod_{i\le j} d B_{ij} = \mathrm{VDM}(\lambda_1,\ldots,\lambda_n)^2 \prod_{i=1}^n d\lambda_i \, d\Omega $$

where $\mathrm{VDM}$ denotes the Vandermonde determinant of the eigenvalues $\lambda_i$ and $d\Omega$ is the Haar measure on the unitary group. The proof is not straightforward; details are provided here:

Lectures on Random Matrices
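As a quick numerical sanity check of the decomposition itself (not of the Jacobian computation), one can sample a random Hermitian matrix and verify $B=UDU^*$ with `numpy`; this is only an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Build a random Hermitian matrix B = (A + A*)/2 from complex Gaussian entries
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (A + A.conj().T) / 2

# Diagonalize: B = U D U* with U unitary and D the real eigenvalue matrix
eigvals, U = np.linalg.eigh(B)
D = np.diag(eigvals)

# Verify the decomposition and the unitarity of U
assert np.allclose(U @ D @ U.conj().T, B)
assert np.allclose(U.conj().T @ U, np.eye(n))
```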

Now the particular integral with Gaussian measure defines what is called the Gaussian Unitary Ensemble. Because the integrand is invariant under unitary conjugation, the angular variables can be integrated out, and the integral to be evaluated becomes

$$\int \prod_{i=1}^n d\lambda_i \exp\left(-\frac{N}{2}\sum_i\lambda_i^2\right)\prod_{i<j}(\lambda_i-\lambda_j)^2$$
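The weight $\exp(-\frac{N}{2}\sum_i\lambda_i^2)$ corresponds to GUE matrices with entries of variance $\sim 1/N$, whose spectrum concentrates on $[-2,2]$ (Wigner semicircle). A minimal numerical illustration, with that scaling assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500

# GUE normalization matching the weight exp(-N/2 Tr B^2):
# Hermitian B with off-diagonal entries of variance 1/N
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
B = (A + A.conj().T) / (2 * np.sqrt(N))

# The empirical spectrum is real and essentially confined to [-2, 2]
eigs = np.linalg.eigvalsh(B)
assert eigs.dtype == np.float64
assert np.abs(eigs).max() < 2.3
```

The bound 2.3 leaves room for the $O(N^{-2/3})$ edge fluctuations at this matrix size.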

The next step of the computation is to expand the Vandermonde determinants in the basis of polynomials orthogonal with respect to the Gaussian measure, i.e. Hermite polynomials in this case.

This is worked out in detail in the following two papers, where the expected value of the product of two Vandermonde determinants is explained:

Orthogonal polynomial ensembles in probability theory

Dimers and orthogonal polynomials: connections with random matrices
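The key identity behind that step can be illustrated numerically: since the probabilists' Hermite polynomials $He_j$ are monic of degree $j$, column operations turn the Vandermonde determinant into $\det[He_{j-1}(\lambda_i)]$. (For the weight $e^{-N\lambda^2/2}$ one would use suitably rescaled Hermite polynomials; the unscaled ones below just show the monic-determinant identity.) A sketch with `numpy`:

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

rng = np.random.default_rng(2)
n = 5
lam = rng.standard_normal(n)

# Vandermonde matrix [lam_i^{j-1}], determinant = prod_{i<j} (lam_j - lam_i)
V = np.vander(lam, increasing=True)

# Matrix of monic probabilists' Hermite polynomials He_{j-1}(lam_i)
H = np.column_stack([HermiteE.basis(j)(lam) for j in range(n)])

# Each He_j is monic of degree j, so the two determinants agree
assert np.isclose(np.linalg.det(V), np.linalg.det(H))
```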