I will start with the 1-dimensional case. In the question you linked to, they have the identity:

$$
N(\mu_1, \sigma_1^2) \times N(\mu_2, \sigma_2^2) \propto N \left( \frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1}{\sigma_1^2 + \sigma_2^2}, \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}} \right)
$$

where the multiplication is of PDFs, *not* random variables.
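As a quick numerical sanity check (my own sketch, not from the linked question), we can multiply the two PDFs on a grid, renormalize, and compare against the Gaussian with the claimed parameters:

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Univariate normal density with mean mu and variance var."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Arbitrary example parameters
mu1, var1 = 1.0, 2.0
mu2, var2 = -0.5, 0.5

x = np.linspace(-10, 10, 200001)
prod = norm_pdf(x, mu1, var1) * norm_pdf(x, mu2, var2)
prod /= np.trapz(prod, x)  # renormalize: the product is only *proportional* to a PDF

# Parameters predicted by the identity
var_new = 1.0 / (1.0 / var1 + 1.0 / var2)
mu_new = (var1 * mu2 + var2 * mu1) / (var1 + var2)

assert np.allclose(prod, norm_pdf(x, mu_new, var_new), atol=1e-6)
```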

Introduce the alternative parameters $\lambda_i = 1/\sigma_i^2$ and $\xi_i = \mu_i / \sigma_i^2$. We can recover the original parameters as $\mu_i = \xi_i / \lambda_i$ and $\sigma^2_i = 1/\lambda_i$. This parameterization is called the *canonical* or *information* form. Note that:

$$
N \left( \frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1}{\sigma_1^2 + \sigma_2^2}, \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}} \right) = N \left( \frac{\xi_1 + \xi_2}{\lambda_1 + \lambda_2}, \frac{1}{\lambda_1 + \lambda_2} \right)
$$

Hence the new canonical parameters are simply $\lambda' = \lambda_1 + \lambda_2, \xi' = \xi_1 + \xi_2$. Since $\lambda \in \mathbb{R}_{>0}$ and $\xi \in \mathbb{R}$, I suppose that in the 1D case the structure you want is the Cartesian product $\mathbb{R}_{>0} \times \mathbb{R}$ under componentwise addition. Because $0 \notin \mathbb{R}_{>0}$, there is no additive identity, so this is a commutative semigroup rather than a monoid.
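To make the semigroup structure concrete, here is a small sketch (helper names are my own) showing that PDF multiplication in canonical form is just componentwise addition, which is associative and commutative:

```python
import numpy as np

def to_canonical(mu, var):
    """(mu, sigma^2) -> (lambda, xi) information-form parameters."""
    lam = 1.0 / var
    return lam, mu * lam

def from_canonical(lam, xi):
    """(lambda, xi) -> (mu, sigma^2)."""
    return xi / lam, 1.0 / lam

def combine(p, q):
    """PDF multiplication in canonical form: componentwise addition."""
    return p[0] + q[0], p[1] + q[1]

a = to_canonical(1.0, 2.0)
b = to_canonical(-0.5, 0.5)
c = to_canonical(3.0, 1.5)

# Semigroup properties: associative and commutative
assert np.allclose(combine(combine(a, b), c), combine(a, combine(b, c)))
assert np.allclose(combine(a, b), combine(b, a))

# Recovered moment parameters match the direct formula
mu_new, var_new = from_canonical(*combine(a, b))
assert np.isclose(var_new, 1.0 / (1.0 / 2.0 + 1.0 / 0.5))
assert np.isclose(mu_new, (2.0 * -0.5 + 0.5 * 1.0) / (2.0 + 0.5))
```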

In multiple dimensions, the canonical form is given by $\boldsymbol{\Lambda} = \boldsymbol{\Sigma}^{-1}, \boldsymbol{\xi} = \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}$. The same addition rule holds ($\boldsymbol{\Lambda}' = \boldsymbol{\Lambda}_1 + \boldsymbol{\Lambda}_2$, $\boldsymbol{\xi}' = \boldsymbol{\xi}_1 + \boldsymbol{\xi}_2$), and you would want to restrict $\boldsymbol{\Sigma}$, or equivalently $\boldsymbol{\Lambda}$, to be symmetric positive definite to avoid degeneracy. Conveniently, the set of symmetric positive-definite matrices is closed under addition.

Source: Kevin P. Murphy, *Machine Learning*, 2nd ed., section 4.3.3.

Update: I have changed the $=$ in the first equation to $\propto$ because there is a proportionality constant: https://www.johndcook.com/blog/2012/10/29/product-of-normal-pdfs/. Murphy and the linked question both overlook this.
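The constant itself has a closed form: it is the density of $N(\mu_2, \sigma_1^2 + \sigma_2^2)$ evaluated at $\mu_1$ (this follows from the Gaussian convolution identity; the linked post derives it). A numerical check of my sketch of that claim:

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Univariate normal density with mean mu and variance var."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

mu1, var1 = 1.0, 2.0
mu2, var2 = -0.5, 0.5

x = np.linspace(-15, 15, 300001)
unnormalized = norm_pdf(x, mu1, var1) * norm_pdf(x, mu2, var2)

# The proportionality constant is the integral of the unnormalized product,
# which should equal the density of N(mu2, var1 + var2) evaluated at mu1
constant = np.trapz(unnormalized, x)
assert np.isclose(constant, norm_pdf(mu1, mu2, var1 + var2))
```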