I am currently taking an introductory course in abstract algebra and am revisiting ideas from linear algebra so that I can better understand the examples.

When I was an undergraduate learning linear algebra, I thought of matrix manipulations as ways of solving $n \times n$ systems of equations. Recently I was exposed to the idea of a matrix being a linear transformation, and of matrix multiplication as composition of linear transformations. I'm trying to understand this more intuitively and was hoping for some insight...
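To convince myself that multiplication really is composition, I tried a quick numerical check (the matrices and the point here are just arbitrary examples I made up):

```python
import numpy as np

# Two arbitrary 2x2 matrices, thought of as linear maps on R^2.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([5.0, 7.0])  # an arbitrary point (x, y)

# Applying B first, then A to the result...
step_by_step = A @ (B @ v)

# ...matches applying the single matrix A @ B once.
composed = (A @ B) @ v

print(step_by_step)  # same vector both ways
print(composed)
```

So the product $AB$ really does seem to be "do $B$, then do $A$" packaged as one matrix.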

I was thinking of a basic $2\times2$ example and how it affects a point $(x,y)$. We could have a matrix
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
When we 'apply' (i.e. multiply) this matrix to a point $(x,y)$ using matrix multiplication, we get a new point with $x' = ax + by$ and $y' = cx + dy$.

So if $b = c = 0$, I can see that we are 'scaling' $x$ and $y$ (by $a$ and $d$ respectively). I'm guessing that if $b, c \neq 0$, this becomes some sort of rotation or reflection, but how do you understand this at a fundamental level?
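For example, taking $a = \cos\theta$, $b = -\sin\theta$, $c = \sin\theta$, $d = \cos\theta$ (the standard rotation matrix, if I remember correctly), the formulas above become
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{bmatrix},$$
which rotates $(x,y)$ counterclockwise by $\theta$. But I gather nonzero $b, c$ could also give something like a shear, so I'd like to understand what the off-diagonal entries "mean" in general.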

How do these operations relate to Gaussian elimination when we are trying to solve systems of equations? Or are these two separate applications of matrices?
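One connection I half-remember (please correct me if this is wrong) is that the row operations in Gaussian elimination can themselves be written as matrices. For instance, subtracting twice row 1 from row 2 seems to correspond to left-multiplying by an 'elementary' matrix:
$$\begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c - 2a & d - 2b \end{bmatrix},$$
so elimination would be a sequence of these transformations applied to the system. Is that the right way to see it?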

Another observation: when multiplying a matrix such as this one by a point, we get two expressions of the form $ax + by$, which remind me of Bézout's identity. Am I overanalyzing this, or can I draw a connection between these two concepts?

Thanks for any input!