Let $V \neq \{\mathbf{0}\}$ be an inner product space, and let $f:V \to V$ be a linear transformation on $V$.
I understand the definition$^1$ of the adjoint of $f$ (denoted by $f^*$), but I can't say I really grok this other linear transformation $f^*$.
For example, it is completely unexpected to me that $f^* = f^{-1}$ is equivalent to saying that $f$ preserves all distances and angles (as defined by the inner product on $V$).
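(To make this first equivalence concrete for myself, here is a toy numeric check of my own, assuming $V = \mathbb{R}^2$ with the dot product, where $f^*$ is represented by the transposed matrix: a rotation preserves the inner product, and its transpose is indeed its inverse.)

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, u):
    return [dot(row, u) for row in A]

# A rotation by theta: preserves all lengths and angles in R^2.
theta = 0.7
c, s = math.cos(theta), math.sin(theta)
R  = [[c, -s], [s,  c]]   # matrix of f
Rt = [[c,  s], [-s, c]]   # its transpose: matrix of f* in the standard basis

u, v = [3.0, 1.0], [-2.0, 4.0]

# f preserves the inner product ...
assert abs(dot(matvec(R, u), matvec(R, v)) - dot(u, v)) < 1e-12

# ... and, equivalently, f* = f^{-1}: applying R then R^T recovers u.
w = matvec(Rt, matvec(R, u))
assert all(abs(wi - ui) < 1e-12 for wi, ui in zip(w, u))
```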
It is even more surprising to me that $f^* = f$ is equivalent to saying that there exists an orthonormal basis for $V$ consisting entirely of eigenvectors of $f$.
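(Again, a toy check of my own for this second equivalence, assuming $V = \mathbb{R}^2$ with the dot product: a symmetric matrix, i.e. $f^* = f$ in the standard basis, does have an orthonormal eigenbasis.)

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, u):
    return [dot(row, u) for row in A]

# A symmetric matrix, i.e. f* = f when represented in the standard basis.
A = [[2.0, 1.0], [1.0, 2.0]]

r = 1 / math.sqrt(2)
e1 = [r,  r]   # eigenvector with eigenvalue 3
e2 = [r, -r]   # eigenvector with eigenvalue 1

# e1, e2 form an orthonormal basis ...
assert abs(dot(e1, e2)) < 1e-12
assert abs(dot(e1, e1) - 1.0) < 1e-12
assert abs(dot(e2, e2) - 1.0) < 1e-12

# ... consisting entirely of eigenvectors of f.
assert all(abs(x - 3 * y) < 1e-12 for x, y in zip(matvec(A, e1), e1))
assert all(abs(x - 1 * y) < 1e-12 for x, y in zip(matvec(A, e2), e2))
```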
Now, I can follow the proofs of these theorems perfectly well, but the exercise gives me no insight into the nature of the adjoint.
For example, I can visualize a linear transformation $f:V\to V$ whose eigenvectors are orthogonal and span the space, but this visualization tells me nothing about what $f^*$ should be like when this is the case, largely because I'm completely in the dark about the adjoint in general.
Similarly, I can visualize a linear transformation $f:V\to V$ that preserves lengths and angles, but, again, and for the same reason, this visualization tells me nothing about what this implies for $f^*$.
Is there a (coordinate-free, representation-agnostic) way to interpret the adjoint that will make theorems like the ones mentioned above less surprising?
$^1$ The adjoint of $f:V\to V$ is the unique linear transformation $f^*:V\to V$ (guaranteed to exist for every such linear transformation $f$) such that, for all $u, v \in V$,
$$ \langle f(u), v\rangle = \langle u, f^*(v)\rangle \,.$$
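(For what it's worth, here is the defining identity checked numerically in a toy case of my own, assuming $V = \mathbb{R}^2$ with the dot product, where the matrix of $f^*$ in the standard basis is the transpose of the matrix of $f$, since $\langle Au, v\rangle = (Au)^{\mathsf T}v = u^{\mathsf T}(A^{\mathsf T}v) = \langle u, A^{\mathsf T}v\rangle$.)

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, u):
    return [dot(row, u) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1.0, 2.0], [3.0, 4.0]]   # arbitrary matrix of f
u, v = [1.0, -1.0], [2.0, 5.0]

lhs = dot(matvec(A, u), v)              # <f(u), v>
rhs = dot(u, matvec(transpose(A), v))   # <u, f*(v)>
assert abs(lhs - rhs) < 1e-12           # the defining identity holds
```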