Is there a way of adding two vectors in polar form without first having to convert them to cartesian or complex form?

You can do it by solving triangles (more specifically, half of the parallelogram in the parallelogram rule can be solved as SAS) if you really want to. But in the general case, converting to Cartesian is the _easy_ way through. (Especially once you move to 3D!) – hmakholm left over Monica Jul 18 '15 at 14:37

@HenningMakholm I found this question quite interesting, actually, especially when it appeared that (in 2D) it might be more computationally efficient not to use the standard conversion to Cartesian coordinates when adding vectors in polar form. I may want to run some benchmarks on some code I work with. – David K Jul 18 '15 at 16:13

Essentially no. You will find alternative formulas that look different, but they actually boil down to conversions to and fro, possibly via trigonometric identities. The global computational cost will be similar. Why this question? – Jul 18 '15 at 17:02

I am using polar coordinates for sprite movement, and wish to change trajectories by simply adding vectors. I was thinking maybe there was a way to achieve less overhead by avoiding the conversion to and from Cartesian form. – lash Jul 18 '15 at 17:39

@YvesDaoust I think what we're discovering (see Henning Makholm's answer, or Dr. MV's) is that indeed you can add the vectors without converting anything to Cartesian coordinates, not even implicitly. You can do it with just one cosine, a square root, and an arc cosine, where the cosine could be viewed as _one_ Cartesian coordinate in a rotated system, and that's as close as these methods come to doing any Cartesian conversion. Depending on the application, this might be hard for Cartesian conversion to beat even with clever amortization of costs. – David K Jul 19 '15 at 00:29

@DavidK: things are like I said. Your formulas are those obtained by addition in Cartesian coordinates, up to the application of trigonometric identities. See my answer. I agree that this formulation is much more efficient in terms of trigonometric function evaluation and should be preferred. – Jul 19 '15 at 09:01

@YvesDaoust Your answer is along the lines of my "hybrid" answer, so yes, there is effectively a conversion to (rotated) Cartesian coordinates there. You could even argue that these Cartesian coordinates can be found in my first method. But in the other methods I mentioned (not mine), I see no coordinate system in which all four components (of two vectors) are actually computed. I can only find at most three, so the conversion is never completed. If you can find all four components in one of those answers, it might make a useful comment on the answer to explain where they are to be found. – David K Jul 19 '15 at 09:47

@DavidK: I won't try to argue further. – Jul 19 '15 at 09:55

@YvesDaoust OK. For what it's worth, because of your arguments (which I found had merit with respect to _both_ of my own "polar" methods), I have revised the claims I made in my own answer. – David K Jul 19 '15 at 13:07

@lash Please feel free to up vote and accept an answer as you see fit of course. – Mark Viola May 08 '19 at 04:22
5 Answers
Here is another way forward that relies on straightforward vector algebra. Let $\vec r_1$ and $\vec r_2$ denote vectors with magnitudes $r_1$ and $r_2$, respectively, and with angles $\phi_1$ and $\phi_2$, respectively.
Let $\vec r$ be the vector with magnitude $r$ and angle $\phi$ that denotes the sum of $\vec r_1$ and $\vec r_2$. Thus,
$$\vec r=\vec r_1+\vec r_2 \tag 1$$
From the definition of the inner product we have
$$\vec r_1\cdot \vec r_2=r_1r_2\cos(\phi_2-\phi_1) \tag 2$$
and
$$\vec r_1\cdot \vec r=r_1r\cos(\phi-\phi_1)\tag 3$$
Using $(1)$ and $(2)$, we find $r^2$ can be written
$$\begin{align} r^2&=\vec r\cdot \vec r\\\\ &=(\vec r_1+\vec r_2)\cdot (\vec r_1+\vec r_2)\\\\ &=\vec r_1\cdot \vec r_1+\vec r_2\cdot \vec r_2+2\vec r_1\cdot \vec r_2\\\\ &=r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1) \end{align}$$
and thus $r$ is given by
$$\bbox[5px,border:2px solid #C0A000]{r=\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}}\tag 4$$
Using $(1)$, $(3)$, and $(4)$, yields
$$\begin{align} \vec r_1\cdot \vec r&=r_1r\cos(\phi-\phi_1)\\\\ &=\vec r_1\cdot (\vec r_1+\vec r_2)\\\\ &=r_1^2+r_1r_2\cos(\phi_2-\phi_1)\\\\ \end{align}$$
whereupon solving for $\cos (\phi-\phi_1)$ reveals
$$\cos(\phi-\phi_1)=\frac{r_1+r_2\cos(\phi_2-\phi_1)}{\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}}\tag 5$$
We can easily obtain the expression for $\sin(\phi-\phi_1)$ by applying the cross product
$$\hat z\cdot(\vec r_1 \times \vec r)=\hat z\cdot(\vec r_1 \times \vec r_2)$$
which after straightforward arithmetic yields
$$\sin(\phi-\phi_1)=\frac{r_2\sin(\phi_2-\phi_1)}{\sqrt{r_1^2+r_2^2+2r_1r_2\cos(\phi_2-\phi_1)}} \tag 6$$
Dividing $(6)$ by $(5)$ and inverting shows that
$$\bbox[5px,border:2px solid #C0A000]{\phi =\phi_1+\operatorname{arctan2}\left(r_2\sin(\phi_2-\phi_1),r_1+r_2\cos(\phi_2-\phi_1)\right)} \tag 7$$
where the function $\operatorname{arctan2}(y,x)$ is described in this article.
Equations $(4)$ and $(7)$ provide the polar coordinates of $\vec r$ strictly in terms of the polar coordinates of $\vec r_1$ and $\vec r_2$. And the development of $(4)$, $(5)$, and $(6)$ did not appeal to Cartesian coordinates.
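For readers who want to drop $(4)$ and $(7)$ directly into code, here is a minimal Python sketch (the function name and the radians convention are assumptions of mine, not part of the derivation):

```python
import math

def add_polar(r1, phi1, r2, phi2):
    """Sum two polar vectors (r1, phi1) + (r2, phi2) using equations (4) and (7).

    Angles are in radians; the returned angle is phi1 plus an offset in (-pi, pi].
    """
    d = phi2 - phi1                                                # phi_2 - phi_1
    r = math.sqrt(r1*r1 + r2*r2 + 2.0*r1*r2*math.cos(d))           # equation (4)
    phi = phi1 + math.atan2(r2*math.sin(d), r1 + r2*math.cos(d))   # equation (7)
    return r, phi
```

As a quick check, `add_polar(1, 0, 1, -math.pi/2)` returns a magnitude of $\sqrt 2$ and an angle of $-\pi/4$, the case discussed in the comments below.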
NOTE:
In a parallel development, we can express the sum of two complex numbers $z_1=r_1e^{i\phi_1}$ and $z_2=r_2e^{i\phi_2}$ in terms of their magnitudes and arguments.
First, recall that the inner product of two complex numbers is given by
$$\begin{align} \langle z_1,z_2 \rangle &=z_1 \bar z_2\\\\ &=r_1r_2e^{i(\phi_1-\phi_2)} \end{align}$$
where $\bar z$ denotes the complex conjugate of $z$.
Next, we let $z=re^{i\phi}=z_1+z_2$ be the sum of $z_1$ and $z_2$. The magnitude of $z$ is given by
$$\begin{align} r&=\sqrt{\langle z,z \rangle}\\\\ &=\sqrt{\langle z_1+z_2,z_1+z_2 \rangle}\\\\ &=\sqrt{r_1^2+r_2^2+r_1r_2\left(e^{i(\phi_1-\phi_2)}+e^{-i(\phi_1-\phi_2)}\right)}\\\\ &=\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)} \end{align}$$
Therefore, we have
$$\bbox[5px,border:2px solid #C0A000]{r=\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}} \tag 8$$
Finally, we find the argument of $z$ by taking the inner product of $z$ and $z_1$. To that end, we write
$$\begin{align} \langle z,z_1 \rangle &=rr_1e^{i(\phi-\phi_1)}\\\\ &=\langle z_1+z_2,z_1 \rangle \\\\ &=r_1^2+r_1 r_2 e^{i(\phi_2-\phi_1)} \end{align}$$
which reveals that
$$e^{i(\phi-\phi_1)}=\frac{r_1+r_2e^{i(\phi_2-\phi_1)}}{\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}} \tag 9$$
whereupon inverting yields
$$\bbox[5px,border:2px solid #C0A000]{\phi =\phi_1+\operatorname{arctan2}\left(r_2\sin(\phi_2-\phi_1),r_1+r_2\cos(\phi_2-\phi_1)\right)} $$
Equations $(8)$ and $(9)$ provide the polar coordinates of $z$ strictly in terms of the polar coordinates of $z_1$ and $z_2$. Again, this development did not appeal to Cartesian coordinates.
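If one only wants a numerical cross-check of $(8)$ and the boxed argument formula, Python's standard `cmath` module does the complex-number sum directly (this of course goes through rectangular form internally, so it is a check rather than a polar-only computation; the function name is mine):

```python
import cmath

def add_polar_complex(r1, phi1, r2, phi2):
    """Reference check: sum z1 + z2 as complex numbers and read off (r, phi)."""
    z = cmath.rect(r1, phi1) + cmath.rect(r2, phi2)   # z = z1 + z2
    return abs(z), cmath.phase(z)                     # magnitude and argument of z
```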

Given that both complex numbers and 2D vectors have essentially the same polar representation, I suspect there's an even faster way to show the same formula applies. But the formula itself is very interesting; I look forward to testing its performance when I have the time. – David K Jul 19 '15 at 00:40

@davidk This development can be generalized using inner product space notation. Let me know how your testing goes please. – Mark Viola Jul 19 '15 at 01:46

Perhaps I'm not understanding correctly, but I believe I've found an issue with equation (9). Consider the case of r1 (1 @ 0 degrees) and r2 (1 at 90 degrees). The resulting r should have a magnitude of sqrt(2) and an angle of 45 degrees, which is computed correctly using equations (7) and (9). Next, consider the case of r1 (1 @ 0 degrees) and r2 (1 at −90 degrees). The resulting r should have a magnitude of sqrt(2) and an angle of −45 degrees. However, equation (9) yields a value of +45 degrees rather than −45 degrees. Am I missing something? – Thomas M Apr 29 '16 at 15:54

@ThomasM You're not missing anything. The principal branch of the arccosine is typically defined so that $0\le \arccos(\theta)\le \pi$. In the spirit of this development, in going from $(5)$ to $(6)$ and from $(8)$ to $(9)$ there was a tacit assumption that the branch of the arccosine is judiciously selected by the user given $\phi_1$ and $\phi_2$. In the example for which $\phi_1=0$ and $\phi_2=-\pi/2$, we know that $-\pi/2 \le \phi \le 0$. Then, with $\cos(\phi-\phi_1)=1/\sqrt 2$, we know to select the branch of the arccosine for which $\phi<0$. Thus, $\phi=-\pi/4$ as expected. Mark – Mark Viola Apr 29 '16 at 18:16

@ThomasM If you believe an edit is in order, I am happy to oblige. Thank you in any case for your very useful comment! +1 Mark – Mark Viola Apr 29 '16 at 18:16

@DrMV Thanks for the clarification! I think that, at a minimum, it would be good to include the assumption in your original answer. However, including the specific details on how to utilize equations (6) and (9) for all possible input values (i.e. −π to π) would provide a truly complete answer. – Thomas M Apr 29 '16 at 18:53

@ThomasM Thomas, I've edited to clarify things better. Please let me know how I can improve this answer. I really want to give the best answer I can. Mark – Mark Viola Apr 29 '16 at 21:37

So safe to say that when doing addition by hand, do it in Cartesian or you will be miserable :) – neuronet Nov 06 '16 at 17:42

For the sake of completeness, now this answer gives correct result for the case mentioned by @ThomasM after DrMV replaced `arccos` with `arctan2` in the answer. – Antony Hatchkins Mar 18 '17 at 07:34

@AntonyHatchkins Is there a reason for the comment. I edited this almost 2 years ago. Mark – Mark Viola Mar 18 '17 at 18:35

And I spent a quarter of an hour trying to understand how Thomas's comment applies to your updated answer (it doesn't apply). When I edit my answers I usually mark edits with an 'Update:' subheader to avoid ambiguity. – Antony Hatchkins Mar 19 '17 at 11:30

Is this more or less computationally efficient than converting to Cartesian? For a computer program. – Aaron Franke Sep 24 '18 at 08:36

@AaronFranke One would have to count the number of operations required of both methodologies. But the question asked by the OP didn't address the efficiency issue. – Mark Viola Sep 24 '18 at 13:50

@lash Please feel free to up vote and accept an answer as you see fit of course. – Mark Viola Jun 24 '19 at 14:04
Here is a method using polar coordinates in a plane. (I do not think I want to attempt this in spherical coordinates or in any higher dimension.)
Given: a vector $v_1$ at angle $\theta_1$, of length $r_1$; another vector $v_2$ at angle $\theta_2$, of length $r_2$.
To find: the vector $v_3$ at angle $\theta_3$, of length $r_3$ such that $v_3 = v_1 + v_2$.
Procedure: find the difference between the angles $\theta_2$ and $\theta_1$, mapped to an equivalent angle of magnitude no greater than $\pi$ (using radian measure of angles; if you prefer to work in degrees, substitute $180$ wherever you see $\pi$). That is, set $$\alpha = \theta_2 - \theta_1 + 2n\pi$$ where $n$ is an integer chosen so that $-\pi \leq \alpha \leq \pi$. Then, using the Law of Cosines, set $$r_3 = \sqrt{r_1^2 + r_2^2 + 2 r_1 r_2 \cos\alpha}\,.$$ (This formula uses $+$ where the usual formula uses $-$ because $\alpha$ is an exterior angle of the triangle, rather than the interior angle used in the usual formula.)
You now have $r_3$.
Now if $\beta$ is the difference between the directions of $v_3$ and $v_1$, measured in the direction that gives the angle of the smallest possible magnitude, the area of the triangular region between $v_1$ and $v_3$ will be $\frac12 r_1 r_3 \sin\beta$. But the same triangular region also has area $\frac12 r_1 r_2 \sin\alpha$. So $$\frac12 r_1 r_3 \sin\beta = \frac12 r_1 r_2 \sin\alpha,$$ $$\sin\beta = \frac{r_2}{r_3} \sin\alpha.$$
This has two solutions for $\beta$. To find which solution applies, find $r_1 + r_2 \cos\alpha$. This is positive if $\beta$ is acute, negative if $\beta$ is obtuse. So take $$\beta = \begin{cases} \arcsin\left( \frac{r_2}{r_3} \sin\alpha \right) & \text{if }\ r_1 + r_2 \cos\alpha \geq 0, \\ \pi - \arcsin\left( \frac{r_2}{r_3} \sin\alpha \right) & \text{if }\ r_1 + r_2 \cos\alpha < 0. \end{cases}$$ Now let $$\theta_3 = \theta_1 + \beta.$$ If you prefer all your directions to be within certain bounds, for example you wish to have $0 \leq \theta_3 < 2\pi$, then set $\theta_3 = \theta_1 + \beta + 2m\pi$ where $m$ is an integer such that $\theta_3$ is within the bounds you prefer.
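As a sketch of the procedure just described, assuming radian angles (the helper name, the zero-magnitude guard, and the clamping of rounding error are my additions):

```python
import math

def add_polar_law_of_cosines(r1, theta1, r2, theta2):
    """Sum two polar vectors via the Law of Cosines and the arcsine case split above."""
    alpha = math.remainder(theta2 - theta1, 2.0*math.pi)   # theta2 - theta1 mapped into [-pi, pi]
    r3 = math.sqrt(r1*r1 + r2*r2 + 2.0*r1*r2*math.cos(alpha))
    if r3 == 0.0:                        # the vectors cancel; any direction will do
        return 0.0, theta1
    s = (r2 / r3) * math.sin(alpha)      # sin(beta)
    s = max(-1.0, min(1.0, s))           # guard against rounding slightly outside [-1, 1]
    if r1 + r2*math.cos(alpha) >= 0.0:   # beta is acute
        beta = math.asin(s)
    else:                                # beta is obtuse
        beta = math.pi - math.asin(s)
    return r3, theta1 + beta
```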
Nowhere in this procedure did we compute the usual $x$ and $y$ components of the standard "convert to Cartesian coordinates" method. But the coordinates of the two vectors in a system that is rotated by an angle $\theta_1$ are $(r_1,0)$ and $(r_2 \cos\alpha, r_2\sin\alpha)$; and if you look carefully, you can find all four of those components in the formulas above (in the sense that $0$ is always present as a constant term in any formula, and the other three components are all present as factors of terms). This method therefore could be considered a crypto-Cartesian conversion method (the coordinates are "hidden" in the formulas), although it does avoid some of the computation that the usual Cartesian conversion method would entail.
(Edited: this last paragraph is different from my previous conclusion.)
In contrast, the usual Cartesian-coordinates method is:
$$\begin{align}
x_3 &= r_1 \cos\theta_1 + r_2 \cos\theta_2,\\
y_3 &= r_1 \sin\theta_1 + r_2 \sin\theta_2,\\
r_3 &= \sqrt{x_3^2 + y_3^2},\\
\theta_3 &= \begin{cases}
\arctan\left( \frac {y_3}{x_3} \right) & \text{if }\ x_3 > 0, \\
\pi + \arctan\left( \frac {y_3}{x_3} \right) & \text{if }\ x_3 < 0, \\
\frac\pi2 & \text{if $x_3 = 0$ and $y_3 > 0$}, \\
-\frac\pi2 & \text{if $x_3 = 0$ and $y_3 < 0$}. \\
\end{cases}
\end{align}$$
If you are programming this on a computer using a math library,
there will typically be a function atan2(y_3, x_3)
that will compute $\theta_3$ without requiring you to explicitly
test the signs of $x_3$ and $y_3$.
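For instance, a short Python sketch of this Cartesian route (the function name is mine) could look like:

```python
import math

def add_polar_via_cartesian(r1, theta1, r2, theta2):
    """The usual route: convert to Cartesian components, add, convert back."""
    x3 = r1*math.cos(theta1) + r2*math.cos(theta2)
    y3 = r1*math.sin(theta1) + r2*math.sin(theta2)
    r3 = math.hypot(x3, y3)        # sqrt(x3**2 + y3**2)
    theta3 = math.atan2(y3, x3)    # handles all the sign cases listed above
    return r3, theta3
```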
The Cartesian method is simpler and (I think) less prone to error either when following the steps with pencil, paper, and a calculator or when programming it in a computer, which may explain its popularity.
I noticed, however, that the Cartesian method requires a square root and five trigonometric functions (two sines, two cosines, and an arc tangent) whereas the non-Cartesian method uses only a square root and three trigonometric functions (one sine, one cosine, and an arc sine). So the non-Cartesian method appears to be more computationally efficient. In fact, this suggests a special use of the Cartesian method, where the coordinate frame is first rotated so that one of the vectors lies along a Cartesian axis, and rotated back to the original frame after the vector has been computed in the rotated frame.
That is, the method can be:
$$\begin{align}
\alpha &= \theta_2  \theta_1, \\
u &= r_2 \cos\alpha,\\
v &= r_2 \sin\alpha,\\
r_3 &= \sqrt{\left( r_1 + u \right)^2 + v^2}, \\
\beta &= \text{atan2}(v, r_1 + u), \\
\theta_3 &= \theta_1 + \beta
\end{align}$$
using the `atan2` function to simplify the formula.
This is really a kind of "hybrid" method due to the use of $u$ and $v$, which are Cartesian coordinates of $v_2$ in a rotated coordinate frame, but I think it qualifies as "without first having to convert them to Cartesian or complex form".
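A minimal Python sketch of this hybrid method, assuming radian angles (the function name is mine):

```python
import math

def add_polar_hybrid(r1, theta1, r2, theta2):
    """One cosine, one sine, one atan2 and one square root, as in the formulas above."""
    alpha = theta2 - theta1
    u = r2 * math.cos(alpha)        # coordinates of v2 in a frame rotated by theta1
    v = r2 * math.sin(alpha)
    r3 = math.sqrt((r1 + u)**2 + v**2)
    beta = math.atan2(v, r1 + u)
    return r3, theta1 + beta
```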

And, of course, going through Cartesian coordinates is vastly faster. I guess the OP wanted to ask whether there is any *simple* addition law, in the same way that vector addition in Cartesian coordinates is simple. – Alex M. Jul 18 '15 at 18:43

@AlexM. Actually, the answers to this question seem to be throwing some doubt on the idea that the Cartesian method is faster (much to my surprise, by the way; I came into this with assumptions much like yours). I still have a hunch there are plenty of applications where you can amortize the conversion over enough vector sums to beat the best of the polar algorithms, but the interesting thing is, this now seems to be an application-specific question rather than a hands-down win for the Cartesian method. – David K Jul 19 '15 at 00:57
If we assume that the expensive operations are trigonometry and square roots, then David K's "hybrid" solution can be optimized down to two trigonometric functions and a square root.
As before our vectors to sum are $(\theta_1,r_1)$ and $(\theta_2,r_2)$. Without loss of generality we can assume that $r_1\ge r_2$ and $\alpha=\theta_2-\theta_1 \in [0,\pi]$. (If the latter condition is not true, negate all the angles before and after adding).
Compute $u=r_2\cos\alpha$. Now, by the law of cosines, $$ r_3= \sqrt{r_1^2 + r_2^2 + 2r_1r_2\cos\alpha} =\sqrt{r_1(r_1+2u) + r_2^2}$$ For the angle, instead of using an arctangent (for which we would need $\sin\alpha$), we can find the cosine of twice the angle without any additional expensive operations. Namely, the cosine is the real part of $$ \left(\frac{r_1+r_2(\cos\alpha+i\sin\alpha)}{r_3}\right)^2 = \frac{r_1^2+r_2^2(\cos^2\alpha-\sin^2\alpha)+2r_1r_2\cos\alpha + i(\cdots)}{r_3^2}$$ and therefore $$ \beta = \frac12 \arccos \frac{r_1(r_1+2u)-r_2^2 + 2u^2}{r_3^2}$$ and as before $\theta_3=\theta_1+\beta$. The assumption that $r_1\ge r_2$ ensures that $\beta\in[0,\pi/2]$, so the value of the arccosine in $[0,\pi]$ is the one we need to halve.
But, as Dr. MV points out with more algebraic details, finding $\cos 2\beta$ instead of $\cos\beta$ is really a detour, since we've taken the square root anyway. In the notation of this answer, his computation just sets $$ \beta = \arccos\frac{r_1+u}{r_3} $$ which is even cheaper than the above.
This can also be justified without projecting to the $\theta_1$ axis, simply as using the Law of Cosines once again: $$ \beta = \arccos\frac{r_1^2+r_3^2-r_2^2}{2r_1r_3} $$ since (by the initial application of the Law of Cosines) we have $r_3^2-r_2^2=r_1^2+2r_1u$. This results in a completely non-Cartesian argument.
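A Python sketch of this cheapest form, with one cosine, one arccosine and one square root (the angle normalization, the zero-magnitude guard and the clamping are my own additions, making the "negate all the angles" convention explicit):

```python
import math

def add_polar_two_trig(r1, theta1, r2, theta2):
    """Sum two polar vectors with two trig calls and one square root."""
    alpha = math.remainder(theta2 - theta1, 2.0*math.pi)   # theta2 - theta1 in [-pi, pi]
    sign = 1.0 if alpha >= 0.0 else -1.0                   # work as if alpha were in [0, pi]
    u = r2 * math.cos(alpha)                               # cosine is even, so the sign does not matter here
    r3 = math.sqrt(r1*(r1 + 2.0*u) + r2*r2)
    if r3 == 0.0:                                          # the vectors cancel exactly
        return 0.0, theta1
    c = max(-1.0, min(1.0, (r1 + u) / r3))                 # clamp rounding noise
    beta = sign * math.acos(c)                             # beta carries the sign of alpha
    return r3, theta1 + beta
```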

Between this and Dr. MV's answer, now there are two methods that require just a cosine, a square root, and an arc cosine (plus a division and a handful of additions and multiplications). What a pleasant surprise! I look forward to trying these out. – David K Jul 19 '15 at 00:42
While correct mathematically, these are quite hard to code properly due to sign ambiguities and branch cuts in phase. This can be seen in that the argument of the arccos() does not depend on the sign of the angle difference ($\phi_2-\phi_1$); it gives the same answer regardless. Thus when adding back $\phi_1$ to $\psi_2$ to get the angle of the result $\phi$, the resulting vector has the correct magnitude, but the angle is wrong if $\phi_2 \lt \phi_1$.
You can go through and code in the signs, but then you get another set of signs if the angle between the input vectors is more than $\pi$, and even this is complicated by the branch cut angle (difference between angles should be less than ±$\pi$). Essentially you need to know whether $\vec{r}_1 \times \vec{r}_2$ is left or right handed (negative or positive), and handle correctly.
You can make this work, as I worked through this afternoon in a fit of pique. BUT, the answer is less accurate than the Cartesian result due to phase errors accumulating when the vectors nearly add to zero. So less accurate and lots of fiddly logic that is going to be slow. If you do go down this road I hope you are unit testing the bejesus out of it.
That said, you can make a chi-squared fitting algorithm work well, because that just wants the square of the vector distance, which avoids all the sign issues. Just use the Law of Cosines and go to town.
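If all that is needed is the squared distance between two vectors given in polar form (the chi-squared use case mentioned above), the Law of Cosines gives it with no angle recovery and no sign issues at all; a tiny sketch (the name is mine):

```python
import math

def polar_distance_sq(r1, phi1, r2, phi2):
    """Squared length of the difference vector, straight from the Law of Cosines."""
    return r1*r1 + r2*r2 - 2.0*r1*r2*math.cos(phi2 - phi1)
```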
The to/fro Cartesian conversion is
$$r=\sqrt{(r_1\cos(\phi_1)+r_2\cos(\phi_2))^2+(r_1\sin(\phi_1)+r_2\sin(\phi_2))^2},$$ $$\phi=\arccos\left(\frac{r_1\cos(\phi_1)+r_2\cos(\phi_2)}r\right).$$ An easy simplification is possible by temporarily taking $\phi_1$ as the origin of the angles, and letting $\psi_1,\psi_2,\psi:=0,\phi_2-\phi_1,\phi-\phi_1$.
$$r=\sqrt{(r_1+r_2\cos(\psi_2))^2+r_2^2\sin^2(\psi_2)}=\sqrt{r_1^2+2r_1r_2\cos(\psi_2)+r_2^2},$$ $$\psi=\arccos\left(\frac{r_1+r_2\cos(\psi_2)}r\right).$$
For those preferring the arctangent-based angle evaluation,
$$\psi=\arctan\left(\frac{r_2\sin(\psi_2)}{r_1+r_2\cos(\psi_2)}\right),$$ which is slightly more costly.
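A small Python sketch of these simplified formulas; the `copysign` step is my own addition to pick the correct branch of the arccosine (the sign ambiguity discussed in the previous answer):

```python
import math

def add_polar_arccos(r1, phi1, r2, phi2):
    """Sum two polar vectors using the r and psi formulas above."""
    psi2 = phi2 - phi1
    r = math.sqrt(r1*r1 + 2.0*r1*r2*math.cos(psi2) + r2*r2)
    if r == 0.0:                        # the vectors cancel
        return 0.0, phi1
    c = max(-1.0, min(1.0, (r1 + r2*math.cos(psi2)) / r))
    psi = math.copysign(math.acos(c), math.sin(psi2))   # psi takes the sign of sin(psi2)
    return r, phi1 + psi
```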