75

How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$ Could anyone explain this to me, please?

I have a solution for $3\mathbf{i}+4\mathbf{j}$, but I could not solve it when there are $3$ components...

When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.

Rodrigo de Azevedo
niko
    Choose two coordinates, switch them, add a minus sign, and complete with zeroes. For example: choosing `i` and `j` might yield `4i-3j`, choosing `i` and `k` might yield `2i+3k`, and choosing `j` and `k` might yield `2j+4k`. – Did Apr 26 '12 at 19:09
  • @Didier Thanks for letting me know, but as you told, we get three solutions: 4i-3j, 2i+3k, 2j+4k; it's not a single vector. I need a single vector ai+bj+ck which is perpendicular to the other vector. Sorry, but I just started to learn vectors. – niko Apr 26 '12 at 19:15
  • $2j+4k=0i+2j+4k$. – David Mitra Apr 26 '12 at 19:19
  • 16
    Pick any vector not colinear to your vector and take their cross product. – N. S. Apr 26 '12 at 20:26
  • 10
    Not to de-rail the thread, but does anyone know why this particular question has over 15k views? – Jesse Madnick Feb 27 '13 at 07:51
  • I found this good [pdf](http://www.tracy.k12.mn.us/larsena/PC-Sec7-3%28Day3%29.pdf) to explain it, if can help. – Riccardo Volpe Jul 05 '14 at 18:06
  • 1
    @JesseMadnick Over 92k views now. A question often searched for, with a clear descriptive title. –  Oct 06 '14 at 20:20
  • 10
    @JesseMadnick Useful in computer graphics. You often have a lot of normal vectors for the surfaces of objects, but to turn those into proper transformation matrices, you need perpendicular vectors. – Brendan Abel Oct 01 '16 at 00:52
  • 4
    There are a lot of detailed mathy answers here, but the most practical answer is found only in the comment from @Did above. Just make sure that the two components you switch are not both zero. I lack the reputation to add an answer, but here's a complete and simple solution in C form: planeVec = (normal.x == normal.y ? new Vector3(-normal.z, 0, normal.x) : new Vector3(-normal.y, normal.x, 0)) – Joe Strout Jan 03 '17 at 16:23
  • @Did That doesn't work for $(1,0,0)$ when switching the zeroes ... – Michael Hoppe Jun 11 '19 at 17:38
  • @MichaelHoppe: it is sound and safe to use the components that maximize some norm. – Sep 11 '20 at 07:02
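Did's swap-and-negate rule, guarded against the all-zero pair pointed out in the later comments, can be sketched in C++ (`Vec3` is a hypothetical helper type, not anything from the thread):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Swap two components, negate one, and zero the rest -- choosing a
// pair that cannot be (0, 0), per the caveat raised in the comments.
Vec3 perpendicular(Vec3 v) {
    if (std::fabs(v.x) > std::fabs(v.z))
        return Vec3{-v.y, v.x, 0.0};  // x != 0 here, so this is nonzero
    return Vec3{0.0, -v.z, v.y};      // (y, z) != (0, 0) for nonzero v
}
```

For the question's vector $(3,4,-2)$ this returns $(-4,3,0)$, one of the three candidates Did listed.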

18 Answers

56

There are infinitely many vectors in three dimensions that are perpendicular to a fixed one. They need only satisfy the following equation: $$(3\mathbf{i}+4\mathbf{j}-2\mathbf{k}) \cdot v=0$$

To find all of them, choose two independent vectors perpendicular to the original, such as $v_1=4\mathbf{i}-3\mathbf{j}$ and $v_2=2\mathbf{i}+3\mathbf{k}$; then every linear combination of them is also perpendicular to the original vector: $$v=(4a+2b)\mathbf{i}-3a\mathbf{j}+3b\mathbf{k}, \hspace{10 mm} a,b \in \mathbb{R}$$
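A quick numerical check of this family (the `Vec3`/`dot` helpers are hypothetical, not part of the answer):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// v(a, b) = a*v1 + b*v2 with v1 = (4, -3, 0) and v2 = (2, 0, 3),
// i.e. the family ((4a + 2b), -3a, 3b) from the answer.
Vec3 combo(double a, double b) { return Vec3{4 * a + 2 * b, -3 * a, 3 * b}; }
```

Every choice of $a$ and $b$ yields a vector whose dot product with $(3,4,-2)$ is zero.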

carlop
  • What is this kind of notation called? I have never seen a vector being defined like $(3i + 4j - 2k)$. The notation I've seen so far would be $\left(\begin{array}{c}3\\4\\2\end{array}\right)$, therefore I do not really understand your answer. :( – Niklas R Oct 30 '12 at 22:20
  • 3
    There are many possible notation, I choose to use the same notation of the question, but other choice are good as well. $i$,$j$,$k$ refers to vectors $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, so it is basically the same thing after you do vector-scalar multiplication. – carlop Dec 14 '12 at 13:49
  • 1
    @NiklasR Since you wanted a name, $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ are called the (classical) [Hamiltonian Quaternions](http://en.wikipedia.org/wiki/Classical_Hamiltonian_quaternions). – Alexander Gruber Jul 05 '14 at 20:23
  • 1
    @NiklasR you can find that notation being used in the Stieg Larsson. – iam_agf Jun 20 '16 at 03:50
  • 1
    @MonsieurGalois: would that be the famous lost fourth novel, _The Girl Who Manipulated Vectors_? – Anton Sherwood Sep 20 '16 at 18:51
  • 1
    @AlexanderGruber I do not see anything there to support Hamiltonian Quaternions referring specifically to the standard basis unit vectors i, j, k. – electronpusher Feb 21 '17 at 01:44
  • 2
    Agreeing with @electronpusher, it is more appropriate to refer to **i**, **j**, **k** as ["Unit vectors representing the axes of Cartesian coordinates"](https://en.wikipedia.org/wiki/Unit_vector#Cartesian_coordinates). – ToolmakerSteve Apr 30 '17 at 20:53
  • This is just a suggestion for an addition to the answer at the very end: "All the vectors perpendicular to 3i+4j−2k form a plane in 3D space. This plane would represent the null-space of your vector". – joshuaronis Nov 25 '18 at 16:02
  • 1
    This doesn't answer the question as it doesn't show how to choose these two vectors... – Danvil Mar 16 '20 at 01:18
  • In agreement with Danvil: IMO this is a non-answer, as it replies that to find a perpendicular vector, you need to find two of them! Shame on the upvoters. – Sep 11 '20 at 06:18
32

Take cross product with any vector. You will get one such vector.
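For instance (a sketch with a hypothetical `Vec3` helper type; if the chosen vector happens to be collinear with the input, the cross product is zero, so pick a different one):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Standard cross product: the result is perpendicular to both inputs,
// and zero exactly when a and b are collinear.
Vec3 cross(Vec3 a, Vec3 b) {
    return Vec3{a.y * b.z - a.z * b.y,
                a.z * b.x - a.x * b.z,
                a.x * b.y - a.y * b.x};
}
```

For example, crossing $(3,4,-2)$ with $\mathbf{i}=(1,0,0)$ gives $(0,-2,-4)$, which is perpendicular to both.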

R K Sinha
15

A related problem is to construct an algorithm that finds a non-zero perpendicular vector without branching. If the input vector is N = (a,b,c), then you could always choose T = (c,c,-a-b), but T will be zero whenever c = 0 and a + b = 0, e.g. for N = (-1,1,0). You could check whether T is zero and choose T = (-b-c,a,a) if it is, but this requires a test and branch. I can't see how to do this without the test and branch.

wcochran
  • One of the few answers where the author understood the question. Now we just need a solution.. – Danvil Mar 16 '20 at 01:20
  • I posted a solution which doesn't need a test and branch for a normalized vector. For a non-normalized vector it only requires a test and branch to check if the complete vector is null. – Danvil Mar 16 '20 at 03:52
  • I proposed a different approach where obtaining the vector takes no arithmetic (there is a trivial solution), and the burden is on making sure that you obtain a nonzero. –  Sep 11 '20 at 06:35
8

You just need to find any vector $v \neq 0$ such that $v \cdot (3\mathbf{i}+4\mathbf{j}-2\mathbf{k}) = 0$.

There is no unique solution, any one will do. To save typing, let $p = 3\mathbf{i}+4\mathbf{j}-2\mathbf{k}$.

Pick a vector $x$ that is not on the line through the origin and $p$. Take $x = 3\mathbf{i}$, for example.

Construct a vector perpendicular to $p$ in the following way: Find a value of $t$ so that $(x+t p) \cdot p = 0$. Then the vector $v=x+t p$ will be perpendicular to $p$.

In my example, $(x+t p) = (3 + 3 t)\mathbf{i}+4 t \mathbf{j}-2t\mathbf{k}$, and $(x+t p) \cdot p = 9 + 29 t$. By choosing $t=-\frac{9}{29}$, the vector $v=x+t p$ is now perpendicular to $p$.
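In general $t=-\frac{x \cdot p}{p \cdot p}$, so the construction amounts to subtracting the projection of $x$ onto $p$. A sketch (hypothetical `Vec3` helper type):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// v = x - ((x.p)/(p.p)) p : the component of x orthogonal to p.
// Assumes p != 0; the result is zero only if x is parallel to p.
Vec3 rejectFrom(Vec3 x, Vec3 p) {
    double t = dot(x, p) / dot(p, p);
    return Vec3{x.x - t * p.x, x.y - t * p.y, x.z - t * p.z};
}
```

With $x = 3\mathbf{i}$ and $p = 3\mathbf{i}+4\mathbf{j}-2\mathbf{k}$ this reproduces the answer's $v = x - \frac{9}{29}p$.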

copper.hat
4

A suggested solution without a branch: construct an array of two vectors in the following way:

arr[0] = (c, c, -a-b)
arr[1] = (-b-c, a, a)
int selectIndex = ((c != 0) && (-a != b)) // this is not a branch
perpendicularVector = arr[selectIndex]

If (c, c, -a-b) is zero, selectIndex is 1 and the other vector will be selected.

Jesko Hüttenhain
dmoti
  • Clever -- I like it. – wcochran Aug 10 '16 at 14:49
  • 1
    There is bug: Input (0, 0.707, -0.707); Output (0,0,0) – Ondrej Petrzilka Jan 17 '17 at 12:18
  • 2
    How is `&&` performed without a branch? [Logical AND Operator](https://msdn.microsoft.com/en-us/library/c6s3h5a7.aspx) says "*The second operand is evaluated only if the first operand evaluates to true (nonzero).*". *That* implies a branch (to avoid evaluating the second operand). To avoid a branch, you must use `&` instead. Suggest that change as an edit. [I can't make the edit, because it is only a single character; edit less than 6 characters is rejected. I'm not going to do some bogus other characters just to submit the correction.] – ToolmakerSteve Apr 30 '17 at 20:58
  • `// this is not a branch` is flat-out incorrect, as ToolmakerSteve said. The entire point of the short-circuited AND is that it conditionally skips the second, which _requires_ a branch at some level -- either in the CPU pipeline or the Assembly -- which... is a branch. – Nic Apr 09 '18 at 03:15
  • @Stuntddude The sarcasm is utterly unnecessary, but thanks for pointing it out. My first comment is still correct; please fix that when you get a chance. – Nic Aug 05 '18 at 01:37
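Putting the comment fixes together gives a working variant: select `arr[1]` exactly when `arr[0]` is zero, and use bitwise `&` instead of the short-circuiting `&&`. A hedged sketch (exact floating-point comparisons are deliberate here; `Vec3` is a hypothetical helper type):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// arr[0] = (c, c, -a-b) is zero exactly when c == 0 and -a == b,
// which is precisely when we must fall back to arr[1] = (-b-c, a, a).
// Bitwise & evaluates both operands, avoiding the short-circuit branch.
Vec3 branchFreePerp(double a, double b, double c) {
    Vec3 arr[2] = {Vec3{c, c, -a - b}, Vec3{-b - c, a, a}};
    int selectIndex = (c == 0.0) & (-a == b);
    return arr[selectIndex];
}
```

This also handles the input $(0, 0.707, -0.707)$ reported as a failure above, returning a nonzero perpendicular vector.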
4

For any nonzero vector $(a,b,c)$, the three of $(0,c,-b),(-c,0,a)$ and $(-b,a,0)$ are orthogonal to it.

To avoid the "parallel case", you can choose the one with the largest squared modulus, among $c^2+b^2, c^2+a^2$ and $b^2+a^2$, or the one with the two largest absolute components or simply one with the largest absolute component. Choosing the largest will also optimize numerical stability.


In the given case, $(-4,3,0)$.


Update:

The largest squared modulus also corresponds to the smallest (absolute) component.
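A sketch of the selection by largest squared modulus (hypothetical `Vec3` helper type):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Pick whichever of (0, c, -b), (-c, 0, a), (-b, a, 0) has the largest
// squared modulus; the result is nonzero for every nonzero input.
Vec3 largestPerp(double a, double b, double c) {
    double m1 = c * c + b * b, m2 = c * c + a * a, m3 = b * b + a * a;
    if (m1 >= m2 && m1 >= m3) return Vec3{0, c, -b};
    if (m2 >= m3) return Vec3{-c, 0, a};
    return Vec3{-b, a, 0};
}
```

For $(3,4,-2)$ this picks $(-4,3,0)$, as in the answer.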

2

One way to do this is to express the vector in terms of a spherical coordinate system. For example

$$ \boldsymbol{e}= \pmatrix{a \\ b \\ c} = r \pmatrix{ \cos\varphi \cos\psi \\ \sin\varphi \cos\psi \\ \sin\psi} $$

where $r=\sqrt{a^2+b^2+c^2}$, $\tan(\varphi) = \frac{b}{a}$ and $\tan{\psi} = \frac{c}{\sqrt{a^2+b^2}}.$

Provided that $a \neq 0$ or $b \neq 0$:

A choice of two orthogonal vectors can be found with $$ \begin{aligned} \boldsymbol{n}_1 & = \frac{{\rm d} \boldsymbol{e}}{{\rm d} \varphi} = r\pmatrix{-\sin \varphi \cos\psi \\ \cos\varphi \cos\psi \\ 0}& \boldsymbol{n}_2 & = \frac{{\rm d} \boldsymbol{e}}{{\rm d} \psi} = r\pmatrix{-\cos\varphi \sin\psi \\ -\sin\varphi \sin\psi \\ \cos\psi} \end{aligned}$$

Of course, any non-zero linear combination of these two vectors is also orthogonal

$$ \boldsymbol{n} = \cos(t) \boldsymbol{n}_1 + \sin(t) \boldsymbol{n}_2 $$

where $t$ is a rotation angle about the vector $\boldsymbol{e}$.

Put it all together to make a family of orthogonal vectors in terms of $t$ as

$$ \boldsymbol{n} = \pmatrix{-b \cos(t) - \frac{a c}{\sqrt{a^2+b^2}} \sin(t) \\ a \cos(t) - \frac{b c}{\sqrt{a^2+b^2}} \sin(t) \\ \sqrt{a^2+b^2} \sin(t)} $$

For $\boldsymbol{e} = \pmatrix{3 & 4 & -2}$ the above gives

$$ \boldsymbol{n} = \pmatrix{ \frac{6}{5} \sin(t)-4 \cos(t) \\ 3 \cos(t) + \frac{8}{5} \sin(t) \\ 5 \sin(t) } \longrightarrow \begin{cases} \boldsymbol{n} = \pmatrix{-4 & 3 & 0} & t =0 \\ \boldsymbol{n} =\pmatrix{\frac{6}{5} & \frac{8}{5} & 5} & t = \frac{\pi}{2} \end{cases} $$


For the case when $a=0$ and $b=0$, you can assign the perpendicular somewhat arbitrarily with

$$ \boldsymbol{n}_1 = \pmatrix{1 \\ 0 \\ 0} $$

and

$$ \boldsymbol{n}_2 = \pmatrix{0 \\ 1 \\ 0} $$

for the general solution

$$\boldsymbol{n} = \cos(t) \boldsymbol{n}_1 + \sin(t) \boldsymbol{n}_2$$
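The closed form above can be evaluated directly when $a^2+b^2 \neq 0$ (a sketch; `Vec3` is a hypothetical helper type):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// n(t) from the closed form above; assumes a^2 + b^2 != 0.
Vec3 orthogonalFamily(double a, double b, double c, double t) {
    double r = std::sqrt(a * a + b * b);
    return Vec3{-b * std::cos(t) - a * c / r * std::sin(t),
                 a * std::cos(t) - b * c / r * std::sin(t),
                 r * std::sin(t)};
}
```

At $t=0$ for $(3,4,-2)$ this gives $(-4,3,0)$, matching the worked case above.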

John Alexiou
2

Short answer: the vector $(s_z\,(z + s_z) - x^2, -x y, -x\,(z + s_z))$ with $s_z := \text{sign}(z) \, \|(x,y,z)\|$ is orthogonal to the vector $(x,y,z)$.


Note that we assume that $\text{sign}(x)$ is defined as $+1$ for $x \ge 0$ and as $-1$ otherwise.

Let $(x,y,z)$ be a vector with norm $s$ and $z > -s$; then the following matrix is an orthogonal basis where every basis vector has norm $s$:

$\left( \begin{array}{ccc} s - \frac{x^2}{z+s} & -\frac{x y}{z+s} & x \\ -\frac{x y}{z+s} & s - \frac{y^2}{z+s} & y \\ -x & -y & z \\ \end{array} \right)$

There are two notable cases if $z = -s$:

  1. The vector is of the form $(0,0,z)$ with $z < 0$, and we can simply invert it before applying the formula above. As shown below, this can be exploited to get a branch-free implementation.
  2. The vector is the zero vector $(0,0,0)$. "Perpendicular" doesn't make much sense for the null vector; if you interpret it as "dot product is zero", then you can just return the zero vector.

We can deal with these two problems as follows:


Let's look at the first vector: $(s - \frac{x^2}{z+s}, -\frac{x y}{z+s}, -x)$. The singularity at $(0,0,-1)$ can be avoided by inverting the input vector and then inverting the result which gives: $(-s - \frac{x^2}{z-s}, -\frac{x y}{z-s}, -x)$.

Following this idea we can set $s_z := \text{sign}(z) \, s$ and compute an orthogonal basis vector for any non-null vector $(x,y,z)$ as:

$(s_z - \frac{x^2}{z + s_z}, -\frac{x y}{z + s_z}, -x)$

This leads to a nice branch-free C++ implementation for a normalized vector:

Vector3 OrthoNormalVector(double x, double y, double z) {
  const double g = std::copysign(1., z);
  const double h = z + g;
  return Vector3(g - x*x/h, -x*y/h, -x);
}

Check the implementation of copysign on your platform to make sure that copysign(1., 0.) returns 1 and not 0.


For an arbitrary vector, not necessarily normalized, we can use a little trick to get an orthogonal vector: we scale the vector by the factor $z+s_z$ to get:

$(s_z\,(z + s_z) - x^2, -x y, -x\,(z + s_z))$

This vector is still orthogonal to the original vector $(x,y,z)$ as it was just scaled by a factor. It also has zero norm if and only if the norm of the original vector is 0.

This leads again to a branch-free implementation:

Vector3 OrthogonalVector(double x, double y, double z) {
  const double s = std::sqrt(x*x + y*y + z*z);
  const double g = std::copysign(s, z);  // note s instead of 1
  const double h = z + g;
  return Vector3(g*h - x*x, -x*y, -x*h);
}
Danvil
  • As you mention, this fails for the vector $(0,0,1)$ and similar vectors, but there is a simple modification. The same can be said for my approach, using the vector $(0,0,1)$ as the extra vector to make up the $n-1$ vectors. – robjohn Mar 16 '20 at 08:20
  • The provided code snippet works for any vector. It uses a closed form formula to compute an orthogonal vector without if statements or branches. – Danvil Mar 18 '20 at 03:33
2

This branch-free algorithm is $\operatorname{sqrt}$-free and trig-free:

$$ \begin{aligned} \begin{bmatrix} \operatorname{copysign}\left(z,x\right) \\ \operatorname{copysign}\left(z,y\right) \\ -\operatorname{copysign}\left(|x|+|y|,z\right) \\ \end{bmatrix} \end{aligned} $$

An equivalent form

This alternative avoids the 2 $\operatorname{abs}$ at the cost of an additional $\operatorname{copysign}$:

$$ \begin{aligned} \begin{bmatrix} \operatorname{copysign}\left(z,x\right) \\ \operatorname{copysign}\left(z,y\right) \\ -\operatorname{copysign}\left(x,z\right) -\operatorname{copysign}\left(y,z\right) \\ \end{bmatrix} \end{aligned} $$

Properties

Let $L_\text{i}$ be the length of the input and $L_\text{o}$ be the length of the output:

$$ \begin{aligned} L_\text{i} \le L_\text{o} \le \sqrt{2} L_\text{i} \end{aligned} $$

which holds for both forms above.

A note about the function $\operatorname{copysign}$

Many platforms offer a function $\operatorname{copysign}\left(a,b\right)$ whose return value has the magnitude of $a$ and the sign of $b$. Despite the following mathematical definition, its implementation can be branch-free using bitwise operations:

$$ \begin{aligned} \operatorname{copysign}\left(a,b\right) &= \begin{cases} |a|&\text{for }b\ge0 \\ -|a|&\text{for }b<0 \\ \end{cases} \end{aligned} $$

If $\operatorname{copysign}$ is not available

The $\operatorname{copysign}(a,b)$ function is preferred because it is non-vanishing. However some platforms only offer a $\operatorname{sign}(b)$ function which vanishes for $b=0$:

$$ \begin{align} \operatorname{sign}(b) &= \begin{cases} 1 &\text{for } b > 0 \\ 0 &\text{for } b = 0 \\ -1 &\text{for } b < 0 \\ \end{cases} \\ \end{align} $$

Fortunately an alternative exists. The desired non-vanishing functionality can be obtained by "nudging" the output and nesting the result in another call:

$$ \begin{align} \operatorname{sign}\left[\operatorname{sign}(b)+0.5\right] &= \begin{cases} 1 &\text{for } b \ge 0 \\ -1 &\text{for } b < 0 \\ \end{cases} \\ \end{align} $$

This leads to the following form of the perpendicular to vector $(x,y,z)$ for platforms with no $\operatorname{copysign}(a,b)$ function:

$$ \begin{align} \begin{bmatrix} s_{xz}z \\ s_{yz}z \\ -s_{xz}x-s_{yz}y \end{bmatrix} \\ \end{align} $$

where:

$$ \begin{align} s_{xz} &= \operatorname{sign}\left\{ \left[ \operatorname{sign}(x)+0.5 \right] \left[ \operatorname{sign}(z)+0.5 \right] \right\} \\ s_{yz} &= \operatorname{sign}\left\{ \left[ \operatorname{sign}(y)+0.5 \right] \left[ \operatorname{sign}(z)+0.5 \right] \right\} \\ \end{align} $$
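A sketch of the first form in C++, where `std::copysign` is typically branch-free (`Vec3` is a hypothetical helper type):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// ( copysign(z,x), copysign(z,y), -copysign(|x|+|y|, z) ): the dot
// product with (x, y, z) telescopes to |x||z| + |y||z| - |z|(|x|+|y|) = 0.
Vec3 copysignPerp(double x, double y, double z) {
    return Vec3{std::copysign(z, x),
                std::copysign(z, y),
                -std::copysign(std::fabs(x) + std::fabs(y), z)};
}
```

For $(3,4,-2)$ this gives $(2,2,7)$, whose dot product with the input is $6+8-14=0$.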

1

A geometric solution would be as follows. The plane $3x+4y-2z=0$ is perpendicular to the vector $3i+4j−2k$, so any vector in that plane is perpendicular to this vector. Thus you may choose any $x$, $y$ and $z$ satisfying $3x+4y-2z=0$, and the resulting $xi+yj+zk$ will be perpendicular to $3i+4j−2k$.

user1483
1

Given $n-1$ linearly independent vectors, $\{v_j\}_{j=1}^{n-1}$ in $\mathbb{R}^n$, we can find a non-zero vector, $u$, perpendicular to all of them.

If we set $$ \begin{align} u_1&=\det\begin{bmatrix} v_{1,1}&v_{2,1}&\cdots&v_{n-1,1}&1\\ v_{1,2}&v_{2,2}&\cdots&v_{n-1,2}&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ v_{1,n}&v_{2,n}&\cdots&v_{n-1,n}&0 \end{bmatrix}\\ u_2&=\det\begin{bmatrix} v_{1,1}&v_{2,1}&\cdots&v_{n-1,1}&0\\ v_{1,2}&v_{2,2}&\cdots&v_{n-1,2}&1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ v_{1,n}&v_{2,n}&\cdots&v_{n-1,n}&0 \end{bmatrix}\\ &\vdots\\ u_n&=\det\begin{bmatrix} v_{1,1}&v_{2,1}&\cdots&v_{n-1,1}&0\\ v_{1,2}&v_{2,2}&\cdots&v_{n-1,2}&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ v_{1,n}&v_{2,n}&\cdots&v_{n-1,n}&1 \end{bmatrix}\\ \end{align} $$ then $$ u\cdot w=\det\begin{bmatrix} v_{1,1}&v_{2,1}&\cdots&v_{n-1,1}&w_1\\ v_{1,2}&v_{2,2}&\cdots&v_{n-1,2}&w_2\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ v_{1,n}&v_{2,n}&\cdots&v_{n-1,n}&w_n \end{bmatrix} $$ If we replace $w$ by any of the $v_j$, the determinant will be $0$ because of duplicate columns; thus, $u\cdot v_j=0$.

$\{v_j\}_{j=1}^{n-1}$ cannot span $\mathbb{R}^n$, so there must be some $v_n$ that is not in the span of $\{v_j\}_{j=1}^{n-1}$. This means that $\{v_j\}_{j=1}^n$ are independent, and so $$ \begin{align} u\cdot v_n&=\det\begin{bmatrix} v_{1,1}&v_{2,1}&\cdots&v_{n-1,1}&v_{n,1}\\ v_{1,2}&v_{2,2}&\cdots&v_{n-1,2}&v_{n,2}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ v_{1,n}&v_{2,n}&\cdots&v_{n-1,n}&v_{n,n} \end{bmatrix}\\ &\ne0 \end{align} $$ In particular, $u\ne0$.

robjohn
  • The question asks for a perpendicular vector when only $n-2$ are given. You are answering a much easier problem, which can be achieved with a cross product in case $n=3$. – Danvil Mar 16 '20 at 01:13
  • @Danvil: If you have $n-2$ vectors, simply add *any* vector that is not a combination of the $n-2$ to get $n-1$ vectors and then apply the procedure above. A vector that is perpendicular to the $n-1$ vectors will be perpendicular to the $n-2$. – robjohn Mar 16 '20 at 03:19
  • Yes of course, but the core of the problem is how to compute "any vector" which is not a combination of the other. – Danvil Mar 16 '20 at 03:43
  • Almost any vector you choose (in a measure theoretic or probabilistic sense) will be independent of the other $n-2$. In almost any approach (mine, Gram-Schmidt, etc.), one may need to try one of each of some basis vectors to find one that is independent of the $n-2$ given vectors. – robjohn Mar 16 '20 at 08:08
1

The vectors perpendicular to $(3,4,-2)$ form a two dimensional subspace, the plane $3x+4y-2z=0$, through the origin.

To get solutions, choose values for any two of $x,y$ and $z$, and then use the equation to solve for the third.

The space of solutions could also be described as $V^{\perp}$, where $V=\{(3t,4t,-2t):t\in\Bbb R\}$ is the line (or one dimensional vector space) spanned by $(3,4,-2)$.
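This recipe is one line of code for the question's vector: fix two coordinates and solve $3x+4y-2z=0$ for the third (a sketch; `Vec3` is a hypothetical helper type, solving for $x$ since its coefficient is nonzero):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Choose y and z freely; then x = (2z - 4y) / 3 satisfies 3x + 4y - 2z = 0.
Vec3 fromPlane(double y, double z) { return Vec3{(2 * z - 4 * y) / 3.0, y, z}; }
```

For example, $y=2$, $z=1$ gives $(-2,2,1)$, and every choice of $y,z$ gives another solution.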

1

Another way to find a vector $\vec{v}$ for a given $\vec{u}$ such that $$ \vec{u}\cdot\vec{v}=0 $$ is to use an antisymmetric matrix $A$ ($A^\top=-A$) satisfying $$ A_{ij}u_iu_j=0\qquad(\text{sum over }ij). $$ In two dimensions $A$ is $$ A=\begin{pmatrix} 0&1\\ -1&0\\ \end{pmatrix}. $$ In three dimensions $A$ is $$ A=\begin{pmatrix} 0&1&1\\ -1&0&1\\ -1&-1&0\\ \end{pmatrix}. $$ In 2D only one such vector $\vec{v}=A\vec{u}$ exists, while in 3D you can apply the same matrix to the sum $\vec{u}+\vec{v}$, finding a vector perpendicular to the plane given by the other two vectors.

2D

The matrix $A$ can be calculated as follows: $$ A_{ij}u_iu_j=A_{11}u_1^2+(A_{12}+A_{21})u_1u_2+A_{22}u_2^2. $$ One way is to set $A_{11}=0=A_{22}$ and $A_{21}=-A_{12}$.

3D

Again $$ A_{ij}u_iu_j=A_{11}u_1^2+(A_{12}+A_{21})u_1u_2+A_{22}u_2^2+(A_{13}+A_{31})u_1u_3+(A_{23}+A_{32})u_2u_3+A_{33}u_3^2, $$ and we set $A_{11}=A_{22}=A_{33}=0$, $A_{21}=-A_{12}$, $A_{31}=-A_{13}$ and $A_{23}=-A_{32}$.
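Applying the 3D matrix $A$ above to $\vec{u}$ is a one-line sketch (`Vec3` is a hypothetical helper type; note that $A\vec{u}$ vanishes when $\vec{u}$ is a multiple of $(1,-1,1)$, so this particular choice of $A$ is not safe for every input):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// v = A u for A = [[0,1,1],[-1,0,1],[-1,-1,0]]; u.v = 0 by antisymmetry.
// Caveat: v = 0 whenever u is a multiple of (1, -1, 1).
Vec3 applyA(Vec3 u) { return Vec3{u.y + u.z, -u.x + u.z, -u.x - u.y}; }
```

For $(3,4,-2)$ this gives $(2,-5,-7)$, with $3\cdot2 - 4\cdot5 + 2\cdot7 = 0$.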

yngabl
0

A vector perpendicular to the given vector $A$ can be rotated about $A$ to find all positions of the perpendicular vector. To find them:

if $ A \cdot B =0 $ and $ A \cdot C =0 $, then $ B,C $ lie in a plane perpendicular to $A$, and also $ A \times ( B \times C ) = 0 $ for any two vectors $B,C$ perpendicular to $A$.

Narasimham
0

Remember: there exist infinitely many vectors in 3 dimensions that are perpendicular to a fixed one. Now let $v\neq 0$ be the vector $xi+yj+zk$ that is perpendicular to the vector $3i+4j-2k$. Then $v\cdot\langle 3i+4j−2k\rangle=0$: $$ \langle xi+yj+zk\rangle\cdot \langle3i+4j−2k\rangle =0, $$ so $3x+4y-2z=0$ (1), using $i\cdot i =j\cdot j=k\cdot k=1$ (the cross terms vanish).

Now, there are three unknown variables $x$, $y$ and $z$ in (1), and you can choose any two of them freely. Let $y=2$ and $z=1$;
then $x=-2$ from (1).

One such vector is $-2i+2j+k$. Similarly, you can choose any two of the variables in (1) and then find the third. In this way you can find infinitely many vectors perpendicular to the vector $3i+4j-2k$.

dustin
gyaba
0

All vectors perpendicular to the given vector form a plane. If $v_1$ and $v_2$ are perpendicular to the given vector $v = 3i +4j -2k$, then the dot products $v\cdot v_1 =0$ and $v\cdot v_2 = 0$. With $v_1 = 2i -j + k$ and $v_2 = 2i +j +5k$, every vector $v_3 = av_1 +bv_2$, where $a$ and $b$ are scalars, lies in that plane and is normal to the given vector $v$.

dustin
b.sahu
0

Definition of the Dot Product:

$\vec{a} \cdot \vec{b}$ = ( $a_{1} , a_{2}$ ) $\cdot$ ( $b_{1} , b_{2}$ ) = $a_{1}b_{1} + a_{2}b_{2}$

also known as the scalar product or inner product

$\mathbf{\vec{a} \cdot \vec{b}}$ is a single number (a scalar)

Orthogonal Vectors:

Two vectors are orthogonal (perpendicular) if and only if $\ \mathbf{\vec{a} \cdot \vec{b} = 0}$ in other words... two vectors are perpendicular if their DOT PRODUCT is ZERO

Example:

Let

$\vec{a}$ = ( 8 , -4 )

that is:

$a_{1}$ = 8

$a_{2}$ = -4

Find a vector $\mathbf{\vec{r}}$ that is perpendicular to $\mathbf{\vec{a}}$:

$\vec{r}$ = (x, y);

that is:

$b_{1} = x$

$b_{2} = y$

$\vec{a} \cdot \vec{r} = 8x + (-4y) = 0 \Rightarrow$

$\Rightarrow 8x - 4y = 0 \Rightarrow$

$8(1) - 4(2) = 0 \Rightarrow \mathbf{\vec{r} = (1, 2)} \Rightarrow$ one solution

$8(2) - 4(4) = 0 \Rightarrow \mathbf{\vec{r} = (2, 4)} \Rightarrow$ other solution

$8(-1) - 4(-2) = 0 \Rightarrow \mathbf{\vec{r} = (-1, -2)} \Rightarrow$ other solution

... as Rebecca said: "Keep in mind there will be an infinite number of perpendicular vectors." Here is the pdf source.

Riccardo Volpe
  • Just so you know, you can make a dot in MathJax with \cdot and subscripts with _. For more info, check out http://meta.math.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference. –  Jul 05 '14 at 19:46
  • @Bye_World, Thanks a lot! :-) – Riccardo Volpe Jul 05 '14 at 19:49
  • 1
    Nice for 2D, but the OP was asking about a 3D scenario. – electronpusher Feb 21 '17 at 01:50
  • This is not an answer to the question. Question-asker states that he knows how to solve in 2D (two axes or components), but doesn't understand how to solve in 3D. You are showing the solution to 2D. Similarly, you misunderstand Rebecca's comment about there being an infinite number of perpendicular vectors. All your solution vectors are in the same **direction**, only varying by length. Indeed, this is what happens in 2D. However, Rebecca is talking about 3D, in which the DIRECTIONS of the solutions also vary. – ToolmakerSteve Apr 30 '17 at 21:08
-1

The dot product of two perpendicular vectors is always $0$, so if you set $(ai+bj+ck)\cdot(di+ej+fk)=0$ you can solve for the unknown components. Given one vector, the infinitely many perpendicular vectors form a plane perpendicular to the original vector. If you know one or two of the coordinates of the desired perpendicular vector, then you can find the corresponding vector(s) on that plane.

Babelfish