I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?

I'm not sure what you mean. The Extended Euclidean Algorithm is inherently recursive. When you use it by hand, you use it recursively. – Jim Belk Nov 26 '11 at 18:13

Maybe you can have a look at this question: http://math.stackexchange.com/questions/20717/how-to-find-solutions-of-linear-diophantine-ax-by-c – Martin Sleziak Nov 26 '11 at 18:14

Have you seen [this blurb](http://www.math.uconn.edu/~kconrad/blurbs/ugradnumthy/divgcd.pdf) by KCd? – J. M. ain't a mathematician Nov 26 '11 at 18:16

If you're looking for help working out examples by hand, you may find [this](http://www.math.umn.edu/~garrett/crypto/a01/Euclid.html) helpful. I certainly find it helpful when writing tests or homework assignments :) – Bill Cook Nov 26 '11 at 18:41

@steven That is the same algorithm described in my answer below. – Bill Dubuque Jul 03 '21 at 20:47
5 Answers
Perhaps the easiest way to do it by hand is in analogy to Gaussian elimination or triangularization, except that, since the coefficient ring is not a field, we must use the division / Euclidean algorithm to iteratively decrease the coefficients till zero. In order to compute both $\rm\,gcd(a,b)\,$ and its Bezout linear representation $\rm\,j\,a+k\,b,\,$ we keep track of such linear representations for each remainder in the Euclidean algorithm, starting with the trivial representation of the gcd arguments, e.g. $\rm\: a = 1\cdot a + 0\cdot b.\:$ In matrix terms, this is achieved by augmenting (appending) an identity matrix that accumulates the effect of the elementary row operations. Below is an example that computes the Bezout representation for $\rm\:gcd(80,62) = 2,\ $ i.e. $\ 7\cdot 80\,-\,9\cdot 62\ =\ 2.\:$ See this answer for a proof and for conceptual motivation of the ideas behind the algorithm (see the Remark below if you are not familiar with row operations from linear algebra).
For example, to solve m x + n y = gcd(m,n) we begin with
two rows [m 1 0], [n 0 1], representing the two
equations m = 1m + 0n, n = 0m + 1n. Then we execute
the Euclidean algorithm on the numbers in the first column,
doing the same operations in parallel on the other columns.
Here is an example: d = x(80) + y(62) proceeds as:
                  in equation form           |  in row form
                  -------------------------- | -------------
                  80 =   1(80) + 0(62)       |  80    1    0
                  62 =   0(80) + 1(62)       |  62    0    1
 row1 - row2   -> 18 =   1(80) - 1(62)       |  18    1   -1
 row2 - 3 row3 ->  8 =  -3(80) + 4(62)       |   8   -3    4
 row3 - 2 row4 ->  2 =   7(80) - 9(62)       |   2    7   -9
 row4 - 4 row5 ->  0 = -31(80) + 40(62)      |   0  -31   40
The row operations above are those resulting from applying
the Euclidean algorithm to the numbers in the first column,
          row1  row2  row3  row4  row5
namely:     80,   62,   18,    8,    2  = Euclidean remainder sequence
 
for example 62 - 3(18) = 8, the 2nd step in the Euclidean algorithm
becomes: row2 - 3 row3 = row4 when extended to all columns.
In effect we have row-reduced the first two rows to the last two.
The matrix effecting the reduction is in the bottom right corner.
It starts as 1, and is multiplied by each elementary row operation,
hence it accumulates the product of all the row operations, namely:
$$ \left[ \begin{array}{rr} 7 & -9\\ -31 & 40\end{array}\right ] \left[ \begin{array}{rrr} 80 & 1 & 0\\ 62 & 0 & 1\end{array}\right ] \ =\ \left[ \begin{array}{rrr} 2\ & \ \ \ 7\ & -9\\ 0\ & -31\ & 40\end{array}\right ] \qquad\qquad\qquad\qquad\qquad $$
Notice row 1 is the particular solution 2 = 7(80) - 9(62)
Notice row 2 is the homogeneous solution 0 = -31(80) + 40(62),
so the general solution is any linear combination of the two:
n row1 + m row2 -> 2n = (7n - 31m) 80 + (40m - 9n) 62
The same row/column reduction techniques tackle arbitrary
systems of linear Diophantine equations. Such techniques
generalize easily to similar coefficient rings possessing a
Euclidean algorithm, e.g. polynomial rings F[x] over a field,
Gaussian integers Z[i]. There are many analogous interesting
methods, e.g. search on keywords: Hermite / Smith normal form,
invariant factors, lattice basis reduction, continued fractions,
Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.
Remark $ $ As an optimization, we can omit one of the Bezout coefficient columns (being derivable from the others). Then the calculations have a natural interpretation as modular fractions (though the "fractions" are multi-valued), e.g. computing $\,\color{#c00}{\large \frac{10}9}\equiv\color{#90f}{-18}\pmod{\!43}\,$ as in this answer
$$ \begin{array}{rr} \bmod 43\!:\ \ \ \ \ \ \ \ [\![1]\!] &43\, x\,\equiv\ \ 0\ \\ [\![2]\!] &\ \color{#c00}{9\,x\, \equiv 10}\!\!\!\\ [\![1]\!]-5\,[\![2]\!] \rightarrow [\![3]\!] & \color{#0a0}{-2\,x\, \equiv -7}\ \\ [\![2]\!]+\color{orange}4\,[\![3]\!] \rightarrow [\![4]\!] & \color{#90f}{1\,x\, \equiv -18}\ \end{array}\qquad\qquad\qquad$$
$${\text{as fractions}}\!:\ \,\dfrac{0}{43}\ \overset{\large\frown}\equiv \underbrace{\color{#c00}{\dfrac{10}{9}}\ \overset{\large\frown}\equiv \ \color{#0a0}{\dfrac{-7}{-2}}\ \overset{\large\frown}\equiv\ \color{#90f}{\dfrac{-18}{1}}} _{\!\!\!\Large \begin{align}\color{#c00}{10}\ \ + \ \ &\!\color{orange}4\,(\color{#0a0}{-7}) \ \ \equiv \ \ \color{#90f}{-18}\\ \color{#c00}{9}\ \ +\ \ &\!\color{orange}4\,(\color{#0a0}{-2}) \ \ \equiv\ \ \ \color{#90f}{1}\end{align}}\qquad\qquad\qquad\quad\, $$
We also used least magnitude remainders $\,(\color{#0a0}{-2}\,$ vs. $7\equiv 43\bmod 9)\,$ to shorten the computations (this can halve the number of steps in the Euclidean algorithm).
Introduction to row operations (for readers unfamiliar with linear algebra).
Let $\,r_i\,$ be the Euclidean remainder sequence. Above $\, r_1,r_2,r_3\ldots = 80,62,18\ldots$ Given linear combinations $\,r_j = a_j m + b_j n\,$ for $\,r_{i-1}\,$ and $\,r_i\,$ we can calculate a linear combination for $\,r_{i+1} := r_{i-1}\bmod r_i = r_{i-1} - q_i r_i\,$ by substituting said combinations for $\,r_{i-1}\,$ and $\,r_i,\,$ i.e.
$$\begin{align} r_{i+1}\, &=\, \overbrace{a_{i-1} m + b_{i-1}n}^{\Large r_{i-1}}\, -\, q_i \overbrace{(a_i m + b_i n)}^{\Large r_i}\\[.3em] {\rm i.e.}\quad \underbrace{r_{i-1} - q_i r_i}_{\Large r_{i+1}}\, &=\, (\underbrace{a_{i-1}-q_i a_i}_{\Large a_{i+1}})\, m\, +\, (\underbrace{b_{i-1} - q_i b_i}_{\Large b_{i+1}})\, n \end{align}$$
Thus the $\,a_i,b_i\,$ satisfy the same recurrence as the remainders $\,r_i,\,$ viz. $\,f_{i+1} = f_{i-1}-q_i f_i.\,$ This implies that we can carry out the recurrence in parallel on row vectors $\,[r_i,a_i,b_i]$ representing the equation $\, r_i = a_i m + b_i n\,$ as follows
$$\begin{align} [r_{i+1},a_{i+1},b_{i+1}]\, &=\, [r_{i-1},a_{i-1},b_{i-1}] - q_i [r_i,a_i,b_i]\\ &=\, [r_{i-1},a_{i-1},b_{i-1}] - [q_i r_i,\,q_i a_i,\, q_i b_i]\\ &=\, [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-q_i b_i] \end{align}$$
which written in the tabular format employed far above becomes
$$\begin{array}{ccc} &r_{i-1}& a_{i-1} & b_{i-1}\\ &r_i& a_i &b_i\\ \rightarrow\ & \underbrace{r_{i-1}\!-q_i r_i}_{\Large r_{i+1}} &\underbrace{a_{i-1}\!-q_i a_i}_{\Large a_{i+1}}& \underbrace{b_{i-1}-q_i b_i}_{\Large b_{i+1}} \end{array}$$
Thus the extended Euclidean step is: compute the quotient $\,q_i = \lfloor r_{i-1}/r_i\rfloor$ then multiply row $i$ by $q_i$ and subtract it from row $i\!-\!1.$ Said componentwise: in each column $\,r,a,b,\,$ multiply the $i$'th entry by $q_i$ then subtract it from the $i\!-\!1$'th entry, yielding the $i\!+\!1$'th entry. If we ignore the 2nd and 3rd columns $\,a_i,b_i$ then this is the usual Euclidean algorithm. The above extends this algorithm to simultaneously compute the representation of each remainder as a linear combination of $\,m,n,\,$ starting from the obvious initial representations $\,m = 1(m)+0(n),\,$ and $\,n = 0(m)+1(n).\,$
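For readers who want to check the table above mechanically, here is a minimal Python sketch of this row-operation form (the function name `ext_gcd_rows` is my own, not from the answer):

```python
def ext_gcd_rows(m, n):
    """Extended Euclidean algorithm in tabular (row-operation) form.

    Each row [r, a, b] represents the equation r = a*m + b*n.
    Returns (g, a, b) with g = gcd(m, n) = a*m + b*n.
    """
    prev, cur = [m, 1, 0], [n, 0, 1]   # m = 1(m) + 0(n),  n = 0(m) + 1(n)
    while cur[0] != 0:
        q = prev[0] // cur[0]          # Euclidean quotient
        # row_{i-1} - q * row_i, applied to every column in parallel
        prev, cur = cur, [p - q * c for p, c in zip(prev, cur)]
    return tuple(prev)                 # last nonzero remainder row

g, a, b = ext_gcd_rows(80, 62)
print(g, a, b)   # 2 7 -9,  i.e.  7(80) - 9(62) = 2
```

Running it on the example above reproduces the final row of the table, $\,2 = 7(80) - 9(62)$.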

[See here](http://math.stackexchange.com/a/163118/242) for an example in $\rm\color{#940}{TeX}\color{#C00}{ni}\color{#0A0}{color}.\ \ $ – Bill Dubuque Jun 28 '12 at 20:24

[Sometimes](http://math.stackexchange.com/a/398356/242) this is called the Euclid-Wallis algorithm, but I am not sure that is historically correct. – Bill Dubuque Dec 23 '13 at 22:20

See [this answer](http://math.stackexchange.com/a/2053174/242) for a *fraction* form of the Extended Euclidean Algorithm – Bill Dubuque Jan 18 '17 at 18:52

Bill, by doing the Euclidean algorithm I got 80, 62, 18, 8, 2, 0. As 80, 62 are the building blocks, all these numbers will be written in the form x(80)+y(62); for 80 and 62 it is easy to find the x and the y, but I didn't understand what you did to get the x and y for the other numbers. – Goun2 Aug 12 '17 at 01:41

@hjy As described above, we need to perform the algorithm on the *entire* row or equation, e.g. as I show above the Euclidean step $\ 62-\color{#c00}{3}(18) = 8\ $ becomes $\ [62, 0, 1]-\color{#c00}{3}[18,1,-1] = [8,-3,4],\,$ when performed on the augmented rows or, performed on the equations, it is $$\begin{align} 62\, &=\ \ \ 0(80) + 1(62)\\ -\color{#c00}{3}\,(18\, &=\ \ \ 1(80) - 1(62))\\ \rightarrow\ \ \ 8\, &= -3(80)+4(62) \end{align}$$ – Bill Dubuque Aug 12 '17 at 02:08

I think I got what you did. You use the first 2 rows to get the following ones, for example row1 - x(row2) = row3; if I do the operation I will get row 3. Do I only need to do operations with matrices, or something else? – Goun2 Aug 12 '17 at 02:12

@hjx The idea is: we attach to each remainder $r_i$ in the Euclidean remainder sequence a representation as a linear combination of the initial numbers $\,m,n,\,$ e.g. by attaching the combination as the RHS of an equation $\, r_i = a_i m + b_i n.\,$ In order to propagate these linear combinations to the next remainder in the sequence, we simply extend the Euclidean operation to the linear combinations too, i.e. to the RHS of the equations in my prior comment. – Bill Dubuque Aug 12 '17 at 02:29

So we end up adding and subtracting ($\rm\color{#c00}{scaled}$) *equations* (vs. *numbers* = remainders). The vector or row notation is just an abbreviation of these equations (which will be natural if one knows linear algebra). But I avoided much use of linear algebra in order to keep the answer accessible to readers who have not yet studied linear algebra. – Bill Dubuque Aug 12 '17 at 02:29

I think I'd better do it by back substitution; I didn't learn linear algebra yet. I was really interested in solving Diophantine equations using this, but even so, thanks for the help. – Goun2 Aug 12 '17 at 02:50

@hjx It is much easier than back substitution, and you needn't know any linear algebra. If you can tell me *precisely* what is not clear then I can elaborate. – Bill Dubuque Aug 12 '17 at 03:28

@hjx I added a Remark to the answer which explains in more detail the row operations used in the *extended* Euclidean step. Is it clear now? – Bill Dubuque Aug 12 '17 at 15:47


Is it possible to find the gcd of 3 numbers with this method? I'm reading a book and it talks about a method called Blankinship's method that can compute the gcd of 3 or more numbers, but it does exactly the same thing as you did in your post; however, it shows the method with 2 numbers. I found a link that shows that method with 3, but it isn't the same thing. – Goun2 Aug 15 '17 at 13:24

@hjx Yes, it works for arbitrarily many numbers, and there are many variations on the basic idea, i.e. for descent we can replace any gcd argument by its remainder mod a smaller argument (or subtract any linear combination of the others that yields a smaller magnitude value). When using the above tabular format for many arguments we need to keep track of which element was replaced (e.g. cross it out) – Bill Dubuque Aug 15 '17 at 13:57

Is what is happening [here](https://thenoteboo.wordpress.com/2015/12/03/blankinships-method/) the same thing you did, with the difference that the rows are getting changed? – Goun2 Aug 15 '17 at 14:34

@hjx Yes, it is essentially the same method, except they keep track of the latest 3 rows in a matrix rather than the 3 most recent (non-crossed-out) elements in a list. That way is more work notationally since it repeats much. The (common) attribution to Blankinship 1963 is historically incorrect since these ideas are *very* old. – Bill Dubuque Aug 15 '17 at 14:51

Could you please provide an example of how to get the gcd of 3 numbers ? – Goun2 Aug 15 '17 at 22:45

@hjx See e.g. [this answer](https://math.stackexchange.com/a/620958/242) and [this answer.](https://math.stackexchange.com/a/1978664/242). Generally you can find examples of my posts in the Linked Questions list. – Bill Dubuque Aug 15 '17 at 23:32

When you typed $ [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-b_i r_i]$, did you mean $ [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-b_i \color{red}{q_i}]$? – J. W. Tanner Dec 23 '20 at 15:02

Thanks for this answer. In the line after *"by substituting said combinations for"*, the second $a_{i-1}$ should be $b_{i-1}$, right? – joseville Jan 23 '22 at 03:54

This is more a comment on the method explained by Bill Dubuque than a proper answer in itself, but I think there is a remark so obvious that I don't understand why it is hardly ever made in texts discussing the extended Euclidean algorithm. This is the observation that you can save yourself half of the work by computing only one of the Bezout coefficients. In other words, instead of recording for every new remainder $r_i$ a pair of coefficients $k_i,l_i$ so that $r_i=k_ia+l_ib$, you need to record only $k_i$ such that $r_i\equiv k_ia\pmod b$. Once you have found $d=\gcd(a,b)$ and $k$ such that $d\equiv ka\pmod b$, you can then simply put $l=(d-ka)/b$ to get the other Bezout coefficient. This simplification is possible because the relation that gives the next pair of intermediate coefficients is perfectly independent for the two coefficients: say you have $$ \begin{aligned} r_i&=k_ia+l_ib\\ r_{i+1}&=k_{i+1}a+l_{i+1}b\end{aligned} $$ and Euclidean division gives $r_i=qr_{i+1}+r_{i+2}$, then in order to get $$ r_{i+2}=k_{i+2}a+l_{i+2}b $$ one can take $k_{i+2}=k_i-qk_{i+1}$ and $l_{i+2}=l_i-ql_{i+1}$, where the equation for $k_{i+2}$ does not need $l_i$ or $l_{i+1}$, so you can just forget about the $l$'s. In matrix form, the passage is from $$ \begin{pmatrix} r_i&k_i&l_i\\ r_{i+1}&k_{i+1}&l_{i+1}\end{pmatrix} \quad\text{to}\quad \begin{pmatrix} r_{i+2}&k_{i+2}&l_{i+2}\\ r_{i+1}&k_{i+1}&l_{i+1}\end{pmatrix} $$ by subtracting the second row $q$ times from the first, and it is clear that the last two columns are independent, and one might as well just keep the $r$'s and the $k$'s, passing from $$ \begin{pmatrix} r_i&k_i\\ r_{i+1}&k_{i+1}\end{pmatrix} \quad\text{to}\quad \begin{pmatrix} r_{i+2}&k_{i+2}\\ r_{i+1}&k_{i+1}\end{pmatrix} $$ instead.
A very minor drawback is that the relation $r_i=k_ia+l_ib$ that should hold for every row is maybe a wee bit easier to check by inspection than $r_i\equiv k_ia\pmod b$, so that computational errors could slip in a bit more easily. But really, I think that with some practice this method is just as safe and faster than computing both coefficients. Certainly when programming this on a computer there is no reason at all to keep track of both coefficients.
A final bonus is that in many cases where you apply the extended Euclidean algorithm you are only interested in one of the Bezout coefficients in the first place, which saves you the final step of computing the other one. One example is computing an inverse modulo a prime number $p$: if you take $b=p$, and $a$ is not divisible by it, then you know beforehand that you will find $d=1$, and the coefficient $k$ such that $d\equiv ka\pmod p$ is just the inverse of $a$ modulo $p$ that you were after.
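A minimal Python sketch of this half-coefficient optimization (the function name `half_ext_gcd` is my own), tracking only the remainders $r_i$ and the coefficients $k_i$:

```python
def half_ext_gcd(a, b):
    """Track only the k coefficients: each pair satisfies r ≡ k*a (mod b).

    Returns (d, k) with d = gcd(a, b) and d ≡ k*a (mod b);
    the other Bezout coefficient is recovered as l = (d - k*a) // b.
    """
    r0, k0 = a, 1      # a ≡ 1*a (mod b)
    r1, k1 = b, 0      # b ≡ 0*a (mod b)
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1   # ordinary Euclidean step on remainders
        k0, k1 = k1, k0 - q * k1   # same recurrence on the k's alone
    return r0, k0

# Modular inverse of 19 mod 29: gcd is 1, so k is the inverse.
d, k = half_ext_gcd(19, 29)
print(d, k % 29)   # 1 26
```

As noted, recovering the dropped coefficient afterwards is a single division: `l = (d - k*a) // b`.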

I just noticed your yearlater answer. I usually perform this optimization by working with (modular) fractions, e.g. see [this answer.](http://math.stackexchange.com/a/2053174/242) – Bill Dubuque Jan 18 '17 at 18:56
You may like to check this and this.
Also, there is a well-known table method which is very easy and fast for manual solution.
 21,425
 30
 121
 207
The way to do this is due to Blankinship, "A New Version of the Euclidean Algorithm", AMM 70:7 (Sep 1963), 742-745. Say we want $a x + b y = \gcd(a, b)$, for simplicity with positive $a$, $b$ with $a > b$. Set up auxiliary vectors $(x_1, x_2, x_3)$, $(y_1, y_2, y_3)$ and $(t_1, t_2, t_3)$ and keep them such that we always have $x_1 a + x_2 b = x_3$, $y_1 a + y_2 b = y_3$, $t_1 a + t_2 b = t_3$ throughout. The algorithm itself is:
(x1, x2, x3) := (1, 0, a)
(y1, y2, y3) := (0, 1, b)
while y3 <> 0 do
    q := floor(x3 / y3)
    (t1, t2, t3) := (x1, x2, x3) - q * (y1, y2, y3)
    (x1, x2, x3) := (y1, y2, y3)
    (y1, y2, y3) := (t1, t2, t3)
At the end, $x_1 a + x_2 b = x_3 = \gcd(a, b)$. It is seen that $x_3$, $y_3$ do as the classic Euclidean algorithm, and easily checked that the invariant mentioned is kept all the time.
One can do away with $x_2$, $y_2$, $t_2$ and recover $x_2$ at the end as $(x_3 - x_1 a) / b$.
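The pseudocode above translates directly into Python; here is a sketch (the function name is mine):

```python
def blankinship(a, b):
    """Blankinship-style extended gcd.

    Maintains the invariant x1*a + x2*b == x3 (and likewise for the
    y and t vectors); returns (x1, x2, x3) with x3 = gcd(a, b).
    """
    x1, x2, x3 = 1, 0, a
    y1, y2, y3 = 0, 1, b
    while y3 != 0:
        q = x3 // y3   # floor(x3 / y3)
        t1, t2, t3 = x1 - q * y1, x2 - q * y2, x3 - q * y3
        x1, x2, x3 = y1, y2, y3
        y1, y2, y3 = t1, t2, t3
    return x1, x2, x3

print(blankinship(80, 62))   # (7, -9, 2), i.e. 7*80 - 9*62 = 2
```

On the example from the first answer this returns `(7, -9, 2)`, matching the Bezout representation $7\cdot 80 - 9\cdot 62 = 2$.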

That's the same method as in my answer, and it is *much* older than Blankinship's 1963 paper. Alas, I don't recall the historical details at the moment. – Bill Dubuque Mar 24 '14 at 14:54
Just to complement the other answers, there's an alternative form of the extended Euclidean algorithm that requires no backtracking, and which you might find easier to understand and apply. Here's how to solve your problem* using it: $\newcommand{\x}{\phantom} \newcommand{\r}{\color{red}} \newcommand{\g}{\color{green}} \newcommand{\b}{\color{blue}}$
^{*) …from a duplicate question that I originally wrote this answer for.}
$$\begin{aligned} \g{ 0} \cdot 19 + \r{\x+1} \cdot 29 &= \b{ 29} && (1) \\ \g{ 1} \cdot 19 + \r{\x+0} \cdot 29 &= \b{ 19} && (2) \\ \g{-1} \cdot 19 + \r{\x+1} \cdot 29 &= \b{ 10} && (3) = (1) - (2) \\ \g{ 2} \cdot 19 + \r{-1} \cdot 29 &= \b{\x09} && (4) = (2) - (3) \\ \g{-3} \cdot 19 + \r{\x+2} \cdot 29 &= \b{\x01} && (5) = (3) - (4) \end{aligned}$$
…and now you have your solution: $\g{-3} \cdot 19 \equiv \b{1} \pmod{29}$, so the inverse of $19$ modulo $29$ is $-3$ (or $29 - 3 = 26$, if you prefer a nonnegative solution).
In effect, what we're doing is trying to find a solution to the linear equation $\g x \cdot 19 + \r k \cdot 29 = \b r$ with the smallest possible $\b r > 0$. We do this by starting with the two trivial solutions $(1)$ and $(2)$ above, and then generate new solutions with a smaller and smaller $\b r$ by always subtracting both sides of the last solution from the one before it as many times as needed to get a smaller $\b r$ than we have so far. (In your example, that's only once each time, but I'll show another example below where that's not the case.)
Eventually, we'll either find a solution with $\b r = 1$, in which case the corresponding $\g x$ coefficient is the inverse we want, or we'll end up with $\b r = 0$, in which case the previous solution's $\b r > 1$ is the greatest common divisor of the number we're trying to invert and the modulus, and thus no inverse exists. (Of course, we could also just keep going until $\b r = 0$ in any case, and then check the previous line to see if $\b r$ there equals $1$ or not, but that would be extra work we can easily avoid.)
Also, it's worth noting that we're not actually using the $\r k$ coefficients for anything, so if all you're interested in is finding the modular inverse (and/or the GCD), you don't actually have to calculate those. But showing them makes it clearer why the algorithm works. (Also, if you do calculate $\r k$, it's easy to verify that you didn't make any mistakes just by checking that the last equation with $\b r = 1$ really holds.)
Anyway, here's a couple more worked examples to illustrate some cases that your example doesn't. To start with, let's try to find the inverse of $13$ modulo $29$:
$$\begin{aligned} \g{ 0} \cdot 13 + \r{\x+1} \cdot 29 &= \b{ 29} && (1) \\ \g{ 1} \cdot 13 + \r{\x+0} \cdot 29 &= \b{ 13} && (2) \\ \g{-2} \cdot 13 + \r{\x+1} \cdot 29 &= \b{\x03} && (3) = (1) - 2 \cdot (2) \\ \g{ 9} \cdot 13 + \r{-4} \cdot 29 &= \b{\x01} && (4) = (2) - 4 \cdot (3) \\ \end{aligned}$$
This time, we could (and needed to) subtract solution $(2)$ from $(1)$ twice, since $\lfloor 29 \mathbin/ 13 \rfloor = 2$. And, similarly, we could subtract $(3)$ from $(2)$ four times, since $\lfloor 13 \mathbin/ 3 \rfloor = 4$. And we can verify that $\g 9$ is indeed the inverse of $13$ modulo $29$ just by checking that $9 \cdot 13 - 4 \cdot 29$ indeed equals $1$.
Now let's try an example where the inverse does not exist, like trying to find the inverse of $15$ modulo $27$:
$$\begin{aligned} \g{ 0} \cdot 15 + \r{\x+1} \cdot 27 &= \b{ 27} && (1) \\ \g{ 1} \cdot 15 + \r{\x+0} \cdot 27 &= \b{ 15} && (2) \\ \g{-1} \cdot 15 + \r{\x+1} \cdot 27 &= \b{ 12} && (3) = (1) - (2) \\ \g{ 2} \cdot 15 + \r{-1} \cdot 27 &= \b{\x03} && (4) = (2) - (3) \\ \g{-9} \cdot 15 + \r{\x+5} \cdot 27 &= \b{\x00} && (5) = (3) - 4 \cdot (4) \\ \end{aligned}$$
Oops. The last solution, with $\b r = 0$, is quite useless to us (except for checking that we did the arithmetic right). From the previous solution $(4)$, however, we can read that $\gcd(15, 27) = 3$, and even that $\g x = 2$ is a solution to the "generalized inverse" equation $\g x \cdot 15 \equiv \gcd(15, 27) \pmod{27}$.
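The three worked examples above can be checked with a short Python sketch of this forward method (the function name `mod_inverse` is my own); it returns the inverse when one exists and `None` otherwise:

```python
def mod_inverse(a, m):
    """Forward (no-backtracking) extended-Euclid modular inverse.

    Keeps pairs (x, r) satisfying x*a ≡ r (mod m), starting from the
    trivial solutions (0, m) and (1, a), and descends until r = 1
    (inverse found) or r = 0 (gcd > 1, so no inverse exists).
    """
    x0, r0 = 0, m     # 0*a ≡ m ≡ 0 (mod m)
    x1, r1 = 1, a     # 1*a ≡ a     (mod m)
    while r1 > 1:
        q = r0 // r1
        x0, x1 = x1, x0 - q * x1
        r0, r1 = r1, r0 - q * r1
    return x1 % m if r1 == 1 else None

print(mod_inverse(19, 29))   # 26   (i.e. -3 mod 29)
print(mod_inverse(13, 29))   # 9
print(mod_inverse(15, 27))   # None (gcd = 3)
```

Note that, exactly as in the tables, the $\r k$ coefficients are never needed, so the code doesn't track them.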
Ps. See also this answer I wrote on crypto.SE last year, explaining the same algorithm from a slightly different viewpoint.

This is exactly the same method described in my 8-year-old answer, which is also illustrated many times in the 150 linked questions, e.g. [here](https://math.stackexchange.com/a/3379575/242) and [here](https://math.stackexchange.com/a/3325252/242) and [here](https://math.stackexchange.com/a/3326123/242) for some recent examples. As such, I think you should not say "alternative form complementing other answers" since that will probably mislead readers to think that you are describing a different method, – Bill Dubuque Oct 14 '19 at 02:04