Lucas sequences $U_n(P,Q)$ and $V_n(P,Q)$ are defined by the following relations:



$U_n(P,Q)=P\cdot U_{n-1}(P,Q)-Q\cdot U_{n-2}(P,Q)$




$V_n(P,Q)=P\cdot V_{n-1}(P,Q)-Q\cdot V_{n-2}(P,Q).$

I know that we can use the following method to compute either $U_n(P,Q)$ or $V_n(P,Q)$:

$\begin{pmatrix} P & -Q \\ 1 & 0 \end{pmatrix}^{n-1} \begin{pmatrix} U_1 & V_1 \\ U_0 & V_0 \end{pmatrix} = \begin{pmatrix} U_n & V_n \\ U_{n-1} & V_{n-1} \end{pmatrix}$
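For concreteness, here is a minimal Python sketch of this matrix method (the function names are my own illustration, not part of any standard library), using exponentiation by squaring for the power $n-1$:

```python
def lucas_matrix(n, P, Q):
    """Compute (U_n, V_n) for n >= 1 via the matrix identity above."""
    def mat_mul(A, B):
        return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
                 A[0][0] * B[0][1] + A[0][1] * B[1][1]],
                [A[1][0] * B[0][0] + A[1][1] * B[1][0],
                 A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

    def mat_pow(A, k):
        # exponentiation by squaring: O(log k) matrix multiplications
        R = [[1, 0], [0, 1]]
        while k:
            if k & 1:
                R = mat_mul(R, A)
            A = mat_mul(A, A)
            k >>= 1
        return R

    M = mat_pow([[P, -Q], [1, 0]], n - 1)
    init = [[1, P], [0, 2]]  # [[U_1, V_1], [U_0, V_0]] with U_1=1, U_0=0, V_1=P, V_0=2
    result = mat_mul(M, init)
    return result[0][0], result[0][1]  # top row is (U_n, V_n)
```

With $P=1$, $Q=-1$ the sequences are the Fibonacci and Lucas numbers, which gives an easy sanity check.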

Now, my question is: is this method the fastest method to compute the $n$-th number in a Lucas sequence?

Can we use a method similar to the fast doubling method to compute the $n$-th number in a Lucas sequence instead of the above method (the Lucas sequence matrix method)?

Note that the fast doubling method here is the fastest method to compute the $n$-th Fibonacci number, and it is an alternative to the following method (the matrix method for Fibonacci numbers):

$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{n} = \begin{pmatrix} F_{n+1} & F_{n} \\ F_{n} & F_{n-1} \end{pmatrix}$
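For reference, the fast doubling step relies on the identities $F_{2k} = F_k(2F_{k+1} - F_k)$ and $F_{2k+1} = F_k^2 + F_{k+1}^2$; a minimal sketch (the function name is mine):

```python
def fib_doubling(n):
    """Return (F(n), F(n+1)) using the fast doubling identities."""
    if n == 0:
        return (0, 1)
    a, b = fib_doubling(n // 2)   # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)           # F(2k)
    d = a * a + b * b             # F(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)             # shift by one for odd n
```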

Please support your answer with an example.

2 Answers


Disclaimer $\newcommand{\two}[2]{\begin{pmatrix} #1 \\ #2 \end{pmatrix}} \newcommand{\four}[4]{\begin{pmatrix} #1 & #2 \\ #3 & #4 \end{pmatrix}}$

I am not well-versed in complexity theory. From the link you quote, I don't know why the ``fast doubling method'' is actually faster than fast exponentiation. It appears that constructing a recurrence which relates $a_{2n}$ and $a_{2n - 1}$ to $a_{n}$ and $a_{n - 1}$ is what makes the computation of $a_{n}$ fast. This is what I will try to do.

One more thing: I shall assume $Q \neq 0$. If $Q = 0$, then both the sequences are simply geometric progressions, and we have the general term quite easily. (You can also easily put it in a recursive format, but that is essentially fast exponentiation for a $1 \times 1$ matrix.)

Instead of $(U_{n})_{n}$ and $(V_{n})_{n}$, let us first focus on some other sequences.

Define $(P_{n})_{n \ge 0}$ as \begin{align} P_{0} &:= 1,\, P_{1} := P; \\ P_{n + 1} &:= P P_{n} - Q P_{n - 1} \quad \text{for } n \ge 1. \end{align}

and $(Q_{n})_{n \ge 0}$ as \begin{align} Q_{0} &:= 0,\, Q_{1} := -Q; \\ Q_{n + 1} &:= P Q_{n} - Q Q_{n - 1} \quad \text{for } n \ge 1. \end{align}

Note that the recurrence relation they follow is the same as that of $(U_{n})_{n}$ and $(V_{n})_{n}$; we have simply changed the initial values. The upshot of these initial values is that we get the neat expression

$$\four{P}{-Q}{1}{0}^n = \four{P_{n}}{Q_{n}}{P_{n - 1}}{Q_{n - 1}}.$$

(This is essentially the matrix formula that you wrote, except that we have convenient initial values here, which lets us write the LHS as the power of a single matrix.)

The above formula now gives

$$\four{P_{n}}{Q_{n}}{P_{n - 1}}{Q_{n - 1}}^2 = \four{P_{2n}}{Q_{2n}}{P_{2n - 1}}{Q_{2n - 1}}.$$

Thus, we get the coupled recurrence relations \begin{align} P_{2n} &= P_{n}^2 + Q_{n}P_{n - 1}, \\ P_{2n - 1} &= P_{n}P_{n - 1} + Q_{n - 1}P_{n - 1}, \\ Q_{2n} &= P_{n}Q_{n} + Q_{n}Q_{n - 1}, \\ Q_{2n - 1} &= P_{n - 1}Q_{n} + Q_{n - 1}^2. \end{align}
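These doubling steps can be sketched in code (my own illustration; the state is the row pair $(P_n, Q_n, P_{n-1}, Q_{n-1})$, the four updated values are the entries of the squared matrix, and odd $n$ is handled with one extra step of the basic recurrence):

```python
def pq_pair(n, P, Q):
    """Return (P_n, Q_n, P_{n-1}, Q_{n-1}) for n >= 1 using the doubling step."""
    if n == 1:
        return (P, -Q, 1, 0)           # P_1, Q_1, P_0, Q_0
    pn, qn, pm, qm = pq_pair(n // 2, P, Q)  # state at k = n // 2
    # entries of the squared matrix M^(2k):
    p2 = pn * pn + qn * pm             # P_{2k}
    q2 = pn * qn + qn * qm             # Q_{2k}
    p1 = pm * pn + qm * pm             # P_{2k-1}
    q1 = pm * qn + qm * qm             # Q_{2k-1}
    if n % 2 == 0:
        return (p2, q2, p1, q1)
    # odd n: advance once with a_{m+1} = P a_m - Q a_{m-1}
    return (P * p2 - Q * p1, P * q2 - Q * q1, p2, q2)
```

With $P = 1$, $Q = -1$, the initial values give $P_n = F_{n+1}$ and $Q_n = F_n$, which provides a quick sanity check.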

This is the (apparently) fast recurrence relation which helps us solve for $P_{n}$ and $Q_{n}$ quickly. The question now is obviously how we can get $U_{n}$ and $V_{n}$ in terms of these two sequences. Fortunately, it is not that difficult. The key point to note is that all four sequences satisfy the same recurrence relation. Moreover, this is a linear recurrence relation. Thus, if we can find $\alpha, \beta \in \mathbb{R}$ such that

$$\alpha\two{P_0}{P_1} + \beta\two{Q_0}{Q_1} = \two{U_0}{U_1},$$

then we actually have

$$U_{n} = \alpha P_{n} + \beta Q_{n}$$

for all $n \ge 0$. (Similarly for $V_{n}$.)

The above is simply a constant-time operation. (Of course, it depends on how many $U_{n}$ you are computing, but it should still be asymptotically the same as fast doubling.)

Now, the question is whether we can actually find such solutions. Note that we have

$$\two{P_0}{P_1} = \two{1}{P} \quad\text{and}\quad \two{Q_0}{Q_1} = \two{0}{-Q}.$$

Since $Q \neq 0$, the two vectors above are linearly independent, and we can actually solve to get $\alpha$ and $\beta$. (If you're not familiar with this lingo, you can simply solve the system of two linear equations and see that a solution exists.)
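Putting the pieces together, here is a sketch (names mine) with the standard initial values $U_0 = 0$, $U_1 = 1$ and $V_0 = 2$, $V_1 = P$; for clarity it builds $P_n, Q_n$ with the plain recurrence, though the doubling recurrences could be substituted:

```python
from fractions import Fraction

def u_v(n, P, Q):
    """Compute (U_n, V_n) for n >= 1 by writing them in the (P_k), (Q_k) basis."""
    assert Q != 0
    pp, pc = 1, P    # P_0, P_1
    qp, qc = 0, -Q   # Q_0, Q_1
    for _ in range(n - 1):
        pp, pc = pc, P * pc - Q * pp
        qp, qc = qc, P * qc - Q * qp

    def combo(a0, a1):
        # solve alpha*(P_0, P_1) + beta*(Q_0, Q_1) = (a0, a1);
        # since P_0 = 1 and Q_0 = 0: alpha = a0, beta = (alpha*P - a1) / Q
        alpha = Fraction(a0)
        beta = Fraction(alpha * P - a1, Q)
        return alpha * pc + beta * qc

    return combo(0, 1), combo(2, P)  # (U_n, V_n)
```

For example, $U_n(3, 2) = 2^n - 1$ and $V_n(3, 2) = 2^n + 1$, so $n = 5$ should give $(31, 33)$.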

Aryaman Maithani
  • 1
    Thank you for your answer. Note: the fast doubling method is faster than the matrix method because it requires fewer multiplications and additions than the matrix method. You can compute any Fibonacci number using both methods and you will find that fast doubling requires fewer multiplications and additions. – عبد الرحمن رمزي محمود Jun 30 '21 at 23:26
  • 1
    You mean $P_n$ and $Q_n$ instead of $W_n$. – Joshua P. Swanson Jul 01 '21 at 07:02
  • 2
    The "fast doubling method" for Fibonacci numbers takes advantage of special properties, namely the $U_n$ and $V_n$ sequences are just reindexed versions of each other that are off by 1, and the matrix you need to exponentiate is symmetric. Naive 2x2 matrix multiplication requires 8 multiplications, but those optimizations allow you to do the important part in only 4. The general coupled recurrences in this post don't have that luxury and take the usual 8 multiplications per step, so they're no faster. You could use the Strassen algorithm to do 7 multiplications (but more additions) if desired. – Joshua P. Swanson Jul 01 '21 at 07:10
  • Thank you for your comments, @Joshua. I have fixed the typo. – Aryaman Maithani Jul 01 '21 at 07:30
  • I'm pretty sure there's still a typo where the second $W_n$ was. – Joshua P. Swanson Jul 01 '21 at 09:19
  • @JoshuaP.Swanson: Oh geez, yes, I made a typo when correcting the earlier typo. Hopefully now it is all fine. – Aryaman Maithani Jul 01 '21 at 09:21
  • @JoshuaP.Swanson thank you for this great comment, but I have a question, you said "but those optimizations allow you to do the important part in only 4". what do you mean by "those optimizations" ? do you mean the optimizations of the fast doubling method or the optimizations of the recurrences in this post ? – عبد الرحمن رمزي محمود Jul 01 '21 at 15:01
  • 1
    @عبدالرحمنرمزيمحمود I meant that in the Fibonacci case, since the matrix is symmetric, you only have to compute 3 of the 4 entries, which gets you to 6 rather than 8 multiplications, but the three entries are three terms in a two-term recurrence so you really only have to compute 2 of them, giving you 4 multiplications per step. I'm not seeing an optimization in the general case aside from generic matrix multiplication optimizations, though I haven't thought about it much either. – Joshua P. Swanson Jul 01 '21 at 18:56
  • @JoshuaP.Swanson thank you very much sir, I got it now – عبد الرحمن رمزي محمود Jul 01 '21 at 19:00

For the fastest retrieval, use a lookup table: first, calculate a million $u$'s and a million $v$'s and store these numbers somewhere. The $n$-th $u$ is just $u(n)$, and likewise for the $n$-th $v$. Nobody is interested in Lucas numbers with an index larger than a million (smile!).
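A sketch of that precomputation (with a small illustrative limit; a million terms would work the same way but grow very large):

```python
def build_tables(P, Q, limit=10**6):
    """Precompute U_0..U_limit and V_0..V_limit for O(1) lookup afterwards."""
    u = [0, 1]   # U_0, U_1
    v = [2, P]   # V_0, V_1
    for _ in range(limit - 1):
        u.append(P * u[-1] - Q * u[-2])
        v.append(P * v[-1] - Q * v[-2])
    return u, v
```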
