Taking as our definition of exponentiation repeated multiplication (extended to real exponents by continuity), can we show that the limit

$$\lim_{h\to 0}\dfrac{a^h-1}{h}$$

exists, without l'Hôpital, $e$, or even the natural logarithm? Sure, l'Hôpital will work, but that's circular if we're developing the calculus of transcendental functions from first principles. There is already a good answer to this question by user Neal, but he uses the exponential function with base $e$ (the question has been answered many times: see also here, here, and here).

But using the special properties of $e$ strikes me as circular too. Not literally logically circular, in the sense of invoking results we're trying to prove, since there are definitions of $e^x$ which make it trivial to verify the derivative; but perhaps pedagogically circular for a complete novice: the special properties of $e$ appear unmotivated because they cannot be justified without reference to the very derivative we are trying to compute (or else a detour through logarithms, but let's not).

Can we find nice squeeze theorem bounds like Neal has, but for the function $a^x$ instead of $e^x$, with the additional handicap that we can't just write $a^x=e^{x\log a}$? I thought to substitute a series expansion for $\log a$, but didn't come up with any bounds that were nicely polynomial in both $x$ and $a$.

I wonder whether the geometric proof of $\lim (\sin x)/x$ (see for example, robjohn's answer here) could be adapted.

Obviously without a reference to natural logarithm, we cannot compute the value of the limit. But I just want to show it exists (via squeeze theorem or monotone convergence). Once we know this limit exists, we can show it behaves like a logarithm, whence there is a unique base for which the limit is 1, which we call $e$. The rest of the development of calculus of exponentials and logs follows easily. This seems like the approach that would appear the most accessible yet motivated to a novice calculus student.
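To make that concrete, here is a quick numerical sanity check in Python (an illustration, not a proof; the helper name `L` and the step size $h=10^{-8}$ are my own choices): the quantity $L(a)$ appears to exist, and to satisfy $L(ab)=L(a)+L(b)$, as a logarithm should.

```python
# Numerical sanity check (not a proof): the difference quotient
# (a^h - 1)/h appears to settle to a value L(a) as h -> 0, and the
# resulting function behaves additively: L(a*b) ~ L(a) + L(b).

def L(a, h=1e-8):
    """Approximate lim_{h->0} (a^h - 1)/h by a small finite h."""
    return (a**h - 1) / h

for a, b in [(2, 3), (2, 5), (10, 10)]:
    lhs = L(a * b)
    rhs = L(a) + L(b)
    print(f"L({a}*{b}) = {lhs:.6f},  L({a}) + L({b}) = {rhs:.6f}")
```

The base for which `L` returns $1$ is then the one we would single out as $e$, matching the development sketched above.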

Analogous to the limit $\lim\dfrac{a^h-1}{h}$ for differentiating exponential functions are the limits $\lim\limits_{n\to\infty} (1+\frac{1}{n})^n$ and $\lim\limits_{n\to\infty} (1+\frac{1}{n})^{n+1}$ for differentiating the logarithm, if you prefer to start with that as your primitive concept. Both limits can be shown to exist using Bernoulli's inequality (see WimC's answer here for the first limit, and David Mitra's answer here for the second). I tried without success to use Bernoulli's inequality to show my sequence was monotone. The limit can also be analyzed using the AM-GM inequality, as in user94270's answer to this question, so that inequality may help here.

I would also accept an explanation of why the limit cannot be computed without transcendental techniques, or an opinion why this is not a pedagogically sound approach to introducing the calculus of exponentials and logarithms.

Edit: This question has a nice solution by Paramanand Singh to a closely related problem.

  • How are you defining $a^h$ for values of $h$ which are not rational? – Mark Bennet Oct 21 '15 at 19:12
  • The Dedekind cut way: $a^h = \sup \{a^r \mid r\in\mathbb{Q},\, r < h\}$ (for $a>1$) – ziggurism Oct 21 '15 at 19:15
  • Or you can just define it for the rationals and "extend by continuity", which amounts to the same thing – ziggurism Oct 21 '15 at 19:23
  • I would settle to just show that the limit of $n(a^{1/n}-1)$ exists, which need only be evaluated at rational exponents – ziggurism Oct 21 '15 at 19:24
  • set $a^h-1=\frac{1}{m}$ – Dr. Sonnhard Graubner Oct 21 '15 at 19:32
  • Although you're fine with $\sin x/x\to 1,$ note that there are difficulties here from a real analysis view as well: What is $\sin x $ anyway? The problem is the definition of the arc-length $x$. – zhw. Oct 21 '15 at 19:32
  • @zhw This can be done with area as well as arc length. – Mark Bennet Oct 21 '15 at 20:24
  • @MarkBennet I think we have the same problem with area: Both area and arc-length are defined by integrals, so it would be difficult to derive $\sin x/x \to 1$ at a beginning level. – zhw. Oct 21 '15 at 22:55
  • @zhw. Both are integrals, indeed. GH Hardy (Pure Mathematics 10th ed p316 para 163) says "The whole difficulty lies in the question, what is the $x$ which occurs in $\cos x$ and $\sin x$?" and because he deals, as most courses do, with area before arc length he says that arc length can be done, but with further machinery and definitions, and it is easier therefore to define $x$ in terms of area. Also at beginning level I learned trigonometry with angles in degrees. It is with the series and limits for these functions that radians begin to make more sense. – Mark Bennet Oct 22 '15 at 06:26
  • Have you tried to prove the sequence $a_n=n(a^{\frac{1}{n}}-1)$ is decreasing (i.e. prove $a_{n+1} < a_n$)? –  Oct 22 '15 at 07:56
  • @frank000: yes, that exact inequality is the requirement to invoke monotone convergence. I struggled with it for several hours without success. But Paramanand's and zhw's answers show it can be done. – ziggurism Oct 22 '15 at 11:49
  • @ziggurism Unless you tell how *you* define $a^x$, only partial answers can be given. My definition is $a^x=\exp(x\log a)$, where $\log a=\int_{0}^a\frac{1}{x}\,dx$ and $\exp$ is the inverse function of $\log$. Note that no mention of $e$ is needed. With this definition, the limit follows easily from the chain rule. – egreg Oct 31 '15 at 14:20
  • @egreg: the elementary definition of exponentiation is repeated multiplication, just as multiplication is repeated addition. This definition works for natural multiplier or exponent, can easily be extended to rational by algebraic considerations, and extended to real by continuity (the details of which depend on your definition of real numbers). The idea that exponentiation cannot be defined for irrational exponents is one I've seen in calc textbooks, but it makes no more sense for repeated multiplication than it does for repeated addition. – ziggurism Oct 31 '15 at 17:26
  • @egreg: you are of course free to adopt a different definition of exponentiation. the one you offer in your comment requires the student to first understand integral calculus and inverse functions. And unless you already know the calculus of these transcendental functions, it is entirely unclear what the integral of the reciprocal function has to do with exponentiation. I was seeking to justify a more elementary approach. – ziggurism Oct 31 '15 at 17:31
  • @ziggurism I'm not stating that the way I use is the best under all respects: each one has its pros and its cons. However one has to give a definition, which is what I asked for; computing that limit strongly depends on it. – egreg Oct 31 '15 at 17:53
  • @egreg: you're right. Whatever complaints I might have about the suitability of the definition for a beginner, it is not in dispute that it is a definition people use, and therefore the question is ambiguous without specifying the definition. Let me edit. – ziggurism Oct 31 '15 at 18:27
  • http://math.stackexchange.com/questions/366563/how-to-integrate-1-x/1758952#1758952 –  Jul 10 '16 at 03:53

5 Answers


Let $a>1.$ I assume $a^x$ is continuous, and that the basic exponent law $a^{x+y}=a^xa^y$ holds.

Claim: $a^x$ is convex on $[0,\infty).$ Proof: Because $a^x$ is continuous, it suffices to show $a^x$ is midpoint convex. Suppose $x,y\in [0,\infty).$ Using $(uv)^{1/2} \le (u+v)/2$ for nonnegative $u,v,$ we get $a^{(x+y)/2} = (a^{x} a^{y})^{1/2} \le (a^x+a^y)/2.$

Now if $f$ is convex on $[0,\infty),$ then $(f(x)-f(0))/x$ is an increasing function of $x$ for $x\in(0,\infty).$ This is a simple and easily proved property of convex functions.

Claim: $\lim_{x\to 0^+}(a^x-1)/x$ exists. Proof: All of these quotients are bounded below by $0.$ As $x$ decreases to $0$, $(a^x-1)/x$ decreases by the above. Because of the lower bound of $0,$ the limit exists.

It follows that the two-sided limit $\lim_{x\to 0}(a^x-1)/x$ exists: for $x>0$ we have $a^{-x} = 1/a^{x},$ so $\dfrac{a^{-x}-1}{-x} = \dfrac{a^{x}-1}{x}\cdot\dfrac{1}{a^{x}},$ which tends to the same limit as $x\to 0^+.$ To handle $0<a<1,$ look at $[(1/a)^x-1]/x$ to see $\lim_{x\to 0}(a^x-1)/x$ exists.
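As a quick numerical sanity check of the two claims (not a substitute for the argument above; the base $a=3$ and the sample points are arbitrary choices):

```python
# Numerical check (not a proof) of the two claims above, for a = 3:
# (1) midpoint convexity: a^((x+y)/2) <= (a^x + a^y)/2, and
# (2) the difference quotient (a^x - 1)/x decreases as x decreases to 0.

a = 3.0

def q(x):
    return (a**x - 1) / x  # the difference quotient at x > 0

# (1) spot-check midpoint convexity on a few pairs
for x, y in [(0.1, 2.0), (0.5, 0.7), (1.0, 4.0)]:
    assert a**((x + y) / 2) <= (a**x + a**y) / 2

# (2) the quotients decrease monotonically along x = 1, 1/2, 1/4, ...
xs = [2.0**(-k) for k in range(12)]
quotients = [q(x) for x in xs]
assert all(q1 >= q2 for q1, q2 in zip(quotients, quotients[1:]))
print(quotients[-1])  # the quotients settle toward a limit from above
```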

  • I didn't find it when I first searched the site, probably because I was excluding results that used base-$e$ exponential functions. But now I see that your answer can be found on the site. See Hagen von Eitzen's answer here: http://math.stackexchange.com/a/311249/16490. Invoking convexity of the exponential function is so simple and powerful and elegant and obvious. And doesn't care at all what base you use. Makes me look so stupid for struggling with more explicit bounds for so long. Anyway, your answer provides a lot more explicit details. I appreciate that, thank you. – ziggurism Oct 22 '15 at 21:08
  • That answer isn't the one you want though. I obtained the convexity pretty much from first principles. He's taking the convexity of $e^x$ for granted, probably as a corollary of $(e^x)''= e^x>0.$ But that uses the very result we're trying to prove here. – zhw. Oct 22 '15 at 23:16
  • Yes, that's a good point. Convexity might be obvious, but the AM-GM inequality trick to justify it is not, at least not to me. – ziggurism Oct 22 '15 at 23:19

If $a = 1$ the limit is obviously $0$. Let $a > 1$ and $0 < b < 1$. Using simple algebra it is easy to show that $$\frac{a^{r} - 1}{r} > \frac{a^{s} - 1}{s},\,\,\frac{1 - b^{r}}{r} < \frac{1 - b^{s}}{s}\tag{1}$$ where $r, s$ are positive rationals with $r > s$ (see equation $(11)$ of this post). At the OP's request I am providing the proof of the above inequality here as well.

First let us assume that $r, s$ are positive integers. Clearly we know that $a^{i} < a^{r}$ for all $i = 0, 1, 2,\dots, r - 1$ and hence on adding these inequalities we get $$1 + a + a^{2} + \dots + a^{r - 1} < ra^{r}$$ and multiplying the above inequality by $(a - 1) > 0$ we get $$a^{r} - 1 < ra^{r}(a - 1) = ra^{r + 1} - ra^{r}$$ or $$(r + 1)a^{r} - r - 1 < ra^{r + 1} - r$$ or $$(r + 1)(a^{r} - 1) < r(a^{r + 1} - 1)$$ so that we have finally $$\frac{a^{r} - 1}{r} < \frac{a^{r + 1} - 1}{r + 1}$$ It thus follows that the sequence $t_{n} = (a^{n} - 1)/n$ is strictly increasing and hence $t_{r} > t_{s}$ for positive integers $r, s$ with $r > s$. This proves the first inequality of $(1)$ with the restriction that $r, s$ are positive integers. The second inequality, dealing with $0 < b < 1$, can be proved similarly starting with $b^{i} > b^{r}$ for all $i = 0, 1, 2, \dots, r - 1$.

Next we extend the inequality $(1)$ to the case when $r, s$ are positive rational numbers with $r = p/q, s = m/n$ where $p, q, m, n$ are positive integers and $r > s$ so that $np > mq$. Let $c = a^{1/nq}$ so that $c > 1$ and therefore via inequality $(1)$ (with the restriction of positive integral exponents) we get $$\frac{c^{np} - 1}{np} > \frac{c^{mq} - 1}{mq}$$ Multiplying the above inequality by $nq > 0$ we get $$\frac{a^{r} - 1}{r} > \frac{a^{s} - 1}{s}$$ so that the inequality $(1)$ is proved for the case when $r, s$ are positive rationals.

If we extend the definition of $a^{x}$ to real exponents $x$ by any method (like the one using Dedekind cuts suggested by OP), we see that the above inequalities hold true even if $r, s$ are positive reals with $r > s$. However any such procedure is effectively based on limiting processes and hence we can only obtain the weaker versions of the above inequalities in this manner. We are however lucky that we only need the weaker version here. Thus we have $$\frac{a^{r} - 1}{r} \geq \frac{a^{s} - 1}{s},\,\,\frac{1 - b^{r}}{r} \leq \frac{1 - b^{s}}{s}\tag{2}$$ where $r, s$ are real numbers with $r > s > 0$ and $a, b$ are real numbers with $a > 1 > b > 0$. It is now obvious that for $a > 1$ the function $f(x) = (a^{x} - 1)/x$ is increasing on $(0, \infty)$. Also $f(x) > 0$ for all $x$. It thus follows that $f(x) \to L$ as $x \to 0^{+}$ and moreover $L \geq 0$.

If $x < 0$ so that $x = -y$ and $y > 0$ then we can see that $$f(x) = f(-y) = \frac{a^{y} - 1}{ya^{y}} = \frac{a^{y} - 1}{y}\frac{1}{a^{y}} \to L$$ as $y \to 0^{+}$. It thus follows that $f(x) \to L$ as $x \to 0$ and $L \geq 0$.

If $0 < a < 1$ then we can use the inequalities related to $b$ above and show that $f(x) \to L$ as $x \to 0$ with $L \leq 0$. With slightly more effort we can show that $L = 0$ if and only if $a = 1$. The existence of this limit for all $a > 0$ defines a new function of $a$ which is normally called the logarithm of $a$. From here we can develop a theory of exponential and logarithmic functions. See more details in this post.
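A numerical spot-check of inequality $(1)$ for integer exponents (an illustration only, not part of the proof; the values $a = 1.5$ and $b = 0.6$ are arbitrary choices):

```python
# Numerical spot-check (not a proof) of inequality (1) for integer
# exponents: t_n = (a^n - 1)/n is strictly increasing for a > 1,
# and s_n = (1 - b^n)/n is strictly decreasing for 0 < b < 1.

a, b = 1.5, 0.6

t = [(a**n - 1) / n for n in range(1, 30)]
s = [(1 - b**n) / n for n in range(1, 30)]

assert all(x < y for x, y in zip(t, t[1:]))  # t_n strictly increasing
assert all(x > y for x, y in zip(s, s[1:]))  # s_n strictly decreasing
print(t[:3], s[:3])
```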

Paramanand Singh
  • Your assertion that we can only obtain the weaker versions is not really correct - you can get strong inequality with only a few extra steps. If $f(r)=\frac{a^r-1}r$ is known to be strictly increasing on the rationals and is extended by continuity to the reals, it is also strictly increasing on the reals, because for $r < s$ real one can choose rationals $p, q$ with $r < p < q < s$, and then $f(r) \le f(p) < f(q) \le f(s)$. – Mario Carneiro Oct 22 '15 at 13:32
  • @MarioCarneiro: My point was that a direct limit would weaken the inequality, but that weak version is sufficient for our needs here. With very little amount of extra effort we get the stronger version (as you mention in your comment). – Paramanand Singh Oct 22 '15 at 16:24
  • Thank you, Paramanand. I enjoyed your series of blog posts a lot. The key step for me is showing that $\frac{a^{r+1}-1}{r+1} > \frac{a^r-1}{r},$ which you show by starting with $a>1$, so $a^i < a^r$ for $i = 0, 1, \dots, r-1.$ – ziggurism Oct 31 '15 at 14:02
  • I wonder whether you might consider adding a sentence or two from your blog about the derivation of $\frac{a^{r+1}-1}{r+1} > \frac{a^r-1}{r},$ and the extension to real exponents, just so your answer is self-contained – ziggurism Oct 31 '15 at 14:11
  • @ziggurism: Added the proof of the inequality in my updated answer. – Paramanand Singh Nov 01 '15 at 07:39
  • Very thorough. +1 – Mark Viola Dec 11 '15 at 06:13

I'll sketch one way. Assume that $a > 1$. (The case $a < 1$ can be done similarly.)

  1. Show that $f(x) = a^x$ is continuous (in fact continuous from the right is enough) and strictly increasing.
  2. Every continuous monotone function is differentiable almost everywhere. (This is a well-known, but not entirely trivial theorem.)
  3. Show that $f(x+y) = f(x)f(y)$. Thus, if $f$ is differentiable at some point $x=x_0$, then $f$ is also differentiable at $x=0$, which shows that your limit exists.
  • The diff a.e. theorem seems to be so nontrivial that it would not be understood by any novice calculus student (as OP requests). – coffeemath Oct 21 '15 at 21:19

This addresses showing that $\lim c_n=\lim n(a^{1/n}-1)$ exists. Let $u=a^{1/n}$ and apply the identity $u^n-1=(u-1)(1+u+\cdots + u^{n-1}).$ Then after multiplying numerator and denominator of $c_n$ by the second factor of that identity, one gets to $c_n=(a-1)/I_n,$ where $$I_n=\frac{1}{n} [ a^{0/n}+a^{1/n}+\cdots +a^{(n-1)/n}],$$ which is a left-endpoint Riemann sum for the integral $\int_0^1 a^x \, dx$ and so converges to that integral, showing, as desired, that the limit exists.

Typically at least, a calc 2 student would be familiar with left hand Riemann sums and their convergence to the associated integral.
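For instance, the identity $c_n = (a-1)/I_n$ and the resulting convergence can be checked numerically (an illustration with the arbitrary choice $a = 2$):

```python
# Numerical illustration (not a proof): c_n = n(a^{1/n} - 1) equals
# (a - 1)/I_n, where I_n is the left-endpoint Riemann sum for the
# integral of a^x over [0, 1], so c_n converges as n -> infinity.

a = 2.0

def c(n):
    return n * (a**(1.0 / n) - 1)

def I(n):
    return sum(a**(k / n) for k in range(n)) / n  # left Riemann sum

n = 1000
# the algebraic identity c_n = (a - 1)/I_n holds up to rounding
assert abs(c(n) - (a - 1) / I(n)) < 1e-9
print(c(10), c(100), c(1000))  # the values settle toward a limit
```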


The claim by OP,

But using the special properties of $e$ strikes me as circular too; for a complete novice, the special properties of $e$ cannot be justified without reference to the very derivative we are trying to compute (or else a detour through logarithms, but let's not).

is not true. Those properties can be constructed and proved without being circular (in a hard way!). I'll refer you to Principles of Mathematical Analysis by Walter Rudin (i.e. Baby Rudin) for the details.

On page 22, exercise 6, $b^x$ with $b>1$ is defined in the Dedekind-cut way the OP gave in the comments.

On pages 63-64 the constant $e$ is defined to be $\displaystyle \sum_{n=0}^{\infty} \frac{1}{n!}$, and $\displaystyle e=\lim_{n\rightarrow \infty} (1+\frac{1}{n})^{n}$ is proved.

On pages 178-179, a "special function" $E(z)$ is defined to be $\displaystyle \sum_{n=0}^{\infty} \frac{z^n}{n!}$ and a proof of $E(x)=e^x$ for any real number $x$ is given.

Finally, once we have power series, the limit is easy to compute.

Issues to consider

  1. Why does $\displaystyle \sum_{n=0}^{\infty} \frac{1}{n!}$ converge? You can prove that the sequence of partial sums is increasing and bounded above by $3-\frac{1}{n}$ (by induction), hence bounded above by $3$, and therefore converges.
  2. You will need the ratio test or root test to show that $E(x)$ converges absolutely for every $x$ and is therefore well defined for every $x$.
  3. To prove $E(x)E(y)=E(x+y)$ (an important step in proving $E(x)=e^x$) you'll need properties of absolutely convergent series.
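These three items lend themselves to quick numerical spot-checks (illustrations only, not Rudin's proofs; truncating the series at 40 terms is my own choice):

```python
import math

# Numerical spot-checks (not proofs) of the three items above:
# partial sums of sum 1/n! stay below 3, the functional equation
# E(x)E(y) = E(x+y) holds for the truncated series up to truncation
# error, and (1 + 1/n)^n approaches the same value as the series.

def E(x, terms=40):
    """Truncated power series sum_{n < terms} x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

# 1. partial sums of sum 1/n! are increasing and bounded above by 3
partial, total = [], 0.0
for n in range(20):
    total += 1 / math.factorial(n)
    partial.append(total)
assert all(p < 3 for p in partial)

# 3. the functional equation, up to truncation and rounding error
assert abs(E(1.0) * E(2.0) - E(3.0)) < 1e-9

# and (1 + 1/n)^n for large n sits close to E(1)
print((1 + 1/10**6)**10**6, E(1.0))
```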
  • I didn't mean to imply that it was formally circular. I recognize that Rudin's and other advanced textbooks' approaches which rely on power series or differential eq definitions of exp are not only not circular, they are more elegant, more powerful, and generalize more easily, than the approach I am seeking. What I meant was, those definitions are entirely unmotivated. To motivate the Taylor series or diff eq def of $e^x$, you have to already know the derivative of $e^x$. Without that foreknowledge, it seems like a magic coincidence that an unknown Taylor series behaves like an exponential. – ziggurism Oct 22 '15 at 13:18
  • So I didn't mean it's literally logically circular. Just pedagogically unsound, in a way that I will admit is subjective and reasonable people may disagree. I will edit the question to make that clearer. I want to start with the basic concepts familiar to the calculus novice (i.e. just the exponential operation) and derive everything from that, without the burden of developing power series or diff eqs. This is the pedagogical approach taken by many first calculus texts, but they lack rigor, relying on heuristics to evaluate the limits. More rigorous texts adopt more theoretical definitions... – ziggurism Oct 22 '15 at 13:20
  • like you suggest. I am seeking a middle road. – ziggurism Oct 22 '15 at 13:21
  • Oh my I always felt some uneasiness/circular logic around powers and complex numbers but of course $\displaystyle \sum_{n=0}^{\infty} \frac{z^n}{n!}$ will work neatly for complex numbers as well and it can be legit evaluated for any $z,n$ without any of those problematic things. Thanks! I should've thought of that before. – chx Nov 23 '15 at 02:11