I know how to prove the equality when $m$ is a rational number and $n$ is an integer, but do not know how to go about proving this for real numbers. On a semi-related note, I was trying to prove this when both $m$ and $n$ are rational, and found out that I have to prove that $\left(\frac{1}{z}\right)^{\frac{1}{y}}=\frac{1}{z^{\frac{1}{y}}}$. Does this need to be proven or can I accept it as a definition?

Please, please, stick to the case $a>0$. – Angina Seng Jul 23 '20 at 18:13

How about using continuity? – IMOPUTFIE Jul 23 '20 at 18:13

This is a fundamental property of powers. Example: $(2^3)^2=(2^3)\cdot(2^3)=2^{3+3}=2^{2\cdot 3}$. Regarding rational numbers, remember that $a^{1/2}= \sqrt{a}$. Since this looks like a HW question, we may not give you full answers on the site. – Basco Jul 23 '20 at 18:18

Which definition of $a^b$ do you use for $a,b$ real? – Hagen von Eitzen Jul 23 '20 at 18:18

$$(a^m)^n = \exp (n \log a^m) = \exp (n \log (\exp (m \log a))) = \exp (nm \log a) = a^{nm}.$$ And by commutativity of the monoid $(ℝ,·,1)$, you have $a^{nm} = a^{mn}$. – k.stm Jul 23 '20 at 18:24

Does this answer your question? [$a^ma^n=a^{m+n}$ and $(a^m)^n=a^{mn}$ proof](https://math.stackexchange.com/questions/2495174/amanamnandamnamnproof) – VIVID Jul 23 '20 at 18:41

@Basco It isn't a homework question. I just want to know the proof to have peace of mind and surprisingly, I can't find the proof for this anywhere on the internet. – Orlin Aurum Jul 24 '20 at 03:06

@OrlinAurum You haven’t searched properly. It also depends on your definition of $a^b$, as Hagen indirectly pointed out. You can find one proof three comments above this one. – k.stm Jul 24 '20 at 05:04

This is also a decategorification of some more general principle. Not that it's what you're looking for, but it's related, see [Riehl] – Anthony SaintCriq Jul 24 '20 at 09:51

@k.stm I have managed to find an alternate proof here: https://math.stackexchange.com/questions/3727801/provingexponentlawforrealnumbersusingthesupremumdefinitiononly I don't fully understand it though. Also, I have found proofs like yours before, but it is my fault for dismissing them. I didn't know that $a^b=\exp(b\log(a))$ is a definition. I thought it used the fact that $(a^m)^n=a^{mn}$. I am sorry. – Orlin Aurum Jul 24 '20 at 14:51
3 Answers
The very first thing you need to do is ask yourself what the definitions are. Without proper definitions, you'll never have a complete proof. So, if $a>0$ and $m\in \Bbb{R}$, how are you even supposed to define $a^m$? This is not at all a trivial task.
For example, here's one possible approach to things:
1. First, define $\exp: \Bbb{R} \to \Bbb{R}$ by $\exp(x):= \sum_{n=0}^{\infty}\frac{x^n}{n!}$. You of course have to check that this series converges for every $x\in \Bbb{R}$.
2. Check basic properties of $\exp$, such as $\exp(0) = 1$ and, for all $x,y \in \Bbb{R}$, $\exp(x+y) = \exp(x)\cdot \exp(y)$. Also, verify that $\exp:\Bbb{R} \to (0,\infty)$ is an invertible function.
3. Since $\exp:\Bbb{R} \to (0,\infty)$ is invertible, we can consider its inverse function, which we denote $\log:(0,\infty) \to \Bbb{R}$. Then, verify all the basic properties of $\log$, such as: for all $a,b>0$, $\log(ab) = \log(a) + \log(b)$.
4. Finally, given $a>0$ and $m\in \Bbb{R}$, we define $a^m := \exp(m \log(a))$.
From this point, it is a simple matter to use the various properties of the exponential and logarithmic functions: for any $a>0$ and $m,n \in \Bbb{R}$ \begin{align} a^{m+n} &:= \exp((m+n)\log (a)) \\ &= \exp[m\log(a) + n \log (a)]\\ &= \exp[m \log(a)] \cdot \exp[n\log(a)] \\ &:= a^m \cdot a^n \tag{$*$} \end{align} Similarly, \begin{align} (a^m)^n &:= \exp[n \log(a^m)] \\ &:= \exp[n \log(\exp(m \log(a)))] \\ &= \exp[nm \log(a)] \tag{since $\log \circ \exp = \text{id}_{\Bbb{R}}$} \\ &:= a^{nm} \\ &= a^{mn} \end{align} where in the last line, we make use of commutativity of multiplication of real numbers.
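(Not part of the proof, but as an illustrative spot-check: with the $\exp/\log$ definition above, both laws can be verified numerically. The Python sketch below uses arbitrarily chosen test values.)

```python
import math

def power(a, m):
    """a^m for a > 0 and real m, per the definition a^m := exp(m*log(a))."""
    return math.exp(m * math.log(a))

a, m, n = 2.7, 1.5, -0.8
# a^(m+n) = a^m * a^n
assert math.isclose(power(a, m + n), power(a, m) * power(a, n))
# (a^m)^n = a^(mn) = a^(nm)
assert math.isclose(power(power(a, m), n), power(a, m * n))
```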
Note that steps 1,2,3 are not at all trivial, and indeed there are entire chapters of calculus/analysis textbooks devoted to proving these facts carefully. So, while I only listed out various statements, if you want the proofs for the statements I made, you should take a look at any analysis textbook, for example, Rudin's Principles of Mathematical Analysis, or Spivak's Calculus (I recall Spivak motivating these things pretty nicely).
As for your other question, yes it is something which needs to be proven. This result can be easily deduced from two other facts.
1. For any $x\in \Bbb{R}$, $1^x = 1$. (Proof: $1^x := \exp[x \log(1)] = \exp[0] = 1$.)
2. For any $a,b > 0$ and $x\in \Bbb{R}$, $(ab)^x = a^x b^x$. The proof is a few lines, once you use the properties of $\exp$ and $\log$.
Now, if $z>0$, then for any $x\in \Bbb{R}$, \begin{align} z^x \cdot \left(\frac{1}{z}\right)^x &= \left(z\cdot \frac{1}{z}\right)^x = 1^x = 1 \end{align} Hence, $\left(\frac{1}{z}\right)^x = \frac{1}{z^x}$. In particular, you can take $x=1/y$ to prove what you wanted.
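(Again purely as a numerical illustration, with arbitrary test values, not a proof: the reciprocal rule follows directly from the $\exp/\log$ definition.)

```python
import math

z, x = 3.7, 2.25
lhs = math.exp(x * math.log(1.0 / z))   # (1/z)^x via the exp/log definition
rhs = 1.0 / math.exp(x * math.log(z))   # 1/(z^x)
assert math.isclose(lhs, rhs)
```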
Edit: Motivating the definition $a^x := \exp(x\log(a))$, for $a>0, x \in \Bbb{R}$.
The long story short: this definition is unique in a certain sense, and is almost forced upon us once we impose a few regularity conditions.
Now, let me once again stress that you should be careful to distinguish between definitions, theorems and motivation. Different authors have different starting points, so Author 1 may have one set of definitions and motivations, and hence certain theorems, while Author 2 may have a completely different set of definitions, and hence different theorems and motivation.
So, let's start with some motivating remarks. Fix a number $a>0$. Then, we usually start by defining $a^1 = a$. Next, given a positive integer $m\in \Bbb{N}$, we define $a^m = \underbrace{a\cdots a}_{\text{$m$ times}}$ (if you want to be super formal, then this is actually a recursive definition: $a^1 := a$, and then for any integer $m\geq 2$, we recursively define $a^{m}:= a\cdot a^{m-1}$).
Now, at this point what we observe from the definition is that for any positive integers $m,n\in \Bbb{N}$, we have $a^{m+n} = a^m \cdot a^n$. The proof of this fact follows very easily by induction.
Next, we typically define $a^0 = 1$. Why do we do this? One answer is that it is a definition, so we can do whatever we want. Another answer is that we are almost forced to do so. Why? Notice that for any $m\in \Bbb{N}$, we have $a^m = a^{m+0}$, so if we want this to be equal to $a^m \cdot a^0$, then we had better define $a^0 = 1$.
Next, if $m>0$ is an integer, then we usually define $a^{-m} := \dfrac{1}{a^{m}}$. Once again, this is just a definition, so we can do whatever we want. The motivation for making this definition is that we have $1 =: a^0 = a^{-m+m}$ for any positive integer $m$. So, if we want the RHS to equal $a^{-m}\cdot a^m$, then we had better define $a^{-m}:= \frac{1}{a^m}$.
Similarly, if $m>0$, then we define $a^{1/m} = \sqrt[m]{a}$ (assuming you've somehow proven existence of $m^{th}$ roots of positive real numbers). Again, this is just a definition. But why do we do this? Because we have $a =: a^1 = a^{\frac{1}{m} + \dots +\frac{1}{m}}$, so if we want the RHS to equal $(a^{\frac{1}{m}})^m$, then of course, we had better define $a^{1/m}:= \sqrt[m]{a}$.
Finally, we define $a^{\frac{m}{n}}$, for $m,n \in \Bbb{Z}$ and $n >0$ as $a^{m/n} = (a^{1/n})^m$. Once again, this is just a definition, so we can do whatever we want, but the reason we do this is to ensure the equality $a^{m/n} = a^{1/n + \dots + 1/n} = (a^{1/n})^m$ is true.
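(As a quick numerical illustration of this step, with arbitrarily chosen values: the rational-exponent definition $a^{m/n} := (a^{1/n})^m$ agrees with the $\exp/\log$ definition given earlier.)

```python
import math

a, m, n = 5.0, 7, 3
nth_root = a ** (1.0 / n)   # a^(1/n), the n-th root of a
# (a^(1/n))^m should match exp((m/n) * log(a))
assert math.isclose(nth_root ** m, math.exp((m / n) * math.log(a)))
```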
Now, let's reflect on what we have done. We started with a number $a>0$, we defined $a^1 := a$, and we managed to define $a^x$ for every rational number $x$, simply by the requirement that the equation $a^{x+y} = a^x a^y$ hold true for all rational $x,y$. So, if you actually read through everything once again, what we have actually done is shown the following theorem:
Given $a>0$, there exists a unique function $F_a:\Bbb{Q} \to \Bbb{R}$ such that $F_a(1) = a$, and such that for all $x,y\in \Bbb{Q}$, $F_a(x+y) = F_a(x)\cdot F_a(y)$.
(Note that rather than writing $a^x$, I'm just writing $F_a(x)$, just to mimic the function notation more)
Our motivation has actually been to preserve the functional equation $F_a(x+y) = F_a(x)\cdot F_a(y)$ as much as possible. Now, we can ask whether we can extend the domain from $\Bbb{Q}$ to $\Bbb{R}$, while preserving the functional equation, and if such an extension is unique. If the answer is yes, then we just define $a^x := F_a(x)$ for all real numbers $x$, and then we are happy. It turns out that if we impose a continuity requirement, then the answer is yes; i.e the following theorem is true:
Given $a>0$, there exists a unique continuous function $F_a:\Bbb{R} \to \Bbb{R}$ such that $F_a(1) = a$, and such that for all $x,y\in \Bbb{R}$, $F_a(x+y) = F_a(x)\cdot F_a(y)$.
Uniqueness is pretty easy (because $\Bbb{Q}$ is dense in $\Bbb{R}$ and $F_a$ is continuous). The tough part is showing the existence of such an extension.
Of course, if you already know about the $\exp$ function and its basic properties like 1,2,3, then you'll see that the function $F_a:\Bbb{R} \to \Bbb{R}$ defined by $F_a(x):= \exp(x \ln(a))$ has all the nice properties (i.e is continuous, it satisfies that functional equation, and $F_a(1) = a$). Because of this existence and uniqueness result, this is the only reasonable way to define $a^x \equiv F_a(x) := \exp(x \log(a))$; anything other than this would be pretty absurd.
The purpose of the rest of my answer is to try to motivate how anyone could even come up with the function $F_a(x) = \exp(x\ln(a))$; sure the existence and uniqueness result is very nice and powerful, but how could you try to come up with it by yourself? This certainly doesn't come from thin air (though at some points we have to take certain leaps of faith, and then check that everything works out nicely).
To do this, let's start with a slightly more restrictive requirement. Let's try to find a function $f:\Bbb{R} \to \Bbb{R}$ with the following properties:
1. for all $x,y\in\Bbb{R}$, $f(x+y) = f(x)\cdot f(y)$;
2. $f$ is nonzero, i.e. there exists $x_0\in \Bbb{R}$ such that $f(x_0) \neq 0$;
3. $f$ is differentiable at $0$.
The first two conditions seem reasonable; the third one may seem a little strange, but let's just impose it for now (it's mainly there to motivate things, hopefully simplify the argument, and convince you that $x\mapsto \exp(x\ln(a))$ didn't come from thin air).
First, we shall deduce some elementary consequences of properties 1,2,3:
In (2), we assumed $f$ is nonzero at a single point. We'll now show that $f$ is nowhere vanishing, and that $f(0)=1$. Proof: for any $x\in\Bbb{R}$, we have $f(x) \cdot f(x_0-x) = f(x_0) \neq 0$. Hence, $f(x) \neq 0$. In particular, $f(0) = f(0+0) = f(0)^2$. Since $f(0)\neq 0$, we can divide by it on both sides to deduce $f(0) = 1$.
We also have for every $x \in \Bbb{R}$, $f(x)>0$. Proof: We have \begin{align} f(x) = f(x/2 + x/2) = f(x/2)\cdot f(x/2) = f(x/2)^2 > 0, \end{align} where the last step is because $f(x/2) \neq 0$ (this is why in real analysis, we always impose the condition $a = f(1) > 0$).
$f$ is actually differentiable on $\Bbb{R}$ (not just at the origin). This is because for $t\neq 0$, we have \begin{align} \dfrac{f(x+t) - f(x)}{t} &= \dfrac{f(x)\cdot f(t) - f(x) \cdot f(0)}{t} = f(x) \cdot \dfrac{f(0+t) - f(0)}{t} \end{align} now, the limit as $t\to 0$ exists by hypothesis since $f'(0)$ exists. This shows that $f'(x)$ exists and $f'(x) = f'(0) \cdot f(x)$. As a result of this, it immediately follows that $f$ is infinitely differentiable.
Now, we consider two cases. Case ($1$) is that $f'(0) = 0$. Then, we have $f'(x) = 0$ for all $x$, and hence $f$ is a constant function, $f(x) = f(0) = 1$ for all $x$. This is clearly not very interesting. We want a nonconstant function with all these properties. So, let's assume in addition that $f'(0) \neq 0$. With this, we have that $f'(x) = f'(0)\cdot f(x)$; this is a product of a nonzero number and a strictly positive number. So, this means the derivative $f'$ always has the same sign. So, $f$ is either strictly increasing or strictly decreasing. Next, notice that $f''(x) = [f'(0)]^2 f(x)$, is always strictly positive; this coupled with $f(x+y) = f(x)f(y)$ implies that $f$ is injective and has image equal to $(0,\infty)$. i.e $f:\Bbb{R} \to (0,\infty)$ is bijective.
Theorem 1.
Let $f:\Bbb{R} \to \Bbb{R}$ be a function such that:
1. for all $x,y\in \Bbb{R}$, $f(x+y) = f(x)f(y)$;
2. $f$ is nonzero;
3. $f$ is differentiable at the origin, with $f'(0) \neq 0$.
Suppose $g:\Bbb{R} \to \Bbb{R}$ is a function which also satisfies all these properties. Then, there exists a number $c\in \Bbb{R}$ such that for all $x\in \Bbb{R}$, $g(x) = f(cx)$. In other words, such functions are uniquely determined by a constant $c$.
Conversely, for any nonzero $c\in \Bbb{R}$, the function $x\mapsto f(cx)$ satisfies the three properties above.
Proof
To prove this, we use a standard trick: notice that \begin{align} \dfrac{d}{dx}\dfrac{g(x)}{f(cx)} &= \dfrac{f(cx) g'(x) - g(x)\, c f'(cx)}{[f(cx)]^2} \\ &= \dfrac{f(cx) g'(0) g(x) - g(x)\, c f'(0) f(cx)}{[f(cx)]^2} \\ &= \dfrac{g'(0) - c f'(0)}{f(cx)} \cdot g(x) \end{align} Therefore, if we choose $c = \dfrac{g'(0)}{f'(0)}$, then the derivative of the function on the LHS is always zero. Therefore, it must be a constant. To evaluate the constant, plug in $x=0$, and you'll see the constant is $1$. Thus, $g(x) = f(cx)$, where $c= \frac{g'(0)}{f'(0)}$. This completes the proof of the forward direction. The converse is almost obvious.
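(To see Theorem 1 in action numerically: take $f = \exp$, so $f'(0)=1$, and $g(x) = a^x$, so $g'(0) = \ln a$. Then $c = g'(0)/f'(0) = \ln a$ and indeed $g(x) = f(cx)$. A Python spot-check with an arbitrary $a$:)

```python
import math

a = 3.0
f = math.exp                  # f'(0) = 1
g = lambda x: a ** x          # g'(0) = log(a)
c = math.log(a)               # c = g'(0)/f'(0)
for x in (-1.0, 0.3, 2.0):
    assert math.isclose(g(x), f(c * x))  # g(x) = f(cx)
```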
Remark
Notice also that from $g(x) = f(cx)$, by plugging in $x=1$, we get $g(1) = f(c)$, and hence $c = (f^{-1} \circ g)(1) = \frac{g'(0)}{f'(0)}$ (recall that we already stated that such functions are invertible from $\Bbb{R} \to (0,\infty)$). It is this relation $c = (f^{-1} \circ g)(1)$ which is the key to understanding where $x\mapsto \exp(x\ln(a))$ comes from. We're almost there.
Now, once again, just recall that we have been assuming the existence of a function $f$ with all these properties. We haven't proven the existence yet. Now, how do we go about trying to find such a function $f$? Well, recall that we have the fundamental differential equation $f'(x) = f'(0) f(x)$. From this, it follows that for every positive integer $n$, $f^{(n)}(0) = [f'(0)]^n$. We may WLOG suppose that $f'(0) = 1$ (otherwise consider the function $x\mapsto f\left(\frac{x}{f'(0)}\right)$); then we get $f^{(n)}(0) = 1$. Finally, if we make the leap of faith that our function $f$ (which we initially assumed to be differentiable only at $0$ with $f'(0) = 1$, and then proved to be $C^{\infty}$ on $\Bbb{R}$) is actually analytic on $\Bbb{R}$, then we know that the function $f$ must equal its Taylor series: \begin{align} f(x) &= \sum_{n=0}^{\infty} \dfrac{f^{(n)}(0)}{n!} x^n = \sum_{n=0}^{\infty}\dfrac{x^n}{n!} \end{align} This is one of the many ways one might guess the form of the exponential function, $\exp$. So, we now take this as a definition: $\exp(x):= \sum_{n=0}^{\infty}\frac{x^n}{n!}$. Of course, using basic power series techniques, we can show that $\exp$ is differentiable everywhere and satisfies the functional equation with $\exp(0)=\exp'(0) = 1$.
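(As an illustrative sketch, not part of the argument: partial sums of the series $\sum_{n\geq 0} x^n/n!$ can be compared against a library exponential. Test values and term count are arbitrary.)

```python
import math

def exp_series(x, terms=40):
    """Partial sum of sum_{n>=0} x^n / n!."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)   # x^k/k!  ->  x^(k+1)/(k+1)!
    return total

for x in (-2.0, 0.0, 1.0, 3.5):
    assert math.isclose(exp_series(x), math.exp(x), rel_tol=1e-12)
```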
So, now, back to our original problem. Given any $a>0$, we initially wanted to find a function $F_a:\Bbb{R} \to \Bbb{R}$ such that $F_a$ satisfies the functional equation, and $F_a(1) = a$, and such that $F_a$ is differentiable at $0$ with $F_a'(0) \neq 0$. Well, in this case, both $F_a$ and $\exp$ satisfy the hypothesis of theorem 1. Thus, there exists a constant $c \in \Bbb{R}$ such that for all $x\in \Bbb{R}$, $F_a(x) = \exp(cx)$. To evaluate the constant $c$, we just plug in $x=1$, to get $c = (\exp^{-1}\circ F_a)(1) := \log(a)$. Therefore we get $F_a(x) = \exp(x \log(a))$. This is why we come up with the definition $a^x := \exp(x\log(a))$.

Why/how do we define $a^m := \exp(m \log(a))$? I was under the impression that this somehow needed to use the fact that $(a^m)^n = a^{mn}$. – Orlin Aurum Jul 24 '20 at 11:02

This is virtually what I wrote about on the other answer: the following proof that $\ln{a^b} =b \ln a$ is my area of difficulty with this response: $$\log_a {M^n} = x \implies a^x = M^n$$ $$\log_a M = y \implies a^y = M \implies a^x=M^n=(a^y)^n=a^{ny}$$ $$x=ny=n \log_a {M}$$ This proof is where your comment and the answer come from, but this proof uses the fact that $(a^y)^n=a^{ny}$! – ALevel Student Jul 24 '20 at 11:25

@AlevelStudent nowhere in my answer did I ever define the logarithm to base $a$. By the way, what I defined in my answer as $\log$, you may instead want to call $\ln$. Anyway, putting these small details aside, the proof of $\ln(a^b) = b \ln(a)$ using the definitions and theorems stated in steps 1,2,3,4 goes like so: $\ln[a^b] := \ln[\exp(b \ln(a))] = b \ln(a)$, because $\ln\circ \exp = \text{id}_{\Bbb{R}}$. Nowhere in this proof have I even assumed that $(a^y)^n = a^{ny}$. All I did was use the definitions which I have given. – peekaboo Jul 24 '20 at 15:00

@AlevelStudent if this is confusing, it's probably because (sounds ironic) you know too many facts, but you've probably never proven them all systematically. I know I had the same trouble when first learning these things. I had to pretend I knew nothing about exponentials and logarithms and then start from scratch with definitions and prove every single thing. – peekaboo Jul 24 '20 at 15:03

@OrlinAurum There is nothing circular about this definition. I suggest you to temporarily forget everything you know about exponentials and logarithms and all their rules. Completely forget about the rules. Instead focus on how everything is defined. You need to convince yourself that everything has a precise definition (which is noncircular), for this one possible approach is to do things like I have written in steps 1,2,3,4. – peekaboo Jul 24 '20 at 15:11

@peekaboo I wasn't saying that it is circular (rather, I used to think proofs like this were circular, before reading your comment). I just wanted to know why $a^m$ is defined to be the same as $\exp(m\log(a))$. Is it because it makes our lives easier? – Orlin Aurum Jul 24 '20 at 15:27

For your first question about why/how we define $a^m := \exp(m\log(a))$, there's a simple answer to "why" : and that is "because I can, and because, why not?" Because if you follow steps 1,2,3 in that order then, everything is properly defined and nothing is circular in logic, so of course, I can do whatever I want with my definitions, so I can certainly define $a^m := \exp[m\log(a)]$. But, this is a very simple definition. A slightly better answer to this question is that it is "almost" the unique choice, if I impose a few other requirements (I'll try to edit and elaborate when I have time). – peekaboo Jul 24 '20 at 15:30

I'm sorry to keep on saying this, but I still believe you're using the question as a proof for your answer. You say that $a^m=\exp(m\log(a))$. This is obviously derived using the log laws, i.e. that $\exp(\log(a^m))=\exp(m\log(a))$ (correct me if I'm wrong). However, the log law that allows you to write $a^m=\exp(m\log(a))$ is itself derived using the fact $(a^m)^n =a^{mn}$, as is demonstrated in my previous comment. To sum up: the only reason you know that $a^m=\exp(m\log(a))$ is because of the fact $(a^m)^n =a^{mn}$. – ALevel Student Jul 24 '20 at 16:13

@AlevelStudent it is your very first line where you're going wrong. You need to forget everything you've learnt about log laws and exponential laws. Once again, if $a>0$ and $m\in \Bbb{R}$, then I am DEFINING $a^m := \exp(m\log(a))$. This is a definition, and I can define whatever I want. I suggest you reread my answer carefully and follow the logic step by step and verify every single claim I make, in the order I have stated things. What I have done is defined $a^m:= \exp(m\log(a))$, and from this starting point, I am proving all other results like $(a^m)^n = a^{mn}$ and $a^{m+n}=a^ma^n$. – peekaboo Jul 24 '20 at 16:19

Ok. Then how did you come to your definition of $a^m=\exp(m\log(a))$? Surely you'd need to prove it? – ALevel Student Jul 24 '20 at 16:21

@AlevelStudent how are you supposed to prove a definition? It is a definition, which means I am simply giving meaning to a certain symbol. I can define whatever I want as long as it is logically consistent and I define things entirely in terms of things I have previously defined. If you ask why this definition is reasonable/meaningful, then that's an entirely different question, which is worth looking into. The main reason is because if I define $a^x:= \exp(x\log(a))$, then it follows that $a^1 = a$, $a^{x+y} = a^x\cdot a^y$, and hence this agrees with the old one if we plug in integer $m>0$ – peekaboo Jul 24 '20 at 16:48

@AlevelStudent It doesn't need to be proven as it is 'defined'. For example (and forget about fractions and irrationals and negatives and 0 here), $a^m$ is 'defined' to be equal to whatever you get if you multiply $a$, $m$ times. Notice that you don't need to prove that $a^m=a\cdot a\cdot a\cdot\ldots\cdot a$ ($m$ times). It is a 'definition'. But I think that there's more to this, as I guess $a^m:=\exp(m\log a)$ is chosen not out of thin air, but rather it is the most 'logical' definition. At least, that is what I am gathering from this. I do not know 'why' it is the most logical one though. – Orlin Aurum Jul 24 '20 at 16:53

Neither do I understand why it is at all logical without the log laws. – ALevel Student Jul 24 '20 at 16:59

@AlevelStudent I've tried to motivate the definition as far as I can. It is pretty much our desire to preserve a certain functional equation (along with some regularity assumptions) which motivates our definition. I apologize for the length of it; there's quite a bit to say (this is why in the very first paragraph of my answer I said very explicitly that going from rationals to reals is not at all a trivial task). Hopefully, you follow along each step carefully and keep track of what are the assumptions at each stage of my answer, and keep track of what has been proven at each stage. – peekaboo Jul 24 '20 at 20:39


**Theorem:** For each $a > 0$ there exists a unique continuous function $E_a : \mathbb R \to \mathbb R$ such that the following hold: (1) $E_a(1)=a$; (2) for all $x,y \in \mathbb R$, $E_a(x+y) = E_a(x) \cdot E_a(y)$. – Lee Mosher Jul 24 '20 at 21:01

Existence is easy: $E_a(x) = e^{\ln(a) \cdot x}$. Uniqueness isn't hard either. – Lee Mosher Jul 24 '20 at 21:01

@peekaboo that was a nice read. But shouldn't $c=\frac{g'(0)}{f'(0)}$ in order for the L.H.S. to be 0 ? Also, how do we WLOG suppose that f'(0)=1 ? – Orlin Aurum Jul 29 '20 at 15:39

@OrlinAurum lol that's an embarrassing algebraic mistake. Suppose $f'(0) \neq 0$, then consider the function $\phi(x) = f\left(\frac{x}{f'(0)}\right)$. Then, it is easy to verify that $\phi$ satisfies all the properties which $f$ does (we're applying the converse of Theorem 1 with $c=\frac{1}{f'(0)}$), AND in addition it has the nice property that $\phi'(0) = 1$. So when I say "WLOG suppose $f'(0)=1$", what I mean is that rather than considering the function $f$, we consider the slightly modified function $\phi$ (just that at the time I didn't feel like introducing too many new letters) – peekaboo Jul 29 '20 at 17:28

Also, I realized that in the second-to-last paragraph, it is actually not necessary to assume the function $f$ (or $\phi$ using the notation of my previous comment) is analytic. This can actually be shown directly by writing a finite Taylor expansion, with the Lagrange form of the remainder, and then showing that the remainder term goes to $0$ as $n\to \infty$, thereby proving that the Taylor series converges to the function for every $x\in \Bbb{R}$. – peekaboo Jul 29 '20 at 17:54

@peekaboo I've managed to show that $\phi$ has all the properties of $f$, and $\phi'(0)=1$. So, from here on, we can only work with $\phi$, and make no more conclusions for $f$, right? As in, we have to show that $\phi$, not $f$, is equal to its Taylor series, right? Also, in Theorem 1, how do we choose $c=\frac{g'(0)}{f'(0)}$? It's like saying "This theorem is true, but only in this specific case." – Orlin Aurum Aug 01 '20 at 12:20

@OrlinAurum You can work with only $\phi$, but this is only a matter of simplicity, because it satisfies the simple ODE $\phi'=\phi$ (and $\phi(0)=1$). Even $f$ will equal its Taylor series... but we're just not that interested in $f$. If I ask you to solve $2c=1$, what would you do? Of course you'd say $c=1/2$. This equation has a unique solution depending on the LHS coefficient $2$ and the RHS coefficient $1$. Similarly, Theorem $1$ claims the existence of $c$ such that $g(x)=f(cx)$, so of course I have to find and choose a specific $c$ (depending on $f$ and $g$). – peekaboo Aug 01 '20 at 16:27

@peekaboo Hopefully this will be my last set of questions: Is there a standard way we came up with $\phi(x)=f(\frac{x}{f'(0)})$, or was it an instance of intellect? Also, knowing that $\phi(x)=\phi'(x), \phi'(0)=1$, doesn't the fact that $\phi(x)=\exp(x)$ follow in an easier way than the one you mentioned (i.e. writing a finite Taylor expansion and showing that the Lagrange remainder goes to zero)? For example, we could think of $\phi(x)$ as $\sum_{n=0}^{\infty}a_n x^n$ and solve the differential equation from there. I'm only asking this because I'm not quite sure of the rigour of this. – Orlin Aurum Aug 03 '20 at 14:15

@OrlinAurum No it's not a matter of intellect, you're just led to that answer by applying chain rule in reverse: if you know $f'(x) = f'(0) f(x)$, $f'(0)\neq 0$, $f(0)=1$, there is of course only one way to modify $f$ to get a function $\phi$ such that $\phi'(x) = \phi(x)$ and $\phi(0)=1$. In order to get $\phi(x)=\sum\frac{x^n}{n!}$, using Taylor's theorem (with remainder and show the remainder goes to $0$) is perfectly rigorous. At the time of writing my answer I hadn't thought of this yet, which is why I said "make the leap of faith $f$ (or $\phi$ in the new notation) is analytic blablabla" – peekaboo Aug 03 '20 at 18:13

@ALevelStudent: This is quite an old post, but I suggest you look at the following [thread](https://math.stackexchange.com/questions/55068/canyouraiseanumbertoanirrationalexponent/55078#55078). The truth is that it is actually quite difficult to define what $a^x$ means when $x$ could be any real number. Clearly, $2^{1/3}$ does not refer to repeated multiplication, so for rational numbers we have to define $a^{p/q}$ as $(\sqrt[q]{a})^p$ so that the definition makes sense. That still leaves open the question: what does $2^{\sqrt{2}}$ mean? The answer is not as simple as you think. – Joe Lamond May 21 '21 at 15:43

@JoeLamond Thank you for your comment, I will look into that thread. My opinion has now changed (it's been 10 months); I'm now a bit embarrassed of my comments here :) – ALevel Student May 21 '21 at 16:38

@ALevelStudent: That's okay. It's not helped by the fact that often, when we are first introduced to the exponential function, it is not done in a very precise or rigorous way. – Joe Lamond May 21 '21 at 18:25
I am not yet allowed to comment, so I write here. For the case $a>0$ you can easily prove this using the logarithm. For negative $a$ you need complex analysis to prove the same thing.
Use the following: $$\ln \left((a^m)^n\right)=n\ln (a^m)=nm\ln a= \ln a^{mn}.$$
New edit for comments:
You have $y=a^x$ where $a>0$. By definition $x=\log_a y$.
Now instead in your problem statement you have $y=(a^m)^n$ so choose $b=a^m$ so that $y=b^n$ then by using the definition you get $n=\log_b y=\frac{\ln y}{\ln b}=\frac{\ln y}{\ln a^m}=\frac{\ln y}{m\ln a}=\frac{1}{m}\log_a y$ which gives you $mn=\log_a y$.
Choose $x=mn$ and use the definition again and you have proven the result.
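(As an illustrative numerical check of this logarithm argument, with arbitrarily chosen values: setting $y=(a^m)^n$, computing $\log_a y$ should recover $mn$.)

```python
import math

a, m, n = 2.0, 3.7, -1.2
y = (a ** m) ** n
# log_a(y) = ln(y)/ln(a) recovers the exponent m*n
assert math.isclose(math.log(y) / math.log(a), m * n)
```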

Interesting answer, thanks for that! However, I think (correct me if I'm wrong) you may find that the proof for your answer, i.e. the log laws, is based on the question! – ALevel Student Jul 23 '20 at 18:56

@AlevelStudent They come from the definition $a^b=\exp(b\ln(a))$. – Maximilian Janisch Jul 24 '20 at 09:28


I do believe it's the same as yours, and the following proof that $\ln{a^b} =b \ln a$ is my area of difficulty with this response: $$\log_a {M^n} = x \implies a^x = M^n$$ $$\log_a M = y \implies a^y = M \implies a^x=M^n=(a^y)^n=a^{ny}$$ $$x=ny=n \log_a {M}$$ This proof is where your comment and the answer come from, but this proof uses the fact that $(a^y)^n=a^{ny}$! – ALevel Student Jul 24 '20 at 11:17


@AlevelStudent Well if you define $a^b=\exp(b\ln(a))$, then you are good to go because $$\ln(a^b)=\ln(\exp(b\ln(a)))=(\ln\circ\exp)(b\ln(a))=b\ln(a).$$ More work might be required if you use different definitions. – Maximilian Janisch Jul 24 '20 at 12:22
The rule is merely a specific case of the addition law of indices/powers. Consider: $$(a^m)^n=a^m\times a^m \times a^m \times a^m \times a^m...\times a^m$$ where there are $n$ lots of $a^m$ multiplied together. We have the law that $$a^p \times a^q=a^{p+q}$$ Applying this to your question we have: $$(a^m)^n=a^{m+m+m+m+m...+m}$$ where we have $n$ lots of $m$. Now, $n$ lots of $m$ is equal to $n\times m$ which is equal to $mn$. So, finally, we have that $$(a^m)^n=a^{mn}$$ as required.
Edit: Since writing the above proof for positive integer exponents I have been shown a proof that $\ln {a^x}=x\ln a $ without using what we are trying to prove in the question itself, which allows us to use the laws of logs etc to answer the question, as previous answers have done:
Consider $$f(x)=\ln {x^n} - n\ln x\implies f'(x)=\frac{nx^{n-1}}{x^n}-\frac{n}{x}=\frac{n}{x}-\frac{n}{x}=0$$ This means that $f(x)$ must be equal to some constant, $c$, as only constants differentiate to $0$. Let's try to find $c$: we have $$\ln {x^n} - n\ln x=c$$ Let $x=1$: $$\ln 1-n\ln 1=c=0-0=0$$ So we have $c=0$, leaving us with $$\ln {x^n} =n\ln x$$ For the sake of completeness, I'll now go on to answer the question for all real exponents. Note that $\ln {x^n} =n\ln x$ is true for all $n\in\mathbb R$.
Let $x=a^m$: $$e^{\ln {x^n}}=e^{\ln {(a^m)^n}}=(a^m)^n$$ But we also have $$e^{\ln {x^n}}=e^{n\ln {x}}=e^{n\ln {a^m}}=e^{mn\ln {a}}=e^{\ln {a^{mn}}}=a^{mn}$$ So, at last, we have: $$(a^m)^n=a^{mn}$$. Thanks to peekaboo, among others, who attempted to make me understand this in more detail.
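(Not a proof, just an illustrative spot-check of the two identities used above, with arbitrarily chosen positive bases and real exponents.)

```python
import math

# Spot-check ln(x^n) = n*ln(x) for a few positive x and real n
for x in (0.5, 2.0, 10.0):
    for n in (-2.5, 0.5, 3.0):
        assert math.isclose(math.log(x ** n), n * math.log(x))

# ...and the resulting identity (a^m)^n = a^(m*n)
a, m, n = 2.0, 1.7, -0.6
assert math.isclose((a ** m) ** n, a ** (m * n))
```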

This reasoning only holds for positive integers $m,n$, but the OP is asking for the case where $m,n$ are any real numbers. – peekaboo Jul 24 '20 at 09:13

$m$ certainly doesn't need to be positive, or an integer. You have a point about $n$ though, I will think about it. – ALevel Student Jul 24 '20 at 09:21

I agree that the rule $(a^m)^n = a^{mn}$ is true for all $a>0$ and any $m,n \in \Bbb{R}$, but your proposed proof only works in the case $m,n$ are positive integers. Part of the problem is in carefully defining what $a^m$ is even supposed to mean if $a>0$ and $m$ is a real number which is not a positive integer. – peekaboo Jul 24 '20 at 09:22


Ask yourself the question how do you define $a^m$ if $m$ is not a positive integer. Once you answer this question, you'll see that your proof is incomplete and doesn't deal with these other cases – peekaboo Jul 24 '20 at 09:25