71

How can I prove from first principles that $0!$ is equal to $1$?

Jeel Shah
Ssegawa Victor
  • 16
    For any cardinal c let c! be the order of the group of permutations of a set of cardinality c. – Pierre-Yves Gaillard Feb 08 '11 at 12:01
  • 39
    You haven't stated what your definition of factorial is. An inductive definition would have as the base case 0! = 1, so there's nothing to prove from that definition, for example. – Zhen Lin Feb 08 '11 at 15:47
  • 59
    Please don't state questions as orders; write them as questions. – Arturo Magidin Feb 09 '11 at 04:34
  • 5
    Or to put it differently, $0!=1$ *is* one of the "first principles" in the most typical formulation. –  Sep 27 '11 at 02:45
  • 22
    Wow, I've been coding too much lately and read this question as asking to show `0 != 1`, (or for the non coders $0\neq 1$). – SL2 Oct 03 '12 at 00:32
  • 4
    What do you consider as "first principles"? – NoChance Nov 17 '12 at 02:11
  • 4
    Because there is ONLY ONE way to do nothing. http://math.stackexchange.com/questions/20969/prove-0-1-from-first-principles/485421#485421 – Ali Abbasinasab Jan 13 '14 at 02:53
  • Simple. Instead of defining $0!$, define $1!$ instead, then prove that $0! = 1$. – John Joy Jun 24 '16 at 13:50
  • See https://www.quora.com/How-do-I-prove-0-1-1/answer/Dan-Christensen-8 (includes link to my formal proof without using empty products) – Dan Christensen Oct 15 '19 at 22:21
  • `0! = 1` in a weird way reminds me of my consciousness. That way of doing nothing in just one and only one way, that is me. – Xaqron Feb 22 '20 at 14:57

21 Answers

69

We need $0!$ to be defined as $1$ so that many mathematical formulae work. For example, we would like $$n! = n \times (n-1)!$$ to hold when $n=1,$ i.e. $1! = 1 \times 0!.$ We also require that the formula for the number of ways of choosing $k$ objects from $n$, $${n \choose k} = \frac{n!}{k!(n-k)!},$$ remains valid when $k=n.$

Things need to work when we extend our definition of the factorial via the gamma function.

$$\Gamma(z) = \int\limits_0^\infty t^{z-1} e^{-t} \,\mathrm{d}t,\qquad \Re(z)>0.$$

The above gives $\Gamma(n)=(n-1)!$ and so we require $0!=1,$ since $\Gamma(1)=1.$
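As a quick numerical sanity check of $\Gamma(n)=(n-1)!$, here is a small Python sketch using only the standard library (not part of the original answer):

```python
import math

# Check Gamma(n) = (n-1)! for small integers, including
# Gamma(1) = 1, which forces the value 0! = 1.
for n in range(1, 10):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))
```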

Sasha
Derek Jennings
  • 1
    The argument about the factorial formula isn't needed: that formula would hold anyway when $k=n$: maybe you mean $k=0$? – gented May 06 '19 at 14:22
65

I'm not sure that there is anything to prove. I think it follows directly from the definition of factorial:

$$ n! := \prod_{k = 1}^n k$$

So if $n=0$ the right hand side is the empty product which is $1$ by convention.
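This definition translates directly into a one-line Python sketch (the helper name `fact` is illustrative; `math.prod` of an empty range returns the empty product, $1$):

```python
import math

# The factorial as a product over range(1, n + 1); for n = 0 the
# range is empty and math.prod returns the empty product, 1.
def fact(n):
    return math.prod(range(1, n + 1))

assert fact(0) == 1
assert fact(5) == 120
assert all(fact(n) == math.factorial(n) for n in range(10))
```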

Elements in Space
Rudy the Reindeer
  • 50
    It also follows the "n! = number of ways of arranging n objects" convention: there is only one way of arranging no objects. (That's what Wolfram says, anyway.) – Rawling Feb 08 '11 at 10:33
  • 7
    You can also prove it by moving the space: "0! = 1" $\Leftrightarrow$ "0 != 1", which is computer notation for "0 $\neq$ 1" :-). Then it depends on what you count as "first principles". If we're dealing with the natural numbers, this follows from the Peano axiom that the successor of a natural number is not 0 (1 being defined as the successor of 0). If we're dealing with general rings or fields, 0 $\neq$ 1 is part of the axioms for those structures (which is to exclude the trivial cases of one-element rings and fields). – joriki Feb 08 '11 at 11:03
  • 5
@joriki That's just fudging notation. In the same sense I can say that 1! = 1. – user64742 Jun 03 '16 at 05:04
  • @TheGreatDuck: How's that? That's not true when you move the space. – joriki Jun 03 '16 at 05:48
Exactly. You said 0! = 1, therefore 0 != 1. I can also say 1! = 1, therefore 1 != 1. Your reasoning is flawed. – user64742 Jun 03 '16 at 05:55
The convention is that, for the multiplicative monoid of the semiring $\mathbb{N}$, we define the empty product $x^0$ to be the identity element, which is $1$. –  Jul 03 '16 at 20:42
@Rawling You said that "n! = number of ways of arranging n objects" is the convention, and that "there is only one way of arranging no objects." I just wanted to ask: doesn't there exist an infinite number of ways of arranging $n$ objects? – Amritanshu Aug 14 '16 at 12:17
40

One of the simplest ways of doing this is to observe that if you have $$ 6!= 720 $$ then divide both sides by $6$ to get $$ 5!=120 $$ then divide both sides by $5$ to get $$ 4!=24 $$ then divide both sides by $4$ to get $$ 3!=6 $$ then divide both sides by $3$ to get $$ 2!=2 $$ then divide both sides by $2$ to get $$ 1!=1 $$ then divide both sides by $1$ to get $$ \text{[fill in the blank here]} $$
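The divide-down pattern above can be checked mechanically; here is a minimal Python sketch of it (the variable name `value` is just illustrative):

```python
import math

# Start from 6! = 720 and repeatedly divide by n to walk down
# the factorials; dividing n! by n yields (n-1)!.
value = 720  # 6!
for n in range(6, 0, -1):
    assert value == math.factorial(n)
    value //= n
# After dividing 1! by 1, the pattern forces the blank to be 1:
assert value == 1 == math.factorial(0)
```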

Michael Hardy
  • 17
    But how is this a proof by "first principles"? (How is it a proof at all??) To be sure, I find the question itself somewhat fishy: first tell us what the definition of $0!$ is, then we can talk about how to prove $0! = 1$... – Pete L. Clark Sep 26 '11 at 23:43
  • 13
    It's not a proof, but it could be the reason why the definition was written in the first place. – Michael Hardy Dec 23 '11 at 00:19
  • 33
    Then divide both sides by zero to get $(-1)!=\infty$, which is consistent with $\Gamma(0)$ – Chris Brooks Oct 03 '12 at 01:17
  • 2
    The question does not want this kind of proof (which is nevertheless excellent). I think it wants a philosophical discussion of why mathematicians allow this sort of reasoning? – apkg Sep 02 '15 at 16:05
  • @MorganRodgers : ha ha...... Probably I can do that, but it's not worth doing in this context. – Michael Hardy May 17 '17 at 16:53
27

$0! = 1$ is consistent with, and for reasons related to, how we define the empty product.
See, for example, this entry on empty product. This is simply the name of the phenomenon Michael Hardy alludes to:

Empty product:

The empty product of numbers is the borderline case of a product where the number of factors is zero, that is, where the set of factors is empty. In such a "borderline" case, the empty product of numbers is equal to the multiplicative identity, which is $1$.

Some of the most common examples are the following:

  • The zero$^{\text{th}}$ power of a number $a$: $a^0 = 1$,
  • The factorial of $0$: $0! = 1$,
  • The prime factorization of unity, which has no prime factors.

Just as $n^0 = 1$ for any $n$, and just as the empty "prime factorization" of $1$ evaluates to $1$, we define, as a matter of convention, $0! = 1$.

amWhy
22

Because there is only one way to do nothing.

Ali Abbasinasab
16

The empty product is taken to be equal to 1. Take logs and you get an empty sum equal to zero, which is somehow more intuitive, but this trick of taking logs to convert a product into a sum never seems to get a mention in the literature. [Assumes products have positive terms]

Mark Bennet
  • One of the primary uses of logs before calculators was to multiply numbers using addition and log tables. Slide rules also used logs to turn multiplication into addition. – robjohn Sep 26 '11 at 20:36
  • 4
    @robjohn: Indeed - I still have my slide rule, and ten years ago I used it as a "mystery object" for a youth group - they didn't know what it was. I remember using it in A-level chemistry practicals to set up the instants, and do the calculations quicker than schoolmates with their Sinclair calculators! But the point I was making was more that taking logs can take the mystery out of infinite products (and the notion of convergence applicable) by turning them into (less elementary) infinite sums. – Mark Bennet Sep 26 '11 at 20:49
12

One can justify this by convention (that is, based on what "works": the base case for the product of a list).

But for a proof from 'nothing', as it were, we must first define what the factorial is supposed to mean. I think of $n!$ as the number of one-to-one, onto (that is, bijective) functions from a set of size $n$ to itself. If $n$ is $0$ then the set is the empty set. How many bijective functions are there from $\emptyset$ to $\emptyset$? Recall that a function is a set of ordered pairs (with some restrictions). With an empty domain there are no legal pairs at all, so the only candidate set of pairs is the empty set itself. And $\emptyset$ *is* a function: it is a set of pairs (empty, of course), all of whose pairs vacuously satisfy the function criteria. No other set of pairs works, so $\emptyset$ is the only such function. So there is only $1$ bijective function on a set of size $0$, and hence $0! = 1$.

Yes, this is weird, but it works. Negative numbers, complex numbers, they're all weird even when you just manipulate their properties. But you'll get over it.

Mitch
  • 2
    I remember in my college days getting drunk and using this argument in an attempt to settle this $0! = 1$ matter. See also https://math.stackexchange.com/a/789411/432081 – CopyPasteIt Sep 09 '17 at 03:33
12

Let's try a different approach from my other answer to this:

To multiply a number $N$ by $6!$ is to multiply it by six factors and get $$ N\cdot1\cdot2\cdot3\cdot4\cdot5\cdot6. $$ Similarly to multiply $N$ by $0!$ is to multiply it by $0$ numbers: $$ N. $$ But that is the same as multiplying it by $1$ $$ N\cdot1. $$ Multiplying $N$ by no numbers at all is multiplying $N$ by $1$.

(This answer doesn't apply only to factorials; it may be taken as a general explanation of why, when one multiplies no numbers, one gets $1$.)

Michael Hardy
9

Similar to @Mitch's answer, but with a slight twist at the end.

For any $n$, we define $n!$ to be the number of invertible functions from a set of size $n$ to itself. The question is thus restated as "Prove that there is exactly one invertible function from the empty set to itself."

Now, it should be clear that there can't be more than one function from $\emptyset$ to itself, because in order for two functions to be different, they have to give different values for at least one input (clearly impossible if the domain is empty). The question therefore reduces to "Prove that there is at least one invertible function from the empty set to itself."

Well, assuming we can all agree that the empty set is a set, meaning an object in the category of sets, the empty set must have an identity function (morphism) defined on it; every object in every category is required to have an identity morphism. Further, identity morphisms are always invertible because $Id\circ Id = Id$, so we are done. $\Box$

http://en.wikipedia.org/wiki/Category_of_sets

This answers the question if and only if the "first principles" include axioms of category theory (and a few facts about sets).
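For what it's worth, this count can be verified with a quick Python sketch: `itertools.permutations` enumerates exactly these bijections, and for the empty set it yields a single item, the empty tuple.

```python
from itertools import permutations

# There is exactly one permutation of the empty set: the empty tuple.
assert list(permutations([])) == [()]
assert len(list(permutations([]))) == 1       # so 0! = 1
assert len(list(permutations("abc"))) == 6    # and 3! = 6, for comparison
```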

mathmandan
  • 1
    Note: this argument also shows that $0^0 = 1$, if we define $a^b$ as the number of functions from a set of $b$ elements to a set of $a$ elements (for nonnegative integers $a$ and $b$). – mathmandan Jan 07 '15 at 06:55
  • Why isn't this the accepted answer? It is by far the best. – goblin GONE Mar 26 '15 at 04:41
8

Explanation 1: We define $n!$ as the product of all integers $k$ with $1\le k \le n.$ When $n = 0$ this product is empty so it should be 1.

Explanation 2: If $n$ is a nonnegative integer, we define $n!$ to be the number of orderings on a set with $n$ distinct objects. If $n = 0$, this set is empty. Vacuously, it has exactly $1$ ordering.

ncmathsadist
8

You can define $\exp(x)$ as

$$ 1 + \frac {x} {1!} + \frac {x^2} {2!} + \frac {x^3} {3!} + ... $$ but the following seems more uniform: $$ \frac {x^0} {0!} + \frac {x^1} {1!} + \frac {x^2} {2!} + \frac {x^3} {3!} + ... $$ These are only equal if $0! = 1$.

Not a proof, but makes sense to me!
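A quick numerical sketch of this in Python (the helper `exp_series` is hypothetical; it just truncates the series above, whose $k=0$ term relies on $0!=1$):

```python
import math

# Partial sums of sum_{k >= 0} x^k / k!; the k = 0 term is x^0 / 0!.
def exp_series(x, terms=20):
    return sum(x**k / math.factorial(k) for k in range(terms))

assert abs(exp_series(1.0) - math.e) < 1e-12
assert abs(exp_series(2.5) - math.exp(2.5)) < 1e-9
```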

Argon
Luigi Plinge
  • so we finally found a good possibility for support of spam? – Gottfried Helms Oct 02 '12 at 23:20
  • Well, almost equal. There is an exception at $x=0$ where the first term on the latter sum has a $0^0$ which is indeterminate. – Roman Chokler Oct 03 '12 at 00:51
  • @RomanChokler Yes, but in that case, we take it to mean $\lim_{x\to0}\left(x^0\right)$ rather than $x^0\big|_{x=0}$. – Akiva Weinberger Oct 12 '14 at 21:47
  • 4
    I think that, in the context of power series, $0^0$ is always taken to be $1$. This is even used in other power series like $\left(1+x\right)^{-1}=\sum \left(-1\right)^n x^n = 1-x+x^2-x^3+\cdots$, so it has nothing to do with the factorial at all – fonini Jan 06 '15 at 15:38
5

$0! = 1$ usually is one of the "first principles".

4

First we note that $n!=\frac {(n+1)!}{n+1}$. Now let $n=0$; then $0!=\frac{(0+1)!}{0+1}=\frac{1!}{1}=1$. Hence $0!=1$.
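The identity is easy to check numerically, including the $n=0$ case; a minimal Python sketch:

```python
import math

# Check the identity n! = (n+1)! / (n+1) for small n, down to n = 0.
for n in range(0, 8):
    assert math.factorial(n) == math.factorial(n + 1) // (n + 1)
```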

Carlos Afonso
3

$${n \choose R} = \frac{n!}{R!(n-R)!}$$

You know ${n \choose 0}=1$, so: $$\frac{n!}{0!\,n!}=1\implies\frac{1}{0!}=1\implies 0!=1.$$ (We also know every $n$-element set has $2^n$ subsets, and the empty subset is unique, so ${n \choose 0}=1$ indeed counts exactly one subset of size $0$.)

jinawee
Khosrotash
1

$n!$ is defined as the number of ways $n$ distinct objects can be arranged.

For example,

$1!$ is the number of ways to arrange $1$ object, i.e. $1$ way:

$A$

$2!$ is the number of ways to arrange $2$ objects, i.e. $2$ ways:

$AB,BA$

$3!$ is the number of ways to arrange $3$ objects, i.e. $6$ ways:

$ABC,ACB,BAC,BCA,CAB,CBA$

Similarly, $0!$ is the number of ways to arrange $0$ objects. $0$ objects means nothing at all, and there is exactly $1$ way to arrange nothing: do nothing.

So, $0!=1$

Hailey
1

The way I think about it is via permutations. Suppose you have $n$ objects; then to permute them you have $n!$ ways. So, for example, if you had $3$ objects, then to permute them you would compute $3!$, which is $6$. Suppose you had $0$ objects: in how many ways could you permute $0$ things? Exactly one. Therefore, $0!$ should be $1$. This is just an intuitive explanation, and by no means a proof.

Jeel Shah
0

You cannot prove that $0!=1$ from first principles, because it is a convention. First, we define "$!$" as follows:

$n!=1\cdot2\cdot3\cdot...\cdot(n-1)\cdot n$

This has no meaning for $n=0$ . But it would be useful if it had one. So, we make the convention that $0!=1$ . This is the best choice, because:

  1. For $n>1$, $n!=n\cdot(n-1)!$. If we want to extend this to $n=1$, we must have $1!=1\cdot0!\Rightarrow1!=0!\Rightarrow0!=1$.

  2. There are $n!$ ways to arrange $n$ objects into a sequence, for $n\geq1$. For zero objects, there is only one arrangement. So, if we want to extend the formula $n!$ to be applicable to zero objects, we must have $0!=1$.

  3. A Taylor series is an infinite series of the form: $$f(a)+\dfrac{f'(a)}{1!}\left(x-a\right)+\dfrac{f''(a)}{2!}\left(x-a\right)^{2}+\dfrac{f'''(a)}{3!}\left(x-a\right)^{3}+...$$

If we agree that $0!=1$, we can write the same thing using sigma notation: $$\sum_{k=0}^{\infty}\dfrac{f^{(k)}(a)}{k!} \left(x-a\right)^{k}$$

From these and other cases (see previous answers), we conclude that the best choice is to give $0!$ the value 1.

A comment on "$0!=1$ because, by definition, $\prod_{k=1}^{0}k=1$": It is correct, of course, if we define $n!=\prod_{k=1}^{n}k$, but what came first, the “!” notation or the capital pi notation?

Actually, the exclamation mark notation is older. The first use of $n!$ is due to Christian Kramp of Strasbourg, in Élémens d'arithmétique universelle (1808), p.219. But there are even older notations for the factorial, like $n^{*}$, by J. B. Basedow, in Bewiesene grundsätze der reinen mathematik, Vol I (1774).

Also, Leonhard Euler uses $M$ for $1\cdot2\cdot3\cdot...\cdot m$, in Calcul de la probabilité dans le jeu de rencontre, (1753), p.259 and p.265. The use of the capital pi, is found in Gauss: Commentationes Societatis Regiae Scientiarum Gotti, (1813), vol.II, Classis Mathematicae. In sec.18, p.24, we find the formula

$$ \varPi(k,z)=\dfrac{1\cdot2\cdot3\cdot...\cdot k}{(z+1)(z+2)(z+3)...(z+k)}k^{z} $$

In sec.19, p.25, we find the formula

$$\varPi(k,z)=\dfrac{1\cdot2\cdot3\cdot...\cdot z}{(1+\dfrac{1}{k})(1+\dfrac{2}{k})(1+\dfrac{3}{k})...(1+\dfrac{z}{k})}$$

etc.

Although Gauss uses capital pi for formulas involving repeated multiplication, it is not yet the modern notation. Also, in sec.21, p.26, Gauss writes: $\varPi z=1\cdot2\cdot3\cdot...\cdot z$.

The above references are taken from Florian Cajori, “A History of Mathematical Notations”, vol.2, section 448-449. (Cajori's reference for Basedow's book, directs at page 259, but I could not find the notation there. Perhaps the page number he mentions is wrong).

Aris Makrides
0

Also remember that $n!$ is the number of ways to arrange $n$ objects. We have only one way to arrange $0$ objects, i.e. $0!=1$.

tarit goswami
0

To shed some more light on the combinatorial intuition, and to define things formally, we can proceed in the following way. We begin with a (finite) set $T$ and define a set of sequences $Seq(T)$ inductively:

  • The empty sequence exists for any set $T$: $\forall T \in Set, \texttt{EmptySeq} \in Seq(T)$
  • If we have a sequence $seq \in Seq(T)$ and an element $t \in T$, then we also have the sequence obtained by appending $t$ to $seq$: $\texttt{append}(seq, t) \in Seq(T)$.

For example, the sequence $[1, 2, 3]$ can be encoded as:

$$ \texttt{append}(\texttt{append}(\texttt{append}(\texttt{EmptySeq}, 1), 2), 3) $$

A permutation of $T$ is defined as an element of $Seq(T)$ of length $|T|$ in which each element $t \in T$ occurs exactly once. We can formalize (1) length and (2) occurs exactly once as follows:

$$ \begin{align*} &\texttt{count}: Seq(T) \times T \rightarrow \mathbb{N} \\ &\texttt{count}(\texttt{EmptySeq}, \_) \equiv 0 \\ &\texttt{count}(\texttt{append}(seq, t), t_0) \equiv \begin{cases} 1 + \texttt{count}(seq, t_0) & t = t_0 \\ \texttt{count}(seq, t_0) & t \neq t_0 \end{cases}\\ \\ &\texttt{length}: Seq(T) \rightarrow \mathbb{N}\\ &\texttt{length}(\texttt{EmptySeq}) = 0 \\ &\texttt{length}(\texttt{append}(seq, t)) = 1 + \texttt{length}(seq) \\ \\ &occurs\_once: Seq(T) \times T \rightarrow \{ true, false \} \\ &occurs\_once(seq, t) \equiv [count(seq, t) = 1] \\ \\ &is\_permutation: Seq(T)\rightarrow \{ true, false \} \\ &is\_permutation(seq) \equiv \left(\forall t \in T,\ occurs\_once(seq, t)\right) \land length(seq) = |T| \end{align*} $$

Now, we have the machinery to ask ourselves

how many permutations of the empty set exist?

This is equivalent (by definition) to:

how many sequences of length $0$ exist in which every element of the empty set occurs exactly once?

  1. There is only one sequence of length $0$, namely $\texttt{EmptySeq}$.
  2. This (vacuously) contains all elements of the empty set exactly once. Since the empty set has no elements, this sequence contains all of them.

Combining this, we can see that $\texttt{EmptySeq}$ is a permutation of the empty set. It must be the only permutation, since any other sequence will have length greater than 0.

Hence, there is one permutation (the empty permutation) of the empty set.
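The definitions above translate almost directly into code. Here is a Python sketch (encoding sequences as tuples; the names `EMPTY_SEQ`, `append`, `count`, and `is_permutation` mirror the definitions and are illustrative):

```python
# Sequences encoded as tuples stand in for EmptySeq / append.
EMPTY_SEQ = ()

def append(seq, t):
    return seq + (t,)

def count(seq, t0):
    return sum(1 for t in seq if t == t0)

def is_permutation(seq, T):
    # Every element of T occurs exactly once, and the length matches |T|.
    return all(count(seq, t) == 1 for t in T) and len(seq) == len(T)

# The empty sequence is (vacuously) a permutation of the empty set...
assert is_permutation(EMPTY_SEQ, set())
# ...and any nonempty sequence fails the length check, so it is unique.
assert not is_permutation(append(EMPTY_SEQ, 1), set())
```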

References

To make this notion of the inductively defined set $Seq(T)$ precise, we would need some lattice theory to define least fixed points.

We can also use a theorem prover such as Coq, where we can define objects such as $Seq(T)$ and reason mathematically about them.

Siddharth Bhat
0

Perhaps it helps to consider this question in a broader context, where we consider "empty products" more generally. Let me introduce "Capital pi" notation, which is analogous to sigma notation for sums. If $a_1,a_2,\dots,a_{n-1},a_n$ are a list of numbers, then we define $$ \prod_{i=1}^{n}a_i=a_1\cdot a_2\cdot\dots\cdot a_{n-1}\cdot a_n \tag{*}\label{*}\, . $$ The product on the right-hand side of $\eqref{*}$ is obtained by letting $i=1,2,\dots,n-1,n$. A few remarks about this notation will come in handy:

  • The letter $i$ is called a "dummy variable", meaning that it is an arbitrary symbol that can be replaced with another letter without changing the value of the product, i.e. $\prod_{i=1}^{n}a_i=\prod_{r=1}^{n}a_r=\prod_{k=1}^{n}a_k$.
  • The right-hand side of $\eqref{*}$ should not be understood literally as a product of at least four factors. For instance, $\prod_{i=1}^{3}a_i$ should be taken to mean $a_1\cdot a_2\cdot a_3$.

To make this product notation precise, we require a recursive definition: first we let $\prod_{i=1}^{1}a_i=a_1$, and second we let $\prod_{i=1}^{n}a_i=\left(\prod_{i=1}^{n-1}a_i\right)\cdot a_n$ when $n$ is an integer greater than $1$. Notice that in order to make this definition work, we need to agree that the "product" of a single term $a_1$ to be $a_1$. Therefore, while the "product" of a single term seems like an unnatural idea at first, it is convenient to agree that the product of a single number is itself.

It is for a similar reason that we agree that the product of no numbers at all—the empty product—is equal to $1$. It's not because empty products have to be defined in this way, but rather that this is a convenient notational convention. Why is it convenient? Well, notice that if $n>1$, then $\prod_{i=1}^{n}a_i=\left(\prod_{i=1}^{n-1}a_i\right)\cdot a_n$. If we want this equation to be true when $n=1$, then it ought to be the case that $\prod_{i=1}^{1}a_i=\prod_{i=1}^{0}a_i\cdot a_1$, and so assuming that $a_1$ is non-zero, we must define $\prod_{i=1}^{0}a_i$ as $1$. Of course, we don't want one rule for when $a_1=0$, and one rule for when $a_1\neq 0$, and so the simplest convention to adopt is that $\prod_{i=1}^{0}a_i=1$ regardless of whether $a_1=0$.
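The recursive definition, with the empty case returning $1$, can be sketched in Python (the helper `prod` is illustrative, not a standard function):

```python
# Recursive product matching the definition above: the n = 0 case
# returns 1 so that prod(a, n) == prod(a, n - 1) * a[n - 1] holds
# all the way down to n = 1.
def prod(a, n):
    return 1 if n == 0 else prod(a, n - 1) * a[n - 1]

a = [3, 5, 7]
assert prod(a, 0) == 1          # the empty product
assert prod(a, 3) == 3 * 5 * 7  # == 105
assert all(prod(a, n) == prod(a, n - 1) * a[n - 1] for n in range(1, 4))
```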

How does this relate to factorials? Well, by definition $n!=\prod_{i=1}^{n}i$, and so $0!=\prod_{i=1}^{0}i=1$. In this case, we are motivated by the desire for the equation $n! = n(n-1)!$ to still hold when $n=1$. It's not that $0!$ has to be defined in this way, but rather that this is the most natural definition. This is why it doesn't make much sense to "prove" that $0!=1$; it's not a matter of proof—it's a matter of definitions.

It turns out that there is a variety of other reasons why it is convenient to define $0!$ as $1$, which you can read about here. It is fortunate that in every context where $0!$ crops up, it seems obvious what it "should" be. In other cases, we are not so lucky. For instance, sometimes it is convenient to define the degree of the zero polynomial as $-1$, but other times it is convenient to define it as $-\infty$, and other times still it feels more natural to leave it undefined. This example shows that mathematical definitions are quite flexible, and that they suit our needs depending on the context, rather than being eternal truths etched into stone.

Joe
0

For any finite set $S$, $\Pi (S)=\Pi(S\cup\{1\})$

By definition of the factorial $0!=\Pi\{\}=1\cdot\Pi\{\}=\Pi\{1\}\cdot\Pi\{\}=\Pi\{1\}=1$

Naci Er