I can perform operations on polynomials. I can add, multiply, and find their roots. Despite this, I cannot define a polynomial.

I wasn't in the advanced mathematics class in 8th grade; in 9th grade I skipped ahead and joined the more advanced class. This question isn't about something I don't understand; it's about something I missed.

My classes have not covered what a polynomial really is. I can generate one, but not define one. The internet has yielded only incomplete definitions: "consisting of multiple terms" or "a mathematical expression containing 2 or more terms and variables."

Take the following expressions for example:

$2x^2-x+12-2x^2+x-12$ consists of multiple terms, but can also be expressed as $0$. Is zero a polynomial?

What about $x^{-1}$? I've been told this one isn't a polynomial, but I don't understand why.

Is $x^2+x+1-x^{-1}-x^{-2}-x^{-3}$ a polynomial? It contains both positive and negative exponents.

tl;dr: What actually is the mathematical definition of a polynomial? Is $0$ a polynomial, and why isn't $x^{-1}$ a polynomial under this definition?

  • $0$ is a polynomial of degree $0$ as is any other constant. A polynomial is any linear combination of nonnegative integer powers of an indeterminate. – Qudit Mar 14 '17 at 00:15
  • @Qudit 0 is a polynomial, but there is some question of how to define its "degree" in the most useful way, or if it should be defined at all. It may depend on what you want the "degree" for. See http://math.stackexchange.com/questions/1796312 – David K Mar 14 '17 at 00:28
  • In addition to several good answers here, there is a definition of polynomials in [this answer to an earlier question](http://math.stackexchange.com/questions/2184208/domain-of-a-polynomial-function/2184247#2184247). – David K Mar 14 '17 at 00:30
  • You may also be interested in [Laurent polynomials](https://en.wikipedia.org/wiki/Laurent_polynomial) which are allowed to have negative exponents. But they are definitely not (regular) polynomials. – Teepeemm Mar 14 '17 at 01:48
  • Your question about $x^{-1}$ can be interpreted in two ways: first, why is $x^{-1}$ not a polynomial by its form; second, why is $f(x) = x^{-1}$ not a *polynomial function*? The first question is answered below: the exponent $-1$ is not a nonnegative integer, as required in the definition of a polynomial. The second question means: is it possible that for some polynomial $P(x)$, we have $x^{-1} = P(x)$ for all values of $x$. The answer is no, for the simple reason that $x^{-1}$ is not defined when $x = 0$, whereas the polynomial $P(x)$ must be. But one could still ask, is it possible... – user49640 Mar 14 '17 at 02:32
  • to have $x^{-1} = P(x)$ for some polynomial $P(x)$ whenever $x \ne 0$? The answer is still no, but now this fact is not so obvious that it should be accepted without proof. I won't write a proof here (of which there are many). If you look at the graph of $g(x) = x^2 + 1/(x^2 + 1)$, it's perhaps not obvious at all why this could not be the graph of a polynomial function. Unlike $f(x) = x^{-1}$, its graph has no vertical or horizontal asymptotes. – user49640 Mar 14 '17 at 02:37
  • According to Serge Lang, a polynomial with coefficients in a ring $A$ is the same thing as a finitely-supported function from the monoid of natural numbers (including zero of course!) into $A$. – Dorebell Mar 14 '17 at 02:46
  • A fun way to define polynomials that appeals to computer programmers is: a polynomial is (1) `f(x) = c`, a constant, or (2) `f(x) = x`, or (3) the sum or product of any two polynomials. It may seem bizarre to define a thing in terms of itself, but this is a perfectly valid and reasonable definition. – Eric Lippert Mar 14 '17 at 18:40
  • Your question, at least linguistically, is ambiguous. For example, your own face(-ial) features, as approximated by a polynomial, could "actually be" a polynomial, thus answering your question... But I am almost certain *that* is **not** what you *intended* to ask. Right? – hello_there_andy Mar 15 '17 at 04:02
  • I'm kind of shocked that nobody here has considered that something like $\cos^{-1}(2 \cos(x))$ can be reasonably called a polynomial in $x$... maybe y'all's answers should cover cases like these. – user541686 Mar 15 '17 at 05:23
  • Poly+nomial == many+terms – MmmHmm Mar 15 '17 at 21:35
  • I understand that, @mr. Kennedy, but that's not the mathematical definition, just an etymological breakdown – Travis Mar 15 '17 at 21:37
  • @Qudit, defining degree of zero polynomial to be $0$ is ill-behaved, since it will break $\deg(fg) = \deg f + \deg g$ (over integral domains). – Ennar Mar 16 '17 at 20:39
  • @Qudit : Actually, the degree of the constant polynomial $0$ is often defined to be $-\infty$, and isn't usually defined to be zero. That's done so that the formula $\operatorname{deg}(pq)=\operatorname{deg}(p) + \operatorname{deg}(q)$ holds even if $p$ or $q$ is $0$. – MPW Mar 17 '17 at 20:03
  • @Mehrdad, maybe you meant $\cos(2\arccos x)$? What you gave definitely *isn't* a polynomial. – J. M. ain't a mathematician Apr 23 '17 at 07:46
  • @J.M.isn'tamathematician: Yes :\ I wrote it correctly in my answer actually... – user541686 Apr 23 '17 at 07:48

19 Answers


A polynomial (in one variable) is an expression of the form $$ p(x) = a_0+a_1x+a_2x^2+\ldots+a_nx^n$$ where the coefficients $a_i$ are some kind of number (or, more generally, elements of a ring). The exponents must all be nonnegative integers.

Unless we've been silly and $a_n=0,$ $n$ is called the degree of the polynomial. We can formalize this by defining the largest $n$ such that $a_n\ne0$ as the degree.

Notice that constants are allowed. $p(x) = 3$ is a zero-th degree polynomial.

You asked about zero. Yes, $p(x) =0$ is considered to be a polynomial. However, you'll notice that there is a problem with the definition of degree here since there is no coefficient that is nonzero. The degree of the zero polynomial is thus undefined.

This allows us to say that if we multiply two polynomials $w(x)=p(x)q(x)$ with $p$ of degree $n$ and $q$ of degree $m,$ then $w$ has degree $n+m.$ (Notice how the zero polynomial would mess this up if its degree were defined to be zero like the other constants.)

You're right that simplification is important. The $x$ is just a symbol and we can always "combine like terms" $$ a_lx^l+b_lx^l= (a_l+b_l)x^l.$$ We always combine all the terms together and simplify in order to get an expression into the form above with only one term for each power before we do things like consider the degree.

Notice we can add two polynomials according to the simplification rule and get a polynomial as a result. This is a good reason to consider zero to be a polynomial... it allows the sum of two polynomials to always be a polynomial. Likewise we can multiply two polynomials according to the distributive property, the rule $$ (a_mx^m)(a_lx^l) = a_ma_l x^{m+l},$$ and the additive simplification rule. The result will be another polynomial.

Yes, the exponents all need to be nonnegative integers. Of course other expressions are possible but they aren't called polynomials. Terms like $x^{-3}$ are considered part of the family of rational functions (or, as a commenter noted, the Laurent polynomials, not to be confused with the (unqualified) polynomials). This is just a definition and thus somewhat arbitrary (though good definitions are important for organization). It's just like saying $-4$ is an integer but not a natural number. It's true by definition, and yes, a bit arbitrary, but nonetheless useful and a nearly universal convention.

EDIT As Paul Sinclair pointed out in the comments, there are also polynomials in multiple variables. For instance $$p(x,y) = A + Bx + Cy +Dx^2+Exy+Fy^2$$ is the general degree two polynomial in two variables. The degree of a term is just the sum of the degrees with respect to the individual variables. So a term like $3xy$ has degree two and a term like $3x^4y^5z$ would have degree $4+5+1=10.$ The degree of a polynomial is the degree of its highest-degree term with nonzero coefficient.
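The single-variable rules above translate almost line for line into code. Here is a minimal sketch (the names `degree`, `padd`, and `pmul` are mine, not from any library) that stores a polynomial as its coefficient list $[a_0, a_1, \ldots, a_n]$:

```python
# A polynomial a_0 + a_1 x + ... + a_n x^n stored as the coefficient
# list [a_0, a_1, ..., a_n].  The function names are illustrative only.

def degree(p):
    """Largest n with a_n != 0; None (undefined) for the zero polynomial."""
    for n in range(len(p) - 1, -1, -1):
        if p[n] != 0:
            return n
    return None

def padd(p, q):
    """Combine like terms: add coefficient-wise, padding with zeros."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def pmul(p, q):
    """Use (a_m x^m)(a_l x^l) = a_m a_l x^(m+l), then combine like terms."""
    out = [0] * (len(p) + len(q) - 1)
    for m, a in enumerate(p):
        for l, b in enumerate(q):
            out[m + l] += a * b
    return out

p = [12, -1, 2]                          # 2x^2 - x + 12
q = [-12, 1, -2]                         # -2x^2 + x - 12
print(padd(p, q), degree(padd(p, q)))    # [0, 0, 0] None: the zero polynomial
print(degree(pmul(p, [1, 1])))           # 3 = deg p + deg(x + 1) = 2 + 1
```

Note how the question's example $2x^2-x+12-2x^2+x-12$ sums to the zero polynomial, whose degree comes out undefined, while for nonzero polynomials the degrees add under multiplication.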

  • This may sound pedantic, but I would call the polynomial $p$, $p(x)$ is just a number (or some other thing depending on your ring, etc). – YoTengoUnLCD Mar 14 '17 at 02:30
  • @YoTengoUnLCD Agree there's a distinction and potential ambiguity between the polynomial in $R[x]$ and its image under evaluation at an arbitrary point $x\in R$. But to me, $p(x)$ could refer to either. $x$ is not a ring element on the right hand side so it doesn't need to be on the left. Usually see this distinction handled by having a character other than $x$ be standard notation for an arbitrary element of $R$ (say it's $\alpha$) and writing $f(\alpha)$ when evaluation is intended. I guess for me the notation $p = a_0+a_1 x$ feels imbalanced, but everyone's mileage varies. – spaceisdarkgreen Mar 14 '17 at 03:05
  • @YoTengoUnLCD Of course I have no problem writing $p\in R[x].$ I even prefer it slightly to $p(x)\in R[x]$, so who knows. – spaceisdarkgreen Mar 14 '17 at 03:13
  • I think it's pretty common to say that the zero polynomial has degree $-\infty$ to restore the additivity of degree under multiplication. – Danu Mar 14 '17 at 10:39
  • I'm not sure if it is obvious from this definition that $x^2+1$ and $1+x^2$ are the same polynomial in $\mathbb{Z}_2$ but $1+x$ isn't. – JiK Mar 14 '17 at 15:38
  • More accurately, what you've described is a *polynomial of one variable*. There are of course polynomials of many variables, which alas only Ethan Bolker has mentioned, and he only as an "abstraction". – Paul Sinclair Mar 14 '17 at 16:15
  • @PaulSinclair Thanks! Agree it was an oversight to not at least mention. – spaceisdarkgreen Mar 14 '17 at 22:38
  • Your answer is also variable, but you got there in polynomial time (or did you?) – hello_there_andy Mar 15 '17 at 04:03
  • Perhaps another pedantic comment, which perhaps goes too far, but for the additivity of the degree function you have to assume that you do not have any zero divisors in your ring R. – Emrys-Merlin Mar 15 '17 at 13:03
  • Some sources define the degree of the zero polynomial as $-\infty$, since this still honours $\text{deg}fg=\text{deg}f+\text{deg}g$ while obtaining $\text{deg}f'\le \text{deg}f-1$. – J.G. Mar 15 '17 at 22:19
  • @Emrys-Merlin not pedantic at all (and I honestly didn't think about it), but yeah, I was pitching this to OP so didn't want to go too deep but also wanted to communicate the generality a bit. And it's already ballooned since the original edit to the point that I think its utility and readability are on the wane if I keep editing (and there are already a couple fantastic high-level answers). On one hand restricting to an integral domain's better for an uncomplicated treatment of that point about additivity, on the other hand such a restriction is unnatural. – spaceisdarkgreen Mar 16 '17 at 00:00
  • I think it would improve this answer if you explicitly stated that the exponents must be integers. That's a bit of a subtle point and bears stating explicitly in addition to the formula in the beginning that implies it. – Stefan Monov Mar 16 '17 at 19:52
  • To prevent some confusion: When you say " a polynomial is an expression of the form $p(x)=a0+a_1 x \cdots$" someone could believe the "expression" includes the equal sign - but the "expression" is not the equation but the RHS. – leonbloy Mar 17 '17 at 11:51

There are lots of good answers here and they are all essentially correct, even though they are different! I will try to contribute another, which is somewhat more abstract than the others. I normally wouldn't try this for a high school student, but your very good question deserves different kinds of answers. Maybe this one will help.

It's the "what actually is" in your question that I want to address. In mathematics at a more advanced level you don't think as much about what something "is" as you do about how it "behaves". (The same is true in object-oriented programming languages; you say you're studying computer science, so if you're learning Java you know about this.)

To manipulate polynomials (which you know how to do) all you really need to know is the sequence of coefficients. We'll assume for the moment that those coefficients are ordinary numbers. It's useful to start those coefficients with the constant term, since the degree (which is the place that holds the last nonzero coefficient) isn't fixed. So the polynomial $$ 8x^3 + 5x + 7 $$ is "really just" the sequence $$ (7, 5, 0, 8) $$ or, if you like $$ (7, 5, 0, 8, 0, 0, \ldots) $$ where the zeroes go on forever.

What "really just" means there is that if you know the sequences of coefficients for two polynomials you can calculate out the sequence for their sum. Just add the sequences element by element. You can also calculate their product. It's a little harder to write down the algorithm, but you can figure it out if you understand how writing a polynomial the high-school way with powers of $x$ makes the multiplication automatic.

You can even divide one polynomial by another as long as you're willing to allow yourself a remainder (and allow fractions for the coefficients). You may in fact have learned how to do that and called it "synthetic division".

You can also "evaluate" a polynomial at a number $n$ when you know its coefficients.

What all this means in practice is that you don't need "$x$" or its powers to think about polynomials. The "variable" just helps to keep the polynomial arithmetic straight. And that's so useful that we almost always write polynomials with an $x$ rather than as a sequence of coefficients.

Finally, this abstract view lends itself to further abstraction! All you need to know to manipulate polynomials (written as sequences) is how to add and multiply the coefficients. So the coefficients themselves might be polynomials. So, for example, you can think of $$ 4x^2y^3 + 6xy^3 - 2xy^2 $$ as "a polynomial in $x$ whose coefficients are polynomials in $y$": $$ (0, -2y^2 + 6y^3 , 4y^3) = ((0), (0, 0, -2, 6), (0, 0, 0, 4)) $$ or as "a polynomial in $y$ whose coefficients are polynomials in $x$". (You write that one.)

The coefficients can even be matrices, when you learn what matrices are and how to add and multiply them.
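The point that the addition and multiplication routines only need coefficients that can themselves be added and multiplied can be made concrete. In this sketch (function names are mine), a coefficient may be a plain number or a further coefficient list, so a polynomial in $x$ whose coefficients are polynomials in $y$ works with no extra machinery:

```python
def padd(p, q):
    """Add nested coefficient lists; entries may be numbers or lists."""
    if isinstance(p, list) or isinstance(q, list):
        p = p if isinstance(p, list) else [p]
        q = q if isinstance(q, list) else [q]
        n = max(len(p), len(q))
        return [padd(p[i] if i < len(p) else 0,
                     q[i] if i < len(q) else 0) for i in range(n)]
    return p + q

def pmul(p, q):
    """Multiply; the recursion lets coefficients themselves be polynomials."""
    if isinstance(p, list) or isinstance(q, list):
        p = p if isinstance(p, list) else [p]
        q = q if isinstance(q, list) else [q]
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] = padd(out[i + j], pmul(a, b))
        return out
    return p * q

# 2xy^2 and (2xy + 3y - 1) as polynomials in x whose coefficients
# are polynomials in y (constant terms first):
two_x_y2 = [[0], [0, 0, 2]]     # 0 + (2y^2) x
other    = [[-1, 3], [0, 2]]    # (-1 + 3y) + (2y) x
print(pmul(two_x_y2, other))
# [[0, 0], [0, 0, -2, 6], [0, 0, 0, 4]]: 4x^2y^3 + 6xy^3 - 2xy^2
```

Up to trailing zero coefficients, the product is the nested form of the example $4x^2y^3 + 6xy^3 - 2xy^2$ above.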

Further thoughts:

You can think of the algorithms for addition and multiplication you learned a long time ago as like the arithmetic of polynomials, only more complicated. When you "collect like powers of $x$" in a polynomial, you just add up what you see. When you "collect like powers of $10$" in ordinary arithmetic you have to simplify further by "carrying", so replacing, say, $21 + 7 \times 10$ by $1 + 9 \times 10$.
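That carrying analogy can be checked directly in a small sketch of my own: multiplying two integers digit by digit is exactly polynomial multiplication with $x = 10$, followed by a carry pass to restore digits in $\{0,\ldots,9\}$.

```python
def convolve(a, b):
    """Digit lists, least significant digit first; this is polynomial
    multiplication before any carrying ('collect like powers of 10')."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def carry(coeffs):
    """Normalize the 'coefficients' into honest base-10 digits."""
    digits, c = [], 0
    for v in coeffs:
        c, d = divmod(v + c, 10)
        digits.append(d)
    while c:
        c, d = divmod(c, 10)
        digits.append(d)
    return digits

raw = convolve([3, 2, 1], [5, 4])   # 123 * 45 with digits as coefficients
print(raw)                          # [15, 22, 13, 4]: like terms collected
print(carry(raw))                   # [5, 3, 5, 5]: i.e. 5535 = 123 * 45
```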

If you relax the requirement that the coefficients be $0$ from some point on then you are dealing with a (formal) power series, traditionally written $$ a_0 + a_1 x + a_2 x^2 + \cdots = \sum_{n=0}^\infty a_n x^n . $$ You can add these and multiply them with the usual polynomial rules. They are "formal" power series because trying to evaluate them by substituting a value for $x$ is much more subtle than it is for polynomials. You'll study that in calculus. (And formal power series have uses that don't depend on evaluation.)

Then you can decide to allow a few terms with negative powers, like $$ 4x^{-3} + 7x^{-1} + \text{ an ordinary formal power series} . $$ These are called "Laurent series"; they come up when you study functions of a complex variable. You have lots of nice mathematics to look forward to.

Ethan Bolker
  • I see that @CarlMummert wrote essentially the same answer while I was writing mine. So you have two explanations for the same idea. That's often useful. – Ethan Bolker Mar 14 '17 at 01:01
  • @hello_there_andy I think you misunderstand. I wasn't accusing CarlMummert. If anything I was apologizing for seeming to duplicate his fine answer. I'm pretty sure he and I agree that both answers may be helpful. – Ethan Bolker Mar 15 '17 at 12:17
  • @CarlMummert See my response above to hello_there_andy. – Ethan Bolker Mar 15 '17 at 12:18
  • +1 just for explicitly mentioning the gem that is 'synthetic division' - oh how grateful I was to my high school teacher for dispensing with cumbersome writing out of indeterminates (if it's what I think you mean)! A cute first algorithm to learn as a kid! – Mehness Nov 24 '18 at 06:31
  • Typo fixed per [here](https://math.stackexchange.com/q/3513779/242) – Bill Dubuque Jan 18 '20 at 18:32
  • Really nice answer. Though, Cauchy's product still looks "unnatural" to me, i.e. with much hindsight in it: why not the product component-wise, i.e. sum-like? –  Jul 24 '21 at 10:12

Note: in this answer I will try to motivate the definition which is used in more advanced contexts such as "abstract algebra". This may go beyond what is in a typical pre-algebra book, but I hope it will show how the mathematics community has found a way to come up with a workable definition, even if it is less obvious at first.

It is hard to define polynomials because there is a tension between several of their key properties, which don't quite agree:

  1. A polynomial can be written as an expression in the form $a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$ for some $n \geq 0$ and some choice of coefficients $a_0, \ldots, a_n$.

  2. The sum of two polynomials is a polynomial. The product of two polynomials is a polynomial. Overall, the collection of polynomials is the smallest collection that includes all the numbers, $x$, and is closed under addition and multiplication.

  3. The expressions $(x+1)(x-1)$ and $x^2-1$ determine the same polynomial.

If we want to use something like (1) as a definition, we end up with the issue that $x$ and $2x$ are defined to be polynomials, but $x + 2x$ is a polynomial according to (2) but is not literally in the form shown in (1). So we have to define a "simplification" operation.

If we want to use something like (2) as a definition, then we still have the issue of defining when two polynomials are equal, as (3) points out.

In general, although it is tempting to define polynomials in terms of "expressions", this causes more trouble than it is worth. So it is common in more advanced texts to define polynomials as follows:

A polynomial (over the real numbers) is a sequence of real numbers $(a_i : i \in \mathbb{N})$ in which at most finitely many of the terms are nonzero. Two polynomials are equal when they are the same sequence.

So $(2,1,0,0,\ldots)$ and $(0,1,3,0,0,\ldots)$ are polynomials according to this definition. Of course, the "polynomial" $(2,1,0,0,\ldots)$ is meant to stand for $2 +x$, and $(0,1,3,0,0,\ldots)$ stands for $x + 3x^2$. But in these definitions we do not define the polynomials in terms of the expressions. Rather, we view the expressions as nothing more than notation - shorthand - for the sequences which are actually polynomials.

We continue the definition by defining addition of polynomials using the formula $(a_n) + (b_n) = (a_n + b_n)$.

Multiplication is defined in a way analogous to the Cauchy Product: $(a_n)(b_n)$ is defined to be the sequence $(c_n)$ where $$ c_k = \sum_{i=0}^k a_i b_{k-i}. $$ This is exactly the formula you would discover if you multiply polynomials in the usual, pre-algebra style.

In this way, the collection of polynomials in the variable $x$ is identified with the ring $\mathbb{R}[x]$, which is also defined as the set of finitely-supported sequences of reals with the operations shown above. These definitions of the operations take care of simplification automatically, so we do not need to worry about "unsimplified" polynomials in the formal definition.
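As a quick illustration of how this definition makes simplification automatic (the helper name `cauchy` is mine, not standard), the expressions $(x+1)(x-1)$ and $x^2-1$ from point (3) produce literally the same sequence:

```python
def cauchy(a, b):
    """c_k = sum_{i=0..k} a_i * b_{k-i}; sequences stored as finite
    lists whose omitted tails are all zero."""
    n = len(a) + len(b) - 1
    return [sum(a[i] * b[k - i]
                for i in range(k + 1)
                if i < len(a) and k - i < len(b))
            for k in range(n)]

x_plus_1  = [1, 1]    # the sequence (1, 1, 0, 0, ...)
x_minus_1 = [-1, 1]   # the sequence (-1, 1, 0, 0, ...)
print(cauchy(x_plus_1, x_minus_1))   # [-1, 0, 1]: the sequence of x^2 - 1
```

Because both expressions normalize to the same sequence $(-1, 0, 1, 0, \ldots)$, they are equal as polynomials by definition; no separate simplification step is needed.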

Carl Mummert
  • good job remembering that n must be finite! otherwise, you have a power series. also, IIRC, a polynomial with more than one indeterminate would be a "multinomial". – richard1941 Mar 14 '17 at 23:23
  • For the benefit of the OP, Chapter 4 of Lang's Undergraduate Algebra contains further reading on polynomials in the context of abstract algebra. – Sasho Nikolov Mar 15 '17 at 03:18
  • Motivation-wise, the more interesting feature IMO is that polynomials can be associated with algebraic numbers (numbers which can be derived from rational numbers via addition, multiplication and exponentiation): for any algebraic number `a`, there is a unique rational-coefficient polynomial `p` for which `p(a)=0`, and all other polynomials with that property are multiples of `p`. So in a (very sloppy) sense polynomials describe possible ways in which the rational numbers can be extended. This concept turns out to generalize well to other number systems (such as integers modulo some prime). – Tgr Mar 15 '17 at 09:53
  • "Rather, we view the expressions as nothing more than notation - shorthand - for the sequences which are actually polynomials." - If you define $x$ as the sequence $0,1,0,0,0\ldots$ (assuming the coefficient ring has, or can be given, a 1), the expressions are actually valid for the polynomial ring. – Martin Rattigan Mar 16 '17 at 10:38
  • Also, your definition (2) is interesting, except that the coefficients only need to be elements of a ring, and are not necessarily "numbers". In a ring, elements have additive inverses and there is an additive identity, so two polynomials p1 and p2 are equal if p1 + (-p2) = 0. – richard1941 Mar 16 '17 at 23:48
  • Really nice answer. Though, Cauchy Product still looks "unnatural" to me, i.e. with much hindsight in it: why not the product component-wise, i.e. sum-like? –  Jul 24 '21 at 11:55

Added: 15/12/2018

Although I still think the ideas in this answer are great, in retrospect the exposition is lacking. As one commenter says, this answer would be infinitely more useful if it actually explained things rather than just stating stuff. Consequently, I would request that someone edit or totally rewrite it to make the answer more comprehensible. If there's any takers, please comment below. If there are no takers, I might try myself, although I'm not sure where to even start.


The other answers do a great job of giving a non-technical explanation. For users of the website who are a little further along in their studies, here's a fairly technical answer.

Philosophically speaking, I think that the concept polynomial with coefficients in $R$ somehow "is" the endofunctor $U \circ F : \mathbf{Set} \rightarrow \mathbf{Set}$, where $U$ is the forgetful functor $R\mathbf{Alg} \rightarrow \mathbf{Set}$ and $F$ is its left-adjoint. This ties in with Carl's answer, namely that:

The sum of two polynomials is a polynomial. The product of two polynomials is a polynomial. Overall, the collection of polynomials is the smallest collection that includes all the numbers, x, and is closed under addition and multiplication.

The reason this is a good description of polynomials is because:

  • Carl is being vague, and just emphasizing polynomials with integer coefficients
  • An object of $\mathbb{Z}\mathbf{Alg}$ is just a ring
  • The signature $(+,\times,0,1)$ is sufficiently large to state the axioms of ring theory, so we just need closure under these operations (and Carl is being vague and not including $0$ and $1$.)

The reason this is an incomplete answer is because

  • it doesn't tell how to decide whether or not two polynomials are equal.

So, how do we decide whether or not two polynomials are equal? By applying the axioms of ring theory, of course! Two polynomials with integer coefficients are equal if, and only if, the axioms of ring theory can be used to prove they're equal. Otherwise, they're distinct. Seen from this vantage point, it's not too surprising that the category $\mathbb{Z}\mathbf{Alg}$ of rings has a direct connection to polynomials.

By the way, I think it's similarly the case that the concept $R$-linear combination "is" the endofunctor $U \circ F$, with $R\mathbf{Alg}$ replaced by $R\mathbf{Mod}$. In fact, there's a whole dictionary of such things:

$R \mathbf{Alg} \mapsto \mbox{Polynomial with coefficients in $R$}$

$R \mathbf{Mod} \mapsto \mbox{$R$-linear combination}$

$\mathbf{Mon} \mapsto \mbox{Word}$

$\mathbf{Grp} \mapsto \mbox{Reduced Word}$

$\mathbf{PSet} \mapsto \mbox{Element}$

$\mathbf{SupLat} \mapsto \mbox{Subset}$

$\mathbf{Magma} \mapsto \mbox{Catalan Tree}$

etc. On the left we have concrete categories, and on the right we have the monads that they define, and are defined by. The technical concept that underlies this correspondence is that of a monadic adjunction. This is all well-known, of course, but I like reassuring myself that apparently abstract concepts give meaningful, coherent answers to the kinds of questions Year 9 students might ask their apparently-humble math tutor. This is the kind of thing that got me excited about mathematics in the first place :)

goblin GONE
  • I could not resist voting up even though I suspect it is useless to the questioner. – PJTraill Mar 15 '17 at 18:49
  • @PJTraill, thanks. I'd say you more than suspect, but I wrote this as a little treat for people like us :) – goblin GONE Mar 16 '17 at 00:07
  • Your answer would be a lot more useful if you would explain instead of just stating. To start with, how are polynomials maps from sets to sets, rather than from rings to rings? – celtschk Mar 16 '17 at 07:57
  • What is $\mathbb{Z}\textbf{Alg}$? (By the way, _this_ is precisely the sort of thing that got me into math in the first place, too!) – étale-cohomology Mar 16 '17 at 15:55
  • I'm missing something obvious here, it seems. The endofunctor is a functor. In your correspondence it seems to correspond to the set of all polynomials over a given ring, which, as you state, is a ring itself. But how is this functor a ring? A functor does not have 'elements', let alone that they can be added or multiplied. I can see how the ring of polynomials, viewed as an object of the category $R$-Alg has enough universal properties that it can be recognized inside that category just by looking at the category structure, but $RAlg$ seems to have dropped out of the picture. What's going on? – Vincent Mar 16 '17 at 16:14
  • By the way I do share your feeling that it is reassuring that abstract concepts have meaningful interpretations in more accessible realms of mathematics! That's why I like to understand how this is done in this case. Perhaps I should make it into a separate question? – Vincent Mar 16 '17 at 16:16
  • @celtschk, thanks for the feedback. I'll do a rewrite when I've worked out exactly what needs to be said and how to say it. – goblin GONE Mar 17 '17 at 14:08
  • @étale-cohomology, $\mathbb{Z}\mathbf{Alg}$ is the category of rings $\cong$ the category of $\mathbb{Z}$-algebras. Google the phrase "$R$-algebra" to learn more. – goblin GONE Mar 17 '17 at 14:09
  • I thought some more about it and I realized that I would fully agree if you had said that the concept of polynomial 'is' the functor $F$ you describe above. After all, the object map of $F$ applied to a set with $n$ elements gives the $R$-algebra of polynomials in $n$ variables. So since each of the $R$-algebras obtained this way could in itself be considered an answer to the original question the functor could be viewed as an elegant way of giving all these possible answers, handily parametrized by elements of $\textbf{Set}$. But now I don't understand... (ctd in next comment) – Vincent Mar 17 '17 at 14:23
  • (ctd from previous comment) ...But now I don't understand why you compose the functor $F$ (which already could be seen as the answer) with the forgetful functor $U$. In my view this destroys everything relevant! You take these beautiful rings of polynomials and turn them into uninteresting featureless blobs whose only defining characteristics are their cardinality - something which has nothing to do with the 'concept of polynomial'. So why compose with $U$ in the end? Now it looks like you had the answer and then decided to deliberately forget it. How am I misrepresenting the situation here? – Vincent Mar 17 '17 at 14:27
  • 1
    @Vincent $U$ and $F$ are related by an adjunction, so they come with a canonical unit and counit transformation. By compusing $T=UF$ we get a monad on the category of sets (i.e. a monoid in the endofunctor category) which comes with a unit $e: 1 \Rightarrow T$ and multiplication $\mu: T^2 \Rightarrow T$. So in fact $T=UF$ is an algebraic object like a monoid. – ಠ_ಠ Mar 18 '17 at 07:50

A polynomial in the indeterminate $x$ is an expression that can be obtained from numbers and the symbol $x$ by the operations of multiplication and addition.

$0$ is a polynomial, because it is a number.

Any positive integer power of $x$ is a polynomial, because you can get it by multiplying the appropriate number of $x$'s together (e.g. $x^3 = x \cdot x \cdot x$). But negative and non-integer powers of $x$ are not polynomials (e.g. $x^{-1}$ is not a polynomial), because those operations only give you positive integer powers of $x$.
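This closure-style definition can be mirrored in a short sketch (the names `const`, `add`, and `mul` are mine): the only ways to build a polynomial are a number, the symbol $x$, a sum, or a product, and since `add` and `mul` never lower an exponent, nothing like $x^{-1}$ can ever arise.

```python
def const(c):          # a number is a polynomial
    return [c]

X = [0, 1]             # the symbol x is a polynomial

def add(p, q):         # sums of polynomials are polynomials
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def mul(p, q):         # products of polynomials are polynomials
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# x^3 = x * x * x is reachable from the constructors; x^{-1} is not,
# because add and mul only ever combine nonnegative powers of x.
print(mul(X, mul(X, X)))                 # [0, 0, 0, 1], i.e. x^3
print(add(const(12), add(mul(const(-1), X), mul(const(2), mul(X, X)))))
# [12, -1, 2], i.e. 2x^2 - x + 12
```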

Robert Israel
  • One is introduced to *polynomial functions* in high school algebra, and later to *formal polynomial expressions*. The difference in simple terms is that polynomial functions take on values when argument $x$ (in the case of a single variable) is assigned a value, while formal polynomial expressions are not functions per se. – hardmath Mar 14 '17 at 01:36
  • This is a good perspective that deserves to be represented higher in the answer sequence. But I do wish there were some gentle description of quotienting out the difference between expressions that can be proven to be equal based on ring axioms and arithmetic on the constants, and how each equivalence class then has exactly one member that's in standard form. – hmakholm left over Monica Mar 14 '17 at 10:57
  • @hardmath... wrong. Not all high schools are equal. Any failure of the educational infrastructure causal to a lack of understanding, completely, the entirety of polynomials (in all associated fields), is not considered as a means to counter a fellow SM user... We are beyond the significance of our varied educational backgrounds here on this site. – hello_there_andy Mar 15 '17 at 04:12
  • I don't think "numbers" have anything to do with polynomials. As I remember, the $a_i$ only need to be elements of a ring. – richard1941 Mar 16 '17 at 23:51
  • @richard1941 True, but Travis seems to be in high school. I tried to pitch my answer to that level. – Robert Israel Mar 17 '17 at 01:03

This isn't a definition suitable for pre-calculus, but I would say that a polynomial in a variable $x$ is anything whose $n^\text{th}$ derivative with respect to $x$ vanishes everywhere (i.e. is equal to zero everywhere), for some integer $n \geq 0$.

The nice thing about this definition is that it talks about how the polynomial behaves, rather than how you write it (so $\cos(2 \cos^{-1} x)$ is also a polynomial in $x$). It also generalizes appropriately to more abstract objects such as rings, functions, etc. as long as you define derivatives appropriately.
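Both claims can be spot-checked numerically. The sketch below (my own, using only the standard library) verifies that $\cos(2\cos^{-1}x)$ agrees with the polynomial $2x^2-1$ on $[-1,1]$, and uses repeated finite differences, a discrete stand-in for the derivative, to exhibit the "third derivative vanishes" behavior of a degree-2 polynomial on equally spaced samples:

```python
import math

# Chebyshev identity: cos(2*arccos(x)) equals the polynomial 2x^2 - 1.
for k in range(-10, 11):
    x = k / 10
    assert abs(math.cos(2 * math.acos(x)) - (2 * x * x - 1)) < 1e-9

def diffs(values):
    """Forward differences of a list of samples."""
    return [b - a for a, b in zip(values, values[1:])]

# For the degree-2 polynomial 2x^2 - 1 sampled at x = 0..5, the third
# finite differences vanish, the discrete analogue of f'''(x) = 0.
samples = [2 * x * x - 1 for x in range(6)]
print(diffs(diffs(diffs(samples))))   # [0, 0, 0]
```

Of course, finite differences are only an analogue of the derivative criterion in the answer, not the criterion itself; but the pattern "take differences $n+1$ times and get all zeros" holds exactly for integer samples of an integer-coefficient degree-$n$ polynomial, and fails for functions like $1/x$.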

  • This is an intriguing definition, but in a sense backwards because the concept of derivatives basically amounts to “function can be locally approximated by a polynomial”. – leftaroundabout Mar 18 '17 at 11:50
  • @leftaroundabout: I think you're thinking about analyticity, not merely derivatives? Not every differentiable function is analytic... – user541686 Mar 18 '17 at 20:03
  • No, analyticity is another subject. An analytic function is _equal_ to its Taylor expansion (in general an infinite series, i.e. _not_ a polynomial), but every $n$-times differentiable function is _approximated_ by a polynomial of degree $n$ corresponding to the truncated Taylor series (in some suitable $\varepsilon\mapsto\delta$ sense, which is how differentiability is defined in standard analysis). How well this approximation works has little to do with whether the function is analytic. – leftaroundabout Mar 18 '17 at 23:09
  • @leftaroundabout: no, doesn't make sense. Analyticity is what tells you that the Taylor series of the function **converges** to the function itself locally... i.e., it's what tells you the partial sums of the Taylor series approximate the function locally. If the Taylor series doesn't converge to the function locally then we don't say it's "approximating" it. But heck, if your function is continuous at $x_0$ then its value near $x_0$ is already "approximated" by $f(x_0) + 0$ too, by whatever hand-wavy definition you're using (what is it?) so is continuity now a polynomial concept too? C'mon... – user541686 Mar 19 '17 at 00:12
  • 1
    Furthermore there's literally no mention of or allusion to polynomials anywhere in the definition of a derivative, so I'm not sure how the concept is "backwards". It'd have been totally possible for derivatives to have been formulated first, with people subsequently realizing that polynomials always have uniformly vanishing derivatives. – user541686 Mar 19 '17 at 00:12
  • Well, yes, a continuous function is already approximated by a degree-0 polynomial, but only in the weakest sense (the error becomes arbitrarily small in a sufficiently small region). The higher you take the differentiability degree, the higher the requirement on the error vanishing rate you also impose. For single differentiability, you require that the error decreases faster than proportional to $\delta$, which is equivalent to the limit of the quotient of the displacements vanishing, or, as it's usually written, the limit of the quotient of the _expressions_ being 1. ... – leftaroundabout Mar 19 '17 at 01:22
  • ... _Or_, well, if you replace the first Taylor coefficient with 1 (in which case the quotient is the standard _difference quotient_), then the limit is just _the derivative_, which happens to be a good way to calculate this, provided you actually have a closed expression for the function, in which case you can then go on with the second derivative as simply the differentiability of the generally-phrased first derivative. ... – leftaroundabout Mar 19 '17 at 01:22
  • ... That's how it's usually taught in maths education, but for functions which aren't given in a closed form, e.g. only through _physical measurements_ (keep in mind that it was _physicists_ who invented calculus), this doesn't work: you can't _evaluate the derivative at every point_, let alone take any limits of it. Thus what you actually do to obtain an $n$-th derivative of such a function is, you directly fit an order-$n$ polynomial to approximate the data as best as possible in a small region. The derivatives are then the coefficients of this polynomial. – leftaroundabout Mar 19 '17 at 01:23
  • @leftaroundabout: In your entire 3-comment lecture on derivatives you **literally did not mention polynomials** until you stopped talking about math and started talking about physical measurements. Yet you're somehow telling me that talking about derivatives before talking about polynomials is backwards?! The only thing you did was to prove my point: like I said [here](http://math.stackexchange.com/questions/2185587/2188937?noredirect=1#comment4511614_2188937), you can totally formulate and use derivatives without ever dealing with polynomials, and you just did that yourself... – user541686 Mar 19 '17 at 01:48
  • Well, I literally did mention (order-0) polynomials in the first line. The rest of the two maths comments talked about how single differentiability is a notion of being approximated by an order-1 polynomial. Even if you never phrase it this way, you do for sure deal with a polynomial when writing down the difference quotient: $\lim_{x\to x_0} \frac{f(x) - f_0}{x - x_0}$. The denominator $x - x_0$ is a polynomial. (This is not quite the polynomial I'm making all the fuss about, though it's closely related.) – leftaroundabout Mar 19 '17 at 02:11
  • @leftaroundabout: You literally mentioned order-0 polynomial... when talking about continuity. Not derivatives. And nothing about $x - x_0$ screams "polynomial" any more than $2$ screams $\{\emptyset, \{\emptyset\}\}$. The fact that it just so happens to be a polynomial doesn't mean it somehow demands any understanding or knowledge of the concept of polynomials. As you **yourself** proved, you didn't need to discuss polynomials when discussing derivatives *at all*. I'm tired of arguing with you... can you just stop this and leave it alone? – user541686 Mar 19 '17 at 02:21

I will give you a rigorous definition.

Definition 1. A quadratic polynomial in the variable $x$ is an expression of the form $$ a x^2 + bx + c, $$ where $a$, $b$ and $c$ are real numbers and $a \not = 0$.

Example 1. Take $a=1$, $b=2$ and $c=0$. You can then see that $$ x^2 + 2x $$ is a quadratic polynomial.

More generally, we have the following definition of a polynomial (not necessarily quadratic).

Definition 2. A polynomial in the variable $x$ is either $0$ or an expression of the form $$ a_n x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0, $$ where $n$ is a non-negative integer, $a_n, a_{n-1}, \dots, a_1, a_0$ are real numbers and $a_n \not = 0$. The non-negative integer $n$ is said to be the degree of the polynomial.

Example 2. The expression $x^{-1}$ is not a polynomial. While it is indeed an expression of the form $a_n x^{n}$, with $n = -1$ and $a_n = 1$, the exponent $n$ is negative, contradicting our requirement that the exponents be non-negative integers.

Further remarks.

You can define the addition and the multiplication of polynomials in the way you are used to. This implies that $$ x+ 2x + 3x^2 = 3x^2 + 3x + 0 $$ and $$ (x-2)(x+2) = x^2 + 0x - 4 $$ are also polynomials, by definition.
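These two identities can be checked mechanically. Here is a minimal sketch in which a polynomial is nothing but its list of real coefficients $[a_0, a_1, \ldots, a_n]$ (the function names are illustrative, not from any library):

```python
# A polynomial as its coefficient list, lowest degree first.

def poly_add(p, q):
    """Add two coefficient lists term by term, padding the shorter one."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply via the distributive law: x^i * x^j = x^(i+j)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x - 2)(x + 2) = x^2 + 0x - 4:
print(poly_mul([-2.0, 1.0], [2.0, 1.0]))  # [-4.0, 0.0, 1.0]
```

Note that the result of `poly_add` and `poly_mul` is again a coefficient list, which is exactly the closure property the remark appeals to.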

  • 1
    Your definition 2, as currently stated, has a problem with the polynomial $0$. – Jeppe Stig Nielsen Mar 14 '17 at 08:52
  • Yes, thanks for the remark. The degree of $0$ is usually undefined or taken as $-\infty$. – Olivier Mar 14 '17 at 13:07
  • I agree, but we must make sure that $0$ still satisfies the definition for a polynomial. It is fine that the definition of the _degree_ breaks down. But we must agree that $0$ is a polynomial. – Jeppe Stig Nielsen Mar 14 '17 at 13:26
  • Corrected, thank you. – Olivier Mar 14 '17 at 13:35
  • 2
    @richard1941 We are not talking about algebraic and transcendental numbers. There is really no problem about using real coefficients; you can find the ring $\mathbb{R}[x]$ used in all introductory abstract algebra books. Algebraic and transcentental numbers are defined with respect to $\mathbb{Z}[x]$. [See here.](http://mathworld.wolfram.com/AlgebraicNumber.html) – Olivier Mar 14 '17 at 23:48
  • Previous comment about algebraic and transcendental deleted. All we need is for the $a_i$ to be elements of a ring. And the numbers are certainly a ring. The ring you choose depends on what you are studying. One ring for analysis, another for algebra. – richard1941 Mar 16 '17 at 23:59

The simplest answer: a polynomial is a linear combination of a finite number of monomials.
See Wikipedia for monomial; also binomial and trinomial.

As the Wikipedia Monomial article says in the lead, in some contexts monomial may have negative integer exponents (for example in Laurent polynomials).

For ordinary polynomials (with non-negative exponents), the degree of a polynomial is the highest exponent among all monomial terms actually present (i.e. with non-zero coefficients) in the case of one-variable polynomials, or the highest sum of exponents in the case of multi-variable polynomials.

  • $2x^7+5x+2$ is of degree $7$ (which is the highest one among $7$, $1$ and $0$)
  • $3qt^3+5q^2 + t$ is of degree $4$ (which is the highest among $1+3$ from $q^\color{red}1t^\color{red}3$, $2$ and $1$)
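Both examples can be computed with a short sketch, storing a multi-variable polynomial as a dict from exponent tuples to coefficients (a hypothetical representation chosen for illustration):

```python
# 3*q*t^3 + 5*q^2 + t, with exponents ordered as (q, t):
# term 3*q^1*t^3 -> key (1, 3), term 5*q^2 -> key (2, 0), term t -> key (0, 1).

def degree(poly):
    """Highest sum of exponents among terms with non-zero coefficients."""
    degs = [sum(exps) for exps, coeff in poly.items() if coeff != 0]
    return max(degs) if degs else None  # None for the zero polynomial

p = {(1, 3): 3, (2, 0): 5, (0, 1): 1}
print(degree(p))  # 4
```

The one-variable case falls out as a special case with length-one exponent tuples, e.g. $2x^7+5x+2$ becomes `{(7,): 2, (1,): 5, (0,): 2}` with degree 7.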

This is a non-simple question, unfortunately. Polynomials can be defined in very abstract and potentially incomprehensible terms.

Formally, a polynomial in one variable--say $x$--with real coefficients can be defined as an expression that can be equivalently expressed as a real linear combination of finitely-many terms of the form $x^n$ (where $n$ is a non-negative integer, and $x^0:=1$).

$0$ is a polynomial, since it can be written (for example) as $0x^0.$ However, $x^2+x+1-x^{-1}-x^{-2}-x^{-3}$ is not a polynomial, since it has negative exponents. Neither is $\sqrt{x}$ a polynomial, since it has non-integer exponents. Neither is $1+x+x^2+x^3+\cdots$ a polynomial, since it cannot be expressed in finitely-many non-$0$ terms. On the other hand, the following is a polynomial: $-1+(x-x)+(x^2-x^2)+(x^3-x^3)+\cdots.$ In particular, it is equivalent to the (constant) polynomial $-1x^0.$
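The phrase "can be equivalently expressed as" amounts to collecting like terms and discarding zero coefficients. A small sketch (the representation and names are illustrative):

```python
from collections import defaultdict

# A polynomial as a list of (coefficient, exponent) pairs; normalize by
# collecting like terms and dropping the terms whose coefficient cancels to 0.

def normalize(terms):
    acc = defaultdict(float)
    for coeff, exp in terms:
        acc[exp] += coeff
    return {e: c for e, c in acc.items() if c != 0}

# -1 + (x - x) + (x^2 - x^2) collapses to the constant polynomial -1:
print(normalize([(-1, 0), (1, 1), (-1, 1), (1, 2), (-1, 2)]))  # {0: -1.0}
```

This only works for sums with finitely many terms, which is precisely why the infinite sum $1+x+x^2+\cdots$ is excluded.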

Cameron Buie
  • @Carl: Excellent point. Hopefully, my adjustment fixes things. (Please let me know if it doesn't.) – Cameron Buie Mar 14 '17 at 01:09
  • @David: I agree that the last expression was constant only if $x<1.$ More precisely, it was defined if and only if $|x|<1.$ Hopefully, my adjustment has fixed things. Please let me know if it hasn't. – Cameron Buie Mar 14 '17 at 01:13
  • The series now is a series of parenthetical expressions, which does converge (very quickly indeed!). Now the question is, as with several of the other definitions, is a polynomial defined by a particular form or is it defined by what it evaluates to? – David K Mar 14 '17 at 03:34
  • @David: I contend that a polynomial is more of a formal thing. One can indeed define a polynomial function, but there needn't be any notion of evaluation to have a polynomial. For that matter, it needn't "look like" a polynomial to "act like" a polynomial. One could instead define polynomials to be real sequences whose terms are eventually zero (all but finitely many terms are $0$). We can then define addition term-wise, multiplication in a more complicated fashion. – Cameron Buie Mar 14 '17 at 17:08
  • @CameronBuie I find the formal definition very attractive. It seems, however, not very compatible with writing things like $-1+(x-x)+(x^2-x^2)+\cdots,$ unless that expression is simply to be interpreted as the sequence $(-1,0,0,\ldots).$ – David K Mar 14 '17 at 17:15
  • @David: More like $\langle -1,1-1,1-1,1-1,\dots\rangle,$ but this amounts to the same thing, of course. – Cameron Buie Mar 14 '17 at 17:31
  • @richard1941: What about it? – Cameron Buie Mar 14 '17 at 23:40
  • @CarlMummert The polynomials (in $x$ over a ring $R$) form a ring, don't they? The product of two polynomials is a polynomial, isn't it? So $(x+1)(x-1)$ is the product of the polynomials $x+1$ and $x-1$ in the ring of polynomials, and is equal to the polynomial $x^2-1.$ – bof Mar 15 '17 at 22:45
  • @bof: Yes, that was the point Carl was making. My answer previously suggested that $(x+1)(x-1)$ would *not* be a polynomial. – Cameron Buie Mar 15 '17 at 22:49

A polynomial is any element of a free extension of a ring (which in this answer is taken to mean "commutative ring with a multiplicative identity"). Thus, a polynomial can only be defined with respect to a given ring, say the ring $R$. The simplest free extension of $R$ is generated by augmenting $R$ with a single free element, say $x$, and is denoted by $R[x]$. Here free means that the elements of $R[x]$ are unconstrained by any condition apart from the ring axioms and any particular condition on the elements of $R$. Every element of $R[x]$ can be written in the form $\sum_{k=0}^n a_kx^k$, where $n\in \Bbb N$ and $a_k\in R$ for $k=0,...,n$, with the usual operations of addition and multiplication for such elements. In this context, the element $x$ is often called a variable.

Generally a ring can be freely extended by any number of variables, even infinitely many; the elements of such extensions are still called polynomials; and the resulting rings are called polynomial rings. As an example, we have the polynomial ring $R[x,y,z]$ in three variables.

Often the base ring is $\Bbb R$. In this case, note that the ordered-field structure of $\Bbb R$ does not extend to $\Bbb R[x]$, although division of elements of $\Bbb R[x]$ by nonzero elements of $\Bbb R$ is still definable. Another common example is $\Bbb C[z]$, where the name of the variable is $z$, rather than $x$, by convention. Other base rings often encountered are $\Bbb Z$ and $\Bbb Q$.

Added note: It may be asked why we need to have such an abstract definition of a polynomial. Indeed, for each of the familiar rings $\Bbb Z$, $\Bbb Q$, $\Bbb R$, and $\Bbb C$, the related polynomial rings are isomorphic to the corresponding rings of polynomial functions; for example, we could identify the element $x^8-2x^6+x^4$ in $\Bbb R[x]$ with the polynomial function $x\mapsto x^8-2x^6+x^4$ on $\Bbb R$. Sadly this doesn't work in general. In the case of the "clock arithmetic" ring $\Bbb Z_{12}$, the polynomial function $x\mapsto x^8-2x^6+x^4$ on $\Bbb Z_{12}$ is indistinguishable from the zero function, although the polynomial $x^8-2x^6+x^4$ is a perfectly good member of $\Bbb Z_{12}[x]$ in its own right.
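The $\Bbb Z_{12}$ claim is easy to verify directly; this snippet evaluates the polynomial function at every element of $\Bbb Z_{12}$:

```python
# x^8 - 2*x^6 + x^4 evaluates to 0 at every element of Z_12,
# even though its coefficient list (1, 0, -2, 0, 1, 0, ..., padded) is not zero.

residues = [(x**8 - 2 * x**6 + x**4) % 12 for x in range(12)]
print(residues)  # twelve zeros
```

So over $\Bbb Z_{12}$ the polynomial-to-function map is not injective, which is why the formal definition cannot be replaced by "a polynomial is a polynomial function" in general.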

John Bentin
  • Free extension of a commutative ring I presume? Otherwise you have to modulo some commutators out. Obviously this only really matters when looking at multivariate poly rings. – Ali Caglayan Mar 18 '17 at 09:13
  • Thank you, @AliCaglayan. I have made this explicit now. – John Bentin Mar 18 '17 at 21:20

A polynomial is a mathematical expression (as opposed to an equation) in which all terms are added to or subtracted from each other (if there is more than one term), each term has a real-number coefficient, and each variable appears only with a non-negative integer power. You cannot have infinitely many terms. The number one is a polynomial. Likewise, zero is a polynomial. Any term with a negative power of the variable disqualifies the entire expression from being a polynomial.

Edit: In regards to the expression that simplifies to zero, both the original expression and zero are polynomials. The expression with negative powers is not a polynomial. If you had an expression with negative powers that simplified to zero, my understanding is that the unsimplified expression is not a polynomial, but the simplified expression, 0, is a polynomial.

Edit 2: No you cannot have infinitely many terms.

  • I think you meant "you cannot have infinitely many terms". Separately, the powers should be distinct, or else $x+x$ is a polynomial. – Carl Mummert Mar 14 '17 at 00:12
  • @Carl Mummert, you can have infinitely many terms. Think about a power series that starts with n=0 to infinity. That is a polynomial. Also, while x+x is not simplified, it is still a polynomial as far as my understanding goes. Do you have a source? – Unique Worldline Mar 14 '17 at 00:18
  • 2
    @UniqueWorldline No, you can't have infinitely many terms. If you do, it's not a polynomial, it's a power series. – Ethan Bolker Mar 14 '17 at 00:19
  • @CarlMummert what about a taylor polynomial. These are $\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(a-x)^n$ which by definition is a polynomial of infinite terms. – Sentinel135 Mar 14 '17 at 00:20
  • 4
    @Sentinel135: if it has infinitely many terms, it isn't a polynomial. – Carl Mummert Mar 14 '17 at 00:22
  • By my understanding power series are polynomials. I have yet to find a definition under which they aren't. If you know of one, then please provide it. – Sentinel135 Mar 14 '17 at 00:31
  • Corrections: An infinite power series is not a polynomial (see any algebra book). The expression $x+x$ is a polynomial, by the definition of sums of polynomials, and it is equal to $2x$. – Olivier Mar 14 '17 at 00:32
  • 1
    @Sentinel135: https://en.wikipedia.org/wiki/Formal_power_series – Carl Mummert Mar 14 '17 at 00:38
  • Ok Carl, quoting wiki doesn't prove anything. You need to show that there exists a property that power series have and that polynomials do not (other than the power series being an infinite sum, as that's what's under dispute). Otherwise there is literally no point in not having polynomials with infinite terms, and those would be defined as power series. – Sentinel135 Mar 14 '17 at 01:43
  • 2
    @Sentinel135 It's a matter of definitions: is the word "polynomial" defined as allowing infinite summations or not? And the answer is that the standard definition of the word "polynomial" does not allow infinite summations. The definitions given on Wikipedia agree with the five of us. Do you have a source which says that a polynomial *can* include an infinite summation? – Tanner Swett Mar 14 '17 at 04:38
  • @TannerSwett saying it's a matter of definition would mean that if I were to define a polynomial in such a way that it would include an infinite summation, then that would adequately involve power series as a type of polynomial. What I am saying is that for the definition to exclude the infinite summation there would have to be a reason. Unfortunately, neither your nor carl have given any logical reason to exclude the general case. Fortunately, I found that reason for myself, and as such am no longer contesting the issue. If you want to know my reasoning then I suggest posting another topic. – Sentinel135 Mar 14 '17 at 17:31
  • Here, kudos for ruling out negative powers. But curses for allowing real coefficients. If you allow real coefficients, you have x-pi=0, so pi is algebraic. – richard1941 Mar 14 '17 at 23:27
  • @richard1941 there is no restriction on polynomials having transcendental coefficients. $x-\pi=0$ is a polynomial. However, this does not mean that $\pi$ is algebraic. A transcendental number must not be a root of a polynomial with rational coefficients. since $x-\pi=0$ has transcendental coefficients, it does not prove $\pi$ to be algebraic. – Unique Worldline Mar 15 '17 at 04:12
  • If infinity cannot be possible, then only your understanding of reality is impossible – hello_there_andy Mar 15 '17 at 04:13

One way (off the top of my head) to resolve the problem of $x^{-1}$ not being a polynomial and $0$ being one is that all polynomials are the result of integrating $0$ a finite number of times.

  • 1
    But wouldn't that also rule out 1? – Travis Mar 14 '17 at 21:49
  • 1
    Or any number other than 0, for that matter? – Travis Mar 14 '17 at 21:50
  • 3
    @Travis 1 you can get by integrating 0 once and setting the constant of integration to 1. This answer can be made perfectly rigorous, although essentially the same point can be made by defining polynomials as infinitely differentiable functions which become constantly zero after differentiating a finite number of times. – John Coleman Mar 15 '17 at 03:56

A small remark on the role of $x$, in the spirit of the answers of @EthanBolker and @CarlMummert.

A representation of $x$:

We already know from the given answers that a polynomial \begin{align*} a_0+a_1x+a_2x^2+\cdots+a_nx^n \end{align*} can be represented by its coefficients $a_0,\ldots, a_n$ as a tuple with infinitely many elements \begin{align*} (a_0,a_1,a_2,\ldots,a_n,0,0,\ldots) \end{align*} whereby all but finitely many elements are zero.

Question: But, what about the role of $x$ and why can we add and multiply $x$ with polynomials in more or less the same way as we can add and multiply the coefficients (i.e. the elements of the ring)?

Let's consider elements of $\mathbb{R}$ as coefficients of a polynomial and let's take e.g. \begin{align*} p(x)=7+5x+8x^3 \end{align*} We can represent this polynomial as \begin{align*} (7,5,0,8,0,0,\ldots) \end{align*}

We now pick out the special element $(0,1,0,0,\ldots)$, denote it by $$x:=(0,1,0,0,\ldots)$$ and, using the Cauchy product $\sum_{k=0}^n a_kb_{n-k}$ to multiply these tuples, we can write \begin{align*} (7,5,0,8,0,\ldots)&=(7,0,0,0,0,\ldots)+(0,5,0,0,0,\ldots)+(0,0,0,8,0,\ldots)\\ &=(7,0,0,0,0,\ldots)+(5,0,0,0,0,\ldots)\cdot \color{blue}{x}+(8,0,0,0,0,\ldots)\cdot \color{blue}{x^3}\tag{1} \end{align*}

The right-hand side of (1) shows that all elements $a\in\mathbb{R}$ can be represented as \begin{align*} (\color{blue}{a},0,0,0,\ldots) \end{align*} while the indeterminate $x$ has a specific representation \begin{align*} (0,\color{blue}{1},0,0,0,\ldots) \end{align*} which is zero at the first coordinate but one at the second contrary to all other elements of the ring. In fact $x$ is an element of an extension ring in which all elements of the ring can be embedded.

This element $x$, called an indeterminate or transcendental element, has the following three properties:

  • $x\cdot 1=1\cdot x =x$

  • $ax=xa\qquad\qquad\qquad \text{for all } a\in\mathbb{R}$

  • $a_0+a_1x+\ldots+a_nx^n=0 \quad(a_i\in\mathbb{R}) \qquad\Longleftrightarrow\qquad a_i=0,i=0,1,\ldots,n$

These properties of $x$ are fundamental and enable customary calculation with polynomials.
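The sequence picture above can be sketched in a few lines; the `cauchy` helper below is an illustrative name, truncating the "almost all zero" tails to finite tuples:

```python
# A polynomial as a tuple of coefficients; the indeterminate x is (0, 1, 0, ...).

def cauchy(a, b):
    """Cauchy product: c_n = sum over k of a_k * b_(n-k), on finite tuples."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return tuple(c)

x = (0, 1)                      # the special element (0, 1, 0, 0, ...)
x3 = cauchy(cauchy(x, x), x)    # x^3
print(x3)                       # (0, 0, 0, 1)
```

Repeating the calculation in (1), $7 + 5x + 8x^3$ comes out as the tuple $(7,5,0,8)$, with the base-ring elements embedded as $(a,0,0,\ldots)$.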


For finitely many variables $x_i$ comprising a vector $\mathbf{x}$, define $\mathbf{x}^{\boldsymbol{\alpha}}:=\prod_i x_i^{\alpha_i}$. Such an expression, multiplied by a constant called the coefficient, is a monomial of degree $\left| \boldsymbol{\alpha}\right| :=\sum_i \alpha_i$.

A polynomial is a sum of finitely many monomials with non-zero coefficients. The zero polynomial is the case where the number of such monomials is zero. The polynomial's degree is the supremum of the monomials' degrees, so the zero polynomial has degree $-\infty$. Any non-zero polynomial has at least one monomial, and among these some monomial has maximum degree, and if there is exactly one of these its coefficient is the leading coefficient. It is customary to write a polynomial as a sum over monomials of degree at most its degree, so for non-zero polynomials in one variable a unique non-zero leading coefficient exists.


Normally we define a polynomial as something that can be written as $\sum_{i=0}^n a_ix^i$ for some $a_i\in \mathbb R$, where $i,n\in \mathbb N$. This is the reason why $x^{-i}$ isn't a polynomial, though it can be treated as the composition of a function with a polynomial.

The other reason is that when you start dealing with $\sum^n_{i=0}\frac{a_i}{x^i}$ you start to lose properties that all polynomials share; for instance, $P(x)$ is then undefined at $x=0$.
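The contrast is easy to see by evaluating both forms; a polynomial $\sum a_i x^i$ is defined at every real $x$, which a sketch using Horner's rule on the coefficient list $[a_0,\ldots,a_n]$ makes concrete (illustrative code, not from a library):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n via Horner's rule."""
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

# x^2 + 2x + 1 at x = 0 is perfectly fine:
print(horner([1.0, 2.0, 1.0], 0.0))  # 1.0
# ...whereas evaluating a_1/x + a_2/x^2 at x = 0 would divide by zero.
```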


You ask if $x^{-1}$ is a polynomial and other answers are saying it is not. That's okay, but... you should look up the term "Laurent polynomial."


What is a polynomial? How about the definition usually found in Pre-Calculus texts:

A polynomial of degree $n$ is any function of the form

$$ p(x) = a_{n}x^{n} + a_{n-1}x^{n-1} + \ldots + a_{1}x + a_{0},$$

where $n$ is a non-negative integer and the $a_{i}$ are real numbers for $i \in \{0, 1, \ldots, n \}$.


A polynomial is an object in some particular algebra $\mathbb{A}$ that can be created by addition and multiplication of elements of $\mathbb{A}$.

If the particular algebra is also a field $\mathbb{F}$, then we can put the polynomial in a nice form, e.g. $a_1 x a_2 x a_3=a_1 a_2 a_3 x^2$, where $a_i,x \in \mathbb{F}$.


This definition, presented at ncatlab.org, is pretty helpful:

Let $R$ be a commutative ring. A polynomial with coefficients in $R$ is an element of a polynomial ring over $R$ . A polynomial ring over $R$ consists of a set $X$ whose elements are called “variables” or “indeterminates”, and a function $X\to R[X]$ to (the underlying set of) a commutative $R$ -algebra that is universal among such functions, so that $R[X]$ is the free commutative $R$ -algebra generated by $X$ ; a polynomial is then an element of the underlying set of $R[X]$ .

The link to the original article is here.
