
Is there a not identically zero, real-analytic function $f:\mathbb{R}\rightarrow\mathbb{R}$, which satisfies

$$f(n)=f^{(n)}(0),\quad n\in\mathbb{N} \text{ or } \mathbb N^+?$$

What I got so far:

Set

$$f(x)=\sum_{n=0}^\infty\frac{a_n}{n!}x^n,$$

then the case $n=0$ holds automatically, and for $n\ge 1$ we have

$$a_n=f^{(n)}(0)=f(n)=\sum_{k=0}^\infty\frac{a_k}{k!}n^k.$$

Now $a_1=\sum_{k=0}^\infty\frac{a_k}{k!}1^k=a_0+a_1+\sum_{k=2}^\infty\frac{a_k}{k!},$ so

$$\sum_{k=2}^\infty\frac{a_k}{k!}=-a_0.$$

For $n=2$ we find

$$a_2=\sum_{k=0}^\infty\frac{a_k}{k!}2^k=a_0+a_1+2a_2+\sum_{k=3}^\infty\frac{a_k}{k!}2^k.$$

The first case was somehow special since $a_1$ cancelled out, but now I have to juggle more and more terms.

I could express $a_1$ in terms of the higher $a$'s, and then for $n=3$ solve for $a_2$, and so on. I didn't get far, however. Is there a closed expression? My plan was to argue that if I find such an algorithm to express the $a$'s in terms of higher $a$'s, then, in the limit, the remaining sums would go to $0$ and I'd eventually find my function.
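One quick numeric probe of this recursion (a sketch, not a proof; the geometric trial sequence $a_k=\lambda^k$ is just a guess): with $a_k=\lambda^k$ the right-hand side collapses to $\sum_k (\lambda n)^k/k! = e^{\lambda n}$, so the recursion holds exactly when $e^\lambda=\lambda$, which has no real solutions but does have complex ones:

```python
import cmath
import math

# Find a complex lam with e^lam = lam by iterating the principal log
# (the fixed point ~0.318 + 1.337i of exp is attracting under log).
lam = 0.5 + 1.0j
for _ in range(200):
    lam = cmath.log(lam)
assert abs(cmath.exp(lam) - lam) < 1e-12

# With a_k = lam**k the recursion a_n = sum_k a_k n^k / k! becomes
# lam**n = e^{lam n}, which now holds for every n:
for n in range(1, 8):
    rhs = sum(lam ** k * n ** k / math.factorial(k) for k in range(80))
    assert abs(rhs - lam ** n) < 1e-9
```

Since the weights $n^k/k!$ are real and the recursion is linear in the $a_k$, taking real parts of everything ($a_k = \operatorname{Re}\lambda^k$) keeps the recursion intact.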

Or maybe there is a better approach to such a problem.

kahen
Nikolaj-K
  • What do you mean by non-trivial? Just not identically $0$? And why don't you want to use mollifier tricks? We can solve the problem with Borel's theorem, but it uses the previous tricks. – Davide Giraudo Dec 15 '11 at 22:19
  • By non-trivial I meant not equal to $0$, yes. And the question really became interesting for me after I translated it to this $a_n$ recursive series problem. Of course, since I only state the problem in terms of a set which is not dense in $\mathbb{R}$, I could detach the neighborhood around $x=0$ for $f(0)$ and its derivatives from the points $x=n>0$ for $f(n)$ via a mollifier, but that's somewhat beside the point. – Nikolaj-K Dec 15 '11 at 22:26
  • Perhaps you might want to replace the fuzzy "no tricks" condition with "real analytic", then? – hmakholm left over Monica Dec 15 '11 at 22:29
  • Okay you're right, Henning & Davide, I stated that clearer in the original post now. – Nikolaj-K Dec 15 '11 at 22:33
  • I am sure that this is not the answer that you are looking for, but doesn't $f(x):=1$ count? :p – g24l Dec 15 '11 at 23:45
  • g24l: you misunderstand the $f^{(n)}$ notation, I think – it refers to the $n$th derivative of $f$, which would be identically zero for the function you describe (and thus not equal to $f(n)$ for any $n$ other than $0$). – Steven Stadnicki Dec 16 '11 at 00:35
  • Could Taylor series help in this case? – NoChance Dec 16 '11 at 00:35
  • For what it's worth, $f(x)=\sin\left(\frac{\pi}{2}x\right)$ satisfies $f(n)=\left(\frac{2}{\pi}\right)^nf^{(n)}(0)$. – Jonas Meyer Dec 16 '11 at 00:39
  • Note that real entire functions need not have everywhere converging Taylor series. E.g. $\frac{1}{1+x^2}$. – Jonas Meyer Dec 16 '11 at 00:47
  • Naively, what is wrong with $f(x) = f(0) + f(1)x + \frac{1}{2}f(2)x^2 + \frac{1}{6}f(3)x^3 + \ldots$? If the question is whether there is such a function, do we have to exhibit it explicitly? – daniel Dec 16 '11 at 02:30
  • @daniel: To "define" something, you normally don't have it on both sides of the equation. – GEdgar Dec 16 '11 at 03:34

3 Answers


Let the complex number $c$ be a solution of $e^c=c$; for example $c = -W(-1)$, where $W$ is the Lambert W function. Then since the function $f$ defined by

$$ f(x) = \sum_{n=0}^\infty \frac{e^{cn}x^n}{n!} $$

evaluates to $e^{e^c x} = e^{cx}$, we have $f(n) = e^{cn} = f^{(n)}(0)$. For a real solution, write $c = a+bi$ with real part $a$ and imaginary part $b$, and let $g(x)$ be the real part of $f(x)$. More explicitly:

$$ g(x) = \sum_{n=0}^\infty \frac{e^{an}\cos(bn)\;x^n}{n!} $$

evaluates to $e^{ax}\cos(bx)$. With the principal branch of Lambert W, this is approximately:

$$ g(x) = e^{0.3181315052\,x} \cos(1.337235701\,x) $$
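This can be verified numerically; the sketch below finds the fixed point by iterating the principal $\log$ rather than calling a Lambert-W routine (the iteration converges to the conjugate of $-W(-1)$, which gives the same $g$ since $\cos$ is even):

```python
import cmath
import math

# Find a fixed point c of exp (e^c = c) by iterating the principal
# branch of log, which is attracting near 0.318 + 1.337i.
c = 0.5 + 1.0j
for _ in range(200):
    c = cmath.log(c)
assert abs(cmath.exp(c) - c) < 1e-12

a, b = c.real, c.imag
# g(x) = e^{a x} cos(b x) is the real part of e^{c x}, so its n-th
# derivative at 0 is Re(c^n); and g(n) = Re(e^{c n}) = Re((e^c)^n) = Re(c^n).
for n in range(10):
    g_at_n = math.exp(a * n) * math.cos(b * n)   # g(n)
    deriv  = (c ** n).real                       # g^{(n)}(0)
    assert abs(g_at_n - deriv) < 1e-9
```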

GEdgar
  • would the solution be unique, up to branch? or not necessarily? – daniel Dec 16 '11 at 04:39
  • I see no reason to suppose it is unique. If $f$ is a solution, then so is $\lambda f$ for any constant $\lambda$. Indeed, the problem is linear, so the set of solutions is a linear space. Probably infinite-dimensional... – GEdgar Dec 16 '11 at 04:45
  • @GEdgar: Thanks for the solution, looks very good. [I checked the first few cases of your approximation](http://img710.imageshack.us/img710/2629/solutionb.png) and it gives nice results. Would you elaborate on what made you think about the problem like this, using the $\exp$ function etc.? – Nikolaj-K Dec 16 '11 at 07:03
  • Also, by "the real part of the complex function", you also mean the restriction of this real part to real arguments, right? – Nikolaj-K Dec 16 '11 at 07:15
  • Note that there are infinitely many solutions of $e^c = c$ (LambertW has infinitely many branches), so the space of solutions is indeed infinite-dimensional. – Robert Israel Dec 16 '11 at 07:47
  • This is so cool! – Greg Martin Dec 16 '11 at 08:32
  • @Robert: But, of course, we can still ask if there are other solutions. Say we stick to series with infinite radius of convergence... – GEdgar Dec 16 '11 at 15:28
  • **what made you think about this problem like this** First was daniel's "Naively" comment. Having $f$ on both sides does not make it a definition. But sometimes when you have your unknown on both sides of an equation, you can use it to iterate, and (hopefully) converge to a solution. So I tried the series with $a_n=1$ and got $e^x$, then plugged in $a_n = e^n$ and got $e^{bx}$ for some $b$. So I could then try to solve for a particular $c$ such that, when I plug in $a_n = e^{cn}$ the result will be $e^{cx}$. These were all non-real, so I made the "real part" fix you see. – GEdgar Dec 16 '11 at 15:33
  • I see. I was thinking about your solution and noticed that since we evaluate the derivatives at $x_0=0$ and then $e^{Ax_0}\cos{(Bx_0)}=1$ and $\cos'{(x_0)}=0$, the values $f(n)$ are actually just the constant $A=0.318...$ to the power of $n$. Strange. Maybe that can be generalized and used to find other solutions. – Nikolaj-K Dec 16 '11 at 16:22

(not yet an answer, but too long for a comment)

Oops, I see there was a better answer by G. Edgar crossing. Possibly I'll delete this post soon.

For me this looks like an eigenvalue problem.
Let's use the following matrix and vector notation. The coefficients of the power series of the sought function f(x) are in a column vector A.
We denote by V(x) a "Vandermonde" row vector which contains the consecutive powers of x, such that $\small V(x) \cdot A = f(x) $.
We denote the diagonal matrix of consecutive factorials as F and its elementwise reciprocal as f.
Then we denote by ZV the matrix which collects the rows V(n) for consecutive n. Then we have first
$\qquad \small ZV \cdot A = F \cdot A $
and, rearranging the factorials,
$\qquad \small f \cdot ZV \cdot A = A $
which is an eigenvalue problem with eigenvalue $\small \lambda=1 $. Thus we have the formal problem of solving
$\qquad \small \left(f \cdot ZV - I \right) \cdot A = 0 $
However, at the moment I do not see how to move to the next step...


[added] The solution of G. Edgar gives the needed hint

If we do not rearrange, but expand:

$\qquad \small \begin{eqnarray} ZV \cdot A &=& F \cdot A \\ ZV \cdot (f \cdot F)\cdot A &=& F \cdot A \\ (ZV \cdot f) \cdot (F \cdot A) &=& (F \cdot A) \end{eqnarray} $

we get a better ansatz. Let's introduce the matrix constants $\small W=ZV \cdot f \qquad B=F \cdot A $ and rewrite this as
$\qquad \small W \cdot B = B $.
Now W is the Carleman matrix which maps $\small x \to \exp(x) $ by

$\qquad \small W \cdot V(x) = V(\exp(x)) $

Thus if $\small x = \exp(x) $ for some $x$, i.e. $x$ is a (complex) fixed point $\small t_k$ of $\small f(x)=\exp(x)$, we have one possible sought identity:

$\qquad \small W \cdot V(t_k) = V(t_k) \to B = V(t_k) \to A = f \cdot B = f \cdot V(t_k) $

Then the power series of the solution is

$\qquad \small f(x) = 1 + t_k x + t_k^2 x^2/2! + t_k^3 x^3/3! + \ldots $

and $\small f(x) = \exp(t_k \ x) $

Because there are infinitely many such fixed points (all of them complex) we have infinitely many solutions of this type (there might be other types; the vectors $\small V(t_k) $ need not be the only possible eigenvectors of W).
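The identity $\small W \cdot V(t_k) = V(t_k)$ can be checked numerically with a truncated Carleman matrix (a sketch; the fixed point is again found by iterating the principal $\log$, and only the first few rows are reliable at finite truncation):

```python
import cmath
import math

# Fixed point t of exp (e^t = t), found by iterating the attracting
# principal branch of log; numerically t ~ 0.318 + 1.337i.
t = 0.5 + 1.0j
for _ in range(200):
    t = cmath.log(t)
assert abs(cmath.exp(t) - t) < 1e-12

# Truncated Carleman matrix of exp: W[n][k] = n^k / k!.  Row n of
# W . V(t) is sum_k n^k t^k / k! = e^{n t}, which equals t^n because
# e^t = t; so V(t) = (1, t, t^2, ...) is a fixed vector of W.
N = 60                       # truncation order (ample for rows 0..9)
for n in range(10):
    row = sum((n * t) ** k / math.factorial(k) for k in range(N))
    assert abs(row - t ** n) < 1e-9
```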

Gottfried Helms

I thought the problem was interesting, and there seems to be a lot of guessing involved in coming up with a solution. So I would just like to supplement GEdgar's answer with my own dubious and handwavy approach, which might be a more direct way to motivate the solution.

$$f(n) = f^{(n)}(0)$$

I then do a bit of arrangement by rewriting in terms of a translation operator,

$$\left.T^n f(x)\right|_{x=0} = \left.\frac{d^n}{dx^n} f(x)\right|_{x=0}$$

It is sometimes valid to write a translation operator as $T = e^{\frac{d}{dx}}$ and so,

$$\left.(e^{\frac{d}{dx}})^n f(x)\right|_{x=0} = \left.\left(\frac{d}{dx}\right)^n f(x)\right|_{x=0}$$

From here I consider the operator equation:

$$e^D = D$$

which has the "solution" $D = -W(-1)$.

Now I can use this to create an ansatz of the form $f(x) = Ke^{-W(-1)x}$, which we can then check in a fairly straightforward manner by simply plugging it into the original $f(n) = f^{(n)}(0)$ and seeing that it does work out.
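A numerical version of that check (a sketch; the fixed point $c$ is obtained here by iterating the principal $\log$ rather than from a Lambert-W routine), which also illustrates that the problem is linear, so any constant multiple $Ke^{cx}$, and its real part, satisfy the relation:

```python
import cmath

# Fixed point c of exp: e^c = c (equal to -W(-1) up to conjugation),
# found by iterating the attracting principal branch of log.
c = 0.5 + 1.0j
for _ in range(200):
    c = cmath.log(c)
assert abs(cmath.exp(c) - c) < 1e-12

# f(x) = K e^{c x} gives f^{(n)}(0) = K c^n and f(n) = K e^{c n} = K c^n,
# so the relation holds for any constant K; taking real parts of both
# sides (the weights are real) gives real-valued solutions as well.
for K in (1.0, 2.5, -0.7):
    for n in range(8):
        f_at_n = K * cmath.exp(c * n)      # f(n)
        deriv  = K * c ** n                # f^{(n)}(0)
        assert abs(f_at_n - deriv) < 1e-9
        assert abs(f_at_n.real - deriv.real) < 1e-9
```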

user702789
  • I really like the translation operator approach. I'd cut it off at the last formula line, which amounts to $e^D-D=0$, and then study this in some function space context and see on which functions it could hold. Maybe something of spectral theory sorts. – Nikolaj-K Oct 30 '19 at 20:48