If a function $f : \mathbb Z\times \mathbb Z \rightarrow \mathbb{R}^{+} $ satisfies the following condition

$$\forall x, y \in \mathbb{Z}, f(x,y) = \dfrac{f(x + 1, y)+f(x, y + 1) + f(x - 1, y) +f(x, y - 1)}{4}$$

then is $f$ a constant function?

Willie Wong
Jineon Baek

4 Answers

You can prove this with probability.

Let $(X_n)$ be the simple symmetric random walk on $\mathbb{Z}^2$. Since $f$ is harmonic, the process $M_n:=f(X_n)$ is a martingale. Because $f\geq 0$, the process $M_n$ is a non-negative martingale and so must converge almost surely by the Martingale Convergence Theorem. That is, we have $M_n\to M_\infty$ almost surely.

But $(X_n)$ is irreducible and recurrent, and so visits every state infinitely often. Thus, with probability one, the sequence $f(X_n)$ takes each value in the range of $f$ infinitely often.

Thus $f$ is a constant function, since the sequence $M_n=f(X_n)$ can't take on distinct values infinitely often and still converge.
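The two ingredients (harmonicity makes $f(X_n)$ a martingale with constant mean) can be spot-checked numerically. A minimal sketch, using the signed harmonic function $f(x,y)=x$ as an illustrative choice (it lies outside the theorem's positivity hypothesis, but the martingale property needs only harmonicity):

```python
import random

# f(x, y) = x is harmonic on Z^2: each lattice point's value equals the
# average over its four neighbours.  (Illustrative choice, not from the proof.)
def f(x, y):
    return x

# Exact check of the mean-value property at a few lattice points.
for x in range(-3, 4):
    for y in range(-3, 4):
        assert f(x, y) == (f(x + 1, y) + f(x, y + 1)
                           + f(x - 1, y) + f(x, y - 1)) / 4

# Monte Carlo check that M_n = f(X_n) keeps constant expectation f(0,0) = 0.
random.seed(0)
STEPS, TRIALS = 20, 20000
moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
total = 0.0
for _ in range(TRIALS):
    x, y = 0, 0
    for _ in range(STEPS):
        dx, dy = random.choice(moves)
        x, y = x + dx, y + dy
    total += f(x, y)
mean = total / TRIALS
assert abs(mean) < 0.2   # close to f(0,0) = 0, up to sampling error
```

Of course the simulation cannot see the convergence/recurrence tension that drives the proof; it only confirms the martingale identity $\mathbb{E}[f(X_n)]=f(X_0)$.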

  • I love probabilistic arguments in analysis! Very nice. – JT_NL Jul 17 '11 at 11:46
  • @Jonas Thanks. I'm not sure how to prove this *without* probability, though I suppose it is possible. –  Jul 17 '11 at 11:50
  • This is indeed very nice: Question: Is it possible to change this argument in such a way that it applies for $\mathbb{Z}^n$ instead of $\mathbb{Z}^2$ only? Is it even true that a non-negative harmonic function on $\mathbb{Z}^n$ is constant for $n \geq 3$? For bounded ones this seems clear by considering the Poisson boundary. – t.b. Jul 17 '11 at 11:53
  • @Theo I was thinking the same thing. I seem to recall another probabilistic proof for $d\geq 3$, but not the details. Let me think about it. –  Jul 17 '11 at 11:57
  • @Andrew This proof only works for $d=1,2$, but I think there is another proof for $d\geq 3$. Let me work on it. –  Jul 17 '11 at 11:58
  • @Byron: [This paper](http://www.math.wustl.edu/~sawyer/hmhandouts/martbrwf.pdf) contains the claim that it *is* true that "nonnegative nearest-neighbors harmonic function on $\mathbb{Z}^d$ are constant for any $d$" on page 2. – t.b. Jul 17 '11 at 12:08
  • And following the references therein one finds Theorems 7.1 and 7.3 in [Woess's survey](http://www.math.tugraz.at/~woess/papers/RWsurvey.pdf) that gives a lot of references. It would be nice to extract a crisp proof from there, though. – t.b. Jul 17 '11 at 12:24
  • (Sorry for posting this comment as an answer; I do not have any reputation points yet.) Nice proof! But, even with the risk of sounding stupid, let me confirm that I understood correctly: is this the Liouville's theorem from complex analysis? This is related to Julian Aguirre's comment. Doesn't Liouville's theorem require that the entire function be bounded? What made it possible for us to work with just a lower bound (and no upper bound) on $f$ in the discrete case? Thanks! – Srivatsan Jul 18 '11 at 12:35
  • The usual Liouville theorem also holds with just a one-sided bound. – GEdgar Jul 18 '11 at 13:56
  • If you know that the real part of an entire function $f(z)$ is non-negative on the complex plane, what can you say about the function $g(z)=f(z)/(1+f(z))$? – Jyrki Lahtonen Jul 18 '11 at 14:07
I can give a proof for the $d$-dimensional case: if $f\colon\mathbb{Z}^d\to\mathbb{R}^+$ is harmonic then it is constant. The following is based on a quick proof that I mentioned in the comments to the same (closed) question on MathOverflow, Liouville property in Zd. [Edit: I updated the proof, using a random walk, to simplify it.]

First, as $f(x)$ is equal to the average of the values of $f$ over the $2d$ nearest neighbours of $x$, we have the inequality $f(x)\ge(2d)^{-1}f(y)$ whenever $x,y$ are nearest neighbours. If $\Vert x\Vert_1$ is the length of the shortest path from $x$ to $0$ (the taxicab metric, or $L^1$ norm), this gives $f(x)\le(2d)^{\Vert x\Vert_1}f(0)$. Now let $X_n$ be a simple symmetric random walk in $\mathbb{Z}^d$ starting from the origin and, independently, let $T$ be a random variable supported on the nonnegative integers with $\mathbb{E}[(2d)^{2T}] < \infty$. Then, $X_T$ has support $\mathbb{Z}^d$ and, for nonnegative harmonic $f$, $\mathbb{E}[f(X_T)]=f(0)$ and $\mathbb{E}[f(X_T)^2]\le\mathbb{E}[(2d)^{2T}]f(0)^2$. By compactness, we can choose $f$ with $f(0)=1$ to maximize $\Vert f\Vert_2\equiv\mathbb{E}[f(X_T)^2]^{1/2}$.

Writing $e_i$ for the unit vector in direction $i$, set $f_i^\pm(x)=f(x\pm e_i)/f(\pm e_i)$. Then, $f$ is equal to a convex combination of $f^+_i$ and $f^-_i$ over $i=1,\ldots,d$. Also, by construction, $\Vert f\Vert_2\ge\Vert f^\pm_i\Vert_2$. Comparing with the triangle inequality, we must have equality here, and $f$ is proportional to $f^\pm_i$. This means that there are constants $K_i > 0$ such that $f(x+e_i)=K_if(x)$. The average of $f$ on the $2d$ nearest neighbours of the origin is $$ \frac{1}{2d}\sum_{i=1}^d(K_i+1/K_i). $$ However, for positive $K$, $K+K^{-1}\ge2$ with equality iff $K=1$. So, $K_i=1$ and $f$ is constant.

Now, if $g$ is a positive harmonic function, then $\tilde g(x)\equiv g(x)/g(0)$ satisfies $\mathbb{E}[\tilde g(X_T)]=1$. So, $$ {\rm Var}(\tilde g(X_T))=\mathbb{E}[\tilde g(X_T)^2]-1\le\mathbb{E}[f(X_T)^2]-1=0, $$ and $\tilde g$ is constant.
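A random time $T$ with the required integrability does exist; a minimal sketch, assuming a geometric law $P(T=t)=(1-c)c^t$ with ratio $c<(2d)^{-2}$ (the specific value $c=1/32$ for $d=2$ is an illustrative choice, not from the answer):

```python
# E[(2d)^(2T)] for geometric T is a geometric series with ratio (2d)^2 * c,
# which converges precisely when c < (2d)^(-2).
d = 2
c = 1 / 32                                # < (2d)^(-2) = 1/16
ratio = (2 * d) ** 2 * c                  # = 1/2 < 1, so the series converges
expectation = (1 - c) / (1 - ratio)       # closed form: 31/16 = 1.9375

# Cross-check the closed form against a truncated sum of the series.
partial = sum((1 - c) * ratio ** t for t in range(200))
assert abs(expectation - 31 / 16) < 1e-12
assert abs(partial - expectation) < 1e-9
```

Any such $T$ has full support on the nonnegative integers, which is what gives $X_T$ full support on $\mathbb{Z}^d$.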

George Lowther
  • Taxicab metric. Never heard that name (I learn maths in french at school). Funny! – Patrick Da Silva Jul 17 '11 at 20:24
  • @Patrick: Also called the Manhattan metric. – George Lowther Jul 17 '11 at 20:30
  • LAAAAAAAAAWL. Funnier. – Patrick Da Silva Jul 17 '11 at 20:39
  • Note: A similar proof will also show that harmonic $f\colon\mathbb{R}^d\to\mathbb{R}^+$ is constant. Interestingly, in the two dimensional case, Byron's proof can be modified to show that harmonic $f\colon\mathbb{R}^2\setminus\{0\}\to\mathbb{R}^+$ is constant (as 2d Brownian motion has zero probability of hitting 0 at positive times). Neither of the proofs generalize to harmonic $f\colon\mathbb{R}^d\setminus\{0\}\to\mathbb{R}^+$ for $d\not=2$. In fact, considering $f(x)=\Vert x\Vert^{2-d}$, we see that $f$ need not be constant for $d\not=2$. – George Lowther Jul 17 '11 at 22:54
Here is an elementary proof assuming we have bounds for $f$ on both sides.

Define a random walk on $\mathbb{Z}^2$ which, at each step, stays put with probability $1/2$ and moves to each of the four neighboring vertices with probability $1/8$. Let $p_k(u,v)$ be the probability that the walk travels from $(m,n)$ to $(m+u, n+v)$ in $k$ steps. Then, for any $(m, n)$ and $k$, we have $$f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} p_k(u,v) f(m+u,n+v).$$ So $$f(m+1, n) - f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} \left( p_k(u-1,v) - p_k(u,v) \right) f(m+u,n+v).$$ If we can show that $$\lim_{k \to \infty} \sum_{(u,v) \in \mathbb{Z}^2} \left| p_k(u-1,v) - p_k(u,v) \right| =0 \quad (\ast)$$ we deduce that $$f(m+1,n) = f(m,n)$$ and we win.

Remark: More generally, we could stay put with probability $p$ and travel to each neighbor with probability $(1-p)/4$. If we choose $p$ too small, then $p_k(u,v)$ tends to be larger for $u+v$ even than for $u+v$ odd, rather than depending "smoothly" on $(u,v)$. I believe that $(\ast)$ is true for any $p>0$, but this elementary proof only works for $p > 1/3$. For concreteness, we'll stick to $p=1/2$.

We study $p_k(u,v)$ using the generating function expression $$\left( \frac{x+x^{-1}+y+y^{-1}+4}{8} \right)^k = \sum_{u,v} p_k(u,v) x^u y^v.$$

Lemma: For fixed $v$, the quantity $p_k(u,v)$ increases as $u$ climbs from $-\infty$ up to $0$, and then decreases as $u$ continues climbing from $0$ to $\infty$.

Proof: We see that $\sum_u p_k(u,v) x^u$ is a positive sum of Laurent polynomials of the form $(x/8+1/2+x^{-1}/8)^j$. So it suffices to prove the same thing for the coefficients of this Laurent polynomial. In other words, writing $(x^2+4x+1)^j = \sum e_i x^i$, we want to prove that $e_i$ is unimodal with largest value in the center. Now, $e_i$ is the $i$-th elementary symmetric function in $j$ copies of $2+\sqrt{3}$ and $j$ copies of $2-\sqrt{3}$. By Newton's inequalities, $e_i^2 \geq \frac{(i+1)(2j-i+1)}{i(2j-i)} e_{i-1} e_{i+1} > e_{i-1} e_{i+1}$, so $e_i$ is unimodal; by symmetry, the largest value is in the center. (The condition $p>1/3$ in the above remark is exactly the condition that this quadratic has real roots.) $\square$
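The lemma can be spot-checked numerically. A sketch: multiplying $(x/8+1/2+x^{-1}/8)^j$ by $(8x)^j$ clears denominators, so its coefficient sequence is that of $(x^2+4x+1)^j$, which should be palindromic and strictly increasing up to the middle:

```python
# Coefficients of (x^2 + 4x + 1)^j, built by repeated convolution.
def coeffs(j):
    c = [1]                              # (x^2 + 4x + 1)^0
    for _ in range(j):
        nxt = [0] * (len(c) + 2)
        for i, a in enumerate(c):
            nxt[i] += a                  # constant term 1
            nxt[i + 1] += 4 * a          # middle term 4x
            nxt[i + 2] += a              # leading term x^2
        c = nxt
    return c

for j in range(1, 12):
    e = coeffs(j)
    assert e == e[::-1]                            # symmetric (palindromic)
    assert all(e[i] < e[i + 1] for i in range(j))  # increasing up to centre
```

For instance $j=2$ gives $1, 8, 18, 8, 1$, visibly unimodal with its peak in the centre.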

Corollary: $$\sum_u \left| p_k(u-1,v) - p_k(u,v) \right| = 2 p_k(0,v).$$

Proof: The above lemma tells us the signs of all the absolute values; the sum is \begin{multline*} \cdots + (p_k(-1,v) - p_{k}(-2,v)) + (p_k(0,v) - p_{k}(-1,v)) + \\ (p_k(0,v) - p_k(1,v)) + (p_k(1,v) - p_k(2,v)) + \cdots = 2 p_k(0,v). \qquad \square\end{multline*}

So, in order to prove $(\ast)$, we must show that $\lim_{k \to \infty} \sum_v p_k(0,v)=0$. In other words, we must show that the coefficient of $x^0$ in $\left( \frac{x}{8}+\frac{3}{4} + \frac{x^{-1}}{8} \right)^k$ goes to $0$.

There are probably a zillion ways to do this; here is a probabilistic one. We are rolling an $8$-sided die $k$ times, and we want the probability that the numbers of ones and twos are precisely equal. The probability that we roll fewer than $k/5$ ones and twos approaches $0$ by the law of large numbers (which can be proved elementarily by, for example, Chebyshev's inequality). If we roll $2r > k/5$ ones and twos, the probability that we roll exactly the same number of ones and twos is $$2^{-2r} \binom{2r}{r} < \frac{1}{\sqrt{\pi r}} < \frac{1}{\sqrt{\pi k/10}},$$ which approaches $0$ as $k \to \infty$. See here for elementary proofs of the bound on $\binom{2r}{r}$.
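Both the corollary's identity and the decay needed for $(\ast)$ can be checked numerically by iterating the lazy walk's transition kernel directly. A minimal sketch ($K=40$ steps is an arbitrary illustrative choice; the grid is large enough to hold all the mass):

```python
# Iterate the lazy-walk transition on Z^2: stay with probability 1/2,
# move to each of the four neighbours with probability 1/8.
K = 40
size = 2 * K + 1                       # the walk cannot leave this grid
p = [[0.0] * size for _ in range(size)]
p[K][K] = 1.0                          # start at the origin (grid centre)
for _ in range(K):
    q = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            m = p[i][j]
            if m:
                q[i][j] += m / 2
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    q[i + di][j + dj] += m / 8
    p = q

def P(i, j):
    # p_K(u, v) with (u, v) = (i - K, j - K); zero off the grid
    return p[i][j] if 0 <= i < size and 0 <= j < size else 0.0

lhs = sum(abs(P(i - 1, j) - P(i, j))
          for i in range(size + 1) for j in range(size))
rhs = 2 * sum(p[K][j] for j in range(size))   # 2 * sum_v p_K(0, v)
assert abs(lhs - rhs) < 1e-12   # the corollary, summed over all v
assert rhs < 0.3                # small already at K = 40; decays ~ 1/sqrt(K)
```

The equality of `lhs` and `rhs` is exactly the corollary; the smallness of `rhs` is the $1/\sqrt{k}$-type decay that the die-rolling argument proves.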

I wrote this in two dimensions, but the same proof works in any number of dimensions.

David E Speyer
  • Hi, I know this is an old post (and an amazing solution). Do you maybe have a few words as to the motivation you had when solving it? I have 3 questions (sorry the formatting won't let me enter): 1. What was the motivation to approach it with a random walk? 2. Did you try small cases and conjecture the unimodality for a fixed $v$? Or is there a good heuristic? Thanks in advance, I love your answers :) – Andy Sep 23 '17 at 01:25
Here is a proof, taken from a book by Dynkin and Yushkevich, that if $f\colon\mathbb{Z}^d \to \mathbb{R}$ is harmonic and bounded then $f$ is constant.

First, a Lemma: if $g\colon \mathbb{Z}^d\to \mathbb{R}$ is harmonic and there exists $L>0$ such that $|g(x) + g(x+e_1) + \cdots + g(x + k e_1)| \le L$ for all $x \in \mathbb{Z}^d$ and $k\ge 0$, then $g \equiv 0$. For suppose that $g$ takes positive values, and let $\sup g = M > 0$ (taking $k=0$ in the hypothesis shows $g$ is bounded, so $M<\infty$). Note that if $g(x)> M-\epsilon$, then for all neighbors $x'$ of $x$ we have $g(x')> M-2 d \epsilon$; otherwise the average around $x$ would be $\le M-\epsilon$. In particular, $g(x+e_1) > M-2 d \epsilon$. Now, by taking $\epsilon$ small enough we can ensure that an arbitrarily long chain of values $g(x)$, $g(x+e_1)$, $\ldots$, $g(x+k e_1)$ are all $> M/2$; for $k > 2L/M$ the sum of such a chain exceeds $L$, a contradiction. Applying the same argument to $-g$ rules out negative values, so $g \equiv 0$.

Now, consider $f\colon \mathbb{Z}^d \to \mathbb{R}$ harmonic and bounded. Then the function $g(x) := f(x+e_1) - f(x)$ is again harmonic, and it satisfies the condition of the lemma: the sum telescopes, $g(x) + g(x+e_1) + \cdots + g(x+ke_1) = f(x+(k+1)e_1) - f(x)$, which is bounded by $2\sup|f|$. We conclude that $f(x) \equiv f(x+e_1)$. Similarly for all the other unit vectors $e_i$, and we conclude that $f$ is constant.
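The telescoping step is purely algebraic, so it can be sketched for any bounded function; below, $f(x)=\sin x$ on $\mathbb{Z}$ is an arbitrary bounded (non-harmonic) illustration, not from the book:

```python
import math

# g(x) = f(x+1) - f(x) has partial sums g(x) + ... + g(x+k) = f(x+k+1) - f(x),
# so every partial sum is bounded by 2 * sup|f|, regardless of k and x.
f = math.sin
def g(x):
    return f(x + 1) - f(x)

worst = max(abs(sum(g(x + i) for i in range(k + 1)))
            for x in range(-30, 30) for k in range(60))
assert worst <= 2.0          # 2 * sup|sin| = 2, the lemma's constant L
```

This is exactly the bound $L = 2\sup|f|$ that the lemma requires of $g$.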

orangeskid