
Consider, for example, the elliptic PDE $u_{x}+u_y+u_{xx}+u_{yy}=0$ for $(x,y)\in[0,\infty)^2$.

Solution methods typically require me to impose boundary conditions. Often, these arise naturally from applications (physics, biology, economics, etc.), but sometimes one may not know ex ante how the function should behave on a particular boundary.

Suppose my modeling setting imposes a behavior on $u$ as $x\to\infty$, at $y=0$, and as $y\to\infty$. However, the application I have in mind gives me no priors on what $u$ does at $x=0$. Nonetheless, I have to impose something on this boundary. Can I simply require that the PDE itself continues to hold there, $$u_x(0,y)+u_y(0,y)+u_{xx}(0,y)+u_{yy}(0,y)=0?$$ Because we are on the lower boundary of the $x$-domain, we can impose this condition via finite differences (forward in $x$), \begin{align*} \frac{u_{1,j}-u_{0,j}}{\Delta x}+\frac{u_{0,j+1}-u_{0,j-1}}{2\Delta y}+\frac{u_{2,j}-2u_{1,j}+u_{0,j}}{(\Delta x)^2}+\frac{u_{0,j+1}-2u_{0,j}+u_{0,j-1}}{(\Delta y)^2}=0, \end{align*} where $u_{i,j}=u(x_i,y_j)$.
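In code, the boundary stencil would look something like the following minimal sketch (assuming a uniform grid; the test function $u=e^{-x}$, which happens to solve the PDE, is only for illustration):

```python
import numpy as np

def boundary_residual(u, dx, dy, j):
    """Residual of the PDE imposed at the x=0 boundary (i=0), row j:
    forward differences in x, central differences in y."""
    ux  = (u[1, j] - u[0, j]) / dx
    uy  = (u[0, j+1] - u[0, j-1]) / (2 * dy)
    uxx = (u[2, j] - 2*u[1, j] + u[0, j]) / dx**2
    uyy = (u[0, j+1] - 2*u[0, j] + u[0, j-1]) / dy**2
    return ux + uy + uxx + uyy

# Sanity check with the exact solution u = exp(-x) (so u_x + u_xx = 0):
dx = dy = 1e-3
x = np.arange(3) * dx
y = np.arange(3) * dy
u = np.exp(-x)[:, None] * np.ones_like(y)[None, :]
print(boundary_residual(u, dx, dy, 1))  # small: O(dx) truncation error
```
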

So, instead of coming up with some Dirichlet or Neumann conditions, I essentially just impose the PDE itself on one (or several) of the boundaries. Is this an approach that has been used before? Do you know of problems with similar boundary conditions?

Alex
    Have you any reason to believe that a unique solution exists in theory? I am afraid you may have a problem with that... – Ali Jan 13 '21 at 13:27

2 Answers


The underlying problem is that the boundary condition you have provided does not assist in evaluating the solution.

Consider that, for your PDE, if $u=f(x,y)$ is a solution, then so is $u=f(x,y)+C$ for any constant $C$. This constant cannot be determined using your boundary condition.

Indeed, consider that the following expression is a solution for any choice of constants $C_i$: $$ u=C_1+C_2(x-y)+C_3e^{-x}+C_4e^{-y}+C_5e^{-x-y} $$ None of these constants can be determined by the boundary condition as provided.
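This is easy to verify symbolically; for instance, a quick SymPy check (a sketch) confirms that the whole five-parameter family satisfies the PDE:

```python
import sympy as sp

x, y = sp.symbols('x y')
C1, C2, C3, C4, C5 = sp.symbols('C1:6')

# The five-parameter family of solutions:
u = C1 + C2*(x - y) + C3*sp.exp(-x) + C4*sp.exp(-y) + C5*sp.exp(-x - y)
pde = sp.diff(u, x) + sp.diff(u, y) + sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(pde))  # 0, for every choice of the constants
```
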


That being said, there are often implied conditions that can be used to obtain boundary conditions. For example, if you are working in a real-world context, you may need to apply an energy-conservation requirement. Alternatively, boundedness and related concepts often imply some form of boundary condition. For instance, consider $$ xy'+y=0 $$ In the limit as $x\to0$, we get $y=0$ unless $y'$ tends to infinity. Indeed, the general solution is $y=C/x$, and so if $y$ must remain bounded, then $y(0)=0$, and thus $y=0$.
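For the ODE example, a quick SymPy check (a sketch) recovers the general solution and confirms that it satisfies the equation, so boundedness at $x=0$ forces the constant to vanish:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# x*y' + y = 0 has general solution y = C1/x,
# which is unbounded as x -> 0 unless C1 = 0.
sol = sp.dsolve(sp.Eq(x*y(x).diff(x) + y(x), 0), y(x))
print(sol)
```
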

However, this is actually a separate condition on the problem that generates a boundary condition. You cannot simply apply the DE to the boundary, as it provides no additional information, and boundary conditions are applied to provide information that is otherwise missing.


The mistake you make is the claim that you must apply something at the boundary. This is untrue. You must apply a condition that allows you to identify the correct solution (unless you want a family of solutions, that is).

The nature of the condition will depend on the goal of the differential equation. As mentioned above, sometimes you need to apply some energy-conservation condition. Other conditions are possible - minimisation conditions are common. For example, you might want the solution to your problem that has the least amount of variation (that is, minimise $\iint u_{x}^2+u_{y}^2\ dA$).

There is one other category - cases where your choice won't matter. This is much like how, when performing integration by parts, you don't need to include the constant of integration for the "$\int dv$" - it will cancel out anyway. In this situation, pick the simplest boundary condition for the problem - for instance, if $u(0,0)=1$ and $u(0,\infty)=0$, then you might pick $u(0,y)=e^{-y}$.


EDIT: Here, I'll show what your forward-difference application of the PDE at the boundary is actually assuming for the boundary. Assuming you use central difference for first derivatives when not at the boundary, you have... $$ \begin{align*} \frac{u_{1,j}-u_{0,j}}{\Delta x}+\frac{u_{0,j+1}-u_{0,j-1}}{2\Delta y}+\frac{u_{2,j}-2u_{1,j}+u_{0,j}}{(\Delta x)^2}+\frac{u_{0,j+1}-2u_{0,j}+u_{0,j-1}}{(\Delta y)^2}&=0\\ \frac{u_{2,j}-u_{0,j}}{2\Delta x}+\frac{u_{1,j+1}-u_{1,j-1}}{2\Delta y}+\frac{u_{2,j}-2u_{1,j}+u_{0,j}}{(\Delta x)^2}+\frac{u_{1,j+1}-2u_{1,j}+u_{1,j-1}}{(\Delta y)^2}&=0 \end{align*} $$ With a bit of algebra, we can cancel the $u_{2,j}$ terms to get $$\begin{align*} \left(\frac12+\frac{2+\Delta x}{(\Delta y)^2}\right)u_{0,j} - \left(\frac12+\frac2{(\Delta y)^2}\right)u_{1,j} & \\ + \left(\frac{\Delta x}{4\Delta y} - \frac{\Delta x}{2(\Delta y)^2} + \frac{1}{2\Delta y} - \frac{1}{(\Delta y)^2}\right)u_{0,j-1} - \left(\frac{1}{2\Delta y} - \frac{1}{(\Delta y)^2}\right)u_{1,j-1} & \\ - \left(\frac{\Delta x}{4\Delta y} + \frac{\Delta x}{2(\Delta y)^2} + \frac{1}{2\Delta y} + \frac{1}{(\Delta y)^2}\right)u_{0,j+1} + \left(\frac{1}{2\Delta y} + \frac{1}{(\Delta y)^2}\right)u_{1,j+1} & =0 \end{align*}$$ Now Taylor expand around the $(0,j)$ point, treating both $\Delta x$ and $\Delta y$ as proportional to $\Delta$. The $O(1/\Delta)$ and $O(1)$ contributions cancel identically (a constant function satisfies both stencils exactly, so no term proportional to $u$ itself can survive), and the leading order that remains is $$ \frac{\Delta x}{2}\left(u_{xx}+2u_{xy}+2u_{xyy}-\left(u_x+u_y+u_{xx}+u_{yy}\right)\right)+O(\Delta^2)=0. $$ Since the scheme already enforces $u_x+u_y+u_{xx}+u_{yy}=0$, in the limit of refinement your boundary treatment is effectively imposing the extra differential condition $$ u_{xx}(0,y)+2u_{xy}(0,y)+2u_{xyy}(0,y)=0, $$ rather than any particular Dirichlet or Neumann data.
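The elimination step can be checked symbolically. Here is a short SymPy sketch (with the grid values as free symbols) confirming that eliminating $u_{2,j}$ from the two stencils, up to an overall factor of $-(2+\Delta x)/2$, reproduces the combined equation above:

```python
import sympy as sp

dx, dy = sp.symbols('Dx Dy', positive=True)
# Grid values around the boundary point (0, j):
u00, u10, u20, u0m, u0p, u1m, u1p = sp.symbols('u00 u10 u20 u0m u0p u1m u1p')

# PDE imposed at the boundary (forward differences in x):
eq1 = ((u10 - u00)/dx + (u0p - u0m)/(2*dy)
       + (u20 - 2*u10 + u00)/dx**2 + (u0p - 2*u00 + u0m)/dy**2)
# Interior (central-difference) stencil at (1, j):
eq2 = ((u20 - u00)/(2*dx) + (u1p - u1m)/(2*dy)
       + (u20 - 2*u10 + u00)/dx**2 + (u1p - 2*u10 + u1m)/dy**2)

# Eliminate u20 using eq2, then rescale to match the combined equation:
combined = eq1.subs(u20, sp.solve(sp.Eq(eq2, 0), u20)[0])
combined = sp.simplify(-(2 + dx)/2 * combined)

claimed = ((sp.Rational(1, 2) + (2 + dx)/dy**2)*u00
           - (sp.Rational(1, 2) + 2/dy**2)*u10
           + (dx/(4*dy) - dx/(2*dy**2) + 1/(2*dy) - 1/dy**2)*u0m
           - (1/(2*dy) - 1/dy**2)*u1m
           - (dx/(4*dy) + dx/(2*dy**2) + 1/(2*dy) + 1/dy**2)*u0p
           + (1/(2*dy) + 1/dy**2)*u1p)

print(sp.simplify(combined - claimed))  # 0
```
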

However, be careful: this analysis assumes $\Delta x$ and $\Delta y$ are kept in proportion. I suspect that some issues may arise if you reduce $\Delta x$ without reducing $\Delta y$.

Glen O
  • Thank you very much for your detailed answer. I guess it all boils down to uniqueness. Often, there is a family of functions which satisfies a differential equation, and the boundary conditions pin it down to one particular solution. Suppose I have strong priors that a unique solution exists, I specify three boundaries explicitly, and I use the trick from my question to approximate the solution to the PDE. If the resulting solution behaves exactly as I would expect, then this approach seems justifiable? – Alex Jan 20 '21 at 17:08
  • @Alex - it depends on the context, really. Depending on the specific technique used, you may get a really well-behaved solution, or you might get a hot mess. The method you've used basically should make sure that the behaviour at the boundary isn't crazy... but I'll edit into my answer what it's actually doing. – Glen O Jan 21 '21 at 00:18

Since you're using a finite difference method, you're turning the PDE into a set of algebraic equations, as I'm sure you know.

It seems to me that you won't get a solution, because you are just adding an equation (expanding the dimensions of the matrix $A$ in the equation $Ax = b$). Your issue is that you don't have the first or last entries of the $b$ vector in that equation, so you're just finding the kernel of the system; but if the system is non-singular (which I think it is), then the kernel is just zero.

So just a few ideas as far as options go:

  • Since you're working in a semi-infinite domain, maybe try other methods of solution, like similarity solutions or the method of characteristics.

  • If you need to use a finite difference approach, then maybe use some scaling arguments. Does it really matter what value $u$ takes on at that boundary, or is it arbitrary, with the relevant physical phenomena really happening far away from that boundary? Maybe try a bunch of values for $u$ or $\frac{\partial u}{\partial n}$ at the boundary. You're in luck, since your governing equation is linear: some pattern should emerge as you vary the prescribed boundary condition.
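The linearity point can be seen concretely. Here is a minimal sketch (using a hypothetical 1-D analogue, $u'+u''=0$ on $[0,1]$, with made-up grid parameters) showing that the computed solution responds linearly to the prescribed boundary value:

```python
import numpy as np

def solve_1d(alpha, N=50, L=1.0):
    """Solve u' + u'' = 0 on [0, L] with u(0) = alpha, u(L) = 0
    by central finite differences (a 1-D analogue of the x-part of the PDE)."""
    h = L / N
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0; b[0] = alpha          # Dirichlet condition at x = 0
    A[N, N] = 1.0; b[N] = 0.0            # Dirichlet condition at x = L
    for i in range(1, N):                # interior stencil rows
        A[i, i - 1] = 1/h**2 - 1/(2*h)
        A[i, i]     = -2/h**2
        A[i, i + 1] = 1/h**2 + 1/(2*h)
    return np.linalg.solve(A, b)

# Doubling the prescribed boundary value doubles the whole solution:
u1, u2 = solve_1d(1.0), solve_1d(2.0)
print(np.allclose(2*u1, u2))  # True
```

Because only $b$ depends on the prescribed value, the map from boundary data to solution is linear, which is exactly the pattern suggested above.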

Hope this helps!

Emerson
  • I don’t think $A$ needs to grow. Suppose I have $NM$ total grid points. Then $A\in\mathbb{R}^{NM\times NM}$ and $x,b\in\mathbb{R}^{NM}$. If I impose Dirichlet boundary conditions, some rows of $A$ would be zeros everywhere except for a 1 as the diagonal element, and $b$ would include the boundary condition. For all other grid points, there is a row of the product $Ax$ which corresponds to their finite difference scheme. Imposing the PDE on the boundary means that $A$ never has a "unit vector" row, but $Ax$ still represents the finite difference scheme for the boundary points. Either way, $A$ wouldn’t grow. – Alex Jan 16 '21 at 12:56
  • Tinkering with the details of your chosen numerical method is at best an indirect way to address your fundamental issue here, which is most straightforwardly expressed as a problem in PDE theory. Although the demonstration may be complicated by the unboundedness of your domain (what exactly are the conditions at infinity?), chances are that there will exist a different solution for any smooth boundary condition along $x=0$. So how will you decide on one? What strikes me as dangerous about all this is that by pressing on blindly you may well get some numbers out. But they will be meaningless. – Ali Jan 19 '21 at 11:22