Can you help me explain the basic differences between FDM, FEM, and FVM?
Which method is the best, and why?
What are their advantages and disadvantages?

FEM is the most powerful, flexible, and mathematically sound of the methods you listed. – Shuhao Cao Jun 24 '13 at 01:21

There is also the "Material point method", where a continuum body is described by a number of small Lagrangian elements. It is not a mesh-based method and is instead categorized as a meshless/meshfree or continuum-based particle method. – skan Aug 07 '19 at 22:27
7 Answers
This is a difficult question to answer.
"The FDM is the oldest and is based upon the application of a local Taylor expansion to approximate the differential equations. The FDM uses a topologically square network of lines to construct the discretization of the PDE. This is a potential bottleneck of the method when handling complex geometries in multiple dimensions. This issue motivated the use of an integral form of the PDEs and subsequently the development of the finite element and finite volume techniques." (http://www2.imperial.ac.uk/ssherw/spectralhp/papers/HandBook.pdf)
Here are two references to review so you can get a better feel for these methods.
http://files.campus.edublogs.org/blog.nus.edu.sg/dist/4/1978/files/2012/01/CN4118R_Final_Report_U080118W_OliverYeo1r6dfjw.pdf (see page 10 for a very nice comparison for the types of problems they were interested in: computational fluid dynamics)
There are some nice references for these methods at http://www2.imperial.ac.uk/ssherw/spectralhp/papers/HandBook.pdf (See section 7 for very nice references)

Nice post! Sometimes OPs don't know that their questions are difficult to answer... +1 – amWhy Jun 26 '13 at 00:06

Wow, Amzoti, when you literally copy something, you should put quotation marks around the sentence, so that other people will not think that these wise words are your own. Without quotation marks, it can be considered an instance of plagiarism. I hope you will edit this post properly. – Artem May 16 '14 at 00:41
Here is an old scicomp.SE question that answers some of your question: What are criteria to choose between finite differences and finite elements?
In my humble opinion, FEM is the most flexible one in terms of dealing with complex geometry and complicated boundary conditions. FEM also allows adaptive/local procedures to get higher-order local approximation or to battle singularities. FEM's basis can be discontinuous and not well-defined pointwise, which is a nice heritage from the Hilbert space framework. For computational fluid dynamics and electromagnetism, FEM is the way to incorporate the intrinsic geometrical properties of the solutions.
For FVM: partly you can refer to my answer here: How should a numerical solver treat conserved quantities? It is also worth noting that classical FVM typically only achieves a low order of approximation.
Some recent developments in FEM address the problems I mentioned in the answer above. For example, for convection-dominated PDEs, the traditional continuous Galerkin framework for FEM does not work well: it introduces dissipation over time and oscillation across material layers in the numerical solution. Now there are Discontinuous Galerkin FEM (higher-order FVM) and hybridized DG-FEM (see here: Unified hybridization of discontinuous Galerkin, mixed, and continuous Galerkin methods for second order elliptic problems) to remedy these two effects.
FDM and FVM are easy to implement, but you trade this convenience of implementation for limited applicability to different PDEs.

I see papers on the finite difference method for non-uniform grids. So if you can do this, why not use finite differences for everything? Is there no way to deal with discontinuities with finite differences? – Frank Jul 30 '19 at 00:38

1@Frank Yeah, there are many new developments in FDM like mimetic methods and virtual elements in recent years, which can deal with discontinuities, and their formulations are as complicated as DG... – Shuhao Cao Jul 30 '19 at 19:54
It shall be argued in this post that the whole idea of one numerical method being superior to another is merely a prejudice that rests upon insufficient in-depth analysis of the real thing.
The argumentation will proceed by means of a two-dimensional example. The reader is invited not to skip through, but to take notice of the details.
Numerical analysis of diffusion starts with a well-known Partial Differential Equation (PDE). The problem will be restricted here to the simpler case of two space dimensions: $$ \frac{\partial Q_x}{\partial x} + \frac{\partial Q_y}{\partial y} = 0 $$ $ (x,y) = $ Cartesian coordinates. A possible interpretation of the vector $ (Q_x,Q_y) $ is the heat flux. The differential equation then follows from the law of conservation of energy. In the case of pure diffusion of heat, also known as conduction, the components of the heat flux are related to the temperature $T$ as follows: $$ Q_x = - \lambda \frac{\partial T}{\partial x} \qquad \qquad Q_y = - \lambda \frac{\partial T}{\partial y} $$ where $ \lambda = $ thermal conductivity. Hence the final differential equation for the temperature field is actually of second order. In order to make the PDE amenable to numerical treatment, an integration procedure has to be resorted to. At this point, there occurs a splitting into several distinct roads, all leading to a numerical solution, more or less efficiently.
Triangle isoparametrics
The simplest Finite Element in two dimensions (and my absolute favorite) is the linear triangle:
Let's summarize the isoparametrics (= affine transformation) in the first place:
$$
\begin{cases}
x = x_1 + (x_2-x_1)\xi + (x_3-x_1)\eta \\
y = y_1 + (y_2-y_1)\xi + (y_3-y_1)\eta \\
f = f_1 + (f_2-f_1)\xi + (f_3-f_1)\eta
\end{cases}
$$
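As a quick sanity check, here is a minimal Python sketch (vertex coordinates are arbitrary illustrative values, not from the original post) verifying that this affine map sends the local corners $(0,0)$, $(1,0)$, $(0,1)$ to vertices 1, 2, 3:

```python
# Sketch: verify the affine (isoparametric) map of the linear triangle.
# Vertex coordinates are arbitrary illustrative values.
def affine_map(xi, eta, p1, p2, p3):
    """Map local (xi, eta) to global (x, y) for a linear triangle."""
    x = p1[0] + (p2[0] - p1[0]) * xi + (p3[0] - p1[0]) * eta
    y = p1[1] + (p2[1] - p1[1]) * xi + (p3[1] - p1[1]) * eta
    return (x, y)

p1, p2, p3 = (0.0, 0.0), (2.0, 0.5), (0.5, 1.5)
assert affine_map(0, 0, p1, p2, p3) == p1   # local origin -> vertex 1
assert affine_map(1, 0, p1, p2, p3) == p2   # (1,0) -> vertex 2
assert affine_map(0, 1, p1, p2, p3) == p3   # (0,1) -> vertex 3
```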
It will be shown next how partial differentiation at such a linear triangle takes place.
First do the chain rules with global $(x,y)$ and local $(\xi,\eta)$ coordinates:
$$
\begin{cases} \Large
\frac{\partial f}{\partial \xi} =
\frac{\partial f}{\partial x}\frac{\partial x}{\partial \xi} +
\frac{\partial f}{\partial y}\frac{\partial y}{\partial \xi} \\ \Large
\frac{\partial f}{\partial \eta} =
\frac{\partial f}{\partial x}\frac{\partial x}{\partial \eta} +
\frac{\partial f}{\partial y}\frac{\partial y}{\partial \eta}
\end{cases} \quad \Longleftrightarrow \quad \Large
\begin{bmatrix} \frac{\partial f}{\partial \xi} \\ \frac{\partial f}{\partial \eta} \end{bmatrix} =
\begin{bmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi} \\
\frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} \end{bmatrix}
\begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix}
$$
But what we need is the inverse, with determinant / Jacobian
$\;\Delta = (\partial x / \partial \xi)(\partial y / \partial \eta)
- (\partial x / \partial \eta)(\partial y / \partial \xi)$ :
$$
\Large
\begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix} =
\begin{bmatrix} \frac{\partial x}{\partial \xi} & \frac{\partial y}{\partial \xi} \\
\frac{\partial x}{\partial \eta} & \frac{\partial y}{\partial \eta} \end{bmatrix}^{-1}
\begin{bmatrix} \frac{\partial f}{\partial \xi} \\ \frac{\partial f}{\partial \eta} \end{bmatrix} =
\begin{bmatrix} \frac{\partial y}{\partial \eta} & -\frac{\partial y}{\partial \xi} \\
-\frac{\partial x}{\partial \eta} & \frac{\partial x}{\partial \xi} \end{bmatrix} / \Delta
\begin{bmatrix} \frac{\partial f}{\partial \xi} \\ \frac{\partial f}{\partial \eta} \end{bmatrix}
$$
Giving full discretization of the function derivatives, with determinant / Jacobian
$\;\Delta = (x_2-x_1)(y_3-y_1)-(x_3-x_1)(y_2-y_1)$ :
$$
\Delta
\begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix} =
\begin{bmatrix} (y_3-y_1) & -(y_2-y_1) \\ -(x_3-x_1) & (x_2-x_1) \end{bmatrix}
\begin{bmatrix} f_2-f_1 \\ f_3-f_1 \end{bmatrix}
$$
Here $\Delta$ is the area of a vector parallelogram, which is twice the area of the
triangle. The above can also be written as:
$$
\Delta \left[ \begin{array}{c} \partial f / \partial x \\ \partial f / \partial y \end{array} \right] =
\left[ \begin{array}{ccc} +(y_2 - y_3) & +(y_3 - y_1) & +(y_1 - y_2) \\
-(x_2 - x_3) & -(x_3 - x_1) & -(x_1 - x_2) \end{array}
\right] \left[ \begin{array}{c} f_1 \\ f_2 \\ f_3 \end{array} \right]
$$
The matrix in this last formula should be memorized; it is called a differentiation matrix.
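The differentiation matrix is easy to test numerically: applied to the nodal values of a linear function $f = a + bx + cy$ and divided by $\Delta$, it must return the exact gradient $(b, c)$. A small sketch with arbitrary illustrative coordinates and coefficients:

```python
import numpy as np

# Sketch: the 2x3 differentiation matrix of a linear triangle, applied to a
# linear function f(x, y) = a + b*x + c*y; the result must be (b, c) exactly.
# Coordinates and coefficients are arbitrary illustrative values.
x = np.array([0.0, 2.0, 0.5])
y = np.array([0.0, 0.5, 1.5])
D = np.array([[y[1] - y[2], y[2] - y[0], y[0] - y[1]],
              [-(x[1] - x[2]), -(x[2] - x[0]), -(x[0] - x[1])]])
Delta = (x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0])

a, b, c = 1.0, 3.0, -2.0
f = a + b * x + c * y
grad = D @ f / Delta                      # (df/dx, df/dy)
assert np.allclose(grad, [b, c])
```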
Finite Element Method
When using a Finite Element method, the differential equation may first be multiplied
with an arbitrary test function. Subsequently the PDE is integrated over
the domain of interest. Let the test function be called $f$; then:
$$
\iint f . \left[ \frac{\partial Q_x}{\partial x} + \frac{\partial Q_y}{\partial y} \right] \, dx dy = 0
$$
It can be shown that this integral formulation is (more or less) equivalent to the original
partial differential equation. This is due to the fact that $f$ is an arbitrary function.
It should be nonzero, continuous and integrable, though.
Partial integration, or applying Green's theorem (which is the same),
results in an expression with line integrals over the boundaries and an area
integral over the bulk field. The latter is given by:
$$
- \iint \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \, dx dy
$$
Mind the minus sign. The advantage accomplished herewith is a reduction of the
difficulty of the problem: only derivatives of the first degree are left.
As a next step, the domain of interest is split up into "elements" $E$.
Due to this, also the integral will split up into separate contributions,
each contribution corresponding with an element:
$$
- \sum_E \iint \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \, dx dy
$$
It is clear that $\partial f / \partial x$ and $\partial f / \partial y$ are constants. While considering
only 2D diffusion, $Q_x$ and $Q_y$ are also partial derivatives of the first
degree, hence constants. Herewith the bulk Finite Element formulation, for one triangle, is given by:
$$
- \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \iint dx dy =
- \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \Delta/2
$$
The remaining integral is simply equal to the area of the triangle. Applying now
the differentiation matrix, we find:
$$
= - \frac{1}{2} \left[ \begin{array}{ccc} f_1 & f_2 & f_3 \end{array} \right]
\left[ \begin{array}{cc} y_2 - y_3 & x_3 - x_2 \\
y_3 - y_1 & x_1 - x_3 \\
y_1 - y_2 & x_2 - x_1 \end{array} \right]
\left[ \begin{array}{c} Q_x \\ Q_y \end{array} \right] =
$$ $$
= \frac{1}{2} \left[ \begin{array}{ccc} f_1 & f_2 & f_3 \end{array} \right]
\left[ \begin{array}{c} (y_3 - y_2) Q_x - (x_3 - x_2) Q_y \\
(y_1 - y_3) Q_x - (x_1 - x_3) Q_y \\
(y_2 - y_1) Q_x - (x_2 - x_1) Q_y \end{array} \right]
$$
Actually, we don't want to subdivide the Finite Element domain into triangular
elements, but rather into quadrilateral elements. However, any quad element
can, in turn, be subdivided into triangles, even in two different ways:
In addition, what we want is a configuration in which all quad vertices play an
equally important role. In order to accomplish this, all of the four triangles
must be present in our formulation, simultaneously. For just one quadrilateral,
it boils down to renumbering vertices in the formulation for a single triangle,
according to the following permutations:
1 2 3
2 4 1
3 1 4
4 3 2
Also an upper label (not a power) will be attached to the values $(Q_x,Q_y)$, because it must be denoted at which triangle the discretization takes place. All contributions are now summed over the four triangles (and the whole is divided by a factor two again):
$$ \frac{1}{4} \left[ \begin{array}{ccc} f_1 & f_2 & f_3 \end{array} \right] \left[ \begin{array}{c} (y_3 - y_2) Q^1_x - (x_3 - x_2) Q^1_y \\ (y_1 - y_3) Q^1_x - (x_1 - x_3) Q^1_y \\ (y_2 - y_1) Q^1_x - (x_2 - x_1) Q^1_y \end{array} \right] + $$ $$ \frac{1}{4} \left[ \begin{array}{ccc} f_2 & f_4 & f_1 \end{array} \right] \left[ \begin{array}{c} (y_1 - y_4) Q^2_x - (x_1 - x_4) Q^2_y \\ (y_2 - y_1) Q^2_x - (x_2 - x_1) Q^2_y \\ (y_4 - y_2) Q^2_x - (x_4 - x_2) Q^2_y \end{array} \right] + $$ $$ \frac{1}{4} \left[ \begin{array}{ccc} f_3 & f_1 & f_4 \end{array} \right] \left[ \begin{array}{c} (y_4 - y_1) Q^3_x - (x_4 - x_1) Q^3_y \\ (y_3 - y_4) Q^3_x - (x_3 - x_4) Q^3_y \\ (y_1 - y_3) Q^3_x - (x_1 - x_3) Q^3_y \end{array} \right] + $$ $$ \frac{1}{4} \left[ \begin{array}{ccc} f_4 & f_3 & f_2 \end{array} \right] \left[ \begin{array}{c} (y_2 - y_3) Q^4_x - (x_2 - x_3) Q^4_y \\ (y_4 - y_2) Q^4_x - (x_4 - x_2) Q^4_y \\ (y_3 - y_4) Q^4_x - (x_3 - x_4) Q^4_y \end{array} \right] $$
The four overlapping triangles at the corners of the quadrilateral are a small but significant twist to the standard Finite Element Method, which is motivated by the end result.
Finite Volume Method
In order to save unnecessary paperwork, the following shorthand notation has
been adopted. It may be interpreted as an outer product:
$$
r_{ij} \times Q_k = (y_i - y_j) Q^k_x - (x_i - x_j) Q^k_y
= (x_j - x_i) Q^k_y - (y_j - y_i) Q^k_x
$$
Terms belonging to $f_k, k=1 ... 4$ are collected together. By doing so, the
standard Finite Element assembly procedure is demonstrated at a small scale.
What else is the Finite Element matrix but an incomplete system of
equations?
$$
\frac{1}{4} \left[ \begin{array}{cccc} f_1 & f_2 & f_3 & f_4 \end{array} \right]
\left[ \begin{array}{c}
r_{32} \times Q_1 + r_{42} \times Q_2 + r_{34} \times Q_3 + 0 \\
r_{13} \times Q_1 + r_{14} \times Q_2 + 0 + r_{34} \times Q_4 \\
r_{21} \times Q_1 + 0 + r_{41} \times Q_3 + r_{42} \times Q_4 \\
0 + r_{21} \times Q_2 + r_{13} \times Q_3 + r_{23} \times Q_4
\end{array} \right]
$$
Subsequently use:
$$
r_{32} = r_{34} + r_{42} \qquad r_{14} = r_{13} + r_{34} \qquad
r_{41} = r_{42} + r_{21} \qquad r_{23} = r_{21} + r_{13}
$$
To put the above in a more handsome form:
$$
\left[ \begin{array}{cccc} f_1 & f_2 & f_3 & f_4 \end{array} \right]
\left[ \begin{array}{c}
\frac{1}{2} r_{42} \times \frac{1}{2} (Q_1+Q_2) + \frac{1}{2} r_{34} \times \frac{1}{2} (Q_1+Q_3) \\
\frac{1}{2} r_{13} \times \frac{1}{2} (Q_1+Q_2) + \frac{1}{2} r_{34} \times \frac{1}{2} (Q_2+Q_4) \\
\frac{1}{2} r_{21} \times \frac{1}{2} (Q_1+Q_3) + \frac{1}{2} r_{42} \times \frac{1}{2} (Q_3+Q_4) \\
\frac{1}{2} r_{21} \times \frac{1}{2} (Q_2+Q_4) + \frac{1}{2} r_{13} \times \frac{1}{2} (Q_3+Q_4)
\end{array} \right]
$$
It's a triviality, but nevertheless: a picture says more than a thousand words.
It is seen that the four pieces of equations correspond with four pieces of
line integrals, each of them belonging to one of the vertices. Midpoints of
triangle sides are connected by lines along which the integration takes place.
The heat flux at a midpoint is the average of values at the vertices.
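The equality of the assembled Finite Element form and the midside ("handsome") form can also be checked numerically. The sketch below (random illustrative coordinates and per-triangle fluxes, not from the original post) evaluates both four-component expressions and confirms they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(4), rng.random(4)    # quad vertices 1..4 (0-indexed here)
Q = rng.random((4, 2))                 # one flux vector per sub-triangle

def cross(i, j, q):
    """r_ij x Q = (y_i - y_j) Q_x - (x_i - x_j) Q_y (0-indexed vertices)."""
    return (y[i] - y[j]) * q[0] - (x[i] - x[j]) * q[1]

# Finite Element form: sum over the four overlapping triangles (rows f1..f4).
fe = 0.25 * np.array([
    cross(2, 1, Q[0]) + cross(3, 1, Q[1]) + cross(2, 3, Q[2]),
    cross(0, 2, Q[0]) + cross(0, 3, Q[1]) + cross(2, 3, Q[3]),
    cross(1, 0, Q[0]) + cross(3, 0, Q[2]) + cross(3, 1, Q[3]),
    cross(1, 0, Q[1]) + cross(0, 2, Q[2]) + cross(1, 2, Q[3]),
])

# Finite Volume form: midside segments with fluxes averaged between triangles.
fv = 0.25 * np.array([
    cross(3, 1, Q[0] + Q[1]) + cross(2, 3, Q[0] + Q[2]),
    cross(0, 2, Q[0] + Q[1]) + cross(2, 3, Q[1] + Q[3]),
    cross(1, 0, Q[0] + Q[2]) + cross(3, 1, Q[2] + Q[3]),
    cross(1, 0, Q[1] + Q[3]) + cross(0, 2, Q[2] + Q[3]),
])
assert np.allclose(fe, fv)             # both expressions coincide
```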
Let's adopt another point of view now and no longer concentrate on elements but
on vertices. Instead of arranging vertices around an element, elements are
arranged around a vertex. Label triangle side midpoints as $a,b,c,d,e,f,g,h$.
It is immediately noted that the lines connecting the midsides of the triangles
around a vertex, when tied together, neatly delineate a closed area, which can
be interpreted as a kind of 2D Finite Volume. Expressed in the outer product
formalism, we find:
$$
r_{ba} \times Q_a + r_{cb} \times Q_c + r_{dc} \times Q_c +
r_{ed} \times Q_e + r_{fe} \times Q_e + r_{gf} \times Q_g +
r_{hg} \times Q_g + r_{ah} \times Q_a
$$
Which is the content of one equation in the Finite Element global matrix.
All terms together represent a discretization of the following circular
integral:
$$
\sum r \times Q = \oint Q_y dx - Q_x dy
$$
With help of Green's theorem, however, such a circular integral can be converted
into a "volume" integral, over the area indicated in the above figure:
$$
\oint Q_y dx - Q_x dy =
+ \iint \left[ \frac{\partial Q_x}{\partial x} + \frac{\partial Q_y}{\partial y} \right] \, dx dy
$$
Conservation of heat is integrated over a finite volume, which is wrapped around
a vertex. So we have arrived at a sort of Finite Difference method; to be more precise:
at a Finite Volume Method. It is remarked that this F.V. procedure has been
applicable to curvilinear grids from the start.
Apply a Finite Element (Galerkin) method to a mesh of quadrilaterals. Subdivide
each of the quads into four (overlapping) triangles, in the two ways that are
possible. Then such a method is equivalent to a Finite Volume method:
midsides of the triangles, around the vertex of interest, are neatly connected
together, to form the boundary of a 2D finite volume, and the conservation law
is integrated over this volume.
Unification of a Finite Element and a Finite Volume method
has been accomplished herewith, for a restricted class of 2D diffusion problems.
FDM
FDM is created from the basic definition of differentiation, that is $$ \frac{df}{dx}=\frac{f(x+h)-f(x)}{h}$$ where "h" tends to zero.
In numerical analysis it is not possible to divide a number by "0", so "zero" means a small number. FDM is thus similar to differential calculus, but it has killed its heart: the limit tending to "zero". So in most cases the accuracy of FDM increases with grid refinement. It is an easy method, but not reliable for conservative differential equations or solutions containing shocks. It is tough to implement in complex geometries, where it needs complex mapping, and mapping makes the governing equation even tougher. Extending it to higher-order accuracy is very simple.
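A minimal sketch of this point: the forward difference error for a smooth function shrinks roughly proportionally to $h$ (first-order accuracy). The function and step sizes below are illustrative choices:

```python
import math

# Sketch: the forward difference (f(x+h) - f(x)) / h approximates f'(x) with
# an error that shrinks roughly linearly in h (first-order accuracy).
f, dfdx = math.sin, math.cos
x0 = 1.0
errors = [abs((f(x0 + h) - f(x0)) / h - dfdx(x0)) for h in (1e-1, 1e-2, 1e-3)]
# Each 10x refinement of h should shrink the error by roughly 10x.
assert errors[0] > errors[1] > errors[2]
assert 5 < errors[0] / errors[1] < 20
```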
FEM:
It is a numerical tool borrowed from the calculus of variations. There are many types of FEM, like the point collocation method, the subdomain method, etc. One assumes some trial function and multiplies it with a weighting function; in Galerkin's method the trial functions themselves serve as weighting functions, and different methods differ in the choice of weighting. The weighted residual is then integrated over the control volume (weak form) and equated to zero (this procedure differs between the types of FEM, but the theme is the same). This yields a set of algebraic equations, and solving it gives the solution. Since we work only with the error of the differential equation, conservation laws may sometimes be violated. This method is more accurate than FVM and FDM. It is ideal for linear PDEs, but expensive and complex for nonlinear PDEs. Higher-order accuracy is achieved by using higher-order basis (i.e. shape) functions. Extending to higher-order accuracy is relatively complex compared to FVM and FDM, and higher-order accurate calculations are expensive in computation and mathematical formulation, especially for nonlinear PDEs. It is mostly suitable for heat transfer, structural mechanics, vibrational analysis, etc.
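As a hedged illustration of the Galerkin procedure (a 1D model problem chosen for brevity, not from the answer above): solving $-u'' = 1$ on $(0,1)$ with homogeneous boundary conditions and piecewise-linear "hat" basis functions gives nodal values that coincide with the exact solution $u = x(1-x)/2$:

```python
import numpy as np

# Sketch: minimal 1D Galerkin FEM for -u'' = 1 on (0,1), u(0) = u(1) = 0,
# with piecewise-linear hat functions (illustrative model problem).
n = 8                                  # number of elements
h = 1.0 / n
# Stiffness matrix for linear elements: (1/h) * tridiag(-1, 2, -1).
K = (np.diag(2.0 * np.ones(n - 1)) +
     np.diag(-np.ones(n - 2), 1) +
     np.diag(-np.ones(n - 2), -1)) / h
F = h * np.ones(n - 1)                 # load vector for f = 1
u = np.linalg.solve(K, F)

x = np.linspace(0, 1, n + 1)[1:-1]     # interior nodes
exact = x * (1 - x) / 2                # exact solution of the BVP
assert np.allclose(u, exact)           # nodal values are exact for this problem
```

The nodal exactness here is a known special property of linear elements for $-u''=f$ in 1D, not a general feature of FEM.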
FVM: This is similar to FDM, but it did not kill the theme of differentiation, because we integrate the differential equation over a control volume while discretizing the domain. Since we have integrated the differential equation, the discretization is mathematically a valid one. It can be loosely viewed as FEM with a weighting function equal to 1. Here fluxes are integrated and the resultant is set to zero, so flux is conserved. It can handle almost any PDE and complex domains. Interpolation from face to centre reduces the accuracy of this process; the accuracy is based on the order of the polynomial used. FVM can also produce numerical solutions of any order of accuracy, similar to FDM, but more expensively than FDM. Aeroacoustic problems use FVM schemes of up to about $11^{th}$ order; such schemes are rarely used even in DNS and LES. It is ideal for fluid mechanics.
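A minimal sketch of the conservation property (an illustrative 1D advection problem with a first-order upwind flux, not taken from the answer above): because every interface flux is added to one cell and subtracted from its neighbour, the discrete total "mass" is conserved to machine precision:

```python
import numpy as np

# Sketch: first-order upwind finite volume scheme for u_t + a u_x = 0 on a
# periodic grid. Interface fluxes cancel pairwise, so sum(u)*dx is conserved,
# the hallmark of FVM (parameters are illustrative).
n, a = 50, 1.0
dx = 1.0 / n
dt = 0.5 * dx / a                      # CFL-stable time step
x = (np.arange(n) + 0.5) * dx
u = np.exp(-100 * (x - 0.5) ** 2)      # initial cell averages
mass0 = u.sum() * dx

for _ in range(200):
    flux = a * np.roll(u, 1)           # upwind flux at the left face of each cell
    u = u - dt / dx * (np.roll(flux, -1) - flux)

assert abs(u.sum() * dx - mass0) < 1e-10   # total mass conserved
```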
Isn't it a pity that one has to choose between these methods, while they all have their advantages and disadvantages? Wouldn't it be better to try to take the best of all worlds and mix the ingredients together? Perhaps a decent research effort of this kind will result in just one numerical method for solving PDEs instead of two or three distinct ones. Here is my attempt:
It should be emphasized that more than one life will be needed to really accomplish this. So, where is my backup?
Labrujère's Problem
In February 1976, Dr. Th.E. Labrujère, at the National Aerospace Laboratory
NLR, the Netherlands, wrote a memorandum whose title, translated into English,
reads: The "Least Squares - Finite Element" Method [L.S.FEM]
applied to the 2D Incompressible Flow around a Circular Cylinder. To be more
precise: incompressible and irrotational flow.
In this memorandum, it was firmly established that a straightforward application
of the Least Squares Method, using linear triangular Finite Elements, quite
unexpectedly, does not work well. Herewith, Labrujère's report demonstrates a
scientific integrity which is rarely seen these days. With our own software we
have been able to reproduce the poor results obtained by NLR:
Improving on these results has been a non-trivial task. On NLR's side, it could only be accomplished by introducing highly complicated elements. On my side, it could only be accomplished by adopting an approach which deviates considerably from the common Finite Element methodology. It has to be decided by Occam's Razor which of the two approaches is to be preferred.
The Calgary Solution
In December 1976, Labrujère's problem was "solved" by G. de Vries,
T.E. Labrujère himself and D.H. Norrie, at the Mechanical Engineering
Department of The University of
Calgary, Alberta, Canada. The result is written down in their Report No. 86:
A Least Squares Finite Element Solution for Potential Flow. The abstract of
this report is copied here:
It seems to me, though, that the above
solution is of purely academic interest. The apparent need for
fifth-order trial functions makes this method unworkable
in practice. Even if attention is restricted to the simple case at hand,
it's way too complicated. What's worse, generalization is likely to be
hard. In the end, 2D and 3D Navier-Stokes equations (at a curvilinear grid,
preferably) need to be solved. So the point of departure must be something
much simpler. In particular, the number of unknowns at each nodal point
should not exceed the absolute minimum, the number of degrees of freedom: two.
I have never been in doubt that an alternative least squares finite element
solution, having such desirable properties, must be possible.
I have a dream ..
Incompressible irrotational (ideal) flow of an inviscid fluid is described by
the following system of linear firstorder (!) Partial Differential Equations
(PDE's):
$$
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0 \quad \mbox{: incompressible} \\
\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} = 0 \quad \mbox{: irrotational}
$$
Here: $(x,y) =$ coordinates , $(u,v) =$ velocity components.
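As a hedged numerical check (the complex potential $w(z) = z + 1/z$ for flow around the unit cylinder is a standard textbook example, assumed here for illustration), the velocity field $u - iv = dw/dz$ satisfies both first-order PDEs, which can be confirmed with central differences:

```python
# Sketch: check that the cylinder flow from complex potential w(z) = z + 1/z
# (so u - i v = dw/dz = 1 - 1/z^2) satisfies both first-order PDEs.
def vel(xp, yp):
    z = complex(xp, yp)
    w = 1.0 - 1.0 / z ** 2             # dw/dz = u - i v
    return w.real, -w.imag             # (u, v)

h, x0, y0 = 1e-5, 1.7, 0.9             # a point outside the unit cylinder
ux = (vel(x0 + h, y0)[0] - vel(x0 - h, y0)[0]) / (2 * h)
uy = (vel(x0, y0 + h)[0] - vel(x0, y0 - h)[0]) / (2 * h)
vx = (vel(x0 + h, y0)[1] - vel(x0 - h, y0)[1]) / (2 * h)
vy = (vel(x0, y0 + h)[1] - vel(x0, y0 - h)[1]) / (2 * h)
assert abs(ux + vy) < 1e-8             # incompressible
assert abs(vx - uy) < 1e-8             # irrotational
```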
There does not exist a kind of "natural" variational principle for the
above differential equations. Conventional Finite Element Methods, however,
are very much dependent upon the existence of such principles. There must
be something to minimize (or to "make stationary"). In cases like the above,
it seems, at first sight, that L.S.FEM offers a possible solution. That is because
Least Squares Finite Element Methods
proceed by constructing an alternative minimum principle: square the equations
just as-they-are (!), add these squares together, integrate their sum over the
area of interest, and minimize the result as a function of the unknowns. This is
the approach described in O.C. Zienkiewicz, "The Finite Element Method" (1977),
chapter 3.14.2. In our case:
$$
\iint \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]^2 \right\} \, dx.dy = \mbox{minimum}(u,v)
$$
Simple as it sounds, watch out! People (including myself) have wasted
very much time trying to get this method to work. After many years of
frustration, I even had to give up for a while. Appearances are highly
deceptive here: Least Squares may be the trickiest Finite Element Method
that has ever been invented. We have already seen that Least Squares does
not work well for linear triangles, that is, if the method is applied
in a straightforward Finite Element manner. This is the bare essence of
Labrujère's Problem, and the start of my personal motivation.
It is our purpose to show, in the end, that Labrujère's problem can be solved in a
proper manner. Herewith I mean: in a simple and straightforward manner.
However, to that end, we must look at the problem from a different, or should
I rather say a "difference", perspective. As if it were essentially a Finite
Difference problem, namely, instead of the Finite Element problem that it only
appears to be. In other words:
the Least Squares Finite Element Method is a Finite Difference Method in disguise.
A Difference Perspective
Let's look at the details. At first, the global F.E. integral is split up into
separate contributions, from all finite elements $(E)$ in the mesh:
$$
\sum_E \iint \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]^2 \right\} \, dx.dy = \mbox{minimum}
$$
It is often advantageous to carry out a Numerical Integration, instead of an
"exact" one (see e.g. Zienkiewicz chapter 8.8). This means that function
values are to be determined at so-called integration points $p$. With each
integration point $p$ a certain weight factor $w_p$ is associated:
$$
\sum_E \sum_p w_p \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]_p^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]_p^2 \right\} \, J_p = \mbox{minimum}
$$
Here $J_p$ is the Jacobian (determinant), which results from the transformation from global to
local F.E. coordinates. The Jacobians $J_p$ as well as the weight factors $w_p$ are positive
real-valued numbers.
What follows is a small step for man:
unify the summations over the elements and the integration
points, resulting in one global summation over all integration points
$(i=E,p)$, where $(i)$ becomes the global index of any "integration point".
This merely says that summing over elements, together with their integration
points, is equivalent to summing over all the integration points
in the whole domain of interest, in one big sweep.
In this way, integration points can be interpreted as more elementary than the
elements themselves. And an element with more than one integration point can be
considered as a superposition of elementary integrated elements, with only one
integration point $(i)$ in each of them:
$$
\sum_i w_i \left\{ \left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]_i^2 +
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]_i^2 \right\} \, J_i = \mbox{minimum} = 0
$$
In order for L.S.FEM to work properly, the required minimum must be a small
number, rapidly approaching zero as the size of the elements decreases.
Thus it might not be such a weird idea to demand that the minimum value
should simply be zero from the start. But in that case the above "variational
integral" would have been equivalent to a non-squared system of linear equations.
For when can a sum of squares possibly be zero? If and only if each
of the separate terms in the sum is equal to zero:
$$
\left[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right]_i = 0 \quad ; \quad
\left[ \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right]_i = 0
\quad \mbox{: for each integration point } (i)
$$
Let's go one more step further. It is realized that each 'integration point' in
the grid does in fact nothing other than create two independent equations.
All integration points together contribute to the emergence of a whole system of
linear equations in this way. Nothing prevents us from calling this a "Finite
Difference" system of equations. Let's therefore, at last, replace the notion
of 'an integration point' simply by: 'an F.D. equation'. And here we are!
Any feasible Least Squares Finite Element Method is equivalent
to forcing to zero the sum of squares of all equations emerging from some
Finite Difference Method.
L.S.FEM gives rise to the same solution
as an equivalent system of finite difference equations.
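This claim can be illustrated with a toy sketch (a 1D Laplace problem, illustrative only, not from the original post): when the difference system is square and non-singular, minimizing the sum of squared equations reproduces exactly the solution of the unsquared system:

```python
import numpy as np

# Sketch: when the number of (unsquared) difference equations matches the
# number of unknowns, the least squares minimizer coincides with the solution
# of the difference system itself; a toy 1D Laplace problem u'' = 0.
n = 9
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[-1] = 1.0                            # boundary conditions u(0)=0, u(1)=1
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0   # u'' = 0 stencil

u_direct = np.linalg.solve(A, b)       # solve the FD equations directly
u_lsq = np.linalg.lstsq(A, b, rcond=None)[0]   # minimize the sum of squares
assert np.allclose(u_direct, u_lsq)
assert np.allclose(u_direct, np.linspace(0, 1, n))  # exact linear profile
```

The interesting case in the text is precisely when this balance fails and the system becomes overdetermined, so the minimum can no longer reach zero.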
We are ready now to look at Labrujère's problem in the following way. Let it be required that the Least Squares Finite Element Method always leads to an acceptable solution, with moderate mesh sizes. Then, of course, in the associated Finite Difference system, the number of unknowns $N$ should be equal to the number of independent equations $M$. If such is not the case, the system is likely to be overdetermined, and it is doubtful whether the Least Squares minimum can still approach zero fast enough. A simple count of the triangles involved with Labrujère's problem reveals that this kind of delicate balance between unknowns and equations is definitely not achieved there: the number of elements outweighs the number of nodal points by a factor of $2$! This means that there are roughly twice as many "unsquared" F.D. equations as there are unknowns. Apart from any more complicated kind of argument, like higher-order continuity, this surely throws up a basic question.
I am not qualified to check whether Norrie and De Vries implicitly addressed that question in their report. They first kept the triangular shapes. I guess that, in order to compensate for an abundance of elementary equations, they had to introduce just as many additional variables. Now it becomes clear what kind of different approach may be feasible here. For the only thing that has to be accomplished is: a perfect balance between the number of equations and the number of unknowns, instead of increasing both these numbers by some complicated mechanism.
References
Th.E. Labrujère,
'DE "EINDIGE ELEMENTEN - KLEINSTE KWADRATEN" METHODE
TOEGEPAST OP DE 2D INCOMPRESSIBELE STROMING OM EEN CIRKEL CYLINDER',
Memorandum WD-76-030, Nationaal Lucht- en Ruimtevaartlaboratorium (NLR),
Noordoostpolder, 23 February 1976.
G. de Vries, T.E. Labrujère, D.H. Norrie,
'A LEAST SQUARES FINITE
ELEMENT SOLUTION FOR POTENTIAL FLOW', Report No. 86, Department of
Mechanical Engineering, The University of Calgary, Alberta, Canada,
December 1976.
O.C. Zienkiewicz,
'The Finite Element Method', 3rd edition, McGraw-Hill,
U.K. 1977, ISBN 0-07-084072-5
To be continued as:
Any employment for the Varignon parallelogram?
Take a good look at these (Least Squares) Finite Difference Elements:
- No continuity requirements, at all, on the components of velocity
- First-order trial functions (linear) for both components of velocity
- Numerical results in close agreement with the theoretical solution
The finite difference method is the oldest method, used on limited or closed regions, and FEM is a structural method for solving partial differential equations.