Partial differential equations generally arise as a mathematical expression of some conservation principle, such as the conservation of energy, momentum, or mass. Partial differential equations by their very nature deal with continuous functions -- a derivative is the result of the limiting process of observing differences at an infinitesimal scale. A temperature distribution in a material, for example, is assumed to vary smoothly between one extreme and another, so that as we look ever more closely at the differences between neighboring points, the values become ever closer, until at “zero” separation they are the same.
Computers, on the other hand, apply arithmetic operations to discrete numbers, of which only a limited number can be stored or processed in finite time. A computer cannot analyze an infinitude of values. How then can we use a computer to solve a real problem?
Many approaches have been devised for using computers to approximate the behavior of real systems. The finite element method is one of them. It has achieved considerable success in its few decades of existence, first in structural mechanics and later in other fields. Part of its success lies in the fact that it approaches the analysis in the framework of integrals over small patches of the total domain, thus enforcing aggregate correctness even in the presence of microscopic error. The techniques involved depend little on the shapes of the objects being modeled, and are therefore applicable to real problems of complex geometry.
The fundamental assumption is that no matter what the shape of a solution might be over the entire domain of a problem, at some scale each local patch of the solution can be well approximated by a low-order polynomial. This is closely related to the well-known Taylor series expansion, which expresses the local behavior of a function in a few polynomial terms.
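This local-polynomial assumption can be tested numerically. The sketch below (our own illustration, not part of any FEM library) approximates sin(x) near a point by its second-order Taylor polynomial and measures the worst error over patches of shrinking size; the function names `taylor2` and `worst_error` are hypothetical.

```python
import math

def taylor2(f, df, d2f, x0, x):
    """Second-order Taylor polynomial of f about x0, evaluated at x."""
    h = x - x0
    return f(x0) + df(x0) * h + 0.5 * d2f(x0) * h * h

x0 = 0.5  # center of the local patch

def worst_error(half_width, samples=101):
    """Largest |sin(x) - quadratic patch| over [x0 - h, x0 + h]."""
    pts = [x0 - half_width + 2 * half_width * i / (samples - 1)
           for i in range(samples)]
    return max(abs(math.sin(x) -
                   taylor2(math.sin, math.cos,
                           lambda t: -math.sin(t), x0, x))
               for x in pts)

# Halving the patch cuts the worst error by roughly a factor of 8,
# the O(h^3) behavior expected of a quadratic local approximation.
for h in (0.4, 0.2, 0.1):
    print(h, worst_error(h))
```

The rapid decay of the error as the patch shrinks is exactly what justifies tiling a domain with many small patches, each carrying only a low-order polynomial.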
In a two-dimensional heat flow problem, for example, we assume that if we divide the domain into a large number of triangular patches, then in each patch the temperature can be well represented by, let us say, a paraboloidal surface. Stitching the patches together, we get a Harlequin surface that obeys the differential limiting assumption of continuity for the solution value—but perhaps not for its derivatives. The patchwork of triangles is referred to as the computational “mesh”, and the sample points at vertices or elsewhere are referred to as the “nodes” of the mesh.
In three dimensions, the process is analogous, using a tetrahedral subdivision of the domain.
How do we determine the shape of the approximating patches?
1. Assign a sample value to each vertex of the triangular or tetrahedral subdivision of the domain. Then each vertex value is shared by several triangles (tetrahedra).
2. Substitute the approximating functions into the partial differential equation.
3. Multiply the result by an importance-weighting function and integrate over the triangles surrounding each vertex.
4. Solve for the vertex values that minimize the error in each integral.
This process, known as a “weighted residual” method, effectively converts the continuous PDE problem into a discrete minimization problem on the vertex values. This is usually known as a “weak form” of the equation, because it does not strictly enforce the PDE at all points of the domain, but is instead correct in an integral sense relative to the triangular subdivision of the domain.
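The steps above can be sketched for the simplest one-dimensional analogue: -u'' = f on (0, 1) with u = 0 at both ends, using piecewise-linear “hat” functions as both the interpolants and the weighting functions (the Galerkin choice). This is a minimal sketch under our own assumptions (uniform mesh, lumped midpoint quadrature for the load), not a description of FlexPDE's internals.

```python
import numpy as np

# Model problem: -u'' = f on (0, 1), u(0) = u(1) = 0,
# with f chosen so the exact solution is u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)

n = 32                    # number of one-dimensional "patches"
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Weighting each residual by the hat function of node i and
# integrating by parts yields the classic tridiagonal system.
K = np.zeros((n - 1, n - 1))
for i in range(n - 1):
    K[i, i] = 2.0 / h
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -1.0 / h

# Load vector: integral of f against each hat function,
# approximated here by a lumped (midpoint) quadrature.
b = h * f(x[1:-1])

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, b)   # interior nodal values

err = np.max(np.abs(u - np.sin(np.pi * x)))
print("max nodal error:", err)
```

The continuous PDE has become a small linear-algebra problem in the nodal values, and refining the mesh drives the error down, just as the weighted-residual argument promises.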
The locations and number of the sample values differ among interpolation schemes. In FlexPDE, we use either quadratic interpolation (with sample values at the vertices and midsides of the triangular cells) or cubic interpolation (with values at the vertices and at two points along each side). Other configurations are possible, which gives rise to various “flavors” of finite element methods.