Solving Nonlinear Problems
FlexPDE automatically recognizes when a problem is nonlinear and modifies its strategy accordingly.
In nonlinear systems, we are not guaranteed that the system will have a unique solution, and even if it does, we are not guaranteed that FlexPDE will be able to find it. The solution method used by FlexPDE is a modified Newton-Raphson iteration procedure. This is a "descent" method, which tries to fall down the gradient of an energy functional until minimum energy is achieved (i.e. the gradient of the functional goes to zero). If the functional is nearly quadratic, as it is in simple diffusion problems, then the method converges quadratically (the relative error is squared on each iteration). The default strategy implemented in FlexPDE is usually sufficient to determine a solution without user intervention.
In nonlinear time-dependent problems, the default behavior is to compute the Jacobian matrix (the "slope" of the functional) and take a single Newton step at each timestep, on the assumption that any nonlinearities will be sensed by the timestep controller, and that timestep adjustments will guarantee an accurate evolution of the system from the given initial conditions.
Several selectors are provided to enable more robust (but more expensive) treatment in difficult cases. The "NEWTON=number" selector can be used to increase the maximum number of Newton iterations performed on each timestep. In this case, FlexPDE examines the change in the system variables and recomputes the Jacobian matrix whenever it seems warranted. The selector REMATRIX=ON forces the Jacobian matrix to be re-evaluated at each Newton step.
The PREFER_SPEED selector is equivalent to the default behavior, setting NEWTON=1 and REMATRIX=OFF.
The PREFER_STABILITY selector sets NEWTON=3 and REMATRIX=ON.
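In a problem descriptor, these selectors are placed in the SELECT section. A minimal sketch; the equation, coefficient, and geometry below are illustrative placeholders, not anything prescribed above:

```
TITLE 'Robust nonlinear iteration'
SELECT
  NEWTON = 3      { allow up to 3 Newton steps per timestep }
  REMATRIX = ON   { rebuild the Jacobian at every Newton step }
  { the single selector PREFER_STABILITY sets both of these }
VARIABLES
  u
DEFINITIONS
  k = 1 + u^2     { illustrative nonlinear conductivity }
INITIAL VALUES
  u = 0
EQUATIONS
  DIV(k*GRAD(u)) + 1 = DT(u)
BOUNDARIES
  REGION 1
    START(0,0) VALUE(u)=0
    LINE TO (1,0) TO (1,1) TO (0,1) TO CLOSE
TIME 0 TO 1
END
```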
In the case of nonlinear steady-state problems, the situation is somewhat more complicated. The default controls are usually sufficient to achieve a solution. The Newton iteration is allowed to run a large number of iterations, and the Jacobian matrix is recomputed whenever the change in the solution values seems to warrant it. The selector REMATRIX=ON may be used to force re-computation of the Jacobian matrix on each Newton step.
In cases of strong nonlinearities, it may be necessary for the user to help guide FlexPDE to a valid solution. There are several techniques that can be used to help the solution process.
Providing an initial value near the correct solution will aid enormously in finding a solution. Be particularly careful that the initial value matches the boundary conditions. If it does not, serious excursions may be excited in the trial solution, leading to solution difficulties.
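For example, if a variable is clamped to different values on two boundaries, an initial profile that interpolates between them already satisfies both. A sketch, with placeholder equation and geometry:

```
TITLE 'Initial value consistent with boundary conditions'
VARIABLES
  u
INITIAL VALUES
  u = 1 - x       { matches u=1 on the left edge and u=0 on the right }
EQUATIONS
  DIV((1 + u^2)*GRAD(u)) = 0
BOUNDARIES
  REGION 1
    START(0,0)
    NATURAL(u)=0  LINE TO (1,0)   { bottom: insulated }
    VALUE(u)=0    LINE TO (1,1)   { right edge: u=0 }
    NATURAL(u)=0  LINE TO (0,1)   { top: insulated }
    VALUE(u)=1    LINE TO CLOSE   { left edge: u=1 }
END
```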
You can use the staging facility of FlexPDE to gradually increase the strength of the nonlinear terms. Start with a linear (or nearly linear) system, and allow FlexPDE to find a solution which is consistent with the boundary conditions. Then use this solution as a starting point for a more strongly nonlinear system. By judicious use of staging, you can creep up on a solution to very nasty problems.
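A sketch of this technique using the STAGES selector and a STAGED value list (as I understand FlexPDE's staging facility; the equation and the strength values are illustrative):

```
TITLE 'Creeping up on a strong nonlinearity'
SELECT
  STAGES = 4
VARIABLES
  u
DEFINITIONS
  a = STAGED(0, 0.1, 1, 10)   { nonlinear strength grows per stage }
EQUATIONS
  DIV((1 + a*u^2)*GRAD(u)) + 1 = 0
BOUNDARIES
  REGION 1
    START(0,0) VALUE(u)=0
    LINE TO (1,0) TO (1,1) TO (0,1) TO CLOSE
END
```

The first stage, with a=0, is a linear problem; each later stage begins from the converged solution of the previous one.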
The selector CHANGELIM limits the amount by which any nodal value in a problem may be modified on each Newton-Raphson step. As in a one-dimensional Newton iteration, if the trial solution is near a local maximum of the functional, then shooting down the gradient will try to step an enormous distance to the next trial solution. FlexPDE limits the size of each nodal change to be less than CHANGELIM times the RMS average value of the variable. The default value for CHANGELIM is 0.5, but if the initial value (or any intermediate trial solution) is sufficiently far from the true solution, this value may allow wild excursions from which FlexPDE is unable to recover. Try cutting CHANGELIM to 0.1, or in severe cases even 0.01, to force FlexPDE to creep toward a valid solution. In combination with a reasonable initial value, even CHANGELIM=0.01 can converge in a surprisingly short time. Since CHANGELIM limits each nodal change to a fraction of the RMS average value, not the local value, its effect disappears as a solution is approached, and quadratic final convergence is still achieved.
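In the descriptor this is a single line in the SELECT section:

```
SELECT
  CHANGELIM = 0.1   { limit each Newton step to 10% of the RMS average
                      value; try 0.01 in severe cases }
```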
FlexPDE uses piecewise polynomials to approximate the solution. In cases of rapid variation of the solution over a single cell, you will almost certainly see severe undershoot in early stages. Don't assume that the value of your variable will remain positive. If your equations lose validity in the presence of negative values, consider recasting the equations in terms of the logarithm of the variable. In that case, even though the logarithm may go negative, the implied value of your actual variable will remain positive.
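A sketch of the logarithmic recast for an illustrative diffusion equation DIV(D*GRAD(u)) = DT(u) with u > 0. Substituting u = EXP(v) gives GRAD(u) = EXP(v)*GRAD(v) and DT(u) = EXP(v)*DT(v), so one can solve for v instead:

```
TITLE 'Logarithmic recast of a positive variable'
VARIABLES
  v                 { v = LN(u); may go negative, but u = EXP(v) cannot }
DEFINITIONS
  D = 1
  u = EXP(v)        { the physical variable, always positive }
INITIAL VALUES
  v = 0             { i.e. u = 1 }
EQUATIONS
  DIV(D*u*GRAD(v)) = u*DT(v)
BOUNDARIES
  REGION 1
    START(0,0) VALUE(v)=0
    LINE TO (1,0) TO (1,1) TO (0,1) TO CLOSE
TIME 0 TO 1
END
```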
Any steady-state problem can be viewed as the infinite-time limit of a time-dependent problem. Rewrite your PDEs to have a time derivative term which will push the value in the direction of decreasing deviation from the solution of the steady-state PDE. (A good model to follow is the time-dependent diffusion equation DIV(K*GRAD(U)) = DT(U). A negative value of the divergence indicates a local maximum in the solution, and results in driving the value downward.) In this case, "time" is a fictitious variable analogous to the iteration count in the steady-state Newton-Raphson iteration, but the time-dependent formulation allows the timestep controller to guide the evolution of the solution.
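A sketch of this pseudo-transient approach, with a placeholder steady-state target DIV(K*GRAD(u)) + S = 0. Running the time-dependent form until DT(u) becomes negligible yields an approximation to the steady solution:

```
TITLE 'Pseudo-transient route to a steady state'
VARIABLES
  u
DEFINITIONS
  K = 1 + u^2       { illustrative nonlinear coefficient }
  S = 1             { illustrative source }
INITIAL VALUES
  u = 0
EQUATIONS
  DIV(K*GRAD(u)) + S = DT(u)   { steady state recovered when DT(u) -> 0 }
BOUNDARIES
  REGION 1
    START(0,0) VALUE(u)=0
    LINE TO (1,0) TO (1,1) TO (0,1) TO CLOSE
TIME 0 TO 100     { long enough for transients to die away }
END
```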