What is linupdate?


Jared Barber (jared_barber)
Member
Username: jared_barber

Post Number: 18
Registered: 01-2007
Posted on Sunday, July 29, 2007 - 09:29 pm:   

Hey,

I was wondering if I could get a bit more explanation on what "linupdate" is. In the manual it says:

In linear steady-state problems, FlexPDE repeats the linear system solution until the computed residuals are below tolerance, up to a maximum of LINUPDATE passes.

What does it mean to "repeat the linear system solution"? Does this mean one takes the supposedly converged solution and uses it as an initial vector to restart the Lanczos iteration and find a new solution?

What "tolerance" is the definition referring to, and is there any way to adjust it?

I have played with the "linupdate" number a bit.

In the problem I am considering, if I change linupdate, the number of linear solves attempted is always equal to linupdate (even if linupdate = 100). Because of this it seems that the "tolerance" in the definition is never reached. The tolerance seems to be too small for my problem.
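For reference, the experiment described above amounts to raising the selector in the SELECT section of the descriptor, roughly as in this sketch (selector spelling as used in this thread; check it against your FlexPDE version's documentation):

   SELECT
     linupdate = 100   { allow up to 100 linear-solution passes per update }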

I also note that increasing linupdate seems to improve results, in terms of both efficiency and accuracy, for many of my runs, which suggests the default linupdate is too small for those runs. For other runs, the accuracy we require needs no adjustment of linupdate.

More information on linupdate and maybe any ways to alter the effective "tolerance" would be helpful. Thanks,

Jared

Robert G. Nelson (rgnelson)
Moderator
Username: rgnelson

Post Number: 921
Registered: 06-2003
Posted on Monday, July 30, 2007 - 12:57 am:   

The matrix and variables are scaled before the conjugate gradient pass, hopefully to improve the convergence speed.

This means that the solution is formed in a different space than the actual problem. Once the scaled system is solved, the solution is re-inserted into the Galerkin equations and an error measure is formed in the real problem space. If the error is above "tolerance", the conjugate gradient solver is run again for a perturbation to improve the solution in problem space. This is repeated, if necessary, up to LINUPDATE times (default 5). It is almost never necessary.

"Tolerance" in this iteration phase means ERRLIM*OVERSHOOT. The theory of this is that the conjugate gradient has to solve the system to greater accuracy than the overall error limit in order to force an improvement in the real-space solution. OVERSHOOT defaults to 0.001.

Your problem, for some reason involving your equations on the surface of the hole, is nearly singular. The conjugate gradient iteration is able to achieve almost no improvement in each iteration because of round-off losses.

Version 3 may be willing to quit early, or may grid differently so that the coupling matrix is not quite so ill-conditioned, or some combination of these and other details of the iteration methods that I am not able to pin down at this time. Version 3 also used a different error estimation procedure, which may be more optimistic in some cases (i.e., it ignores some errors), and a different matrix scaling method, which may result in better matrix conditioning in marginal cases.

I didn't see the dramatic differences between versions on the problem you sent earlier. I will try your newer script. But since version 5 is not able to satisfy its accuracy conditions, all the additional time is spent bouncing around a solution that it can't pin down. That is, it is not productive time.

Try relaxing ERRLIM.
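For example, relaxing the limits in the SELECT section might look like the sketch below. Raising ERRLIM also raises the effective linear tolerance ERRLIM*OVERSHOOT described above. The values are illustrative only, and OVERSHOOT and LINUPDATE are assumed here to be adjustable as selectors; check your version's documentation:

   SELECT
     errlim = 5e-3       { relaxed overall error target for the stage }
     overshoot = 0.01    { effective linear tolerance becomes errlim*overshoot = 5e-5 }
     linupdate = 10      { cap on repeated linear-solution passes (default 5) }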

Jared Barber (jared_barber)
Member
Username: jared_barber

Post Number: 19
Registered: 01-2007
Posted on Monday, July 30, 2007 - 11:56 am:   

Here's something to try. Run it with PRECONDITION on and then with it off (on FlexPDE 5). Amazing difference on my machine (it goes from 54 minutes down to 56 seconds). Hopefully you can reproduce this result. I'm thinking the best thing is to go without the preconditioner, though I haven't fully tested it; I think that for this particular problem preconditioning may only make things worse. The solution appears to be just as good.
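A minimal sketch of the comparison, assuming the PRECONDITION selector accepts an on/off setting in the FlexPDE 5 SELECT section (the exact form may differ by version):

   SELECT
     precondition = off   { run the conjugate-gradient solver without the preconditioner }

Then run the same script with the line removed (or set to on) for the "with preconditioner" timing.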

I suppose there are two questions: is there anything wrong with the preconditioner, or is it just unsuitable for my problem (or something else)? And is there any harm in not using the preconditioner (the results seem pretty good without it)?

Thanks,
Jared

Robert G. Nelson (rgnelson)
Moderator
Username: rgnelson

Post Number: 922
Registered: 06-2003
Posted on Monday, July 30, 2007 - 09:53 pm:   

Interesting discovery. My guess is that there are shortcomings in the way the preconditioner is built in the presence of a large number of global variables.

I will look into this aspect.

In the meantime, obviously, you should turn the preconditioner off.
