junqiao wu (charlesw)
Member Username: charlesw
Post Number: 4 Registered: 04-2007
Posted on Wednesday, April 18, 2007 - 01:41 pm:
Hi, I want to build a 1D function f(q) by interpolating data points imported from a table. How do I test that my f(q) was successfully built? That is, how do I use f(q) at different values of q? Here q is NOT necessarily a coordinate. My code is:

title 'numerical Input Test'

{tabletest.txt:
 x 10
 0 2 4 6 8 10 12 14 16 18
 data
 1 2 4 6 23 56 234 2245 12900 234000}

DEFINITIONS
 ftable = TABLE("tabletest.txt", x)
 f(q) = EVAL(ftable, q, 0)

boundaries
 region 1
 start(0,0) line to (0,-1) to (100,-1) to (100,1) to (0,1) to close

plots
 elevation(log10(ftable)) from (0,0) to (20,0) {this works well}
 report(f(11)) {this shows an error; how do I call f(q)?}

end
Robert G. Nelson (rgnelson)
Moderator Username: rgnelson
Post Number: 822 Registered: 06-2003
Posted on Wednesday, April 18, 2007 - 06:29 pm:
There seems to be trouble passing the argument of the function as an argument to the EVAL. I should have tested this before I suggested it to you. Sorry. Anyway, you can get around this simply by using EVAL(ftable,q,0) wherever you wanted to use f(q). For example, REPORT(EVAL(ftable,11,0)) works correctly. (I realize this is a clumsy way to do this, but until we add new syntax to the script grammar, it will have to do. There may also be troubles in cases where the table coordinate range is significantly different from the spatial coordinate range. Fortunately, in your case this is not a problem.)
Robert G. Nelson (rgnelson)
Moderator Username: rgnelson
Post Number: 824 Registered: 06-2003
Posted on Thursday, April 19, 2007 - 01:25 am:
The design premise of the TABLE function was that the table file declares a name for the table coordinate (say Q), and the script defines how Q is computed. You can define Q any way you want, and you can change the definition of Q in each material region (or by IF..THEN or by other arithmetic). What was not planned is to treat the TABLE argument like a function in a procedural language.
junqiao wu (charlesw)
Member Username: charlesw
Post Number: 5 Registered: 04-2007
Posted on Friday, April 20, 2007 - 02:08 am:
Thanks. Now another question about the TABLE input: what is the dynamic range of data that a table can read? For example, the following table does not seem to be readable:

x 5
-10 -5 0 5 10
data
8.48335e-149 3.35144e-26 1 6.7761e14 1.86249e23

It shows:

Invalid Floating-Point Operation
--- called from tables::readsymbol
--- called from parser::next_symbol
--- called from parser::parse
--- called from tables::tablex
--- called from parser::parse

I understand that the numbers here are too small or too big. So:
1) How do I change the range of numbers that a TABLE function can accept?
2) If that is not possible, can I use some command to cut off (set to zero) data whose values are beyond the acceptable regime, so that my data can still be partially input?
3) I tried making a new table equal to the log of my original data and then converting back to the original data after it is read, but unfortunately there are other divergence problems when solving the PDE with the 10^() form.
Liem Peng Hong (liemph)
Member Username: liemph
Post Number: 10 Registered: 09-2004
Posted on Friday, April 20, 2007 - 02:13 am:
You may try rescaling your table into a logarithmic one.
Liem Peng Hong (liemph)
Member Username: liemph
Post Number: 11 Registered: 09-2004
Posted on Friday, April 20, 2007 - 02:18 am:
In addition to my previous post: you might have to change your equations to fit the new log scale. Otherwise, if you convert back to the original data, you will face the same problem. A more fundamental question is whether your table represents something meaningful (physically, not just mathematically).
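The log-scaling idea can be sketched outside FlexPDE: store log10 of the data (every entry becomes a modest number), interpolate in log space, and exponentiate only when a value is needed. This works because all the posted data values are positive. A minimal Python illustration (again not FlexPDE syntax; `table_eval_log` is a made-up name):

```python
import math
from bisect import bisect_right

# Table from the follow-up post: roughly 170 orders of magnitude.
xs   = [-10, -5, 0, 5, 10]
data = [8.48335e-149, 3.35144e-26, 1.0, 6.7761e14, 1.86249e23]

# Store log10 of the data; entries now range from about -148 to +23.
logd = [math.log10(v) for v in data]

def table_eval_log(q):
    """Piecewise-linear interpolation in log space, converted back."""
    if q <= xs[0]:
        return data[0]
    if q >= xs[-1]:
        return data[-1]
    i = bisect_right(xs, q) - 1
    t = (q - xs[i]) / (xs[i + 1] - xs[i])
    return 10 ** (logd[i] + t * (logd[i + 1] - logd[i]))

print(table_eval_log(0))  # recovers 1.0 at the table node x=0
```

Note that linear interpolation of the logs is geometric (not linear) interpolation of the raw data, which is often the more natural choice for quantities spanning many decades; as Liem says, any equations using the table may need to be reformulated in log space as well.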
Robert G. Nelson (rgnelson)
Moderator Username: rgnelson
Post Number: 825 Registered: 06-2003
Posted on Friday, April 20, 2007 - 05:47 pm:
Your table represents a dynamic range of some 170 orders of magnitude for your data value. Depending on how the table is used, the data may be subject to the precision of the computer's numeric storage format, or approximately 18 digits. The estimated number of electrons in the known universe is about 1e79, so the dynamic range of your table is significantly greater than the contrast between counting galaxies and counting individual electrons. Most physical constants are known to no better than six or eight decimal places. So I agree with the earlier posting: you really should consider whether this range of values is physically meaningful.
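Robert's arithmetic is easy to check. A short Python sketch, assuming IEEE 754 double precision (the exact precision inside FlexPDE may differ, as he notes):

```python
import math
import sys

# Extremes of the posted table.
lo, hi = 8.48335e-149, 1.86249e23

# hi/lo is about 2.2e171: still representable as a double,
# but spanning ~171 decades of dynamic range.
orders = math.log10(hi / lo)
print(round(orders))         # roughly 171 orders of magnitude

# An IEEE double carries only about 15-17 significant decimal
# digits, far fewer than such a range demands in one table.
print(sys.float_info.dig)
```

The point is that the exponent range of a double (about 1e-308 to 1e308) can hold each entry individually, but no fixed-precision format can resolve values 171 decades apart to meaningful relative accuracy within a single computation.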