Re: This calculation is just wrong / computer can't count!
GT wrote:
Original question stands - how do I program this decimal mathematics (or perhaps I should ask: what data types should I use)?
One more try.
We do not understand the rationale behind your desire to compare computer
arithmetic to "real math" or decimal arithmetic.
Usually, when I need to do calculations in a computer program, I look at the
properties (range, precision, resolution, accuracy) of the source data and of
the result I need. More often than not I find that the 15 or so significant
digits of the double data type are more than I will ever need. So I do my
calculations, take the final result, and format it myself to the number of
decimal digits I want to present to the user.
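To make that concrete, here is a minimal C sketch (assuming IEEE 754 doubles,
which is what practically every platform gives you): calculate at full double
precision, then format only the digits the user should see.

#include <stdio.h>

int main(void)
{
    /* calculate at full double precision */
    double sum = 0.1 + 0.2;

    /* printing all ~17 digits exposes the binary rounding artifact */
    printf("%.17g\n", sum);   /* prints 0.30000000000000004 */

    /* formatting to the digits the user needs hides it completely */
    printf("%.2f\n", sum);    /* prints 0.30 */

    return 0;
}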
If I stick to the rules of basic floating-point math on computers (some of which
have already been explained in this long thread), I can be quite sure that the
result I get is the result I expect up to the digit I choose to present to my
user. I would never care if the textual representation of some interim result
was wrong at the 14th digit if I only need 6 digits in the result.
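For example (again a sketch, assuming IEEE 754 doubles): adding 0.1 ten times
produces an interim value whose textual representation looks "wrong" at the
16th digit, yet formatted to 6 digits it is exactly the 1.000000 you expect.

#include <stdio.h>

int main(void)
{
    double sum = 0.0;

    /* 0.1 has no exact binary representation, so tiny errors accumulate */
    for (int i = 0; i < 10; i++)
        sum += 0.1;

    printf("%.17g\n", sum);   /* prints 0.99999999999999989 */
    printf("%.6f\n", sum);    /* prints 1.000000 */

    return 0;
}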
So we just don't understand why you actually insist on being exact in decimal
arithmetic. Any arithmetic is good if performed correctly, and I would not care
if the computer worked in base 3, 5 or 17 internally, as long as the end result
has more correct significant digits than I need. So while most decimal numbers
cannot be represented exactly in binary computer logic, they usually can be
represented *exactly enough* for most applications.
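As an illustration (a sketch, assuming IEEE 754 doubles): the double closest to
0.1 differs from the exact decimal value only around the 18th significant
digit - far more accuracy than most applications will ever need.

#include <stdio.h>

int main(void)
{
    double x = 0.1;   /* stored as the nearest representable binary double */

    /* the representation error only shows up around the 18th digit */
    printf("%.20f\n", x);   /* prints 0.10000000000000000555 */

    return 0;
}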
So the question should be: does the final result I get from my calculation
differ from the mathematically correct result by more than x%, or not? If not,
you should not care about errors in the 17th digit, because they have no
meaning for the end result.
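Put as code (a sketch; the tolerance and the example computation are made up
for illustration), the test is simply the relative error against the
mathematically correct value:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* hypothetical requirement: correct to within 0.0001 % */
    const double tolerance = 0.000001;

    double computed = sqrt(2.0) * sqrt(2.0);   /* example computation */
    double expected = 2.0;                     /* mathematically correct value */
    double rel_error = fabs(computed - expected) / fabs(expected);

    printf("relative error: %g\n", rel_error); /* about 2.2e-16 */

    if (rel_error <= tolerance)
        printf("good enough - the 17th digit does not matter\n");
    else
        printf("result outside tolerance\n");

    return 0;
}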
If you think they do have a meaning for the end result, maybe you can explain
again *why*, because I think that information has been lost in this long thread.
Norbert