Re: question regarding java puzzlers #2
blmblm@myrealbox.com wrote:
> In article <461a3f12$0$759$bed64819@news.gradwell.net>,
> Chris Uppal <chris.uppal@metagnostic.REMOVE-THIS.org> wrote:
>> blmblm@myrealbox.com wrote:
>>> What I'm thinking, though, is that the BigDecimal
>>> constructor must then be taking that double and turning it into
>>> a decimal representation with more significant figures than seem
>>> reasonable to me -- I mean, I understand where they come from, but
>>> it seems a little wrong-headed to me to use an exact representation
>>> for something that I think is better thought of as an approximation.
>> This, I think, is the heart of the problem. You use the word
>> "approximation", but it's not clear what a value is an approximation
>> /to/. A BigDecimal, seeing a double with exact value
>>   0.1000000000000000055511151231257827021181583404541015625
>> has no way of knowing whether that is an "approximation" to
>>   0.1
>> or to
>>   0.100000000000000005551115
>> or, indeed, to
>>   0.100000000000000005551115123125782702118158340454101562500000000001
>> So what is it going to choose?
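To make the ambiguity concrete, here is a small (untested) sketch -- the
class name is invented, but the printed value is the real exact
expansion of the double nearest to 0.1:

import java.math.BigDecimal;

public class ExactDoubleValue {
    public static void main(String[] args) {
        // The literal 0.1 denotes the nearest IEEE 754 double, and
        // BigDecimal(double) converts that value exactly, bit for bit.
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}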
> Yes, I thought about that, and you may be right that there's really no
> sensible choice other than the one made by the BigDecimal constructor.
> It still seems subtly wrong to me, but maybe not more so than any
> other choice.
>> That goes double
> ! (pun intended?)
>> when you consider that BigDecimal's /job/ is the precise
>> representation of numerical values -- it would be inappropriate for it
>> to make information-losing guesses about what the programmer "really
>> meant". If you want to convert doubles to BigDecimals using different
>> rules (which is not in itself unreasonable), then some of the possible
>> rules are pre-packaged for you. For instance, by using
>> Double.toString(double) and the BigDecimal(String) constructor, you
>> can convert using the
>> shortest-sequence-of-decimal-digits-which-maps-to-the-same-floating-point-value
>> rule (as Patricia has mentioned).
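A quick (untested) illustration of that pre-packaged rule -- the class
name is made up, but both calls are standard java.math API:

import java.math.BigDecimal;

public class ShortestDigitsRule {
    public static void main(String[] args) {
        // Double.toString produces the shortest decimal string that
        // maps back to the same double, so this prints 0.1.
        System.out.println(new BigDecimal(Double.toString(0.1)));
        // BigDecimal.valueOf(double) is documented shorthand for the
        // same Double.toString route; it also prints 0.1.
        System.out.println(BigDecimal.valueOf(0.1));
    }
}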
>> I don't really think that the concept of "approximation" is
>> appropriate here. There is a sense in which floating point computation
>> can be considered to be "like" precise computation with a certain
>> amount of random noise added, so that (like real physical
>> measurements) you only ever have an approximate number, and -- equally
>> importantly -- you don't know what the true number should be. That
>> picture is fine as an approximation (;-) but it doesn't really reflect
>> the semantics of floating point arithmetic.
> I think this is close to what's underlying my vague sense of unease --
> except for the part about "random noise". As you say:
>> The rules for Java FP are precise and exact, down to the last bit --
>> there is no approximation, or error involved at all[*].
> Well, if you add floating-point numbers of different magnitudes,
> there is round-off error involved -- possibly a precise and
> well-defined error, but error.
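That kind of round-off is easy to demonstrate (untested sketch, class
name invented):

public class MagnitudeDemo {
    public static void main(String[] args) {
        double big = 1.0e16;
        double small = 1.0;
        // ulp(1.0e16) is 2.0, so adding 1.0 cannot change the sum:
        // the addition rounds straight back to 1.0e16.
        System.out.println(Math.ulp(big));        // 2.0
        System.out.println((big + small) - big);  // 0.0
    }
}

The error is perfectly well defined -- the sum is correctly rounded --
but the 1.0 is still gone.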
Indeed, but generally the objective is to keep the error to a minimum.
For the most frequent operations, the error is the absolute minimum
possible. The rule for dealing with numbers exactly halfway between two
representable values (round half to even) is designed to be
predictable, yet to round roughly half the cases down and half up,
avoiding systematic bias.
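For example (untested sketch, invented class name), rounding exact
halves to scale 0 with HALF_EVEN goes down and up alternately:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class HalfEvenDemo {
    public static void main(String[] args) {
        // Halfway cases round to the even neighbour, so the direction
        // alternates instead of always rounding up.
        for (String s : new String[] {"0.5", "1.5", "2.5", "3.5"}) {
            System.out.println(s + " -> "
                    + new BigDecimal(s).setScale(0, RoundingMode.HALF_EVEN));
        }
        // 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4
    }
}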
>> If we programmers want to use floating point values to represent
>> something other than the specific set of rational numbers defined by
>> the IEEE format,
> Rational numbers with some unusual properties under arithmetic
> operations, no? E.g., addition is not always associative.
> Given that, I'm inclined to stick with my claim that it's more
> sensible to regard floating-point numbers as approximations of
> reals than as rationals, even though almost [*] every floating-point
> number has associated with it an exact rational value.
>
> [*] Excluding NaN and other "not a number" values (+/- Inf?).
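The non-associativity blmblm mentions is easy to show (untested sketch,
invented class name):

public class AssociativityDemo {
    public static void main(String[] args) {
        // The two groupings round at different points, so the results
        // differ in the last place.
        System.out.println((0.1 + 0.2) + 0.3);  // 0.6000000000000001
        System.out.println(0.1 + (0.2 + 0.3));  // 0.6
    }
}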
There is also -0.0, a double that is useful for representing numbers
with tiny absolute magnitude but known negative sign. The mathematical
rationals have a single zero, because arbitrarily tiny numbers have
rational representations.
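A small (untested) sketch of -0.0's behavior -- class name invented:

public class NegativeZeroDemo {
    public static void main(String[] args) {
        // Dividing a tiny negative value by a huge one underflows,
        // but the sign is preserved in the zero.
        double negZero = -1.0e-300 / 1.0e100;
        System.out.println(negZero);         // -0.0
        System.out.println(negZero == 0.0);  // true: == ignores the sign
        System.out.println(1.0 / negZero);   // -Infinity: the sign matters
    }
}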
> But I may still just not be looking at things from the most useful
> or reasonable perspective.
Remember that if you are doing double arithmetic there is no particular
reason to assume that the true value has a short decimal representation.
It is convenient to have a relatively short default conversion to
String, because people prefer short numbers.
Truncating on conversion to BigDecimal would introduce an arbitrary
rounding error. How many digits would you drop? If the conversion is
done exactly, the programmer has control, because setScale() can be
used to chop off digits. Generally, rounding error is a BAD THING, not
something to do for the fun of it.
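For instance (untested, class name invented), convert exactly and then
decide for yourself how much to keep:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class ProgrammerControl {
    public static void main(String[] args) {
        // The exact conversion preserves every bit of the double ...
        BigDecimal exact = new BigDecimal(1.0 / 3.0);
        // ... and setScale() lets the programmer choose the precision
        // and the rounding rule explicitly.
        System.out.println(exact.setScale(5, RoundingMode.HALF_EVEN));
        // 0.33333
    }
}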
The program at the end of this message calculates sqrt(2) two ways:
using the exact value of the Math.sqrt(2) result, and rounding that
value to 17 decimal places, the most that can be justified for a double
of magnitude around 1.0.
The square of the rounded square root is further from 2 than the square
of the raw square root without any rounding.
Raw: -2.7343234630647692806884916507957232351955020204815893780647684252471663057804107666015625E-16
Rounded: -2.862320485909535225E-16
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SquareRootTest {
    public static void main(String[] args) {
        test(2);
    }

    static void test(int square) {
        BigDecimal bigSquare = BigDecimal.valueOf(square);
        double dRoot = Math.sqrt(square);
        // Exact decimal expansion of the double returned by Math.sqrt.
        BigDecimal bigRoot = new BigDecimal(dRoot);
        // The same value rounded to 17 decimal places.
        BigDecimal roundedBigRoot =
                bigRoot.setScale(17, RoundingMode.HALF_EVEN);
        // How far each candidate root's square falls from the target.
        BigDecimal rawError = bigSquare.subtract(bigRoot.multiply(bigRoot));
        BigDecimal roundedError =
                bigSquare.subtract(roundedBigRoot.multiply(roundedBigRoot));
        System.out.println("Raw: " + rawError + " Rounded: " + roundedError);
    }
}
Patricia