Re: Java puzzler
On Sat, 14 May 2011, markspace wrote:

> On 5/14/2011 2:18 AM, Tom Anderson wrote:
>
>> I'm only saying that the semantics would be defined in terms of
>> infinite precision; the implementation could be anything the compiler
>> can come up with which simulates the semantics. Since we're talking
>> about fairly small expressions with integers, i would expect it to do
>> a good job of finding efficient ways to compute things. For example,
>> in the c = (b + b) - b case, it would be free to reduce that to c = b.
>
> I think I understand what you are saying, but that's not what I'm
> after, personally.
>
> What I'm saying is: at the machine level, the implementation is free
> to throw an exception if any intermediate calculation overflows. I
> think that's the difference between "infinite precision" and saying
> the compiler and optimizer have some latitude.

Okay, i think we now understand where we both stand. And, of course, i
think your desires are unnatural, and that you hate freedom.

> What I'm really worried about is how you simulate "infinite
> precision" with longs or something like that, where there is no
> larger-width word. It becomes a mess. Then there are also really
> large numbers of small-width integers, which could overflow quickly,
> but "infinite precision" means they don't overflow until the end.
> Both of those scenarios are impractical to implement with "infinite
> precision", I think.

I was mostly thinking about the infinite width rule being something that
gives permission for algebraic optimisation. For example, if you evaluate
c = (b + b) - b with infinite width, then whatever the value of b, you end
up with c = b. So, the infinite width rule gives the compiler permission
to make that optimisation.
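
To make that concrete: under Java's existing wrapping semantics,
c = (b + b) - b already yields c == b for every int b, because
two's-complement arithmetic is exact modulo 2^32, so an infinite-width
rule would merely license the compiler to skip the wrap-and-unwrap. A
minimal demonstration (the class name is mine):

public class FoldDemo {
    public static void main(String[] args) {
        int[] samples = { 0, 1, -1, Integer.MAX_VALUE, Integer.MIN_VALUE };
        for (int b : samples) {
            int c = (b + b) - b;  // b + b may wrap, but the wrap cancels
            System.out.println(b + " -> " + c + "  (c == b: " + (c == b) + ")");
        }
    }
}
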
But you're right that there are possible, even realistic, cases which
can't be optimised away, where a calculation with a tolerably small
result has some intolerably large intermediate values. Anything like:

long a, b, c;
long d = (a * b) % c;

runs that risk. Having access to a 128-bit integer type in the hardware
wouldn't fix it either, because:

long a, b, c, d, e, f, g, h, i, j, k, l;
long z = (a * b * c * d * e * f) / (g * h * i * j * k * l);

And because these are integers, i don't think you can get away with
rewriting to a more plausible:

long z = (a / g) * (b / h) * (c / i) * (d / j) * (e / k) * (f / l);
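
Indeed not: integer division doesn't distribute over the factors, so
that rewrite can change the answer even when nothing overflows. A
two-factor illustration (the values are arbitrary):

public class RewriteDemo {
    public static void main(String[] args) {
        long a = 3, b = 3, g = 2, h = 4;
        System.out.println((a * b) / (g * h));  // 9 / 8 = 1
        System.out.println((a / g) * (b / h));  // 1 * 0 = 0
    }
}
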
The problem is that not all correct results are practical to compute
with finite-width arithmetic. That means that as well as a rule saying
results have to be correct, we need a rule about when a computation
whose result would be correct is allowed to explode instead.
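
An implementation could always fall back to unbounded-width arithmetic
and narrow only at the end; the question is the cost. Here is a sketch
of that fallback for the (a * b) % c case, with BigInteger standing in
for whatever wide arithmetic the runtime would generate (the helper
name is hypothetical):

import java.math.BigInteger;

public class WideEval {
    // Evaluate (a * b) % c at unbounded width, narrowing only at the
    // end: the intermediate product cannot overflow, and remainder()
    // follows the sign convention of Java's %, with |result| < |c|,
    // so the final narrowing back to long is always exact.
    static long mulMod(long a, long b, long c) {
        return BigInteger.valueOf(a)
                .multiply(BigInteger.valueOf(b))
                .remainder(BigInteger.valueOf(c))
                .longValue();
    }
}
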
It's alright for a result not to be correct, as long as the system
throws an exception instead. That's the rule: incorrect result means
throw an exception.
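
For int addition, that rule is a three-line check using the classic
sign test; as it happens, this is the same contract that Java 8 later
shipped as Math.addExact (the class and method names here are mine):

public class Checked {
    // "Incorrect result means throw": overflow happened iff both
    // operands differ in sign from the wrapped result, in which case
    // raise instead of returning the wrapped value.
    static int checkedAdd(int x, int y) {
        int r = x + y;
        if (((x ^ r) & (y ^ r)) < 0) {
            throw new ArithmeticException("int overflow: " + x + " + " + y);
        }
        return r;
    }
}
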
> Just to be 100% clear, what I'm really after is for Java to use the
> hardware detection for integer overflow/underflow that already exists
> on most platforms. It's practically cost-free, and would really go a
> long way to making programs error-free, imo. That's what I'm gunning
> for.

Fair enough. I suppose my position is that this hardware ability has to be
used in the service of some solidly well-defined software semantics.
Throwing exceptions at random from arithmetic expressions is not what i
think of as solidly well-defined.
tom
--
Nullius in verba