Re: Java puzzler
On Sun, 15 May 2011, markspace wrote:
> On 5/15/2011 3:16 AM, Tom Anderson wrote:
>> Fair enough. I suppose my position is that this hardware ability has to
>> be used in the service of some solidly well-defined software semantics.
>> Throwing exceptions at random from arithmetic expressions is not what i
>> think of as solidly well-defined.
>
> I guess I'm not understanding what people think is wrong with the
> semantics of "overflow." Overflow is very well defined. It's something
> like 8 gates at the hardware level, and happens at the same speed as the
> add or subtract itself. I.e., about 1 CPU cycle in most cases nowadays.
>
> By contrast, the current semantics of integer math are that sometimes
> you get the right answer and sometimes you don't. I consider that to be
> non-deterministic.
>
> Here's a method to sum a list of numbers:
>
> public int sum( int[] nums ) {
>     int sum = 0;
>     for ( int n : nums ) sum += n;
>     return sum;
> }
>
> What I want is for this to throw an error if it can't compute the sum
> correctly. How is that non-semantic, even slightly? How is the current
> implementation, where sometimes this works and sometimes it doesn't, even
> *slightly* semantic in comparison?
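
For concreteness, the behaviour you're after would look roughly like this -
just a sketch, with a hand-rolled overflow test:

public int checkedSum( int[] nums ) {
    int sum = 0;
    for ( int n : nums ) {
        // adding n would take sum outside the int range: trap instead of wrap
        if ( (n > 0 && sum > Integer.MAX_VALUE - n)
          || (n < 0 && sum < Integer.MIN_VALUE - n) )
            throw new ArithmeticException( "integer overflow in sum" );
        sum += n;
    }
    return sum;
}
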
With the current semantics, for a given input - a given sequence of values
in nums - this code will always produce the same output, on all JVMs,
under all situations, optimised or not. That answer will sometimes be
utter rubbish, but it is consistently rubbish across all platforms. You
will recall that one of Java's earliest goals was just that - Write Once,
Rubbish Anywhere.
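
For instance, feeding your sum() an input I've made up for illustration:

int[] nums = { Integer.MAX_VALUE, 1 };
int s = sum( nums );   // -2147483648 on every conforming JVM: int addition
                       // wraps round in two's complement (JLS 15.18.2)
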
If the rules about overflow were as you want, which is roughly "if a
calculation carried out by the processor overflows, an exception is
thrown", then we would indeed no longer produce rubbish. But because
different machines - different processors, different compilers, different
applications of the same compiler - will carry out that calculation in
different ways (due to different hardware capabilities, and also having
made different choices about loop unrolling, use of SIMD instructions, use
of registers vs stores to the stack, etc), those hardware overflows will
happen differently. A given input will be successfully summed on one
machine, but will cause an exception on another machine. You now have
Write Once, Failure Somewhere (but Rubbish Nowhere).
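
To make that concrete, here's a made-up input and a hypothetical checked
add (the names are mine, purely illustrative):

class ReorderDemo {
    // throws if the mathematically exact sum doesn't fit in an int
    static int add( int a, int b ) {
        long r = (long) a + b;
        if ( r != (int) r ) throw new ArithmeticException( "integer overflow" );
        return (int) r;
    }

    public static void main( String[] args ) {
        int[] nums = { Integer.MAX_VALUE, 1, -1 };

        // Reassociated, as an unrolling or SIMD-using optimiser might
        // evaluate it: MAX_VALUE + (1 + -1). No intermediate ever leaves
        // the int range, so this prints 2147483647.
        System.out.println( add( nums[0], add( nums[1], nums[2] ) ) );

        // Left to right, as the Java source is written:
        // (MAX_VALUE + 1) + -1. The very first addition overflows, so on a
        // trap-on-overflow implementation this same input throws.
        System.out.println( add( add( nums[0], nums[1] ), nums[2] ) );
    }
}

Same program, same input - whether you get an answer or an exception
depends entirely on which order the machine happened to do the additions in.
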
You evidently feel that Rubbish Nowhere is a bigger win than Failure
Somewhere is a loss. Others do not.

What i was getting at with the infinite width stuff is the idea that you
might be able to define the semantics such that the results are still
consistent across all machines (now we've got Write Once, Right Anywhere).
But in doing so, you would constrain implementations so that they can't
optimise effectively. So you actually get Write Once, Right Anywhere -
Slowly.
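
One way to pin those semantics down is to do the arithmetic at a width
that can't overflow and check only at the end - a sketch, using BigInteger
as the "infinite width":

public int sum( int[] nums ) {
    java.math.BigInteger total = java.math.BigInteger.ZERO;
    for ( int n : nums )
        total = total.add( java.math.BigInteger.valueOf( n ) );  // exact, never overflows
    // single check at the end: does the exact sum fit back in an int?
    if ( total.compareTo( java.math.BigInteger.valueOf( Integer.MIN_VALUE ) ) < 0
      || total.compareTo( java.math.BigInteger.valueOf( Integer.MAX_VALUE ) ) > 0 )
        throw new ArithmeticException( "sum does not fit in an int" );
    return total.intValue();
}

Reordering the additions can't change that result, so it is consistent
everywhere - but each element now costs BigInteger operations rather than
one machine add.
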
So basically, it's a matter of consistent, correct, quick - pick two.
tom
--
a moratorium on the future