Re: Possible Loss of Precision - not caused by type conversion
Karl Uppiano wrote:
"Lew" <lew@lewscanon.nospam> wrote in message
news:PYqdnfNxeYmnkA3bnZ2dnUVZ_vqpnZ2d@comcast.com...
Lew wrote:
If you have an array of long with 2 billion entries, it will occupy over
16 GB of heap - the issue of your index will be moot.
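A quick back-of-the-envelope check of that figure (element data only,
ignoring the array object's header; the class name is purely illustrative):

    public class LongArrayFootprint {
        public static void main(String[] args) {
            long entries = Integer.MAX_VALUE;       // 2,147,483,647 elements, the array limit
            long bytes = entries * 8L;              // 8 bytes per long
            System.out.println(bytes + " bytes, roughly "
                    + bytes / 1000000000L + " GB"); // about 17 GB of element data alone
        }
    }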
Patricia Shanahan wrote:
Not if you have a 64 bit JVM and a server with a large memory.
(and -Xmx configured accordingly)
Good catch - even though I have a 64-bit machine for my own development, I
keep forgetting what a huge difference it makes. I found
<http://msdn2.microsoft.com/en-us/vstudio/aa700838.aspx>
which examines the effect objectively for both .NET and Java (IBM
WebSphere with their JVMs).
I agree that an argument can be made that it was shortsighted of Sun to
limit array indexes to int, but the fact is that they did and it is
documented in the JLS. Does Sun have a plan to change this, or to
introduce a large-array type to Java? A sparse-array type?
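No such type exists in the standard library today, but the idea can be
emulated in user code. A minimal sketch, assuming a chunked layout (the
class name and chunk size are mine, for illustration only): a long-indexed
"array" backed by ordinary int-indexed long[] segments.

    public class BigLongArray {
        private static final int CHUNK_BITS = 27;               // 2^27 longs = 1 GiB per chunk
        private static final int CHUNK_SIZE = 1 << CHUNK_BITS;
        private final long[][] chunks;
        private final long length;

        public BigLongArray(long length) {
            this.length = length;
            int nChunks = (int) ((length + CHUNK_SIZE - 1) >>> CHUNK_BITS);
            chunks = new long[nChunks][];
            for (int i = 0; i < nChunks; i++) {
                long remaining = length - ((long) i << CHUNK_BITS);
                chunks[i] = new long[(int) Math.min(CHUNK_SIZE, remaining)];
            }
        }

        public long get(long index) {
            return chunks[(int) (index >>> CHUNK_BITS)][(int) (index & (CHUNK_SIZE - 1))];
        }

        public void set(long index, long value) {
            chunks[(int) (index >>> CHUNK_BITS)][(int) (index & (CHUNK_SIZE - 1))] = value;
        }

        public long length() {
            return length;
        }
    }

With one-GiB chunks like these, a single logical array can span far past
Integer.MAX_VALUE elements on a 64-bit JVM with -Xmx set accordingly.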
I foresee a time when people will superciliously disparage those who once
thought 64 bits provided enough address range, along the lines of those now
parroting the "and you thought 640K was enough" canard.
Technology marches on, but the choice of 'int' for array indices was
probably a compromise between performance and size for the computers of the
mid-1990s. The question I have to ask is: how often does someone need to
look up one of 2 billion entries that quickly, and does it make sense to
have it all in memory at once? If not, then an array might not be the right
data structure anyway.
Of course one would want other data structures, such as a rectangular
matrix, but the array is a good starting point. Most data structure
operations can be expressed in terms of array access, and several of
Java's Collection classes are effectively built on arrays.
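As a small illustration of that point (my own sketch, not a java.util
class): a stack whose operations are nothing but array accesses, in the
same spirit as ArrayList wrapping an Object[] internally - and note that
its size, like the array beneath it, is an int.

    public class LongStack {
        private long[] data = new long[16];
        private int size;                       // int, like the array beneath it

        public void push(long v) {
            if (size == data.length) {          // grow by doubling, as array-backed lists do
                long[] bigger = new long[data.length * 2];
                System.arraycopy(data, 0, bigger, 0, size);
                data = bigger;
            }
            data[size++] = v;
        }

        public long pop() {
            return data[--size];
        }
    }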
There are a lot of tasks that can be done out-of-core, with the program
explicitly transferring data between slices of a file (representing the
logical array) and chunks of memory. However, such algorithms are
significantly more complicated to code than their in-core equivalents. For
example, I remember a program for solving 50,000 linear equations in double
complex that was primarily a data movement program, copying chunks of a
single logical array between files and memory.
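A minimal sketch of that pattern in Java, assuming a raw file of longs
named "matrix.dat" (the file name, slice size, and the trivial summing step
are placeholders for the real computation):

    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class OutOfCoreSum {
        static final int SLICE_LONGS = 1 << 20;         // 1M longs, 8 MB per slice

        public static void main(String[] args) throws Exception {
            RandomAccessFile raf = new RandomAccessFile("matrix.dat", "r");
            FileChannel ch = raf.getChannel();
            long totalLongs = ch.size() / 8;
            ByteBuffer buf = ByteBuffer.allocate(SLICE_LONGS * 8);
            long sum = 0;
            for (long start = 0; start < totalLongs; start += SLICE_LONGS) {
                buf.clear();
                ch.read(buf, start * 8);                // load one slice from the file
                buf.flip();
                while (buf.remaining() >= 8) {
                    sum += buf.getLong();               // stand-in for the real computation
                }
            }
            System.out.println("sum = " + sum);
            ch.close();
            raf.close();
        }
    }

Every slice fits comfortably in a small heap, but the bookkeeping (slice
boundaries, the partial last slice, read positions) is exactly the extra
complexity the in-core version avoids.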
At each increase in memory size so far, there have turned out to be jobs
that were best expressed using a single array occupying most of the newly
available memory. One of the benefits of increased memory size is making
those jobs simpler, by allowing the natural large-array representation.
Why should the Integer.MAX_VALUE boundary be different?
Patricia