Re: Hash table performance
On Sat, 21 Nov 2009 18:33:14 +0000, Jon Harrop <jon@ffconsultancy.com>
wrote, quoted or indirectly quoted someone who said :
import java.util.Hashtable;

public class Hashtbl {
    public static void main(String[] args) {
        Hashtable<Double, Double> hashtable = new Hashtable<Double, Double>();
        for (int i = 1; i <= 10000000; ++i) {
            double x = i;
            hashtable.put(x, 1.0 / x);
        }
        System.out.println("hashtable(100.0) = " + hashtable.get(100.0));
    }
}
Some more datapoints:

java Hashtbl --> out of memory error
java -Xmx1000m Hashtbl (hashtable(100.0) = 0.01) --> 29 secs
Hashtbl.exe (JET, statically compiled; hashtable(100.0) = null) --> 5 secs
(Why did this code fail? I have reported the bug to JET.)
Machine is an Athlon 64 X2 3800+, 2 GHz, with 3 GB RAM.
Since you did not specify an initial capacity, the table starts at
Hashtable's small default size and must rehash repeatedly as it grows,
roughly doubling its internal array and recopying every entry each time.
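A minimal sketch of presizing to avoid those rehashes (the class name and the smaller element count are my own choices for illustration, not from the original post):

```java
import java.util.Hashtable;

public class HashtblPresized {
    public static void main(String[] args) {
        int n = 1000000; // smaller than the original 10 million, just to illustrate
        // Presize so that n entries fit under the default 0.75 load factor;
        // the table then never needs to rehash while we fill it.
        Hashtable<Double, Double> hashtable =
                new Hashtable<Double, Double>((int) (n / 0.75) + 1);
        for (int i = 1; i <= n; ++i) {
            double x = i;
            hashtable.put(x, 1.0 / x);
        }
        System.out.println("hashtable(100.0) = " + hashtable.get(100.0));
    }
}
```

The same presizing trick works for HashMap, which would also drop the per-call synchronization overhead that Hashtable carries.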
Most of us now have dual-core machines. Perhaps with such large
datasets you could split the work so two CPUs run simultaneously,
e.g. two Hashtables, one for small numbers and one for big ones.
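A rough sketch of that idea (the class name, the midpoint split, and the thread setup are my own assumptions, not from the original post):

```java
import java.util.Hashtable;

public class HashtblSplit {
    public static void main(String[] args) throws InterruptedException {
        final int n = 1000000;
        // One presized table per core, splitting the key range at the midpoint.
        final Hashtable<Double, Double> low =
                new Hashtable<Double, Double>((int) (n / 2 / 0.75) + 1);
        final Hashtable<Double, Double> high =
                new Hashtable<Double, Double>((int) (n / 2 / 0.75) + 1);

        Thread t1 = new Thread(new Runnable() {
            public void run() {
                for (int i = 1; i <= n / 2; ++i) {
                    double x = i;
                    low.put(x, 1.0 / x);
                }
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                for (int i = n / 2 + 1; i <= n; ++i) {
                    double x = i;
                    high.put(x, 1.0 / x);
                }
            }
        });
        t1.start(); t2.start();
        t1.join();  t2.join();

        // Each thread owns its own table, so no locking is contended
        // during the fill; look up in whichever table owns the key.
        double key = 100.0;
        Hashtable<Double, Double> owner = key <= n / 2 ? low : high;
        System.out.println("hashtable(100.0) = " + owner.get(key));
    }
}
```

Since each thread writes only its own table, plain HashMaps would work here too and skip Hashtable's synchronization entirely.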
--
Roedy Green Canadian Mind Products
http://mindprod.com
I mean the word proof not in the sense of the lawyers, who set two half proofs equal to a whole one, but in the sense of a mathematician, where half proof = 0, and it is demanded for proof that every doubt becomes impossible.
~ Carl Friedrich Gauss