Re: optimised HashMap
On 11/24/2012 1:42 AM, Roedy Green wrote:
On Fri, 23 Nov 2012 17:33:43 -0800, markspace <-@.> wrote, quoted or
indirectly quoted someone who said :
I'm not sure what you are trying to say there. You want the case where
you do not find something in a hash map to be optimized? "Optimized" how?
What do you mean "add to the list of words" and "freeze"?
The following is not the real problem, but it might more simply
illustrate what I am asking.
Think of an ordinary HashMap<String,String>
What it does is translate a few English words of French derivation,
putting the French accents back on them, e.g. naive -> naïve, Napoleon -> Napoléon.
Let us say you have 100 such words you want to transform. (In my
actual problem I have about 1500 words).
You go through the files for a website, looking up each word of text
(avoiding HTML markup) in the HashMap. If you find it, you replace it.
Most of the time the word you look up is not in the list.
This is a time-consuming process. I would like to speed it up.
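A minimal sketch of that loop, assuming a plain HashMap<String,String>. The sample table and the input sentence here are placeholders (the real table has ~1500 entries):

```java
import java.util.HashMap;
import java.util.Map;

public class AccentFixer {
    // Hypothetical sample of the translation table; the real one is larger.
    static Map<String, String> sampleTable() {
        Map<String, String> t = new HashMap<>();
        t.put("naive", "naïve");
        t.put("Napoleon", "Napoléon");
        return t;
    }

    // One HashMap lookup per word; null means "not in the table, keep as-is".
    static String fix(String word, Map<String, String> table) {
        String replacement = table.get(word);
        return replacement != null ? replacement : word;
    }

    public static void main(String[] args) {
        Map<String, String> table = sampleTable();
        StringBuilder out = new StringBuilder();
        for (String w : "a naive idea".split(" ")) {
            out.append(fix(w, table)).append(' ');
        }
        System.out.println(out.toString().trim()); // a naïve idea
    }
}
```

Note that the miss case is already the cheap path here: a single get() that returns null.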
Reading the HTML files, splitting them up into words and discarding
HTML seems more likely to be the bottleneck than the lookups in a
HashMap. What do your measurements show about how CPU time and wall
time are distributed between the different activities?
You did measure right??
My lookup has two properties that might be exploited in some variant
of HashMap:
1. nearly always the lookup fails. The code should be optimised for
this case. If it has some fast way of knowing the element is not
there, it should do that first.
HashMap should already do that.
2. the list of words to look up does not change after initial
preparation. I can afford to do some special calculation to prime the
lookup. For example, I once heard of some tweaking to avoid long
collision chains for a C implementation of HashMap.
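One cheap form of "priming" that java.util.HashMap already supports is sizing the table up front: a generous initial capacity plus a low load factor keeps buckets sparse and avoids any rehashing after the one-time build. A sketch; the capacity and load factor below are illustrative guesses, not tuned values:

```java
import java.util.HashMap;
import java.util.Map;

public class PrimedMap {
    // Build the lookup table once, with a capacity well above the word
    // count, so collision chains stay short and no resize ever happens.
    static Map<String, String> prime(Map<String, String> words) {
        // For ~1500 words, 4096 buckets at load factor 0.5 stays sparse.
        Map<String, String> m = new HashMap<>(4096, 0.5f);
        m.putAll(words);
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> src = new HashMap<>();
        src.put("naive", "naïve");
        System.out.println(prime(src).get("naive")); // naïve
    }
}
```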
If you have long collision chains then you can obviously improve
by choosing a better hash function.
But do you have long collision chains?
Java String hash is not that bad.
You can easily test number of collisions.
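A quick way to test it, assuming the bucket indexing HashMap uses since Java 8 (spread the hashCode, then mask by a power-of-two capacity):

```java
public class CollisionCount {
    // Counts how many words land in an already-occupied bucket of a table
    // with the given capacity, mimicking HashMap's bucket selection.
    static int collisions(String[] words, int capacity) {
        int[] bucket = new int[capacity];
        int collisions = 0;
        for (String w : words) {
            int h = w.hashCode();
            h ^= (h >>> 16);            // HashMap's hash spreading (Java 8+)
            int i = h & (capacity - 1); // capacity must be a power of two
            if (bucket[i]++ > 0) collisions++;
        }
        return collisions;
    }

    public static void main(String[] args) {
        String[] words = {"naive", "Napoleon", "cafe", "resume", "expose"};
        System.out.println(collisions(words, 16) + " collisions in 16 buckets");
    }
}
```

Run this over the real 1500-word list at the capacity your map would actually use before concluding the chains are long.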
Another way of looking at the problem is it would be nice to have a
HashSet implementation that was considerably faster than a HashMap.
IIRC, currently HashSet is implemented as a HashMap.
Not carrying data will not make it considerably faster.
Hashing and lookup are the same.
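A small illustration of that point: since HashSet delegates to a backing HashMap internally, a failed contains() and a failed get() do the same hashing and probing, so switching to a set should buy nothing on misses.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SetVsMap {
    public static void main(String[] args) {
        // HashSet is a HashMap with a dummy value object, so the miss
        // path (hash, index bucket, find nothing) is identical.
        Set<String> set = new HashSet<>();
        Map<String, String> map = new HashMap<>();
        set.add("naive");
        map.put("naive", "naïve");

        System.out.println(set.contains("banana")); // false
        System.out.println(map.get("banana"));      // null
    }
}
```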