Re: optimised HashMap
On 24.11.2012 12:39, Roedy Green wrote:
On Sat, 24 Nov 2012 10:21:14 -0000, "Chris Uppal"
<chris.uppal@metagnostic.REMOVE-THIS.org> wrote, quoted or indirectly
quoted someone who said :
Look into the literature on fast text searching (for instance bit-parallel
matching). It's not entirely clear to me what Roedy is trying to do, but it
sounds as if "bulk" matching/searching might be relevant.
Yes: a Boyer-Moore-style scan that searches for the whole list of
words simultaneously (classic Boyer-Moore handles a single pattern;
multi-pattern variants such as Commentz-Walter or Wu-Manber extend the
idea), and then, when it gets a hit, a check that the match is a word
in isolation rather than a word fragment.
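The post-hit check could look something like this (a sketch of my own;
the name isWholeWord is made up, and the bulk matcher is assumed to
report hit offsets [start, end)):

```java
public class WordHitCheck {

    // After the bulk matcher reports a hit at [start, end), verify that
    // the match is a whole word: the neighbouring characters (if any)
    // must not be word characters.
    static boolean isWholeWord(CharSequence text, int start, int end) {
        boolean leftOk = start == 0
                || !Character.isLetterOrDigit(text.charAt(start - 1));
        boolean rightOk = end == text.length()
                || !Character.isLetterOrDigit(text.charAt(end));
        return leftOk && rightOk;
    }

    public static void main(String[] args) {
        String doc = "a foot of foo, no fool";
        System.out.println(isWholeWord(doc, 2, 6));   // "foot" -> true
        System.out.println(isWholeWord(doc, 18, 21)); // "foo" in "fool" -> false
    }
}
```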
Here's another approach:
1. Fill a HashMap with the translations.
2. Create a tree or trie from the keys.
3. Convert the trie to a regular expression optimized for NFA-based
regex engines (such as the one in the Java standard library).
4. Wrap the regexp in additional regexp syntax (e.g. word boundaries)
to ensure whole-word matches, and probably to exclude matches inside
HTML tags.
5. Scan the document with Matcher.find()
The idea of item 3 is to create a regexp with as little backtracking as
possible. For example, from
foo
foot
fuss
you make
f(?:oot?|uss)
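A quick self-contained check (plain java.util.regex, nothing else)
that the factored pattern, with the shared f prefix pulled outside the
alternation as f(?:oot?|uss), accepts exactly these keys and rejects
fragments:

```java
import java.util.regex.Pattern;

public class FactoredPatternCheck {
    public static void main(String[] args) {
        // The common prefix "f" sits outside the group, so the engine
        // never re-reads it while trying the alternatives.
        Pattern p = Pattern.compile("f(?:oot?|uss)");
        for (String s : new String[] {"foo", "foot", "fuss", "fo", "uss"}) {
            System.out.println(s + " -> " + p.matcher(s).matches());
        }
        // foo, foot, fuss -> true; fo, uss -> false
    }
}
```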
I am not sure, though, whether this is dramatically faster or slower
than a standard single-pattern string search like Boyer-Moore -
probably not.
Kind regards
robert
--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/