Re: rules for iteration invalidation in std::map and tr1::unordered_map
Content-Type: text/plain; charset=ISO-8859-1
X-Original-Date: Wed, 10 Sep 2008 09:51:37 -0700 (PDT)
[ How to implement a hash map that does not invalidate iterators on
insert and erase ]
To do as you want, each node would have to store extra information.
Each node would have a pointer to the hash map, know which bucket it
is in, and know the offset into the bucket (or an equivalent).
Sorry, no. You do not need this information.
I'm sorry if I'm missing the obvious quick solution. If I am, please
There are many implementations of hash_map that have these properties
and have been proven to work well in practice. You can look, for
example, at STLPort's hash_map, GNU's __gnu_cxx::hash_map (in GCC's
include/c++/bits/hash_map), Dinkumware's hash_map, etc. There are many
of them. Here is my solution, though.
The key idea is that when you work with a certain bucket, you do it in
the context of the hash map: you know which bucket it is, so you do
not need to have this information explicitly stored in the bucket's
nodes. The same holds for pointers to the hash map; you always know
which hash map the nodes belong to, because both insert and erase
operations are called on a particular instance of the hash map.
The whole hash map would maintain an array of buckets, each bucket
being a doubly-linked list of nodes. Each node would only store
pointers to the next and previous nodes in the same bucket, plus
pointers to the next and previous nodes in the global list of all
nodes in the hash_map. The global list would be maintained such that
it represents the buckets spliced together in a certain, potentially
random order (that is, if node A precedes node B in a certain bucket,
then A would immediately precede node B in the global list). This
order would be determined implicitly by the order in which insert
operations are performed. Iterators would point directly to the nodes.
So we will have something like:
struct Node {
    Node * next_in_bucket;
    Node * prev_in_bucket;
    Node * next_in_table;
    Node * prev_in_table;
    // ... plus the stored element itself
};

std::vector< Node > buckets; // Tails of bucket lists.
Node tail;                   // Tail of the global list.
On insert, a new node would be put into the appropriate bucket and
also spliced into the global list of all nodes. That is, if the new
node A is put before node B in a bucket, the same node A would be
spliced right before node B in the global list (if the bucket was
empty, so there was no B in the bucket, node A gets spliced to the
front of the global list; this determines the order in which buckets
appear in the global list). If the load factor exceeded a limit, the
whole hash map would get rehashed first: all nodes would simply be
removed from the original hash map and reinserted into the new one.
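The insert path above can be sketched as follows. This is only an
illustrative toy, not the exact layout described here: the name
`TinyMap` is made up, the buckets are plain head pointers rather than
sentinel tail nodes, the bucket count is fixed, and there is no
rehashing.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch: each node sits in its bucket list and in one
// global list, spliced so that the nodes of a bucket stay adjacent in
// the global list.
struct Node {
    int key;
    Node* next_in_bucket = nullptr;
    Node* prev_in_bucket = nullptr;
    Node* next_in_table = nullptr;
    Node* prev_in_table = nullptr;
};

struct TinyMap {                             // hypothetical name
    std::vector<Node*> buckets{4, nullptr};  // heads of bucket lists
    Node* head = nullptr;                    // head of the global list

    Node* insert(int key) {
        Node* n = new Node{key};
        std::size_t b = static_cast<std::size_t>(key) % buckets.size();
        Node* first = buckets[b];            // "node B", if any
        // Link into the bucket list, in front of B.
        n->next_in_bucket = first;
        if (first) first->prev_in_bucket = n;
        buckets[b] = n;
        // Splice into the global list: right before B, or at the very
        // front when the bucket was empty.
        Node* pos = first ? first : head;
        n->next_in_table = pos;
        if (pos) {
            n->prev_in_table = pos->prev_in_table;
            pos->prev_in_table = n;
        }
        if (n->prev_in_table) n->prev_in_table->next_in_table = n;
        else head = n;
        return n;
    }
};
```

Note that existing nodes are never moved or copied by insert, which is
exactly why pointers (and hence iterators) to them remain valid.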
On erase, the node corresponding to the element to be deleted would be
unlinked from both its bucket and the global list of all nodes. All
that really needs to be done here is unlinking the node from two
doubly-linked lists. You do not have to know which bucket the node to
be deleted happened to belong to.
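The erase step can be sketched as a single unlink function. With both
lists doubly linked, erasing is pure pointer surgery on the node;
nothing below consults the bucket array or the hash function. (In the
full design the bucket sentinel nodes would make the prev pointers
always non-null; the null checks here stand in for that.)

```cpp
#include <cassert>

// Unlink a node from both intrusive doubly-linked lists. No bucket
// index, hash map pointer, or hash computation is needed.
struct Node {
    Node* next_in_bucket = nullptr;
    Node* prev_in_bucket = nullptr;
    Node* next_in_table = nullptr;
    Node* prev_in_table = nullptr;
};

void unlink(Node* n) {
    if (n->prev_in_bucket) n->prev_in_bucket->next_in_bucket = n->next_in_bucket;
    if (n->next_in_bucket) n->next_in_bucket->prev_in_bucket = n->prev_in_bucket;
    if (n->prev_in_table)  n->prev_in_table->next_in_table  = n->next_in_table;
    if (n->next_in_table)  n->next_in_table->prev_in_table  = n->prev_in_table;
}
```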
Hope this clarifies things a little. The only more involved thing is
the global list. It allows one to iterate over all nodes in the hash
map while still allowing the iterator to be defined as a plain pointer
to a node. The whole thing can easily be extended to a multimap.
However, by forcing me to think about it, I have started to share your
curiosity as to why it was done with iterators that are invalidated on
Yes, that is what I want to know.
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]