Re: Container performance
"Kaba" <REkalleMOunderscoreVErutanenME@hotmail.com> wrote in message
news:MPG.207f3a9dd11ca4af989864@news.cc.tut.fi...
> This is a combined answer to Kevin, Mirek, Andre and Ondra.
>
> I am not using GCC; I must see if I can get the timing results from
> my friend. Anyway, this slow performance should not come as a
> surprise, and on a rough scale it does not depend on the
> implementation. Here's why:
>
> By current containers and current memory allocation, I refer to the
> problems with current allocators. As one can read, for example, in
> Scott Meyers's "Effective STL", Items 10 and 11, the use of STL
> allocators is very, very limited. Let me quote the relevant part:
>
> "That's all well and good, but the more you think about it, the more
> you'll realize just how draconian a restriction it is that STL
> implementations may assume that allocators of the same type are
> equivalent. It means that portable allocator objects -- allocators
> that will function correctly under different STL implementations --
> may not have state. Let's be explicit about this: it means that
> portable allocators may not have any nonstatic data members, at
> least not any that affect their behavior. None. Nada."
True, but implementations are encouraged to support allocators
that don't compare equal. The Dinkum C++ Library, shipped with
Microsoft VC++ (among many others), supports such allocators.
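
To make that concrete, here is a bare-bones sketch of a stateful
allocator. The names Arena and ArenaAlloc are invented for
illustration, and the usual typedefs, rebind, and construct/destroy
boilerplate is omitted for brevity; a shipping allocator needs the
full complement of members. The point is simply that each instance
carries state, and two instances compare equal only when they share
an arena:

#include <cstddef>
#include <new>

struct Arena {                        // trivial bump allocator, for show
    union {
        long double align_;           // force worst-case alignment, crudely
        char buf[1 << 16];
    } store;
    std::size_t used;

    Arena() : used(0) {}

    void* take(std::size_t n) {
        n = (n + 7) & ~(std::size_t)7; // keep later blocks aligned
        if (used + n > sizeof store.buf)
            throw std::bad_alloc();
        void* p = store.buf + used;
        used += n;
        return p;
    }
};

template <class T>
struct ArenaAlloc {
    typedef T value_type;
    Arena* arena;                     // the per-instance state

    explicit ArenaAlloc(Arena* a) : arena(a) {}
    template <class U>
    ArenaAlloc(const ArenaAlloc<U>& other) : arena(other.arena) {}

    T* allocate(std::size_t n)
        { return static_cast<T*>(arena->take(n * sizeof(T))); }
    void deallocate(T*, std::size_t)
        {}                            // the arena releases everything at once
};

template <class T, class U>
bool operator==(const ArenaAlloc<T>& a, const ArenaAlloc<U>& b)
    { return a.arena == b.arena; }    // equal only when sharing an arena
template <class T, class U>
bool operator!=(const ArenaAlloc<T>& a, const ArenaAlloc<U>& b)
    { return !(a == b); }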
> So where does the requirement for equivalence come from? I am aware
> of just one answer: std::list::splice().
Nope. The main motivation was the belief that allocators couldn't
really support much variety in things like pointer and reference
types, so the prospect of useful allocators supporting multiple
pools was illusory. The "compromise" was to permit implementations
to support only simple allocators, while encouraging smarter ones.
In our implementation, if you splice between two lists whose
allocators don't compare equal, the nodes get copied. Not a
big deal.
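
As a usage sketch, reusing the hypothetical ArenaAlloc above: the
Standard gives you no guarantees when the allocators compare unequal,
so portable code shouldn't rely on what follows, but under an
implementation like ours the elements get copied across:

#include <list>

void splice_demo() {
    Arena a1, a2;
    ArenaAlloc<int> alloc1(&a1), alloc2(&a2);
    std::list<int, ArenaAlloc<int> > from(alloc1);
    std::list<int, ArenaAlloc<int> > to(alloc2);
    from.push_back(1);
    from.push_back(2);
    // alloc1 != alloc2 (different arenas), so the nodes cannot simply
    // be relinked into 'to'; an implementation that supports unequal
    // allocators copies the elements instead.
    to.splice(to.end(), from);
}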
> So your only remaining choice is to use new and delete for every
> single node. This is the source of the inefficiency, and it is of
> the practical kind. I am not suggesting that there are any problems
> with asymptotic complexity.
Also not true. You can still write node allocators that maintain
pools of reusable nodes. Indeed, we provide several such allocators
with our Compleat Library, available at our web site as an add-on
to VC++, gcc, and other popular compilers.
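
Not the actual library code, but the idea is roughly this (again a
bare-bones sketch with the boilerplate members omitted): keep freed
single-node blocks on a per-instance free list and hand them back
out, so steady-state insertion and erasure never touch operator new
or operator delete:

#include <cstddef>
#include <new>

template <class T>
struct PoolAlloc {
    typedef T value_type;

    struct Node { Node* next; };
    Node* free_list;                  // per-instance list of reusable blocks

    PoolAlloc() : free_list(0) {}
    PoolAlloc(const PoolAlloc&) : free_list(0) {}  // pools are not shared
    template <class U>
    PoolAlloc(const PoolAlloc<U>&) : free_list(0) {}
    PoolAlloc& operator=(const PoolAlloc&) { return *this; }
    ~PoolAlloc() {                    // return pooled blocks to the heap
        while (free_list) {
            Node* p = free_list;
            free_list = p->next;
            ::operator delete(p);
        }
    }

    static std::size_t block_size()   // big enough for a T or a link
        { return sizeof(T) > sizeof(Node) ? sizeof(T) : sizeof(Node); }

    T* allocate(std::size_t n) {
        if (n == 1 && free_list) {    // reuse a pooled block if possible
            Node* p = free_list;
            free_list = p->next;
            return reinterpret_cast<T*>(p);
        }
        return static_cast<T*>(::operator new(
            n == 1 ? block_size() : n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        if (n == 1) {                 // pool single blocks instead of freeing
            Node* q = reinterpret_cast<Node*>(p);
            q->next = free_list;
            free_list = q;
        } else
            ::operator delete(p);
    }
};

template <class T, class U>
bool operator==(const PoolAlloc<T>&, const PoolAlloc<U>&) { return true; }
template <class T, class U>
bool operator!=(const PoolAlloc<T>&, const PoolAlloc<U>&) { return false; }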
> Note that the vector has the fastest insertion and deletion times.
> This is to be expected: there is only a logarithmic number of
> allocations, and deallocation is a single delete[] for _all data_.
True.
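
Easy enough to check with an instrumented allocator; CountingAlloc is
just an invented name for this sketch:

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

static int allocations = 0;           // bumped on every allocate() call

template <class T>
struct CountingAlloc {
    typedef T value_type;
    CountingAlloc() {}
    template <class U> CountingAlloc(const CountingAlloc<U>&) {}
    T* allocate(std::size_t n) {
        ++allocations;
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <class T, class U>
bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&)
    { return true; }
template <class T, class U>
bool operator!=(const CountingAlloc<T>&, const CountingAlloc<U>&)
    { return false; }

int main() {
    std::vector<int, CountingAlloc<int> > v;
    for (int i = 0; i < 1000000; ++i)
        v.push_back(i);
    // With a growth factor of 2 this prints about 21, not 1000000,
    // and destroying the vector frees just one block.
    std::cout << allocations << " allocations\n";
    return 0;
}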
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com