Re: C++ Memory Management Innovation: GC Allocator
On Apr 21, 10:43 pm, marlow.and...@googlemail.com wrote:
On 21 Apr, 19:13, xushiwei <xushiwe...@gmail.com> wrote:
I took a quick look with particular interest
in what you have to say about scoped allocators.
It seems to me that there is some overlap between
what you are doing and the work by Pablo Halpern in
his N2523 submission to WG21 entitled "The Scoped Allocator
Model". See http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2523.pdf
IMO Pablo's proposal is more thought through.
I recommend you take a look.
Pablo doesn't actually propose any allocator. His paper only describes a
technique for applying allocators to containers. The only "scoped
allocator" in there is a wrapper around the system allocator.
Pablo's work doesn't say anything about how to write allocators that
could be scoped. Rather, it says that if such allocators could be written,
this is how to pass them from one container to another.
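For concreteness, here is a minimal sketch of the propagation idea. It is
my own illustration, not anything from the paper, and it is written against
the std::scoped_allocator_adaptor that eventually grew out of this work;
with plain std::allocator nothing observable changes, but with a stateful
pool allocator the inner containers would draw from the outer one's pool:

#include <memory>
#include <scoped_allocator>
#include <vector>

typedef std::vector<int> Inner;
typedef std::vector<Inner,
            std::scoped_allocator_adaptor<std::allocator<Inner> > > Outer;

int main() {
    Outer v;
    v.emplace_back();        // the inner vector is handed the outer allocator
    v.back().push_back(42);  // ...so its elements come from the same source
    return 0;
}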
And that is the rub: writing an allocator that could be "scoped" is
very hard indeed. The stated intent of "scoped allocators" is to
change the semantics of the STL "regular type" such that any property that
does not contribute to operator== becomes a runtime policy instead of a
compile-time policy. As such, it has little to do with allocators.
It is just that in the std containers the allocator is the only policy
that does not contribute to operator==.
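To make that point about operator== concrete, here is a small illustration
of my own, using the std::pmr machinery that arrived in C++17, long after
this discussion: the two vectors are the same type and hold the same
elements, but are fed by entirely different memory resources, and
operator== neither knows nor cares:

#include <cassert>
#include <memory_resource>
#include <vector>

int main() {
    std::pmr::monotonic_buffer_resource r1;   // two unrelated arenas
    std::pmr::monotonic_buffer_resource r2;
    std::pmr::vector<int> a({1, 2, 3}, &r1);
    std::pmr::vector<int> b({1, 2, 3}, &r2);
    assert(a == b);   // equal: the allocator is a runtime policy invisible to ==
    return 0;
}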
So OK, now I have a way to propagate allocators from one container to
another. That leaves two questions: 1) how do I write a non-trivial
allocator that *could* be scoped, and 2) just what problem am I solving by
passing this allocator instance from one container to the next?
xushiwei's work is far more relevant, in that it actually presents
something useful. It is not ready to be standardized, but I am
personally interested in writing up a few allocators for
standardization.
One final point: it seems to me that there is a need
to bear thread safety in mind. You just say "it's not
needed". Can you explain why, please?
If you look at http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2486.pdf
page three you will see an example of this usage. The idea is this:

void* threadfunc(void*) {
    std::list<X> x;      // assume X itself allocates no memory
    x.resize(10000);
    return 0;
}
Now, when I start up N threads, the list instances *should* be
independent, but this does not happen. Rather, they contend via their
allocator.
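(For reference, a harness for this kind of measurement can be as small as
the following sketch of mine; the thread count is an arbitrary choice, and
the contention shows up as soon as all the threads hammer the one global
allocator.)

#include <pthread.h>

void* threadfunc(void*);     // as defined above

enum { NTHREADS = 8 };       // arbitrary; pick whatever your machine has cores for

int main() {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; ++i)
        pthread_create(&tid[i], 0, threadfunc, 0);
    for (int i = 0; i < NTHREADS; ++i)
        pthread_join(tid[i], 0);
    return 0;
}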
So the idea is this:

void* threadfunc2(void*) {
    SpecialAlloc sa;     // grabs memory from the stack, or in big chunks from the heap
    std::list<X, SpecialAlloc> x(sa);
    x.resize(10000);
    return 0;
}
Now each thread is isolated, and SpecialAlloc does not need to be
thread-safe.
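For the curious, here is a rough sketch of what such a SpecialAlloc could
look like. This is my own guess at its shape (nothing like it appears in
the original post), spelled as a class template over a per-thread arena
that grabs big chunks from the heap, never locks, and frees everything in
one go when the thread finishes:

#include <cstddef>
#include <list>
#include <new>
#include <vector>

class Arena {   // per-thread chunk owner: no locks, everything freed at the end
public:
    explicit Arena(std::size_t chunkSize = 64 * 1024)
        : chunkSize_(chunkSize), cur_(0), avail_(0) {}
    ~Arena() {
        for (std::size_t i = 0; i < chunks_.size(); ++i)
            ::operator delete(chunks_[i]);
    }
    void* allocate(std::size_t n) {
        n = (n + 15) & ~std::size_t(15);     // keep allocations 16-byte aligned
        if (n > avail_) {                    // carve out a fresh big chunk
            std::size_t sz = n > chunkSize_ ? n : chunkSize_;
            cur_ = static_cast<char*>(::operator new(sz));
            chunks_.push_back(cur_);
            avail_ = sz;
        }
        void* p = cur_; cur_ += n; avail_ -= n;
        return p;
    }
    void deallocate(void*, std::size_t) {}   // individual frees are no-ops
private:
    std::size_t chunkSize_;
    char* cur_;
    std::size_t avail_;
    std::vector<char*> chunks_;
};

template <class T>
struct SpecialAlloc {        // minimal allocator interface over an Arena
    typedef T value_type;
    explicit SpecialAlloc(Arena& a) : arena(&a) {}
    template <class U> SpecialAlloc(const SpecialAlloc<U>& o) : arena(o.arena) {}
    T* allocate(std::size_t n)
        { return static_cast<T*>(arena->allocate(n * sizeof(T))); }
    void deallocate(T* p, std::size_t n)
        { arena->deallocate(p, n * sizeof(T)); }
    template <class U> bool operator==(const SpecialAlloc<U>& o) const
        { return arena == o.arena; }
    template <class U> bool operator!=(const SpecialAlloc<U>& o) const
        { return arena != o.arena; }
    Arena* arena;
};

struct X { int payload[4]; };    // stand-in for the element type

void* threadfunc2(void*) {
    Arena arena;                 // lives and dies with this thread
    SpecialAlloc<X> sa(arena);
    std::list<X, SpecialAlloc<X> > x(sa);
    x.resize(10000);
    return 0;
}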
The difference can be dramatic: See
http://groups.google.com/group/comp.lang.c++.moderated/browse_thread/thread/a1eaab58c5bb390c/b763830e33a45985?hl=en&tvc=2#b763830e33a45985
for a test I did. The formatting is not the best, but you can get the
idea. In this case I used a lock-free allocator, but when I repeated
the test using something like SpecialAlloc, I got better performance
still.
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]