Re: How do you create an efficient _and_ scaleable multi-threaded allocator..
"Chris Thomasson" <cristom@comcast.net> wrote in message
news:rLGdnersrM0DFYHanZ2dnUVZ_hWdnZ2d@comcast.com...
"Bill Todd" <billtodd@metrocast.net> wrote in message
news:HcWdnZMPHa6BxYbanZ2dnUVZ_rKtnZ2d@metrocastcablevision.com...
[...]
Any thoughts/comments/suggestions/rants/raves?
Describing how blocks eventually get deallocated from the free lists back
to the originating thread's heap (without requiring interlocks that
defeat your goal of not having them on such heaps) so that those heaps
don't become so fragmented that they become unusable might be nice. So
might a simulation to demonstrate that the free lists themselves don't
create increasingly unusable (effectively, fragmented) storage for their
temporary owners (surely you're not assuming that all block requests are
equal in size).
I use the per-thread, user-provided allocator to allocate a simple
per-thread segregated array of slabs. Something like:
[...]
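Roughly, the per-thread state might look something like the sketch below; the
size-class layout and the names other than freelist and the fp_user_defined_*
function pointers are only illustrative guesses, not the actual vZOOM
structures:
__________________________________________
/* illustrative sketch only -- not the real vZOOM layout */
#include <stddef.h>

#define SLAB_CLASS_COUNT 8           /* e.g. 16, 32, 64, ... byte classes  */
#define SLAB_SIZE        (64 * 1024) /* arbitrary slab size for the sketch */

typedef struct slab {
    struct slab* next;               /* next slab in this size class       */
    size_t       used;               /* bump-pointer offset into data[]    */
    unsigned char data[SLAB_SIZE];   /* raw storage carved into blocks     */
} slab;

typedef struct per_thread {
    slab* classes[SLAB_CLASS_COUNT]; /* one slab list per size class       */

    struct block* freelist;          /* lock-free list of blocks freed     */
                                     /* back to this thread by others      */

    void* (*fp_user_defined_malloc)(size_t); /* user-supplied backing      */
    void  (*fp_user_defined_free)(void*);    /* allocator hooks            */
} per_thread;
__________________________________________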
The user can request that vZOOM bypass the segregated slabs and forward
allocation requests directly to the user-defined allocator.
In that case, blocks are deallocated from the free-list directly back to the
user-defined allocator. Something like:
__________________________________________
void* malloc(size_t sz) {
    /* grab the calling thread's allocator state from TLS */
    per_thread* const _this = pthread_getspecific(...);

    /* fast path: try the user-defined allocator first */
    void* buf = _this->fp_user_defined_malloc(sz);

    if (! buf) {
        /* allocation failed: atomically detach the entire list of
           blocks that other threads have freed back to this thread... */
        block* blk = SWAP(&_this->freelist, 0);

        /* ...hand each one back to the user-defined allocator... */
        while (blk) {
            block* const next = blk->next;
            _this->fp_user_defined_free(blk);
            blk = next;
        }

        /* ...and retry the allocation once */
        buf = _this->fp_user_defined_malloc(sz);
    }

    return buf;
}
__________________________________________
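As for how a block lands on another thread's free-list in the first place, the
remote-free path only needs to push the block onto the owner's lock-free list.
A CAS loop is one way to sketch it; CAS() here stands for whatever atomic
compare-and-swap the platform provides, and this is just the general idea, not
necessarily the exact vZOOM implementation:
__________________________________________
/* sketch: a thread freeing a block it does not own pushes it onto
   the owning thread's freelist; the owner later detaches the whole
   list at once with SWAP(&freelist, 0) as in the code above */
void remote_free(per_thread* owner, block* blk) {
    block* head;
    do {
        head = owner->freelist;  /* snapshot the current head        */
        blk->next = head;        /* link the block in front of it    */
    } while (! CAS(&owner->freelist, head, blk));
}
__________________________________________
The only interlocked operation is on the shared free-list; the per-thread heap
itself is never touched by other threads.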
Does that answer your question?