Re: atomically thread-safe Meyers singleton impl (fixed)...
On Jul 30, 11:10 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
"James Kanze" <james.ka...@gmail.com> wrote in message
On Jul 30, 6:44 am, "Chris Thomasson" <x...@xxx.xxx> wrote:
Any thoughts on this approach?
I'm not familiar enough with the modern x86 architecture and
VC++'s implementation of it to judge, but my first reaction
is, why bother?
Well, I wanted to see if I could implement a working generic
DCL algorithm. It looks like the code I posted is going to
work as long as the rules are followed:
The problem is, of course, that to do so, you have to introduce
all sorts of complexity and portability issues (inline
assembler, etc.). Something that you should probably avoid
unless it is absolutely necessary.
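For readers following along today, the DCL pattern under discussion can be written portably with C++11 atomics rather than inline assembler; this is an editorial sketch (the `Widget` type is hypothetical, not from the posted code), showing the acquire/release pairing that makes double-checked locking correct:

```cpp
#include <atomic>
#include <mutex>

// Hypothetical payload type, for illustration only.
struct Widget { int value = 42; };

std::atomic<Widget*> g_instance{nullptr};
std::mutex g_mutex;

Widget* instance() {
    // First (unlocked) check: the acquire load ensures that if we
    // see a non-null pointer, we also see the object's initialized
    // contents written before the matching release store.
    Widget* p = g_instance.load(std::memory_order_acquire);
    if (!p) {
        std::lock_guard<std::mutex> lock(g_mutex);
        // Second check, under the lock.
        p = g_instance.load(std::memory_order_relaxed);
        if (!p) {
            p = new Widget;
            // Release store publishes the fully constructed object.
            g_instance.store(p, std::memory_order_release);
        }
    }
    return p;
}
```

The acquire/release pair is exactly what the hand-rolled membar-based versions in this thread were reaching for; on x86 the acquire load compiles to a plain load, so the fast path costs essentially nothing.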
It's fairly easy to ensure (in practice) that the
first initialization of the singleton occurs either before main
(and thus, hopefully, before threads have started), or in a
protected environment (during dynamic loading, before the loaded
memory is made visible to threads other than the one doing the
loading).
Fair enough. However, I was thinking about somebody else using
the singleton. I don't have control over their code.
It's still rather trivial to ensure initialization before main,
since *you* can also declare statics, and write code which
forces the instance to be constructed during static
initialization.
On the other hand, I do have control over the generic
primitive I provide to them. Robustness is a fairly
significant factor. I want the DCL algorithm to be able to
cope with threads created before main.
The case should be rare enough that you can require specific
actions from the client to guarantee it. (Which only apply if
the thread is going to use your singleton, of course). IMHO,
it's not worth adding complexity to the singleton (which every
user of the singleton has to pay for).
It's also fairly easy to fuse the mutex lock with
any mutex lock needed to use the singleton (so its effective
cost is 0).
I don't think I understand what you're getting at. Could you
please elaborate on this some more?
If the singleton is mutable, the client code will need a lock to
use it. The instance function acquires this lock before testing
the pointer, and returns a boost::shared_ptr, which releases it
(instead of destructing the object).
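A minimal sketch of what that fused-lock instance() might look like (the names here are my own, not from the thread): the returned shared_ptr's "deleter" releases the mutex instead of deleting the object, so the caller holds the lock for exactly the lifetime of the handle:

```cpp
#include <memory>
#include <mutex>

// Hypothetical mutable singleton payload, for illustration only.
struct Data { int counter = 0; };

class LockedSingleton {
    static std::mutex mtx_;
    static Data* obj_;
public:
    // Acquires the lock, lazily constructs the object under it, and
    // returns a handle whose "deleter" unlocks rather than deletes.
    static std::shared_ptr<Data> instance() {
        mtx_.lock();
        if (!obj_) obj_ = new Data;  // checked while holding the lock
        return std::shared_ptr<Data>(obj_, [](Data*) { mtx_.unlock(); });
    }
};
std::mutex LockedSingleton::mtx_;
Data* LockedSingleton::obj_ = nullptr;
```

Note that a caller who takes two handles at once, or calls instance() recursively while holding one, will deadlock; the design assumes one handle per thread at a time.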
And of course, acquiring a mutex lock when there is
no contention is very, very fast anyway.
Well, most mutex implementations will still execute 2 atomic
RMWs and a #StoreLoad plus a #LoadStore style memory barrier:
1 atomic op and 1 #StoreLoad for mutex lock, and 1 atomic op
and 1 #LoadStore for mutex unlock.
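To make the cost model concrete, here is a minimal spinlock sketch (my own illustration, not code from the thread): acquisition is one atomic RMW, which on x86 is a lock-prefixed instruction carrying the full-barrier (#StoreLoad) cost being described. A real pthread-style mutex typically needs an atomic RMW on unlock as well, to detect waiters; this sketch's unlock is only a release store, so it shows just the acquire-side cost:

```cpp
#include <atomic>

class SpinLock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // test_and_set is an atomic RMW with acquire ordering; on x86
        // it compiles to a lock-prefixed op, i.e. a full barrier.
        while (flag_.test_and_set(std::memory_order_acquire)) { }
    }
    void unlock() {
        // Release store: prior loads/stores may not sink below it
        // (the #LoadStore/#StoreStore half of the cost above).
        flag_.clear(std::memory_order_release);
    }
};
```

Even when never contended, that lock-prefixed RMW dirties the lock's cache line, which is what sets up the ping-pong scenario described below.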
Certainly, but you're going to need some of that anyway in your
DCL algorithm. And compared to the function call, and whatever
the user is going to do with the singleton, the difference is
almost certainly negligible. It's not as if the call to the
instance() function will take place in a tight loop. (If it
does, and it causes a performance problem, the client code can
trivially hoist it out of the loop.)
You can get cache ping-pong with multiple threads frequently
accessing an uncontended mutex. Say CPUs 1-6 frequently access
a mutex and the execution sequence is perfect, such that the
CPUs never access it at the same time. There could still be
cache thrashing due to the frequent mutation of the mutex's
internal state, and stalls from the hard memory barrier on
every lock and unlock.
Do you have an actual scenario where you can measure a
significant difference? If not, you're trying to solve
something that isn't really a problem.
James Kanze (GABI Software) email:email@example.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34