Re: Am I or Alexandrescu wrong about singletons?
On Mar 29, 11:55 pm, "Leigh Johnston" <le...@i42.co.uk> wrote:
"James Kanze" <james.ka...@gmail.com> wrote in message
Performance is often cited as another reason to not use
volatile however the use of volatile can actually help with
multi-threading performance as you can perform a safe
lock-free check before performing a more expensive lock.
Again, I'd like to see how. This sounds like the
double-checked locking idiom, and that's been proven not to
work.
IMO for an OoO CPU the double checked locking pattern can be
made to work with volatile if fences are also used or the lock
also acts as a fence (as is the case with VC++/x86).
Double checked locking can be made to work if you introduce
inline assembler or use some other technique to insert a fence
or a membar instruction in the appropriate places. But of
course, then, the volatile becomes superfluous.
This is also the counter-example you are looking for; it
should work on some implementations.
It's certainly not an example of a sensible use of volatile,
since without the membar/fence, the algorithm doesn't work (at
least on most modern processors, which are multicore). And with
the membar/fence, the volatile is superfluous, and not needed.
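To make the point concrete: in C++11 and later the fence is expressed through std::atomic operations rather than volatile. A minimal sketch of double-checked locking in those terms (Widget is a hypothetical type, not anything from the thread):

```cpp
#include <atomic>
#include <mutex>

// Hypothetical type used only for illustration.
struct Widget { int value = 42; };

std::atomic<Widget*> g_instance{nullptr};
std::mutex g_mutex;

Widget* get_instance() {
    // First, lock-free check: an acquire load guarantees that if we
    // see a non-null pointer, we also see the fully constructed Widget.
    Widget* p = g_instance.load(std::memory_order_acquire);
    if (p == nullptr) {
        std::lock_guard<std::mutex> lock(g_mutex);
        // Second check, under the lock: another thread may have
        // initialized the instance while we were waiting.
        p = g_instance.load(std::memory_order_relaxed);
        if (p == nullptr) {
            p = new Widget;
            // The release store publishes the completed construction.
            g_instance.store(p, std::memory_order_release);
        }
    }
    return p;
}
```

The acquire/release pair plays exactly the role of the membar/fence discussed above, and no volatile appears anywhere.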
FWIW VC++ is clever enough to make the volatile redundant for
this example however adding volatile makes no difference to
the generated code (read: no performance penalty) and I like
making such things explicit, similar to how one uses const
(doesn't affect the generated output but documents the
intent).
The use of a fence or membar (or some system specific "atomic"
access) would make the intent explicit. The use of volatile
suggests something completely different (memory mapped IO, or
some such).
Which is better: use volatile if there is no noticeable
performance penalty or constantly check your compiler's
generated assembler to check that the optimizer is not breaking
your code?
The reason there is no performance penalty is because volatile
doesn't do the job. And you don't have to check the generated
assembler for anything (unless you suspect a compiler error);
you check the guarantees given by the compiler.
OK: I'll admit that finding such specifications is very, very
difficult. But they should exist, and they'll give you
guarantees with regard to future releases as well. And there are some
guarantees that are expressed indirectly: if a compiler claims
Posix conformance, and supports multithreading, then you get the
guarantees from the Posix standard; the issue is a bit less
clear under Windows, but if a compiler claims to support
multithreading, then it should conform to the Windows
conventions about this.
The only volatile in my entire codebase is for the "status" of
my "threadable" base class and I don't always acquire a lock
before checking this status and I don't fully trust that the
optimizer won't cache it for all cases that might crop up as I
develop the code.
I'd have to see the exact code to be sure, but I'd guess that
without an mfence somewhere in there, the code won't work on a
multicore machine (which is just about everything today), and
with the mfence, the volatile isn't necessary.
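The status-flag case itself is easy to express with std::atomic instead of volatile. A sketch, assuming a hypothetical "threadable" base class along the lines described (the names are invented for illustration):

```cpp
#include <atomic>

// Sketch of a lock-free status check. std::atomic supplies both the
// "don't cache this" guarantee and the inter-thread ordering that
// volatile alone does not.
class Threadable {
public:
    enum Status { idle, running, finished };

    // Safe to call without holding any lock: the load is atomic and
    // acquire-ordered, so writes made before set_status() are visible.
    Status status() const {
        return status_.load(std::memory_order_acquire);
    }

    void set_status(Status s) {
        status_.store(s, std::memory_order_release);
    }

private:
    std::atomic<Status> status_{idle};
};
```

With this, neither volatile nor inspection of the generated assembler is needed; the guarantee comes from the language standard.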
Also, at least under Solaris, if there is no contention, the
execution time of pthread_mutex_lock is practically the same as
that of membar. Although I've never actually measured it, I
suspect that the same is true if you use CriticalSection (and
not Mutex) under Windows.
BTW I try to avoid singletons too so I haven't found the need
to use the double checked locking pattern AFAICR.
Double checked locking is a pattern which can be applied to many
things, not just to singletons.
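For instance, the same idea underlies lazy initialization of any expensive value. In modern C++, std::call_once packages the double-checked logic for you; a sketch (the config() function and "loaded" value are invented for illustration):

```cpp
#include <mutex>
#include <string>

// Lazily computing an arbitrary expensive value, not a singleton
// object. std::call_once performs the cheap "already done?" check and
// falls back to synchronization only on the first call.
std::once_flag g_config_flag;
std::string g_config;

const std::string& config() {
    std::call_once(g_config_flag, [] {
        g_config = "loaded";  // stands in for expensive initialization
    });
    return g_config;
}
```

Every subsequent call returns the cached value without taking a lock.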
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]