Re: Am I or Alexandrescu wrong about singletons?
On Mar 30, 12:03 pm, Andy Venikov <swojchelo...@gmail.com> wrote:
Herb Sutter wrote:
[...]
So, with the above said, here's a concrete example of how I'd
use volatile without access to a ready-made library. Let's
take Maged Michael's lock-free queue ("Simple, Fast, and
Practical Non-Blocking and Blocking Concurrent Queue
Algorithms", Maged Michael & Michael Scott, 1996). It uses a
technique similar to DCL to verify the validity of a read.
Look at its dequeue() method.
I'll provide the pseudo code here:
dequeue(Q: pointer to queue_t, pvalue: pointer to data type): boolean
D1:  loop                          # Keep trying until Dequeue is done
D2:    head = Q->Head              # Read Head
D3:    tail = Q->Tail              # Read Tail
D4:    next = head->next           # Read Head.ptr->next
D5:    if head == Q->Head          # Are head, tail, and next consistent?
D6:      if head.ptr == tail.ptr   # Is queue empty or Tail falling behind?
D7:        if next.ptr == NULL     # Is queue empty?
D8:          return FALSE          # Queue is empty, couldn't dequeue
D9:        endif
           # Tail is falling behind. Try to advance it
D10:       CAS(&Q->Tail, tail, <next.ptr, tail.count+1>)
D11:     else                      # No need to deal with Tail
           # Read value before CAS; otherwise another dequeue
           # might free the next node
D12:       *pvalue = next.ptr->value
           # Try to swing Head to the next node
D13:       if CAS(&Q->Head, head, <next.ptr, head.count+1>)
D14:         break                 # Dequeue is done. Exit loop
D15:       endif
D16:     endif
D17:   endif
D18: endloop
D19: free(head.ptr)                # It is safe now to free the old dummy node
D20: return TRUE                   # Queue was not empty, dequeue succeeded
Look at line D5: it needs to check that Q->Head is still the
same as what we read from it before. Otherwise two ways of
breaking correctness arise: 1) the element pointed to by
Q->Head could have been removed and re-inserted into the
queue with NULL in its "next" field, so we would report the
queue empty even though it was never empty at any given
moment; or 2) the first element could have been removed after
we read Q->Head and before we read next, so head->next could
contain garbage by the time we read it and we'd access
garbage on line D12.
This piece of pseudocode could be naively translated to the
following C++ code:
while (true)
{
    Node * localHead = head_;
    Node * localTail = tail_;
    Node * localNext = localHead->next;
    if (localHead == head_)
    {
        ...
    }
But it wouldn't work, for obvious reasons: one needs to
insert memory fences in the right places. Memory fences are
highly platform-specific, so one would define macros for them
that expand to different instructions on different platforms.
Here's the code with memory fences inserted:
while (true)
{
    Node * localHead = head_;
    Node * localTail = tail_;
    DataDependencyBarrier(); // All the systems that I know of will do
                             // this sort of barrier automatically, so
                             // this macro will expand to nothing
    Node * localNext = localHead->next;
    LoadLoadBarrier();       // on x86 this will expand to nothing
    if (localHead == head_)
    {
        ...
    }
This is much better, but it still has problems: first, on
x86, the LoadLoadBarrier() will expand to nothing and there
will be no indication to the compiler not to reorder
different loads; and second (and I think this is the crux of
my argument), an optimizing compiler may dispose of the "if"
statement even in the face of memory barriers.
First, I very much doubt that the LoadLoad barrier can expand to
nothing if the code is to work. It certainly cannot on a Sparc,
and I rather doubt that it can on an Intel; I'd have to see the
guarantees in writing to believe otherwise. And second, if a
compiler moves code across a barrier, it is broken, and there's
not much you can do about it.
No matter how many or what type of memory barriers you insert,
the compiler will be allowed to omit the if statement.
Really? According to who? Obviously, the ISO standard says
nothing about this case; the presence of the barrier introduces
undefined behavior, and the compiler might do anything, as far
as ISO C++ is concerned. But you're obviously depending on
other standards as well. Standards which specify what the
barrier guarantees, for example. And these standards apply to
the compiler as well.
The ONLY way to force the compiler (any compiler for that
matter) to generate it is to declare head_ as volatile.
No. The way to force the compiler to generate the necessary
accesses is to use the same implementation specific guarantees
you count on for the barrier to work.
Here's the final code:
struct Node
{
    <unspecified> data;
    Node volatile * next;
};

Node volatile * volatile head_;
Node volatile * volatile tail_;
dequeue()
{
    while (true)
    {
        Node volatile * localHead = head_;
        Node volatile * localTail = tail_;
        DataDependencyBarrier();
        Node volatile * localNext = localHead->next;
        if (localHead == head_)
        {
            ...
        }
        ....
    }
Now this code will produce the intended, correct object code
on all the compilers I've listed above and on at least these
CPUs: x86, Itanium, MIPS, PowerPC (assuming that all the
MemoryBarriers have been defined for all the platforms), and
without any modifications to the above code. How's that for
portability?
Yes, but it's just as portable without the volatile. If the
barriers are defined correctly, the compiler will not move code
over them.
--
James Kanze
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]