Re: What in C++11 prohibits mutex operations from being reordered?
On Apr 3, 12:59 pm, Öö Tiib <oot...@hot.ee> wrote:
Yeah, sorry for being unclear. I do not get what exact part of the
puzzle it is then that you miss. It was not "sequenced before"; was it
maybe "synchronizes with" or "happens before"?
Inside one thread it is "sequenced before" and the ordering constraints
of atomics that should mainly govern compilation. I think that
"happens before" and "synchronizes with" are not relevant here, as they
govern inter-thread synchronization (inside one thread "happens before"
is mostly trivially "sequenced before"). Anyway, I do not see you
making any clear argument here.
Mutex access cannot be reordered.
I hope so. Yet this is not written in the standard.
For example §1.10/5 elaborates the difference.
It is still not clear to me what in this paragraph prevents mutex
operations from being reordered in my example.
Yes, §1.10/5 only says that atomic operations, mutexes and fences are
synchronization operations and that relaxed atomic operations *are*
*not* synchronization operations.
Maybe you should first find out how synchronization operations work?
It is tricky logic sprinkled all over the standard ... I think most
of §1.10, §29.3 and §29.8 is relevant. I will try to collect a *short*
and *simple* and *logical* explanation ... : If A is sequenced before
B on the same thread, and B synchronizes with C on another thread,
and C is sequenced before D on that second thread, then A happens before
D, and A and D can access the same data even if they are non-atomic
operations. So ... even an ordinary 'int' access like 'i = 42' cannot be
moved to the other side of m2.lock(), to say nothing of m1.unlock() or
some other synchronization operation like that.
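For illustration, a minimal sketch of that chain with a mutex (the names
and data are made up, it is not text from the standard): the plain write
to value is sequenced before the unlock, the unlock synchronizes with
the later lock in the other thread, and that lock is sequenced before
the read, so the write happens before the read:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int value = 0;                        // ordinary, non-atomic data

void thread_a()
{
    m.lock();
    value = 42;                       // A: sequenced before the unlock (B)
    m.unlock();                       // B: synchronizes with the lock (C)
}

void thread_b()
{
    m.lock();                         // C: sequenced before the read (D)
    std::cout << value << std::endl;  // D: if C comes after B, A happens before D
    m.unlock();
}

int main()
{
    std::thread t1(thread_a), t2(thread_b);
    t1.join();
    t2.join();
}

If thread_b happens to take the mutex first, it simply reads 0; either
way both accesses to value are protected by the mutex, so there is no
data race.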
Yes, this is clear. I spent some time studying the standard and some
articles around it before making this post. I just do not see how the
scenario you present is related to my example.
Besides, take into account that in my original example (at the start of
the thread) I have an implementation of spin-locks with atomic variables
and ask the same question: what prevents these atomic operations from
being reordered by the compiler?
I did not understand what your example does, sorry.
while( atomic1.exchange(1, memory_order_acquire) ){;}
// critical section protected by atomic1
atomic1.store(0, memory_order_release);
This was just a very simple (and inefficient) implementation of a
spinlock. The first line grabs the lock, the last releases it. And my
question is then: what prevents the following lines from being reordered?

atomic1.store(0, memory_order_release); // unlocking previous critical section
while( atomic2.exchange(1, memory_order_acquire) ){;} // locking next critical section
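To make the whole scenario concrete, here is a sketch of what I have in
mind (data1, data2 and the function name are just illustrative); the
question is whether the compiler may move the release store past the
following acquire exchange:

#include <atomic>

std::atomic<int> atomic1(0), atomic2(0);
int data1 = 0;   // protected by atomic1
int data2 = 0;   // protected by atomic2

void worker()
{
    while( atomic1.exchange(1, std::memory_order_acquire) ){;}  // lock 1
    data1 = 1;                                                  // critical section 1
    atomic1.store(0, std::memory_order_release);                // unlock 1

    // May the unlocking store above be moved below the acquiring
    // exchange beneath, so that both spinlocks are held at once?
    while( atomic2.exchange(1, std::memory_order_acquire) ){;}  // lock 2
    data2 = 2;                                                  // critical section 2
    atomic2.store(0, std::memory_order_release);                // unlock 2
}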
Let me try to write an example that achieves some synchronization ...
#include <atomic>
#include <iostream>

// Flag that tells that thread 1 is done with the data. It is atomic
// just because the standard does not guarantee that accesses to bool
// are atomic.
std::atomic<bool> ready(false);
// the ordinary data
int data = 0;

void thread_1()
{
    // these lines can't be reordered in any way
    data = 42;
    std::atomic_thread_fence(std::memory_order_release);
    ready.store(true, std::memory_order_relaxed);
}

void thread_2()
{
    if( ready.load(std::memory_order_relaxed) )
    {
        // these lines can't be reordered
        std::atomic_thread_fence(std::memory_order_acquire);
        std::cout << "data=" << data << std::endl;
    }
}
The effect is that thread 2 does not touch the data if thread 1
is not ready with it. So access to the non-atomic "data" is safe,
synchronized, and no data race is possible.
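The same synchronization can also be written without standalone fences,
by putting the release/acquire ordering on the atomic accesses
themselves; just a sketch of the equivalent form:

#include <atomic>
#include <iostream>

std::atomic<bool> ready(false);
int data = 0;

void thread_1()
{
    data = 42;
    ready.store(true, std::memory_order_release);  // release store replaces fence + relaxed store
}

void thread_2()
{
    if( ready.load(std::memory_order_acquire) )    // acquire load replaces relaxed load + fence
        std::cout << "data=" << data << std::endl;
}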
Yep, this is very basic stuff as far as memory ordering in C++ goes.
Sorry, I do not see a relation to the discussed problem.
Regards, Michael