There are architectures whose caches are tagged by virtual address (with a corresponding process tag included). On those architectures, a mutex operation need only flush the cache for one particular process. Incidentally, such a cache cannot be snooped by physical address, so coherency cannot be maintained by the hardware.
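A sketch of what that buys, assuming a hypothetical primitive dcache_writeback_asid() that writes back only the lines tagged with a given address-space ID (the names are mine, not any real API):

    /* Hypothetical primitives for a machine whose cache lines are tagged
       with virtual address plus an address-space ID (ASID); not a real API. */
    extern unsigned current_asid(void);
    extern void dcache_writeback_asid(unsigned asid);

    void unlock_side_flush(void)
    {
        /* Only this process's lines need to be written back; lines
           belonging to other processes are left alone. */
        dcache_writeback_asid(current_asid());
    }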
"Joseph M. Newcomer" <newcomer@flounder.com> wrote in message
Why is a mutex unlock coupled to a cache synchronization issue? The correct approach would be to provide implicit cache flushing on lock and unlock (as the x86 does with its write pipes, and the rest of the architecture handles the coherency issue well), and in this case it would have nothing to do with locking the SAME mutex, as specified, but ANY mutex. For it to apply to the SAME mutex means that the mutex must now track every address written to, a clearly insane specification.
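For concreteness, here is a minimal pthreads sketch of the rule being debated (the variable names are mine): whichever thread locks m after the writer's unlock is guaranteed to see the write. If the reader happens to lock first it may see 0; the rule only constrains the thread that locks later.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static int shared_value;            /* protected by m */

    static void *writer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&m);
        shared_value = 42;              /* write inside the critical section */
        pthread_mutex_unlock(&m);       /* unlock publishes the write...     */
        return NULL;
    }

    static void *reader(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&m);           /* ...locking the SAME mutex       */
        printf("saw %d\n", shared_value); /* must observe 42 if this lock    */
        pthread_mutex_unlock(&m);         /* happened after writer's unlock  */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, writer, NULL);
        pthread_create(&t2, NULL, reader, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }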
I find the whole specification badly written. It is badly written to the level of giving no clue as to how to implement it, even on a system that does not maintain implicit cache coherency. The only approach I can think of is to force cache flushes on lock and unlock, which simply says that values modified within the scope of the lock may not be visible until a cache flush operation is performed, and values modified outside the scope of the lock (which would be fundamentally erroneous, to say the least) may not be seen at all. But to couple the language to phrases such as

    Whatever memory values a thread can see when it unlocks a mutex
    (leaves a synchronized method or block in Java), either directly or
    by waiting on a condition variable (calling wait in Java), can also
    be seen by any thread that later locks the SAME mutex.

[emphasis added] imposes a serious burden on the compiler, the runtime, and the mutex code.
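To make the burden concrete, here is a minimal sketch of that flush-on-lock-and-unlock approach on an imaginary non-coherent machine; dcache_writeback_all() and dcache_invalidate_all() are hypothetical whole-cache primitives, not any real API. Note that the flush is tied to the lock and unlock operations themselves, not to the SAME mutex:

    #include <stdatomic.h>

    /* Hypothetical whole-cache primitives on an imaginary machine with no
       hardware cache coherency; these are not a real API. */
    extern void dcache_writeback_all(void);   /* flush dirty lines to memory */
    extern void dcache_invalidate_all(void);  /* discard stale cached lines  */

    typedef struct { atomic_int held; } toy_mutex;

    void toy_lock(toy_mutex *m)
    {
        while (atomic_exchange(&m->held, 1))  /* naive spin until acquired;
                                                 how the lock word itself
                                                 stays coherent is hand-
                                                 waved here */
            ;
        dcache_invalidate_all();    /* discard anything cached before the
                                       lock so reads go to real memory */
    }

    void toy_unlock(toy_mutex *m)
    {
        dcache_writeback_all();     /* write back EVERY dirty line: nothing
                                       here knows which addresses the
                                       critical section touched, so the
                                       flush serves ANY mutex, not just
                                       this one */
        atomic_store(&m->held, 0);
    }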
It is fair to say that on a non-cache-coherent architecture an explicit action is required to ensure that cached values are properly flushed back to memory so that correct values are seen, and further that such an action is built implicitly into the lock and unlock operations; but as stated I find the specification somewhere between ill-thought-out and incoherent.
joe
On Wed, 4 Jul 2007 21:38:34 -0700, "Alexander Grigoriev" <alegr@earthlink.net> wrote:
"Joseph M. Newcomer" <newcomer@flounder.com> wrote in message
news:scoo83lp5r7la1jrc387kg6k1km8ihal1t@4ax.com...
See below...
even if the write occurs before the lock.
*****
I find this truly unbelievable. How can a mutex know what values were accessed during the thread, so that it can ensure the values are going to be consistent when that same mutex is locked? This strikes me as requiring immensely complicated bookkeeping on the part of the mutex implementation. It seems so much easier to follow the semantics of most hardware and just make sure that locking guarantees all pipes and caches are coherent across all processors. While I find it hard to imagine how a situation of this nature could arise on any closely-coupled MIMD architecture, I can imagine how it could exist in a distributed MIMD architecture; but in that case the mutexes would have to keep track of every variable accessed within their scope, and ensure that an attempt to lock a mutex forces all remotely-cached data to be sent back and all locally-cached data to be distributed out. An awesome task.
*****
This satisfies architectures without cache coherency, which require an explicit cache synchronization, performed by the mutex unlock.
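Continuing the toy sketch above, this is how an unlock-side whole-cache writeback also covers a write that happened before the lock was taken (toy_lock/toy_unlock are the hypothetical primitives from the earlier sketch):

    /* Reuses the hypothetical toy_mutex sketch above. */
    typedef struct toy_mutex toy_mutex;
    void toy_lock(toy_mutex *m);
    void toy_unlock(toy_mutex *m);

    int before_lock;   /* written outside the critical section */
    int inside_lock;   /* written under the mutex */

    void producer(toy_mutex *m)
    {
        before_lock = 1;    /* this write happens BEFORE the lock */
        toy_lock(m);
        inside_lock = 2;
        toy_unlock(m);      /* the whole-cache writeback pushes BOTH
                               writes to memory; the unlock cannot
                               tell them apart, so a thread that later
                               locks m sees before_lock == 1 as well */
    }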
Joseph M. Newcomer [MVP]
email: newcomer@flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm