Re: Object-oriented multithreading
kanze wrote:
....
and we assume that thread_b() starts before thread_a()
finishes, it is conceivable that thread_a() will release
the mutex, and thread_b() will get it, before thread_a's
update of shared_value propagates to the processor running
thread_b.
What makes you assume this? It depends on the system, but Posix
guarantees full memory synchronization in both the lock and the
unlock.
If by "full memory synchronization" you mean all writes
by all threads, that is one solution to the problem.
But it does not scale well to large numbers of processors,
since each mutex lock or unlock requires all CPUs to
wait until the whole memory system settles down.
If C++ writes such a requirement into a new standard, it
will make C++ multithreading uninteresting for those
who wish to make high-performance applications using
many CPUs.
(Keep in mind that in a thread-unaware system, no operation
can be assumed to be atomic.)
Keep in mind, too, that a thread-unaware system doesn't have
mutexes :-).
The C++ standard is currently thread-unaware, as has been
mentioned here many, many times.
The question is what form thread-awareness in a future
C++ standard would take.
My comment was intended to remind people that writing
multi-threaded code the way we are used to requires some
form of thread-awareness from the compiler. Just having
mutexes in the library is not enough.
(Though it is possible to do multithreading with no compiler
support at all; it's ugly, but it works.)
... having a single thread be able to make a
set of updates to a set of related variables without another
thread updating the same variables in the meantime.
That's true for some uses. For others, it is a question of one
thread making two series of updates, a first, followed by a
second, and ensuring that no other thread can possibly see the
results of the second series of updates without also seeing
those of the first.

Agreed. I think both uses are related; I just can't think
of a good way to describe the issue that covers both,
as well as others I haven't thought of.
Except that it often doesn't work out that way. You have two or
three objects which must remain coherent amongst themselves, so
your atomicity has to cover several objects.
I take it that putting the several objects
into a larger object is not reasonable.
Has this approach ["atomic" function locks *this]
been considered? If so, has it been
discarded as a Bad Idea, and, if so, why?
It's one of the approaches used by Java. One which, in
practice, doesn't seem to be much used, because it is so rare
that an object and a function correspond to the desired
granularity of locking.
I had in mind things like queues, or "simple" shared variables
or objects. I haven't needed atomic update of more "scattered"
sets of objects. Evidently, you have.
The main objection I can think of that I can't easily
dispose of (to my own satisfaction, at least) is that there
are forms of synchronization that don't fit easily into an
"atomic member function" model.
It obviously can't be the only possibility. Even in Java, you
can also synchronize a block on a separate object. My
experience suggests that at times, in fact, synchronization
doesn't even respect scope (i.e. scoped_lock doesn't work).
It sounds like I have my answer -- it's been tried, and is
nice, but not sufficient.
I would still like to know if a more C++-centric approach to
multithreading could be found, which might make
multithreaded code cleaner and safer than the current
(C-style) approaches.
-- Alan McKenney
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]