Re: Threading in new C++ standard
On Apr 28, 8:42 pm, "Boehm, Hans" <hans.bo...@hp.com> wrote:
> On Apr 21, 10:07 am, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
> wrote:
> [...]
> If you need low-level atomics, things are very tricky anyway,
I do not need low-level atomics because I can write correct concurrent
programs with Conditional Critical Regions. An optimising compiler may
implement some Critical Regions with atomics. However, there are
hackers around who keep clamouring for atomics and other low-level
means.
> and you will actually have to read the standard or the committee
> papers (try N2480).
You have the same problem there: your concern is not about any
concurrent language feature but about what happens if an _incorrect
concurrent program_ is optimised by a compiler that optimises for
_sequential execution_. You have no way to tell the compiler that it
should not optimise for sequential execution any more, because the
program is not a sequential one but a concurrent one. You could inform
the compiler if threading were introduced at the language level
(see marking shared variables and marking critical sections at the
language level).
Besides, the concepts of `memory model' and `visibility' are false
concerns in programming languages. At the programming language level
the issues about memory are already abstracted away into the notion of
variables. Visibility is only a concern if the language has no means
to mark shared variables and Critical Regions. Otherwise, if the
language has these means, visibility is simply not an issue. Whatever
is declared as a shared variable is seen by all processes and hence
may only be accessed in an exclusive manner. All the other variables
belong to a single process and are thus visible to that process only.
It is as simple as that.
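To make the point concrete, here is a minimal sketch of how marking a
shared variable and its Critical Region might be expressed with what
today's C++ offers. The names shared_var and with_region are my own
illustration only, not anything from the working paper; the point is
merely that the shared variable is unreachable outside an exclusive
region.

#include <mutex>

// Illustration only: a wrapper that marks a variable as shared and
// makes the Critical Region the sole way to reach it.
template <typename T>
class shared_var {
public:
    explicit shared_var(T initial) : value_(initial) {}

    // Every access happens inside an exclusive region.
    template <typename Body>
    void with_region(Body body) {
        std::lock_guard<std::mutex> guard(mtx_);
        body(value_);
    }

private:
    std::mutex mtx_;
    T value_;   // reachable only through with_region()
};

// Usage:
//   shared_var<int> x(0);
//   x.with_region([](int& v) { v = 42; });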
> Aside from the low-level atomics, this is similar to the Ada
> approach. And Ada83 probably was the first major language to
> explicitly take this route. (There are some tricky issues in making
> this work for try_lock(), but I couldn't find an equivalent of that in
> Ada95, so this issue may simply not have arisen.) It's consistent
> with the Java approach, as revised in 2005, though Java needs to
> address the much harder, and in my opinion still not completely
> solved, problem of giving partial semantics to programs with races.
>
> 2) There was no portable way to get access to hardware atomic
> operations (including simple atomic loads and stores). In practice,
> this seems to often be necessary, for everything from maintaining a
> simple global counter, to the commonly used (and often misimplemented)
> "double-checked-locking" idiom. This is fixed in the C++ working
> paper, with the addition of atomic<T>, etc. This is roughly analogous
> to Java volatiles. (Caution: In C++, increments of atomics are atomic
> operations; increments of Java volatiles contain two atomic
> operations, and are not atomic. I think Ada behaves like Java here.)
> Ada95 also has atomic operations, but if I read the spec correctly,
> they seem to have largely overlooked the memory ordering issues.
From your point of view it may seem like an oversight, but the truth
is that there is no such problem as memory ordering in a well-designed
concurrent language. Just look at the comments on your following code
fragments.
> In particular, if I write the equivalent of
>
> Thread 1:
> x = 42;
> x_init = true;
>
> Thread 2:
> while (!x_init);
> assert(x == 42);
>
> this can fail (and in fact is incorrect) even if x_init is atomic.
Of course it is incorrect. Here both `x' and `x_init' are shared
variables, but you fail to declare them as such. Furthermore, you are
trying to access them in a sequential manner, i.e. as if they were
variables in a sequential program. Why are you surprised, then, that
an incorrect concurrent program can fail?
I guess the meaning of your fragment is that you would like Thread 2
to proceed only when the shared variable becomes 42:

shared int x = 0;

Thread 1:
with (x) { x = 42; }

Thread 2:
with (x) when (x == 42) {
  // do whatever you want to do when x == 42
}
Let me note that if you want Thread 2 to do anything when x == 42, you
should specify that action inside the Critical Region, because in a
concurrent programming environment the change of a shared variable
must be regarded as a non-deterministic event.
However, even this is not correct from the concurrent point of view if
shared variable `x' can be changed by other processes: then you cannot
be sure that Thread 2 will detect a situation where x == 42 holds only
for a transient period, and Thread 2 may hang forever waiting for that
state.
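For what it is worth, the with/when fragment above can already be
approximated with the facilities the working paper does provide. The
following is only a sketch of that mapping, using std::mutex and
std::condition_variable; the structure mirrors the Conditional
Critical Region: entry blocks until the guard holds, and every access
to x is exclusive.

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;                  // protects the shared variable
std::condition_variable cv;    // re-evaluates the guards of waiting regions
int x = 0;                     // plays the role of the shared variable

void thread1() {
    {
        std::lock_guard<std::mutex> region(m);   // with (x)
        x = 42;
    }
    cv.notify_all();
}

void thread2() {
    std::unique_lock<std::mutex> region(m);      // with (x)
    cv.wait(region, [] { return x == 42; });     // when (x == 42)
    // do whatever you want to do while x == 42, still inside the region
}

int main() {
    std::thread t2(thread2);
    std::thread t1(thread1);
    t1.join();
    t2.join();
}

Note that this sketch inherits exactly the caveat above: if some other
process could reset x before Thread 2 re-enters the region, the
transient state may be missed.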
Moving from sequential programming to concurrent programming is not so
easy. It requires quite another kind of thinking from the programmer.
You only escalate this problem if you try to hack a sequential
programming language so that it must stay a sequential language first
of all, while concurrency is wanted merely as an afterthought.
> More interestingly, I think something like RCU can't be made to work
> at all. Neither does passing data between threads through a message
> queue implemented with atomic objects. Of course, nobody else got
> this right in 1995 either.
>
> 3) There was no portable API for thread creation and synchronization.
> The current WP has one that is largely a Boost descendent, allowing
> portable code to create threads. Work on higher level concurrency
> APIs was explicitly postponed.
>
> I think (1) and often (2) are essential for a useful concurrent
> language. But languages designed for concurrency from the start
> didn't always get them right either.
I think the contrary: (1) and (2) simply do not apply to a decent
language designed for concurrency. Even (3) is not an API in any
well-designed concurrent language but a language element (see the
parallel statement). Let me note that Java is not one of them,
although the trendy crowd thinks it is (see Brinch Hansen's "Java's
Insecure Parallelism").
Let me draw your attention to the first concurrent language,
Concurrent Pascal, which already contained the notion of the shared
class, named the monitor. There are other languages designed for
concurrency, such as Edison and OCCAM, to name a few. The latter is so
uncompromising that even sequencing two operations one after the other
by default (the semicolon of most languages) is missing from the
language; sequential composition must be requested explicitly.
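As a rough sketch of what the monitor idea looks like when transcribed
into C++ terms: the shared data, its lock, and its guard conditions
are bundled in one class, and the public operations are the only way
in. The class below is my own illustration (a bounded buffer, the
classic monitor example), not anything taken from Concurrent Pascal or
from the working paper.

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

class bounded_buffer_monitor {
public:
    explicit bounded_buffer_monitor(std::size_t capacity)
        : capacity_(capacity) {}

    void put(int item) {
        std::unique_lock<std::mutex> lock(mtx_);
        not_full_.wait(lock, [this] { return items_.size() < capacity_; });
        items_.push(item);
        not_empty_.notify_one();
    }

    int get() {
        std::unique_lock<std::mutex> lock(mtx_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        int item = items_.front();
        items_.pop();
        not_full_.notify_one();
        return item;
    }

private:
    std::mutex mtx_;                     // the monitor lock
    std::condition_variable not_full_;   // guard for put()
    std::condition_variable not_empty_;  // guard for get()
    std::queue<int> items_;
    std::size_t capacity_;
};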
Finally, let me stress that I am not suggesting that you should make
either Concurrent Pascal, Edison, Ada or OCCAM out of C++. These are
just examples containing useful ideas with respect to concurrent
programming language features.
Once, C++ was a success because it could add object-oriented
programming concepts to a procedural language. Stroustrup himself
recalls that taking up object-oriented programming did not seem such a
straightforward idea at that time: "all sensible people "knew"
that OOP didn't work in the real world: It was too slow (by more than
an order of magnitude), far too difficult for programmers to use,
didn't apply to real-world problems, and couldn't interact with all
the rest of the code needed in a system."
http://ddj.com/cpp/207000124
The situation is very similar now with respect to adding concurrency
at the language level to an OOP language. All sensible people "know"
that it is inefficient to have it at the language level. All sensible
people "know" that you need a memory model and have to care about
visibility concerns.
A brave step is needed for success, though.
Otherwise, C++0x will be yet another language with library-based
multithreading.
Best Regards,
Szabolcs