Re: Mutable Objects and Thread Boundaries
Brian Goetz calls this piggy-backing in JCIP. (He uses a volatile
variable as an example, which is why I was confused about synchronized
blocks.) And he says it's a dangerous thing to do because it's hard to
reason out these side effects correctly, and hard for maintenance
programmers to spot them as well. So to the OP and others, don't rely
on these side effects. Put all data in a synchronized block and you'll
be less likely to wonder what the heck is going on six months after you
wrote the code.
Like everything else in programming, there are tradeoffs that make
that sometimes, but not always, good advice. The countervailing
advice is to minimize the extent of critical sections to reduce
contention.
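To make the tradeoff concrete, here's a minimal sketch (the class and
method names are my own invention, not from either post) of the
"minimize the extent of critical sections" advice: hold the lock only
long enough to copy the shared state, and do the expensive work on a
private snapshot outside the lock.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event log illustrating "get in, get out quickly":
// the lock is held only for the list append or the copy, and the
// expensive formatting work runs outside the critical section.
class EventLog {
    private final List<String> events = new ArrayList<>();

    public void add(String event) {
        synchronized (events) {       // short critical section: one append
            events.add(event);
        }
    }

    public String report() {
        List<String> snapshot;
        synchronized (events) {       // short critical section: just the copy
            snapshot = new ArrayList<>(events);
        }
        // Work on the private snapshot without holding the lock, so
        // other threads can keep appending concurrently.
        StringBuilder sb = new StringBuilder();
        for (String e : snapshot) {
            sb.append(e).append('\n');
        }
        return sb.toString();
    }
}
```

The alternative, following markspace's advice, would be to format the
report inside the synchronized block too; simpler to reason about, but
every writer stalls for the duration of the formatting.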
If you use a synchronized block, it's because you need concurrent
execution. That much is painfully obvious, but the question of how
concurrent is less obvious. I've worked on systems that run on stages
of 64-core machines running many threads for high-volume
applications. Lock contention was a major bottleneck - really major -
where the original implementors had not fully grokked the implications
of highly concurrent systems.
The more work resides in a critical section, the higher the likelihood
of thread contention for that section. Lock contention has a
cascading effect - the more threads have to wait, the worse the wait
gets, causing more threads to have to wait, worsening the wait, ...
Compounding that, modern JVMs optimize the handling of uncontended
locks, so a lock that is frequently contended forfeits that fast path
as well.
IMO the rule to "get in, get out quickly" for critical sections trumps
the questionable notion that we cannot expect Java programmers to
reason effectively about /happens-before/. If a programmer works with
concurrent programming, they absolutely must educate themselves about
the matter or they are negligent and irresponsible. Java 5
incorporated the notion of /happens-before/ and reworked the memory
model precisely in order to reduce the effort to understand the
consequences of concurrency. We should expect and demand that
programmers of concurrent systems understand it.
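Since the argument is that programmers can and should reason about
/happens-before/, here is a minimal sketch of the reasoning in code
(my own illustration, with invented names, in the style of Goetz's
single-volatile technique). Per the Java Memory Model, everything the
writer does before the volatile store happens-before everything a
reader does after loading that volatile as true, so the plain field is
safely published.

```java
// Minimal sketch of piggy-backing on a single volatile write.
class OneShotBox {
    private int payload;              // plain field, no locking of its own
    private volatile boolean ready;   // the volatile we piggy-back on

    public void put(int value) {
        payload = value;              // (1) ordinary write
        ready = true;                 // (2) volatile write publishes (1)
    }

    public Integer tryGet() {
        if (ready) {                  // volatile read
            return payload;           // guaranteed to see the value from put()
        }
        return null;                  // not published yet
    }
}
```

Remove the volatile modifier and a reader thread may legally observe
ready as true while still seeing a stale payload; that is exactly the
subtlety markspace warns maintenance programmers will miss.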
Programs don't exist for the convenience of programmers, but for our
clients. Although I do believe in making life easier for the
maintenance programmer, I don't believe in making it easier for the
ignorant or incompetent maintenance programmer. Reducing the extent
of critical sections benefits the program, and thus the client, not
the less-educated programmer, but the competent programmer can deal
with it.
As Albert Einstein said, we should make things as simple as possible,
but no simpler. Face it, concurrent programming is hard no matter
what we do. It's good to keep code straightforward, but it's
inadvisable to ruin concurrent performance because we're afraid
someone will be too lazy to think carefully.
The only reason you would piggy-back memory synchronization like this is
for extreme speed optimization. The Future object in Brian Goetz's
example uses a single volatile variable and piggy-backs all its
synchronization on a single write/read of that variable. This is for
speed, because the Future object is heavily used in Executors. Don't try
the same until you are sure that you need to.
That's not the only reason. The other reason is for non-extreme speed
optimization, or if you will, to prevent stupid de-optimization.
Do piggy-back on synchronization. That's why /happens-before/
exists. Do not make critical sections larger than they need to be.
The rule of thumb I recommend: Keep critical sections to the minimum
length necessary to correctly coordinate concurrent access. That will
introduce tension against markspace's advice to simplify the
concurrent code by piling more into the critical section. Balancing
the two recommendations is a matter of art.