Re: multithreaded cache?
On 5/18/12 3:31 PM, Eric Sosman wrote:
On 5/18/2012 5:45 PM, Robert Klemme wrote:
On 18.05.2012 20:42, Silvio Bierman wrote:
On 05/17/2012 11:54 AM, Robert Klemme wrote:
I provide a variant of Silvio's, Eric's and Daniel's solution which
should yield higher throughput because it works without read-write
locking. You can find it as a gist in case the code is garbled in the
post.
I think you have as many locks as I suggested (namely, one)? My initial
implementations of something like this used a plain map with an extra
lock, but later versions used the by-then-available ConcurrentHashMap,
making one lock redundant.
You didn't show it here, did you? I can't seem to find it in the thread.
Note that CHM does also do synchronization. I am not sure from your
statement what exact locking scheme you apply. There does seem to be one
difference though: in my version the second lock goes away after the
value has been computed, so only the synchronization of CHM remains.
It seems to me that if N threads query the same key at about
the same time, they may all miss the map and go off to perform
the slow computation.
Nope, they will all create a "Reference" object that *can* perform the
calculation; however, because the method uses "putIfAbsent", only the
first calculating "Reference" object is actually ever used.
After that, the Reference.get() method blocks until that particular
calculation is complete. Once the calculation is done, the calculating
Reference replaces itself with a constant Reference for future
accesses to use.
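The pattern described above can be sketched roughly like this (class and
method names are hypothetical, not Robert's actual code; it is modeled on
the Memoizer from JCIP, using a FutureTask as the blocking "Reference"):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.function.Function;

class ComputingCache<K, V> {
    private final ConcurrentMap<K, Future<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> compute;

    ComputingCache(Function<K, V> compute) {
        this.compute = compute;
    }

    V get(K key) throws InterruptedException, ExecutionException {
        Future<V> f = map.get(key);
        if (f == null) {
            // Every racing thread may build a task, but putIfAbsent
            // guarantees only one of them ends up in the map.
            FutureTask<V> task = new FutureTask<>(() -> compute.apply(key));
            f = map.putIfAbsent(key, task);
            if (f == null) {
                f = task;
                task.run();   // the winner computes the value once
            }
        }
        return f.get();       // everyone else blocks here until it is done
    }
}
```

Losing threads never run their task; they simply block on get() of the
single stored Future, which coalesces N concurrent misses for the same
key into one computation.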
If "slow" is large compared to the cost of
a lock-release pair (and if it weren't, why cache?), the tradeoff
looks suspect.
It would be suspect, but there isn't a tradeoff here. All threads block
until the one value is calculated, basically coalescing the requests for
that key into a single computation.
Also, different threads may wind up using different value
instances. If the cache is purely a convenience for a value-only
object that may be all right, but it's not all right if the values
are supposed to be singletons.
Again, invalid if you recognize what is actually happening.
Finally, there's more than a whiff of the double-checked locking
antipattern about what you're doing with the `here' flag. I'm not
absolutely sure what you've got is in fact DCL (hence, wrong), but
I'm also not absolutely sure it's DCL-free. Before using the pattern
in any important way, I'd want to check with a major-league guru,
just as "due diligence."
"Double-checked" locking is only broken if you don't have the
appropriate memory semantics enforced elsewhere. The reason the
"classical" double-checked locking fails is that there can be a data
race: the reference is non-null, but the object's state has not yet
been made visible to the reading thread.
In this case, the "first check" (obtaining a value from the
ConcurrentMap) establishes a happens-before relationship, which makes
the object's state visible to the reading thread.
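For comparison, the classical pattern only becomes safe once the field
is volatile, because the volatile write/read pair supplies the needed
happens-before edge. A minimal sketch with hypothetical names:

```java
class Helper {
    final int state = 42;
}

class Holder {
    // Without volatile this is the broken classical DCL: another thread
    // could observe a non-null reference to a partially constructed Helper.
    private static volatile Helper instance;

    static Helper getInstance() {
        Helper h = instance;              // first check, no lock
        if (h == null) {
            synchronized (Holder.class) {
                h = instance;             // second check, under the lock
                if (h == null) {
                    h = new Helper();
                    instance = h;         // volatile write publishes safely
                }
            }
        }
        return h;
    }
}
```

A successful get() from a ConcurrentMap plays the same role as the
volatile read here: whatever was written before the value was stored is
guaranteed visible to the thread that retrieves it.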
Robert Klemme's code is correct, according to everything I've read in
JCIP and elsewhere.