Re: Tradeoffs between ConcurrentLinkedQueue and LinkedBlockingQueue
Sebastian Millies wrote:
what factors should I consider when choosing between a
ConcurrentLinkedQueue and a LinkedBlockingQueue as a work queue?
I am not asking about the defining difference between blocking
and non-blocking queues. My concern is more with execution speed.
As I see it, when I use ConcurrentLinkedQueue, I have some
overhead because the consumer thread must continually spin in
a while loop, checking whether poll() returns null. On
the other hand, with a blocking queue I have some overhead
because it uses locks internally (it is neither lock-free nor
wait-free), though I can avoid blocking on put() by using an
unbounded queue.
In a scenario with many producers and only one (or few) consumers,
how do I find out what is faster (memory is no concern)?
As long as there's no good reason to use a blocking queue, I'd stick
with the non-blocking version because the code is simpler.
It all depends. If you go with the non-blocking version, you will
probably need to put a Thread.sleep() call in each consumer's poll loop,
to avoid having the consumers spin continuously while the queue is empty,
taking up processor time the producers could have been using to find
something to put on the queue. The longer the sleep, the lower the CPU
cost of each idle consumer, but a long sleep also makes the consumers
less responsive.
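The poll-and-sleep loop described above might look like the following sketch. The 10 ms sleep and the 200 ms demo cut-off are arbitrary illustration values, not recommendations:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PollLoopSketch {
    public static void main(String[] args) throws InterruptedException {
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        queue.add("task-1");
        queue.add("task-2");

        // Consumer loop: poll() returns null when the queue is empty,
        // so we sleep briefly instead of spinning at full speed.
        int handled = 0;
        long deadline = System.currentTimeMillis() + 200; // demo cut-off only
        while (System.currentTimeMillis() < deadline) {
            String task = queue.poll();
            if (task == null) {
                Thread.sleep(10); // longer sleep = less CPU, slower response
            } else {
                handled++;        // in real code: process the task here
            }
        }
        System.out.println("handled=" + handled);
    }
}
```

The sleep duration is exactly the tunable tradeoff mentioned above: raising it cuts idle CPU use, lowering it improves latency when items arrive.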
My approach to something like this would be to isolate the decision
in a class that provides what looks like blocking behavior to its users.
Implement it initially whichever way is easier - I would have expected
that to be the blocking version, since it avoids deciding how long to
sleep, but your mileage may vary.
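A minimal sketch of such an isolating class, using the blocking implementation first as suggested. The class name TaskChannel and its two methods are hypothetical; the point is that callers only see blocking semantics, so the backing queue can later be swapped for a ConcurrentLinkedQueue plus a sleep loop without touching caller code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical facade: users see blocking behavior; the implementation
// behind it can be replaced when profiling justifies the change.
public class TaskChannel<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

    public void put(T item) throws InterruptedException {
        queue.put(item); // never blocks while the queue is unbounded
    }

    public T take() throws InterruptedException {
        return queue.take(); // blocks until an item is available
    }

    public static void main(String[] args) throws InterruptedException {
        TaskChannel<String> channel = new TaskChannel<>();
        channel.put("job-1");
        System.out.println("took=" + channel.take());
    }
}
```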
If it turns out your program is fast enough, stop there. If you want the
program to go faster or use less CPU time, profile it. If the profiling
shows that the queue system is a significant issue, try the other
implementation and go with whichever makes the whole program faster.
That way you also have only one place to change when adjusting the
sleep duration for the non-blocking queue consumers.
In general, good modularity can put off a lot of "which is faster"
decisions until you have the program working and can measure it. At that
point you only have to work on those "which is faster" decisions that
have real performance impact. Finding out which implementation is faster
is easy when the decision is internal to a single class that is taking a
significant amount of time in a program you can measure.