Re: Serious concurrency problems on fast systems
On 02.06.2010 08:02, Mike Schilling wrote:
"Robert Klemme" <shortcutter@googlemail.com> wrote in message
news:86m8vjF4ulU2@mid.individual.net...
On 02.06.2010 06:57, Mike Schilling wrote:
"Arne Vajh=F8j" <arne@vajhoej.dk> wrote in message
news:4c059872$0$272$14726298@news.sunsite.dk...
On 01-06-2010 00:21, Kevin McMurtrie wrote:
I've been assisting in load testing some new high performance servers
running Tomcat 6 and Java 1.6.0_20. It appears that the JVM or Linux is
suspending threads for time-slicing in very unfortunate locations.
That should not come as a surprise.
The thread scheduler does not examine the code for convenience.
Correct code must work no matter when the switches in and out of
the CPU happen.
High performance code must work efficiently no matter when those
switches happen.
For example, a thread might suspend in Hashtable.get(Object) after a
call to getProperty(String) on the system properties. It's a
synchronized global, so a few hundred threads might pile up until the
lock holder resumes. Odds are that those hundreds of threads won't
finish before another one stops to time-slice again. The performance
hit has a ton of hysteresis, so the server doesn't recover until it has
a lower load than before the backlog started.
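A minimal sketch of one way around that particular hot spot, assuming
the property value never changes at runtime: read it once and cache it,
so request threads stop touching the synchronized Hashtable behind
System.getProperties(). The class name here is made up for illustration.

// Read the system property a single time at class load and cache the
// result; the hot path then never enters the synchronized Hashtable.
final class CachedConfig {
    private static final String TMP_DIR =
        System.getProperty("java.io.tmpdir");   // one contended read, ever

    static String tmpDir() {
        return TMP_DIR;                          // no locking on the hot path
    }

    private CachedConfig() {}
}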
The brute force fix is of course to eliminate calls to shared
synchronized objects. All of the easy stuff has been done. Some
operations aren't well suited to simple CAS. Bottlenecks that are part
of well-established Java APIs are time consuming to fix/avoid.
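For the operations that do map onto simple atomic state, a sketch along
these lines (using java.util.concurrent.atomic; the class name is
invented for illustration) is the kind of CAS-based replacement being
alluded to:

import java.util.concurrent.atomic.AtomicLong;

// A lock-free counter: the CAS loop retries instead of blocking, so a
// thread that gets descheduled mid-update never stalls the others.
final class HitCounter {
    private final AtomicLong hits = new AtomicLong();

    long increment() {
        long current, next;
        do {
            current = hits.get();
            next = current + 1;
        } while (!hits.compareAndSet(current, next));  // retry on contention
        return next;
    }
}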
High performance code needs to be designed not to synchronize
extensively.
If the code does and there is a performance problem, then fix
the code.
There are no miracles.
Though giving a thread higher priority while it holds a shared lock
isn't exactly rocket science; VMS did it back in the early 80s. JVMs
could do a really nice job of this, noticing which monitors cause
contention and how long they tend to be held. A shame they don't.
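A rough sketch of that idea done by hand in application code (a JVM
could do the equivalent internally): temporarily raise a thread's
priority while it holds a contended lock. Thread.setPriority() is only
a hint to the underlying scheduler, so the benefit is entirely
platform-dependent; the class and method names are invented for
illustration.

final class BoostedCriticalSection {
    private final Object lock = new Object();

    void doContendedWork(Runnable work) {
        Thread self = Thread.currentThread();
        int oldPriority = self.getPriority();
        synchronized (lock) {
            try {
                self.setPriority(Thread.MAX_PRIORITY);  // boost while holding the lock
                work.run();
            } finally {
                self.setPriority(oldPriority);          // restore before releasing
            }
        }
    }
}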
I can imagine that changing a thread's priority frequently causes
severe overhead because the OS scheduler has to adjust all the time.
Thread and process priorities are usually set once to indicate overall
processing priority - not to speed up certain operations.
Not at all. In time-sharing systems, it's a common scheduling algorithm
to adjust the effective priority of a process dynamically, e.g.
processes that require user input get a boost above compute-bound ones,
to help keep response times low. As I said, I'm not inventing this: it
was state of the art about 30 years ago.
That's true, but in these cases it's the OS that does it - not the JVM.
From the OS point of view the JVM is just another process, and I doubt
there is an interface for adjusting the automatic priority (which in a
way would defeat "automatic"). The base priority, on the other hand, is
there to indicate the general priority of a thread / process, and I
still don't think it's a good idea to change it all the time.
So, either the OS honors thread state (mutex, IO etc.) and adjusts the
priority accordingly, or it doesn't. But I don't think it's the job of
the JVM.
Cheers
robert
--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/