Re: Question on -Xms/-Xmx and -XX:MaxPermSize in JVM start parameter
Tom Anderson wrote:
On Sun, 10 May 2009, Arne Vajhøj wrote:
Tom Anderson wrote:
On Sat, 9 May 2009, Arne Vajhøj wrote:
If I read the figure in:
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html#1.1.Sizing%20the%20Generations|outline
correctly, then MaxPermSize is not included in -Xmx.
But it is not that explicit in the text, so it would be nice if one of
the JVM gurus would comment.
I'm no JVM guru, but i do know that this is correct: there are a
number of separate memory pools in the JVM, whose maximum sizes are
set separately. They include the permanent space (set with
-XX:MaxPermSize) and the general heap (set with -Xmx). However,
there are more, besides those two! I don't know much about them, but
i know they exist, because we frequently observe apps using more
space than the total of mx and PermGen. I speculate that it could be
stacks or memory allocated by native code, but i really don't know.
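You can actually ask the JVM what pools it has, via the
java.lang.management stuff - this is just a sketch, and i make no
promises about which pools any given VM reports:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ListPools {
    public static void main(String[] args) {
        // Each pool (eden, survivor, tenured, perm gen, code
        // cache, ...) carries its own max, set independently of
        // the others; -1 means the max is undefined.
        for (MemoryPoolMXBean pool :
                ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-25s %-8s max=%d%n",
                pool.getName(), pool.getType(),
                pool.getUsage().getMax());
        }
    }
}

The heap pools fall under -Xmx; the non-heap ones, like perm gen and
the code cache, don't.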
ordinary heap = app objects
perm heap = app class definitions
It seems obvious to me that more stuff is needed - like the JVM itself!
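For concreteness, the two are sized independently on the command line,
e.g. (class name invented for the example):

java -Xms256m -Xmx1024m -XX:MaxPermSize=128m com.example.MyApp

which allows up to roughly 1 GB of ordinary heap plus 128 MB of perm -
plus stacks, code cache and the JVM's own overhead on top of that.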
ISTR reading that HotSpot allocates (some of) its structures on the
heap, so maybe not as much as you might think. Or, of course, maybe much
more!
Personally, i find it incredibly irritating that there isn't a flag
to limit total memory use, which is surely what people actually need.
I couldn't give two hoots about the size of PermGen, but if i have a
machine with 4 GB of RAM that's running two app server stacks in
parallel, i bloody well need to be able to put a hard limit of 2 GB
(or whatever) on each. Yes, i can probably do this with ulimit in the
startup script, but why isn't there just an option for it?
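Something like this at the top of the startup script is the usual
workaround (numbers purely illustrative; bash's ulimit -v is in KB):

ulimit -v 2097152   # cap the whole process at 2 GB of virtual memory
java -Xmx1536m -XX:MaxPermSize=256m com.example.MyApp

But that's the OS doing the JVM's job for it.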
All the JVM flags limit virtual memory, not RAM.
Correct - as does ulimit (or rather, ulimit -v does; ulimit -m does
limit physical memory). But RAM use cannot exceed virtual memory use,
so this is an effective way of keeping java processes the right size to
fit in RAM. And the reason to do that is the rule of thumb of keeping
everything in RAM - it's better to do more frequent GC work to keep your
app in RAM than to let it spill over into swap.
That is good general advice.
But if you prefer an OOME to degraded performance,
then you could use the limit you want.
Hang on, are you suggesting that using ulimit would lead to OOME when
JVM flags wouldn't, or that either would lead to OOME when not using a
limit wouldn't?
If the JVM will not allocate, or cannot allocate, what you
need, then you get an OOME.
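The flag case is easy to demonstrate - run a sketch like this with a
small heap, say java -Xmx16m EatHeap, and once GC can not free enough
the allocation fails with java.lang.OutOfMemoryError: Java heap space:

import java.util.ArrayList;
import java.util.List;

public class EatHeap {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<byte[]>();
        while (true) {
            // each iteration pins another 1 MB; when the -Xmx
            // limit is hit and GC can reclaim nothing, the next
            // allocation throws OutOfMemoryError
            hog.add(new byte[1024 * 1024]);
        }
    }
}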
I don't know about ulimit; i would hope that the JVM would cope
gracefully with hitting a memory limit, ie not crashing or getting
over-excited with OOMEs. And i don't think setting a hard limit on
memory use, by any method, is likely to lead to OOME in my particular
situation: as i mention above, it will just lead to more frequent GCs.
I'm talking about cutting down memory use from 2.5 GB to 2 GB, not 2.5 GB
to 0.5 GB. I'm fairly confident that my app will run in that amount of
memory; it uses most of its RAM for caches, so those can always be tuned
to use less memory (i don't think they're self-tuning, sadly).
If the app can run in MIN(Xmx, ulimit), then you should not get an OOME.
But I see your point: you want to decrease paging by increasing GC'ing.
Interesting idea.
I believe that a JVM should itself increase GC'ing if it notices
a lot of page faults. But I have no idea whether current implementations
actually do so.
Arne