Re: JVM vs my VM
Peter Duniho wrote:
> Sorry, you'll have to be more specific. If you would like to avoid the
> overhead of having to reinitialize thousands of object instances each
The only overhead that new allocation within the loop adds is the allocation
itself; the initialization would have to be done in either case. If the
preallocated object's value never changes, then of course you would create a
final reference to it outside the loop, but if you have to load new values
each time through, then you do that regardless.
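To make that concrete, here's a toy sketch; Point3D, consume(), and the array
names are my inventions, not anything from the benchmark. The per-iteration
stores are identical in both styles, so the only difference is the 'new':

    public class AllocDemo {
        static class Point3D { double x, y, z; }

        static double sink;
        static void consume(Point3D p) { sink += p.x + p.y + p.z; }

        public static void main(String[] args) {
            int n = 1000000;
            double[] xs = new double[n], ys = new double[n], zs = new double[n];

            // Style 1: allocate inside the loop. Fields arrive preset to
            // 0.0, but we overwrite them anyway, so that preset buys nothing.
            for (int i = 0; i < n; i++) {
                Point3D p = new Point3D();
                p.x = xs[i]; p.y = ys[i]; p.z = zs[i];
                consume(p);
            }

            // Style 2: pre-allocate and reuse. One allocation total; the
            // per-iteration stores are exactly the same as above.
            final Point3D q = new Point3D();
            for (int i = 0; i < n; i++) {
                q.x = xs[i]; q.y = ys[i]; q.z = zs[i];
                consume(q);
            }

            System.out.println(sink); // keep the work observable
        }
    }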
There is an overhead to allocation, but it has the virtue of presetting all
instance fields to zero/null. That virtue is nullified, pardon the pun, if
you need non-zeroish initial values. The allocation overhead is small, but
could be significant if a loop runs enough times.
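Where the free zeroing does earn its keep is a scratch buffer whose algorithm
relies on zero defaults; reuse then has to pay the clearing back by hand.
Another toy sketch, names mine:

    import java.util.Arrays;

    public class ZeroDemo {
        static void tally(int[] counts, byte[] data) {
            for (byte b : data) counts[b & 0xFF]++;
        }

        public static void main(String[] args) {
            byte[] data = "hello, world".getBytes();

            // Fresh allocation: the JVM hands you 256 zeros for free.
            int[] counts = new int[256];
            tally(counts, data);

            // Reuse: tally() relies on zero defaults, so the reused buffer
            // must be cleared by hand, paying back part of what skipping
            // the allocation saved.
            Arrays.fill(counts, 0);
            tally(counts, data);
        }
    }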
OTOH, a pre-allocated object will get tenured into the old generation, where
collection is less frequent but more expensive, so its individual GC profile
gets worse.
OTOOH, you probably won't have enough of these to create much memory pressure.
Unless they contain references to young-generation objects, or worse, masses
of young-generation objects. Then the tenured object keeps the younguns alive
longer than they should be, until they age into the rest home, too. Depending.
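The hazardous shape looks something like this; the names and sizes are
invented for illustration:

    import java.util.ArrayList;
    import java.util.List;

    public class PinningDemo {
        // Long-lived holder: it will be promoted to the old generation.
        private static final List<byte[]> scratch = new ArrayList<byte[]>();

        static void process(List<byte[]> items) { /* elided */ }

        public static void main(String[] args) {
            for (int pass = 0; pass < 1000; pass++) {
                for (int i = 0; i < 1000; i++) {
                    // Young objects referenced from a tenured holder: any
                    // minor GC during this pass must treat them as live.
                    scratch.add(new byte[1024]);
                }
                process(scratch);
                scratch.clear(); // only now may the young'uns die young
            }
        }
    }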
OTOOOH, certain new allocations within a loop are candidates for magnificent
HotSpot optimization. For value-type simulation of the kind needed for the
benchmark under discussion (pardon me again; I haven't looked at its source,
so I speak from some ignorance), you might be able to create fairly simple
value objects, i.e., ones with a few attributes exposed either as simple
accessor methods or as public variables. HotSpot under "-server" will likely
observe that such an object never escapes the loop body (escape analysis),
enregister its attributes (scalar replacement), and eliminate the 'new'
allocation altogether.
Of course, I'm speculating unless I test this objectively for a particular
program. Just because the possibility for this optimization exists, and
HotSpot is documented to do this sort of thing, doesn't mean that it will
happen for a given scenario.
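For what it's worth, here's the shape I'd expect escape analysis to handle;
it's a sketch only, and whether the optimization fires for the real benchmark
is exactly what would need measuring. Comparing runs with
-XX:+DoEscapeAnalysis (the default on recent HotSpots) against
-XX:-EliminateAllocations should reveal whether the 'new' was actually removed:

    import java.util.Arrays;

    public class ScalarDemo {
        static final class Point {
            final double x, y;
            Point(double x, double y) { this.x = x; this.y = y; }
        }

        // 'p' never escapes this method, so HotSpot's escape analysis can
        // scalar-replace it: the fields live in registers and the 'new'
        // disappears from the compiled code.
        static double sumOfDistances(double[] xs, double[] ys) {
            double sum = 0.0;
            for (int i = 0; i < xs.length; i++) {
                Point p = new Point(xs[i], ys[i]);
                sum += Math.sqrt(p.x * p.x + p.y * p.y);
            }
            return sum;
        }

        public static void main(String[] args) {
            double[] xs = new double[100000];
            double[] ys = new double[100000];
            Arrays.fill(xs, 3.0);
            Arrays.fill(ys, 4.0);
            double total = 0.0;
            for (int r = 0; r < 100; r++) { // warm up so the JIT compiles it
                total += sumOfDistances(xs, ys);
            }
            System.out.println(total);
        }
    }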
> time through a loop, along with the overhead of traversing all those
> objects repeatedly created in the youngest generation of the heap, what
What overhead? There is no such traversal, ergo no such overhead. Zero. I
assume you're referring to traversal of dead objects, because if they were
alive and the algorithm called for their traversal, that wouldn't be a
problem. GCs don't traverse dead objects: the young-generation collector
copies the live set out, and its cost is proportional to that live set, so
garbage, however plentiful, is never visited at all.
> would YOU do instead? And why do you think pre-allocating objects is an
> "anti-optimization"?
Long-lived objects in Java can be, and generally are, an anti-optimization.
For tight loops executed a gazillion times over objects that are expensive to
construct (not simple value objects, in other words) and complex enough to
defeat ready HotSpot optimization, pre-allocation can help. For loops not so
performance-critical that you care about the odd 10 ns per iteration, or
bodies amenable to HotSpot's care, or objects that hold references to
young-generation objects, new allocation may well be superior.
--
Lew