Re: micro-benchmarking
Giovanni Azua wrote:
Hi Mark,
Thank you for your nice feedback!
"Mark Space" <markspace@sbc.global.net> wrote
1. Are your JVM parameters redundant with -server? That flag might remove
the need for a "warm-up time" for all tests. -server might also allow
others to test and compare with your results more easily.
I am still looking around but have not yet found what -server actually
does ...
Broadly, it roughly doubles the speed of Java programs compared to the -client
option, as measured on enterprise-class hardware (e.g., Sun servers) running
real-world applications.
The "-server" option turns on every HotSpot optimization in the JVM - loop
unrolling, lock escape analysis, compilation and decompilation of key code
blocks between bytecode and native code with On-Stack Replacement (OSR), dead
code elimination, and the like. Unlike static optimizers, HotSpot can account
for transitory factors, for example, that a variable isn't changing at this time.
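
For a micro-benchmark, that means letting the JIT warm up before you time
anything. A minimal sketch of such a harness (the class name, workload, and
iteration counts below are placeholders, not anything from this thread):

public class Bench {

    // Placeholder workload; substitute the code actually under test.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 100000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up: give HotSpot time to profile and compile the hot path.
        // Accumulating into "sink" keeps the JIT from discarding the calls.
        long sink = 0;
        for (int i = 0; i < 10000; i++) {
            sink += work();
        }

        // Timed run, after compilation has (presumably) happened.
        long start = System.nanoTime();
        long result = work();
        long elapsed = System.nanoTime() - start;

        System.out.println("result=" + result + " sink=" + sink
                + " time=" + elapsed + " ns");
    }
}

Running it both ways, "java -client Bench" vs. "java -server Bench", would show
whether the extra warm-up logic is still needed under -server.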
A good idea (I think brought up by Tom) would be to measure each iteration
separately and then discard outliers by e.g. [sic] discarding those whose abs
[sic] diff [sic] from the mean exceeds the stddev [sic].
That technique doesn't seem statistically valid.
In the first place, you'd have to use the outliers to calculate the mean and
"stddev".
I've seen techniques that discard the endmost data points, but never ones that
require a statistical analysis to decide what to include in or reject from that
same statistical analysis.
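
To make the objection concrete, the proposed filter amounts to something like
this (a hypothetical sketch, not code from anyone in this thread):

import java.util.ArrayList;
import java.util.List;

public class OutlierFilter {

    // The proposed rule: drop samples farther than one stddev from the mean.
    // Note the circularity: mean and stddev must be computed from ALL
    // samples, outliers included.
    static List<Double> filter(double[] samples) {
        double mean = 0;
        for (double s : samples) mean += s;
        mean /= samples.length;

        double var = 0;
        for (double s : samples) var += (s - mean) * (s - mean);
        double stddev = Math.sqrt(var / samples.length);

        List<Double> kept = new ArrayList<Double>();
        for (double s : samples) {
            if (Math.abs(s - mean) <= stddev) kept.add(s);
        }
        return kept;
    }

    public static void main(String[] args) {
        double[] samples = { 10.1, 9.9, 10.0, 10.2, 57.3 }; // 57.3 is the outlier
        System.out.println(filter(samples)); // prints [10.1, 9.9, 10.0, 10.2]
    }
}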
--
Lew