How to programmatically test for LINEAR TIME (as opposed to quadratic)?
I'm totally unfamiliar with programmatic testing of run-time behavior.
Currently, my Google Test case (or whatever it's called) looks like this,
where the macro CPPX_U just provides a strongly typed literal (as discussed
in my "literals" article in the August issue of ACCU's Overload magazine):
[code]
// Linear time concatenations.
{
    enum { n = 40000 };
    String s4;
    String s5( s4 );    // Addref.

    // Untimed warm-up: build up the first n*3 characters.
    for( int i = 1; i <= n; ++i )
    {
        s4 += CPPX_U( "xyz" );
    }
    EXPECT_EQ( n*3, s4.length() );

    // First timed run of n concatenations.
    Timer t1;
    for( int i = 1; i <= n; ++i )
    {
        s4 += CPPX_U( "xyz" );
    }
    t1.stop();
    EXPECT_EQ( 2*n*3, s4.length() );

    // Second timed run of n concatenations, on the now longer string.
    Timer t2;
    for( int i = 1; i <= n; ++i )
    {
        s4 += CPPX_U( "xyz" );
    }
    t2.stop();
    EXPECT_EQ( 3*n*3, s4.length() );

    // With linear time appends the two timed runs should take about the same time.
    double const time_difference   = t2.seconds() - t1.seconds();
    double const percent_variation = 100*( time_difference/t1.seconds() );
    EXPECT_TRUE( percent_variation < 20.0 )     // Negative is OK, that's "better" than linear.
        << percent_variation << "%: " << t1.seconds() << " " << t2.seconds();
}
[/code]
I started out with a max 5% difference criterion and only 10,000
concatenations, but the resolution of the timer in Windows is only
milliseconds. So one problem is that the test is apparently sensitive to
the number of iterations: too few for the test machine and the results are
imprecise, too many and the test runs too long (maybe hours).
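One thing I'm considering, to get around the millisecond resolution, is to
base the timing on a steady high-resolution clock. A minimal sketch,
assuming C++11's <chrono> is available (the ChronoTimer name is just
illustrative, it's not my actual Timer class):
[code]
#include <chrono>

// Sketch of a timer with sub-millisecond resolution, interface-compatible
// with the Timer used in the test above (starts on construction).
class ChronoTimer
{
public:
    ChronoTimer()
        : start_( std::chrono::steady_clock::now() )
        , stop_( start_ )
    {}

    void stop() { stop_ = std::chrono::steady_clock::now(); }

    double seconds() const
    {
        return std::chrono::duration<double>( stop_ - start_ ).count();
    }

private:
    std::chrono::steady_clock::time_point   start_;
    std::chrono::steady_clock::time_point   stop_;
};
[/code]
With something like that the iteration count could presumably be chosen so
that each timed loop runs for at least, say, a quarter of a second, instead
of being tuned by hand per test machine.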
Also, I discovered that due to gremlins or something, when I use the Visual
Studio debugger to "run to cursor" on the terminating right brace above,
for mysterious reasons more time is spent in the t2 loop than in the t1
loop. So it seems somewhat unreliable. And even though I now get consistent
results, a 20% leeway feels like it doesn't necessarily prove linear time?
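One alternative I've been toying with is to compare runs of different sizes
instead of two equal-sized runs: for a linear-time operation, doubling the
number of concatenations should roughly double the total time, whereas
quadratic behaviour should roughly quadruple it, which seems like a much
wider gap to discriminate on. Just a sketch, not something I've actually
run; it assumes the ChronoTimer above, and the threshold of 3.0 (halfway
between the linear ratio 2 and the quadratic ratio 4) is a guess:
[code]
// Time n fresh concatenations onto an initially empty String.
double seconds_for_concatenations( int const n )
{
    String s;
    ChronoTimer t;
    for( int i = 1; i <= n; ++i )
    {
        s += CPPX_U( "xyz" );
    }
    t.stop();
    return t.seconds();
}

TEST( StringTiming, concatenation_is_roughly_linear )
{
    enum { n = 40000 };
    double const time_1x = seconds_for_concatenations( n );
    double const time_2x = seconds_for_concatenations( 2*n );
    double const ratio   = time_2x/time_1x;

    // Linear => ratio near 2, quadratic => ratio near 4.
    EXPECT_LT( ratio, 3.0 )
        << "ratio " << ratio << ": "
        << time_1x << " s vs. " << time_2x << " s";
}
[/code]
But I don't know whether that's meaningfully more robust than the 20%
criterion, or just the same problem in a different shape.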
Any thoughts or experience?
- Alf (in uncharted territory!)