Re: How to programmatically test for LINEAR TIME (as opposed to quadratic)?
On 17.10.2013 00:49, Ian Collins wrote:
Alf P. Steinbach wrote:
I'm totally unfamiliar with programmatic testing of run-time behavior.
Currently, my Google Test case (? whatever) looks like this, where the
macro CPPX_U just provides a strongly typed literal (as discussed in my
"literals" article in the August issue of ACCU Overload mag):
[code]
<snip>
[/code]
I started out with a max 5% difference criterion and only 10,000
concatenations, but the resolution of the timer in Windows is only
milliseconds. So one problem is that this is apparently sensitive
to the number of iterations: too few for the test machine and the results
are imprecise, too many and the test runs too long (maybe hours).
This is really a question about operating system timers. On my
platforms (Solaris and its derivatives) I would use per-process high
resolution timers (which use a hardware source) for this kind of test.
Does windows have those?
Well, it does, and apparently with somewhat higher resolution than the
1000 ticks/sec of Visual C++'s std::chrono::high_resolution_clock:
[code]
#include <rfc/winapi_wrappers/windows_h.h>
#include <iostream> // std::cout, std::cerr, std::endl
void cpp_main()
{
    LARGE_INTEGER result = {0};
    if( !::QueryPerformanceFrequency( &result ) ) { throw 666; }

    using namespace std;
    cout << result.QuadPart << " ticks/sec" << endl;
}
#include <rfc/cppx/default_main.hpp>
[/code]
2435917 ticks/sec
On this laptop, i.e. about two and a half thousand times better. :-)
So, now to write a platform-dependent version of the Timer class.
For those who just need a timer -- and the resolution is probably very
good in *nix -- here's the original std::chrono-based code I used, just
ad hoc code (ironically, while used in testing, not itself tested):
[code]
#pragma once
#include <chrono>

namespace cppx{ namespace instrumentation{
    using std::chrono::high_resolution_clock;
    using std::chrono::duration_cast;
    using std::chrono::nanoseconds;

    // Most likely this timer will measure wall clock time, not user process time.
    // It depends on the standard library implementation of high_resolution_clock.
    class Timer
    {
    public:
        typedef high_resolution_clock Clock;
        typedef Clock::time_point Time;
        typedef Clock::duration Duration;

    private:
        Time start_;
        Time end_;
        bool is_running_;

    public:
        typedef long long Int64;

        auto duration() const
            -> Duration
        { return (is_running_? Clock::now() : end_) - start_; }   // Valid both while running and after stop().

        auto nano_seconds() const
            -> Int64
        { return duration_cast<nanoseconds>( duration() ).count(); }

        auto seconds() const
            -> double
        {
            static double const nano = 1e-9;
            return nano*nano_seconds();
        }

        void stop()
        {
            end_ = Clock::now();
            is_running_ = false;
        }

        void carry_on()     // A.k.a. "continue", which however is a C++ keyword.
        {
            start_ = Clock::now() - duration();
            is_running_ = true;
        }

        Timer()
            : start_( Clock::now() )
            , end_()
            , is_running_( true )
        {}
    };
} }  // namespace cppx::instrumentation
[/code]
Cheers, & thanks!,
- Alf