Re: A simple unit test framework
On May 6, 3:03 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
James Kanze wrote:
...
Yes, but nobody but an idiot would pay you for such a thing.
Thread safety, to cite but the most obvious example, isn't
testable, so you just ignore it?
Common misconception.
1. Testability of code is a primary objective. (i.e. code that can't be
tested is unfit for purpose)
2. Any testing (MT or not) is about a level of confidence, not absoluteness.
I have discovered that MT test cases that push the limits of the code
using random input do provide sufficient coverage to produce a level
of confidence that makes the target "testable".
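For concreteness, the kind of test being described presumably looks
something like the following sketch (mine, using modern C++ threads;
the LockedQueue is a made-up stand-in for the class under test, and
the thread and iteration counts are arbitrary):

    #include <atomic>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <random>
    #include <thread>
    #include <vector>

    // Stand-in for the component under test; substitute your own
    // thread-safe class here.
    class LockedQueue {
        std::queue<int> q;
        std::mutex      m;
    public:
        void push(int v) {
            std::lock_guard<std::mutex> l(m);
            q.push(v);
        }
        bool pop(int& v) {
            std::lock_guard<std::mutex> l(m);
            if (q.empty()) return false;
            v = q.front();
            q.pop();
            return true;
        }
    };

    int main() {
        LockedQueue       queue;
        std::atomic<long> pushed(0), popped(0);

        // Several threads do a random mix of pushes and pops; the
        // random input is what is supposed to shake out the timing
        // windows.
        std::vector<std::thread> workers;
        for (int t = 0; t != 4; ++t) {
            workers.push_back(std::thread([&, t] {
                std::mt19937 gen(12345u + t);
                for (int i = 0; i != 1000000; ++i) {
                    if (gen() % 2) {
                        queue.push(i);
                        ++pushed;
                    } else {
                        int v;
                        if (queue.pop(v)) ++popped;
                    }
                }
            }));
        }
        for (auto& w : workers) w.join();

        // Drain what is left and check the invariant: every push is
        // matched by exactly one successful pop.
        int v;
        while (queue.pop(v)) ++popped;
        std::cout << (pushed == popped ? "OK" : "FAILED") << '\n';
    }

The random mix makes the interleavings differ from run to run; the
final invariant check is what actually detects a broken queue.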
Then write it up, and publish it, because you have obviously
struck on something new, which no one else has been able to
measure. But you must mean something else by Monte Carlo
testing than has been meant in the past. Because the
probability of finding an error by just throwing random data at
a problem is pretty low for any code which has passed code
review.
If you consider what happens when you have multiple processors
interacting randomly in a consistent system, you end up testing more
possibilities than can present themselves in a more systematic system.
However, with threading, it's not really systematic because external
events cause what would normally be systematic to be random. Now
consider what happens in a race condition failure. This normally
happens when two threads enter sections of code that should be mutually
exclusive. Usually there are a few thousand instructions in your test
loop (for a significant test). The regions that can fail are usually
tens of instructions long, sometimes hundreds.
The regions that can fail are often of the order of 2 or 3
machine instructions. In a block of several million. And in
some cases, the actual situation can only occur less often than
that: there is a threading error in the current implementation
of std::basic_string, in g++, but I've yet to see a test program
which will trigger it.
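The textbook illustration of how narrow such a window can be is an
unsynchronized increment: ++counter typically compiles to a load, an
add and a store, so the interleaving that loses an update has to land
inside those two or three instructions. A deliberately broken sketch,
purely for illustration:

    #include <iostream>
    #include <thread>

    long counter = 0;     // deliberately neither atomic nor locked

    void bump() {
        for (int i = 0; i != 1000000; ++i)
            ++counter;    // roughly load / add / store: the window in
                          // which another thread can lose an update is
                          // just these few machine instructions
    }

    int main() {
        std::thread a(bump);
        std::thread b(bump);
        a.join();
        b.join();
        // A correct version would print 2000000; whether and how badly
        // a given run comes up short depends entirely on where the
        // scheduler happens to land inside those tiny windows.
        std::cout << counter << '\n';
    }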
If you are able to push randomness, how many times do you need
to reschedule one thread to hit a potential problem?
Practically an infinity.
Given cache latencies, pre-emption from other threads, program
randomness (like memory allocation variances) you can achieve
pretty close to full coverage of every possible race condition
in about 10 seconds of testing. There are some systematic
start-up effects that may not be found, but you mitigate that
by running automated testing. (In my shop, we run unit tests
on the build machine around the clock - all the time.)
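Presumably the randomness being relied on here amounts to something
like sprinkling random yields and short sleeps between the operations
in the test loop. A minimal sketch of such a helper (my own, not taken
from any particular framework):

    #include <chrono>
    #include <random>
    #include <thread>

    // Hypothetical helper for a stress-test loop: call it between
    // operations on the object under test so that successive runs see
    // different interleavings.  The probabilities and the delay range
    // are arbitrary.
    inline void random_jitter(std::mt19937& gen) {
        switch (gen() % 4) {
        case 0:
            std::this_thread::yield();
            break;
        case 1:
            std::this_thread::sleep_for(
                std::chrono::microseconds(gen() % 50));
            break;
        default:
            break;        // most of the time, do nothing at all
        }
    }

Each worker thread would keep its own generator and call
random_jitter() between operations on the object under test.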
So perhaps, after a couple of centuries, you can say that your
code is reliable.
So that leaves us with the level of confidence point. You can't achieve
perfect testing all the time, but you can achieve a high level of
confidence testing all of the time.
Certainly. Testing can only prove the existence of errors.
Never the absence. Well-run shops don't count on testing,
because of this. (That doesn't mean that they don't test. Just
that they don't count on testing, alone, to ensure quality.)
It does require a true multiprocessor system to test adequately. I
have found a number of problems that almost always fail on a true MP
system but hardly ever fail on an SP system. Very rarely have I found
problems on systems with 4 or more processors that were not also found
on a 2 processor system, although I would probably spend the money on
a 4 core CPU for developer systems today just to add more levels of
confidence.
In practice, I have never seen a failure in the wild that could not be
discovered with a properly crafted MC+MT test.
And I have.
[...]
My customers want to know what the code will do, and how much
development will cost, before they allocate the resources to
develop it. Which means that I have a requirements
specification which has to be met.
I have met very few customers who would know what a spec was even if
it smacked them up the side of the head. Sad. Inevitably it leads to
a pissed-off customer.
Regrettably, it is often up to the vendor to define what the
customer actually needs. But that doesn't mean not writing
specifications.
--
James Kanze (Gabi Software) email: james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34