Re: A simple unit test framework
On May 6, 3:30 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
James Kanze wrote:
On May 6, 1:27 am, Gianni Mariani <gi3nos...@mariani.ws> wrote:
Pete Becker wrote:
Yup. Typical developer-written test: I don't understand testing well
enough to do it right, so I'll do something random and hope to hit a
problem.
I have yet to meet a "test" developer that can beat the monte carlo test.
OK - I agree, there are cases where a monte-carlo test will never be
able to test adequately, but as a rule, it is better to have a MC test
than not. I have uncovered more legitimate problems from the MC test
than from carefully crafted tests.
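A minimal sketch of the kind of Monte Carlo test being described here. The code under test (std::sort) and the invariant checked are illustrative assumptions, not anything from the thread; the point is only the shape: random inputs, fixed seed, an invariant that must hold for every input.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Monte Carlo test sketch: feed many random inputs to the code under
// test and verify an invariant that must hold for all of them.
// Here the "code under test" is std::sort; the invariant is that the
// output is ordered and is a permutation of the input.
bool mcTestSort(int iterations)
{
    std::mt19937 gen(42);   // fixed seed, so failures are reproducible
    std::uniform_int_distribution<int> dist(-1000, 1000);
    for (int i = 0; i < iterations; ++i) {
        std::vector<int> input(100);
        for (int& x : input) x = dist(gen);
        std::vector<int> output(input);
        std::sort(output.begin(), output.end());
        if (!std::is_sorted(output.begin(), output.end()))
            return false;
        if (!std::is_permutation(output.begin(), output.end(),
                                 input.begin()))
            return false;
    }
    return true;
}
```

Note that such a test can only check properties the author thought to state as invariants; it says nothing about inputs the distribution never produces, which is the crux of the disagreement below.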
Which proves that you don't have anyone who knows how to write
tests. A carefully crafted test will, by definition, find any
problem that a MC test will find.
We will have to agree to disagree on this.
There's nothing to disagree with. It's a basic definition. If
a MC test finds the error, and a hand crafted test doesn't, the
hand crafted test isn't well designed or carefully done.
I have one piece of anecdotal evidence which suggests that no one is
capable of truly foreseeing the full gamut of issues that a well
designed MC test can find.
I have more than anecdotal evidence that there are significant
errors which will slip through MC testing. Admittedly, the
most significant ones also slip through the most carefully
crafted tests as well. It is, in fact, impossible to write a
test for them which will reliably fail.
This is why no shop serious about quality will rely solely on testing.
A pass on an MC test raises the level of confidence, which is always a
good thing.
Certainly. It's just that often, generating enough random data
is more effort than doing things correctly, and beyond a very
low level, doesn't raise the confidence level very much. If we
take Pete's example of a log() function, testing with a million
random values doesn't really give me much more confidence than
testing with a hundred, and both give significantly less
confidence than a good code review, accompanied by testing with
a few critical values. (Which values are critical, of course,
being determined by the code review.)
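A sketch of what "testing with a few critical values" might look like for the log() example. Which values are critical is exactly what a code review would decide; the ones below (exact points, arguments just either side of 1.0, the extremes of the representable range) are plausible assumptions for a typical implementation, not a definitive list.

```cpp
#include <cfloat>
#include <cmath>

// Critical-value test sketch for log(): the inputs are chosen from the
// mathematics and the floating-point representation, not at random.
bool testLogCriticalValues()
{
    // Exact point: IEEE-conforming implementations return +0 for log(1).
    if (std::log(1.0) != 0.0) return false;
    // log(e) should be 1 to within a small tolerance.
    if (std::fabs(std::log(std::exp(1.0)) - 1.0) > 1e-10) return false;
    // Just below and just above 1.0: the result must straddle zero.
    if (!(std::log(1.0 - DBL_EPSILON) < 0.0)) return false;
    if (!(std::log(1.0 + DBL_EPSILON) > 0.0)) return false;
    // Extremes of the normal range: results stay finite.
    if (!std::isfinite(std::log(DBL_MIN))) return false;
    if (!std::isfinite(std::log(DBL_MAX))) return false;
    return true;
}
```

A million uniform random doubles would be astronomically unlikely to land on any of these points, which is the argument being made above.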
In my experience, the main use of MC tests is to detect when
your tests aren't carefully crafted. Just as the main use of
testing is to validate your process---anytime a test reveals an
error, it is a sign that there is a problem in the process, and
that the process needs improvement.
If I read between the lines here, I think you're saying that we need
test developers to conceive of every kind of possible failure. I have
yet to meet anyone who could do that consistently, and I have been
developing software for a very long time.
The probability of a single programmer missing something is
close to 1, I agree. The probability of several programmers
missing the same thing, on the other hand, is close to 0. And
the probability of a random test hitting the single input value
for which the code doesn't work is 1/N, where N is the number of
input values. If N is small, exhaustive testing is obviously a
perfect solution. In most cases, however, N is large enough
that a significant sampling by random selection is simply not
possible in a reasonable time.
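The 1/N argument above can be made concrete. If exactly one input value out of N triggers the bug, and each random probe is drawn uniformly and independently, the chance that k probes all miss it is (1 - 1/N)^k. (The uniform-and-independent model is an assumption for illustration; real generators and input distributions vary.)

```cpp
#include <cmath>

// Probability that k independent uniform random probes all miss the
// single failing input among N possible inputs: (1 - 1/N)^k.
double pMiss(double N, double k)
{
    return std::pow(1.0 - 1.0 / N, k);
}
```

For example, with N = 2^32 possible inputs and a million random tests, pMiss is about exp(-10^6 / 2^32) ≈ 0.9998: the random test almost certainly never sees the one bad input.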
I don't think your premise (if I read it correctly) is achievable.
It seems to work in practice. At least one company is (or was)
at SEI level 5, and even companies at level 3 regularly turn out
software with less than one error per 100 KLoc going into production.
I lean toward making the computer do as much work as possible
because it is much more consistent than a developer.
The problem is that the computer only does what you tell it to
do. If you don't tell it to test such and such a feature, it
won't. If you don't tell it what the critical values are
(limits, etc.), then it is unlikely that it will hit them by chance.
James Kanze (Gabi Software) email: firstname.lastname@example.org
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34