Re: A simple unit test framework
On May 7, 10:02 am, anon <a...@no.no> wrote:
James Kanze wrote:
On May 5, 2:31 am, Ian Collins <ian-n...@hotmail.com> wrote:
Pete Becker wrote:
Ian Collins wrote:
Pete Becker wrote:
If you apply TDD correctly, you only write code to pass tests, so
all of your code is covered.
Suppose you're writing test cases for the function log, which
calculates the logarithm of its argument. Internally, it will use
different techniques for various ranges of argument values. But the
specification for log, of course, doesn't tell you this, so your test
cases aren't likely to hit each of those ranges, and certainly won't
make careful probes near their boundaries. It's only by looking at
the code that you can write these tests.
Pete, I think you are missing the point of TDD.
It's easy for those unfamiliar with the process to focus on the "T" and
ignore the "DD". TDD is a tool for delivering better code, the tests
drive the design, they are not driven by it.
Which, of course, is entirely backwards.
It is, but you get better code
Better code than what?
In practice, if a coder wants to write the unit tests first, no
one will stop him; his deliverable consists of both the code and
the unit tests. But there's absolutely no advantage in doing so,
and most people I've worked with feel more comfortable writing
the actual code first.
So if I were tasked with
writing the function log, I'd start with a simple test, say log(10),
and then add more tests to cover the full range of inputs.
That is, of course, one solution. It's theoretically possible
for log, but it will take several hundred centuries of CPU time,
which means that it's not very practical. In practice, the way
you verify that a log function is correct is with code review,
with testing of the border cases (which implies what Pete is
calling white box testing), to back up the review.
If you were to write a log() function, would you test it
against all floats?
That's precisely my point: you can't. So you have to analyse
the code, to determine where the critical boundaries are, and
test the limit cases. And of course, you have to document that
analysis, because the code reviewers will definitely want to
review it.
These tests
would specify the behavior and drive the internals of the function.
In this case, I think that the behavior is specified before
hand. It is a mathematical function, after all, and we can know
the precise result for every possible input. In practice, of
course, it isn't at all possible to test it, at least not
exhaustively.
Remember, if code isn't required to pass a test, it doesn't get
written.
So your log function only has to produce correct results for the
limited set of values you use to test it? I hope I never have
to use a library you wrote.
If you take some random numbers (for example 0.0, 0.007, 0.29, 0.999,
1.0, 1.0001, 1.9900, 555.0, 999999.0) and your log function for these
numbers gives correct results (with small enough error) you can be sure
your log function is good.
Obviously, you don't know how floating point arithmetic works,
or you wouldn't make such a stupid statement.
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34