Re: C++ fluency
On May 6, 5:51 pm, Noah Roberts <n...@nowhere.com> wrote:
Jerry Coffin wrote:
In article <76d32lF1c73b...@mid.individual.net>,
ian-n...@hotmail.com says...
[ ... ]
If you use TDD, the tests always fail first time. You add
the code to make them pass.
This makes me curious. Why would you bother running the
tests if you _know_ they're going to fail? I've always
written the tests first, but then written at least some
minimal bit of code that should pass at least some part of
the tests before attempting to run the tests. What's the
point of running the test when you're sure there's no
possibility of even coming close to passing it at all?
For one, because you might have fscked up and your test might not
even be running. Before Boost.Test became the great, easy to use
testing framework it is now, I used CppUnit. With CppUnit you had
to register each individual test method by hand... sometimes one
would forget, write lots of code thinking it was passing (the
framework gives no warning that a function was never run as a
test), and then be stuck with a big pile of stuff to debug when
you realize what happened.
So I always wrote one to explicitly fail first, by telling the
test to fail.
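From memory, the failure mode looked roughly like this (WidgetTest
and its methods are invented for illustration, and the exact macro
spellings may be slightly off, so treat it as a sketch):

    #include <cppunit/TestFixture.h>
    #include <cppunit/extensions/HelperMacros.h>
    #include <cppunit/extensions/TestFactoryRegistry.h>
    #include <cppunit/ui/text/TestRunner.h>

    class WidgetTest : public CppUnit::TestFixture
    {
        CPPUNIT_TEST_SUITE( WidgetTest );
        CPPUNIT_TEST( testCanary );
        CPPUNIT_TEST( testCreate );
        // testResize was never added here, so it silently never
        // runs, even though it compiles and looks like a test.
        CPPUNIT_TEST_SUITE_END();

    public:
        // The "fail first" canary: if this doesn't show up as a
        // failure, the suite isn't being run at all.
        void testCanary() { CPPUNIT_FAIL( "canary: suite is running" ); }
        void testCreate() { CPPUNIT_ASSERT( true ); }
        void testResize() { CPPUNIT_ASSERT( false ); } // forgotten above
    };
    CPPUNIT_TEST_SUITE_REGISTRATION( WidgetTest );

    int main()
    {
        CppUnit::TextUi::TestRunner runner;
        runner.addTest(
            CppUnit::TestFactoryRegistry::getRegistry().makeTest() );
        return runner.run() ? 0 : 1;
    }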
Or you forgot to implement both the test and the functionality
it was to test:-).
But I think that Jerry is just asking for a little common sense.
For more complicated things, it may be worth writing code to
"test the test", along the lines that Philip suggests, but in a
lot of cases, it's probably more effort than it's worth.
Carried to an extreme: you then have to write code which tests
the code which "tests the tests", to ensure that it really does
have the error it's supposed to be checking for, and so on, ad
infinitum. What you actually have to do is recurse until the
"test" is trivial enough that you're satisfied it's valid.
The second reason is on principle. In TDD you're not supposed
to write ANY code that isn't guided by a test failure.
So what do you do about things which can't be thoroughly tested?
Like thread safety, or most floating point.
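A trivial illustration of the floating point problem: an
exact-equality check either fails spuriously, or the test has to
embed a tolerance which itself needs justifying.

    #include <cassert>

    int main()
    {
        double sum = 0.0;
        for ( int i = 0; i < 10; ++ i )
            sum += 0.1;

        // On typical IEEE 754 doubles this is false: the
        // accumulated rounding error means sum is not exactly 1.0.
        // assert( sum == 1.0 );

        // So the test checks a tolerance instead; but the value of
        // the tolerance (1e-9 here is purely illustrative) is
        // exactly the numerical analysis the test can't do for you.
        assert( sum > 1.0 - 1e-9 && sum < 1.0 + 1e-9 );
        return 0;
    }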
The third reason is that in theory (and it's happened to me in
practice) you could be writing a test that doesn't actually
fail. Nobody's perfect and you might not be writing your
tests right. The best way to make sure your test tests the
code you're going to write is to write it first and make sure
it fails the way you expect it to.
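Concretely, that first step might look something like this with
Boost.Test (parse_port is a made-up function; the stub is
deliberately not implemented yet, so the first run should fail for
exactly the reason you expect):

    #define BOOST_TEST_MODULE parse_port_test
    #include <boost/test/included/unit_test.hpp>

    // Not implemented yet: the stub is deliberately wrong, so the
    // test below fails on its first run, proving that it really is
    // being run and really checks what we think it checks.
    int parse_port( char const* /* text */ )
    {
        return -1;
    }

    BOOST_AUTO_TEST_CASE( parses_decimal_port )
    {
        BOOST_CHECK_EQUAL( parse_port( "8080" ), 8080 );
    }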
Certainly. And what if the code you write to ensure that the
test fails actually causes it to fail for some unrelated reason?
Philip gave a list of things that have to be considered.
I disagree with Schwab's assessment that you start with a
bunch of tests. We do try to guide assessment of completion
by integration tests (and have a team specifically for that so
the same person isn't both developing and testing the feature
for completion) but unit tests are different. It seems to me
that most TDD proponents describe the approach as a 1:1 thing.
Write the test, make it pass, write another test, make it
pass... That's how I do it and I've found the approach very
beneficial.
At the lowest level, it's one fashion of doing things. I think
it may depend on personality. I tend to write the code which is
to be tested before I write the tests, but at least for more
complicated components, I will write the code and the tests for
one thing, and only go on to the next once the tests pass. Except
for the order of writing the code and the tests, I
think this pretty much corresponds to what you are doing. On
the other hand, I've known people who prefer larger blocks,
writing the entire component, then all of the tests, then
testing. In the end, I think it should be left up to the
individual developer (and a good developer will give all of the
combinations an honest trial)---what counts in the end is that
the code passes all of the tests, that the tests are as
exhaustive as possible, and that the code is easily understood
and maintained by someone who's never seen it before. The first
is verified by the tests themselves, automatically (assuming
your build environment runs the tests automatically, as it
should). The other two are verified by code review (which also
verifies things like thread safety, which doesn't lend itself to
testing).
For one thing, trying to write a lot of code to make a whole
series of unit tests pass is rather impractical. It's hard to
even get the tests to compile most of the time, especially if
you're writing new code. You need positive feedback more
often to make sure that what you've done so far is correct.
Now, you might write a bunch of tests and comment them out or
something, I do that occasionally when I know exactly what's
coming.
Another aspect of TDD though is that it isn't just a
development guiding process but also a design guiding process.
And how do you design the tests? Or rather, how do you define
what the tests should test? At some point, your client or user
specifies a problem, not a series of tests, and you have to
"design" something (not necessarily even a program) which solves
that problem.
The actual situation varies a lot depending on the domain.
Where I am now, the "problem" is generally formulated very
vaguely, and it's up to me (or more often my collaborators) to
use their domain knowledge to determine what is really needed,
and how to best implement it. In other cases, the
"specification" has come in the form of an official standard,
an RFC, for example, which includes state diagrams,
etc.---things that are logically part of the design.
One of the best features of TDD is that, by making sure your code
is unit testable, it rather forces a better design.
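To make the claim concrete (all the names here are invented): code
that talks directly to, say, a database can't be unit tested
without the database, so making it testable pushes the dependency
behind an interface, which is arguably also the better design.

    #include <string>
    #include <vector>

    // The seam introduced so the processor can be tested in
    // isolation from the real persistence layer.
    struct OrderStore
    {
        virtual ~OrderStore() {}
        virtual void save( std::string const& order ) = 0;
    };

    class OrderProcessor
    {
    public:
        explicit OrderProcessor( OrderStore& store )
            : myStore( store ) {}
        void process( std::string const& order )
            { myStore.save( order ); }
    private:
        OrderStore& myStore;
    };

    // In the unit test, a trivial in-memory fake replaces the
    // real store.
    struct FakeStore : OrderStore
    {
        std::vector< std::string > saved;
        void save( std::string const& order )
            { saved.push_back( order ); }
    };

    int main()
    {
        FakeStore fake;
        OrderProcessor processor( fake );
        processor.process( "order-42" );
        return fake.saved.size() == 1 ? 0 : 1;
    }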
I don't know. Back in the days of punched cards and overnight
compiles, unit tests were universal; a lot of the designs I saw
then wouldn't qualify as "good" today.
Globally, although there is a correlation between the two, I
suspect you're confusing cause and effect. A desire for quality
will lead to both cleaner design and extensive unit tests. A
good process will insist on both.
It's one thing to scratch out a design on paper or UML, it's
quite another to make it actually function and TDD does a
great job of forcing a solid, *functional* design.
Maybe we mean something different by "design". Unit tests
certainly help enforce the design, and testability should
generally be considered when doing the design. But the tests
themselves aren't a silver bullet.
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34