Re: A simple unit test framework
On May 6, 10:01 pm, Ian Collins <ian-n...@hotmail.com> wrote:
James Kanze wrote:
On May 6, 3:12 am, Ian Collins <ian-n...@hotmail.com> wrote:
James Kanze wrote:
I've worked with the people in charge. We evaluated the
procedure, and found that it simply didn't work. Looking at
other companies as well, none practicing eXtreme Programming
seem to be shipping products of very high quality. In fact, the
companies I've seen using it generally don't have the mechanisms
in place to actually measure quality or productivity, so they
don't know what the impact was.
We certainly did - field defect reports and the internal cost of
fixing them.
So what were the before and after measures?
You should at least publish this, since to date, all published
hard figures (as opposed to anecdotal evidence) go in the
opposite sense. (For that matter, the descriptions of eXtreme
Programming that I've seen didn't provide a means of actually
measuring it.)
I don't have the exact before figures,
Which is typical for most process improvements:-). At least at
the beginning; one of the problems in the initial process (or
lack of it) is that it doesn't generate figures.
but there were dozens of bugs in
the system for the previous version of the product and they took a
significant amount of developer and test time. The lack of unit tests
made the code extremely hard to fix without introducing new bugs.
Comprehensive unit tests are the only way to break out of this cycle.
Comprehensive unit tests are important for many reasons. I'm
not arguing against them. I'm saying that they aren't
sufficient, if they are the only measure. For that matter, how
do you know the tests are comprehensive? The only means I know
is to review them at the same time you review the code.
Comprehensive unit tests are most important for maintenance.
New code should always be reviewed. But a review is not without
costs, and reviewing an entire module because someone has
changed just a couple of characters in just one line isn't
really cost effective. Comprehensive unit tests, on the other
hand, are very effective at catching slips of the finger in the
modified code.
Another point---one thing that eXtreme Programming does get
right (but we were doing it before eXtreme Programming came
along)---is that you never correct an error detected in
integration or in the field without first writing a test which
detects it. (If possible, of course. Some errors just aren't
testable.) During everyday maintenance, we do use something
along the lines of what you suggest: don't fix the error until
you have a (unit) test which fails. But if you've got a good
process, everyday maintenance is well under 10% of your total
activity; errors in integration or in the field are exceptional.
We didn't bother tracking bugs for the replacement product, there were
so few of them and due to their minor nature, they could be fixed within
a day of being reported. We had about 6 in the first year.
And why weren't they being tracked? 6 is, IMHO, a lot, and can
certainly be reduced. Find out why the bug crept in there to
begin with, and modify the process to eliminate it. (I've
released code where the only bug, for the lifetime of the
product, was a spelling error in a log message. That happened
to be the product where there weren't any unit tests, but
admittedly, it was a very small project---not even 100 KLoc in
size.)
When I actually talk to the engineers involved, it turns out
that e.g. they weren't using any accepted means of achieving
quality before. It's certain that adopting TDD will improve
things if there was no testing whatsoever previously.
Similarly, pair programming is more cost efficient than never
letting a second programmer look at, or at least understand,
another programmer's code, even if it is an order of magnitude
or more less efficient than a well run code review.
Have you tried it? Not having to hold code reviews was one of the
biggest real savings for us.
Yes, we tried it. It turns out that effective code (and design)
review is the single most important element of producing quality
software.
Sounds like you weren't pairing.
We tried that, but found that it's nowhere near as cost
effective as good code review. The code review brings in an
"outside" view---someone who's not implicated in the code. In
practice, it turns out that it's this external viewpoint which
really finds the most bugs. And of course, code review costs
less than pairing (although not by a large amount).
Ideally, you do both, but that can be overly expensive, and the
incremental benefits of pairing aren't worth the cost, once you
have established good code reviews.
If you don't have an effective review process, of course,
pairing has definite benefits.
Once you've established good practices, however, most of the
suggestions in eXtreme Programming represent a step backwards.
That's your opinion and you are entitled to it.
Actually, it's not an opinion. It's a result of concrete
measurements.
So you practiced full on XP for a few months and measured the results?
One team used full XP for about six months. We stopped the
effort when we saw their error rate shooting up significantly.
(We were at about 1 error per 100 KLoc, going into integration,
at that time. The team using full XP ended up at about ten
times that.)
James Kanze (GABI Software) email:email@example.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34