Re: C++ fluency
On May 7, 6:11 pm, Noah Roberts <n...@nowhere.com> wrote:
> James Kanze wrote:
> > On May 7, 6:00 am, Jerry Coffin <jcof...@taeus.com> wrote:
> > > In article <4a02059c$0$2721$cc2e3...@news.uslec.net>,
> > > n...@nowhere.com says...
> > > [ ... ]
> > I think a better way of characterizing the problem is that the
> > various latencies are part of the "input". The problem isn't
> > that the code behaves differently for the same input; the
> > problem is that the set of input is almost infinite, and that
> > you usually have no way of controlling it for test purposes.
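
To make what I meant concrete, here is a minimal sketch, invented
for this posting (written against C++0x's std::thread for brevity;
boost::thread would do as well). The visible input never changes,
but the result depends on how the scheduler happens to interleave
the two threads:

    #include <iostream>
    #include <thread>

    int counter = 0;        // deliberately unsynchronized

    void bump()
    {
        for ( int i = 0; i < 100000; ++ i ) {
            ++ counter;     // read-modify-write races with the
        }                   //  other thread
    }

    int main()
    {
        std::thread t1( bump );
        std::thread t2( bump );
        t1.join();
        t2.join();
        // The "input" is identical on every run, yet anything from
        // 100000 to 200000 can come out; the interleaving chosen
        // by the scheduler is an input you don't control.
        std::cout << counter << '\n';
        return 0;
    }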
> That's why you apply Karl Popper's principle of "Science as
> Falsification" (search that and the article will be the top
> link). Your tests then become attempts to prove your code is
> broken. You think of as many ways as you can to cause a
> failure in your code. There's little reason to try them ALL,
> just come up with a scenario that could break your code in the
> various general ways that are possible. For instance, if
> you're working with threads you can attempt to cause a
> deadlock or race condition. There's surely a few different
> paths you can check that would cover most of the failures that
> you can generate.
Generally not, when threading is involved.
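
The interleavings aren't enumerable the way explicit inputs are.
Consider a classical check-then-act error (the class and names here
are invented for illustration):

    #include <vector>

    class IdRegistry
    {
    public:
        // Broken: another thread can insert the same id between
        // the contains() test and the push_back(). The window is
        // only a few instructions wide, so a test which runs two
        // threads through here thousands of times will almost
        // always pass.
        void add( int id )
        {
            if ( ! contains( id ) ) {
                myIds.push_back( id );
            }
        }

        bool contains( int id ) const
        {
            for ( std::vector<int>::const_iterator it = myIds.begin();
                    it != myIds.end(); ++ it ) {
                if ( *it == id ) {
                    return true;
                }
            }
            return false;
        }

    private:
        std::vector<int> myIds;     // unprotected shared state
    };

Whether the bad interleaving ever occurs depends on scheduling, on
load, even on the hardware; the test controls none of them.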
> Even this won't turn up everything though. It simply is a
> fact that we can never test our code to 100% certainty. To
> argue that we shouldn't do TDD because it's impossible to test
> EVERYTHING is really a red herring. If we were to buy it as a
> valid argument we'd apply it to testing in general and say
> THAT'S a waste of time. Instead, what we do is come up with
> methods to intelligently test so that we're wrong as little as
> possible. Yes, much *THINKING* has to go into what we want to
> test.
Certainly. Nothing is 100% certain, and I'm certainly not
saying that you should forgo testing of threaded code, just
because there are certain things the tests can't be guaranteed
to pick up. I was just responding to the statement that you
never write a line of code except as a result of a test that
failed. There are requirements that can't be tested, and you
write code to handle them, even if you can't create a test which
is guaranteed to fail. (More precisely, you write code which
you hope handles them, and you use other methods---which aren't
100% either---to verify it.)
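
The strong exception guarantee is a typical example: no test can
force every possible allocation to throw at every possible point,
so you write the usual copy-and-swap and verify it by review. A
minimal sketch (the class is invented for illustration):

    #include <string>

    class Widget
    {
    public:
        Widget& operator=( Widget const& other )
        {
            Widget tmp( other );    // everything which can throw
                                    //  happens here
            swap( tmp );            // commit, using operations
                                    //  which cannot throw
            return *this;
        }                           // tmp's destructor cleans up
                                    //  the old state

        void swap( Widget& other )  // no-throw
        {
            myName.swap( other.myName );
        }

    private:
        std::string myName;
    };

If someone later "optimizes" away the temporary, no unit test is
guaranteed to fail; it takes a reviewer to notice that the
guarantee has been lost.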
And of course, there are major issues which testing doesn't
address at all: readability or maintainability. I presume that
you use other techniques (code review, pair programming, etc.)
for these.
> Of course, what you're doing is taking my statement that I
> "know" my code is good from that point and considering it too
> literally.
What I'm trying to do is simply make you realize that TDD isn't
a miracle solution. Testing is certainly necessary---we both
agree with that. And at many levels. There are different
solutions to organizing it, and in the end, the important thing
is that it is done, not the details of the solution which
ensures that it is done. If TDD (as you have explained it, not
as some of its other
proponents seem to be explaining it) works for you, to ensure
that adequate testing takes place, fine. I don't say you
shouldn't use it. Just that it is not the only possible
solution, and that it (like all of the other solutions) requires
additional measures, e.g. to ensure that it is actually used,
and used correctly (tests sufficiently complete, etc.), and that
other non-testable requirements (readability, etc.) are met.
--
James Kanze (GABI Software)    email: james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34