Re: Coding Standards
On Sep 6, 9:46 pm, Ian Collins <ian-n...@hotmail.com> wrote:
> James Kanze wrote:
>> On Sep 6, 4:38 am, Ian Collins <ian-n...@hotmail.com> wrote:
>>> James Kanze wrote:
>>>> On Sep 5, 6:31 pm, Phlip <phlip2...@gmail.com> wrote:
>>>>> James Kanze wrote:
>>>>>> In particular, you can't write a single line of code (unit test
>>>>>> or implementation) before you know what the function should do,
>>>>> I didn't bring up TDD, but if you are curious enough about it
>>>>> to keep asking these entry-level FAQs,
>>>> I'm not asking anything. I'm simply stating an easily
>>>> established fact, which wishful thinking seems to cause some
>>>> people to ignore. Tests definitely have their role, but until
>>>> you know what the code should do, you can't begin to write them.
>>>> And until you've written something down in black and white, you
>>>> don't know it.
>>> I think we are starting from a different definition of "know what
>>> the code should do".
>>> Those of us who use TDD tend to use plain customer language
>>> stories as our requirements. A recent example I had was "add a new
>>> type of alarm that combines up to 10 existing alarms". From
>>> another, completely non-technical customer: "Add a notes box if
>>> the user selects red".
>> Writing the requirements specifications is an iterative process.
>> Obviously, given requests like the above, you start by getting
>> more information, and writing up some sort of more detailed
>> requirements for the customer to approve.
> I think that part is where we diverge. Building the system is an
> iterative process.
And each iteration has its requirements, which must be
documented. Nothing new here.
> We do not write up more detailed requirements; we implement the
> story and give the customer working code, the easiest thing for
> them to understand.
And what's the "story", if it isn't some sort of requirement?
Probably incomplete, since there are aspects that the client may
not even understand, but that you'll want to document---see my
posting about things like a dropped connection. But with the
use of the word "story", you've sort of half-way agreed with me
anyway: you don't just randomly code something; you specify what
you are going to code first. My main argument is that this
specification 1) must be written (not just oral), and 2) is
documentation. (Afterwards, we can agree to disagree with
regard to how detailed this specification must be. My
experience is the more detailed the better, as long as it
doesn't introduce any implementation details.)
> Depending on the nature of the project, I'd use one or two week
> iterations, shorter if the customer is unsure what they really want.
And longer, perhaps, if there are rigorous technical documents
detailing much of the interface (e.g. an LDAP server, or a C++
compiler---if the customer's story is he wants to compile C++, I
doubt you can come up with anything close in a week). If the
final customer requests an LDAP server front end for his data
base, I certainly won't require him to review partial code every
week or so, regardless of what we mean by review. His data base
schemas and the LDAP protocol are a very precise specification
of what is required.
Of course, internally, once the design is done, the people
working on one subsystem may be considered clients (customers)
of another subsystem, and more frequent deliveries (to detect
design flaws) would definitely be desirable there.
>> I wonder if part of the problem isn't that you are confusing
>> prototyping with testing.
> Most definitely not. I would tinker with a prototype to investigate
> new technologies or ideas, but the customer doesn't get to see the
> prototype, only the tested implementations.
That's a different type of prototype. For interactive programs,
it's often useful to hack up a prototype to verify ergonomic
features before you've really begun to define what will take
place behind the scenes. In several cases, I've even seen
Tcl/tk used for this, although the final application would be in
C++. If you can do this in a way that the prototype code can be
used, at least partially, in the final application, I'm all for it.
>> For interactive programs, some degree of prototyping is often
>> useful, or even essential, in order for the user to really see the
>> impact of what he is approving.
> So does working with short iterations. With a couple of customers, I
> kept a live system (updated on each commit) running so they could
> play with it as it grew and feed back changes.
I'm not too sure what kind of systems you work on, but my
customers don't "play" with the system. There are some aspects
where this sort of play is useful (e.g. the ergonomics of a GUI), but
for a lot of things, it simply isn't relevant. Many of the
details in my current application (a server), for example, are
conditioned by various stock market and bookkeeping laws, and
even for the others, many types of changes would require
modifications to the various client software before they could be
exercised. And what about my previous contract, where the
requirements were simply to take the existing Radius server
(used for dynamic allocation of IP addresses in a wireless
network) and upgrade it so that it could handle 170 transactions
per second (given that each data base transaction required a
non-compressible 10 millisecond write to disk). How does such
interaction play a role when issues like transactional integrity
or thread safety are the issues? (Maybe I'm oversensitive about
this, but I've just seen so much code which passed all tests,
but which would fail once or twice a year in actual use.)
James Kanze (GABI Software) email:firstname.lastname@example.org
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34