Re: Testing Program Question
On Feb 14, 2:08 pm, "Leigh Johnston" <le...@i42.co.uk> wrote:
"James Kanze" <james.ka...@gmail.com> wrote in message
news:27bc9db5-36f7-44c0-a14a-c9da8a535bfa@v25g2000yqk.googlegroups.com...
On Feb 13, 12:41 pm, "Leigh Johnston" <le...@i42.co.uk> wrote:
Asserts are a sort of life jacket, to prevent things from
fucking up too much when you screwed up. Removing them
from production code is akin to wearing a life jacket
when practicing in the harbor, but taking it off when you
go to sea. Sometimes, it's necessary to remove asserts
for performance reasons, and for some particular
applications (games?), it's probably preferable to
remove them completely (in which case, you do have to
study what happens when they are removed---there's no
point in removing an assert if you're just going to get a
crash three statements later).
I disagree; why do you think assert was designed to do
nothing when NDEBUG is defined?
So that you can turn it off when you have to. The designed use
would be to use some application-specific macro, defined (or
not) on the command line, and then to wrap the (few) critical
functions in something like:
#ifdef PRODUCTION
#undef NDEBUG               // Just in case.
#define NDEBUG              // Disable assert in the critical code.
#include <assert.h>         // Redefines assert according to NDEBUG.
#endif

void critical_function()
{
    // ...
}

#undef NDEBUG
#include <assert.h>         // Reactivate assert for the rest of the file.
Why do you think you're allowed to include <assert.h> multiple
times, with its meaning depending each time on the current
definition of NDEBUG?
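It's guaranteed to work: each inclusion of <assert.h> redefines
the assert macro according to whether NDEBUG is defined at that
point. A minimal illustration (the function names are invented,
and it assumes NDEBUG isn't defined on the command line):

    #include <assert.h>     // NDEBUG not defined here: assert is active.

    void checked(int* p)
    {
        assert(p != 0);     // Verified at run time.
    }

    #define NDEBUG
    #include <assert.h>     // assert is now redefined to do nothing.

    void unchecked(int* p)
    {
        assert(p != 0);     // Compiled out: not even evaluated.
    }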
Asserts were designed to be used as a debugging tool, so they
should do nothing in released production code *unless* a higher
degree of defensiveness is required.
That's your opinion. It doesn't correspond to the design of the
feature, nor good programming practices.
Sorry but I disagree with your opinion.
For the most part, I'm not stating opinion. Look closely at the
design of assert, and the guarantees it gives you.
Different software has different requirements regarding how
defensive you should be. A typical application should not be
using assert to terminate in a released product.
A typical application in what domain? It's clear that any
critical software must terminate as soon as it is in doubt. And
I've pointed out why this is true for an editor (and the same
logic also holds for things like spreadsheets). I also
recognize that there are domains where it isn't true. I'm not
sure, however, what you consider "typical".
A released product should be using exceptions for errors which
are valid during runtime. Assert is used for catching
programming errors, not valid runtime errors or bad user
input.
There's no disagreement on that.
I disagree that only games are an exception,
They're certainly not the only exception. But such exceptions
are just that, exceptions. Most software should ship with
asserts turned on, *if* it can afford the performance impact.
(An awful lot of software is I/O bound, so the performance impact
can be ignored.)
I would also argue that such defensiveness is not required for
a typical office application for example (e.g. a word
processor).
It depends on whether you consider trashing the user's work
acceptable or not. And what the other costs are---since it
costs nothing to leave it, assuming no performance problems, and
requires extra work to remove it, it's more or less stupid to
choose the less robust solution.
Programmers should get into the habit of adequately testing
their software prior to release (assert helps with this) and
users should get into the habit of regularly backing up their
important data.
It would help if you'd read what was written, before disagreeing
with it. No one is arguing against testing. And it's not the
user who's backing up his data, it's the editor---all of the
editors I know today regularly checkpoint their data in case
they crash. The whole point is that if there is a programming
error, and the editor continues, it's liable to overwrite the
checkpoint with corrupt data (or the user, not realizing that
the data is corrupt, is liable to overwrite his own data).
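To make the point concrete, a sketch (the gap-buffer class is
invented for the example): checking the invariant before each
checkpoint means a corrupted buffer crashes immediately, leaving
the last good checkpoint intact on disk.

    #include <assert.h>
    #include <fstream>
    #include <string>

    class EditBuffer
    {
    public:
        bool invariantHolds() const
        {
            return gapBegin <= gapEnd && gapEnd <= text.size();
        }

        void checkpoint(std::string const& filename) const
        {
            // If the buffer is corrupt, die here, before
            // overwriting the last good checkpoint.
            assert(invariantHolds());
            std::ofstream out(filename.c_str());
            out << text.substr(0, gapBegin) << text.substr(gapEnd);
        }

    private:
        std::string            text;       // Contents plus gap.
        std::string::size_type gapBegin;   // First byte of the gap.
        std::string::size_type gapEnd;     // One past the last byte.
    };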
The amount of software which requires less defensiveness
probably exceeds the amount which requires increased
defensiveness. If you are worried about cosmic
rays hitting your ram chips then perhaps you should use
assert more! :)
You should always use assert liberally. We're talking about
whether you should turn it off in production code. In other
words, whether you should take an additional, explicit action
(defining NDEBUG) to render the software less robust.
Using assert liberally is fine (I have no problem with this),
but in most cases it is an aid during development only, rather
than a means of creating hundreds of crash points in a released
product.
If you've tested correctly, leaving the asserts active creates
zero crash points in the released product. And if you've missed
a case, crashing is the best thing you can do, rather than
continuing, and possibly destroying more data.
[...]
This is the usual argument put forward in favour of more
defensive programming, but in my opinion having an assert after
every other line of code is overkill for a typical office
application, as I have already said.
I've never seen asserts used that heavily. And how often or
where you write an assert is a different question---in general,
a minimum would be to assert preconditions for a function, at
least when that function is called from other modules, and
post-conditions for a virtual function, when the derived classes
may be called from other modules. (This means, of course, that
your virtual functions shouldn't be public. But that's a pretty
well established rule anyway.)
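A sketch of that convention (the class and its members are
invented for the example):

    #include <assert.h>

    class Sink
    {
    public:
        // Public, non-virtual interface: the contract is checked
        // here, once, for every derived class.
        int write(char const* data, int length)
        {
            assert(data != 0);      // Precondition.
            assert(length >= 0);    // Precondition.
            int written = doWrite(data, length);
            assert(written >= 0 && written <= length);  // Postcondition.
            return written;
        }
        virtual ~Sink() {}          // The classic exception: public.

    private:
        // Derived classes override this, and inherit the checks.
        virtual int doWrite(char const* data, int length) = 0;
    };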
I am sorry but you are wrong: you should either be extremely
defensive or not defensive at all; somewhere in between is
pointless.
When someone starts issuing statements as ridiculous as that, I
give up. It's not humanly possible to be 100% defensive.
Extremely defensive means at least one assert at some point
after calling a function which has side effects (which could be
a precondition check before a subsequent function call). This
is overkill for typical desktop applications, for example.
It is a nonsense to say that virtual functions shouldn't be
public: a public virtual destructor is fine if you want to
delete through a base class pointer.
The destructor is an obvious exception. But most experts today
generally agree that virtual functions should usually be either
protected or private.
Again, there are exceptions, and I have classes with only public
virtual functions. (Callbacks are a frequent example.) But
they're just that: exceptions.
Bjarne Stroustrup's first virtual function example in TC++PL
is a public Employee::print() method; I see no problem with
this.
For teaching, neither do I. (For that matter, a print function
might be an exception. It's hard to imagine any reasonable pre-
or post-conditions.)
You are probably thinking of virtual functions which are
called as part of some algorithm implemented in a base class;
such virtual functions need not be public, as it makes no sense
for them to be, but it does not follow that this is the case
for all virtual functions.
No, I'm not thinking of the template method pattern. I'm
thinking of programming by contract.
Genuine runtime errors (e.g. out of memory, disk space
or user input error) are best handled by exceptions or
return codes.
Agreed. Any program which has an assertion failure on
user input error is broken, and the same probably holds
for out of memory, although it's debatable for some
applications. (What happens if the "out of memory"
occurs because the runtime is trying to grow the stack?
Or if the only possible cause of "out of memory" is a
memory leak? Or simply that your OS doesn't handle "out
of memory" safely?)
Running out of stack space is more likely to be due to a
programming error rather than a valid runtime error
condition.
Probably in most applications, although it also depends
somewhat on the OS. (Linux and Solaris don't have
"pre-allocated" stacks of a specific size, so using too much
heap can cause a stack overflow in some specific cases.)
I've worked on applications with embedded recursive descent
parsers, parsing various forms of user input. How deep the
stack needs to be depends on the complexity of the
expression given to it by the user. But in most cases, even
then, you can set some arbitrary complexity limit, test it,
and ensure that the stack is large enough to handle it.
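Schematically (the grammar and the limit are invented for the
example):

    #include <stdexcept>

    class Parser
    {
    public:
        explicit Parser(char const* source)
            : cur(source), depth(0) {}

        void expression()
        {
            if (++depth > maxDepth) {
                // Refuse over-complex input rather than
                // overflow the stack.
                throw std::runtime_error("expression too deeply nested");
            }
            if (*cur == '(') {
                ++cur;
                expression();       // Recursion bounded by maxDepth.
                if (*cur != ')') {
                    throw std::runtime_error("missing ')'");
                }
                ++cur;
            } else if (*cur != '\0') {
                ++cur;              // A single operand.
            }
            --depth;
        }

    private:
        static int const maxDepth = 200;  // Arbitrary, but tested.
        char const* cur;
        int         depth;
    };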
Writing code without some bound on stack growth is incorrect
in my opinion.
Yes, but only because we can't catch the overflow in the same
way we can catch bad_alloc. Otherwise, the principle is the
same.
A compiler should not stack fault when parsing source code of
any complexity, for example; it should either be non-recursive
(heap bound) or have some hard limit. A stack fault is not
acceptable, running out of heap is acceptable and can be
signalled via an exception.
Most people would disagree with you in general, concerning a
compiler. Why should it have an artificial hard limit?
In fact, since running out of stack cannot be gracefully
caught, we do have to do something. But don't confuse
the cause and the effect.
--
James Kanze