Re: Testing Program Question
On Feb 13, 12:41 pm, "Leigh Johnston" <le...@i42.co.uk> wrote:
Asserts are a sort of life jacket, to prevent things from
fucking up too much when you screwed up. Removing them from
production code is akin to wearing a life jacket when
practicing in the harbor, but taking it off when you go to
sea. Sometimes, it's necessary to remove asserts for
performance reasons, and for some particular applications
(games?), it's probably preferable to remove them
completely (in which case, you do have to study what happens
when they are removed---there's no point in removing an
assert if you're just going to get a crash three statements
later).
I disagree; why do you think assert was designed to do nothing
when NDEBUG is defined?
So that you can turn it off when you have to. The designed use
would be to use some application-specific macro, defined (or
not) on the command line, and then to wrap the (few) critical
functions in something like:
#ifdef PRODUCTION
#undef NDEBUG // Just in case.
#define NDEBUG
#include <assert.h>
#endif
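// If PRODUCTION is defined, the asserts in the following
// function expand to nothing.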
void critical_function()
{
// ...
}
#undef NDEBUG
#include <assert.h>
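// assert is active again for the rest of the translation unit.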
Why do you think you're allowed to include <assert.h> multiple
times, with its meaning depending each time on the current
definition of NDEBUG?
Asserts were designed to be used as a debugging tool so should
do nothing in released production code *unless* a higher
degree of defensiveness is required.
That's your opinion. It doesn't correspond to the design of the
feature, nor good programming practices.
I disagree that only games are an exception,
They're certainly not the only exception. But such exceptions
are just that, exceptions. Most software should ship with
asserts turned on, *if* they can afford the performance impact.
(An awful lot of software is IO bound, so the performance impact
can be ignored.)
I would also argue that such defensiveness is not required for
a typical office application for example (e.g. a word
processor).
It depends on whether you consider trashing the user's work
acceptable or not. And what the other costs are---since it
costs nothing to leave it, assuming no performance problems, and
requires extra work to remove it, it's more or less stupid to
choose the less robust solution.
The amount of software which requires less defensiveness
probably outnumbers the amount of software that requires
increased defensiveness. If you are worried about cosmic rays
hitting your ram chips then perhaps you should use assert
more! :)
You should always use assert liberally. We're talking about
whether you should turn it off in production code. In other
words, whether you should take an additional, explicit action
(defining NDEBUG) to render the software less robust.
With regards to "typical desktop applications", I'm not too sure
what you mean by that. As I said, games are a possible
exception, where removing asserts makes sense. But which is
preferable for an editor: that it crash, leaving you to recover
using its last check-point save (5 seconds or 5 keystrokes
before the crash), or that it stumble on to overwrite all of
your data, and save that at a check-point before finally
crashing?
This is the usual argument put forward in favour of more
defensive programming but in my opinion having an assert after
every other line of code is overkill for a typical office
application as I have already said.
I've never seen asserts used that heavily. And how often or
where you write an assert is a different question---in general,
a minimum would be to assert preconditions for a function, at
least when that function is called from other modules, and
post-conditions for a virtual function, when the derived classes
may be called from other modules. (This means, of course, that
your virtual functions shouldn't be public. But that's a pretty
well established rule anyway.)
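Roughly the sort of thing I have in mind (the class and the
names are invented for the example, not taken from any real
code):

#include <assert.h>
#include <stddef.h>

class Connection
{
public:
    //      Public, non-virtual interface: the contract is
    //      asserted here, once, for every derived class.
    int send( char const* buffer, int length )
    {
        assert( buffer != NULL );                   // precondition
        assert( length > 0 );                       // precondition
        int result = doSend( buffer, length );
        assert( result >= 0 && result <= length );  // post-condition
        return result;
    }
    virtual ~Connection() {}

private:
    //      The virtual function is private; derived classes
    //      override it, and the asserts above apply to them as
    //      well.
    virtual int doSend( char const* buffer, int length ) = 0;
};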
Genuine runtime errors (e.g. out of memory, disk space or user
input error) are best handled by exceptions or return codes.
Agreed. Any program which has an assertion failure on user
input error is broken, and the same probably holds for out of
memory, although it's debatable for some applications. (What
happens if the "out of memory" occurs because the runtime is
trying to grow the stack? Or if the only possible cause of "out
of memory" is a memory leak? Or simply that your OS doesn't
handle "out of memory" safely?)
Running out of stack space is more likely to be due to a
programming error rather than a valid runtime error condition.
Probably in most applications, although it also depends somewhat
on the OS. (Linux and Solaris don't have "pre-allocated" stacks
of a specific size, so using too much heap can cause a stack
overflow in some specific cases.) I've worked on applications
with embedded recursive descent parsers, parsing various forms
of user input. How deep the stack needs to be depends on the
complexity of the expression given to it by the user. But in
most cases, even then, you can set some arbitrary complexity
limit, test it, and ensure that the stack is large enough to
handle it.
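A rough sketch of the sort of limit I mean (the grammar and the
constant are invented for the example; the actual value has to
be validated against the stack size you've ensured):

#include <cctype>
#include <stdexcept>
#include <string>

class ExprParser
{
public:
    explicit ExprParser( std::string const& input )
        : myInput( input )
        , myPos( 0 )
        , myDepth( 0 )
    {
    }

    //      expr ::= term | '(' expr ')'
    void parseExpr()
    {
        //      Arbitrary complexity limit: overly nested input
        //      is rejected cleanly, rather than risking a stack
        //      overflow.
        if ( ++myDepth > maxDepth ) {
            throw std::runtime_error( "expression too complex" );
        }
        if ( myPos < myInput.size() && myInput[myPos] == '(' ) {
            ++myPos;
            parseExpr();
            if ( myPos >= myInput.size() || myInput[myPos] != ')' ) {
                throw std::runtime_error( "missing ')'" );
            }
            ++myPos;
        } else {
            parseTerm();
        }
        --myDepth;
    }

private:
    static int const maxDepth = 256;

    void parseTerm()
    {
        while ( myPos < myInput.size()
                && std::isalnum(
                       static_cast<unsigned char>( myInput[myPos] ) ) ) {
            ++myPos;
        }
    }

    std::string myInput;
    std::string::size_type myPos;
    int myDepth;
};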
--
James Kanze