Re: Invalid parameter handling
On Mar 27, 4:07 pm, Kaba <REkalleMOunderscoreVErutane...@hotmail.com>
wrote:
> This is a combined reply to James and Gerhard to avoid branching.
> Yes, autosave in itself is an important feature, as James pointed
> out, in that the "crash" can come, for example, from the power
> cord being pulled out.
> I use assertions very frequently myself. They also create a kind
> of secure checking web. But I differentiate between
> inconsistencies "coming from the inside" and those "coming from
> the outside". For example, if I were to develop an implementation
> of std::map using a red-black tree, an inner inconsistency would
> be that some of the red-black invariants were detected broken, or
> that a null pointer was passed by a private function, which should
> not have happened. By using assertions, I become more confident
> that the class really does what I mean it to: they check for
> self-consistency inside a small closed system (the class).
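> To make that concrete, an internal check of this kind might look
> roughly like the following; the node layout is made up for the
> sketch:
>
>     #include <cassert>
>
>     enum Color { Red, Black };
>
>     struct Node
>     {
>         Color color;
>         Node* left;
>         Node* right;
>     };
>
>     // Internal invariant of the tree: a red node never has a
>     // red child. This is called from private member functions
>     // after rebalancing; a failure here means my own code is
>     // broken, so an assertion is the right response.
>     void checkRedInvariant(Node const* node)
>     {
>         if (node == 0) return;
>         if (node->color == Red)
>         {
>             assert(node->left == 0 || node->left->color == Black);
>             assert(node->right == 0 || node->right->color == Black);
>         }
>         checkRedInvariant(node->left);
>         checkRedInvariant(node->right);
>     }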
> Inconsistencies coming from the outside are just those that break
> the preconditions of the function parameters: the caller gives
> parameters that are invalid for the called function. My point is
> that these should not be treated in a harsh way, like shutting the
> program down. Note that these inconsistencies come from higher
> abstraction levels: the callers use the classes.
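> As a sketch of what I mean by "not harsh" (the class is
> hypothetical):
>
>     #include <string>
>     #include <vector>
>
>     class Table
>     {
>     public:
>         Table() : selectedRow_(0) {}
>
>         int rowCount() const { return int(rows_.size()); }
>
>         // Precondition: 0 <= row < rowCount(). If the caller
>         // violates it, the effect is still well defined:
>         // nothing happens, the selection stays as it was.
>         void selectRow(int row)
>         {
>             if (row < 0 || row >= rowCount())
>                 return;
>             selectedRow_ = row;
>         }
>
>     private:
>         std::vector<std::string> rows_;
>         int selectedRow_;
>     };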
I agree up to a point. I would never crash because of something
funny on the line, for example. The question is: what is really
"outside"? In my mind, the only place you can really draw the
line (or at least, the lowest place) is at the process boundary.
Anything within the process is sharing memory; if the error comes
from memory corruption (very often the case), then you don't know
the state of any of the rest of the program.
In every case, in the end, it's a cost-if-you-do versus
cost-if-you-don't problem. In practice, however, I find it
preferable to design systems so that a process can crash. It's
going to happen anyway, so why not take it into account? And
once the system is robust against the process crashing, it's far
better to crash than to continue with possibly bad data.
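As a sketch of what "robust against the process crashing" can mean
in practice (the function and file layout are invented for the
example): write each snapshot to a temporary file, then rename it
into place. On POSIX systems rename() is atomic, so a crash at any
moment leaves either the old snapshot or the new one on disk, never
a half-written file.

    #include <cstdio>
    #include <string>

    // Writes data to path atomically: first to path + ".tmp",
    // then renamed over the previous snapshot.
    bool checkpoint(std::string const& data, std::string const& path)
    {
        std::string const tmp(path + ".tmp");
        std::FILE* f = std::fopen(tmp.c_str(), "wb");
        if (f == 0) return false;
        bool ok = std::fwrite(data.data(), 1, data.size(), f)
                      == data.size();
        ok = (std::fclose(f) == 0) && ok;
        return ok && std::rename(tmp.c_str(), path.c_str()) == 0;
    }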
> As for the unknown state that comes from continuing to run the
> program after detecting an outer inconsistency: to this day, I
> haven't seen a situation in my code that would cause such
> behaviour.
You've never had a bad pointer? I'd have said that that is one
of the most frequent errors in C/C++.
> The behaviour that is executed instead (for example, doing
> nothing) is chosen so that it won't cause trouble further on.
Really. You don't save the file to the disk, you just continue.
Later on, when the problem causes the program to crash, the user
has lost a couple of hours' work, and not just a couple of
seconds.
As I said, there are exceptions. In any system, you have to
analyse the costs and the benefits of all of the alternatives.
But in my experience, when you get to the point of having
detected an error in the code, a crash is very often the most
useful thing you can do. (Note that crashing also ensures that
the error won't slip through testing.)
> This is clearly because the abstraction level is already so high.
> If there is a danger of this happening, one should use exceptions
> instead.
> In a nutshell: "whatever parameters the functions are called with,
> always keep the program in a well-defined state".
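> For example (the function is made up):
>
>     #include <cmath>
>     #include <stdexcept>
>
>     // A violated precondition becomes an exception: the program
>     // stays in a well-defined state, and the caller decides how
>     // (or whether) to recover.
>     double checkedSqrt(double x)
>     {
>         if (x < 0.0)
>             throw std::invalid_argument("checkedSqrt: x < 0");
>         return std::sqrt(x);
>     }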
Except when the only conclusion you can draw from the values of
the parameters is that the program isn't in a well-defined
state.
--
James Kanze (GABI Software) mailto:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34