Re: Exception handling
On Sep 23, 8:24 pm, Bart van Ingen Schenau <b...@ingen.ddns.info> wrote:
> Alan McKenney wrote:
>> On Sep 21, 7:07 pm, "Bo Persson" <b...@gmb.dk> wrote:
>>> Alan McKenney wrote:
>>>> 1. Expense. You can't use exceptions unless you can be
>>>> quite sure that they will occur rarely. I work with
>>>> real-time data, so if too many exceptions are thrown,
>>>> we lose data.
>>> Exceptions are supposed to be used for exceptional conditions. :-)
>> Which means what, exactly?
> It means that the condition should not happen, but the developer is not
> able/willing to bet his life that it won't happen, even after all bugs
> have provably been removed from the software.
You're just pushing words around. In practical
terms, this comment is useless. It does not help
_anyone_ decide whether to use an exception or
some other way of handling a condition.
It also ignores the issue -- expense.
This is not theoretical. I speak from
experience, having had to remove exceptions
from some code because, under certain circumstances,
what "should not" happen was happening frequently
enough to hurt performance and make us drop
gobs and gobs of data. (Reminder: I work with
real-time data, so slow code = dropped data.)
As far as I'm concerned, any strategy which uses
exceptions without knowing that they won't occur
frequently enough to cause performance issues is
a non-starter.
>>> And what language support do you have for making sure that all return
>>> codes are properly checked and handled?
>> Well, if someone adds an error parameter to a function
>> signature, the compiler will tell you pretty quick if you
>> don't add it to every call.
> And after all call-sites have been updated, what language support do you
> have to ensure the error is either handled or propagated onward?
Using throw and catch doesn't guarantee that errors are handled
appropriately, either.
I'm not saying that error codes are always better than
exceptions, but they are at least visible, whereas
an exception that travels over a dozen function calls
is more likely to be hard to track down simply by looking
at the source code.
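To make the visibility point concrete, here's a minimal
sketch (the names are invented for illustration, not taken
from anyone's actual code):

    // A minimal sketch of the visibility point; names are
    // invented, not taken from any code discussed here.
    #include <cstdio>

    enum Status { OK, BAD_INPUT };

    Status parse_record(const char* buf, int& value_out)
    {
        if (buf == 0)        // the failure is part of the signature
            return BAD_INPUT;
        value_out = 42;      // stand-in for real parsing
        return OK;
    }

    int main()
    {
        int value = 0;
        // The caller has to mention the error path explicitly;
        // nothing propagates silently across a dozen stack
        // frames the way a throw would.
        if (parse_record(0, value) != OK) {
            std::fprintf(stderr, "parse failed\n");
            return 1;
        }
        std::printf("value = %d\n", value);
        return 0;
    }

The error path is spelled out at the call site; you can see it
by reading the caller, which is exactly what you can't do with
an exception thrown a dozen frames down.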
> A better view on exceptions is, IMHO, that every statement can
> potentially throw an exception, unless it is explicitly documented not
> to do so.
If that were true, it would make C++ unusable.
If our shop had to deal with such a situation,
we would not -- could not -- use C++.
How can you "correctly" handle exceptions you don't
even know might be thrown, and of types you've
never heard of?
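About the best a caller can do defensively is a blanket guard
at some top level -- and that is logging, not handling. A
minimal sketch (process() here is hypothetical):

    #include <exception>
    #include <cstdio>

    void process()
    {
        // Hypothetical: deep inside, some library throws a
        // type this layer has never heard of.
        throw 17;
    }

    int main()
    {
        try {
            process();
        } catch (const std::exception& e) {
            std::fprintf(stderr, "failure: %s\n", e.what());
        } catch (...) {
            // We can't "handle" an unknown type meaningfully;
            // all we can do is log it and bail out.
            std::fprintf(stderr, "unknown exception type\n");
            return 1;
        }
        return 0;
    }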
>> If it first happens when you're in Production, you will
>> suddenly find yourself having to explain to top execs
>> why a crash was better in this case than simply ignoring the
>> error.
> As if you don't have anything to explain if your program happily
> corrupts/destroys important data from your customers.
That may be the case in your application area.
In mine, crashing is _far_ worse than a few wrong
numbers, which we have procedures to correct anyway.
The "data corruption" (not to mention the penalties
we have to pay our clients) from a 5-10 minute outage
(or an all-day one, if Ops isn't on its toes) is many
orders of magnitude worse.
In the few situations where it is better to crash
(i.e., a condition which guarantees that _no_ data
will be processed correctly), it makes more sense for
us to log an error and call abort() (which is what we do).
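For what it's worth, that pattern is about as simple as it
sounds; a rough sketch (the function name and message are
mine, not our actual code):

    #include <cstdio>
    #include <cstdlib>

    // Called only for conditions that guarantee no data can
    // be processed correctly: log, then crash deliberately.
    void fatal(const char* msg)
    {
        std::fprintf(stderr, "FATAL: %s\n", msg);
        std::abort();
    }

    // usage:
    //     if (config_table == 0)
    //         fatal("config table missing -- nothing can run");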
My point is not that exceptions are bad, or that we
shouldn't use them.
My point is that, based on my experience, figuring
out when and how to use exceptions is not
trivial. Either exceptions are not all that
useful, or I don't have the kind of experience
that would allow me to see when they _would_ be
useful. Based on the comments in this thread,
I'd say I'm not alone.
The sort of glib answers and tired dogmas that
keep getting repeated here are not adding
to anyone's understanding, either.
It would be far more useful for those who believe
they know when exceptions are useful to present
examples from their own work (not academic, toy
problems, either!) of where exceptions proved
useful and what it is about what they were used
for and how they were done that made them better
than the alternatives. And, of course, to talk
about examples where they proved to be a
bad idea.
To bring up my own experience: there's only one
situation I have run into where I thought using an
exception was a good idea. It's in code that frames messages out
of a TCP stream, and if you become aware that you're
misframed (which can happen in a number of places),
the only reliable way to recover is to disconnect
and reconnect. In that case, I figured that (1)
the time spent disconnecting and reconnecting was
large compared with the cost of the exception, so
the expense was not an issue, (2) there was only
one logical place to call the code that processed
the data, and that was also the place where you'd
want to do the disconnect, and (3) as it turns out,
the exception has never been thrown in about 5 years
of daily operation.
It's worth noting that the exception is both thrown
and caught in code I wrote (the same file, in fact);
AFAIK, I'm the only one who has ever touched or looked
at the code, and (as mentioned) the exception has
never been thrown anyway.
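In outline, the structure looks something like the sketch
below; the class and names are invented to show the shape of
it, not lifted from the real code:

    #include <stdexcept>
    #include <string>

    // Invented names, showing the shape, not the code itself.
    struct MisframedError : public std::runtime_error {
        MisframedError(const std::string& what)
            : std::runtime_error(what) {}
    };

    class Connection {
    public:
        void connect()    { /* open the TCP socket */ }
        void disconnect() { /* close the socket */ }

        std::string read_frame()
        {
            // Several checks, buried at different depths in
            // the framing logic, can discover we're misframed.
            bool header_ok = true;  // stand-in for a real check
            if (!header_ok)
                throw MisframedError("bad frame header");
            return "frame";
        }
    };

    void run(Connection& conn)
    {
        conn.connect();
        for (;;) {
            try {
                std::string frame = conn.read_frame();
                // ... process the frame ...
            } catch (const MisframedError&) {
                // The one logical recovery point: drop the
                // connection and resync from scratch.
                conn.disconnect();
                conn.connect();
            }
        }
    }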
I tried them in another code package, but every
time they got thrown, it indicated a problem that
had to get fixed before the code could be used.
They're still there (I think), but they don't get
thrown any more.
In my work, using exceptions to handle data problems
is out of the question -- our data rates are much
too high and the frequency of such problems is too
unpredictable to be able to afford exceptions.
If I were to try to generalize this to a conclusion
(which I wouldn't recommend at this point), it would
be that if you see an exception being thrown, it's
time to start figuring out another way of handling
the condition.
But YMMV -- and that's my point.
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]