Re: Robust error handling, an error while handling another error
JoshuaMaurice@gmail.com wrote:
> I've been having this discussion a lot with my colleagues at work.
> A buffered file writer is the classic example.
>
> #include <fstream>
>
> int main()
> {
>     std::ofstream fout("foo.txt");
>     if (!fout)
>         return 1;
>     // write important data to file
>     // let the destructor implicitly close the file handle
> }
<snip>
> Using the std::ofstream interface above, it's still possible to write
> correct code by calling flush explicitly as the last part of the
> normal success path, then returning an error (either by return code
> or by exception) if the flush fails. (I still think it's bad because
> it promotes the bad style of relying on the destructor to flush the
> data.)
Whether having flush() called by the destructor is good or bad design is
very debatable. Frankly, I like it the way it is, and your arguments do
not convince me otherwise.
> My real question is, what should you do when you encounter an error
> when unwinding the stack from another error?
The fact is that in most cases throwing the second exception is
pointless, because the state of the system is so messed up that you
couldn't recover it anyway. In such a scenario, calling std::terminate
is the most sensible thing. So the question can't be answered in the
general case.
> As I see it, your only options are:
> 1- Kill the process (after attempting to log).
> 2- Return them both. Create some new error return value that wraps
> the other two errors.
That can be achieved with the new C++0x facility std::nested_exception.
Notice that without std::nested_exception or std::exception_ptr (another
feature introduced by C++0x) it's quite hard to achieve point 2 in the
general case.
> Some colleagues at my company want to throw an exception on a sanity
> check failure. A lot of them are very averse to killing the process
> under any circumstances, even when a sanity check is tripped, aka a
> programmer error. Their argument is that we would prefer having the
> process stay up if it's just your component that is broken. For
> example, a user clicks something in the GUI, a sanity check is hit,
> and instead of the process dying, the user is told that that
> functionality is broken and should not be used, but that they may
> continue to use other aspects of the program. I question the merits
> of this. I question whether it's a good idea to have the process
> continue in the face of a confirmed programmer error, and I question
> whether the benefit of this rare case, keeping up a possibly corrupt
> process, is worth the large amount of extra hassle a programmer has
> to go through returning an error instead of using a release-mode
> assert equivalent.
> To be clear, I agree that a user of an executable should not be able
> to crash that executable with bad input, for most executables.
> However, should we be so extreme as to include programmer errors in
> that category?
The right attitude here is not to ask what is "correct", but what is,
in the end, most useful to you and/or the user. OK, you could inform
the user that a certain feature is broken and should not be used,
instead of having the application crash. So what? The user will
probably yell at you and stop using the feature.
Wouldn't it be more useful to make the application crash and let the
user send you a crash dump so that you can debug and possibly fix the
problem? The user will still yell at you, but at least he will know that
you care about fixing his problem.
A different case is, for example, if your application supports
third-party plugins and one of them appears to be broken. In that case
you might decide to handle the situation more gracefully, inform the
user that the plugin is broken, and disable it. The user will then yell
at the plugin developer, but that's not your problem.
There is no one-size-fits-all approach. It's your job as a developer to
be sensible and work out what the Right Thing to do is in each case.
Just my opinion,
Ganesh
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]