Re: throwing dtors...
"Paavo Helde" <nobody@ebi.ee> wrote in message
news:Xns9B2B65D8AFDA9nobodyebiee@216.196.97.131...
"Chris M. Thomasson" <no@spam.invalid> kirjutas:
Is it ever appropriate to throw in a dtor? I am thinking about a
simple example of a wrapper around a POSIX file...
_______________________________________________________________________
[...]
Throwing from a dtor is not really advisable in C++. It can easily lead
to duplicate throws during stack unwinding, and a call to terminate() as
a result.
The C++ RAII model is built on the assumption that releasing the
resource always succeeds (or its failure can be ignored by upper levels).
If this is not the case, then the application logic immediately becomes
very complex; essentially, you are back in C again.
In any case, I would suggest moving any activity which can fail out of
the destructor and into a separate member function which has to be called
explicitly before the object is destroyed, possibly from inside a try-
catch block dealing with errors.
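Just to make the failure mode concrete, here is a minimal sketch (all
names made up) of the double-throw scenario under C++98/03 rules:

#include <stdexcept>

// A throwing dtor meets an exception that is already in flight:
// unwinding from the throw in f() runs ~bad_guard(), which throws a
// second exception, and the runtime calls terminate().
struct bad_guard {
    ~bad_guard() {
        throw std::runtime_error("dtor failed");
    }
};

void f() {
    bad_guard g;
    throw std::runtime_error("first failure"); // unwinding destroys g
}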
I think I agree here, since, IMVHO, at least attempting to gracefully handle
`fclose()' failures, such as deferred retrying in the case of `EINTR' or
`EAGAIN', is extremely important. Therefore, it sure seems to make sense to
force the user to explicitly call a member function which invokes `fclose()'
and throws when a very bad error is encountered (e.g., something other than
EINTR or EAGAIN).
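Something along the lines of the following sketch is what I have in mind;
the `checked_file' class and `file_error' exception are hypothetical names
of my own:

#include <cstdio>
#include <cerrno>
#include <stdexcept>

// Sketch only: close() must be called explicitly and throws on hard
// errors; the dtor is a non-throwing last resort.
struct file_error : std::runtime_error {
    explicit file_error(char const* msg) : std::runtime_error(msg) {}
};

class checked_file {
    FILE* m_handle;
public:
    explicit checked_file(FILE* handle) : m_handle(handle) {}

    void close() {
        // retry transient failures, escalate anything worse
        while (std::fclose(m_handle) == EOF) {
            if (errno != EINTR && errno != EAGAIN)
                throw file_error("fclose() failed");
        }
        m_handle = 0;
    }

    ~checked_file() throw() {
        if (m_handle) std::fclose(m_handle); // best effort; errors ignored
    }
};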
In regard to this example: for most applications, fclose() failing
indicates that the disk is full.
What if the calling thread simply gets interrupted by a signal? What if the
file is non-blocking and the close operation would block? Those errors can
be handled in a straightforward manner. The former can even be handled
within the dtor itself:
#include <cstdio>
#include <cerrno>

class file {
    FILE* m_handle;
public:
    ~file() throw() {
        // keep retrying the close for as long as it fails with EINTR
        while (std::fclose(m_handle) == EOF && errno == EINTR);
    }
};
What can you do about this? Try to
delete some random other files from the disk?
The application does what it has to do in order to prevent data corruption
and/or loss.
For most applications I
believe a proper behavior would be to try to log the error somewhere,
then either continue or abort, depending on the application type.
What if the application needs to copy a file to disk and destroy the
original? If `fclose()' fails on the destination file, the application
won't know about it and will continue on and destroy the source file. The
destination file is by definition in an incoherent state because
`fclose()' failed to "do its thing", so the lost data is gone forever. A
log file will only show why the data was lost; it does not prevent the
loss. In this case I bet the user wishes the application had just
terminated when `fclose()' failed. Or better, I bet the user would like to
be able to catch and explicitly handle this case...
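For instance, reusing the hypothetical `checked_file' sketch from above
(and eliding the actual copying and most error checks), a caller could
protect the source file like this:

#include <cstdio>

// Only remove the source once the destination is known to be safely
// closed; a failed fclose() then cannot silently destroy the only
// good copy of the data.
void move_file(char const* src_path, char const* dst_path) {
    checked_file dst(std::fopen(dst_path, "wb")); // fopen check elided
    // ... copy the bytes from src_path into dst ...
    try {
        dst.close(); // throws file_error on a hard failure
    } catch (file_error const&) {
        std::remove(dst_path); // destination is incoherent; discard it
        throw;                 // let the caller decide what to do next
    }
    std::remove(src_path); // safe: the copy is complete on disk
}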
If file integrity is of the utmost importance, e.g. in the case of a
database program, this has to be managed explicitly anyway, by storing
something like transaction completion markers in the file itself, or
whatever. I bet this is not trivial.
Not trivial at all!
;^0