Re: Exception Misconceptions
On Dec 17, 2:51 pm, "dragan" <spambus...@prodigy.net> wrote:
"James Kanze" <james.ka...@gmail.com> wrote in message
news:e09665be-4d56-4e93-a473-06c5a172eb10@m38g2000yqd.googlegroups.com...
On Dec 12, 10:05 am, "dragan" <spambus...@prodigy.net> wrote:
James Kanze wrote:
On Dec 11, 11:05 am, "dragan" <spambus...@prodigy.net> wrote:
James Kanze wrote:
On Dec 11, 8:57 am, "dragan" <spambus...@prodigy.net> wrote:
James Kanze wrote:
[...]
OK. In that case, the mechanism I explained is explicit. In all of
the compilers I've seen, destructors are simply called at the end of
scope when no exceptions are raised, exactly as they were before
exceptions were added to the language. The extra tables and the
associated stack walkback only come into play when an exception is
thrown.
The goal here is "you don't pay for what you don't use".
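To make that concrete, a minimal sketch (Guard and may_throw are
illustrative names, not anything from a particular compiler): on the
normal path the destructor is a plain call at the end of the scope,
and only the throwing path goes through the unwind tables.

    #include <cstdio>
    #include <stdexcept>

    struct Guard {
        ~Guard() { std::puts("~Guard()"); }   // cleanup to run on scope exit
    };

    void may_throw(bool do_throw) {
        Guard g;
        if (do_throw)
            throw std::runtime_error("boom"); // unwinder finds ~Guard via the tables
    }                                         // normal path: a plain call to ~Guard here

    int main() {
        may_throw(false);                     // no tables consulted, no extra cost
        try {
            may_throw(true);                  // table-driven walkback runs ~Guard
        } catch (const std::exception&) {
            std::puts("caught");
        }
    }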
I wonder if a much simpler implementation is possible if one is not
"hell-bent" on _zero_ overhead: one that reuses the same destructor
call/stack walk as in the non-exceptional case. I would proceed in
that direction first if I were implementing the language.
I'm not sure what you're describing here. That every function should
have an additional, hidden return value, which is tested on return
from every function?
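Written out by hand, that alternative would amount to something like
the following sketch (Status, Resource, callee and caller are purely
illustrative names): every callee reports a pending exception through
a status value, and every caller tests it after each call before
letting its own destructors run and propagating further.

    #include <cstdio>

    // Hypothetical status the compiler would thread through every call.
    enum class Status { Ok, Exception };

    struct Resource {
        ~Resource() { std::puts("cleanup"); }
    };

    Status callee(bool fail) {                // "throws" by returning Exception
        return fail ? Status::Exception : Status::Ok;
    }

    Status caller(bool fail) {
        Resource r;                           // local with a destructor
        Status s = callee(fail);
        if (s == Status::Exception)           // the hidden test after every call
            return Status::Exception;         // ~Resource runs here, then we propagate
        // ... normal processing ...
        return Status::Ok;                    // ~Resource runs here as well
    }

    int main() {
        caller(false);
        caller(true);
    }

The cost of such a scheme is the extra test after every call, paid
whether or not anything is ever thrown, which is precisely what the
zero-overhead approach avoids.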
Other mechanisms are possible. I believe some earlier compilers did
use a system of objects automatically registering themselves on
construction, and deregistering themselves on destruction (with try
blocks registering and deregistering their catch clauses as well).
The registry is organized more or less as a stack, and the exception
handler just pops until it encounters a catch clause which handles
the exception.
Such mechanisms have a very noticeable impact on performance in the
case where an exception isn't thrown. Probably acceptable in most
applications, but certainly not in all.
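A rough sketch of such a registration scheme, purely illustrative and
not taken from any real compiler's runtime (Frame, registry,
register_object, deregister_object, register_handler and raise are
all made-up names): a per-thread stack holds one entry per live
object with a destructor and one entry per active try block, and
raising an exception pops entries until it reaches a handler.

    #include <cstdio>
    #include <vector>

    // One frame per live object with a destructor, or per active try block.
    struct Frame {
        void (*cleanup)(void*);   // destructor thunk; null marks a handler frame
        void*  object;
    };

    thread_local std::vector<Frame> registry;   // the per-thread "stack"

    void register_object(void (*cleanup)(void*), void* obj) {
        registry.push_back({cleanup, obj});     // done in every constructor
    }
    void deregister_object() {
        registry.pop_back();                    // done in every destructor
    }
    void register_handler() {
        registry.push_back({nullptr, nullptr}); // done on entering a try block
    }

    // Raising an exception: pop and run cleanups until a handler frame is found.
    void raise() {
        while (!registry.empty() && registry.back().cleanup != nullptr) {
            Frame f = registry.back();
            registry.pop_back();
            f.cleanup(f.object);
        }
        if (!registry.empty())
            registry.pop_back();                // consume the handler frame
        // ...control would now transfer to the catch clause...
    }

    int main() {
        register_handler();                     // enter a try block
        int fd = 42;
        register_object([](void* p) { std::printf("close %d\n", *static_cast<int*>(p)); },
                        &fd);
        raise();                                // unwinds back to the handler
    }

The push in every constructor and the pop in every destructor are
exactly the bookkeeping paid even when nothing is ever thrown, which
is where the noticeable impact mentioned above comes from.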
As long as no exception is thrown, the code executes as fast as if
exceptions weren't in the language (or almost---added control flow
paths may affect optimization, and the additional tables may affect
locality).
Not all compilers use this technique. G++ does, as does Sun CC, but
Microsoft does seem to insert some extra calls here and there
(although I've not studied its mechanism enough to know exactly how
it works).
Basically, at least with Sun CC (and except for the optimizer
considering the additional flow paths), code is generated exactly as
if exceptions didn't exist, plus the additional tables. When you
throw an exception, the compiler generates special code to allocate
memory in a reserved area and copy the exception into it, then calls
a special runtime function which does the stack walkback. That
walkback can be relatively expensive at runtime, because of all of
the table lookups it does. (I don't know offhand whether it has to
do a linear search each time, or whether the tables are sorted so
that it can do a binary search.)
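Sun CC's runtime entry points aren't named above; for comparison,
under the Itanium C++ ABI that g++ uses, the same two steps (allocate
and copy the exception into the reserved area, then hand it to the
table-walking unwinder) can even be spelled out by hand. A sketch,
assuming only the ABI functions __cxa_allocate_exception and
__cxa_throw; everything else is illustrative.

    #include <cstddef>
    #include <cstdio>
    #include <typeinfo>

    // Itanium C++ ABI entry points (as used by g++), declared by hand
    // here purely for the sake of the illustration.
    extern "C" void* __cxa_allocate_exception(std::size_t size);
    extern "C" void  __cxa_throw(void* exception, std::type_info* type,
                                 void (*destructor)(void*));

    int main() {
        try {
            // Roughly what the compiler emits for `throw 42;`:
            void* mem = __cxa_allocate_exception(sizeof(int));      // reserved area
            *static_cast<int*>(mem) = 42;                           // copy the exception in
            __cxa_throw(mem,                                        // walk the tables,
                        const_cast<std::type_info*>(&typeid(int)),  // looking for cleanups
                        nullptr);                                   // and a matching handler
        } catch (int v) {
            std::printf("caught %d\n", v);
        }
    }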
So overall, the answer is that in practice compiler implementors opt
for the zero-overhead goal, which may require, or at least make
worthwhile, machinery that is separate from the machinery used in the
normal processing case; but other, simpler schemes are probably
possible. Since there is nothing inherently tying exceptions to
dedicated/explicit mechanisms, any statement worded so as to imply
that is wrong (a misconception).
The machinery isn't that complicated. After all, you need to be able
to walk back the stack in other cases as well (e.g. in a
debugger---and what compiler doesn't come with a debugger?). The
alternatives are relatively expensive, and some people do choose
their compiler based on benchmark results (and those benchmarks
rarely test the performance when an exception is thrown). For better
or worse, performance is an issue for compiler vendors---lower
performance means fewer sales.
--
James Kanze