Re: A few questions on C++
Kai-Uwe Bux wrote:
James Kanze wrote:
On Sep 21, 10:58 am, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
James Kanze wrote:
On Sep 19, 3:10 pm, "Phlip" <phlip...@yahoo.com> wrote:
D. Susman wrote:
[snip]
2) Should one check a pointer for NULL before deleting it?
No, you should use a smart pointer that wraps all such checks
up for you.
Why? What does a smart pointer buy you, if all it does is an
unnecessary test?
Don't forget, too, that most deletes are in fact "delete this".
And "this" cannot be a smart pointer.
Are you serious?
Yes. Most (not all) objects are either values or entity
objects. Value objects aren't normally allocated dynamically,
so the question doesn't occur. And entity objects usually (but
not always) manage their own lifetime.
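For concreteness, a minimal sketch of an entity object that manages its own lifetime via "delete this" (the class and member names are invented for illustration, not taken from the thread):

```cpp
#include <cassert>

// Hypothetical entity object: clients obtain it from create() and
// never delete it directly; it destroys itself when its work is done.
class Connection {
public:
    static int liveCount;                       // instrumentation for the example

    static Connection* create() { return new Connection; }

    // Called when the peer closes; the object removes itself.
    void onPeerClosed() {
        delete this;    // legal: *this was allocated with new, and
                        // nothing touches the object afterwards
    }

private:
    Connection() { ++liveCount; }
    ~Connection() { --liveCount; }              // private: forbids stack allocation
};

int Connection::liveCount = 0;
```

Making the constructor and destructor private is what enforces the idiom: the object cannot live on the stack, and no outside code can delete it.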
Most of my dynamically allocated objects are used to implement container
like classes (like a matrix class), wrappers like tr1::function, or other
classes providing value semantics on the outside, but where the value is
encoded in something like a decorated graph.
The internally allocated nodes do not manage their own lifetime: they are
owned by the ambient container/wrapper/graph.
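A minimal sketch of that ownership pattern, where the ambient container deletes its internal nodes and the nodes never manage themselves (class and names invented for illustration):

```cpp
#include <cassert>

// Value-semantic container whose internal nodes are owned by the
// ambient object, never by themselves.
class IntList {
    struct Node { int value; Node* next; };
    Node* head_;
public:
    IntList() : head_(nullptr) {}
    ~IntList() {
        // The container, not the node, manages each node's lifetime.
        while (head_) {
            Node* next = head_->next;
            delete head_;
            head_ = next;
        }
    }
    void push_front(int v) { head_ = new Node{v, head_}; }
    int front() const { return head_->value; }

    IntList(IntList const&) = delete;             // copying omitted in this sketch
    IntList& operator=(IntList const&) = delete;
};
```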
That is one of the cases where "delete this" would not be used.
But it accounts for how many deletes, in all? (Of course, in a
numerics application, there might not be any "entity" objects,
in the classical sense, and these would be the only deletes,
even if they aren't very numerous.)
I venture the conjecture that this heavily depends on your
code base and on the problem domain.
And your style. If you're trying to write Java in C++, and
dynamically allocating value objects, then it obviously won't be
true.
I have no idea about Java. My code is heavily template based,
uses value semantics 95% of the time, and new/delete is rather
rare (about one delete in 500 lines of code).
Curious. Not necessarily about the value semantics; if you're
working on numerical applications, that might be the rule. But
templates at the application level? Without export, it's
unmanageable for anything but the smallest project. (The
companies I work for tend to ban them, for a number of reasons.)
In my codebase, the lifetime of an object is managed
by the creator, not by the object itself.
There are cases where it is appropriate. There are also cases
where the lifetime will be managed by some external entity such
as the transaction manager.
Ownership is almost never transferred. The reason that the
object is created dynamically is, e.g., that its size was
unknown (in the case of an array) or that a client asked for
an entry to be added to a data structure.
O.K. That's a case that is almost always handled by a standard
container in my code. I have entity objects which react to
external events.
If you're writing well designed, idiomatic C++, then a
large percentage of your deletes probably will be "delete this".
I disagree. Could it be that you are thinking of object oriented designs?
More to the point, I'm thinking of commercial applications. I
sort of think you may be right with regards to numerical
applications.
[...]
So here is a question: given that use-case frequencies can differ
dramatically, can one give rational general advice concerning smart
pointers? And if so, what would that advice be?
The only "rational" advice would be to use them when
appropriate:-). Which depends a lot on context---if you're
using the Boehm collector, for example, you'll probably need
them less than if you aren't. But on the whole, even without
the Boehm collector, I doubt that they'd represent more than
about 10% of your pointers in a well designed application.
That depends on what you count as a smart pointer. E.g.,
tr1::function or boost::any are very close to smart pointers
with copy semantics. However, they clearly do not compete with
pointers.
I'm not sure I agree. I'd be tempted to say that if you can't
dereference it, it isn't a smart pointer. STL iterators are
smart pointers because they support dereferencing. Still, in a
commercial application, *most* pointers are used for navigation
between entity objects. You rarely iterate; you recover the
real pointer from the return value of std::map<>::find almost
immediately, etc.
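The pattern described here, recovering the raw pointer from the return value of std::map<>::find almost immediately, might look like the following sketch (the Account type and findAccount helper are hypothetical, invented for illustration):

```cpp
#include <map>
#include <string>

// Hypothetical entity lookup: the map does not own the entities; it
// only maps ids to entity pointers, and the iterator is discarded
// almost immediately in favor of the raw pointer.
struct Account { std::string owner; };

Account* findAccount(std::map<int, Account*> const& index, int id) {
    std::map<int, Account*>::const_iterator it = index.find(id);
    return it == index.end() ? nullptr : it->second;  // recover the real pointer
}
```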
However, by and large, I also found that (smart) pointers
rarely ever make it into client code. When I put a class in my
library, it usually provides value semantics, and in fact,
most of my classes do not have virtual functions or virtual
destructors.[1] Thus, client code has no reason to use dynamic
allocation.
Are you writing libraries? Obviously, something like
std::vector<> won't use delete this for the memory it manages.
Something that primitive probably won't use a classical smart
pointer, either, but I guess more complex containers might.
In the applications I work on, of course, such low level library
code represents something like 1% or 2% of the total code base.
And for the most part, we don't write it; the standard
containers are sufficient (with wrappers, in general, to provide
a more convenient interface).
They obviously don't apply to entity objects, whose lifetime
must be explicitly managed. And how many other things would you
allocate dynamically?
A whole lot. E.g., very often in math programming, I find myself dealing
with _values_ that are best represented by trees, pairs of trees, trees
with some decoration, or graphs. Implementing those classes requires a
whole lot of dynamic allocation, but in the end that is just some means to
realize a class that has value semantics from the outside. The objects are
then in charge of destroying the internal nodes whose graph structure
encodes the mathematical value of the object. Leaving that to smart
pointers is very helpful in prototyping.
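A minimal sketch of such a class: a value type whose "value" is encoded in a dynamically allocated tree, with smart pointers owning the internal nodes (names invented; std::unique_ptr stands in here for the tr1/boost smart pointers of the thread's era):

```cpp
#include <memory>

// Value-semantic wrapper around a dynamically allocated tree.  The
// unique_ptr members own the internal nodes, so the ambient object
// needs no hand-written delete at all.
class Expr {
    struct Node {
        int value;
        std::unique_ptr<Node> left, right;
        explicit Node(int v) : value(v) {}
    };
    std::unique_ptr<Node> root_;

    static std::unique_ptr<Node> clone(Node const* n) {
        if (!n) return nullptr;
        auto copy = std::make_unique<Node>(n->value);
        copy->left = clone(n->left.get());
        copy->right = clone(n->right.get());
        return copy;
    }

public:
    explicit Expr(int v) : root_(std::make_unique<Node>(v)) {}
    Expr(Expr const& other)                      // deep copy: value semantics
        : root_(clone(other.root_.get())) {}
    Expr& operator=(Expr other) { root_.swap(other.root_); return *this; }
    int value() const { return root_->value; }
};
```

Copying clones the whole tree, so from the outside the class behaves like any other value, even though internally it is all dynamic allocation.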
I think that's the difference. I guess you could say that my
code also contains a lot of trees or graphs, but we don't think
of them as such; we consider it navigating between entity
objects---the objects have actual behavior in the business
logic. And the variable sized objects (tables, etc.) are all
handled by standard containers.
Most of the time, when I see a lot of smart
pointers, it's for things that shouldn't have been allocated
dynamically to begin with.
I cannot refute that observation. However, that is a function
of the code you are looking at.
Certainly. I also see a lot of code in which there is only one
or two deletes in a million lines of code; the value types are
all copied (and either have fixed size, or contain something
like std::string), and the entity types are managed by a single
object manager. In many cases, the architecture was designed
like this just to avoid "delete this", but the delete request to
the object manager is invoked from the object that is to be
deleted---which means that it's really a delete this as well.
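A sketch of that single-object-manager architecture (all names invented): the manager formally owns every entity, but when the entity itself asks the manager to dispose of it, the effect is still "delete this" with one level of indirection.

```cpp
#include <cstddef>
#include <set>

class Entity;

// The manager holds every live entity and performs the actual delete.
class ObjectManager {
    std::set<Entity*> live_;
public:
    void adopt(Entity* e) { live_.insert(e); }
    void dispose(Entity* e);                 // defined after Entity
    std::size_t count() const { return live_.size(); }
};

class Entity {
    ObjectManager& mgr_;
public:
    explicit Entity(ObjectManager& m) : mgr_(m) { mgr_.adopt(this); }
    void finish() { mgr_.dispose(this); }    // morally a "delete this"
};

inline void ObjectManager::dispose(Entity* e) {
    live_.erase(e);
    delete e;
}
```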
And in my last job, the application worked with a fixed set of
entity objects, and I don't think that there was any new/delete
outside of initialization and shutdown code; given the behavior
of the objects, we really could have skipped the delete in the
shutdown, and had code without a single delete. (In our code;
the application did use std::set a lot for secondary indexes,
with a lot of insert/erase, since the secondary indexes were
mutable values. So there was actually a lot of use of dynamic
memory. Just not in our code.)
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34