Re: naked pointer vs boost::shared_ptr<T>
On Mar 7, 11:39 pm, "Dejan.Mircev...@gmail.com"
<Dejan.Mircev...@gmail.com> wrote:
> On Mar 6, 5:13 am, "James Kanze" <james.ka...@gmail.com> wrote:
> > The original question was: "Should we stop using naked pointer
> > and replace all of them with boost::shared_ptr or
> > (shared_array<T> or scoped_ptr)?" And the answer to that is
> > simply NO. The poster is looking for a silver bullet, and there
> > isn't one. You have to think about object lifetime, regardless.
> Oh, I absolutely agree with this. I'm hard-pressed to think of anyone
> in this thread who disputed it.
That we have to think about object lifetimes? Or that we should
avoid raw pointers? My impression is that a lot of people are
saying that raw pointers are bad.
> What I am disputing is your claims about raw pointers. They range
> from incredible (weak_ptr leaks)
If you count on it, it leaks. The problem isn't with weak_ptr
per se; the problem is that it solves a problem that in practice
is very, very rare. And when used to solve other problems (e.g.
relationship management), the weak_ptr is leaked.
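To be concrete about what "leaked" means here, a minimal sketch
(invented names, and the std:: spellings for brevity; the same applies
to the Boost originals under discussion): the weak_ptr entries are
never removed when the objects they referred to die, so the container
grows without bound, and every expired entry still pins a control
block.

    #include <memory>
    #include <vector>

    struct Listener { /* ... */ };

    // Nothing ever erases entries from this vector.
    std::vector<std::weak_ptr<Listener>> registry;

    void subscribe(std::shared_ptr<Listener> const& l)
    {
        registry.push_back(l);      // weak_ptr built from the shared_ptr
    }

    void notifyAll()
    {
        for (auto const& w : registry) {
            if (auto l = w.lock()) {
                // use *l ...
            }
            // Expired entries stay in the vector forever, and each one
            // keeps the control block of its dead Listener alive.
            // That accumulation is the "leak".
        }
    }

Nothing in weak_ptr itself is broken; it's the usage pattern that
accumulates garbage unless someone remembers to sweep the container.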
> through puzzling (map lookup needs raw pointers to signal
> failure)
Where did I say that?
> to intriguing but refutable (raw pointers are more
> useful than smart pointers).
Both have their uses. Typically, I'd guess that about 80% of my
pointers are raw pointers. The ratio will vary, according to
the application, but I find it hard to imagine a case where
there wouldn't be a significant percentage of raw pointers.
> Your example with self-destructing objects is excellent for
> accentuating the similarities and differences between raw and smart
> pointers. All the problems around copying pointers and maintaining
> their validity are exactly the same for both raw and smart pointers.
Right. You can make the smart pointers work. It's a little
bit more work, but not much. On the other hand, using smart
pointers in such cases is pretty much lying to the reader, since
there is in fact no "shared ownership". So compared to using
raw pointers, they represent a little extra work, and a lot of
obfuscation. Not what I would consider a net gain.
> For example, the weak_ptr-in-a-set problem you indicate is still
> present with a raw pointer, and for the same reason. If these
> problems are already solved by your design, they can be solved
> analogously with smart pointers, with very little code change.
But why? They do require a slight bit of extra work, and do
nothing but confuse the reader as to what is going on, since
there is no shared ownership, and the pointers don't manage
lifetime.
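For reference, roughly the sort of registry I have in mind (a sketch
with invented names, not actual production code): the map only indexes
the objects, and the raw pointer says exactly that.

    #include <map>
    #include <string>

    class Connection
    {
    public:
        explicit Connection(std::string const& id)
            : myId(id)
        {
            ourRegistry[myId] = this;       // register on construction
        }
        ~Connection()
        {
            ourRegistry.erase(myId);        // deregister on destruction
        }

        Connection(Connection const&) = delete;
        Connection& operator=(Connection const&) = delete;

        // Lookup returns a non-owning pointer, null if the id is
        // unknown; the registry never touches lifetime.
        static Connection* find(std::string const& id)
        {
            auto it = ourRegistry.find(id);
            return it == ourRegistry.end() ? nullptr : it->second;
        }

    private:
        std::string myId;
        static std::map<std::string, Connection*> ourRegistry;
    };

    std::map<std::string, Connection*> Connection::ourRegistry;

Making the same pattern work with shared_ptr/weak_ptr is possible, but
the object can no longer simply register itself from its own
constructor, and the reader is left wondering who is supposed to own
what.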
> But the genuine difference here is that your usage of raw pointers
> forces you to be extremely cautious in your implementation:
It does mean that I have to do some design up front, yes. But
smart pointers don't eliminate the need for design either.
> you flirt with undefined behavior [FAQ Lite 16.15], and if an
> exception is ever thrown that destroys your registry, you will leak
> all your objects.
Now you're being silly. If the registry is ever destroyed, the
whole application crashes, regardless. At that point, what's
one leak more or less?
> A smart-pointer solution would free you from these worries.
That's precisely the attitude I'm arguing against. Smart
pointers aren't a silver bullet. A smart-pointer solution won't
free you from these worries. If an exception (or anything else)
causes the registry to be destructed, and I'm still using it, my
code will crash. Period. The correct functioning of my
program depends on the existence of the registry---it is
fundamental to the application. (In fact, the registry is
generally a singleton, or a member of a singleton, and created
in such a way that it will never be destructed. Although in one
particular case, it was in fact a local variable in main.)
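The usual way to get a registry that is created so that it will never
be destructed is something like this (a sketch with a hypothetical
name, not my actual code): allocate it on first use and never delete
it, so no destructor can run during shutdown and order-of-destruction
problems simply don't arise.

    class Registry
    {
    public:
        static Registry& instance()
        {
            // Allocated on first use and intentionally never deleted:
            // no destructor ever runs, so the registry stays usable
            // even from destructors of static objects during shutdown.
            static Registry* theInstance = new Registry;
            return *theInstance;
        }

    private:
        Registry() {}
        Registry(Registry const&) = delete;
        Registry& operator=(Registry const&) = delete;
    };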
> For this reason, and because of the explicit ownership conventions,
> a smart-pointer solution would be more maintainable.
Been there, done that. It doesn't work.
> The more interesting question you raise is how do we allow object
> lifetimes that don't coincide with scopes. Dynamic allocation using
> operator new is one option.
It's the only option, according to the C++ standard.
> (don't forget to use smart pointers, though ;).
Which only solve a very small subset of the problems, some of
the time.
> But another option is putting objects into STL containers
> using value semantics.
Entity objects don't have value semantics. They have identity.
They are polymorphic. You can't put them (directly) into an STL
container.
> If there's no danger of slicing, I find this option very
> appealing.
Try it, sometime, with objects which have identity, and don't
support copy or assignment. (As I said earlier, that's about
30% of my objects. And about 75% of those dynamically
allocated.)
> (Whether a container might use new/delete under the hood is
> irrelevant, since I don't have to worry about leaking that
> memory.) In fact, you could very likely solve your example by
> putting the objects themselves into your map and passing
> around references to them.
Except that they are polymorphic, and don't support copy or
assignment.
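Concretely, an entity class of the kind I'm talking about looks more
or less like this (hypothetical names), and value semantics just isn't
on offer for it:

    #include <map>
    #include <string>

    class Device                    // identity, polymorphic, no copy
    {
    public:
        virtual ~Device() {}
        virtual void poll() = 0;

        Device(Device const&) = delete;
        Device& operator=(Device const&) = delete;

    protected:
        Device() {}
    };

    // std::map<std::string, Device> devices;  // won't compile: the
    //     mapped type has to be copyable, and even with a concrete,
    //     copyable base, storing by value would slice any derived
    //     object down to the base.

    std::map<std::string, Device*> devices;    // so the map holds
                                                // (non-owning) pointers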
> The same map-updating mechanism you use today would let you
> remove them and invalidate outstanding references when they
> die.
You don't seem to understand: invalidating outstanding
references is only a small part of the problem. And when you've
solved the rest, the outstanding references won't be there to
invalidate (or will already have been invalidated).
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]