Re: shared_ptr from dinkumware ... comments??
Peter Dimov wrote:
> James Kanze wrote:
> > Having said that, of course, shared_ptr is one of the oldest
> > components in Boost, and I'm pretty sure that it is now stable
> > enough to be used in a production environment. As far as I
> > know, it is also pure template code, entirely contained in its
> > headers, and (at least for the non-threaded version) could be
> > installed just by copying the headers, without going through the
> > hassle of installing Boost (a real pain). And while there are
> > probably minor bugs in any version, like Pete, I doubt you'd
> > encounter problems in a typical application.
> shared_ptr has been stable for use in production environments for
> years. It hasn't changed much since 2003. Boost.Test, the testing
> infrastructure for most of Boost, depends on shared_ptr, so it has to
> work across the board, or every test fails. :-) The threaded version is
> header-only, too. If you know of a minor bug, please let me know. We
> don't tolerate bugs, however minor.
I don't know of any, but you know as well as I do that there can
always be subtle issues which don't show up for years. (That is,
of course, true of any piece of software.) As I said, it is
highly unlikely that any given application would hit one of them,
and I'd not hesitate to use shared_ptr in any application, even a
critical one. That puts it right up there with the g++ and
Dinkumware implementations of the standard library. And with very
little else; when I'm writing critical software, I make very
little use of third-party libraries, because I generally don't
know their quality.
> > Unless, of course, you are trying to use a multi-threaded
> > version. Then, I'd be very sceptical. But IMHO, shared_ptr
> > isn't something you'd want to cross a thread boundary anyway.
> > (If the g++ version uses their atomic_add, there's a problem
> > with the Sparc 32 bit version, for example, which can cause the
> > process to pause unnecessarily in some cases; if the OS
> > implements real-time threads, the process can even block
> > completely due to priority inversion. And if it doesn't, it
> > means a pthread_mutex_lock per increment or decrement. 100%
> > correct, but likely to be a bit slow.)
> Using shared_ptr in a threaded environment does increase the
> chance of encountering a problem (mainly for non-x86
> platforms) since we now use a lock-free algorithm, and in the
> absence of a portable atomic operations library (something
> that will be addressed by C++0x), this is platform/compiler
> specific. We do fix the bugs as soon as we discover them,
> though. :-)
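For the record, what C++0x should give us is a portable interface
along these lines (just a sketch of the proposed std::atomic, from
memory; the class name is invented, and the details may not be
exact):

    #include <atomic>

    class sp_count_sketch         // invented name, for illustration
    {
        std::atomic<long> count_;
    public:
        sp_count_sketch() : count_(1) {}
        void add_ref()
        {
            // a plain atomic increment; no particular ordering needed
            count_.fetch_add(1, std::memory_order_relaxed);
        }
        bool release()
        {
            // the decrement must order earlier writes before the
            // destructor runs, hence acquire/release
            return count_.fetch_sub(1, std::memory_order_acq_rel) == 1;
        }
    };

Once something like that is standard, the platform/compiler specific
code moves into the compiler, where it belongs.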
Well, my own feeling is that it doesn't have the correct
semantics for sharing between threads anyway, so it's not
something that bothers me:-). I just comment out all multithread
support when I use it, and it works fine for me, including in a
multi-threaded environment:-).
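For what it's worth, I believe recent Boost versions let you get the
same effect without touching the sources, by defining a macro before
including the header (check the documentation of your version):

    // Supposedly disables all thread support in shared_ptr, so that
    // the count becomes a plain, non-atomic update.  Safe as long as
    // no count is ever touched from two threads at the same time.
    #define BOOST_SP_DISABLE_THREADS
    #include <boost/shared_ptr.hpp>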
> We have a version for Sparc (contributed by Piotr Wyderski),
> but it won't be in the upcoming 1.34 release, so you're right
> about the current version using a mutex on this platform.
> g++'s own std::tr1::shared_ptr (which is based on
> boost::shared_ptr) is, I believe, lock-free on Sparc. I don't
> know about Dinkumware's.
g++ has used a COW implementation of string since 3.0, which
presupposes reference counting. It has a couple of
implementation-dependent functions to handle this, generally in
a lock-free manner; it seems reasonable to assume that they are
used for the g++ implementation of shared_ptr as well. The
implementation for 32 bit Sparc uses a primitive spin lock,
which can stall unnecessarily, or even block completely if the
platform supports real-time threads. (I don't think standard
Solaris does.) As far as I know, it is impossible to implement
lock-free reference counting correctly on pre-version 9 Sparcs,
which didn't have the cas instruction.
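To make the point concrete: the usual lock-free technique is a cas
loop, something like the following (written with gcc's __sync
builtins, where the hardware supports them; this shows the general
idea, not the actual libstdc++ code):

    // retry the increment until no other thread has modified the
    // count between our read and our write
    void add_ref(long volatile* count)
    {
        long old = *count;
        while (__sync_val_compare_and_swap(count, old, old + 1) != old) {
            old = *count;
        }
    }

    // atomically decrement and test for zero; true means we removed
    // the last reference, and the caller must delete
    bool release(long volatile* count)
    {
        long old = *count;
        while (__sync_val_compare_and_swap(count, old, old - 1) != old) {
            old = *count;
        }
        return old == 1;
    }

Without cas (or ll/sc), there's no way to do the decrement and the
test as a single atomic operation, which is presumably why the 32
bit Sparc version falls back on a spin lock.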
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]