Re: Which libraries in Boost are mature enough to be used in real applications?
David Abrahams wrote:
> Wait; you're suggesting that libraries should be created without a
> careful examination of their problem domain? A good library provides
> abstractions. Abstractions are normally generalized _from_ something.
> I can't imagine that approaching any library design without looking
> very carefully at the domain being modeled could ever lead to success.
I think we're saying the same thing - one should mull over the problem
domain considerably before proclaiming the virtues of the library.
By contrast, for example, consider the many people who try to write
wrapper libraries for primitives in computer networking. We have seen
many of these, wrapping everything from DNS operations to IP addresses
to security (look at what Microsoft did to security with their
libraries). None of these wrapper libraries has ever reached major
popularity, because each wraps something that is simply "not right".
Though it is often difficult to point out specifically what is "not
right", programmers know when they have that warm and fuzzy feeling and
when they don't; in these cases they don't, so they reject the library.
In the case of Boost, IMO, there are libraries where the author
"mulled" to the maximum, as Jeff Garland did when he created the
date/time library, and after he was finished, he knew that he had
created a work of art.
Typical traits of a "well-mulled" library are that the nomenclature
makes sense; the concepts are complete, self-contained, and true
irrespective of context; and the potential for composition is
unlimited.
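Boost.Date_Time shows all three traits. Here is a minimal illustration
using its actual gregorian interface (the particular dates are
arbitrary):

#include <iostream>
#include <boost/date_time/gregorian/gregorian.hpp>

int main() {
    using namespace boost::gregorian;

    date start(2006, Feb, 1);       // a calendar date, named as in the domain
    days length(30);                // a duration, distinct in type from a date
    date finish = start + length;   // date + duration composes naturally
    days elapsed = finish - start;  // date - date yields a duration

    std::cout << to_simple_string(finish) << ", "
              << elapsed.days() << " days\n";
}

Note how the type system keeps dates and durations distinct, yet lets
them compose - the nomenclature and the concepts line up with the
domain itself.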
Some of the concepts being modeled in Boost are simply undercooked.
For example, I strongly suspect that the thread model should require
every thread function to take an Event argument that tells it when it
should gracefully die. I am not certain that this is true, and my own
thread library is far from cooked, but with threads this seems
generally to be the case. So I would think more mulling is needed
before a full-on commitment is made to the library.
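To make the suggestion concrete, here is a minimal sketch of the idea.
The Event class is hypothetical (invented for this post), and I use
std::thread only for brevity; the point is the shape of the thread
function's signature:

#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical Event type: nothing more than a flag the owner can signal.
class Event {
    std::atomic<bool> signaled_{false};
public:
    void signal() { signaled_.store(true); }
    bool is_signaled() const { return signaled_.load(); }
};

// Every thread function takes the Event and checks it between units of work.
void worker(Event& quit) {
    while (!quit.is_signaled()) {
        // ... perform one bounded unit of work ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    // Fall off the end: the thread dies gracefully and destructors run.
}

int main() {
    Event quit;
    std::thread t(worker, std::ref(quit));
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    quit.signal();   // request, rather than force, the thread's death
    t.join();
}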
> Isn't that a conundrum for anyone building abstractions? STL probably
> seemed to its creators to be a framework that could handle the domain
> of linear sequence processing, but of course despite being a great set
> of abstractions, it constrains the domain. For example, STL doesn't
> handle heterogeneous sequences (tuples), and is really inefficient for
> dealing with logically-linear-but-physically-hierarchical data
> structures like std::deque.
True. However, there is a difference here. With the implementation of
deque as a dynamic array of fixed-size blocks, for instance, everything
is clear. There is no mystery whatsoever. You know what it is you
have, that there are no hidden alternatives, that there will be
trade-offs, what those trade-offs are, the complexity of the
algorithms, essentially everything. The only real problem is finding
someone who has the insight to choose wisely.
With under-cooked models, however, you do not have that choice; you
generally have something that someone just came up with. You are not
even sure whether a better alternative exists. You *hope* there is
not, and you use what you have. The nomenclature in these cases is
often either incorrect or simply confusing.
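To make the contrast concrete: with deque, the documented guarantees
already tell you the whole story. A small sketch (the block-based
layout is the common implementation; the standard itself pins down only
the complexity guarantees):

#include <cstdio>
#include <deque>
#include <vector>

int main() {
    std::deque<int> d;
    d.push_back(1);    // amortized O(1); documented guarantee
    d.push_front(0);   // amortized O(1); a vector would pay O(n) here
    int x = d[1];      // O(1), but through a two-level block lookup

    std::vector<int> v;
    v.push_back(0);
    v.push_back(1);
    int* p = &v[0];    // contiguous storage: pointer arithmetic is valid
    // No equivalent for deque: &d[0] + 1 need not point at d[1],
    // because deque's storage is segmented into blocks.
    std::printf("%d %d\n", x, p[1]);
}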
> ...and how does the introduction of versioning "create a steady-state
> model" of the serialization domain? Do you honestly believe that not
> adding versioning would have made a difference in that regard?
Versioning detracted from the steady-state model (IMO). More than
anything else in programming, I depend on type to create form.
Versioning implies that types are semi-weak. If types are semi-weak,
then my form is semi-weak, and I cannot manage big systems with
semi-weak form, so I cannot use a versioned serialization library.
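For readers who have not seen the mechanism under discussion, this is
roughly what class versioning looks like in Boost.Serialization. The
Account type and its fields are invented for illustration, but the
serialize signature and the BOOST_CLASS_VERSION macro are the library's
real interface:

#include <sstream>
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/version.hpp>

struct Account {                 // invented type, for illustration only
    int balance = 0;
    int overdraft_limit = 0;     // field added in "version 1" of the type

    template<class Archive>
    void serialize(Archive& ar, const unsigned int version) {
        ar & balance;
        if (version > 0)         // the wire format depends on a runtime number
            ar & overdraft_limit;
    }
};

BOOST_CLASS_VERSION(Account, 1)  // one C++ type, several serialized forms

int main() {
    std::ostringstream os;
    boost::archive::text_oarchive oa(os);
    const Account a{};
    oa << a;                     // the version is written next to the data
}

The same C++ type now corresponds to more than one serialized form,
selected by a runtime integer - which is precisely the "semi-weak"
quality described above.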
-Le Chaud Lapin-