Re: Solving the data inheritance problem
James Kanze wrote:
Kirit Sflensminde wrote:
[...]
For a class like std::map< key, value > we know that there are
constraints that the map implementation (and definition in the
standard) enforces on the abstract type 'key' and that we must conform
to in order to substitute any other type in its place. This has nothing
to do with sub-classing from some common ancestor.
Languages like Java which lack generic programming enforce the use of
hierarchies that switch an operational polymorphism design into
inclusional polymorphism implementation because the language lacks the
tools to express the design clearly (and Java interfaces are just a
(nasty) way of doing this). In C++ we don't have this constraint
because our language enables us to express the design clearly and
exactly to start with.
Be careful about trying to apply a set of definitions in the
wrong place. Neither C++ (at present) nor Java have any real
means of expressing the constraints of the instantiation types
of a map. (There is a proposal to add constraints to C++,
precisely because the need for this was felt.)
And with that ability we would have parametric polymorphism, which is
distinct from either inclusional or operational polymorphism.
The reason for applying this definition is that the person I was
replying to was trying to say that LSP was sub-classing, which is only
part of the story. What I am trying to show the OP is that many
problems that are *obviously* class hierarchies can often be better
reformulated as templates.
In order to see where this can be done it is extremely useful to
understand the difference between operational and inclusional
polymorphism and why some languages (and idioms borrowed from those
languages) enforce what is actually an operational polymorphic design
to be implemented by transforming it to inclusional polymorphism
(there's an example further down).
The distinction between these two forms of polymorphism is critical to
understanding why the STL is designed the way it is, and why languages
like Java cannot have a library implemented the same way.
The templates, in this case, enforce a different set of
constraints---they statically enforce predicates over the type
system. Something that was lacking in Java until recently. But
this has nothing to do with polymorphism.
I disagree, it has everything to do with polymorphism. If you consider
polymorphism through the narrow lens of inclusional polymorphism then,
sure, it has nothing to do with it.
[...]
The final point then becomes whether we choose to implement the
operational polymorphism through templates (as the STL does) or through
virtual functions. In C++ this becomes an implementation detail, with
one input being how and when the client code uses our classes and
another being which forms of behavioural extension we wish to support.
Since when is something which depends on "how and when the
client code uses our classes" an implementation detail? Using
templates for polymorphism means static resolution---a serious
restriction for the client, which should normally be avoided.
The STL doesn't seem to find this a serious restriction and certainly
hasn't avoided it. Again, if you see polymorphism only as inclusional
polymorphism then your statement makes sense, but polymorphism is a
wider concept that includes other uses of type substitution which are
neatly formalised by the LSP.
It was exactly because I see so many people take the narrow view that
LSP and polymorphism are *only* inclusional polymorphism that I wrote my
original reply.
The main "simple" use of templates is to statically enforce
predicates over the type system: if I want a vector which will
only hold Foo's, and nothing but Foo's, I can use
std::vector<Foo>, and any attempt to insert or extract anything
other than a Foo will fail at compile time.
A template doesn't quite enforce predicates; for that we would need a
language that allowed us to describe those predicates. We cannot say
(syntax made up and certainly appalling):
    template< typename T, typename V >
        { std::numeric_limits< V >::max() >=
          std::numeric_limits< T >::max() * std::numeric_limits< T >::max() }
    V sqr( T );
If we could we would have parametric polymorphism. Instead what we can
say is that for a type it must support a given set of operations, those
operations being the ones we use in the implementation. We cannot
enforce the existence of some other operations that we do not use in
our implementation even if they are part of the formal
design/definition of our class or operation.
This means we are left with this:
    template< typename T >
    T sqr( T t ) {
        return t * t;
    }
Now we have operational polymorphism in that any type T used in this
context must support operator*() - the 'operator' here is coincidental
of course; 'operational' means, slightly more formally, the messages
that the instance understands.
There are very few cases where you can reasonably choose between
templates and inheritance. They do different things, and play
different roles in programming. Any good modern language will
provide both (in some form or another).
Indeed, good languages will, and of course they play different roles.
Here is an example that I hope illustrates better what I mean. Imagine
a system where we have an operation foo that performs some job the
application needs on some object. We know that part of that operation
is going to require the use of a member bar() on the object.
I'm going to say that the design stops here. I don't think either of us
wants to get into a fruitless discussion of where design stops and
implementation starts. If you consider design to go further than this,
fine, but I hope you will translate my language then.
From our design we can implement this in one of two ways. Here is one:
    template< typename T >
    void foo( const T &t ) {
        t.bar();
    }
That was the operational implementation. Here is the inclusional one:
    class BarBase {
    public:
        virtual void bar() const = 0;
    };

    void foo( const BarBase &b ) {
        b.bar();
    }
Which of these you consider most obvious depends on your mindset. I
think that the template function (the operational polymorphism) is
clearest, shortest, has the lowest overhead and is the easiest from the
point of view of extending the implementation of bar() on new classes.
Other considerations (as we will see later) will also make you choose
one over the other.
The way that I have put this, the choice between the two
implementations is an implementation issue rather than a design issue.
You may argue that this is really part of the design, but I can't
imagine a fruitful discussion about where design stops and
implementation starts. This is the sense that I meant in my original
wording.
There is, however, one case where we do not have a choice as to which
approach to use. If we are putting this code into a library then which
we choose is constrained by where foo is called. We are not
constrained, as normally seems to be the case, by how we wish to use
new types with differing implementations of bar(), but by how the
function foo that makes use of bar() is to be used!
If foo is an operation that is used by the library (and where that use
is not implemented as a template) then we *must* translate the
operational polymorphism to inclusional polymorphism because we cannot
leave the type dangling in C++. If foo is used at the application level
(that makes use of the library) then we can use a template and drop the
virtual overhead.
To my way of thinking this is a clear issue of implementation detail
rather than design consideration, but YMMV.
K
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]