Re: Alternatives to using virtuals for cross-platform development
On Jun 7, 11:38 pm, greek_bill <greek_b...@yahoo.com> wrote:
I'm interested in developing an application that needs to run on more
than one operating system. Naturally, a lot of the code will be shared
between the various OSs, with OS specific functionality being kept
separate.
I've been going over the various approaches I could follow in order to
implement the OS-specific functionality. The requirements I have are
as follows:
- There is a generic interface that needs to be implemented on each
OS. This is what the client code would call.
- In addition to the common/generic interface, each OS-specific
implementation can expose an OS-specific interface.
The first requirement should be fairly clear. The second one is there
to allow the OS-specific part of one sub-system to use the OS-specific
part of another sub-system (I'm assuming that the two sub-systems know
at design time that they will both be implemented for any given OS).
Probably the most obvious way to go about this is to use an abstract
base class to define the generic interface and this gets subclassed by
concrete, OS-specific implementations. Access to OS-specific
interfaces is then provided by means of a dynamic_ (or even static_?)
cast.
My first worry about this is performance. Some of the functions in the
generic interface will be very small (e.g. int GetID() const ), called
very often, or both.
If the function ends up making a system request, it's doubtful
that the cost of a virtual function call will be measurable. At
any rate, I wouldn't worry about it until I'd actually measured
it, and found it to be a problem.
The other issue that bothers me (and this verges on the
philosophical) is that using virtuals somehow doesn't
feel right. It would feel right if I had one interface and multiple
implementations in the same executable/module (i.e. at run-time). In
my case I'm more interested in one interface and multiple 'compile
time' implementations, so the overhead of the virtual mechanism is
kind of wasted.
That's very philosophical. It's not the sort of thing I'd worry
about.
On the other hand...
So, looking at alternatives. Compile-time polymorphism sounds like a
good candidate. You have a base class defining all your functions,
which are implemented using an OS-specific implementation class. Or,
looking at it a different way, you can use this approach to guarantee
that a class implements an interface, but without using virtuals (pure
or otherwise).
I often use link time polymorphism for this. You have a single
class definition for all systems, in a header file, and
different implementations for different systems. You link in
whichever one is appropriate. (If the class requires member
data, you might have to use the compilation firewall idiom to
avoid dependencies in the header.)
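To make this concrete, here is a minimal sketch of link time
polymorphism (the file names, and GetID() returning the process
id, are just invented for illustration):

    // foo.h -- a single class definition, shared by every system
    class Foo
    {
    public:
        int GetID() const;
    };

    // foo_posix.cpp -- compiled and linked only into the Unix build
    #include "foo.h"
    #include <unistd.h>

    int Foo::GetID() const
    {
        return static_cast<int>(::getpid());
    }

    // foo_win32.cpp would implement the same function using
    // GetCurrentProcessId(); only one of the two .cpp files is
    // ever linked into a given executable.

The client just includes foo.h and calls GetID(); which
implementation he gets is decided entirely by the linker, with
no virtual dispatch involved.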
A quick code snippet to explain the compile-time approach a bit more:
template<class OSImp>
class BaseFoo
{
public:
    void Bar() { static_cast<OSImp*>(this)->Bar(); }
};

class MyFavOSFoo : public BaseFoo<MyFavOSFoo>
{
public:
    void Bar() { /* do some OS specific stuff */ }
    void OSFunc() { /* some OS-specific interface */ }
private:
    // store some OS specific data
};
Now this is more like what I want, I can have multiple
implementations, without any run-time overhead.
There are a couple of problems however (otherwise I wouldn't be here,
would I? :)
The main one is the same as for the virtual functions: you
introduce unnecessary complexity for nothing. Philosophically,
how is it unacceptable to have a base class with only one
derived class, but acceptable to have a template with only one
instantiation? (In fact, there are also times when I use this
solution.)
The client code, which is OS-independent, no longer has a polymorphic
base class that it can just use. I somehow need to use MyFavOSFoo
directly. The most obvious solution that comes to mind (but I'm open
to suggestions) is to have the client code use a class called 'Foo'. I
could then have an OS specific header file that has :
typedef MyFavOSFoo Foo;
The problem then is that the selection of implementation/OS boils down
to #including the right header, which seems very fragile.
Why? Give both headers the same name, put them in different
directories, and select which one by means of a -I option
positioned in the makefile.
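Something along these lines (the directory names are invented):

    // posix/foo.h and win32/foo.h each define the same class Foo,
    // with whatever OS specific members each system needs.  The
    // build then selects one:
    //
    //     g++ -Iposix ...      (Unix build)
    //     cl  /Iwin32 ...      (Windows build)
    //
    // Client code is identical everywhere:
    #include "foo.h"

The source code never changes; only the include path does.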
My personal solution here would be, I think, the common class
definition, with separate implementations, and a function
returning a pointer to a forward declared class with the OS
specific parts, i.e.:
class OSSpecific;

class Unspecific
{
public:
    // the usual generic function declarations...
    OSSpecific* getOSSpecific();
};
Obviously, anyone wanting to use the OS specific stuff would
have to include an additional, OS specific header, but then,
he'd only be doing so if he needed functions which were only
available on a specific OS anyway.
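A sketch of how that might look, assuming the Unspecific class
above lives in unspecific.h (the POSIX names here are purely
illustrative):

    // osspecific_posix.h -- included only by POSIX specific code
    class OSSpecific
    {
    public:
        int getFileDescriptor();    // hypothetical OS-only function
    };

    // client code which knows it is running on POSIX:
    #include "unspecific.h"
    #include "osspecific_posix.h"

    void f(Unspecific& obj)
    {
        int fd = obj.getOSSpecific()->getFileDescriptor();
        // ... use fd with the native API ...
    }

Portable client code never includes osspecific_posix.h, so it
never sees anything beyond the forward declaration.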
Then I started thinking of some less 'common' solutions (aka hacks).
Most I've already discarded, but one that seems to have stuck in my
mind is the following:
Have a 'public' header file defining the public interface. Then have
an OS-specific .h and .cpp which implement the interface defined in
the public .h. i.e.
// public.h
class Foo
{
public:
    void Bar();
    // No data members
};

// private.h
class Foo    // same name
{
public:
    void Bar();
private:
    // Some OS specific data that the client code
    // doesn't need to know about
};
That results in undefined behavior, and would likely cause
crashes and such.
// private.cpp
void Foo::Bar() { /* do some OS specific stuff */ }
(obviously this needs some sort of Create() function, as the client
code can't call new or anything like that)
The client can't include the header, in fact, without incurring
undefined behavior. Just use the compilation firewall idiom in
Foo, and there should be no problem.
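A sketch of the firewall version (FooImpl and the file names are
invented):

    // foo.h -- the only header the client ever sees
    class Foo
    {
    public:
        Foo();
        ~Foo();
        void Bar();
    private:
        class FooImpl* myImpl;  // defined only in the OS specific .cpp
        Foo(Foo const&);            // copying not supported, for brevity
        Foo& operator=(Foo const&);
    };

    // foo_posix.cpp -- one such file per system
    #include "foo.h"

    class FooImpl
    {
    public:
        // OS specific data and helpers...
    };

    Foo::Foo() : myImpl(new FooImpl) {}
    Foo::~Foo() { delete myImpl; }
    void Foo::Bar() { /* use myImpl and the native API */ }

There is only one definition of Foo, so no ODR problems, and the
client can simply write new Foo() himself.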
This does pretty much all I want. I can have multiple compile-time
implementations and there is no runtime overhead. I can put private.h
and private.cpp in a static library and let the linker do its magic.
A couple of problems though:
- I would need to make darned sure that the interface in public.h
matches that in private.h (even the declaration order has to match)
In fact, the token sequence after preprocessing of the entire
class must match, and all names must bind identically. (There
are a very few exceptions.)
[...]
2. Am I trying to use C++ in a way that it was not designed to be
used? For example, C# and Java have the concept of a module (or
package or assembly or whatever) that is a monolithic unit that
exposes some types and interfaces. C++'s answer to this would be a
pure virtual class.
No. C++'s answer is that you have a choice of solutions,
and can use whichever one is best for your application. Java
forces you to use one particular solution, whether it is best or
not.
3. Is there a 'standard' approach to cross-platform development that
I've completely missed?
The most frequent one I've seen is a common header, using the
compilation firewall idiom, and separate implementations. But
I've also seen the abstract base class used.
--
James Kanze (Gabi Software) email: james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34