Re: Effective streaming questions
On Sep 3, 10:29 am, Alexander Adam <cont...@emiasys.com> wrote:
> I've got a few (more generic) questions for the C++ experts here.
> 1. I am trying to store any kind of data (custom PODs, numbers,
> strings, etc.) in some kind of stream consisting of an unsigned
> char* array which gets resized whenever needed. Writing and reading
> are done with a simple memcpy(). Is this the most efficient method,
> considering that a single stream might include up to 10 different
> data values, and that for each read/write (which happens very
> often, especially the reading) I'll do such a memcpy? Or is there
> a better, more general approach? I am talking specifically about
> a rendering tree storage structure for 2D graphics.
I'm not too sure what you mean with regards to streaming here.
What is the purpose of using an unsigned char*, unless you're
going to write and read it to some external source (disk or
network)? And in that case, memcpy doesn't work: there's
absolutely no guarantee that another program can read what
you've written.
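If the bytes are ever meant to leave the process, the portable approach is to serialize each value in a fixed byte order, rather than memcpy'ing its in-memory representation. A minimal sketch for a 32-bit value (the helper names writeU32/readU32 are my own):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helpers: serialize a 32-bit value in a fixed (big-endian)
// byte order, so any reader can decode it regardless of the writing
// machine's endianness or padding. memcpy'ing the raw object would bake
// this machine's representation into the stream.
void writeU32(std::vector<unsigned char>& buf, std::uint32_t v) {
    buf.push_back(static_cast<unsigned char>((v >> 24) & 0xFF));
    buf.push_back(static_cast<unsigned char>((v >> 16) & 0xFF));
    buf.push_back(static_cast<unsigned char>((v >> 8) & 0xFF));
    buf.push_back(static_cast<unsigned char>(v & 0xFF));
}

std::uint32_t readU32(const unsigned char* p) {
    return (std::uint32_t(p[0]) << 24) | (std::uint32_t(p[1]) << 16)
         | (std::uint32_t(p[2]) << 8)  |  std::uint32_t(p[3]);
}
```

For a purely in-process buffer, memcpy of trivially copyable types is fine, and usually fast; the byte-order discipline only matters once the bytes cross a machine boundary.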
> 2. What's the fastest and most efficient way to compress /
> uncompress the mentioned streams in memory?
Using a fast and efficient compression algorithm (which may
depend on the actual data types).
Note that if you're doing any serious compression downstream,
the time it takes to serialize (your point 1, above) will be
negligible. You might even want to consider a text format.
(The one time I did this, I output a highly redundant text
format, piped to gzip.)
> 3. I am using a custom stream implementation for reading which can
> also do Unicode conversion on the fly, etc. The problem is this:
> first, I am using a virtual base class which defines a virtual
> ReadWChar() function. Next, I call this function for the whole
> stream, once per character, which can result in thousands of calls
> at a time. Now, reading is not so much of a problem, as I am using
> a buffer underneath, but I am more concerned about the overhead of
> the function call. Any thoughts on that?
It depends on the system and the compiler. I do this all the
time, but I suspect that it could make a measurable difference
on some machines, in some specific cases, but not very often.
Note too that some (regrettably very few) compilers will use
profiling data to eliminate critical virtual function calls,
replacing them with inline versions of the function if they find
that the same function is actually called most of the time.
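The usual way to keep the buffering benefit without a virtual call per character is the std::streambuf idiom: a non-virtual inline fast path that only dispatches virtually when the buffer is exhausted. A sketch, with class names of my own invention:

```cpp
#include <cstddef>

// Sketch of the std::streambuf idiom: get() is a non-virtual inline
// fast path; the virtual refill() is only reached when the buffer is
// exhausted, so the virtual-call cost is paid once per buffer, not
// once per character.
class CharSource {
public:
    virtual ~CharSource() {}
    int get() { return cur_ != end_ ? *cur_++ : refill(); }
protected:
    void setBuffer(const unsigned char* begin, const unsigned char* end) {
        cur_ = begin;
        end_ = end;
    }
private:
    virtual int refill() = 0;  // replenish the buffer; -1 on end of input
    const unsigned char* cur_ = nullptr;
    const unsigned char* end_ = nullptr;
};

// A trivial source that serves a fixed memory block as one buffer.
class MemorySource : public CharSource {
public:
    MemorySource(const unsigned char* p, std::size_t n)
        : data_(p), size_(n) {}
private:
    int refill() override {
        if (used_) return -1;  // input already consumed
        used_ = true;
        setBuffer(data_, data_ + size_);
        return get();
    }
    const unsigned char* data_;
    std::size_t size_;
    bool used_ = false;
};
```

This is exactly how std::istream/std::streambuf split the work: the formatted layer stays cheap per character, and the virtual hook only runs at buffer boundaries.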
> 4. What's the general performance overhead of virtual
> classes / functions?
Compared to what? On the machines I usually use (Intel, AMD and
Sparc), an implementation using virtual functions is no more
expensive than one using switches or other mechanisms, but from
what I understand, this isn't true on all machines.
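For concreteness, these are the two mechanisms being compared (the types here are purely illustrative): an indirect call through the vtable, versus a switch on an explicit type tag.

```cpp
// Virtual dispatch: one indirect call through the vtable.
struct Shape {
    virtual ~Shape() {}
    virtual int sides() const = 0;
};
struct Triangle : Shape { int sides() const override { return 3; } };
struct Square   : Shape { int sides() const override { return 4; } };

// Hand-rolled dispatch: a switch on an explicit type tag.
enum Kind { TRIANGLE, SQUARE };
int sidesSwitch(Kind k) {
    switch (k) {
        case TRIANGLE: return 3;
        case SQUARE:   return 4;
    }
    return 0;
}
```

The generated code differs (indirect call versus jump table or branches), but as the answer says, which one wins depends on the machine and compiler; measure on your own configuration before choosing.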
> To be honest, I am already scared of declaring a virtual
> destructor, as I fear that all of the class's function calls
> will then perform worse. Or is that simply not true?
That's simply not true. Declaring the destructor virtual may
have a very small impact on destructing the object (but
typically not measurable), but I know of no compiler where it
will have any impact on anything else.
> 5. Are there any updated performance documents out there giving
> some insights into what you can take care of from the beginning
> in modern C++ programming?
Probably, but... performance depends enormously on the
individual machine and compiler. What's true for one
configuration won't be true for another. From experience, the
most important thing you can do to ensure sufficient performance
is to enforce rigorous encapsulation. That way, once you've
profiled, and found what has to be changed in your code, for
your configuration, you can change it easily, without having to
rewrite the entire program.
James Kanze (GABI Software) email:email@example.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34