Re: converting char to float (reading binary data from file)
On May 27, 12:07 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 27, 10:04 am, James Kanze <james.ka...@gmail.com> wrote:
On May 26, 8:15 pm, c...@mailvault.com wrote:
On May 22, 1:58 am, James Kanze <james.ka...@gmail.com> wrote:
In Boost 1.35 they've added an optimization to take advantage of
contiguous collections of primitive data types. Here is a copy
of a file that is involved:
// archives stored as native binary - this should be the fastest way
// to archive the state of a group of objects. It makes no attempt to
// convert to any canonical form.
// IN GENERAL, ARCHIVES CREATED WITH THIS CLASS WILL NOT BE READABLE
// ON PLATFORMS APART FROM THE ONE THEY ARE CREATED ON
Where "same platform" here means compiled on the same hardware,
using the same version of the same compiler, and the same
compiler options. If you ever recompile your executable with a
more recent version of the compiler, or with different options,
you may no longer be able to read the data.
In sum, it's an acceptable solution for temporary files within a
single run of the executable, but not for much else.
Modulo what is guaranteed by the compiler/platform ABI, I guess.
Supposing you can trust them to be stable:-). In actual
practice, I've seen plenty of size changes, and I've seen long
and the floating point types change their representation, just
between different versions of the compiler. Not to mention
changes in padding which, at least in some cases depend on
compiler options. (For that matter, on most of the machines I
use, the size of a long depends on compiler options. And is the
sort of option that someone is likely to change in the makefile,
because e.g. they suddenly have to deal with big files.)
In particular, the Boost.Serialization binary format is
primarily used by Boost.MPI (which obviously is a wrapper
around MPI) for interprocess communication. I think the
idea is that the MPI layer will take care of marshalling
between peers and thus resolve any representation differences.
I think that in practice most (but not all) MPI
implementations just assume that peers use the same layout
(i.e. the same CPU/compiler/OS) and simply copy the bytes
back and forth over the network.
In a sense, the distributed program is logically a single run
of the same program, even if in practice it consists of
different processes running on different machines, so your
observation still holds.

If the programs are not running on different machines, what's
the point of marshalling? Just put the objects in shared
memory. Marshalling is only necessary if the data is to be used
in a different place or time (networking or persistence). And a
different place or time means a different machine (sooner or
later, in the case of time).
James Kanze (GABI Software) email:email@example.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34