Re: Size of bool unspecified: why?
On Jul 22, 1:42 am, "JBarleycorn" <jbn...@newsnet.org> wrote:
James Kanze wrote:
On Jul 20, 9:46 pm, "JBarleycorn" <jbn...@newsnet.org> wrote:
James Kanze wrote:
On Jul 16, 6:02 pm, BGB <cr88...@hotmail.com> wrote:
On 7/16/2011 3:33 AM, MikeP wrote:
No. Unspecified means that you cannot know or depend on the
size; it might change from one compilation to the next.
So is the list of unspecified things in the standard very long?
Definitely more than I'd like. The two which seem to cause the
most problems in the code I've seen are the order of evaluation:
std::cout << f() << ' ' << g() << std::endl;
, for example, where both f() and g() use and modify the same
global variable, and whether intermediate values in a floating
point expression are in extended precision or not: the trickiest
case I saw of that was where someone had defined:
bool operator<( MyClass const& lhs, MyClass const& rhs )
{
    return lhs.doubleCalule() < rhs.doubleCalule();
}
and used it as an ordering operator for 'sort'. (The compiler
he was using returned the floating point value in a register
with extended precision, but truncated to double when it spilled
to memory. Which resulted in the function returning true when
lhs and rhs designated the same object.)
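One common defensive rewrite of that comparator forces both results
through memory at double precision before comparing, so the extended
precision in the register can no longer make `x < x` true. (A sketch
of mine, not from the post; `MyClass` here is a stand-in, and
`doubleCalule` is the identifier from the quoted code.)

```cpp
#include <cassert>

// Stand-in for the class from the anecdote.
struct MyClass {
    double value;
    double doubleCalule() const { return value; }
};

bool operator<( MyClass const& lhs, MyClass const& rhs )
{
    // Spilling both results to volatile doubles truncates any extended
    // precision before the comparison, so x < x is reliably false and
    // the strict weak ordering that sort requires is preserved.
    volatile double l = lhs.doubleCalule();
    volatile double r = rhs.doubleCalule();
    return l < r;
}
```

Compiler options such as gcc's -ffloat-store, or generating SSE2 code
instead of x87 code, address the same problem without touching the
source.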
As for data transfer and storage, you treat it
like any other type: you define the external format, and
implement the conversion.
Unnecessary tedium. Use bool though, and it becomes necessary tedium.
Only unnecessary if you don't need to read the data later (in
which case, writing it out in any format is unnecessary tedium).
Of course, normally, all serialization code will be generated by
other programs anyway. The only thing that is hand written is
the low level code to handle the basic types.
If the representation is a single byte, and both the caller
and the callee agree, there's no need for extension. Extension
is only necessary if you have a smaller size, and need to pass a
larger one.
I have a feeling that "most" compilers by default expand all arguments
less than the "word" size to the "word" size. Surely for simplicity of
implementation... stack offsets, etc.
Most compilers do insert padding, since not doing so will either
slow the code down significantly (Intel) or cause the program to
crash (most other processors) because of misaligned data.
Inserting padding is not expanding an argument to word size;
when a compiler inserts padding, the bytes in the padding have
unspecified (and often random) values. If I have something like:
void f( char ch );
f( aChar );
I expect a compiler on an Intel to generate either:
    push aChar
(if `aChar` is correctly aligned), or
    mov al, aChar
    push eax
to pass the argument. Four bytes end up on the stack, but only
one is significant, and read by the called code. (On a Sparc,
of course, arguments are passed in registers, and there are no
byte registers, so you end up with something like:
ldsb aChar, %o0
-- load signed byte into register %o0.)