Re: Type length in bits?
On May 2, 8:13 pm, "Matthias Hofmann" <hofm...@anvil-soft.com> wrote:
"James Kanze" <james.ka...@gmail.com> schrieb im
Newsbeitragnews:ac3bc85f-aa5d-465b-b0c5-83ab1fac78fa@x18g2000yqe.googlegroups.com...
> > On Apr 30, 12:25 am, "Alf P. Steinbach /Usenet"
> > <alf.p.steinbach+use...@gmail.com> wrote:
> > > * Matthias Hofmann, on 29.04.2011 02:55:
> > > > So if I understand this correctly, for types "char" and
> > > > "signed char", there may be bit patterns that do not
> > > > represent numbers, unless "char" is an unsigned type?
> > > Right.
> > Not for plain char. In C++. (This is, I think, a difference
> > between C and C++. C allows trapping bit patterns in plain
> > char. Not that it makes a difference in practice.)
> Yes, it does make a difference.
Not in practice, since no known implementation has ever had
trapping bit patterns in plain char. All known implementations
that don't use a straightforward 2's complement representation
make plain char unsigned.
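(For anyone who wants to see what a given implementation actually
does, a minimal sketch like the following will report the usual
parameters. It cannot detect trap representations, of course; it
only shows the width and signedness of plain char.)

#include <climits>
#include <iostream>
#include <limits>

int main()
{
    // Report how plain char behaves on this implementation.
    std::cout << "CHAR_BIT = " << CHAR_BIT << '\n'
              << "plain char is "
              << ( std::numeric_limits<char>::is_signed
                       ? "signed" : "unsigned" ) << '\n'
              << "CHAR_MIN = " << CHAR_MIN
              << ", CHAR_MAX = " << CHAR_MAX << '\n';
    return 0;
}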
> Assuming that the standard allows a conversion from a pointer
> to any object type to void* and back to char*, the following
> two utility functions are legal:
>
> // Converts a pointer of any non-const
> // type to a non-const char pointer.
> inline char* char_ptr( void* p ) throw()
> { return static_cast<char*>( p ); }
>
> // Converts a pointer of any constant
> // type to a constant char pointer.
> inline const char* char_ptr( const void* p ) throw()
> { return static_cast<const char*>( p ); }
>
> But what if the value of one byte accessed through such a char
> pointer is a trap representation? Then iterating through the
> bytes of the underlying object may cause the program to crash!
You can imagine implementations where such things might not
work. In practice, they don't exist.
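To make the scenario concrete, the kind of byte iteration in
question looks roughly like this. It repeats the char_ptr helper
from above; the struct is just an arbitrary sample type, not
anything from the thread. On every implementation I'm aware of,
this prints the object representation without any trouble.

#include <cstddef>
#include <cstdio>

inline const char* char_ptr( const void* p ) throw()
{ return static_cast<const char*>( p ); }

struct Sample { int i; double d; };  // arbitrary example object

// Walk the bytes of an object through a plain char pointer.
void dump_bytes( const void* p, std::size_t n )
{
    const char* bytes = char_ptr( p );
    for ( std::size_t i = 0; i < n; ++i ) {
        // Mask off any sign expansion that occurs on implementations
        // where plain char is signed.
        std::printf( "%02x ", static_cast<unsigned>( bytes[ i ] ) & 0xFFu );
    }
    std::printf( "\n" );
}

int main()
{
    Sample s = { 42, 3.14 };
    dump_bytes( &s, sizeof s );
    return 0;
}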
> The answer seems to be using a pointer to unsigned char
> instead of plain char, but there is a problem with that, too:
> 3.9.2/4 guarantees a char* to have the same representation as
> a void*, but it does not give such a guarantee for unsigned
> char*.
That's probably an oversight. In general, the pointer
representations for the corresponding signed and unsigned types
should be the same. (In practice, they will be the same.)
> So how do you implement these two utility functions above?
In practice, they're fine as they stand (although I prefer
unsigned char).
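For completeness, the unsigned char flavour would look like this.
(The name byte_ptr is mine; a different name is needed if both sets
coexist, since functions cannot be overloaded on the return type
alone. Otherwise it is the same idea with a different byte type.)

// Converts a pointer of any non-const
// type to a non-const unsigned char pointer.
inline unsigned char* byte_ptr( void* p ) throw()
{ return static_cast<unsigned char*>( p ); }

// Converts a pointer of any constant
// type to a constant unsigned char pointer.
inline const unsigned char* byte_ptr( const void* p ) throw()
{ return static_cast<const unsigned char*>( p ); }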
> If you use a plain char*, then you may have problems with bit
> patterns or sign expansion errors, and if you use an unsigned
> char*, then you have object representation problems.
In which real implementations?
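(The sign expansion part, at least, is easy to demonstrate on the
many implementations where plain char is signed; trap
representations are a separate question. A minimal sketch:)

#include <iostream>

int main()
{
    unsigned char byte = 0xFF;

    // Converting the byte through plain char sign-expands it on
    // implementations where plain char is signed (the exact value of
    // c is implementation-defined, but -1 is what you see in practice
    // on 2's complement machines); going through unsigned char always
    // yields 255.
    char c = static_cast<char>( byte );
    std::cout << static_cast<int>( c ) << '\n'      // typically -1
              << static_cast<int>( byte ) << '\n';  // always 255
    return 0;
}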
> But if C++ in fact does not allow trapping bit patterns, then
> the problem is solved and plain char should be used! So could
> you please refer me to the section of the standard where I can
> find such a guarantee?
The guarantee doesn't come from the standard; it comes from the
fact that compiler implementers want their compiler to be used. If
unsigned char* had a different representation than char* (and I
can't even imagine an architecture where that would be the case),
or if plain char actually had trapping representations (easily
avoided by making plain char unsigned), then the compiler wouldn't
be used.
--
James Kanze
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]