Re: Does this memory access yield undefined behaviour?
On 1 Apr., 09:17, "Matthias Hofmann" <hofm...@anvil-soft.com> wrote:
Hello everyone,
please have a look at the following code, which is a stripped-down version
of a 24-bit graphics routine I am working on:
int main()
{
    const int size = 12;
    const int width = 4;
    int red = 0x00ff0000;
    int green = 0x0000ff00;
    int blue = 0x000000ff;
    char* pc = new char[size];
    int* pi = ( int* ) pc;
    for ( int i = 0; i < width; i++ )
    {
        *pi &= 0xff000000;
        *pi |= red | green | blue;
        pi = ( int* ) ( ( char* ) pi + 3 );
    }
    delete [] pc;
    return 0;
}
[...]
Now I wonder whether my code yields undefined behaviour.
I don't know whether it qualifies as "undefined behaviour" or just
"implementation-defined". Apart from the obvious endianness issues and
the fact that an int might have only 16 bits, you also have alignment
problems: on some machines a pointer to an int could be restricted to
point at "even addresses" only (for example), and the casts in your
loop produce exactly such misaligned pointers.
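Just to illustrate the alignment point, here is a small sketch. It is
only a sketch: it assumes <cstdint> and its uintptr_t are available,
and it treats sizeof(int) as int's required alignment, which is common
but not guaranteed.

#include <cstdint>    // std::uintptr_t -- assumed to be available
#include <iostream>

int main()
{
    const int size = 12;
    char* pc = new char[size];   // new[] returns suitably aligned storage

    // your loop steps the pointer by 3 bytes, so only the first of
    // these addresses can be relied upon to be aligned for int
    // (with the usual 4-byte ints)
    for ( int i = 0; i < 4; i++ )
    {
        char* p = pc + 3 * i;
        bool ok = reinterpret_cast<std::uintptr_t>( p ) % sizeof( int ) == 0;
        std::cout << "offset " << 3 * i << ": "
                  << ( ok ? "aligned" : "NOT aligned" ) << " for int\n";
    }

    delete [] pc;
    return 0;
}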
What you *can* do is use memcpy:

#include <cstring>   // std::memcpy
#include <vector>

std::vector<unsigned char> blah (12);
unsigned k = 0;
std::memcpy( &k, &blah[0], 3 );   // read the first three bytes into k
provided that "unsigned" has at least 24 bits, a character has exactly
8 bits, and the integer's layout is little-endian (e.g. an x86-based
machine).
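The same trick works in the write direction. Here is a minimal sketch,
under the same assumptions (8-bit chars, little-endian, "unsigned" wide
enough); the colour value is just made up:

#include <cstring>   // std::memcpy
#include <vector>

int main()
{
    std::vector<unsigned char> blah (12);
    unsigned k = 0x00FF8020;   // some 24-bit colour value

    // copy the three low-order bytes of k into the first pixel slot;
    // on a little-endian machine those bytes are 0x20, 0x80, 0xFF
    std::memcpy( &blah[0], &k, 3 );
    return 0;
}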
Check out the <climits> header and its macros UINT_MAX and ULONG_MAX
and pick the first type of {unsigned, unsigned long} where
xxx_MAX >= 16777215 (that is, 2^24-1). According to the standard,
ULONG_MAX is guaranteed to be at least 2^32-1, so you only have to
check UINT_MAX like this:
#include <climits>   // UINT_MAX

#if UINT_MAX >= 16777215
typedef unsigned pixel24_type;
#else
typedef unsigned long pixel24_type;
#endif
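A quick illustration of the typedef in use (the values simply mirror
the red/green/blue constants from your code):

pixel24_type red   = 0xFF0000;
pixel24_type green = 0x00FF00;
pixel24_type blue  = 0x0000FF;
pixel24_type white = red | green | blue;   // 0xFFFFFF -- fits in 24 bits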
If you want your code to be endian-safe you can assemble your 24-bit
ints by hand (see below). If that turns out to be too slow on
"little-endian + 8-bit chars" machines, you can still fall back to
std::memcpy on those machines.
Prefer the use of *unsigned* characters. If you use plain chars and
they turn out to be signed, then you run into implementation-defined
behaviour when writing something like this:

char c = (some_24bit_color_code) & 0xFF;
The rhs could result in a value of, say, 160. Assigning a value that is
outside the range of a signed integer type is implementation-defined,
so in case "char" is signed and has 8 bits, you don't really know what
the value of 'c' is going to be. For *unsigned* target types the
conversion obeys a simple rule: the value is reduced modulo 2^N, where
N is the number of bits in the target.
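A tiny illustration of the difference (assuming 8-bit chars; the
numbers are made up):

unsigned char uc = 417;   // well-defined: 417 modulo 256 == 161
signed char   sc = 417;   // 417 does not fit -- the resulting value
                          // is implementation-defined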
So, assembling your integers would look like this:

inline pixel24_type uchar2pixel(unsigned char* p)
{
    // least significant byte first; widen before shifting so the
    // << 16 is safe even when plain int has only 16 bits
    return pixel24_type(p[0]) | (pixel24_type(p[1]) << 8)
         | (pixel24_type(p[2]) << 16);
}
and if you're ultra-paranoid and fear characters with more than 8
bits:

inline pixel24_type uchar2pixel(unsigned char* p)
{
    // masking with 0xFFu keeps only the low 8 bits of each character
    return (pixel24_type(p[0]) & 0xFFu) | ((pixel24_type(p[1]) & 0xFFu) << 8)
         | ((pixel24_type(p[2]) & 0xFFu) << 16);
}
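Going the other way, a sketch of the matching decomposition
(pixel2uchar is just a name I'm making up here):

inline void pixel2uchar(pixel24_type px, unsigned char* p)
{
    // store the pixel as three bytes, least significant first, so
    // that uchar2pixel(p) gives back px; the masks keep only 8 bits
    // per byte even if unsigned char happens to be wider
    p[0] = static_cast<unsigned char>(  px         & 0xFFu );
    p[1] = static_cast<unsigned char>( (px >>  8)  & 0xFFu );
    p[2] = static_cast<unsigned char>( (px >> 16)  & 0xFFu );
}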
See http://home.att.net/~jackklein/c/inttypes.html
Cheers!
SG
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]