Re: Enum bitfield (Visual Studio Bug or not?)
Chris Morley wrote:
>> In my copy of C++03, there is also:
>>
>> 9.6/4 "If the value true or false is stored into a bit-field of type
>> bool of any size (including a one bit bit-field), the original bool
>> value and the value of the bit-field shall compare equal. If the
>> value of an enumerator is stored into a bit-field of the same
>> enumeration type and the number of bits in the bit-field is large
>> enough to hold all the values of the enumeration type, the original
>> enumerator value and the value of the bit-field shall compare equal.
>> [Example:
>>   enum BOOL { FALSE=0, TRUE=1 };
>>   struct A {
>>     BOOL b:1;
>>   };
>>   A a;
>>   void f() {
>>     a.b = TRUE;
>>     if (a.b == TRUE)  // shall yield true
>>       { /* ... */ }
>>   }
>> --end example]"
>> (I have applied the resolution of DR 436 to the quoted example:
>> http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#436)
> I simply don't find the 9.6/4 argument compelling as an explanation of
> why GCC & Intel "work" with this example. I maintain that GCC and
> Intel work with this example because they define enum BOOL as an
> unsigned type (documented by Intel), not because of 9.6/4.
> (Again, I interpret 9.6/4 as requiring enough bits to hold the values
> of the underlying type of the enum.)
Did you also read DR 58, which specifies exactly how to determine the
number of bits that is enough to hold all the values of an enumeration
type?
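
For illustration, here is a small sketch of that calculation (the
helper function is mine, not anything from the standard). Per 7.2/6,
the values of an enumeration with no negative enumerators run from 0 up
to the smallest value of the form 2^M - 1 that is >= the largest
enumerator, and DR 58 then requires M bits to be enough:

#include <cassert>

// Sketch only: the number of bits a bit-field needs to hold every
// value of an enumeration whose enumerators span [0, emax] (i.e. no
// negative enumerators), following C++03 7.2/6 plus DR 58.
unsigned bits_needed(unsigned emax)
{
    unsigned bits = 1;                // even { E0 = 0 } needs one bit
    while ((1u << bits) - 1u < emax)  // smallest M with 2^M - 1 >= emax
        ++bits;
    return bits;
}

int main()
{
    assert(bits_needed(1) == 1);   // enum BOOL { FALSE=0, TRUE=1 }
    assert(bits_needed(60) == 6);  // enumerators 0..60 fit in 6 bits
    return 0;
}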
> If yours and others' interpretation of the meaning and intent behind
> 9.6/4 is correct, then I think the standard has taken a wrong turn
> here.
That might be so, but it does not change the actual requirements of the
standard. And with DR 58 incorporated, I don't see any room left for
interpretation.
> Consider a normal old-style bitfield example:
>
>   #include <iostream>
>
>   union U {
>     int Word;
>     struct {
>       int bit : 1;

Warning: it is implementation-defined whether this is a signed or an
unsigned bit-field (see 9.6/3).
Warning: the allocation of the bit-field within the object is
implementation-defined (see 9.6/1).

>     };
>   };
>
>   int main() {
>     U x;
>     x.Word = 1;
>     std::cout << x.bit;  // prints -1
>     x.bit = 1;           // not even a warning from GCC
>     std::cout << x.bit;  // prints -1
>   }
>
> Prints -1-1 with gcc (and hopefully all compilers) because the
> bitfield has a meaning inherited from C: "bit" has type int, yet is
> truncated to 1 bit of storage. This is what bitfields mean
> historically.
Then you have a very inaccurate view of all the variation that is
permitted within bit-fields.
Your example can just as well print 01 and the compiler would still be
fully conforming.
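
As a variant of your example (my rewrite, not a quote), declaring the
bit-field explicitly unsigned at least pins down the signedness; the
placement question remains open, and strictly speaking reading a union
member other than the one last written is not sanctioned either:

#include <iostream>

union U {
    int Word;
    struct {                   // anonymous struct: a common compiler
        unsigned int bit : 1;  // extension, kept from your example
    };
};

int main() {
    U x;
    x.bit = 1;
    std::cout << x.bit;  // guaranteed 1: unsigned int:1 holds 0 and 1
    x.Word = 1;
    std::cout << x.bit;  // still implementation-defined: whether "bit"
                         // overlaps the least significant bit of Word
                         // is up to the compiler (9.6/1)
    return 0;
}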
> An enum which has no negative integers in it:
>
>   // Only one enumerator used so far; the project will expand later.
>   // Used the C++0x extension to resolve any ambiguity over the base
>   // type.
>   enum foo : signed int { i };
>
>   foo a : 1;  // but wait! what is the compiler to do? Is foo:1 signed
>               // or unsigned? Guess?
>
> Your suggestion is that foo:1 is essentially a new type, which the
> compiler chooses depending on the contents of the foo enumeration, and
> with it comes a new set of rules for comparisons outside the norm.
> 9.6/4 should not create a new type here, as that flies in the face of
> all C/C++ variable declarations. "a" is a "foo" with 1 bit of storage
> is how someone new to this, with a proper understanding of bitfields,
> will interpret it - not "a" is a new type that depends on the
> definition of foo, with a new set of rules to go with it.
Let's turn the question around: I have an embedded device with a
keypad. The keypad driver sends messages that are encoded as

  enum e_key { /* several key IDs, */ e_key_last };

  struct payload {
      bool  press      : 1;
      bool  long_press : 1;
      e_key key        : 6;
  };

Is this structure definition sufficient to report presses/releases of
all 60 different keys on the keypad? The compiler in question claims
C++03 conformance and does _not_ support the extension of specifying a
base type for enumerations.
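
Under DR 58 the answer is yes: with 60 keys the enumerator values run
from 0 to e_key_last = 60, the smallest 2^M - 1 at or above that is 63,
and so 6 value bits suffice, with no sign bit required. If you want the
compiler to verify this, a classic C++03-era compile-time assertion
works (the macro is my own sketch, not anything standard):

// Build fails via a negative array size if the condition is false.
#define STATIC_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

enum e_key { /* several key IDs, */ e_key_last = 60 };  // 60 taken
                                                        // from the text

struct payload {
    bool  press      : 1;
    bool  long_press : 1;
    e_key key        : 6;
};

// 6 bits hold the values 0..63; DR 58 counts value bits only.
STATIC_ASSERT(e_key_last <= 63, e_key_fits_in_6_bits);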
> I say that it should behave like an "int:1" for consistency with the
> rest of the language when the enum type is int. It must then be
> correctly promoted to foo (int) for any and all uses, including
> comparisons, using the normal mechanisms of the language.
By focussing so much on the underlying type, you are forgetting a very
important distinction between an enumeration and plain int: With the
enumeration, the programmer specifies which range of values is going to
be used and the compiler (and programmer) can use that knowledge to
make optimisations.
For the int type, the compiler does not have that kind of information,
so it must assume that the entire allowed range will be used.
On the other hand, the programmer does not know whether the compiler
will use a signed or an unsigned underlying type for the enumeration,
so there has to be some specification that tells him how many bits are
needed, at a minimum, to store all the enumeration values.
The only kind of contract that can decide this between a programmer and
a host of compilers is an international standard.
> I can think of all sorts of not-too-contrived examples of packing and
> unpacking structures where program bugs will occur if the compiler has
> to guess the type instead of using the defined type, which is explicit
> (e.g. compatibility between rx & tx on different systems where the
> enums' internals haven't been matched EXACTLY). The type is explicit;
> the compiler should use this explicit type. Where else in the language
> does a type which has been explicitly defined morph into something
> else depending on its values?
There is no need for the compilers to guess at all. The type is
explicit (foo), but the internal representation of the type *is not*.
You should not rely too much on the knowledge that your current
compiler prefers a signed underlying type for enumerations. The next
compiler might prefer unsigned.
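
You can even ask a compiler what it chose. A small sketch using the
C++0x type traits (requires a compiler shipping <type_traits>; the
output is an implementation choice and may differ between compilers):

#include <iostream>
#include <type_traits>  // C++0x/C++11

enum foo { i };  // no underlying type specified

int main() {
    std::cout << "underlying type of foo is signed: "
              << std::is_signed<std::underlying_type<foo>::type>::value
              << "\n";
    std::cout << "sizeof(foo) = " << sizeof(foo) << "\n";
    return 0;
}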
> Consistency with the existing behaviour of bitfields is essential.
Consistency is good, but not surprising your users is also good.
I find it surprising that I need an extra bit for my 'unsigned'
enumerations, due to some implementation detail that I don't want to be
bothered with.
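
To make that concrete, here is what the two readings demand for a
two-valued enumeration (my example):

enum flag { off = 0, on = 1 };  // values 0..1: one value bit

struct per_dr58 {
    flag f : 1;  // enough under 9.6/4 + DR 58: one bit holds 0..1
};

struct per_underlying_type {
    flag f : 2;  // what the "width of the underlying type decides"
                 // reading would demand if the implementation picks a
                 // signed type: one value bit plus a sign bit that no
                 // enumerator ever uses
};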
> Therefore the example in the standard should have undefined behaviour.
Don't hold your breath waiting for that to happen.
The committee has already once affirmed that the behaviour in the
example is defined and as required.
> For me (with the C++0x extension) it must read:
>
>   enum BOOL : unsigned int { FALSE=0, TRUE=1 };
>
> to have well-defined behaviour according to the standard.
>
> Do other people agree or disagree that an enum type should follow the
> same rules as any other type in a bitfield?
>
> Looks like I am out on a limb here, so that is my last word. :)
>
> Chris
Bart v Ingen Schenau
--
a.c.l.l.c-c++ FAQ: http://www.comeaucomputing.com/learn/faq
c.l.c FAQ: http://c-faq.com/
c.l.c++ FAQ: http://www.parashift.com/c++-faq-lite/