Re: Enum bitfield (Visual Studio Bug or not?)
On Sep 12, 3:28 am, "Chris Morley" <chris.mor...@lineone.net> wrote:
[snipping 9.6/4]
I simply don't find the 9.6/4 argument compelling as an explanation of why GCC & Intel "work"
with this example. I maintain that GCC and Intel work with this
example because they define enum BOOL as an unsigned type (documented by Intel),
not because of 9.6/4.
Please read 9.6/4 again. I do not see how we are disagreeing on the
interpretation of this portion of the standard, so I'll spell it out
quite anally.
1- The standard guarantees the behavior shown in its own example: an
enumerator value, stored into a bit-field of the same enumeration type
whose width is large enough to hold all the values of that enumeration,
shall compare equal to that enumerator when read back.
2- The example in the op's post is effectively the same as the example
in 9.6/4 (see the sketch below).
3- Visual Studio does not follow the behavior outlined in the
standard for the op's example.
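For reference, here's a minimal compilable sketch along the lines of the
example in 9.6/4 and the op's code (the identifiers are mine, not a verbatim
quote of either):

#include <iostream>

enum BOOL { FALSE = 0, TRUE = 1 };

struct A {
    BOOL b : 1;   // one bit is enough to hold the values 0 and 1
};

int main() {
    A a;
    a.b = TRUE;
    // Per 9.6/4 this comparison shall yield true; the complaint is that
    // MSVC sign-extends the one-bit field and reports "NOT equal".
    std::cout << (a.b == TRUE ? "equal" : "NOT equal") << '\n';
}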
The only ambiguous part IMHO is
"[if] the number of bits in the bit-field is large enough to hold all
the values of the enumeration type,".
I read this as saying (there is a small sketch of the counting after this
list):
* If no enumerator is negative, it is the number of bits needed to hold
the greatest value in plain binary.
* If some enumerator is negative, it is the larger of
*** the number of bits required for the greatest value plus a leading 0
sign bit (in plain binary that is 1 + the position of the most
significant 1, counting the least significant bit as position 1), and
*** the number of bits required for the least (most negative) value in
two's complement form, including its leading 1 sign bit (that is 1 + the
position of the most significant 0 in its two's complement pattern,
again counting from the least significant bit).
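To make that counting rule concrete, here's a small sketch of my own (not
from the standard, just an illustration of the reading above) that computes
the width needed for a given range of enumerator values:

#include <iostream>

// Width of v in plain binary: position of the most significant 1,
// counting the least significant bit as position 1.  Zero needs none.
static int width(unsigned long v) {
    int n = 0;
    while (v != 0) { ++n; v >>= 1; }
    return n;
}

// Bits a bit-field needs to hold every value in [lo, hi] under the
// interpretation above (assumes hi >= 0, which covers the cases here).
static int bits_for_range(long lo, long hi) {
    if (lo >= 0)
        return width((unsigned long)hi);                  // no sign bit needed
    int for_hi = width((unsigned long)hi) + 1;            // value bits + a 0 sign bit
    int for_lo = width((unsigned long)(-(lo + 1))) + 1;   // two's complement width of lo
    return for_hi > for_lo ? for_hi : for_lo;
}

int main() {
    std::cout << bits_for_range(0, 1)  << '\n';   // 1: enum { FALSE, TRUE }
    std::cout << bits_for_range(-1, 1) << '\n';   // 2: two signed bits hold -2..1
    std::cout << bits_for_range(-4, 3) << '\n';   // 3: three signed bits hold -4..3
}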
(Again, I interpret 9.6/4 as requiring enough bits to hold the values as
represented in the underlying type of the enum.)
If your and others' interpretation of the meaning and intent behind 9.6/4
is correct, then I think the standard has taken a wrong turn here.
Consider a normal old-style bit-field example:
#include <iostream>

union U {
    int Word;
    struct {
        int bit : 1;
    };
};

int main() {
    U x;
    x.Word = 1;
    std::cout << x.bit;   // prints -1
    x.bit = 1;            // not even a warning from GCC
    std::cout << x.bit;   // prints -1
}
Prints -1-1 with gcc (and hopefully all compilers), because the bit-field
keeps the meaning inherited from C: "bit" has type int yet is truncated to
1 bit of storage. This is what bit-fields have meant historically.
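(As a contrast of my own, not from the earlier posts: make the field
explicitly unsigned and the surprise goes away, the same program then
prints 11.)

#include <iostream>

union U2 {
    int Word;
    struct {
        unsigned int bit : 1;   // same single bit, but now explicitly unsigned
    };
};

int main() {
    U2 x;
    x.Word = 1;
    std::cout << x.bit;   // prints 1
    x.bit = 1;
    std::cout << x.bit;   // prints 1
}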
An enum which has no negative enumerators in it:

enum foo : signed int { i };   // only one enumerator used so far... the project will expand later

(I've used the C++0x extension to resolve any ambiguity over the base type.)

foo a : 1;   // but wait! what is the compiler to do?? Is foo:1 signed or unsigned?? Guess??
Your bit-field example explicitly has its type as signed. Thus it
should print -1.
However, in the standard's example and in the op's code, the underlying
type is not spelled out explicitly; the type of the bit-field is an enum
type. The standard in other sections says that the
compiler is free to choose any underlying type for the enum as long as
it's big enough. However, I read 9.6/4 as effectively providing
additional requirements on the underlying type of the enum. In effect,
9.6/4 requires the implementation to use signed or unsigned, whichever
makes the required comparison work.
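To put that reading into code (my own sketch, not the op's exact example):
take an enum whose values fit in a two-bit field only if the field behaves
as unsigned:

#include <iostream>

enum E { A = 0, B = 3 };   // 0..3 fit in 2 bits only without a sign bit

struct S {
    E e : 2;   // "large enough to hold all the values" only if treated as unsigned
};

int main() {
    S s;
    s.e = B;
    // Under the reading above, 9.6/4 obliges the implementation to make
    // this comparison succeed, i.e. to treat the field as unsigned here.
    // An implementation that sign-extends the field reads back -1 and the
    // comparison fails, which is the behaviour being complained about.
    std::cout << (s.e == B ? "equal" : "NOT equal") << '\n';
}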
[Snipping the rest]
If you accept that 9.6/4 further restricts the compiler to use
unsigned or signed for the underlying type of enums, or at least for
the underlying type of bit-fields of enum type in this corner case,
then I think everything works and makes sense.
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]