Re: Style question: Use always signed integers or not?
* Kai-Uwe Bux:
[snip]

> Jerry Coffin wrote:
>> The fact is that I neither said nor implied that you could not
>> portably detect "overflow" (i.e. wraparound on an unsigned type).
>
> Really? This is from upthread:
>
> Jerry Coffin wrote:
>> In article <Xns9AB7179786602nobodyebiee@216.196.97.131>,
>> nobody@ebi.ee says...
>> [ ... ]
>>> AFAIK unsigned arithmetic is specified exactly by the standard.
>>> This means for example that a debug implementation cannot detect
>>> and assert on the overflow, but has to produce the wrapped result
>>> instead.
>>
>> Unsigned arithmetic is defined in all but one respect: the sizes of
>> the integer types. IOW, you know how overflow is handled when it
>> does happen, but you don't (portably) know when it'll happen.
>
> Now, if somebody impersonated you, I apologize.
>
> Whoever made the statement
>
>   "IOW, you know how overflow is handled when it does happen, but
>   you don't (portably) know when it'll happen."
>
> was mistaken. This is the statement I responded to. Snipping it away
> over and over again will not change that.
Actually, Jerry is right, and except for misattributing an opinion to
him (misinterpreting his statement), so are you.
We do know how overflow of an unsigned type is handled, because that's
prescribed by the standard: arithmetic modulo 2^n, where n is the
number of bits in the type's value representation.
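
For example, the following is guaranteed to print 0 on every
conforming implementation (a minimal illustration; only the value of
max() itself varies between implementations):

  #include <iostream>
  #include <limits>

  int main()
  {
      unsigned int x = std::numeric_limits<unsigned int>::max();
      ++x;                    // Reduced modulo 2^n: wraps to zero.
                              // Well-defined behavior, not UB.
      std::cout << x << "\n"; // Prints 0.
  }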
We can't easily and portably detect such overflows, or wrap-arounds,
except for special-case operations such as addition, where detection
is trivial.
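
For addition a single comparison suffices, because the mathematically
correct sum exceeds the type's maximum exactly when the wrapped result
is smaller than either operand (a sketch; the helper name is my own):

  // True iff a + b wraps around, i.e. the true sum doesn't fit.
  // The check itself uses only well-defined unsigned arithmetic.
  inline bool add_wraps(unsigned int a, unsigned int b)
  {
      return a + b < a;
  }

Multiplication is already less convenient: there you need something
like a != 0 && (a * b) / a != b, i.e. a division per check.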
Cheers, & hth.,
- Alf
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?