Re: mixed-sign arithmetic and auto
Bart van Ingen Schenau wrote:
Walter Bright wrote:
Bart van Ingen Schenau wrote:
On the other hand, I have used a compiler for a DSP that supports
saturating arithmetic (overflow gets clipped to the largest value).
Although the compiler writers decided otherwise, this mode of
arithmetic could have been used for signed operands without any
violation of the standard.
The compiler writers made a wise move. Such an unusual mode could
silently introduce pernicious bugs when porting existing, debugged
code to it.
I don't think everyone would agree with that assessment.
One of the major fields where DSPs get used is audio processing. In
that field, saturation actually gives better results than wraparound.
I agree that makes perfect sense for someone writing code specifically
for that DSP.
But that doesn't help you when you're porting an existing, working code
base to it. Let's say you're porting an MP3 compressor that does heavy
integer manipulation. You have the source code, but have no idea how it
works, nor do you care. You compile it, and it doesn't work. Now what?
You've got a major investment of your time ahead of you.
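To make the failure concrete, here's a contrived sketch of mine (the
checksum function is invented, not from any real MP3 code base) of the
kind of integer-heavy code such a port trips over; it quietly relies
on signed overflow wrapping around:

#include <climits>
#include <iostream>

// Toy checksum in the style of old integer-heavy code: it silently
// assumes signed overflow wraps around (two's complement behavior).
int checksum(const int* data, int n)
{
    int sum = 0;
    for (int i = 0; i < n; ++i)
        sum = sum * 31 + data[i];  // overflows freely; wrap assumed
    return sum;
}

int main()
{
    int samples[] = { INT_MAX, 12345, -987, 42 };

    // On a wraparound machine this reproduces the value the original
    // test suite was built around; on a saturating DSP the sum sticks
    // at INT_MAX and every comparison downstream quietly fails.
    std::cout << checksum(samples, 4) << '\n';
}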
If we were to define the behaviour for overflow of signed integers, I
would much prefer that it be made implementation-defined rather than
that one particular behaviour be chosen.
That doesn't help anyone who is faced with code that breaks when ported,
nor does it help anyone test whether their code depends on such
implementation-defined behavior.
Not only that: if one were to write code that *relied* on that DSP's
behavior, that code would be inherently non-portable, so why must this
be standardized? What advantage is there for anyone?
Since there is no way to defend against such possible errors in one's
code, and since the overwhelming majority of (dare I say all?) compilers
implement it one way, that way should be standardized.
As long as there are niche markets where some other behaviour gives
better results, we should allow the compiler writers the choice of what
they implement.
This is not a law. There's nothing preventing a compiler vendor from
offering non-standard behavior tailored to a particular niche, put
there precisely to aid programmers in that niche. Compiler vendors do
it all the time, in fact.
And if the behaviour is implementation-defined, you can test for the
behaviour that is provided:
#include <climits>   // INT_MAX, INT_MIN
#include <iostream>

int main()
{
    int i = INT_MAX;
    int test = i + 1;  // the overflow whose behaviour we are probing
    if (test == INT_MIN)
    {
        std::cout << "Wraparound on overflow\n";
    }
    else if (test == INT_MAX)
    {
        std::cout << "Saturating arithmetic\n";
    }
    else
    {
        std::cout << "Weird. Is this compiler conforming?\n";
    }
}
I agree it's easy enough to detect what behavior is there, but I contend
it is impractical (or even impossible?) to mechanically detect whether
any particular section of code depends on a particular integer overflow
behavior.
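As a sketch of why (both snippets below are contrived illustrations of
mine, not from any particular code base): the very same overflowing
expression can be a latent bug in one place and a load-bearing
assumption in another, and nothing in the source text distinguishes
them.

#include <climits>
#include <iostream>

// 1. Latent bug: the classic midpoint computation. The author never
//    intended overflow; it simply happens when lo and hi are large.
int midpoint(int lo, int hi)
{
    return (lo + hi) / 2;  // lo + hi can overflow
}

// 2. Deliberate reliance: a hash step that *wants* wraparound, in
//    the style of much existing integer code.
int hash_step(int h, int c)
{
    return h * 131 + c;    // overflow intended to wrap
}

int main()
{
    std::cout << midpoint(INT_MAX - 1, INT_MAX) << '\n';  // bug fires
    std::cout << hash_step(INT_MAX, 7) << '\n';           // wrap wanted
}

No tool can tell from the text alone that the first overflow is an
accident and the second is the whole point.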
I propose that for such types of behavior, it is better for the standard
to standardize it as much as possible, so programmers do not have to
worry about bugs that cannot be detected, or, even worse, about
"portable" solutions that don't work because the programmer has no
machine to test them on. (It was common in the 16-bit days for people to
write code that was "portable" to 32 bits, only to find when 32-bit
machines became available that they'd misunderstood the issues
completely.)
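For instance, here's a reconstruction (mine, not taken from any real
code base) of the sort of 16-bit-era trap I mean. This hash was
believed portable, but its result silently depends on unsigned int
being 16 bits wide, because unsigned arithmetic wraps mod 2^16 there;
recompiled with 32-bit ints it yields different values, and any file
format or table keyed on it breaks:

#include <iostream>

unsigned hash(const char* s)
{
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;  // wraps mod 2^16 vs 2^32
    return h;
}

int main()
{
    std::cout << hash("portable?") << '\n';
}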
--------
Walter Bright
http://www.digitalmars.com
C, C++, D programming language compilers
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]