Re: mixed-sign arithmetic and auto
James Dennett wrote:
It allows for implementations which diagnose all overflows;
Are there any such implementations?
I don't know. Even if there aren't, the standard allows for one
in future. The C++ market has been around a long time, and will
continue to be significant for a long time yet, and a standard
that chooses too often to fix things when it could allow for
better QoI is not a good standard.
Yes, C++ has been around for 20+ years. If a particular characteristic
of C++ compilers, allowed for by the standard, and not too hard to
implement, has not emerged in that time, then I argue there is no
significant interest in it. And so allowing for it in the standard is
not compelling, particularly if there are costs associated with allowing it.
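For concreteness, "diagnose all overflows" amounts to wrapping every signed
operation in a check roughly like this (an illustrative sketch only, not any
existing compiler's output):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

// Illustrative sketch: an implementation that diagnoses signed overflow
// would in effect perform this test for every int addition.
int checked_add(int a, int b)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b)) {
        fprintf(stderr, "signed integer overflow in add\n");
        abort();            // diagnose, rather than wrap or optimize
    }
    return a + b;
}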
There's also something
like -fwrapv which disables those optimizations at some cost in
code speed (but the cost is <<< 1% for typical application code).
An optimization which breaks code and offers <<< 1% improvement is a bad
idea to implement, even if allowed by the standard.
Maybe what's needed is a
"simple" mode where non-obvious optimizations are disabled and
code runs more slowly, and a "strict" mode where the optimizer
is allowed to do anything consistent with the language spec.
The siren song of more compiler switches should be resisted as much as
possible. Put wax in your ears and rope yourself to the mast :-)
I'll mention again: turning an overflow into wraparound is
not generally safe -- code which assumes no overflow is broken
in either case.
The code this optimization breaks is not always incorrect code. See the
link I posted in my reply to Andrei.
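A sketch of the kind of check that gets broken (illustrative; not quoted from
that link):

#include <limits.h>
#include <stdio.h>

// Intended: detect overflow of a + b (for b > 0) by relying on two's
// complement wraparound. Because signed overflow is undefined behavior,
// an optimizer is free to assume a + b cannot wrap, rewrite the test as
// b < 0, and the check silently disappears.
int add_overflows(int a, int b)
{
    return a + b < a;
}

int main()
{
    // May print 1 (check survives) or 0 (check optimized away),
    // depending on the compiler and on flags such as -fwrapv.
    printf("%d\n", add_overflows(INT_MAX, 1));
    return 0;
}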
I'd prefer to have code that can be made safe.
Leaving it as UB doesn't help at all with that.
But it's a meta-solution: it allows implementations to offer
a choice of solutions, and for the market (rather than a BDFL
or a committee) to determine which is most useful.
But that means you're encouraging *reliance* on undefined behavior.
There aren't very many "good" C++ programmers, then <g>.
There's sadly a shortage of competent programmers, and it's
somewhat independent of the implementation language.
The bar for competence in C++ is a lot higher than for other languages.
That's pretty marvy if you're a top C++ expert and can command $5000/day
in consulting fees, but it stinks if you're on the other side of that
having to write the checks. Hence the demand for languages where the
costs are lower, a demand that I think C++ dismisses a little too easily.
The very uniformity and guarantees
that Win16 (and later Win32) laid down caused portability
problems to later generations of machines; the flexibility
that was built into Unix-style specifications aided that
same portability.
My point was that 16-bit programs *tried* to be portable to 32 bits. All
those typedefs in windows.h were there for a reason.
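Roughly the sort of thing they provided (a from-memory sketch, not quoted
from any actual windows.h):

/* The names stayed stable while the underlying widths could change
   between Win16 and Win32 targets. */
typedef unsigned short WORD;     /* 16 bits on both */
typedef unsigned long  DWORD;    /* 32 bits */
typedef unsigned int   UINT;     /* 16 bits on Win16, 32 bits on Win32 */

Code written against those names had at least a fighting chance of
recompiling when the word size grew.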
It stinks to use "int" if it's not going to be as fast as "int64_t"
in some context, certainly. And how am I to get the fastest type
for operations that could be done in 16 bits? int_fast16_t, or...
I have to guess with D whether to use short or int, and profile on
every platform? If I say that I need a type of exactly 32 bits, I'm
overspecifying the size while underspecifying optimization goals
(for speed or for space).
You can still use typedefs in D for your various tradeoffs. Like I said
in another post, D approaches this from the opposite direction to C++:
C++: int sizes variable, typedef sizes fixed
D: int sizes fixed, typedef sizes variable
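In C++ terms, that first line looks something like this (a sketch using the
C99 <stdint.h> names):

#include <stdint.h>

/* C++ side of the comparison: the built-in type floats with the
   platform, the typedefs pin down a size or an optimization goal. */
int            a;    /* at least 16 bits; 32 on most current targets */
int32_t        b;    /* exactly 32 bits */
int_fast16_t   c;    /* "fastest" type of at least 16 bits */
int_least16_t  d;    /* smallest type of at least 16 bits */

/* In D the defaults flip: int is always 32 bits, and an alias is what
   you reach for when you want a platform-varying size. */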
But if your focus is narrow enough that you care only about mainstream
desktop and server platforms, it's a perfectly reasonable trade-off.
It's a false choice. D doesn't prevent you from using varying integer
sizes. It's just not the *default*.
Please
don't assume that those who disagree with you do so because they
lack experience or knowledge. It's common that they have knowledge
which you don't (and vice versa).
Point taken.
I bet if I sat down and quizzed
you on arcane details of the spec, I'd find a way to trip you up, too.
Yup; you might start by grilling me on arcana of name lookup,
there are enough landmines there I'd fail on.
It's fun to watch even the experts' mouths drop when I point some of
these things out <g>. Ok, so I enjoy a little schadenfreude here and
there; I'm going to hell.
A noble goal. Probably good PL design can make 10% as much
difference as the variation between programmers does, but that's
still a huge potential benefit.
My goal is 10%, and I agree it's huge. If you've got a million-dollar
budget, that's $100,000 going straight to profit.
A compiler is part of the checklist. Static analysis tools go
further, design and code reviews help, unit testing helps (and
I know you agree on that, as D built it right into the language).
Yes. I found that putting such features in the language makes it much more
likely that people will use them. Built-in unit testing has become a very
popular feature of D. Bruce Eckel gets the credit for talking me into that one.
because compilers have warned in any marginal situation,
Warnings are a good sign that there's something wrong with the language
design. BTW, I just tried this:
int test(char c) { return c; }
with:
g++ -c foo.cpp -Wall
and it compiled without error or warning. (gcc-4.1)
What's "marginal" about that situation?
It gives different answers depending on the signedness of char, when c
has a value in the range 0x80 <= c <= 0xFF. I should think that if a compiler
were to usefully warn about these things, that example would be first on the list.
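To spell it out (a sketch; what prints depends on whether plain char is
signed on the target):

#include <stdio.h>

int test(char c) { return c; }   // the function from above

int main()
{
    char c = (char)0x80;
    // Plain char signed (the usual x86 default): prints -128.
    // Plain char unsigned (e.g. g++ -funsigned-char, or many ARM ABIs):
    // prints 128.
    printf("%d\n", test(c));
    return 0;
}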
--------
Walter Bright
http://www.digitalmars.com
C, C++, D programming language compilers