Re: Array Size
On Monday, 1 July 2013 15:14:10 UTC+1, David Brown wrote:
On 01/07/13 14:24, James Kanze wrote:
On Saturday, 29 June 2013 15:12:30 UTC+1, David Brown wrote:
<snip>
These sorts of types are okay on their own, but as soon as you
have to store them, convert them, manipulate them, or do
arithmetic on them, it is easier to see correctness if you
know exactly what type sizes you have.
That is simply false. Well written software doesn't depend on
any specific sizes. It validates all input against the values
it can handle.
[...]
And it is also often inappropriate to "validate all inputs" to code -
you only need to do so for external data coming in.
"External data coming in" sounds like the usual meaning of
"input".
Re-validating internal data
Is not what I'm talking about. I specifically said "validate
input". Immediately, up front so that you don't have to do
validations everywhere else (where it's too late to do anything
about it, and where you no longer have the information necessary
for a good error message).
that you know is correct is a waste of code space and
run-time, clutters "real" code with unnecessary checks, making
it hard to understand, gives you code paths that never get run
or tested (a big "no-no" in the world of high-reliability
embedded software), and leaves you stuck with the question of
what to do if you /do/ get incorrect data. Not all programs
can pop up an error message to the user.
I still make heavy use of assert. Better to crash than to
continue in an unknown state (especially in a critical system,
where you have a backup which will detect the crash and take
over). But that's not what I was talking about.
Of course, in critical software, the unit tests will ensure that
the assertions are there, and that they do trigger. But in most
general purpose software, I won't take testing to quite that
level. The asserts are there, but they are untested code.
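As a sketch of the kind of thing I mean (the function and its
precondition are invented for the example):

    #include <cassert>

    //  Hypothetical helper: the precondition is enforced with
    //  assert; if it is violated, crashing here is better than
    //  continuing in an unknown state.
    int
    elementAt( int const* array, int size, int index )
    {
        assert( array != nullptr );
        assert( 0 <= index && index < size );
        return array[ index ];
    }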
To ensure maximum correctness I write a simple static_assert
that the required limits fit into numeric_limits<int>, plus
a pile of unit tests. The fixed-width types are in no way
a magic bullet, and the help they provide is minimal.
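For example, something along these lines (a sketch only; 40000
stands in for whatever limit the requirements actually impose):

    #include <limits>

    int const maxRequiredValue = 40000;     //  hypothetical requirement

    static_assert( std::numeric_limits<int>::max() >= maxRequiredValue,
                   "int cannot hold the values this program requires" );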
So you have a static assertion somewhere to ensure that (for
example) "int" is 32-bit, and then later in the code you use
"int" knowing it is 32-bit? What possible benefit is that
over writing "int32_t" when you want a 32-bit integer?
The static assertion is more likely along the lines of
INT_MAX >= 40000. With the advent of std::vector and
dynamically sized arrays, of course, you're more likely to use
a dynamic check:

    if ( input > INT_MAX / sizeof( MyType ) ) {
        throw IllegalInputError();
    }
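As a usage sketch, the check lives right at the point of input,
so that everything downstream can rely on the value (MyType,
IllegalInputError and the function name are invented for the
example):

    #include <climits>
    #include <cstddef>
    #include <stdexcept>

    struct MyType { double data[ 4 ]; };    //  placeholder element type

    struct IllegalInputError : std::runtime_error
    {
        IllegalInputError() : std::runtime_error( "illegal input" ) {}
    };

    //  Validate once, up front; downstream code never re-checks.
    std::size_t
    validatedCount( long input )
    {
        if ( input < 0
                || input > static_cast<long>( INT_MAX / sizeof( MyType ) ) ) {
            throw IllegalInputError();
        }
        return static_cast<std::size_t>( input );
    }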
"int32_t" says exactly what you want
Except that int32_t does NOT say exactly what I want. On the one
hand, it says too much (2's complement, exactly 32 bits). On the
other, it doesn't say enough (value in the range 0 to 40000).
It's misleading. It's obfuscation.
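To illustrate what "saying enough" would look like, a sketch
only (the class name and the bounds are invented for the
example):

    #include <cassert>

    //  A type which expresses the real constraint: a value in
    //  [0, 40000]. Whether the representation is int,
    //  int_least32_t or something else is an implementation
    //  detail, not part of the interface.
    class BoundedValue
    {
        int myValue;
    public:
        explicit BoundedValue( int value )
            : myValue( value )
        {
            assert( 0 <= value && value <= 40000 );
        }
        int value() const { return myValue; }
    };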
There are no systems in the real, modern world that are /not/ 2's
complement - it's a moot issue.
One could make that argument. Java did. For the moment, one of
the expressed goals of C++ is to be efficiently implementable on
*all* machines.
C could - and should - specify it as a
requirement. Dinosaur machines can be ignored for all practical
purposes, because coding for them is so specialised anyway.
Embedded systems can be ignored for all practical purposes,
because coding for them is so specialised anyway:-).
That's the Java philosophy. It's very definitely not the
C philosophy (and the Unisys mainframes *have*
C compilers---with an option to turn off standard compliant
unsigned behavior, because it is so expensive in runtime).
Generally, the majority of the C++ committee seem to agree with
C here, although unlike the case of C, there is a significant
minority which would willingly drop some of the most exotic
platforms.
--
James