Re: Unsignedness of std::size_t
Kaba wrote:
A little higher on the same page you will see "Interface types
revisited (and more)." Give that a read too.
Thanks for the hint. Yes, it seems unsigned types can also be used
robustly if their range is reduced.
I still think it is more natural to test for < 0 than for > INT_MAX.
It's far more natural not to test. Either of these applies an
arbitrary criterion for what counts as an incorrect size, which is
rarely helpful.
There are far more invalid sizes than just the ones that don't fit the
library's notion of the maximum that should be allocated. If the caller
hasn't ensured that the requested size is correct according to the
application's requirements, having a different check later on won't
ensure that the application requests the sizes it needs.
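To make that concrete, here's a minimal sketch (reserve_buffer and its
cutoff are invented): a caller-side bug reaches the library as a huge
unsigned value, and the library's arbitrary bound catches it only by
accident.

    #include <cstddef>
    #include <cstdio>

    // Hypothetical library function: with an unsigned parameter it
    // cannot test for < 0, only against some arbitrary upper bound.
    void reserve_buffer(std::size_t n) {
        const std::size_t max_reasonable = 1024u * 1024u; // arbitrary
        if (n > max_reasonable) {
            std::printf("rejected: %zu\n", n);
            return;
        }
        // ... allocate n bytes ...
    }

    int main() {
        int requested = -1;        // the real bug is here, in the caller
        reserve_buffer(requested); // -1 converts to SIZE_MAX
    }

The check fires, but only because SIZE_MAX happens to exceed the
cutoff; the application's actual requirement was never expressed
anywhere.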
As the article shows, unsigned and signed types end up with exactly
the same positive range in robust code. The naturalness of negative
numbers should therefore tip the choice toward signed types.
Another drawback is that when a library decides to use unsigned types,
that decision propagates into its users' code: our library explicitly
uses signed integers, and whenever I use std::vector::size(), for
example, I get a warning about the signedness difference (I have
disabled this warning).
Exactly right: compiler writers don't know what your criteria are, so
aren't in a position to decide whether you've used a valid, well-defined
code construct correctly. Don't rely on compiler writers to debug your
code. That's your responsibility.
The same thing applies to library writers: they can't determine whether
your application has passed the right value for an argument, only
whether the value actually passed is something the library can handle.
Don't rely on the runtime library to debug your code. That's your
responsibility.
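The mixed comparison Kaba mentions looks roughly like this (count_all
is a made-up example):

    #include <vector>

    int count_all(const std::vector<int>& v) {
        int n = 0;
        for (int i = 0; i < v.size(); ++i) // warning: comparison between
            ++n;                           // signed and unsigned integers
        return n;
    }

Using std::vector<int>::size_type (or a cast) for i silences the
warning; whether the chosen type is actually correct for all sizes
remains the programmer's problem.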
My case for signed integers has two parts:
1) Unsigned types can deliver only one "precondition": that the number
be >= 0 (and obviously <= the type's maximum). If a function has a
precondition of >= 3, the unsigned type can't help you enforce that.
Precondition checking is inherently a run-time process, not a compile-
time one.
Yup. And the precondition for whatever that function is doing is that
the requested amount makes sense to the library. It's not the library's
responsibility to find bugs in your code.
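A sketch of such a run-time check, since no integer type, signed or
unsigned, can express "at least 3" (middle_of is hypothetical):

    #include <cassert>
    #include <cstddef>

    // Precondition: n >= 3. The type system cannot enforce this;
    // it has to be checked (or assumed) at run time.
    std::size_t middle_of(std::size_t n) {
        assert(n >= 3 && "caller must supply at least 3 elements");
        return n / 2;
    }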
2) Unsigned types are not closed under subtraction near zero. For
example, 3 - 4 does not give you -1, but some very large number. Yet we
almost always use integer types to model the mathematical integers Z,
and assume no such wrap-around occurs. Unsigned integers are thus a bad
model for Z.
Point 2 is the one that causes many errors.
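A minimal demonstration of that wrap-around (the printed value assumes
a 32-bit unsigned):

    #include <cstdio>

    int main() {
        unsigned u = 3u - 4u; // wraps to UINT_MAX: 4294967295 here
        int      s = 3 - 4;   // -1, as in the mathematical integers Z
        std::printf("%u %d\n", u, s);
    }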
Actually, the fact that robust code must reduce the range of an
unsigned type reinforces the idea of making std::size_t signed: then no
numerical range would be lost!
Bad assumption. Reducing the range doesn't make code robust. Robust code
ensures that the values passed to library functions (or any other
functions, for that matter) make sense, and doesn't rely on the library
to apply some other criterion to catch coding errors.
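What that caller-side validation might look like (a sketch; make_packet
and the 1500-byte limit are invented application requirements):

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    std::vector<char> make_packet(int payload_len) {
        // An application-level requirement, not the library's arbitrary
        // maximum: payloads must fit one (hypothetical) 1500-byte frame.
        if (payload_len < 0 || payload_len > 1500)
            throw std::invalid_argument("bad payload length");
        return std::vector<char>(static_cast<std::size_t>(payload_len));
    }

Here the validity test belongs to the application, and the library's
own limit never enters into correctness.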
--
-- Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com)
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." (www.petebecker.com/tr1book)