Re: Array Size
On Friday, June 21, 2013 7:38:25 PM UTC+1, Scott Lurndal wrote:
> Öö Tiib <ootiib@hot.ee> writes:
>> On Friday, 21 June 2013 18:45:44 UTC+3, Scott Lurndal wrote:
>>> And I disagree with the 'avoid unsigned types'. Use the type that's
>>> appropriate for the problem. The vast majority of types I use in
>>> my current projects are unsigned (uint32_t, uint64_t).
>> These types are good for bit-wise operations. For what problem do you
>> need integers in the range 0 to 18446744073709551615?
> New processor simulation. The internal buses are 64 bits wide and unsigned.
That's a very special case. You're emulating an entity which
consists of 64 individual bits. It's not an arithmetic value,
at least not in the traditional sense.
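Just to make the distinction concrete, here is a rough sketch of my
own (the field layout is invented, purely for illustration): the
value on the bus is a bag of fields, pulled apart with masks and
shifts, not something you would ever add or subtract.

#include <cstdint>
#include <iostream>

int main()
{
    // Hypothetical field layout, nothing to do with any real simulator.
    std::uint64_t bus = 0x1234567890ABCDEFull;        // value latched off the bus
    std::uint64_t opcode  = (bus >> 58) & 0x3Full;    // top 6 bits
    std::uint64_t address = bus & 0x3FFFFFFFFFFFFull; // low 50 bits
    std::cout << std::hex << opcode << ' ' << address << '\n';
}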
> I've seen far too many cases in OS, hypervisor and simulation where int is
> used for fundamentally unsigned data and bad things happen.
And I've seen far too many cases in normal applications where
the strange behavior of unsigned values causes problems.
The most widespread special case where you *do* use unsigned
types is when the code is manipulating bits, rather than
treating the value as an arithmetic type. If the usual (or
likely) binary operators are + and -, then you should not use
unsigned. If the binary operators are &, |, << and >>, you
should avoid signed.
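A contrived sketch of the sort of surprise I mean (not from any real
application, just the classic traps):

#include <iostream>

int main()
{
    unsigned a = 2;
    unsigned b = 3;
    std::cout << a - b << '\n';   // wraps around: 4294967295 with a 32-bit unsigned, not -1

    // And the classic loop that never terminates: i >= 0 is always
    // true when i is unsigned.
    // for (unsigned i = last; i >= 0; --i) { ... }
}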
For most applications, the second case simply doesn't occur (or
if it does, it is in "bitfield" types, implemented over an
enum). For simulating hardware, of course, this second case is
likely the most frequent.
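Roughly what I mean by a "bitfield" type over an enum; the names are
invented for the example:

enum Permission
{
    read    = 1u << 0,
    write   = 1u << 1,
    execute = 1u << 2
};

int main()
{
    unsigned flags = read | write;           // only |, & and friends get applied
    bool canWrite = (flags & write) != 0;    // never + or -
    return canWrite ? 0 : 1;
}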
> And yes, I've written all three of these in C++, several times.
> In any case, uintX_t/intX_t aren't just good for 'bit-wise operations'.
You certainly wouldn't want to use them for general numeric
values.
> Additionally, there is no speed differential between uint32_t and int
> on any modern platform (and no speed differential between int and uint64_t
> on any 64-bit native platform; albeit there may be a minor space penalty
> for uint64_t).
> James' suggestion would break for:
> unsigned char bigguy[3u*1024u*1024u*1024u];
> int bigguy_elements = sizeof(bigguy)/sizeof(bigguy[0]);
> on any architecture where sizeof(size_t) != sizeof(int).
Except that I never suggested anything like that. I clearly
said that for things like that, I would use:
std::vector<unsigned char> bigguy(...);
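That is, something along these lines; the size argument is whatever
the application actually needs (I've just echoed the 3 GB from the
example above), and the vector knows its own size, so there is no
sizeof arithmetic to get wrong:

#include <vector>

int main()
{
    std::vector<unsigned char> bigguy(3ULL * 1024 * 1024 * 1024);  // needs a 64-bit build to allocate
    std::vector<unsigned char>::size_type n = bigguy.size();
    (void)n;   // the count comes from size(), not from sizeof()/sizeof()
}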
The *only* time you need to calculate the length is when it was
established by the compiler, from an initialization list. And
an initialization list which contains more than INT_MAX elements
almost certainly won't compile. (I had a case once, in machine
generated code, where the initialization list did get very
large. Much less than INT_MAX, but g++ still aborted with out
of memory.)
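The sort of case I mean looks something like this (a trivial made-up
table, just to show where the calculation actually belongs):

int const primes[] = { 2, 3, 5, 7, 11, 13 };
int const primeCount = sizeof(primes) / sizeof(primes[0]);   // length fixed by the compiler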
I might also add that vectors of this size simply can't occur
in most applications, except perhaps in some very special,
limited cases. In fact, I've never seen a vector<unsigned char>
in my applications (although I can easily imagine other types of
applications where it makes sense); the only time I've had to
deal with memory blocks of that size was in the implementations
of memory management schemes: the blocks were based on pointers
I got back from sbrk(), *and* of course, I couldn't use size_t,
because the size of the blocks could be larger than would fit in
a size_t.
--
James