Re: 64 bit C++ and OS defined types
On Apr 5, 7:40 am, Ian Collins <ian-n...@hotmail.com> wrote:
Alf P. Steinbach wrote:
So, for you and some others it's already not a practical
proposition or even possible at all to map a file into more
than half the available address range (not even mentioning
the matter of processing it at the byte level). :-)
Well I do have some 8 and 16 bit embedded development boards I
could power up....
Historically (and this does go back some), some 16 bit systems
used a segmented architecture, in which a user process could
have up to 640KB memory, but the maximum size of a single object
(or array) was 64KB, and size_t was 16 bits. In such systems,
there is an argument concerning the addressability; making
size_t signed effectively does divide the largest size of a byte
array by 2, and it isn't that unreasonable to imagine an
application which deals with byte arrays larger than 32KB, even
on such a system. Whether supporting the additional range is
worth the hassles it causes (due to mixing of signed and
unsigned types) is very debatable, but the fact that Stepanov
originally developed the STL on such a system probably
influenced his choice of size_t for indexes.
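To make the arithmetic concrete, here's a minimal sketch (mine,
not part of the original argument) showing the range difference
for a hypothetical 16-bit size_t: an unsigned index covers a
full 64KB byte array, while a signed type of the same width
stops at 32KB.

    // Illustrates the halved range of a signed 16-bit index type
    // compared to an unsigned one of the same width.
    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main()
    {
        std::uint16_t unsigned_max = std::numeric_limits<std::uint16_t>::max(); // 65535
        std::int16_t  signed_max   = std::numeric_limits<std::int16_t>::max();  // 32767

        std::cout << "largest byte array with an unsigned 16-bit size_t: "
                  << unsigned_max << " bytes\n";
        std::cout << "largest byte array with a signed 16-bit index:     "
                  << signed_max << " bytes\n";
    }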
Today, of course, you won't find such things other than in
embedded systems, and I'm not sure whether such issues are
relevant in them.
--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34