Re: 64 bit C++ and OS defined types
On Apr 3, 6:36 am, Christopher <cp...@austin.rr.com> wrote:
Anyway, is it bad practice for me to just give in and start using the
defined types MS provides instead of standard types? I am beginning to
believe it might make my job a lot easier, even if my code is no longer
portable.
How would changing 'unsigned' to 'UINT' help you in the example below?
Most likely, 'UINT' is a typedef for 'unsigned int' anyway.
Also, should I start using 64 bit types whenever possible vs 32 bit
types, even if I don't think the values will grow that large? Does it
make a difference?
For instance I used to have a class something like this:
class MySpecialContainer
{
public:
    const unsigned GetNumElements() const
    {
        return m_udts.size();
    }
private:
    unsigned m_numElements;
    typedef std::vector<UDT> UDTVec;
    UDTVec m_udts;
};
Now the return value of size() is 64 bits and does not convert to an
unsigned.
I could go and change everything to return size_t, but then I have to
change everywhere it is passed as a parameter, and half the APIs I
call are asking for a UINT, even the Windows APIs. I guess because
they expect the value to never grow that big. So, then I have to
static_cast<unsigned>( mySize_T) back again. I don't know what kind of
rules to adopt for these situations.
You should look at the application logic.
If UINT_MAX has always been a few orders of magnitude larger than the
values returned from GetNumElements(), what makes you think that on a
64-bit platform your container will be storing so many more elements
that you actually need to count them in 64 bits?
If your problem is that the compiler has started to complain about the
conversion from size_t to unsigned, then you should use a cast in
GetNumElements itself (after you have verified that 32 bits is indeed
more than enough). For added safety, you could add an
'assert(m_udts.size() <= UINT_MAX)'.
Bart v Ingen Schenau