Re: Why does int64_t * float get promoted to a float?
On Nov 20, 3:02 pm, Jeff Koftinoff <jeff.koftin...@gmail.com> wrote:
The output of the following code, compiled with GNU GCC 4.0, surprised me:
#include <stdint.h>
#include <iostream>
template <typename T>
void show( T a )
{
std::cout << "calls: " << __PRETTY_FUNCTION__ << std::endl;
}
#define compare(T1,T2) { T1 a = 100; T2 b = 666; \
    std::cout << #T1 " * " #T2 " "; show(a * b); }
int main()
{
compare(float,int64_t);
...
}
It outputs the following:
float * int64_t calls: void show(T) [with T = float]
It would have made more sense to me if float * int64_t yielded a
double, or even an int64_t, but not a float... What is the real rule?
I had just assumed that the type with the most precision would be
chosen.
The real rule is the usual arithmetic conversions: when one operand is
a floating-point type and the other is an integer type, the integer
operand is converted to the floating-point type, so int64_t * float is
computed as, and yields, a float. I think that greater accuracy edges
out greater precision in this case. After all, when multiplying 1 by
0.5 (1uLL * 0.5f), should the compiler choose 0.5, 0, or 1 as the
"best" result (that is, the one "least surprising" to the programmer)?
In other words, although the greater number of bits of the int64_t
type may provide greater precision, the greater representational
flexibility of the float's bits is more likely to provide the more
accurate result.
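
Here is a minimal sketch that makes both points concrete. It uses
C++11's decltype and <type_traits> (which postdate the original post)
purely to show the deduced result type, and 2^53 + 1 is just an
arbitrary value with more significant bits than a float can hold:

#include <cstdint>
#include <iostream>
#include <type_traits>

int main()
{
    std::int64_t big  = 9007199254740993LL;  // 2^53 + 1: more bits than a float's mantissa
    float        half = 0.5f;

    // Usual arithmetic conversions: the int64_t operand is converted
    // to float, so the result of the multiplication is a float.
    auto r = big * half;
    static_assert(std::is_same<decltype(r), float>::value,
                  "int64_t * float yields float");

    // The float result keeps fractional values that an integer result
    // would have to drop...
    std::cout << 1ULL * 0.5f << '\n';        // prints 0.5, not 0

    // ...but a large int64_t loses its low bits when converted to float.
    std::cout.precision(20);
    std::cout << r << '\n';                  // prints 4503599627370496: the "+1" is gone
}

Compiled as C++11 or later, this should print 0.5 followed by
4503599627370496 on an IEEE 754 implementation: the fractional part
survives, but the low-order bit of the int64_t does not.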
Greg
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]