Re: behaviour of setprecision(0)
On Jun 12, 2:07 pm, jacek.dzied...@gmail.com wrote:
> Can someone tell me what the expected output of the following
> program is?
>
> #include <fstream>
> #include <iomanip>
> using namespace std;
>
> int main() {
>     ofstream o("o");
>     o << fixed << setprecision(0) << 13.0 << endl;
> }
>
> Depending on which STL implementation I use with my compiler, I
> either get "13" or "13.000000".
It should be "13". The standard is quite clear about this.
> I don't have the Standard (at all) or the Josuttis book (with
> me), so can anyone shed some light on what setprecision(0)
> does?
It ensures that all future reads of the precision will return
0 :-).
The output format is defined in terms of equivalent printf
specifiers; if the type is a floating point type, the output
format always has the precision set, e.g. "%.*f", with the *
being replaced by the precision.
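
Concretely, on a conforming implementation the two output
statements in this small sketch should print exactly the same
thing:

    #include <cstdio>
    #include <iomanip>
    #include <iostream>

    int main() {
        double d = 13.0;
        // With fixed and a precision of 0, the stream output
        // corresponds to the printf specifier "%.0f".
        std::cout << std::fixed << std::setprecision(0) << d << '\n';
        std::printf("%.0f\n", d);    // same output: "13"
    }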
> I seem to vaguely recall that it turns precision control off
> (13.000000 would be the 'right' output then), but I'm not sure
> (or is it setprecision(-1)?).
At least in the standard iostream, you can't turn precision
control off. The precision is set to 6 during initialization,
and is always used for floating point (and never for any other
type).
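
For example, a small sketch relying only on the defaults of a
conforming library:

    #include <iostream>

    int main() {
        // The precision defaults to 6 and is always consulted for
        // floating point output.
        std::cout << 13.0 / 3.0 << '\n';          // "4.33333" (6 significant digits)
        std::cout << std::fixed << 13.0 << '\n';  // "13.000000" (6 digits after the point)
    }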
> Moreover, I'm after option 1, that is, I insist on printing my
> doubles with _no_ digits past the decimal point (and no
> decimal point), how do I achieve this behaviour?
By setting the precision to 0, at least with a
standard-conforming library.
What implementation doesn't do this? All of those available to
me are correct here.
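
In other words, something along these lines should give exactly
the behaviour you're after (note that the value is rounded, just
as with printf's "%.0f"):

    #include <iomanip>
    #include <iostream>

    int main() {
        std::cout << std::fixed << std::setprecision(0);
        std::cout << 13.0 << '\n';   // "13" -- no decimal point, no fractional digits
        std::cout << 13.6 << '\n';   // "14" -- rounded, as with printf("%.0f", 13.6)
    }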
--
James Kanze (GABI Software, from CAI) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34