Re: std iostreams design question, why not like java stream wrappers?
On Aug 27, 3:21 am, James Kanze <james.ka...@gmail.com> wrote:
On Aug 27, 8:52 am, Joshua Maurice <joshuamaur...@gmail.com> wrote:
I'd also say that it's convoluted given that it doesn't really
solve any problems. Sure, it correctly handles the system's end-
of-line convention, and it correctly uses the right characters
for converting an integer to its string representation for the
locale, and cool stuff like a comma or a period for the thousands
separator and the decimal separator. However, the entire
iostream library is overkill if those are the only problems it
solves, hence convoluted.
You seem to be confusing the issues. The iostream library isn't
concerned about much of that. The iostream library provides:
-- a standard interface for data sinks and sources
(std::streambuf), along with two sample implementations
covering the most frequent cases (files and strings),
-- a standard interface for formatting, using the strategy
pattern to cleanly separate sinking and sourcing bytes from
the formatting---the formatting interface uses overloading
so that client code can extend it for types it's never heard
of,
Except for system-specific newline handling in non-binary mode, but I
can live with that. I think. I'm not exactly sure how that works.
-- a standard error handling strategy (which is rather
simplistic), and
-- a couple of wrapper classes to make it easier to set up the
most common cases (reading or writing from a file or a
string).
For all localization issues, it defers to <locale>, which is
overly complicated for what it does (which isn't really enough).
And remind me again, exactly what does the iostream library without
<locale> do? It handles newlines for you if not in binary mode,
and uhh... is it with facet support that it handles printing integers
and floats as human-readable strings? So, back to my original
complaint: why separate streambuf and iostream class hierarchies
when something like the Java Writer hierarchy or the Java OutputStream
hierarchy seems so much clearer and simpler, IMHO?
Well, not exactly that. That would imply virtual overhead at every
step of the way. I like my plan where each root stream type is a
stand-alone type, and then you have wrapping filter streams which take
other streams as 'template arguments': something very much like how
the containers of the STL are stand-alone classes, and a lot of the
algorithms are written as templates to work on any of the STL
containers. All you need for a dynamic-dispatch stream wrapper is that
one class I wrote above, jjm::istream_wrapper (though probably with
buffering on by default for jjm::istream_wrapper, to avoid a virtual
function call on every read).
2- No virtual overhead if it's not required. I should not have
to go through a virtual function to write to a file or to a
string with a stringstream.
And the alternative is? The alternative is simply an
unacceptable amount of code bloat, with every single function
which does any I/O a template. Without the virtual functions,
iostream becomes almost unusable, like printf and company.
Please review my first post where my solution supports both compile-
time polymorphism and runtime polymorphism. Specifically look at the
functions
template <typename stream_t>
void print_compiletime_polymorphism(stream_t& stream)
{
    std::cout << "---- " << __FILE__ << " " << __LINE__ << std::endl;
    for (std::string line; getline(stream, line); )
        std::cout << "X" << line << std::endl;
}

void print_runtime_polymorphism(jjm::istream_wrapper stream)
{
    std::cout << "---- " << __FILE__ << " " << __LINE__ << std::endl;
    for (std::string line; getline(stream, line); )
        std::cout << "X" << line << std::endl;
}
I fully recognize that templates do not solve everything, and that you
need to be able to work on a generic stream which uses dynamic
dispatch to do the actual work, for compilation-speed reasons, code-
bloat reasons (and thereby runtime speed and size reasons), etc. My
solution offers both. With it, you would only pay for the virtual
overhead when you actually need it. (Note that
jjm::istream_wrapper should probably be buffered by default.) I haven't
thought this through fully, but enough to ask "Why isn't it done this
way, which seems superior?"
I'd like to be able to use it in performance critical aspects
of code without invoking the holy war of printf vs cout. Also
see point 1 for why it might be slow.
3- Actual support for changing between encodings like UTF-16 and UTF-32.
Ex:

raw_ofstream out("some_file");
buffered_ostream<raw_ofstream> out_2(out);
out_encoder<buffered_ostream<raw_ofstream> > out_3(out_2, "UTF-16");

//you have some data in UTF-8 format, like in a utf8string
utf8string str;
out_3 << str;

//or perhaps some raw data whose encoding you know
//(Yes, a string literal may not be ASCII. I know.)
out_3.writeString("foo", "ASCII");
Or more likely, a helper class would exist:
ofstream_with_encoding /*for want of a better name*/ out
    ("some_file", "ASCII");
utf8string str;
out << str;
utf16string str2;
out << str2;
with whatever failure mechanism by default if it can't do the
encoding translation, be that by exception or setting the fail
bit, and of course it would be configurable on each iostream
like setw.
That's less obvious than you think, because of buffering. You
can change the encoding on the fly, e.g.:
std::cin.rdbuf()->imbue( localeWithNewEncoding ) ;

or

std::cin.imbue( std::locale( std::cin.getloc(), newEncodingFacet ) ) ;
but there are serious practical limitations (which affect e.g.
the Java solution as well). Buffering and changing encodings on
the fly don't work well together.
Sorry. Indeed. You are correct. Oversight there.