Re: Stream Operator Overloading Design Choices
On Jan 22, 2:37 am, VirGin <virgi...@gmail.com> wrote:
[...]
I'm not sure you want to derive here. I tend to have an
ostream* member, which I use. Depending on whether output is
desired or not, the oLog function returns an instance of LogIt
with this pointer set to null, or to a valid output stream. (In
my case, the "valid output stream" uses a filtering streambuf
which can fan the output out to several different destinations,
including special streambuf's to output to syslog or to email,
as well as a file or cerr or cout.)
This is a good point. Originally, the log class only wrote to files
and cout. Part of my goal is to support different destinations.
The key point in the way I do it is the layering. A lot takes
place in the streambuf, behind the ostream, even. In fact, in
my current implementation, the ostream is created (constructed)
new each time the oLog function is called---only the streambuf's
persist. (This has the advantage that someone outputting in hex
to the log doesn't end up causing all of the following output to
be in hex.)
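In modern C++, that layering might be sketched something like
the following; the names, and the use of std::cout's buffer as
a stand-in for the persistent filtering streambuf, are just for
the illustration:

    #include <iostream>
    #include <memory>
    #include <streambuf>

    // Only the streambuf persists; each call to oLog() wraps it
    // in a brand-new ostream, so format state (std::hex, etc.)
    // dies with the record.
    struct LogIt
    {
        std::unique_ptr< std::ostream > stream ; // null: no output
        explicit LogIt( std::streambuf* sb )
            : stream( sb ? new std::ostream( sb ) : nullptr )
        {
        }
    } ;

    LogIt oLog( bool active )
    {
        static std::streambuf* const persistentBuf
            = std::cout.rdbuf() ;
        return LogIt( active ? persistentBuf : nullptr ) ;
    }

    template< typename T >
    LogIt& operator<<( LogIt& dest, T const& obj )
    {
        if ( dest.stream ) {
            *dest.stream << obj ;
        }
        return dest ;
    }

    int main()
    {
        LogIt rec1 = oLog( true ) ;
        rec1 << "x = " << std::hex << 255 << '\n' ; // x = ff
        LogIt rec2 = oLog( true ) ;
        rec2 << "y = " << 255 << '\n' ;  // y = 255: hex is gone
    }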
template <class T>
LogIt & operator<<(LogIt &oObj, const T &xVal)
{
    cout << "Prefix_Rez\t" << xVal; // << endl;
    return oObj;
}
This *isn't* where you want to put the prefix. The prefix
should be handled by the special streambuf as well. In
particular, the special streambuf has a flag, isStartOfLine,
which is initialized true, and then set as a function of each
character output. Something like the following, for example:
int
LogStreambuf::overflow( int ch )
{
    if ( myIsStartOfLine ) {
        // sputn() takes the length as well as the pointer.
        myDest->sputn( prefix, prefixLength ) ;
    }
    int result = myDest->sputc( ch ) ;
    myIsStartOfLine = (ch == '\n') ;
    return result ;
}
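For the fragment above to stand alone, the surrounding class
would need a declaration something like the following; this is
my guess from the member names used, not the actual class:

    class LogStreambuf : public std::streambuf
    {
    public:
        LogStreambuf( std::streambuf* dest,
                      char const* prefixText,
                      std::streamsize textLength )
            : myDest( dest )
            , prefix( prefixText )
            , prefixLength( textLength )
            , myIsStartOfLine( true )
        {
        }
    protected:
        virtual int overflow( int ch ) ;    // defined above
    private:
        std::streambuf* myDest ;
        char const*     prefix ;
        std::streamsize prefixLength ;
        bool            myIsStartOfLine ;
    } ;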
Sounds like good advice. When I thought about a flag, it seemed
like overhead that could be avoided.
You're outputting. By the time you get here, you're definitely
outputting---you're not in a log which has been deactivated.
Testing or setting a boolean is nothing compared to the rest of
what you're going to be doing. (Don't forget that to be really
useful, you're going to want to flush at the end of the log
record. Otherwise, if the program core dumps, the log will
contain everything except the most interesting part---what
happened immediately before the core dump.)
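One way to get that flush reliably, extending the LogIt sketch
earlier in this message (again an illustration, not the actual
code), is to do it in the destructor, so that the end of every
record pushes the data out even if the program dies right
afterwards:

    // Added to the LogIt sketch above:
    ~LogIt()
    {
        if ( stream ) {
            stream->flush() ;   // record complete: force it out
        }
    }
    // A user-declared destructor suppresses the implicit move
    // operations, so restore them explicitly.
    LogIt( LogIt&& ) = default ;
    LogIt& operator=( LogIt&& ) = default ;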
Again, I prefer using a member, with something like:
template< typename T >
LogIt&
operator<<( LogIt& dest, T const& obj )
{
    if ( dest.stream != NULL ) {
        // Write to the wrapped ostream, not to dest itself,
        // which would recurse endlessly.
        *dest.stream << obj ;
    }
    return dest ;
}
I was toying with using a std::string member to hold the log
messages until they should be flushed, or with deriving from
std::stringstream as well.
In practice, the way my front-end streambuf works is to just
stuff the characters into an std::vector<char>. It's only when
it is informed of the end of the record that it passes the data
on to the final targets. In my case, I even use a different
interface from streambuf for these final targets: a single write
of an std::vector<char> which the derived class is supposed to
handle atomically (if that makes sense for the class). One of
the derived classes wraps a streambuf*, calling sputn(), then
sync(), for the write---this class handles std::cout and
std::cerr. Since portability to non-Unix systems hasn't been an
issue to date, I use the low level file functions when writing
to a file---up to a certain size (which is sufficient to cover
most, if not all, log records), write() is guaranteed to be
atomic, and the file can be opened in such a way that writes
always go to the end (again, atomically). These two
characteristics together make it possible for two communicating
processes to log to the same file---very useful when tracking
down errors in the communications. And of course, there are
also derived classes which send the message to an email address,
or put it somewhere where snmp can find it (syslog, etc.).
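A rough sketch of that front end, with the caveat that every
name here (RecordStreambuf, LogWriter, write()) is invented for
the illustration; only the general shape follows the
description above:

    #include <cstddef>
    #include <streambuf>
    #include <vector>

    // Final targets receive one complete record at a time, and
    // are expected to handle it atomically where that makes
    // sense (file, syslog, mail, ...).
    class LogWriter
    {
    public:
        virtual ~LogWriter() {}
        virtual void write( std::vector<char> const& record ) = 0 ;
    } ;

    // Front end: accumulates characters, and only hands the
    // record on when informed of the end of the record (sync()).
    class RecordStreambuf : public std::streambuf
    {
    public:
        explicit RecordStreambuf(
                std::vector<LogWriter*> const& targets )
            : myTargets( targets )
        {
        }
    protected:
        virtual int_type overflow( int_type ch )
        {
            if ( ch != traits_type::eof() ) {
                myBuffer.push_back(
                    traits_type::to_char_type( ch ) ) ;
            }
            return traits_type::not_eof( ch ) ;
        }
        virtual int sync()      // end of record
        {
            for ( std::size_t i = 0 ;
                    i != myTargets.size() ; ++ i ) {
                myTargets[ i ]->write( myBuffer ) ;
            }
            myBuffer.clear() ;
            return 0 ;
        }
    private:
        std::vector<LogWriter*> myTargets ;
        std::vector<char>       myBuffer ;
    } ;

A derived LogWriter wrapping a streambuf* would implement
write() as sputn() followed by pubsync(); a file-based one
could open the file with O_APPEND and emit the record in a
single write() call to get the atomicity described above.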
One of the ways I use the older version of the class is to log
different types of messages to different files.
This meant that I would have several instances of LogIt. There
have been projects where I had the application log to three
different files: in one case, a script processed one of the
logs and populated a database from its contents, and the
network manager responsible for the application could watch a
log that contained only application errors.
In the more elaborate versions I use, logging is controlled by a
configuration file, with a line-oriented syntax something like:
<severity> <subsystem> <command>
The severity can be a single number, a range, or a comma-separated
list. <subsystem> can be a single name, or a comma-separated
list of names, and <command> is the rest of the line,
starting with a keyword such as "file", "mail", etc. (Handling
the subsystem parameter is rather complex, and is probably only
necessary for very big systems. Also, some of the systems I've
used this on run 24 hours a day---in such cases, I try to work
something out so that the log can be reconfigured without
shutting the system down. In a multithreaded system, that can
be very non-trivial.)
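For illustration, a file in that syntax might look something
like this; the concrete values are invented, only the general
shape follows the description above:

    0-3     network,db    file /var/log/app/errors.log
    4       network       mail oncall@example.com
    5,6,7   db            file /var/log/app/trace.log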
--
James Kanze (GABI Software) mailto:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34