Peter Olcott wrote:
I still think that the FIFO queue is a good idea. Now I will have multiple requests and, on multi-core machines, multiple servers.
IMO, it's just that it's an odd approach to load balancing. You are integrating software components, like a web server with a multi-thread-ready listening server, and you are hampering it with single-thread-only FIFO queuing. It introduces other design considerations: namely, you will need to consider a store-and-forward concept for your requests and delayed responses. But if your request processing is very fast, maybe you don't need to worry about it.
In practice the "FIFO" would be at the socket level or listening level, with load balancing handled by restricting and balancing your connections with worker pools, or by simply letting connections wait, knowing that processing won't take too long. Some servers have guidelines for waiting limits. For the Web, I don't recall coming across any specific guideline other than a practical one per implementation. The point is that you don't want the customers waiting too long; but what is "too long"?
What is your best suggestion for how I can implement the FIFO queue?
(1) I want it to be very fast.
(2) I want it to be portable across Unix / Linux / Windows, and maybe even Mac OS X.
(3) I want it to be as robust and fault tolerant as possible.
Any good collection class will do as long as you wrap it with synchronization. Example:
#include <windows.h>
#include <list>

typedef struct _tagTSlaveData {
    // ... data per request ...
} TSlaveData;

class CBucket : public std::list<TSlaveData>
{
public:
    CBucket()  { InitializeCriticalSection(&cs); }
    ~CBucket() { DeleteCriticalSection(&cs); }

    // Append a request to the tail of the queue.
    void Add(const TSlaveData &o)
    {
        EnterCriticalSection(&cs);
        insert(end(), o);
        LeaveCriticalSection(&cs);
    }

    // Pop the request at the head of the queue, if any.
    // Returns FALSE when the queue is empty.
    BOOL Fetch(TSlaveData &o)
    {
        EnterCriticalSection(&cs);
        BOOL res = !empty();
        if (res) {
            o = front();
            pop_front();
        }
        LeaveCriticalSection(&cs);
        return res;
    }

private:
    CRITICAL_SECTION cs;
} Bucket;
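Since requirement (2) asks for portability across Unix / Linux / Windows, here is a minimal sketch of the same idea using only standard C++11 primitives (std::mutex and std::condition_variable) instead of Win32 critical sections. The names SlaveData and BlockingFifo are placeholders of mine, not from the post above; the condition variable additionally lets Fetch block until work arrives, rather than requiring the worker to poll.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

struct SlaveData {      // hypothetical per-request payload
    int request_id;
};

class BlockingFifo {
public:
    // Append a request to the tail of the queue and wake one worker.
    void Add(const SlaveData &o)
    {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            q_.push_back(o);
        }
        cv_.notify_one();
    }

    // Block until an item is available, then pop it in FIFO order.
    SlaveData Fetch()
    {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        SlaveData o = q_.front();
        q_.pop_front();
        return o;
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<SlaveData> q_;
};
```

Composition over inheritance is used here deliberately: std::list (and std::deque) have non-virtual destructors, so publicly inheriting from them, as CBucket does, is fragile if anyone deletes through the base pointer.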
Joseph M. Newcomer [MVP]