grows out of what a single core processor can accomplish.
communication, building and maintaining my FIFO queue.
list. The reason for this may be that the expensive memory
reshuffling only involves a few items. This would probably not
work for very long lists. Also, maybe there is some sort of
std::list::reserve() that would mitigate this cost.
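(A side note: std::list has no reserve() - that is a std::vector
member - so that particular mitigation is not available. If the
per-node allocations of std::list ever show up in profiling, one
alternative worth trying is to back the same bucket idea with
std::deque, which allocates in chunks and still gives cheap
push_back()/pop_front(). The sketch below is untested and the
CFastBucket name is made up for illustration; the locking pattern
is the same as in the CBucket class further down.)

#include <windows.h>
#include <deque>

// Hypothetical deque-backed variant of the bucket; same critical
// section pattern, fewer per-item heap allocations than std::list.
template <class T>
class CFastBucket
{
public:
   CFastBucket()  { InitializeCriticalSection(&cs); }
   ~CFastBucket() { DeleteCriticalSection(&cs); }

   void Add(const T &o)
   {
      EnterCriticalSection(&cs);
      q.push_back(o);
      LeaveCriticalSection(&cs);
   }

   BOOL Fetch(T &o)
   {
      EnterCriticalSection(&cs);
      BOOL res = !q.empty();
      if (res) {
         o = q.front();
         q.pop_front();
      }
      LeaveCriticalSection(&cs);
      return res;
   }

private:
   std::deque<T>    q;
   CRITICAL_SECTION cs;
};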
Example usage of the class is below; I added an Add() overload
to make it easier to add elements from the specific TSlaveData
fields:
#include <windows.h>
#include <conio.h>
#include <stdio.h>
#include <list>
#include <string>
#include <iostream>
using namespace std;

const DWORD MAX_JOBS = 10;
typedef struct _tagTSlaveData {
   DWORD jid;           // job number
   char  szUser[256];
   char  szPwd[256];
   char  szHost[256];
} TSlaveData;
class CBucket : public std::list<TSlaveData>
{
public:
   CBucket()  { InitializeCriticalSection(&cs); }
   ~CBucket() { DeleteCriticalSection(&cs); }

   void Add( const TSlaveData &o )
   {
      EnterCriticalSection(&cs);
      insert(end(), o);
      LeaveCriticalSection(&cs);
   }

   void Add(const DWORD jid,
            const char *user,
            const char *pwd,
            const char *host)
   {
      TSlaveData sd = {0};
      sd.jid = jid;
      // leave room for the terminating null that strncpy may not write
      strncpy(sd.szUser, user, sizeof(sd.szUser)-1);
      strncpy(sd.szPwd,  pwd,  sizeof(sd.szPwd)-1);
      strncpy(sd.szHost, host, sizeof(sd.szHost)-1);
      Add(sd);
   }

   BOOL Fetch(TSlaveData &o)
   {
      EnterCriticalSection(&cs);
      BOOL res = !empty();
      if (res) {
         o = front();
         pop_front();
      }
      LeaveCriticalSection(&cs);
      return res;
   }

private:
   CRITICAL_SECTION cs;
} Bucket;
void FillBucket()
{
   for (DWORD i = 0; i < MAX_JOBS; i++)
   {
      Bucket.Add(i, "user", "password", "host");
   }
}
//----------------------------------------------------------------
// Main Thread
//----------------------------------------------------------------
int main(int argc, char *argv[])
{
   FillBucket();
   printf("Bucket Size: %u\n", (unsigned)Bucket.size());
   TSlaveData o = {0};
   while (Bucket.Fetch(o)) {
      printf("%3lu | %s\n", o.jid, o.szUser);
   }
   return 0;
}
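To make the producer/consumer split concrete, here is a rough sketch
of how the spawned OCR worker threads would drain the bucket. It is
untested, leans on the CBucket/Bucket/TSlaveData definitions above,
and the worker count plus the OcrWorker/RunWorkers names are made up
for illustration; _beginthreadex is used because the CRT is in play:

#include <windows.h>
#include <process.h>
#include <stdio.h>

// Hypothetical OCR worker: loops on Bucket.Fetch() until the bucket
// is empty, then exits.  Assumes the global Bucket defined above.
unsigned __stdcall OcrWorker(void *)
{
   TSlaveData job;
   while (Bucket.Fetch(job)) {
      // ... run OCR for this job here ...
      printf("thread %lu processing job %lu\n",
             GetCurrentThreadId(), job.jid);
   }
   return 0;
}

void RunWorkers(int nThreads)      // worker count is an arbitrary choice
{
   HANDLE h[8];
   if (nThreads > 8) nThreads = 8; // cap to the handle array size
   for (int i = 0; i < nThreads; i++) {
      h[i] = (HANDLE)_beginthreadex(NULL, 0, OcrWorker, NULL, 0, NULL);
   }
   WaitForMultipleObjects((DWORD)nThreads, h, TRUE, INFINITE);
   for (int i = 0; i < nThreads; i++) {
      CloseHandle(h[i]);
   }
}

In a real server the workers would block on an event or semaphore
instead of exiting the moment the bucket runs dry; this sketch just
mirrors the fill-then-drain pattern of main() above.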
For your mongoose/OCR thingie: mongoose will Bucket.Add() and
each spawned OCR thread will do a Bucket.Fetch().
Do it right, and it ROCKS!
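Also, since you listed portability across Unix/Linux/Windows (quoted
below) as a requirement, the same idea can be expressed with standard
C++ locking instead of CRITICAL_SECTION. This is only a sketch and
assumes a C++11 compiler (std::mutex / std::lock_guard); the
CPortableBucket name is made up for illustration:

#include <list>
#include <mutex>

// Hypothetical portable bucket: std::mutex replaces CRITICAL_SECTION,
// so the same code compiles on Windows, Linux and Mac OS X.
template <class T>
class CPortableBucket
{
public:
   void Add(const T &o)
   {
      std::lock_guard<std::mutex> lock(m);
      q.push_back(o);
   }

   bool Fetch(T &o)
   {
      std::lock_guard<std::mutex> lock(m);
      if (q.empty()) return false;
      o = q.front();
      q.pop_front();
      return true;
   }

private:
   std::list<T> q;
   std::mutex   m;
};

// Usage mirrors the Windows version, e.g.:
//    CPortableBucket<TSlaveData> Bucket;
//    Bucket.Add(sd);
//    while (Bucket.Fetch(sd)) { /* process sd */ }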
--
HLS
Hector Santos wrote:
Peter Olcott wrote:
I still think that the FIFO queue is a good idea. Now I
will have multiple requests and, on multi-core machines,
multiple servers.
IMO, it's just that it's an odd approach to load balancing.
You are integrating software components, like a web
server with a multi-thread-ready listening server, and
you are hampering it with single-thread-only FIFO
queuing. It introduces other design considerations.
Namely, you will need to consider a store-and-forward
concept for your requests and delayed responses. But if
your request processing is very fast, maybe you don't
need to worry about it.
In practice the "FIFO" would be at the socket or
listening level, with load balancing handled by
restricting and balancing your connections with worker
pools, or by simply letting a connection wait, knowing
that processing won't take too long. Some servers have
guidelines for waiting limits. For the WEB, I don't
recall coming across any specific guideline other than a
practical one per implementation. The point is you don't
want the customers waiting too long - but what is "too
long"?
What is your best suggestion for how I can implement the
FIFO queue?
(1) I want it to be very fast
(2) I want it to be portable across Unix / Linux /
Windows, and maybe even Mac OS X
(3) I want it to be as robust and fault tolerant as
possible.
Any good collection class will do as long as you wrap it
with synchronization. Example:
typedef struct _tagTSlaveData {
   ... data per request.........
} TSlaveData;

class CBucket : public std::list<TSlaveData>
{
public:
   CBucket()  { InitializeCriticalSection(&cs); }
   ~CBucket() { DeleteCriticalSection(&cs); }

   void Add( const TSlaveData &o )
   {
      EnterCriticalSection(&cs);
      insert(end(), o);
      LeaveCriticalSection(&cs);
   }

   BOOL Fetch(TSlaveData &o)
   {
      EnterCriticalSection(&cs);
      BOOL res = !empty();
      if (res) {
         o = front();
         pop_front();
      }
      LeaveCriticalSection(&cs);
      return res;
   }

private:
   CRITICAL_SECTION cs;
} Bucket;
--
HLS