Re: concurrent linked queue class for C++?
Chris M. Thomasson wrote:
"Branimir Maksimovic" <bmaxa@hotmail.com> wrote in message
news:hgs21l$pm0$1@news.albasani.net...
Andrew wrote:
I am designing a system where an app will need to spawn a child thread,
and then the child and parent threads will need to communicate. If this
were in Java I would use ConcurrentLinkedQueue, but what do I do in C++? I
have googled and searched boost but cannot find anything.
There is a class that would serve in ACE, but ACE is huge, so I do not
want to introduce ACE to the project. The project is already using
boost, and fighting the battle for more boost usage is hard enough.
Does anyone know if such a facility is planned for the upcoming std?
I would advise against using threads. Processes and shared memory are
much easier to maintain,
I am curious as to what made you come to that conclusion? Anyway, which one
is easier: creating a dynamic unbounded queue with threads, or with shared
memory and processes?
It depends. You can always use cout and a simple pipe. Why a queue?
When performance is the concern, vectorized operations on memory,
parallel loops and that sort of thing make sense with threads.
There are many ways to do IPC, depending on the situation.
For example, I had a case where the server was PHP which, via popen,
starts a C executable that returns its result with printf and
initializes its data on every request; it performs
three times faster than a multithreaded Java
server as a search engine...
There is a deque class in the standard library; it works fine as a queue, I use
it all the time...
The OP can lock it with whatever OS mutex he has and that's it...
A vector is also OK for this (push_back/back/pop_back), a linked list, etc...
I don't see a problem here. But since the OP is asking this question,
he probably doesn't know what a mutex is...
That's why, if he uses cout/pipe or sockets or something
else, he will save himself a lot of maintenance problems...
Why would you think that all that is easier than using threads? What am I
missing here?
Maintenance problems. With processes, there is no problem.
For example, there are pre-forked and pre-threaded versions
of Apache. People prefer the forked version because of the libraries
they would have to link in. On my machine the multithreaded server serves
more than 60,000 simple echo requests per second
with 28,000 connections on a single CPU,
which is far more than needed; you rarely get more than 100 requests per
second...
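By the way, the popen arrangement I mentioned above is roughly this on the
parent side (untested sketch; the command string is made up, and
popen()/pclose() are POSIX, not standard C++):

#include <stdio.h>
#include <string>
#include <stdexcept>

// run a child program and collect everything it prints to its stdout
std::string run_child(const char* cmd)
{
    FILE* p = popen(cmd, "r");
    if(!p) throw std::runtime_error("popen failed");
    std::string out;
    char buf[512];
    size_t n;
    while((n = fread(buf, 1, sizeof buf, p)) > 0)
        out.append(buf, n);
    pclose(p);   // waits for the child to exit
    return out;
}

// usage, with a made-up command:
//   std::string result = run_child("./search 'some query'");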
Greets
For the OP, here is my code for the queue:
#include <pthread.h>
#include <cstring>   // strerror
#include <deque>

// thin wrapper around a pthread mutex; Exception is my own class (not shown)
class Mutex {
public:
    Mutex();
    ~Mutex();
    void lock();
    void unlock();
private:
    pthread_mutex_t mutex_;
};

class Service;                          // defined elsewhere in my code
static std::deque<Service*> lstSvc_;    // shared work queue
static Mutex lstSvcM_;                  // protects lstSvc_
......
Mutex::Mutex()
{
    pthread_mutex_init(&mutex_,0);
}
void Mutex::lock()
{
    int rc=pthread_mutex_lock(&mutex_);
    if(rc)throw Exception("mutex lock error:%s",strerror(rc));
}
void Mutex::unlock()
{
    int rc=pthread_mutex_unlock(&mutex_);
    if(rc)throw Exception("mutex unlock error:%s",strerror(rc));
}
Mutex::~Mutex()
{
    pthread_mutex_destroy(&mutex_);
}
// RAII scope guard: locks in the constructor, unlocks in the destructor
template <class Lock>
class AutoLock{
public:
    explicit AutoLock(Lock& l):lock_(l),owned_(true)
    {
        lock_.lock();
    }
    void lock()
    {
        lock_.lock();
        owned_ = true;
    }
    void unlock()
    {
        lock_.unlock();
        owned_ = false;
    }
    ~AutoLock()
    {
        if(owned_)          // avoid a double unlock after a manual unlock()
            lock_.unlock();
    }
private:
    AutoLock(const AutoLock&);              // non-copyable
    AutoLock& operator=(const AutoLock&);
    Lock& lock_;
    bool  owned_;
};
......
// worker side: take the next Service off the queue, if there is one
{
    AutoLock<Mutex> l(lstSvcM_);
    if(!lstSvc_.empty())
    {
        s = lstSvc_.front();
        lstSvc_.pop_front();
    }
    else
    {
        more = false;   // queue drained
        s = 0;
        continue;       // back to the enclosing loop (not shown)
    }
}
.....
// I/O side: once a socket has finished reading, queue it for servicing
case Socket::Reading:
    if(s->doneReading())
    {
        AutoLock<Mutex> l(lstSvcM_);
        lstSvc_.push_back(s);
    }
    else
        pl_.read(s);
    break;
I don't use condition variables; every thread performs both I/O and
servicing, so for waking the other threads I use:
#include <sys/socket.h>
#include <unistd.h>
#include <fcntl.h>
#include <poll.h>
#include <cerrno>

Poll::Poll(nfds_t size)
:fds_(new pollfd[size+1]),maxfds_(size),nfds_(0)
{
    // wake_ is a socketpair used only for waking threads blocked in poll()
    if(socketpair(AF_LOCAL,SOCK_STREAM,0,wake_)<0)
        throw Exception("Poll init error: %s",strerror(errno));
    int flags = fcntl(wake_[0], F_GETFL, 0);
    fcntl(wake_[0], F_SETFL, flags | O_NONBLOCK); // set non-blocking
    flags = fcntl(wake_[1], F_GETFL, 0);
    fcntl(wake_[1], F_SETFL, flags | O_NONBLOCK); // set non-blocking
    AutoLock<Mutex> l(lstPollM_);
    lstPoll_.push_back(this);
}
void Poll::wake()
{
    // a byte written to wake_[0] becomes readable on wake_[1], which wakes
    // a thread blocked in poll(); the result is deliberately ignored
    (void)::write(wake_[0],"1",1);
}
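If the OP would rather have worker threads sleep until something is queued,
the usual alternative to the socketpair wake-up is a condition variable.
An untested sketch of that (BlockingQueue is a made-up name):

#include <deque>
#include <pthread.h>

template<class T>
class BlockingQueue {
public:
    BlockingQueue()
    {
        pthread_mutex_init(&m_,0);
        pthread_cond_init(&cv_,0);
    }
    ~BlockingQueue()
    {
        pthread_cond_destroy(&cv_);
        pthread_mutex_destroy(&m_);
    }
    void push(const T& v)
    {
        pthread_mutex_lock(&m_);
        q_.push_back(v);
        pthread_cond_signal(&cv_);      // wake one waiting consumer
        pthread_mutex_unlock(&m_);
    }
    T pop()                             // blocks until an item is available
    {
        pthread_mutex_lock(&m_);
        while(q_.empty())               // loop guards against spurious wakeups
            pthread_cond_wait(&cv_,&m_);
        T v = q_.front();
        q_.pop_front();
        pthread_mutex_unlock(&m_);
        return v;
    }
private:
    std::deque<T>   q_;
    pthread_mutex_t m_;
    pthread_cond_t  cv_;
};

A consumer thread then simply calls pop() and sleeps until a producer pushes
something, instead of being woken through the poll() machinery above.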
Hope this helps.
--
http://maxa.homedns.org/
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]