Re: St9bad_alloc exception in declaring double pointer
vijaykaus@gmail.com wrote:
Hi,
I am working on a C++ program where I need to manipulate large
matrices. I am getting a St9bad_alloc exception thrown when I try to
allocate large matrices (typically >5000x5000). However, it works
fine with smaller matrices. Is that expected, or is there anything I
can do to get rid of it?
It means you're out of memory: operator new could not obtain the
storage it asked for and threw std::bad_alloc (St9bad_alloc is simply
that type's mangled name). However, assuming the element type is
double (8 bytes), a 5000x5000 matrix costs only 200,000,000 bytes,
which is about 190 MB. It would be surprising if you didn't have that
much free memory for a single matrix.
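If you want to see which allocation is the one that fails, you can
catch the exception around it yourself. A minimal sketch (the sizes
and message are placeholders, not taken from your program):

#include <cstddef>
#include <iostream>
#include <new>

int main()
{
    const std::size_t rows = 5000, cols = 5000;
    try
    {
        double* a = new double[rows * cols];  // ~190 MB in one block
        // ... use a ...
        delete[] a;
    }
    catch (const std::bad_alloc& e)
    {
        std::cerr << "failed to allocate "
                  << rows * cols * sizeof(double)
                  << " bytes: " << e.what() << '\n';
        return 1;
    }
}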
Or, if it is a memory
issue, are there better memory-saving ways to handle 2D arrays?
The pattern you're probably using for 2D arrays is something like:

double **a = new double*[rows];
for (int i = 0; i < rows; i++) a[i] = new double[cols];
// use a[i][j]
for (int i = 0; i < rows; i++) delete[] a[i];
delete[] a;
This costs (sizeof(double *) * rows + sizeof(double) * rows * cols)
bytes, plus the allocator's bookkeeping overhead for each of the
(1 + rows) allocations. With rows = cols = 5000 that overhead is not
what hurts you, but the sheer number of allocations can be a problem,
especially when rows is large and cols is small: you end up with a
lot of allocations of small chunks, which is not very efficient.
There are some ways to reduce the number of allocations. For example,
double *a0 = new double[rows * cols];
double **a = new double*[rows];
for (int i = 0; i < rows; i++) a[i] = a0 + cols * i;
// use a[i][j]
delete[] a;
delete[] a0;
This doesn't reduce the explicit memory cost, but it reduces the
number of allocations to 2 without sacrificing the convenience of the
a[i][j] notation. Or you can collapse everything into a single
allocation:
double *a = new double[rows * cols];
// use a[cols * i + j]
delete[] a;
This reduces the explicit memory cost by (sizeof(double *) * rows),
and also reduces the number of allocations to 1. However, you have to
calculate the index yourself.
Of course, in all of the above cases it's far better to use
std::vector<double> in place of each new double[] ... delete[] pair,
so that you don't have to deal with the deallocation explicitly.
The std::vector equivalents of the three versions are:
std::vector< std::vector<double> > a(rows, std::vector<double>(cols));
// use a[i][j]

std::vector<double> a0(rows * cols);
std::vector<double*> a(rows);
for (int i = 0; i < rows; i++) a[i] = &a0[cols * i];
// use a[i][j]

std::vector<double> a(rows * cols);
// use a[cols * i + j]
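If writing a[cols * i + j] by hand in the last version feels
error-prone, you can hide the index arithmetic behind a thin wrapper.
This is only a sketch of the idea (the class name and interface are
mine, nothing standard):

#include <cstddef>
#include <vector>

class Matrix  // minimal 2D view over one contiguous vector
{
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols) {}

    double& operator()(std::size_t i, std::size_t j)
        { return data_[cols_ * i + j]; }
    double operator()(std::size_t i, std::size_t j) const
        { return data_[cols_ * i + j]; }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<double> data_;
};

// usage: Matrix a(rows, cols); a(i, j) = 42.0;

Everything is freed automatically when the Matrix goes out of scope,
and the elements still live in a single allocation.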
I am deallocating pointers in the standard way:
try
{
    for (int i = 0; i < 3*M; i++)
        delete[] AqT[i], Pq[i];
    for (int i = 0; i < 3; i++)
        delete[] Aq[i], dAq[i], AqAqT[i], IAqAqT[i];
    delete[] Aq, AqT, AqAqT, IAqAqT, Pq, phit, dq;
}
catch (exception& e)
{
    cout << "Standard exception in deAllocation: " << e.what() << endl;
    exit(1);
}
Wait,

delete[] Aq[i], dAq[i], AqAqT[i], IAqAqT[i];

doesn't do what you probably meant:

delete[] Aq[i];
delete[] dAq[i];
delete[] AqAqT[i];
delete[] IAqAqT[i];

The commas here are the comma operator, and a delete-expression binds
more tightly than the comma operator, so the statement parses as

(delete[] Aq[i]), dAq[i], AqAqT[i], IAqAqT[i];

Only Aq[i] is actually deleted; dAq[i], AqAqT[i] and IAqAqT[i] are
merely evaluated and their values discarded, so the memory they point
to leaks. The same goes for "delete[] AqT[i], Pq[i];" (Pq[i] leaks)
and for the final "delete[] Aq, AqT, AqAqT, IAqAqT, Pq, phit, dq;",
where only Aq is deleted.
If you have been doing this allocation and (incomplete) deallocation
many times in a loop, then this leak could well be the reason for the
memory problem.
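To see the parse in isolation, here is a minimal sketch with
throwaway names (p and q are mine, not from your code):

int main()
{
    double* p = new double[10];
    double* q = new double[10];

    delete[] p, q;  // comma operator: same as (delete[] p), q;
                    // only p is freed; the value q is just discarded

    delete[] q;     // q has to be deleted in a statement of its own

    return 0;
}

Or just use the std::vector versions above, and there is nothing to
delete at all.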
--
Seungbeom Kim
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]