Re: Segmentation fault -- why???
Cristian wrote:
On 11 May 2006 19:09:02 -0400, "kanze" <kanze@gabi-soft.fr> wrote:
Sebastian Redl wrote:
In sum, the expression is (almost) the same as the much
simpler:
3.0 * (j+1) * (j+1)
and the fact that he didn't write it this way suggests that
it isn't actually the expression he wanted. I vaguely
suspect that what he was looking for was something like:
3.0 / pow( j+1, 4 )
But I'm not really sure -- that's so much simpler than what
he wrote as well, it's surprising that he wouldn't use it.
Yes, I want to perform the following calculation:
3.0 / pow( j+1, 4 )
The problem, however, is that I come from the Windows world, and
on Windows the use of pow() caused me noticeable slowdowns and at
times overflow problems in these calculations (large matrices;
dimensions: 20000 x 20000 double)
Curious. I would expect the speed to be roughly the same, and I
certainly cannot see the results overflowing unless you were
picking up a pow(int,int) as an extension; casting the first
argument to double before the call should have fixed that. (In
earlier times, floating point on Intel machines was very slow,
and offering such functions as an extension made sense. Today,
I think the Intel processors are like most others, with floating
point as fast as, if not faster than, integral arithmetic.)
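Something along these lines -- just a sketch, and the function
name is only for illustration -- forces the floating point
overload, so that no pow(int,int) extension can be picked up:

#include <cmath>

double
term( long j )
{
    //  Casting the first argument to double means that the
    //  standard std::pow overload for double is chosen, never an
    //  integral extension, so the intermediate power cannot
    //  overflow an int.
    return 3.0 / std::pow( static_cast< double >( j + 1 ), 4 ) ;
}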
In Windows the management of the stack is probably different
It is. But I don't know too much about it.
considering that up to roughly 10000x10000 I succeeded in
compiling without problems.
I have resolved using std::vector
For a two dimensional array, I'm not sure that that is the ideal
solution -- unless you wrap it in a class which allows [][].
(The first [] is a member of the class, and returns a helper
object -- double* actually works fine -- which supports the
second [].)
The problem is that using
const long dim1=...,dim2=...;
std::vector< std::vector< double > > m( dim1, std::vector< double >( dim2 ) ) ;
I get strange behavior as regards the speed:
I can imagine -- for a 20000x20000 matrix, you're doing 20001
allocations (one for the outer vector, one for each of the 20000
inner vectors), plus the initializations.
for example:
m : 10^4 x 10^4   -- run-time of the calculation: 3 sec
m : 2*10^4 x 10^4 -- run-time: more than 100 sec
Why?
Well, you've doubled the number of allocations. In all
likelihood, you've stumbled across some undocumented internal
boundary which has led to the allocations no longer being more or
less contiguous; you've ended up with bad locality, and have
started paging.
On Windows, in contrast, I only get a doubling of the run-time
for the example above
It depends on a lot of external factors, over which you have
very little control. Using std::vector is, per se, a good idea,
but I'd allocate a single large array, and do the index
arithmetic myself. Something like:
#include <vector>

class Matrix
{
public:
    Matrix( long dim1, long dim2 )
        : myDim1( dim1 )
        , myDim2( dim2 )
        , myData( dim1 * dim2 )
    {
    }

    double* operator[]( long i )
    {
        //  Row i starts i * myDim2 elements into the single
        //  contiguous block.
        return &myData[ 0 ] + i * myDim2 ;
    }

    //  ...

private:
    long                  myDim1 ;
    long                  myDim2 ;
    std::vector< double > myData ;
} ;
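Usage would then look something like this (just a sketch, reusing
the dimensions and the formula from above; it needs <cmath> for
std::pow):

    const long dim1 = 20000 ;
    const long dim2 = 20000 ;
    Matrix m( dim1, dim2 ) ;
    for ( long i = 0 ; i < dim1 ; ++ i ) {
        for ( long j = 0 ; j < dim2 ; ++ j ) {
            m[ i ][ j ] = 3.0 / std::pow( static_cast< double >( j + 1 ), 4 ) ;
        }
    }

The entire matrix is a single allocation, so the data is
contiguous, and locality is about as good as you can get.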
--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]