Re: Does c++ (under linux) overcommit memory?
On 17 Feb., 11:10, "jon wayne" <jon.wayne...@gmail.com> wrote:
> Hi
> I was always under the assumption that Linux always overcommits memory
> by default - but I'm getting unexpected results
> when requesting a large amount of memory using new (C++).
That I don't know.
> In the sense, say I try to dynamically allocate a large array p (int *p):
> p = (int *) malloc(N * sizeof(int)); // ---- 1
> and replace it by
> p = new int[N * sizeof(int)]; // -- 2
That is not a replacement (unless sizeof(int) happens to be 1 on your
platform). The corresponding expression is
p = new int[N];
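To see the size difference at work, here is a tiny sketch (the small N
is purely illustrative, and it assumes 4-byte ints):

    #include <cstddef>  // std::size_t
    #include <cstdio>   // std::printf

    int main()
    {
        const std::size_t N = 1000;
        // malloc(N * sizeof(int)) requests room for N ints:
        std::printf("%zu bytes\n", N * sizeof(int));               // 4000
        // new int[N * sizeof(int)] requests N * sizeof(int) ints,
        // i.e. sizeof(int) times as much memory:
        std::printf("%zu bytes\n", N * sizeof(int) * sizeof(int)); // 16000
    }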
> where N = 1000000000000000
> the second statement always generates a bad_alloc exception -
> Agreed that if you try to access p it'd give a SIGSEGV - but why
> should a plain allocation give a bad_alloc - "C" doesn't seem to mind
> it - shouldn't C++ behave the same?
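Part of the answer is simply how the two report failure: malloc returns
a null pointer, while new[] throws std::bad_alloc. A minimal sketch of
the contrast (assuming a 64-bit Linux machine; whether the malloc call
returns non-null for an over-large request depends on the kernel's
overcommit policy, see /proc/sys/vm/overcommit_memory):

    #include <cstddef>  // std::size_t
    #include <cstdio>   // std::printf
    #include <cstdlib>  // std::malloc, std::free
    #include <new>      // std::bad_alloc

    int main()
    {
        const std::size_t N = 1000000000000000ULL;

        // malloc signals failure with a null pointer:
        int* p = (int*) std::malloc(N * sizeof(int));
        std::printf("malloc: %s\n", p ? "non-null" : "NULL");
        std::free(p);

        // new[] signals failure by throwing std::bad_alloc:
        try {
            int* q = new int[N];
            std::printf("new: succeeded\n");
            delete[] q;
        }
        catch (const std::bad_alloc&) {
            std::printf("new: threw std::bad_alloc\n");
        }
    }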
I would normally recommend that you use std::vector. Here, you'd have:
std::vector<int> v(N);
(and use &v[0] whenever you want a pointer to its first element).
In that case you'd get a segmentation violation because the vector would
be initialised and the overcommitment would kick in right away.
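For completeness, a minimal sketch of that variant (same hypothetical N
as above); the constructor zero-initialises every element, so the pages
are written to right away and an overcommitting kernel may kill the
process here instead of letting the allocation "succeed":

    #include <cstddef>  // std::size_t
    #include <cstdio>   // std::printf
    #include <new>      // std::bad_alloc
    #include <vector>

    int main()
    {
        const std::size_t N = 1000000000000000ULL;
        try {
            std::vector<int> v(N);  // allocates AND zero-initialises
            std::printf("first element at %p\n",
                        static_cast<void*>(&v[0]));
        }
        catch (const std::bad_alloc&) {
            std::printf("vector: threw std::bad_alloc\n");
        }
    }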
[snip]
/Peter