Re: Memory issue
"Alf P. Steinbach"
I want to use the std::vector::push_back function to keep data in
RAM. What will happen if no more memory is available?
In practice, what happens on a modern system as free memory
starts to become exhausted and/or very fragmented is that the
system slows to a crawl, so you're unlikely to actually reach
that limit with a set of small allocations.
On modern systems, it's not rare for the actual memory to be the
same size as the virtual memory, which means that in practice,
you'll never page.
No, that theory isn't practice. I think what you would have meant to
write, if you thought about it, would have been "which means that,
provided there are no other processes using much memory, an ideal OS won't
page". And some systems may work like that, and when you're lucky you
don't have any other processes competing for actual memory. :-)
You set the swap file size to 0 and there is no swap, ever. On a machine
that has 2G or 4G of RAM, that is a reasonable setting. (I definitely use such
a configuration with Windows, and only add swap after there is a clear
reason...)
And adding so much swap just to gain your crawl effect is practical for what, exactly?
Andy Chapman's comment else-thread was more relevant, I think.
Because with enough RAM you can conceivably hit the limit of the address
space and get a std::bad_alloc before the OS starts thrashing.
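(To make the push_back behaviour concrete -- a minimal sketch of my own, not
from anyone's code above: on an implementation where operator new honestly
reports failure, push_back throws std::bad_alloc and leaves the vector with
its previous contents, so you can catch the exception and carry on.)

    #include <iostream>
    #include <new>        // std::bad_alloc
    #include <vector>

    int main()
    {
        std::vector<char> v;
        try {
            for (;;) {
                v.push_back('x');   // grows until an allocation fails
            }
        } catch (std::bad_alloc const&) {
            // v is still valid and keeps everything pushed so far.
            std::cout << "gave up after " << v.size() << " bytes\n";
        }
    }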
That is another hard limit: very common OSes use a memory layout model that
only gives out 2G or 3G of address space, and then that's it. (When Win32 was
introduced with this property back in the 90s we were talking about the new
'640K shall be enough for everyone' problem, though the usual memory in PCs of
those days was 8-16M, and filling a gig looked insane -- but bloatware
is like a gas, it fills every cubic bit of space. ;)
(4GB for both seems to be a common figure,
even on 64 bit systems.) The phenomenon you describe would
mainly apply to older systems.
Well, my experience and interest in recent years has mostly been with
Windows.
Fighting memory exhaustion is IME a lost battle on Windows, at least it was a
few years ago. You launch 'the elephant' (the stress application from the SDK) and
everything around you crashes, while whatever remains is unusable, as dialogs
pop up with broken text, mangled buttons, etc.
In an MFC app, I abandoned hope of recovering from memory errors -- no matter what
I did in my own code, there were always portions of the library that failed; also,
avoiding any allocation during teardown is too tricky. (And as mentioned, the GUI
tends to break...)
If all memory management were done in a single place, it could suspend the
process until memory became available (see the new-handler sketch below), but
that is not the case: different libs use a mix of new and malloc at the language
level, a couple of Win32 APIs, and later some COM interfaces too.
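(The closest thing to a single choke point the language itself offers is the
new-handler; a sketch, assuming only that the allocations in question go
through operator new -- which, as noted above, malloc, the Win32 heap
functions and COM allocators do not:)

    #include <new>        // std::set_new_handler, std::bad_alloc
    #include <chrono>
    #include <thread>

    namespace {
        // Called by operator new whenever an allocation fails.
        // Returning normally makes new retry; throwing gives up.
        void wait_for_memory()
        {
            static int retries = 5;           // give up eventually
            if (retries-- > 0) {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                return;                       // retry the allocation
            }
            throw std::bad_alloc();
        }
    }

    int main()
    {
        std::set_new_handler(wait_for_memory);
        // Allocations done with new (including std::vector's default
        // allocator) now wait and retry a few times before failing;
        // malloc, HeapAlloc, CoTaskMemAlloc etc. are unaffected.
    }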
AFAIK it's quite difficult to make Windows refrain from paging.
System / Performance / Virtual memory -> set to 0. (With some recovery settings
there is a ~2M lower limit.)
As the paging strategy is 'interesting', to say the least -- the system writes to swap
even when you are using just 15% of the RAM -- it is well advised to turn it off
if you have 2G for simple 'desktop' use...
This is a problem for some people implementing Windows based servers.
A server is certainly another kind of thing: it needs a lot of memory, and swap is
beneficial for surviving usage bursts; here the system also has a better chance
of using swap sensibly, keeping preprocessed tables (indexes, compilations)
paged out until they are next needed.
However, a very large allocation might fail.
Or not. Some OSes don't tell you when there's not enough
virtual memory, in which case the allocation works, but you get
a core dump when you use the memory.
Example?
Linux. Google for memory overcommit. (Or there is a good description in
Exceptional C++ Style.) It is exactly as nasty as it sounds -- the OS just
gives you address space, and when it runs out of pages you get shot on ANY
access. There is no way to write a conforming C++ implementation for such an
environment. :(
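(A rough illustration, assuming a 64-bit Linux box; the exact behaviour depends
on the kernel's overcommit setting, e.g. /proc/sys/vm/overcommit_memory, and the
64 GB figure is just an arbitrary 'too big' number. With overcommit the
allocation itself 'succeeds', and the process only dies -- courtesy of the OOM
killer -- when the pages are actually touched.)

    #include <cstddef>
    #include <cstdio>
    #include <cstring>
    #include <new>

    int main()
    {
        const std::size_t huge = 64ull * 1024 * 1024 * 1024;   // 64 GB
        char* p = new (std::nothrow) char[huge];
        if (p == nullptr) {
            std::puts("allocation refused up front");  // what the standard expects
            return 1;
        }
        std::puts("allocation 'succeeded', touching the pages...");
        std::memset(p, 1, huge);   // with overcommit, the OOM killer may
                                   // terminate the process somewhere in here
        std::puts("survived -- there really was enough memory");
        delete[] p;
    }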