Re: bad alloc
On Sep 2, 1:25 am, Joshua Maurice <joshuamaur...@gmail.com> wrote:
> On Sep 1, 4:58 pm, James Kanze <james.ka...@gmail.com> wrote:
>> On Aug 31, 5:33 pm, Adam Skutt <ask...@gmail.com> wrote:
>>> If the operating system's virtual memory allows for memory allocation
>>> by other processes to cause allocation failure in my own, then
>>> ultimately I may be forced to crash anyway. Many operating systems
>>> kernel panic (i.e., stop completely) if they reach their commit limit
>>> and have no way of raising the limit (e.g., adding swap automatically
>>> or expanding an existing file). Talking about other processes when
>>> all mainstream systems provide robust virtual memory systems is
>>> tomfoolery.
>> All mainstream systems except Linux (and I think Windows, and
>> some versions of AIX, and I think some versions of HP/UX as
>> well), you mean. The default configuration of Linux will start
>> killing random processes when memory gets tight (rather than
>> returning an error from the system request for memory).
> I agree this sounds nice in theory, but in current practice it
> doesn't work out, from what I understand. I have made a post or
> two about it else-thread. You have no guarantee that the
> "misbehaving process", or the "process which is doing a complex
> LDAP thing", is the one that is going to get the out-of-memory
> failure (a NULL return from malloc).
No. The problem isn't trivial, and it's quite possible that the
process which gets the error return is someone else's editor,
and not the LDAP process. Still, a process which gets an error
can take some appropriate action (an editor might spill to disk,
for example). A process which doesn't get an error can't.
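For illustration only (the names and the spill file are made up, and
it obviously only helps if the allocator actually reports the
failure), something along these lines:

    // Sketch: an editor-like component that spills its buffer to disk
    // when an allocation fails, instead of dying. Illustrative names only.
    #include <cstddef>
    #include <fstream>
    #include <new>
    #include <vector>

    // Write the in-memory data out and release it, so the process can
    // keep running with a much smaller footprint.
    void spill_to_disk(std::vector<char>& buffer)
    {
        std::ofstream out("spill.tmp", std::ios::binary);
        out.write(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        buffer.clear();
        buffer.shrink_to_fit();
    }

    bool grow_buffer(std::vector<char>& buffer, std::size_t extra)
    {
        try {
            buffer.resize(buffer.size() + extra);   // may throw std::bad_alloc
            return true;
        } catch (std::bad_alloc const&) {
            spill_to_disk(buffer);                  // recover instead of crashing
            return false;                           // caller can retry or degrade
        }
    }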
> Another process, like an important system process, may also try
> right then to allocate memory, and thus fail, which is bad for the
> entire OS and all processes running.
The important thing is that the process knows that the
allocation has failed, and takes appropriate action.
> It's the same problem. An abusive process can trigger the OOM
> killer against another process with overcommit on, and that same
> abusive process can cause a malloc failure in another process
> with overcommit off.
Or with overcommit on. DoS (denial of service) is a classical
attack strategy, and the only thing overcommit does in this
regard is make it impossible for the attacked services to
recognize the problem and react to it.
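A trivial sketch of why the victim can't react (the exact behavior
depends on the overcommit setting, /proc/sys/vm/overcommit_memory on
Linux, and on the machine, so treat it as an illustration and not a
guarantee):

    // Sketch: under overcommit, the allocation call itself "succeeds",
    // so there is no error to handle; the failure only surfaces later,
    // when the pages are actually written. Assumes a 64-bit build.
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main()
    {
        std::size_t const huge = 64ULL * 1024 * 1024 * 1024;   // more than RAM + swap on many machines
        char* p = static_cast<char*>(std::malloc(huge));
        if (p == nullptr) {
            // With strict accounting (overcommit off), we get here and can react.
            std::puts("allocation refused; the process can degrade gracefully");
            return 1;
        }
        // With overcommit on, malloc may well return non-null anyway; memory is
        // only really committed when the pages are first written, and that is
        // when the OOM killer may pick a victim, this process or an innocent one.
        std::memset(p, 0, huge);
        std::puts("survived; the machine had that much memory after all");
        std::free(p);
        return 0;
    }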
> I suspect that most processes which are not the ones being
> "abusive", but are merely innocent bystanders, including system
> processes, will behave similarly. With overcommit on, they will
> be killed with great prejudice. With overcommit off, when they
> get the malloc error, most will respond just the same and die a
> quick death.
That would be a very poorly written service that died just
because of a malloc failure.
> To cut off a pre-emptive argument, I don't think it would work in
> practice to say "Oh, critical components need to pre-allocate memory",
> as that is unreasonable and will not actually happen. We need a
> different solution.
All services, critical or not, should have some sort of
reasonable response to all possible error conditions.
Insufficient memory isn't any different in this respect to any
one of a number of other error conditions, like disk full.
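In code, the principle is exactly the same as for any other
per-request failure; a sketch (the request handling is a stand-in,
not any real server):

    // Sketch: treat allocation failure like any other per-request error
    // (disk full, bad input, ...) and fail the request, not the service.
    #include <iostream>
    #include <new>
    #include <string>

    std::string handle_request(std::string const& request)
    {
        // Stand-in for real work; real handlers may allocate arbitrarily much.
        return "ok: " + request;
    }

    void serve_one(std::string const& request)
    {
        try {
            std::cout << handle_request(request) << '\n';
        } catch (std::bad_alloc const&) {
            // Reject just this request; the service itself keeps running,
            // exactly as it would for a full disk or a malformed request.
            std::cerr << "error: temporarily out of memory\n";
        }
    }

    int main()
    {
        serve_one("example request");
    }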
> PS: The obvious solution to me appears to be per-user virtual memory
> limits, but I'm not sure if that would actually solve anything in
> practice. I need more information and more time to consider.
All of the Unices I know do implement per-process limits, which
can be useful in specific cases as well.
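On a POSIX system, a process can even impose such a limit on itself
with setrlimit; roughly (the 512 MB figure is arbitrary, and RLIMIT_AS
may be spelled differently on some older systems):

    // Sketch: per-process address-space limit, so a runaway allocation
    // fails inside this process (NULL from malloc, bad_alloc from new)
    // instead of pressuring the whole machine.
    #include <sys/resource.h>
    #include <cstdio>

    bool limit_address_space(rlim_t bytes)
    {
        rlimit rl;
        rl.rlim_cur = bytes;    // soft limit: allocations beyond this fail
        rl.rlim_max = bytes;    // hard limit
        return setrlimit(RLIMIT_AS, &rl) == 0;
    }

    int main()
    {
        if (!limit_address_space(512UL * 1024 * 1024))
            std::perror("setrlimit");
        // From here on, allocations in this process fail once the limit is
        // hit, giving the code a chance to react rather than being OOM-killed.
    }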
--
James Kanze