Re: bad alloc
On Sep 2, 11:40 am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
wrote:
I can easily come up with examples that demonstrate that for this
particular case, terminating on memory failure is the best solution.
Unfortunately, it seems to me that some people advocate "you must
always terminate on any allocation failure in any situation, and this
is and will forever be the only valid solution".
No one has done that. Several have, however, argued the opposite: that
never failing (or coming as close to that as possible) is the correct
way to handle OOM.
Your post, unfortunately, seems to try to expand my overly simplified
example in such a way as to demonstrate that this will never work.
No, it strives to demonstrate that handling OOM through something
other than termination is quite difficult, much more difficult than
your simplified example makes it out to be. Moreover, even if you
can handle it, you still may not achieve your end goal, which was
isolation. Especially when you're running in a threaded environment.
Simple example: explode a JPEG image for editing or explode a zip file
in memory to look at the content. Multiple worker threads can all be
doing some processing. When one of the workers tries to allocate a
large amount of memory to process its job, it is that large allocation
that will fail. This is a relatively safe situation to recover from,
because in all probability what fails will be the single call that
makes the large allocation.
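In code, that claim seems to amount to something like the sketch below
(Job, decoded_size and process are hypothetical names I'm introducing,
not anything from your post): the worker makes one large allocation per
job and treats a failure of that single allocation as a recoverable,
per-job error.

#include <cstddef>
#include <memory>
#include <new>

struct Job { std::size_t decoded_size; /* ... */ };

bool process(const Job& job)
{
    try
    {
        // The single large allocation the argument hinges on.
        std::unique_ptr<unsigned char[]> buffer(
            new unsigned char[job.decoded_size]);
        // ... decode the JPEG / inflate the zip into buffer ...
        return true;
    }
    catch (std::bad_alloc &)
    {
        // The claim: only this large request fails, so the worker
        // can report failure for this one job and carry on.
        return false;
    }
}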
No, I have no reason to believe your probability estimates. Besides,
who said there's necessarily going to be a single huge call anyway?
It's a bad premise, and it's an even worse conclusion. Neither of your
two examples ipso facto requires large amounts of memory allocation.
It depends entirely on what you're doing.
What will fail depends heavily on the sequence of operations and the
behavior of your allocators. It's not even deterministic. Plus, you
need to give a general purpose definition for a "huge" block. Good
luck with that.
Maybe and maybe not. Simply because, as described above, what failed
was allocating the *large amount* of memory.
You don't know that. You cannot assume that the allocation failed
merely because it was large when writing an exception handler that
will reliably respond to an OOM condition, and when trying to make a
program more robust under low memory situations. You may be in a
state where all allocations are going to fail.
The following code is perfectly safe:
try
{
    int *p1 = new int[aMuchTooLargeNumber];
}
catch (std::bad_alloc &)
{
    // OK to ignore
    // I can even safely use memory to write logs
}
To be explicitly clear, the assumption in the catch block is wrong.
In general, you don't know what 'aMuchTooLargeNumber' is at coding
time, compile time, or run time. More importantly, it's generally
impossible to find out.
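And even if you could, the "safe to use memory in the handler" part
only holds if the failure really was a one-off oversized request.
Here's a sketch of the failure mode when the heap is genuinely
exhausted (make_log_line and attempt are names I'm making up for
illustration):

#include <cstddef>
#include <new>
#include <string>

std::string make_log_line(std::size_t n)
{
    // Building the message allocates; under real heap exhaustion this
    // can throw std::bad_alloc from inside the catch block below.
    return "allocation of " + std::to_string(n) + " ints failed";
}

void attempt(std::size_t n)
{
    try
    {
        int *p1 = new int[n];
        delete[] p1;
    }
    catch (std::bad_alloc &)
    {
        // If this throws, the exception escapes the "safe" handler
        // and propagates somewhere you didn't plan for.
        std::string line = make_log_line(n);
        // ... write line to the log ...
        (void)line;
    }
}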
That's the problem with your reasoning. You can't decide to handle
std::bad_alloc or not based on the size of the allocation. You can
decide to do it based on the operation the program was attempting to
undertake, which is what you're actually trying to do.
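Something along these lines, say (run_job and JobResult are
hypothetical, just to make the shape concrete): the try block scopes
the whole unit of work, not the one allocation that happens to look
big.

#include <new>

enum class JobResult { Done, FailedOutOfMemory };

template <typename Fn>
JobResult run_job(Fn&& do_work)
{
    try
    {
        do_work();   // every allocation the job makes, large or small
        return JobResult::Done;
    }
    catch (std::bad_alloc &)
    {
        // Everything the job built locally has been unwound; nothing
        // half-constructed is kept. Whether the rest of the process
        // is still healthy is a separate question.
        return JobResult::FailedOutOfMemory;
    }
}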
Once you've done that, you still need to explain a handler that
achieves your goal of isolation. Then you need to demonstrate it's
less work than doing something like running the workers in separate
processes instead of threads. Otherwise, you don't have a winning
solution.
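For comparison, a minimal POSIX-only sketch of the per-process
alternative (run_one_job is a placeholder for the actual decode or
inflate work): the child does the job, and if it dies from OOM, the
kernel reclaims everything it allocated without touching the other
workers.

#include <sys/wait.h>
#include <unistd.h>

// Hypothetical job body: allocate, process, release; 0 on success.
int run_one_job() { /* ... */ return 0; }

bool run_isolated()
{
    pid_t child = fork();
    if (child == 0)
    {
        // Child process: a bad_alloc or an OOM kill ends only this
        // child, and the OS reclaims every byte it allocated.
        _exit(run_one_job());
    }
    if (child < 0)
        return false;    // fork itself failed

    int status = 0;
    if (waitpid(child, &status, 0) != child)
        return false;
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

int main()
{
    return run_isolated() ? 0 : 1;
}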
In fact, since the heap allocator will be thread safe (using various
methods such as mutexes, locks and sub-heaps), there should be no
point in time when a thread attempting to allocate a "not too large"
block would fail because a different thread is currently attempting to
allocate "a much too large" block.
No, this is also incorrect, because it's predicated on the assumption
that your allocation failed simply because it was too large relative
to the other allocations your program will make.
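If you want to see it, here's a Linux-flavoured sketch (the
address-space cap via setrlimit is only there to make exhaustion quick
and harmless to reproduce): once one thread has filled the heap with
perfectly reasonable 1 MB blocks, a 64-byte allocation from the main
thread can fail too. Nothing about the failing request has to be
"much too large".

#include <sys/resource.h>
#include <iostream>
#include <new>
#include <thread>
#include <vector>

int main()
{
    // Cap the address space (~512 MB) so exhaustion is cheap to hit.
    rlimit lim{512u * 1024 * 1024, 512u * 1024 * 1024};
    setrlimit(RLIMIT_AS, &lim);

    std::vector<char*> blocks;
    std::thread consumer([&] {
        try
        {
            for (;;)
                blocks.push_back(new char[1024 * 1024]);  // 1 MB each
        }
        catch (std::bad_alloc &)
        {
            // The heap is now essentially full.
        }
    });
    consumer.join();

    try
    {
        char *p = new char[64];   // small, "not too large"
        delete[] p;
        std::cout << "small allocation succeeded\n";
    }
    catch (std::bad_alloc &)
    {
        std::cout << "small allocation failed as well\n";
    }

    for (char *b : blocks)
        delete[] b;
}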
Adam