Re: bad alloc
On Sep 8, 3:17 am, Paul <pchris...@yahoo.co.uk> wrote:
On Sep 7, 1:38 pm, Goran <goran.pu...@gmail.com> wrote:
On Sep 7, 1:29 pm, Paul <pchris...@yahoo.co.uk> wrote:
On Sep 7, 8:11 am, Goran <goran.pu...@gmail.com> wrote:
On Sep 6, 4:37 pm, Paul <pchris...@yahoo.co.uk> wrote:
On Sep 6, 2:43 pm, yatremblay@bel1lin202.(none) (Yannick Tremblay) wrote:
In article <2ab097e7-57e7-4e01-9d05-5630fe255...@f41g2000yqh.googlegroups.com>,
Adam Skutt <ask...@gmail.com> wrote:
On Sep 4, 1:56pm, James Kanze <james.ka...@gmail.com> wrote:
On Sep 4, 1:03 am, Ian Collins <ian-n...@hotmail.com> wrote:
On 09/ 4/11 11:20 AM, James Kanze wrote:
On Sep 2, 6:45 am, Ian Collins <ian-n...@hotmail.com> wrote:
On 09/ 2/11 04:37 PM, Adam Skutt wrote:
[...]
I agree. On a decent hosted environment, memory exhaustion is usually
down to either a system-wide problem or a programming error.
Or an overly complex client request.
Not spotting those is a programming (or specification) error!
And the way you spot them is by catching bad_alloc:-).
No, you set upfront bounds on allowable inputs. This is what other
engineering disciplines do, so I'm not sure why computer programmers
would do something different. Algorithms that permit bounded response
to unbounded input are pretty rare in the grand scheme of things.
Even when they exist, they may carry tradeoffs that make them
undesirable or unsuitable (e.g., internal vs. external sort).
So, if I understand you correctly, you are saying that you must always
set up some artificial limits on the external inputs, and set them
artificially low, so that no matter what is happening in the rest of
the system, the program will never run out of resources....
This seems like a very bad proposition to me. The only way to win is
to reserve and grab at startup time all of the resources you might
potentially ever need in order to meet the worst-case scenario of your
inputs.
This is not possible in the situation where a program is limited by
system memory. As a crude example, consider a text editor opening a new
window to display each text file; the number of windows is limited by
available system RAM.
Not at all. Say that you simply load said text into memory (a crude
approach, but it works for a massive number of uses). If the file is
3 bytes, chances are you'll open thousands. If the file is a couple of
megs, you won't get there.
But if the file size is unknown until, say, a user selects a file from a
dialog window, you can't predict how many buffers you will need, or what
size each buffer will need to be.
The only way this system can really work is if you grab a memory pool
and then have some kind of allocation handler that processes
allocation/deallocation from the pool.
I don't know how you would know what size of pool to grab initially.
Perhaps trial and error until a bad_alloc is thrown. :)
Sorry, I poorly explained myself there. I was arguing with something
that wasn't written.
What I wanted to say is that the number of windows you'll get to open
will vary wildly depending on the file size. I agree that one
can't predict anything.
Therefore, the best (and simplest if you ask me) way to proceed is to
try to allocate and do __not__ "handle" bad_alloc. I pretty much agree
with Skutt that "handling" OOM is impossible, especially at the
spot where it occurred, because memory is possibly tightest there. In
the imaginary editor, imagine the sequence of events:
1. ask the user which file to open
2. create a "frame" window for the file
3. create, I dunno, borders, a toolbar, whatever
4. create a widget to host the text
5. load the text (say, the whole file into a buffer that you pass to the
widget for display)
In pseudo-C++, that might be:
auto_ptr<Frame> openFile(const char* name)
{
    auto_ptr<Frame> frame(new Frame()); // owns the Frame from here on
    frame->Decorate();
    EditorWidget& e = frame->GetEditor();
    vector<char> text = LoadFile(name);
    e.SetText(text);
    return frame;                       // nothing threw: hand ownership to the caller
}
In the above, you allocate all sorts of stuff: frame, "decorations",
editor widget inside the frame. (I presume that frame "owns" that, and
GetEditor creates actual EditorWidget e.g. on demand, therefore it
gives it out as a reference). I also presume that LoadFile is a
function that loads a file into vector<char>. I presume that any
function you see throws an exception in case of any problem.
I say that the above code is resilient to resource shortage, and that,
if there is a resource shortage at any point bar the "new Frame()" line,
it will nicely clean up behind itself and leave you with at least some resources.
You can call this as much as you like and you'll be fine. No arena
allocators, no pools, no try/catch, no nothing. I further say that C++
makes it +/- easy to write similarly correct code.
Finally, I say: boy did I go off on a tangent here...
Well, TBH I don't know WTF you are talking about, but if you had such an
app I would think the sensible resolution would be to stop opening
windows. Display a message to the user and say no more windows until you
close some.
I'll try to clarify (my snippet is full of presumptions, I thought
they were +/- obvious; I am pretty much certain they are reasonable).
Suppose that this app has a "generic" exception handler (UI toolkits
do have such a thing in their UI-handling message loops). Typically,
said loop would receive a command-type message ("open a file") from
the user. That would end up in some function that asks for the file
name and then, possibly, my openFile would get called. Say that
openFile should return a pointer to a "Frame" (window) object that
displays the file.
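To make that concrete, here is a minimal sketch of such a "generic" handler,
under the assumption of a hypothetical toolkit; Command, GetNextCommand and
Dispatch are made-up stand-ins for whatever the real message loop provides:

#include <exception>
#include <iostream>
#include <string>

// Hypothetical stand-ins for the toolkit's message-loop pieces.
struct Command { std::string name; };
bool GetNextCommand(Command& cmd);   // blocks until the user does something
void Dispatch(const Command& cmd);   // may end up calling openFile(...)

void MessageLoop()
{
    Command cmd;
    while (GetNextCommand(cmd))
    {
        try
        {
            Dispatch(cmd);           // anything thrown below lands here
        }
        catch (const std::exception& e)
        {
            // "Generic" handler: report the failure and carry on;
            // the application itself stays up.
            std::cerr << "Operation failed: " << e.what() << '\n';
        }
    }
}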
So what happens if something, anything, goes wrong in openFile? Well,
nothing bad. An exception is thrown, hopefully containing, one way or
another, info on what has gone wrong. See that auto_ptr<Frame> there?
It ensures that, if an exception is thrown, the newly allocated Frame
instance will be deleted. See that vector<char> returned by LoadFile?
It ensures that whatever storage might have been allocated for the file
contents will be freed. I claim: similar logic can be +/- trivially
applied to any bit of code for it to be error-resilient (a.k.a.
exception-safe).
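For completeness, here is one way the presumed LoadFile could look. This is
just a sketch (whole file read via ifstream); the only property the argument
relies on is that it returns a vector<char> and throws on any failure:

#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>
#include <vector>

// One possible LoadFile: read the whole file into a vector<char>,
// throwing on any failure. If the file is too big, the vector's
// allocation throws bad_alloc, which simply propagates out of openFile.
std::vector<char> LoadFile(const char* name)
{
    std::ifstream in(name, std::ios::binary);
    if (!in)
        throw std::runtime_error(std::string("cannot open ") + name);
    std::vector<char> text((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    if (in.bad())
        throw std::runtime_error(std::string("error reading ") + name);
    return text;
}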
About your "no more windows until you close some" idea: there's no
__need__ for that. What if the file was too big for the current system
state, and what if the user could open a smaller one? There's no __need__
to prevent further file opening, because:
1. there is __no__ harm in trying
2. some other file might open fine.
The key thing here, and it has been from the very start: the exception
safety guarantees of any bit of code must be correct. For example, the
openFile function has the "strong" exception guarantee: either it works as
described, or there has been an error and any temporary changes that might
have been made to the program were rolled back (e.g. the allocated Frame,
the EditorWidget and the file contents buffer were all freed). C++, in
particular, offers enough mechanisms to make writing code that is well
designed WRT exception safety guarantees a reasonably easy affair.
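As a small, self-contained illustration of the "strong" guarantee (Document
and ReplaceLines are made up for the example): do everything that can throw
on a temporary, then commit with an operation that cannot throw:

#include <string>
#include <vector>

class Document
{
    std::vector<std::string> lines_;
public:
    void ReplaceLines(const std::vector<std::string>& newLines)
    {
        // Everything that can throw (allocation, copying) happens
        // on a temporary; the real state is untouched so far.
        std::vector<std::string> tmp(newLines);

        // Commit with a non-throwing operation: all or nothing.
        lines_.swap(tmp);
    }
};

If the copy throws (bad_alloc included), the Document is exactly as it was
before the call, which is the same roll-back behaviour openFile gets for
free from auto_ptr and vector.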
Goran.