Re: Making subfoos not leak memory
Tom Hawtin wrote:
In your example, what happens if you have two WeakSubFoo from a single
BackingFoo...?
They've become independent, of course. This isn't a semantic problem if
the objects are immutable (which is where you'd normally want this as an
efficiency measure anyway, since otherwise there are aliasing problems).
It's only an efficiency problem if the WeakSubFoos overlap substantially
and the BackingFoo isn't substantially larger than their union.
This does suggest a more general method, which is to store ALL your foos
(strings, images, whatever) as size-bounded segments with weak links
among them, in either a flat or a tree structure, while the
using-code-visible objects hold strong references to all of the segments
they use, plus indexing information (begin and end, cliprect, whatever).
A subfoo is just created as another of these objects, referring only to
the segments it uses. Algorithms that respect the indexing bounds may
use the weak links, but in doing so won't leave the set of segments
strongly held by the container; or the weak links may be omitted
entirely in that case. Either way, segments no longer reachable get
collected. This makes it possible to avoid Reference objects *and*
finalizers, at the cost of a bit more complexity.

A string, for example, could take the form of segments and
logical-string objects, the latter containing an array (or List) of
consecutive segments. A substring gets constructed with an array of just
the segments that overlap the substring. This gives one area of actual
decreased complexity, in that "full" strings and substrings have the
same representation (except that only substrings sometimes begin or end
in the middle of their first or last segment, respectively).
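To make that concrete, here's a minimal sketch of the flat variant in
Java; the class and method names are invented for illustration, and
bounds checking is omitted:

// Minimal sketch of the flat-array variant; names are invented for
// illustration, and bounds checking is omitted for brevity.
final class Segment {
    final char[] chars;               // at most some bounded number of chars
    Segment(char[] chars) { this.chars = chars; }
}

final class SegmentedString {
    private final Segment[] segments; // consecutive segments, strongly held
    private final int beginOffset;    // offset of logical index 0 in segments[0]
    private final int length;         // logical length

    SegmentedString(Segment[] segments, int beginOffset, int length) {
        this.segments = segments;
        this.beginOffset = beginOffset;
        this.length = length;
    }

    int length() { return length; }

    char charAt(int index) {
        int pos = -beginOffset;       // logical index of segments[0].chars[0]
        for (Segment s : segments) {
            if (index < pos + s.chars.length) return s.chars[index - pos];
            pos += s.chars.length;
        }
        throw new IndexOutOfBoundsException();
    }

    /** Shares segments with this string; no Reference objects or finalizers. */
    SegmentedString substring(int begin, int end) {
        int pos = -beginOffset;
        int first = 0;                // find the segment containing 'begin'
        while (pos + segments[first].chars.length <= begin) {
            pos += segments[first].chars.length;
            first++;
        }
        int newBegin = begin - pos;
        int last = first;             // find the segment containing 'end - 1'
        while (pos + segments[last].chars.length < end) {
            pos += segments[last].chars.length;
            last++;
        }
        Segment[] sub = java.util.Arrays.copyOfRange(segments, first, last + 1);
        return new SegmentedString(sub, newBegin, end - begin);
    }
}

The key point is that substring() copies only a small array of segment
references; the characters themselves are shared, and any segment no
longer strongly referenced by some string or substring just gets
collected.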
The main tuning parameter is segment size: a bigger segment means
subfoos carry more "baggage", while a smaller segment makes the array
copying in subfoo creation more substantial. One limiting case is a
segment that's a minimal element, in which case you've just got a string
as a char array and a substring as a copied subset, which is back to
square zero. The other limiting case is a segment that's the whole
string, where a substring prevents the whole string from being
collected, which is back to square one. Intermediate segmenting gives
intermediate performance characteristics.
A fixed upper bound on segment size limits the wasted memory to at most
about two segments' worth per subfoo for strings and the like; the worst
case is that only one minimal element (character or whatever) is used
from the first segment and one from the last. With images it can still
be pretty bad -- the worst-case amount of wasted memory is O(perimeter)
in general, so O(sqrt(N)) for 2D images and other 2D stuff, where N is
the minimum data size of the subfoo. Still better than O(N), or even
unbounded (O(size of parent)).
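To put a rough number on that, here's a back-of-the-envelope worst case
for the 2D situation (the tile size and sub-image dimensions are just
example figures):

// Back-of-the-envelope worst case for a tiled 2D image; the figures are
// made-up examples. A w x h sub-image on a grid of t x t tiles touches at
// most (ceil(w/t) + 1) * (ceil(h/t) + 1) tiles, so the held-but-unused
// border is O(perimeter), i.e. O(sqrt(N)) where N = w * h.
public class TileWasteDemo {
    public static void main(String[] args) {
        int t = 16;                               // tile edge, in pixels
        int w = 100, h = 100;                     // sub-image dimensions
        long used = (long) w * h;                 // 10000 pixels actually used
        long tilesAcross = (w + t - 1) / t + 1;   // +1 for worst-case alignment
        long tilesDown = (h + t - 1) / t + 1;
        long held = tilesAcross * tilesDown * t * t;  // 8 * 8 * 256 = 16384
        System.out.println("pixels used:    " + used);
        System.out.println("pixels held:    " + held);
        System.out.println("waste fraction: " + (double) (held - used) / used);
    }
}

The partially-used tiles form a border one tile wide, so the waste
fraction shrinks as the sub-image grows.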
Tree structures are hairier to design and test, but give you multiple
levels of granularity. Subfoo creation is O(log N) instead of O(1) (the
memory-wasting implementation) or O(N) (copying), where N is the size of
the subfoo. Memory wastage is more complex: the leaf-segment size is
what limits wastage in subfoos that outlive their parents, but the
storage needed to maintain the tree structure itself is nontrivial,
O(total number of segments), unlike an array's O(1). Parent links in the
tree need to be either absent or weak, but child links should be strong,
and foos should hold a strong reference to the deepest common ancestor
of the segments they intersect. Segment size is now a three-way
tradeoff: smaller segments mean less wastage per subfoo, more copying
(though growing logarithmically rather than linearly), and more
structural overhead (n log n); larger segments mean the reverse. Trees
make sense when segments are fairly large but the parent objects can be
enormous: 30kx30k pixel images, multi-megabyte disk files read in as
strings, and the like.
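Here's a rough sketch of how those tree links might be wired up; again,
all the names are invented for illustration:

import java.lang.ref.WeakReference;
import java.util.List;

// Sketch of the tree variant. Child links are strong and parent links
// weak, so holding any node keeps its whole subtree alive without
// dragging in ancestors or sibling subtrees.
abstract class Node {
    WeakReference<Inner> parent = new WeakReference<>(null); // weak up-link
    abstract int length();
}

final class Leaf extends Node {
    final char[] chars;                 // bounded-size leaf segment
    Leaf(char[] chars) { this.chars = chars; }
    int length() { return chars.length; }
}

final class Inner extends Node {
    final List<Node> children;          // strong: the subtree stays reachable
    private final int length;
    Inner(List<Node> children) {
        this.children = children;
        int n = 0;
        for (Node c : children) {
            c.parent = new WeakReference<>(this); // weak back-link to parent
            n += c.length();
        }
        this.length = n;
    }
    int length() { return length; }
}

// A subfoo holds a strong reference to the deepest node whose span covers
// it, plus begin/end indices relative to that node.
final class TreeSubstring {
    final Node root;                    // deepest common ancestor of its segments
    final int begin, end;
    TreeSubstring(Node root, int begin, int end) {
        this.root = root; this.begin = begin; this.end = end;
    }
}

Holding the deepest common ancestor keeps exactly the subtree spanning
the subfoo alive, while the weak parent links let bounds-respecting
algorithms climb upward when the rest of the tree happens to still be
live, without pinning it.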
Array or tree structures like the above are also useful under the hood
for handling large objects memory-efficiently, by storing them as
temporary files on disk while keeping in memory only the pieces being
immediately used. Pieces not in use must be garbage-collectable, so the
node object should hold a weak reference to the piece and a strong
reference to the information needed to load the piece from disk. The
getter has to dereference the former and, if it's not null, return the
result; otherwise it loads the data from disk, refreshes the weak
reference, and returns a strong reference. A soft reference may be used
in place of a weak one if recently accessed data is likely to be
traversed again.
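A minimal sketch of such a getter, assuming one byte[] per piece stored
at a known offset in a file (the names and the on-disk layout are my own
assumptions, nothing standard):

import java.io.IOException;
import java.lang.ref.WeakReference;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of a disk-backed piece. The node holds a strong reference to the
// load information (file, offset, length) and only a weak reference to the
// data itself, so an unused piece can be collected and reloaded on demand.
final class DiskBackedPiece {
    private final Path file;     // strong: how to get the data back
    private final long offset;
    private final int length;
    private WeakReference<byte[]> cache = new WeakReference<>(null);

    DiskBackedPiece(Path file, long offset, int length) {
        this.file = file;
        this.offset = offset;
        this.length = length;
    }

    /** Returns the piece, reloading it from disk if it has been collected. */
    synchronized byte[] get() throws IOException {
        byte[] data = cache.get();           // dereference the weak reference
        if (data != null) return data;       // still in memory: done
        data = load();                       // otherwise reload from disk...
        cache = new WeakReference<>(data);   // ...and refresh the reference
        return data;                         // caller gets a strong reference
    }

    private byte[] load() throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(length);
            channel.read(buf, offset);       // a real version would loop until full
            return buf.array();
        }
    }
}

Swap in SoftReference for WeakReference if, as above, you'd rather
recently used pieces survive a bit longer under the collector.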
This kind of framework also supports persisting the object (use a
*non*-temporary file) and loading into RAM only the subset being worked
on at a given time. This granular disk storage is useful if the data
gets into the hundreds of megabytes or even the gigabytes (30kx30k
images at 24/32bpp, for instance). Data that's larger still (terabytes
and up) with complex internal structure should of course be persisted in
a database and worked on tiny pieces at a time. And any persistent data
requiring real transactional integrity needs a real database regardless
of size.