Re: Crash in Java 1.6.0_13 ImageIO PNG decoder (and possibly later versions) loading large interlaced PNGs with low memory
On Sun, 11 Nov 2012 23:36:09 -0500, dy/dx wrote:
Using Java 1.6.0_13 with -server -Xmx1100M, what do you get if you run this code?
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.lang.ref.SoftReference;
import javax.imageio.ImageIO;
public class Crash {
    public static void main (String[] args) throws IOException {
        File f = new File("path-to-any-24-megapixel-RGB-PNG-goes-here");
        SoftReference<BufferedImage> a = new SoftReference<BufferedImage>(ImageIO.read(f));
        SoftReference<BufferedImage> b = new SoftReference<BufferedImage>(ImageIO.read(f));
        SoftReference<BufferedImage> c = new SoftReference<BufferedImage>(ImageIO.read(f));
        SoftReference<BufferedImage> d = new SoftReference<BufferedImage>(ImageIO.read(f));
        SoftReference<BufferedImage> e = new SoftReference<BufferedImage>(ImageIO.read(f));
        System.out.println("" + (a.get() == null) + (b.get() == null)
                + (c.get() == null) + (d.get() == null) + (e.get() == null));
    }
}
It should be easy for any of you with a digital camera to adapt this --
just change the filename string to point to a 24-megapixel image you have
lying around. Failing that, there's one linked at the bottom left of
http://aerialphotographysandiego.com/samples-aerial-photography-san-diego.html
The above works fine with jpegs and noninterlaced pngs, reporting
falsefalsefalsefalsefalse if you have more than a few hundred megs of heap
and the -server VM. Convert the image to an interlaced png and point the
above at it, though, and it behaves as if System.exit had been called, at
least on my system, which is clearly incorrect behavior. (I tested it with
the file from that link, converted to an interlaced png with Photoshop CS2,
in case that somehow makes a difference -- with a decoder bug, who knows?
With the png created as described, it crashes with five copies loaded, but
not with four.)
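If you don't have Photoshop handy, it should be possible to produce an
interlaced png from Java itself. A quick sketch -- I'm assuming here that
the JDK's PNG ImageWriter honors progressive mode and that MODE_DEFAULT
gives you Adam7 interlacing; I haven't checked every version:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;
public class MakeInterlaced {
    public static void main (String[] args) throws IOException {
        BufferedImage src = ImageIO.read(new File(args[0]));
        ImageWriter writer = ImageIO.getImageWritersByFormatName("png").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        // Ask for progressive output; for png that should mean Adam7 interlacing.
        param.setProgressiveMode(ImageWriteParam.MODE_DEFAULT);
        ImageOutputStream out = ImageIO.createImageOutputStream(new File(args[1]));
        try {
            writer.setOutput(out);
            writer.write(null, new IIOImage(src, null, null), param);
        } finally {
            writer.dispose();
            out.close();
        }
    }
}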
Curiously, adding System.gc() calls between the loads seems to prevent it:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.lang.ref.SoftReference;
import javax.imageio.ImageIO;
public class Crash {
    public static void main (String[] args) throws IOException {
        File f = new File("path-to-a-24-megapixel-RGB-PNG-goes-here");
        SoftReference<BufferedImage> a = new SoftReference<BufferedImage>(ImageIO.read(f));
        System.gc();
        SoftReference<BufferedImage> b = new SoftReference<BufferedImage>(ImageIO.read(f));
        System.gc();
        SoftReference<BufferedImage> c = new SoftReference<BufferedImage>(ImageIO.read(f));
        System.gc();
        SoftReference<BufferedImage> d = new SoftReference<BufferedImage>(ImageIO.read(f));
        System.gc();
        SoftReference<BufferedImage> e = new SoftReference<BufferedImage>(ImageIO.read(f));
        System.out.println("" + (a.get() == null) + (b.get() == null)
                + (c.get() == null) + (d.get() == null) + (e.get() == null));
    }
}
That's clearly buggy, because adding or removing System.gc() calls is not
supposed to alter program semantics, only (maybe) performance; PLUS if it
was running out of memory, some SoftReferences should have been cleared to
make more room, with no other consequences; PLUS if it somehow ran out of
memory anyway, it should have thrown an OOME rather than acting as though
the code had called System.exit.
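The normal contract is easy to see with a toy program: allocate more
softly-reachable blocks than the heap can hold and the VM just clears the
older ones -- no exit, no OOME. A sketch (block size and count are
arbitrary, picked to overflow an -Xmx1100M heap):
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;
public class SoftRefDemo {
    public static void main (String[] args) {
        List<SoftReference<byte[]>> refs = new ArrayList<SoftReference<byte[]>>();
        // Keep each 10 MB block reachable only through a SoftReference;
        // 200 of them is far more than an -Xmx1100M heap can hold at once.
        for (int i = 0; i < 200; i++) {
            refs.add(new SoftReference<byte[]>(new byte[10 * 1024 * 1024]));
        }
        int cleared = 0;
        for (SoftReference<byte[]> r : refs) {
            if (r.get() == null) cleared++;
        }
        // No OOME, no exit: the VM clears older blocks to make room for new ones.
        System.out.println(cleared + " of " + refs.size() + " blocks were cleared");
    }
}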
As near as I can tell from this, the ImageIO png decoder in Java 1.6.0_13
contains a crash-inducing bug that requires the png it's decoding to be
interlaced *and* requires heap space to be running low to trigger it.
Addendum: if the png is *either* interlaced *or* 32bpp (alpha channel) that
seems to suffice. Encoding a problem png in Photoshop as a 24bpp
non-interlaced png seems to make it "clean", i.e. non-bug-triggering for
Java use. In Photoshop CS2 that involves "flatten image" and then saving to
another directory and choosing a "none" radio button on a save options
popup. YMMV with other Photoshop versions -- you're probably all using CS4
or later. :)
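If you want to check what Photoshop (or anything else) actually wrote, the
relevant flags sit right in the png's IHDR chunk and take only a few lines
of Java to read. A sketch (color type 2 is plain RGB, 6 is RGB plus alpha;
interlace method 1 is Adam7):
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
public class PngInfo {
    public static void main (String[] args) throws IOException {
        DataInputStream in = new DataInputStream(
                new FileInputStream(new File(args[0])));
        try {
            in.skipBytes(8 + 4 + 4);       // png signature, IHDR length, "IHDR" tag
            int width  = in.readInt();
            int height = in.readInt();
            int bitDepth  = in.readUnsignedByte();
            int colorType = in.readUnsignedByte();
            in.skipBytes(2);               // compression and filter methods
            int interlace = in.readUnsignedByte();
            System.out.println(width + "x" + height
                    + ", bit depth " + bitDepth
                    + ", color type " + colorType   // 2 = RGB, 6 = RGB+alpha
                    + ", interlace " + interlace);  // 0 = none, 1 = Adam7
        } finally {
            in.close();
        }
    }
}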
Similarly, taking a non-troublesome png (or non-png) and reencoding it as an
interlaced or 32bpp png seems to make it crash ImageIO's decoder *if* heap
space is low enough at the time of decoding. In particular, it makes the
above code exhibit the crash. The size of the png matters, at least insofar
as it determines how quickly the above code gets heap space low enough for
the bug to strike. I pngcrushed a problem png and the number of loads I
could do without a crash went up from 3 to 5; pngcrush reported a 27%
reduction in size. 5*0.73 = 3.65, so the bug-enabling threshold was
somewhere between 3*original size and 3.65*original size with that png.
Moreover, this was the *same image*; the BufferedImage object would have
been about 72 megs (24 megapixels at 3 bytes per pixel) and identical down
to the last byte in both cases. So it's not the BufferedImage alone:
whatever temporary objects the decoder makes also affect the bug on
subsequent decodes, through their lingering memory use as not-yet-collected
garbage or some other mechanism. And this effect is proportional to the
problem png's *file* size, not its uncompressed size, which points to data
structures created early in the decoding -- likely the byte arrays holding
successive chunks of the file itself.
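A crude way to see how much garbage one decode leaves behind is to compare
used heap before the decode, right after it, and again after a forced
collection; the numbers are only approximate since GC timing isn't under
our control, but the drop after the gc() roughly bounds the decoder's
temporary allocations. A sketch:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public class DecodeFootprint {
    public static void main (String[] args) throws IOException {
        File f = new File(args[0]);
        Runtime rt = Runtime.getRuntime();
        System.gc();                                     // start from a roughly settled heap
        long before = rt.totalMemory() - rt.freeMemory();
        BufferedImage img = ImageIO.read(f);             // decode once, keep a strong reference
        long afterDecode = rt.totalMemory() - rt.freeMemory();
        System.gc();                                     // collect the decoder's temporary garbage
        long afterGc = rt.totalMemory() - rt.freeMemory();
        System.out.println("held right after decode: "
                + (afterDecode - before) / (1024 * 1024) + " MB");
        System.out.println("held after forced gc:    "
                + (afterGc - before) / (1024 * 1024) + " MB");
        System.out.println(img.getWidth() + "x" + img.getHeight()); // keep img reachable
    }
}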
Changing the decoder to recycle one array instead of constantly making and
discarding them might "fix" the bug, then, though it would really only be
working around it. My guess is that ImageIO's png decoder contains native
code, and that the native code allocates memory on the Java heap -- likely
for the output's WritableRaster -- in a way that bypasses some safeguards.
In particular, perhaps it doesn't check for heap exhaustion, run a
stop-the-world collection, try again, and then throw OOME on failure the way
a normal allocation in non-native code does, and some idiot put
if (buff == NULL) { /* Can't happen */ exit(0); } or something similar. In
any event, the bug should be found and fixed, if it hasn't been already, not
simply papered over by making it harder to trigger. Otherwise it would just
end up happening with even larger-but-should-still-fit-in-the-heap pngs, or
with smaller pngs when big enough other data structures are lying about.
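In the meantime the only caller-side stopgap I can offer, based purely on
the System.gc() observation above, is to force a collection before each
decode -- emphatically a workaround, not a fix:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
public final class SafePngLoad {
    // Stopgap only: give the decoder as much free heap as possible before it
    // starts, since the crash seems to need a nearly full heap to trigger.
    public static BufferedImage read (File f) throws IOException {
        System.gc();
        return ImageIO.read(f);
    }
}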