Re: NIO not so hot
A remark upfront: Google Groups really screws up line breaks. Can you
please use a different text type or even a proper news reader?
On 02.06.2014 16:37, Rupert Smith wrote:
> On Sunday, June 1, 2014 11:15:01 AM UTC+1, Robert Klemme wrote:
>> On 31.05.2014 11:29, Rupert Smith wrote:
>>> Try using a direct byte buffer which has been pre-allocated. Direct
>>> buffers are allocated outside the Java heap (using malloc()?), so the
>>> allocation cost is high. They only really provide a performance boost
>>> when re-used.
>> That of course depends on the usage scenario, e.g. the frequency of
>> allocation etc. If you serve long-lasting connections the cost of
>> allocating and freeing a DirectByteBuffer is negligible and other
>> reasons may gain more weight in the decision to use a direct or heap
>> buffer (e.g. whether access to the byte[] can make things faster, as I
>> assume is the case in my decoding tests posted upthread).
> The allocation cost is unfortunately not negligible.
This is not what I said.
> Allocation cost within the heap is very low, because it is easy to
> do. Allocation outside the heap with a malloc()-type algorithm can be
> considerably slower, because free blocks may need to be searched for.
All true, but I did not question that at all.
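
Just to put rough numbers on it, here is a crude and unscientific
sketch comparing the two allocation paths. Mind you, the figures will
vary wildly with JVM, OS and -XX:MaxDirectMemorySize, and the memory
behind a direct buffer is only released once the Buffer object itself
is collected, which can make the direct path stall on GC:

  import java.nio.ByteBuffer;

  public class AllocBench {
      public static void main(String[] args) {
          final int N = 10_000, SIZE = 64 * 1024;
          run(false, N, SIZE); // warm-up so the JIT compiles both paths
          run(true, N, SIZE);
          System.out.println("heap:   " + run(false, N, SIZE) + " ms");
          System.out.println("direct: " + run(true, N, SIZE) + " ms");
      }

      static long run(boolean direct, int n, int size) {
          long t0 = System.currentTimeMillis();
          for (int i = 0; i < n; i++) {
              ByteBuffer b = direct ? ByteBuffer.allocateDirect(size)
                                    : ByteBuffer.allocate(size);
              b.put(0, (byte) 1); // touch the buffer so the loop has an effect
          }
          return System.currentTimeMillis() - t0;
      }
  }

That only measures allocation, of course - which, as said, matters the
less the longer a buffer lives.
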
>>> Also, if you dig into the internals you will find that a heap buffer
>>> reading a file or socket will copy bytes from a direct buffer anyway,
>>> and that Java does its own internal pooling/re-allocation of direct
>>> buffers.
>> Can you point me to more information about this? Or are you referring
>> to OpenJDK's source code?
> Yes, I looked in the OpenJDK source code. You don't have to dig too
> far under socket.read() or socket.write() to find it.
Thank you! I'll have a look once I find the time.
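
In the meantime, here is my understanding of what it means for
application code: if you hand the channel a direct buffer, the native
read can fill it directly; with a heap buffer the bytes take a detour
through one of the JDK's cached temporary direct buffers (see
sun.nio.ch.IOUtil and Util.getTemporaryDirectBuffer in OpenJDK). A
minimal sketch - error handling omitted, the host is just an example:

  import java.io.IOException;
  import java.net.InetSocketAddress;
  import java.nio.ByteBuffer;
  import java.nio.channels.SocketChannel;

  public class DirectRead {
      public static void main(String[] args) throws IOException {
          SocketChannel ch = SocketChannel.open(
                  new InetSocketAddress("example.com", 80));
          try {
              ch.write(ByteBuffer.wrap(
                      "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"
                              .getBytes("US-ASCII")));
              // Allocated once and reused for the whole connection -
              // this is where the direct buffer's allocation cost
              // amortizes.
              ByteBuffer buf = ByteBuffer.allocateDirect(8192);
              while (ch.read(buf) != -1) {
                  buf.flip();
                  // ... decode the bytes in buf here ...
                  buf.clear();
              }
          } finally {
              ch.close();
          }
      }
  }
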
>>> Often benchmarks will say heap buffers are faster, because they
>>> allocate a buffer, read some data, and then allow the buffer to be
>>> garbage collected.
>> I think heap byte buffers were faster in my tests (see upthread) not
>> because of allocation and GC (these were not included in the time
>> measurement) but rather because data crosses the boundary between
>> non-Java-heap memory (where the bytes arrive from the OS) and the
>> Java heap less frequently thanks to the larger batches. If you have
>> to fetch individual bytes from a ByteBuffer outside the Java heap,
>> you make that transition much more frequently.
> As I say, this does seem to have been optimized, although I admit I
> am a little unsure as to exactly how. It was certainly the case in
> 1.4 and maybe 1.5 that heap buffer access via the backing array ([])
> was faster, and get()/set() was slow. I have seen benchmarks and run
> my own micro-benchmarks which suggest that get()/set() is now every
> bit as fast as the array access.
On a heap buffer, yes.
In case I did not mention it: I tested with OpenJDK 7u55, 64 bit.
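
To illustrate the batching point with code - this is just a sketch of
the idea, not the actual decoding test from upthread:

  import java.nio.ByteBuffer;

  public class BulkVsSingle {
      // One get() per byte: on a direct buffer every call reaches
      // into memory outside the Java heap individually.
      static long singleBytes(ByteBuffer buf) {
          long sum = 0;
          buf.rewind();
          while (buf.hasRemaining()) sum += buf.get();
          return sum;
      }

      // Bulk copy into a heap byte[] first, then work on the array:
      // a handful of big crossings instead of one per byte.
      static long bulkBytes(ByteBuffer buf, byte[] scratch) {
          long sum = 0;
          buf.rewind();
          while (buf.hasRemaining()) {
              int n = Math.min(scratch.length, buf.remaining());
              buf.get(scratch, 0, n);
              for (int i = 0; i < n; i++) sum += scratch[i];
          }
          return sum;
      }

      public static void main(String[] args) {
          ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);
          byte[] scratch = new byte[8192];
          System.out.println(singleBytes(buf) == bulkBytes(buf, scratch));
      }
  }

For a heap buffer the second variant buys you little, since array()
hands you the backing byte[] directly; for a direct buffer it replaces
thousands of single-byte transitions with a few bulk copies.
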
>>> I may be wrong but... are the byte get()/set() calls not trapped by
>>> some compiler intrinsics and optimized away?
>> DirectByteBuffer.get() contains a native call to fetch the byte - and I
>> don't think the JIT will optimize away native calls. The JRE just does
>> not have any insight into what JNI calls do.
> Exactly what I thought, yet it does seem to be optimized.
I don't think so. I think my test showed the exact opposite. If you
believe differently, please point out where exactly I am missing
something. And/or present a test which proves your point.
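
For reference, a test of the shape I have in mind would look roughly
like this (not the benchmark from upthread, just a sketch; repeat the
runs until the numbers stabilize and the JIT has done its work):

  import java.nio.ByteBuffer;

  public class GetBench {
      public static void main(String[] args) {
          ByteBuffer heap = ByteBuffer.allocate(1 << 20);
          ByteBuffer direct = ByteBuffer.allocateDirect(1 << 20);
          long sum = 0;
          for (int round = 0; round < 5; round++) {
              sum += timed("heap  ", heap);
              sum += timed("direct", direct);
          }
          System.out.println("(ignore: " + sum + ")"); // keep sums alive
      }

      static long timed(String label, ByteBuffer buf) {
          long t0 = System.currentTimeMillis();
          long sum = 0;
          for (int r = 0; r < 100; r++) {
              buf.rewind();
              while (buf.hasRemaining()) sum += buf.get();
          }
          System.out.println(label + ": "
                  + (System.currentTimeMillis() - t0) + " ms");
          return sum;
      }
  }

If get() on the direct buffer really were intrinsified down to a plain
memory access, the two figures should converge after warm-up.
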
Cheers
robert