Re: Reliability of Java, sockets and TCP transmissions
"Qu0ll" <Qu0llSixFour@gmail.com> wrote in message
news:47064193$0$3593$5a62ac22@per-qv1-newsreader-01.iinet.net.au...
> I am writing client and server components of an application that
> communicate using Socket, ServerSocket and TCP. I would like to know just
> how reliable this connection/protocol combination is in terms of
> transmission errors. So far I have only been able to run the application
> where the client and server are on the same local machine or separated by
> an intranet/LAN so I have no results of an internet deployment to report
> but I have not encountered any IO errors to this point.
>
> So just how reliable are TCP and Java sockets over the actual internet? I
> mean do I need to implement some kind of "advanced" protocol whereby
> checksums are transmitted along with packets and the packet retransmitted
> if the checksum is invalid or is all this handled by either the Java
> sockets or the TCP protocol already?
Most local area networks these days run TCP/IP, accessed through Berkeley
sockets. It is extremely reliable. Java sockets simply wrap the
platform-specific implementation of Berkeley sockets.
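To make that concrete, here is a minimal sketch of the Socket/ServerSocket pattern the original poster describes, run over the loopback interface so it is self-contained (the class and method names, EchoDemo and echoOnce, are mine, not from any library):

```java
import java.io.*;
import java.net.*;

public class EchoDemo {
    // Sends one line over a loopback TCP connection and returns what the
    // server echoed back.
    public static String echoOnce(String msg) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // 0 = ephemeral port
            Thread echoServer = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo the first line back
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            echoServer.start();

            // TCP delivers these bytes intact and in order; no
            // application-level checksum is needed for that.
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(msg);
                return in.readLine();
            } finally {
                echoServer.join();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello")); // prints "hello"
    }
}
```

Exactly the same code works across an intranet or the internet; only the host name and port change.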
TCP (Transmission Control Protocol) is responsible for reliability, error
detection and recovery (every segment carries a checksum, and corrupted or
lost segments are retransmitted) and in-order delivery of the data. The
"checksum and retransmit" scheme you describe is exactly what TCP already
does for you.
IP (Internet Protocol) is responsible for hardware abstraction, data
transfer, routing, etc., but does not guarantee data integrity (except for
the IP headers, without which TCP reliability would be impossible).
For more information, see http://en.wikipedia.org/wiki/Tcp/ip
It is certainly possible to design an application that is unreliable by
using Berkeley sockets inappropriately in Java (or any language or platform,
for that matter). It is also possible to design extremely reliable
applications using Berkeley sockets, but it requires some understanding of
detecting and recovering from network failures (such as unplugged cables,
switching and router failures, etc.). Fortunately, Java sockets throw
IOExceptions when things like this occur.
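As a small illustration of how such failures surface in Java, the sketch below deliberately connects to a port with no listener; the OS refuses the connection and Java reports it as a ConnectException, a subclass of IOException (the class and method names, FailureDemo and connectToDeadPort, are mine):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class FailureDemo {
    // Attempts to connect to a port nothing is listening on and returns
    // the simple class name of the IOException that results.
    public static String connectToDeadPort() throws IOException {
        int deadPort;
        // Grab an ephemeral port, then release it so nothing is listening.
        try (ServerSocket ss = new ServerSocket(0)) {
            deadPort = ss.getLocalPort();
        }
        try (Socket s = new Socket("127.0.0.1", deadPort)) {
            return "connected"; // not expected
        } catch (IOException e) {
            // Typically java.net.ConnectException ("Connection refused").
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(connectToDeadPort());
    }
}
```

A robust application catches these exceptions and decides whether to retry, back off, or report the failure to the user.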
One situation in which most sockets will not tell you there is a problem is
when (for example) someone disconnects a cable on the *far side* of an
Ethernet switch that you are connecting through. There is no mandatory
"heartbeat" in the TCP/IP protocol (TCP keepalive exists, but it is optional
and its default timers run to hours). So your socket could listen for hours
if no one reconnects the cable. You can program sockets to time out on a
read, but it is quite common for the distant terminal to remain quiet for
hours in some cases. The
TELNET protocol provides ways to periodically poll a device for connection
presence. It is more common than not for applications to layer other
protocols on top of TCP/IP to implement application-specific signaling
requirements.
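A minimal sketch of the read-timeout technique mentioned above, using Socket.setSoTimeout over loopback with a peer that deliberately stays silent (the class and method names, TimeoutDemo and readWithTimeout, and the 200 ms value are mine):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    // Reads from a peer that never sends anything; returns what happened.
    public static String readWithTimeout() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            Thread silentPeer = new Thread(() -> {
                try (Socket s = server.accept()) {
                    Thread.sleep(5_000); // hold the connection open, send nothing
                } catch (Exception ignored) {
                }
            });
            silentPeer.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                client.setSoTimeout(200); // fail a blocked read after 200 ms
                client.getInputStream().read();
                return "got data";
            } catch (SocketTimeoutException e) {
                // The connection is still open; only this read gave up.
                return "timed out";
            } finally {
                silentPeer.interrupt();
                silentPeer.join();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readWithTimeout());
    }
}
```

An application-level heartbeat would build on the same idea: each side periodically sends a small "I am alive" message, and a read timeout longer than the heartbeat interval then distinguishes a quiet-but-healthy peer from a dead connection.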