Q: UDP reliability question (No Answer, 9 Comments)
Question  
Subject: UDP reliability question
Category: Computers > Programming
Asked by: jakers99-ga
List Price: $15.00
Posted: 03 Oct 2005 12:20 PDT
Expires: 11 Oct 2005 09:57 PDT
Question ID: 575863
I have a question regarding UDP file transfer.

Preface: I am not a developer, so please forgive me if I'm looking at
the problem incorrectly or not describing it well. UDP is an
unreliable mechanism. In order to do file transfer with UDP, a
higher-layer application needs to fulfill many of the functions of
TCP. On imperfect networks (defined here as ones with packet loss and
latency) TCP can slow to a crawl. There are many workarounds for this
(parallel streams, modified TCP stacks, new and evolving TCP stacks,
FEC, etc.), but theoretically a UDP app should always beat a TCP app;
UDT is an example of this. I understand there are many parts to a
transport (congestion control, reliability, efficiency, fairness,
etc.). Congestion control seems to be a really tough function to
provide well.

My question: Why is the reliability part so hard? 

Theoretically, why not transfer with UDP and then let the app layer
figure out what was actually received? Many of these congestion
schemes seem to try to ensure reliability with a congestion window;
TCP does this. UDT doesn't seem to use congestion windows, but it
still seemingly tries to ensure packet delivery within a single
round-trip time (RTT), or at least within a few RTTs. There are lots
of architectural debates about whether a packet was really lost or is
still just "in flight".

In VERY non-technical terms: why not chunk a file with markers, throw
a checksum on each chunk, send from the start of the file to the last
chunk of the file, determine which chunks are missing, ask for those
to be resent, and repeat? There would be no worrying about in-flight
packets or resending individual packets. The reliability wouldn't
operate at the scope of a subset of packets within a short window of
time; it would be spread out over the length of the whole file
transfer. Or maybe there would be another way to communicate which
chunks of info have been successfully received?
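
To make the idea concrete, here is a rough Python sketch of the loop
I have in mind (the chunk size, header format, and the
send-everything-then-NAK handshake are all invented for illustration;
a real tool would also need timeouts and retransmitted NAKs):

    import socket, struct, zlib

    CHUNK = 1024                   # payload bytes per datagram (illustrative)
    HDR = struct.Struct("!III")    # chunk number, total chunks, CRC32 of payload

    def send_file(data, addr, sock):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        missing = set(range(len(chunks)))
        while missing:
            for seq in sorted(missing):        # one pass over whatever is still missing
                payload = chunks[seq]
                sock.sendto(HDR.pack(seq, len(chunks), zlib.crc32(payload)) + payload, addr)
            sock.sendto(b"DONE", addr)         # end-of-pass marker
            nak, _ = sock.recvfrom(65536)      # receiver lists the chunks it lacks
            missing = set(struct.unpack("!%dI" % (len(nak) // 4), nak))

    def recv_file(sock):
        got, total = {}, None
        while total is None or len(got) < total:
            pkt, addr = sock.recvfrom(65536)
            if pkt == b"DONE":
                if total is None:              # stray marker; real code needs timeouts
                    continue
                holes = [s for s in range(total) if s not in got]
                sock.sendto(struct.pack("!%dI" % len(holes), *holes), addr)
                continue
            seq, total, crc = HDR.unpack(pkt[:HDR.size])
            if zlib.crc32(pkt[HDR.size:]) == crc:   # drop corrupted chunks
                got[seq] = pkt[HDR.size:]
        sock.sendto(b"", addr)                 # empty NAK list = everything arrived
        return b"".join(got[s] for s in range(total))

Each side just needs a bound UDP socket: the receiver calls recv_file
and the sender calls send_file with the file bytes and the receiver's
address. (The toy also shows where the hard parts hide: the final NAK
can itself be lost, a long NAK list may not fit in one datagram, and
nothing here paces the sending rate.)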

I know it can't be this easy. What major problems am I missing with
this simplified approach? I have a sense I'm missing something simple
and fundamental.

Thank you.
Answer  
There is no answer at this time.

Comments  
Subject: Re: UDP reliability question
From: crythias-ga on 03 Oct 2005 19:04 PDT
 
Take a look at TFTP, like http://kin.klever.net/pumpkin/
Theoretical data size limitations are at 32MB (65,536 blocks at the
512-byte default block size, given the 16-bit block number), and
maybe 64MB if both client and server support larger words.
 
But it should hearten you to know that streaming audio and video can
take advantage of UDP's unreliability to provide content in a "best
effort" type connection, when the end user doesn't really care about
dropped frames or such.

This is a free comment. I am not a GA Researcher.
Subject: Re: UDP reliability question
From: efn-ga on 03 Oct 2005 20:48 PDT
 
I'm not an expert on this, so I'll give you my amateur opinion for free.

Reliability is not hard.  What's hard is cheap/efficient reliability. 
You can get reliability by using TCP, at a cost.  What's hard is
saving the cost and still getting the same reliability.

I don't see any fundamental flaw in your file transfer scheme as far
as workability goes, but I don't see the point.  It provides
reliability at a cost.  There may or may not be circumstances where it
would be more efficient than TCP, but you can't assume that it will be
more efficient just because it uses UDP.  Those chunk resends are not
free.  Maybe that's the flaw--the assumption that it will be an
improvement.

This is a free comment, but I am a Google Answers researcher, so if
you want to buy it, I'll be happy to sell it to you.
Subject: Re: UDP reliability question
From: vakulgarg-ga on 04 Oct 2005 22:26 PDT
 
Hi Jakers

To understand why TCP's reliability is so complex, we need to take
into account some features of the protocol.

TCP provides reliable data transport to a variety of applications,
not just file transfer applications like FTP. It has also been
designed to work with different types of networks (in terms of speed,
delay characteristics, and bandwidth) and different types of sending
and receiving hosts (in terms of memory, CPU speed, network
connectivity speed, etc.). (The memory and CPU power of the receiving
host determine the receiver's window.)

TCP performs well both on reliable intranet networks (which have low
delay and almost no packet loss) and on the worldwide Internet (which
has high delay, different bandwidths in different networks, packet
reordering, packet drops during congestion, packet loss, etc.).

Your design of a UDP application providing a file transfer service
would work (maybe with small modifications). TFTP is an example of
such an application.

However, I have found in my experiments that TCP transfers files
faster than TFTP on my 100 Mbps Ethernet LAN. This may be because
TFTP requires a one-to-one ACK for every transaction between sender
and receiver, while TCP works with outstanding ACKs of sent packets
(the sliding window mechanism). With your implementation of a file
transfer application using UDP, you may still achieve faster transfer
than TCP with a few design changes if running in an intranet setup.
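
Rough numbers show why the one-ACK-per-block design hurts (the RTT
here is an assumed figure for a quiet 100 Mbps LAN; only the shape of
the comparison matters):

    block = 512 * 8           # TFTP's default block size, in bits
    rtt = 0.0005              # assumed 0.5 ms round trip on the LAN
    print(block / rtt / 1e6)  # stop-and-wait, one block per RTT: ~8.2 Mbps
    window = 64 * 1024 * 8    # a 64 KB TCP window kept in flight
    print(window / rtt / 1e6) # ~1049 Mbps offered, so the 100 Mbps wire is the limit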

Providing reliability for data transfer is not hard; the tough part
is providing maximum throughput and low delay while avoiding network
congestion (at the routers).

To provide maximum application throughput, TCP uses sliding window
procedures, delayed ACKs, RTT measurements, etc. For network
friendliness, TCP uses congestion control algorithms (which also
avoid unneeded retransmissions that would further increase
congestion). TCP uses slow start to judge the network's delay and
bandwidth characteristics at the start of a session, and it keeps
adjusting these parameters on the fly during the data transfer.

TCP uses flow control mechanisms to avoid overwhelming a slow
receiving host by sending data too fast.

Further, TCP delivers data to the receiving application in sequence,
progressively, as it arrives. This is needed for applications that
work on data streams, which is why TCP is not limited to file
transfer applications. Your design of requesting missing data chunks
at the end of the session would stop data from being delivered to the
receiving application progressively. (Yes, this limitation can be
mitigated easily by simple design changes.)

All of the additional features listed above (such as network
friendliness) contribute to the overall complexity of TCP.

Please feel free to ask if you need any other clarification.

Vakul
Subject: Re: UDP reliability question
From: jakers99-ga on 05 Oct 2005 19:22 PDT
 
I re-read my original question. I apologize for not writing more
carefully. I alluded to something without being direct about it.

I am most interested in UDP as a bulk transfer method over imperfect
networks (as defined by packet loss and latency). For relatively
small data transfers or chatty applications, I am aware that TCP is
the ideal solution. However, transferring a file from San Francisco
to Tokyo with 1% packet loss and a 135 ms RTT would roughly max out
the transfer around 1.05 Mbps (per the Mathis equation). This is
caused by TCP's congestion windows and loss-based mechanisms. It
wouldn't matter if the connection were a DS3; the throughput cap
would still be roughly 1 Mbps. For a 1 GB file this roughly
translates to 2 hours at 1 Mbps vs. 3 minutes for a fully utilized
DS3.
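
For reference, that figure falls out of the Mathis et al. bound,
throughput <= (MSS * C) / (RTT * sqrt(p)) with C roughly 1.22,
assuming a typical 1460-byte MSS:

    from math import sqrt
    mss = 1460 * 8     # assumed MSS of 1460 bytes, in bits
    rtt = 0.135        # San Francisco <-> Tokyo round trip, in seconds
    p = 0.01           # 1% packet loss
    print(1.22 * mss / (rtt * sqrt(p)) / 1e6)   # ~1.06 Mbps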

I know iperf can fill a DS3 regardless of latency; the actual
throughput would roughly be the line rate minus the packet loss.

I know TCP fairness would be a huge issue and flow control would be
tough. But theoretically, why can't someone write a UDP app that fills
a pipe and does reliability at a higher layer?
Subject: Re: UDP reliability question
From: clairvoyant1332-ga on 09 Oct 2005 07:44 PDT
 
It sounds like you have a fairly good idea about the protocol.  For
file transfers over a high-latency network, TCP is just plain
inefficient, because the sender can only keep a limited window of
unacknowledged data in flight and so spends much of its time waiting
for ACKs.  Sending each piece of the file once, then having the
receiver ask for the packets it missed (a negative acknowledgment, or
NAK), is the way to go.

I actually wrote a program that transfers files in just this fashion,
and it also uses multicast to send to multiple receivers at once.  You
can check it out at http://www.tcnj.edu/~bush/uftp.html.  There's a
full description of the protocol as well as source code.
Subject: Re: UDP reliability question
From: jakers99-ga on 09 Oct 2005 10:47 PDT
 
"I actually wrote a program that transfers files in just this fasion,
and also uses multicast to send to multiple receivers at once."

Thanks for the pointer. How's the performance? Do you have test
results? If not, is there a general rule of thumb for the improvement
(20%? 200%? 600%?)? I have a feeling this completely depends on the
network conditions.

Also, I initially did a Google search on your "UFTP" and found another
reference. Are you associated with
http://www.phatpackets.com/papers/jzhang-UFTPpapersubdspac.pdf   ?

Thanks again.
Subject: Re: UDP reliability question
From: clairvoyant1332-ga on 09 Oct 2005 21:31 PDT
 
Generally, yes, it depends on the network.  Over a low-latency link
such as a local LAN, you'll tend to get speeds close to that of FTP. 
Over high-latency links, it begins to see improvements over FTP. 
We've used UFTP over a 1500 Kbps satellite link, which has a 600 ms
round-trip delay, and we typically see transfer speeds of around
1470 Kbps.  Also, when running over a gigabit link, which TCP doesn't
keep up with too well, it runs around 10% faster even under ideal
conditions.

The other link you referenced is not associated with me at all.  Some
of the wording he uses is similar to what's on my website, so it's
possible he may have borrowed from it.
Subject: Re: UDP reliability question
From: jakers99-ga on 10 Oct 2005 10:20 PDT
 
How well does it handle packet loss?
Subject: Re: UDP reliability question
From: clairvoyant1332-ga on 10 Oct 2005 18:54 PDT
 
Well, unlike some protocols that scale back the speed based on packet
loss, UFTP sticks with whatever speed is specified, and
retransmissions also move at the originally specified speed.  The
thought behind this is that if multiple receivers are involved, a
single slow receiver doesn't slow down the others.  Even so, if the
specified speed is too high for the network, you'll end up receiving
at roughly whatever the effective throughput of the line is.
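
In rough Python terms, the pacing on the sending side looks something
like this (a simplified sketch of the idea, not the actual UFTP
source):

    import socket, time

    def paced_send(sock, addr, chunks, rate_bps):
        """Send chunks at a fixed wire rate, regardless of loss downstream."""
        next_time = time.monotonic()
        for chunk in chunks:
            delay = next_time - time.monotonic()
            if delay > 0:
                time.sleep(delay)                    # hold the configured rate
            sock.sendto(chunk, addr)
            next_time += len(chunk) * 8 / rate_bps   # spacing owed by this chunk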

Also, because UFTP uses a NAK-based protocol on top of UDP, the sender
can push as much as 15MB of data before it requires feedback from the
receivers, meaning the overall speed is not as susceptible to packet
loss as a TCP-based protocol's.

In summary, yes it handles packet loss well.
