Hi all – If you've got a small-payload situation where every payload fits into a single 1400-ish-byte UDP/IPv6 packet, there's no explicit way to learn about packet loss; by default, a timeout is the only signal you get. AFAIK that's how DNS works – a one-packet request that simply times out.
What if we want to know about packet loss sooner than a multi-second timeout? Would it be smart to split the send across a couple of packets, and get fast notification when one is missing? How would the math work on that solution? If a packet has a 99% chance of arrival, the chance that at least one of two gets through should be higher than 99%, but I don't remember the formula. That assumes independent probabilities, and two packets sent together are unlikely to be independent in that sense – the failure of one probably correlates at least slightly with the failure of the other – but I don't know if there's a general rule here.
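If I'm remembering right, the independent case is just 1 - (1 - p)^n for "at least one of n arrives". A quick Python check of both quantities, assuming p = 0.99 per packet:

    # Independent-loss arithmetic, assuming each packet arrives
    # with probability p = 0.99, independently of the other.
    p = 0.99
    at_least_one = 1 - (1 - p) ** 2   # chance at least one of two arrives
    both = p ** 2                     # chance both arrive
    print(f"at least one: {at_least_one:.4f}")   # 0.9999
    print(f"both arrive:  {both:.4f}")           # 0.9801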
The chance of some kind of failure (like one of the two packets being lost) goes up if you split a one-packet send into two or more packets. The probability that both arrive is .99 × .99 ≈ .98 in the worst (fully independent) case, and correlation between the two should push it above .98. But maybe the slightly greater chance of some level of failure is worth it if it helps us detect a failure sooner than a timeout would. What do you think? Would breaking one packet into several smaller ones make sense? (The alternative for fast notification would be switching to TCP instead of UDP, or maybe something out of band.)
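To make the idea concrete, here's a rough Python sketch of what I have in mind – the 6-byte header, the 0.2-second gap timer, and all the names are made up for illustration, not any real protocol:

    import socket
    import struct

    # Hypothetical framing: 4-byte message id, 1-byte index, 1-byte total count.
    HEADER = struct.Struct("!IBB")

    def send_split(sock, addr, msg_id, payload):
        """Split one payload across two UDP datagrams so the receiver
        can notice a gap quickly instead of waiting for a full timeout."""
        half = len(payload) // 2
        parts = [payload[:half], payload[half:]]
        for i, part in enumerate(parts):
            sock.sendto(HEADER.pack(msg_id, i, len(parts)) + part, addr)

    def recv_split(sock, request_timeout=3.0, gap_timeout=0.2):
        """Reassemble the payload, or raise quickly if a sibling
        fragment goes missing after the first one arrives."""
        sock.settimeout(request_timeout)   # normal multi-second wait for the first fragment
        data, _ = sock.recvfrom(2048)
        msg_id, idx, total = HEADER.unpack_from(data)
        parts = {idx: data[HEADER.size:]}
        sock.settimeout(gap_timeout)       # once one fragment is here, a gap shows up fast
        while len(parts) < total:
            try:
                data, _ = sock.recvfrom(2048)
            except socket.timeout:
                raise RuntimeError(f"fragment lost for message {msg_id}")
            mid, idx, total = HEADER.unpack_from(data)
            if mid == msg_id:
                parts[idx] = data[HEADER.size:]
        return b"".join(parts[i] for i in range(total))

Note that if both fragments are lost you still wait the full request timeout; the fast path only kicks in when at least one of the two gets through, which is exactly the >99% case from the math above.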
Relatedly, does TCP have any solution for the one-packet use case – some notification that a packet was lost? Or do you just have to do something out of band in that case? ICMP?
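One partial answer I've run across (Linux-specific, and I'm not sure it's exactly what we want): TCP won't notify the app of a single lost segment – the kernel just silently retransmits – but the TCP_USER_TIMEOUT socket option at least bounds how long sent data can sit unacknowledged before the connection errors out:

    import socket

    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # Error out the connection if sent data stays unacknowledged for
    # more than 500 ms, instead of retrying for minutes (Linux only).
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 500)
    # sock.connect((host, port)); sock.sendall(request)  # hypothetical use

That still surfaces as an error on the socket rather than a "packet N was lost" event, so it's a bounded timeout, not a real notification.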
Thanks.