Sunday, November 21, 2021

Effective ways of measuring packet loss rate

Hi, sorry if this is a dumb question, but I would like to know what you think are effective ways of measuring the packet loss rate of a network when TCP is used. Thank you very much in advance!

Essentially, I am simulating a linear network topology in Mininet, where two hosts at the two ends of the network exchange data over TCP using iPerf. To measure the packet loss rate, I see two options.

First, I could run Wireshark sniffing at some point in the network, count the packets that are delivered successfully and those that are not, and take their ratio as the loss rate. However, if I understand correctly, Wireshark writes all of its captured data to disk, and since the simulated link has a bandwidth of 1 Gbps, the disk on my laptop (the machine I use for the simulation) would fill up quickly. If I want to run the simulation for hours, Wireshark is probably not a good fit.

Second, I could add some logging in the TCP kernel code, writing a log line whenever TCP receives an ACK or triggers a timeout/retransmission, and then count the log lines at the end. This feels heavier than necessary, though, and I suspect there is a better solution for my case.
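To make the second idea concrete: rather than patching the kernel to emit log lines, Linux already keeps cumulative per-namespace TCP counters in /proc/net/snmp, including OutSegs (segments sent) and RetransSegs (segments retransmitted), and retransmissions are a rough proxy for losses on a TCP flow. Since Mininet hosts live in separate network namespaces, each host reads its own counters. The helper below is only a sketch of this approach; the sample text and its numbers are made up for illustration, in the real /proc/net/snmp layout.

```python
# Approximate the loss rate from the kernel's own TCP counters instead of
# capturing packets.  Linux exposes cumulative OutSegs (segments sent) and
# RetransSegs (segments retransmitted) in /proc/net/snmp.
def tcp_retrans_ratio(snmp_text: str) -> float:
    """Return RetransSegs / OutSegs parsed from /proc/net/snmp-style text."""
    tcp_lines = [ln for ln in snmp_text.splitlines() if ln.startswith("Tcp:")]
    header = tcp_lines[0].split()[1:]   # field names
    values = tcp_lines[1].split()[1:]   # matching counter values
    stats = dict(zip(header, map(int, values)))
    return stats["RetransSegs"] / stats["OutSegs"]

# Illustrative sample (made-up counter values, real field order):
SAMPLE = (
    "Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens "
    "AttemptFails EstabResets CurrEstab InSegs OutSegs RetransSegs "
    "InErrs OutRsts InCsumErrors\n"
    "Tcp: 1 200 120000 -1 100 50 0 0 5 10000 20000 40 0 0 0\n"
)

print(tcp_retrans_ratio(SAMPLE))  # 40 / 20000 = 0.002
```

In a Mininet experiment one would presumably read the file inside the sending host's namespace (e.g. `h1.cmd('cat /proc/net/snmp')`) once before and once after the iPerf run, then take the ratio of the deltas in RetransSegs and OutSegs. Note the caveat that retransmissions can slightly overcount true losses (spurious retransmits), so this gives an estimate, not an exact per-packet count.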


