I just need to get a rough estimate. For example:
1) I have a synchronous Internet connection.
2) A standard 60-byte ping packet takes, on average over a ton of pings, 23.732 ms round trip.
3) A 15,000-byte packet (using the -l option) takes 30.525 ms.
4) So the 15,000 bytes took an extra 6.793 ms, round trip.
5) That 'should' mean it took an extra ~3.397 ms to get there (half the round trip).
6) 15,000 bytes / 3.397 ms ≈ 35,300,000 bits/s ≈ 35 Mb/s.
Am I looking at this completely wrong?
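To make the arithmetic above concrete, here is a minimal sketch of the same estimate in Python. The packet size and timings are just the example numbers from the list; nothing here is measured:

```python
# Rough one-way throughput estimate from the RTT difference between a
# small ping and a large ping. Values are the example numbers from above.
small_rtt_ms = 23.732    # average RTT of a 60-byte ping
large_rtt_ms = 30.525    # average RTT of a 15,000-byte ping (ping -l 15000)
payload_bytes = 15_000

extra_rtt_ms = large_rtt_ms - small_rtt_ms   # ~6.793 ms extra, round trip
one_way_ms = extra_rtt_ms / 2                # ~3.397 ms extra each way
bits = payload_bytes * 8

throughput_bps = bits / (one_way_ms / 1000)  # bits per second
print(f"Estimated throughput: {throughput_bps / 1e6:.1f} Mb/s")  # ~35.3 Mb/s
```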
And in case anyone is wondering: we have someone complaining of slow offsite backups (6 Mbps on a 100 Mbps link), and they ran a traceroute. The traceroute shows no packet loss to the destination, but it shows two hops in the middle dropping 99% of the pings sent to them. I have explained that this is most likely not the issue, since those hops are still forwarding packets to the next hop just fine.
Our ISP has said that everything checks out fine, so I would like to be able to show that we are capable of much higher speeds all the way from source to destination. Is this a valid way to do that?
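If it helps, the measurement itself can be automated along the same lines as the steps above. The sketch below assumes Windows ping (which the -l option suggests) with English-language output, and the destination host name and ping count are placeholders, not anything from the actual setup:

```python
import re
import statistics
import subprocess

def average_rtt_ms(host: str, size_bytes: int, count: int = 50) -> float:
    """Run Windows ping with the given payload size and average the reported RTTs."""
    out = subprocess.run(
        ["ping", "-n", str(count), "-l", str(size_bytes), host],
        capture_output=True, text=True, check=True,
    ).stdout
    # English-locale Windows replies look like:
    #   "Reply from x.x.x.x: bytes=60 time=23ms TTL=56"
    # (sub-millisecond replies are reported as "time<1ms").
    times = [float(t) for t in re.findall(r"time[=<](\d+)ms", out)]
    return statistics.mean(times)

host = "backup.example.com"            # placeholder destination
small = average_rtt_ms(host, 60)       # baseline: small ping
large = average_rtt_ms(host, 15_000)   # large ping via -l 15000

one_way_s = (large - small) / 2 / 1000           # extra one-way time, seconds
estimate_bps = 15_000 * 8 / one_way_s            # bits per second
print(f"Rough estimate: {estimate_bps / 1e6:.1f} Mb/s")
```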