While running a few tests on an online application, I noticed that the bandwidth consumed by the application is very high (value X) on one ISP and quite normal (value Y) on another.
I checked with another team in a different location and they measured value Y on their side, so value X is almost certainly wrong, but I'm curious to understand why it would be so.
Things I know so far:
- A packet capture analysis of traffic from both ISPs showed a noticeably higher rate of retransmitted packets on the ISP reporting value X, but it does not look large enough on its own to explain the size of the gap (X is roughly 2*Y). See the sketch after this list for one way to quantify that.
- I checked the firewall and found no rules that could cause this.
- ISP 1's NOC team confirmed that the amount of traffic observed on their end matches the amount of traffic on ours, but they could not say why. I could not escalate this to the SOC without adding more details.
- One of the network engineers told me that different ISPs have different routing paths set up, which can cause this.
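For reference, here is a minimal sketch of how the retransmission share can be estimated from a capture. It assumes tshark is installed, and the capture file name is a placeholder, not one of my actual files:

```python
#!/usr/bin/env python3
"""Estimate how much of the on-the-wire TCP volume is retransmissions.

Minimal sketch: requires tshark on the PATH; the capture file name
(capture_ispX.pcap) is a hypothetical placeholder.
"""
import subprocess


def total_bytes(pcap, display_filter):
    """Sum frame lengths of packets matching a Wireshark display filter."""
    out = subprocess.run(
        ["tshark", "-r", pcap, "-Y", display_filter,
         "-T", "fields", "-e", "frame.len"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(int(line) for line in out.split() if line)


pcap = "capture_ispX.pcap"  # hypothetical file name
tcp_bytes = total_bytes(pcap, "tcp")
retx_bytes = total_bytes(pcap, "tcp.analysis.retransmission")

retx_fraction = retx_bytes / tcp_bytes if tcp_bytes else 0.0
print(f"TCP bytes on the wire : {tcp_bytes}")
print(f"Retransmitted bytes   : {retx_bytes}")
print(f"Retransmission share  : {retx_fraction:.1%}")

# For the same application goodput, on-the-wire volume scales roughly as
# 1 / (1 - retx_fraction); doubling it (X ~= 2*Y) would therefore require
# a retransmission share of around 50%, far above what the capture shows.
print(f"Implied wire/goodput inflation: {1 / (1 - retx_fraction):.2f}x")
```

The inflation estimate is only approximate (Wireshark's retransmission heuristic also flags spurious and fast retransmissions), but it is enough to show that loss alone would have to be extreme to double the measured bandwidth.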
I'd really appreciate it if anyone can help me understand why there would be such a difference when running the same application on the same infrastructure over different networks.
Thanks