Following on from https://www.reddit.com/r/networking/comments/alf8h2/tcp_window_scaling_windows_vs_linux_crazy/ (now archived), for the last few months I've been spending far too many hours comparing Windows machines against Linux and Mac machines on a superfast, symmetric long fat network (LFN): residential gigabit fibre.
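For context on why window scaling matters on a link like this, the bandwidth-delay product sets how much data has to be in flight to fill the pipe. A rough back-of-the-envelope sketch (the 20 ms RTT is an assumed example figure, not a measurement from my setup):

```python
# Rough bandwidth-delay product (BDP) estimate for a gigabit LFN.
# The 20 ms RTT is an assumed example value, not a measurement from this setup.
link_bps = 1_000_000_000      # 1 Gbit/s symmetric fibre
rtt_s = 0.020                 # assumed round-trip time in seconds

bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1024:.0f} KiB")   # ~2441 KiB of data in flight to fill the link

# Without window scaling the classic TCP receive window tops out at 64 KiB,
# which at this RTT caps throughput at roughly:
max_bps_no_scaling = 64 * 1024 * 8 / rtt_s
print(f"Cap without scaling: {max_bps_no_scaling / 1e6:.0f} Mbit/s")  # ~26 Mbit/s
```

In other words, anything that stops the window growing to the BDP (or keeps collapsing it) will crater throughput on this kind of link, even though the same hardware is fine on the LAN.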
Fundamentally, I'm seeing the same symptoms /u/gandalf8110 observed: throughput on the Windows machines is utterly terrible, almost irrespective of what I change on the NIC or in Windows itself.
The PCs are powerful and easily capable of sustained gigabit on the LAN. With either machine booted into Linux, they also sustain maximum throughput in either direction to public or private iperf servers. A Mac on the same network under the same conditions is likewise fine on both LAN and WAN. The same tests under Windows (Windows 10 1908 or Windows 7 Pro) give awful results by comparison: nowhere near the available bandwidth in identical conditions.
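For anyone wanting to reproduce the comparison, the tests are just plain iperf runs in each direction. This isn't my exact methodology, but a minimal sketch of the idea using iperf3's JSON output (assuming iperf3 is installed; the server name below is a placeholder) looks something like this:

```python
# Illustrative sketch only: run iperf3 in both directions and report goodput.
# "iperf.example.net" is a placeholder; substitute a real public or private server.
import json
import subprocess

def run_iperf3(server: str, reverse: bool = False, seconds: int = 10) -> float:
    """Return achieved throughput in Mbit/s for one iperf3 TCP run."""
    cmd = ["iperf3", "-c", server, "-t", str(seconds), "--json"]
    if reverse:
        cmd.append("-R")          # server sends, client receives (download direction)
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    result = json.loads(out.stdout)
    # For TCP runs, the receiver-side summary reflects actual goodput.
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    server = "iperf.example.net"  # placeholder
    print(f"Upload:   {run_iperf3(server):.0f} Mbit/s")
    print(f"Download: {run_iperf3(server, reverse=True):.0f} Mbit/s")
```

Run the same script (or the equivalent bare iperf3 commands) from the same hardware under Linux, macOS and Windows and the gap is immediately obvious.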
The only things I've not tried yet are a 10 gig NIC and a non-Intel chipset NIC, but I doubt either would make any difference. I have a laundry list of variables I've checked, disabled, tried or tuned.
The only conclusion I've come to so far is that Windows' TCP windowing behaviour is erratic at best, horribly implemented at worst. What is it about Microsoft's CUBIC implementation, combined with how Windows manages the TCP stack, that causes such a huge deterioration in performance? Is there any solution to this at all?
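For reference, standard CUBIC (RFC 8312) grows the congestion window as a cubic function of the time since the last loss event. The sketch below just evaluates that standard growth curve with the RFC's constants; it is purely illustrative of the algorithm in general and says nothing about what Microsoft's stack actually does, which is exactly the open question.

```python
# Standard CUBIC window growth (RFC 8312), for reference only.
# This illustrates the algorithm in general, not Windows' implementation.
C = 0.4           # CUBIC scaling constant from the RFC
BETA = 0.7        # multiplicative decrease factor

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (in segments) t seconds after the last loss event,
    where w_max is the window size at which the loss occurred."""
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)   # time taken to climb back to w_max
    return C * (t - k) ** 3 + w_max

if __name__ == "__main__":
    w_max = 2000.0   # assumed example: ~2000 segments in flight at the time of loss
    for t in range(0, 21, 5):
        print(f"t={t:2d}s  cwnd = {cubic_window(t, w_max):7.0f} segments")
```

The point of showing it is that CUBIC itself recovers from a single loss fairly gracefully; if Windows is nonetheless stuck at a fraction of the link rate, the problem looks more like the window never being allowed to grow (or being repeatedly collapsed) than like the congestion control curve itself.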