We recently deployed our first 100GbE switch, fitted with 100GBASE-SR4 transceivers.
Our server team has opened a ticket with us saying they can't hit 100Gbps on data transfers between two hosts connected to the fabric. They are declaring that the service level is not met and are rejecting our deliverable.
When transferring a 1TB file, they see roughly 1-1.5Gbps on the wire according to interface statistics.
We have verified there are no interface errors or discards, and the end-to-end path has negotiated at 100Gbps.
I asked them to adjust their TCP window size to ensure maximum throughput, to which they said “that’s not a thing.” After I pushed back, they opened a case with Microsoft, who basically came back and said the TCP window is not manually adjustable on Server 2019 and that the OS scales the window automatically to fill the pipe.
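For context on why I suggested the window in the first place: a single TCP stream can only move about window ÷ RTT bytes per second, so the window has to cover the bandwidth-delay product before the link fills. A quick back-of-the-envelope sketch (the RTT values here are purely illustrative; the real number to plug in is the measured RTT between the two hosts):

```python
# Bandwidth-delay product: the window a single TCP stream needs
# to keep a given link full at a given round-trip time.
LINK_BPS = 100e9  # 100Gbps link

for rtt_ms in (0.05, 0.5, 1.0, 5.0):  # illustrative RTTs, not measured values
    window_bytes = LINK_BPS * (rtt_ms / 1000) / 8
    print(f"RTT {rtt_ms:5} ms -> window to fill 100G: {window_bytes / 2**20:6.1f} MiB")
```

Flipping it around, a ~1.5MiB effective window over a 10 ms RTT caps out around 1.2Gbps, which is in the same ballpark as what they're seeing, though on a local fabric the RTT should be far lower than that.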
I am starting to wonder whether they're hitting a disk read/write bottleneck that's throttling network throughput.
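One way to settle the disk question is a memory-to-memory test that never touches storage. iperf3 is the usual tool for this (e.g. iperf3 -c <host> -P 8, since a single stream rarely fills 100G on its own). Just to illustrate the idea, here's a rough Python sketch; the port is a placeholder, and a single CPython loop won't get anywhere near 100Gbps itself, but it takes the disks out of the path entirely:

```python
import socket
import sys
import time

PORT = 5201                      # placeholder; any free port works
CHUNK = b"\x00" * (1 << 20)      # 1 MiB buffer sent from RAM -- no disk in the path
DURATION = 10                    # seconds to transmit

def server() -> None:
    # Receive bytes until the sender disconnects, then report goodput.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.monotonic()
        while data := conn.recv(1 << 20):
            total += len(data)
        elapsed = time.monotonic() - start
        print(f"received {total / 2**30:.1f} GiB in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbps")

def client(host: str) -> None:
    # Blast memory-resident data at the server for DURATION seconds.
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            conn.sendall(CHUNK)

if __name__ == "__main__":
    # No argument: run as receiver. With a hostname: run as sender.
    server() if len(sys.argv) == 1 else client(sys.argv[1])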
Any thoughts?