Thursday, September 26, 2019

Mixed 10gig/25gig/100gig Network Speeds During Migration

Hi,

I'm currently running 10gig/40gig in the datacenter with 10gig to hosts, and am looking at upgrading to 100gig with 100gig to the hosts via ConnectX-6 PCIe 4.0 NICs.

The upgrade would likely happen in phases spread out over a few years. I'm wondering if I should be looking at breaking out 100gig ports to 4x10gig and using QSA (QSFP-to-SFP+) adapters on the NICs to keep hosts at 10gig until everything is capable of 100gig. This would also mean running some of the 100gig ports at 40gig for uplinks into the current infrastructure.
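
For planning purposes, here's roughly how I'd audit negotiated link speeds across hosts during the mixed phase. This is a minimal sketch assuming Linux hosts with sysfs; the interface names are placeholders for whatever your hosts actually use.

from pathlib import Path

def link_speed_mbps(iface: str) -> int:
    # Negotiated speed in Mb/s as reported by the driver (-1 if the link is down).
    return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())

if __name__ == "__main__":
    for iface in ("eth0", "eth1"):  # placeholder interface names
        try:
            print(f"{iface}: {link_speed_mbps(iface)} Mb/s")
        except (OSError, ValueError) as exc:
            print(f"{iface}: unable to read speed ({exc})")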

Are there any major downsides to allowing hosts to run at mixed 10gig/100gig speeds for an extended period? I imagine switch buffer overruns would be a problem, but most of our applications are TCP, which would handle the retransmits. The ASICs would be Trident 3 on the 100gig switches and Trident 2+ on the 10/40gig switches.
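
To keep an eye on that during the transition, here's the sort of quick check I'd run on a host to watch the TCP retransmit rate. A rough sketch assuming Linux; it samples the Tcp counters in /proc/net/snmp, so it's host-wide rather than per-flow.

import time

def tcp_counters() -> dict:
    # /proc/net/snmp has two "Tcp:" lines: one of field names, one of values.
    with open("/proc/net/snmp") as f:
        header, values = [line.split() for line in f if line.startswith("Tcp:")]
    return dict(zip(header[1:], map(int, values[1:])))

if __name__ == "__main__":
    before = tcp_counters()
    time.sleep(10)  # sample interval in seconds
    after = tcp_counters()
    out = after["OutSegs"] - before["OutSegs"]
    retrans = after["RetransSegs"] - before["RetransSegs"]
    pct = 100.0 * retrans / out if out else 0.0
    print(f"{retrans} retransmits / {out} segments sent = {pct:.3f}%")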

For internet-facing applications, how important would you say it is to have matched speeds out to the internet edge? We drop to 1gig at our internet edge, so I'm curious whether that will present any issues.
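
For what it's worth, I'd baseline what a single flow actually gets through the edge before and after any changes. A minimal sketch wrapping iperf3; it assumes iperf3 is installed and you have a reachable iperf3 server, and the server name below is a placeholder.

import json
import subprocess

def iperf3_tcp(server: str, seconds: int = 10):
    # Run a single TCP test and return (Mb/s sent, retransmit count).
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    sent = json.loads(result.stdout)["end"]["sum_sent"]
    return sent["bits_per_second"] / 1e6, sent["retransmits"]

if __name__ == "__main__":
    mbps, retrans = iperf3_tcp("iperf.example.net")  # placeholder server
    print(f"{mbps:.0f} Mb/s with {retrans} retransmits")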

Thanks!


