Thursday, September 26, 2019

Breakout Super Core

So a lot of the 100Gig switches out there can run their ports in 40Gig mode with four 10Gig breakouts each. Depending on the model, that allows for over 120 10Gig connections, which is huge port density in a 1U form factor!
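To put rough numbers on that, here's a quick sketch assuming a typical 32-port 100GbE box (port counts vary by model, so treat the figures as placeholders):

# Rough port-density math for a single 1U breakout switch.
# Assumption (hypothetical, adjust for your model): 32 QSFP28 ports,
# each run in 40GbE mode and broken out into 4 x 10GbE.
qsfp_ports = 32
breakouts_per_port = 4

total_10g_ports = qsfp_ports * breakouts_per_port
print(f"10GbE ports per 1U switch: {total_10g_ports}")  # 128

# Reserve a couple of ports for northbound uplinks and you still clear
# 120 server-facing 10GbE connections.
uplinks = 2
print(f"Server-facing 10GbE ports: {(qsfp_ports - uplinks) * breakouts_per_port}")  # 120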

If your datacenter has fewer connections than that to accommodate, what are the actual technical disadvantages of just connecting everything to a single pair of switches? Assume everything has redundancy, i.e. each server connects to both switch A and switch B. How is it any riskier than a traditional spine/leaf, since you still need two switches to physically fail before you lose connectivity?

Not trying to be a smart arse, just genuinely trying to figure out what makes it a bad idea.
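Back-of-the-envelope on the "two switches have to fail" point (a sketch with a made-up per-switch failure probability, just to show the shape of the math, not vendor data):

# Probability a dual-homed server loses all connectivity, assuming
# independent switch failures. The per-switch failure probability is a
# hypothetical placeholder.
p_switch_down = 0.001  # assumed chance a given switch is down at any moment

# The server connects to both switch A and switch B; it only goes dark
# if both are down at the same time.
p_server_isolated = p_switch_down ** 2
print(f"P(server isolated): {p_server_isolated:.6f}")  # 0.000001

# The same two-failure requirement holds in a spine/leaf design where the
# server dual-homes to a leaf pair, so the raw math looks identical; the
# differences show up in blast radius and maintenance windows, not here.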

You basically just collapsed a data center network into literally two switches. Everything would live on that pair... the SVIs, the VLANs, the physical server connections, plus the northbound connections, all on the same two boxes thanks to the high port density. It also eliminates the need for any kind of overlay network.

Interestingly enough, these 100Gig switches are often cheaper than 48-port 10Gig switches despite offering nearly triple the 10Gig port density. They can also be licensed for full L3 routing and BGP, and you can set them up for redundancy however you want, e.g. MLAG or a virtual switch stack.
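For the cost angle, here's the cost-per-10Gig-port math, purely illustrative; the prices below are hypothetical placeholders, not quotes:

# Hypothetical street prices, only to show how the per-port comparison works.
price_48x10g = 12000.0   # assumed price of a 48-port 10GbE switch
price_32x100g = 18000.0  # assumed price of a 32-port 100GbE switch

ports_48x10g = 48
ports_32x100g_breakout = 32 * 4  # 128 x 10GbE via breakouts

print(f"48x10G cost per 10G port:  ${price_48x10g / ports_48x10g:.0f}")
print(f"32x100G cost per 10G port: ${price_32x100g / ports_32x100g_breakout:.0f}")

# Breakout optics and cables add real per-port cost, so factor those in
# before taking the headline number at face value.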

Thoughts?


