Hi all, I'm part of a network team tasked with project management for building a large private data center (around 2,000 racks in total spread across 4 floors; if roughly 75% of those are server racks, that works out to around 1,500 leaf switches).
It's well beyond what we run today. We have 600 racks spread across 3 data centers, with 3 Cisco ACI fabrics per DC, but the largest fabric we have is only 40 leaf switches. In each existing DC the design evolved from an old 3-tier network, so the ACI fabrics/segments within a DC are still interconnected through core switches left over from that 3-tier design, although we no longer have distribution switches.
The project is still at the early blueprint stage for the physical data center, not yet at the network design stage, and I'm gathering input even before we select a consultant. From what I can find online, most hyperscalers use BGP EVPN with no proprietary fabric. If we go with Arista leaf-spine, I'd appreciate any experience/input on the following (see the sizing sketch after this list):

1. If we don't want to use chassis switches, does anyone have experience running Arista with a superspine tier at that leaf count? Arista's site says they support 300k+ with leaf-spine, but that's with chassis. The reason for avoiding chassis is to keep components more universal and interchangeable.

2. What overlay software would you use? We run VMware as the hypervisor. Does anyone have experience using NSX with that many servers (I estimate around 72,000 physical servers in total if all the racks are used)?

3. We still have mainframes. How would you handle bare metal / non-VM workloads?

4. Or should we just keep going with ACI and swallow the bitter pill of endless bug scrubs, etc.?
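For question 1, the superspine decision is largely port math: with fixed-form-factor switches you go from a 3-stage to a 5-stage Clos (leaf / pod spine / superspine). Below is a minimal sizing sketch in Python. The radixes (48 host ports per leaf, a hypothetical 64-port fixed box reused as both pod spine and superspine) and the half-down/half-up port split are illustrative assumptions, not a design; plug in the actual radix of whatever switch gets evaluated.

```python
# Rough 5-stage Clos sizing sketch (leaf / pod spine / superspine).
# All port counts below are hypothetical examples, not recommendations.

LEAF_HOST_PORTS = 48   # e.g. 48 server-facing ports per ToR leaf
LEAF_UPLINKS = 8       # uplinks from each leaf into its pod spines
SPINE_RADIX = 64       # assumed fixed-form-factor pod spine radix
SUPERSPINE_RADIX = 64  # same box reused as superspine ("universal components")

# Pod spines split their ports: half down to leafs, half up to superspine.
leafs_per_pod = SPINE_RADIX // 2           # 32 leafs per pod
spines_per_pod = LEAF_UPLINKS              # one uplink per pod spine -> 8 spines
uplinks_per_pod_spine = SPINE_RADIX // 2   # 32 uplinks per pod spine

# Superspine is built as one plane per pod spine. Within a plane, each
# pod spine connects once to every superspine switch, so each superspine
# switch burns one port per pod -> max pods = superspine radix.
superspines_per_plane = uplinks_per_pod_spine  # 32 switches per plane
planes = spines_per_pod                        # 8 planes
max_pods = SUPERSPINE_RADIX                    # 64 pods

max_leafs = leafs_per_pod * max_pods
max_server_ports = max_leafs * LEAF_HOST_PORTS

print(f"leafs per pod:       {leafs_per_pod}")
print(f"max pods:            {max_pods}")
print(f"max leafs:           {max_leafs}")          # 2048 -> covers ~1,500
print(f"max server ports:    {max_server_ports}")   # ~98k -> covers ~72,000
print(f"superspine switches: {planes * superspines_per_plane}")
```

Under these assumed radixes, 64 pods of 32 leafs gives 2,048 leafs and roughly 98k server ports, so a 1,500-leaf / 72,000-server target fits in a 5-stage design without chassis. The trade-off is switch count (256 superspines here) and cabling, which is exactly the kind of input I'm hoping people with production experience can comment on.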
By the way, Arista is still quite rare here; in my country most places are Cisco shops.
Thanks, and any input is welcome.