Hyperconverged gear keeps getting smaller and smaller, so we're now building a small DC with 4 to 8 switches. I'm thinking of skipping a spine/leaf architecture entirely and just connecting the "leaf" switches to each other, since the switches typically have 8x 100 Gbps ports. Our bandwidth requirements come nowhere near filling the 100 Gbps links, so it probably doesn't matter if traffic has to hop through another "leaf" switch to reach the last one?
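To sanity-check the port budget, here's a quick back-of-the-envelope sketch in Python. The 8x 100 Gbps figure is from above; the ring vs. full-mesh comparison is just my framing of "connecting the leaves to each other", not a recommendation.

```python
# Rough port/hop math for a small leaf-only fabric.
# Assumptions (from the post): 4-8 switches, 8x 100 Gbps ports each.
PORTS_PER_SWITCH = 8

for n in (4, 6, 8):
    mesh_ports = n - 1                 # full mesh: one link to every other leaf
    mesh_links = n * (n - 1) // 2      # total cables to pull
    ring_ports = 2                     # ring: left + right neighbour only
    ring_worst_hops = n // 2           # worst-case switch hops around the ring
    print(f"{n} switches: full mesh uses {mesh_ports}/{PORTS_PER_SWITCH} "
          f"ports per switch ({mesh_links} cables, 1 inter-switch hop max); "
          f"ring uses {ring_ports} ports ({n} cables, {ring_worst_hops} hops worst case)")
```

At 8 switches a full mesh already eats 7 of the 8 ports per switch, while a ring leaves ports free but adds the extra hops mentioned above.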
As we already have Aruba access switches + Aruba WLAN, I've been looking into Aruba CX switches for the DC as well. I've verified the setup with a few 6300Fs, which also support EVPN over VXLAN, and everything seems to work great.
One option would be to just use 2x 8400 modular switches. What are your thoughts on this kind of setup? In this scenario we would run MPO trunks from the other racks to the rack housing the modular switches, to cut down on cabling. Even one would probably have enough ports, but with two running as a VSX pair you at least limit the blast radius if someone pushes an incorrect configuration and brings the whole DC down :)
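For comparison, a rough count of the trunks each option needs. This is only a sketch: the rack counts and the "one MPO trunk per remote rack per chassis" dual-homing are my assumptions, not anything from a design doc.

```python
# Cable-count sketch: central 2x 8400 VSX pair vs. meshing per-rack leaves.
# Assumption: R racks, each remote rack dual-homed with one MPO trunk
# to each chassis in the central rack.
for r in (4, 6, 8):
    hub_trunks = (r - 1) * 2          # every other rack pulls 2 trunks to the core rack
    mesh_trunks = r * (r - 1) // 2    # full mesh between per-rack leaves
    print(f"{r} racks: VSX pair needs {hub_trunks} MPO trunks, "
          f"full leaf mesh needs {mesh_trunks}")
```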
Also, having a separate router for internet seems a bit wasteful since we're only getting a default route from the ISPs. Any thoughts on why connecting the ISP routers directly to the DC switches would be a bad idea? These would probably be 1 Gbps links. I guess we could run a virtual router in case we ever need full BGP tables for some reason?
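On the full-table question, a very rough sizing sketch of why a full feed usually ends up in a VM or dedicated router rather than in a DC switch. The prefix counts, FIB size and per-route memory figures below are my ballpark assumptions, not measured numbers or datasheet values.

```python
# Back-of-the-envelope: default route vs. full BGP tables.
# Assumptions: ~1.0M IPv4 + ~0.2M IPv6 prefixes in a current full feed,
# and a DC switch hardware FIB sized for a few hundred thousand routes.
FULL_FEED_ROUTES = 1_000_000 + 200_000
SWITCH_FIB_ROUTES = 300_000            # placeholder; check the actual datasheet
VM_BYTES_PER_ROUTE = 250               # rough RIB memory cost in a software router

print(f"Full feed: ~{FULL_FEED_ROUTES:,} routes")
print(f"Fits in assumed switch FIB ({SWITCH_FIB_ROUTES:,})? "
      f"{FULL_FEED_ROUTES <= SWITCH_FIB_ROUTES}")
print(f"Approx RAM in a virtual router: "
      f"{FULL_FEED_ROUTES * VM_BYTES_PER_ROUTE / 1e9:.1f} GB per feed")
```

With only a default route from the ISPs none of this matters yet, which is exactly why the separate router feels wasteful today.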
Thanks for any thoughts!