Hi all,
Just wanted to get everyone's thoughts on the best way to go about this. We recently added a new pair of Nexus switches (in a vPC) that will serve essentially one use case: large amounts of NFS storage. Only one network will live on these switches.
My current plan is to uplink these switches to our pair of 9Ks used for core routing. I'm planning to run 4x 100G QSFP28 links to fully mesh the two switch pairs, with all four links in the same port-channel. As it stands, the new switch pair carries only the storage VLAN (each storage node connects to the new 9K pair using a vPC) plus the storage VLAN SVI. On the core there is another VLAN + SVI used for BGP peering with our core firewall. Both of these SVIs/VLANs will be in their own VRF.
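For what it's worth, the usual way to land all four links in one bundle between two vPC pairs is a double-sided (back-to-back) vPC: each side builds one port-channel that spans both of its switches. A rough NX-OS sketch of one side is below; the interface names, port-channel number, and VLAN ID are all assumptions, not from the original post.

```
! Hypothetical double-sided vPC uplink on one of the new Nexus switches.
! Eth1/49-50 = the two 100G uplinks on this switch toward the core pair.
interface Ethernet1/49 - 50
  description Uplink to core 9K pair
  channel-group 100 mode active      ! LACP

interface port-channel100
  description Double-sided vPC to core
  switchport mode trunk
  switchport trunk allowed vlan 200  ! storage VLAN (ID assumed)
  vpc 100                            ! same vPC number configured on the peer switch
```

The mirror-image config goes on the vPC peer and on the core pair, so spanning-tree sees one logical link and all four members can forward.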
Just wanted to get others' thoughts on how they would handle the design of something like this. 400G is not required and certainly overkill for the time being, but this needs to be very highly available NFS storage with the best throughput possible (multiple petabytes and growing).
Is a full mesh between the 9K vPC pairs needed, or am I just wasting ports? Anything you would personally change about the design? I've considered ditching the SVI and routing from the physical ports instead, but I'm not familiar with L3 port-channels. Just a bit of a network newb looking for opinions.
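In case it helps with the L3 question: on NX-OS a routed port-channel is just a port-channel with `no switchport`, but a routed port-channel can't also be a vPC member, so going L3 usually means separate point-to-point links from each switch with ECMP instead of one vPC bundle. A minimal sketch, with the port-channel number, VRF name, and addressing all assumed for illustration:

```
! Hypothetical routed (L3) port-channel on NX-OS.
interface port-channel200
  description L3 uplink to core 9K
  no switchport                ! makes this a routed interface instead of a switchport
  vrf member STORAGE           ! VRF name assumed
  ip address 10.0.0.1/31       ! /31 point-to-point addressing assumed
  no shutdown
```

With this approach you'd typically run one routed bundle (or routed link) per physical switch and let the routing protocol balance across them, rather than stretching one port-channel across the vPC pair.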