Hey all,
Hoping for some constructive feedback and suggestions on how to best accomplish the networking element of a project I have coming up in just over three weeks. In short, a client is moving their office to a different floor of the same building. The client doesn't need any of their gear replaced, so unfortunately the customary side-by-side approach isn't viable. Since they're staying in the same building, we did get fiber run between the floors, and we ordered a small amount of swing gear for better flexibility.
Anyway, the current design (and this question is really focused on switching only) is as follows:
They have 1 MDF and 4 IDFs. We have 2 stacks of 3850s in the MDF; each stack consists of two 48-port PoE switches and a 12-port 10Gb SFP+ switch. Presently, I have these two stacks connected via the 4-port 10Gb network modules with a cross-stack LACP EtherChannel. The two core stacks run HSRP on all L3 VLANs, and I have set STP priorities so that root alternates even/odd per VLAN between the stacks, using Rapid PVST+.
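For anyone who wants to picture that, here's a rough sketch of the core side of it. The VLAN numbers, addresses, and channel number are made up for illustration, not the real config:

! cross-stack LACP EtherChannel between the two core stacks (example ports)
interface range TenGigabitEthernet1/1/1, TenGigabitEthernet2/1/1
 switchport mode trunk
 channel-group 1 mode active
!
! HSRP on an example L3 VLAN - this stack active for odd VLANs, the other for even
interface Vlan11
 ip address 10.1.11.2 255.255.255.0
 standby version 2
 standby 11 ip 10.1.11.1
 standby 11 priority 110
 standby 11 preempt
!
! Rapid PVST+ root alternating per VLAN (mirror-image priorities on the other stack)
spanning-tree mode rapid-pvst
spanning-tree vlan 11,13,15 priority 4096
spanning-tree vlan 12,14,16 priority 8192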
Each of the 4 closets has 2 3850s in a single stack, with a 1Gb uplink to each core stack (these uplinks also land on ports of the 10Gb/1Gb 4-port network modules on the 48-port switches in the MDF). We also have plenty of free 1Gb/10Gb SFP/SFP+ ports on the 12XS models. Presently, the two 48s in each MDF stack are in their own power stack, and the 12XS switches are not in a power stack. Each IDF stack is also in a power stack, and every switch has dual PSUs (all 14 of them).
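For reference, the power-stack piece is just the 3850 StackPower config; something like the below, where the power-stack name and mode are placeholders rather than the actual setup:

! define the power stack and its mode (assuming redundant mode here)
stack-power stack PS-MDF-A
 mode redundant
!
! assign each member switch to that power stack
stack-power switch 1
 stack PS-MDF-A
stack-power switch 2
 stack PS-MDF-A
!
! verify with: show stack-power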
Other devices (WAN routers, ASAs, etc.) are redundant and served via disparate power feeds plus ATS units. This design has been 100% reliable - we have all the ESXi and NetApp systems redundantly connected and have been able to bounce stacks in the MDF with no disruption whatsoever.
Now, the new design will have roughly a 65/35 split of user (and WAP, etc.) ports between the new MDF and a SINGLE IDF. I have a bunch of twinax cables, but am concerned about layout and length. I also have bigger concerns about StackWise and StackPower cabling.
Questions:
1) Right now, these 8 3850-24 switches in the existing IDF stacks will need to be joined to other stacks as additional members. What is the best way to do so? Any best practices would be appreciated. I know about setting priority, hot-adding, etc. - should I wipe the config on each first? I want to minimize those 18-minute reboots. Assume all switches will be on the same code prior to the project. (I've put a rough sketch of the commands I have in mind below, after these questions.)
2) My IDF layout is pretty straightforward: it will certainly be one stack with a mix of 48- and 24-port switches. I'm leaning towards two 48s plus however many 24s are needed, with 2 of my 4 4-port network modules and the 10G-SR optics I already have, to give the stack redundant 10Gb uplinks (I know I could use single-mode fiber and third-party optics, but that doesn't solve the additional network module issue). I also have a 12-strand of OS2 in addition to my 12-strand of OM4. A rough sketch of the uplink idea is below as well.

However, I'm torn on the MDF design. Should I have a combined core/access layer and just have 2 stacks, each with some user and some core functions? I have 2 cabinets adjacent to each other, and to the right of those are three relay racks on which all my patch panels (fiber and copper) are terminated. While StackWise cables and twinax can be fairly long, StackPower cables are very short. What layout is recommended? Or should I have two independent 12XS switches, each stacked with a 24, perform all core functionality, and buy a few more network modules so I end up with 1 user stack in the MDF, 1 in the IDF, and 2 core stacks? I will need about 12 HA copper 1Gb connections for important management devices, and am using about 8 10Gb twinax connections from infrastructure to each 3850-12XS switch.
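On question 1, this is roughly the sequence I'm picturing for prepping and adding a member; the member numbers and priorities are placeholders, not a worked-out plan:

! on the switch being added, standalone, before cabling it into the target stack
write erase
delete flash:vlan.dat
switch 1 renumber 3          ! member number it should take in the target stack
reload                       ! renumber takes effect after the reload

! on the running target stack, beforehand
switch 1 priority 15         ! intended active keeps the highest priority
switch 2 priority 14         ! intended standby next
show switch                  ! afterwards, confirm the new member shows up in Ready state

My understanding is that the new member gets cabled in powered off and then powered up so it joins without disturbing the running stack, but that's exactly the kind of thing I'd like confirmed.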
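And on the IDF uplink piece in question 2, the idea is one 10Gb port-channel from the IDF stack to each core stack, spread across the two network modules. Port and channel numbers below are made up for illustration:

! IDF stack - uplink to core stack A (one link from each network module)
interface range TenGigabitEthernet1/1/1, TenGigabitEthernet2/1/1
 switchport mode trunk
 channel-group 11 mode active
!
! IDF stack - uplink to core stack B
interface range TenGigabitEthernet1/1/2, TenGigabitEthernet2/1/2
 switchport mode trunk
 channel-group 12 mode active

Since the two core stacks are independent stacks, a single LACP bundle can't span both of them; Rapid PVST+ ends up blocking one of the two port-channels per VLAN, which at least lines up with the even/odd root setup.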
Any suggestions are welcome.