Hi,
I need some input from you guys.
Let's go back to early 2020, when the company decided to hire a CCIE Data Center to revamp the whole Campus/DC, and me to help out with the deployment of Meraki SD-WAN across our remote sites. The SD-WAN project went smoothly and we are very happy with the results, but that's not the topic here.
To put things in perspective: we have one primary DC in the same location as our Campus (HQ) and a secondary DC at a different location. We have around 100 remote sites in a hub-and-spoke topology with our two DCs over SD-WAN. 3,000 employees total, 250 of them at the Campus.
In our main DC, everything was hooked up to 2X Nexus 5Ks plus fabric extenders: servers, firewalls, user stacks, WAN, etc. Throughout the year we replaced the Nexus 5Ks with 2X Nexus 9Ks. We also separated the DC block from the Campus block by adding new core switches (C9500) and connecting the user stacks, WAN and firewalls to the Campus block. This work was done by the CCIE alone, with me helping with the physical work. We also installed 2X Nexus 9Ks at our secondary DC and put a DWDM fiber between the two DCs. All of this meant a lot of maintenance windows, plus outages outside of them.
As of right now, no DCI technology is implemented because they plan to go for ACI later this year. We have a regular VLAN spanning both DCs, with the HSRP gateway living in our primary DC; HSRP runs only between the 2X N9Ks there (rough sketch below). Now, to implement ACI, they want to buy 4 spines + 4 leafs + APIC controllers.
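For anyone trying to picture the current setup, here's a minimal sketch of that kind of stretched-VLAN gateway on one of the primary-DC N9Ks (NX-OS syntax; the VLAN number and addresses are placeholders I made up):

    feature hsrp
    feature interface-vlan
    ! SVI for the VLAN that spans both DCs; the gateway only exists in the primary DC
    interface Vlan100
      no shutdown
      ip address 10.1.100.2/24
      hsrp version 2
      hsrp 100
        preempt
        priority 110        ! higher priority on the active N9K
        ip 10.1.100.1       ! virtual gateway IP shared by both primary-DC N9Ks

The side effect is that traffic from the secondary DC has to cross the DWDM link just to reach its default gateway, which is part of why a real DCI (or VXLAN, see below) matters.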
We also need to replace our user stacks (C2960X) at the Campus. Same thing here: they plan to implement an SDA fabric, so they are looking to buy 30X C9300s (the same count as our C2960Xs today) plus 2 extra DNA Center controllers (we already have 1 that came free when they refreshed our remote sites). The thing is, we are short on space in our DC right now. Long story short, we have to take a step back and reinstall the N5Ks in production as a distribution layer, both because our core is full and to facilitate the transition to SDA.
We basically have to redo a lot of what we did last year just because of SDA. I don't understand how this wasn't planned ahead of time. Same problem with ACI: we will probably have to do some magic when it comes time to implement the solution, because nothing has been planned.
This whole implementation has been going on for more than a year now and we are far from done. It is time consuming and will cost our company A LOT of money. I like Cisco products, but to be honest I'm starting to wonder if it's all worth it for the company. Do we need SD-Access? Do we need ACI? I mean, we only have 200-250 users in our Campus and the setup is pretty standard. Software-defined is nice (SD-WAN for 100 remote sites is amazing), but for a Campus with 5-6 user stacks... I don't think it's worth it, unless I'm missing something. For our DCs we could simply use a technology like VXLAN (sketch below).
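To be concrete about the VXLAN option: the N9Ks we already own can stretch that VLAN between the DCs without ACI. A rough flood-and-learn sketch (NX-OS; the VNI, VLAN and multicast group are placeholders, and a real design would probably add a BGP EVPN control plane and anycast gateways):

    feature nv overlay
    feature vn-segment-vlan-based
    ! Map the stretched VLAN to a VXLAN VNI
    vlan 100
      vn-segment 10100
    ! VTEP interface; loopback0 is the tunnel source
    interface nve1
      no shutdown
      source-interface loopback0
      member vni 10100
        mcast-group 239.1.1.100   ! BUM traffic replicated via multicast

And that runs on the N9Ks we already have, instead of buying 4 spines + 4 leafs + APICs.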
I haven't even mentioned the WLC and the 50+ new Cisco APs they will want to buy for SDA. We've been on Aerohive APs for the last 4-5 years and they've been rock solid.
The CCIE is a lone wolf who doesn't want any help (apart from the physical work), and we are a small network team (3 people), so we all get the blame when things go wrong. He is literally a single point of failure for us. When I say "they" above, it actually means "him", since he is the one pushing for all this.
Bottom line: I don't think SD-Access and ACI are worth it for us, especially as we start moving services to Azure, and especially if one guy is working alone on this big project.
What do you guys think about SD-Access and ACI?
Thank you for reading