Wednesday, June 5, 2019

Nexus 9K - NX-OS vs ACI

All,

I have done some reading on various threads here, as well as checked out some videos and articles, to get a feel for the topic. Even with that, plus some meetings with my VAR and SE, I still feel unsure about which direction to take. I'd appreciate input as well as suggestions on what resources to look at.

My background: 6 years of campus LAN/WAN (L2 and L3 to the access), ASA/PAN, ISE, some wireless, etc. For the Data Center, I have done Nexus 5K with 2K in vPC/HSRP, but I was more administering than engineering it, getting used to NX-OS as well as the UCS and FIs. Looking back, I'm lucky not to have broken much. I have been in the new environment for 2.5 months and am getting up to speed with things. Unfortunately, there's a knowledge gap, as the long-term engineer/architect of 20+ years is no longer with us (hence the position opening).

Current Situation:

-Data Center with a non-Cisco core set up in that vendor's vPC equivalent: two chassis at the core plus 17 or so ToR switches. It's stable but aging and nearing complete EoL.

-The downstream switches that the servers plug into are connected to each core over that 'vPC' trunk. The SVIs live on the core, but each stack (1-5 switches) has unique VLANs. (A rough NX-OS equivalent of this setup is sketched after this list.)

-The WAN connections land on Catalyst switches, several routers handle the BGP peering with those service providers, and there is a full-mesh iBGP configuration internally.

-The firewalls plug into various DMZ switches (non-Cisco).

-Some servers plug directly into the core.

-We have a second DR DC, but most of the storage/compute there is rented during scale-up for testing. There's some switching there in an isolated environment with the same Layer 2 VLANs and Layer 3 subnets, but using it requires shuffling cabling and other configuration changes. We do not expect to run Active/Active DCs due to the lack of infrastructure, but it would be nice to streamline the DR testing.

-Our team does not touch the storage/compute gear. It's not UCS and it is not part of this project, so it is being left alone.
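
For what it's worth, my mental model of a like-for-like NX-OS replacement for this (essentially the vPC option the VAR keeps bringing up) is a pair of N9Ks with the SVIs and HSRP on the core and the server stacks dual-homed over vPC port-channels, roughly like the snippet below. This is only my own sketch from the study material, so the domain, VLAN, and IP values are placeholders rather than anything from our environment, and the underlay and member links are omitted:

  feature vpc
  feature hsrp
  feature interface-vlan

  vpc domain 10
    peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
    peer-gateway

  interface port-channel1
    description vPC peer-link to the other core (member links omitted)
    switchport mode trunk
    vpc peer-link

  interface port-channel20
    description Dual-homed downlink to a server access stack
    switchport mode trunk
    switchport trunk allowed vlan 100
    vpc 20

  interface Vlan100
    description Example server VLAN gateway, HSRP VIP shared with the peer core
    no shutdown
    ip address 10.1.100.2/24
    hsrp version 2
    hsrp 100
      ip 10.1.100.1
      priority 110
      preempt

That would keep the migration familiar, but it is still the classic L2 design at heart, which is part of my hesitation below.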

Design Considerations:

-A pure Spine/Leaf design with the N9Ks looks good, but I feel like we'll struggle with the L2-to-L3 transition due to the VLAN sprawl. How do we overcome this? We're talking 200+ ports in a single VLAN. Each leaf appears to be its own L3 segment, so it seems like a similar issue to bringing routed access to the IDFs. (I've sketched my rough understanding of the VXLAN answer to this after this list.)

-My VAR tried to tell me most people stick with L2 in the DC, but this seems wrong to me. He kept bringing up vPC, but that is L2 rather than L3 routed; while it would make this migration easier, it would not future-proof the design.

-ACI keeps coming up. My team does not have much Nexus experience, and the VAR made it seem like ACI is a learning curve with or without that experience, so that was a non-issue. However, having two DCs that we don't expect to run Active/Active seems like it would double me up on APICs. I'm also concerned because concepts like Bridge Domain, EPG, and IS-IS are foreign to me at the moment.

-Potential ACI Benefits

- There is a desire for more visibility and micro-segmentation of DC traffic (user-to-server as well as server-to-server). ACI seems to lend itself nicely to achieving that, albeit with some additional bolt-ons and a better understanding of the server environment than I currently have. The orchestration and automation aspects could also save time, given our team consists of two to three people supporting around 1500.
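
From what I've read so far, the way a routed Spine/Leaf fabric is supposed to handle the "same VLAN across 200+ ports" problem is VXLAN EVPN with a distributed anycast gateway: the VLAN gets mapped to a VNI, every leaf that needs it carries the same SVI with the same gateway IP, and the fabric stretches the L2 segment across the routed underlay. Below is my rough (and possibly naive) sketch of the per-leaf pieces based on the CCNA DC material and dCloud labs. The VLAN, VNI, MAC, and IP values are placeholders, and a real build would also need the underlay routing, the BGP EVPN peering, and a tenant VRF with an L3 VNI:

  nv overlay evpn
  feature bgp
  feature interface-vlan
  feature vn-segment-vlan-based
  feature nv overlay

  fabric forwarding anycast-gateway-mac 0000.2222.3333

  vlan 100
    vn-segment 10100

  interface Vlan100
    description Same gateway IP configured on every leaf (anycast gateway)
    no shutdown
    ip address 10.1.100.1/24
    fabric forwarding mode anycast-gateway

  interface nve1
    no shutdown
    host-reachability protocol bgp
    source-interface loopback1
    member vni 10100
      ingress-replication protocol bgp

  evpn
    vni 10100 l2
      rd auto
      route-target import auto
      route-target export auto

If I'm understanding it correctly, the same subnet and gateway can then live behind any leaf in the fabric, which seems like it would sidestep the "routed access to the IDF" problem, but I'd appreciate a sanity check from anyone running this in production.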

I'm currently going through CCNA Data Center material and trying to get up to speed. At the end of the day, I'm finding DC networking a little overwhelming. I just haven't used overlays like VXLAN before, and a lot of other new concepts keep coming up. I have VIRL and dCloud access for virtual NX-OSv labs, as well as CBTNuggets, but I feel ill-prepared to make any decisions yet.


