Friday, December 13, 2019

Potentially spineless EVPN VxLAN. Is this even possible?

So I've spent a week trying to wrap my head around VxLAN with EVPN and its configuration on Cisco. I'm still very much confused about many aspects of it, since the documentation and guides found on the internets vary quite a bit and are mainly written for the standard leaf-spine architecture. Please forgive me if my use of terms in this post doesn't always make sense, I'm still very green at this.

I have 3 physically different locations with the following devices connected to each other: Catalyst 9300 stack <-> Nexus9k VPC pair <-> Nexus9k VPC pair. There's already a production environment on them with plain old stretched L2 (trunk from the C9k to the N9k on the other side). Currently I also have only one physical interface available for these connections on each switch (already configured as an L2 trunk port), but in the future we will potentially add more physical connections for redundancy. It is entirely possible this is already a show stopper for what I'm trying to achieve, since my only option is to use a VLAN SVI for the VxLAN underlay instead of a routed port, which is the suggested design in every guide I've read. I have not found any explanation yet on why I could not use an SVI, though.
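For reference, here's roughly what I imagine the SVI-based underlay would look like on the N9k side. This is just a sketch with a made-up underlay VLAN 999 and addressing, not tested config. One caveat I did stumble on in the documentation: on the Nexus 9000, a VLAN that carries VxLAN underlay traffic apparently has to be declared as an infra-VLAN:

    vlan 999
      name VXLAN-UNDERLAY

    ! apparently required on N9k when the VxLAN underlay rides over an SVI
    system nve infra-vlans 999

    interface Ethernet1/1
      description existing trunk uplink, shared with the production L2
      switchport mode trunk
      switchport trunk allowed vlan add 999

    interface Vlan999
      description point-to-point underlay link
      no shutdown
      no ip redirects
      ip address 10.0.0.1/31
      ! VxLAN adds roughly 50 bytes of headers, so jumbo MTU on the underlay
      mtu 9216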

I'm trying to stitch it all together with EVPN VxLAN, but as you can see, it's not the standard leaf-spine design. I could in theory configure the middle N9k pair as a spine, but I need all the switches to act as VTEPs. (Am I trying to create a monster? :)

From what I have read so far, I understand that it is possible to avoid using multicast entirely for BUM traffic with the EVPN control plane (head-end replication). Just a minute ago I discovered that this feature is not available on our current C9k Fuji firmware version, so we will have to upgrade it, which is not a problem.
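For my own notes, the head-end (ingress) replication bit should look something like this on the Nexus side. A sketch with a made-up VLAN-to-VNI mapping, assuming NX-OS; the IOS-XE syntax on the C9k is slightly different:

    feature nv overlay
    feature vn-segment-vlan-based
    nv overlay evpn

    ! map the L2 VLAN to a VNI (made-up numbers)
    vlan 100
      vn-segment 10100

    interface nve1
      no shutdown
      host-reachability protocol bgp
      source-interface loopback1
      member vni 10100
        ! replicate BUM traffic to VTEPs learned via BGP EVPN,
        ! instead of relying on a multicast underlay
        ingress-replication protocol bgp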

At first I tried to use only one instance of (e)BGP for both underlay and overlay, but I could not figure out the configuration for it and have since lost the guide that mentioned this possibility. Maybe it's not possible on Cisco after all, but it seemed elegant to use a single BGP instance for all the routing.
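If I ever find that guide again, I believe the idea was something like this: eBGP over the link addresses for the underlay (advertising the loopbacks), plus a multihop eBGP session between loopbacks for the EVPN overlay, all in one BGP process. A sketch with made-up ASNs and addresses that I have not verified on these boxes:

    router bgp 65001
      router-id 192.168.0.1
      address-family ipv4 unicast
        ! advertise our VTEP loopback into the underlay
        network 192.168.0.1/32
      neighbor 10.0.0.0
        remote-as 65002
        description underlay peering on the SVI link address
        address-family ipv4 unicast
      neighbor 192.168.0.2
        remote-as 65002
        description EVPN overlay peering between loopbacks
        update-source loopback0
        ebgp-multihop 5
        address-family l2vpn evpn
          send-community extended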

So right now I have OSPF configured for the underlay, where I redistribute the loopbacks used for the eBGP overlay peering. The EVPN neighborships are up and running, but since I'm missing the head-end replication feature, I can not test it out yet.
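Roughly, the underlay part that is working today looks like this on one of the Nexus boxes, with the loopback-to-loopback EVPN peering configured the same way as in the eBGP sketch above (sanitized, made-up names and addresses):

    router ospf UNDERLAY
      router-id 192.168.0.1

    interface loopback0
      description VTEP / overlay peering loopback
      ip address 192.168.0.1/32
      ip router ospf UNDERLAY area 0.0.0.0

    interface Vlan999
      ip router ospf UNDERLAY area 0.0.0.0
      ! point-to-point avoids DR/BDR election on the /31 link
      ip ospf network point-to-point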

Anyways, my question to the wiser among you is: is it at all possible to achieve such a design, where we basically have 3 different switch stacks which all act as VTEPs, without any spine? Or is it possible to make a spine (the N9k pair in between) act as a VTEP as well? The role of the spine is still very confusing to me, besides route reflection in the iBGP design, interconnecting the leafs, and load balancing with ECMP. Do they have any other "special" roles? What I mean to ask is: could I not just connect leafs directly to leafs, if I have an EVPN peering between all of them?

Also, might there be any problems with EVPN on these platforms? For example, I have read that the Catalyst 9300 can act only as a leaf. That is not a problem in our case, but there might be some other similar caveats I'm unaware of.

Some of you may be thinking: WHYYY?

We just want to move away from this stretched L2 to a flexible L3 solution, so we could still stretch L2 over that L3 if need be. ;)

This is not a setup for a datacenter. It's a budget setup for a small company, with user access ports on the Catalyst and some server access on both of the Nexus pairs. We do understand that better (and thus more expensive) designs are possible, but I hope this does not become the focus of this post. :)


