So, as I understand it, the current trend in data center networking is to build a layer 3 fabric with an IGP as the underlay—often with “ip unnumbered” peering—and then run an overlay like VXLAN with an EVPN control plane on top of it. This lets you use the same subnet in different pods of the network, or even in different data centers, without stretching a layer 2 domain. It also allows for totally cookie-cutter builds: every leaf switch gets the exact same config apart from its management IP and hostname.
My question is this: has anyone tried a layer 3 fabric build without an overlay, using server-side BGP with /32 network advertisements instead?
In theory it should achieve the exact same connectivity without the need to run multicast, VXLAN encapsulation, or any EVPN configuration.
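To make the idea concrete, here is a minimal sketch of what the server side might look like, assuming FRR running on the hypervisor with BGP unnumbered uplinks to two leaf switches. All ASNs, interface names, and prefixes here are illustrative, not from any real deployment:

```
! /etc/frr/frr.conf on the hypervisor (FRR; ASNs and prefixes are made up)
router bgp 65101
 bgp router-id 10.0.0.11
 neighbor eth0 interface remote-as external   ! BGP unnumbered peering to leaf 1
 neighbor eth1 interface remote-as external   ! BGP unnumbered peering to leaf 2
 address-family ipv4 unicast
  redistribute kernel route-map VM-HOSTS     ! pick up the VMs' /32 host routes
 exit-address-family
!
route-map VM-HOSTS permit 10
 match ip address prefix-list VM-32S
!
! only advertise /32s out of the (hypothetical) VM address block
ip prefix-list VM-32S permit 10.200.0.0/16 ge 32
```

Each VM that comes up would install a /32 kernel route on the hypervisor, and the route-map would let only those host routes into BGP.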
I haven’t seen a setup like this in practice, though, so I don’t know whether it makes the configuration more complex on the network side of things. BGP usually requires static peer configuration, so I’m not sure how ugly that gets at scale. I know you can configure a “dynamic listener” for BGP, but that may be pretty far from best practice.
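For what it’s worth, the dynamic-listener approach would avoid per-server static peer statements on the leaf. A sketch in FRR syntax, again with made-up ASNs and subnets:

```
! Leaf switch side (FRR syntax; addresses and ASNs illustrative)
router bgp 65000
 bgp router-id 10.0.0.1
 neighbor SERVERS peer-group
 neighbor SERVERS remote-as external
 bgp listen range 10.1.1.0/24 peer-group SERVERS  ! accept any peer from the server subnet
 bgp listen limit 100                             ! cap dynamic sessions
 address-family ipv4 unicast
  neighbor SERVERS route-map FROM-SERVERS in      ! filter what servers may advertise
 exit-address-family
!
route-map FROM-SERVERS permit 10
 match ip address prefix-list HOST-ROUTES
!
ip prefix-list HOST-ROUTES permit 10.200.0.0/16 ge 32
```

The inbound route-map is the important part: without it, a misconfigured or compromised server could inject arbitrary prefixes into the fabric.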
Alternatively, you could skip BGP entirely and just have each server participate in the IGP as a node.
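The IGP variant might look something like this, assuming FRR with OSPF on the server and the VM /32s bound to a loopback or dummy interface (everything here is illustrative):

```
! FRR on the server as an OSPF node (interface names and prefixes made up)
router ospf
 ospf router-id 10.0.0.11
 passive-interface default       ! don't form adjacencies on VM-facing interfaces
 no passive-interface eth0       ! uplink to leaf 1
 no passive-interface eth1       ! uplink to leaf 2
 network 10.0.0.11/32 area 0     ! the server's own loopback
 network 10.200.0.0/16 area 0    ! the VM /32s bound to a dummy interface
```

The obvious trade-off is blast radius: every server becomes part of the IGP flooding domain, which is exactly what BGP-to-the-host designs try to avoid.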
Anyway, my question is: has anyone seen this setup in production? Servers advertise a /32 for every virtual machine instead of using VXLAN/EVPN; everything is layer 3 everywhere, and any layer 2 is basically link-local.
How is such a configuration set up? How does it scale versus VXLAN/EVPN? How friendly is it for automation and auto-provisioning? How does the TCO compare to the overlay alternatives? What about maintenance and troubleshooting? Challenges and gotchas?