Thursday, September 27, 2018

DC SW <-> FW eBGP?

(TL;DR: /27-/31 subnets in DC with eBGP to FW, stupid idea or not?)

We run our own MPLS network on the campus, and it has been working great so far. Lots of different VRFs for different use cases like "office PC", "lab analyzer", "surveillance cameras", "temperature sensors", etc. Every VRF gets its default route from the FWs in the DC, where we have MPLS-capable switches. "Office PCs", for example, has lots of different subnets in different buildings, but they're all in the same VRF and get the default route from the DC.
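For context, each VRF on the DC switches looks roughly like this (IOS-style syntax just as an illustration; the VRF name, ASN, RD and route targets below are all made up):

```
! Hypothetical "office PC" VRF on a DC MPLS switch
vrf definition OFFICE-PC
 rd 65000:100
 address-family ipv4
  route-target export 65000:100
  route-target import 65000:100
 exit-address-family
!
! The FW-facing session injects 0.0.0.0/0 into the VRF,
! which MPLS then carries to every campus site in that VRF.
```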

Currently our servers are in a single "servers" VRF with multiple subnets, and we're thinking of segmenting them into multiple VRFs like the LAN is. At least for all the new subnets: we'd create a small subnet holding a handful of servers, and then have a BGP peering between the DC switches and the FW for each one.

So the actual question is: is it a stupid idea to have /27-/31 DC subnets that are terminated on the firewall with eBGP peerings?
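As a sketch, each per-VRF peering could look something like this (IOS-style on the switch side, FortiGate CLI on the FW side; all names, ASNs and addresses here are hypothetical, and the FW would advertise the small server subnets plus a default into the VRF):

```
! DC switch: /31 transit link to the FW inside a new server VRF
interface Vlan210
 vrf forwarding SRV-EXAMPLE
 ip address 10.10.210.0 255.255.255.254
!
router bgp 65000
 address-family ipv4 vrf SRV-EXAMPLE
  neighbor 10.10.210.1 remote-as 65010
  neighbor 10.10.210.1 activate
```

```
# FortiGate side of the same peering (assumed VDOM/interface setup)
config router bgp
    set as 65010
    config neighbor
        edit "10.10.210.0"
            set remote-as 65000
        next
    end
end
```

Multiply this by one session per segmented VRF; the operational cost is mostly in templating and keeping the sessions consistent, not in the config itself.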

We have lots of servers where the management is outsourced, and a few where there are regulatory reasons why we can't just run the Windows/RHEL updates there every month. I'd like to keep those as separate from everything as possible. And if a server doesn't need to talk to other servers, why should it be able to?

NSX or something would probably be great, but getting NSX for just this purpose would cost a lot more than configuring all those eBGP sessions :) We have FortiGate firewalls that can support 5000 BGP peerings IIRC. The number of servers we have is in the hundreds, not thousands.

If this works out, I could also extend it to our "DMZ". Everything that's accessed from the internet comes through our load balancers, so the servers would only need to talk to the load balancer. Using private VLANs would probably work, but I'm expecting there might be a few DMZ servers that need to talk to each other but not to the other DMZ servers.

Thanks for any ideas!


