Hello All,
I realize this is a broad question and there's probably not enough info here for a comprehensive answer, but as someone who is newish to networking and trying to get a handle on a mess, I could really use your advice on implementing a routing protocol.
Currently we have 3 datacenters - DC1, DC2, and DC3 - all connected via interconnect circuits provided by our colo. Things have turned into a mess: this grew so rapidly that we never had time to go back and do things right, so almost everything is routed using static routes. Also, except for our newish VXLAN setup between DC2 and DC3, most of our setup uses the firewalls as routers, and most subnets live on the core firewalls. I've thought about implementing OSPF, but due to the 100-network limitation on our devices I don't believe that will work for us, and we'd be better off going with something like BGP.
To give you an idea of what's in each datacenter... here goes:
DC1: All critical subnets live on the core firewalls; VLANs are tagged on the core switches and passed up to the core FW for their L3 interfaces. We have edge firewalls that are responsible for routing traffic to the other two datacenters.
DC2: Again, most subnets live on the core firewalls; VLANs are tagged on the core switch and passed up to the core FW for their L3 interfaces. DC2 does have some L3 routing on the core switches for a VXLAN overlay that extends L2 domains across DC2-DC3 (if more details are needed on the underlay, I can provide them).
DC3: Same as DC2.
Description of each DC by device:
Core Firewall: The WAN link terminates here, 99% of our subnets' L3 interfaces are built here, and all routing except for the small amount of VXLAN traffic is done here.
Edge Firewall (could almost consider this the spine?): Responsible for receiving traffic from the core firewall and routing it to another datacenter.
Core Switch: Mostly just an L2 switch for core VLANs, which are trunked up to the core firewall. We are doing some eBGP peering between this and the core FW in DC2 and DC3 for VXLAN.
Edge Switch: Strictly an L2 switch. Links from the other DCs terminate into this switch and trunk up to the edge firewalls for the L3 interface.
So to put it shortly: most subnets in each DC live on that respective DC's core firewall, except for transit subnets that pass traffic between datacenters, which are handled by our edge firewalls/routers (the L2 link between DCs terminates into the edge switches, with the L3 interface living on the edge firewalls). The only exception is DC2 and DC3, where we have a few extended L2 subnets (VXLAN) whose L3 interfaces live on the core switches in each respective DC. Those core switches do eBGP peering with the core firewalls for VPN connectivity, reachability to other subnets whose L3 interface lives on the core firewall, etc.
Each extended subnet is in its own VRF and has p2p connectivity with the core firewall local to its datacenter so that we can still control ACLs at the firewall level.
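To illustrate, the per-VRF peering on a DC2/DC3 core switch looks roughly like the below. This is a sanitized IOS-style sketch rather than our actual config - the VRF name, ASNs, and addresses are all made up:

    vrf definition EXTENDED-A
     address-family ipv4
    !
    router bgp 65201
     address-family ipv4 vrf EXTENDED-A
      ! p2p link to the local core firewall so ACL enforcement stays on the FW
      neighbor 10.255.1.1 remote-as 65200
      neighbor 10.255.1.1 activate
      ! hand the extended subnet's connected route to the firewall
      redistribute connected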
I'm aware that this setup is far from optimal, but it was done this way because no one on our team was especially strong with Cisco ACLs, and we grew from a very small shop to a larger environment pretty quickly.
TL;DR: In order to stop this static routing nightmare, I would like to implement something like iBGP peering between the edge firewalls in all 3 datacenters, then eBGP peering from each datacenter's core firewall to its respective edge firewall. Does this sound reasonable? If not, I would appreciate any ideas on how you would move forward with implementing an internal routing protocol.
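To make that concrete, here's roughly what I'm picturing on the DC1 edge firewall. I've written it in IOS-style syntax since our firewall vendor's CLI is different anyway, and all ASNs and addresses are made up - the edge FWs would share AS 65000 and each DC's core FW would get its own private ASN:

    router bgp 65000
     ! iBGP full mesh to the DC2 and DC3 edge firewalls; with only
     ! three peers a full mesh seems simpler than route reflectors
     neighbor 10.254.0.2 remote-as 65000
     neighbor 10.254.0.3 remote-as 65000
     ! next-hop-self so the other DCs route via this edge FW instead
     ! of needing routes to our local core-to-edge transit subnet
     neighbor 10.254.0.2 next-hop-self
     neighbor 10.254.0.3 next-hop-self
     ! eBGP down to the DC1 core firewall, which would advertise the
     ! subnets that live on it instead of us pointing statics at it
     neighbor 10.1.255.1 remote-as 65001

The core firewall side would just be the mirror image: a single eBGP neighbor statement pointing at its local edge firewall, plus whatever it uses to originate its connected subnets.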
Also, if anyone knows of a good guide on how to handle Cisco ACLs I would be very grateful, as I'm trying to get away from all this firewall management, but as someone who is not primarily a network guy I find ACLs a tad confusing.
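For example, even a basic named extended ACL like the one below took me a while to parse, mostly because of the inverse wildcard masks (the subnets and port are made up):

    ip access-list extended WEB-TO-DB
     remark allow the web tier (10.1.10.0/24) to reach SQL on the DB tier
     permit tcp 10.1.10.0 0.0.0.255 10.1.20.0 0.0.0.255 eq 1433
     remark everything else from this segment is dropped and logged
     deny ip any any log
    !
    interface Vlan10
     ip access-group WEB-TO-DB in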
I realize I'm asking a lot here and will probably get flamed to hell, but I figured I'd throw this out and see if someone felt charitable.
Thanks,