Tuesday, August 21, 2018

Let's break this down: how best to handle routed traffic without ACLs on the switch?

So, I had been planning to implement a collapsed-core spine/leaf setup to simplify the physical architecture of our datacentre traffic for a SaaS app.

It seems we can mostly agree on this being a good plan.

However, I read all the comments and concerns regarding having ACLs on these switches, and the reasoning that makes me feel ACLs are required may be mistaken.

So if you wouldn't mind reading through a bit, maybe you can tell me what methods would work best here.


The second problem we have is that we maintain a large number of routes on the individual hosts in the network to determine which firewall they need to reach.

We actually have 7 sets of HA-pair firewalls used for various purposes:

  • 1 (pair) for client inbound traffic (from the internet),

  • 1 for application traffic (between DMZ and backend, and from DMZ and backend to the internet for valid services),

  • 1 for management traffic (between management and DMZ, backend, and the internet in both directions), plus site-to-site and remote-access VPNs,

  • 1 to reach vendors (external services over VPN),

  • 2 to reach specific client networks (VPNs so they don't use the app through the internet; their requirement, not ours), and

  • 1 for our dev network.

Several of these will consolidate if we ever get that project completed, and eventually we'll be down to 5 HA firewall pairs:

1 app FW (external to DMZ, DMZ to backend, backend to DMZ, and backend and DMZ to the internet for external vendor services, email, etc.). Vendor VPNs are going to end up going through the internet due to their changes, so that firewall also rolls into here.

1 management FW (remains unchanged from above)

2 for clients to reach our app by VPN (only changes if they change their requirement)

1 dev firewall.

As you can see, the main traffic between DMZ, backend, and the internet is mostly handled by one firewall pair, which is the default gateway of the DMZ and backend servers.

Currently, the front-end firewall basically just does some NATing and ACLs for inbound traffic to our LB, which uses that FW as its default gateway but would return the traffic regardless (we do use it for inbound from the other firewalls on alternate IP ranges, and that works just fine).

So this transition is mostly complete: we've changed the hosts to use the app firewall as their default gateway on the DMZ and backend networks, and management traffic uses that network.

Management and LAN traffic are already completely mixed VLANs (running on the same switches nearly everywhere), while the DMZ only mixes a bit.

All traffic between VLANs moves between them via the firewalls.

To make this work we had to put a fair number of routes on the hosts to make sure they go to the correct firewall to reach the correct network.

We also NAT some of the traffic from management and VPN access, and in other firewalls, to avoid changing or adding routes on the host OS of all the hosts, since we have about 1000 and it's cumbersome to maintain.
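For context, the per-host static routes look roughly like the sketch below. All addresses, gateways, and the interface name are made-up placeholders, not our real ranges; the point is just that each host carries several routes steering different destinations to different firewalls on the same segment:

```
# Hypothetical Linux host route table (placeholder addresses).
# Default gateway is the app firewall:
ip route add default via 10.10.20.1 dev eth0
# Management networks go via the management firewall:
ip route add 10.50.0.0/16 via 10.10.20.2 dev eth0
# Client VPN ranges go via a client VPN firewall:
ip route add 172.20.0.0/16 via 10.10.20.3 dev eth0
```

Multiply routes like these across ~1000 hosts and it's easy to see why keeping them in sync is the maintenance burden described above.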

We had been pushing to change to using the switches as the gateway for each VLAN for some time.

However, we will now need to roll up into a single pair of Nexus switches (plus a small number of those legacy switches if required) what was previously running on 14 separate switches.

We're planning to leave the external traffic on its separate switches and just consolidate the DMZ/backend/management networks.

All other networks are to some extent "remote".


So, the second goal of the change, after the physical improvement, is to remove the use of host-based routes and any NATing by using the switch as the gateway.

The problem we face is that if we use the switch as the gateway, it will automatically route any traffic between connected VLANs, because they are local networks with VLAN interfaces on the switch.

As we don't know how to overcome this behavior and send inter-VLAN traffic between DMZ/backend/management on to the appropriate firewall, we were forced into discussing ACLs on the switches to handle our east-west traffic within the datacentre.
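For what it's worth, here's my understanding of what a VRF-lite approach would look like on the Nexus pair. Everything below is a sketch with invented VLAN numbers, VRF names, and addresses: each VLAN's SVI lives in its own VRF, so the switch holds no route between them, and each VRF's static default hands all traffic to that VLAN's firewall:

```
! Hypothetical NX-OS sketch (made-up names/addresses).
! Isolating each SVI in its own VRF means the switch cannot
! route directly between DMZ and backend.
feature interface-vlan

vrf context DMZ
  ! Only exit is the app firewall's DMZ-side interface
  ip route 0.0.0.0/0 10.10.10.254

vrf context BACKEND
  ip route 0.0.0.0/0 10.10.20.254

interface Vlan10
  description DMZ gateway
  vrf member DMZ
  ip address 10.10.10.1/24
  no shutdown

interface Vlan20
  description Backend gateway
  vrf member BACKEND
  ip address 10.10.20.1/24
  no shutdown
```

With this layout, DMZ-to-backend traffic hairpins through the firewall, which keeps the ACLs on the firewall where they are today; the switch is only the first-hop gateway.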


  • Would policy-based routing or VRFs be options that could accomplish this, so the firewalls are back to routing (and providing ACLs)?

  • What would be the better option here, and why?

  • We only need a handful of routes on the switches to external networks, beyond whatever is needed to implement policy-based routing or VRFs to route these VLANs through the firewall instead of at the switch.

  • If these aren't an option then how would you best handle that scenario?

  • I'm curious to know of a better way: if we don't need to move the ACLs to the switch but can keep it from routing traffic between VLANs in some fashion, and instead send it on to the FW, I'd be happy to do that instead. :)
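For comparison, my rough sketch of the policy-based-routing alternative (again, all addresses and names are made up): a route-map on the DMZ SVI overrides the connected route and sets the next hop to the firewall for DMZ-to-backend traffic.

```
! Hypothetical NX-OS PBR sketch (placeholder addresses).
feature pbr

! Classify traffic from the DMZ subnet toward the backend subnet
ip access-list DMZ-TO-BACKEND
  10 permit ip 10.10.10.0/24 10.10.20.0/24

! Redirect matches to the app firewall instead of routing locally
route-map FORCE-FW permit 10
  match ip address DMZ-TO-BACKEND
  set ip next-hop 10.10.10.254

interface Vlan10
  ip policy route-map FORCE-FW
```

One caveat worth noting: PBR still uses an ACL on the switch, but only to classify which traffic gets redirected, not to enforce the security policy itself; the permit/deny decisions stay on the firewall.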


