We've previously relied on physical hardware firewall appliances at the DC edge. With varying degrees of rigor (one of the problems), we've added each server/service there and tried to tightly control what is allowed where. I'm starting to wonder whether this is just wasted man-hours and whether we actually prevent anything malicious.
Or could we just put all the servers in one /24, use host-based firewalls, and call it a day?
Maybe just have host firewall rules that first allow management SSH/RDP access, then allow traffic to the services (usually just a couple of ports at most), and then deny everything else. Quite simple, and not that many rules to manage since we use jump hosts/VPN for management. All the servers are run by us.
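Roughly what I have in mind per host, as a minimal nftables sketch. The jump-host/VPN prefix (192.0.2.0/24) and the service ports (80/443) are just placeholders for illustration:

    table inet host_fw {
        chain input {
            type filter hook input priority 0; policy drop;

            ct state established,related accept    # return traffic
            iif "lo" accept                        # loopback

            # management only from the jump host / VPN range (placeholder prefix)
            ip saddr 192.0.2.0/24 tcp dport { 22, 3389 } accept

            # the service itself, usually just a couple of ports
            tcp dport { 80, 443 } accept

            # everything else falls through to the drop policy
        }
    }

That's more or less the whole ruleset per host, and it doesn't change when the VM moves.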
Besides spending lots of hours trying to manage the hundreds of rules, the current setup also makes VM mobility quite a lot harder than it could be. We could be running BGP on the host, and after moving a VM to another DC we could just advertise its /32 IP from there. We run the core network between our sites and DCs, so carrying /32s wouldn't be a problem for us. Or maybe put LBs in front of the servers and have the LB do the BGP advertisement for a host once it sees it active in the local DC. Or something we could do with the EVPN/VXLAN fabric we're moving towards.
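For the /32 part I'm picturing something like this FRR sketch on the host. The ASNs, the ToR peer address 10.0.0.1 and the service address 203.0.113.10 are made up for illustration, and the /32 would also need to be configured on a loopback/dummy interface so the network statement finds it in the RIB:

    router bgp 65010
     neighbor 10.0.0.1 remote-as 65000
     address-family ipv4 unicast
      network 203.0.113.10/32
     exit-address-family

After the VM lands in another DC, the same config just starts advertising the /32 from the new location and the core picks up the new path.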
And every time we create a new subnet, it takes a couple of days to figure out everything in every FW in every DC. Admittedly, that's partly a documentation issue.
Do you see any benefit in carving out several /27s, /28s, etc. and firewalling those on a hardware appliance, or should we just go more "cloud-like" :) ?
Thanks for any ideas!