Monday, November 13, 2017

BGP on a VM: how to do it like it's 2017?

It has been mentioned a few times here that running BGP on a VM would be the "correct way" to do host migrations between data centers, with the "service IP address" living on the loopback interface.
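To make that concrete, here's roughly the setup I understand it to mean, sketched in FRRouting syntax on the VM (the addresses and AS numbers are placeholders I made up):

    ! /etc/frr/frr.conf on the VM -- sketch only
    interface lo
     ip address 192.0.2.10/32
    !
    router bgp 65010
     ! the leaf switch is the eBGP upstream
     neighbor 10.1.1.1 remote-as 65001
     !
     address-family ipv4 unicast
      ! advertise only the loopback service address, not the uplink subnet
      network 192.0.2.10/32
     exit-address-family

The uplink address can then change freely between data centers; only the /32 follows the VM around.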

In that kind of scenario, do you simply accept that the migration isn't as smooth as moving a VM to another virtualization host while it stays on the same VLAN? I tried this with a couple of switches, different VLAN interfaces, and BGP configured. However, the time it took BGP to re-establish was longer than it took my SSH session to time out, and I expect other sessions would also have to be re-established when doing this kind of migration with default configurations. Do you tune BGP to converge faster, or do you reserve this technique for applications that can tolerate the interruption?
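In case it helps frame the question, the only knobs I've found so far are the session timers and BFD, something like this (FRRouting again; whether BFD is available depends on the version and on the switch end):

    router bgp 65010
     ! 1 s keepalive / 3 s hold instead of the 60/180 defaults
     neighbor 10.1.1.1 timers 1 3
     ! or, on builds that ship bfdd, let BFD detect the failure sub-second
     neighbor 10.1.1.1 bfd

Even then, convergence includes route withdrawal and propagation, not just failure detection, so I'm not convinced SSH sessions would reliably survive.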

How about the link between the VM and the switch: do you configure the same IP addresses on every switch and simply not advertise those link subnets towards the core, or do you use a different subnet per switch? And how do you handle moving VMs with vMotion? It doesn't seem to flap the VM's interface, so the VM would never request a new address from the DHCP server. Though I guess forcing the link down and up would also reset sessions? A sketch of the second option follows below.
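For the per-switch-subnet option, what I'm imagining is that each leaf owns its own VM-facing subnet and nothing but the /32 loopbacks ever reaches the core, roughly like this (FRR-style config on a leaf; all names and addresses are hypothetical):

    ! leaf-1: VM-facing interface with its own subnet
    ! (leaf-2 would be identical except for e.g. 10.1.2.0/24)
    interface vlan101
     ip address 10.1.1.1/24
    !
    ! only host routes get past the leaf towards the spines
    ip prefix-list HOST-ROUTES seq 10 permit 0.0.0.0/0 ge 32
    !
    route-map TO-SPINE permit 10
     match ip address prefix-list HOST-ROUTES
    !
    router bgp 65001
     neighbor 10.0.0.2 remote-as 65000
     address-family ipv4 unicast
      neighbor 10.0.0.2 route-map TO-SPINE out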

How would you firewall this, if by default VMs can reach each other at the leaf switch level and the packets never pass through the firewalls? Or do you put all the subnets in different VRFs and point each VRF's default route towards the firewalls? It seems you'd need an awful lot of subinterfaces on the FW in that case...
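On the VRF variant, this is the sort of leaf-side config I have in mind (FRR-style; the tenant names and firewall addresses are made up, and the interface-to-VRF binding itself would happen on the Linux side, not here):

    ! on the leaf: one VRF per security zone, default route to the firewall
    vrf tenant-a
     ip route 0.0.0.0/0 10.254.0.2
    exit-vrf
    !
    vrf tenant-b
     ip route 0.0.0.0/0 10.254.0.6
    exit-vrf

With one firewall subinterface (or VLAN) per VRF on the other end, which is exactly the subinterface sprawl I'm worried about.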

We currently have a "legacy data center" where everything is done with VLANs, and we're trying to figure out how to migrate to other DCs and how the firewalling between VMs should work. On top of that, our DC routers run HSRP, so all traffic from the secondary DC first has to cross to the primary DC before reaching the rest of the network.

I'd appreciate any ideas, thanks!


