I have been reading through all this VXLAN/EVPN documentation and still have one point unresolved...
In a topology like this:
ESXi host <---> VTEP switch <---- IP network ----> VTEP switch 2 <---> ESXi host 2
If we do not use NSX but stick with good old dvSwitches, we still have to configure VLANs on the leaf switches and then map each VLAN to a VNI (VXLAN segment). Like this on NX-OS:
vlan 100
  vn-segment 30100
or on a Juniper QFX:
set vlans VLAN100 vlan-id 100
set vlans VLAN100 vxlan vni 30100
set vlans VLAN100 vxlan multicast-group 233.252.0.100
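For completeness, the VLAN-to-VNI line is only half the story on the NX-OS side; a minimal flood-and-learn sketch, assuming multicast replication, with illustrative interface, VNI and group numbers:

feature nv overlay
feature vn-segment-vlan-based
!
vlan 100
  vn-segment 30100
!
interface nve1
  no shutdown
  source-interface loopback0
  ! BUM traffic for this VNI is flooded via the multicast group
  member vni 30100 mcast-group 233.252.0.100
!
interface Ethernet1/1
  ! host-facing trunk carrying the mapped VLAN
  switchport mode trunk
  switchport trunk allowed vlan 100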
Either way we are still limited to 4094 VLANs per switch. And since we use vMotion with DRS (Distributed Resource Scheduler, which moves VMs around on its own to balance host load), every leaf has to carry the same set of VLANs. That caps the whole switch fabric at 4094 VLANs, and at that point we could just as well use FabricPath or any other TRILL implementation without buying new, modern N9Ks :)
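To put numbers on it: the 802.1Q VLAN ID field is 12 bits, so 2^12 = 4096 values, minus the reserved IDs 0 and 4095, leaves 4094 usable VLANs per switch. The VXLAN VNI field is 24 bits, so 2^24 = ~16.7 million segments fabric-wide. But as long as every leaf has to carry every VLAN for vMotion, the per-switch 4094 becomes the fabric-wide ceiling anyway.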
Another case: we have NSX or another SDN that builds the VXLAN tunnels itself on the hosts. Then we do not need VXLAN in the fabric at all, because the hosts emit plain IP traffic and all the fabric has to provide is ECMP and bandwidth (a rough sketch below). Only if we want to connect bare-metal hosts (or maybe some AIX monsters) might we need a hardware VTEP, and even that can be solved with a software VXLAN gateway (NSX has one).
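With a pure L3 underlay, the leaf config shrinks to routed links plus ECMP; a minimal NX-OS sketch, assuming an eBGP underlay (AS numbers and addresses are made up):

feature bgp
!
interface Ethernet1/49
  no switchport
  ip address 192.0.2.1/31
!
router bgp 65001
  address-family ipv4 unicast
    ! install up to 8 equal-cost BGP paths
    maximum-paths 8
  neighbor 192.0.2.0
    remote-as 65000
    address-family ipv4 unicast

With that, the leaves never see a VNI at all; the tunnels terminate on the hypervisors.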
Where am I wrong? :)