We built our own MPLS network for our enterprise, spanning a few cities and six DCs, using a design with two separate core rings and separate P and PE devices.
Somewhat similar to the design in this Cisco Live presentation: https://snag.gy/BUaK9P.jpg
It's been running well, but I've started to wonder why we need the P switches there at all (we built it with switches to get more 10G ports, and there aren't that many labels/routes in the network). The idea was redundancy, but even without the P switches, everything would still be connected to at least two other switches.
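To illustrate, the P switches only run the IGP and LDP; they carry no BGP and no VRFs. A minimal IOS-style sketch of what that amounts to (assuming Cisco-like gear; the interface names and addresses are made up for illustration, not our actual config):

! P device: OSPF + LDP only, no BGP, no VRFs
mpls ldp router-id Loopback0 force
!
interface Loopback0
 ip address 10.255.0.11 255.255.255.255
!
interface TenGigabitEthernet1/0/1
 description core ring link toward a PE
 ip address 10.0.12.1 255.255.255.252
 mpls ip
!
router ospf 1
 router-id 10.255.0.11
 network 10.0.0.0 0.255.255.255 area 0

If we pulled them out, the PEs would just form those OSPF adjacencies and LDP sessions directly with each other over the ring links.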
And now that our 2x10Gbps links seem not to be enough for a single use case and we'd need to upgrade, it would be a lot nicer to go to 40/100G if we didn't have to buy line cards for the extra P switches too :) And our budget is on the low side at the moment, so we could put those switches to use elsewhere...
Any ideas? Should we just go with PE switches only, or are the P switches useful for some reason I'm not seeing yet (we haven't had any major outages or anything, yet)? Or maybe replace the P switches with cheaper 1U switches, since they're only running OSPF and LDP, not BGP/L3VPNs or anything.
Thanks!
Edit: we use MPLS in our campus core/distribution too. Everything is segmented into different VRFs that are terminated on the DC FW cluster, and it all goes over our MPLS network; that's why the 2x10G sometimes feels slow. We also have backup/replication traffic going to a third DR site over the same links.
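For reference, each segment is just a VRF on the PEs, with its routes carried between sites over MP-BGP vpnv4. A hedged IOS-style sketch (the VRF name, ASN, RD/RT values, and addresses are placeholders, not our real config):

! each segment = one VRF; inter-site reachability via MP-BGP vpnv4
vrf definition USERS
 rd 65000:100
 address-family ipv4
  route-target export 65000:100
  route-target import 65000:100
!
interface Vlan100
 description user segment; default route points at the DC FW cluster
 vrf forwarding USERS
 ip address 10.100.0.1 255.255.255.0
!
router bgp 65000
 neighbor 10.255.0.1 remote-as 65000
 neighbor 10.255.0.1 update-source Loopback0
 address-family vpnv4
  neighbor 10.255.0.1 activate
  neighbor 10.255.0.1 send-community extended

All of that lives on the PEs either way, which is part of why the P layer feels like pure cost to us.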