Hi,
From a network engineer’s point of view: I’m trying to learn OpenStack and build a learning environment for it.
My main question is about the physical network design: do we simply trunk all VLANs down to the servers, have the servers terminate VXLAN themselves, or let OpenStack configure the switches dynamically via an ML2 mechanism driver? (How did you deploy yours in your DC?)
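For context, my (possibly wrong) understanding is that the choice mostly shows up in Neutron’s ML2 config. A rough sketch of what I mean, with placeholder values rather than a tested deployment:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative sketch only
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan        # "servers run VXLAN": overlay between hypervisors
mechanism_drivers = openvswitch     # or linuxbridge / ovn

[ml2_type_vlan]
# "trunk everything down" model: this VLAN range is pre-trunked to the servers
# by the network team; OpenStack only picks VLAN IDs from it
network_vlan_ranges = physnet1:100:200

[ml2_type_vxlan]
# overlay model: the switches just carry routed underlay traffic
vni_ranges = 1:1000
```

Is that the right mental model, or do people actually let ML2 drive the switch config itself?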
The other question: is a simple Cisco 29xx switch enough for a learning lab, or do we need something with VXLAN support and ML2-style switch plugins? (I’m planning to run Ironic on my GPU nodes for an ML proof of concept.)
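My assumption on the Ironic side (please correct me): bare-metal instances can’t terminate VXLAN themselves, so they typically land on a flat or VLAN provider network, which even a basic 2960-class switch can carry. Something like this, where the network and physnet names are placeholders, not a tested setup:

```shell
# Hypothetical lab sketch: flat provider network for Ironic nodes.
# "physnet1" must match the bridge_mappings on the network node.
openstack network create --provider-network-type flat \
    --provider-physical-network physnet1 --share bare-metal-net
openstack subnet create --network bare-metal-net \
    --subnet-range 192.168.10.0/24 --gateway 192.168.10.1 bm-subnet
```

If that holds, the lab switch just needs the right access/trunk ports, not VXLAN.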
If you have learning resources that explain this properly from a network engineer’s perspective, please share. Thanks in advance!