I'm running Cisco ACI and attempting to convert our storage networking from FC to iSCSI, and we've been running into a ton of issues with the iSCSI setup. On the FC side, each server has 2x 10GbE NICs plus 2 HBAs. On the iSCSI side, we've been trying to get away with just 2x 10/25GbE NICs for converged LAN/SAN access, but this has been failing spectacularly.
I have no doubt that dedicated NICs for storage would help, but Cisco's docs indicate that converged LAN/SAN with only 2 NICs should work. Granted, their design docs assume UCS servers running through Fabric Interconnects, which we are NOT doing.
I also recall being told that doing QoS in the datacenter is an absolute nightmare and that, when possible, it's better to throw more bandwidth at the problem... so how am I still choking at 25Gb? Is SAN traffic just a different beast that needs dedicated NICs? And do I also need dedicated NICs for things like vMotion? The whole exercise was supposed to reduce cabling complexity to each host, but now it seems like there's more of it than ever...
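For concreteness, here's the kind of quick host-side check that makes the "choking" visible. This is a minimal sketch of my own, assuming a Linux initiator (on ESXi hosts the equivalent drop counters live in esxtop); it just samples the TCP retransmit counters from /proc/net/snmp, since sustained retransmits during storage load mean frames are being dropped somewhere in the path, and iSCSI tolerates that far worse than ordinary LAN traffic.

#!/usr/bin/env python3
"""Rough congestion check for a Linux iSCSI initiator (illustrative sketch).

Samples TCP counters from /proc/net/snmp twice and reports the
retransmit rate over the interval. A sustained nonzero rate while
storage is busy points at packet loss on the converged links.
"""
import time

def tcp_counters():
    """Return (OutSegs, RetransSegs) from the Tcp: lines of /proc/net/snmp."""
    with open("/proc/net/snmp") as f:
        rows = [line.split() for line in f if line.startswith("Tcp:")]
    header, values = rows[0], rows[1]  # first Tcp: line is names, second is values
    stats = dict(zip(header[1:], map(int, values[1:])))
    return stats["OutSegs"], stats["RetransSegs"]

def main(interval=10):
    out1, ret1 = tcp_counters()
    time.sleep(interval)
    out2, ret2 = tcp_counters()
    sent, retrans = out2 - out1, ret2 - ret1
    pct = 100.0 * retrans / sent if sent else 0.0
    print(f"{sent} segments sent, {retrans} retransmitted ({pct:.3f}%) over {interval}s")

if __name__ == "__main__":
    main()

If that rate stays nonzero whenever the array is busy, the drops are happening in the network path rather than on the array, which is exactly the situation where people start talking about dedicated NICs or flow control.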