Thursday, January 21, 2021

Any thoughts on current best practice surrounding iSCSI segmentation when moving to 40G or 100G?

Conventional wisdom for years and years has been to segment your iSCSI traffic onto separate physical interfaces/hardware: never combine iSCSI and your data traffic on the same NIC. This worked well in the 1G days, and the thinking carried over as the industry moved to 10G. I remember hot debates about whether jumbo frames were still relevant once you moved servers and storage to 10G.

Now that 40G/100G is getting cheaper and cheaper, our organization is moving our SANs over to 40G. Our VM host servers currently run 4x10G: active/passive for DATA and active/passive for iSCSI. I'd like to hear from others who have moved their servers to 40G or beyond about what you're doing with iSCSI and DATA traffic sharing the same physical 40G interface. Are you still segmenting onto separate physical interfaces? If not, have you noticed any performance issues when DATA and iSCSI share a single 40G link? Or have you found that once you move to 40G, the bandwidth exceeds what your servers can push, so it's safe to bring iSCSI and DATA back onto the same physical interface?

(All of the above assumes enterprise datacenter-level hardware, i.e. Cisco Nexus, Intel/Broadcom/Mellanox NICs, ESXi vSphere clusters, etc.)
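
To make the bandwidth question concrete, here is a rough back-of-the-envelope sketch of the headroom math for a shared link. The per-host peak throughput numbers below are placeholder assumptions for illustration only, not measurements from our environment; the point is just to frame "can one 40G pipe absorb both traffic classes" as a simple sum-versus-link-capacity check.

# Rough headroom math for collapsing iSCSI + DATA onto one shared link.
# All throughput figures are placeholder assumptions, not measurements.

LINK_GBPS = 40                 # single shared 40G interface

# Assumed steady-state per-host peaks (substitute your own monitoring data):
data_peak_gbps = 8             # VM/data traffic peak
iscsi_peak_gbps = 12           # iSCSI storage traffic peak

combined_peak = data_peak_gbps + iscsi_peak_gbps
headroom = LINK_GBPS - combined_peak

print(f"Combined peak: {combined_peak} Gbps on a {LINK_GBPS} Gbps link")
print(f"Headroom: {headroom} Gbps ({headroom / LINK_GBPS:.0%} of the link)")

# The old 4x10G layout effectively capped each traffic class at 10 Gbps
# (active/passive), so the real question is whether combined bursts ever
# approach 40 Gbps, and what happens to storage latency when they do.

Of course, averages and peaks from monitoring don't capture contention during microbursts, which is exactly the part I'm hoping people with real 40G deployments can speak to.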


