Our network has two data centers. The uplinks for both went from an HP 5412 at each site to a single 6509, the sole router, at a third location, each over a single 10-gig link. I'm moving this in stages off the router-on-a-stick design to Cisco 9500s at all three locations in a three-point fiber ring using dynamic routing. Once the ring is fully formed, we'll start dispersing SVIs to all three locations where appropriate.
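For context, the ring links will be routed point-to-point interfaces between the 9500 stacks, roughly like the sketch below (illustrative only: the addressing is made up and OSPF stands in here for whatever IGP we settle on):

    router ospf 1
     router-id 10.255.255.1
    !
    interface TenGigabitEthernet1/0/1
     description Ring link toward DC2 9500 stack
     no switchport
     ip address 10.255.0.1 255.255.255.252
     ip ospf network point-to-point
     ip ospf 1 area 0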
Last night I moved the uplink for one DC (call it DC1, and the other DC2) from the single 10-gig trunk that home-runs to the 6509 to a two-link 10-gig port channel on the Cisco 9500 stack on site. From there, it uplinks to the 9500 stack in the same location as the 6509. One of the two links was throwing massive CRC errors; I'm not sure why yet, and I'll troubleshoot that at layer 1 piece by piece later. For now, I've disabled it.
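For reference, here's roughly what I plan to run on the 9500 to chase the CRCs (the interface name is a placeholder; these are standard IOS-XE show commands, so exact output will differ):

    9500# show interfaces TenGigabitEthernet1/0/49 counters errors
    9500# show interfaces TenGigabitEthernet1/0/49 transceiver detail
    9500# clear counters TenGigabitEthernet1/0/49

The plan is to swap one layer 1 component at a time (optic, patch lead, panel port), clear counters, and watch whether the CRC count keeps climbing. I'll check the 5412 end as well, since the errors show up on the receive side of whichever device sits downstream of the bad component.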
iPerf tests to a server in DC1 are now about half the bandwidth they were before the cutover:
[ 4] local 172.22.115.72 port 27267 connected to 172.16.16.131 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 54.2 MBytes 455 Mbits/sec
[ 4] 1.00-2.00 sec 56.4 MBytes 473 Mbits/sec
[ 4] 2.00-3.00 sec 56.0 MBytes 470 Mbits/sec
[ 4] 3.00-4.00 sec 55.8 MBytes 467 Mbits/sec
[ 4] 4.00-5.00 sec 56.1 MBytes 471 Mbits/sec
[ 4] 5.00-6.00 sec 55.4 MBytes 464 Mbits/sec
[ 4] 6.00-7.00 sec 54.4 MBytes 456 Mbits/sec
[ 4] 7.00-8.00 sec 56.0 MBytes 470 Mbits/sec
[ 4] 8.00-9.00 sec 56.2 MBytes 472 Mbits/sec
[ 4] 9.00-10.00 sec 56.1 MBytes 471 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 557 MBytes 467 Mbits/sec sender
[ 4] 0.00-10.00 sec 557 MBytes 467 Mbits/sec receiver
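To rule out a single-TCP-flow limit, my next test is the same run with parallel streams and in both directions (same DC1 server as above; the stream count of 4 is arbitrary):

    iperf3 -c 172.16.16.131 -P 4 -t 10
    iperf3 -c 172.16.16.131 -P 4 -t 10 -R

If the aggregate with multiple streams climbs back toward the old numbers, that would point at per-flow behavior somewhere in the path rather than the physical link itself.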
Whereas iPerf tests to a server in DC2, still on the legacy link, are about 2x that (and what DC1 speeds used to be before the cutover):
[ 4] local 172.22.115.72 port 27399 connected to 172.16.16.48 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 112 MBytes 935 Mbits/sec
[ 4] 1.00-2.00 sec 111 MBytes 933 Mbits/sec
[ 4] 2.00-3.00 sec 112 MBytes 936 Mbits/sec
[ 4] 3.00-4.00 sec 112 MBytes 939 Mbits/sec
[ 4] 4.00-5.00 sec 112 MBytes 940 Mbits/sec
[ 4] 5.00-6.00 sec 112 MBytes 938 Mbits/sec
[ 4] 6.00-7.00 sec 112 MBytes 940 Mbits/sec
[ 4] 7.00-8.00 sec 112 MBytes 935 Mbits/sec
[ 4] 8.00-9.00 sec 109 MBytes 916 Mbits/sec
[ 4] 9.00-10.00 sec 112 MBytes 940 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 1.09 GBytes 935 Mbits/sec sender
[ 4] 0.00-10.00 sec 1.09 GBytes 935 Mbits/sec receiver
My question is: while both links are technically 10G, would the DC1 link being one half of a two-link port channel cause it to run at only around half its capacity? Is it somehow expecting the second link to balance and carry part of the load, even though it's disabled? Would changing this link to a standard trunk instead increase performance? Thanks in advance.
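In case it's useful, here's what I'm planning to check on the port-channel side (assuming standard IOS-XE syntax; the port-channel number is a placeholder):

    9500# show etherchannel summary
    9500# show etherchannel load-balance
    9500# show etherchannel 1 detail
    9500# show interfaces Port-channel1

With one member shut, I'd expect the bundle to report a single active 10G link, and the load-balance output shows which hash method is in use (the global knob is port-channel load-balance <method> if that ever needs to change once the second link is back).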