Hello all, I'm really desperate to figure out how to upload a 50 MB file from a T1 branch over MPLS. The issue happens on, and only on, all four of our T1 branches, and it's perfectly reproducible every time. Branches with anything faster than a T1 circuit never experience it.
We have four T1 branches in four different cities. Every once in a while a user at a branch needs to upload a file that's roughly 50 MB back to our DC. The process is painful and very slow: it takes about 20 minutes. So I decided to play around with QoS and see what I could do about it. The traffic is all SSL, so I made an SSL class-map to capture it, then a policy-map to take action on it. As you can see I'm not good at QoS at all, but I was at least able to get fewer packet drops on the output policy-map than with the old one... at the expense of making my SSH session to the branch router and my RDP session nearly unusable.
So here's the issue. A user starts uploading the file and it moves along, then all of a sudden the transfer halts at some random point, say 34.1 MB into the transfer, and just sits there; 5-10 minutes later it starts uploading again. While it's stalled, packets are still being dropped and the counters keep incrementing, so I know it's at least still doing something; it's not frozen. This happens at all four branches: the transfer randomly halts, stays there for 5-10 minutes, then starts transferring again.

Playing around with QoS some more, I was actually able to build a policy that sees almost no output drops, by giving pretty much everything the circuit has to SSL traffic. Please note that the policy-map drop counters below were taken when no one at the branch was using the circuit; this was at night, when I made sure the circuit was essentially idle.
Does anyone have any suggestions for how I can rewrite this QoS, or is this even a problem QoS can solve? When I had tuned the policy to the point where there were no packet drops and the issue still happened, I started wondering whether this might be an ISP issue. I can't tell, but the number of packets it takes just for a 50 MB transfer seems ridiculous. For example, if I do the same transfer on the LAN and count the packets, it takes far fewer packets to complete the upload.
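For context, here's a rough back-of-the-envelope calculation of what the line rate alone allows. The 1.536 Mbps T1 payload rate and 1460-byte TCP MSS are my assumptions, not something taken from the router output:

```python
# Rough best-case numbers for a 50 MB upload over a T1 (sketch only).
# Assumed: 1.536 Mbps usable T1 payload rate, 1460-byte TCP MSS.

FILE_BYTES = 50 * 1024 * 1024   # 50 MB
T1_BPS = 1_536_000              # assumed T1 payload rate, bits/s
MSS = 1460                      # assumed TCP maximum segment size, bytes

best_case_seconds = FILE_BYTES * 8 / T1_BPS
min_data_packets = -(-FILE_BYTES // MSS)  # ceiling division

print(f"best-case transfer time: {best_case_seconds / 60:.1f} min")
print(f"minimum data packets:    {min_data_packets}")
```

Under those assumptions the wire-speed floor is roughly 4.5 minutes and about 36,000 data packets, so a 20-minute transfer suggests a lot of stalling and/or retransmission on top of the raw serialization time.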
Policy Map SSL_PRI
Class ssl-traffic
priority 70 (%)
queue-limit 8192000 packets
Branch-RT001#sh class-map ssl-traffic
Class Map match-all ssl-traffic (id 9)
Match protocol ssl
interface Serial0/1/0:0
description // T1-MPLS
ip address 1.1.1.1 255.255.255.252
ip nbar protocol-discovery ipv4
load-interval 30
no cdp tlv app
service-policy output SSL_PRI
end
This is right after the transfer finished: 2327 total output drops. That's fewer drops than with the old policy-map, but even if I give the class enough bandwidth to reach zero output drops, the issue still happens.
Branch-RT001#sh policy-map int Serial0/1/0:0 output
Serial0/1/0:0
Service-policy output: SSL_PRI
queue stats for all priority classes:
Queueing
queue limit 8192000 packets
(queue depth/total drops/no-buffer drops) 0/2327/0
(pkts output/bytes output) 136695/99054199
Class-map: ssl-traffic (match-all)
139022 packets, 102316008 bytes
30 second offered rate 0000 bps, drop rate 0000 bps
Match: protocol ssl
Priority: 70% (1075 kbps), burst bytes 26850, b/w exceed drops: 2327
Class-map: class-default (match-any)
737519 packets, 200256978 bytes
30 second offered rate 0000 bps, drop rate 0000 bps
Match: any
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 707725/197798708
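For what it's worth, the counters above are internally consistent; this quick sketch just cross-checks them (the class counters may include SSL traffic other than the single upload, so treat the drop rate as approximate):

```python
# Sanity check on the SSL_PRI counters from the show output above.

offered_pkts = 139022   # Class-map: ssl-traffic packets
output_pkts = 136695    # pkts output in the priority queue
drops = 2327            # b/w exceed drops

# Offered traffic should equal what was sent plus what was dropped.
assert offered_pkts == output_pkts + drops

drop_rate = drops / offered_pkts
print(f"drop rate: {drop_rate:.2%}")

# The priority rate shown also checks out: 70% of a 1536 kbps T1.
t1_kbps = 1536
print(f"priority bandwidth: {0.70 * t1_kbps:.0f} kbps")
```

That works out to a drop rate of roughly 1.7% in the priority class, which is generally enough sustained loss to keep TCP throughput well below line rate.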