Thursday, September 23, 2021

Do drops on queueing affect IP SLA icmp-echo?

Hi All,

Is there any possibility that drops on the output queue affect the IP SLA probe when it sends icmp-echo? The drops happen randomly, and when I ping point-to-point I'm not able to detect any loss.

LOGS:

05:00:20.847: %TRACK-6-STATE: 11 ip sla 11 reachability Up -> Down
05:00:20.847: %TRACK-6-STATE: 21 ip sla 21 reachability Up -> Down
05:00:20.847: %TRACK-6-STATE: 31 ip sla 31 reachability Up -> Down
05:00:55.899: %TRACK-6-STATE: 21 ip sla 21 reachability Down -> Up
05:01:00.899: %TRACK-6-STATE: 11 ip sla 11 reachability Down -> Up
05:01:00.899: %TRACK-6-STATE: 31 ip sla 31 reachability Down -> Up
06:38:26.360: %TRACK-6-STATE: 11 ip sla 11 reachability Up -> Down
06:38:26.360: %TRACK-6-STATE: 21 ip sla 21 reachability Up -> Down
06:38:26.360: %TRACK-6-STATE: 31 ip sla 31 reachability Up -> Down
06:39:06.408: %TRACK-6-STATE: 11 ip sla 11 reachability Down -> Up
06:39:06.408: %TRACK-6-STATE: 21 ip sla 21 reachability Down -> Up
06:39:06.408: %TRACK-6-STATE: 31 ip sla 31 reachability Down -> Up

Current track status:

11   ip sla 11 reachability   Up   00:48:28
21   ip sla 21 reachability   Up   00:48:28
31   ip sla 31 reachability   Up   00:48:28

Point-to-point ping test:

#ping 10.1.2.1 source 10.1.2.2 repeat 5000 size 1500 df-bit
(cut)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (5000/5000), round-trip min/avg/max = 1/2/32 ms

Queueing stats on the egress interface:

#sh policy-map int g0/1
 GigabitEthernet0/1

  Service-policy output: etm-GrupoSalinas-Elektra-lima

    Class-map: class-default (match-any)
      261661475791 packets, 146841546500637 bytes
      30 second offered rate 11024000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 4096 packets
      (queue depth/total drops/no-buffer drops) 0/1011428/0   <--- not incrementing as of now
      (pkts output/bytes output) 261660459030/152073955468691
      shape (average) cir 30000000, bc 140000, be 0
      target shape rate 30000000
      Overhead Accounting Enabled

There was no congestion at the time the total drops increased; it could be bursty traffic.
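For reference, a probe/track configuration of the kind that would generate the %TRACK-6-STATE messages above might look like the sketch below. The destination/source addresses, timeout, and frequency here are assumptions for illustration, not taken from the actual router:

```
! Hypothetical probe definition -- addresses, timeout, and frequency are assumed
ip sla 11
 icmp-echo 10.1.2.1 source-ip 10.1.2.2
 timeout 1000
 frequency 5
ip sla schedule 11 life forever start-time now
!
! Track object 11 follows the reachability state of probe 11
track 11 ip sla 11 reachability
```

With a setup like this, a probe timing out (for example, because its echo was tail-dropped from a full shaper queue) is enough to flip the track Down even though sustained traffic shows no loss.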

Here's the QoS policy:

Router in question:

policy-map parent-policy
 class class-default
  shape average 30000000 140000 0 account user-defined 20
  queue-limit 4096 packets

At another site, nested QoS is implemented. Should I replicate it on this site? Is the purpose of the child policy to prioritize the traffic and smooth out the connection?

Working site:

policy-map parent-policy
 class class-default
  shape average 150000000 600000 0
  service-policy child-policy   <-- Child
!
policy-map child-policy
 class realtime
  priority
 class priority
  bandwidth remaining percent 40
  random-detect dscp-based
 class missioncritical
  bandwidth remaining percent 39
  random-detect dscp-based
 class transactional
  bandwidth remaining percent 16
  random-detect dscp-based
 class general
  bandwidth remaining percent 1
  random-detect dscp-based
 class class-default
  bandwidth remaining percent 4
  random-detect dscp-based

10   ip sla 10 reachability   Up   1w2d   <- Stable
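If the nested design were replicated, a minimal sketch for the 30 Mbps site could look like the following. This reuses the child policy and class names from the working site unchanged and attaches it under the existing shaper; whether the same bandwidth-remaining percentages fit this site's traffic profile is an assumption to validate:

```
! Sketch only -- assumes the same class-maps and child-policy exist on this router
policy-map parent-policy
 class class-default
  shape average 30000000 140000 0 account user-defined 20
  service-policy child-policy
```

The idea behind the hierarchy is that the parent shaper still limits the link to 30 Mbps, but during bursts the child policy decides which packets are dropped: traffic classified into the priority/realtime classes (which could include the IP SLA probes, if marked accordingly) is served first, while WRED discards lower-priority traffic early instead of tail-dropping everything from a single FIFO queue.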

Thank you


