So I've been struggling with this one for a while. How do I calculate the aggregate throughput on a Cisco IOS-XE device relative to its licensed bandwidth level?
Some documents I've seen suggest looking at the output of this command to see current use:
show platform hardware qfp active datapath utilization summary
This gives numbers that seem broadly correct. The same values are also available via SNMP in CISCO-ENTITY-QFP-MIB::ceqfpUtilizationTable. However, there are other stats exposed via SNMP, which I believe come from the throughput monitor that the router runs, under CISCO-ENTITY-QFP-MIB::ceqfpThroughputTable.
In my experience these latter "throughput average" values are consistently lower than the "utilization one/five min" ones (roughly 70% of their value). I started graphing both over the past few days to get a sense of it, and you can clearly see the differential.
I did some testing with a CSR1000v and could see that ceqfpThroughputAvgRate would eventually reach the same value if a constant stream of traffic was sent for about 10 minutes. I also tried adjusting the monitor interval using the below command, but it did not seem to affect the results or the averaging behaviour:
set platform hardware throughput-monitor threshold 80 interval 60
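For intuition on why the slower average takes ~10 minutes to catch up, here's a rough sketch in Python. This is my own guess at the averaging (not Cisco's documented algorithm, and all the numbers are illustrative): it compares a 60-second sliding-window mean with an exponentially weighted average using a ~5-minute time constant, both fed a constant offered rate.

```python
# Sketch: why a long-horizon "throughput average" lags a 1/5-minute
# utilization figure. Hypothetical model, NOT Cisco's documented algorithm.

def simulate(duration_s, rate_mbps, tau_s=300.0, window_s=60):
    """Feed a constant offered rate into two estimators:
    a 60 s sliding-window mean (like a '1-minute utilization') and
    an EWMA with time constant tau_s (a guess at the slower average)."""
    alpha = 1.0 / tau_s          # per-second smoothing factor
    ewma = 0.0                   # starts from idle
    window = []
    for _ in range(duration_s):
        ewma += alpha * (rate_mbps - ewma)
        window.append(rate_mbps)
        if len(window) > window_s:
            window.pop(0)
    return sum(window) / len(window), ewma

short, slow = simulate(600, 800.0)   # 10 minutes of constant 800 Mb/s
print(f"1-min style average: {short:.0f} Mb/s")   # hits 800 within a minute
print(f"slow EWMA after 10 min: {slow:.0f} Mb/s") # ~86% of 800, still climbing
```

With this kind of smoothing, the slow estimator only converges after several time constants, which would be consistent with the ~10 minutes of constant traffic it took for ceqfpThroughputAvgRate to catch up in my test.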
Obviously an Ethernet interface is either transmitting (at full speed, i.e. fully utilized) or idle (zero utilization). So any attempt to say what "speed" all the interfaces are doing has to involve some notion of time - what proportion of the time each one is fully used versus idle. So I understand why we may arrive at these different numbers; it's something we all have to consider for utilization monitoring.
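To make that concrete, here's a toy Python model (the names and numbers are mine, purely illustrative): each sample is busy-or-idle at line rate, and the "speed" over a window is just line rate times the busy fraction - which is exactly why the window length changes the answer.

```python
# Toy model: an interface is either sending at line rate or idle,
# so its average "speed" over a window is line_rate * busy_fraction.

LINE_RATE_GBPS = 10.0  # illustrative 10 Gb/s interface

def effective_rate(busy_samples, line_rate=LINE_RATE_GBPS):
    """busy_samples: one boolean per sample interval,
    True when the interface was transmitting."""
    busy_fraction = sum(busy_samples) / len(busy_samples)
    return line_rate * busy_fraction

# A 30 s burst followed by 90 s of idle (1 sample per second):
burst = [True] * 30 + [False] * 90
print(effective_rate(burst[:60]))  # 5.0 Gb/s over the first minute
print(effective_rate(burst))       # 2.5 Gb/s over the full two minutes
```

Same traffic, two different "speeds", depending purely on the averaging window - which is why the one/five-minute utilization and the longer throughput average can legitimately disagree.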
What I'm trying to understand is which measure the router uses internally. At what point does it decide to start dropping packets because the licensed bandwidth has been exceeded? I know I could open a TAC case, but I have more confidence in r/networking - hopefully someone knows!