Some context here: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_plcshp/configuration/xe-17/qos-plcshp-xe-17-book/qos-plcshp-ether-ohead-actg.html
I have a 50 Mbit/s circuit with an ISP (L2 MPLS), but the bearer is 100 Mbit/s. We pay for the 'L2 rate', apparently.
I get that there's Ethernet overhead that needs to be added, but I would class it in three ways:
- Layer 1 stuff: 7 bytes of preamble to sync comms, 1 byte of Start of Frame Delimiter, and 12 byte-times of inter-frame gap (idle state) between transmissions, 20 bytes per packet in total.
- Layer 2 stuff: Source/Dest MAC, EtherType and FCS.
- Layer 3 stuff: the frame payload, i.e. the L3 packet itself.
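To make that split concrete, here's a minimal Python sketch (my own illustration, not from the Cisco doc) that tallies the per-packet byte counts for each class using the standard Ethernet field sizes:

```python
# Per-packet Ethernet overhead by layer (standard field sizes, in bytes).
L1_FIELDS = {"preamble": 7, "sfd": 1, "interframe_gap": 12}          # 20 total
L2_FIELDS = {"src_mac": 6, "dst_mac": 6, "ethertype": 2, "fcs": 4}   # 18 total

L1_OVERHEAD = sum(L1_FIELDS.values())   # 20
L2_OVERHEAD = sum(L2_FIELDS.values())   # 18

def wire_bytes(l3_len: int) -> int:
    """Bytes consumed on the physical medium by one L3 packet."""
    return l3_len + L2_OVERHEAD + L1_OVERHEAD

def l2_bytes(l3_len: int) -> int:
    """Bytes as counted at the 'L2 rate': the frame from dest MAC through FCS."""
    return l3_len + L2_OVERHEAD

print(wire_bytes(1500), l2_bytes(1500))  # 1538 1518
```

The 1518/1538 figures for a full-size 1500-byte packet line up with the usual Ethernet frame-size numbers, which is a handy sanity check.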
So when they say they shape at the L2 rate, I expect that my QoS policy can ignore the 20 bytes per packet of L1 stuff above and be built to account for the L2 overheads only. But when I look at what Cisco accounts for by default, I get this:
- Source/Dest MAC = 12 bytes
- EtherType = 2 bytes
- FCS = 4 bytes
So 18 bytes of L2 overhead exist for each packet. But the Cisco documentation says they only account for the Source/Dest MAC and EtherType, i.e. 14 bytes in my scenario.
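To see why those missing 4 bytes matter, here's a rough sketch of the mismatch (my own arithmetic, assuming my shaper credits each packet at the IP length plus Cisco's default 14 bytes, while the ISP polices on the full 18-byte L2 frame):

```python
SHAPE_RATE = 50_000_000   # configured shape rate, bit/s (the contracted L2 rate)
ACCOUNTED = 14            # bytes Cisco adds per packet by default (MACs + EtherType)
POLICED = 18              # bytes the ISP's L2 policer counts (MACs + EtherType + FCS)

def offered_l2_rate(ip_len: int) -> float:
    """L2 rate actually offered when the shaper under-counts each packet
    by (POLICED - ACCOUNTED) bytes."""
    return SHAPE_RATE * (ip_len + POLICED) / (ip_len + ACCOUNTED)

for ip_len in (64, 512, 1500):
    print(f"{ip_len:>5} B IP packet -> {offered_l2_rate(ip_len) / 1e6:.2f} Mbit/s")
# 64 B   -> 52.56 Mbit/s  (over the 50 Mbit/s contract; the policer will drop)
# 512 B  -> 50.38 Mbit/s
# 1500 B -> 50.13 Mbit/s
```

The under-count bites hardest on small packets, which is exactly the traffic (voice, TCP ACKs) a shaping policy is usually trying to protect.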
So why did Cisco ignore FCS in their calculations?