**__QOS NOTES / CONCEPTS__**

__CLASSIFICATION AND MARKING__

* Always mark traffic: even if we are not honoring the marking, it is good practice to mark it.
* E.g.: we want to police some traffic. First we create the class map and mark the traffic; then, in the same router, we police the traffic with that marking.

IPP / CoS markings:\\
* CoS (802.1p) is 3 bits (Layer 2)
* ToS byte = DSCP + ECN
* DSCP is 6 bits (its first 3 bits map to IPP)
* Ignore ECN for marking (2 bits)
* EF (101110) is a single value and is prioritized above AF (which has twelve possible values); CS is for backwards compatibility with IPP (3 bits)
* Common values:
  * 48: Network Control (CS6)
  * 46: EF
  * 32: Real-time (CS4)
  * 0: BE

Marking example: CS6 for network control; EF for voice; AFxx for other important traffic; BE for the rest.

----

__CONGESTION AVOIDANCE__

* Policing
* Shaping

Shapers can only be used on **outbound traffic**. You can't shape traffic arriving on an interface. **__It's already arrived__**; there is no buffer to work with.

----

__CONGESTION MANAGEMENT__\\
Now the network device has a reason to consider which packet to send next.\\
* **CBWFQ**: FQ is just some level of fairness, for example not letting elephant flows choke the rest. CBW means obeying the markings and applying whatever policies we define, so "Class" here means obeying the markings.
* **LLQ**: means a priority queue with a policer: priority up to a maximum rate.
  * Rule of thumb: do not allocate more than 33% to the LLQ (note it is singular).

----

Advanced stuff:\\
FQ_CoDel:
* CoDel measures how long a packet has been in the queue, which it calls the "sojourn time". When the sojourn time is too long, CoDel drops the packet.
* This aggressive drop approach allows TCP to back off to an appropriate rate, and you end up with a more consistent flow rate, more even utilization of the interface, and better sharing among traffic flows.
* CoDel is applied per queue [[https://datatracker.ietf.org/doc/html/draft-ietf-aqm-fq-codel-06]]

----

TCP disciplines:\\
* **CUBIC**: the BIC and CUBIC TCP variants use math to compute how much to open or close the sliding window, based on round-trip time and how much data was lost in a given acknowledgement cycle. That differs from default TCP behavior, which slides the window shut and ramps the unacknowledged transfer amount back up from scratch whenever there is packet loss.
* **BBR**: gives nominal TCP performance improvements over CUBIC, but when you operate at Google's scale, every little bit matters.
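The DSCP arithmetic from the marking section (EF = 46, CS values as IPP shifted into the top 3 bits, twelve AF codepoints) can be sketched in Python. The helper names `af`, `cs`, and `tos_byte` are hypothetical, chosen here for illustration; the bit layouts follow the standard DiffServ field definitions.

```python
# Sketch of the DSCP codepoint arithmetic from the marking notes.
# AFxy codepoints follow the pattern: dscp = 8 * class + 2 * drop_precedence.
# CS codepoints are the IPP value shifted into the top 3 DSCP bits
# (the backwards-compatibility noted above).

def af(af_class: int, drop: int) -> int:
    """DSCP value for AFxy (class 1-4, drop precedence 1-3)."""
    assert 1 <= af_class <= 4 and 1 <= drop <= 3
    return 8 * af_class + 2 * drop

def cs(ipp: int) -> int:
    """Class Selector: the IPP value (0-7) in the top 3 DSCP bits."""
    assert 0 <= ipp <= 7
    return ipp << 3

EF = 0b101110  # 46, Expedited Forwarding

def tos_byte(dscp: int, ecn: int = 0) -> int:
    """ToS octet = DSCP (upper 6 bits) + ECN (lower 2 bits)."""
    return (dscp << 2) | ecn

print(af(3, 1))      # AF31 -> 26
print(cs(6))         # CS6  -> 48 (network control)
print(tos_byte(EF))  # 184 (0xB8), the on-the-wire ToS octet for EF
```

Note how the "twelve possible AF values" fall out of the arithmetic: 4 classes times 3 drop precedences.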
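The sojourn-time idea behind CoDel can be sketched as a toy queue: drop only when packets have been sitting above a target delay for a sustained interval. This is a simplified sketch, not the real algorithm; actual CoDel uses a control law that spaces successive drops proportionally to 1/sqrt(drop count), which is omitted here. The class name and structure are invented for illustration.

```python
import time
from collections import deque

TARGET = 0.005    # 5 ms target sojourn time (CoDel's default)
INTERVAL = 0.100  # 100 ms persistence window (CoDel's default)

class SimpleCoDelQueue:
    """Toy sketch of CoDel's core idea: drop when sojourn time has
    stayed above TARGET for at least INTERVAL. The real algorithm's
    drop-spacing control law is deliberately omitted."""

    def __init__(self):
        self.q = deque()         # (enqueue_timestamp, packet)
        self.above_since = None  # when sojourn time first exceeded TARGET

    def enqueue(self, packet, now=None):
        self.q.append((now if now is not None else time.monotonic(), packet))

    def dequeue(self, now=None):
        now = now if now is not None else time.monotonic()
        while self.q:
            t_in, packet = self.q.popleft()
            sojourn = now - t_in
            if sojourn < TARGET:
                self.above_since = None   # queue is draining fast enough
                return packet
            if self.above_since is None:
                self.above_since = now    # start the persistence clock
            if now - self.above_since < INTERVAL:
                return packet             # above target, but not for long enough
            # Persistently above target: drop this packet, try the next.
        return None
```

The timestamps are injectable (`now=`) so the behavior can be exercised deterministically; a packet delivered within 5 ms resets the state, while a standing queue older than 100 ms starts seeing drops.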
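The "math" the CUBIC bullet refers to is a cubic function of time since the last loss event. A minimal sketch of the window-growth formula from RFC 8312, assuming the RFC's default constants (the function name `cubic_window` is invented here):

```python
# Sketch of CUBIC's window growth curve (RFC 8312), not Linux's implementation:
#   W(t) = C * (t - K)^3 + W_max,  K = cbrt(W_max * (1 - beta) / C)
# where W_max is the window size just before the loss event.

C = 0.4      # CUBIC scaling constant (RFC 8312 default)
BETA = 0.7   # multiplicative decrease factor (RFC 8312: beta_cubic = 0.7)

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window t seconds after a loss event."""
    k = (w_max * (1 - BETA) / C) ** (1 / 3)  # time to climb back to w_max
    return C * (t - k) ** 3 + w_max
```

The curve shows why CUBIC differs from "slide the window shut and start all over": at t = 0 the window drops only to 0.7 * W_max, it flattens out as it approaches the old W_max, and then probes beyond it.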