Best Practices for Benchmarking CoDel and FQ CoDel (and almost any other network subsystem!)
https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/
...
Tuning fq_codel
By default, fq_codel is tuned to run well, with no parameters, at 10GigE speeds.
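As a point of reference (the interface name eth0 below is only a placeholder), attaching fq_codel with all defaults is a single command:
# eth0 is a placeholder interface name
tc qdisc add dev eth0 root fq_codel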
However, today’s Linux implementation of CoDel is imperfect: there is typically at least one packet of buffering below the Linux qdisc, in the device driver (or one packet in HTB), even if BQL is available. This means that the “head drop” of CoDel’s design is not actually a true head drop but happens several packets back in the real queue (since there is no packet loss at the device driver interface), and that CoDel’s square-root computation is not exactly correct. These effects are vanishingly small at 1Gbps and above, but at low speeds even one packet of buffering is very significant; today’s fq_codel and codel qdiscs do not try to compensate for the potentially significant sojourn time of these packets at low bandwidth. So at low bandwidths you might have to “tune” the qdiscs (e.g. the target) in ways that the CoDel algorithm should not, in principle, require. We hope to get this all straightened out someday soon, but knowing exactly how much buffering sits under a qdisc is currently difficult, and it isn’t clear when this will happen.
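As a hedged illustration of that kind of tuning (the interface name eth0 and the exact figures are examples, not recommendations from the text above): at 1Mbit a full 1500-byte packet takes about 12ms just to serialize, so CoDel’s default 5ms target cannot be met and must be raised above the per-packet serialization time, with interval scaled up to stay well above target:
# example only: eth0 and the 15ms/150ms values are illustrative for a ~1Mbit link
tc qdisc replace dev eth0 root fq_codel target 15ms interval 150ms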
When running at 1GigE and below, it currently helps to change a few parameters, given the limitations of today’s Linux implementation and underlying device drivers.
The default packet limit of 10000 packets is crazy in any other scenario. It is sane to reduce this to 1000 or less on anything running at GigE or below. The over-large packet limit leads to bad results during slow start on some benchmarks. Note that, unlike txqueuelen, CoDel-derived algorithms can and DO take advantage of larger queues, so reducing the limit to, say, 100 hurts new flow startup and a variety of other things.
We tend to use limits in the 800-1200 range in our testing, and at 10Mbit, currently 600.
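For example (eth0 again a placeholder), a reduced limit can be applied on a root fq_codel qdisc like this:
# eth0 is a placeholder interface name
tc qdisc replace dev eth0 root fq_codel limit 1000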
We have generally settled on a quantum of 300 for use below 100Mbit, as this is a good compromise between SFQ and pure DRR behavior that gives smaller packets a boost over larger ones.
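A sketch combining both suggestions on a sub-100Mbit link (placeholder interface eth0):
# eth0 is a placeholder interface name
tc qdisc replace dev eth0 root fq_codel limit 1000 quantum 300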
For the queue carrying individual customers’ packets, they recommend lowering the queue from the default of 10000 packets to just 1000 packets.
They also recommend a quantum of 300:
tc qdisc add dev $IFUP parent 1:${htbhex} handle ${htbhex}:0 fq_codel quantum 300 limit 1000 noecn interval 100ms
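To confirm the parameters took effect and to watch drops, marks and backlog during a test, the per-qdisc statistics can be inspected with tc’s standard statistics output:
tc -s qdisc show dev $IFUP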