
Re: Confusion regarding TC (cbq), policing and bursts



> We noticed that setting this to more than 100 resulted in quite a bit of
> loss (we were sending 64-byte packets at the time).  What happens if the
> packets are not average sized?  Suppose they are all 64 bytes?  Or suppose
> they vary all over the place.

This one is simple: if all packets are 64 bytes long, use 64 as the avpkt for 
that class.
Now the complicated answer: for packet sizes on the order of magnitude of 
avpkt, there isn't much rate variation (in terms of bytes per second) with 
packet size, since avpkt is used to calculate an "offtime". At such small 
packet sizes, though, the header overhead is relatively high, so you might 
notice some rate decrease.
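
For instance, a CBQ class for such a 64-byte stream might look something like 
the following (just a sketch, assuming a cbq root qdisc with handle 1: is 
already installed; the device name, bandwidth, rate and weight are 
placeholders, not values from your setup):

  tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 10Mbit \
      rate 200kbit weight 20kbit allot 1514 prio 5 avpkt 64 bounded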


> So, does that mean that the class will look at the packet stream 1024
> times per second vs 100?

It's not a matter of looking at the packet stream, but rather of scheduling 
the transmission of packets that must be delayed. In the kernel source, look 
at linux/net/sched/sch_tbf.c and read the large comment. Better yet, read the 
last part of this reply.


> So, let me rephrase then... please correct me if I'm wrong.  With
> HZ=1024, there are now 1024 intervals per second which the parameters
> for the qdisc, class, etc. will utilize.  This means, each millisecond,
> the packet stream will be examined for conformance to the parameters,
> that is, each millisecond, all the packets in the queue at that time are
> checked for conformance.  This should then mean that the burst count
> should decrease (as a user-supplied parameter, not by some magic power)
> as the interval decreases, yes?

> If that is correct, the question then, is how to calculate the proper
> burst size.  Say, I want to send 200kbit of data, at a rate of 1.5mbps.
> I want tc to allow no more than 200kbit through in one second (the rate
> parameter would be set to 200kbps), but that 200kbit may be coming in at
> a much higher line rate (1.5mbps).  I'd then need to play with the burst
> size to avoid dropping packets, yes?  How do you calculate the burst?

Before going further, let me remind you that this refers to the worst case. 
Usually the limit isn't that hard, but with QoS a mere probability isn't good 
enough.

Suppose a source transmits enough packets to keep the buffer of a TBF almost 
full most of the time. Now suppose the packets are sent at the beginning of a 
time slot. If packets have to be delayed, the qdisc schedules a timer that 
may only be woken in the next time slot, so all packets have to wait until 
then. So, in the worst case, if you send a burst of N bytes, have to wait 
until the next time slot (or jiffy), then send another N bytes, then wait, 
and so on, you end up with a rate of N*HZ bytes per second. If your burst 
size is N, you're limited to a maximum rate of N*HZ.
Notice that you can't control HZ (at least not at runtime), so the only way 
to increase the maximum rate is by increasing the burst size.
Usually the situation is not this hard, because packets are arriving or 
leaving, thus generating interrupts that may be used for sending some 
packets. But you just can't rely on that.
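
To put numbers on your 200kbit example (again just a sketch; eth0 and the 
latency value are placeholders, not values from your setup):

  # The burst must satisfy: burst >= rate_in_bytes / HZ
  # (and, in practice, at least one MTU).
  # 200 kbit/s = 25000 bytes/s:
  #   HZ=100  -> 25000 / 100  = 250 bytes minimum
  #   HZ=1024 -> 25000 / 1024 = ~25 bytes minimum
  # Leave some headroom above the minimum:
  tc qdisc add dev eth0 root tbf rate 200kbit burst 3200 latency 50ms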

TBF implements a double token bucket, which controls not only the average 
rate but also the peak rate. Peak rate control is much more demanding than 
average rate control, because the "burst" size for the peak-rate token bucket 
is a single maximum-sized packet (MTU), whereas the burst size for average 
rate control is usually at least several MTU-sized packets.
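
Both buckets together would look something like this (once more a sketch with 
placeholder values; the mtu parameter sets the size of the peak-rate bucket):

  # By the same N*HZ argument, with HZ=100 the achievable peakrate is
  # capped at about mtu * HZ = 1514 * 100 bytes/s (roughly 1.2 Mbit/s);
  # a 1.5 Mbit/s peakrate only works out with HZ=1024.
  tc qdisc add dev eth0 root tbf rate 200kbit burst 3200 \
      peakrate 1500kbit mtu 1514 latency 50ms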

Hope this makes it clear.


Rui Prior