> -----Original Message-----
> From: Rui Prior [mailto:firstname.lastname@example.org]
> Sent: Wednesday, December 06, 2000 6:41 PM
> To: Earle, Jonathan [KAN:1A31:EXCH]; 'Linux Diffserv List'
> Subject: Re: Confusion regarding TC (cbq), policing and bursts
> > Questions:
> > 1. I've gleaned from the net that 'maxburst' refers to the number of
> > bytes allowed in one burst. Setting this value to much more than 100
> > seems to result in high packet loss. But only 100 bytes per burst?
> > Seems awfully small.
> I think maxburst is the number of avpkt-sized packets allowed to be
> sent in a burst after a period when the source has been idle (anyone
> please correct me if I'm wrong).
We noticed that setting this to more than 100 resulted in quite a bit of loss (we were sending 64-byte packets at the time). What happens if the packets are not average-sized? Suppose they are all 64 bytes, or their sizes vary all over the place?
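For reference, here is a sketch of how those two parameters appear together in a cbq setup. The device name and numbers are my own illustration, not from this thread:

```shell
# Hypothetical cbq setup: per the explanation above, 'maxburst' is
# counted in avpkt-sized packets, not bytes, so 'avpkt' matters.
tc qdisc add dev eth0 root handle 1: cbq bandwidth 10Mbit avpkt 1000
tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 10Mbit \
    rate 1.5Mbit allot 1514 prio 5 avpkt 1000 maxburst 20 bounded
# cbq's burst budget is derived from maxburst * avpkt; if the real
# packets are much smaller than avpkt (e.g. 64 bytes), the derived
# timing no longer matches the actual stream.
```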
> > 2. I've modified, as per good advice found here and elsewhere, the
> > PSCHED (now set to CPU) and HZ (now 1024) values. What then, do
> > those values mean? Why is one value better than another?
> With the timer interrupt happening 1024 times per second instead of
> 100, you have a granularity that's about ten times smaller. This means
> you have better control over non-work-conserving qdiscs, as is the
> case of TBF.
So, does that mean that the class will look at the packet stream 1024 times per second instead of 100?
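The granularity difference can be made concrete with a quick calculation (my own back-of-envelope, using the 1.5 Mbit/s figure from this thread): how many bytes arrive between timer ticks at each HZ value.

```shell
# How many bytes accumulate between timer ticks at a given line rate,
# for HZ=100 versus HZ=1024?
rate_bps=1500000                 # 1.5 Mbit/s line rate from the thread
bytes_per_sec=$((rate_bps / 8))

for hz in 100 1024; do
    bytes_per_tick=$((bytes_per_sec / hz))
    echo "HZ=$hz: ~$bytes_per_tick bytes arrive per timer tick"
done
```

So at HZ=100 the shaper can only react in ~1875-byte chunks, while at HZ=1024 it can react in ~183-byte chunks, which is why the higher HZ gives finer control over a non-work-conserving qdisc.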
> > 3. What is the sampling rate, ie: how often does tc look at the
> > packet stream to determine if a drop/accept action should occur?
> Once per packet at enqueuing, although with some qdiscs it is possible
> to drop an already enqueued packet when enqueuing another. (Once
> again, correct me if I'm wrong.)
So, let me rephrase then; please correct me. With HZ=1024, there are now 1024 timer intervals per second for the qdisc, class, etc. parameters to work with. This means that roughly every millisecond the packet stream will be examined for conformance to those parameters, which should in turn mean that the burst parameter I supply can be smaller.
If that is correct, the question then is how to calculate the proper burst size. Say I want to send 200 kbit of data that arrives at a line rate of 1.5 Mbit/s, but I want tc to pass no more than 200 kbit/s (the rate parameter would be set to 200kbps). Since that 200 kbit may arrive much faster than the shaped rate, I'd then need to tune the burst size to avoid dropping packets, yes? How do you calculate the burst?
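One common rule of thumb (from the TBF documentation, not something stated in this thread) is that burst must be at least rate/HZ bytes, otherwise the shaper cannot release enough tokens between timer ticks to sustain the configured rate. A sketch with the numbers from the question:

```shell
# Minimum burst for TBF: at least rate/HZ bytes, or the shaper
# falls behind between timer ticks.
rate_bps=200000                      # desired shaped rate: 200 kbit/s
hz=1024
min_burst=$(( rate_bps / 8 / hz ))   # bytes that must be releasable per tick
echo "minimum burst at HZ=$hz: $min_burst bytes"
# In practice you would round up well past this floor (and past one
# MTU) to absorb the faster line-rate arrivals, e.g. something like:
#   tc qdisc add dev eth0 root tbf rate 200kbit burst 10kb latency 50ms
```

The computed floor here is only about 24 bytes, so in this case the burst size is dictated by the arrival bursts you need to absorb, not by the HZ constraint; larger burst means fewer drops but looser policing.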