
Re: new WRR sched, updated TBF and "real" ingress implementation

jamal wrote:
> Out of curiosity, why do you think that we need to queue incoming packets
> on the ingress?

Hello jamal,

[First, I hope I understood the term "ingress" correctly. I use it
to mean "packets arriving on some interface".]

Well, I tried to explain it at the URL. I'll try again.
In our company we have one Linux box acting as a router
to the Internet. The router has several ethernet interfaces;
eth0 is connected to our upstream ISP via a BreezeNet
wireless link (2Mbit).
eth1, eth2, eth3, ... are connected to our customers. With the ISP
we have an agreement that we have to throttle outgoing traffic
to 256kbit. The ISP also throttles our incoming traffic
to 256kbit using so-called "generic shaping" on a Cisco router.

Problem 1: Cisco's shaping implementation is bad. I discussed
it with Cisco folks and they confirmed to me that traffic is shaped
BEFORE it reaches queueing. Thus Cisco's WFQ queueing doesn't work
at all (when shaping is active). It leads to very unfair usage
of the link: a host with a low-delay connection can take most of
the bandwidth and all other TCP connections are slowed down (because
they now have longer inter-packet times).
This can be solved by shaping incoming traffic at our side too,
to, say, 240kbit. I can assign traffic to several classes and
thus share the bandwidth between customers in given ratios.
Currently, incoming traffic is divided into 3 classes:
- small packets (<256B) - can use 60% of the link
- FTP data, SMTP data, NNTP data - 10%
- all others - 30%
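
Just to illustrate the classification (a rough sketch in standard tc
filter syntax only -- the handles, classids and exact u32 matches here
are illustrative, not my actual config, which uses the code at the URL):

```shell
# Small packets (<256B): the IP total length field is bytes 2-3 of the
# IP header, so a zero high byte means length < 256.
tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
    match u16 0x0000 0xff00 at 2 flowid 1:10

# Bulk data by source port of the remote server:
# FTP-data (20), SMTP (25), NNTP (119).
tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 \
    match ip sport 20 0xffff flowid 1:20
tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 \
    match ip sport 25 0xffff flowid 1:20
tc filter add dev eth0 parent 1:0 protocol ip prio 2 u32 \
    match ip sport 119 0xffff flowid 1:20

# Everything else falls through to a default class (say 1:30).
```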

You can object that I could attach the shaper to the outgoing
interfaces instead of the incoming one. Well, I can't, because I have
several output interfaces and I really don't want to restrict
eth1 to 128kbit and eth2 to 128kbit.
I want eth2 to borrow unused bandwidth which would otherwise go to eth1.

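This borrowing is exactly what classful shaping at a single point
gives you: each class has a guaranteed rate but may exceed it while
the siblings are idle. A rough CBQ-style sketch of the class tree
(illustrative numbers and handles; it also assumes the qdisc can be
attached where the traffic actually enters, which is what my ingress
code is for):

```shell
# One 240kbit bounded parent for the whole incoming link, split into
# per-customer-group children; classes without "bounded" may borrow
# spare bandwidth from the parent.
tc qdisc add dev eth0 root handle 1: cbq bandwidth 2Mbit avpkt 1000
tc class add dev eth0 parent 1:  classid 1:1  cbq bandwidth 2Mbit \
    rate 240kbit allot 1514 avpkt 1000 bounded   # hard overall limit
tc class add dev eth0 parent 1:1 classid 1:10 cbq bandwidth 2Mbit \
    rate 128kbit allot 1514 avpkt 1000           # eth1 customers, may borrow
tc class add dev eth0 parent 1:1 classid 1:20 cbq bandwidth 2Mbit \
    rate 128kbit allot 1514 avpkt 1000           # eth2 customers, may borrow
```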
Is it clear now?

regards, devik