
Re: new WRR sched, updated TBF and "real" ingress implementation



I'm working on a system to do something very similar here at our ISP, so I
have a question. Wouldn't it be simpler just to limit the machine to two
network cards: eth0 for the link to your upstream provider and eth1 to a
hub/switch which feeds your clients, then put CBQ queues on eth0 incoming
and eth1 outgoing traffic to limit everyone to 256kbit/[number of
clients]? If you didn't use the bounded or isolated options, then you'd be
able to share the bandwidth fairly amongst the customers, while allowing
them to exceed their allotment if there was space available on the pipe.
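
Something like this is roughly what I have in mind. It is only a sketch:
the rates, handles, and client address are made up, and it assumes four
clients on a 10Mbit LAN behind eth1:

  # Root CBQ on eth1, the interface toward the clients.
  tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit avpkt 1000

  # Parent class bounded to the 256kbit we get from the ISP.
  tc class add dev eth1 parent 1: classid 1:1 cbq bandwidth 10Mbit \
      rate 256kbit allot 1514 prio 5 avpkt 1000 bounded

  # One class per client: 256kbit/4 = 64kbit each. With neither
  # "bounded" nor "isolated" set, a class may borrow bandwidth that
  # its siblings are not using.
  tc class add dev eth1 parent 1:1 classid 1:10 cbq bandwidth 10Mbit \
      rate 64kbit allot 1514 prio 5 avpkt 1000

  # Steer one client's traffic into its class (address made up).
  tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
      match ip dst 192.168.1.10/32 flowid 1:10

A similar tree on eth0 would cover the direction toward the provider.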

I just started looking into traffic shaping under Linux, so please let me
know if there is some sort of flaw in my logic, as it would probably save
me a lot of time. :-)

BTW: you could probably do the same thing with multiple ethernet cards,
using a loopback or dummy ethernet interface, if the CBQ classes won't let
you have filters corresponding to the different interfaces on a class.
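
Purely hypothetical, since I haven't worked out how to force forwarded
traffic through such a device, but the attachment itself would just be:

  # Bring up a dummy interface and hang a single CBQ tree off it.
  # Getting forwarded packets to actually traverse dummy0 is the
  # unsolved part of this idea; plain routing will not do it alone.
  modprobe dummy
  ifconfig dummy0 up
  tc qdisc add dev dummy0 root handle 1: cbq bandwidth 2Mbit avpkt 1000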

At 08:02 PM 7/25/00, jamal wrote:


>On Fri, 21 Jul 2000 devik@cdi.cz wrote:
>
> > jamal wrote:
> > >
> > > Out of curiosity, why do you think that we need to queue incoming packets
> > > on the ingress?
> >
> > Hello jamal,
> >
> > [First, I hope I understood the term "ingress". I use it
> > to mean "incoming packets on some interface".]
> >
>
>yes.
>
> > well, I tried to explain it at the URL. I'll try again.
> > In our company we have one Linux box acting as the router
> > to the Internet. The router has several ethernet interfaces;
> > eth0 is connected to our upstream ISP using a Breezenet
> > wireless link (2Mbit).
> > Eth1,2,3... are connected to our customers. With the ISP we
> > have an agreement that we have to throttle output traffic
> > to 256kbit. The ISP also throttles our incoming traffic
> > to 256kbit using so-called "generic shaping" in a cisco router.
> >
> > Problem 1: cisco's shaping implementation is bad. I consulted
> > Cisco folks and they confirmed to me that traffic is shaped
> > BEFORE it reaches queueing.
>
>This is not "shaping" then. Shaping is post-queue.
>
> > Thus cisco's WFQ queueing doesn't work
> > at all (when shaping is active). It leads to very unfair usage
> > of the link: a host with a low-delay connection can take most of
> > the bandwidth and all other TCP connections are slowed down
> > (because they now have longer inter-packet times).
>
>I am afraid you lost me here.
>Are you talking about CISCO's CAR? or is it WFQ that is causing this
>effect?
>
> > This can be solved by also shaping incoming traffic at our side,
> > to, say, 240kbit.
>
>I am confused. Is this on the ingress of each eth1/2/3? or do you mean
>outgoing traffic to the ISP?
>
> > I can assign traffic to several classes and
> > thus share bandwidth between customers in given ratios.
> > Currently, incoming traffic is divided into 3 classes:
> > - small packets (<256B) - can use 60% of the link
> > - FTP data, SMTP data, NNTP data - 10%
> > - all others - 30%
> >
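
A split like that could be expressed in tc roughly as below, assuming a
CBQ parent class 1:1 capped at 240kbit like the sketch earlier; the port
numbers and the length-match trick are illustrative guesses, not devik's
actual configuration:

  # 60/30/10 of 240kbit = 144/72/24kbit.
  tc class add dev eth0 parent 1:1 classid 1:10 cbq bandwidth 10Mbit \
      rate 144kbit allot 1514 prio 3 avpkt 1000   # small packets
  tc class add dev eth0 parent 1:1 classid 1:20 cbq bandwidth 10Mbit \
      rate 72kbit allot 1514 prio 5 avpkt 1000    # everything else
  tc class add dev eth0 parent 1:1 classid 1:30 cbq bandwidth 10Mbit \
      rate 24kbit allot 1514 prio 7 avpkt 1000    # FTP/SMTP/NNTP bulk

  # Small packets: IP total length < 256, i.e. the high byte of the
  # 16-bit length field at offset 2 is zero.
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match u16 0x0000 0xff00 at 2 flowid 1:10

  # Bulk traffic by TCP source port; FTP-data (20) shown, SMTP (25)
  # and NNTP (119) would be analogous. Assumes no IP options.
  tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
      match ip protocol 6 0xff match ip sport 20 0xffff flowid 1:30

  # Everything else.
  tc filter add dev eth0 parent 1: protocol ip prio 3 u32 \
      match ip src 0/0 flowid 1:20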
>
>Is this done per interface eth1/2/3? could you just have achieved this by
>shaping to your provider on eth0?
>
> > You may object that I could attach the shaper to the outgoing
> > interfaces instead of the incoming one. Well, I can't, because I have
> > several output interfaces and I really don't want to restrict
> > eth1 to 128kbit and eth2 to 128kbit.
>
>I obviously did suggest that above ;-> I am still not seeing how you
>would dynamically share the 256K on eth0 by queueing on the ingress of
>eth1 and eth2 ...
>
> > I want eth2 to borrow unused bandwidth which would go to eth1 for
> > example.
> >
>
>How?
>
> > Is it clear now?
> >
>
>Not really. Maybe you mean that the data direction is into eth0 towards
>eth1/2/3? i.e. the users behind those interfaces are bringing data into
>the network as opposed to outputting towards the wireless side?
>In which case your limitation is first the bandwidth of the wireless
>device. But regardless of this, ingress policing does provide you a
>scheme to share bandwidth using the "continue" construct. Look at
>examples/diffserv/Edge32-c*-u32 for samples.
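
The pattern in those example scripts is, roughly, a chain of policers in
which a packet exceeding one rate falls through ("continue") to the next,
lower-preference rule; the rates and matches below are made up:

  # Attach the ingress qdisc and police in two tiers.
  tc qdisc add dev eth0 handle ffff: ingress

  # Within 160kbit, accept into class :1; beyond it, "continue" lets
  # the next filter see the packet.
  tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
      match ip src 0/0 police rate 160kbit burst 10k continue flowid :1

  # The overflow gets up to another 80kbit as class :2; past the
  # total, packets are dropped.
  tc filter add dev eth0 parent ffff: protocol ip prio 2 u32 \
      match ip src 0/0 police rate 80kbit burst 10k drop flowid :2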
>
>Queueing on ingress on a shared bus such as the one found on a PC is
>really useless. Even on very fast switching fabrics nobody does it.
>CISCO is correct not to implement it.
>If you _really really_ insist (and so far I see no good reason) then it
>is a trivial exercise; the infrastructure is already in place.
>But I don't think I have understood how you can have the 256K shared by
>having queueing on the ingress... Please explain again. It could also be
>that I never understood your problem description.
>
>cheers,
>jamal