
Re: new WRR sched, updated TBF and "real" ingres implementation

At 08:12 AM 7/26/00, you wrote:
> > I'm working on a system to do something very similar here at our ISP, so I
> > have a question.  Wouldn't it be simpler just to limit the machine to two
> > network cards: eth0 for the link to your upstream provider and eth1 to a
> > hub/switch which feeds your clients, then put CBQ queues on eth0 incoming
> > and eth1 outgoing traffic to limit everyone to 256kbit/[number of
> > clients].  If you didn't use the bounded or isolated options, then you'd
> > be able to fairly share the bandwidth amongst the customers, while
> > allowing them to exceed their allotment if there was space available on
> > the pipe.
>Yes, it will work. I'd do it that way, but read below ..

You don't know how happy reading those words made me. :-)
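The two-card CBQ setup quoted above might look roughly like this. This is only a sketch: the device names, the 10Mbit link speed, the per-customer 64Kbit rate, the classids, and the example subnet are all my assumptions; only the 256Kbit pipe and the "no bounded/isolated" idea come from the mail.

```shell
# Root CBQ qdisc on eth1, the customer-facing card (names/rates assumed):
tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit avpkt 1000

# Parent class holding the whole 256Kbit upstream pipe:
tc class add dev eth1 parent 1: classid 1:1 cbq bandwidth 10Mbit \
    rate 256Kbit allot 1514 avpkt 1000 prio 5

# One child class per customer; without "bounded" or "isolated" a
# customer can borrow idle bandwidth from its siblings:
tc class add dev eth1 parent 1:1 classid 1:10 cbq bandwidth 10Mbit \
    rate 64Kbit allot 1514 avpkt 1000 prio 5

# Classify traffic into the customer's class by destination address
# (example subnet, adjust per customer):
tc filter add dev eth1 parent 1: protocol ip u32 \
    match ip dst 192.168.1.0/24 flowid 1:10
```

The same pattern, mirrored on eth0 for the customers' upstream traffic, would cover both directions.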

> > BTW: You could probably do the same thing with the multiple ethernet
> > cards, using a loopback or dummy ethernet interface if the CBQ classes
> > won't let you have filters corresponding to the different interfaces
> > on a class.
>Yeah! You hit the point of my previous mails to jamal! It would
>be nice to have one output queue for several interfaces, but that is
>rather impossible: a qdisc is always tied to one interface.
>It would solve all my problems if I were able to share a qdisc
>between more than one interface and then attach CBQ or WRR classes
>to different destination IP addresses.
>Possibly I'll try to do it. It'd be hard in 2.2 kernels, as there
>qdiscs are dequeued by an EOI event on the interface or by the
>qdisc's watchdog. But in 2.3 there is a new softirq, net_tx_action,
>which is called for all transmits.
>So the only difference is that we would have to netif_schedule all
>devices which share the common queue (this applies to TBF's and
>CBQ's wakeups).
>I can't imagine how to use loopback or a dummy device to hack around
>this limitation. Do you know how to do it?
>regards, devik

You're getting into levels of the kernel that make my brain go even mushier 
than usual.  In *theory*, however, you should be able to do it by routing 
all eth1, eth2, eth3... traffic through some sort of tunnel interface before 
it gets to eth0, and running the qdisc on that tunnel.  It would be very 
simple to do if you could break it into two machines, but then you'd 
probably be just as well off replacing all the eth1+ links with a single 
interface running multiple addresses and attached to a hub or switch (i.e., 
eth1:1 eth1:2 eth1:3, so you can run your qdisc on dev eth1 but still 
maintain separate IPs for the different networks).
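The alias layout above might be configured along these lines; a sketch only, with made-up addresses, and the per-network classes left to fill in as in any normal CBQ setup:

```shell
# Give eth1 one alias per customer network (addresses are examples):
ifconfig eth1:1 192.168.1.1 netmask 255.255.255.0 up
ifconfig eth1:2 192.168.2.1 netmask 255.255.255.0 up

# One qdisc on the real device now covers traffic for every alias:
tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit avpkt 1000

# ... then attach per-network CBQ classes and u32 filters matching
# each network's addresses, exactly as with separate cards.
```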

I am pretty sure you can use netfilter to do the routing through a tunnel, 
from what I've read of its docs ( http://netfilter.kernelnotes.org/ ), and 
there may even be a simpler way using straight ipchains rules.
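For the ipchains variant, one guess at how it could work is fwmark plus policy routing: mark the customer traffic, route marked packets out a tunnel device, and hang the single qdisc there. Everything here is an assumption on my part (it would need CONFIG_IP_ROUTE_FWMARK, and the device name, mark value, table number, and rates are invented), so treat it as a direction to explore rather than a recipe:

```shell
# Mark forwarded customer traffic (example source range, mark value 1):
ipchains -A forward -s 192.168.0.0/16 -m 1 -j ACCEPT

# Policy-route marked packets through a tunnel device (tunl0 assumed):
ip rule add fwmark 1 table 100
ip route add default dev tunl0 table 100

# The shared queue then lives on the tunnel device:
tc qdisc add dev tunl0 root tbf rate 256Kbit burst 10Kb latency 50ms
```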