
Re: new WRR sched, updated TBF and "real" ingres implementation




> > The problem 1: Cisco's shaping implementation is bad. I discussed
> > it with Cisco folks and they confirmed to me that traffic is shaped
> > BEFORE it reaches queuing.
> 
> This is not "shaping" then. Shaping is post-queue.

Yes, I agree. Our ISP uses the IOS commands:
(config)# int e1
(config-if)# traffic-shape group 127 256000
to shape our incoming bandwidth (we are one of their customers,
connected to their wireless net).
Cisco calls this technique "generic traffic shaping" but applies
it BEFORE the packet reaches the interface's output queue. So
you are right: it is not really shaping, but Cisco calls
it so.

> 
> > Thus Cisco's WFQ queuing doesn't work
> > at all (when shaping is active). It leads to very unfair usage
> > of the link: a host with a low-delay connection can take most of the
> > bandwidth, and all other TCP connections are slowed down (because
> > they now have longer interpacket times).
> 
> I am afraid you lost me here.
> Are you talking about CISCO's CAR, or is it WFQ that is causing this
> effect?

No. The "generic shaping" causes it. Because it is badly
placed before output queue, packets are first shaped by
"generic shaper" and then they are going to output queue.B
But generic shaper uses it's own internal FIFO queue and
it is the reason why WFQ queue is not used at all.
The imformation above is true for pre 12.x IOSes (which
are used by our provider). 
The post 12 IOSes uses WFQ as generic shaper's queue
so that it is much better and v12 also supports CAR
which is even better. 
Problem is that our ISP doesn't use them ;-(
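
For comparison, a rough sketch of what a post-12 CAR policer could
look like on their side (the interface name and burst sizes are made
up, and I'm quoting the syntax from memory, so check the IOS docs):

(config)# interface Ethernet1
(config-if)# rate-limit input 256000 8000 8000 conform-action transmit exceed-action drop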

> > You can object that I could attach the shaper to the outgoing
> > interface instead of the incoming one. Well, I can't, because I have
> > several output interfaces and I really don't want to restrict
> > eth1 to 128kbit and eth2 to 128kbit.
> 
> I obviously did suggest that above ;-> I am still not seeing how you would
> dynamically share the 256K on eth0 by queueing on ingress of eth1 and
> eth2 ...

No, no, I'm doing it by queuing on ingress of eth0! Eth1/2/3 have no
queues at all. Eth0 has both an ingress and an egress queue, so
it does exactly what I want: it rate-limits both the incoming and
outgoing directions on the ISP's link (eth0).
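
To make it concrete, the egress half looks roughly like this in
stock tc terms; for the ingress half I show the stock policer (which
drops) as a stand-in, since the delaying ingress queue is my own
hack (rates and burst sizes are just illustrative):

# egress: shape traffic going out to the ISP
tc qdisc add dev eth0 root tbf rate 256kbit burst 10k latency 50ms
# ingress: the stock kernel can only police (drop), not queue
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
   match ip src 0.0.0.0/0 \
   police rate 256kbit burst 10k drop flowid :1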

> > Is it clear now ?
> > 
> 
> Not really. Maybe you mean that the data direction is into eth0 towards
> eth1/2/3? I.e. the users behind those interfaces are bringing data into the
> network as opposed to outputting towards the wireless side?

Exactly!

> In which case your limitation is first the bandwidth of the wireless
> device. But regardless of this, ingress policing does provide you a scheme
> to share bandwidth using the "continue" construct. Look at
> examples/diffserv/Edge32-c*-u32 for samples.

Hmm, that could be useful. I'm going to look at the "continue" stuff. It
would be nice, because my ingress hack works but it is a dirty hack.
I had to explicitly exclude ARP packets (they don't like coming
from the lo interface) and I also had to rewrite part of the
masquerading code to make it work...
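
From a quick look, I guess the Edge32 idea is roughly this (the
subnet and rates are made up by me): the first policer limits one
class and, on excess, falls through via "continue" to the next rule,
so unused bandwidth gets shared:

# rule 1: this subnet gets 128kbit; excess falls through via continue
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
   match ip src 10.1.0.0/24 \
   police rate 128kbit burst 10k continue flowid :1
# rule 2: catch-all policer for the whole 256kbit link
tc filter add dev eth0 parent ffff: protocol ip prio 2 u32 \
   match ip src 0.0.0.0/0 \
   police rate 256kbit burst 10k drop flowid :2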

> Queueing on ingress on a shared bus such as the one found in a PC is really
> useless. Even on very fast switching fabrics nobody does it. CISCO is
> correct not to implement it.
> If you _really really_ insist (and so far I see no good reason) then it is
> a trivial exercise; the infrastructure is already in place.
> But I don't think I have understood how you can have the 256K shared by
> having queueing on the ingress... Please explain again. It could also be
> that I never understood your problem description either.

For me, ingress queuing has the same rationale as ingress policing.
The only difference is that policing drops packets which exceed the
rate, while ingress queuing delays them instead of dropping them.
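
Just to illustrate that difference: on later kernels (with the ifb
device and the mirred action) one could get the delaying behaviour
without my hack by redirecting ingress traffic to an ifb device and
putting a real qdisc there (device names and rates are illustrative):

modprobe ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
# redirect everything arriving on eth0 to ifb0 ...
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
   match ip src 0.0.0.0/0 \
   action mirred egress redirect dev ifb0
# ... where TBF delays packets instead of dropping them
tc qdisc add dev ifb0 root tbf rate 256kbit burst 10k latency 50ms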

But thanks to your email I'm going to try to revalidate
my needs...

regards, devik