
Incorrect bw distribution and a hang - is it my script?




Ok, I need help (something my doctor has been telling me for some time).

I've been experimenting with some DS and CBQ things with some colleagues,
and we seem to have hit a number of problems.

The first problem was that it didn't work over a CLIP (ATM PVC)
interface.  When I have more exact information and maybe some logs I'll
write more - it's probably an ATM problem, though I'll try some more
up-to-date patches first.
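
For what it's worth, the only change when we test over ATM is pointing
the script at the CLIP interface instead of eth0 (the interface name
below is just an example - whatever CLIP interface the box has):

DEVICE=atm0    # CLIP (ATM PVC) interface; the qdisc/class/filter lines are unchanged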

The second problem is with script one, attached below.

/* Script one. */
#! /bin/sh
TC=/usr/local/iproute2/tc/tc
DEVICE=eth0
LINERATE=10Mbit
SHARE1ST=7.5Mbit
SHARE2ND=2.5Mbit
AVGPKT=1000

#1 Install the cbq root queueing discipline
$TC qdisc add dev $DEVICE root handle 1:0 cbq \
        bandwidth $LINERATE allot 1514 cell 8 avpkt $AVGPKT mpu 64
#2 Install the class corresponding to the root qdisc
$TC class add dev $DEVICE parent 1:0 classid 1:1 cbq \
        bandwidth $LINERATE \
        rate $LINERATE avpkt $AVGPKT \
        allot 1514 cell 8 weight 1Mbit prio 8 maxburst 20
#3 1st leaf class for host1
$TC class add dev $DEVICE parent 1:1 classid 1:2 cbq \
        bandwidth $LINERATE rate $SHARE1ST avpkt $AVGPKT prio 3 \
        allot 1514 cell 8 weight 1Mbit maxburst 20 split 1:0
#4 2nd leaf class for host2
$TC class add dev $DEVICE parent 1:1 classid 1:3 cbq \
        bandwidth $LINERATE \
        rate $SHARE2ND avpkt $AVGPKT prio 7 \
        allot 1514 cell 8 weight 1Mbit maxburst 20 split 1:0
# u32 filters to match IP addresses to flow ids
# Basic u32 handle
$TC filter add dev $DEVICE parent 1: protocol ip prio 4 handle 1: u32 divisor 1
# host1 to flowid 1:2
$TC filter add dev $DEVICE parent 1: protocol ip prio 4 u32 \
        match ip src 10.5.5.1 flowid 1:2
# host2 to flowid 1:3
$TC filter add dev $DEVICE parent 1: protocol ip prio 4 u32 \
        match ip src 10.5.5.49 flowid 1:3
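
One thing we haven't ruled out ourselves: the LARTC documentation
suggests making each leaf's weight proportional to its rate (a rule of
thumb of rate/10), whereas we give both leaves weight 1Mbit.  If the
weights drive CBQ's weighted round-robin, equal weights might explain
the odd split.  The leaf classes with scaled weights would look like
this (untested sketch):

# weights scaled as rate/10: 750Kbit for the 7.5Mbit leaf, 250Kbit for the 2.5Mbit leaf
$TC class add dev $DEVICE parent 1:1 classid 1:2 cbq \
        bandwidth $LINERATE rate $SHARE1ST avpkt $AVGPKT prio 3 \
        allot 1514 cell 8 weight 750Kbit maxburst 20 split 1:0
$TC class add dev $DEVICE parent 1:1 classid 1:3 cbq \
        bandwidth $LINERATE rate $SHARE2ND avpkt $AVGPKT prio 7 \
        allot 1514 cell 8 weight 250Kbit maxburst 20 split 1:0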
 
Script one should match incoming packets using u32 and put them in two
separate queues according to the source address, i.e. the two hosts
should each get a different share of the output bandwidth.  It doesn't
seem to be assigning the correct proportions to each queue though; the
results differ wildly every time we try it.
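
For reference, this is how we read off what each class actually gets
while traffic is flowing - standard tc statistics, nothing exotic:

# per-class counters (bytes/packets sent by each leaf)
$TC -s class show dev $DEVICE
# per-qdisc counters, including drops and overlimits
$TC -s qdisc show dev $DEVICE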


The third problem is a bit more serious, because it hangs the machine
completely.

/* Script two */
#! /bin/sh
#
TC=/usr/local/iproute2/tc/tc
DEV=eth0
$TC qdisc add dev $DEV handle 1:0 root dsmark indices 64 set_tc_index

$TC filter add dev $DEV parent 1:0 protocol ip prio 1 tcindex mask 0xfc
$TC qdisc add dev $DEV parent 1:0 handle 2:0 cbq \
        bandwidth 10Mbit cell 8 avpkt 1000 mpu 64
$TC class add dev $DEV parent 2:0 classid 2:1 cbq \
        bandwidth 10Mbit rate 1500Kbit avpkt 1000 prio 1 \
        bounded isolated allot 1514 weight 1 maxburst 10
$TC qdisc add dev $DEV parent 2:1 pfifo limit 5
$TC filter add dev $DEV parent 2:0 protocol ip prio 1 \
        handle 0x2c tcindex classid 2:1 pass_on
$TC class add dev $DEV parent 2:0 classid 2:2 cbq \
        bandwidth 10Mbit rate 5Mbit avpkt 1000 prio 7 \
        allot 1514 weight 1 maxburst 21 borrow split 2:0 defmap 0xffff
$TC qdisc add dev $DEV parent 2:2 red limit 60KB min 15KB max 45KB \
        burst 20 avpkt 1000 bandwidth 10Mbit probability 0.4
$TC filter add dev $DEV parent 2:0 protocol ip prio 2 \
        handle 0 tcindex mask 0 classid 2:2 pass_on

The script is, I believe, a tweak of the efcbq example, but when we
send packets that match the 0x2c handle the machine just hangs.
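
For completeness, one way to generate matching packets - assuming an
iputils ping, whose -Q flag sets the TOS byte; with set_tc_index and
the 0xfc mask, a TOS byte of 0x2c shows up as tcindex handle 0x2c.  The
target address is just an example:

# send packets whose DS field masks to 0x2c
ping -Q 0x2c 10.5.5.1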

We are using kernel 2.3.40, the latest iproute2 and atm-0.68 (but that
shouldn't be a problem - the scripts always attach to an Ethernet
interface).  I'm reluctant to move to a newer kernel because the router
is booted off nfsroot and the newer kernels bring some complications -
or problems with hardware, etc.

I would be very grateful if someone could have a quick look - all we
want to do is assign different hosts different shares of the output
bandwidth.  The hang is a bit worrying - I can reproduce it (just by
installing the qdiscs and sending packets) and can run any tests people
want.

Loads of thanks in advance,
	Jason.