
Bandwidth limiting 'virtual' subnets...

 I have a somewhat theoretical question:

	Is it possible to add traffic control components to interfaces
with multiple subnets bound?

 I ask because I'm attempting to establish a testbed configuration for
proving out websites at various simulated bandwidths. I tried to
implement this a few months ago, but I couldn't get things to behave:
rates simply wouldn't limit to anything predictable, though that could
easily be down to a configuration error on my part... Please take a look
at the following scenario and point out faults... Try not to laugh too
hard... ;-)


 The test environment is: a group of machines, connected at 100Mbit to an
HP Procurve switch, all on the same IP subnet, with a single Linux
machine acting as a router out to other networks. I want to bind a
second subnet to the local ethernet card on the router (eth0), bind a
local web server (IIS in this case) to both logical subnets, and
provide access to the different interfaces at different rates. Thus we
would end up with something like:

	Wire rate (100Mbit)	http://testserver/evalsite
	ISDN rate		http://testserver-isdn/evalsite

The configuration process went something like this:

	# First attach additional subnet interface to eth0
	/sbin/ip addr add dev eth0 broadcast scope global

	# Now we attach the root Class Based Queue processor to eth0
	/sbin/tc qdisc add dev eth0 root handle 1: cbq bandwidth 100Mbit \
		cell 8 avpkt 1000 mpu 64

	# Next we drop in a CBQ branch (called 1:1) for full 100Mbit
	/sbin/tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 100Mbit rate 100Mbit \
		allot 1514 cell 8 weight 10Mbit prio 5 maxburst 20 avpkt 1000

	# Then we add a branch for traffic control (1:2) at the throttled bandwidth
	/sbin/tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth 100Mbit rate 128Kbit \
		allot 1514 cell 8 weight 12Kbit prio 5 bounded isolated maxburst 20 avpkt 1000
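
For what it's worth, the weight values above follow the common LARTC
rule of thumb of weight = rate/10 (so a 128Kbit class gets roughly a
12Kbit weight); a quick sanity check of that arithmetic:

```shell
# LARTC rule of thumb: give a CBQ class a weight of roughly rate/10.
# For the 128Kbit class above this yields the 12Kbit weight used.
rate_kbit=128
weight_kbit=$((rate_kbit / 10))
echo "${weight_kbit}Kbit"
```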

	# And we attach filters to allow traffic to be routed into each
	# of the leaves defined previously
	/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 100 \
		route to 16 flowid 1:1
	/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 100 \
		route to 17 flowid 1:2

	# Finally we add routing rules to move traffic into the
	# appropriate queue
	/sbin/ip route add via realm 16
	/sbin/ip route add via realm 17
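
One doubt I have about the above: the route classifier only tags
packets that match a realm during the routing decision, so traffic to
hosts on the local subnet may never pick up realm 17 at all. A
fallback I've been considering is classifying on source address with a
u32 filter instead (the address below is just a placeholder for the
real second-subnet address):

```shell
# Hypothetical alternative: classify by source IP rather than routing
# realm, so even locally-switched traffic from the throttled address
# lands in class 1:2. 10.0.49.253/32 is a placeholder address.
/sbin/tc filter add dev eth0 parent 1:0 protocol ip prio 100 \
	u32 match ip src 10.0.49.253/32 flowid 1:2
```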

Now if I run traffic, in this case by FTP'ing a 13MB binary file from
the server interface 48.253 to a machine on the same subnet (i.e. not
via the router), I get a data rate of 1MByte/sec. If I then get the
same file off the 49.253 interface, I get a data rate of
870KBytes/sec. I'm certain traffic to/from the 49.253 interface is
hitting the CBQ filter, because if I run

	/sbin/tc -s class show dev eth0

I get:

	class cbq 1: root rate 100Mbit (bounded,isolated) prio
	 Sent 1415052 bytes 12092 pkts (dropped 0, overlimits 0)
	  borrowed 0 overactions 0 avgidle 62 undertime 0
	class cbq 1:1 parent 1: rate 100Mbit prio 5
	 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
	  borrowed 0 overactions 0 avgidle 62 undertime 0
	class cbq 1:2 parent 1: rate 128Kbit (bounded,isolated) prio 5
	 Sent 745440 bytes 13800 pkts (dropped 0, overlimits 28378)
	  borrowed 0 overactions 2330 avgidle 60325 undertime 0
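
As a rough sanity check, a bounded 128Kbit class should cap a transfer
at about 16KBytes/sec, nowhere near the 870KBytes/sec I measured:

```shell
# A bounded 128 Kbit/s class should top out around rate/8 KBytes/sec.
rate_kbit=128
echo "$((rate_kbit / 8)) KBytes/sec"
```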

Clear as mud, I know, but I'm open to suggestions...