Re: Strange results using Linux DiffServ
On Tuesday 30 January 2001 20:55, Buhler, William wrote:
> I am trying to set up DiffServ on Linux to provide the EF, AF and BE
> queues. I have configured the edge routers to mark the packets for the
> appropriate flows, which is working fine. I have checked this with
> tcpdump; all the proper DS marks are being set for the various flows.
> Right now I am doing some tests on the throughput for each flow and am
> getting some strange results. The links between the core and edge routers
> are wireless connections each with 3Mb/s bandwidth. I have set up an EF
> flow and allocated 1.5Mb/s to it. This flow is bounded and isolated, as
> is called for by EF.
> For the four AF flows, I have allocated 256, 512, 1024, 1536Kb/s to
> AF1,2,3,4 respectively all of which are bounded.
> For the BE flow I have allocated 1Mb/s which is allowed to borrow any
> available bandwidth.
> I have set it up so that by default all traffic is marked for the BE
> flow, so that I have to explicitly define the traffic that goes through
> each of the EF and AF flows.
> I want to set this up so that the BE traffic can utilize any available
> bandwidth (up to the 3Mb/s). However, if traffic goes across the EF or
> AF flows, then the BE traffic will be throttled back, giving priority to
> the EF and AF flows according to their priority.
> I have set up a test bed with four machines: the core router, one edge
> router, and two workstations that will transfer information between each
> other. Each is connected via 10Mb/s Ethernet links, with all systems
> running Linux. I have applied the DS-8 patches to the 2.2.17 kernel on
> the core and edge routers.
> In testing this I see that the traffic is being routed to the appropriate
> queues and that the BE traffic will utilize the unused bandwidth.
> The problem that I am seeing is that the EF flow will exceed its 1.5Mb/s
> rate, which, if I understand this correctly, should be capped at
> 1.5Mb/s. I see the same thing with the AF flows.
> When I test each of the various flows individually, they seem constant.
> However, if I send traffic over multiple flows, the transfer rates for
> all flows increase dramatically. For example, if I send data over the EF
> flow I see transfer rates of around 2.8Mb/s (1.3Mb/s over the bounded
> rate). Data over the BE flow has transfer rates of 3.4Mb/s.
> Then, when I transfer data over both the EF and BE flows at the same
> time, the transfer rate for the EF flow goes up to 5Mb/s and the BE flow
> is at 3.1Mb/s. This is really strange, since the rates for traffic over
> a single flow (with no other traffic over the other flows) are closer to
> the bounded rates. Has anyone seen this happen?
Yes, I have. Probably the Linux CBQ implementation is buggy.
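For reference, a bounded/isolated CBQ layout of the kind described above would look roughly like the sketch below under the DS patches. The device name, handles/classids, allot/avpkt values, and the omitted dsmark/tcindex classifier setup are my assumptions, not the original poster's actual configuration:

```shell
# Sketch only -- parameter values are illustrative assumptions.
DEV=eth1

# CBQ root qdisc at the 3Mb/s wireless link rate
tc qdisc add dev $DEV root handle 1: cbq bandwidth 3Mbit avpkt 1000

# EF: 1.5Mb/s, bounded and isolated -- may neither borrow nor lend
tc class add dev $DEV parent 1: classid 1:10 cbq bandwidth 3Mbit \
    rate 1500Kbit allot 1514 avpkt 1000 prio 1 bounded isolated

# AF1: 256Kb/s, bounded (AF2-AF4 analogous at 512/1024/1536Kb/s)
tc class add dev $DEV parent 1: classid 1:20 cbq bandwidth 3Mbit \
    rate 256Kbit allot 1514 avpkt 1000 prio 2 bounded

# BE: 1Mb/s, neither bounded nor isolated, so it can borrow spare bandwidth
tc class add dev $DEV parent 1: classid 1:30 cbq bandwidth 3Mbit \
    rate 1Mbit allot 1514 avpkt 1000 prio 8
```

With a setup like this, a `bounded` class should never exceed its configured rate, which is why the measured 2.8Mb/s and 5Mb/s figures point at a CBQ bug rather than a configuration mistake.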
I've also seen other anomalies. For example, it is possible for a packet to
wait indefinitely in the queue, only to be sent when another packet arrives
at the system. I found this out when some tests resulted in all but one
packet out of sequence and a peak jitter of more than 100 seconds(!). The
last packet from the flow of a previous test was still pending, so the
first packet received would have a sequence number higher than all the
others, and thus would be the only one considered in sequence. On the other
hand, that stale packet would take all the time between the two tests to
traverse the network, whereas the first packet of the second test would
take less than a second, so the measured peak jitter would be enormous.
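The arithmetic behind that measurement can be sketched like this: per-packet transit time is recv minus send, and peak jitter is the largest difference between consecutive transit times, so one stale packet dominates. The log format and the timestamps are made up for illustration:

```shell
# Hypothetical log lines: seq send_ts recv_ts (seconds).
# Packet 3 sat in the queue and arrived ~100 s late.
printf '%s\n' \
    '1 0.0 0.1' \
    '2 1.0 1.1' \
    '3 2.0 102.5' |
awk '{
    t = $3 - $2                      # one-way transit time
    if (NR > 1) {
        d = t - prev; if (d < 0) d = -d
        if (d > peak) peak = d       # peak jitter = max |delta transit|
    }
    prev = t
}
END { printf "peak jitter: %.3f s\n", peak }'
# → peak jitter: 100.400 s
```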