
average queue calculation in GRED and RED



Hi All,
   After going through papers on RED, I found that GRED is a low-pass filter implementation, and that the average queue size is calculated as

   avg = avg + w * (q - avg)

where w is the EWMA weight. In the Linux DiffServ implementation, w is computed by tc and Wlog is passed to the kernel, where w = 1 / 2^Wlog.
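
For reference, here is a minimal user-space sketch of that textbook update (my own illustration, not kernel code), with w derived from Wlog:

    #include <stdio.h>

    /* Textbook RED EWMA, avg = avg + w*(q - avg), with w = 1/2^Wlog.
     * Floating point for clarity; the kernel works in integers. */
    static double ewma_update(double avg, double backlog, unsigned int Wlog)
    {
        double w = 1.0 / (double)(1u << Wlog);
        return avg + w * (backlog - avg);
    }

    int main(void)
    {
        double avg = 0.0;
        /* Constant backlog of 1000 bytes: avg should converge to 1000. */
        for (int i = 0; i < 100; i++)
            avg = ewma_update(avg, 1000.0, 3); /* Wlog = 3, so w = 1/8 */
        printf("avg = %f\n", avg);
        return 0;
    }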
   The equation that controls the average queue size when the queue is non-empty is coded in Linux as

   q->qave += q->backlog - (q->qave >> q->Wlog);

in file sch_gred.c, function gred_enqueue(), where q->backlog is the current queue size. This is not the same as the equation in Sally Floyd's paper. Why has it been coded this way and not as

   q->qave += (q->backlog - q->qave) >> q->Wlog;

as per the theory?
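
To see the difference concretely, here is a small user-space comparison of the two forms (my own sketch, not kernel code). One possible reading, which I would like confirmed, is that the kernel keeps qave scaled up by 2^Wlog, in which case the first form is a fixed-point version of the second:

    #include <stdio.h>

    int main(void)
    {
        long kernel_qave = 0;       /* kernel form, possibly scaled by 2^Wlog */
        long theory_qave = 0;       /* update as written in the paper */
        const unsigned int Wlog = 3;
        const long backlog = 1000;  /* constant queue size, in bytes */

        for (int i = 0; i < 200; i++) {
            kernel_qave += backlog - (kernel_qave >> Wlog);
            theory_qave += (backlog - theory_qave) >> Wlog;
        }
        /* If kernel_qave is read as avg << Wlog, both track ~1000. */
        printf("kernel qave raw      = %ld\n", kernel_qave);
        printf("kernel qave >> Wlog  = %ld\n", kernel_qave >> Wlog);
        printf("theory qave          = %ld\n", theory_qave);
        return 0;
    }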

   Moreover, in the afcbq examples the min queue size is given as 15 Kb and the max queue size as 45 Kb. In the gred_change() function these thresholds are modified as

   q->qth_min = ctl->qth_min << ctl->Wlog;
   q->qth_max = ctl->qth_max << ctl->Wlog;

i.e. both the min and max thresholds are scaled up (in the afcbq case, to min = 45 Kb and max = 135 Kb). The queue limit remains at 60 Kb, so the range in which packets are dropped with some probability is between 45 Kb and 60 Kb, because once the queue length exceeds 60 Kb gred_enqueue() drops the packet unconditionally.
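
A minimal sketch of that threshold scaling (user-space, with an assumed Wlog value, since tc computes the real one):

    #include <stdio.h>

    int main(void)
    {
        /* Thresholds from the afcbq example, in bytes. */
        unsigned long qth_min = 15 * 1024;
        unsigned long qth_max = 45 * 1024;
        unsigned int Wlog = 2;  /* assumed value; tc derives the real one */

        /* What gred_change() stores, scaled by 2^Wlog. */
        printf("stored qth_min = %lu bytes\n", qth_min << Wlog);
        printf("stored qth_max = %lu bytes\n", qth_max << Wlog);
        return 0;
    }

If qave itself is kept at the same 2^Wlog scale, the comparisons against these thresholds might still be consistent, but that is exactly what I would like clarified.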

   Why has this been coded this way? Would it not deviate from the basic GRED behaviour?

Thanks,
Manoj