ds patches for statistics
>Do you have any numbers/findings you would like to share?
Not at the moment, but we'll start tests soon.
>- it is a bad idea to use the general skb->stamp in case
>some other code (from ingress towards egress) uses it. Why don't you
>use your own variable on the skb?
Yes, but my idea was that by taking the timestamp in netif_rx I get a
more accurate value: does it disturb the other code?
Moreover, if I create my own variable, then I have to call two functions
in netif_rx (do_gettimeofday and get_fast_time) for each packet. I was
thinking that this could be too heavy. Isn't it?
>- the rate measurement: You need to use some averaging algorithm (look at
>the policer code) to average over long periods of time. In your scheme,
>the "last valid b/w utilization" will be reported. Think of a situation
>where you have a long period of inactivity (i.e no packet arrivals);
>invoke tc and it instead reports the last one stored in the kernel.
Yes, I'm now working to fix it.
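One standard way to fix the "last valid b/w utilization" problem is an exponentially weighted moving average, updated every interval even when no packets arrived, so idle intervals decay the estimate toward zero instead of freezing the last value. A minimal sketch, with names of my own choosing (the policer code's internals may differ):

```c
#include <assert.h>

/* Illustrative EWMA rate estimator.  Feeding it a zero sample for an
 * idle interval pulls the reported rate down, so a long quiet period
 * is no longer reported as the last measured bandwidth. */
struct rate_est {
    double avg_bps;  /* smoothed estimate */
    double alpha;    /* smoothing weight, 0 < alpha <= 1 */
};

void rate_update(struct rate_est *e, double interval_bps)
{
    /* interval_bps is 0 for an idle interval, which decays the average */
    e->avg_bps = (1.0 - e->alpha) * e->avg_bps + e->alpha * interval_bps;
}
```

Smaller `alpha` averages over longer periods; the timer-driven update (rather than a packet-driven one) is what makes the idle case behave.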
>- could your tables in the kernel be overwritten at some point with
>invalid data? E.g. in the situation where there are long periods of
>inactivity?
I can't see how the tables could be overwritten. Maybe I misunderstood
your point.
>- In your scheme, you have to change every qdisc that you want to
>participate in the "profiling". You might want to sync with Ole
><firstname.lastname@example.org> -- I am not sure if he is on this mailing list.
>Ole has a nice scheme with a classful qdisc which you "wrap around"
>a qdisc that you want to "profile". I don't think he had the concept of
>profiling rates and latencies; this is something you could add. His was
>more of a tcpdump/sniffer-tracer for qdiscs, i.e. it would show you
>traces of the path that a packet takes as it wiggles its way through.
Yes. In this first step I've considered only cbq, since it is the only
one we're using in our tests.
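The "wrap around" idea described above, which would avoid changing every qdisc, can be sketched as a wrapper that owns an inner queue and adds profiling counters around its enqueue/dequeue path. All names here are mine for illustration; the real kernel Qdisc_ops interface differs.

```c
#include <assert.h>

#define QLEN 16

/* a trivial FIFO standing in for the wrapped ("inner") qdisc */
struct inner_q {
    int buf[QLEN];
    int head, tail, len;
};

/* the profiling wrapper: statistics live here, not in the inner qdisc,
 * so any qdisc could be wrapped without modification */
struct prof_q {
    struct inner_q inner;
    long enq_count, deq_count;
};

int prof_enqueue(struct prof_q *q, int pkt)
{
    struct inner_q *in = &q->inner;
    if (in->len == QLEN)
        return -1;                 /* drop: inner queue full */
    in->buf[in->tail] = pkt;
    in->tail = (in->tail + 1) % QLEN;
    in->len++;
    q->enq_count++;                /* profiling happens in the wrapper */
    return 0;
}

int prof_dequeue(struct prof_q *q, int *pkt)
{
    struct inner_q *in = &q->inner;
    if (in->len == 0)
        return -1;
    *pkt = in->buf[in->head];
    in->head = (in->head + 1) % QLEN;
    in->len--;
    q->deq_count++;
    return 0;
}
```

The same hooks could record timestamps instead of counters to measure per-qdisc latency, which is the profiling extension suggested above.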