
[Libevent-users] Timer Optimization Needed?



Hi,


      I am developing an application whose timer usage is as follows (a simplified sketch of the setup follows the list).
1) Around 125 timers, each with a 10-second timeout, are added to the event loop per second per process, one per unit of work.
2) If the work completes within 10 seconds, we remove that timer from the loop.
3) If the work does not complete within 10 seconds, the timers pile up: 125 x 10 = 1250 is the worst-case number of timers any single event loop/process will have pending at any point in time.

4) There can be multiple instances of the same process with the same behavior. (This should not matter, since each instance has its own event loop; I mention it only for completeness.)
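For concreteness, here is roughly what happens per unit of work today. This is a minimal sketch; struct work, start_work(), and work_timeout_cb() are illustrative names, not my real code:

    #include <event2/event.h>
    #include <stdlib.h>

    struct work {
        struct event *timeout_ev;
        /* per-work data; different for every unit of work */
    };

    /* Fires only if the work has not finished within 10 seconds. */
    static void work_timeout_cb(evutil_socket_t fd, short what, void *arg)
    {
        struct work *w = arg;
        (void)fd; (void)what;
        /* handle the timed-out work, then clean up */
        event_free(w->timeout_ev);
        free(w);
    }

    static void start_work(struct event_base *base)
    {
        struct timeval ten_sec = { 10, 0 };
        struct work *w = calloc(1, sizeof(*w));

        w->timeout_ev = evtimer_new(base, work_timeout_cb, w);
        evtimer_add(w->timeout_ev, &ten_sec);
        /* kick off the actual work here; when it completes within
           10 seconds we event_free(w->timeout_ev), which also removes
           the pending timeout, and free(w) */
    }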

 

I would like to know whether, in this scenario, I should use the "Timer Optimizations" suggested in the libevent documentation. Please note that the data associated with each unit of work is different, so the timeout callback is invoked with different data each time.
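If I understand the documentation correctly, the optimization in question is the common-timeout mechanism, and it would only change where the timeval comes from; everything else, including the per-work callback data, would stay the same. A hedged sketch, reusing the base and w names from the sketch above and assuming event_base_init_common_timeout() is the optimization the docs mean:

    /* Once, after creating the event_base: */
    struct timeval ten_sec = { 10, 0 };
    const struct timeval *common_10s =
        event_base_init_common_timeout(base, &ten_sec);

    /* Then, for every timer, pass the shared timeval instead: */
    evtimer_add(w->timeout_ev, common_10s);

As I understand it, this puts all 10-second timeouts in a single queue instead of the general timeout heap, so adding and removing them becomes O(1). Is 1250 pending timers per loop enough for that to matter?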


Thanks for the help.