[erlang-questions] lowering jitter: best practices?

Chandru <>
Wed May 27 08:48:09 CEST 2015


On 26 May 2015 at 19:14, Felix Gallo <> wrote:

> For a game server, I have a large number (~3000) of concurrent gen_fsm
> processes which need to send out UDP packets every 30 milliseconds.  Each
> individual UDP-sending function can be presumed to be very short (<1ms) and
> not dominate or otherwise affect the scheduling.
>
> I've tested a number of different scenarios:
>
> 1.  Each gen_fsm schedules itself via the gen_fsm timeout mechanism.  This
> is the easiest and most natural way, but jitter can be +7ms in the 95%
> case, and I occasionally see unusual events (e.g. timeout event happens
> when only 25-28ms of real time have elapsed, despite 30ms being scheduled).
>
>

Here are a few ideas, obviously all untested.

* How about setting the first timer to fire at 10-15ms, then adjusting each
subsequent timer interval based on the observed drift?
* Let all the gen_fsm processes insert the packets they have to send into an
ordered_set ETS table (keyed by send time), and have a single process check
the table for packets that are due and send them?
* Do you have any control over the receiving end? Can some smoothing of this
jitter be done there?
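
The drift-adjustment idea could be sketched roughly like this. This is a
hypothetical standalone module (not a gen_fsm); it assumes OTP 18's
erlang:monotonic_time/1, and computes each deadline from the original start
time so per-tick errors don't accumulate:

```erlang
-module(tick).
-export([start/1, next_delay/2]).

-define(INTERVAL_MS, 30).

%% Delay needed to hit Target given the current time Now; never negative.
next_delay(Target, Now) ->
    max(0, Target - Now).

%% Spawn a process that ticks N times, 30ms apart, compensating for drift.
start(N) ->
    spawn(fun() -> loop(erlang:monotonic_time(millisecond), N) end).

loop(_Target, 0) ->
    ok;
loop(Target, N) ->
    %% Each deadline is derived from the previous one, not from "now",
    %% so a late wakeup shortens the next sleep instead of shifting all
    %% subsequent ticks.
    Next = Target + ?INTERVAL_MS,
    Delay = next_delay(Next, erlang:monotonic_time(millisecond)),
    receive after Delay -> ok end,
    send_packet(),
    loop(Next, N - 1).

send_packet() ->
    ok.  %% placeholder for the real gen_udp:send/4 call
```

The same shape works inside a gen_fsm by returning the computed Delay as the
state timeout instead of a fixed 30.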
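
The ordered_set idea might look like the following. A hypothetical dispatch
module: each gen_fsm enqueues {SendTime, Packet}, and a single sender pops
everything that is due in time order (the make_ref/0 in the key keeps entries
with identical timestamps distinct). The real sender would also need its own
wakeup timer, which this sketch omits:

```erlang
-module(dispatch).
-export([init/0, enqueue/3, due/2]).

%% Create the shared queue; ordered_set keeps keys sorted by {SendAt, Ref}.
init() ->
    ets:new(out_queue, [ordered_set, public]).

%% Called by each gen_fsm: queue Packet to be sent at time SendAt.
enqueue(Tab, SendAt, Packet) ->
    ets:insert(Tab, {{SendAt, make_ref()}, Packet}).

%% Called by the single sender: pop all packets due at or before Now,
%% in timestamp order.
due(Tab, Now) ->
    due(Tab, Now, ets:first(Tab), []).

due(_Tab, _Now, '$end_of_table', Acc) ->
    lists:reverse(Acc);
due(Tab, Now, {SendAt, _Ref} = Key, Acc) when SendAt =< Now ->
    [{_, Packet}] = ets:lookup(Tab, Key),
    ets:delete(Tab, Key),
    due(Tab, Now, ets:first(Tab), [Packet | Acc]);
due(_Tab, _Now, _Key, Acc) ->
    lists:reverse(Acc).
```

This trades 3000 timers for one, at the cost of funnelling all sends through
a single process, so that process must stay short per packet.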

Chandru
