[erlang-questions] writing a delay loop without now()

Per Hedeland per@REDACTED
Fri Feb 20 01:46:49 CET 2009


James Hague <james.hague@REDACTED> wrote:
>
>I have a graphical application that runs at a fixed frame rate (60
>frames per second).  The processing takes up part of the frame, then I
>need to make sure that roughly 16,667 microseconds have passed before

milliseconds, right? (Reading the comma as a thousands separator, 16,667
microseconds is 16.667 ms -- which is indeed the 60 fps frame period.)

>displaying the next frame.  timer:sleep/1 is VERY coarse and inconsistent
>across platforms.  On some platforms timer:sleep(1) actually sleeps for
>about 32,000 microseconds.  Using a timeout value with receive gives the
>same results (that's actually how timer:sleep is implemented).

Yep, they depend on the OS system timer interrupt interval.
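For anyone unfamiliar with how such a sleep is written, it is essentially
just a receive with nothing but a timeout clause -- a minimal sketch
(module and function names are mine, not from OTP):

```erlang
%% Minimal sketch of a millisecond sleep in Erlang: a receive that can
%% never match a message, only time out. It is subject to exactly the
%% same OS timer granularity discussed above.
-module(sleep_sketch).
-export([sleep/1]).

sleep(Ms) when is_integer(Ms), Ms >= 0 ->
    receive
    after Ms ->
        ok
    end.
```

So any approach built on receive timeouts inherits the OS tick resolution.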

>erlang:statistics(wall_clock) is just as coarse.

That seems very strange though - at least I get the millisecond
resolution that is the best you can possibly get given the definition.
It is basically just rounded numbers from gettimeofday(). Of course
there may be platforms that even to this day provide no better than
multi-millisecond resolution in gettimeofday() or the equivalent - I
tested on FreeBSD and Linux, as usual I have no idea about Windows. But
obviously you can't get microseconds from it.
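To check the resolution yourself: statistics(wall_clock) returns
{TotalMs, MsSinceLastCall}, so wrapping a piece of work in two calls gives
the elapsed milliseconds. A sketch (module name is mine):

```erlang
%% Sketch: measure elapsed wall-clock time around a fun using
%% erlang:statistics(wall_clock). The second element of the returned
%% tuple is the milliseconds since the previous call -- resolution is
%% whatever the OS delivers, and never better than 1 ms.
-module(wc_sketch).
-export([elapsed_ms/1]).

elapsed_ms(Fun) ->
    {_Total0, _} = statistics(wall_clock),      % reset the delta
    Fun(),
    {_Total1, SinceLast} = statistics(wall_clock),
    SinceLast.
```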

>So to get around this, I resorted to just spinning in a tight loop using
>now/0 and timer:now_diff/2.  But now/0 isn't designed to be used like
>this, because it always returns increasing values.

But per my previous message, that should only happen if you manage to
call it more than once per microsecond - unless gettimeofday() returns
bizarre numbers. With a minimal loop

tc1(0) -> ok;
tc1(N) -> now(), tc1(N-1).

I manage to get just above 2 microseconds per call, and perfect
agreement between timer:tc/3 (which uses now/0 of course) and pre/post
statistics(wall_clock) calls (on FreeBSD, 3GHz Intel CPU). I.e. no
opportunity for artificial microsecond-bumping, and none occurring.
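The measurement above can be reproduced with timer:tc/3, which returns
{Microseconds, Result} -- something like this (module name is mine):

```erlang
%% Sketch: time N calls to now/0 with timer:tc/3 and report the average
%% cost in microseconds per call, as in the measurement described above.
%% (now/0 is what this thread is about; it is what timer:tc/3 itself
%% uses for its timestamps.)
-module(tc_sketch).
-export([per_call_us/1, tc1/1]).

tc1(0) -> ok;
tc1(N) -> now(), tc1(N - 1).

per_call_us(N) when N > 0 ->
    {Micros, ok} = timer:tc(?MODULE, tc1, [N]),
    Micros / N.
```

Comparing that average against a pre/post statistics(wall_clock) delta is
how you can tell whether artificial microsecond-bumping is happening.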

If you really manage more than one call per microsecond, consider
yourself lucky, but don't call it so often! :-) Or there is something
else wrong...
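For completeness, the busy-wait loop being described looks roughly like
this (a sketch, module and function names are mine):

```erlang
%% Sketch of the spin-until-deadline loop this thread is about: busy-wait
%% until TargetUs microseconds have passed since Start, using now/0 and
%% timer:now_diff/2 (which returns the difference of two now/0 values in
%% microseconds). It burns a full CPU core while waiting, and the
%% monotonicity caveat above applies if now/0 is called too often.
-module(spin_sketch).
-export([delay_us/1]).

delay_us(TargetUs) when is_integer(TargetUs), TargetUs >= 0 ->
    Start = now(),
    spin(Start, TargetUs).

spin(Start, TargetUs) ->
    case timer:now_diff(now(), Start) >= TargetUs of
        true  -> ok;
        false -> spin(Start, TargetUs)
    end.
```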

>This would have been so easy on an 8-bit system from 25 years ago, so
>surely there's a way to get raw microsecond time values in Erlang?

Linked-in driver...

--Per


