[erlang-questions] writing a delay loop without now()
Fri Feb 20 22:10:56 CET 2009
James Hague <> wrote:
>>>I have a graphical application that runs at a fixed frame rate (60
>>>frames per second). The processing takes up part of the frame, then I
>>>need to make sure that roughly 16,667 microseconds have passed before
>> milliseconds, right?
>No, microseconds. It's more or less 16 milliseconds between updates.
That's what I meant - I just had a temporary mixup between the weird
custom of sprinkling commas over integers and the equally weird custom
of using a comma for the decimal point.
>I could live with good millisecond accuracy.
That you should definitely be able to get from statistics(wall_clock) -
if you don't, I'd have to say that there is something wrong with your
OS/HW and not with Erlang. See the snippet below, from a tight loop that
gets the result of gettimeofday() via os:cmd() every 10 calls to
statistics() (the actual call to statistics() amounts to some 0.2% of
the execution time; the rest is os:cmd() and io:format()).
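
(The snippet itself didn't survive in this archive copy; a minimal
sketch of the same measurement - with GNU date's "+%s.%N" standing in
for the original gettimeofday() call - could look like this:)

  %% Tight loop: statistics(wall_clock) every iteration, OS clock
  %% via os:cmd/1 every 10th iteration for comparison.
  check(0) -> ok;
  check(N) ->
      {_Total, Delta} = statistics(wall_clock),  % ms since last call
      case N rem 10 of
          0 -> io:format("os: ~s  wall_clock delta: ~p ms~n",
                         [string:strip(os:cmd("date +%s.%N"), right, $\n),
                          Delta]);
          _ -> ok
      end,
      check(N - 1).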
>>But per my previous message, that should only happen if you manage to
>>call it more than once per microsecond - unless gettimeofday() returns
>On my MacBook, using a now/0 loop for a delay runs about 25% faster
>than it should, so I'm clearly getting more than one call in per
>microsecond, at least sometimes.
I can't follow that conclusion at all. If you don't know how long a
call takes, even on average, I can't see how you can deduce it from the
assumption that the problems with your loop logic are caused by
microsecond-bumping in now/0 - that sounds like circular reasoning to
me.
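
For reference, the failure mode being assumed would look something like
this (a hypothetical sketch of a now/0 busy-wait, not James's actual
code): since now/0 guarantees strictly increasing values, calling it
more than once per real microsecond bumps the returned time forward,
the apparent clock runs fast, and the loop exits early.

  %% Current time in microseconds, from now/0.
  now_us() ->
      {Mega, Sec, Micro} = erlang:now(),
      (Mega * 1000000 + Sec) * 1000000 + Micro.

  %% Spin until now/0 claims Micros microseconds have passed.
  delay(Micros) ->
      busy_wait(now_us() + Micros).

  busy_wait(Deadline) ->
      case now_us() >= Deadline of
          true  -> ok;
          false -> busy_wait(Deadline)
      end.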
On the other hand, laptops frequently do things like varying the CPU
frequency, which can confuse applications that try to measure time.
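
If millisecond accuracy really is enough, a frame loop paced entirely
without now/0 could be sketched like this (frame/0 is a hypothetical
placeholder for the per-frame work; the 16 ms budget approximates
60 Hz):

  frame_loop() ->
      statistics(wall_clock),          % reset the since-last-call delta
      frame(),                         % this frame's work
      {_Total, WorkMs} = statistics(wall_clock),
      Remaining = 16 - WorkMs,         % ms left of the frame budget
      if
          Remaining > 0 -> receive after Remaining -> ok end;
          true -> ok
      end,
      frame_loop().

Note that receive..after sleeps at least Remaining milliseconds, so
each frame can run a bit long; compensating with the cumulative total
from statistics(wall_clock) would tighten that up.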