Interesting benchmark performance (was RE: Pitiful benchmark performance)
Mon Jun 18 15:15:58 CEST 2001
> It says to me rather that Erlang is optimised for applications which are
> sufficiently complex that at least 2000 reductions are required following
> each external input or timeout. It seems to have little to do with just
> general stuff going on, and that for many applications Erlang would be
> otherwise superb at (web server, Mnesia based Online Transaction Processing
> server) there is a significant penalty.
Maybe the time has come to adaptively adjust the number of reductions
before yielding (i.e., rescheduling)?
Here is a portable, straightforward approach: instead of compiling in
a constant number of reductions, the number of remaining reductions
should be loaded from the process structure when checking whether to
yield.
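A minimal sketch of that idea in C, assuming a hypothetical process
structure (the names below are illustrative, not the actual BEAM
source):

```c
#include <assert.h>

/* Hypothetical per-process state: the budget is a field of the
 * process structure rather than a compile-time constant, so it can
 * differ between processes and change at runtime. */
struct process {
    int reds_per_yield;   /* budget granted each time it is scheduled in */
    int reds_left;        /* reductions remaining before the next yield */
};

/* Called when the process is scheduled in: load the budget
 * from the process structure. */
static void schedule_in(struct process *p) {
    p->reds_left = p->reds_per_yield;
}

/* Called once per reduction; returns nonzero when the process
 * has used up its budget and must yield. */
static int count_reduction(struct process *p) {
    return --p->reds_left <= 0;
}
```

The cost per reduction is the same as with a constant (a decrement
and a test); only the reload at schedule-in time changes.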
The interesting part, then, is deciding how many reductions you get
when you're scheduled. A simple approach is to permit the system to
set the reductions-per-yield at runtime (per process or for the entire
node), by using a BIF. But this must be supplemented by some way to
measure activity, so that the decision can be made systematically.
(Alternatively, one could take the approach that reductions-per-yield
is set _only_ inside the runtime, to avoid messing around with BIFs.)
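One conceivable activity measure is how much of its budget a process
actually used before it blocked. A hedged sketch of such a policy
(the function, thresholds, and doubling/halving rule are all
assumptions for illustration, not a proposal from the runtime):

```c
#include <assert.h>

/* Adapt a process's reductions-per-yield budget from observed use:
 * grow it when the process keeps exhausting its slice, shrink it
 * when the process mostly blocks early, and clamp to sane bounds. */
static int adapt_budget(int budget, int reds_used, int min, int max) {
    if (reds_used >= budget)
        budget *= 2;               /* slice exhausted: CPU-bound, grow */
    else if (reds_used < budget / 4)
        budget /= 2;               /* mostly idle: shrink toward min */
    if (budget < min) budget = min;
    if (budget > max) budget = max;
    return budget;
}
```

The same function could back either variant above: the BIF route
would let user code set the bounds, while the runtime-only route
would call it internally after each activation.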
A second, orthogonal, topic to consider is how well a "reduction"
corresponds to a time tick. A reduction can vary quite a bit in the
amount of time it requires, because of BIFs: today, there are ways to
bump reductions when a long-running BIF begins.
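The bumping idea can be sketched as charging the caller extra
reductions in proportion to the BIF's work, so one expensive BIF is
not accounted as a single cheap reduction (the cost factor here is an
arbitrary illustration):

```c
#include <assert.h>

/* Sketch: a long-running BIF "bumps" the reduction count by charging
 * its caller extra reductions proportional to the work it did.
 * The factor of one reduction per 10 work units is hypothetical. */
static int bump_reds(int reds_left, int work_units) {
    int cost = work_units / 10 + 1;  /* at least one reduction */
    return reds_left - cost;
}
```

A process that calls such a BIF thus reaches its yield point sooner,
which keeps reductions a somewhat better proxy for time.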
Another approach to yielding might be to measure the available time
slice in hardware cycles rather than procedure calls. All desktop
processors have cycle counters, for example, so the approach is
viable on a wide range of systems. Unfortunately, the counters are
often somewhat messy to work with.
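A sketch of what cycle-based slicing might look like, assuming x86
and the `__rdtsc()` intrinsic (the fallback branch, which fakes
cycles with a monotonic clock, already hints at the messiness):

```c
#include <stdint.h>
#include <assert.h>

#if defined(__x86_64__) || defined(__i386__)
#include <x86intrin.h>
/* Read the x86 time-stamp counter directly. */
static uint64_t cycles(void) { return __rdtsc(); }
#else
#include <time.h>
/* Other architectures need their own counter instruction; as a
 * stand-in, use nanoseconds from a monotonic clock as pseudo-cycles. */
static uint64_t cycles(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
}
#endif

static uint64_t slice_end;

/* Grant the running process a time slice measured in cycles. */
static void schedule_in_cycles(uint64_t slice_cycles) {
    slice_end = cycles() + slice_cycles;
}

/* Checked at the same points where a reduction would be counted. */
static int time_slice_expired(void) {
    return cycles() >= slice_end;
}
```

Note the checks are still placed at procedure-call boundaries; only
the unit of accounting changes from calls to elapsed cycles.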