[erlang-questions] Retrieving "semi-constant" data from a function versus Mnesia

Richard A. O'Keefe <>
Thu May 14 04:24:47 CEST 2015


On 14/05/2015, at 10:58 am, Jay Nelson <> wrote:

> ROK wrote:
> 
>> One of them is or was erts_find_export_entry(module, function, arity),
>> surprise surprise, which looks the triple up in a hash table, so it’s
>> fairly clear where the time is going.
> 
> It’s funny because I recall from the CLOS "Art of the Metaobject Protocol”
> that this was a high-speed solution to the method dispatch problem. The
> memoization was bragging rights and what made it feasible.

Memoization in dynamic function calling is old technology.
As I recall, Smalltalk-80 used a cache for dynamic dispatch,
and later implementations used distributed "polymorphic inline
caches".

Simple inline caching would turn

    Module:func(E1, ..., En)
into
    static atomic { mod, ptr } cache;   /* one cache per call site */
    atomic {
        if (Module == cache.mod) {      /* fast path: cache hit */
            p = cache.ptr;
        } else {                        /* miss: full lookup, refill */
            p = lookup(Module, func, n);
            cache.mod = Module;
            cache.ptr = p;
        }
    }
    (*p)(E1, ..., En);

One form of polymorphic inline caching would yield

    static atomic { mod, ptr } cache[N];   /* N slots per call site */
    q = &cache[hash(Module)%N];
    atomic {
        if (Module == q->mod) {            /* fast path: cache hit */
            p = q->ptr;
        } else {                           /* miss: full lookup, refill slot */
            p = lookup(Module, func, n);
            q->mod = Module;
            q->ptr = p;
        }
    }
    (*p)(E1, ..., En);

This is typically implemented so that the cache can grow.

Doing this in a lock-free way is a bit tricky, but possible.

My actual point was that since a dynamic call in Erlang
involves at least two C function calls in addition to the
intended Erlang call, its being 6 times slower than a
direct call is not unreasonable.

Oh, and that figure is for emulated code.  I don't know what
HiPE does with dynamic function calls, but it *probably* makes
normal calls faster without touching dynamic ones much.
