[erlang-questions] Executing concurrently

Joe Armstrong erlang@REDACTED
Wed Mar 4 09:11:05 CET 2009


On Wed, Mar 4, 2009 at 12:44 AM, Richard O'Keefe <ok@REDACTED> wrote:
>
> On 4 Mar 2009, at 5:41 am, Joe Armstrong wrote:
> [an implementation of the two-calls-to-one-function-lets-one-
>  proceed-and-fails-the-other request.]
>
> Thanks to the existence of
>  - the self() function and
>  - the process dictionary
> it is possible for the behaviour of a function to depend on
> which process it is invoked within.
>
> Joe's solution has the function invoked in a new process.
> If that satisfies the original poster's need, then I wonder
> what that need might be.
>
> Another approach makes use of locks -- as Joe suggested in his
> message -- and goes roughly like
>
>        f(...) ->
>           magic:acquire(),
>           Result = guts_of_f(...),
>           magic:release(),
>           Result.
>
> where ignoring the stuff about starting and registering
> the magic process,
>
>        acquire() ->
>            magic!{acquire, self()},
>            receive {magic,Outcome} -> ok = Outcome end.
>
>        release() ->
>            magic!{release, self()}.
>
>        magic_available() ->
>            receive
>                {acquire,Pid} -> Pid!{magic,ok}, magic_held(Pid)
>              ; {release,_}   -> exit(protocol_error)
>            end.
>
>        magic_held(Owner) ->
>            receive
>                {release,Owner} -> magic_available()
>              ; {release,_}     -> exit(protocol_error)
>              ; {acquire,Pid}   -> Pid!{magic,fail}, magic_held(Owner)
>            end.
>
> WARNING: this is not only untested code, it is incomplete.
> Also, instead of the magic process exiting, we really want
> the client process to get an exception, but I'm too lazy
> to bother.  And of course you probably want to catch
> exceptions around the call to guts_of_f(...) so that the
> magic token is always released.
>
> In fact, what we have here is nothing other than a mutex
> using trylock() and unlock() instead of the usual
> lock() and unlock(), which makes me wonder why the process
> that doesn't get to run the function should be failed
> rather than just being told to try again later.
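Richard's sketch can be fleshed out into a self-contained, compilable
module (the names trylock and with_lock are mine, not from the thread;
this is one possible realization of the trylock mutex described above,
including a try ... after wrapper so the token is always released):

```erlang
-module(trylock).
-export([start/0, acquire/1, release/1, with_lock/2]).

%% A compilable version of the trylock mutex sketched above.
%% start/0 spawns the lock process; acquire/1 returns ok or fail
%% immediately instead of blocking.

start() -> spawn(fun available/0).

acquire(M) ->
    M ! {acquire, self()},
    receive {M, Outcome} -> Outcome end.    % ok | fail

release(M) ->
    M ! {release, self()},
    ok.

%% Run Fun() with the lock held; the token is released even if
%% Fun() throws (the "always released" point made above).
with_lock(M, Fun) ->
    ok = acquire(M),                        % badmatch crashes the caller on fail
    try Fun() after release(M) end.

available() ->
    receive
        {acquire, Pid} -> Pid ! {self(), ok}, held(Pid);
        {release, _}   -> exit(protocol_error)
    end.

held(Owner) ->
    receive
        {release, Owner} -> available();
        {release, _}     -> exit(protocol_error);
        {acquire, Pid}   -> Pid ! {self(), fail}, held(Owner)
    end.
```

A second acquire while the lock is held gets fail back at once, which
is exactly the trylock() behaviour Richard describes.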
>
> Joe said "this does not seem like good design",
> and if he says it, you'd better believe it.

Thank you - in fact one of my predictions appears to be coming true.

I predicted (due to Amdahl's law) that sequential bottlenecks would be
our next big problem - with 10% sequential code we can go at most ten
times faster (on an infinite number of cores), but we'd need at least
10 cores before this became a problem.

If 10% is sequential and we have 20 cores then we should see problems:
despite the cores, we won't get a ten-times speedup if we have nasty
sequential bottlenecks.
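The arithmetic behind this is just Amdahl's law, speedup = 1/(S +
(1-S)/N) for sequential fraction S on N cores - a minimal sketch (the
module name amdahl is mine, not from the thread):

```erlang
-module(amdahl).
-export([speedup/2]).

%% Amdahl's law: best-case speedup on N cores when a fraction S
%% (0.0 .. 1.0) of the work is inherently sequential.
speedup(S, N) -> 1 / (S + (1 - S) / N).
```

With S = 0.1 and N = 20, speedup(0.1, 20) is roughly 6.9x - well short
of 20x - and as N grows the limit is 1/S = 10x, which is the "at most
ten times faster" figure above.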

So now I'm running on 24 cores and seeing what we think are nasty
sequential bottlenecks.

At a detailed level we must be thinking about ways to isolate things
as much as possible - the rule, I guess, should be "as much sharing as
is essential for the problem and no more".

This is probably going to get messy - I can imagine logarithmic trees
of processes where we previously had a single gen_server. The good
news is that the API to things like gen_server can remain constant,
but the implementation might get a tad complex :-)
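One possible shape for this kind of isolation - a flat hash-sharded
key/value server rather than a full logarithmic tree, with invented
names (sharded, call/2) and a hand-rolled protocol instead of the real
gen_server machinery - might look like:

```erlang
-module(sharded).
-export([start/1, call/2]).

%% Hypothetical sketch: keep a single call/2 API, but fan the state
%% out over N worker processes chosen by hashing the key, so requests
%% for independent keys never serialize on one process.

start(N) ->
    Workers = [spawn(fun() -> loop(#{}) end) || _ <- lists:seq(1, N)],
    list_to_tuple(Workers).

call(Workers, {Key, _} = Req) ->
    %% phash2(Key, Size) returns 0..Size-1; tuples are 1-indexed.
    W = element(erlang:phash2(Key, tuple_size(Workers)) + 1, Workers),
    W ! {self(), Req},
    receive {W, Reply} -> Reply end.

loop(State) ->
    receive
        {From, {Key, {put, Val}}} ->
            From ! {self(), ok},
            loop(State#{Key => Val});
        {From, {Key, get}} ->
            From ! {self(), maps:get(Key, State, undefined)},
            loop(State)
    end.
```

The callers see one flat API, while the sequential bottleneck shrinks
from one process to one process per shard.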

/Joe Armstrong
