Getting locks and sharing: was RE: Getting concurrency

Ulf Wiger ulf@REDACTED
Sun Jun 19 01:13:18 CEST 2005


On 2005-06-17 09:38:37, Joe Armstrong (AL/EAB)
<joe.armstrong@REDACTED> wrote:

>> The most complex way is to make the emulator multi-CPU aware,
>> with all the hassle involved. Just the distributed garbage
>> collection could probably support a couple of Ph.D. theses.
>> On the plus side, load balancing is automatic.
>>
>
> We shouldn't do this :-)

I disagree.


> IMHO the single most important design "boundary" is the inter-processor
> boundary. Today there are 3 different RPCs:

Which, disregarding microseconds, are semantically different.

> 	RPC1 - between two processes in the same Erlang node

Here, we are 'guaranteed' that messages cannot be dropped.

> 	RPC2 - between two processes in different nodes on a single CPU

Here, it is actually possible that communication may be
lost (e.g. due to overload and supervision timeout on the
link).

> 	RPC3 - between two processes in different nodes on different CPUs

Like RPC2, but the possible failure modes increase, and
the CPUs are not coupled by a common operating system as
they are in RPC2 (where there might only be one CPU).

>
> The times for these are *vastly* different: RPC1 is in microseconds,
> RPC3 is in milliseconds.

Not quite. You should be able to get round-trip delays of around
100 us for an RPC with fast boxes and a fast network, and with
network technology like SCI or VIA, round-trips under 100 us
should be possible. That is still vastly different from RPC1
(around a factor of 5-10x), but usually the more significant
difference is in the failure modes.
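
To make that failure-mode difference concrete, here is a minimal
sketch of my own (not from Joe's post; the module name and the
ping/pong protocol are made up) contrasting a local round-trip with
a remote call that must handle both timeouts and unreachable nodes:

  %% rpc_sketch.erl - illustrative only; local_call/1 assumes a peer
  %% process that answers {From, Ref, ping} with {Ref, pong}.
  -module(rpc_sketch).
  -export([local_call/1, remote_call/2]).

  %% RPC1: same node. The request cannot be silently dropped by the
  %% runtime; the only failure to handle is the peer dying.
  local_call(Pid) ->
      MRef = erlang:monitor(process, Pid),
      Pid ! {self(), MRef, ping},
      receive
          {MRef, pong} ->
              erlang:demonitor(MRef, [flush]),
              pong;
          {'DOWN', MRef, process, Pid, Reason} ->
              {error, Reason}
      end.

  %% RPC2/RPC3: another node. The call may also time out or find the
  %% node unreachable, so those outcomes must be handled explicitly.
  remote_call(Node, Timeout) ->
      case rpc:call(Node, erlang, node, [], Timeout) of
          {badrpc, Reason} -> {error, Reason};   %% nodedown, timeout, ...
          Result           -> {ok, Result}
      end.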


> When we have multi-core CPUs we'll have:
>
> 	RPC4 - between two processes in different nodes in different
>     CPUs on the same chip
>
> Will RPC4 behave like RPC2? Nobody knows.

Semantically, no, but the most dramatic difference is one
that should always be accounted for: suddenly, with 2 CPUs,
you can have true concurrency.


>
> As regards fault-tolerance we also get a number of different
> failure properties depending upon where the processes are located.

Indeed.


> My view is that the programmer should know about these boundaries
> and design their application with respect to the boundaries - we
> should not abstract away from the location of the processes.

I am convinced (as I'm sure you are too) that one should _never_
write code based on the assumption that the nature of the current
reduction counting scheduler and the single-CPU architecture
somehow guarantees serialization and mutual exclusion up to a
point. Subtle changes to the code, or changes in the implementation
of the scheduler, might trigger latent timing bugs.
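
As a concrete (hypothetical) illustration of such a latent bug,
consider a read-modify-write on a shared ETS table. Under the current
single-CPU, reduction-counting scheduler it will almost always appear
to work; with true concurrency two callers can read the same old
value and one update is silently lost:

  %% Racy: relies on nothing running between the lookup and the insert.
  incr_racy(Tab, Key) ->
      [{Key, N}] = ets:lookup(Tab, Key),
      ets:insert(Tab, {Key, N + 1}).

  %% Safe: the increment is atomic in the table (or, alternatively,
  %% funnel all writes through a single owning process).
  incr_safe(Tab, Key) ->
      ets:update_counter(Tab, Key, 1).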

One should always design as if the scheduler offers true
concurrency. Assuming this, using multiple CPUs within a
single node should not change any vital semantics or
failure modes. And it is not obvious that message passing
within the node will become drastically more expensive.
IMHO, introducing a shared or hybrid heap would have a much
more profound effect on the cost of message passing (i.e. going
from proportional to the message size to being constant).
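
For the "proportional to the message size" point, here is a small
measurement sketch (my own, and only indicative) using an echo
process; the round-trip cost grows with the size of the list, since
it is copied into the echo process's heap and back:

  %% Module fragment; echo/0 and roundtrip/1 are hypothetical helpers.
  echo() ->
      receive
          {From, Msg} -> From ! Msg, echo()
      end.

  %% Returns the round-trip time in microseconds for a list of Size
  %% integers sent to and echoed back from a fresh process.
  roundtrip(Size) ->
      Pid = spawn(fun echo/0),
      Msg = lists:seq(1, Size),
      {Us, ok} = timer:tc(fun() ->
                     Pid ! {self(), Msg},
                     receive Msg -> ok end
                 end),
      exit(Pid, kill),
      Us.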

I'm not against introducing ways to explicitly control load
balancing and co-location, but I think the default should be
that the Erlang runtime system takes advantage of multiple
CPUs if they are there. This should be transparent to the
program.

/Uffe



