Getting locks and sharing: was RE: Getting concurrency

Joe Armstrong (AL/EAB) joe.armstrong@REDACTED
Fri Jun 17 09:38:37 CEST 2005

> -----Original Message-----
> From: owner-erlang-questions@REDACTED
> [mailto:owner-erlang-questions@REDACTED]On Behalf Of Vlad Dumitrescu
> Sent: den 15 juni 2005 20:33
> To: erlang-questions@REDACTED
> Subject: Re: Getting locks and sharing: was RE: Getting concurrency
> From: "Vance Shipley" <vances@REDACTED>
> > I'm not sure what to do after that.  Do you make the "extra" nodes
> > hidden and try and load share behind the scenes or do you just leave
> > it at that and expect people to use normal erlang distribution?
> Hi,
> My two cents on this, very sketchy.
> The most complex way is to make the emulator multi-CPU aware, with all the
> hassle involved. Just the distributed garbage collection could probably
> support a couple of Ph.D. theses. On the plus side, load balancing is
> automatic.

We shouldn't do this :-)

> The simplest way is to just start several nodes, let them know about each
> other and leave it to the application developers to take advantage of the
> fact.


> In order for this to be useful, some kind of framework is needed to
> handle the housekeeping, like load balancing and a new global process
> registration. Applications must be multi-CPU aware in order to use it.

Is this right? I think not.

IMHO the single most important design "boundary" is the inter-processor
boundary. Today there are three different RPCs:

	RPC1 - between two processes in the same Erlang node
	RPC2 - between two processes in different nodes on a single CPU
	RPC3 - between two processes in different nodes on different CPUs

The times for these are *vastly* different: RPC1 takes microseconds, while RPC3 takes milliseconds or more, depending on the network.

When we have multi-core CPUs we'll have:

	RPC4 - between two processes in different nodes on different CPUs on the same chip

Will RPC4 behave like RPC2? Nobody knows.
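The difference between RPC1 and RPC3 is easy to see for yourself. A minimal sketch (the module name is invented, and `timer:tc/1` assumes a reasonably modern OTP) that times a same-node message round trip against an `rpc:call/4` to a remote node:

```erlang
-module(rpc_timing).
-export([local_roundtrip/0, remote_roundtrip/1, echo/0]).

%% A trivial echo server: replies to {From, Msg} with {self(), Msg}.
echo() ->
    receive
        {From, Msg} -> From ! {self(), Msg}, echo();
        stop        -> ok
    end.

%% RPC1: message round trip between two processes on the same node.
local_roundtrip() ->
    Pid = spawn(fun echo/0),
    Self = self(),
    {Micros, ok} = timer:tc(fun() ->
        Pid ! {Self, ping},
        receive {Pid, ping} -> ok end
    end),
    Pid ! stop,
    Micros.

%% RPC3: round trip to another node, which must already be connected
%% (e.g. started with -name and reached via net_adm:ping/1).
remote_roundtrip(Node) ->
    {Micros, _} = timer:tc(rpc, call, [Node, erlang, node, []]),
    Micros.
```

On a typical machine `local_roundtrip/0` reports single-digit microseconds, while the remote variant is orders of magnitude slower over a real network.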

As regards fault-tolerance we also get a number of different failure properties
depending upon where the processes are located.

My view is that the programmer should know about these boundaries and design their
application with respect to the boundaries - we should not abstract away from the  
location of the processes.

Now there *is* a case for library support for allocating processes, etc., onto nodes
*after* this design has been performed - IF the design leads to the conclusion that
process location is unimportant THEN we can use libraries that allocate processes
depending upon load, etc.
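Such a library could be quite small. A hedged sketch (the module name and the choice of run-queue length as the load metric are my own assumptions, not an existing OTP API) of load-dependent process placement:

```erlang
-module(balanced_spawn).
-export([spawn_balanced/3]).

%% Return the run-queue length of a node, or 'infinity' if unreachable.
%% (In Erlang term order, integers sort before atoms, so unreachable
%% nodes lose the lists:min/1 comparison below.)
load(Node) ->
    case rpc:call(Node, erlang, statistics, [run_queue]) of
        N when is_integer(N) -> N;
        _                    -> infinity
    end.

%% Spawn {M,F,A} on whichever known node currently reports the lowest
%% load; with no other nodes connected this degenerates to a local spawn.
spawn_balanced(M, F, A) ->
    Nodes = [node() | nodes()],
    {_, Best} = lists:min([{load(N), N} || N <- Nodes]),
    spawn(Best, M, F, A).
```

Note that this is exactly the kind of library whose consequences must be visible to the programmer: a process placed by load may end up on the far side of an RPC3 boundary.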

In any case the layers must be *very* clear and the consequences of using a
library must be clear to the programmer. 

> In-between I see another way, that I like most: let the framework be hidden
> behind the regular bifs, making the whole mechanism completely transparent.
> Then all applications could use it. Since it's integrated in the vm, it
> could do things better than an Erlang-only solution.
> What I think is needed at the most basic level is:
>     - a spawn bif that does load balancing behind the scenes (but looks
> like a local spawn)
>     - a way to let the node cluster look as just one node to the outside
> (including the global registration service), but still be able to identify
> each other.
> And I think it should be enough to get started... There are plenty of
> useful things to add later.
> What do you think?

I think we should "just" make OTP/Erlang run on single nodes
(and on single CPUs in a multi-CPU chip) << and I expect even
"just" to be difficult :-) >>

Then we should add a few new bifs that are "multi-CPU aware".

Then we should write some OTP behaviours to abstract the multi-CPU programming patterns.
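One building block for such behaviours already exists: the cluster-wide registration Vlad asks for is what the `global` module provides today. A minimal sketch (module and service names are invented) of a service registered under one name for the whole cluster:

```erlang
-module(cluster_reg).
-export([start/0, call/1, loop/0]).

%% Spawn a service and register it cluster-wide under one name.
start() ->
    Pid = spawn(fun loop/0),
    yes = global:register_name(my_service, Pid),
    Pid.

%% Call the service from any connected node; global resolves the name
%% no matter which node the process actually lives on.
call(Msg) ->
    Pid = global:whereis_name(my_service),
    Pid ! {self(), Msg},
    receive {reply, R} -> R end.

%% Echo loop: reply with whatever message was received.
loop() ->
    receive
        {From, Msg} -> From ! {reply, Msg}, loop()
    end.
```

A multi-CPU-aware behaviour could layer placement policy on top of this, while still leaving the node boundary visible to whoever needs to reason about failures.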


> regards,
> Vlad
