[erlang-questions] shared data areas (was Re: [erlang-questions] OOP in Erlang)
Nicholas Frechette
zeno490@REDACTED
Sat Aug 14 02:23:36 CEST 2010
Co-locating processes on a shared heap on a single scheduler might be dangerous.
What if a process migrates to another core/scheduler? Should that be
allowed? Blocked? Should you pay the cost of a largely uncontested lock
(Azul claims it is almost free)? What if one of those processes forks
another one? Should the child belong to the flock and be bound to that
scheduler, or should it be allowed to migrate to another scheduler? Should
the message a process is receiving dictate the scheduler it runs on? (i.e.,
on receive, check where the message originates from, heap-wise, and force
the process to reschedule on the proper scheduler so it can access that
memory.) Etc.
I think it will be quite hard to do transparently (sharing message data
between processes without copying), mainly because it might be hard to tell
data that is a message from data that isn't. If a process sends a message and
then dies, you'll have to either allocate the message from the shared heap
(which requires somehow knowing in advance, or copying), or copy it in case
the sender's heap gets garbage collected/freed (or prevent that from
happening, which could have a detrimental effect on the system if the
receiving process has a very large queue and takes a very long time before
consuming said message).
This could be mitigated by adding a special syntax for message creation
where messages would be created on a shared heap (or a special syntax for
shared-heap allocation); perhaps something like {Foo} = Whatever (or Foo }=
Whatever.), where 'Foo' would be copied to the shared heap. Binaries are
already reference-counted, so it might not be too bad to implement.
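The binary case, in fact, already works this way in the VM: a binary larger
than 64 bytes (an implementation detail, not part of the language) lives on a
shared, reference-counted binary area, so sending it only copies a small
handle, not the payload. A quick sketch of what that buys you:

```erlang
%% Sending a large binary does not copy its payload: the receiver only
%% gets a small reference into the shared, refcounted binary area.
-module(binshare).
-export([demo/0]).

demo() ->
    Big = binary:copy(<<0>>, 10000000),   %% ~10 MB, stored off-heap
    Self = self(),
    Pid = spawn(fun() ->
                    receive {blob, B} -> Self ! {size, byte_size(B)} end
                end),
    Pid ! {blob, Big},                    %% cheap send: reference only
    receive {size, N} -> N end.
```

The send above costs roughly the same whether Big is 1 KB or 1 GB, which is
exactly the behaviour the hypothetical shared-heap syntax would generalize to
arbitrary terms.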
IMO, though, the whole issue of sending a large data structure as a message
is largely a non-issue. You can easily wrap it in a process, allocate it from
that process's heap, and just pass the pid around. Sure, you have to pay an
overhead for accessing said data through messages, but if you are clever,
that isn't too bad. If you are going to communicate across nodes, you don't
have much of a choice anyway: either you copy to the other node or you pass
the pid and pay the price of cross-node communication. The right choice will
largely be determined by your overall design. How much data can you afford to
transfer? How often are you likely to query it? What if the data node goes
down? Etc.
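The wrap-it-in-a-process idea can be sketched in a few lines (module, message
shapes, and the use of a map as the payload are mine, just for illustration):

```erlang
%% A tiny "data holder": one process owns the large structure on its
%% own heap; clients query it by pid instead of copying the whole term.
-module(holder).
-export([start/1, get/2, stop/1]).

start(BigData) ->
    spawn(fun() -> loop(BigData) end).

loop(Data) ->
    receive
        {get, From, Key} ->
            %% Only the looked-up value is copied to the caller,
            %% never the whole structure.
            From ! {self(), maps:find(Key, Data)},
            loop(Data);
        stop ->
            ok
    end.

get(Pid, Key) ->
    Pid ! {get, self(), Key},
    receive {Pid, Result} -> Result end.

stop(Pid) -> Pid ! stop.
```

A caller does holder:get(Pid, some_key) and pays only for copying the value
it asked for; the pid itself is trivially cheap to pass around, locally or
across nodes.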
The only case requiring shared memory I've ever run into (programming with or
without Erlang) is heavy number crunching: you either need to access large
amounts of data concurrently, or you need to access non-trivial amounts of
data from many processes/threads. If that is what you need to do, I would
advise against using Erlang (as much as I like it). You'll be better off
making a C++/C#/Java node for those parts, or a driver. Chances are that if
you need to do number crunching or process large amounts of data, you'll care
about cache alignment and the like, and Erlang isn't the tool for that
particular job.
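One common way to hand the heavy work to native code is an external program
driven through a port; here is a minimal sketch (the "./cruncher" executable
and the 4-byte length framing are assumptions for the example, not anything
prescribed):

```erlang
%% Minimal port sketch: ship the input to an external OS process
%% ("./cruncher" is a made-up executable) and receive the result
%% back as a message. {packet, 4} length-prefixes each exchange.
-module(crunch).
-export([run/1]).

run(Input) when is_binary(Input) ->
    Port = open_port({spawn_executable, "./cruncher"},
                     [binary, {packet, 4}, exit_status]),
    Port ! {self(), {command, Input}},
    receive
        {Port, {data, Result}}      -> {ok, Result};
        {Port, {exit_status, Code}} -> {error, Code}
    end.
```

The external process crashes in isolation, so a bug in the C++ side cannot
corrupt the VM; a linked-in driver or a C node trades that safety for lower
call overhead.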
I guess the only other viable alternative would be to have all processes
allocate from a shared heap and implement a concurrent GC. That would make
sharing messages quite easy, as you would be guaranteed that a message has a
live reference somewhere (the sender, the receiver, or both). This would,
however, impact just about every system out there and could be quite risky...
Then again, concurrent GCs are the hot new thing nowadays, and Erlang is in
dire need of modernizing.
Anyhow, my two cents. I like the current Erlang design and probably wouldn't
change it much if I could.
Nicholas
On Thu, Aug 12, 2010 at 4:01 AM, Ulf Wiger
<ulf.wiger@REDACTED> wrote:
> Fred Hebert wrote:
>
>>
>> On Wed, Aug 11, 2010 at 8:44 AM, Jesper Louis Andersen <
>> jesper.louis.andersen@REDACTED> wrote:
>>
>>
>> There is one idea here I have been toying with. One problem of Erlangs
>> memory model is that sending a large datastructure as a capability to
>> another process, several megabytes in size, will mean a copy. In the
>> default VM setup that is. But if you had a region into which it got
>> allocated, then that region could safely be sent under a proof that
>> the original process will not touch it anymore. [...]
>>
>> One interesting point of *always* copying data structures is that you need
>> to plan for small messages (as far as possible) whether you are on a single
>> node or in a distributed setting. Moving up from a [partially] shared memory
>> model to a fully isolated one when going distributed is likely going to have
>> its share of performance problems and might create a dissonance between
>> "what is acceptable locally" and "what is acceptable when distributed".
>>
>
> So a number of different variations on this theme have been tried
> in the past and discussed as future extensions:
>
> - Robert Virding used to have his own implementation called VEE.
> It had a global shared heap and incremental GC.
>
> - A 'shared heap' option in BEAM once had experimental status.
> It passed messages by reference. Note that in neither of these
> cases is there any change in semantics - conceptually, msg
> passing was still copying. The main problem with this version
> was that it still used the old copying garbage collector. The
> idea was to implement a reference-counting GC, but for various
> reasons, it didn't happen. When multicore started becoming
> interesting, the shared-heap version was left behind.
>
> - Hybrid heap was an evolution of 'shared heap', where only
> data sent in messages were put on a global heap. In the first
> implementation, data was copied to the global heap on send
> (unless already there) instead of being copied to the receiver's
> heap. This implementation was also broken by SMP.
>
> - Lately, some exploration has gone into allowing a set of
> processes to share the same heap. This could be done in (at
> least) two ways:
> a) either co-locate all processes in the group on the same
> scheduler. This would ensure mutual exclusion and mainly
> serve to reduce message-passing cost in a process group.
> b) allow processes in a group to run on different schedulers,
> using mutexes to protect accesses to the heap data. This
> could allow for parallel processing, but the locking
> granularity would either be heap-level or ...very subtle,
> I guess. I favour option (a).
>
> I think it is appropriate to use under-the-cover tricks to
> speed up message passing as much as possible, as long as the
> semantics stay the same. In other words, in all the above cases,
> isolation has been a given, and conceptually, messages are
> still copied.
>
> BR,
> Ulf W
> --
> Ulf Wiger
> CTO, Erlang Solutions Ltd, formerly Erlang Training & Consulting Ltd
> http://www.erlang-solutions.com
>
> ________________________________________________________________
> erlang-questions (at) erlang.org mailing list.
> See http://www.erlang.org/faq.html
> To unsubscribe; mailto:erlang-questions-unsubscribe@REDACTED
>
>