[erlang-questions] Ideas for a new Erlang

Christian S chsu79@REDACTED
Wed Jun 25 10:31:01 CEST 2008


On Tue, Jun 24, 2008 at 4:23 PM, Sven-Olof Nyström <svenolof@REDACTED> wrote:
> I have written down some thoughts and ideas on the future development
> of Erlang.

This is a breath of fresh air compared to the constant complaints that only
describe what people do not want, without suggesting replacements.

>  - an alternative to Erlang's selective receive

I have not followed the discussions about very long message queues and
how people end up in that situation. But to me, the problem of slow
selective receive on large mailboxes suggests avoiding the situation in
the first place, rather than optimizing so it doesn't hurt as much. :)

A single process can only work through its mailbox sequentially. If you
have a bottleneck there, the ideal is to throw more processes (and thus
more CPU cores) at it and make it scale!
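As a minimal sketch of what I mean (module and function names are made
up): a dispatcher hands each request round-robin to one of N workers, so
no single mailbox grows long.

-module(fanout).
-export([start/2, dispatch/2]).

%% Spawn N workers plus a dispatcher that spreads requests over them.
start(N, WorkFun) ->
    Workers = [spawn_link(fun() -> worker(WorkFun) end)
               || _ <- lists:seq(1, N)],
    spawn_link(fun() -> dispatcher(Workers, Workers) end).

dispatch(Dispatcher, Request) ->
    Dispatcher ! {work, Request},
    ok.

%% Round-robin: when the current rotation is used up, start over.
dispatcher(All, []) ->
    dispatcher(All, All);
dispatcher(All, [W | Rest]) ->
    receive
        {work, Request} ->
            W ! {work, Request},
            dispatcher(All, Rest)
    end.

worker(WorkFun) ->
    receive
        {work, Request} ->
            WorkFun(Request),
            worker(WorkFun)
    end.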

Another reflection: what would an implementation of gen_server:call
using channels look like? The one-time monitor used today looks very
similar to a one-time channel used for the synchronous reply. It is
almost as if one would want a monitor that is messageable/receivable.

>  - a new mechanism for introducing local variables with a more cleanly
>   defined semantics

I like your thinking here. Very good examples showing how let-forms
make code less succinct.

When it comes to changes to the Erlang syntax, I think the people
building automatic refactoring tools should review them and steer the
design in a direction that makes their job easier, so that we all get
better automatic refactoring tools.

The worst(?) that can happen is that we get a lisp syntax. :)

>  - a mini-language to allow the efficient implementation of low-level
>   algorithms

This is very interesting. I do not understand it fully, but it is very
interesting. You had me at compiling down to the Cell processor's SPEs.

With a language like this, Common Lisp-style macros become even more
interesting. My experience with SIMD performance is that only 20% of the
work is being able to perform the SIMD operations; the remaining 80% is
managing memory accesses so that data already in cache lines is used
efficiently while it is still there.

Approaches such as http://en.wikipedia.org/wiki/Loop_nest_optimization
help here, and it is nice to be able to parameterize the various block
sizes for the specific model and generation of CPU used. When squeezing
MIPS out of a CPU, everything is fair game. C++ programmers typically
use templates to parameterize this.
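Pure Erlang will not show the cache effect, but just to illustrate the
shape of the block-size parameterization (the module and function names
are made up):

-module(tiling).
-export([for_each_tile/3]).

%% Walk an N x N index space in BlockSize x BlockSize tiles, applying
%% Fun(I, J) to every index pair. BlockSize is the knob one would tune
%% per CPU model; here N must be a multiple of BlockSize.
for_each_tile(N, BlockSize, Fun) when N rem BlockSize =:= 0 ->
    Starts = lists:seq(0, N - 1, BlockSize),
    [Fun(I, J) || I0 <- Starts, J0 <- Starts,
                  I  <- lists:seq(I0, I0 + BlockSize - 1),
                  J  <- lists:seq(J0, J0 + BlockSize - 1)],
    ok.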

I can imagine code wanting to generate and compile these small
number-crunchers at runtime, tailored to the conditions of the specific
task at hand.
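For example (a plain-Erlang sketch with made-up names, nowhere near SPE
code generation): build the source of a specialized cruncher as a
string, compile it in memory and load it.

-module(runtime_gen).
-export([make_scaler/1]).

%% Generate, compile and load a tiny module specialized for Factor.
make_scaler(Factor) ->
    Src = lists:flatten(
            io_lib:format(
              "-module(scaler).~n"
              "-export([scale/1]).~n"
              "scale(Xs) -> [X * ~w || X <- Xs].~n",
              [Factor])),
    {ok, Toks, _} = erl_scan:string(Src),
    Forms = split_forms(Toks, [], []),
    {ok, scaler, Bin} = compile:forms(Forms),
    {module, scaler} = code:load_binary(scaler, "scaler.erl", Bin),
    scaler.

%% Split the token stream into forms at each 'dot' token and parse them.
split_forms([{dot, _} = Dot | Rest], Acc, Forms) ->
    {ok, Form} = erl_parse:parse_form(lists:reverse([Dot | Acc])),
    split_forms(Rest, [], [Form | Forms]);
split_forms([Tok | Rest], Acc, Forms) ->
    split_forms(Rest, [Tok | Acc], Forms);
split_forms([], [], Forms) ->
    lists:reverse(Forms).

After runtime_gen:make_scaler(3), a call to scaler:scale([1,2,3])
returns [3,6,9].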


