[erlang-questions] Erlang math libraries
Wed May 16 12:01:42 CEST 2007
Matthias > > This sort of stuff is a fair way off Erlang's beaten path.
Jay > That's too bad, many big numerical computations are highly
Jay > parallelizable which seems to be the big sell of erlang.
Jay > Are there reasons these problems are not solved with erlang?
The first big and successful Erlang projects were telecom control
systems. Using Erlang in similar applications is thus fairly
low-risk. Which makes it the beaten path by default.
Attempt at a big picture:
There's a range of ways to "do things in parallel" and they
differ radically in scale. At one extreme you have
parallelism at the instruction level, e.g. a TI 6x DSP executes 8
instructions every clock cycle and doesn't care (1) about data
dependencies. Then you move up through things like hyperthreading
to approaches with multiple cores sharing one on-die L2 cache and
then separate CPUs sharing main memory. After that come shared-bus
systems (blades) and then LAN-coupled approaches (clusters). After
that come even more loosely coupled systems.
You can expect Erlang to be a good candidate for exploiting
hardware which "does things in parallel" for two mostly
independent reasons.
One reason is basically as per the 1977 Backus paper (2), which I
think boils down to "functional programs are more amenable to
fine-grained parallelisation". The sort of parallelisation which,
say, a TI 6x DSP provides. I don't think Erlang even attempts to
exploit this in practice.
Another reason is that Erlang programs are naturally divided
into many largely independent processes. This lets you exploit
the sort of parallelism in the range from multicore to cluster.
Lots of Erlang systems win from this in practice, even ones which
weren't written with such hardware in mind. This is a big
selling point for people writing, say, control systems for
telecom equipment.
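To make the process-per-task style concrete, here is a minimal sketch (the module name pmap and the code are mine, not from the original post) of farming each element of a list out to its own Erlang process and collecting the results; the runtime is then free to schedule those processes across cores or nodes:

```erlang
%% Hypothetical sketch: one spawned process per list element.
%% Each worker sends {Pid, Result} back to the parent, which
%% collects the replies in the original order.
-module(pmap).
-export([pmap/2]).

pmap(F, Xs) ->
    Parent = self(),
    Pids = [spawn(fun() -> Parent ! {self(), F(X)} end) || X <- Xs],
    [receive {Pid, Result} -> Result end || Pid <- Pids].
```

For example, pmap:pmap(fun(X) -> X*X end, [1,2,3]) returns [1,4,9], with the three squarings free to run in parallel. Matching on each Pid in turn keeps the result order deterministic even though the workers finish in any order.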
For people writing numerical applications, the easily exploited
gains from coarse-grained parallelism might be attractive, or
they might not. It's certainly not as clear a win, especially not
when the competition in that area is so stiff.
If anyone has a reference to something, anything, which formalises the
concept of "scale in parallelism", I'd love to know about it. Others
on the list have tried to distinguish between "concurrency" and
"parallelism", but I haven't seen anything which suggests that there's
a widely accepted difference in meaning.
(1) I.e. if two of those 8 instructions write to the same register,
that is your problem, not the CPU's. In practice, the C compiler
takes care of that, but it sure makes single-stepping the machine
code tricky.
(2) I last read this paper years ago. It's possible I've completely
mischaracterised its contents.