[erlang-questions] Keeping massive concurrency when interfacing with C
Peer Stritzinger
peerst@REDACTED
Wed Oct 5 09:24:19 CEST 2011
On Wed, Oct 5, 2011 at 12:17 AM, Richard O'Keefe <ok@REDACTED> wrote:
>
> On 5/10/2011, at 5:37 AM, Peer Stritzinger wrote:
> [how about a linear algebra library built on top of binaries, not entirely
> unlike NumPy]
>
> Hasn't something like this already been done? I'm sure I remember reading
> about it.
I have to admit I was not aware of this. OTOH it does not seem to be
available; I can't find anything except the paper and EEP 7, which is the
foreign function interface to the external number-crunching libraries
they invented.
However, I was thinking along different lines. The approach of "HPTC
with Erlang" (and also NumPy) is to slap a big chunk of proven
numerical routines on as an external library. That is the way to go
if you want to do serious number crunching, since it's quite hard to
develop trusted and efficient numerical routines.
The price you pay for the slapped-on heavyweight library is that it
usually doesn't scale up to the number of processes Erlang can handle.
Hence the need for the impedance adaptation I mentioned: keep a pool
of numerical workers large enough to keep the cores busy but not so
large that the OS gets upset, and use work queues to adapt the 20k
Erlang processes to that pool. BTW @John: this would be one solution
for your problem.
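A minimal sketch of what I mean by that impedance adaptation (module,
function and message names here are made up for illustration): a manager
process keeps a job queue and one worker per scheduler thread, so any
number of lightweight Erlang processes can submit work without flooding
the OS with heavyweight computations.

    -module(num_pool).
    -export([start/0, submit/2]).
    -export([init/0, worker/1]).

    %% One worker per scheduler thread keeps the cores busy.
    start() ->
        spawn(?MODULE, init, []).

    init() ->
        N = erlang:system_info(schedulers_online),
        [spawn_link(?MODULE, worker, [self()]) || _ <- lists:seq(1, N)],
        loop(queue:new(), []).

    %% Called from any of the 20k Erlang processes; blocks the caller
    %% until its job has been through the queue.
    submit(Pool, Fun) when is_function(Fun, 0) ->
        Ref = make_ref(),
        Pool ! {job, {self(), Ref}, Fun},
        receive {Ref, Result} -> Result end.

    %% Manager: hand jobs to idle workers, queue the rest.
    loop(Jobs, Idle) ->
        receive
            {job, From, Fun} when Idle =/= [] ->
                [W | Rest] = Idle,
                W ! {run, From, Fun},
                loop(Jobs, Rest);
            {job, From, Fun} ->
                loop(queue:in({From, Fun}, Jobs), Idle);
            {ready, W} ->
                case queue:out(Jobs) of
                    {{value, {From, Fun}}, Rest} ->
                        W ! {run, From, Fun},
                        loop(Rest, Idle);
                    {empty, _} ->
                        loop(Jobs, [W | Idle])
                end
        end.

    %% Worker: announce itself, run one job, repeat.
    worker(Pool) ->
        Pool ! {ready, self()},
        receive
            {run, {Caller, Ref}, Fun} ->
                Caller ! {Ref, Fun()},
                worker(Pool)
        end.

A caller then just does num_pool:submit(Pool, Fun) and stays a plain
lightweight Erlang process while it waits.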
What I was suggesting is a more integrated and lightweight way to make
some number crunching available: the suggested n-dim matrix type
(e.g. a record containing the metadata and a binary for the data),
combined with some NIFs on it that speed up the parts where Erlang
is not so fast, keeping in mind not to do too much work per NIF call
so as not to block the scheduler.
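To make that concrete, here is a minimal sketch of the data type I have
in mind (record and function names are just for illustration): the
metadata lives in the record, the numbers in one flat binary of native
64-bit floats.

    -module(ndarray).
    -export([from_rows/1]).

    -record(ndarray, {shape   :: [pos_integer()],  % e.g. [Rows, Cols]
                      strides :: [pos_integer()],  % element offset per dimension
                      data    :: binary()}).       % flat buffer of native doubles

    %% Build a 2-dim array from a list of equal-length rows, pure Erlang.
    from_rows(Rows = [First | _]) ->
        Cols = length(First),
        Data = << <<X/float-native>> || Row <- Rows, X <- Row >>,
        #ndarray{shape   = [length(Rows), Cols],
                 strides = [Cols, 1],
                 data    = Data}.

With shape and strides kept separately, slicing is mostly a matter of
recomputing the metadata without copying the binary, and the slow spots
can later be swapped for NIFs over the same record.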
This is for the use cases where some numerical work is needed, but
real-time responsiveness and Erlang process counts still matter.
The use case I have, for example, is some neural network stuff combined
with a lot of symbolic computing to prepare the input. It's
embarrassingly parallel and needs only some simple vector-times-matrix
operations and n-dim array slicing.
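For the vector-times-matrix part, a pure-Erlang version over the flat
binary sketched above could look roughly like this (again just a
sketch, not tuned, and using the hypothetical #ndarray record from
before):

    %% Row-major (Rows x Cols) matrix stored in one binary of native
    %% doubles, times a column vector given as a binary of Cols doubles.
    mat_vec(#ndarray{shape = [Rows, Cols], data = M}, Vec)
      when byte_size(Vec) =:= Cols * 8 ->
        << <<(dot(binary:part(M, R * Cols * 8, Cols * 8), Vec))/float-native>>
           || R <- lists:seq(0, Rows - 1) >>.

    %% Dot product of two binaries of native doubles of equal length.
    dot(A, B) ->
        As = [X || <<X/float-native>> <= A],
        Bs = [Y || <<Y/float-native>> <= B],
        lists:sum([X * Y || {X, Y} <- lists:zip(As, Bs)]).

If profiling shows this is too slow, the same function can be backed by
a NIF without changing the callers.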
For really heavy numerical stuff I think the best way is to do it in
the systems that are built for this and interface them to Erlang
somehow, with ports or sockets. Or try to get the code behind the HPTC
paper Jachym mentioned released.
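The port variant is pleasantly small on the Erlang side; a rough sketch,
where "./solver" and the term protocol are pure assumptions:

    %% Talk to an external number-crunching program over stdin/stdout,
    %% using 4-byte length-prefixed packets and term_to_binary framing.
    solve(Request) ->
        Port = open_port({spawn_executable, "./solver"},
                         [binary, {packet, 4}, exit_status]),
        Port ! {self(), {command, term_to_binary(Request)}},
        receive
            {Port, {data, Reply}} ->
                port_close(Port),
                binary_to_term(Reply);
            {Port, {exit_status, Status}} ->
                {error, {solver_exited, Status}}
        after 60000 ->
            port_close(Port),
            {error, timeout}
        end.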
For interfacing with BLAS and its ilk, some more native Erlang
numerical capabilities would also be nice to have. Since those
libraries also use a binary-buffer-plus-metadata approach, it would
not be too hard to interface with them efficiently.
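On the Erlang side such a BLAS binding would be little more than a NIF
stub; a minimal sketch ("blas_nif" and dgemv/2 are names I made up),
where the C side would hand the binary's buffer straight to cblas_dgemv
after an enif_inspect_binary:

    -module(blas_nif).
    -export([dgemv/2]).
    -on_load(init/0).

    %% Load the (hypothetical) C NIF library; the shared object wraps
    %% cblas_dgemv and reads the matrix/vector binaries in place.
    init() ->
        erlang:load_nif("./blas_nif", 0).

    %% Replaced by the C implementation when the NIF loads.
    dgemv(_Matrix, _Vector) ->
        erlang:nif_error(nif_not_loaded).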