[erlang-questions] Erlang Event Loop Question

Scott Lystig Fritchie fritchie@REDACTED
Tue Mar 30 19:41:38 CEST 2010


Evans, Matthew <mevans@REDACTED> wrote:

me> I'm hoping that this got missed because I sent it on a Friday
me> afternoon.

As far as I can tell, looking at my own archive of erlang-questions
traffic and the Google Groups archive, your question still doesn't have
any followups.  I'd been on the road until last night, both for
work-work and for the Erlang Factory conference, so this followup is
over a full week later than Matthew probably would have wished.

Assuming you've already been down the paths suggested below...

... time to invoke help from the Ericsson OTP team.  Hello?  :-) The
OTP folks who'd travelled to California are probably still jetlagged
from their return home.

Looking at the archive for past postings, it looks like you're already
aware of the end/bottom of the "ErlDrvEntry" structure.  Fiddling with
those items "correctly" is suppose to help play nicely with the SMP
scheduler(*).  If you're still in doubt, posting some code excerpts to
the list would be helpful.

me> If someone knows how to benchmark this it'll be great (I tried a
me> simple echo - server type application, but that'll include network
me> slowness too)...

If you haven't tried turning the SMP scheduler on & off entirely to see
if behavior changes significantly, it's probably worth trying.
Also, assuming you have a build that supports poll/epoll/kqueue, then
enabling & disabling via "+K enable/disable" could also be useful.

me> I'm wondering about the inner workings of the Erlang event loop. Is
me> the event loop single threaded?

me> We have a service (the VM has epoll enabled) that is doing a lot of
me> TCP (HTTP) ingesting and parsing of large files (25MB of data or
me> more per second). It is also doing a lot of IPC, both to other
me> Erlang nodes, and via a proprietary transport to C++
me> applications. We currently support two methods to ingest these
me> files: straight TCP (gen_tcp), and a specialized linked-in driver
me> (which passes the epoll socket to the Erlang event loop with
me> driver_select and passes the driver-created socket to gen_tcp; we
me> also have a pool of threads for this driver).

If your gen_tcp code is reading a single binary (or a very small number
of binaries) rather than lists, then passing it/them into a driver as
binaries should be quite quick.  Using lists with that much data will be
slower... though not as slow as one might think.(**)
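
For illustration only (the function, host, and port names here are made
up), the binary-mode path I have in mind looks something like this:
receive in binary mode and hand the binary straight to the driver port,
so large payloads are typically passed by reference rather than being
flattened into lists first:

    %% Sketch: Host/TcpPort/DriverPort are placeholders for your setup.
    fetch_and_forward(Host, TcpPort, DriverPort) ->
        {ok, Sock} = gen_tcp:connect(Host, TcpPort,
                                     [binary, {packet, raw}, {active, false}]),
        {ok, Bin} = gen_tcp:recv(Sock, 0),
        true = port_command(DriverPort, Bin),
        gen_tcp:close(Sock).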

me> My concern is that under load the event loop is getting very busy
me> resulting in time-sensitive IPC messages getting delayed/slowed
me> down.

If your driver is inadvertently blocking the VM's scheduler(s), then
you're in for a serious problem.  If using a large number of schedulers
helps, e.g. "+S 32" on a dual- or quad-core box, then your driver might
be the culprit.
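
One crude way to see whether the schedulers are being starved (just a
sketch, not a proper tool): sample the run queue length while the
system is under load.  A queue that stays large while the CPUs look
idle hints that something, e.g. a driver callback, is blocking the
schedulers.

    %% Print the total run-queue length once a second.
    watch_run_queue() ->
        io:format("run_queue: ~p~n", [erlang:statistics(run_queue)]),
        timer:sleep(1000),
        watch_run_queue().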

The "percept" app can be helpful in finding causes of scheduling bursts
and lulls.
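
In case it saves someone a trip to the docs, the basic percept workflow
is roughly this (the file name is just an example):

    percept:profile("ingest.dat"),     %% start profiling to a file
    %% ... exercise the ingest path here ...
    percept:stop_profile(),
    percept:analyze("ingest.dat"),     %% load the trace into percept's database
    percept:start_webserver(8888).     %% then browse the result at that port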

If you haven't subscribed to system events via the
erlang:system_monitor/1,2 BIF, then now is the time to do it.  If the
TCP connection between node A and node B is congested, then *any and all
processes* on A that try to send a message to B will be
blocked/suspended.  The events from the system_monitor are the only
way I know of to see if/when that's happening to you.
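
A minimal setup looks something like the sketch below; the long_gc and
large_heap thresholds are just guesses you'd tune for your system.  The
busy_port and busy_dist_port events are the ones that show a sender
being suspended on a congested port or distribution connection.

    start_monitor() ->
        Pid = spawn(fun loop/0),
        erlang:system_monitor(Pid, [busy_port, busy_dist_port,
                                    {long_gc, 100}, {large_heap, 10000000}]),
        Pid.

    loop() ->
        receive
            {monitor, SusPid, Event, Info} ->
                error_logger:info_msg("system_monitor: ~p ~p ~p~n",
                                      [SusPid, Event, Info]),
                loop()
        end.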

Other suggestions are more speculation than anything else.

If you're using the I/O worker pool that's created with the "erl +A"
flag, then you may be bitten by the port -> thread mapping algorithm
that the VM uses: IIRC it is still possible to be forced to wait for
certain I/O worker pool threads while others are completely idle.
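
You can at least confirm how many async threads the node was started
with:

    erlang:system_info(thread_pool_size).   % number of "+A" async threads (0 = none)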

The Berkeley DB driver for Erlang, developed by Chris Newcombe, created
& managed its own Pthread pool to avoid interacting badly with the VM's
Pthreads, both scheduler threads and I/O worker threads.

-Scott

(*) I don't have direct experience with them, sorry, so I can't offer
much advice beyond what the docs say.  But others on the list certainly
do.

(**) Way back in my Sendmail, Inc. days, using the Erlang/OTP R6B
release, we had an Erlang app using "inefficient", list-based I/O for
bulk data transfers that performed on par with a hand-coded C program
that took pains to be efficient both in terms of memory management and
data copying.  The Erlang VM has improved enormously since then.

