The Mystery of the Vanishing Message's Dead Lock

Fred Hebert mononcqc@REDACTED
Thu Jul 23 18:18:06 CEST 2020

We've noticed interesting things with RabbitMQ locking up while trying to
log data (with Lager) after massive disconnect storms where thousands of
connections suddenly vanish.
The issue is however not related to RabbitMQ itself.

The set-up for the logging is essentially two gen_events talking to each other:

   - There's a process called rabbit_log_connection_lager_event that just
   receives log messages and forwards them to lager_event; this forward can
   either be sync or async, but has no flow control of its own

   - The sync or async setting is actually handled by its target process (
   lager_event) dynamically setting that value based on overload
   - RabbitMQ's gen_event only has the one handler above in it
   - The lager gen_event has two handlers in it: the lager_console_backend
   and the lager_backend_throttle
   - The sync_notify/async implementation in gen_event is done at a lower
   level that relies on whether all the log handlers' calls have returned or
   not before sending an acknowledgement.
   Any failure is going to be caught at the gen_event level. Do note that
   the synchronous call in gen_event:sync_notify() calls gen_event:rpc(),
   which in turn calls gen:call() with an infinity timeout. There is no
   way to configure this value.
   - There's not any place in there that seems like it could be snooping in
   the mailbox and dropping messages it shouldn't
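
To make the shape of that first forwarding process concrete, here is a
minimal sketch of a gen_event handler doing the same kind of relay. This is
NOT RabbitMQ's actual handler; the module name, state shape, and flag name
are assumptions for illustration only:

```erlang
%% Sketch of the forwarding pattern described above: a gen_event handler
%% that relays every event to lager_event, either asynchronously or
%% synchronously depending on a flag that the target flips under overload.
-module(forwarding_handler_sketch).
-behaviour(gen_event).

-export([init/1, handle_event/2, handle_call/2, handle_info/2,
         terminate/2, code_change/3]).

init(Opts) when is_map(Opts) ->
    {ok, #{async => maps:get(async, Opts, true)}}.

handle_event(Event, State = #{async := true}) ->
    %% Fire-and-forget: returns as soon as the message is sent.
    gen_event:notify(lager_event, Event),
    {ok, State};
handle_event(Event, State = #{async := false}) ->
    %% Blocks until *all* handlers registered in lager_event have run;
    %% internally this ends up in gen:call(..., infinity).
    gen_event:sync_notify(lager_event, Event),
    {ok, State}.

handle_call(_Request, State) -> {ok, ok, State}.
handle_info(_Info, State)    -> {ok, State}.
terminate(_Arg, _State)      -> ok.
code_change(_Old, State, _X) -> {ok, State}.
```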

So here are the observations made on the system:

   - There are 18,000 messages stuck in the RabbitMQ gen_event:
   (rabbit@REDACTED)2> recon:proc_count(message_queue_len, 10).
   - The gen_event is stuck in gen:do_call/4, which has an infinite timeout.
   - By looking at the processes monitored by the RabbitMQ gen_event, I
   find that it's blocked on lager_event (within lager), which has an empty
   mailbox and is just patiently waiting for a message.

So clearly, the sync_notify call was made (hence we see the gen_event:rpc()
call in the stacktrace), but it is not active on the receiving end (which is
waiting to receive something else). The infinity timeout leaves us in a
locked pattern with an ever-growing queue.
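
The blocking side of that call comes down to a receive with no timeout.
A simplified sketch of what gen:do_call/4 amounts to for a local target,
stripped of the remote-node and error-handling details in the real gen.erl:

```erlang
%% The caller monitors the target, sends the request, and then waits
%% with no 'after' clause: if the reply is ever lost or never enqueued,
%% and the target process stays alive, this receive blocks forever
%% while the caller's own mailbox keeps growing.
do_call_sketch(Process, Label, Request) ->
    Mref = erlang:monitor(process, Process),
    Process ! {Label, {self(), Mref}, Request},
    receive
        {Mref, Reply} ->
            erlang:demonitor(Mref, [flush]),
            {ok, Reply};
        {'DOWN', Mref, _, _, Reason} ->
            exit(Reason)
    end.
```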

We have managed to reproduce this twice after hours of work, but currently
don't have living instances suffering the issue.

The two theories I have for now are:

   1. the reply was received but not processed (a previously reported bug
   could be at play, but that bug report's optimization requires a
   conditional that doesn't match the format of gen:call() here)
   2. the reply was "sent" but never actually enqueued in the destination

I am reminded of a problem we had on some older Erlang instances at a
previous job, where a node would suddenly lock up entirely; the pattern
we'd see was the same, but within the group leaders, with io:put_chars
never receiving an IO protocol confirmation that the data had left the
system. The process was stuck waiting for a response which the other
process either already sent or never sent (it was also stuck waiting in
a receive, according to its stacktrace).

At this point I'm not quite sure where to look. It seems that under some
very rare scenarios, message passing isn't quite behaving right. A timeout
in the code would hide that fact and let the system eventually recover, but
the specific pattern at hand in gen_event.erl does not allow for it.
Nothing in the Erlang logic on either side seems to be able to properly
explain the issue.
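
For comparison, here is a hedged sketch of what a bounded variant of that
same call would look like if the timeout were configurable (gen_event does
not expose this today; the function name is made up):

```erlang
%% The only difference from the infinite version is the 'after' clause,
%% which turns a lost reply into an {error, timeout} the caller could
%% log and recover from, instead of deadlocking on it forever.
call_with_timeout(Process, Label, Request, Timeout) ->
    Mref = erlang:monitor(process, Process),
    Process ! {Label, {self(), Mref}, Request},
    receive
        {Mref, Reply} ->
            erlang:demonitor(Mref, [flush]),
            {ok, Reply};
        {'DOWN', Mref, _, _, Reason} ->
            exit(Reason)
    after Timeout ->
        erlang:demonitor(Mref, [flush]),
        {error, timeout}
    end.
```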

I'm also nervous about calling out the VM's message enqueuing (or
dequeuing) mechanism as possibly responsible for this bug, but maybe
someone else has a different reading than mine?


More information about the erlang-questions mailing list