[erlang-questions] Erlang VM hanging on node death

Lukas Larsson lukas@REDACTED
Thu Jul 13 09:12:25 CEST 2017


Hello Steve,

On Mon, Jul 10, 2017 at 4:14 PM, Steve Cohen <scohen@REDACTED> wrote:

> Now, when one of our guild servers dies, as expected it generates a large
> number of DOWN messages to the sessions cluster. These messages bog down
> the sessions servers (obviously) while they process them, but when they're
> done processing, distribution appears to be completely broken.
>
>
On Thu, Jul 13, 2017 at 1:10 AM, Steve Cohen <scohen@REDACTED> wrote:

> Here's the sequence of events:
>
> 1. One of our machines was inadvertently shut off, killing all of the
> processes on it
> 2. We immediately saw a drop in CPU across the board on the sessions
> cluster. CPU on the sessions cluster eventually went to zero.
> 3. We were completely unable to use remote console on any of the machines
> in the cluster, and they all needed to be restarted.
>

The two scenarios you are describing seem to contradict each other. First
you talk about the sessions servers being bogged down, but then you say
that the CPU of the sessions cluster went to almost zero. What is it that
I'm missing?

Did you gather any post mortem dumps from these machines? I.e. an
erl_crash.dump or a core dump?
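
(As an aside: if a node is hung but still alive, it is usually possible to
force a dump out of it. On Unix the emulator writes an erl_crash.dump and
terminates when it receives SIGUSR1, and from an attached shell you can do
the same from within Erlang. A minimal sketch, where the slogan string is
just an example:

    %% Forces the VM to write an erl_crash.dump with the given slogan
    %% and then terminate the node.
    erlang:halt("forced dump while node appeared hung").

The ERL_CRASH_DUMP and ERL_CRASH_DUMP_SECONDS environment variables control
where the dump is written and how long the VM may spend writing it.)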

Also, you have not mentioned which version of Erlang/OTP you are using.


> So, to answer your question, we don't know how long it took for down
> messages to be processed, since we didn't have visibility at the time.  We
> suspected a problem with the net_ticktime, but what's confusing to us is
> that the host that went down went down hard, so the DOWN events should have
> been created on the other nodes, not sent across distribution (correct me
> if I'm wrong here).
>

When a TCP connection used for the Erlang distribution is terminated, all
the DOWN messages are (as you say) generated locally.
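
Roughly, for each monitor that points at a process on the lost node, the
local VM itself delivers the DOWN message with reason noconnection; nothing
has to travel over the (already dead) connection. A minimal sketch of what a
monitoring process on a sessions node would see (RemotePid and
handle_lost_session/1 are made-up names):

    %% Monitor a process that lives on another node in the cluster.
    Ref = erlang:monitor(process, RemotePid),
    receive
        {'DOWN', Ref, process, RemotePid, noconnection} ->
            %% Reason 'noconnection' means the distribution connection to
            %% the remote node was lost, not that the process itself exited.
            handle_lost_session(RemotePid)
    end.

With millions of such monitors on a node, all of these messages are
generated and queued locally at roughly the same time when the connection
goes down.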


> Also, my intuition is that processing DOWN messages would cause CPU usage
> on the cluster to go up, but we saw the exact opposite.
>
>
With the power-off of the machine, are you sure that the TCP layer caught
the shutdown? If it did not, the next fail-safe is the net_ticktime.
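
You can check what the surviving nodes are using (60 seconds by default).
A quick way to inspect and configure it; the sys.config fragment is just an
example:

    %% In a shell on any node:
    net_kernel:get_net_ticktime().   %% seconds, 60 by default

    %% To change it, set the kernel application parameter, e.g. in sys.config:
    %% [{kernel, [{net_ticktime, 60}]}].

With the default setting it can take on the order of a minute before an
unreachable node is declared down.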


> Since we couldn't connect to the machines via remote console, we couldn't
> call connect_node. It was my understanding that the connect call would
> happen when the node in question reestablished itself.
>

Yes, it should re-connect when needed. It is quite strange that you
couldn't connect via remote shell. A crash dump or core dump would really
help to understand what is going on.
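
If it happens again, it would also be useful to see whether a node that
still responds can reach the others at all. A quick probe from an attached
or remote shell (the node name below is made up):

    %% Check which nodes this node currently sees:
    nodes().

    %% Try to (re)establish a connection explicitly:
    net_adm:ping('sessions1@host1').              %% pong | pang
    net_kernel:connect_node('sessions1@host1').   %% true | false

If ping returns pang and connect_node returns false even though the other
node is up, that points at the distribution layer itself rather than at the
monitors.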


>
>
> On Tue, Jul 11, 2017 at 8:34 PM, Juan Jose Comellas <juanjo@REDACTED>
> wrote:
>
>> How long does it take for all the DOWN messages to be sent/processed?
>>
>> These messages might not be allowing the net tick messages (see
>> net_ticktime in http://erlang.org/doc/man/kernel_app.html) to be
>> responded to in time. If this happens, the node that isn't able to respond
>> before the net_ticktime expires will be considered disconnected.
>>
>> What happens if, after processing all the DOWN messages, you issue a call
>> to net_kernel:connect_node/1 for each of the nodes that seem to be down?
>>
>> On Mon, Jul 10, 2017 at 4:14 PM, Steve Cohen <scohen@REDACTED>
>> wrote:
>>
>>> Hi all,
>>>
>>> We have 12 nodes in our guilds cluster, and on each, 500,000
>>> processes.  We have another cluster, called sessions, that has 15 nodes
>>> with roughly four million processes on it. Both clusters are in the same
>>> Erlang distribution since our guilds monitor sessions and vice-versa.
>>>
>>> Now, when one of our guild servers dies, as expected it generates a
>>> large number of DOWN messages to the sessions cluster. These messages bog
>>> down the sessions servers (obviously) while they process them, but when
>>> they're done processing, distribution appears to be completely broken.
>>>
>>> By broken, I mean that the nodes are disconnected from one another,
>>> they're not exchanging messages, CPU usage was 0 and we couldn't even
>>> launch the remote console.
>>>
>>> I can't imagine this is expected behavior, and was wondering if someone
>>> can shed some light on it.
>>> We're open to the idea that we're doing something very, very wrong.
>>>
>>>
>>> Thanks in advance for the help
>>>
>>> --
>>> Steve Cohen
>>>
>>>
>>
>
>
> --
> -Steve
>
>
>

