<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Tue, Feb 20, 2018 at 2:53 PM Loïc Hoguin <<a href="mailto:essen@ninenines.eu">essen@ninenines.eu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thanks, that helped a lot.<br>
>
> What we ended up doing was to call mnesia:set_debug_level(debug) and
> subscribe to system events and schema table events using
> mnesia:subscribe/1. This gave us both the transaction/lock that keeps
> getting restarted and the transaction/lock that is causing the
> restarts. We then inspected things in Observer and got a very clear
> view of what is going on.

Great.
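
For the archives, that setup is roughly the following from a shell on
the affected node (a sketch only: the receive loop and printed labels
are made up here, and the exact event payloads depend on the debug
level and OTP release):

%% Sketch: raise mnesia's debug level, subscribe the shell process to
%% system events and to events on the schema table, then print whatever
%% arrives.
mnesia:set_debug_level(debug),
{ok, _} = mnesia:subscribe(system),
{ok, _} = mnesia:subscribe({table, schema, simple}),
Print = fun Loop() ->
            receive
                {mnesia_system_event, E} -> io:format("system: ~p~n", [E]), Loop();
                {mnesia_table_event, E}  -> io:format("schema: ~p~n", [E]), Loop();
                Other                    -> io:format("other: ~p~n", [Other]), Loop()
            end
        end,
Print().
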
> By the way, is there a search function for finding a process in Observer?
> That would be useful to find the ones we are looking for. :-)

Not yet. It sounds useful, and you can sort columns to ease the
scrolling, but no, I have not received a PR for that yet. :-)

> Cheers,
>
> On 02/14/2018 07:32 PM, Dan Gudmundsson wrote:
> > Well, you will need to figure out what <6502.2299.18> and <6502.2302.18>
> > are doing, but they are probably waiting for other locks which are held
> > by the busy processes you wrote about. But you will have to look at
> > that; debugging mnesia is just following the breadcrumbs around the
> > system.
> >
> > mnesia_locker:get_held_locks() and mnesia_locker:get_lock_queue() may
> > also help.
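
To expand on that a little, here is a rough way to grab both from every
node at once (only a sketch: the node list is just [node() | nodes()],
and since these are internal mnesia functions the return format is not
something to rely on across OTP releases):

%% Sketch: collect held locks and the lock queue from every connected
%% node, tagged with the node name.
Nodes = [node() | nodes()],
[{N,
  rpc:call(N, mnesia_locker, get_held_locks, []),
  rpc:call(N, mnesia_locker, get_lock_queue, [])} || N <- Nodes].
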
> >
> > Using Observer to attach to the different nodes is probably easiest;
> > then you can get a stacktrace of each process. Normally when I do it I
> > don't have a live system. If I want to debug post mortem I use
> > mnesia_lib:dist_coredump() to collect each mnesia node's state and
> > analyse them. Though with many nodes it will take some time to debug or
> > figure out why it appears to be hanging.
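
Roughly, for both approaches (a sketch; OwnerPid is a placeholder for
one of the pids you find in the lock or tid tuples, and dist_coredump/0
is internal and undocumented):

%% Interactive: start Observer locally and attach to the other nodes
%% from its Nodes menu.
observer:start().

%% Sketch: process_info/2 has to run on the node that owns the pid,
%% so ask that node over rpc for the current stacktrace.
StackOf = fun(OwnerPid) ->
              rpc:call(node(OwnerPid), erlang, process_info,
                       [OwnerPid, current_stacktrace])
          end.

%% Post mortem, as described above: asks the mnesia nodes to dump their
%% state so it can be analysed later.
mnesia_lib:dist_coredump().
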
> >
> > On Wed, Feb 14, 2018 at 6:39 PM Loïc Hoguin <essen@ninenines.eu> wrote:
> >
> > > Hello,
> > >
> > > We are trying to debug an issue where we observe a lot of contention
> > > when a RabbitMQ node goes down. It has a number of symptoms and we
> > > are in the middle of figuring things out.
> > >
> > > One particular symptom occurs on the node that restarts: it gets
> > > stuck and there are two Mnesia locks:
> > >
> > > [{{schema,rabbit_durable_route},read,{tid,879886,<6502.2299.18>}},
> > >  {{schema,rabbit_exchange},read,{tid,879887,<6502.2302.18>}}]
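
As a small aside, the owning pids can be pulled out of a listing like
that with something along these lines (hypothetical helper; the
{Item, LockKind, {tid, Counter, Pid}} shape is simply read off the
printout above, not taken from a documented API):

%% Sketch: extract the pids that own the locks so they are easier to
%% find in Observer.
Owners = fun(Locks) ->
             [Pid || {_Item, _LockKind, {tid, _Counter, Pid}} <- Locks]
         end.
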
> > >
> > > The locks are only cleared when the other node in the cluster stops
> > > being so busy deleting data from a number of tables (another symptom)
> > > and things go back to normal.
> > >
> > > Part of the problem is that while this is going on, the restarting
> > > node cannot be used, so I would like to understand what conditions
> > > can result in these locks staying up for so long. Any tips
> > > appreciated!
> > >
> > > Thanks in advance,
> > >
> > > --
> > > Loïc Hoguin
> > > https://ninenines.eu
>
> --
> Loïc Hoguin
> https://ninenines.eu