<div dir="ltr" data-setdir="false">I ended up using microstate accounts plus lock counting. ETS locks and MSAC was quite low. Surprisingly I ended up having very high contention for the memory allocator, msac showed aux and allocator using around 15-30% each (not sure if these 2 values combine or are seperate). Again to refresh, my problem was I had 100% scheduler CPU usage (backed up runqueue) with only 50% system cpu usage.<br><br>I ended up bumping the minimal heapsize by a large order of magnitude to the average words count the worker processes were using. Dropped the lock contention on alloc and scheduler usage in alloc to near 0, also aux dropped by a large amount.<br><br>I noticed when doing this some processes spiked insanely in their memory allocated, I am wondering if theres a way to profile this? The ideal scenario would be, line number + amount of objects allocated / type of object. A still good scenario, process pid + objects (so can inspect them for what they are). A pretty hard to debug scenario, just the memory consumption and type.</div><div><br></div>
</div><div id="yahoo_quoted_0337336373" class="yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
On Saturday, January 18, 2020, 03:15:41 a.m. EST, Dan Gudmundsson <dangud@gmail.com> wrote:
<div><div id="yiv1527735272"><div><div dir="ltr"><div><br clear="none"></div>mnesia:table_info(..)<div><br clear="none"></div><div>But mnesia is implemented with ets tables so ets:info should work just fine :-)</div></div><br clear="none"><div class="yiv1527735272yqt4980568763" id="yiv1527735272yqt00483"><div class="yiv1527735272gmail_quote"><div class="yiv1527735272gmail_attr" dir="ltr">On Sat, Jan 18, 2020 at 8:01 AM Vans S <<a rel="nofollow" shape="rect" ymailto="mailto:vans_163@yahoo.com" target="_blank" href="mailto:vans_163@yahoo.com">vans_163@yahoo.com</a>> wrote:<br clear="none"></div><blockquote class="yiv1527735272gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;"><div><div style=""><div></div>
<div dir="ltr">The table is a mnesia table so ets:info/2 does not seem to work. I narrowed it down and it seemed to indeed be match_object just costing too much cpu time and perhaps locking the table. Ended up rewriting the table scanning algo (instead of match_object running around 100 * 2000 times, dump full table once and use Process dictionary to manipulate / filter / organize) and building a cache.</div><div dir="ltr"><br clear="none"></div><div dir="ltr">The runtime seems stable, it would still be interesting to diagnose those locks does mnesia have something similar to ets:info/2 ?</div><div><br clear="none"></div>
</div><div id="yiv1527735272gmail-m_-1090353788856988258yahoo_quoted_9397839014">
<div style="">
<div>
On Friday, January 17, 2020, 03:06:07 p.m. EST, Sverker Eriksson <<a rel="nofollow" shape="rect" ymailto="mailto:sverker@erlang.org" target="_blank" href="mailto:sverker@erlang.org">sverker@erlang.org</a>> wrote:
</div>
<div><br clear="none"></div>
<div><br clear="none"></div>
<div><div id="yiv1527735272gmail-m_-1090353788856988258yiv9762609011"><div><div>Have you tried without read_concurrency?</div><div><br clear="none"></div><div>What does ets:info(T, stats) after running for a while?</div><div><br clear="none"></div><div><br clear="none"></div><div><br clear="none"></div><div id="yiv1527735272gmail-m_-1090353788856988258yiv9762609011yqt10596"><div>On fre, 2020-01-17 at 19:27 +0000, Vans S wrote:</div><blockquote type="cite"><div style=""><div></div>
<div><br clear="none"></div><div dir="ltr">I really want to measure this so I can have some facts, IMO the performance is degrading way too much for such a small workload. The frequency is these 3000 processes do 1 write to the table every 15 minutes, so about 3.3 writes per second. (as the processes start at different times). The processes match_object on the table about 30000 times per second, but in bursts, so 10 operations can happen in a single function then it would back off for a few seconds or more.</div>
</div><div id="yiv1527735272gmail-m_-1090353788856988258yiv9762609011yahoo_quoted_0162028058">
<div style="">
<div>
On Friday, January 17, 2020, 02:20:05 p.m. EST, Sverker Eriksson <<a rel="nofollow" shape="rect" ymailto="mailto:sverker@erlang.org" target="_blank" href="mailto:sverker@erlang.org">sverker@erlang.org</a>> wrote:
</div>
<div><br clear="none"></div>
<div><br clear="none"></div>
<div><div id="yiv1527735272gmail-m_-1090353788856988258yiv9762609011"><div><div id="yiv1527735272gmail-m_-1090353788856988258yiv9762609011yqtfd64394"><div>On fre, 2020-01-17 at 20:09 +0200, Led wrote:</div><blockquote type="cite"><div dir="ltr"><div><div dir="ltr"></div><blockquote type="cite"><div><div style=""><div dir="ltr">I am having some performance trouble in a system that does a few queries on a small ets table of around 10,000 records.<br clear="none"><br clear="none">Basically with around 500 concurrent processes, everything is fine, 1500 I start to notice some small degradation, at around 3000 concurrent processes the schedulers grind to a halt, TOP system CPU usage is around 50%, but Erlang scheduler usage (scheduler:<span>utilization</span>) is 100% and capped out on all 40 threads.<br clear="none"><br clear="none">I am guessing the schedulers are all waiting on locks on the ets table. I thought match_object and ets was quite optimized these days, using R22, I am wondering if there is some synchronization/locking issues that could be addressed. Because I mean at 3000 processes maybe hitting that table 10 times per second on average, does not seem like much. 30k match_objects per second, with ongoing inserts. <br clear="none"><br clear="none">Also would there be a way to debug/pinpoint this is the exact issue? I just did A/B testing where I turned off parts of the system, when I turned off the part that does the match_objects on the ETS table, the system ran fine and never deadlocked at 100% scheduler usage. Its also hard to profile, as the system is so locked up the profiler barely runs.<br clear="none"><br clear="none">For now it seems the solution is to rework the architecture and put a second cached view ETS table, so the match_objects can be replaced with key lookups. Which gets filled by a single process running that pulls via match_object from the main table and fills the cache.</div></div></div><br clear="none"></blockquote></div><div><br clear="none"></div><div></div><span lang="en"><span title="">You didn't specify parameters of your table.</span></span><br clear="none"><div><br clear="none"></div><div><br clear="none"></div></div></blockquote></div><div><br clear="none"></div><div>And what's the frequency of those inserts that you mention.</div><div><br clear="none"></div><div>ets:match_object is a read-only operation and should only inflict lock contention with other write operations, such as ets:insert.</div><div><br clear="none"></div><div><br clear="none"></div><div>/Sverker</div><div id="yiv1527735272gmail-m_-1090353788856988258yiv9762609011yqtfd57042"><div><br clear="none"></div></div></div></div></div>