<div dir="auto">First thing to consider: use HiPE on everything or nothing at all. Switching between normal Erlang and HiPE native code is expensive. </div><div class="gmail_extra"><br><div class="gmail_quote">On May 24, 2017 9:04 AM, "Oliver Korpilla" <<a href="mailto:Oliver.Korpilla@gmx.de">Oliver.Korpilla@gmx.de</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:Verdana;font-size:12.0px"><div>Hello.</div>
<div> </div>
<div>We wrote a moderately complex telco application at work in Elixir, but I expect the performance we observe would be similar in Erlang, since it all comes down to BEAM and OTP anyway, so please don't mind me asking here.</div>
<div> </div>
<div>We run message scenarios that involve ASN.1 encoding/decoding, several IP protocols, a proprietary transport we access through an Erlang port, etc. Message sequences are handled by gen_fsm processes.</div>
<div> </div>
<div>The application is split over several nodes.</div>
<div> </div>
<div>What we've observed is:</div>
<div>* The first run of a scenario is very slow.</div>
<div>* On the second run, the time is roughly halved.</div>
<div>* By the third run, performance is very good, especially given the complexity of the scenario.</div>
<div> </div>
<div>When analyzing bottlenecks in the software, we observed and tried the following:</div>
<div>* I preloaded all modules into the VM and times improved significantly. </div>
<div>* ASN.1 encoding can take as much as 20 ms on the first run but drops below 1 ms afterwards for the same message with minimally different content.</div>
<div>* Using HiPE on the ASN.1-generated codec actually worsened performance.</div>
<div>* Executing the message codec at least once during startup improved ASN.1 performance the most.</div>
<div>* Two DB writes to the same table row in short order can significantly decrease performance, causing multi-ms delays.</div>
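A minimal sketch of the two fixes that helped most (preloading all modules and exercising the codec once at startup), assuming a hypothetical application name my_app, generated codec module 'MyProtocol', type 'Message', and helper sample_message/0 (all placeholders for your own code):

```erlang
%% Warm-up sketch, e.g. called from the application start callback.
%% 'my_app', 'MyProtocol', 'Message' and sample_message/0 are
%% hypothetical placeholders for your own application and ASN.1 codec.
-module(warmup).
-export([run/0]).

run() ->
    %% Force-load every module of the application so the first real
    %% request does not pay the on-demand code-loading cost.
    {ok, Mods} = application:get_key(my_app, modules),
    [code:ensure_loaded(M) || M <- Mods],
    %% Exercise the generated codec once so any one-time setup cost
    %% is paid at startup rather than on the first live message.
    %% (The exact return shape of encode/2 depends on the OTP version.)
    _ = 'MyProtocol':encode('Message', sample_message()),
    ok.

sample_message() ->
    %% Replace with a representative message for your schema.
    {'Message', 1, <<"warmup">>}.
```

This is a sketch under the stated assumptions, not a drop-in module; the key idea is that both the loading and the first-use cost move from the first live message to node startup.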
<div> </div>
<div>(I know this is probably no surprise to anyone here but we're learning how the system behaves during runtime.)</div>
<div> </div>
<div>The curve described above still persists; the totals are just lower.</div>
<div> </div>
<div>Our working assumption is that caching is impacting performance here, but we actually know too little about the BEAM runtime.</div>
<div> </div>
<div>* Are there mechanisms internal to BEAM impacting this? Or is it purely a property of the CPU architecture?</div>
<div>* Could I stimulate BEAM to cache certain parts of the code ahead of time? </div>
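For the code-loading part of this (as opposed to CPU caches), one knob we are aware of is starting the node in embedded mode, so every module listed in the boot script is loaded at startup rather than on first call, as happens in the default interactive mode. A config-fragment sketch, assuming a hypothetical release boot script named my_app:

```shell
# Embedded mode: load all modules from the boot script at startup,
# instead of on first call (the default in interactive mode).
erl -mode embedded -boot my_app
```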
<div> </div>
<div>The measurements are rough, taken from log timestamps. Millisecond granularity is enough for us to gauge the overall performance of our signalling.</div>
<div> </div>
<div>Message performance for user 3 onward is okay, but users 1 and/or 2 could be dropped because of timeouts, so optimizing the first runs still matters.</div>
<div> </div>
<div>Thank you for any advice you can give,</div>
<div>Oliver</div></div></div>
<br>______________________________<wbr>_________________<br>
erlang-questions mailing list<br>
<a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>
<a href="http://erlang.org/mailman/listinfo/erlang-questions" rel="noreferrer" target="_blank">http://erlang.org/mailman/<wbr>listinfo/erlang-questions</a><br>
<br></blockquote></div></div>