<div dir="ltr">Hello,<div><br></div><div>The performance loss seems to be unrelated to whether it is integer or float. What seems to make the difference is the size of the data created. The textual size of 50 floats are about 3 times larger than the size of the integers you use in the benchmark. If you change it so that erts_debug:size(Floats) is the same for both the floop and iloop (i changed iloop seq from 50 to 162) you see the same drop in speed inbetween R15B03 and R16B. </div>
So most probably the performance decrease has something to do with changes in either memory allocation or garbage collection. I don't really know what it could be and don't have the time right now to look into it. If you want to help figure out what it is, doing a git bisect between R15B03 and R16B and finding the exact commit that introduced the performance loss would be a great help.
As a side note, using float_to_list(Float) instead of hd(io_lib:format("~p", [Float])) in jsx_to_json.erl more than tripled the number of floats encoded per second (10k/s vs 35k/s), and using float_to_list(Float, [{decimals,4}, compact]) doubled that again, giving a total of 7.6 times greater throughput (10k/s vs 76k/s).
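A rough sketch of the three variants (the actual call site in jsx_to_json.erl may differ; Float is just a stand-in for the value being encoded):

    %% What jsx did: pure-Erlang formatting via io_lib -- flexible but slow.
    hd(io_lib:format("~p", [Float])),
    %% float_to_list/1 BIF: fast, but defaults to long scientific notation.
    float_to_list(Float),
    %% float_to_list/2 with options: fast and compact (at most 4 decimals,
    %% trailing zeros trimmed by 'compact').
    float_to_list(Float, [{decimals, 4}, compact]).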
Lukas


On Tue, Jan 28, 2014 at 4:57 PM, Dmitry Kolesnikov <dmkolesnikov@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word">Hello,<div><br></div><div>Here I’ve compiled a small project to benchmark the issue:</div>
<div><br></div><div>git clone <a href="https://github.com/fogfish/fjsx" target="_blank">https://github.com/fogfish/fjsx</a></div><div>make</div><div>make run</div><div>(<a href="mailto:fjsx@127.0.0.1" target="_blank">fjsx@127.0.0.1</a>)1> fjsx:run().</div>
>
> My results are as follows:
>
> R15B03: min 2.9K, avg 3.1K, max 3.3K
> R16B03: min 2.7K, avg 2.8K, max 3.0K
>
> (In production I do much more stuff; it shows even worse degradation.)
>
> I ran the test on a virtual machine, CentOS 6 x86_64 with 4 virtual CPUs (underlying HW: MacBook Pro i5, 2.5GHz).
> Virtual CPU:
>
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 58
> model name      : Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
> stepping        : 9
> cpu MHz         : 2535.252
> cache size      : 6144 KB
> physical id     : 0
> siblings        : 4
> core id         : 3
> cpu cores       : 4
> apicid          : 3
> initial apicid  : 3
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 5
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good pni ssse3 lahf_lm
> bogomips        : 5070.50
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 36 bits physical, 48 bits virtual
> power management:
>
> The OTP configuration is identical for R15 and R16.
>
> R15B03: config.log
> $ ./configure --prefix=/usr/local/otp_R15B03 --enable-threads --enable-smp-support --enable-kernel-poll --enable-hipe --disable-dynamic-ssl-lib --with-ssl=/usr/local/ssl --enable-native-libs
>
> R16B03: config.log
> $ ./configure --prefix=/usr/local/otp_R16B03 --enable-threads --enable-smp-support --enable-kernel-poll --enable-hipe --disable-dynamic-ssl-lib --with-ssl=/usr/local/ssl --enable-native-libs
>
> I have not run the test on real HW:
> - my Mac’s OTP configurations are different: R16B03 enables --enable-darwin-64bit, therefore it outperforms R15
> - my production runs on virtual machines
>
> I’ve been using eep to profile the issue. You can do the same to compare the R16 and R15 differences.
>
> Best Regards,
> Dmitry
<div class="h5"><div><br></div><div><div><div>On 28 Jan 2014, at 11:54, Lukas Larsson <<a href="mailto:lukas@erlang.org" target="_blank">lukas@erlang.org</a>> wrote:</div><br><blockquote type="cite"><div dir="ltr">Hello,<div>
<br></div><div>The code for formatting float when doing it through io_lib:format it written in pure Erlang. The reason that io_lib:format is implemented in Erlang is because it allows much greater cross platform formatting capabilities, alas at the cost of performance. </div>
<div><br></div><div>Why you see a performance drop in between R15B03 to R16B03 I don't know, if you could create a minimal reproducible benchmark that shows the difference that would be great.</div><div><br></div><div>
If you want to have a speedy conversion of something you know is a float to a textual format you should use float_to_list/binary as that is meant to be a fast conversion, but with less flexibility.</div><div><br></div><div>
Lukas</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jan 28, 2014 at 10:05 AM, Max Lapshin <span dir="ltr"><<a href="mailto:max.lapshin@gmail.com" target="_blank">max.lapshin@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra">btw, why io_lib:format is so slow? simple changing it to nif with fprintf reduces cpu a lot.</div>
</div>