<html><head></head><body><div class="ydp6a4fc19fyahoo-style-wrap" style="font-family:Helvetica Neue, Helvetica, Arial, sans-serif;font-size:13px;"><div></div>
<div><br></div><div>Removing the unoptimized validity checks (parsing the request) gave about a 2x speedup: from ~300k to ~600k.<br><br>Removing inet_drv and replacing it with a very inefficient C NIF that polls every 10 ms from VM processes across 6000 connections, still with no validity checks, got a theoretical ~1.25M as I ran out of cores to generate load from. So call it a 2x speedup over using inet_drv. This is not optimized; an optimized C NIF for polling would use edge-triggered polling on groups of descriptors to avoid a kernel call every 10 ms plus the per-connection overhead of entering the NIF.<br><br>What I removed from inet_drv were guarantees I consider useless, but which cost one extra memory copy/allocation plus extra CPU overhead. For example, inet_drv guarantees that send will ALWAYS send all the bytes you pass it, keeping a copy of the buffer. I removed this guarantee; I don't think it is appropriate for performance. The process doing the send should keep track of the buffer itself: if a send is partial, return the number of bytes that were sent and let the caller advance its own buffer.<br><br>Also, the benchmark is indeed not representative of 99.9% of the workloads that non-CDN, non-caching web servers handle, where the cost of generating the response itself far outweighs the cost of parsing the request. But it is nevertheless interesting to see just how low the VM overhead is. (I don't count inet_drv as part of the VM when I refer to the VM.)<br><br></div>
</div><div id="yahoo_quoted_5903234173" class="yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
On Wednesday, December 19, 2018, 8:04:17 a.m. EST, Fred Hebert <mononcqc@ferd.ca> wrote:
</div>
<div><br></div>
<div><br></div>
<div>On 12/18, Vans S wrote:<div class="yqt2186980897" id="yqtfd75661"><br clear="none">> I think OTHER is something to do with Ports / polling, because I just <br clear="none">> removed the inet_drv and wrote a simple c nif to do TCP networking, <br clear="none">> and the throughput doubled. I did not get around to recompiling erlang <br clear="none">> with microstate accounting but without inet driver using an <br clear="none">> unoptimized nonblocking tcp nif I got the msacc report to look like<br clear="none">><br clear="none">>Using 2 schedulers because 10 physical cores generating load now just <br clear="none">>barely fully saturated. now 90% of the time is spent in emulator, 6% <br clear="none">>is other, I am guessing 6% other is the NIF calls to the socket calls?<br clear="none">><br clear="none">>The throughput was 250k for 2 physical cores. If all scales linearly <br clear="none">>that is 1.25m RPS for simple GET hello world benchmark.<br clear="none">><br clear="none">>The NIF is <br clear="none">>PoC <a shape="rect" href="https://gist.github.com/vans163/d96fcc7c89d0cf25c819c5fb77769e81" target="_blank">https://gist.github.com/vans163/d96fcc7c89d0cf25c819c5fb77769e81</a> ofcourse <br clear="none">>its only useful in the case there is constant data on socket, otherwise <br clear="none">>this PoC will break if there is idle connections that keep getting <br clear="none">>polled. 
This opens the possibility though to using something like <br clear="none">>DPDK.</div><br clear="none"><br clear="none">I think you might have achieved this:<br clear="none"><br clear="none"><a shape="rect" href="https://twitter.com/seldo/status/800817973921386497" target="_blank">https://twitter.com/seldo/status/800817973921386497</a><br clear="none"><br clear="none">Chapter 15: 300% performance boosts by deleting data validity checks<br clear="none"><br clear="none">Of course, the driver may have a higher baseline overhead than a NIF, <br clear="none">but you also got rid of all validation and handling of any edge case <br clear="none">whatsoever.<br clear="none"><br clear="none">You claim your NIF is not optimized, but it is _extremely_ optimized: <br clear="none">you removed all code that could have been useful for scenarios that are <br clear="none">not the one you are actively testing, therefore getting rid of all their <br clear="none">overheads.<br clear="none"><br clear="none">And you are doing so on a rather useless benchmark: hello worlds that <br clear="none">parse nothing and therefore have nothing in common with any application <br clear="none">in the wild that might care about the request's content. The benchmark <br clear="none">results you get can therefore not be extrapolated to be relevant to any <br clear="none">application out there.<br clear="none"><br clear="none">I would likely urge you, unless you are doing this for the fun of <br clear="none">micro-optimizing edge cases, to consider basing your work on more <br clear="none">representative benchmarks.<div class="yqt2186980897" id="yqtfd95747"><br clear="none"></div></div>
</div>
</div></body></html>