[erlang-questions] 300k HTTP GET RPS on Erlang 21.2, benchmarking per scheduler polling

Vans S vans_163@REDACTED
Thu Dec 20 06:52:24 CET 2018


 
Removing the unoptimised validity checks (parsing the request) gave about a 2x speedup, from ~300k to ~600k.

Removing the inet_drv and replacing it with a very inefficient C NIF that polls every 10 ms from VM processes across 6000 connections, plus skipping the validity checks, got a theoretical ~1.25m as I ran out of cores to generate load from.  So call it a 2x speedup over using the inet_drv.  This is not optimized; an optimized polling C NIF would use edge-triggered polling on groups of descriptors to avoid both a kernel call every 10 ms and the C NIF entry overhead per connection.
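
To make that concrete, here is roughly what I mean by edge polling on a group of descriptors: a plain epoll sketch, not the PoC NIF itself, and the function and names are illustrative only.

    /* Sketch of edge-triggered polling on a group of descriptors, instead of
     * waking up every 10 ms to poll each connection.  fds[] is assumed to hold
     * already-connected, non-blocking sockets. */
    #include <sys/epoll.h>
    #include <unistd.h>

    #define MAX_EVENTS 64

    static int wait_on_group(int *fds, int nfds)
    {
        int epfd = epoll_create1(0);
        if (epfd < 0)
            return -1;

        for (int i = 0; i < nfds; i++) {
            struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = fds[i] };
            epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);
        }

        struct epoll_event events[MAX_EVENTS];
        /* One kernel call covers the whole group; with EPOLLET a descriptor is
         * reported only when new data arrives, so idle connections cost nothing
         * between events. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            /* recv() here in a loop until EAGAIN, then hand the data to the
             * owning Erlang process. */
            (void)fd;
        }

        close(epfd);
        return n;
    }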

What I did remove from the inet_drv are guarantees I consider unnecessary, but guarantees that cost one extra memory copy/allocation plus extra CPU overhead.  For example, inet_drv guarantees that send will ALWAYS send all the bytes you pass it, keeping a copy of the buffer.  I removed this guarantee; I don't think it is appropriate for performance.  The process doing the send should just keep track of the buffer itself: if a send results in a partial send, return the number of bytes that were sent and let the caller advance its position in the buffer.
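
NIF-side, the send I have in mind looks roughly like this (an untested sketch; the function name and the bare error atom are illustrative only):

    /* Sketch of a send NIF that returns the number of bytes actually written
     * instead of guaranteeing a full send. */
    #include <erl_nif.h>
    #include <sys/socket.h>
    #include <errno.h>

    static ERL_NIF_TERM nif_send(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
    {
        int fd;
        ErlNifBinary bin;
        (void)argc;

        if (!enif_get_int(env, argv[0], &fd) ||
            !enif_inspect_binary(env, argv[1], &bin))
            return enif_make_badarg(env);

        ssize_t n = send(fd, bin.data, bin.size, MSG_NOSIGNAL);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return enif_make_int(env, 0);   /* nothing sent, try again later */
            return enif_make_atom(env, "error"); /* real code would return the errno */
        }

        /* Partial or full send: the caller compares this to the buffer size and
         * advances its own buffer accordingly. */
        return enif_make_int(env, (int)n);
    }

The owning Erlang process then compares the returned count against byte_size(Buffer) and keeps only the unsent tail (e.g. with binary:part/3) until the socket is writable again.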

Also, the benchmark is indeed not representative of 99.9% of the workloads that non-CDN, non-caching web servers handle, where the overhead of generating the response itself will far outweigh the overhead of parsing the request, BUT it is nevertheless interesting to see just how low the VM overhead is.  (I don't count inet_drv as part of the VM when I refer to the VM.)

    On Wednesday, December 19, 2018, 8:04:17 a.m. EST, Fred Hebert <mononcqc@REDACTED> wrote:  
 
 On 12/18, Vans S wrote:
> I think OTHER is something to do with Ports / polling, because I just 
> removed the inet_drv and wrote a simple C NIF to do TCP networking, 
> and the throughput doubled. I did not get around to recompiling Erlang 
> with microstate accounting, but without the inet driver, using an 
> unoptimized non-blocking TCP NIF, I got the msacc report to look like
>
>Using 2 schedulers, because 10 physical cores generating load now only 
>just barely saturate them.  Now 90% of the time is spent in the emulator 
>and 6% is other; I am guessing the 6% other is the NIF calls making the 
>socket calls?
>
>The throughput was 250k for 2 physical cores.  If it all scales linearly, 
>that is 1.25m RPS for a simple GET hello-world benchmark.
>
>The NIF is a 
>PoC: https://gist.github.com/vans163/d96fcc7c89d0cf25c819c5fb77769e81 Of course 
>it is only useful when there is constant data on the socket; this PoC 
>will break if there are idle connections that keep getting polled.  It 
>does, though, open up the possibility of using something like 
>DPDK.

I think you might have achieved this:

https://twitter.com/seldo/status/800817973921386497

Chapter 15: 300% performance boosts by deleting data validity checks

Of course, the driver may have a higher baseline overhead than a NIF, 
but you also got rid of all validation and handling of any edge case 
whatsoever.

You claim your NIF is not optimized, but it is _extremely_ optimized: 
you removed all code that could have been useful for scenarios that are 
not the one you are actively testing, therefore getting rid of all their 
overheads.

And you are doing so on a rather useless benchmark: hello worlds that 
parse nothing and therefore have nothing in common with any application 
in the wild that might care about the request's content. The benchmark 
results you get can therefore not be extrapolated to be relevant to any 
application out there.

I would likely urge you, unless you are doing this for the fun of 
micro-optimizing edge cases, to consider basing your work on more 
representative benchmarks.
  