[erlang-questions] Cowboy (Erlang) VS Haskell (Warp)

Loïc Hoguin essen@REDACTED
Tue Jun 25 20:38:55 CEST 2013


On 06/25/2013 07:38 PM, BM Kim wrote:
> Damian Dobroczyński <qoocku <at> gmail.com> writes:
>
>>
>> W dniu 25.06.2013 13:51, BM Kim pisze:
>>> Hi folks,
>>>
>>> First of all, I want to apologise for my poor English skills;
>>> English is not my first language, but I'll try my best
>>> to formulate my questions as clearly as possible.
>>>
>>> Second, I've just begun to learn erlang, so if I'm asking
>>> obvious "noob" questions I apologise for that too in advance...
>>>
>>> Anywho, now to my actual question:
>>>
>>> I am planning to write a high-performance server application in Erlang,
>>> which will primarily handle HTTP requests. After some research with
>>> Google, I narrowed down my choices to cowboy, misultin and mochiweb,
>>> and decided to go with the cowboy library first...
>>>
>>> Looking at some tutorials, I've quickly built a small server capable of
>>> serving static files and was eager to see first benchmark-results...
>>> I've also built a small Haskell server using Warp library to compare it
>>> with erlang's cowboy...
>>>
>>> But my first impression was that my cowboy server is much, much slower
>>> than expected when serving static files, and after some research I found
>>> a presentation by cowboy's author claiming that cowboy shouldn't be used
>>> for serving static files. So I modified the server code so that it replies
>>> to every request with an in-memory 4 KB binary blob, and compared it with
>>> my Haskell warp server serving a 4 KB static file...
>>>
>>> This is my simple cowboy HTTP handler:
>>>
>>> ----------------------------------------------------------------------
>>>
>>> blob() ->
>>>      [<<0:8>> || _ <- lists:seq(1,4096)].
>>
>> First, try to replace blob/0 function with this:
>>
>>    blob() -> <<0:(4096*8)>>.
>>
>> Then, restart the test and report ;)
>>
>
>
> Hi,
>
> Thank you very much for pointing out the obvious mistake...
> After correcting it, I got an improvement from 5940 req/s to 8650 req/s...
>
> But it is still much slower than the Haskell warp server, which has a
> throughput of 38000 req/s...

That's not surprising at all: you are performing exactly the same 
operation every time, so of course Haskell is going to be fast at this. 
The same goes for JIT-enabled environments like Java; the JIT can 
compile it to machine code once and be done with it.

You're not actually testing the HTTP server, or even the language's 
performance; you are testing the platform's ability to optimize a 
single operation to death.
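As an aside, the blob/0 fix suggested earlier in the thread is about data representation rather than caching. A minimal sketch of the difference (module name illustrative):

```erlang
-module(blob_compare).
-export([old_blob/0, new_blob/0]).

%% The original handler body: a list of 4096 separate one-byte binaries,
%% all allocated on every call.
old_blob() ->
    [<<0:8>> || _ <- lists:seq(1, 4096)].

%% The suggested fix: one flat 4096-byte binary.
new_blob() ->
    <<0:(4096*8)>>.
```

Both flatten to the same bytes (`iolist_to_binary(old_blob()) =:= new_blob()`), but the list version allocates thousands of tiny binaries per request, which is where the 5940 to 8650 req/s jump came from.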

> But I have another question regarding blob/0. Is it going to be evaluated
> only once (as GHC would do), since it is a pure expression? I'm not
> so sure, since Erlang is not pure and any function can have side effects,
> which you can't mark the way the IO monad does in Haskell...

Erlang doesn't do that. The closest it gets is constant folding at 
compile time, where A = 1 + 1 becomes A = 2 in the compiled file, but 
that is only done for a very small subset of expressions.
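So if you want compute-once semantics, you have to cache explicitly. One common pattern is a shared ETS table, sketched here (module, table, and key names are illustrative; note that for a constant expression like this particular binary, the compiler will typically fold it into a module literal anyway, so an explicit cache only matters for values computed at runtime):

```erlang
-module(blob_cache).
-export([init/0, blob/0]).

%% Build the value once and store it in a named ETS table;
%% subsequent calls to blob/0 only do a lookup.
init() ->
    ets:new(?MODULE, [named_table, public, {read_concurrency, true}]),
    ets:insert(?MODULE, {blob, <<0:(4096*8)>>}),
    ok.

blob() ->
    [{blob, Bin}] = ets:lookup(?MODULE, blob),
    Bin.
```

Unlike a GHC CAF, nothing here is automatic: forgetting to call init/0 before blob/0 crashes with a badarg on the missing table.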

Erlang shines not in synthetic benchmarks but in production, when 
thousands of clients connect to your server and expect their responses 
to arrive as quickly as if each of them were alone on the server. 
Erlang is optimized for latency, and that latency stays the same 
whether there is one user or ten thousand.

Your benchmark, on the other hand, is measuring throughput. Throughput 
is boring, and not really useful for Web applications. (See Max's email 
for more details on that.)

-- 
Loïc Hoguin
Erlang Cowboy
Nine Nines
http://ninenines.eu
