[erlang-questions] [ANN] misultin v0.7

Loïc Hoguin essen@REDACTED
Sun Apr 10 20:16:46 CEST 2011

On 04/10/2011 07:36 PM, Bob Ippolito wrote:
> On Sun, Apr 10, 2011 at 10:20 AM, Loïc Hoguin <essen@REDACTED> wrote:
>> On 04/10/2011 06:47 PM, Max Lapshin wrote:
>>> It seems to be an eternal process: a new webserver that takes 10
>>> microseconds per request instead of 50 and is 5 times faster ends up
>>> becoming another mochiweb once it matches it in functionality.
>> I disagree. Mochiweb is slower because of design issues, not because it
>> has more functionality. Most of the functionality in mochiweb that
>> hasn't been done in misultin or cowboy so far is either irrelevant
>> (json, globals, reloader) or unrelated to the GET method, which is the
>> prime target of benchmarks since it's by far the most used method.
> This is true; mochiweb is slightly slower than other solutions for two
> main reasons:
>  * It hasn't made the backwards-incompatible change to switch
> everything to binaries (which wasn't a sensible thing to do four years
> ago when I wrote it)

Did you benchmark this yet? In my case, using binaries for headers is
both slower and more memory-hungry than plain lists. Why, I do not
know. :(

>  * It spends time doing header parsing and normalization when it
> probably shouldn't (if it was trying to compete on synthetic
> benchmarks)

Yes, that's one part where I've been focusing my effort (or lack
thereof): caring only about the request-line, Host and Connection
headers, and lazily evaluating everything else.
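To illustrate the idea (a minimal sketch; the record and function names
are mine, not misultin's or cowboy's): pick out the few headers the
server itself needs, and keep everything else as raw name/value pairs
that are only examined if the application actually asks for them:

```erlang
%% Only Host and Connection matter to the server core;
%% everything else is stored untouched for later, on-demand use.
-record(req, {host, connection, raw_headers = []}).

handle_header('Host', Value, Req) ->
    Req#req{host = Value};
handle_header('Connection', Value, Req) ->
    Req#req{connection = Value};
handle_header(Name, Value, Req = #req{raw_headers = Raw}) ->
    %% lazily evaluated: no parsing or normalization done here
    Req#req{raw_headers = [{Name, Value} | Raw]}.
```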

Another issue is that timers are slow, and you use a lot of timers
(implicitly). Misultin is a culprit there too. Basically, every time you
do a 'receive ... after N -> ... end', a timer is started to handle the
'after' clause. This is awfully slow, especially considering you do it
many times per request (one for the request-line, one per header, maybe
more?). On the other hand, there is no speed penalty when using
gen_tcp:recv/3 with a timeout. We haven't investigated the gen_tcp code
enough to know the exact reasons for this, though. To reiterate:
'receive ... end' is fast, 'receive ... after N -> ... end' is slow.
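For comparison, here is a minimal sketch of the two styles (function
names are mine; the comments only restate what we observed, not why):

```erlang
%% Active-mode receive with a timeout: every call through this
%% function starts (and cancels) a timer for the 'after' clause.
recv_once_active(Socket, Timeout) ->
    ok = inet:setopts(Socket, [{active, once}]),
    receive
        {tcp, Socket, Data}  -> {ok, Data};
        {tcp_closed, Socket} -> {error, closed}
    after Timeout ->
        {error, timeout}
    end.

%% Passive-mode recv with a timeout: same semantics, but we saw
%% no per-call timer penalty for this style.
recv_once_passive(Socket, Timeout) ->
    gen_tcp:recv(Socket, 0, Timeout).
```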

Right now we are wondering whether using {packet, raw} and calling
erlang:decode_packet ourselves would be faster than {packet, http}. It
could be, since many requests fit in a single TCP packet (the default
MSS is 1460 bytes on most systems). That means a single recv whose data
is fed directly to decode_packet and processed, instead of many recvs.
You can also receive the data from the socket directly as a binary, in
which case it's probably much faster than lists. But these are all just
ideas to explore atm.
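The idea would look something like this (a rough sketch only, function
names are mine): receive the whole request as one binary, then let
erlang:decode_packet/3 carve out the request-line and headers. Using the
http_bin/httph_bin packet types also gives you binaries directly rather
than lists:

```erlang
%% Parse one request buffer received with {packet, raw}.
parse_request(Buffer) ->
    case erlang:decode_packet(http_bin, Buffer, []) of
        {ok, {http_request, Method, Path, Version}, Rest} ->
            {ok, {Method, Path, Version}, parse_headers(Rest, [])};
        {more, _Len} ->
            more;                       %% need another recv
        {error, Reason} ->
            {error, Reason}
    end.

%% Carve out headers one by one until the empty line.
parse_headers(Buffer, Acc) ->
    case erlang:decode_packet(httph_bin, Buffer, []) of
        {ok, {http_header, _, Name, _, Value}, Rest} ->
            parse_headers(Rest, [{Name, Value} | Acc]);
        {ok, http_eoh, Body} ->
            {lists:reverse(Acc), Body}; %% headers done, rest is body
        {more, _Len} ->
            more
    end.
```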

Loïc Hoguin
