[erlang-questions] fast file sending - erlang & nginx
Morten Krogh
mk@REDACTED
Sun Nov 21 18:11:45 CET 2010
Hi Robert
That was a big speedup. So what you can do is read all the static files
into memory at server startup, and get some notification whenever the
directory changes. Then you can stop worrying about the performance of
Erlang file reading, and about sendfile as well. Actually, I would guess
your server should then perform better than any sendfile-based server.
It is not completely clear-cut, but I don't think sendfile, even when
serving from the disk cache, can outperform a plain memory buffer.
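Something along these lines, just as a rough sketch (the module name,
table name and directory handling are made up here, nothing
misultin-specific):

-module(static_cache).
-export([init/1, lookup/1]).

%% Read every regular file in Dir into a named ETS table at startup.
init(Dir) ->
    static_cache = ets:new(static_cache, [named_table, set, protected,
                                          {read_concurrency, true}]),
    {ok, Names} = file:list_dir(Dir),
    lists:foreach(
        fun(Name) ->
            case file:read_file(filename:join(Dir, Name)) of
                {ok, Bin} -> ets:insert(static_cache, {Name, Bin});
                _Error    -> ok     % skip subdirectories and unreadable files
            end
        end, Names),
    ok.

%% Serve lookups straight from memory.
lookup(Name) ->
    case ets:lookup(static_cache, Name) of
        [{_, Bin}] -> {ok, Bin};
        []         -> not_found
    end.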
I don't know if Erlang has a built-in way to notify you when a directory
changes. You could do it with kqueue (or inotify on Linux), or just
implement a simple timed poller in Erlang, along the lines of the sketch
below.
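For the poller, something as simple as this would probably do (again
just a sketch; it assumes the static_cache table from above, and the
interval is arbitrary):

-module(file_poller).
-export([start/2]).
-include_lib("kernel/include/file.hrl").

%% Re-read Path into the cache whenever its mtime changes.
start(Path, IntervalMs) ->
    spawn(fun() -> loop(Path, IntervalMs, undefined) end).

loop(Path, IntervalMs, LastMTime) ->
    {ok, Info} = file:read_file_info(Path),
    MTime = Info#file_info.mtime,
    case MTime of
        LastMTime ->
            ok;                                  % unchanged, keep cached copy
        _ ->
            {ok, Bin} = file:read_file(Path),
            ets:insert(static_cache, {filename:basename(Path), Bin})
    end,
    timer:sleep(IntervalMs),
    loop(Path, IntervalMs, MTime).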
To get further improvements, you could look at header parsing. What
happens to your numbers if you drop header parsing entirely and just
look for the URL and \r\n\r\n, or maybe just \r\n\r\n, for this simple
test?
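I mean something as dumb as this (a toy for the benchmark only; the
module name is made up and it only understands GET):

-module(toy_parse).
-export([parse_request/1]).

%% Stop at the first \r\n\r\n and pull out only the URL; ignore every header.
parse_request(Buffer) when is_binary(Buffer) ->
    case binary:split(Buffer, <<"\r\n\r\n">>) of
        [Head, _Rest] ->
            %% "GET /path HTTP/1.1" is the first line; take the second token.
            [<<"GET">>, Url | _] = binary:split(Head, <<" ">>, [global]),
            {ok, Url};
        [_Incomplete] ->
            more        % wait for more bytes from gen_tcp
    end.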
About gen_tcp, that is a serious problem and something the OTP team
should rectify. If you have to implement a NIF around kqueue or epoll,
why is the server written in Erlang to start with? You might as well
write a full server in C, with a callback into Erlang handlers.
Morten.
On 11/21/10 5:10 PM, Roberto Ostinelli wrote:
> 2010/11/21 Morten Krogh <mk@REDACTED>:
>> Robert, what throughput do you get by sending a preloaded binary instead?
>>
>> In other words, read the file into a binary once, and then just serve the
>> replies from that binary.
>>
>> Morten.
> here are the details, running ab -n 50000 -c 5 http://ubuntu.loc/image.jpeg (Document Length = 4163 bytes):
>
> NGINX:
> Time per request: 0.410 [ms] (mean)
> Time per request: 0.082 [ms] (mean, across all concurrent requests)
> Transfer rate: 52161.96 [Kbytes/sec] received
> Percentage of the requests served within a certain time (ms)
> 50% 0
> 66% 0
> 75% 0
> 80% 0
> 90% 1
> 95% 1
> 98% 1
> 99% 1
> 100% 6 (longest request)
>
> MISULTIN:
> Time per request: 1.428 [ms] (mean)
> Time per request: 0.286 [ms] (mean, across all concurrent requests)
> Transfer rate: 14525.79 [Kbytes/sec] received
> Percentage of the requests served within a certain time (ms)
> 50% 1
> 66% 2
> 75% 2
> 80% 2
> 90% 2
> 95% 2
> 98% 2
> 99% 2
> 100% 6 (longest request)
>
> MISULTIN WITH FILE READ IN MEMORY:
> Time per request: 0.795 [ms] (mean)
> Time per request: 0.159 [ms] (mean, across all concurrent requests)
> Transfer rate: 26088.81 [Kbytes/sec] received
> Percentage of the requests served within a certain time (ms)
> 50% 1
> 66% 1
> 75% 1
> 80% 1
> 90% 1
> 95% 1
> 98% 1
> 99% 1
> 100% 2 (longest request)
>
> ratios:
>
> nginx : misultin = 1 : 3.6
> nginx : misultin with cache = 1 : 2
>
> the code used to run these tests is provided below.
>
> r.
>
> ============== normal file sending ==============================
>
> -module(misultin_file).
> -export([start/1, stop/0]).
>
> % start misultin http server
> start(Port) ->
>     misultin:start_link([{port, Port}, {loop, fun(Req) -> handle_http(Req) end}]).
>
> % stop misultin
> stop() ->
>     misultin:stop().
>
> % callback on request received
> handle_http(Req) ->
>     Req:file("image.jpeg").
>
> ===========================================================
>
> ============== cached file sending =============================
>
> -module(misultin_file2).
> -export([start/1, stop/0]).
> -include_lib("kernel/include/file.hrl").
>
> % start misultin http server
> start(Port) ->
>     % load file
>     FilePath = "roberto2.jpeg",
>     {ok, Binary} = file:read_file(FilePath),
>     {ok, FileInfo} = file:read_file_info(FilePath),
>     FileSize = FileInfo#file_info.size,
>     HeadersFull = [
>         {'Content-Type', misultin_utility:get_content_type(FilePath)},
>         {'Content-Length', FileSize}
>     ],
>     misultin:start_link([{port, Port}, {loop, fun(Req) -> handle_http(Req, Binary, HeadersFull) end}]).
>
> % stop misultin
> stop() ->
>     misultin:stop().
>
> % callback on request received
> handle_http(Req, Binary, HeadersFull) ->
>     Req:stream(head, HeadersFull),
>     Req:stream(Binary).
>
> ===========================================================
>