[erlang-questions] Concurrent requests with ibrowse

Edwin Fine <>
Sat Feb 14 08:19:11 CET 2009


You may be having an ephemeral port starvation problem.

Try increasing the number of local ports:

sudo /sbin/sysctl -w net.ipv4.ip_local_port_range='1024 65535'
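To confirm the change took effect (assuming a Linux /proc layout; the path may differ on other systems):

```shell
# Show the ephemeral port range currently in effect (prints "low high")
cat /proc/sys/net/ipv4/ip_local_port_range
```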

If that doesn't help, try decreasing TIME_WAIT (but first read
http://www.erlang.org/pipermail/erlang-questions/2008-September/038154.html and
http://www.developerweb.net/forum/showthread.php?t=2941).

# Set TIME_WAIT timeout to 30 seconds instead of 120
sudo /sbin/sysctl -w net.ipv4.tcp_fin_timeout=30
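To check whether TIME_WAIT sockets really are piling up while your test loop runs, you can count them straight out of /proc (Linux-specific; the hex state 06 in /proc/net/tcp means TIME_WAIT):

```shell
# Count sockets in TIME_WAIT: column 4 of /proc/net/tcp is the
# connection state in hex, and 06 is TIME_WAIT. The header line's
# 4th field is "st", so it never matches.
count=$(awk '$4 == "06"' /proc/net/tcp | wc -l)
echo "TIME_WAIT sockets: $count"
```

If that number climbs toward the size of your ephemeral port range during a run, port starvation is the likely culprit.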


2009/2/13 steve ellis <>

> We're trying to build an app that uses ibrowse to make concurrent requests.
> We are not able to get more than a few concurrent requests at a time to
> return successfully. We repeatedly get "conn_failed" (when we know the sites
> are available). It seems like we are running out of sockets when we loop
> through our list of URLs.
>
> Here's how it happens. We start ibrowse and run our test code. The first
> loop of about 60 requests or so goes fine. A few seconds later, a second loop
> returns fewer; the third loop fewer still. The more requests we make, the
> fewer successful requests we have.
>
> Others appear to have had this problem (see
> http://www.trapexit.org/forum/viewtopic.php?p=44231). We've tried tweaking
> the settings of two machines with different distros (openSUSE 10.3 and
> Fedora Core 4). After making the adjustments on each machine, csh limit says
> we have 30000 descriptors (set through a modification we made to
> /etc/security/limits.conf). The /etc/sysctl.conf file has the following
> settings.
>
> net.ipv4.conf.all.rp_filter = 1
> #net.ipv6.conf.all.forwarding = 1
> fs.inotify.max_user_watches = 65536
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
> net.ipv4.tcp_rmem = 4096 87380 16777216
> net.ipv4.tcp_wmem = 4096 65536 16777216
> net.ipv4.tcp_syncookies = 1
> net.ipv4.ip_local_port_range = 1024 4999
> net.ipv4.tcp_mem = 50576   64768   98152
> net.core.netdev_max_backlog = 2500
> kern.maxfiles = 25000
> kern.maxfilesperproc = 20000
>
> We'd be happy if we could get a few thousand requests to return
> successfully. What are we doing wrong?
>
> Here is one way we have tried invoking ibrowse (probably the simplest).
>
> -module(test7).
> -export([run/0,  send_reqs/0, do_send_req/1]).
>
> run() ->
>     proc_lib:spawn(?MODULE, send_reqs, []).
>
> some_urls() ->
>     ["http://www.url1.com",
>      "http://www.url2.com"
>      %% and so on...
>     ].
>
> send_reqs() ->
>     spawn_workers(some_urls()).
>
> spawn_workers(Urls) ->
>     lists:foreach(fun do_spawn/1, Urls).
>
> do_spawn(Url) ->
>     proc_lib:spawn_link(?MODULE, do_send_req, [Url]).
>
> do_send_req(Url) ->
>     Result = (catch ibrowse:send_req(Url, [], get, [], [], 10000)),
>     case Result of
>         {ok, SCode, _H, _B} ->
>             io:format("~p ~p~n", [Url, SCode]);
>         Err ->
>             io:format("~p ~p~n", [Url, Err])
>     end.
>
> Here are some references that have helped us get this far (which
> unfortunately isn't that far at all). :)
>
> Sockets/File Descriptors
> http://oscar.hellstrom.st/blog/2007/04/benchmarking-with-tsung
> http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html
>
> TCP
> http://www.metabrew.com/article/tag/http/
> http://ipsysctl-tutorial.frozentux.net/ipsysctl-tutorial.html
>
> Any help will be much appreciated!
>
> Steve
>
> _______________________________________________
> erlang-questions mailing list
> 
> http://www.erlang.org/mailman/listinfo/erlang-questions
>