<div dir="ltr"><div><div><div><div><div><div><div><div><div>After more tests the basic questions that remains ..<br><br></div>Is there a way to have more than one process be blocked<br></div>in gen_udp:recv/2 call as this seems to not be possible, <br>probably because the way udp sockets work.<br><br></div>Sequentially works as expected, but when when I try to spawn another process<br></div>that makes and attempt to execute gen_udp:recv/2 while the first process already does<br>gen_udp:recv/2 , the second process gives elready error. This means that 2 process <br>cannot concurrently do gen_udp:recv/2 .<br><br></div>In scenario with socket {active, once} or {active, true} there is only one process that can<br></div>receive the message from socket (the one that does gen_udp:open/2 ), <br>and for multi-threaded applications this quickly can become a bottleneck.<br></div><div>In this case, however, elready error disappears of course. </div>.<br></div><div>I tried both variants and both have disavantages.<br></div><div><br></div>Is there an idiom for designing a udp concurrent server in Erlang?<br></div>So far, it seems impossible.</div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 8, 2015 at 1:18 PM, Bogdan Andu <span dir="ltr"><<a href="mailto:bog495@gmail.com" target="_blank">bog495@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div><div>Hi,<br><br></div>I try to build a concurrent UDP server in Erlang,<br><br></div>The socket is opened in a supervisor like this:<br>init([]) -><br> %%xpp_config_lib:reload_config_file(),<br> [{port, Port}] = ets:lookup(config, port),<br> [{listen, IPv4}] = ets:lookup(config, listen),<br> <br> %% for worker<br> [{ssl_recv_timeout, SslRecvTimeout}] = ets:lookup(config, ssl_recv_timeout),<br> <br><br> {ok, Sock} = gen_udp:open(Port, [binary,<br> {active, false},<br> {reuseaddr, true},<br> {ip, IPv4},<br> {mode, binary}<br> ]), <br><br><br> MpsConn = {mps_conn_fsm,{mps_conn_fsm, start_link, [Sock, SslRecvTimeout, false]},<br> temporary, 5000, worker, [mps_conn_fsm]},<br> <br> {ok, {{simple_one_for_one, 0, 1}, [MpsConn]}}.<br><br></div> and in worker I have:<br><br><br> init([Sock, SslRecvTimeout, Inet6]) -><br> process_flag(trap_exit, true),<br> <br> {ok, recv_pckt, #ssl_conn_state{lsock = Sock, ssl_recv_timeout = SslRecvTimeout, <br> conn_pid = self(), inet6 = Inet6}, 0}.<br><br>%% --------------------------------------------------------------------<br>%% Func: StateName/2<br>%% Returns: {next_state, NextStateName, NextStateData} |<br>%% {next_state, NextStateName, NextStateData, Timeout} |<br>%% {stop, Reason, NewStateData}<br>%% --------------------------------------------------------------------<br>recv_pckt(Event, #ssl_conn_state{lsock = ListenSock, inet6 = Inet6, <br> ssl_recv_timeout = SslRecvTimeout} = StateData) -><br>%% io:format("**epp_login~n", []),<br>%% gen_udp:close(ListenSock),<br> {ok, {Address, Port, Packet}} = gen_udp:recv(ListenSock, 0, SslRecvTimeout),<br> io:format("~p~n", [Packet]),<br> gen_udp:close(ListenSock),<br> {stop, normal, StateData}.<br>%% {next_state, recv_pckt, StateData, 0}.<br><br></div>.. 
> and in the Erlang shell:
>
> Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false]
>
> (mps_dbg@10.10.13.104)1>
> (mps_dbg@10.10.13.104)1> mps_conn_sup:start_child().
> {ok,<0.62.0>}
> (mps_dbg@10.10.13.104)2> mps_conn_sup:start_child().
> {ok,<0.64.0>}
>
> =ERROR REPORT==== 8-Dec-2015::13:09:55 ===
> [<0.64.0>]:[2]:TERMINATE:REASON={{badmatch,{error,ealready}},
>     [{mps_conn_fsm,recv_pckt,2,
>          [{file,"/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"},
>           {line,80}]},
>      {gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,518}]},
>      {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}:undefined
> (mps_dbg@10.10.13.104)3>
> =ERROR REPORT==== 8-Dec-2015::13:09:55 ===
> ** State machine <0.64.0> terminating
> ** Last event in was timeout
> ** When State == recv_pckt
> ** Data == {ssl_conn_state,#Port<0.1632>,60000,undefined,undefined,
>                undefined,[],[],<0.64.0>,undefined,undefined,
>                undefined,"en",undefined,false,undefined,
>                undefined,undefined,undefined,undefined,
>                false,<<>>,<<>>,undefined,0}
> ** Reason for termination =
> ** {{badmatch,{error,ealready}},
>     [{mps_conn_fsm,recv_pckt,2,
>          [{file,"/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"},
>           {line,80}]},
>      {gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,518}]},
>      {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>
> =CRASH REPORT==== 8-Dec-2015::13:09:55 ===
>   crasher:
>     initial call: mps_conn_fsm:init/1
>     pid: <0.64.0>
>     registered_name: []
>     exception exit: {{badmatch,{error,ealready}},
>                      [{mps_conn_fsm,recv_pckt,2,
>                           [{file,"/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"},
>                            {line,80}]},
>                       {gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,518}]},
>                       {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>       in function gen_fsm:terminate/7 (gen_fsm.erl, line 626)
>     ancestors: [mps_conn_sup,<0.60.0>,<0.56.0>]
>     messages: []
>     links: [<0.61.0>]
>     dictionary: []
>     trap_exit: true
>     status: running
>     heap_size: 610
>     stack_size: 27
>     reductions: 295
>     neighbours:
>
> =SUPERVISOR REPORT==== 8-Dec-2015::13:09:55 ===
>      Supervisor: {local,mps_conn_sup}
>      Context:    child_terminated
>      Reason:     {{badmatch,{error,ealready}},
>                   [{mps_conn_fsm,recv_pckt,2,
>                        [{file,"/home/andu/remote/hp/home/andu/web/mps/src/mps_conn_fsm.erl"},
>                         {line,80}]},
>                    {gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,518}]},
>                    {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
>      Offender:   [{pid,<0.64.0>},
>                   {id,mps_conn_fsm},
>                   {mfargs,{mps_conn_fsm,start_link,undefined}},
>                   {restart_type,temporary},
>                   {shutdown,5000},
>                   {child_type,worker}]
>
> When I try to start another concurrent worker I get the ealready error.
>
> How can this error be avoided?
>
> The option to make the socket active introduces a bottleneck, as all messages are sent to the controlling process.
>
> Is there any configuration parameter, at the OS kernel level or via inet_dist, to fix the error?
>
> OS:
> Linux localhost 4.0.4-301.fc22.x86_64 #1 SMP Thu May 21 13:10:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>
> Any help very much appreciated,
>
> Bogdan
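
To make the idiom question above concrete: the usual pattern is to keep a single process as the only reader of the socket and fan incoming datagrams out to workers. UDP is connectionless, so there is no equivalent of gen_tcp's accept-per-connection handoff; one owner process loops in gen_udp:recv/2 and spawns a short-lived handler per packet, so the ealready clash can never occur. A minimal sketch, with made-up module and function names:

-module(udp_dispatch).
-export([start/1]).

%% The only process that ever calls gen_udp:recv/2 on this socket,
%% so a second concurrent reader (and ealready) is impossible.
start(Port) ->
    {ok, Sock} = gen_udp:open(Port, [binary, {active, false}]),
    loop(Sock).

loop(Sock) ->
    case gen_udp:recv(Sock, 0) of
        {ok, {Addr, FromPort, Packet}} ->
            %% Hand the datagram to a short-lived worker; replies can be
            %% sent from any process with gen_udp:send/4.
            spawn(fun() -> handle(Sock, Addr, FromPort, Packet) end),
            loop(Sock);
        {error, Reason} ->
            exit({recv_failed, Reason})
    end.

handle(Sock, Addr, Port, Packet) ->
    %% Application-specific processing would go here; echo as a placeholder.
    gen_udp:send(Sock, Addr, Port, Packet).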
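The same fan-out works in active mode, and in practice the controlling process is rarely the bottleneck because it does almost no work per datagram: it receives one {udp, ...} message, spawns a handler, and immediately re-arms the socket with {active, once}. A sketch along the same lines (again, names are invented):

-module(udp_active).
-export([start/1]).

start(Port) ->
    {ok, Sock} = gen_udp:open(Port, [binary, {active, once}]),
    loop(Sock).

loop(Sock) ->
    receive
        {udp, Sock, Addr, FromPort, Packet} ->
            spawn(fun() -> handle(Sock, Addr, FromPort, Packet) end),
            %% Re-arm the socket for exactly one more datagram.
            inet:setopts(Sock, [{active, once}]),
            loop(Sock)
    end.

handle(Sock, Addr, Port, Packet) ->
    gen_udp:send(Sock, Addr, Port, Packet).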
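As for an OS-level knob: on Linux 3.9 and later, SO_REUSEPORT lets several sockets bind the same UDP port, and the kernel load-balances incoming datagrams across them, so each worker can own its own socket and block in its own gen_udp:recv/2. gen_udp has no named option for this in OTP 18, but it can be set as a raw socket option. This is an untested sketch, and the constants are Linux-specific assumptions (SOL_SOCKET = 1, SO_REUSEPORT = 15):

%% Open one such socket per worker process; each worker then has a
%% private socket bound to the same port (Linux >= 3.9 only).
open_reuseport(Port) ->
    gen_udp:open(Port, [binary,
                        {active, false},
                        {reuseaddr, true},
                        %% {raw, Level, Opt, ValueBin}: SOL_SOCKET = 1,
                        %% SO_REUSEPORT = 15 on Linux
                        {raw, 1, 15, <<1:32/native>>}]).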