[erlang-questions] UDP concurrent server

Bogdan Andu bog495@REDACTED
Wed Dec 9 18:29:26 CET 2015

Update to my previous email:

Some tests led to the following conclusion:

in Erlang/OTP 17 [erts-6.3] the leak does not appear at all:
entop output:
Node: 'mps_dbg@REDACTED' (Disconnected) (17/6.3) unix (linux 4.2.6)
CPU:2 SMP +A:10
Time: local time 19:21:51, up for 000:00:02:45, 0ms latency,
Processes: total 53 (RQ 0) at 159238 RpI using 4516.0k (4541.4k allocated)
Memory: Sys 8348.8k, Atom 190.9k/197.7k, Bin 134.3k, Code 4737.9k, Ets

It is Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:2:2]
[async-threads:10] [kernel-poll:false]

where the leak happens. Within 2 minutes I accumulate about 220 MB of binaries.

So maybe something changed in OTP 18 that needs to be taken into account.

Longer-running tests must be done before drawing firmer conclusions.
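For anyone reproducing this, a quick way to watch that growth from the shell is to sample erlang:memory(binary) over time (a throwaway sketch; the function name and interval are arbitrary):

```erlang
%% Sample the VM-wide binary allocation N times, once per second.
%% A leak like the one above shows up as a number that keeps climbing
%% across samples instead of settling after garbage collection.
monitor_binaries(0) ->
    ok;
monitor_binaries(N) when N > 0 ->
    io:format("binary memory: ~p bytes~n", [erlang:memory(binary)]),
    timer:sleep(1000),
    monitor_binaries(N - 1).
```

Calling monitor_binaries(120) from the shell covers the same 2-minute window as the test above.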

On Wed, Dec 9, 2015 at 6:13 PM, Fred Hebert <mononcqc@REDACTED> wrote:

> On 12/09, Bogdan Andu wrote:
>> init([Port, Ip]) ->
>>        process_flag(trap_exit, true),
>>        {ok, Sock} = gen_udp:open(Port, [binary,
>>                            {active, false},
>>                             {reuseaddr, true},
>>                            {ip, Ip}
>>        ]),
>>        {ok, #udp_conn_state{sock = Sock}, 0}.
>>
>> handle_info({udp, Socket, Host, Port, Bin}, State) ->
>>     {noreply, State, 1};
>> handle_info(timeout, #udp_conn_state{sock = Sock} = State) ->
>>    inet:setopts(Sock, [{active, once}]),
>>    {noreply, State};
>> handle_info(Info, State) ->
>>    {noreply, State}.
> Uh, interesting. So one thing I'd change early would be to go:
>    handle_info({udp, Socket, Host, Port, Bin},
>                State = #udp_conn_state{sock = Sock}) ->
>        inet:setopts(Sock, [{active,once}]),
>        {noreply, State};
> (and do the same in `init/1')
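Put together, Fred's suggestion amounts to something like the following sketch (the record definition is assumed from the original post; only the sock field is known):

```erlang
-record(udp_conn_state, {sock}).

init([Port, Ip]) ->
    process_flag(trap_exit, true),
    {ok, Sock} = gen_udp:open(Port, [binary,
                                     {active, once},    %% armed from the start
                                     {reuseaddr, true},
                                     {ip, Ip}]),
    {ok, #udp_conn_state{sock = Sock}}.

handle_info({udp, _Socket, _Host, _Port, _Bin},
            State = #udp_conn_state{sock = Sock}) ->
    %% Re-arm the socket right away instead of bouncing through a
    %% timeout message; no artificial 1 ms delay per packet.
    inet:setopts(Sock, [{active, once}]),
    {noreply, State};
handle_info(_Info, State) ->
    {noreply, State}.
```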
@Fred: yes, you are right, it is cleaner and faster that way.
But initially I wanted to keep these concerns separate.

> This at least would let you consume information much faster by avoiding
> manual 1 ms sleeps everywhere. Even a `0' value may help. It doesn't
> explain the leaking at all though.

> What would explain it is that if you're not matching the message (as in
> your original email), then you never set the socket to 'active' again, and
> you never receive traffic.
Not matching the message caused no leaks.

> If that's the problem, then you have been comparing a process that
> receives traffic to a process that does not. It certainly would explain
> the bad behaviour you've seen.
> If you want to try to see if a garbage collection would help, you can try
> the 'recon' library and call 'recon:bin_leak(10)' and it will take a
> snapshot, run a GC on all processes, then return you those that lost the
> most memory. If yours is in there, then adding 'hibernate' calls from time
> to time (say, every 10,000 packets) could help keep things clean.
> It sucks, but that might be what is needed if the shape of your data is
> not amenable to clean GCs. If that doesn't solve it, then we get far funner
> problems with memory allocation.
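The periodic-hibernate idea could be sketched like this (the count field is an assumed addition to the state record; 10,000 is Fred's suggested interval):

```erlang
-record(udp_conn_state, {sock, count = 0}).

handle_info({udp, _Socket, _Host, _Port, _Bin},
            State = #udp_conn_state{sock = Sock, count = N}) ->
    inet:setopts(Sock, [{active, once}]),
    NewState = State#udp_conn_state{count = N + 1},
    case (N + 1) rem 10000 of
        %% Returning 'hibernate' forces a full-sweep GC and compacts
        %% the process, dropping refc-binary references it still holds.
        0 -> {noreply, NewState, hibernate};
        _ -> {noreply, NewState}
    end.
```

recon:bin_leak/1 first tells you whether a GC helps at all; hibernation is only worth adding if the process actually shows up in that list.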