[erlang-questions] Deterministic destruction of NIF resources

Lukas Larsson garazdawi@REDACTED
Fri Jul 15 23:29:09 CEST 2016


Apparently I wasn't clear enough. You want to look at the 'exiting' trace
flag, not the 'procs' flag; i.e. see this code:


-module(trace).
-export([go/0]).

go() ->
    TFun = fun TFun() ->
                   receive
                       Other ->
                           io:format("~p~n", [Other]),
                           TFun()
                   end
           end,
    TPid = spawn(TFun),
    erlang:trace(self(), true, [set_on_spawn, procs, exiting, {tracer, TPid}]),
    Pid = spawn(fun loop/0),
    timer:sleep(100),
    exit(Pid, die).

loop() ->
    [ets:new(hej, []) || _ <- lists:seq(1, 100)],
    receive after infinity -> ok end.


when you run trace:go() you will get a bunch of printouts:

1> trace:go().
{trace,<0.33.0>,spawn,<0.36.0>,{erlang,apply,[#Fun<trace.1.87118278>,[]]}}
{trace,<0.36.0>,in_exiting,0}
{trace,<0.36.0>,exit,die}
true
{trace,<0.36.0>,out_exiting,0}
{trace,<0.36.0>,in_exiting,0}
{trace,<0.36.0>,out_exiting,0}
{trace,<0.36.0>,in_exiting,0}
{trace,<0.36.0>,out_exiting,0}
{trace,<0.36.0>,in_exiting,0}
{trace,<0.36.0>,out_exiting,0}
{trace,<0.36.0>,in_exiting,0}
{trace,<0.36.0>,out_exiting,0}
{trace,<0.36.0>,in_exiting,0}
{trace,<0.36.0>,out_exited,0}


As you can see, the 'exit' trace message is sent relatively early, but the
heap of the process is not cleaned up until the 'out_exited' trace message
is sent. In my example the process gets all of these extra schedulings
after it has decided to die in order to clean up the ETS tables that it is
responsible for. There are a bunch of reasons why this may happen, not just
ETS table cleanup. What the reason is cannot be observed without
instrumenting the VM.
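
If you need a synchronization point, a minimal sketch would be to have the
tracer block until the 'out_exited' trace message arrives, since that is
when the dying process has actually finished its cleanup work. (The
function name wait_for_out_exited/1 below is just something I made up for
illustration; it assumes the calling process is the tracer and that the
process was traced with the 'exiting' flag, as in go/0 above.)

wait_for_out_exited(Pid) ->
    %% Sketch only: the calling process must be the tracer for Pid and
    %% Pid must be traced with the 'exiting' flag.
    receive
        {trace, Pid, out_exited, _} ->
            %% Cleanup has finished; the process is really gone now.
            ok;
        {trace, Pid, _, _} ->
            %% exit / in_exiting / out_exiting messages; keep waiting.
            wait_for_out_exited(Pid);
        {trace, Pid, _, _, _} ->
            %% e.g. spawn trace messages carry five elements; keep waiting.
            wait_for_out_exited(Pid)
    after 5000 ->
        {error, timeout}
    end.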

On Fri, Jul 15, 2016 at 9:05 PM, Roger Lipscombe <roger@REDACTED>
wrote:

> Assuming I'm tracing correctly (see below), that's not the cause. I
> see the 'exit' trace events fairly soon after the process dies, but I
> don't see the resource destructor being called until later, and
> usually in a clump:
>
> Alloc wrapper resource 0x2b038eb01620
> sup <0.2167.0> spawned <0.3013.0>
> Alloc wrapper resource 0x2b038ec80328
> <0.3013.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.3049.0>
> Alloc wrapper resource 0x2b038ec801e0
> Free wrapper resource 0x2b038ec80328
> Free wrapper resource 0x2b038eb01620
> <0.3049.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.15256.0>
> Alloc wrapper resource 0x2b038ea80c70
> <0.15256.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.17734.0>
> Alloc wrapper resource 0x2b038eac7de0
> <0.17734.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.18825.0>
> Alloc wrapper resource 0x2b038eb48098
> <0.18825.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.19267.0>
> Alloc wrapper resource 0x2b038eb00260
> <0.19267.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.19616.0>
> Alloc wrapper resource 0x2b038ea80b88
> <0.19616.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.20068.0>
> Alloc wrapper resource 0x2b038eac2930
> <0.20068.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.20424.0>
> Alloc wrapper resource 0x2b038eac1328
> <0.20424.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.20649.0>
> Alloc wrapper resource 0x2b038eac2aa0
> <0.20649.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.20663.0>
> Alloc wrapper resource 0x2b038ea84118
> Free wrapper resource 0x2b038eac2aa0
> Free wrapper resource 0x2b038eac1328
> Free wrapper resource 0x2b038eac2930
> Free wrapper resource 0x2b038ea80b88
> Free wrapper resource 0x2b038eb00260
> Free wrapper resource 0x2b038eb48098
> Free wrapper resource 0x2b038eac7de0
> Free wrapper resource 0x2b038ea80c70
> Free wrapper resource 0x2b038ec801e0
> <0.20663.0> exit {timeout,moo}
> sup <0.2167.0> spawned <0.20679.0>
> Alloc wrapper resource 0x2b038ea80b88
>
> The "... wrapper resource ..." lines are from the NIF; the other lines
> are from tracing:
>
> SupPid = whereis(agents_sup).
>
> TFun = fun TFun(State) ->
>         receive
>             {trace, ByPid, spawn, Pid, _} when ByPid =:= SupPid ->
>                 io:format("sup ~p spawned ~p~n", [ByPid, Pid]),
>                 State2 = [Pid | State],
>                 TFun(State2);
>             {trace, Pid, exit, Reason} ->
>                 case lists:member(Pid, State) of
>                     true ->
>                         io:format("~p exit ~p~n", [Pid, Reason]);
>                     _ ->
>                         ok
>                 end,
>                 TFun(State);
>             _Other ->
>                 TFun(State)
>         end
>     end.
> TPid = spawn(fun() -> TFun([]) end).
> erlang:trace(all, true, [procs, {tracer, TPid}]).
>
>
> On 15 July 2016 at 15:53, Lukas Larsson <garazdawi@REDACTED> wrote:
> > How do you know when the process dies?
> >
> > If you aren't already, you should use the trace flag 'exiting'
> > (http://erlang.org/doc/man/erlang.html#trace-3) to figure out when a
> > process has actually died. ERTS will delay process death after you send
> > it a termination signal for various reasons.
> >
> > On Fri, Jul 15, 2016 at 4:39 PM, Roger Lipscombe <roger@REDACTED>
> > wrote:
> >>
> >> Sorry, let me rephrase: they're not garbage collected *when* that
> >> process dies. The destructor is sometimes called immediately, and
> >> sometimes at some point in the future. Can I make this deterministic?
> >>
> >> And, no, the resource has not been sent to another process or stored in
> >> ETS.
> >>
> >> On 15 July 2016 at 15:10, Sergej Jurečko <sergej.jurecko@REDACTED>
> >> wrote:
> >> >
> >> > On 15 Jul 2016, at 15:57, Roger Lipscombe <roger@REDACTED>
> >> > wrote:
> >> >
> >> > because they're not garbage collected when that
> >> > process dies.
> >> >
> >> >
> >> > Then there is an outstanding reference to the resource. Has it been
> >> > sent to another process or stored in an ETS table?
> >> >
> >> >
> >> > Sergej