    <div class="moz-cite-prefix">Suggestions for things to look at:<br>
      - See what data size is sent, as seen from the Erlang side. Is the
      2GB number correct?<br>
      - Verify endian-ness of the timestamps and data lengths you read
      from the file. "native"-endian may be correct, but is a bit of a
      funny thing to have in your file format. A mistake here may well
      cause your program to write more data than you intended.<br>
      <br>
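For instance, something along these lines (an untested sketch - the
function names are made up, and it assumes the 12-byte record header from
the file format quoted below, with the reply built as iodata and sent via
gen_tcp:send/2):

    %% Log the size actually handed to the socket before sending.
    send_logged(Socket, Data) ->
        error_logger:info_msg("sending ~p bytes~n", [iolist_size(Data)]),
        gen_tcp:send(Socket, Data).

    %% Decode a record header in all three endiannesses; if the Length
    %% values disagree wildly, the "native" choice is suspect.
    header_variants(<<Header:12/binary, _/binary>>) ->
        <<TsN:64/native-integer, LenN:32/native-integer>> = Header,
        <<TsL:64/little-integer, LenL:32/little-integer>> = Header,
        <<TsB:64/big-integer,    LenB:32/big-integer>>    = Header,
        [{native, TsN, LenN}, {little, TsL, LenL}, {big, TsB, LenB}].
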
As for how writev handles large values: my quick test on 64-bit Ubuntu
shows that (on a non-socket file descriptor) it returns
2147479552 (0x7FFFF000) for an input size of 2158022464 - i.e. it does
return something reasonable and positive, but writes less than 2GB.
That doesn't necessarily say anything about the behaviour on a closed
socket on CentOS, of course.

/Erik

On 26-11-2012 12:35, Peter Membrey wrote:

Hi all,

Trying to send again under a new account...

Cheers,

Pete

---------- Forwarded message ----------
From: Peter Membrey <peter@membrey.hk>
Date: 24 November 2012 21:57
Subject: Re: [erlang-bugs] VM locks up on write to socket (and now it seems to file too)
To: Patrik Nyblom <pan@erlang.org>
Cc: erlang-bugs@erlang.org


Hi guys,

Thanks for getting back in touch so quickly!

I did do an lsof on the process and I can confirm that it was
definitely a socket. However, by that time the application it had been
trying to send to had been killed. When I checked, the sockets were
showing as waiting to close. Unfortunately I didn't think to do an
lsof until after the apps had been shut down. I was hoping the VM
would recover if I killed the app that had upset it, but even after
all the connected apps had been shut down, the issue didn't resolve.

The application receives requests from a client; each request contains
two data items: the stream ID and a timestamp. Both are encoded as big
unsigned integers. The server then looks through the file referenced by
the stream ID and uses the timestamp as an index. The file format is
currently really simple; each record has the form:

<<Timestamp:64/native-integer, Length:32/native-integer, Data:Length/binary>>

There is an index file that provides an offset into the file based on
timestamp, but basically the server opens the file and reads
sequentially through it until it finds the timestamps it cares about.
In this case it reads all data with a greater timestamp until the end
of the file is reached. It's possible the client is sending an
incorrect timestamp and too much data is being read. However, the loop
is very primitive - it reads all the data in one go (see the sketch
below) before passing it back to the protocol handler to send down the
socket - so by that time, even though the response is technically
incorrect and the app has failed, it should still not cause the VM any
issues.

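For illustration, the loop is roughly of this shape (a simplified
sketch, not the actual CakeDB code - the names are made up). Note that
if Length were ever decoded with the wrong endianness, one small record
would turn into one enormous read:

    %% Simplified sequential scan over <<Ts:64, Len:32, Data:Len/binary>>
    %% records, accumulating everything with a timestamp >= FromTs.
    read_from(Fd, FromTs, Acc) ->
        case file:read(Fd, 12) of
            {ok, <<Ts:64/native-integer, Len:32/native-integer>>} ->
                {ok, Data} = file:read(Fd, Len),
                case Ts >= FromTs of
                    true  -> read_from(Fd, FromTs, [Data | Acc]);
                    false -> read_from(Fd, FromTs, Acc)
                end;
            eof ->
                {ok, lists:reverse(Acc)}
        end.
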
The data is polled every 10 seconds by the client app, so I would not
expect there to be 2GB of new data to send. I'm afraid my C skills are
somewhat limited, so I'm not sure how to put together a sample app to
try out writev. The platform is 64-bit CentOS 6.3 (equivalent to RHEL
6.3), so I'm not expecting any strange or weird behaviour at the OS
level, but of course I could be completely wrong there. The OS is
running directly on hardware, so there's no VM layer to worry about.

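That said, the suspect code path (tcp_inet_output handing writev a
single huge element) can be poked at without writing any C, by sending
an oversized binary from the Erlang shell - a rough sketch, assuming a
peer listening on an arbitrary host/port and enough RAM for a >2GB
binary:

    %% Rough repro sketch: hand the inet driver one >2GB binary and
    %% watch the syscalls (e.g. strace -e trace=writev -p <beam pid>).
    %% Host and port are arbitrary; 2158022464 matches the strace below.
    repro(Host, Port) ->
        {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {active, false}]),
        Big = binary:copy(<<0>>, 2158022464),
        gen_tcp:send(Sock, Big).
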
Hope this might offer some additional clues…

Thanks again!

Kind Regards,

Peter Membrey



On 24 November 2012 00:13, Patrik Nyblom <pan@erlang.org> wrote:
> Hi again!
>
> Could you go back to the version without the printouts and get back to the
> situation where writev loops, returning 0 (as in the strace)? If so, it
> would be really interesting to see an 'lsof' of the beam process, to see
> if this file descriptor really is open and is a socket...
>
> The thing is that writev with a vector that is not empty would never
> return 0 for a non-blocking socket. Not on any modern (i.e. not ancient)
> POSIX-compliant system, anyway. Of course it is a *really* large item you
> are trying to write there, but it should be no problem for a 64-bit Linux.
>
> Also, I'll take back what I said about finding the Erlang code; it would
> be more interesting to see what really happens at the OS/VM level in this
> case.
>
> Cheers,
> Patrik
>
>
> On 11/23/2012 01:49 AM, Loïc Hoguin wrote:
>>
>> Sending this on behalf of someone who didn't manage to get the email
>> sent to this list after 2 attempts. If someone can check whether he's
>> being held up or something, that'd be great.
>>
>> Anyway, he has a big issue, so I hope I can relay the conversation
>> reliably.
>>
>> Thanks!
>>
>> On 11/23/2012 01:45 AM, Peter Membrey wrote:
>>>
>>> From: Peter Membrey <peter@membrey.hk>
>>> Date: 22 November 2012 19:02
>>> Subject: VM locks up on write to socket (and now it seems to file too)
>>> To: erlang-bugs@erlang.org
>>>
>>>
>>> Hi guys,
>>>
>>> I wrote a simple database application called CakeDB
>>> (https://github.com/pmembrey/cakedb) that basically spends its time
>>> reading and writing files and sockets. There's very little in the way
>>> of complex logic. It is running on CentOS 6.3 with all the updates
>>> applied. I hit this problem on R15B02, so I rolled back to R15B01, but
>>> the issue remained. Erlang was built from source.
>>>
>>> The machine has two Intel X5690 CPUs, giving 12 cores plus HT. I've
>>> tried various arguments for the VM, but so far nothing has prevented
>>> the problem. At the moment I'm using:
>>>
>>> +K
>>> +A 6
>>> +sbt tnnps
>>>
>>> The issue I'm seeing is that one of the scheduler threads will hit
>>> 100% CPU usage and the entire VM will become unresponsive. When this
>>> happens, I am not able to connect via the console with attach, and
>>> entop is also unable to connect. I can still establish TCP connections
>>> to the application, but I never receive a response. A standard kill
>>> signal will cause the VM to shut down (it doesn't need -9).
>>>
>>> Due to the pedigree of the VM, I am quite willing to accept that I've
>>> made a fundamental mistake in my code. I am pretty sure that the way I
>>> am doing the file IO could result in some race conditions. However, my
>>> poor code aside, from what I understand, I still shouldn't be able to
>>> crash / deadlock the VM like this.
>>>
>>> The issue doesn't seem to be caused by load. The app can fail when
>>> it's very busy, but also when it is practically idle. I haven't been
>>> able to find a trigger or any other explanation for the failure.
>>>
>>> The thread maxing out the CPU is attempting to write data to the socket:
>>>
>>> (gdb) bt
>>> #0  0x00007f9882ab6377 in writev () from /lib64/libc.so.6
>>> #1  0x000000000058a81f in tcp_inet_output (data=0x2407570,
>>>     event=<value optimized out>) at drivers/common/inet_drv.c:9681
>>> #2  tcp_inet_drv_output (data=0x2407570, event=<value optimized out>)
>>>     at drivers/common/inet_drv.c:9601
>>> #3  0x00000000004b773f in erts_port_task_execute (runq=0x7f98826019c0,
>>>     curr_port_pp=0x7f9881639338) at beam/erl_port_task.c:858
>>> #4  0x00000000004afd83 in schedule (p=<value optimized out>,
>>>     calls=<value optimized out>) at beam/erl_process.c:6533
>>> #5  0x0000000000539ca2 in process_main () at beam/beam_emu.c:1268
>>> #6  0x00000000004b1279 in sched_thread_func (vesdp=0x7f9881639280)
>>>     at beam/erl_process.c:4834
>>> #7  0x00000000005ba726 in thr_wrapper (vtwd=0x7fff6cfe2300)
>>>     at pthread/ethread.c:106
>>> #8  0x00007f9882f78851 in start_thread () from /lib64/libpthread.so.0
>>> #9  0x00007f9882abe11d in clone () from /lib64/libc.so.6
>>> (gdb)
>>>
>>> I then tried running strace on that thread and got (indefinitely):
>>>
>>> writev(15, [{"", 2158022464}], 1)       = 0
>>> writev(15, [{"", 2158022464}], 1)       = 0
>>> writev(15, [{"", 2158022464}], 1)       = 0
>>> writev(15, [{"", 2158022464}], 1)       = 0
>>> ...
>>>
>>> From what I can tell, it's trying to write data to a socket, which is
>>> succeeding, but writing 0 bytes. From the earlier definitions in the
>>> source file, an error condition would be signified by a negative
>>> number; any other result is the number of bytes written, in this case
>>> 0. I'm not sure if this is desired behaviour or not. I've tried
>>> killing the application on the other end of the socket, but it has no
>>> effect on the VM.
>>>
>>> I have enabled debugging for the inet code, so hopefully this will
>>> give a little more insight. I am currently trying to reproduce the
>>> condition, but as I really have no idea what causes it, it's pretty
>>> much a case of wait and see.
>>>
>>>
>>> **** UPDATE ****
>>>
>>> I managed to lock up the VM again, but this time it was caused by
>>> file IO, probably from the debugging statements. Although it worked
>>> fine for some time, the last entry in the file was cut off.
>>>
>>> From GDB:
>>>
>>> (gdb) info threads
>>>   53 Thread 0x7f83e988b700 (LWP 8621)  0x00007f83ea6da54d in read () from /lib64/libpthread.so.0
>>>   52 Thread 0x7f83e8c8f700 (LWP 8622)  0x00007f83ea6d743c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
>>>   51 Thread 0x7f83e818d700 (LWP 8623)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   50 Thread 0x7f83e816b700 (LWP 8624)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   49 Thread 0x7f83e8149700 (LWP 8625)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   48 Thread 0x7f83e8127700 (LWP 8626)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   47 Thread 0x7f83e8105700 (LWP 8627)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   46 Thread 0x7f83e80e3700 (LWP 8628)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   45 Thread 0x7f83e80c1700 (LWP 8629)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   44 Thread 0x7f83e809f700 (LWP 8630)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   43 Thread 0x7f83e807d700 (LWP 8631)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   42 Thread 0x7f83e805b700 (LWP 8632)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   41 Thread 0x7f83e8039700 (LWP 8633)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   40 Thread 0x7f83e8017700 (LWP 8634)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   39 Thread 0x7f83e7ff5700 (LWP 8635)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   38 Thread 0x7f83e7fd3700 (LWP 8636)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   37 Thread 0x7f83e7fb1700 (LWP 8637)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   36 Thread 0x7f83e7f8f700 (LWP 8638)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   35 Thread 0x7f83e7f6d700 (LWP 8639)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   34 Thread 0x7f83e7f4b700 (LWP 8640)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   33 Thread 0x7f83e7f29700 (LWP 8641)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   32 Thread 0x7f83e7f07700 (LWP 8642)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   31 Thread 0x7f83e7ee5700 (LWP 8643)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   30 Thread 0x7f83e7ec3700 (LWP 8644)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   29 Thread 0x7f83e7ea1700 (LWP 8645)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   28 Thread 0x7f83e7e7f700 (LWP 8646)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   27 Thread 0x7f83d7c5a700 (LWP 8647)  0x00007f83ea6db09d in waitpid () from /lib64/libpthread.so.0
>>>   26 Thread 0x7f83d7c53700 (LWP 8648)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   25 Thread 0x7f83d7252700 (LWP 8649)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   24 Thread 0x7f83d6851700 (LWP 8650)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   23 Thread 0x7f83d5e50700 (LWP 8651)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   22 Thread 0x7f83d544f700 (LWP 8652)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   21 Thread 0x7f83d4a4e700 (LWP 8653)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   20 Thread 0x7f83d404d700 (LWP 8654)  0x00007f83ea20be7d in write () from /lib64/libc.so.6
>>>   19 Thread 0x7f83d364c700 (LWP 8655)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   18 Thread 0x7f83d2c4b700 (LWP 8656)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   17 Thread 0x7f83d224a700 (LWP 8657)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   16 Thread 0x7f83d1849700 (LWP 8658)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   15 Thread 0x7f83d0e48700 (LWP 8659)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   14 Thread 0x7f83d0447700 (LWP 8660)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   13 Thread 0x7f83cfa46700 (LWP 8661)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   12 Thread 0x7f83cf045700 (LWP 8662)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   11 Thread 0x7f83ce644700 (LWP 8663)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>   10 Thread 0x7f83cdc43700 (LWP 8664)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    9 Thread 0x7f83cd242700 (LWP 8665)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    8 Thread 0x7f83cc841700 (LWP 8666)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    7 Thread 0x7f83cbe40700 (LWP 8667)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    6 Thread 0x7f83cb43f700 (LWP 8668)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    5 Thread 0x7f83caa3e700 (LWP 8669)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    4 Thread 0x7f83ca03d700 (LWP 8670)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    3 Thread 0x7f83c963c700 (LWP 8671)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>>    2 Thread 0x7f83c8c3b700 (LWP 8672)  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>> * 1 Thread 0x7f83eb3a8700 (LWP 8597)  0x00007f83ea211d03 in select () from /lib64/libc.so.6
>>> (gdb)
>>>
>>>
>>> (gdb) bt
>>> #0  0x00007f83ea20be7d in write () from /lib64/libc.so.6
>>> #1  0x00007f83ea1a2583 in _IO_new_file_write () from /lib64/libc.so.6
>>> #2  0x00007f83ea1a3b35 in _IO_new_do_write () from /lib64/libc.so.6
>>> #3  0x00007f83ea1a21fd in _IO_new_file_xsputn () from /lib64/libc.so.6
>>> #4  0x00007f83ea17589d in vfprintf () from /lib64/libc.so.6
>>> #5  0x00007f83ea18003a in printf () from /lib64/libc.so.6
>>> #6  0x000000000058f0e8 in tcp_recv (desc=0x2c3d350, request_len=0)
>>>     at drivers/common/inet_drv.c:8976
>>> #7  0x000000000058f63a in tcp_inet_input (data=0x2c3d350,
>>>     event=<value optimized out>) at drivers/common/inet_drv.c:9326
>>> #8  tcp_inet_drv_input (data=0x2c3d350, event=<value optimized out>)
>>>     at drivers/common/inet_drv.c:9604
>>> #9  0x00000000004b770f in erts_port_task_execute (runq=0x7f83e9d5d3c0,
>>>     curr_port_pp=0x7f83e8dc6e78) at beam/erl_port_task.c:851
>>> #10 0x00000000004afd83 in schedule (p=<value optimized out>,
>>>     calls=<value optimized out>) at beam/erl_process.c:6533
>>> #11 0x0000000000539ca2 in process_main () at beam/beam_emu.c:1268
>>> #12 0x00000000004b1279 in sched_thread_func (vesdp=0x7f83e8dc6dc0)
>>>     at beam/erl_process.c:4834
>>> #13 0x00000000005bb3e6 in thr_wrapper (vtwd=0x7fffe8266da0)
>>>     at pthread/ethread.c:106
>>> #14 0x00007f83ea6d3851 in start_thread () from /lib64/libpthread.so.0
>>> #15 0x00007f83ea21911d in clone () from /lib64/libc.so.6
>>> (gdb)
>>>
>>> (gdb) bt
>>> #0  0x00007f83ea6da54d in read () from /lib64/libpthread.so.0
>>> #1  0x0000000000554b6e in signal_dispatcher_thread_func
>>>     (unused=<value optimized out>) at sys/unix/sys.c:2776
>>> #2  0x00000000005bb3e6 in thr_wrapper (vtwd=0x7fffe8266c80)
>>>     at pthread/ethread.c:106
>>> #3  0x00007f83ea6d3851 in start_thread () from /lib64/libpthread.so.0
>>> #4  0x00007f83ea21911d in clone () from /lib64/libc.so.6
>>> (gdb)
>>>
>>> (gdb) bt
>>> #0  0x00007f83ea215ae9 in syscall () from /lib64/libc.so.6
>>> #1  0x00000000005bba35 in wait__ (e=0x2989390)
>>>     at pthread/ethr_event.c:92
>>> #2  ethr_event_wait (e=0x2989390) at pthread/ethr_event.c:218
>>> #3  0x00000000004ae5bd in erts_tse_wait (fcalls=<value optimized out>,
>>>     esdp=0x7f83e8e2c440, rq=0x7f83e9d5e7c0) at beam/erl_threads.h:2319
>>> #4  scheduler_wait (fcalls=<value optimized out>, esdp=0x7f83e8e2c440,
>>>     rq=0x7f83e9d5e7c0) at beam/erl_process.c:2087
>>> #5  0x00000000004afb94 in schedule (p=<value optimized out>,
>>>     calls=<value optimized out>) at beam/erl_process.c:6467
>>> #6  0x0000000000539ca2 in process_main () at beam/beam_emu.c:1268
>>> #7  0x00000000004b1279 in sched_thread_func (vesdp=0x7f83e8e2c440)
>>>     at beam/erl_process.c:4834
>>> #8  0x00000000005bb3e6 in thr_wrapper (vtwd=0x7fffe8266da0)
>>>     at pthread/ethread.c:106
>>> #9  0x00007f83ea6d3851 in start_thread () from /lib64/libpthread.so.0
>>> #10 0x00007f83ea21911d in clone () from /lib64/libc.so.6
>>> (gdb)
>>>
>>>
>>> (gdb) bt
>>> #0  0x00007f83ea6db09d in waitpid () from /lib64/libpthread.so.0
>>> #1  0x0000000000555a9f in child_waiter (unused=<value optimized out>)
>>>     at sys/unix/sys.c:2700
>>> #2  0x00000000005bb3e6 in thr_wrapper (vtwd=0x7fffe8266d50)
>>>     at pthread/ethread.c:106
>>> #3  0x00007f83ea6d3851 in start_thread () from /lib64/libpthread.so.0
>>> #4  0x00007f83ea21911d in clone () from /lib64/libc.so.6
>>> (gdb)
>>>
>>>
>>> **** END UPDATE ****
>>>
>>>
>>> I'm happy to provide any information I can, so please don't hesitate
>>> to ask.
>>>
>>> Thanks in advance!
>>>
>>> Kind Regards,
>>>
>>> Peter Membrey
>>>
>>
>>
>
> _______________________________________________
> erlang-bugs mailing list
> erlang-bugs@erlang.org
> http://erlang.org/mailman/listinfo/erlang-bugs

    <div class="moz-signature">-- <br>
      <div style="color: black; text-align: center;"> <span>Mobile: +
          45 26 36 17 55</span> <span> </span> <span style="color:
          black; ">| Skype: eriksoesorensen</span> <span> </span> <span
          style="color: black; ">| Twitter: @eriksoe</span>
      </div>
      <div style="text-align: center; color: gray;"> <span>Trifork A/S
           |  Margrethepladsen 4  |  DK-8000 Aarhus C | </span> <a
          href="http://www.trifork.com/"><span style="text-decoration:
            underline; color: gray;">www.trifork.com</span></a>
      </div>
    </div>
  </body>
</html>