<br><br><div class="gmail_quote">On Thu, Dec 15, 2011 at 11:48 AM, Zabrane Mickael <span dir="ltr"><<a href="mailto:zabrane3@gmail.com">zabrane3@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">> On thread initialization the pipes get created and read pipe is placed in epoll/kqueue. Write pipe is from erlang to NIF, and read pipe is for NIF to read that data. I have a simple struct:<br>
>> typedef struct kqmsg
>> {
>>     char       what;
>>     int        fd;
>>     ErlNifPid  pid;
>>     void      *data;
>> } kqmsg;
>> So I just fill up this struct with whatever info is required and do: write(pipe_write, &msg, sizeof(struct kqmsg));
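To make that concrete, the Erlang-facing side of such a write looks roughly like the sketch below. The names (pipe_write, ADD_FD, watch_fd) are made up for the example and are not the actual ones from my code.

/* Sketch of a NIF that queues an "add this fd" command for the
 * kqueue/epoll thread over the write end of the pipe.
 * pipe_write and ADD_FD are illustrative names. */
#include <unistd.h>
#include "erl_nif.h"

#define ADD_FD 'a'            /* example command tag for the 'what' field */

extern int pipe_write;        /* write end of the pipe, set up at load time */

typedef struct kqmsg {
    char       what;          /* command for the poll thread */
    int        fd;            /* socket the command applies to */
    ErlNifPid  pid;           /* Erlang process to notify later via enif_send */
    void      *data;          /* optional extra payload */
} kqmsg;

static ERL_NIF_TERM
watch_fd(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
    kqmsg msg;

    if (argc != 1 || !enif_get_int(env, argv[0], &msg.fd))
        return enif_make_badarg(env);

    msg.what = ADD_FD;
    enif_self(env, &msg.pid);      /* remember who asked */
    msg.data = NULL;

    /* one fixed-size write; the poll thread reads the same size back */
    if (write(pipe_write, &msg, sizeof(struct kqmsg)) != (ssize_t)sizeof(struct kqmsg))
        return enif_make_atom(env, "error");

    return enif_make_atom(env, "ok");
}

Because every command is a fixed-size struct, the poll thread can read them back one sizeof(struct kqmsg) at a time and dispatch on the what field.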
>>
>> As for sockets, I do not use prim_inet:getfd; the sockets are completely separate from gen_tcp. The NIF thread keeps the socket FD until it reads the first buffer from it. Once that happens it creates a socket resource, then sends the binary and the socket with enif_send to the Erlang process that is in charge of deciding what to do with it.
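That hand-off is the usual resource-plus-enif_send pattern; a simplified sketch is below. sock_rt, hand_off_socket and the {socket, Sock, Bin} message shape are illustrative, not the exact ones I use.

/* Sketch: once the poll thread has read the first buffer from the fd,
 * wrap the fd in a resource and send {socket, Sock, Bin} to the Erlang
 * process recorded in the kqmsg.  Names are made up for the example. */
#include <string.h>
#include "erl_nif.h"

extern ErlNifResourceType *sock_rt;   /* resource type registered at load time */

typedef struct { int fd; } sock_res;

static void
hand_off_socket(int fd, ErlNifPid *owner, const char *buf, size_t len)
{
    ErlNifEnv     *msg_env = enif_alloc_env();
    ERL_NIF_TERM   bin, sock, msg;
    sock_res      *res;
    unsigned char *p;

    /* copy the first buffer into an Erlang binary */
    p = enif_make_new_binary(msg_env, len, &bin);
    memcpy(p, buf, len);

    /* wrap the fd in a resource term the Erlang side can hand back later */
    res = enif_alloc_resource(sock_rt, sizeof(sock_res));
    res->fd = fd;
    sock = enif_make_resource(msg_env, res);
    enif_release_resource(res);       /* the term keeps the resource alive */

    msg = enif_make_tuple3(msg_env,
                           enif_make_atom(msg_env, "socket"), sock, bin);

    enif_send(NULL, owner, msg_env, msg);
    enif_free_env(msg_env);
}

The NULL first argument to enif_send is because this runs on the poll thread, not inside a NIF call.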

> Does this approach "really" increase the performance of your server?

In the live streaming use case, absolutely. HTTP is half-duplex: once the server has received all the request headers, it does not need to listen on that socket anymore. That means there is no need to keep the socket in the NIF listen thread; all you are doing afterwards is periodic writes on the FD from the stream buffer. This is where the real optimization is: maintaining one circular buffer and just looping the sockets over it, doing writes.
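For illustration, the write loop can be as small as the sketch below; every type, name and size here is invented for the example, and a real loop also has to deal with partial writes, EAGAIN and slow clients.

/* One ring of stream chunks; on every tick each subscriber fd gets
 * written whatever chunks it has not been sent yet. */
#include <stddef.h>
#include <unistd.h>

#define RING_SLOTS 64

typedef struct {
    char   data[RING_SLOTS][4096];
    size_t len[RING_SLOTS];          /* bytes used in each slot */
    size_t head;                     /* total chunks produced so far */
} ring;

typedef struct {
    int    fd;                       /* the viewer's socket */
    size_t sent;                     /* chunks already written to it */
} viewer;

static void
push_round(ring *r, viewer *v, size_t nviewers)
{
    for (size_t i = 0; i < nviewers; i++) {
        /* a viewer that fell a whole ring behind just skips forward */
        if (r->head - v[i].sent > RING_SLOTS)
            v[i].sent = r->head - RING_SLOTS;

        while (v[i].sent < r->head) {
            size_t slot = v[i].sent % RING_SLOTS;
            if (write(v[i].fd, r->data[slot], r->len[slot]) < 0)
                break;               /* try again on the next round */
            v[i].sent++;
        }
    }
}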
On my servers traffic goes through haproxy on port 80. Once there is a decent number of users on the server, haproxy actually uses more CPU than my streaming server.


Sergej