[erlang-questions] 20k messages in 4s but want to go faster!
Mon Jul 13 08:22:59 CEST 2009
Looking at inet_drv.c, it should not be copying memory. I was using
send_timeout though; perhaps the timer armed for every socket was causing it.
Since it's a streaming server, I was a bit nervous about using {send_timeout,
infinity}: there is a ton of data constantly moving through the server.
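For reference, the trade-off can be sketched as a runnable loopback example (the option list and the use of port 0 are illustrative, not taken from either server): a finite {send_timeout, T} arms a timer for every send on every socket, while infinity avoids that per-socket timer at the cost of a send that can block indefinitely on a stuck peer.

```erlang
%% Minimal sketch, assuming {active, false} passive sockets.
Opts = [binary, {active, false},
        {send_timeout, infinity}],          %% no per-send timer armed
{ok, L} = gen_tcp:listen(0, Opts),          %% port 0: pick any free port
{ok, Port} = inet:port(L),
{ok, C} = gen_tcp:connect("localhost", Port, [binary, {active, false}]),
{ok, S} = gen_tcp:accept(L),
ok = gen_tcp:send(S, <<"stream data">>),    %% would block forever on a stuck peer
{ok, <<"stream data">>} = gen_tcp:recv(C, 11).
```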
My CPU usage was very linear with gen_tcp. I never got to try sending out
data at 1 Gbit/s, but it looked like it would max out the quad-core Xeon it
was running on at that speed.
Using my driver I hardly notice any CPU difference between 10 Mbit/s and
200 Mbit/s. But you should definitely be using a process-per-socket model.
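The process-per-socket model can be sketched roughly like this (a hypothetical echo server, not either poster's code; a slow client then blocks only its own process, never the whole send path):

```erlang
%% Each accepted connection gets its own lightweight process.
Serve = fun Serve(S) ->
            case gen_tcp:recv(S, 0) of
                {ok, Data}      -> ok = gen_tcp:send(S, Data), Serve(S);
                {error, closed} -> ok
            end
        end,
AcceptLoop = fun Loop(L) ->
                 {ok, S} = gen_tcp:accept(L),
                 %% hand the socket over before the child touches it
                 Pid = spawn(fun() -> receive go -> Serve(S) end end),
                 ok = gen_tcp:controlling_process(S, Pid),
                 Pid ! go,
                 Loop(L)
             end,
{ok, L} = gen_tcp:listen(0, [binary, {active, false}, {reuseaddr, true}]),
{ok, Port} = inet:port(L),
spawn(fun() -> AcceptLoop(L) end),
%% a client to exercise it:
{ok, C} = gen_tcp:connect("localhost", Port, [binary, {active, false}]),
ok = gen_tcp:send(C, <<"ping">>),
{ok, <<"ping">>} = gen_tcp:recv(C, 4).
```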
On Sun, Jul 12, 2009 at 9:49 PM, Joel Reymont <joelr1@REDACTED> wrote:
> On Jul 12, 2009, at 7:38 PM, Rapsey wrote:
>> Well I have a similar problem with my streaming server. I suspect the main
>> issue with using gen_tcp is that every gen_tcp:send call will involve a
>> memory allocation and memcpy.
> Why do you think so? I thought binaries over 64 bytes are not copied.
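[For context: the 64-byte threshold Joel refers to is the usual distinction between heap binaries, which live on the process heap and are copied when handed to another process or a port, and reference-counted ("refc") binaries, which are stored off-heap and passed by reference. A small sketch, with illustrative names:]

```erlang
%% Binaries of 64 bytes or less are heap binaries; larger ones are refc.
Small = binary:copy(<<"x">>, 64),   %% 64 bytes -> heap binary, copied on send
Large = binary:copy(<<"x">>, 65),   %% 65 bytes -> refc binary, shared by reference
64 = byte_size(Small),
65 = byte_size(Large),
%% refc binaries expose the size of the shared off-heap blob they reference:
65 = binary:referenced_byte_size(Large).
```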
>> My server needs to be able to fill at least a gigabit connection, and
>> this involves a very large number of gen_tcp:send calls. The only way I
>> could achieve that number is by writing my own driver for data output.
> Did you write your own driver already?
> What kind of throughput were you getting with your old Erlang code?
>> This means bypassing gen_tcp completely and writing the socket handling
>> stuff by hand. Basically whenever I need to send the data, I loop through
>> the array of sockets (and other information) and send from one single
>> (the sockets are non-blocking).
> I'll take a closer look at the TCP driver, but the other Erlang internals I
> looked at use iovecs and scatter-gather I/O (writev, etc.).
> Mac hacker with a performance bent
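[The iovec point can be illustrated briefly: gen_tcp:send/2 accepts a deep iolist, which the driver can hand to writev as a scatter-gather iovec, so a packet can be assembled without concatenating, and thus copying, binaries. A minimal sketch; the names are illustrative:]

```erlang
%% Build the outgoing packet as a deep list of binaries: no flattening,
%% no binary concatenation, no copy before the send.
Header  = <<"frame-header:">>,
Payload = binary:copy(<<0>>, 1024),
Packet  = [Header, Payload],
1037 = iolist_size(Packet).
%% ok = gen_tcp:send(Sock, Packet)   %% Sock assumed to be a connected socket
```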