[erlang-questions] How to dig why gen_tcp/port allocates so much binary memory?
Thu Sep 22 14:35:16 CEST 2016
Thanks for your reply.
My OS version is OS X 10.11.3; the Erlang version is OTP 18.3; the Erlang compile flags are the defaults, we didn't
change any configuration.
The problem also exists on CentOS 6.4.
------------------ Original ------------------
From: "Motiejus Jakštys"<>;
Date: Thu, Sep 22, 2016 07:59 PM
Subject: Re: [erlang-questions] How to dig why gen_tcp/port allocates so much binary memory?
Replying personally to reduce noise on the list, but it would be super helpful to know your OS, Erlang version, and Erlang compile flags. A follow-up email would be worthwhile (I can't answer you myself, but with that information there are people on this list who can).
On Thu, Sep 22, 2016 at 11:41 AM, 叶少波 <> wrote:
I wrote a server that accepts TCP connections. The listen socket is started with the options below:
On the server node, every new connection spawns a new gen_server to handle it.
I then spawned 5000 gen_servers on another Erlang node (I call it the client node); each gen_server connects to the server via TCP.
It is a really simple case.
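(The actual listen options were scrubbed from the archive along with the HTML attachment. A minimal sketch of this kind of setup, with illustrative options and a hypothetical `my_conn_sup` supervisor standing in for whatever spawns the per-connection gen_server:)

```erlang
%% Hypothetical reconstruction of the scrubbed listener setup.
%% Option values here are illustrative, not the original ones.
{ok, LSock} = gen_tcp:listen(8080, [binary,
                                    {packet, 0},
                                    {active, once},
                                    {reuseaddr, true},
                                    {backlog, 1024}]),
%% Accept loop: each accepted socket is handed off to a freshly
%% spawned gen_server (my_conn_sup is a hypothetical supervisor).
{ok, Sock} = gen_tcp:accept(LSock),
{ok, Pid}  = my_conn_sup:start_child(),
ok = gen_tcp:controlling_process(Sock, Pid).
```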
After setting up the 5000 connections, I found that binary memory on the server node had grown to 17 GB,
while on the client node it was only 42 MB. That is a huge difference.
I then restarted the Erlang node with "+Mim true" and "+Mis true"; after setting up the 5000 connections again, I used
instrument:memory_status(types) to check the memory status and found that drv_binary had allocated 17 GB:
My question is: how can I decrease the drv_binary memory? Which parameter caused the server to use so much memory?
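(One place to look, assuming the per-socket buffers are the culprit: the inet driver keeps its receive buffers as driver binaries, which are accounted under drv_binary, so a large buffer multiplied by 5000 sockets adds up. A sketch for inspecting and shrinking them at the shell, with illustrative sizes, given some connected socket `Sock`:)

```erlang
%% Inspect the buffer-related options on one of the accepted sockets.
{ok, Opts} = inet:getopts(Sock, [buffer, recbuf, sndbuf]),
io:format("socket buffers: ~p~n", [Opts]),

%% Try a smaller user-level buffer (8192 is an illustrative value,
%% not a recommendation) and watch total binary memory.
ok = inet:setopts(Sock, [{buffer, 8192}]),
erlang:memory(binary).   %% total binary allocation on the node, in bytes
```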
erlang-questions mailing list