Megaco simple Media Gateway bleeds memory under load.
Wed Aug 28 16:43:06 CEST 2002
On Tue, 27 Aug 2002, Peter-Henry Mander wrote:
Pete> I'm not sure I understand you here. I know what the maximum number
Pete> of concurrent unacknowledged requests is going to be, as I have full
Pete> control over the MGC, so I expect to need a similar number of
Pete> receive_message processes and possibly a similar number of timeout
Pete> processes too, which I hope to avoid having collected at all for the
Pete> duration of the test.
Sorry. What I was getting at was that it is not obvious that spawning
new processes should be avoided for performance reasons. In the Megaco
application one process (megaco_tcp/megaco_udp) reads the bytes off a
socket and spawns a new process (megaco_messenger) with a binary as
argument. The new process decodes the binary, creates lots of
temporary data, and eventually terminates. If the initial size of the
process heap is set large enough (see spawn_opt), no GC is needed at
all before the process terminates.
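The idea above can be sketched roughly as follows. This is not the
actual megaco_tcp code; the module name, the `decode/1` placeholder and
the 20000-word heap size are illustrative assumptions:

```erlang
-module(spawn_heap).
-export([handle_bytes/1]).

%% Spawn the per-message handler with a large initial heap so the
%% temporary terms created during decoding never trigger a garbage
%% collection before the process exits.
handle_bytes(Bin) when is_binary(Bin) ->
    %% min_heap_size is given in words; pick a value larger than the
    %% decoder's expected peak allocation.
    spawn_opt(fun() -> decode(Bin) end,
              [{min_heap_size, 20000}]).

decode(Bin) ->
    %% Placeholder for the real Megaco decode: builds some temporary
    %% data and then terminates, as described in the text.
    _Temporary = [X || <<X>> <= Bin],
    ok.
```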
Pete> Well, at the moment the congestion handling may be safe, but I would
Pete> like to remove all congestion at the MG end if this is possible. It
Pete> seems to me that the solution may lie in distributing the MG over two or
Pete> more physical nodes, but I'm unclear how I'm going to achieve this. The
Pete> Megaco manuals hint at doing exactly this, but I haven't found an
Pete> example of how to do it. I will need to use distributed nodes for the
Pete> MGC anyway, to push performance into four-figure setup/teardown rates,
Pete> so any information or instructions will be very welcome!
There is some (limited) documentation about distributed MGs/MGCs in
the reference manual for megaco:connect/4.
The basic idea is that you invoke megaco:connect/4 as usual on the
Erlang node (A) that holds the connection to the MGC. Then you may
invoke megaco:connect/4 on another node (B) using the SAME ControlPid.
Now you have enabled usage of megaco:call/3 and megaco:cast/3 from the
B node (as well as from node A). The encoding work is performed on the
originating node (B) while the decoding work is performed on the node
holding the connection (A).
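A hedged sketch of that setup, assuming ReceiveHandle, RemoteMid,
SendHandle, ControlPid and ActionRequests come from the normal Megaco
setup on node A:

```erlang
%% On node A (the node that owns the transport connection to the MGC):
{ok, ConnHandle} =
    megaco:connect(ReceiveHandle, RemoteMid, SendHandle, ControlPid),

%% On node B, repeat the call with the SAME ControlPid to register the
%% connection there as well:
{ok, ConnHandle} =
    megaco:connect(ReceiveHandle, RemoteMid, SendHandle, ControlPid),

%% Node B can now originate transactions: encoding is done on B, while
%% decoding of the reply happens on A, the node holding the connection.
{ok, Reply} = megaco:call(ConnHandle, ActionRequests, Options).
```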
In order to off-load the A node as much as possible, we have been
looking into a more sophisticated solution where the message is only
partially decoded on node A; then, based on the extracted transaction
id, the message is forwarded as a binary to node B, where the complete
decoding is performed. This implementation is, however, not yet
complete.