Megaco simple Media Gateway bleeds memory under load.

Peter Mander erlang@REDACTED
Wed Sep 11 00:24:37 CEST 2002


Hi Håkan, hi Ulf

I've been offline/bedridden with the flu, hence the delay in returning
emails. This has forced me to think (in bed) rather than hack away at a
"solution", and I now understand the problem better: messages are being
queued by spawning new processes, but they are not being processed quickly
enough to clear a growing backlog. Hence your suggestion below of
throttling the socket when the number of messages currently being
processed exceeds some upper threshold.

Sorry for accusing Erlang of bleeding memory, it's simply not true!

I'll hopefully soon be able to supply some feedback on my attempts at
distributed encoding and decoding, as there is a growing need to prove the
scalability of the MGC and MG. They may end up being used as performance
testing tools for our product, not just functional testing tools.

Pete.

----- Original Message -----
From: "Hakan Mattsson" <hakan@REDACTED>
To: "Peter-Henry Mander" <erlang@REDACTED>
Cc: <erlang-questions@REDACTED>
Sent: Wednesday, August 28, 2002 9:11 PM
Subject: Re: Megaco simple Media Gateway bleeds memory under load.


On Wed, 28 Aug 2002, Peter-Henry Mander wrote:

Pete> I think I understand you now, let's see if I got it. With a large
Pete> enough heap, a spawned process will not require further chunks from
Pete> system memory, and therefore will not cause garbage collection sweeps,
Pete> but only while the process doesn't terminate. When the process itself
Pete> terminates, garbage collection reclaims the process heap and anything
Pete> else allocated from it in one sweep. Am I correct? If I am, I can
Pete> understand why it would be more efficient than allocating and freeing
Pete> small fragments of system memory, and that it would avoid memory
Pete> fragmentation.

As a dead process has no live data, no GC sweep is needed at all
(unless you use a unified heap).
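As a concrete illustration of the large-heap idea discussed above, here is
a minimal sketch (not the megaco stack's own code; process_message/1 and
the heap size are placeholders): each message is handled in a short-lived
process spawned with a generous minimum heap, so it never needs to grow or
garbage collect while alive, and its whole heap is reclaimed when it exits.

    handle_message(Msg) ->
        %% min_heap_size is given in words; the value here is only a
        %% placeholder to be tuned by measurement.
        erlang:spawn_opt(
          fun() -> process_message(Msg) end,   % process_message/1 is a placeholder
          [{min_heap_size, 4000}]).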

Pete> But what if the megaco_messenger processes are spawned at a high rate,
Pete> as they appear to be in the MG when receiving almost 1000
Pete> transactions/second? I suspect that memory will get eaten up very
Pete> quickly by spawned processes with large heaps. Is it possible that the
Pete> garbage collector process is starved (since CPU usage is 99%) due to
Pete> the rate at which megaco_messenger processes are being spawned? My idea
Pete> of maintaining a pool of megaco_messenger processes may not be an
Pete> elegant solution, and I may be accused of micro-managing memory as a C
Pete> programmer would! But I would like to pursue it, simply to convince
Pete> myself, either yes or no, whether this imperative paradigm may have
Pete> value in a functional language.
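To make the pool idea above concrete, here is a minimal sketch using
made-up names (msg_pool, worker_loop/0, handle_message/1) rather than the
real megaco_messenger API: N workers are spawned once and reused, and
incoming messages are handed out round-robin instead of spawning a fresh
process per message.

    -module(msg_pool).
    -export([start/1, dispatch/1]).

    %% Spawn N long-lived workers and a dispatcher that hands
    %% messages out round-robin.
    start(N) ->
        Workers = [spawn_link(fun worker_loop/0) || _ <- lists:seq(1, N)],
        register(?MODULE, spawn_link(fun() -> pool_loop(Workers) end)),
        ok.

    dispatch(Msg) ->
        ?MODULE ! {dispatch, Msg},
        ok.

    pool_loop([W | Rest]) ->
        receive
            {dispatch, Msg} ->
                W ! {handle, Msg},
                pool_loop(Rest ++ [W])    % simple round-robin
        end.

    worker_loop() ->
        receive
            {handle, Msg} ->
                handle_message(Msg),      % placeholder for the real work
                worker_loop()
        end.

    handle_message(_Msg) ->
        ok.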

Regardless of whether you spawn fresh processes or reuse a pool of old
ones, you can still get more predictable memory consumption by explicitly
blocking the socket when the number of messages currently being processed
exceeds some upper threshold and unblocking it again at some lower
threshold.
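A minimal sketch of such a scheme, assuming a TCP transport so that
megaco_tcp:block/1 and megaco_tcp:unblock/1 apply (the module name, API
and thresholds are otherwise made up for illustration): a small counter
process is told when handling of a message starts and finishes, blocks the
socket when the count reaches the upper threshold, and unblocks it again
at the lower one.

    -module(rx_throttle).
    -export([start/3, started/0, done/0]).

    start(Socket, High, Low) ->
        register(?MODULE,
                 spawn_link(fun() -> loop(Socket, High, Low, 0, open) end)),
        ok.

    started() -> ?MODULE ! started, ok.   % call when message handling begins
    done()    -> ?MODULE ! done, ok.      % call when message handling ends

    loop(Socket, High, Low, N, open) when N >= High ->
        megaco_tcp:block(Socket),
        loop(Socket, High, Low, N, blocked);
    loop(Socket, High, Low, N, blocked) when N =< Low ->
        megaco_tcp:unblock(Socket),
        loop(Socket, High, Low, N, open);
    loop(Socket, High, Low, N, State) ->
        receive
            started -> loop(Socket, High, Low, N + 1, State);
            done    -> loop(Socket, High, Low, N - 1, State)
        end.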

Pete> I'll try to keep you updated on this, and the distributed MG
Pete> as you describe below, and the Megaco V2 work I mentioned earlier.
Pete>
Pete> I wonder, since you seem to still be developing the Megaco stack,
Pete> whether my feedback is useful to you?

Yes, your feedback is useful. Our Megaco application will also support V2
(both text and binary), but we do not currently know when it will be
completed. We need to discuss this internally when Micael returns from his
vacation.

My own work situation is quite unclear right now, as the Ericsson
Computer Science Lab has recently ceased to exist. I hope that I will
still have the opportunity to continue working with Erlang, but some
dark forces in the new organization want me to work on something
completely different (and boring).

Pete> I am extremely interested in the
Pete> partial/distributed decoding you mentioned in the last paragraph. Is
Pete> there a complementary distributed encoding project in the pipeline?

The encoding part is already distributed! There is no automatic load
balancing, but you have the opportunity to choose any Erlang node
and perform the encoding there. The encoding is performed on the node
where megaco:call/3 or megaco:cast/3 is invoked.
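For example (a sketch, not the stack's documented usage; the function name,
the choice of node and the empty send-options list are assumptions), the
encoding work can be pushed to a chosen node simply by invoking
megaco:cast/3 there:

    %% Encode and send ActionRequests from EncodeNode; the text/binary
    %% encoding is then done on that node, since it happens wherever
    %% megaco:cast/3 is invoked. The reply is delivered asynchronously
    %% to the user callback as usual.
    send_from(EncodeNode, ConnHandle, ActionRequests) ->
        rpc:call(EncodeNode, megaco, cast, [ConnHandle, ActionRequests, []]).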

/Håkan




