[erlang-questions] Measuring message queue delay
Wed Apr 29 13:24:49 CEST 2015
Maybe you can use the tracing facilities of Erlang. At a low level it is
reasonably easy to use erlang:trace/3 with the options
[send, 'receive', timestamp] and get all messages from designated processes
with timestamps sent to a "tracer" process. You can then do whatever
analysis you need on that.
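As a rough sketch of that idea (function names here are illustrative, not from any library), a tracer process can collect the timestamped send/'receive' events and leave the delay analysis to you:

```erlang
%% Sketch only: enable send/'receive' tracing with timestamps on Pid
%% and forward the trace messages to a separate tracer process.
trace_queue(Pid) ->
    Tracer = spawn(fun tracer_loop/0),
    erlang:trace(Pid, true, [send, 'receive', timestamp, {tracer, Tracer}]),
    Tracer.

tracer_loop() ->
    receive
        %% {trace_ts, ...} messages carry a now()-style timestamp
        {trace_ts, Pid, 'receive', Msg, Ts} ->
            io:format("~p received ~p at ~p~n", [Pid, Msg, Ts]),
            tracer_loop();
        {trace_ts, Pid, send, Msg, To, Ts} ->
            io:format("~p sent ~p to ~p at ~p~n", [Pid, Msg, To, Ts]),
            tracer_loop()
    end.
```

Instead of printing, the tracer could match up send and 'receive' events for the same message and report the deltas to whatever metrics system you use.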
On Apr 29, 2015 10:12 AM, "Roger Lipscombe" <roger@REDACTED> wrote:
> For various reasons, I want a metric that measures how long messages
> spend in a process message queue. The process is a gen_server, if that
> has any bearing on the solution. Also, this is for production, so it
> will be always-on and must be low-impact.
> I could do this by timestamping _every_ message sent to the process
> and then reporting the deltas, but I think that's ugly.
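For reference, the timestamp-every-message approach could look something like this (a sketch with made-up names; callers wrap each request, and the server computes the delta when it dequeues it):

```erlang
%% Caller side: attach the send time to every request.
call(Server, Req) ->
    gen_server:call(Server, {Req, os:timestamp()}).

%% Server side: delay = time dequeued - time sent, in microseconds.
handle_call({Req, SentAt}, _From, State) ->
    DelayUs = timer:now_diff(os:timestamp(), SentAt),
    %% report DelayUs to your metrics system here
    {reply, do_handle(Req), State}.
```

It does work, but every caller has to cooperate, which is part of why it feels ugly.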
> I thought of posting a message to self(), with a timestamp and then
> measuring the delta. I could then post another one as soon as that
> message is processed. Obviously, this leaves the process continually
> looping, which is not good.
> So, instead, I could use erlang:send_after, but that requires two
> messages: a delayed one triggered by erlang:send_after, and an
> immediate one for measuring the delay.
> That's a bit complicated.
> Would it be sensible to send a message, with erlang:send_after, with
> the _expected_ timestamp, and then compute the delta from that?
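That expected-timestamp variant might be sketched like this (interval and names are illustrative): each probe carries the time it should arrive, and on receipt the server compares that against the current time and schedules the next probe.

```erlang
-define(INTERVAL_MS, 1000).

%% Schedule a probe carrying its expected arrival time.
schedule_probe() ->
    Expected = add_ms(os:timestamp(), ?INTERVAL_MS),
    erlang:send_after(?INTERVAL_MS, self(), {probe, Expected}).

%% A positive delta means the probe sat in the queue
%% (or the timer fired late, which send_after may also do).
handle_info({probe, Expected}, State) ->
    DelayUs = timer:now_diff(os:timestamp(), Expected),
    %% report DelayUs, then re-arm
    schedule_probe(),
    {noreply, State}.

%% Add Ms milliseconds to a {MegaSecs, Secs, MicroSecs} timestamp.
add_ms({Mega, Sec, Micro}, Ms) ->
    T = (Mega * 1000000 + Sec) * 1000000 + Micro + Ms * 1000,
    {T div 1000000000000, (T div 1000000) rem 1000000, T rem 1000000}.
```

One caveat: this measures the delay seen by a single probe per interval, so it samples the queue rather than measuring every message.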
> Or, alternatively, what other ways could I measure how long a process
> is taking to handle its message queue?