fprof is a profiling tool that can be used to get a picture of how much processing time different functions consume and in which processes.
fprof uses tracing with timestamps to collect profiling data. Therefore there is no need for special compilation of any module to be profiled.
fprof presents wall clock times from the host machine OS, with the assumption that OS scheduling will randomly load the profiled functions in a fair way. Both own time, i.e. the time used by a function for its own execution, and accumulated time, i.e. execution time including called functions, are presented.
Profiling is essentially done in 3 steps (see the sketch after this list):
1. Tracing: the code to be profiled is run with timestamped tracing enabled, by default writing the trace to a file.
2. Profiling: the trace is read and converted into raw profile data.
3. Analysing: the raw profile data is filtered and presented as a readable listing on the console or in a file.
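As a minimal sketch of the whole procedure, assuming the default file names and a placeholder function my_module:fun_to_profile/0 that stands for the code you want to measure:

    %% Step 1: trace with timestamps, by default to the file "fprof.trace"
    fprof:trace(start),
    my_module:fun_to_profile(),    %% placeholder for the code to be profiled
    fprof:trace(stop),
    %% Step 2: read the trace file and build the raw profile data
    fprof:profile(),
    %% Step 3: analyse and print a readable listing on the console
    fprof:analyse().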
Since fprof uses trace to file, the runtime performance degradation is minimized, but still far from negligible, especially for programs that themselves use the filesystem heavily. Where you place the trace file is also important; for example, on Solaris /tmp is usually a good choice, while any NFS-mounted disk is a lousy choice.
fprof can also skip the file step and trace to a tracer process of its own that does the profiling at runtime.
The following sections show some examples of how to profile with fprof. See also the reference manual fprof(3).
If you can edit and recompile the source code, it is convenient to insert fprof:trace(start) and fprof:trace(stop) before and after the code to be profiled. All spawned processes are also traced. If you want some other filename than the default, try fprof:trace(start, "my_fprof.trace").
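For example, wrapping a hypothetical function my_module:do_work/0 (a placeholder, not part of fprof) could look like this:

    %% ... inside the function whose work you want to profile ...
    fprof:trace(start, "my_fprof.trace"),    %% or fprof:trace(start) for the default file
    my_module:do_work(),                     %% placeholder for the code to be profiled
    fprof:trace(stop),
    %% ... rest of the function ...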
Then read the trace file and create the raw profile data with fprof:profile(), or perhaps fprof:profile(file, "my_fprof.trace") for a non-default filename.
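In the Erlang shell, that step could look like either of the following (only one of the two calls is needed):

    fprof:profile().                          %% reads the default trace file "fprof.trace"
    fprof:profile(file, "my_fprof.trace").    %% reads a non-default trace file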
Finally, create an informative table dumped on the console with fprof:analyse(), or to a file with fprof:analyse(dest, []), or perhaps even fprof:analyse([{dest, "my_fprof.analysis"}, {cols, 120}]) for a wider listing to a non-default filename.
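Entered in the shell, the three variants could look like this (pick whichever suits you):

    fprof:analyse().                                              %% listing on the console
    fprof:analyse(dest, []).                                      %% listing in the default destination file
    fprof:analyse([{dest, "my_fprof.analysis"}, {cols, 120}]).    %% wider listing in a named file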
See the fprof(3) manual page for more options and arguments to the functions trace, profile and analyse.
If you have one function that does the task that you want to profile, and the function returns when the profiling should stop, it is convenient to use fprof:apply(Module, Function, Args) and related functions for the tracing step.
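As a small example, profiling the (trivial) call lists:seq(1, 10000) could be done like this; the call is just a stand-in for your own task function:

    fprof:apply(lists, seq, [1, 10000]),    %% traces the call, by default to "fprof.trace"
    fprof:profile(),
    fprof:analyse().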
If the tracing should continue after the function returns, for example if it is a start function that spawns processes to be profiled, you can use fprof:apply(M, F, Args, [continue | OtherOpts]). The tracing has to be stopped at a suitable later time using fprof:trace(stop).
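A sketch, under the assumption that my_app:start/0 is a hypothetical start function that spawns the processes to be profiled:

    fprof:apply(my_app, start, [], [continue]),
    %% ... let the spawned processes do their work ...
    fprof:trace(stop),
    fprof:profile(),
    fprof:analyse().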
It is also possible to trace immediately into the profiling process that creates the raw profile data, that is, to short-circuit the tracing and profiling steps so that the filesystem is not used.
Do something like this:
    {ok, Tracer} = fprof:profile(start),
    fprof:trace([start, {tracer, Tracer}]),
    %% Code to profile
    fprof:trace(stop);
This puts less load on the filesystem, but much more on the Erlang runtime system.