8 Profiling
8.1 Do not guess about performance when you can know!
If you have time-critical code that runs too slowly, do not waste your time guessing what might be slowing it down. Profile your code to find the bottlenecks and concentrate your optimization efforts on them. Profiling Erlang code is primarily done with the tools fprof and eprof, but the tools cover and cprof may also be useful.
Do not optimize code that is not time-critical. When time is not of the essence, it is not worth the trouble to gain a few microseconds here and there; it will never make a notable difference.
8.2 Big systems
If you have a big system, it might be interesting to run profiling on a simulated and limited scenario to start with. But bottlenecks have a tendency to appear or cause problems only when many things are going on at the same time, and when many nodes are involved. Therefore it is desirable to also run profiling in a system test plant on a real target system.
When your system is big, you do not want to run the profiling tools on the whole system. Instead, concentrate on processes and modules that you know are central and account for a large part of the execution.
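As a sketch of how profiling can be narrowed to selected processes, fprof accepts a {procs, ...} option when tracing is started explicitly. The registered name my_central_server below is a placeholder for a process you know is central:

```erlang
%% Sketch: profiling only selected processes with fprof.
%% my_central_server is a placeholder registered name.
CentralPid = whereis(my_central_server),
fprof:trace([start, {procs, [CentralPid]}]),
%% ... let the system run its scenario for a while ...
fprof:trace(stop),
fprof:profile(),                            %% process the trace file
fprof:analyse([{dest, "fprof.analysis"}]).  %% write a readable result file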
8.3 What to look for
When analyzing the result file from the profiling activity, you should primarily look for functions that are called many times and have a long "own execution time" (time excluding calls to other functions). Functions that are simply called very many times can also be interesting, as even small things can add up to quite a bit if they are repeated often. Then you need to ask yourself what you can do to reduce this time. Appropriate types of questions to ask yourself are:
- Can I reduce the number of times the function is called?
- Are there tests that can be run less often if I change the order of tests?
- Are there redundant tests that can be removed?
- Is some expression calculated that gives the same result each time?
- Are there other, equivalent ways of doing this that are more efficient?
- Can I use another internal data representation to make things more efficient?
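As an illustration of the question about expressions that give the same result each time, here is a sketch (the module and function names are invented for this example) where a loop-invariant value is computed once instead of on every iteration:

```erlang
%% Sketch: hoisting a loop-invariant computation.
%% The module hoist and its functions are invented for this example.
-module(hoist).
-export([scale_slow/1, scale_fast/1]).

%% Before: length(List) is recomputed for every element,
%% making the function O(N^2) instead of O(N).
scale_slow(List) ->
    [X * length(List) || X <- List].

%% After: the invariant value is computed once, outside the comprehension.
scale_fast(List) ->
    Len = length(List),
    [X * Len || X <- List].
```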
These questions are not always trivial to answer. You might need to do some benchmarks to back up your theory and to avoid making things slower if your theory is wrong. See the section about benchmarking below.
8.4 Tools
8.4.1 fprof
fprof measures the execution time for each function, both own time, i.e. how much time a function has used for its own execution, and accumulated time, i.e. including called functions. The values are displayed per process. You also get to know how many times each function has been called. fprof is based on trace to file in order to minimize runtime performance impact. Using fprof is just a matter of calling a few library functions; see the fprof manual page under the application tools.
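A minimal sketch of those library calls, profiling a single function call (my_module:my_function/1 and its argument are placeholders):

```erlang
%% Sketch: profiling one function call with fprof.
%% my_module, my_function, and some_argument are placeholders.
fprof:apply(my_module, my_function, [some_argument]),  %% trace one call
fprof:profile(),                                       %% process the trace file
fprof:analyse().                                       %% print the analysis
```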
fprof was introduced in version R8 of Erlang/OTP. Its predecessor eprof, which is based on the Erlang trace BIFs, is still available; see the eprof manual page under the application tools. eprof shows how much time has been used by each process, and in which function calls this time has been spent. Time is shown as a percentage of total time, not as absolute time.
8.4.2 cover
cover's primary use is coverage analysis to verify test cases, making sure all relevant code is covered. cover counts how many times each executable line of code is executed when a program is run. This is done on a per module basis. Of course this information can be used to determine what code is run very frequently and could therefore be subject to optimization. Using cover is just a matter of calling a few library functions; see the cover manual page under the application tools.
8.4.3 cprof
cprof is something in between fprof and cover regarding features. It counts how many times each function is called when the program is run, on a per module basis. cprof has a low performance degradation (versus fprof and eprof) and does not need to recompile any modules to profile (versus cover).
8.4.4 Tool summarization
Tool  | Results                             | Size of result | Effects on program execution time | Records number of calls | Records execution time | Records called by | Records garbage collection
fprof | per process to screen/file          | large          | slowdown                          | yes                     | total and own          | yes               | yes
eprof | per process/function to screen/file | medium         | significant slowdown              | yes                     | only total             | no                | no
cover | per module to screen/file           | small          | moderate slowdown                 | yes, per line           | no                     | no                | no
cprof | per module to caller                | small          | small slowdown                    | yes                     | no                     | no                | no
8.5 Benchmarking
A benchmark is mainly a way to compare different constructs that logically have the same effect. In other words, you can take two sequential algorithms and see which one is more efficient. This is achieved by measuring the execution time of several invocations of the algorithms and then comparing the results. However, measuring runtime is far from an exact science, and running the same benchmark twice in a row might not give exactly the same figures. The trend will be the same, though, so you may draw a conclusion such as "algorithm A is substantially faster than B", but you cannot say that "algorithm A is exactly 3 times faster than B".
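Such a comparison can be sketched with timer:tc/1, which returns the elapsed time of a call in microseconds. The two list-summing variants below are invented for this example:

```erlang
%% Sketch: comparing two logically equivalent constructs.
%% The module sum_bench and its functions are invented for this example.
-module(sum_bench).
-export([run/0]).

sum_body([]) -> 0;
sum_body([H | T]) -> H + sum_body(T).

sum_tail(List) -> sum_tail(List, 0).
sum_tail([], Acc) -> Acc;
sum_tail([H | T], Acc) -> sum_tail(T, H + Acc).

run() ->
    List = lists:seq(1, 100000),
    {T1, Sum} = timer:tc(fun() -> sum_body(List) end),
    {T2, Sum} = timer:tc(fun() -> sum_tail(List) end),  %% binding Sum twice asserts equal results
    io:format("body recursion: ~p us, tail recursion: ~p us~n", [T1, T2]).
```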
If you want to write a benchmark program yourself there are a few things you must consider in order to get meaningful results.
- The total execution time should be at least several seconds.
- Any time spent in setup before entering the measurement loop should be very small compared to the total time.
- Time spent by the loop itself should be small compared to the total execution time.
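These points can be sketched as a measurement loop that runs enough iterations for the total time to reach seconds and subtracts the cost of an empty loop. The module and function names are invented for this example; the real framework described below handles this more carefully:

```erlang
%% Sketch: a measurement loop that amortizes per-call overhead.
%% The module mini_bench and its functions are invented for this example.
-module(mini_bench).
-export([measure/2]).

%% Returns the average time per call, in microseconds, for running
%% Fun Iterations times, with the empty-loop overhead subtracted.
measure(Fun, Iterations) ->
    {Empty, ok} = timer:tc(fun() -> loop(Iterations, fun() -> ok end) end),
    {Total, ok} = timer:tc(fun() -> loop(Iterations, Fun) end),
    (Total - Empty) / Iterations.

loop(0, _Fun) -> ok;
loop(N, Fun) ->
    Fun(),
    loop(N - 1, Fun).
```

Choose Iterations large enough that the total measured time is at least several seconds, as per the first point above.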
To help you with this we provide a benchmarking framework located in the doc/efficiency_guide directory of the Erlang/OTP installation, which consists of bench.erl, bench.hrl, and all.erl. To find out how it works, please consult the README. You can also look at the example benchmark call_bm.erl. Here follows an example of running the benchmark defined in call_bm.erl in a Unix environment:

unix_prompt> ls
all.erl bench.erl bench.hrl call_bm.erl
unix_prompt> erl
Erlang (BEAM) emulator version 5.1.3 [threads:0]

Eshell V5.1.3 (abort with ^G)
1> c(bench).
{ok,bench}
2> bench:run().
Compiling call_bm.erl...
Running call_bm:
local_call external_call fun_call apply_fun apply_mfa
ok
3> halt().
unix_prompt> ls
all.erl bench.erl bench.hrl call_bm.erl index.html
unix_prompt>

The resulting index.html file may look like: index.html.
The results of a benchmark can only be considered valid for the Erlang/OTP version that you run the benchmark on. Performance is dependent on the implementation, which may change between releases.