[erlang-questions] HiPE performance gain (Was: Re: [erlang-questions] Erlang static linked interpreter)
Kostis Sagonas
kostis@REDACTED
Thu Jan 20 23:33:20 CET 2011
Ciprian Dorin Craciun wrote:
> On Thu, Jan 20, 2011 at 08:54, Bengt Kleberg <bengt.kleberg@REDACTED> wrote:
>> Greetings,
>>
>> This will not solve your main question, but will instead try to help
>> you with the one that came up.
>>
>> If you want to know the performance gain from using HiPE you should
>> measure the code execution time with and without HiPE.
>>
>> bengt
>
>
> I am perfectly aware that no technology is a silver bullet in all
> circumstances, and that you have to benchmark a particular technology
> in a particular case to see its effect.
>
> But in my original question (although I've not clearly stated so)
> I was curious about the impact of HiPE in general, meaning: what
> experience have different people had with HiPE?
I will provide some answers to your questions below, but since I've
read/heard this asked many times before, I think I mostly agree with
Bengt's suggestion here.
Suppose you are told that other people experience a 10% slowdown or a
3x speedup in their application when using HiPE; what exactly does that
tell you about how HiPE will perform on *your* application (which is
presumably the one you care most about)?
We've spent a lot of effort to integrate the HiPE compiler smoothly
into Erlang/OTP, and trying HiPE out is just a matter of adding +native
to the ERLC options in your Makefile. So why don't you try this and see
for yourself?
Anyway, some answers to these questions:
> More exactly, what are the use cases that are likely to benefit
> from HiPE? I would guess that a CPU-bound application would do better
> with HiPE than without, but what about network-I/O-bound applications,
> or applications that deal mainly with strings (represented as lists),
> or applications that deal mostly with binary matching?
Where HiPE really shines is in programs containing binaries. In the
past, we've observed speedups of 8-10 times in such programs. But this
is a speedup that is difficult to achieve in the general case. I would
guess that in CPU-bound applications one can get a speedup that is, on
average, in the range of 50% to 2.5 times faster than BEAM.
If your application is mainly network bound, then you can probably
forget about native code compilation.
But as I wrote, it's one's *own* use case that matters most!
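A minimal sketch of such a measurement (my_mod, work/1 and Input below
are placeholders for your own module, entry function and workload) is to
time the same call from the Erlang shell with and without native
compilation:

  c(my_mod),                                    %% ordinary BEAM byte code
  {TBeam, _} = timer:tc(my_mod, work, [Input]),
  c(my_mod, [native]),                          %% HiPE native code
  {THipe, _} = timer:tc(my_mod, work, [Input]),
  io:format("speedup vs. BEAM: ~.2f~n", [TBeam / THipe]).

Run it a few times, on realistic input sizes; that will tell you more
than any general number.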
I, for one, am spending most of my time these days running dialyzer on
various code bases. On large code bases, I would definitely not like it
if dialyzer did not insist on using native code by default:
$ dialyzer --build_plt --output_plt ~/.dialyzer_plt-nn --apps erts kernel stdlib mnesia
Compiling some key modules to native code... done in 0m0.09s
Creating PLT /home/kostis/.dialyzer_plt-nn ...
...
done in 8m21.87s
Compare with:
$ dialyzer --no_native --build_plt --output_plt ~/.dialyzer_plt-nn --apps erts kernel stdlib mnesia
Creating PLT /home/kostis/.dialyzer_plt-nn ...
...
done in 17m12.80s
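In other words, the same PLT build takes roughly twice as long without
native code: 17m12.80s versus 8m21.87s, about a 2x speedup from HiPE.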
I am using dialyzer as an example because it's not a toy benchmark.
Instead, it is a very complex program (about 30 KLOC of Erlang code)
in which all sorts of things are involved (I/O for reading files,
accessing ETS tables, big data structures, uses of many stdlib modules,
etc.).
But don't just take my word, try it for yourself.
Kostis