Parse transformery (Was: Re: Calling internal functions - foo::bar() ?)
Ulf Wiger (AL/EAB)
Wed Mar 9 10:04:23 CET 2005
> If you want a lot of real people to benefit from compiler
> optimizations then please finish HiPE so that we can actually
> use it.
In fairness, OTP nowadays contains lots of stuff that came from
the HiPE team, and that we all benefit from.
My personal view on the original question (whether it should be
possible to call local functions):
I do not like the idea of establishing a system that essentially
makes all functions 'exported'. For debugging, I can imagine a
few ways to achieve the same effect.
- The parse_transform option that Luke suggested. If I understood
it correctly, it _doesn't_ export the local functions per se,
but rather a wrapper function through which you can reach all
local functions. I don't see how this interferes much with
the optimization potential.
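Conceptually, the result of such a transform might look something like the sketch below. The names '$local_call'/2, helper/1 and frobnicate/2 are made up for illustration; the point is that only the single wrapper is added to the export list, while the local functions themselves stay unexported:

```erlang
%% What a transformed module might contain, conceptually.
%% Only '$local_call'/2 is added to the exports; helper/1 and
%% frobnicate/2 (hypothetical local functions) remain local.
-module(example).
-export([api/1, '$local_call'/2]).

api(X) -> helper(X).

%% One dispatch clause per local function; calls with an unknown
%% name or arity fail with function_clause instead of silently
%% exposing everything.
'$local_call'(helper, [A])        -> helper(A);
'$local_call'(frobnicate, [A, B]) -> frobnicate(A, B).

helper(X) -> X + 1.

frobnicate(A, B) -> {A, B}.
```

From the shell one would then write e.g. example:'$local_call'(helper, [41]) to reach the local function for debugging.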
- If debug_info is present, one could do the same thing with a
  support function that reads the debug_info chunk and recompiles
  it on the fly with export_all, or that inserts a wrapper function
  as in Luke's parse_transform.
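As a sketch of the debug_info route (assuming the target module was compiled with the debug_info option; the module and function names here are made up, but the beam_lib, compile and code calls are standard OTP API):

```erlang
%% Reload a module with export_all, using the abstract code stored
%% in its debug_info chunk. Fails with a badmatch if the module was
%% not compiled with debug_info.
-module(debug_export).
-export([export_all/1]).

export_all(Mod) ->
    Beam = code:which(Mod),
    %% Extract the abstract code from the beam file.
    {ok, {Mod, [{abstract_code, {raw_abstract_v1, Forms}}]}} =
        beam_lib:chunks(Beam, [abstract_code]),
    %% Recompile in memory with every function exported.
    {ok, Mod, Bin} = compile:forms(Forms, [export_all, binary]),
    %% Hot-load the new version into the running system.
    code:load_binary(Mod, Beam, Bin).
```

This is a debugging aid only; the reloaded code loses any native compilation and should not be left in a production node.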
I think that one of the big problems in large, complex systems
is that the code tends to become difficult to follow, with too
many module hops etc. Another problem I think I see is that
people do low-level optimizations in order to make the code
go faster. My own experience is that code thus 'optimized' is
quite often less efficient than textbook Erlang run through
a reasonably efficient compiler.
We have stated on occasion that native code compilation has
no positive effect on the performance-critical code in AXD 301.
This is based on performance measurements where we've tried to
hipe-compile parts of our code. My own take on this is that
the code is written such that it leaves very little for the
compiler to work with (the large number of module hops is
one factor working against HiPE.) But I am absolutely
convinced that the code could be made faster (perhaps much
faster) by re-writing it in a way that it becomes both more
readable and easier for a compiler to optimize(*). Since our
code appears to be fast enough, and robust enough, as it is,
there is little-to-no short-term incentive to do this.
Besides not making the code faster, such pseudo-optimizations
- clutter up the code,
- which makes it more difficult to understand,
- which leads to systems that are more difficult to debug,
- which increases the need for hacks, like calling local
  functions from the shell,
- which makes it more difficult for the compiler to
  optimize the code,
- which increases the perceived need for 'manual
  optimizations', which clutters up the code, etc.
In my mind, this is not a question about speed. It's a question
about conceptual integrity. If the debugging need can be
addressed through a reasonable workaround, then let's not
change the language.
(*) I would like to state for the record that I _do not_ mean
that the code isn't competently written. It is -- in many
places very competently written. But there are many forces
at work in large projects, and some of the programming habits
were established at a time when the Erlang compiler did
very few optimizations (and the computers were not nearly
as fast as today), and optimization by hand was an
absolute must in order to meet the performance requirements.