geometric memory growth
Wed Nov 30 16:18:55 CET 2005
--- "Ulf Wiger (AL/EAB)" <>
> Thomas Lindgren wrote:
> > and semantically, the access _has_ to be done at
> > the occurrence - if it is delayed, one has to lug
> > the record along to make sure that the record
> > field has the same result as it would have had at
> > the occurrence.
> I would say that semantically this evaluation MUST
> NOT be done at the binding occurrence, IF we can
> observe a functional difference compared to the
> original evaluated naively.
> In the case of a record access (assuming that a
> guard test or other exists that asserts that it is
> in fact a record), you can't observe a functional
> difference in this case.
As long as we know the variable has the right record
type.
> (Of course, there could be other, similar cases
> where the programmer simply doesn't realise that by
> being more specific, the compiler is given more
> opportunities to make the code efficient.)
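To make the two shapes concrete, here is a minimal sketch; the #state{} record, the module name and the field names are all made up for illustration:

```erlang
-module(capture).
-export([hoisted/1, captured/1]).

%% Hypothetical record; big_blob stands in for a large field.
-record(state, {counter, big_blob}).

%% Field read at the binding occurrence: the fun captures only
%% the small counter value, so the rest of the record can be
%% garbage collected once S goes out of scope.
hoisted(S) when is_record(S, state) ->
    C = S#state.counter,
    fun() -> C + 1 end.

%% Field read inside the fun: the whole record (including
%% big_blob) stays live for as long as the fun does.
captured(S) when is_record(S, state) ->
    fun() -> S#state.counter + 1 end.
```

Both return the same result when called; only the size of the captured environment differs, which is exactly the memory-growth issue in the subject line.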
Common Lisp compilers (such as cmucl and sbcl) provide
such feedback, so it can be done.
> Of course, the really naive approach to binding the
> environment would be to include _all_ bound
> variables every time.
Well, sort of, as long as you keep track of shadowed
variables properly, but going that far is not
necessary.
(It's also an example of a transformation which is
safe but generally not profitable.)
> > You have told the compiler that the program shall
> > behave AS IF the evaluation occurs inside the
> > closure; changing program behaviour is usually
> > bad form.
> So if I wrote:
>
> f() ->
>     X = 5,
>     fun() -> X + 5 end.
>
> it would be bad form to reduce it to:
>
> f() ->
>     fun() -> 10 end.
It's safe to do this because the code behaves the same
way in both cases (functionally, not operationally --
memory allocation may differ, for example). From what
it sounds like, the compiler also judged it to be
profitable.
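The two versions above, written out as a compilable module (module and function names are mine), make the claimed equivalence easy to check:

```erlang
-module(fold).
-export([f1/0, f2/0]).

%% As the programmer wrote it: X is a compile-time constant.
f1() ->
    X = 5,
    fun() -> X + 5 end.

%% As the compiler may reduce it: the same observable result,
%% but the closure carries no environment at all.
f2() ->
    fun() -> 10 end.
```

Calling the fun returned by either function yields 10, so no Erlang program can distinguish them functionally.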
> Apparently, it's OK to do such constant propagation
> in some cases, but not in others. That's fine, I
> don't question that. The question is: where do you
> draw the line?
The precise actions of the beam compiler are basically
up to Björn et al. Can't help you there :-) But if
turning this optimization on/off makes the program
behave (functionally) differently, then the
optimization is not safe. If the program runs more
slowly with the optimization on, the optimization is
not profitable. Those are the rules of thumb. Unsafe
is a bug, unprofitable is disappointing.
(The "proper" way of reasoning for a compiler is to
talk about ALL programs, rather than specific ones.
So, an optimization has to be safe for all programs;
while profitability is a tradeoff by the compiler
writer, influenced by the compiler users, of course.)
(And let's not even get into the topic of interacting
optimization passes :-)
> Of course, the compiler can consider none of those,
> since it
> has no way of knowing. It can of course guess... If
> it guessed in my case, it was completely off base.
Yes, that's why general code motion can be tricky.
Production compilers tend to use carefully tuned
heuristics or profiling to decide. (It's essentially a
harder version of guessing the number of iterations of
a loop.)
Often, expensive operations are simply left where they
occur, while lightweight operations can be moved
around (since the cost of being wrong is slight).
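A small illustration of that rule of thumb, with made-up names: hoisting a cheap, loop-invariant multiplication out of a list comprehension costs at most one wasted operation if the list turns out to be empty, so the compiler risks little by moving it.

```erlang
-module(motion).
-export([scale/2]).

%% Factor * 2 does not depend on X, so it can be computed
%% once before the comprehension. If Xs is empty the hoisted
%% work is wasted, but the cost of being wrong is a single
%% multiplication - a safe bet for a lightweight operation.
scale(Xs, Factor) ->
    F = Factor * 2,
    [X * F || X <- Xs].
```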
> > The compiler usually resorts to heuristics at
> > this point. For a production compiler, the idea
> > is usually to avoid making the rare but
> > spectacularly bad decisions :-)
> And in this particular case, importing the whole
> record was a spectacularly bad decision (whether it
> was mine or the compiler's.)
To make it clearer, the compiler has to be careful not
to make spectacularly bad decisions on its own.
For example, if you had written your OPTIMIZED code at
first and the compiler then turned it into the
original code -- ie, if the compiler moved the record
INTO the closure -- then you probably would have
complained bitterly. (I know I would :-)