[erlang-questions] Updates, lenses, and why cross-module inlining would be nice
Richard A. O'Keefe
ok@REDACTED
Mon Dec 7 05:45:03 CET 2015
On 3/12/2015, at 8:11 pm, Michael Truog <mjtruog@REDACTED> wrote:
> If we had header files that were only allowed to contain functions and
> types that we called "template files" for lack of a better name
> (ideally, they would have their own export statements to help
> distinguish between private functions/types and the interface for modules
> to use)
> AND
> we had the include files (and template files) versioned within the beam
> output (to address the untracked dependency problem).
>
> Wouldn't that approach be preferable when compared
> to trying to manage the module dependency graph during
> runtime code upgrades? Why would the "template files" approach
> not be sufficient?
These are the differences I can see between 'template files'
and 'import_static modules'.
(1) 'template files' would not be modules.
Having their own export directives would make them
modul*ar*, but they could not use 'fun M:F/A' to refer
to their own functions, having no M to use; a small
sketch after point (3) below illustrates this.
This does not seem like a good thing.
(2) Headers can be nested. If 'template files' were to be
like headers, this would create nested scopes for top
level functions, and the possibility of multiple
distinct functions with the same name and arity and
in some sense "in" the very same module.
I don't see that as an insuperable problem, but it's a
very big change to the language in order to avoid what
is really not likely to be a practical problem.
(3) 'template files' are copied into the including module,
which allows different modules to have included
different versions of a 'template file'.
Like headers, this DOES NOT SOLVE the dependency and
version skew problems, IT CREATES THOSE PROBLEMS.
So with 'template files', either you have precisely the
same problems you have with headers plus a whole lot of
extra complexity, or you have to track dependencies anyway.
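To make point (1) concrete, here is a small sketch (the file
and function names are invented) of why a 'template file' would
have no M of its own to put after 'fun'.  Code in a header today
can only name a module by borrowing ?MODULE from whichever
module includes it:

    %%% step.hrl -- standing in for a 'template file'
    -export([bump_all/1, bump/1]).   %% the proposed per-template exports

    bump_all(Xs) ->
        %% 'fun M:F/A' needs a module name M.  The only one available
        %% is ?MODULE, which expands to whichever module textually
        %% includes this file; the template itself has none to offer.
        lists:map(fun ?MODULE:bump/1, Xs).

    bump(N) -> N + 1.

    %%% a.erl -- after inclusion, the fun above means fun a:bump/1.
    -module(a).
    -include("step.hrl").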
If we can take a step away from Erlang,
we can see that we have a fundamental problem.
Resource A is prepared from foundations x and c.
Resource B is prepared from foundations y and c.
Resources A and B have to "fit together" in some fashion.
Common foundation c has something to do with how that
fitting together works.
c is revised to c'.
A is re-prepared to A'.
If B were re-prepared, B' and A' would be compatible,
just as B and A were compatible.
But B and A' are not compatible.
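In Erlang terms that might look like this (a made-up example,
with A = a.beam, B = b.beam, and c = c.hrl holding a shared
record definition):

    %%% c.hrl, original version
    -record(point, {x, y}).

    %%% c.hrl, revised version c'
    -record(point, {x, y, z = 0}).

    %%% a.erl -- recompiled against the revised header, so it now
    %%% builds 4-element #point{} tuples
    -module(a).
    -include("c.hrl").
    -export([make/2]).
    make(X, Y) -> #point{x = X, y = Y}.

    %%% b.erl -- still compiled against the old header, so it still
    %%% expects 3-element #point{} tuples
    -module(b).
    -include("c.hrl").
    -export([x_of/1]).
    x_of(P) when is_record(P, point) -> P#point.x.

Now b:x_of(a:make(1, 2)) fails: the guard rejects the 4-element
tuple (without the guard it would be a badrecord error), even
though b was perfectly correct when it was compiled.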
As far as I can see, there are three general ways to
deal with this kind of problem.
1. Detection. When you try to use A' and B together,
detect that they were prepared from c' and c and
refuse to allow this use.
This requires dependency tracking; a hand-rolled sketch
of the idea appears at the end of this message.
2. Prevention. When c is revised to c', follow the
dependencies forward and queue A and B for rebuilding.
This requires dependency tracking.
3. Avoidance. Make the preparation step fairly trivial
so that whenever A (B) makes references to c, the
latest version of c is used.
In programming language terms, this is not even lazy
evaluation, it's call-by-name. It's the way Erlang
currently handles remote calls. (More or less.)
As a rule of thumb, early binding is the route to
efficiency (and early error detection); late binding
is the route to flexibility (and late error detection).
The performance cost may be anywhere between slight and
scary depending on the application.
3'. It is only necessary to provide the *appearance* of late
binding. I have a faint and probably unreliable memory
that the SPITBOL implementation of SNOBOL4 could generate
"optimised" code but could back it out and recompile it
when the assumptions it had made turned out to be wrong.
Amongst other things, this requires the system to keep
track of this-depends-on-that at run time, so it's still
dependency tracking, but it need not be visible to the
programmer.
What might be noticeable would be a performance blip as
loading c' caused everything that had been run-time compiled
against c to be de-optimised. TANSTAAFL.
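Returning to strategy 1, a crude hand-rolled form of detection
is possible today (a sketch only; the macro and function names
are invented): give the shared header a version macro, have
every includer export it, and compare before letting A and B
talk to each other.

    %%% c.hrl
    -define(C_VSN, 2).                 %% bump whenever c.hrl changes
    -record(point, {x, y, z = 0}).

    %%% in every module that includes c.hrl:
    -export([c_vsn/0]).
    c_vsn() -> ?C_VSN.

    %%% wherever A and B are wired together:
    check_skew(ModA, ModB) ->
        case ModA:c_vsn() =:= ModB:c_vsn() of
            true  -> ok;
            false -> {error, {header_version_skew, ModA, ModB}}
        end.

The catch, of course, is that somebody has to remember to bump
the macro and to call the check: it is dependency tracking done
by hand.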