[erlang-questions] Updates, lenses, and why cross-module inlining would be nice

Michael Truog <>
Wed Feb 17 08:50:45 CET 2016


On 02/16/2016 11:20 PM, Pierre Fenoll wrote:
> Release inlining could be a compiler option only activated when desired.
> It would require no additional semantics in the language, only the additional latency of swapping inlined code times the number of modules, though we may be able to go sublinear here.
If you are unable to update a system with separate components (and instead need to update everything to make sure the system functions correctly), the difference between that and simply restarting the operating system process becomes negligible, unless you count the extra complexity of not simply restarting the process.  There are other details, like keeping ports open, that might work when changing everything, or might not.

>
> On 17 Feb 2016, at 06:53, Michael Truog < <mailto:>> wrote:
>
>> On 02/15/2016 07:38 AM, Pierre Fenoll wrote:
>>> This is surely a very naive idea on how to one day get cross-module inlining
>>> or at least enlarge the range of possibilities for the JIT.
>>>
>>> A lot of inlining is impossible due to the possibility to hot-swap code.
>>> But what about in the context of a release?
>> Inlining isn't made impossible by the possibility of hot-swapping code.  However, inlining is simpler when modules are grouped together into a blob you can call a release.
>>
>> Inlining could rely on keeping both an inlined and a non-inlined version of each function, where the inlined version contains references to the specific versions of the modules it depends on.  Any hot-code loading would then create a ripple effect, changing parts of the system beyond the module being updated.  That makes the module update process error-prone (if only due to the difference in latency when a module is indirectly changed through an inlined function) because it abandons the concept of isolation.
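
To make the contrast concrete, here is a minimal sketch (module and function names invented for illustration) of why inlining is currently confined to a single module: a local call can be inlined at compile time, while a fully qualified remote call must be resolved at call time so that newly loaded code takes effect immediately.

```erlang
-module(a).
-export([double_all/1]).
%% Ask the compiler to inline the local helper; this is safe because the
%% call site and the definition live in the same module.
-compile({inline, [double/1]}).

double(X) -> 2 * X.

double_all(L) ->
    %% Local call: eligible for inlining.
    [double(X) || X <- L].
    %% A remote call such as b:double(X) could not be inlined today: it
    %% must be looked up at call time, so a newly loaded version of
    %% module b takes effect immediately without touching module a.
```

If `double/1` lived in another module and were inlined here anyway, reloading that module would leave this copy stale until module `a` was also re-prepared, which is the ripple effect described above.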
>>
>> Tracking the dependencies, which can be cyclic, has complexity that is at least linear, so as you add more inlined functions, the latency of a module update grows at least linearly.  That means a module update could cause longer downtime when many functions are inlined, with the downtime getting worse as more functions are inlined, and that is only counting the inlining in the old version of the module.  When inlining is used, you have a tendency to group modules together, as you have described, to make sure everything is inlined as a group and the lower-latency function calls are preserved.  If all the modules that are inlined together are also updated together, you can minimize the problems of non-inlined functions depending on old modules, but that makes the module update process longer and gives up the isolation a single module update currently provides.
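
As a sketch of the bookkeeping this would require (module name and data layout are invented), here is how an update system might compute the set of modules forced to change when one module with inlined callers is reloaded.  Note that it must handle cycles, and that the result set, and therefore the update latency, only grows as more inlining is done:

```erlang
-module(inline_deps).
-export([to_reload/2]).

%% Given a map of Module => [ModulesThatInlinedIt], compute every module
%% that must be re-prepared when Changed is reloaded.  A Seen set makes
%% the walk terminate even when the inlining dependencies are cyclic.
to_reload(Changed, InlinedBy) ->
    walk([Changed], InlinedBy, sets:new()).

walk([], _InlinedBy, Seen) ->
    sets:to_list(Seen);
walk([M | Rest], InlinedBy, Seen) ->
    case sets:is_element(M, Seen) of
        true ->
            walk(Rest, InlinedBy, Seen);
        false ->
            Next = maps:get(M, InlinedBy, []),
            walk(Next ++ Rest, InlinedBy, sets:add_element(M, Seen))
    end.
```

With no inlining the answer is always just the changed module itself; every inlined call site enlarges the set.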
>>
>> Forcing inlined modules to be updated as a big group might work well with application versions, if functions can only be inlined within a single OTP application and never across OTP application boundaries.  However, as the documentation shows, changing an OTP application at runtime is ad hoc, since it depends on what you are doing with your supervisor(s) and everything else, while a single module update is a simple atomic change between two module versions.
>>
>> With the current situation in Erlang, a module update has the really nice property that you know only that module changes when you update its source code.  Keeping that isolation of change, so potential errors have a definite and clear scope, is important.  The problem of dealing with dependencies created by inlining is why I suggested a templating concept instead, since it achieves inlining without making module updates error-prone.
>>
>>>
>>> When creating a release (or updating an old one),
>>> the compiler/relx/whatever is generating the release has a clear and finite scope
>>> of all the code that will be shipped and run.
>>> Maybe not a clear idea of extra dependencies or resources,
>>> but at least all the Erlang code should be there.
>>> Then a compiler would be able to do supercompilation, aggressive inlining, …
>>>
>>> I was thinking that we could then get a release that resembles an executable
>>> and if we can link external dependencies (OpenSSL, …)
>>> we would then have a somewhat standalone binary that kind of works like escripts.
>>>
>>> Is my reasoning right?
>>> Where am I on the scale from extremely naive all the way up to ROK?
>>>
>>>
>>> Cheers,
>>> -- 
>>> Pierre Fenoll
>>>
>>>
>>> On 7 December 2015 at 05:45, Richard A. O'Keefe < <mailto:>> wrote:
>>>
>>>
>>>     On 3/12/2015, at 8:11 pm, Michael Truog < <mailto:>> wrote:
>>>     > If we had header files that were only allowed to contain functions and
>>>     > types that we called "template files" for lack of a better name
>>>     > (ideally, they would have their own export statements to help
>>>     > distinguish between private functions/types and the interface for modules
>>>     > to use)
>>>     > AND
>>>     > we had the include files (and template files) versioned within the beam
>>>     > output (to address the untracked dependency problem).
>>>     >
>>>     > Wouldn't that approach be preferable when compared
>>>     > to trying to manage the module dependency graph during a
>>>     > runtime code upgrades?  Why would the "template files" approach
>>>     > not be sufficient?
>>>
>>>     These are the differences I can see between 'template files'
>>>     and 'import_static modules'.
>>>
>>>     (1) 'template files' would not be modules.
>>>         Having their own export directives would make them
>>>         modul*ar*, but they could not use 'fun M:F/A' to refer
>>>         to their own functions, having no M they could use.
>>>
>>>         This does not seem like a good thing.
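
A small example of the mechanics being referred to: a fun reference of the form `fun M:F/A` has to name a concrete module, which a non-module 'template file' would not have (the module name below is invented for illustration):

```erlang
-module(m).
-export([twice/1, apply_ref/1]).

twice(X) -> 2 * X.

%% 'fun M:F/A' requires a module name M.  Inside a real module, ?MODULE
%% supplies it; code living in a 'template file' would have no module of
%% its own to put here.
apply_ref(X) ->
    F = fun ?MODULE:twice/1,
    F(X).
```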
>>>
>>>     (2) Headers can be nested.  If 'template files' were to be
>>>         like headers, this would create nested scopes for top
>>>         level functions, and the possibility of multiple
>>>         distinct functions with the same name and arity and
>>>         in some sense "in" the very same module.
>>>
>>>         I don't see that as an insuperable problem, but it's a
>>>         very big change to the language in order to avoid what
>>>         is really not likely to be a practical problem.
>>>
>>>     (3) 'template files' are copied into the including module,
>>>         which allows different modules to have included
>>>         different versions of a 'template file'.
>>>
>>>         Like headers, this DOES NOT SOLVE the dependency and
>>>         version skew problems, IT CREATES THOSE PROBLEMS.
>>>
>>>     So with 'template files', either you have precisely the
>>>     same problems you have with headers plus a whole lot of
>>>     extra complexity, or you have to track dependencies anyway.
>>>
>>>     If we can take a step away from Erlang,
>>>     we can see that we have a fundamental problem.
>>>
>>>     Resource A is prepared from foundations x and c.
>>>     Resource B is prepared from foundations y and c.
>>>     Resources A and B have to "fit together" in some fashion.
>>>     Common foundation c has something to do with how that
>>>     fitting together works.
>>>
>>>     c is revised to c'.
>>>     A is re-prepared to A'.
>>>     If B were re-prepared, B' and A' would be compatible,
>>>     just as B and A were compatible.
>>>     But B and A' are not compatible.
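
In Erlang terms, a minimal sketch of this skew (names invented; records-as-tuples are used to simulate two modules compiled against different versions of a shared record definition playing the role of c):

```erlang
-module(skew).
-export([demo/0]).

%% "c" defines {user, Name, Age}; "c'" adds a field: {user, Name, Age, Email}.
%% new_user/1 plays the role of A' (prepared from c'),
%% old_age/1 plays the role of B (still prepared from c).
new_user(Name) -> {user, Name, 0, undefined}.
old_age({user, _Name, Age}) -> Age.

demo() ->
    try old_age(new_user(<<"ann">>)) of
        _Age -> compatible
    catch
        error:function_clause -> incompatible   %% B and A' no longer fit
    end.
```

Here `demo/0` returns `incompatible`: B's pattern was prepared from c and no longer matches the shape A' builds from c'.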
>>>
>>>     As far as I can see, there are three general ways to
>>>     deal with this kind of problem.
>>>
>>>     1.  Detection.  When you try to use A' and B together,
>>>         detect that they were prepared from c' and c and
>>>         refuse to allow this use.
>>>
>>>         This requires dependency tracking.
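
A sketch of one fingerprint detection could use in Erlang (module name invented): every loaded module exposes a compile-time MD5 through `module_info/1`, so a consumer can record the fingerprint of the foundation it was prepared against and refuse to proceed when the currently loaded version differs:

```erlang
-module(detect).
-export([check/2]).

%% Compare the MD5 the caller was prepared against with the MD5 of the
%% currently loaded Foundation module; a mismatch means version skew.
check(Foundation, ExpectedMd5) ->
    case Foundation:module_info(md5) of
        ExpectedMd5 -> ok;
        _Other      -> {error, version_skew}
    end.
```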
>>>
>>>     2.  Prevention.  When c is revised to c', use dependencies
>>>         forward and queue A and B for rebuilding.
>>>
>>>         This requires dependency tracking.
>>>
>>>     3.  Avoidance.  Make the preparation step fairly trivial
>>>         so that whenever A (B) makes references to c, the
>>>         latest version of c is used.
>>>
>>>         In programming language terms, this is not even lazy
>>>         evaluation, it's call-by-name.  It's the way Erlang
>>>         currently handles remote calls.  (More or less.)
>>>
>>>         As a rule of thumb, early binding is the route to
>>>         efficiency (and early error detection), late binding
>>>         is the route to flexibility (and late error detection).
>>>         The performance cost may be anywhere between slight and
>>>         scary depending on the application.
>>>
>>>     3'. It is only necessary to provide the *appearance* of late
>>>         binding.  I have a faint and probably unreliable memory
>>>         that the SPITBOL implementation of SNOBOL4 could generate
>>>         "optimised" code but could back it out and recompile it
>>>         when the assumptions it had made turned out to be wrong.
>>>
>>>         Amongst other things, this requires the system to keep
>>>         track of this-depends-on-that at run time, so it's still
>>>         dependency tracking, but it need not be visible to the
>>>         programmer.
>>>
>>>         What might be noticeable would be a performance blip as
>>>         loading c' caused everything that had been run-time compiled
>>>         against c to be de-optimised.  TANSTAAFL.
>>>
>>>
>>>     _______________________________________________
>>>     erlang-questions mailing list
>>>      <mailto:>
>>>     http://erlang.org/mailman/listinfo/erlang-questions
>>>
>>>
>>>
>>>
>>
