Child modules draft feedback wanted
Romain Lenglet
rlenglet@REDACTED
Tue Apr 4 12:08:44 CEST 2006
Sorry, in this discussion, emails get longer and longer...
Richard A. O'Keefe wrote:
[...]
> Practically, there is a problem. Modules have to do at least
> two different things. They are the units of hot loading.
> They are the units of encapsulation. I'm sure I remember
> someone talking about using different mechanisms for different
> jobs. I'm proposing "full" modules as units of hot loading,
> and "child" modules as units of encapsulation.
[...]
Ah! I agree with that new objective of separating both concerns.
And you should state that objective explicitly in your draft
document, too.
Is there any motivation for defining a new
smaller-than-module-level concept (child module) instead of a
new larger-than-module-level concept that would aggregate
functions exported by several modules, and be a facade to those
modules?
In the latter case, modules would still be the units of hot
loading, while the units of encapsulation would be both modules
and facades (or composite modules, or whatever we call them).
I am thinking out loud.
A quick and dirty draft of what such a facade could look like:
-facade(example).
-export_from_module(some_module, [func1/0, func2/12]).
-export_from_module(another_mod, [func3/0]).
Or it could be simply a normal module:
-module(example).
-export_from_module(some_module, [func1/0, func2/12]).
-export_from_module(another_mod, [func3/0]).
And such a facade could be used by modules just like any normal
module:
-module(client).
start() -> example:func1().
Maybe a facade could also allow functions to be renamed when
re-exporting them.
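A minimal sketch of how such renaming might look, reusing the
hypothetical -export_from_module directive from above (the tuple
syntax is purely illustrative):

-module(example).
% Hypothetical: re-export some_module:func1/0 under the name start/0
-export_from_module(some_module, [{func1/0, start/0}]).
-export_from_module(another_mod, [func3/0]).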
[...]
> Of course we can manage without that; we have. But there is a
> big difference between looking at a function and KNOWING that
> it is only used in a small part of a large module and having
> to CHECK the whole module to find out.
>
> Being able to split a large module into pieces for editing
> also means that we can extend version control below the module
> level. If someone wants to edit the sorting routines in
> lists.erl, why should they have to check out the mapping and
> filtering functions at the same time?
I just don't understand why you assume that we must introduce a
concept at a lower scale than modules to do that. Couldn't you
simply split large modules into several smaller modules?
> Then we can go the other way.
>
> There are things which are currently expressed as collections
> of modules, where there isn't the slightest intention that
> some of the modules should ever be used anywhere else, where
> indeed things could go badly wrong if some of the "private"
> modules were used by unrelated code. There is no way of
> marking modules as "private", although the
>
> -export_to(Module, [F1/N1, ..., Fk/Nk]).
>
> directive that I proposed many years ago would make this kind
> of thing a lot safer and a lot easier to grasp the structure
> of than it presently is. Having *one* full module providing
> the "public" interface and making the other modules
> replaceable out-of-line children provides all of the benefits
> of the present setup, with fewer risks, and making them
> in-line children would remove the remaining risks. Here the
> absence of structure within modules has forced people to
> expose interfaces that should not have been exposed.
I understand that you have two objectives:
1- encapsulation: make parts of module implementations and
interfaces private to modules;
2- structuring: introduce more structure into implementations.
I think that your out-of-line child modules do not help with
encapsulation: any module can declare an out-of-line child module
as its child, even if that child was meant to be "private" to a
specific collection of modules. And as you wrote, "things could
go badly wrong if some of the "private" modules were used by
unrelated code": this also applies to out-of-line child modules.
From your documentation, "To_The_Child is like an -export
directive". Just like -export, there is the risk of being forced
to export "too private" features.
In-line child modules, on the other hand, are really
encapsulated, and are not accessible from outside of the
declaring module. But do they really help with structuring
modules? They do not help with splitting modules into several
more easily editable pieces. You complained above about modules
such as lists.erl: with in-line child modules it would be just as
difficult to browse the code.
More generally, we need something that would provide
encapsulation like your "in-line" modules, but sharing like your
"out-of-line" modules.
How about my naive "facade" concept above?
It would not really help with encapsulation: the composed modules
would still be accessible separately. But it would help with
structuring: a facade could expose a subset of the exported
functions of one module (hence the interface of modules such as
lists.erl could be split into several facades / interfaces), or
gather functions from several modules (hence it would provide a
structure over a set of modules).
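A minimal sketch of the first case, reusing the hypothetical
-facade and -export_from_module directives from above (the exact
grouping of functions is only an illustration):

-facade(lists_sorting).
-export_from_module(lists, [sort/1, sort/2, keysort/2, merge/2]).

-facade(lists_mapping).
-export_from_module(lists, [map/2, filter/2, foldl/3, foldr/3]).

A client would then call lists_sorting:sort(L) without knowing,
or caring, that the implementation lives in lists.erl.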
A larger-than-module concept needs more thought, and such naive
facades may be too simplistic. For instance, we could gain from
introducing composite modules, which would completely encapsulate
the modules they are composed of.
[...]
> > I completely fail to see how [the renaming trick] can
> > possibly be done at compile time.
>
> I was considering a solution based on external calls only,
> not on implicit applies.
>
> Right. You were considering, in short, a "solution" which
> doesn't work, and now you are talking about patching it into
> working.
No, I considered external calls only, from the start.
> foo:g(42) is an external call, while M:g(42)
> is an implicit apply. The distinction is very clear in
> Erlang.
>
> Untrue. There is NO distinction between the semantics of
> foo:g(42) and the semantics of M = foo, M:g(42) in Erlang
> (other than the existence of the variable M).
>
> For instance, cf. section 4.2 of the Efficiency Guide.
>
> Er, which version? In the current (5.4.13) version, section
> 4.2 is "Ets specific". Can you mean section 3.2?
Yes, sorry. Typo.
> But that is a fact about the current implementation, and the
> performance of so-called "implicit apply" could be GREATLY
> improved. Heck, it should be trivial. In fact, it's so
> trivial I am astonished that it's not already done.
Perhaps because your solution is already slower than implicit
applies? See the profiling results below.
> Here's what you do.
>
> 1. Each module has some number of functions
>
> '0'(a) -> a();
> ... one rule for each a/0 that is exported
> '0'(F) -> erlang:no_such_function(?MODULE, F, []).
>
> ...
>
> '8'(X1, ..., X8, a) -> a(X1, ..., X8);
> ... one rule for each a/8 that is exported
> '8'(X1, ..., X8, F) -> erlang:no_such_function(?MODULE, F,
> [X1,...,X8]).
>
> up to whatever size the implementors deem appropriate.
> These rules are built at compile time.
>
> 2. The system has a set of functions, which I'll put in
> 'erlang:'.
>
> '0'(F, mod1) -> mod1:'0'(F);
> one rule for each module
> '0'(F, M) -> erlang:no_such_module(M, F, []).
>
> ...
>
> '8'(X1, ..., X8, F, mod1) -> mod1:'8'(X1, ..., X8, F);
> one rule for each module
> '8'(X1, ..., X8, F, M) -> erlang:no_such_module(M, F,
> [X1,...,X8]).
>
> These rules are built at run time, but that's old
> technology. The new rules are added at the front of the
> existing code, and it's basically just a matter of adding an
> entry to a hash table.
>
> 3. A call M:f(X1,...,Xk) where k <= 8 is compiled as
>
> erlang:'k'(X1, ..., Xk, f, M)
>
> and a call apply(M, F, [X1,...,Xk]) where k <= 8 is known
> at compile time or the equivalent M:F(X1, ..., Xk) is compiled
> as
>
> erlang:'k'(X1, ..., Xk, F, M).
>
> For example, M:foo(42) should be compiled as
> erlang:'1'(42, foo, M).
>
> If an 'external call' costs 1.08 units relative to a 'local
> call', then an 'implicit apply' with not too many arguments
> should just cost two of these (one call to erlang:'k'/k+2 and
> one call to M:'k'/k+1), or 2.16 units. It really has no
> business being 7.76 units; that's much slower than it should
> be.
Let's consider the facts, and profile the following module:
-module(test).
-export([start/0, call_me/0]).

start() ->
    Module = ?MODULE,
    start_implicit_apply(Module, 100000),
    start_matching(Module, 100000).

start_implicit_apply(_Module, 0) -> ok;
start_implicit_apply(Module, Count) ->
    Module:call_me(),
    start_implicit_apply(Module, Count - 1).

start_matching(_Module, 0) -> ok;
start_matching(Module, Count) ->
    invoke_call_me(Module),
    start_matching(Module, Count - 1).

call_me() -> ok.

invoke_call_me(a) -> ok;
invoke_call_me(b) -> ok;
invoke_call_me(c) -> ok;
invoke_call_me(d) -> ok;
invoke_call_me(e) -> ok;
invoke_call_me(f) -> ok;
invoke_call_me(g) -> ok;
invoke_call_me(h) -> ok;
invoke_call_me(i) -> ok;
invoke_call_me(j) -> ok;
invoke_call_me(k) -> ok;
invoke_call_me(l) -> ok;
invoke_call_me(test) -> test:call_me().
And let's profile using fprof:
1> l(test).
2> fprof:apply(test, start, []).
3> fprof:profile().
4> fprof:analyse().
In the output, we can see:

...
{ {test,start_matching,2},        100001, 3523.577, 1451.172},  %
...
{ {test,start_implicit_apply,2},  100001, 2944.968, 1901.568},  %
...

3523.577 > 2944.968
But perhaps you will find a much better way to do pattern
matching in calls and it will be so trivial that you will be
astonished that it's not already done? ;-)
[...]
> This kind of thing gets an order of magnitude harder if module
> names are not, as they are now, context-independent.
>
> And both are compiled in a very different way in beam code.
>
> That is a fact about the present implementation which should
> clearly change. It is not a fact about the semantics of the
> language.
OK.
> So both kinds of statements can be manipulated differently by
> the compiler,
>
> The compiler is at liberty to manipulate things differently
> any way it likes, AS LONG AS IT PRESERVES THE SEMANTICS. The
> semantics of m:f(X1) and M:f(X1) do NOT differ in any way
> that entitles a compiler to give them translations with
> incompatible effects.
They have compatible effects: transformed calls are still calls
to functions in modules.
>
> and if only the module name in some external calls
> need to be replaced by a compiler, this is possible and easy.
>
> That would CHANGE THE SEMANTICS of existing code, which my
> proposal does not do. Making m:f(X) and (M=m, m:f(X)) do
> different things would be downright evil.
>
> Just take a look at lib/compiler/src/genop.tab in the OTP
> sources for the list of beam opcodes:
> - local calls are compiled into call/2 ops;
> - external calls are compiled into call_ext/2 ops;
> - implicit apply calls are compiled into normal calls to
> apply.
>
> They shouldn't be. As noted above, it is possible to do MUCH
> better. (Actually, section 3.2 of the Efficiency Guide says
> that "The compiler will now optimise this [implicit apply]
> syntax giving it better performance than apply/3", which
> cannot be true if implicit applies are just "compiled into
> normal calls to apply". Something is wrong here.)
Yes, in fact, they are optimized calls to apply. Optimization is
done by the compiler.
> HOW it is done is not in fact relevant. The point is that
> whatever the compiler does (and it can make local calls work
> by sending e-mail if it wants to) it has to get the SEMANTICS
> right, which according to all the Erlang documentation
> (including the specification "books") I've ever seen means
> that "implicit apply" and apply and external call all have the
> SAME semantics.
I agree that I take a different viewpoint: I am thinking in terms
of the Erlang VM and its instruction set. I consider it a better
abstraction level than the source language for reasoning about
program transformation.
Now, I agree that the semantics of my external calls are
different. But in fact I have not changed the semantics of
existing external calls; rather, I have *added* a variant form of
external call. In Mod:call(), if Mod is statically determined to
be an atom that is declared in a -require clause in the module,
then this is a "modified" external call; otherwise it is a
"normal" external call.
It is not a change in semantics, because all existing code would
keep the same semantics. Only new code, explicitly using -require
and virtual module names in external calls, would use the new
variant of external calls.
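To illustrate, here is a minimal sketch of how I imagine the
-require variant being used (the names logger and log/1 are only
illustrative; the binding of the virtual name to a real module
would come from outside the source code, e.g. from a
configuration):

-module(logging_client).
-require(logger).   % logger is a virtual module name

start() ->
    % Since logger appears in a -require clause, this is the
    % "modified" variant: the compiler substitutes the real module
    % name bound to logger before generating the call_ext op.
    logger:log("started").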
I think that what bothers you is that the syntax of that variant
of external calls is the same as that of "normal" external calls.
But I believe that introducing a new syntax for that variant
would be unnecessary, since there would be no ambiguity from the
compiler's viewpoint.
And I would like to explain why I consider a variant semantics
(i.e., with renaming of modules) for external calls only, and not
a modification of the semantics of all function calls.
The purpose is to remove explicit bindings between modules from
the modules' source code. When using implicit applies or calls to
apply(), the module name is a variable, so it can already be a
parameter coming from outside of the module implementation and be
determined at runtime. We do not need any new mechanism there to
allow bindings to be specified outside of the modules' source
code.
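For example, with an implicit apply the module is already a plain
runtime parameter (the module and function names below are only
illustrative):

-module(worker).
-export([run/1]).

% Backend may be any module exporting handle/0; the binding is
% decided by whoever calls run/1, not by this module's source code.
run(Backend) ->
    Backend:handle().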
My proposal is to remove explicit bindings between modules from
the source code of modules 1) when relationships between modules
need not change at runtime (otherwise, implicit applies must be
used), and 2) when we need better performance than implicit
applies or calls to apply().
Therefore, we need external calls. But since external calls have
the module name statically compiled in, we need a way to
parameterize the called module name from outside of a module's
code and to substitute it at compile time.
> So yes, I claim it once again: replacing the module name in
> external calls is easy at compile time. We need only to
> transform the parameters of the call_ext ops.
>
> In short, you are proposing an INCOMPATIBLE CHANGE to the
> semantics of the language, for no other reason than that it's
> easy.
It is compatible with existing Erlang code. I propose to add a
new variant of function calls.
[...]
> - File names never appear in configuration files, only module
> names.
>
> That can't possibly work. File names have to appear
> SOMEWHERE. One of the reasons for having a configuration
> language is to cope with the vagaries of file systems. Module
> names are case sensitive. In several M$ajor file systems, file
> names are not. So mapping from module names to file names is
> not trivial. Module names do not contain directory
> information, but modules do not all live in the same
> directory.
Mapping real module names (not my logical ones...) to module file
names is the job of the code loader. And it already does that job
well, doesn't it?
I think that it is better to centralize all file name handling in
one place. And the code loader seems to be the right place.
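For instance, the code server already resolves module names to
file paths (the path below is just an example from one
installation, and my_module is a hypothetical module on the code
path):

1> code:which(lists).
"/usr/local/lib/erlang/lib/stdlib-1.14/ebin/lists.beam"
2> code:load_file(my_module).
{module,my_module}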
[...]
> - The interface between a module and another module is
> already explicit (it is the list of functions exported by
> every module).
>
> No, only HALF of the interface is there, and only HALF of the
> information you actually need in the interface.
>
> In general, you need to know not just "what do I offer to the
> whole world" but "what do I offer to which particular
> modules". Think about the way Eiffel lets you export features
> to specified classes.
I am not so fond of Eiffel's visibility mechanism. It forces you
to make strong assumptions about a class's environment inside the
code of the class itself.
But I agree that we should have a way to limit the visibility of
features.
> Think about the fact that in Erlang,
> when you use a behaviour, you export some functions *TO THE
> BEHAVIOUR*, which are never ever supposed to be called by
> anything *BUT* the behaviour, BUT YOU CAN'T SAY THAT.
>
> My -use_child directive has TWO function lists because every
> relationship has two sides,
No, not every relationship has two sides. I prefer to focus on
modules' interfaces instead of on bindings. Yes, every module (or
any unit of encapsulation) should declare both its provided
features and its required features. But that does not mean that
it must always provide its features to the same modules it uses.
Usage relationships are generally one-way, not two-way.
When one of my modules uses the lists module, lists does not need
to use the features of my module.
> and because it is important to
> distinguish between "I am willing to offer this function to
> that child" and "this child currently needs that function from
> its parent".
I agree that it is important to distinguish between "what
functions I am willing to offer", and "what functions I
require". But we can talk about usage relationships without
introducing child and parent roles.
> > As I understand it, the main use of -ifdef, etc. is to
> > select alternative architectures. Using an ADL, you would
> > simply have to write several specs.
> >
> > Yes, but that could mean repeating hundreds of lines.
> > I'm thinking in terms of something like C/MESA.
>
> Hundreds of lines???
>
> Yes. I'm thinking of configuration files that cover
> *applications*, not single modules.
Nice, I am thinking about that, too.
[...]
> > I think that it is a bad idea to try to specify a single
> > construct that would replace all actual usages of -include
> > and -ifdef: either the new construct will be as bad as what
> > it replaces, or it will not cover all usages.
> >
> > This seems to me to be a matter of personal taste; I see no
> > reason why the claim should be true.
>
> OK, it is mainly a matter of personal taste.
>
> There are two statements:
> (1) "It is a bad idea .. single construct ... -ifdef."
> (2) "Either the new construct will be as bad ... all
> usages."
>
> The first statement appears to express a matter of taste; you
> don't like the idea. The second statement is the grounds for
> the first. Maybe it is true. The second statement, however,
> is not a matter of personal taste. As I said, that one
> _could_ be true, and I would like to see some evidence for it.
Well, we would first need to analyze all the current usages of
the preprocessor.
> > The Erlang/OTP support people have already made it plain
> > that they will not tolerate any replacement for -record
> > that *requires* cross-module inlining.
>
> Can you please point me to such a discussion on the Erlang
> mailing-list or elsewhere?
>
> The thread about "when will abstract patterns be implemented"
> within the last month or possibly two.
Ok. Thanks.
> And who has talked about cross-module inlining in this
> discussion?!
>
> I have. For heaven's sake, it's half of what my proposal is
> ABOUT! Abstract patterns were rejected by the OTP implementors
> (in the thread mentioned above) on the grounds that they
> required cross-module inlining. (As the original paper made
> clear, they DON'T, but that's another matter.)
>
> The fact that the current Erlang system doesn't do
> cross-module inlining is one of the major reasons for the
> preprocessor. If you want to get rid of the preprocessor, you
> have to provide some means whereby constants &c can be
> compiled efficiently without requiring general cross-module
> inlining (which amongst other things messes up naive
> hot-loading). People who use records want them to be as
> efficient as any other kind of pattern matching, so that there
> is no performance penalty for writing readable code. Since
> there is no cross-module inlining, they don't WANT records to
> be imported from other modules. (If they were going to put up
> with that, they might as well use abstract patterns.) So
> there has to be something that shared record definitions can
> go in which is not a full module and so that record
> definitions CAN be fully presented to the compiler.
>
> The current answer is that the something in question is a .hrl
> file. My answer is that it's an integrated child module.
My answer is that it's the value returned by a function call
(record_info/2), with that function's implementation generated
automatically by the compiler.
> -module(mod1).
> -record(record1, {field1, field2}).
> ...
>
> -module(mod2).
> -import_records(mod1, [record1]).
> ...
>
> Would be strictly equivalent (and can be easily transformed
> into):
>
> -module(mod1).
> -record(record1, {field1, field2}).
> % Generated automatically by the compiler:
> -export([record_info/2]).
> record_info(fields, record1) -> [field1, field2];
> record_info(size, record1) -> 2.
> ...
>
> -module(mod2).
> % This -record is generated from the result of calls to
> % mod1:record_info/2 by the compiler:
> -record(record1, {field1, field2}).
>
> WHOOPS! You are doing cross-module inlining! You just made
> an incompatible change to the semantics of the language!
> (Haven't we seen that before?) You are *supposed* to be able
> to load a new version of mod1 WITHOUT recompiling mod2. Now
> you can't.
If a new version of mod1 is loaded:
- if the record definition has been changed in mod1 (the -record
statement has been changed, and mod1 has been recompiled), then
mod2 must also be recompiled;
- if the record definition has not been changed in mod1, then
mod2 need not be recompiled.
This is true both for the current way of using -record, based on
-include of .hrl files, and for my proposal.
The fact that there may be cross-module inlining in my solution
is because the first implementation would directly rely on the
current -record construct and the current way of compiling it,
and because the current implementation of -record does imply
cross-module inlining.
It is an implementation detail.
If one day there is an implementation of -record that does not do
cross-module inlining, then my solution would not do
cross-module inlining either! Look at the source code before
transformation:
-module(mod2).
-import_records(mod1, [record1]).
...
There are no record definitions in the source code.
The purpose of my proposal is not to replace the way record
definitions are translated by the compiler, but to get rid of
using -include to share record definitions.
If -record currently requires such inlining, that is not directly
my problem: my purpose is to get rid of -include and .hrl files.
And I think that implementing records without inlining is a
separate problem. If that more general problem is solved in the
future, then my proposal may even be discarded, but I think that
for now it is an acceptable intermediate solution which gets rid
of include files.
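For reference, this is the kind of .hrl-based sharing that my
proposal aims to replace (file names are illustrative):

%% shared.hrl
-record(record1, {field1, field2}).

%% mod1.erl
-module(mod1).
-include("shared.hrl").

%% mod2.erl
-module(mod2).
-include("shared.hrl").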
By the way, you have the same problem: to share a record
definition with child modules, you have to declare the -record
in an out-of-line child module, and have mod1 and mod2 both be
parents of that child module. When you modify the child module
(e.g. to modify the record definition), you have to recompile
both mod1 and mod2.
"WHOOPS! You are doing cross-module inlining!" ;-)
> % Since importing a record def is just like declaring
> % it locally, it would be exported from that module also:
> -export([record_info/2]).
> record_info(fields, record1) -> [field1, field2];
> record_info(size, record1) -> 2.
> ...
>
> Please point me where you see inlining in that.
>
> Great balls of fire. Isn't it obvious? You are talking
> about the compiler processing mod2 WITH FULL KNOWLEDGE OF A
> DEFINITION IN mod1. That is precisely cross-module inlining.
Ok.
> Now, I happen to believe that cross-module inlining is a good
> thing. (And I know it has been tried in an Erlang context.) I
> also believe that hot loading is a good thing. But I also
> know that this kind of thing has been done before. Robert
> Dewar's SPITBOL implementation of SNOBOL coped with a language
> (SNOBOL) which allowed new code to be created at run time
> (even creating new data types!) and could undo code generation
> which had made assumptions that were no longer true. That was
> in the 70s. There are some Smalltalk compilers which can do
> some inlining, and recompile when the original definitions
> change. That kind of thing was also done for Self. It
> requires excellent dependency tracking, and it's not at all
> easy.
>
> So I think that refusing to do cross-module inlining (yet) is
> a good engineering decision by the OTP team.
>
> By the way, anyone who has looked at my proposal recently
> should pretend they never saw the configuration language
> draft. It is MUCH too complicated. I have a whole new draft
> of that, but haven't had time today to type it up. For one
> thing, it never occurred to me that people were trying to use
> case sensitive module names, but they are.
>
> In my proposal, I am taking a very strong stand on semantics.
> NO CHANGE to the semantics of existing Erlang code. NONE.
I propose to add a new form of external call, without changing
the semantics of existing code at all.
> One consequence of that is that things which seem very easy on
> the assumption that you are allowed to make such changes may
> in fact be very hard. Another consequence is that the design
> one ends up with may not be as nice, in many ways, as the
> design one would have produced for a new language.
>
> I think many people trying to get real work done with Erlang
> would say "if you want me to put up with a change in the
> meaning of module names just so that you can get rid of the
> preprocessor, no thanks, I'll stick with the preprocessor".
--
Romain LENGLET