Child modules draft feedback wanted

Richard A. O'Keefe ok@REDACTED
Tue Apr 4 08:12:19 CEST 2006


I wrote:	
	> I also want to repeat that for Prolog and Erlang I have
	> several times (and in the case of Prolog, several times in the
	> last week) heard a "user" demand for some kind of hierarchical
	> structure within a module. "Child modules" were inspired by
	> Ada, although they don't have much in common with Ada 95 child
	> modules.

Romain Lenglet <rlenglet@REDACTED> replied:
	And why is there a demand? What problem do they solve that cannot 
	be solved otherwise and more or less simply?
	
Obviously, there is no such problem.  We can do everything "more or
less simply" by using Turing machines, possibly extended with "!"
and "receive".

Practically, there is a problem.  Modules have to do at least two different
things.  They are the units of hot loading.  They are the units of
encapsulation.  I'm sure I remember someone talking about using different
mechanisms for different jobs.  I'm proposing "full" modules as units of
hot loading, and "child" modules as units of encapsulation.

To see that there is a problem, I made some size measurements on
Erlang R9C.  (The R11 release I downloaded turned out to be corrupted,
so I couldn't measure that, and while I _am_ downloading R10 to do my
measurements on, it's taking long enough that I decided not to wait.)

> summary(s)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
    2.0    44.5   127.0   291.4   314.0 22508.0 

That is, there is at least one module with 2 SLOC,
1/4 of the modules have 44 or fewer SLOC,
1/2 of the modules have 127 or fewer SLOC,
3/4 of the modules have 314 or fewer SLOC,
but there is at least one module with 22,508 SLOC.

The distribution of SLOC sizes shows two peaks, one around 8 SLOC
(yes, that small) and one around 192 SLOC.  But there are quite a
few big ones.

Unsurprisingly, the very largest files are parsers generated by Yecc.
(The very biggest is the Megaco parser.)  So we had better remove
those from discussion.

> summary(s)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
    2.0    45.0   126.0   263.3   309.0  5927.0 
> summary(log10(s))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  0.301   1.653   2.100   2.043   2.490   3.773 

The median size is somewhere about 125 SLOC, and files that size
probably don't need named parts, let alone replaceable named parts.

More than 5% of the files are over 1000 SLOC, however, and it
defies belief that there is no significant structure in a file
that size.  In fact, I'd say that a file with 500 SLOC (and
remember, that's SLOC, not raw lines) probably has significant
structure, and about 15% of the modules are that size or more.
(And several of the smaller files I've looked at would benefit
from explicit structure.)  While >500 SLOC modules are 15% of
modules, they contain more than 55% of the SLOC.  So I can say
that more than half of the Erlang code in R9C is in modules
that, in my view, would be easier to read with some explicit
structure.

Of course we can manage without that; we have.  But there is a
big difference between looking at a function and KNOWING that it
is only used in a small part of a large module and having to
CHECK the whole module to find out.

Being able to split a large module into pieces for editing also
means that we can extend version control below the module level.
If someone wants to edit the sorting routines in lists.erl, why
should they have to check out the mapping and filtering functions
at the same time?

Then we can go the other way.

There are things which are currently expressed as collections of
modules, where there isn't the slightest intention that some of
the modules should ever be used anywhere else, where indeed things
could go badly wrong if some of the "private" modules were used by
unrelated code.  There is no way of marking modules as "private",
although the

    -export_to(Module, [F1/N1, ..., Fk/Nk]).

directive that I proposed many years ago would make this kind of thing
a lot safer, and its structure a lot easier to grasp, than it presently
is.  Having *one* full module providing the "public" interface and
making the other modules replaceable out-of-line children provides all
of the benefits of the present setup, with fewer risks, and making them
in-line children would remove the remaining risks.  Here the absence of
structure within modules has forced people to expose interfaces that
should not have been exposed.
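The -export_to idea can be sketched as follows.  Nothing below is
implemented in any Erlang/OTP release, and the module and function
names are invented purely for illustration:

```erlang
%% Hypothetical sketch of the proposed -export_to directive.
%% The directive does not exist; the names are invented.
-module(db_backend).
-export([status/0]).                        %% public to all callers
-export_to(db_server, [open/1, close/1]).   %% callable ONLY from db_server

status() -> up.
open(Name) -> {ok, Name}.
close(_Handle) -> ok.
```

The point is that open/1 and close/1 are part of an interface, but an
interface offered to exactly one named module, not to the world.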

In summary, hierarchy within a module provides
 - explicit structure and
 - CHECKED explicit encapsulation
which are not needed for the majority of *modules*,
but which would be useful for reading the majority of *lines*.
This would help people write code that was easier to read.


	> But a parent/child relationship is one of the things I am
	> trying to ADD, not take away.  I repeat, there are people who
	> *WANT* hierarchical structure within modules, and, if not
	> taken to excess, I do not regard that as unreasonable.  (The
	> ersatz Java dotted module names, which I DO regard as
	> unreasonable, are an attempt to provide some kind of
	> hierarchical structure above the level of modules; I think
	> replaceable children can satisfy that need in a much better
	> way.)
	
	Java packages are only namespaces. No more. There are not even 
	hierarchical relationships between packages.
	
You mistook me completely.  I was not talking about Java packages,
but about the "ersatz Java[-like] dotted module names" FOR ERLANG.
Of *course* there is a hierarchical relationship between such names.
It's not a relationship present in anything *other* than the names,
which is why it's "unreasonable".  Nevertheless, the whole point of
trying to introduce such names into Erlang was to provide applications
with their own namespaces for modules, which is a form of hierarchy.

	> I completely fail to see how [the renaming trick] can possibly
	> be done at compile time.

	I was considering a solution based on external calls only, not on 
	implicit applies.

Right.  You were considering, in short, a "solution" which doesn't work,
and now you are talking about patching it into working.  

	foo:g(42) is an external call, while M:g(42) 
	is an implicit apply. The distinction is very clear in Erlang. 

Untrue.  There is NO distinction between the semantics of foo:g(42)
and the semantics of M = foo, M:g(42) in Erlang (other than the existence
of the variable M).
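A two-line module makes the point concrete (the module name demo is
mine):

```erlang
%% Minimal demonstration that the two call forms mean the same thing;
%% only the compiled code differs in today's implementation.
-module(demo).
-export([static_call/0, dynamic_call/0]).

static_call() ->
    lists:reverse([1, 2, 3]).    %% external call: module known at compile time

dynamic_call() ->
    M = lists,
    M:reverse([1, 2, 3]).        %% "implicit apply": module in a variable
```

Both functions return [3,2,1].  A compiler is free to translate the
two forms differently, but not to make them behave differently.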

	For instance, cf. section 4.2 of the Efficiency Guide.

Er, which version?  In the current (5.4.13) version, section 4.2
is "Ets specific".  Can you mean section 3.2?

But that is a fact about the current implementation, and the performance
of so-called "implicit apply" could be GREATLY improved.  Heck, it should
be trivial.  In fact, it's so trivial I am astonished that it's not already
done.

Here's what you do.

    1.  Each module has some number of functions

    '0'(a) -> a();
	... one rule for each a/0 that is exported
    '0'(F) -> erlang:no_such_function(?MODULE, F, []).

    ...

    '8'(X1, ..., X8, a) -> a(X1, ..., X8);
	... one rule for each a/8 that is exported
    '8'(X1, ..., X8, F) -> erlang:no_such_function(?MODULE, F, [X1,...,X8]).

    up to whatever size the implementors deem appropriate.    
    These rules are built at compile time.

    2.  The system has a set of functions, which I'll put in 'erlang:'.

    '0'(F, mod1) -> mod1:'0'(F);
	one rule for each module
    '0'(F, M) -> erlang:no_such_module(M, F, []).

    ...

    '8'(X1, ..., X8, F, mod1) -> mod1:'8'(X1, ..., X8, F);
        one rule for each module
    '8'(X1, ..., X8, F, M) -> erlang:no_such_module(M, F, [X1,...,X8]).

    These rules are built at run time, but that's old technology.
    The new rules are added at the front of the existing code, and
    it's basically just a matter of adding an entry to a hash table.

    3.  A call M:f(X1,...,Xk) where k <= 8 is compiled as

    erlang:'k'(X1, ..., Xk, f, M)

    and a call apply(M, F, [X1,...,Xk]) where k <= 8 is known at compile
    time or the equivalent M:F(X1, ..., Xk) is compiled as

    erlang:'k'(X1, ..., Xk, F, M).

    For example, M:foo(42) should be compiled as erlang:'1'(42, foo, M).

If an 'external call' costs 1.08 units relative to a 'local call',
then an 'implicit apply' with not too many arguments should just cost
two of these (one call to erlang:'k'/k+2 and one call to M:'k'/k+1),
or 2.16 units.  It really has no business being 7.76 units; that's much
slower than it should be.

The space cost for the extra hidden rules (and I am NOT seriously suggesting
'0' to '8' as names for them) is linear in the size of the program; there
are exactly 2(kmax + 1) clauses for each module.

There are other ways to manage this, including at least one fairly obvious
way that should get the cost down even further, but this is the easiest
scheme to explain that I've come up with yet.
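To make steps 1 and 3 concrete, here is what the hidden clauses might
look like for a small module exporting f/1 and g/1, written out by
hand.  The dispatch-function name '1' and the error handling are
placeholders, as noted above; no Erlang compiler generates this today.

```erlang
-module(mod1).
-export([f/1, g/1, '1'/2]).

f(X) -> X + 1.
g(X) -> X * 2.

%% Hidden dispatch function: one clause per exported function of
%% arity 1, plus a failure clause.  (erlang:no_such_function/3 is a
%% placeholder in the text; here we simply raise undef.)
'1'(X1, f) -> f(X1);
'1'(X1, g) -> g(X1);
'1'(X1, F) -> erlang:error({undef, [{?MODULE, F, [X1]}]}).
```

Under step 3, a call M:f(42) with M bound to mod1 becomes
erlang:'1'(42, f, mod1), which dispatches to mod1:'1'(42, f): two
ordinary calls in all.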

This kind of thing gets an order of magnitude harder if module names
are not, as they are now, context-independent.

	And both are compiled in a very different way in beam code.

That is a fact about the present implementation which should clearly change.
It is not a fact about the semantics of the language.

	So both kinds of statements can be manipulated differently by the 
	compiler,

The compiler is at liberty to manipulate things differently any way it
likes, AS LONG AS IT PRESERVES THE SEMANTICS.  The semantics of
m:f(X1) and M:f(X1) do NOT differ in any way that entitles a compiler
to give them translations with incompatible effects.

	and if only the module name in some external calls 
	need to be replaced by a compiler, this is possible and easy.
	
That would CHANGE THE SEMANTICS of existing code, which my proposal
does not do.  Making m:f(X) and (M=m, m:f(X)) do different things
would be downright evil.

	Just take a look at lib/compiler/src/genop.tab in the OTP sources 
	for the list of beam opcodes:
	- local calls are compiled into call/2 ops;
	- external calls are compiled into call_ext/2 ops;
	- implicit apply calls are compiled into normal calls to apply.
	
They shouldn't be.  As noted above, it is possible to do MUCH better.
(Actually, section 3.2 of the Efficiency Guide says that "The compiler
will now optimise this [implicit apply] syntax giving it better
performance than apply/3", which cannot be true if implicit applies
are just "compiled into normal calls to apply".  Something is wrong here.)

HOW it is done is not in fact relevant.  The point is that whatever the
compiler does (and it can make local calls work by sending e-mail if it
wants to) it has to get the SEMANTICS right, which according to all the
Erlang documentation (including the specification "books") I've ever
seen means that "implicit apply" and apply and external call all have
the SAME semantics.

	So yes, I claim it once again: replacing the module name in 
	external calls is easy at compile time. We need only to 
	transform the parameters of the call_ext ops.
	
In short, you are proposing an INCOMPATIBLE CHANGE to the semantics
of the language, for no other reason than that it's easy.  Silly me,
I thought you had in mind an implementation which *only* made module
names context-sensitive but otherwise preserved the semantics of the
language.  Well, if we don't have to preserve the semantics of the
language, any change can be as easy as we want.

	With the -require construct I propose a solution in which 
	- There are modules, just like we had before, and only modules.

No, modules are no longer "just like we had before".
You've CHANGED THE SEMANTICS of module names in two different ways.

	- File names never appear in source files, only module names.

We both have that.

	- A separate configuration language says how to map logical 
	module names in the scope of client modules, and actual module 
	names. 

We both have that.

	- File names never appear in configuration files, only module 
	names.

That can't possibly work.  File names have to appear SOMEWHERE.
One of the reasons for having a configuration language is to cope
with the vagaries of file systems.  Module names are case sensitive.
In several M$ajor file systems, file names are not.  So mapping from
module names to file names is not trivial.  Module names do not
contain directory information, but modules do not all live in the
same directory.

	- By having more than one configuration file, you can have 
	several different configurations for a set of modules. 

We both have that.

	- The interface between a module and another module is already 
	explicit (it is the list of functions exported by every module).
	
No, only HALF of the interface is there, and only HALF of the information
you actually need in the interface.

In general, you need to know not just "what do I offer to the whole world"
but "what do I offer to which particular modules".  Think about the way
Eiffel lets you export features to specified classes.  Think about the
fact that in Erlang, when you use a behaviour, you export some functions
*TO THE BEHAVIOUR*, which are never ever supposed to be called by anything
*BUT* the behaviour, BUT YOU CAN'T SAY THAT.
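Any gen_server module shows the problem; the counter module below is
my own minimal example:

```erlang
-module(counter).
-behaviour(gen_server).

%% The real public API:
-export([start_link/0, increment/0]).
%% Callbacks, meant for gen_server alone -- but exported to the whole
%% world, because "export to the behaviour only" cannot be expressed:
-export([init/1, handle_call/3, handle_cast/2]).

start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
increment()  -> gen_server:call(?MODULE, increment).

init([])                         -> {ok, 0}.
handle_call(increment, _From, N) -> {reply, N + 1, N + 1}.
handle_cast(_Msg, N)             -> {noreply, N}.
```

Nothing stops an unrelated module from calling counter:init([]) or
counter:handle_call/3 directly, even though they were never meant for
anyone but gen_server.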

My -use_child directive has TWO function lists because every
relationship has two sides, and because it is important to distinguish
between "I am willing to offer this function to that child" and
"this child currently needs that function from its parent".
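In concrete syntax that might look like this; I stress that the
rendering below is invented here for illustration, and the draft may
express it quite differently:

```erlang
%% Hypothetical rendering of -use_child; none of this syntax exists.
%% First list:  functions the parent is willing to offer the child.
%% Second list: functions the child currently needs from the parent.
-use_child(sorting_child,
           [merge3/3, split/2],   %% offered to the child
           [merge3/3]).           %% actually needed by the child
```

Keeping both lists lets each side of the relationship be checked: the
parent cannot silently withdraw something the child still needs, and
the child cannot quietly start using something it was never offered.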

	> 	As I understand it, the main use of -ifdef, etc. is to select
	> 	alternative architectures. Using an ADL, you would simply
	> have to write several specs.
	>
	> Yes, but that could mean repeating hundreds of lines.
	> I'm thinking in terms of something like C/MESA.
	
	Hundreds of lines???
	
Yes.  I'm thinking of configuration files that cover *applications*,
not single modules.

	Please explain what problems such hierarchical namespaces solve?

I've done this above.

	> 	I think that it is a bad idea to try to specify a single
	> 	construct that would replace all actual usages of -include
	> and -ifdef: either the new construct will be as bad as what it
	> replaces, or it will not cover all usages.
	>
	> This seems to me to be a matter of personal taste; I see no
	> reason why the claim should be true.
	
	OK, it is mainly a matter of personal taste.
	
There are two statements:
    (1) "It is a bad idea ... single construct ... -ifdef."
    (2) "Either the new construct will be as bad ... all usages."

The first statement appears to express a matter of taste; you don't
like the idea.  The second statement is the grounds for the first.
Maybe it is true.  The second statement, however, is not a matter
of personal taste.  As I said, that one _could_ be true, and I would
like to see some evidence for it.


	> The Erlang/OTP support people have already made it plain that
	> they will not tolerate any replacement for -record that
	> *requires* cross-module inlining.
	
	Can you please point me to such a discussion on the Erlang 
	mailing-list or elsewhere? 
	
The thread about "when will abstract patterns be implemented" within
the last month or possibly two.

	And who has talked about cross-module inlining in this 
	discussion?!
	
I have.  For heaven's sake, it's half of what my proposal is ABOUT!
Abstract patterns were rejected by the OTP implementors (in the thread
mentioned above) on the grounds that they required cross-module inlining.
(As the original paper made clear, they DON'T, but that's another matter.)

The fact that the current Erlang system doesn't do cross-module inlining
is one of the major reasons for the preprocessor.  If you want to get
rid of the preprocessor, you have to provide some means whereby constants
&c can be compiled efficiently without requiring general cross-module
inlining (which amongst other things messes up naive hot-loading).
People who use records want them to be as efficient as any other kind
of pattern matching, so that there is no performance penalty for writing
readable code.  Since there is no cross-module inlining, they don't WANT
records to be imported from other modules.  (If they were going to put
up with that, they might as well use abstract patterns.)  So there has
to be something that shared record definitions can go in which is not a
full module and so that record definitions CAN be fully presented to the
compiler.

The current answer is that the something in question is a .hrl file.
My answer is that it's an integrated child module.
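Sketched in invented syntax (the actual draft may differ), the record
case might look like:

```erlang
%% Purely hypothetical syntax, invented here to illustrate the idea.
-module(db).

-begin_child(db_records).       %% an in-line ("integrated") child module
-record(user, {id, name}).      %% definition fully visible to the compiler
-end_child(db_records).

%% The parent can now match #user{id = Id} with full efficiency,
%% with no .hrl file and no cross-module inlining.
```

Because the child is compiled as part of the same compilation unit,
the compiler sees the whole record definition without ever reaching
into another module.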

	-module(mod1).
	-record(record1, {field1, field2}).
	...
	
	-module(mod2).
	-import_records(mod1, [record1]).
	...
	
	Would be strictly equivalent (and can be easily transformed 
	into):
	
	-module(mod1).
	-record(record1, {field1, field2}).
	% Generated automatically by the compiler:
	-export([record_info/2]).
	record_info(fields, record1) -> [field1, field2];
	record_info(size, record1) -> 2.
	...
	
	-module(mod2).
	% This -record is generated from the result of calls to
	% mod1:record_info/2 by the compiler:
	-record(record1, {field1, field2}).

WHOOPS!  You are doing cross-module inlining!  You just made an
incompatible change to the semantics of the language!  (Haven't
we seen that before?)  You are *supposed* to be able to load a
new version of mod1 WITHOUT recompiling mod2.  Now you can't.

	% Since importing a record def is just like declaring
	% it locally, it would be exported from that module also:
	-export([record_info/2]).
	record_info(fields, record1) -> [field1, field2];
	record_info(size, record1) -> 2.
	...
	
	Please point me where you see inlining in that.
	
Great balls of fire.  Isn't it obvious?   You are talking about the
compiler processing mod2 WITH FULL KNOWLEDGE OF A DEFINITION IN mod1.
That is precisely cross-module inlining.

Now, I happen to believe that cross-module inlining is a good thing.
(And I know it has been tried in an Erlang context.)  I also believe
that hot loading is a good thing.  But I also know that this kind of
thing has been done before.  Robert Dewar's SPITBOL implementation of
SNOBOL coped with a language (SNOBOL) which allowed new code to be
created at run time (even creating new data types!) and could undo
code generation which had made assumptions that were no longer true.
That was in the 70s.  There are some Smalltalk compilers which can
do some inlining, and recompile when the original definitions change.
That kind of thing was also done for Self.  It requires excellent
dependency tracking, and it's not at all easy.

So I think that refusing to do cross-module inlining (yet) is a good
engineering decision by the OTP team.

By the way, anyone who has looked at my proposal recently should
pretend they never saw the configuration language draft.  It is MUCH
too complicated.  I have a whole new draft of that, but haven't had
time today to type it up.  For one thing, it never occurred to me that
people were trying to use case sensitive module names, but they are.

In my proposal, I am taking a very strong stand on semantics.
NO CHANGE to the semantics of existing Erlang code.  NONE.

One consequence of that is that things which seem very easy on the
assumption that you are allowed to make such changes may in fact be
very hard.  Another consequence is that the design one ends up with
may not be as nice, in many ways, as the design one would have
produced for a new language.

I think many people trying to get real work done with Erlang would say
"if you want me to put up with a change in the meaning of module names
just so that you can get rid of the preprocessor, no thanks, I'll stick
with the preprocessor".



