[erlang-questions] Why do we need modules at all?

Tim Watson watson.timothy@REDACTED
Tue May 24 15:36:36 CEST 2011


> Yes - how this would work I don't know - my Haskell friends say that
> searching for
> function by type signatures is good.

I've only used OCaml commercially, but in what little Haskell I've
played with, this (hoogle and hackage combined with cabal) is a very
useful feature for finding and using stuff.

>
> What do I do today when I suspect that somebody has written some code ?
>
>
>     1) will it take me < ten minutes to write the code?
>         yes - write it
>     2) google or ask a friend
>     3) this gets me to git or somewhere
>     4) search
>     5) cut-and-paste
>
> Not a good method - and this is *best practice*

Not quite. At point (3) you probably get to github, bitbucket or some
such. At that point, you *may* be able to reuse the library and/or
application code in your project, in which case you install it with
epm/sutro/agner or pull it in as a dependency using rebar. Only when
the code is hidden away in the module and not reusable do you have to
copy-paste - I think it's this (latter) case that you're not happy
with and I agree it's poor show.

By way of example, I'm working on a node/cluster monitoring tool
(primarily a web application with websocket notifications). I built
the library code (for monitoring) by re-using the eper (performance
monitoring) library and a couple of other things (zero-conf for
network discovery and whatnot). I built the web application on
misultin for its websocket support. I also have some common (to the
project) library code which uses the esl/parse_trans library from
github to support better use of records and things like that. For the
database application, I used the esl/setup library from github to
separate the mnesia database/schema setup from the general purpose use
cases and to generate (run once) setup scripts and the like. None of
this is copy and paste re-use.

> Actually for development it would be great to say
>
>     -import("http://a.b.c/modname", [foo/2]).
>
> this would just make an rpc to foo/2 on the remote host and save all the
> pain of locally installing
> the necessary code on my machine.

I think it would be worth distinguishing between two disparate concerns here.

1. Where does the code (binary or some other representation of the AST) come from?
2. Where does the/any (runtime) state reside?

I don't always want to RPC just to run a library routine, especially
if that call is happening frequently in a tight "loop" and the code
needs to run fast. Much as I love Erlang's location transparency, one
cannot just assume that the network isn't there.
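To make the worry concrete: if the hypothetical -import above simply
desugared into an RPC per call, it would amount to something like the
following sketch (the node name 'modname@a.b.c' and module names are
invented for illustration, not a real API):

```erlang
%% Hypothetical sketch only: what
%%   -import("http://a.b.c/modname", [foo/2]).
%% might expand to if every imported call became a remote procedure call.
-module(remote_import_sketch).
-export([foo/2]).

foo(A, B) ->
    %% each invocation pays a full network round trip (and fails if the
    %% remote node is unreachable), via the standard rpc module
    rpc:call('modname@a.b.c', modname, foo, [A, B]).
```

That's fine for the occasional call, but painful in a tight loop - which
is why I'd rather fetch the code once and run it locally.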

Things I think are potentially awesome here are:

1. Importing from a URI
2. Having the binary code in a repository that is searchable, versioned, etc

In fact this is hardly an unfamiliar model - consider couchdb and/or
riak. When you define a map-reduce function as say javascript, you
distribute/upload this function definition to the server for later
use. I think my two major concerns (that are by no means
insurmountable) are:

1. wanting to be able to work when I'm not connected to the code server
2. wanting to be able to define the import based on more than just the name
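For comparison, the couchdb model mentioned above looks roughly like
this in Erlang - a hedged sketch, with the URL, database and view names
all invented, using inets/httpc to upload a javascript map function
inside a design document for later server-side use:

```erlang
%% Hedged sketch: shipping a javascript map function to couchdb so the
%% server can run it later. All names and URLs here are illustrative.
-module(couch_upload_sketch).
-export([upload_map_fun/0]).

upload_map_fun() ->
    inets:start(),
    Url = "http://localhost:5984/mydb/_design/example",
    %% adjacent string literals concatenate at compile time
    Body = "{\"views\":{\"by_type\":{\"map\":"
           "\"function(doc){ emit(doc.type, 1); }\"}}}",
    httpc:request(put, {Url, [], "application/json", Body}, [], []).
```

The function definition lives on the server afterwards; the client only
needs connectivity at upload time, not on every use.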

Point (2) is telling. In your example, you imported from
http://a.b.c/modname - you still haven't gotten rid of the module. I
know it was a flippant example and beside the point. The question is, can
you define the search in such a way as to determine exactly where the
dispatch must go? The Haskell folks say searching by type is immensely
useful, and they're right. But they don't compile based on this - they
find what they're looking for and then reference it (explicitly) by
name.


