[erlang-questions] The quest for the perfect programming language for massive concurrency.
Richard A. O'Keefe
Mon Feb 3 03:08:35 CET 2014
On 31/01/2014, at 11:40 PM, Vlad Dumitrescu wrote:
> Why does automating something have anything to do with that thing being textual or graphical? vim runs in a console, but also as gVim (graphical). The same can be said about emacs and XEmacs.
gVim is still a textual interface.
XEmacs is not "Emacs using X11" but a fork of Emacs;
the latest stable release of XEmacs appears to be 2009.
The fact that emacs uses a GUI does not make the thing that
is being edited "graphical".
> Whether we want it or not, to the human brain images are lower level than text or speech.
If you mean that recognising a line is lower level than
recognising "a line", that's true, but it's hardly relevant.
Have you ever seen Penrose's notation?
A diagram in Penrose's notation
is composed from shapes recognised by our low level visual
processing systems, but the meaning of the diagram is *not*
immediate. It takes rather more training and practice than
I've had to be able to read these things; at my present
stage of understanding I would find a page of text _easier_
to follow.
The first edition of the book that explains how to *use*
Eclipse is bigger than the listing of my emacs-like editor
PLUS the listing of the C compiler I originally used to
write it PLUS the listing of the UNIX kernel it originally
ran on PLUS the manuals.
> We can't read text files directly, we interpret the graphical representation of that text, be it on a console or a window. Some people can handle abstractions in their heads more easily than others. The latter category will need help, possibly in the form of tools to visualize the code (or whatever).
Diagrams are *also* abstractions. In fact they are even *more*
abstract than text.
> In my opinion, what a programming environment brings to the table is multiple ways to look at the code, to quickly see what is going on.
I think Joe agrees with you, except that Joe places a heavy
premium on a notation
- that has a precise definition
- with explicit and straightforward semantics
- that he can understand.
> colleague or two to look at it.
> Taking a TeX example, wouldn't you find it helpful to have a window alongside your editor that show in real time how the document is rendered, without the need to run "tex mydoc.tex | pdfview&" (or whatever) yourself once in a while?
Actually, we _have_ that. It's called TeXShop.
> Or in Erlang (and doing some wishful thinking), wouldn't it be useful if one could actually _see_ in real time the network of processes in a node, how they are linked/monitoring each other, how they communicate, which ones are growing large or seem deadlocked, zoom out for a bird-eye view or in for details?
Let's take a form of diagram that is widely agreed
to be straightforward. A box represents a class;
an arrow from X to Y represents "X inherits directly
from Y".
I have a Smalltalk system of my own. When you compile
a program, it spits out a "map" file, which includes a
list of classes at the top using indentation to show
inheritance. This can be turned automatically into a ".dot"
file for display using GraphViz or any of several other
programs that read ".dot" files.
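The conversion from an indented class list to a ".dot" file is mechanical. Here is a minimal sketch of the idea; the input format (two spaces of indentation per inheritance level) and the class names are assumptions for illustration, not my system's actual "map" output:

```python
# Turn an indentation-based class list (two spaces per level,
# a format assumed here for illustration) into GraphViz ".dot" text.
def map_to_dot(lines):
    out = ["digraph classes {"]
    stack = []  # (depth, class name) of the current chain of ancestors
    for line in lines:
        name = line.strip()
        if not name:
            continue
        depth = (len(line) - len(line.lstrip())) // 2
        # Pop ancestors that are not shallower than this class.
        while stack and stack[-1][0] >= depth:
            stack.pop()
        if stack:  # edge: subclass -> direct superclass
            out.append('  "%s" -> "%s";' % (name, stack[-1][1]))
        stack.append((depth, name))
    out.append("}")
    return "\n".join(out)

print(map_to_dot([
    "Object",
    "  Collection",
    "    Set",
    "    Bag",
    "  Magnitude",
]))
```

Feeding the result to "dot -Tpdf" (or any other program that reads ".dot" files) then gives the diagram, for whatever that turns out to be worth at 800 classes.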
The one that's actually _useful_ is the text file, because
there doesn't seem to be any good way to display nearly
800 classes in one diagram. Heck, the collection hierarchy
alone has 176 classes; try getting just _that_ in one display.
One of the other Smalltalk systems I use has a "show the
hierarchy of the class I'm browsing" button that actually
generates UML. It is a pain in the posterior, because you
can't actually *see* very much, and that's even with your
attention limited to something with only a handful of
ancestors and/or descendants.
There is a beautiful little paper by Alan Blackwell,
"Correction: a picture is worth 84.1 words".
The references of that paper are pretty good.
Now, visualising the process structure would be nice, except
that someone *did* this using UbiGraph and the result was
as dismaying as it was cool. Erlang lets you have *lots* of
processes, and that's just what doesn't suit a diagram.
> As a user, one shouldn't need to understand how the environment works (you think Eclipse is complex, I think Emacs is; it's a matter of what we are accustomed to).
We can measure the complexity of an environment in itself
without reference to what we are accustomed to.
The text editor I normally use takes less than 15 000 raw
lines of C (less than 8 000 SLOC); executables and help
files come to 173 kB on disc.
That _has_ to be less complex than Emacs.app, which is
161 MB on disc.
And that _has_ to be less complex than Eclipse for C/C++
where the smallest download seems to be 141 MB compressed.
And the really annoying thing is that the actual text
editing support in Eclipse is far more limited than that
in the 173 kB program!
> Of course, this changes when I need to do something that the environment doesn't support. But even so, there is always the fallback of doing it as if no IDE was available, so I don't think that having an IDE can make things worse.
In one sense, what makes things worse is indeed not having
an IDE, but _having_ to have an IDE.
In another sense, using an IDE can indeed result in worse
code. It's actually quite easy to write a component correctly.
The problem is interfaces. Nancy Leveson has a very nice
little avionics example (FLAPS-EXTENDED) where the machine-
checked interface remains exactly the same but the semantics
on one side changes. The only way I know to deal with this
is to frequently put *both* some clients *and* some suppliers
of an interface on screen *at the same time* so that I can
review them. An IDE _can_ do this, and in fact Smalltalk
systems are especially good at that. But the more screen
space the IDE takes up for _being_ an IDE (in Smalltalk,
about 50%; in Eclipse, it's worse) the harder it is to actually
do that review.
> Knuth could do this amazing thing with TeX (practically no bugs) for all the reasons you stated, but also because he set very strict boundaries for what TeX is and can do. Basically, he not only froze the specifications, he set them in stone. If that would have been enough, would there have been a need for XeTeX, pdfTeX, LaTeX and other tools that extend and improve the basic TeX?
Yes. And those things were only made *possible* by the existence of
the stable core.
When Knuth wrote TeX, PDF did not yet exist. In fact, Postscript
did not yet exist. When Knuth wrote TeX, Unicode did not yet exist.
LaTeX is basically a set of macros sitting on top of TeX that depends
utterly on TeX underneath, and it's clearly not "needed". (ConTeXt
http://en.wikipedia.org/wiki/ConTeXt would be LaTeX's principal rival.)
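To make the "macros sitting on top of TeX" point concrete, here is a tiny plain-TeX sketch; the macro name \demosection is invented for illustration, but this is the kind of thing LaTeX's sectioning commands are ultimately built from:

```latex
% Plain TeX has no \section; layers like LaTeX define such
% commands as macros over the primitive engine.
% \demosection is a made-up name sketching the idea:
\def\demosection#1{\vskip 1em plus 2pt
  \noindent{\bf #1}\par\nobreak\smallskip}

\demosection{Introduction}
The same TeX engine underneath typesets this paragraph either way.
\bye
```

Remove the stable engine underneath and the macro layer is useless, which is the whole point.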
This is rather like saying "if having a C++ standard was a good idea,
why would TBB or MCSTL exist?" And the answer is that TBB and MCSTL
wouldn't be any _use_ if there weren't a stable C++ underneath.
> Would it have been possible to keep the bug levels as low if these extensions and improvements had been part of the core TeX?
You are right: that is an excellent argument for keeping a small
stable core. But weren't you just arguing _against_ that?
For some reason text interfaces seem to be better specified and
more stable than GUI interfaces. With every release of Word I
have to relearn the interface, while I can still use LaTeX
documents I wrote in 1984.
From my own experience of the TeX family, it's a lot easier to get
an everyday understanding of HTML+CSS than of TeX, but every
time I've needed something less quotidian, I have found the
*documentation* of the TeX family to be better than the *documentation*
of, say, CSS. (For example, when I read the book about CSS by its
designers, I was frustrated to find no more semantics than was in
the W3C specifications, which is to say, not enough to let me
figure out how to get what I wanted.)