[erlang-questions] The Beauty of Erlang Syntax

Richard O'Keefe ok@REDACTED
Thu Feb 26 01:59:46 CET 2009

On 25 Feb 2009, at 11:22 pm, Michael T. Richter wrote:
> You're begging the question, Joe.  WHY did it gain this temporary  
> victory?

Surely it's incredibly simple?

Do you remember when "Eight Megabytes And Continually Swapping"
was a sneer at Emacs?  Heck, I remember when "Lisp programs
take a minimum of 1MB of memory" was considered a complete
and final answer against ever using Lisp for anything anywhere.

The machine I happily used as an undergraduate was a *mainframe*
with a cycle time of 2.4 microseconds and a theoretical maximum
of 6MB of ferrite core memory.  The laptop the department lets
me use has 2 processors running at 2.53 GHz, so I have >12,000
times as many cycles per second available.  It has 4GB of memory,
or roughly 3,000 times as much memory.  It has, in a 2.5 inch
form factor, about a thousand times as much disc as used to fill
half a large room.

And that old machine was a GIANT compared with the machines that
languages like Fortran and COBOL were devised for.  The great old
machine, the IBM 650, had room for 4000 words of memory on its
drum.  (I think some models had 2000 words.)  People didn't just
write compilers for it, but on it.  The computing club at my old
university had a business computer that had 15 kwords of memory,
and that ran a COBOL compiler.

The old imperative languages like Fortran and COBOL and PL/I
and Bliss were devised for machines that were unbelievably tiny
and excruciatingly slow by today's standards.  If you didn't
keep reasonably tight control on memory, you'd get nothing done.

It wasn't just functional languages that were affected.
C++ was devised by Bjarne Stroustrup because he had used
Simula 67 and liked it.  Simula was *wonderful* for its day.
*BUT* Simula depended completely on garbage collection.  It
didn't even use a stack for procedure calls, because it
supported quasi-parallel programming (the same thing you get
using Java on a uniprocessor).  It was type safe, it was THE
object-oriented language, it had a large library (or at least
DEC-10 Simula, the version I used, did).  BUT it used
garbage collection.

And for that reason, practically everyone turned away.
"If I don't know what's happening to memory, I can't trust
the program."  "Simula programs run 20% slower than Algol
programs, and I just can't afford that."

It's that simple.  Nobody objected to automatic memory management
in the shell, or in AWK, or in REXX.  They were "scripting"
languages, not real programming languages.  Nobody expected them
to be efficient.  But for *real* programming, oh no, garbage
collection was too slow, too space hungry, too hard, don't want
to go there.

What were the two major changes that Stroustrup made to Simula
to create the success of C++?

(1) His users were familiar with C, not with Algol.
     So he gave his OO language C syntax.
(2) He removed garbage collection, giving programmers
     the control over memory they thought they needed.

Everything else about C++ came later.  It was the belief that
they could have the *efficiency* of C with the expressiveness
of Simula that made C++ popular initially.  Largely an
unexamined belief:  if there were ever any real performance
comparisons between equivalent Simula and C++ programs I've
not seen them.

What made Java so successful?

By the time Java came along, computers were fast enough and had
enough memory that the cost of garbage collection didn't bother
a new generation of programmers.  And there had been a lot more
work on garbage collection algorithms.

By the time C# came along, Java had not so much convinced a
generation of programmers that garbage collection could be
afforded after all as caused them to forget, most of the time,
that there *was* such a thing as garbage collection.

Let's be honest about this: functional programming languages
really genuinely *do* touch memory a lot more than imperative
ones.  As Joe pointed out, imperative programmers can stuff
information into global variables and pick it up, while
functional programmers have to move it around to get from one
place to another.  (For a long time now I've thought of this
as the "speed of light" limitation, with imperative languages
imagining that FTL is not only possible but free.  With
distributed programming, the absence of real FTL becomes painfully
obvious.)  Let's also recognise that for a long time now memory
has been getting slower and slower compared with the speed of
the CPU.  Alan Mycroft had a paper studying GHC performance on
a 2003-modern machine where an L2 cache miss cost 206 cycles.
DDR memory helps a lot, IF you are marching along arrays, NOT
if you are skipping around trees and lists.

So the best performance you can get out of a functional language
really *is* less than the best performance you can get out of an
imperative language.  Some compilers, like the GHC and YHC
compilers, can do *awesome* optimisations, but they are always
starting from a much bigger handicap.

Of course, "we are writing in a language that permits extremely
efficient computation" and "we are writing extremely efficient
code" are different propositions.  The efficiency of most code
is an unexamined belief rather than an established fact.  And
a lot of code doesn't have to be all that fast.  I've been shown
an impressive speedup in a Haskell program by having it use
zlib: what the program was *really* doing most of the time was
waiting for the disc.

To this day, functional programmers are willingly trading
increased memory size and increased execution time for
decreased development time and increased reliability.
This makes a lot of sense when you have memory and time to
spare.  It makes a lot of sense when development is expected
to be very hard.  On yesteryear's machines, for yesteryear's
problems, it made rather less sense.
