CPU/Hardware optimized for Erlang

Richard A. O'Keefe <>
Thu Jul 28 04:19:54 CEST 2005


Thomas Lindgren <> wrote about ECOMP:
	At that time, power wasn't really on the map, but
	presumably a similar comparison could be made by
	porting BEAM or HIPE to a suitable incumbent.
	
On the contrary, the ECOMP slides mention low power as an issue
more than once.

	(And maybe there is a niche for Java chips -- we will see.)

Note that the ARM/Jazelle stuff isn't *exactly* a Java chip.
The ARM/Jazelle machines support *three* instruction sets:
    ARM	    a mildly unconventional architecture that's recognisably a RISC
    Thumb   a compressed version of ARM (16-bit instructions instead of
    	    32-bit instructions, but basically only instruction fetch
    	    and decode are different)
    JVM     most of the JVM instructions are executed natively,
	    holding up to 4 top of stack items in ARM registers.
Some of the more complicated JVM instructions trap out to ARM code.
So ARM/Jazelle machines can run Java pretty much directly, quite a bit
faster than any JVM emulator for ARM, AND at lower CPU power and in
less instruction memory than any possible native code translation.
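
Jazelle's trick of keeping the top of the JVM operand stack in registers is
a general interpreter optimisation usually called stack caching.  A minimal
software sketch of the idea, assuming a one-slot cache and invented opcodes
(this is not Jazelle's actual encoding):

```c
/* A tiny stack-machine interpreter that caches the top-of-stack value
 * in a local variable ("tos"), so the common case -- consuming a value
 * that was just pushed -- never touches the memory stack.  Jazelle
 * plays the same game, but with up to four stack slots held in ARM
 * registers.  The opcodes are invented for illustration. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

static int eval(const int *code)
{
    int stack[64];          /* spill area for everything below the cache */
    int *sp = stack;
    int tos = 0;            /* cached top of stack; the garbage value
                               spilled by the very first OP_PUSH is never
                               popped as long as the program is balanced */

    for (;;) {
        switch (*code++) {
        case OP_PUSH: *sp++ = tos; tos = *code++; break; /* spill, then load */
        case OP_ADD:  tos += *--sp; break;  /* one operand is already cached */
        case OP_MUL:  tos *= *--sp; break;
        case OP_HALT: return tos;
        }
    }
}
```

With tos held in a register, OP_ADD costs one memory pop and a register add,
instead of two pops and a push.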

	As someone whose name I can't remember liked to ask in
	situations like this, "what has changed?" Why is it
	going to be different this time?
	
What has changed for ARM at any rate is "Mobile phones".  When Lisp
machines walked the earth (and I still have fond memories of the Xerox
Lisp machines) there weren't any mobile phones.  Now there are mobile
phones with WAP and texting and games and there are people wanting to
run Java on them.  So there is now a large market for machines that can
 - run Java
 - in a fixed amount of memory that isn't as large as one might like
 - with a fixed amount of battery power that is never large enough

Now the ECOMP work was about designing a *core* so that you could easily
drop an "Erlang machine" into any ASIC.  That was a plausible niche for
an Erlang processor:  the area HAD to be small and the memory HAD to be
small and the processor's share of the power budget couldn't be too large.
What was missing was a large enough market for Erlang-controlled ASICs.

	Designing out the middleman sounds like "closing the
	semantic gap", which sets my hackles on end :-)
	Closing the semantic gap was about providing
	high-level machine instructions so that programmers
	could express their intentions more clearly in their
	asm programs. RISC trounced the "semantic gap" theory
	so hard it wasn't even funny. (Well, the funny part
	was hearing sonorous pronouncements in the class room
	one year about how we should overcome the semantic
	gap, and then seeing all that rapidly reduced to dust
	:-)
	
But something important changed.  Memory technology changed.  In the
era of "small semantic gap" machines like the B6700, memory was small
and exceedingly expensive.  When large semiconductor memories came along,
suddenly nobody *minded* all that much if the number of bytes needed
for executable code tripled.  Of course, these days, as we have been
so eloquently reminded in this thread, RISC has been "trounced" by
the well-known BISC (Bizarre Instruction Set Computer), the Pentium,
and the Itanium 2 architecture that HP and Intel dreamed up is
seriously weird.

	The particular point made at that time was that an
	optimizing compiler could do at least as well as fancy
	high-level instructions. (And usually far better.)
	
But this too depended on the change from ferrite-core memory to semiconductor
memory and the general improvement in CPU speed:  on slow machines with
small memories you couldn't _afford_ fancy compilers.  Think of the old
machines with instructions tuned to COBOL, for example, where 15 kilowords
was a large memory!

A quick skim through beam_emu.c (is the BEAM instruction set actually
documented anywhere?) suggests to me that it could be emulated at least
twice as fast on stock hardware, or if not BEAM, then something very like
it.  I don't see any great need for an Erlang machine to replace the current
Erlang system for what the current Erlang system does well.  But technology
does change, and there are new kinds of computing coming.
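
As it happens, the classic way to speed up a bytecode emulator on stock
hardware is direct-threaded dispatch: store label addresses in the loaded
code, so that "decode" is a single indirect jump at the end of each opcode,
which branch predictors handle far better than one big shared switch.  (I
believe beam_emu.c already does something along these lines when built with
GCC.)  A minimal sketch, using the GCC/Clang labels-as-values extension and
three invented opcodes -- this is not BEAM's actual instruction set:

```c
/* Direct-threaded dispatch: the "loaded" code is an array of label
 * addresses and inline operands, so dispatching the next instruction
 * is one indirect jump (goto *).  Requires the GCC/Clang
 * labels-as-values extension.  Opcodes invented for illustration. */
static long run(void)
{
    void *code[] = {
        &&do_lit, (void *)2L,    /* acc  = 2  */
        &&do_add, (void *)40L,   /* acc += 40 */
        &&do_halt
    };
    void **ip = code;
    long acc = 0;

#define NEXT goto *(*ip++)      /* fetch next label address and jump */
    NEXT;                       /* dispatch the first instruction */

do_lit:  acc  = (long)(*ip++); NEXT;
do_add:  acc += (long)(*ip++); NEXT;
do_halt: return acc;
#undef NEXT
}
```

Each opcode body ends in its own indirect branch, so the predictor gets one
(often predictable) branch site per opcode instead of a single unpredictable
one shared by all of them.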

I've recently been reading about electrochemical transistors.
They are large and slow, but it looks to me as though in a couple
of years you'd be able to print an ECOMP-like machine on a piece
of paper the size of a paper-back book page for cents or possibly
fractional cents.  It would NOT be fast.  On the other hand, EC
stuff includes electrochromic (clear/dark blue) and light sensor
stuff.  Yep, we're about to enter the Age of Intelligent Wallpaper!



