CPU/Hardware optimized for Erlang

Thomas Lindgren <>
Thu Jul 28 11:37:14 CEST 2005

--- Ulf Wiger <> wrote:

> Den 2005-07-27 13:41:21 skrev Thomas Lindgren <>:
> > Well, ECOMP had 30x speedup compared to JAM, not BEAM.
> > On similar programs at that time, HIPE had a 10-20x
> > speedup over JAM (when compiling JAM bytecodes to
> > Sparc asm), if memory serves. So, assuming no further
> > compiler improvements, ECOMP would have to remain
> > within a factor of 3 or so of the desktop systems,
> > to be competitive in speed.
> AFAIK, in 2000, AXD 301 was using BEAM (we never did
> use JAM in the final product), and the most relevant
> benchmark was compiling a call control prototype to
> ECOMP and comparing it to the same code running on
> the AXD 301 target platform with 450 MHz UltraSPARC
> 2i processors with Solaris and BEAM.

I was thinking of a predecessor system that Robert
Tjarnstrom reported on a couple of years before that,
I believe. At that point, the projected performance
was a 30x speedup over JAM, unless I misremember
entirely.

Slide 14 in the ECOMP presentation is a bit unclear:
did you get 30x speedup on the call control software
vs the BEAM/UltraSparc setup above? Or was this on
other benchmarks vs the BEAM/UltraSparc? Or vs some
other baseline?

Also, does the line "measured per use of clock cycles"
on slide 14 mean that the 30x speedup refers to cycle
counts, i.e., that ECOMP used 1/30 the number of clock
cycles that the UltraSparc used for the benchmark(s)?
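To pin down what I'm asking, here is a small Python sketch of the difference between a cycle-count speedup and a wall-clock speedup. All numbers except the UltraSPARC's 450 MHz clock are invented for illustration; in particular the ECOMP cycle count and clock rate are pure assumptions, not figures from the presentation:

```python
# Hypothetical cycle counts for one benchmark run (made-up numbers).
ultrasparc_cycles = 9_000_000_000   # cycles used on the 450 MHz UltraSPARC 2i
ecomp_cycles = 300_000_000          # hypothetical cycles used on ECOMP

# A "measured per use of clock cycles" comparison divides raw cycle counts.
cycle_speedup = ultrasparc_cycles / ecomp_cycles   # 30.0

# Wall-clock speedup additionally depends on each chip's clock frequency.
ultrasparc_hz = 450e6               # stated in the thread
ecomp_hz = 50e6                     # hypothetical ECOMP clock (assumption)

ultrasparc_seconds = ultrasparc_cycles / ultrasparc_hz   # 20.0 s
ecomp_seconds = ecomp_cycles / ecomp_hz                  # 6.0 s
wallclock_speedup = ultrasparc_seconds / ecomp_seconds   # ~3.33

print(cycle_speedup, wallclock_speedup)
```

With those (invented) clock rates, a 30x cycle-count advantage shrinks to roughly 3.3x in wall-clock time, which is exactly why the baseline and the metric on slide 14 matter.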

That aside, and most importantly: did you decide to go
ahead with ECOMP?

> > So, I seriously doubt the viability of rewriting
> > some emulator or interpreter in silicon, which is
> > the closest I can think of "executing the language
> > directly". Every few years, someone comes up with
> > a new BEAM but you will then be stuck with JAM.
> But this is not what ECOMP did. The ECOMP compiler
> prototype - written by Peter Lundell on his spare
> time, while working as our System Project Manager

Well done.

> - hooked into the OTP compiler framework, about at
> the same level as beam_asm, I think. As such, it
> ought to have been fairly future-proof.

Let me expand my comment a bit, then: if you model
your instruction set too closely on the language
(which was what I was arguing against), you will not
be able to take advantage of future improvements in
how things are represented and done when implementing
the language. In effect, by dictating the policy in
hardware, you are discounting the effects of
improvements in compiler- and runtime-system
intelligence (to a greater or lesser extent, depending
on your choices).

Thus, as an analogy (only): if you implement your
hardware based on, say, the JAM instruction set,
subsequent radical innovations such as BEAM are more
difficult (at the very least) to exploit.

Hence my diatribe against too high-level hardware.


