CPU/Hardware optimized for Erlang

Thomas Lindgren thomasl_erlang@REDACTED
Thu Jul 28 12:39:20 CEST 2005



--- "Richard A. O'Keefe" <ok@REDACTED> wrote:

> Thomas Lindgren <thomasl_erlang@REDACTED> wrote
>
> On the contrary, the ECOMP slides mention low power
> as an issue more than once.

Well, I was thinking of a somewhat earlier event, and
at that time, power wasn't a big argument yet. See my
reply to Ulf. My apologies to all.

> Note that the ARM/Jazelle stuff isn't *exactly* a
> Java chip.

A clever way of doing it, I'll say. Having competitive
ARM/Thumb support in the first place is likely vital
in this market, though.

> >["what has changed?"]
> 	
> What has changed for ARM at any rate is "Mobile
> phones". 

Yes. A new market where previous offerings didn't fit,
followed by a snowball effect.

> Now the ECOMP work was about designing a *core* so
> that you could easily
> drop an "Erlang machine" into any ASIC.  That was a
> plausible niche for
> an Erlang processor:  the area HAD to be small and
> the memory HAD to be
> small and the processor's share of the power budget
> couldn't be too large.
> What was missing was a large enough market for
> Erlang-controlled ASICs.

In principle you could build an SoC with one or more
Erlang processors on it. Imagine the Cell with ECOMPs
instead of its vector processors ... Wouldn't that be
neat, hardware lovers? :-)

[Previous caveats still apply. But IBM would build it
if you paid them.]

> Of course, these days,
> as we have been
> so eloquently reminded in this thread, RISC has been
> "trounced" by
> the well known BISC (Bizarre ...) the Pentium, 

I'd say the first shudder was when Yale Patt et al.
showed that one could translate VAX instructions into
RISC-style micro-ops on the fly ("Run-time generation
of HPS microinstructions from a VAX instruction
stream", MICRO-19, 1986). Roughly ten years later,
Robert Colwell and Intel's P6 showed that it could
also be done well in practice. CISC was back in the
game (reincarnated as a superscalar RISC with a
somewhat complex front end ...).
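
(For the hardware-curious: the trick is to "crack"
each complex instruction into a handful of RISC-like
micro-ops as it is decoded. A toy sketch of the idea
in Erlang, with a made-up instruction format; real
decoders do this in hardware, of course:

-module(uop).
-export([crack/1]).

%% Crack CISC-style instructions into RISC-like
%% micro-ops. A reg-reg add is already RISC-like:
%% one micro-op.
crack({add, {reg, D}, {reg, S}}) ->
    [{add, D, D, S}];
%% Memory source operand: split into load + add.
crack({add, {reg, D}, {mem, Addr}}) ->
    [{load, t0, Addr},
     {add, D, D, t0}];
%% Read-modify-write on memory: load + add + store.
crack({add, {mem, Addr}, {reg, S}}) ->
    [{load, t0, Addr},
     {add, t0, t0, S},
     {store, Addr, t0}].

So uop:crack({add, {mem, 16#100}, {reg, r1}}) comes
out as three micro-ops that a superscalar RISC core
can schedule freely.)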

> and the Itanium 2
> architecture that HP and Intel dreamed up is
> seriously weird.

I actually liked the EPIC principles, since they
addressed many of the issues I wanted to get at when
exploiting ILP in a compiler ... though, alas, it
seems Intel/HP made the end product too complex as
well.

And to be honest, it doesn't seem quite competitive
with x86 either :-) The original intention was, I
believe, to have a simple decoder and a simple
in-order implementation, which would lead to fast
clocking and high performance. The compiler would
take care of the nasty stuff.

In practice, things have turned out differently,
partly because people are by now _really_ good at
building x86s but not EPICs. In the same vein, people
aren't as good at building EPIC compilers either (and
some problems, such as memory latency, are hard for a
compiler to tackle in the first place).
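
(To make the EPIC idea concrete: the compiler
statically packs independent operations into
fixed-width bundles, so the hardware can issue a
whole bundle without doing its own dependence
checking. Here is a toy greedy bundler in Erlang; the
op format and the 3-wide bundles are just
illustrative, and the real Itanium templates, stop
bits, predication and speculation are all omitted.
Ops are {Name, Dst, Srcs}:

-module(bundle).
-export([pack/1]).

%% Greedily pack a sequential op stream into bundles
%% of at most three mutually independent ops.
pack([]) -> [];
pack([Op | Rest]) -> pack(Rest, [Op], []).

pack([], Cur, Acc) ->
    lists:reverse([lists:reverse(Cur) | Acc]);
pack([Op | Rest], Cur, Acc) ->
    case length(Cur) < 3 andalso independent(Op, Cur) of
        true  -> pack(Rest, [Op | Cur], Acc);
        false -> pack(Rest, [Op], [lists:reverse(Cur) | Acc])
    end.

%% Conservatively require no true, anti or output
%% dependence between the new op and anything already
%% in the bundle.
independent({_, Dst, Srcs}, Bundle) ->
    lists:all(fun({_, D, Ss}) ->
                  not lists:member(D, Srcs) andalso
                  not lists:member(Dst, Ss) andalso
                  Dst =/= D
              end, Bundle).

E.g. bundle:pack([{add,r1,[r2,r3]}, {mul,r4,[r5,r6]},
{sub,r7,[r1,r4]}]) puts the add and mul in one bundle
and the sub in the next. The part the compiler cannot
see at this point, such as whether a load will hit
the cache, is precisely what makes the real job so
hard.)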

>> [compiler technology made the difference]
>
> But this too depended on the change from ferrite
> core memory to semiconductor
> memory and the general improvement in CPU speed:  on
> slow machines with
> small memories you couldn't _afford_ fancy
> compilers.  The old machines with
> instructions tuned to COBOL, for example.  15
> kilowords was a large memory!

A natural but painful mistake to make. (Speaking from
the safety of 20/20 hindsight, I hasten to add.)

The general theme here is probably underestimating
Moore's law. (Or failing to dominate the world within
the timespan Moore's law gives you.)

> But technology
> does change, and there are new kinds of computing
> coming.

Indeed. And I am, of course, fully willing to eat my
words at a future date :-)

Best,
Thomas



		