[erlang-questions] Application granularity (was: Parallel Shootout & a style question)
Mats Cronqvist
mats.cronqvist@REDACTED
Fri Sep 5 10:02:15 CEST 2008
Jay Nelson wrote:
> ... in response to a flurry of messages about automatically
> parallelizing list comprehensions ...
>
> If I go through all the code I have written and count the number of
> list comprehensions relative to the total amount of code, there are
> not that many occurrences to worry about. The length of each
> comprehension's data set is not that great in typical code, and
> unless it is a very data parallel algorithm such as matrix
> manipulation, there is little to be gained overall. Mats' toss-off
> figure of 10% would overestimate the likely benefit in the code I
> typically have to implement.
>
agreed. parallelizing list comprehensions will accomplish nothing in
phone switch/web server-type applications. but that's not what i'm
talking about either. to quote myself:
"in an ideal world, all the basic OTP libs should be rewritten to be
parallel."
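for concreteness, here's a minimal sketch of the kind of primitive such a
rewrite would build on -- a parallel map. the module and function names are
hypothetical (this is not an OTP library function); it spawns one worker
per element and collects results in order. for short lists the
spawn/message overhead easily outweighs any gain, which is part of the
point being argued in this thread:

```erlang
-module(pmap).
-export([pmap/2]).

%% Apply F to each element of List in a separate process; collect the
%% results in the original order by tagging each reply with a unique ref.
pmap(F, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(X)} end),
                Ref
            end || X <- List],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```

e.g. pmap:pmap(fun(X) -> X*X end, [1,2,3]) gives [1,4,9], with the three
multiplications free to run on different schedulers.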
{snipped some stuff i more or less agree with}
> I believe the compiler writers and tool builders should focus on
> making it easier to produce more numerous, but smaller processes,
> rather than trying to make the sequential code eke out an additional
> 10% of performance. I want my code to run 1000x faster when I get a
> 1000 core machine. I likely will need 100,000 processes to realize
> that goal.
the problem is that it doesn't really matter how many processes you
have. to make use of a 1,000-core machine you'll need 1,000
*runnable* processes at any one time.
e.g. a large phone switch that handles 10,000 ongoing calls will
typically have only ~10 runnable processes at any instant (given that
you model a call as a process); the rest are idle, waiting for the next
signal. i.e. for such a system it buys you nothing to use more than 10
cores.
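one way to check this on a live node is to sample the scheduler run
queue with the standard erlang:statistics/1 BIF, which reports how many
processes are ready to run but waiting for a scheduler. the module and
function below are a hypothetical sketch, not existing library code:

```erlang
-module(runq).
-export([peak/1]).

%% Sample the run queue N times, 100 ms apart, and return the peak.
%% If the peak rarely exceeds K, cores beyond ~K will mostly sit idle
%% regardless of how many processes exist.
peak(N) ->
    lists:max([begin
                   timer:sleep(100),
                   erlang:statistics(run_queue)
               end || _ <- lists:seq(1, N)]).
```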
mats