# [erlang-questions] Feedback for my first non-trivial Erlang program

zxq9 zxq9@REDACTED
Fri Dec 18 03:21:15 CET 2015

```
On Thursday, 17 December 2015 19:34:02 you wrote:
> Good illustration. Fibonacci to the rescue - for once...
> I feel compelled to point out, however, that the talk about memory
> explosion is not applicable here. In Fibonacci - and, I believe, in the
> original program (though I didn't read it closely), the maximum stack
> depth, and therefore the amount of stack memory required, is only linear in
> n. (Well, with lists and such, it might be quadratic for the OP.) The
> explosion is only timewise.
> The accumulating, tail recursive version runs in constant space, of course.
> Better, but linear would still do.
>
> /Erik

You're right.

The sequential, naive version is indeed linear in space. A senselessly
"parallelized" version grows much larger. Hilariously so.

%%% Stupid, "parallelized" definition. NEVER DO THIS.
fib_para(N, P) ->
    ok = io:format("Calling fib_para(~tp, ~tp)~n", [N, P]),
    fib_p(N, P).

fib_p(0, P) -> P ! {self(), 0};
fib_p(1, P) -> P ! {self(), 1};
fib_p(N, P) ->
    PA = spawn(?MODULE, fib_para, [N - 1, self()]),
    PB = spawn(?MODULE, fib_para, [N - 2, self()]),
    fib_p(P, {PA, none}, {PB, none}).

fib_p(P, {PA, none}, B) -> receive {PA, A} -> fib_p(P, {PA, A}, B) end;
fib_p(P, A, {PB, none}) -> receive {PB, B} -> fib_p(P, A, {PB, B}) end;
fib_p(P, {_, A}, {_, B}) -> P ! {self(), A + B}.

As TOTALLY INSANE as the above example is, it is exactly the kind of thing
I see people actually do from time to time. Search for "parallel
fibonacci" and you'll find countless tutorials in various languages that
demonstrate the ridiculousness above. A *few* of them actually mention how
stupid this is; many don't. I've run into production code (AHHH!) that winds
up making the exact same mistake as this, processizing something instead
of actually parallelizing it.

(For some reason this is especially prevalent in languages, environments or
code where words like "promise" and "future" come up a lot. Always makes me
want to say "You keep using that word. I do not think it means what you
think it means." But I think nobody would get the reference. (>.<)
https://www.youtube.com/watch?v=wujVMIYzYXg )
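For reference, the tail-recursive fib_asc that shows up in the timings below
looks roughly like this -- reconstructed from its call trace, so the actual
clauses in the fibses module may differ a bit:

%%% Accumulating, tail recursive version. Constant space, linear time.
fib_asc(0) ->
    ok = io:format("Calling: fib_asc(~tp)~n", [0]),
    0;
fib_asc(1) ->
    ok = io:format("Calling: fib_asc(~tp)~n", [1]),
    1;
fib_asc(N) ->
    ok = io:format("Calling: fib_asc(~tp)~n", [N]),
    fib_asc(N, 2, fib_asc(0), fib_asc(1)).

%% A holds fib(N - 2), B holds fib(N - 1); walk N up to the target.
fib_asc(N, N, A, B) ->
    ok = io:format("Calling: fib_asc(~tp, ~tp, ~tp, ~tp)~n", [N, N, A, B]),
    A + B;
fib_asc(Limit, N, A, B) ->
    ok = io:format("Calling: fib_asc(~tp, ~tp, ~tp, ~tp)~n", [Limit, N, A, B]),
    fib_asc(Limit, N + 1, B, A + B).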

It is interesting to run each on a value high enough that you still get an
answer the same day without crashing the VM, and watch the problems fib_desc/1
and fib_para/2 have:

2> timer:tc(fibses, fib_desc, [20]).
% ...
% snip
% ...
Calling: fib_desc(3)
Calling: fib_desc(2)
Calling: fib_desc(1)
Calling: fib_desc(0)
Calling: fib_desc(1)
Calling: fib_desc(2)
Calling: fib_desc(1)
Calling: fib_desc(0)
{732419,6765}
3> timer:tc(fibses, fib_asc, [20]).
Calling: fib_asc(20)
Calling: fib_asc(0)
Calling: fib_asc(1)
Calling: fib_asc(20, 2, 0, 1)
Calling: fib_asc(20, 3, 1, 1)
Calling: fib_asc(20, 4, 1, 2)
Calling: fib_asc(20, 5, 2, 3)
Calling: fib_asc(20, 6, 3, 5)
Calling: fib_asc(20, 7, 5, 8)
Calling: fib_asc(20, 8, 8, 13)
Calling: fib_asc(20, 9, 13, 21)
Calling: fib_asc(20, 10, 21, 34)
Calling: fib_asc(20, 11, 34, 55)
Calling: fib_asc(20, 12, 55, 89)
Calling: fib_asc(20, 13, 89, 144)
Calling: fib_asc(20, 14, 144, 233)
Calling: fib_asc(20, 15, 233, 377)
Calling: fib_asc(20, 16, 377, 610)
Calling: fib_asc(20, 17, 610, 987)
Calling: fib_asc(20, 18, 987, 1597)
Calling: fib_asc(20, 19, 1597, 2584)
Calling: fib_asc(20, 20, 2584, 4181)
{1322,6765}
3> timer:tc(fibses, fib_para, [20, self()]).
% ...
% snip
% ...
Calling fib_para(0, <0.21803.0>)
Calling fib_para(1, <0.21817.0>)
Calling fib_para(0, <0.21817.0>)
Calling fib_para(1, <0.21905.0>)
Calling fib_para(0, <0.21905.0>)
{627543,{<0.33.0>,6765}}
4> flush().
Shell got {<0.33.0>,6765}
ok
5>

Wow. <0.21905.0> That escalated quickly.

On second look, the times are pretty interesting. The stupidly parallel
version performed slightly faster than the naive recursive version. Whoa...
Checking a few more times, it is consistently a little bit faster. Both are
hundreds of times slower than the iterative one, of course, but it is
incredible to me that "spawn, spawn, call, send, send, receive, call, receive,
call" is faster than "call, call, return, return".
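For completeness, actually parallelizing means spawning processes over
genuinely *independent* pieces of work and collecting the results. Something
like this throwaway pmap sketch (my own, not a library function) is the shape
to aim for -- one process per meaningful chunk of work, not one per stack
frame:

pmap(F, List) ->
    Parent = self(),
    %% One worker per element; each sends its tagged result back.
    Pids = [spawn(fun() -> Parent ! {self(), F(X)} end) || X <- List],
    %% Receive in spawn order, so results line up with the input list.
    [receive {Pid, Result} -> Result end || Pid <- Pids].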

-Craig

```
