<div dir="ltr">Look again :-) Where does the time go?<div><br><div>Times/results for asc, desc, and para versions:</div><div>- With printing: {{4,6765},{488211,6765},{491135,{<0.44.0>,6765}}}</div><div> I.e. para only a little slower than desc. I don't doubt that para appears a bit faster in your environment.</div><div>- Without printing: {{3,6765},{1233,6765},{71173,{<0.44.0>,6765}}}</div></div><div><br></div><div>That's a clearer picture: The stupidly parallel version is plenty slower than the naïvely recursive version.</div><div><br></div><div>/Erik</div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-12-18 3:21 GMT+01:00 zxq9 <span dir="ltr"><<a href="mailto:zxq9@zxq9.com" target="_blank">zxq9@zxq9.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 2015年12月17日 木曜日 19:34:02 you wrote:<br>
> Good illustration. Fibonacci to the rescue - for once...<br>
> I feel compelled to point out, however, that the talk about memory<br>
> explosion is not applicable here. In Fibonacci - and, I believe, in the<br>
> original program (though I didn't read it closely), the maximum stack<br>
> depth, and therefore the amount of stack memory required, is only linear in<br>
> n. (Well, with lists and such, it might be quadratic for the OP.) The<br>
> explosion is only timewise.<br>
> The accumulating, tail recursive version runs in constant space, of course.<br>
> Better, but linear would still do.<br>
><br>
> /Erik<br>
<br>
</span>You're right.<br>
<br>
The sequential, naive version is indeed linear in space. A senselessly<br>
"parallelized" version grows much larger. Hilariously so.<br>
<br>
<br>
%%% Stupid, "parallelized" definition. NEVER DO THIS.<br>
fib_para(N, P) -><br>
    ok = io:format("Calling fib_para(~tp, ~tp)~n", [N, P]),<br>
    fib_p(N, P).<br>
<br>
fib_p(0, P) -> P ! {self(), 0};<br>
fib_p(1, P) -> P ! {self(), 1};<br>
fib_p(N, P) -><br>
    PA = spawn(?MODULE, fib_para, [N - 1, self()]),<br>
    PB = spawn(?MODULE, fib_para, [N - 2, self()]),<br>
    fib_p(P, {PA, none}, {PB, none}).<br>
<br>
fib_p(P, {PA, none}, B) -> receive {PA, A} -> fib_p(P, {PA, A}, B) end;<br>
fib_p(P, A, {PB, none}) -> receive {PB, B} -> fib_p(P, A, {PB, B}) end;<br>
fib_p(P, {_, A}, {_, B}) -> P ! {self(), A + B}.<br>
<br>
<br>
As TOTALLY INSANE as the above example is, this is exactly the kind of thing<br>
I see people actually do from time to time. Try a search for "parallel<br>
fibonacci" and you'll find countless tutorials in various languages that<br>
demonstrate the ridiculousness above. A *few* of them actually mention how<br>
stupid this is; many don't. I've run into production code (AHHH!) that winds<br>
up making the exact same mistake as this, processizing something instead<br>
of actually parallelizing it.<br>
<br>
(For some reason this is especially prevalent in languages, environments or<br>
code where words like "promise" and "future" come up a lot. Always makes me<br>
want to say "You keep using that word. I do not think it means what you<br>
think it means." But I think nobody would get the reference. (>.<)<br>
<a href="https://www.youtube.com/watch?v=wujVMIYzYXg" rel="noreferrer" target="_blank">https://www.youtube.com/watch?v=wujVMIYzYXg</a> )<br>
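By contrast, spawning pays off when each process gets a meaningful chunk of independent work, not a two-instruction slice of one. A minimal sketch of the shape that does make sense (mine, not from the thread — a bare-bones parallel map, worth it only when F is heavy per element):

```erlang
%% Sketch only: parallelism that earns its keep. One process per
%% element, each doing a whole independent computation.
pmap(F, List) ->
    Parent = self(),
    Pids = [spawn(fun() -> Parent ! {self(), F(X)} end) || X <- List],
    %% Receiving selectively on each pid preserves the input order.
    [receive {Pid, Result} -> Result end || Pid <- Pids].
```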
<br>
It is interesting to run each on a value high enough to expose the problems<br>
fib_desc/1 and fib_para/2 have, but low enough that you still get an answer<br>
the same day and don't crash the VM:<br>
<br>
2> timer:tc(fibses, fib_desc, [20]).<br>
% ...<br>
% snip<br>
% ...<br>
<span class="">Calling: fib_desc(3)<br>
Calling: fib_desc(2)<br>
Calling: fib_desc(1)<br>
Calling: fib_desc(0)<br>
Calling: fib_desc(1)<br>
Calling: fib_desc(2)<br>
Calling: fib_desc(1)<br>
Calling: fib_desc(0)<br>
</span>{732419,6765}<br>
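(fib_desc/1 itself isn't shown in this message; from the trace it is presumably the textbook doubly recursive definition with a print per call. My reconstruction — the fib_d helper name is a guess:)

```erlang
%% Presumed shape of fib_desc/1, reconstructed from the trace above.
fib_desc(N) ->
    ok = io:format("Calling: fib_desc(~tp)~n", [N]),
    fib_d(N).

fib_d(0) -> 0;
fib_d(1) -> 1;
fib_d(N) -> fib_desc(N - 1) + fib_desc(N - 2).
```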
3> timer:tc(fibses, fib_asc, [20]).<br>
Calling: fib_asc(20)<br>
Calling: fib_asc(0)<br>
Calling: fib_asc(1)<br>
Calling: fib_asc(20, 2, 0, 1)<br>
Calling: fib_asc(20, 3, 1, 1)<br>
Calling: fib_asc(20, 4, 1, 2)<br>
Calling: fib_asc(20, 5, 2, 3)<br>
Calling: fib_asc(20, 6, 3, 5)<br>
Calling: fib_asc(20, 7, 5, 8)<br>
Calling: fib_asc(20, 8, 8, 13)<br>
Calling: fib_asc(20, 9, 13, 21)<br>
Calling: fib_asc(20, 10, 21, 34)<br>
Calling: fib_asc(20, 11, 34, 55)<br>
Calling: fib_asc(20, 12, 55, 89)<br>
Calling: fib_asc(20, 13, 89, 144)<br>
Calling: fib_asc(20, 14, 144, 233)<br>
Calling: fib_asc(20, 15, 233, 377)<br>
Calling: fib_asc(20, 16, 377, 610)<br>
Calling: fib_asc(20, 17, 610, 987)<br>
Calling: fib_asc(20, 18, 987, 1597)<br>
Calling: fib_asc(20, 19, 1597, 2584)<br>
Calling: fib_asc(20, 20, 2584, 4181)<br>
{1322,6765}<br>
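(For reference, fib_asc is presumably the accumulating, tail recursive version Erik mentioned; reconstructing from the trace above, with my own guesses for the helper names:)

```erlang
%% Presumed shape of fib_asc, reconstructed from the trace above.
fib_asc(N) ->
    ok = io:format("Calling: fib_asc(~tp)~n", [N]),
    fib_a(N).

fib_a(0) -> 0;
fib_a(1) -> 1;
%% Args evaluate left to right, hence the fib_asc(0)/fib_asc(1) prints.
fib_a(N) -> fib_asc(N, 2, fib_asc(0), fib_asc(1)).

fib_asc(N, N, A, B) ->
    ok = io:format("Calling: fib_asc(~tp, ~tp, ~tp, ~tp)~n", [N, N, A, B]),
    A + B;
fib_asc(N, I, A, B) ->
    ok = io:format("Calling: fib_asc(~tp, ~tp, ~tp, ~tp)~n", [N, I, A, B]),
    fib_asc(N, I + 1, B, A + B).
```

Constant space: only the running pair of Fibonacci numbers is carried along, never a call stack.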
3> timer:tc(fibses, fib_para, [20, self()]).<br>
% ...<br>
% snip<br>
% ...<br>
Calling fib_para(0, <0.21803.0>)<br>
Calling fib_para(1, <0.21817.0>)<br>
Calling fib_para(0, <0.21817.0>)<br>
Calling fib_para(1, <0.21905.0>)<br>
Calling fib_para(0, <0.21905.0>)<br>
{627543,{<0.33.0>,6765}}<br>
4> flush().<br>
Shell got {<0.33.0>,6765}<br>
ok<br>
5><br>
<br>
<br>
Wow. <0.21905.0> That escalated quickly.<br>
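That pid is no accident. A quick back-of-envelope count (my arithmetic, not from the thread): every non-base call in the naive tree spawns two processes, so the spawn total tracks the call count, which is exponential in N:

```erlang
%% Call count of the naive fib tree:
%%   C(0) = C(1) = 1,  C(N) = C(N-1) + C(N-2) + 1,
%% which works out to C(N) = 2*fib(N+1) - 1.
calls(0) -> 1;
calls(1) -> 1;
calls(N) -> calls(N - 1) + calls(N - 2) + 1.

%% Every call except the top-level one was spawned.
spawned(N) -> calls(N) - 1.
```

calls(20) is 2*10946 - 1 = 21891, so fib_para(20, ...) spawns 21890 processes — which squares nicely with the pids above climbing past <0.21800.0>.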
<br>
On second look, the times are pretty interesting. The stupidly parallel<br>
version performed slightly faster than the naive recursive version. Whoa...<br>
Checking a few more times, it is consistently a little bit faster. Both are<br>
hundreds of times slower than the iterative one, of course, but it is<br>
incredible to me that "spawn, spawn, call, send, send, receive, call, receive,<br>
call" is faster than "call, call, return, return".<br>
<div class="HOEnZb"><div class="h5"><br>
-Craig<br>
_______________________________________________<br>
erlang-questions mailing list<br>
<a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>
<a href="http://erlang.org/mailman/listinfo/erlang-questions" rel="noreferrer" target="_blank">http://erlang.org/mailman/listinfo/erlang-questions</a><br>
</div></div></blockquote></div><br></div>