<div dir="ltr"><div class="gmail_quote"><div class="Ih2E3d"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">For example, if each of your player processes has a handle to each other<br>
</blockquote>
player process, spin up a process for a given match instead, and give the<br>
player processes a hook to that and nothing else.<br>
</blockquote>
<br></div>
>
> How does this work? Can you elaborate a bit?

Every mailing list needs ascii art!

If it used to be

      A---B
     /|\ /|\
    C-+-*-+-D    (plus connections I can't draw
     \|/ \|/      from A-D, B-C, D-E, C-F)
      E---F

Then change it to

          Z
         /|\
        // \\
       / |  | \
      /\ |  | /\
     A B C  D E F
Instead of (6*5)=30 pids, you're now tracking (6*1)+6=12 pids. The
savings go up as the games get bigger, because in the old fully
connected network you need N*(N-1) connections per game, whereas in the
new network you need 2N connections per game: quadratic vs linear
growth, and rada rada rada.
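
To make the picture concrete, here's a minimal Erlang sketch of that
star topology (the module, function, and message names are all mine,
invented for illustration):

    %% Hypothetical sketch of the star topology described above.
    -module(match).
    -export([start/1]).

    %% Spawn one match process for an N-player game.  The match is the
    %% only process that holds every player pid; each player holds just
    %% the match pid.  Pids tracked: N by the match + 1 per player = 2N,
    %% versus N*(N-1) in the fully connected version.
    start(N) ->
        spawn(fun() ->
                  Match = self(),
                  Players = [spawn(fun() -> player(Match) end)
                             || _ <- lists:seq(1, N)],
                  loop(Players)
              end).

    %% The match relays each move to every other player, so players
    %% never need direct handles to one another.
    loop(Players) ->
        receive
            {move, From, Move} ->
                [P ! {relayed, Move} || P <- Players, P =/= From],
                loop(Players)
        end.

    %% A player knows only the match pid.
    player(Match) ->
        receive
            {play, Move} ->
                Match ! {move, self(), Move},
                player(Match);
            {relayed, _Move} ->
                %% react to another player's relayed move
                player(Match)
        end.

The extra hop through the match is the "extra messaging" mentioned
below; what you buy with it is the drop from quadratic to linear pid
bookkeeping.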
>> Bang: 7/8 or so savings
>> on PID space, and probably a significant speed increase despite the extra
>> messaging, since you're going to get so many more correct branch predictions
>> and cache wins.
>
> And how do you measure cache wins and branch predictions with Erlang?

Ancient
wisdom. When you've been programming 20+ years, you can smell what's
going to work and what isn't. And, I know, for someone who says
"intuition is worthless, profile it" as much as I do, that's kind of
hypocritical in the way that the ocean is kind of wet, but I haven't
really taken the time to learn Erlang's profiling situation yet, so
it's the best I've got.

But, I mean, when you drop the size of a block of data by 80-85%,
you're just going to keep a lot more of it in cache, and when the data
has stronger locality, it's more likely to get a good branch
prediction. That's just the nature of modern branch prediction
algorithms. There's a fascinating (if somewhat poorly written)
discussion of these issues in the original Judy Tree papers by Doug
Baskins at Hewlett Packard.

 - John