<div dir="ltr">Joel,<div><br></div><div>Like any technical project, you are dealt a hand and you have to play it.<br><br><div class="gmail_quote">On Fri, Sep 12, 2008 at 9:52 AM, Joel Reymont <span dir="ltr"><<a href="mailto:joelr1@gmail.com">joelr1@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">I sell a poker server written in Erlang. It's supposed to be super-<br>
robust and super-scalable. I'm about to move to the next level by<br>
adding the missing features, e.g. tournaments and a Flash client.<br>
<br>
I appreciate everything that the Erlang/OTP team is doing, but I thought I<br>
would vent a few of my recent frustrations with Erlang. I'm in a good<br>
mood after spending a day with OCaml and I have calmed down. Still,<br>
prepare yourself for a long rant ahead!<br>
<br>
My development workstation is a Mac Pro with 2x2.8GHz quad-core Xeons, 12GB of<br>
memory, one 250GB drive and two more 500GB drives, all 7200RPM SATA. I<br>
use R12B3, SMP and kernel poll, i.e.<br>
<br>
Erlang (BEAM) emulator version 5.6.3 [source] [64-bit] [smp:8] [async-<br>
threads:0] [kernel-poll:true]<br>
<br>
My overwhelming frustration is the opacity of a running Erlang system.<br>
There are no decent tools for peering inside. No usable ones whatsoever!<br>
<br>
With any other language you can profile, make changes, evaluate<br>
performance, and make a judgement, but not with Erlang.<br>
<br>
I first wrote OpenPoker using OTP everywhere. My players, games, pots,<br>
limits, hands, decks, etc. were all gen_server processes. I used<br>
Mnesia transactions everywhere and I used them often.<br>
<br>
Then I discovered that I cannot scale past 2-3k concurrent players<br>
under heavy use.<br>
<br>
I have a test harness that launches bots which connect to the server<br>
and play by the script. The bots don't wait before replying to bet<br>
requests and so launching a few thousand bots heavily loads the server.<br>
<br>
I don't want just a few thousand concurrent bots, though! I want at<br>
least 10k on a single VM and hundreds of thousands on a cluster, so I<br>
set to optimize my poker server.<br>
<br>
The Erlang Efficiency Guide recommends fprof as the tool. I ran fprof<br>
on my test harness and discovered that the result set cannot be<br>
processed in my 12GB of memory. I made this discovery after leaving<br>
fprof running for a couple of days: the fprof data files were<br>
approaching 100GB and my machine became unusable due to heavy swapping.<br>
<br>
fprof uses ets tables to analyze the trace results, and ets tables<br>
must fit in memory.<br>
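<br>
A trace scoped to a short burst of activity keeps those ets tables<br>
small enough to fit. A sketch of what such a run looks like (the dummy<br>
workload here stands in for a short bot run; file names are made up):<br>

```erlang
-module(profile_burst).
-export([run/0]).

%% Keep the traced window short so fprof's ets tables fit in memory.
run() ->
    fprof:trace([start, {file, "poker.trace"}]),  % trace to a file
    work(),                                       % stand-in for a short bot run
    fprof:trace(stop),
    fprof:profile([{file, "poker.trace"}]),       % builds the ets tables
    fprof:analyse([{dest, "poker.analysis"}]).    % human-readable(ish) output

work() ->
    lists:foldl(fun(X, Acc) -> X + Acc end, 0, lists:seq(1, 100000)).
```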
<br>
I shortened my test run and was able to see the output of the fprof<br>
trace analysis. To say that it's dense would be an understatement! I<br>
realize that dumping out tuples is easy, but aren't computers supposed<br>
to help us humans?<br>
<br>
The final output from fprof is still too raw for me to analyze.<br>
There's absolutely, positively, definitely no way to get a picture of<br>
a running system by reading through it. I understand that I can infer<br>
from the analysis that certain functions take a lot of time but what<br>
if there are none?<br>
<br>
The bulk of the time in my system was taken by various OTP functions<br>
and processes, Mnesia and unknown functions. All I could infer from it<br>
is that perhaps I have too many processes.<br>
<br>
Another thing that I inferred is that the normal method of writing<br>
gen_server code doesn't work for profiling.<br>
<br>
I had to rewrite the gen_server clauses to immediately dispatch to<br>
functions, e.g.<br>
<br>
handle_cast('LOGOUT', Data) -><br>
    handle_cast_logout(Data);<br>
<br>
handle_cast('DISCONNECT', Data) -><br>
    handle_cast_disconnect(Data);<br>
<br>
otherwise all the clauses of a gen_server are squashed together,<br>
regardless of the message pattern. I don't know if there's a better<br>
way to tackle this.<br>
<br>
Next, I rewrote most of my gen_servers as data structures, e.g. pot,<br>
limit, deck, etc. A deck of cards can take a message to draw a card<br>
but the message can just as well be a function call. The deck<br>
structure will need to be modified regardless and the tuple will be<br>
duplicated anyway. There didn't seem to be any advantage in using a<br>
process here, much less a gen_server.<br>
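<br>
As a sketch of what I mean (module and names are made up for<br>
illustration, not my actual code): drawing a card is just a pure<br>
function returning the card and the new deck, no process round-trip.<br>

```erlang
-module(deck).
-export([new/0, draw/1]).

%% A deck as a plain data structure: a list of {Rank, Suit} tuples.
new() ->
    [{Rank, Suit} || Suit <- [clubs, diamonds, hearts, spades],
                     Rank <- lists:seq(2, 14)].

%% Drawing returns the card and the updated deck -- no gen_server,
%% no message passing, no copied state beyond the new list.
draw([Card | Rest]) -> {Card, Rest};
draw([])            -> {error, empty}.
```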
<br>
Next I carefully went through my Mnesia schema and split some tables<br>
into smaller tables. I made sure that only the absolutely necessary<br>
tables were disk-based. I wish I could run without updating Mnesia<br>
tables during a game but this is impossible since player balances and<br>
status need to be updated when players join or leave a game, as well<br>
as when a game finishes.<br>
<br>
All my hard work paid off and I was able to get close to 10K players,<br>
with kernel poll enabled, of course. Then I ran out of ETS tables.<br>
<br>
I don't create ETS tables on the fly but, apparently, Mnesia does. For<br>
every transaction!!!<br>
<br>
This prompted me to go through the server again and use dirty_read and<br>
dirty_write wherever possible. I also placed balances in two separate<br>
"counter" tables, as integers to be divided by 10000 to get 4 decimal<br>
places of precision. This is so that I could use dirty_update_counter<br>
instead of the regular read, bump, write pattern.<br>
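<br>
The idea, as a sketch (table, record and function names here are<br>
illustrative, not my actual schema):<br>

```erlang
-module(balances).
-export([credit/2, balance/1]).

%% Balances live in a mnesia counter table as integers scaled by 10000,
%% so a bump is a single mnesia:dirty_update_counter/3 call instead of
%% a read-modify-write inside a transaction.
-record(balance, {player, amount = 0}).   % amount in 1/10000 units

credit(Player, Amount) when is_number(Amount) ->
    Units = round(Amount * 10000),
    mnesia:dirty_update_counter(balance, Player, Units).

balance(Player) ->
    case mnesia:dirty_read(balance, Player) of
        [#balance{amount = Units}] -> Units / 10000;
        []                         -> 0.0
    end.
```

(Assumes a running mnesia node with a `balance` table of type `set`.)<br>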
<br>
My frustration kept increasing but I gained more concurrent players. I<br>
can now safely run up to 8K bots before timeouts start to appear.<br>
<br>
These are gen_server call timeouts when requests for game information<br>
take longer than the default 5 seconds. I have an average of 5 players<br>
per game so this is not because a large number of processes are trying<br>
to access the game.<br>
<br>
I suppose this is a reflection of the load on the system, although CPU<br>
usage never goes past 300% which tells me that no more than 3 cores<br>
are used by Erlang.<br>
<br>
The straw that broke my back was when stopping a bot's matching player<br>
gen_server by returning {stop, ... } started causing my observer<br>
process to receive tcp_close and exit. I could repeat this like<br>
clockwork. Only spawning a separate process to send the player a stop<br>
message would fix this.<br>
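<br>
The workaround, roughly (the 'STOP' message name is illustrative):<br>

```erlang
-module(stopper).
-export([stop_player/1]).

%% Instead of returning {stop, ...} from the handler, a throwaway
%% process asks the player to stop asynchronously, so the teardown
%% happens outside the caller's context.
stop_player(Player) ->
    spawn(fun() -> gen_server:cast(Player, 'STOP') end).
```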
<br>
Then I changed the way I represent cards and started seeing this<br>
behavior again, in just one of my tests. What do cards have to do with<br>
tcp_close? I don't know, and dbg tracer is my best friend! All I know<br>
is what git tells me, and git says cards were the only difference.<br>
<br>
Anyway, I don't think I have fully recovered yet. I may need a weekend<br>
just to regain my sanity. I will try to spread the load among several<br>
VMs, but my hunch is that my distributed 100k-player target is far, far<br>
away. I may have to keep flying blind, with only traces and<br>
printouts to my rescue.<br>
<br>
Thanks for listening, Joel<br>
<font color="#888888"><br>
--<br>
<a href="http://wagerlabs.com" target="_blank">wagerlabs.com</a><br>
<br>
_______________________________________________<br>
erlang-questions mailing list<br>
<a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>
<a href="http://www.erlang.org/mailman/listinfo/erlang-questions" target="_blank">http://www.erlang.org/mailman/listinfo/erlang-questions</a><br>
</font></blockquote></div><br><br clear="all"><br>-- <br>John S. Wolter President<br>Wolter Works<br>Mailto:<a href="mailto:johnswolter@wolterworks.com">johnswolter@wolterworks.com</a><br>Desk 1-734-665-1263<br>Cell: 1-734-904-8433<br>
</div></div>