<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Hi Ulf,<div><br></div><div>Have you done any concurrency tests? I only ask because I've seen our own pooling code (<a href="https://github.com/openx/gen_server_pool">https://github.com/openx/gen_server_pool</a>) have issues under load. In our case that's because a single gen_server acts as a dispatch layer, which shouldn't be the case for gproc, since IIRC it uses ETS to provide fast concurrent access (something also done in a novel way by <a href="https://github.com/ferd/dispcount/">https://github.com/ferd/dispcount/</a>, which I keep meaning to try out). Still, I'd be curious to hear about any concurrent testing that bears this out.</div><div><br></div><div>I think the number of pool implementations in Erlang has probably finally surpassed the number of JSON parsers ;)</div><div><br></div><div>-Anthony</div><div><div><br><div><div>On Jun 4, 2013, at 2:18 AM, Ulf Wiger &lt;<a href="mailto:ulf@feuerlabs.com">ulf@feuerlabs.com</a>&gt; wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><br>I pushed a new gproc component called gproc_pool the other day.<br><br>The main idea, apart from wanting to see how well it would work, was that I wanted to be able to register servers with gproc and then have an efficient way of pooling between them. A benefit of using gproc throughout is that the registration objects serve as a 'footprint' for each process - by listing the gproc entities for each process, you can tell a lot about its purpose.<br><br>The way gproc_pool works is that:<br>1. You define a pool by naming it and optionally specifying its size<br> (gproc_pool:new(Pool) | gproc_pool:new(Pool, Type, Options))<br>2. You add worker names to the pool<br> (gproc_pool:add_worker(Pool, Name))<br>3. 
Your servers each connect to a given name<br> (gproc_pool:connect_worker(Pool, Name))<br>4. Users pick a worker for each request (gproc_pool:pick(Pool))<br><br>My little test code indicates that the different load-balancing strategies perform a bit differently:<br><br>(<a href="https://github.com/uwiger/gproc/blob/master/src/gproc_pool.erl#L843">https://github.com/uwiger/gproc/blob/master/src/gproc_pool.erl#L843</a>)<br><br>(Create a pool, add 6 workers and iterate 100k times, <br>incrementing a gproc counter for each iteration.)<br><br>3> gproc_pool:test(100000,round_robin,[]).<br>worker stats (848):<br>[{a,16667},{b,16667},{c,16667},{d,16667},{e,16666},{f,16666}]<br>{2801884,ok}<br>4> gproc_pool:test(100000,hash,[]). <br>worker stats (848):<br>[{a,16744},{b,16716},{c,16548},{d,16594},{e,16749},{f,16649}]<br>{1891517,ok}<br>5> gproc_pool:test(100000,random,[]).<br>worker stats (848):<br>[{a,16565},{b,16542},{c,16613},{d,16872},{e,16727},{f,16681}]<br>{3701011,ok}<br>6> gproc_pool:test(100000,direct,[]).<br>worker stats (848):<br>[{a,16667},{b,16667},{c,16667},{d,16667},{e,16666},{f,16666}]<br>{1766639,ok}<br>11> gproc_pool:test(100000,claim,[]).<br>worker stats (848):<br>[{a,100000},{b,0},{c,0},{d,0},{e,0},{f,0}]<br>{7569425,ok}<br><br><br>The worker stats show how evenly the workers were selected,<br>and the {Time, ok} comes from timer:tc/3, i.e. Time/100000 is the per-iteration cost:<br><br>round_robin: 28 usec (maintain a 'current' counter, modulo Size)<br>hash: 19 usec (gproc_pool:pick(Pool, Val), hash on Val)<br>random: 37 usec (pick a random worker, using crypto:rand_uniform/2)<br>direct: 18 usec (gproc_pool:pick(Pool, N), where N modulo Size selects worker)<br>claim: 76 usec (claim the first available worker, apply a fun, then release)<br><br>I think the per-selection cost is acceptable as-is, but could perhaps be improved (esp. the 'random' strategy is surprisingly expensive). 
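<br><br>For concreteness, steps 1-4 above might look roughly like this in code (the pool name, worker names and pool size are just for illustration, and the final call assumes the picked name can be addressed through gproc as a via-name - adjust to your setup):<br><br>%% 1 + 2: create the pool and add two workers (illustrative names)<br>setup() -&gt;<br>&nbsp;&nbsp;&nbsp;&nbsp;gproc_pool:new(mypool, round_robin, [{size, 2}]),<br>&nbsp;&nbsp;&nbsp;&nbsp;gproc_pool:add_worker(mypool, a),<br>&nbsp;&nbsp;&nbsp;&nbsp;gproc_pool:add_worker(mypool, b).<br><br>%% 3: each worker connects to its name from its own process,<br>%% e.g. in its init/1: gproc_pool:connect_worker(mypool, a)<br><br>%% 4: callers pick a worker per request<br>call(Msg) -&gt;<br>&nbsp;&nbsp;&nbsp;&nbsp;case gproc_pool:pick(mypool) of<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;false -&gt; {error, no_workers};<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Name&nbsp; -&gt; gen_server:call({via, gproc, Name}, Msg)<br>&nbsp;&nbsp;&nbsp;&nbsp;end.<br><br>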
All the selection work is done in the caller's process, BTW - no communication with the gproc or gproc_pool servers (except for admin tasks).<br><br>The 'claim' strategy is also surprisingly expensive. I believe it's because I'm using gproc:select/3 to find the first free worker. Note also that it results in an extremely uneven distribution. That's obviously because each iteration of the test run claims the first available worker and then releases it before the next iteration - so it always selects the first worker.<br><br><a href="https://github.com/uwiger/gproc/blob/master/doc/gproc_pool.md">https://github.com/uwiger/gproc/blob/master/doc/gproc_pool.md</a><br><br>Feedback welcome, be it performance tips, usability tips, or anything else.<br><br>BR,<br>Ulf W<br><br>Ulf Wiger, Co-founder &amp; Developer Advocate, Feuerlabs Inc.<br>http://feuerlabs.com<br><br><br><br>_______________________________________________<br>erlang-questions mailing list<br>erlang-questions@erlang.org<br>http://erlang.org/mailman/listinfo/erlang-questions<br></blockquote></div><br></div></div></body></html>