<div dir="ltr">Thanks for the explanation.<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jun 5, 2013 at 8:47 AM, Ulf Wiger <span dir="ltr"><<a href="mailto:ulf@feuerlabs.com" target="_blank">ulf@feuerlabs.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><br><div><div class="im"><div>On 4 Jun 2013, at 21:30, pablo platt wrote:</div><br><blockquote type="cite">
<div dir="ltr"><div><div><div><div>What's the use case for workers in the pool?<br></div>Is it only for distributing a task or also for implementing a pool of DB connections like <a href="https://github.com/devinus/poolboy" target="_blank">https://github.com/devinus/poolboy</a> ?<br>
</div></div></div></div></blockquote><div><br></div><div><br></div></div><div>I believe it *is* fairly similar to poolboy, but I thought it would be consistent with the gproc philosophy to have a pool concept in gproc, since:</div>
<div><br></div><div>- One of the things you need to do in a worker pool implementation is to keep track of the worker processes, and gproc is good at this</div><div><br></div><div>- A benefit of using gproc is that you can get some query/debugging/monitoring capabilities for free. For example, after setting up my test pool (gproc_pool:setup_test_pool/3), I can use the following stock gproc function:</div>
<div><br></div><div><div>2> gproc_pool:setup_test_pool(mypool,round_robin,[]).</div><div>add_worker(mypool, a) -> 1; Ws = [{a,1}]</div><div>add_worker(mypool, b) -> 2; Ws = [{a,1},{b,2}]</div><div>add_worker(mypool, c) -> 3; Ws = [{a,1},{b,2},{c,3}]</div>
<div>add_worker(mypool, d) -> 4; Ws = [{a,1},{b,2},{c,3},{d,4}]</div><div>add_worker(mypool, e) -> 5; Ws = [{a,1},{b,2},{c,3},{d,4},{e,5}]</div><div>add_worker(mypool, f) -> 6; Ws = [{a,1},{b,2},{c,3},{d,4},{e,5},{f,6}]</div>
<div>[true,true,true,true,true,true]</div><div>3> gproc:in</div><div>info/1 info/2 init/1 </div><div>3> catch gproc:info(self()).</div><div>[{gproc,[{{n,l,[gproc_pool,mypool,1,a]},0},</div><div> {{n,l,[gproc_pool,mypool,2,b]},0},</div>
<div> {{n,l,[gproc_pool,mypool,3,c]},0},</div><div> {{n,l,[gproc_pool,mypool,4,d]},0},</div><div> {{n,l,[gproc_pool,mypool,5,e]},0},</div><div> {{n,l,[gproc_pool,mypool,6,f]},0}]},</div><div>
{current_function,{erl_eval,do_apply,6}},</div><div> {initial_call,{erlang,apply,2}},</div><div> {status,running},</div><div> {message_queue_len,0},</div><div> …]</div><div><br></div><div>Thus, from the 'gproc footprint' of the process, I can readily tell that it's a worker in the pool 'mypool' (even if I'm not familiar with the gproc_pool concept, I can guess from convention that the first part of the name is a module name).</div>
</div><div><br></div><div>The whole idea of gproc was in fact to provide a single set of patterns that I saw appearing in many different places in our code, in lots of different implementations. So in a sense, practically everything that gproc provides is stuff that people have implemented before, in reasonably similar ways. :) Hopefully with gproc, some user code can become simpler, more debuggable and a bit more uniform.</div>
<div class="im"><br><blockquote type="cite"><div dir="ltr"><div><div>Why do workers have names?<br></div>I know I can just give them names such as 0,1,2... but I'm trying to understand the rationale.<br></div></div></blockquote><div>
<br></div></div><div>I thought it was a useful layer of abstraction.</div><div><br></div><div>The performance of the pool is somewhat dependent on the spread of workers across the available slots (especially if the pool is half-full and hashing or random selection is used). The workers themselves only need to know what to call themselves when they connect to the pool. Whoever manages the pool can control the positioning of each worker.</div>
<div class="im"><div><br></div><br><blockquote type="cite"><div dir="ltr"><div>As always, I'm sure this functionality will be a major part in my server like everything else in gproc,<br>
</div><div>even if I still don't know why ;)<br></div></div></blockquote><div><br></div></div><div>Haha! This reminds me of the first design review meeting at Ericsson where gproc's predecessor sysproc was up for review. The chairman of the meeting said "I guess we'll approve it, even though I don't understand what it's for". :)</div>
<div><br></div><div>It was a good decision, I thought…</div><div><br></div><div>BR,</div><div>Ulf W</div><div><div class="h5"><div><br></div><br><blockquote type="cite"><div dir="ltr"><div><br></div>Thanks<br><div><div><div>
<br><br></div></div></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jun 4, 2013 at 10:24 PM, Ulf Wiger <span dir="ltr"><<a href="mailto:ulf@feuerlabs.com" target="_blank">ulf@feuerlabs.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><br><div><div><div>On 4 Jun 2013, at 18:52, ANTHONY MOLINARO wrote:</div><br>
<blockquote type="cite"><div style="word-wrap:break-word">Hi Ulf,<div><br></div><div>Have you done any concurrent tests? I only ask because I've seen our own pooling code (<a href="https://github.com/openx/gen_server_pool" target="_blank">https://github.com/openx/gen_server_pool</a>) have issues under load. Now in our case</div>
<div>it's because of a single gen_server acting as a dispatch layer, which should not be the</div><div>case for gproc as IIRC it uses ets to provide for fast concurrent access (something also</div><div>done in a novel way by <a href="https://github.com/ferd/dispcount/" target="_blank">https://github.com/ferd/dispcount/</a> which I keep meaning to try</div>
<div>out), but I'd be curious to know if you've done any concurrent testing which shows that.</div></div></blockquote><div><br></div></div><div>I hadn't, but I have now.</div><div><br></div><div>Spawning N clients, which run 1000 iterations each, on e.g. a round_robin pool:</div>
<div><br></div><div>N Avg usec/iteration</div><div>1 37</div><div>10 250</div><div>100 1630</div><div>1000 18813</div><div><br></div><div>Of course, this was a pretty nasty test, with all processes banging away at the pool as fast as they possibly could. If you want frequent mutex conflicts, that's probably as good a way as any to provoke them.</div>
<div><br></div><div>When I insert a random sleep (0-50 ms) between each iteration, time each pick request and collect the averages, 100 concurrent workers pay on average 50 usec per selection. For 1000 concurrent workers, the average rises to 60 usec.</div>
<div><br></div><div>The corresponding average for the hash pool and 1000 concurrent workers is 20 usec.</div><div><br></div><div>(All on my Macbook Air)</div><div><div><br></div><br><blockquote type="cite"><div style="word-wrap:break-word">
<div>I think the number of pool implementations in erlang has probably finally surpassed</div><div>the number of json parsers ;)</div></div></blockquote><div><br></div></div><div>Well, that tends to happen with fun and reasonably well-bounded problems. ;)</div>
<div><br></div><div>BR,</div><div>Ulf W</div><div><div><br><blockquote type="cite"><div style="word-wrap:break-word"><div><br></div><div>-Anthony</div><div><div><br><div><div>On Jun 4, 2013, at 2:18 AM, Ulf Wiger <<a href="mailto:ulf@feuerlabs.com" target="_blank">ulf@feuerlabs.com</a>> wrote:</div>
<br><blockquote type="cite"><br>I pushed a new gproc component called gproc_pool the other day.<br><br>The main idea, apart from wanting to see how well it would work, was that I wanted to be able to register servers with gproc and then have an efficient way of pooling between them. A benefit of using gproc throughout is that the registration objects serve as a 'footprint' for each process - by listing the gproc entities for each process, you can tell a lot about its purpose.<br>
<br>The way gproc_pool works is that:<br>1. You define a pool, by naming it, and optionally specifying its size<br> (gproc_pool:new(Pool) | gproc_pool:new(Pool, Type, Options))<br>2. You add worker names to the pool<br>
(gproc_pool:add_worker(Pool, Name))<br>3. Your servers each connect to a given name<br> (gproc_pool:connect_worker(Pool, Name))<br>4. Users pick a worker for each request (gproc_pool:pick(Pool))<br><br>My little test code indicates that the different load-balancing strategies perform a bit differently:<br>
<br>(<a href="https://github.com/uwiger/gproc/blob/master/src/gproc_pool.erl#L843" target="_blank">https://github.com/uwiger/gproc/blob/master/src/gproc_pool.erl#L843</a>)<br><br>(Create a pool, add 6 workers and iterate 100k times, <br>
incrementing a gproc counter for each iteration.)<br><br>3> gproc_pool:test(100000,round_robin,[]).<br>worker stats (848):<br>[{a,16667},{b,16667},{c,16667},{d,16667},{e,16666},{f,16666}]<br>{2801884,ok}<br>4> gproc_pool:test(100000,hash,[]). <br>
worker stats (848):<br>[{a,16744},{b,16716},{c,16548},{d,16594},{e,16749},{f,16649}]<br>{1891517,ok}<br>5> gproc_pool:test(100000,random,[]).<br>worker stats (848):<br>[{a,16565},{b,16542},{c,16613},{d,16872},{e,16727},{f,16681}]<br>
{3701011,ok}<br>6> gproc_pool:test(100000,direct,[]).<br>worker stats (848):<br>[{a,16667},{b,16667},{c,16667},{d,16667},{e,16666},{f,16666}]<br>{1766639,ok}<br>11> gproc_pool:test(100000,claim,[]).<br>worker stats (848):<br>
[{a,100000},{b,0},{c,0},{d,0},{e,0},{f,0}]<br>{7569425,ok}<br><br><br>The worker stats show how evenly the workers were selected,<br>and the {Time, ok} comes from timer:tc/3, i.e. Time/100000 is the per-iteration cost:<br>
<br>round_robin: 28 usec (maintain a 'current' counter, modulo Size)<br>hash: 19 usec (gproc_pool:pick(Pool, Val), hash on Val)<br>random: 37 usec (pick a random worker, using crypto:rand_uniform/2)<br>direct: 18 usec (gproc_pool:pick(Pool, N), where N modulo Size selects worker)<br>
claim: 76 usec (claim the first available worker, apply a fun, then release)<br><br>I think the per-selection cost is acceptable as-is, but could perhaps be improved (esp. the 'random' strategy is surprisingly expensive). All the selection work is done in the caller's process, BTW - no communication with the gproc or gproc_pool servers (except for admin tasks).<br>
<br>The 'claim' strategy is also surprisingly expensive. I believe it's because I'm using gproc:select/3 to find the first free worker. Note also that it results in an extremely uneven distribution. That's obviously because the test run claims the first available worker and then releases it before iterating - it's always going to select the first worker.<br>
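<br>To make the four steps above concrete, here is a minimal sketch of how a pool might be wired up. The pool name (my_pool), the worker names (w1, w2) and the trivial echo loop are invented for illustration; I'm also assuming pick/1 hands back a gproc name you can route a message to via gproc:send/2 - check the gproc_pool docs for the exact return value:<br><br>

```erlang
%% Minimal sketch of the four gproc_pool steps described above.
%% my_pool, w1 and w2 are arbitrary example names.

setup() ->
    gproc_pool:new(my_pool),                      % 1. define the pool
    [gproc_pool:add_worker(my_pool, W)            % 2. add worker names
     || W <- [w1, w2]],
    [spawn(fun() -> worker(W) end) || W <- [w1, w2]],
    ok.

worker(Name) ->
    gproc_pool:connect_worker(my_pool, Name),     % 3. each server connects
    loop().

loop() ->
    receive
        {From, Req} -> From ! {reply, Req}, loop()
    end.

request(Req) ->
    Worker = gproc_pool:pick(my_pool),            % 4. pick a worker per request
    gproc:send(Worker, {self(), Req}),            % assuming pick/1 returns a
    receive {reply, Reply} -> Reply end.          % gproc name usable with send/2
```
<br>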
<br><a href="https://github.com/uwiger/gproc/blob/master/doc/gproc_pool.md" target="_blank">https://github.com/uwiger/gproc/blob/master/doc/gproc_pool.md</a><br><br>Feedback welcome, be it with performance tips, usability tips, or other.<br>
<br>BR,<br>Ulf W<br><br>Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc.<br><a href="http://feuerlabs.com/" target="_blank">http://feuerlabs.com</a><br><br><br><br>_______________________________________________<br>
erlang-questions mailing list<br><a href="mailto:erlang-questions@erlang.org" target="_blank">erlang-questions@erlang.org</a><br><a href="http://erlang.org/mailman/listinfo/erlang-questions" target="_blank">http://erlang.org/mailman/listinfo/erlang-questions</a><br>
</blockquote></div><br></div></div></div></blockquote></div></div></div><div><div><br><div>
<span style="text-indent:0px;letter-spacing:normal;font-variant:normal;text-align:-webkit-auto;font-style:normal;font-weight:normal;line-height:normal;border-collapse:separate;text-transform:none;font-size:medium;white-space:normal;font-family:Helvetica;word-spacing:0px"><div>
<div>Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc.</div><div><a href="http://feuerlabs.com/" target="_blank">http://feuerlabs.com</a></div></div><div><br></div></span><br>
</div>
<br></div></div></div><br>
<br></blockquote></div><br></div>
</blockquote></div></div></div><div><div class="h5"><br><div>
<div><div>Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc.</div><div><a href="http://feuerlabs.com" target="_blank">http://feuerlabs.com</a></div></div><div><br></div><br>
</div>
<br></div></div></div></blockquote></div><br></div>