<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi Gusti,</p>
<p>I would suggest creating a pool of N processes and a queue of
URLs to process. Every time a new URL is encountered, it is added
to the queue. A scheduler then picks up those URLs and
distributes them across the pool of processes. I would not
suggest creating a new process for each URL unless you can be
sure it doesn't lead to an explosion of processes, i.e. that the
number of URLs is limited.</p>
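<p>A minimal sketch of that pool-plus-queue idea (module, function,
and message names here are made up for illustration, not from any
library): a single dispatcher process owns the URL queue, and N
worker processes ask it for work, so the process count stays fixed
no matter how many URLs are discovered.</p>

```erlang
-module(url_pool).
-export([start/1, add_url/2]).

%% Start N workers plus one dispatcher that holds the URL queue.
start(N) ->
    Dispatcher = spawn(fun() -> dispatch(queue:new(), []) end),
    [spawn(fun() -> worker(Dispatcher) end) || _ <- lists:seq(1, N)],
    Dispatcher.

%% Called whenever a new URL is encountered (including by workers).
add_url(Dispatcher, Url) ->
    Dispatcher ! {add, Url}.

%% The dispatcher: queue of pending URLs, list of idle workers.
dispatch(Queue, [W | Rest] = _Waiting) when is_pid(W) ->
    receive
        {add, Url} ->
            W ! {url, Url},             % hand the URL to an idle worker
            dispatch(Queue, Rest);
        {ready, Worker} ->
            dispatch(Queue, [Worker, W | Rest])
    end;
dispatch(Queue, []) ->
    receive
        {add, Url} ->
            dispatch(queue:in(Url, Queue), []);
        {ready, Worker} ->
            case queue:out(Queue) of
                {{value, Url}, Rest} ->
                    Worker ! {url, Url},
                    dispatch(Rest, []);
                {empty, Q} ->
                    dispatch(Q, [Worker])   % nothing queued; worker waits
            end
    end.

%% A worker: ask for a URL, process it, repeat.
worker(Dispatcher) ->
    Dispatcher ! {ready, self()},
    receive
        {url, Url} ->
            %% fetch and parse Url here; newly found links
            %% would go back into the queue via add_url/2
            io:format("fetching ~s~n", [Url])
    end,
    worker(Dispatcher).
```

<p>In a real application you would put the dispatcher and workers
under a supervisor and track visited URLs to avoid re-queuing
duplicates, but the shape above is the core of it.</p>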
<p>Greg<br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 10/11/2019 10:07, I Gusti Ngurah Oka
Prinarjaya wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAMpPb5+_h19VkmsQz-=VUvgUN3RXXDf+NRevPMfpB2CVz7DxJg@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">Hi,
<div><br>
</div>
<div>Anyone? </div>
<div><br>
</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Sat, 9 Nov 2019
at 19:43, I Gusti Ngurah Oka Prinarjaya <<a
href="mailto:okaprinarjaya@gmail.com" moz-do-not-send="true">okaprinarjaya@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">Hi,
<div><br>
</div>
<div>I need to know the best practice for designing Erlang
processes for a website downloader. I don't need to do heavy
parsing of the websites like a scraper does; maybe I only
need to parse the URLs in <a href=".." /> tags. </div>
<div><br>
</div>
<div>What first came to my mind was to create N Erlang
processes under a supervisor, where N is the number of URLs
(<a href="..." /> tags) found in a website's pages. But I'm
not sure that's a good design, so I need recommendations
from those of you who have experience with this. </div>
<div><br>
</div>
<div>Thank you, I appreciate all of your time and attention</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
</div>
</blockquote>
</div>
</blockquote>
</body>
</html>