Best practice for Erlang's process design to build a website-downloader (super simple crawler)

I Gusti Ngurah Oka Prinarjaya okaprinarjaya@REDACTED
Sat Nov 9 13:43:52 CET 2019


Hi,

I need to know the best practice for Erlang process design to build a
website downloader. I don't need heavy parsing of the website like a
scraper does; I probably only need to parse the URLs in <a href="..."> tags.

What had just come to my mind was to create N Erlang processes under a
supervisor, where N is the number of <a href="..."> URLs found in a
website's pages. But I'm not sure that's a good design, so I need
recommendations from those of you who have experience with it.
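For concreteness, the one-process-per-URL idea could be sketched with a simple_one_for_one supervisor that spawns a transient worker for each discovered URL. This is only a rough sketch, not a recommendation; the module and function names (crawler_sup, crawler_worker) are hypothetical, and the worker just fetches the page with httpc from OTP's inets application, leaving link extraction as a stub:

```erlang
%% Hypothetical sketch: one short-lived worker per URL under a
%% simple_one_for_one supervisor. In a real project these two
%% modules would live in separate files.
-module(crawler_sup).
-behaviour(supervisor).
-export([start_link/0, start_worker/1, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Spawn one transient worker per discovered URL.
start_worker(Url) ->
    supervisor:start_child(?MODULE, [Url]).

init([]) ->
    SupFlags = #{strategy => simple_one_for_one,
                 intensity => 5,
                 period => 10},
    Child = #{id => crawler_worker,
              start => {crawler_worker, start_link, []},
              restart => transient,
              shutdown => 5000,
              type => worker},
    {ok, {SupFlags, [Child]}}.


-module(crawler_worker).
-export([start_link/1, run/1]).

start_link(Url) ->
    {ok, spawn_link(?MODULE, run, [Url])}.

run(Url) ->
    %% httpc ships with OTP (inets must be started first).
    {ok, {_Status, _Headers, Body}} = httpc:request(get, {Url, []}, [], []),
    %% Extract href="..." values from Body here (regex or an HTML
    %% parser), then call crawler_sup:start_worker/1 for each one.
    ok.
```

With this shape the supervisor only restarts workers that crash (restart => transient), and finished downloads exit normally without being restarted.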

Thank you, I appreciate all of your time and attention.


More information about the erlang-questions mailing list