<div dir="ltr"><div>It is said that thousands of processes can be spawned to do the similar task concurrently and Erlang is good at handling it. If there is more work to be done, we can simply and safely add more worker processes and that makes it scalable.</div>
<div><br></div><div>What I fail to understand is this: if the work performed by each worker is itself resource-intensive, how will Erlang be able to handle it? For instance, suppose entries are being made into a table by several sources, and an Erlang application, through its hundreds of processes, reads rows from that table and does something with them. This is likely to put a heavy burden on resources, since every worker will try to pull a record from the table.</div>
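<div><br></div><div>To make the scenario concrete, here is roughly what I have in mind (just a sketch of my own; the table name 'records' and process_row/1 are placeholders, not code from a real system):</div><div><br></div>
<div>-module(table_workers).</div>
<div>-export([start/1]).</div>
<div><br></div>
<div>%% Sketch only: 'records' is assumed to be a named, public ETS table</div>
<div>%% that other sources keep inserting {Key, Row} entries into.</div>
<div>start(NumWorkers) -></div>
<div>    lists:foreach(fun(_) -> spawn(fun worker/0) end,</div>
<div>                  lists:seq(1, NumWorkers)).</div>
<div><br></div>
<div>worker() -></div>
<div>    case ets:first(records) of</div>
<div>        '$end_of_table' -></div>
<div>            ok;                          %% nothing left to process</div>
<div>        Key -></div>
<div>            case ets:lookup(records, Key) of</div>
<div>                [{Key, Row}] -></div>
<div>                    ets:delete(records, Key),</div>
<div>                    process_row(Row),    %% the resource-intensive part</div>
<div>                    worker();</div>
<div>                [] -></div>
<div>                    worker()             %% another worker got there first</div>
<div>            end</div>
<div>    end.</div>
<div><br></div>
<div>process_row(_Row) -></div>
<div>    ok. %% placeholder for the actual per-row work</div>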
<div>If this is a bad example, consider a worker that has to perform a highly CPU-intensive computation in memory. Thousands of such workers running concurrently will overload the CPU.</div><div><br></div><div>Please correct my understanding of scalability in Erlang:</div>
<div>1. Erlang processes get time slices of the CPU only if there is work available for them; OS processes, on the other hand, get time slices regardless of whether they are idle (see the sketch below these points).</div><div>2. The startup and shutdown time of an Erlang process is much shorter than that of an OS process.</div>
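<div><br></div><div>Regarding the first point, is the following roughly the right mental model? (Again only a sketch I put together; the module and function names are made up.) Workers blocked in receive should cost nothing until they are handed work:</div><div><br></div>
<div>-module(idle_pool).</div>
<div>-export([start/1]).</div>
<div><br></div>
<div>%% Sketch only: spawn N workers that block in receive. A process</div>
<div>%% waiting in receive takes no scheduler time until a message arrives.</div>
<div>start(N) -></div>
<div>    lists:foreach(fun(_) -> spawn(fun idle_worker/0) end,</div>
<div>                  lists:seq(1, N)).</div>
<div><br></div>
<div>idle_worker() -></div>
<div>    receive</div>
<div>        {work, Fun} -></div>
<div>            Fun(),           %% run the piece of work we were handed</div>
<div>            idle_worker();</div>
<div>        stop -></div>
<div>            ok</div>
<div>    end.</div>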
<div><br></div><div>Apart from the above two points, is there something else about Erlang that makes it scalable?</div><div><br></div><div>Thanks,</div><div>Melvyn</div><div><br></div></div>