I don't need transactions, and all the data is only on one node, but as Ulf pointed out, wouldn't an ets:lookup also result in a completely new copy of the long record, and hence, as far as the CPU cycles spent on data copying are concerned, be the same as message passing?
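To make sure we're comparing the same two alternatives, here is a minimal sketch of what I mean (the table name user_record, the message tags, and do_compute/1 are made up for illustration; I'm assuming a ram_copies table):

-module(copy_compare).
-export([broadcast_record/2, broadcast_key/2, worker_loop/0]).

%% Alternative 1: read the record once here and send the whole record.
%% A local send copies the message into each receiver's heap, so the
%% long record is still copied once per worker.
broadcast_record(Pids, Key) ->
    [Rec] = mnesia:dirty_read(user_record, Key),
    [Pid ! {compute_record, Rec} || Pid <- Pids],
    ok.

%% Alternative 2: send only the key; each worker does its own lookup.
%% Note that ets:lookup/2 (and mnesia:dirty_read/2) also copy the stored
%% object into the calling process's heap, so the per-worker copy of the
%% record is not avoided; only the message itself gets smaller.
broadcast_key(Pids, Key) ->
    [Pid ! {compute_key, Key} || Pid <- Pids],
    ok.

worker_loop() ->
    receive
        {compute_record, Rec} ->
            do_compute(Rec),
            worker_loop();
        {compute_key, Key} ->
            [Rec] = mnesia:dirty_read(user_record, Key),
            do_compute(Rec),
            worker_loop()
    end.

%% Placeholder for the real per-user computation.
do_compute(_Rec) ->
    ok.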
On Mon, Apr 27, 2009 at 3:46 PM, mats cronqvist <masse@kreditor.se> wrote:

> Amit Murthy <amit.murthy@gmail.com> writes:
>
> > Hi,
> >
> > I have an optimization question. Consider the following scenario:
> >
> > - I have, say, around 40,000 long-running processes. Each represents an online
> >   user and keeps some user-related data in the process dictionary.
> > - I need each of the processes to perform some computation against a long
> >   record that I'll send as a message.
> > - The long record (around 40 fields, mostly short strings) is also available
> >   in an mnesia RAM-only table.
> >
> > So the question is:
> >
> > Is it better to read the record from mnesia once and send the complete record
> > to each of the 40,000 processes,
> >
> > or
> >
> > is it better to just send the key in the message and have each of the
> > processes do an mnesia lookup?
>
> if you don't need transaction semantics, AND you really need the CPU
> cycles, doing an ets:lookup in the mnesia RAM-only table is a good
> option.
>
> mats
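In case it helps to measure rather than guess, here is a rough timer:tc micro-benchmark sketch of the two approaches. It assumes a ram_copies table named user_record; since a ram_copies table is backed by an ets table of the same name, ets:lookup/2 works directly on it, with mnesia:dirty_read/2 as the more conservative equivalent. Everything below is illustrative, not measured:

-module(lookup_bench).
-export([run/2]).

%% Spawn N throwaway receivers, then time (a) sending the full record to
%% each of them versus (b) sending only the key and letting each receiver
%% call ets:lookup/2 on the mnesia ram_copies table itself.
run(N, Key) ->
    [Rec] = mnesia:dirty_read(user_record, Key),
    Pids = [spawn_link(fun loop/0) || _ <- lists:seq(1, N)],
    RecUs = time_round(Pids, {rec, Rec}),
    KeyUs = time_round(Pids, {key, Key}),
    [Pid ! stop || Pid <- Pids],
    {microseconds, [{send_full_record, RecUs}, {send_key_and_lookup, KeyUs}]}.

%% Send Msg to every pid and wait for every ack, so the receivers'
%% ets:lookup work is included in the measurement.
time_round(Pids, Msg) ->
    Self = self(),
    {Us, ok} = timer:tc(fun() ->
                            [Pid ! {Self, Msg} || Pid <- Pids],
                            [receive {Pid, done} -> ok end || Pid <- Pids],
                            ok
                        end),
    Us.

loop() ->
    receive
        {From, {rec, _Rec}} ->
            From ! {self(), done},
            loop();
        {From, {key, Key}} ->
            _ = ets:lookup(user_record, Key),
            From ! {self(), done},
            loop();
        stop ->
            ok
    end.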