<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 16, 2015 at 5:03 PM, Eric des Courtis <span dir="ltr"><<a href="mailto:eric.des.courtis@benbria.ca" target="_blank">eric.des.courtis@benbria.ca</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>For example Python has this cost model <a href="http://scripts.mit.edu/~6.006/fall07/wiki/index.php?title=Python_Cost_Model" target="_blank">http://scripts.mit.edu/~6.006/fall07/wiki/index.php?title=Python_Cost_Model</a> .</div></blockquote></div><br>The cost model has to be evaluated against memory read times. An instruction which is in the L1 cache or perhaps even in the register bank is almost never going to give a realistic view of runtimes. So beware of simply summing these things. You need to plot the frequency distribution function to guarantee a stable number, and you should also plot the curve as data grows. Often you have cliffs where you start hitting L3 or DRAM.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Also, processor intercommunication tend to be "costly". But if you want to get more than a single core doing work, you need to move work to other cores.<br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">J.</div>
--
J.