<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><span style="font-family:Arial,Helvetica,sans-serif">On Tue, Nov 24, 2020 at 6:40 PM Max Lapshin <<a href="mailto:max.lapshin@gmail.com">max.lapshin@gmail.com</a>> wrote:</span><br></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
> We analyse video streams with neural networks. It is almost impossible
> and useless to run these things in different processes.

Bandwidth usage is only part of the game; the other part is how large your machine learning models are. The vast majority of them are relatively low bandwidth, i.e. there isn't a whole lot of data going in, but they are heavy on compute. Not saying there aren't considerations for your use case, but it isn't the common one, especially with the deeper networks of today. Clearly a shallow model in a high-bandwidth scenario will suffer from shipping all that data across a process boundary. But with a low-bandwidth, deep model, or where latency isn't as important, keeping the model outside of the VM is generally the simplest approach to pull off.
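
To make "outside the VM" concrete, here is a minimal sketch of what that usually looks like on the BEAM: an Erlang port driving an external inference process over length-prefixed stdin/stdout. The module name, the model_server.py script and the 4-byte packet framing are assumptions for illustration, not anything from Max's setup.

-module(nn_port).
-export([start/0, infer/2]).

%% Spawn the external inference process. The model stays resident in that
%% OS process; the VM only ships inputs to it and reads results back.
start() ->
    open_port({spawn_executable, "/usr/bin/env"},
              [{args, ["python3", "model_server.py"]},   %% hypothetical script
               {packet, 4},      %% 4-byte length prefix on every message
               binary,
               exit_status]).

%% Send one input and wait for one result. Both sides are opaque binaries
%% here; in practice you would pick an encoding (term_to_binary, JSON, or a
%% fixed tensor layout) that the Python side understands.
infer(Port, InputBin) when is_binary(InputBin) ->
    port_command(Port, InputBin),
    receive
        {Port, {data, ResultBin}}     -> {ok, ResultBin};
        {Port, {exit_status, Status}} -> {error, {exited, Status}}
    after 5000 ->
        {error, timeout}
    end.

The Python side just reads the 4-byte-length-prefixed messages off stdin, runs the model, and writes results back the same way, so the heavy lifting (and any GPU state) never touches the Erlang heap; if the model code crashes, you get an exit_status message on the port instead of losing the node.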