<div dir="auto">Got it. Thanks Max</div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Wed 25 nov. 2020 - 20:53, Max Lapshin <<a href="mailto:max.lapshin@gmail.com">max.lapshin@gmail.com</a>> wrote :<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)">We use caffe + tensor-rt for video processing neural networks. They<br>
are tightly integrated with our video decoding code: it is very<br>
important to run all this in a single process to save memory.<br>
<br>
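
In Erlang terms that means a NIF (or linked-in driver) boundary: decoded
frames are handed to the inference engine without ever leaving the OS
process. This is not our real code, just a minimal sketch with made-up
names (video_nn, load_engine/1, analyze_frame/2); the shared library
behind it would be built against the Caffe/TensorRT libraries:

%% Hypothetical sketch of a NIF-backed inference module.
-module(video_nn).

-export([load_engine/1, analyze_frame/2]).
-on_load(init/0).

init() ->
    %% The NIF library is built separately and linked against TensorRT/Caffe.
    erlang:load_nif("./priv/video_nn", 0).

%% Load a serialized engine/model, returning an opaque resource handle.
load_engine(_Path) ->
    erlang:nif_error(nif_not_loaded).

%% Run inference on one decoded frame (passed as a binary). On the C side
%% this would be registered as a dirty NIF so long-running inference does
%% not block the normal schedulers.
analyze_frame(_Engine, _FrameBin) ->
    erlang:nif_error(nif_not_loaded).
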
Flussonic unpacks the protocols into frames, the card decodes the video,
processes it and sends the output. All of this is done in a single
process, or in one process per card.
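
The per-card arrangement can be pictured as one gen_server per GPU that
owns both the decoder and the engine, so frames stay inside that OS
process. Again only a sketch, not the real Flussonic code; card_pipeline
and video_decoder are made-up names standing in for the NIF-backed parts:

%% Hypothetical per-card pipeline: one gen_server per GPU card.
-module(card_pipeline).
-behaviour(gen_server).

-export([start_link/1, push_packet/2]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link(CardId) ->
    gen_server:start_link(?MODULE, CardId, []).

%% Demuxed packets from the protocol layer are pushed into the pipeline.
push_packet(Pid, Packet) ->
    gen_server:cast(Pid, {packet, Packet}).

init(CardId) ->
    %% video_decoder and video_nn stand in for the real NIF-backed modules.
    {ok, Decoder} = video_decoder:open(CardId),
    {ok, Engine}  = video_nn:load_engine("model.plan"),
    {ok, #{decoder => Decoder, engine => Engine}}.

handle_cast({packet, Packet}, #{decoder := Decoder, engine := Engine} = State) ->
    Frame      = video_decoder:decode(Decoder, Packet),
    _Detection = video_nn:analyze_frame(Engine, Frame),
    %% Sending the output (frames plus detections) downstream is omitted here.
    {noreply, State}.

handle_call(_Request, _From, State) ->
    {reply, ok, State}.
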
It is not easy to debug; running a separate process is easier and more
reliable, of course.
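
For that case the usual Erlang shape is an external worker driven over a
port, one length-prefixed binary per frame. Again just a sketch; the
infer_worker program is a made-up name:

%% Hypothetical sketch of the separate-process variant: inference lives in
%% an external program and the VM talks to it through a port.
-module(ext_infer).
-export([start/0, analyze/2]).

%% Spawn the worker; {packet, 4} length-prefixes every message both ways.
start() ->
    open_port({spawn_executable, "./infer_worker"},
              [{packet, 4}, binary, exit_status]).

%% Send one encoded frame and wait for the worker's answer (called from
%% the port owner's process).
analyze(Port, FrameBin) ->
    true = port_command(Port, FrameBin),
    receive
        {Port, {data, Result}}      -> {ok, Result};
        {Port, {exit_status, Code}} -> {error, {worker_exit, Code}}
    after 5000 ->
        {error, timeout}
    end.
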
On Tue, Nov 24, 2020 at 9:07 PM Frank Muller <frank.muller.erl@gmail.com> wrote:
>
> Hi Max
>
> Interesting... Can you shed some light on how you integrate the neural networks with Flussonic, from a design perspective?
>
> Do you use an external AI library for that?
>
> /Frank
>
> On Tue, Nov 24, 2020 at 18:40, Max Lapshin <max.lapshin@gmail.com> wrote:
>>
>> > I'm not sure I would like to have TensorFlow running inside the Erlang VM in the first place.
>>
>> We analyse video streams with neural networks. It is almost impossible
>> and useless to run these things in separate processes.
>>
>> Everything runs in the same process because of the enormous data
>> streams, so it is absolutely OK to run all of this inside the Erlang VM =)
>>
>>
>> However, if your traffic is small, it may be OK to split things into separate processes.