[erlang-questions] driver_create_port overhead and driver_caller
Tue Jun 18 16:00:12 CEST 2013
On 18 Jun 2013, at 14:35, Lukas Larsson wrote:
> On Tue, Jun 18, 2013 at 1:39 PM, Tim Watson <watson.timothy@REDACTED> wrote:
> On 14 Jun 2013, at 11:19, QDev wrote:
> > Is there another way? Would it be better in outputv to put the binary/ErlIOVec directly into the driver queue with driver_enqv or driver_enq_bin? But then the port data lock has to be used to synchronise reads/writes to the driver queue anyway, which means more contention when many concurrent threads are working, no? So I was thinking it would be better to use a mutex per long-running thread instead.
> I've not used the driver queue much in the past, so I'm not sure about that.
> The good thing about using the driver queue is that if there is data left and a flush callback is implemented, you can make sure that data is handled before closing the port, so that you don't lose things you might still need. If you do notice a lot of contention on the pdl, you can of course roll your own more fine-grained locking and just stick a byte of data in the driver queue in order to trigger flushes.
That sounds like a workable approach; I'll look into it.
> > Also, is it considered necessary to pool worker threads manually? In previous releases, using the async thread pool could have a detrimental effect on I/O performance - is this still the case?
> ISTR reading that the async threads used by drivers are now separate from those used by the I/O drivers internally.
> File I/O uses the same async threads as drivers. In fact, the I/O drivers do not use any functionality that is not available to any other driver. So if you have long-running jobs in the async pool, they could interfere with the latency of file I/O.
Long-running jobs will use their own thread pool, then.