Logging to one process from thousands: How does it work?
Mon Jan 9 20:35:31 CET 2006
On 9 Jan 2006, at 14:51, chandru wrote:
> Hi Sean,
> On 06/01/06, Sean Hinde <sean.hinde@REDACTED> wrote:
>> Hi Chandru,
>> On 5 Jan 2006, at 19:19, Sean Hinde wrote:
>>> You could introduce an additional accumulator process which stores
>>> log messages while waiting for a separate disk log owning process
>>> to write the current chunk. The protocol to the disk log owning
>>> process could be "send async log message, but don't send any more
>>> until the disk log process confirms that the write is done with a
>>> message back".
>> OK. How about something like what follows at the end of this mail. I
>> have made it spawn a new process to do the logging rather than have
>> an additional permanent process and sending a message, but only
>> because it is simpler for a proof of concept.
>> The idea is to start this as well as open the disk logs, but route
>> log writes via this.
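(The code Sean refers to isn't included in this archived message. A minimal sketch of the idea as described — a gate process that accumulates log terms while a freshly spawned writer pushes the previous chunk to the disk log, using the real `disk_log:log_terms/2` call; the module and function names here are made up for illustration:)

```erlang
%% Hypothetical sketch of the "spawn a writer per chunk" scheme.
%% Clients log asynchronously via the gate; the gate buffers terms
%% while a spawned writer holds the disk_log, and starts a new
%% writer for the accumulated chunk when the previous one reports done.
-module(log_gate).
-export([start/1, alog/2]).

start(Log) ->
    spawn(fun() -> idle(Log) end).

%% Asynchronous log call used by the thousands of client processes.
alog(Gate, Term) ->
    Gate ! {log, Term},
    ok.

%% No write in progress: the first term starts a writer immediately.
idle(Log) ->
    receive
        {log, Term} ->
            Writer = spawn_writer(Log, [Term]),
            busy(Log, Writer, [])
    end.

%% A write is in progress: accumulate terms until the writer confirms.
busy(Log, Writer, Acc) ->
    receive
        {log, Term} ->
            busy(Log, Writer, [Term | Acc]);
        {done, Writer} when Acc =:= [] ->
            idle(Log);
        {done, Writer} ->
            NewWriter = spawn_writer(Log, lists:reverse(Acc)),
            busy(Log, NewWriter, [])
    end.

%% The writer does the blocking disk_log call, reports back, and dies,
%% so its heap is reclaimed wholesale rather than garbage collected.
spawn_writer(Log, Terms) ->
    Gate = self(),
    spawn(fun() ->
                  ok = disk_log:log_terms(Log, Terms),
                  Gate ! {done, self()}
          end).
```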
> Thanks for this. I'll try to integrate it into our logger app and see
> how it behaves.
Great! Let us know how you get on.
It would be interesting to handle logging errors in some way as well
(although purely asynchronous logging normally just ignores them).
It would be Very Interesting Indeed to compare the "spawn a writer and
send the result when done" solution against one that uses an additional
permanent process with the same "send, then report the result when done"
protocol. It is by no means obvious to me that the spawn solution should
really be slower: each short-lived writer dies after its chunk, so its
heap is reclaimed wholesale and no regular garbage collections are
required.
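(For comparison, the permanent-process alternative would keep one long-lived writer instead of spawning one per chunk — a hypothetical sketch, with invented names, of just the writer loop:)

```erlang
%% Hypothetical permanent writer: a single long-lived process that
%% writes each chunk it is sent and confirms completion to the sender.
%% Unlike a per-chunk spawned writer, this process's heap survives
%% across chunks and is reclaimed by ordinary garbage collection.
writer_loop(Log) ->
    receive
        {write, From, Terms} ->
            ok = disk_log:log_terms(Log, Terms),
            From ! {done, self()},
            writer_loop(Log)
    end.
```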