How to best use a distributed disk_log?

Shawn Pearce <>
Sun Feb 8 02:47:55 CET 2004


I'm having trouble figuring out the manual for disk_log.

What I'm trying to do is set up a 3-node network:

	 - log to local disk
	 - log to local disk (mirror)
	 - log to remote

I want to open a disk_log under one name and have two copies of
the log created, one on  and one on .  I want  to be 'diskless'
and have its logs sent to  and .

I'm trying to set up a "safe" log, in the sense that I keep two copies
on two different machines, and my network of diskless nodes can safely
send their data off to the two mirrors.  I don't want to use Mnesia, for
many reasons, one of which is that if  is down, I still want to be able
to log to .  When  comes back up, I don't expect the logs to be
merged or anything, but I do expect all new data to be sent to both
 and .

I've tried playing with the {distributed, Nodes} option to
disk_log:open, but I just keep getting weird states in my network.
This is what I think I should be doing:

	:
		disk_log:open([{name, n}, {file, "mylog1"},
			{distributed, ['']}])

	:
		disk_log:open([{name, n}, {file, "mylog2"},
			{distributed, ['']}])

	:
		disk_log:open([{name, n}, {distributed, ['', '']}])
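One thing that might help with debugging: if I'm reading the manual
right, an open with the distributed option returns a pair of per-node
results rather than a plain {ok, Log}, so the diskless node can at
least see which mirrors actually accepted the open.  A sketch of what
I mean (the node names mirror1@host and mirror2@host are made-up
placeholders, and I'm assuming the {OkNodes, BadNodes} return shape
described in the manual):

```erlang
%% Sketch only: assumes mirror1@host and mirror2@host are connected
%% nodes, and that a distributed open returns {OkNodes, BadNodes}.
{Ok, Bad} = disk_log:open([{name, n},
                           {distributed, ['mirror1@host', 'mirror2@host']}]),
%% Ok  :: [{Node, {ok, Log}}]       -- nodes where the log opened
%% Bad :: [{Node, {error, Reason}}] -- nodes where the open failed
case Bad of
    []    -> ok;
    _Else -> error_logger:warning_msg("disk_log open failed on: ~p~n", [Bad])
end,
%% Logging then goes through the one distributed name as usual.
ok = disk_log:log(n, {myapp, some_event}).
```

If that's right, then at least the partial-failure case (one mirror
down at open time) is visible instead of silent.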

But I'm seeing weird network splits, and data doesn't always end up
where I expect.  Also, what if  isn't available when  tries to
open the log?  Even if I detect this and skip adding it to the
distributed list, how do I add it back later once the log is open?
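As far as I can tell from the manual, a distributed log can be opened
on additional nodes after the fact by simply calling disk_log:open
again with the extra node in the distributed list, so maybe the
recovery path is just to re-run open once the missing node is back.
Something like this, with a placeholder node name (mirror2@host) since
I haven't verified the semantics:

```erlang
%% Sketch: re-attach a mirror that was down at first open.
%% 'mirror2@host' is a placeholder; run this from the diskless node
%% once the mirror is reachable again.
pong = net_adm:ping('mirror2@host'),
{_Ok, _Bad} = disk_log:open([{name, n}, {file, "mylog2"},
                             {distributed, ['mirror2@host']}]).
```

Whether existing writers on the other nodes automatically start
replicating to the re-attached node is exactly the part I'm unsure
about.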

I'm going to explore this more tonight, but I'm doing it by trial
and error...  :-)

-- 
Shawn.



More information about the erlang-questions mailing list