disk_log:blog/2 --- synchronous???
Gerald Biederbeck
Gerald.Biederbeck@REDACTED
Wed Mar 28 16:40:49 CEST 2001
Hi,
I'm trying to write data to (redundant) disks using the 'disk_log' module.
I open the log as a 'distributed log', i.e.
   disk_log:open([ ...,
                   {notify, true},
                   {distributed, ['xxx@REDACTED', 'yyy@REDACTED']},
                   ... ])
The disk_log_server process is running on both machines.
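For reference, a complete open call of this kind might look roughly as
follows; the log name, file name, type and format below are placeholders
I made up for illustration, not my real configuration:

    %% Minimal sketch of the open call; name, file, type and format
    %% are made-up placeholders.
    open_log() ->
        disk_log:open([{name, my_log},
                       {file, "my_log.LOG"},
                       {type, halt},
                       {format, external},
                       {notify, true},
                       {distributed, ['xxx@REDACTED', 'yyy@REDACTED']}]).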
When using the 'blog/2' function to write data, the documentation says
that this call is synchronous... but I think it isn't!!!
If I do a Solaris 'chmod' on a certain LogFile and then try to write to
this file, no error is returned... instead my process receives an
error_status message as described for the asynchronous notification
feature (which is nice but not what I want).
Receiving an asynchronous {error_status, ...} is not sufficient for me...
Is there a -real- synchronous way to ensure that the 'blog/2' call
succeeded?
My application must not call 'blog/2' again before it is certain that the
previous write succeeded!
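What I am after is something along the following lines: a write call that
only returns once the data is known to be safely on disk. disk_log:sync/1
looks like a step in that direction, but I do not know whether it gives
this guarantee for a distributed log:

    %% Sketch of what I would like (not verified for distributed logs):
    %% write one item and only return once it is known to be on disk.
    write_checked(Log, Bin) ->
        case disk_log:blog(Log, Bin) of
            ok    -> disk_log:sync(Log);   % ok | {error, Reason}
            Error -> Error
        end.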
I tried waiting for these {error_status, ...} messages, but they are only
sent when the status changes. If the status remains, let's say,
{error_status, ok}, no notification is sent and I will wait... for how
long... forever???
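For clarity, what I tried looks roughly like this; the problem is picking
Timeout, because getting no message can mean either "everything is fine"
or just "nothing has been reported yet":

    %% Roughly what I tried: wait for a status notification after a write.
    %% Notify messages have the form {disk_log, Node, Log, Info}.
    wait_for_status(Log, Timeout) ->
        receive
            {disk_log, _Node, Log, {error_status, Status}} ->
                {notified, Status}
        after Timeout ->
            %% No notification -- good news, or just silence?
            no_notification
        end.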
Sorry, I hope someone understands my problem;
maybe there is a quite simple solution to it.
Perhaps someone can tell me why there is no synchronous functionality in
OTP's disk_log when it comes to distributed logs...
Is anyone else interested in such functionality, or am I the only one???
Thanx for any comment...
Cheers
/Gerry