<div dir="ltr">Hi Janos, <div><br></div><div style>Indeed, in that case transactions could be considered succeeded while actual storage operation failed...</div><div style><br></div><div style>The thing is, If mnesia needs to write data to a store (in this case dets), it doesn't check for results for this operation. Actually, it does even catch exceptions. (mnesia_tm:do_update/4)<br>
As dets has a maximum file size of 2GB, I believe the table gets corrupted once it grows past that. (Somebody correct me, please.)
After that, any operation on that dets table will return the bad_object error.
So the commit is considered successful, mnesia logs it, and the transaction returns {atomic, ok}.

sync_transaction won't help, because it only waits for the commit to be synced/logged; it doesn't check the result of the store operation either.
So you could end up in a situation where a dirty write operation returns an error, while the same write in a transaction returns {atomic, ok}:

> dets:insert(table1, SomeRecord).
{error,{{bad_object,read_buckets},"/tmp/table1.DAT"}}
> mnesia:dirty_write(SomeRecord).
{error,{{bad_object,read_buckets},"/tmp/table1.DAT"}}
> mnesia:transaction(fun() -> mnesia:write(SomeRecord) end).
{atomic,ok}

You could try to detect this in your transaction by adding an operation that will fail (like read, first, last, etc.), but that won't protect you against corrupting the file. Something like the sketch below.
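Just a sketch of the idea; the table and record are placeholders, and I haven't verified which operations are guaranteed to touch the dets file:

    %% Write, then force an operation that has to read the dets file.
    %% On a corrupted table, first/1 should exit with bad_object and
    %% abort the transaction instead of returning {atomic, ok}.
    checked_write(Tab, Rec) ->
        mnesia:transaction(
          fun() ->
                  ok = mnesia:write(Rec),
                  _ = mnesia:first(Tab),
                  ok
          end).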
The question is: should mnesia detect this kind of error, and if yes, what should it do about it?
Also, can dets protect against the corruption somehow? (Reject any operation that would push the file size past 2GB?)
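Until something like that exists, the best I can think of is a crude guard on the application side. A sketch, assuming a disc_only_copies table (for which mnesia:table_info(Tab, memory) is documented to return bytes on disk) and an arbitrary safety margin:

    -define(DETS_MAX, 2*1024*1024*1024).    %% 2GB dets file size limit
    -define(MARGIN, 64*1024*1024).          %% arbitrary safety margin

    %% Refuse writes when the table is close to the dets limit.
    %% Note: the check is racy (the table can grow between the check
    %% and the write), so the margin has to be generous.
    guarded_write(Tab, Rec) ->
        case mnesia:table_info(Tab, memory) of
            Bytes when is_integer(Bytes), Bytes < ?DETS_MAX - ?MARGIN ->
                mnesia:transaction(fun() -> mnesia:write(Rec) end);
            _ ->
                {error, table_too_large}
        end.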
Best regards,
Ahmed

On Tue, Aug 27, 2013 at 5:06 PM, Janos Hary <janos.n.hary@gmail.com> wrote:
> Thanks, but no luck. It behaves the same with sync_transaction.
<span class="HOEnZb"><font color="#888888"><br>
Janos<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
> -----Original Message-----
> From: hawk.mattsson@gmail.com [mailto:hawk.mattsson@gmail.com] On Behalf Of
> Håkan Mattsson
> Sent: Tuesday, August 27, 2013 4:02 PM
> To: Janos Hary
> Cc: erlang-questions
> Subject: Re: [erlang-questions] mnesia silent failure and table corruption
>
> On Tue, Aug 27, 2013 at 3:05 PM, Janos Hary <janos.n.hary@gmail.com> wrote:
> > All,
> >
> > I started to write records into a table from a simple test function.
> > When the table size reaches 2GB, write transactions still report
> > success, but they are silently ignored. The table became unusable,
> > read operations return errors, and mnesia cannot recover the table. How
> > shall I avoid such a situation?
>
> Try using mnesia:sync_transaction() instead of mnesia:transaction().
> Then Mnesia will use synchronized calls to disk_log (and synchronized
> commits when several nodes are involved).
>
> /Håkan