[erlang-questions] inconsistent results with fragmented table
Noah Schwartz
noah.schwartz1@REDACTED
Sat Jul 6 15:17:39 CEST 2013
I can give this a shot. So you think that at some point while adding fragments, items were put into the wrong fragment, causing the deletes/reads to fail? Qlc works since it's essentially scanning the whole table?
How can I avoid this problem when adding fragments in the future?
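
In the meantime, one thing I can do to confirm the theory is read each fragment table directly (bypassing mnesia_frag) and see where the record actually lives. Something like this, untested, reusing the key from my test module further down:

Key = {Owner, With, DayUtc},
FragNames = mnesia:activity(transaction,
                            fun() -> mnesia:table_info(archive_message313, frag_names) end,
                            [], mnesia_frag),
%% dirty-read the key from every fragment; it should turn up in exactly one,
%% presumably not the one that a hashed read would go to
[{Frag, mnesia:dirty_read(Frag, Key)} || Frag <- FragNames].
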
Sent from my iPhone
On Jul 5, 2013, at 3:12 PM, Chandru <chandrashekhar.mullaparthi@REDACTED> wrote:
> Sounds like the redistribution of data among fragments hasn't happened for whatever reason. One way to fix this is as follows:
>
> - Take a list of all the keys (assuming there aren't millions of them)
> - Figure out if it is in the correct fragment using functions in mnesia_frag_hash.erl
> - If not, read the record, delete it from the fragment it currently sits in, and re-insert it through mnesia_frag (sketched below)
>
> If taking a list of all the keys isn't practical, I suggest:
>
> - take a backup
> - write a script which traverses the backup and extracts all the keys which are in the wrong fragments
> - write another script to move said records from the wrong fragment to the correct one
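>
> Something like the following (an untested sketch) might do the per-key check and move. It assumes the default hash module mnesia_frag_hash; fix_key/2 is just a hypothetical helper:
>
> fix_key(Tab, Key) ->
>     Info = fun(Item) ->
>                    mnesia:activity(transaction,
>                                    fun() -> mnesia:table_info(Tab, Item) end,
>                                    [], mnesia_frag)
>            end,
>     HashState = Info(hash_state),
>     FragNames = Info(frag_names),  %% fragment 1 is Tab itself, then Tab_frag2, ...
>     N = mnesia_frag_hash:key_to_frag_number(HashState, Key),
>     Correct = lists:nth(N, FragNames),
>     %% look for the key in every other fragment, accessing the fragment
>     %% tables directly (i.e. not through mnesia_frag), and move it
>     Move = fun(Frag) ->
>                    mnesia:activity(transaction,
>                                    fun() ->
>                                            case mnesia:read(Frag, Key, write) of
>                                                [Rec] ->
>                                                    mnesia:delete(Frag, Key, write),
>                                                    mnesia:write(Correct, Rec, write);
>                                                [] ->
>                                                    ok
>                                            end
>                                    end)
>            end,
>     lists:foreach(Move, FragNames -- [Correct]).
>
> The direct reads and writes on the individual fragment tables work because each fragment is created with the base table's record name.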
>
> cheers
> Chandru
>
> On 5 July 2013 20:03, Noah Schwartz <noah.schwartz1@REDACTED> wrote:
>> This was originally an un-fragmented table that I converted to a fragmented table. I converted by calling change_table_frag with activate and then adding my fragments.
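>>
>> In code terms the conversion was roughly this shape (a sketch from memory; the exact fragment properties and node list are omitted):
>>
>> mnesia:change_table_frag(archive_message313, {activate, []}),
>> %% then one call per extra fragment wanted, e.g.
>> mnesia:change_table_frag(archive_message313, {add_frag, NodePool}).
>>
>> where NodePool is the list of nodes the fragments are spread over.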
>>
>>
>> On Fri, Jul 5, 2013 at 2:58 PM, Chandru <chandrashekhar.mullaparthi@REDACTED> wrote:
>>> How were these records written to the table in the first place? I've seen this happen when you write to the table without using mnesia_frag as the activity callback module.
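>>>
>>> To illustrate (a sketch, with Rec standing in for whatever record you store): a plain transaction writes only to the base table, i.e. fragment 1, whereas going through mnesia_frag hashes the key to pick the fragment:
>>>
>>> %% bypasses fragmentation - everything lands in the base table
>>> mnesia:transaction(fun() -> mnesia:write(Rec) end),
>>>
>>> %% hash-aware - mnesia_frag picks the fragment from the record's key
>>> mnesia:activity(transaction, fun() -> mnesia:write(Rec) end, [], mnesia_frag).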
>>>
>>> cheers
>>> Chandru
>>>
>>> On 5 July 2013 16:07, Noah Schwartz <noah.schwartz1@REDACTED> wrote:
>>>> I have a table of type set that worked fine as a non-fragmented table. We started seeing performance issues with the table, and with more clients expected in the system, we were worried its size would exceed the 2 GB maximum.
>>>>
>>>> The table contains chat messages, and we purge records older than 7 days. There seem to be a number of keys that don't show up in a read activity but do show up in a table dump, a select, or a qlc query. As such, when the purge policy runs, it finds records that don't actually get deleted when I try to delete them. It almost seems like read/delete are working off one set while qlc is working off another.
>>>>
>>>> Code and output below. As you can see, for a given key, read returns nothing, qlc returns an object, and delete reports ok. Running the code again gives the same results. Am I using the fragmentation API incorrectly somehow?
>>>>
>>>> Thanks in advance
>>>>
>>>> Code:
>>>> -module(sel).
>>>> -include_lib("stdlib/include/qlc.hrl").
>>>> -record(archive_message313, {owner_with_day_utc, owner, with, day_gregorian_seconds, messages = []}).
>>>> -export([run/0]).
>>>>
>>>> run() ->
>>>>     Owner = {"9afa8671-7e88-436c-9ad4-4cb13ad45e1e", "dj.barker.xxx.com"},
>>>>     With = {"d735280e-d263-4e2b-beff-ed8ccaca5535", "dj.barker.xxx.com"},
>>>>     DayUtc = {2013, 6, 28},
>>>>     K = {Owner, With, DayUtc},
>>>>
>>>>     DelFun = fun() -> mnesia:delete({archive_message313, K}) end,
>>>>     ReadFun = fun() -> mnesia:read({archive_message313, K}) end,
>>>>
>>>>     QlcFun = fun() ->
>>>>                      Q = qlc:q([M || M <- mnesia:table(archive_message313),
>>>>                                      M#archive_message313.owner_with_day_utc == K]),
>>>>                      qlc:eval(Q)
>>>>              end,
>>>>
>>>>     Read = mnesia:activity(transaction, ReadFun, [], mnesia_frag),
>>>>     Qlc = mnesia:activity(transaction, QlcFun, [], mnesia_frag),
>>>>     Del = mnesia:activity(transaction, DelFun, [], mnesia_frag),
>>>>
>>>>     {read_result, Read, qlc_result, Qlc, del_result, Del}.
>>>>
>>>> Output:
>>>> {read_result,[],qlc_result,
>>>> [{archive_message313,{{"9afa8671-7e88-436c-9ad4-4cb13ad45e1e",
>>>> "dj.barker.xxx.com"},
>>>> {"d735280e-d263-4e2b-beff-ed8ccaca5535",
>>>> "dj.barker.xxx.com"},
>>>> {2013,6,28}},
>>>> {"9afa8671-7e88-436c-9ad4-4cb13ad45e1e",
>>>> "dj.barker.xxx.com"},
>>>> {"d735280e-d263-4e2b-beff-ed8ccaca5535",
>>>> "dj.barker.xxx.com"},
>>>> 63539596800,
>>>> [{{{2013,6,28},{13,36,34}},
>>>> {xmlelement,"forwarded",
>>>> [{"xmlns","urn:xmpp:forward:0"}],
>>>> [{xmlelement,"delay",
>>>> [{"xmlns","urn:xmpp:delay"},
>>>> {"stamp","2013-06-28T13:36:34.000000Z"}],
>>>> []},
>>>> {xmlelement,"message",
>>>> [{"to",
>>>> "9afa8671-7e88-436c-9ad4-4cb13ad45e1e@REDACTED"},
>>>> {"from",
>>>> "d735280e-d263-4e2b-beff-ed8ccaca5535@REDACTED"},
>>>> {"type","chat"}],
>>>> [{xmlelement,"body",[],[...]}]}]},
>>>> 1372426594.889164},
>>>> {{{2013,6,28},{13,36,35}},
>>>> {xmlelement,"forwarded",
>>>> [{"xmlns","urn:xmpp:forward:0"}],
>>>> [{xmlelement,"delay",
>>>> [{"xmlns","urn:xmpp:delay"},
>>>> {"stamp","2013-06-28T13:36:35.000000Z"}],
>>>> []},
>>>> {xmlelement,"message",
>>>> [{"to",
>>>> "9afa8671-7e88-436c-9ad4-4cb13ad45e1e@REDACTED"},
>>>> {"from",
>>>> "d735280e-d263-4e2b-beff-ed8ccaca5535@REDACTED"},
>>>> {"type",[...]}],
>>>> [{xmlelement,"body",[],...}]}]},
>>>> 1372426595.124266},
>>>> {{{2013,6,28},{13,36,35}},
>>>> {xmlelement,"forwarded",
>>>> [{"xmlns","urn:xmpp:forward:0"}],
>>>> [{xmlelement,"delay",
>>>> [{"xmlns","urn:xmpp:delay"},
>>>> {"stamp","2013-06-28T13:36:35.000000Z"}],
>>>> []},
>>>> {xmlelement,"message",
>>>> [{"to",
>>>> "9afa8671-7e88-436c-9ad4-4cb13ad45e1e@REDACTED"},
>>>> {"from",[...]},
>>>> {[...],...}],
>>>> [{xmlelement,[...],...}]}]},
>>>> 1372426595.337535}]}],
>>>> del_result,ok}
>>>> --
>>>> Noah
>>>>
>>
>>
>>
>> --
>> Noah
>