xmerl, ets, (mnesia) Too many db tables
Ulf Wiger (AL/EAB)
ulf.wiger@REDACTED
Mon Oct 10 10:08:35 CEST 2005
xmerl creates an ets table per scan to keep DTD rules etc.
These tables should be deleted in the cleanup/1 function in xmerl_scan.erl.
A possibility is that you're starting more XML processing than you
can finish, and that you eventually accumulate too many rules tables
as a result.
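One quick way to check whether that is what's happening (my own
untested suggestion, not something xmerl provides) is to count the
rules tables from the Erlang shell while the system is under load:

    %% Rough diagnostic: count the ets tables named 'rules';
    %% each xmerl scan that hasn't been cleaned up owns one.
    length([T || T <- ets:all(), ets:info(T, name) =:= rules]).

If that number keeps growing, tables aren't being cleaned up as fast
as they are created.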
A quick fix is to set the environment variable ERL_MAX_ETS_TABLES
to a higher value (the default is 1400). You can set this to a much higher
value than that -- say 32000, or higher still. It will raise the ceiling, but
you could still run out of tables, of course.
(You should try to find a way to limit the load on your
system anyway.)
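For example, assuming a Unix-like shell (32000 is just an illustrative
value), either of these should do it:

    ERL_MAX_ETS_TABLES=32000 erl ...

or

    erl -env ERL_MAX_ETS_TABLES 32000 ...

The variable is read when the emulator starts, so it has to be set
before erl is launched (or passed via -env as above).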
Another option is to tell xmerl_scan not to use ets at all.
I haven't actually tested this, but it should work,
modulo minor bug fixes (... one of which is in xmerl_scan -- see
below):
Take the default rules read/write/delete functions from xmerl_scan:
    rules_write(Context, Name, Value, #xmerl_scanner{rules = T} = S) ->
        case ets:lookup(T, {Context, Name}) of
            [] ->
                ets:insert(T, {{Context, Name}, Value});
            _ ->
                ok
        end,
        S.

    rules_read(Context, Name, #xmerl_scanner{rules = T}) ->
        case ets:lookup(T, {Context, Name}) of
            [] ->
                undefined;
            [{_, V}] ->
                V
        end.

    rules_delete(Context, Name, #xmerl_scanner{rules = T}) ->
        ets:delete(T, {Context, Name}).
and write your own versions that use dict instead:
    my_rules_write(Context, Name, Value,
                   #xmerl_scanner{rules = Dict1} = S) ->
        Dict2 =
            case dict:find({Context, Name}, Dict1) of
                error ->
                    dict:store({Context, Name}, Value, Dict1);
                {ok, _} ->
                    Dict1
            end,
        S#xmerl_scanner{rules = Dict2}.

    my_rules_read(Context, Name, #xmerl_scanner{rules = Dict}) ->
        case dict:find({Context, Name}, Dict) of
            error -> undefined;
            {ok, V} -> V
        end.

    my_rules_delete(Context, Name, #xmerl_scanner{rules = Dict} = S) ->
        S#xmerl_scanner{rules = dict:erase({Context, Name}, Dict)}.
Then pass the following option to xmerl_scan for each invocation
(read fun, write fun, then the initial rules state):

    {rules, fun my_rules_read/3, fun my_rules_write/4, dict:new()}
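For example (untested, and "foo.xml" is just a placeholder), an
invocation could look something like:

    Opts = [{rules, fun my_rules_read/3, fun my_rules_write/4, dict:new()}],
    {Doc, _Rest} = xmerl_scan:file("foo.xml", Opts),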
Performance should be pretty much the same as before,
except xmerl_scan will no longer use ets tables.
... Of course, for this to actually work, one would have to
be able to also parameterize the rules_delete_fun the same way,
but there is no way to do that. :(
/Uffe
-----Original Message-----
From: owner-erlang-questions@REDACTED [mailto:owner-erlang-questions@REDACTED] On Behalf Of Sanjaya Vitharana
Sent: 10 October 2005 06:03
To: erlang-questions@REDACTED
Subject: xmerl, ets, (mnesia) Too many db tables
Hi ...!!!
I got the error below after running my system for about 30-45 min. I'm sure that I'm not using any database in my code, but the error refers to ** Too many db tables **. I searched the community replies and found that another person got a similar error when using mnesia, but in my case I don't use mnesia at all.
So ... I dug down into my XML server & parser and found that xmerl itself uses exactly the same call that the error shows (i.e. Tab = ets:new(rules, [set, public]) ).
This occurs when I try to increase the XML parsing load,
i.e. I have set my dialout thread to check the dialout calls every second (it was 15 seconds earlier), so each check has to parse an XML file (or parse_error.xml). Also, each of those dialout calls needs to parse 5-6 XML files to initiate a call.
Apart from that dialout handling, the whole call flow runs on XML and all those XML files have to be parsed, so the XML processing load in the system is high.
As I wrote above, the system runs properly for about 30-45 min without any problem and then gives the error below.
=ERROR REPORT==== 9-Oct-2005::17:41:43 ===
** Too many db tables **
***Failed to Parse parse_error.xml, {error,{'EXIT',
{system_limit,
[{ets,new,[rules,[set,public]]},
{xmerl_scan,initial_state,2},
{xmerl_scan,int_string,4},
{xmerl_scan,file,2},
{ivr_xml_psr,parseXMLFile_parse_error,1},
{ivr_xml_svr,do_passError,1},
{ivr_xml_svr,handle_info, 2},
{gen_server,handle_msg,6}]}}}
This continues several times, and then the system crashes, giving the error reports below.
=ERROR REPORT==== 9-Oct-2005::17:43:00 ===
Mnesia(omni_ivr@REDACTED): ** ERROR ** (ignoring core) ** FATAL **
mnesia_recover crashed:
    {system_limit,
     [{ets,new,[mnesia_transient_decision,[{keypos,2},set,public]]},
      {mnesia_recover,create_transient_decision,0},
      {mnesia_recover,do_allow_garb,0},
      {mnesia_recover,handle_cast,2},
      {gen_server,handle_msg,6},
      {proc_lib,init_p,5}]}
state: {state,<0.51.0>,undefined,undefined,undefined,0,true,[]}
=SUPERVISOR REPORT==== 9-Oct-2005::17:43:00 ===
Supervisor: {local,mnesia_kernel_sup}
Context: child_terminated
Reason: shutdown
Offender: [{pid,<0.52.0>},
{name,mnesia_monitor},
{mfa,{mnesia_monitor,start,[]}},
{restart_type,permanent},
{shutdown,3000},
{child_type,worker}]
=SUPERVISOR REPORT==== 9-Oct-2005::17:43:00 ===
Supervisor: {local,mnesia_kernel_sup}
Context: shutdown
Reason: reached_max_restart_intensity
Offender: [{pid,<0.52.0>},
{name,mnesia_monitor},
{mfa,{mnesia_monitor,start,[]}},
{restart_type,permanent},
{shutdown,3000},
{child_type,worker}]
-------------------------------------------------
=ERROR REPORT==== 9-Oct-2005::17:43:10 ===
** Generic server mnesia_monitor terminating
** Last message in was {'EXIT',<0.51.0>,killed}
** When Server state == {state,<0.51.0>,[],[],true,[],undefined,[]}
** Reason for termination ==
** killed
=CRASH REPORT==== 9-Oct-2005::17:43:10 ===
crasher:
pid: <0.52.0>
registered_name: mnesia_monitor
error_info: killed
initial_call: {gen,init_it,
[gen_server,
<0.51.0>,
<0.51.0>,
{local,mnesia_monitor},
mnesia_monitor,
[<0.51.0>],
[{timeout,infinity}]]}
ancestors: [mnesia_kernel_sup,mnesia_sup,<0.48.0>]
messages: []
links: [<0.20.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 21
reductions: 6172
neighbours:
=ERROR REPORT==== 9-Oct-2005::17:43:10 ===
Mnesia(omni_ivr@REDACTED): ** ERROR ** mnesia_event got unexpected event: {'EXIT',<0.53.0>,killed}
-----------------------------------------------------------------------------------
=CRASH REPORT==== 9-Oct-2005::17:43:10 ===
crasher:
pid: <0.50.0>
registered_name: mnesia_event
error_info: killed
initial_call: {gen,init_it,
[gen_event,
<0.49.0>,
<0.49.0>,
{local,mnesia_event},
[],
[],
[]]}
ancestors: [mnesia_sup,<0.48.0>]
messages: []
links: []
dictionary: []
trap_exit: true
status: running
heap_size: 233
stack_size: 21
reductions: 203
neighbours:
nms_adi_drv.c (239): stop
nms_adi_drv.c (476): finish
na_drv.c (558): stop
na_drv.c (2164): finish
ctaError: CTAERR_NOT_IMPLEMENTED
=CRASH REPORT==== 9-Oct-2005::17:43:10 ===
crasher:
pid: <0.47.0>
registered_name: []
error_info: killed
initial_call: {application_master,init,
[<0.5.0>,
<0.46.0>,
{appl_data,
mnesia,
[mnesia_dumper_load_regulator,
mnesia_event,
mnesia_fallback,
mnesia_controller,
mnesia_kernel_sup,
mnesia_late_loader,
mnesia_locker,
mnesia_monitor,
mnesia_recover,
mnesia_substr,
mnesia_sup,
mnesia_tm],
undefined,
{mnesia_sup,[]},
[mnesia,
mnesia_backup,
mnesia_bup,
mnesia_checkpoint,
mnesia_checkpoint_sup,
mnesia_controller,
mnesia_dumper,
mnesia_event,
mnesia_frag,
mnesia_frag_hash,
mnesia_frag_old_hash,
mnesia_index,
mnesia_kernel_sup,
mnesia_late_loader,
mnesia_lib,
mnesia_loader,
mnesia_locker,
mnesia_log,
mnesia_monitor,
mnesia_recover,
mnesia_registry,
mnesia_schema,
mnesia_snmp_hook,
mnesia_snmp_sup,
mnesia_subscr,
mnesia_sup,
mnesia_sp,
mnesia_text,
mnesia_tm],
[],
infinity,
infinity},
normal]}
ancestors: [<0.46.0>]
messages: []
links: [<0.5.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 987
stack_size: 21
reductions: 1484
neighbours:
=INFO REPORT==== 9-Oct-2005::17:43:12 ===
application: mnesia
exited: killed
type: permanent
=INFO REPORT==== 9-Oct-2005::17:43:12 ===
"Application preparing to stop."
module: ivr_app
=ERROR REPORT==== 9-Oct-2005::17:43:12 ===
"IVR_APP_FSM terminated"
shutdown
idle
...... Terminate all the 60 channels ..... same as this
=INFO REPORT==== 9-Oct-2005::17:43:12 ===
"Application stopped."
module: ivr_app
Is this an internal bug in xmerl? Why does this happen? What should I do to avoid it? What is the proper way to use xmerl in this kind of heavy XML parsing situation?
Thanks in advance,
Sanjaya Vitharana