5 Miscellaneous Mnesia Features
The earlier chapters of this User Guide described how to get started with Mnesia, and how to build a Mnesia database. In this chapter, we will describe the more advanced features available when building a distributed, fault tolerant Mnesia database. This chapter contains the following sections:
- Indexing
- Distribution and Fault Tolerance
- Table fragmentation
- Local content tables
- Disc-less nodes
- More about schema management
- Mnesia event handling
- Debugging a Mnesia application
- Concurrent Processes in Mnesia
- Object Based Programming with Mnesia
- Prototyping
5.1 Indexing
Data retrieval and matching can be performed very efficiently if we know the key for the record. Conversely, if the key is not known, all records in a table must be searched. The larger the table, the more time consuming this becomes. To remedy this problem, Mnesia's indexing capabilities can be used to improve data retrieval and matching of records.
The following two functions manipulate indexes on existing tables:
mnesia:add_table_index(Tab, AttributeName) -> {aborted, R} | {atomic, ok}
mnesia:del_table_index(Tab, AttributeName) -> {aborted, R} | {atomic, ok}
These functions create or delete a table index on the field defined by AttributeName. To illustrate this, we add an index to the table definition (employee, {emp_no, name, salary, sex, phone, room_no}), which is the example table from the Company database. The function which adds an index on the salary field can be expressed in the following way:

mnesia:add_table_index(employee, salary)
The indexing capabilities of Mnesia are utilized with the following three functions, which retrieve and match records on the basis of index entries in the database:

mnesia:index_read(Tab, SecondaryKey, AttributeName) -> transaction abort | RecordList
- Avoids an exhaustive search of the entire table by looking up SecondaryKey in the index to find the primary keys.

mnesia:index_match_object(Pattern, AttributeName) -> transaction abort | RecordList
- Avoids an exhaustive search of the entire table by looking up the secondary key in the index to find the primary keys. The secondary key is found in the AttributeName field of Pattern. The secondary key must be bound.

mnesia:match_object(Pattern) -> transaction abort | RecordList
- Uses indices to avoid an exhaustive search of the entire table. Unlike the two functions above, this function may utilize any index as long as the secondary key is bound.

These functions are further described and exemplified in Chapter 4: Pattern matching and in the Mnemosyne User's Guide, Database Queries.
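For example, with the salary index added above, all employees with a given salary can be retrieved without a full table scan. A minimal sketch (the record definition is the one from the Company database; the function name is our own):

-record(employee, {emp_no, name, salary, sex, phone, room_no}).

employees_with_salary(Salary) ->
    F = fun() ->
            %% index_read uses the index on the salary field
            %% instead of scanning the whole table
            mnesia:index_read(employee, Salary, #employee.salary)
        end,
    mnesia:transaction(F).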
5.2 Distribution and Fault Tolerance
Mnesia is a distributed, fault tolerant DBMS. It is possible to replicate tables on different Erlang nodes in a variety of ways. The Mnesia programmer does not have to state where the different tables reside, only the names of the different tables are specified in the program code. This is known as "location transparency" and it is an important concept. In particular:
- A program will work regardless of the location of the data. It makes no difference whether the data resides on the local node, or on a remote node. Note: The program will run slower if the data is located on a remote node.
- The database can be reconfigured, and tables can be moved between nodes. These operations do not affect the user programs.
We have previously seen that each table has a number of system attributes, such as index and type. Table attributes are specified when the table is created. For example, the following function will create a new table with two RAM replicas:

mnesia:create_table(foo,
                    [{ram_copies, [N1, N2]},
                     {attributes, record_info(fields, foo)}]).

Tables can also have the following properties, where each attribute has a list of Erlang nodes as its value.
ram_copies
- The value of the attribute is a list of Erlang nodes, and a RAM replica of the table will reside on each node in the list. Note that no disc operations are performed when a program executes write operations to these replicas. However, should permanent RAM replicas be a requirement, then the following alternatives are available:
  - The mnesia:dump_tables/1 function can be used to dump RAM table replicas to disc.
  - The table replicas can be backed up, either from RAM, or from disc if dumped there with the above function.

disc_copies
- The value of the attribute is a list of Erlang nodes, and a replica of the table will reside both in RAM and on disc on each node in the list. Write operations addressed to the table will address both the RAM and the disc copy of the table.

disc_only_copies
- The value of the attribute is a list of Erlang nodes, and a replica of the table will reside only as a disc copy on each node in the list. The major disadvantage of this type of table replica is the access speed. The major advantage is that the table does not occupy space in memory.

It is also possible to set and change table properties on existing tables. Refer to Chapter 3: Defining the Schema for full details.
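As a sketch of how these storage types combine (node names are placeholders), the following call creates a table with a disc replica on one node and a pure RAM replica on another:

mnesia:create_table(foo,
                    [{disc_copies, [a@host]},
                     {ram_copies, [b@host]},
                     {attributes, [key, val]}]).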
There are basically two reasons for using more than one table replica: fault tolerance, or speed. It is worthwhile to note that table replication provides a solution to both of these system requirements.
If we have two active table replicas, all information is still available if one of the replicas fails. This can be a very important property in many applications. Furthermore, if a table replica exists at two specific nodes, applications which execute at either of these nodes can read data from the table without accessing the network. Network operations are considerably slower and consume more resources than local operations.
It can be advantageous to create table replicas for a distributed application which reads data often, but writes data seldom, in order to achieve fast read operations on the local node. The major disadvantage with replication is the increased time to write data. If a table has two replicas, every write operation must access both table replicas. Since one of these write operations must be a network operation, it is considerably more expensive to perform a write operation to a replicated table than to a non-replicated table.
5.3 Table Fragmentation
5.3.1 The Concept
A concept of table fragmentation has been introduced in order to cope with very large tables. The idea is to split a table into several more manageable fragments. Each fragment is implemented as a first class Mnesia table and may be replicated, have indices, etc. as any other table. But the tables may neither have local_content nor have the snmp connection activated.

In order to be able to access a record in a fragmented table, Mnesia must determine to which fragment the actual record belongs. This is done by the mnesia_frag module, which implements the mnesia_access callback behaviour. Please read the documentation about mnesia:activity/4 to see how mnesia_frag can be used as a mnesia_access callback module.

At each record access, mnesia_frag first computes a hash value from the record key. Secondly, the name of the table fragment is determined from the hash value. Finally, the actual table access is performed by the same functions as for non-fragmented tables. When the key is not known beforehand, all fragments are searched for matching records. The following session illustrates how an existing Mnesia table is converted to a fragmented table and how more fragments are added later on:

Eshell V4.7.3.3 (abort with ^G)
(a@sam)1> mnesia:start().
ok
(a@sam)2> mnesia:system_info(running_db_nodes).
[b@sam,c@sam,a@sam]
(a@sam)3> Tab = dictionary.
dictionary
(a@sam)4> mnesia:create_table(Tab, [{ram_copies, [a@sam, b@sam]}]).
{atomic,ok}
(a@sam)5> Write = fun(Keys) -> [mnesia:write({Tab,K,-K}) || K <- Keys], ok end.
#Fun<erl_eval>
(a@sam)6> mnesia:activity(sync_dirty, Write, [lists:seq(1, 256)], mnesia_frag).
ok
(a@sam)7> mnesia:change_table_frag(Tab, {activate, []}).
{atomic,ok}
(a@sam)8> mnesia:table_info(Tab, frag_properties).
[{base_table,dictionary},
 {foreign_key,undefined},
 {n_doubles,0},
 {n_fragments,1},
 {next_n_to_split,1},
 {node_pool,[a@sam,b@sam,c@sam]}]
(a@sam)9> Info = fun(Item) -> mnesia:table_info(Tab, Item) end.
#Fun<erl_eval>
(a@sam)10> Dist = mnesia:activity(sync_dirty, Info, [frag_dist], mnesia_frag).
[{c@sam,0},{a@sam,1},{b@sam,1}]
(a@sam)11> mnesia:change_table_frag(Tab, {add_frag, Dist}).
{atomic,ok}
(a@sam)12> Dist2 = mnesia:activity(sync_dirty, Info, [frag_dist], mnesia_frag).
[{b@sam,1},{c@sam,1},{a@sam,2}]
(a@sam)13> mnesia:change_table_frag(Tab, {add_frag, Dist2}).
{atomic,ok}
(a@sam)14> Dist3 = mnesia:activity(sync_dirty, Info, [frag_dist], mnesia_frag).
[{a@sam,2},{b@sam,2},{c@sam,2}]
(a@sam)15> mnesia:change_table_frag(Tab, {add_frag, Dist3}).
{atomic,ok}
(a@sam)16> Read = fun(Key) -> mnesia:read({Tab, Key}) end.
#Fun<erl_eval>
(a@sam)17> mnesia:activity(transaction, Read, [12], mnesia_frag).
[{dictionary,12,-12}]
(a@sam)18> mnesia:activity(sync_dirty, Info, [frag_size], mnesia_frag).
[{dictionary,64},
 {dictionary_frag2,64},
 {dictionary_frag3,64},
 {dictionary_frag4,64}]
(a@sam)19>

5.3.2 Fragmentation Properties
There is a table property called frag_properties, which may be read with mnesia:table_info(Tab, frag_properties). The fragmentation properties are a list of tagged tuples of arity 2. By default the list is empty, but when it is non-empty it triggers Mnesia to regard the table as fragmented. The fragmentation properties are:

{n_fragments, Int}
- n_fragments regulates how many fragments the table currently has. This property may explicitly be set at table creation and later be changed with {add_frag, NodesOrDist} or del_frag. n_fragments defaults to 1.

{node_pool, List}
- The node pool contains a list of nodes and may explicitly be set at table creation and later be changed with {add_node, Node} or {del_node, Node}. At table creation Mnesia tries to distribute the replicas of each fragment evenly over all the nodes in the node pool. Hopefully all nodes will end up with the same number of replicas. node_pool defaults to the return value of mnesia:system_info(db_nodes).

{n_ram_copies, Int}
- Regulates how many ram_copies replicas each fragment should have. This property may explicitly be set at table creation. The default is 0, but if n_disc_copies and n_disc_only_copies are also 0, n_ram_copies defaults to 1.

{n_disc_copies, Int}
- Regulates how many disc_copies replicas each fragment should have. This property may explicitly be set at table creation. The default is 0.

{n_disc_only_copies, Int}
- Regulates how many disc_only_copies replicas each fragment should have. This property may explicitly be set at table creation. The default is 0.

{foreign_key, ForeignKey}
- ForeignKey may either be the atom undefined or the tuple {ForeignTab, Attr}, where Attr denotes an attribute which should be interpreted as a key in another fragmented table named ForeignTab. Mnesia will ensure that the number of fragments in this table and in the foreign table are always the same. When fragments are added or deleted, Mnesia will automatically propagate the operation to all fragmented tables that have a foreign key referring to this table. Instead of using the record key to determine which fragment to access, the value of the Attr field is used. This feature makes it possible to automatically co-locate records in different tables on the same node. foreign_key defaults to undefined. However, if the foreign key is set to something else, it will cause the default values of the other fragmentation properties to be the same as the actual fragmentation properties of the foreign table. The session below illustrates how two fragmented tables can be co-located with a foreign key:
Eshell V4.7.3.3 (abort with ^G)
(a@sam)1> mnesia:start().
ok
(a@sam)2> PrimProps = [{n_fragments, 7}, {node_pool, [node()]}].
[{n_fragments,7},{node_pool,[a@sam]}]
(a@sam)3> mnesia:create_table(prim_dict, [{frag_properties, PrimProps},
(a@sam)3> {attributes, [prim_key, prim_val]}]).
{atomic,ok}
(a@sam)4> SecProps = [{foreign_key, {prim_dict, sec_val}}].
[{foreign_key,{prim_dict,sec_val}}]
(a@sam)5> mnesia:create_table(sec_dict, [{frag_properties, SecProps},
(a@sam)5> {attributes, [sec_key, sec_val]}]).
{atomic,ok}
(a@sam)6> Write = fun(Rec) -> mnesia:write(Rec) end.
#Fun<erl_eval>
(a@sam)7> PrimKey = 11.
11
(a@sam)8> SecKey = 42.
42
(a@sam)9> mnesia:activity(sync_dirty, Write, [{prim_dict, PrimKey, -11}], mnesia_frag).
ok
(a@sam)10> mnesia:activity(sync_dirty, Write, [{sec_dict, SecKey, PrimKey}], mnesia_frag).
ok
(a@sam)11> mnesia:change_table_frag(prim_dict, {add_frag, [node()]}).
{atomic,ok}
(a@sam)12> SecRead = fun(PrimKey, SecKey) -> mnesia:read({sec_dict, PrimKey}, SecKey, read) end.
#Fun<erl_eval>
(a@sam)13> mnesia:activity(transaction, SecRead, [PrimKey, SecKey], mnesia_frag).
[{sec_dict,42,11}]
(a@sam)14> Info = fun(Tab, Item) -> mnesia:table_info(Tab, Item) end.
#Fun<erl_eval>
(a@sam)15> mnesia:activity(sync_dirty, Info, [prim_dict, frag_size], mnesia_frag).
[{prim_dict,0},
 {prim_dict_frag2,0},
 {prim_dict_frag3,0},
 {prim_dict_frag4,1},
 {prim_dict_frag5,0},
 {prim_dict_frag6,0},
 {prim_dict_frag7,0},
 {prim_dict_frag8,0}]
(a@sam)16> mnesia:activity(sync_dirty, Info, [sec_dict, frag_size], mnesia_frag).
[{sec_dict,0},
 {sec_dict_frag2,0},
 {sec_dict_frag3,0},
 {sec_dict_frag4,1},
 {sec_dict_frag5,0},
 {sec_dict_frag6,0},
 {sec_dict_frag7,0},
 {sec_dict_frag8,0}]
(a@sam)17>

5.3.3 Management of Fragmented Tables
The function mnesia:change_table_frag(Tab, Change) is intended to be used for reconfiguration of fragmented tables. The Change argument should have one of the following values:

{activate, FragProps}
- Activates the fragmentation properties of an existing table. FragProps should either contain {node_pool, Nodes} or be empty.

deactivate
- Deactivates the fragmentation properties of a table. The number of fragments must be 1. No other table may refer to this table in its foreign key.

{add_frag, NodesOrDist}
- Adds one new fragment to a fragmented table. All records in one of the old fragments will be rehashed, and about half of them will be moved to the new (last) fragment. All other fragmented tables which refer to this table in their foreign key will automatically get a new fragment, and their records will also be dynamically rehashed in the same manner as for the main table.
  The NodesOrDist argument may either be a list of nodes or the result from mnesia:table_info(Tab, frag_dist). The NodesOrDist argument is assumed to be a sorted list with the best nodes to host new replicas first. The new fragment will get the same number of replicas as the first fragment (see n_ram_copies, n_disc_copies and n_disc_only_copies). The NodesOrDist list must contain at least one element for each replica that needs to be allocated.

del_frag
- Deletes one fragment from a fragmented table. All records in the last fragment will be moved to one of the other fragments. All other fragmented tables which refer to this table in their foreign key will automatically lose their last fragment, and their records will also be dynamically rehashed in the same manner as for the main table.

{add_node, Node}
- Adds a new node to the node_pool. The new node pool will affect the list returned from mnesia:table_info(Tab, frag_dist).

{del_node, Node}
- Deletes a node from the node_pool. The new node pool will affect the list returned from mnesia:table_info(Tab, frag_dist).
5.3.4 Extensions of Existing Functions
The function mnesia:create_table/2 is used to create a brand new fragmented table, by setting the table property frag_properties to some proper values.

The function mnesia:delete_table/1 is used to delete a fragmented table including all its fragments. There must however not exist any other fragmented tables which refer to this table in their foreign key.

The function mnesia:table_info/2 now understands the frag_properties item.

If the function mnesia:table_info/2 is invoked in the activity context of the mnesia_frag module, information about several new items may be obtained:
base_table
- The name of the fragmented table.

n_fragments
- The actual number of fragments.

node_pool
- The pool of nodes.

n_ram_copies
n_disc_copies
n_disc_only_copies
- The number of replicas with storage type ram_copies, disc_copies and disc_only_copies, respectively. The actual values are dynamically derived from the first fragment. The first fragment serves as a prototype, and when the actual values need to be computed (e.g. when adding new fragments) they are simply determined by counting the number of replicas of each storage type. This means that when the functions mnesia:add_table_copy/3, mnesia:del_table_copy/2 and mnesia:change_table_copy_type/3 are applied to the first fragment, this will affect the settings of n_ram_copies, n_disc_copies and n_disc_only_copies.

foreign_key
- The foreign key.

foreigners
- All other tables that refer to this table in their foreign key.

frag_names
- The names of all fragments.

frag_dist
- A sorted list of {Node, Count} tuples, sorted in increasing Count order. Count is the total number of replicas that this fragmented table hosts on each Node. The list always contains at least all nodes in the node_pool. Nodes which do not belong to the node_pool will be put last in the list, even if their Count is lower.

frag_size
- A list of {Name, Size} tuples, where Name is the name of a fragment and Size is how many records it contains.

frag_memory
- A list of {Name, Memory} tuples, where Name is the name of a fragment and Memory is how much memory it occupies.

size
- The total size of all fragments.

memory
- The total memory of all fragments.
5.3.5 Load Balancing
There are several algorithms for distributing records in a fragmented table evenly over a pool of nodes. No one is best, it simply depends of the application needs. Here follows some examples of situations which may need some attention:
permanent change of nodes
when a new permanentdb_node
is introduced or dropped, it may be time to change the pool of nodes and re-distribute the replicas evenly over the new pool of nodes. It may also be time to add or delete a fragment before the replicas are re-distributed.
size/memory threshold
when the total size or total memory of a fragmented table (or a single fragment) exceeds some application specific threshold, it may be time to dynamically add a new fragment in order obtain a better distribution of records.
temporary node down
when a node temporarily goes down it may be time to compensate some fragments with new replicas in order to keep the desired level of redundancy. When the node comes up again it may be time to remove the superfluous replica.
overload threshold
when the load on some node is exceeds some application specific threshold, it may be time to either add or move some fragment replicas to nodes with lesser load. Extra care should be taken if the table has a foreign key relation to some other table. In order to avoid severe performance penalties, the same re-distribution must be performed for all of the related tables.Use
mnesia:change_table_frag/2
to add new fragments and apply the usual schema manipulation functions (such asmnesia:add_table_copy/3
,mnesia:del_table_copy/2
andmnesia:change_table_copy_type/2
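For the size/memory threshold case, a sketch might look as follows. The threshold value and function name are hypothetical; everything else reuses the frag_size, frag_dist and add_frag mechanisms described above:

%% Hypothetical application specific threshold.
-define(MAX_FRAG_SIZE, 10000).

maybe_grow(Tab) ->
    FragSizes = mnesia:activity(sync_dirty,
                                fun() -> mnesia:table_info(Tab, frag_size) end,
                                [], mnesia_frag),
    case lists:any(fun({_Frag, Size}) -> Size > ?MAX_FRAG_SIZE end, FragSizes) of
        false ->
            ok;
        true ->
            %% Let frag_dist pick the least loaded nodes for the new fragment
            Dist = mnesia:activity(sync_dirty,
                                   fun() -> mnesia:table_info(Tab, frag_dist) end,
                                   [], mnesia_frag),
            mnesia:change_table_frag(Tab, {add_frag, Dist})
    end.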
5.4 Local content tables
Replicated tables have the same content on all nodes where they are replicated. However, it is sometimes advantageous to have tables with the same name but different content on different nodes.
If we specify the attribute {local_content, true} when we create the table, the table will reside on the nodes where we specify that the table shall exist, but the write operations on the table will only be performed on the local copy. Furthermore, when the table is initialized at start-up, the table will only be initialized locally, and the table content will not be copied from another node.
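A minimal sketch (the table name and attributes are our own example):

mnesia:create_table(local_stats,
                    [{local_content, true},
                     {ram_copies, [node()]},
                     {attributes, [key, val]}]).

Each node that holds a copy of local_stats maintains its own private content; writes on one node are never replicated to the others.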
5.5 Disc-less nodes
It is possible to run Mnesia on nodes that do not have a disc. It is of course not possible to have replicas of either disc_copies or disc_only_copies on such nodes. This is especially troublesome for the schema table, since Mnesia needs the schema in order to initialize itself.

The schema table may, as other tables, reside on one or more nodes. The storage type of the schema table may either be disc_copies or ram_copies (not disc_only_copies). At start-up Mnesia uses its schema to determine with which nodes it should try to establish contact. If any of the other nodes are already started, the starting node merges its table definitions with the table definitions brought from the other nodes. This also applies to the definition of the schema table itself. The application parameter extra_db_nodes contains a list of nodes which Mnesia also should establish contact with, besides the ones found in the schema. The default value is the empty list [].

Hence, when a disc-less node needs to find the schema definitions from a remote node on the network, we need to supply this information through the application parameter -mnesia extra_db_nodes NodeList. Without this configuration parameter set, Mnesia will start as a single node system. It is also possible to use mnesia:change_config/2 to assign a value to extra_db_nodes and force a connection after Mnesia has been started, i.e. mnesia:change_config(extra_db_nodes, NodeList).

The application parameter schema_location controls where Mnesia will search for its schema. The parameter may be one of the following atoms:
disc
- Mandatory disc. The schema is assumed to be located in the Mnesia directory. If the schema cannot be found, Mnesia refuses to start.

ram
- Mandatory RAM. The schema resides in RAM only. At start-up a tiny new schema is generated. This default schema contains just the definition of the schema table and only resides on the local node. Since no other nodes are found in the default schema, the configuration parameter extra_db_nodes must be used in order to let the node share its table definitions with other nodes. (The extra_db_nodes parameter may also be used on disc-full nodes.)

opt_disc
- Optional disc. The schema may reside on either disc or RAM. If the schema is found on disc, Mnesia starts as a disc-full node (the storage type of the schema table is disc_copies). If no schema is found on disc, Mnesia starts as a disc-less node (the storage type of the schema table is ram_copies). The default value for the application parameter is opt_disc.
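For the ram case above, a disc-less node might, as a sketch (node and host names are placeholders), be started like this; the same effect can be achieved after start-up with mnesia:change_config(extra_db_nodes, ['a@host']):

% erl -sname ramnode -mnesia schema_location ram -mnesia extra_db_nodes "['a@host']"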
When the schema_location is set to opt_disc, the function mnesia:change_table_copy_type/3 may be used to change the storage type of the schema. This is illustrated below:

1> mnesia:start().
ok
2> mnesia:change_table_copy_type(schema, node(), disc_copies).
{atomic, ok}

Assuming that the call to mnesia:start did not find any schema to read on disc, Mnesia started as a disc-less node, and was then changed into a node that utilizes the disc to locally store the schema.

5.6 More schema management
It is possible to add and remove nodes from a Mnesia system. This can be done by adding a copy of the schema to those nodes.
The functions mnesia:add_table_copy/3 and mnesia:del_table_copy/2 may be used to add and delete replicas of the schema table. Adding a node to the list of nodes where the schema is replicated will affect two things. First, it allows other tables to be replicated to this node. Secondly, it will cause Mnesia to try to contact the node at start-up of disc-full nodes.

The function call mnesia:del_table_copy(schema, mynode@host) deletes the node 'mynode@host' from the Mnesia system. The call fails if Mnesia is running on 'mynode@host'. The other Mnesia nodes will never try to connect to that node again. Note: if there is a disc resident schema on the node 'mynode@host', the entire Mnesia directory should be deleted. This can be done with mnesia:delete_schema/1. If Mnesia is started again on the node 'mynode@host' and the directory has not been cleared, Mnesia's behaviour is undefined.

If the storage type of the schema is ram_copies, i.e. we have a disc-less node, Mnesia will not use the disc on that particular node. Disc usage is enabled by changing the storage type of the schema table to disc_copies.

New schemas are created explicitly with mnesia:create_schema/1 or implicitly by starting Mnesia without a disc resident schema. Whenever a table (including the schema table) is created, it is assigned its own unique cookie. The schema table is not created with mnesia:create_table/2 as normal tables are.

At start-up Mnesia connects different nodes to each other; they then exchange table definitions, and the table definitions are merged. During the merge procedure Mnesia performs a sanity test to ensure that the table definitions are compatible with each other. If a table exists on several nodes, the cookie must be the same, otherwise Mnesia will shut down one of the nodes. This unfortunate situation will occur if a table has been created on two nodes independently of each other while they were disconnected. To solve the problem, one of the tables must be deleted (as the cookies differ, we regard them as two different tables even if they happen to have the same name).
Merging different versions of the schema table does not always require the cookies to be the same. If the storage type of the schema table is disc_copies, the cookie is immutable, and all other db_nodes must have the same cookie. When the schema is stored as type ram_copies, its cookie can be replaced with a cookie from another node (ram_copies or disc_copies). The cookie replacement (during merge of the schema table definition) is performed each time a RAM node connects to another node.
mnesia:system_info(schema_location) and mnesia:system_info(extra_db_nodes) may be used to determine the actual values of schema_location and extra_db_nodes, respectively. mnesia:system_info(use_dir) may be used to determine whether Mnesia is actually using the Mnesia directory. use_dir may be determined even before Mnesia is started. The function mnesia:info/0 may now be used to print out some system information even before Mnesia is started. When Mnesia is started, the function prints out more information.

Transactions which update the definition of a table require that Mnesia is started on all nodes where the storage type of the schema is disc_copies. All replicas of the table on these nodes must also be loaded. There are a few exceptions to these availability rules: tables may be created, and new replicas may be added, without starting all of the disc-full nodes. New replicas may also be added before all other replicas of the table have been loaded; it suffices that one other replica is active.
5.7 Mnesia event handling
System events and table events are the two categories of events that Mnesia will generate in various situations.
5.7.1 System events
The system events are detailed below:
{mnesia_up, Node}
- Mnesia has been started on a node. Node is the name of the node. By default this event is ignored.
{mnesia_down, Node}
- Mnesia has been stopped on a node. Node is the name of the node. By default this event is ignored.
{mnesia_checkpoint_activated, Checkpoint}
- A checkpoint with the name Checkpoint has been activated and the current node is involved in the checkpoint. Checkpoints may be activated explicitly with mnesia:activate_checkpoint/1, or implicitly at backup, when adding table replicas, at internal transfer of data between nodes, etc. By default this event is ignored.

{mnesia_checkpoint_deactivated, Checkpoint}
- A checkpoint with the name Checkpoint has been deactivated and the current node was involved in the checkpoint. Checkpoints may explicitly be deactivated with mnesia:deactivate_checkpoint/1, or implicitly when the last replica of a table (involved in the checkpoint) becomes unavailable, e.g. at node down. By default this event is ignored.
{mnesia_overload, Details}
- Mnesia on the current node is overloaded and the subscriber should take action.
  A typical overload situation occurs when the applications are performing more updates on disc resident tables than Mnesia is able to handle. Ignoring this kind of overload may lead to a situation where the disc space is exhausted (regardless of the size of the tables stored on disc).
  Each update is appended to the transaction log and occasionally (depending on how it is configured) dumped to the table files. The table file storage is more compact than the transaction log storage, especially if the same record is updated over and over again. If the thresholds for dumping the transaction log have been reached before the previous dump was finished, an overload event is triggered.
  Another typical overload situation is when the transaction manager cannot commit transactions at the same pace as the applications are performing updates of disc resident tables. When this happens, the message queue of the transaction manager will continue to grow until the memory is exhausted or the load decreases.
  The same problem may occur for dirty updates. The overload is detected locally on the current node, but its cause may be on another node. Application processes may cause heavy loads if any tables are residing on other nodes (replicated or not). By default this event is reported to the error_logger.
{inconsistent_database, Context, Node}
- Mnesia regards the database as potentially inconsistent and gives its applications a chance to recover from the inconsistency, e.g. by installing a consistent backup as fallback and then restarting the system, or by picking a MasterNode from mnesia:system_info(db_nodes) and invoking mnesia:set_master_nodes([MasterNode]). By default an error is reported to the error logger.
{mnesia_fatal, Format, Args, BinaryCore}
- Mnesia has encountered a fatal error and will (in a short period of time) be terminated. The reason for the fatal error is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default it will be sent to the error_logger. BinaryCore is a binary containing a summary of Mnesia's internal state at the time when the fatal error was encountered. By default the binary is written to a file with a unique name in the current directory. On RAM nodes the core is ignored.
{mnesia_info, Format, Args}
- Mnesia has detected something that may be of interest when debugging the system. This is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default this event is printed with io:format/2.
{mnesia_error, Format, Args}
- Mnesia has encountered an error. The reason for the error is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default this event is reported to the error_logger.
{mnesia_user, Event}
- An application has invoked the function mnesia:report_event(Event). Event may be any Erlang data structure. When tracing a system of Mnesia applications, it is useful to be able to interleave Mnesia's own events with application related events that give information about the application context. Whenever the application starts with a new and demanding Mnesia activity, or enters a new and interesting phase in its execution, it may be a good idea to use mnesia:report_event/1.
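For example, an application might mark the start of a demanding bulk load, so that the marker shows up interleaved with Mnesia's own events (the event term below is application defined and purely illustrative):

mnesia:report_event({my_app, bulk_load_started}).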
5.7.2 Table events
Another category of events are table events, which are events related to table updates. Table events are tuples on the form {Oper, Record, ActivityId}, where Oper is the operation performed, Record is the record involved in the operation, and ActivityId is the identity of the transaction performing the operation. Note that the record name is the table name even when the record_name table property has another setting. The table related events that may occur are:
{write, NewRecord, ActivityId}
- A new record has been written. NewRecord contains the new value of the record.

{delete_object, OldRecord, ActivityId}
- A record has possibly been deleted with mnesia:delete_object/1. OldRecord contains the value of the old record, as stated as argument by the application. Note that other records with the same key may remain in the table if it is of type bag.

{delete, {Tab, Key}, ActivityId}
- One or more records have possibly been deleted. All records with the key Key in the table Tab have been deleted.
It is possible for user processes to subscribe to the events generated by Mnesia. We have the following two functions:

mnesia:subscribe(EventCategory)
- Ensures that a copy of all events of type EventCategory is sent to the calling process.

mnesia:unsubscribe(EventCategory)
- Removes the subscription on events of type EventCategory.

EventCategory may either be the atom system or the tuple {table, Tab}. The subscribe functions activate a subscription of events. The events are delivered as messages to the process evaluating the mnesia:subscribe/1 function. The syntax is {mnesia_system_event, Event} for system events and {mnesia_table_event, Event} for table events. The meaning of the system and table events is described above.
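A minimal sketch of a subscriber process (the function names and printouts are our own; on success mnesia:subscribe/1 returns an ok tuple):

watch(Tab) ->
    {ok, _Node} = mnesia:subscribe({table, Tab}),
    loop().

loop() ->
    receive
        {mnesia_table_event, {write, NewRecord, _ActivityId}} ->
            io:format("wrote: ~p~n", [NewRecord]),
            loop();
        {mnesia_table_event, Event} ->
            io:format("table event: ~p~n", [Event]),
            loop()
    end.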
All system events are subscribed to by Mnesia's gen_event handler. The default gen_event handler is mnesia_event, but it may be changed by using the application parameter event_module. The value of this parameter must be the name of a module implementing a complete handler, as specified by the gen_event module in STDLIB. mnesia:system_info(subscribers) and mnesia:table_info(Tab, subscribers) may be used to determine which processes are subscribed to various events.

5.8 Debugging Mnesia applications
Debugging a Mnesia application can be difficult for a number of reasons, primarily related to difficulties in understanding how the transaction and table load mechanisms work. Another source of confusion may be the semantics of nested transactions.
We may set the debug level of Mnesia by calling:

mnesia:set_debug_level(Level)

where the parameter Level is one of:
none
- no trace outputs at all. This is the default.
verbose
- activates tracing of important debug events. These debug events will generate {mnesia_info, Format, Args} system events. Processes may subscribe to these events with mnesia:subscribe/1. The events are always sent to Mnesia's event handler.
debug
- activates all events at the verbose level plus traces of all debug events. These debug events will generate {mnesia_info, Format, Args} system events. Processes may subscribe to these events with mnesia:subscribe/1. The events are always sent to Mnesia's event handler. At this debug level Mnesia's event handler starts subscribing to updates in the schema table.
trace
- activates all events at the debug level. At this debug level Mnesia's event handler starts subscribing to updates on all Mnesia tables. This level is intended only for debugging small toy systems, since many large events may be generated.
false
- is an alias for none.
true
- is an alias for debug.
The debug level of Mnesia itself is also an application parameter, making it possible to start an Erlang system with Mnesia debugging turned on already in the initial start-up phase:

% erl -mnesia debug verbose

5.9 Concurrent Processes in Mnesia
Programming concurrent Erlang systems is the subject of a separate book. However, it is worthwhile to draw attention to the following features, which permit concurrent processes to exist in a Mnesia system.
A group of functions or processes can be called within a transaction. A transaction may include statements that read, write or delete data from the DBMS. A large number of such transactions can run concurrently, and the programmer does not have to explicitly synchronize the processes which manipulate the data. All programs accessing the database through the transaction system may be written as if they had sole access to the data. This is a very desirable property since all synchronization is taken care of by the transaction handler. If a program reads or writes data, the system ensures that no other program tries to manipulate the same data at the same time.
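As an illustration (the counter table and function name are our own example, not part of the Company database), a read-modify-write sequence wrapped in a transaction remains correct even when many processes execute it concurrently:

raise(Key, Amount) ->
    F = fun() ->
            %% The transaction handler serializes conflicting accesses,
            %% so no update of the counter can be lost.
            [{counter, Key, Val}] = mnesia:read({counter, Key}),
            mnesia:write({counter, Key, Val + Amount})
        end,
    mnesia:transaction(F).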
It is possible to move tables, delete tables, or reconfigure the layout of a table in various ways. An important aspect of the actual implementation of these functions is that it is possible for user programs to continue to use a table while it is being reconfigured. For example, it is possible to simultaneously move a table and perform write operations to the table. This is important for many applications that require continuously available services. Refer to Chapter 4: Transactions and other access contexts for more information.
5.10 Prototyping
If and when we decide that we would like to start and manipulate Mnesia, it is often easier to write the definitions and data into an ordinary text file. Initially, no tables and no data exist, and it may not even be clear which tables are required. At the initial stages of prototyping, it is prudent to write all data into one file, process that file, and have the data in the file inserted into the database. It is possible to initialize Mnesia with data read from a text file. We have the following two functions to work with text files:
mnesia:load_textfile(Filename)
- Loads a series of local table definitions and data found in the file into Mnesia. This function also starts Mnesia and possibly creates a new schema. The function only operates on the local node.

mnesia:dump_to_textfile(Filename)
- Dumps all local tables of a Mnesia system into a text file, which can then be edited (by means of a normal text editor) and later reloaded.

These functions are of course much slower than the ordinary store and load functions of Mnesia. However, they are mainly intended for minor experiments and initial prototyping, and their major advantage is that they are very easy to use.
The format of the text file is:

{tables, [{Typename, [Options]},
          {Typename2, ......}]}.

{Typename, Attribute1, Attribute2, ....}.
{Typename, Attribute1, Attribute2, ....}.

Options is a list of {Key, Value} tuples conforming to the options we could give to mnesia:create_table/2.
For example, if we want to start playing with a small database for healthy foods, we enter the following data into the file FRUITS:

{tables,
 [{fruit, [{attributes, [name, color, taste]}]},
  {vegetable, [{attributes, [name, color, taste, price]}]}]}.

{fruit, orange, orange, sweet}.
{fruit, apple, green, sweet}.
{vegetable, carrot, orange, carrotish, 2.55}.
{vegetable, potato, yellow, none, 0.45}.

The following session with the Erlang shell then shows how to load the FRUITS database:
% erl
Erlang (BEAM) emulator version 4.9

Eshell V4.9 (abort with ^G)
1> mnesia:load_textfile("FRUITS").
New table fruit
New table vegetable
{atomic,ok}
2> mnesia:info().
---> Processes holding locks <---
---> Processes waiting for locks <---
---> Pending (remote) transactions <---
---> Active (local) transactions <---
---> Uncertain transactions <---
---> Active tables <---
vegetable      : with 2 records occupying 299 words of mem
fruit          : with 2 records occupying 291 words of mem
schema         : with 3 records occupying 401 words of mem
===> System info in version "1.1", debug level = none <===
opt_disc. Directory "/var/tmp/Mnesia.nonode@nohost" is used.
use fallback at restart = false
running db nodes = [nonode@nohost]
stopped db nodes = []
remote = []
ram_copies = [fruit,vegetable]
disc_copies = [schema]
disc_only_copies = []
[{nonode@nohost,disc_copies}] = [schema]
[{nonode@nohost,ram_copies}] = [fruit,vegetable]
3 transactions committed, 0 aborted, 0 restarted, 2 logged to disc
0 held locks, 0 in queue; 0 local transactions, 0 remote
0 transactions waits for other nodes: []
ok
3>

We can see that the DBMS was initiated from a regular text file.
5.11 Object Based Programming with Mnesia
The Company database introduced in Chapter 2 has three tables which store records (employee, dept, project), and three tables which store relationships (manager, at_dep, in_proj). This is a normalized data model, which has some advantages over a non-normalized data model.
It is more efficient to do a generalized search in a normalized database. Some operations are also easier to perform on a normalized data model. For example, we can easily remove one project, as the following example illustrates:
remove_proj(ProjName) ->
    F = fun() ->
            Ip = mnemosyne:eval(query [X || X <- table(in_proj),
                                            X.proj_name = ProjName]
                                end),
            mnesia:delete({project, ProjName}),
            del_in_projs(Ip)
        end,
    mnesia:transaction(F).

del_in_projs([Ip|Tail]) ->
    mnesia:delete_object(Ip),
    del_in_projs(Tail);
del_in_projs([]) ->
    done.

In reality, data models are seldom fully normalized. A realistic alternative to a normalized database model would be a data model which is not even in first normal form. Mnesia is very suitable for applications such as telecommunications, because it is easy to organize data in a very flexible manner. A Mnesia database is always organized as a set of tables. Each table is filled with rows/objects/records. What sets Mnesia apart is that individual fields in a record can contain any type of compound data structures. An individual field in a record can contain lists, tuples, functions, and even record code.
Many telecommunications applications have unique requirements on lookup times for certain types of records. If our Company database had been a part of a telecommunications system, then it could be that the lookup time of an employee together with a list of the projects the employee is working on, should be minimized. If this was the case, we might choose a drastically different data model which has no direct relationships. We would only have the records themselves, and different records could contain either direct references to other records, or they could contain other records which are not part of the Mnesia schema.
We could create the following record definitions:
-record(employee, {emp_no, name, salary, sex, phone, room_no,
                   dept, projects, manager}).
-record(dept, {id, name}).
-record(project, {name, number, location}).

A record which describes an employee might look like this:
Me = #employee{emp_no = 104732,
               name = klacke,
               salary = 7,
               sex = male,
               phone = 99586,
               room_no = {221, 015},
               dept = 'B/SFR',
               projects = [erlang, mnesia, otp],
               manager = 114872},

This model only has three different tables, and the employee records contain references to other records. We have the following references in the record:
- 'B/SFR' refers to a dept record.
- [erlang, mnesia, otp] is a list of three direct references to three different project records.
- 114872 refers to another employee record.

We could also use the Mnesia record identifiers ({Tab, Key}) as references. In this case, the dept attribute would be set to the value {dept, 'B/SFR'} instead of 'B/SFR'.

With this data model, some operations execute considerably faster than they do with the normalized data model in our Company database. On the other hand, some other operations become much more complicated. In particular, it becomes more difficult to ensure that records do not contain dangling pointers to other non-existent, or deleted, records.
The following code exemplifies a search with a non-normalized data model. To find all employees at department Dep with a salary higher than Salary, use the following code:

get_emps(Salary, Dep) ->
    F = fun() ->
            eval(query [E || E <- table(employee),
                             E.dept = Dep,
                             E.salary > Salary]
                 end)
        end,
    transaction(F).

This code is not only easier to write and to understand, but it also executes much faster.
It is easy to show examples of code which executes faster if we use a non-normalized data model, instead of a normalized model. The main reason for this is that fewer tables are required. For this reason, we can more easily combine data from different tables in join operations. In the above example, the
get_emps/2
function was transformed from a join operation into a simple query which consists of a selection and a projection on one single table.