1 Mnesia Release Notes
This document describes the changes made to the Mnesia system from version to version. The intention of this document is to list all incompatibilities as well as all enhancements and bugfixes for every release of Mnesia. Each release of Mnesia thus constitutes one section in this document. The title of each section is the version number of Mnesia.
1.1 Mnesia 3.9.2
1.1.1 Improvements and new features
- Access to non-local tables now avoids disc_only_copies replicas when possible.
1.1.2 Fixed Bugs and malfunctions
- Sometimes dirty_read failed during a brutal kill of a replicated node; this was a synchronization problem. Mnesia now behaves better but can still fail in the worst case. Use transactions if this behavior is not acceptable.
Own Id: OTP-3585.
Aux Id: Seq 4574
- A delete followed by a write in a transaction failed to show up in bag tables.
- Removed a possibility for mnesia to hang on every node if mnesia crashed on one node during startup.
- Removed a deadlock possibility when using table locks.
1.1.3 Incompatibilities
1.1.4 Known bugs and problems
1.2 Mnesia 3.9.1
1.2.1 Improvements and new features
1.2.2 Fixed Bugs and malfunctions
- Mnesia could crash during startup on ram nodes, due to a race condition.
Own Id: OTP-3557.
Aux Id: Seq 4512
1.2.3 Incompatibilities
1.2.4 Known bugs and problems
1.3 Mnesia 3.9
1.3.1 Improvements and new features
- A new configuration parameter fallback_error_function has been introduced to let the user handle the case where Mnesia has a fallback installed and another Mnesia node goes down. The default behavior is, as it has always been, to kill the local node to avoid inconsistencies. The user can now start Erlang with -mnesia fallback_error_function '{UserMod, UserFunc}'.
Own Id: OTP-3539, OTP-3057
Aux Id: Seq 4476, Seq 1590, Seq 3809
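A minimal sketch of such a handler, assuming Mnesia invokes the callback with the name of the node that went down (the module name my_fallback_handler is hypothetical):

-module(my_fallback_handler).
-export([mnesia_down/1]).

%% Called instead of killing the local node when a fallback is
%% installed and Mnesia on Node goes down.
mnesia_down(Node) ->
    error_logger:warning_msg(
        "Mnesia on ~p went down while a fallback is installed~n",
        [Node]),
    ok.

The node would then be started with erl -mnesia fallback_error_function '{my_fallback_handler, mnesia_down}'.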
1.3.2 Fixed Bugs and malfunctions
- In earlier releases, when Mnesia reported inconsistent_database, the only way to make it stop was to restart Mnesia from a backup. To allow the usage of master_nodes settings, Mnesia now checks whether master_nodes (on the schema table) is set before reporting the inconsistency event. If set, the event will not be reported.
Own Id: OTP-3551
Aux Id: Seq 4491
- If a log file was very corrupted and disk_log failed to recognize the file as a log file, Mnesia failed to handle the case properly.
Own Id: OTP-3545
Aux Id: Seq 4492
- The Mnesia table copy mechanism was not stable if the Erlang distribution went down and instantly came back up.
Own Id: OTP-3544
Aux Id: Seq 4493
1.3.3 Incompatibilities
- The major version number change implies that mnesia-3.9 and later will not be able to talk with other mnesia nodes which are running mnesia-3.7.*.
1.3.4 Known bugs and problems
1.4 Mnesia 3.8.6
1.4.1 Improvements and new features
1.4.2 Fixed Bugs and malfunctions
- Mnesia could crash when sending a table to another node.
Own Id: OTP-3517.
1.4.3 Incompatibilities
1.4.4 Known bugs and problems
1.5 Mnesia 3.8.5
1.5.1 Improvements and new features
1.5.2 Fixed Bugs and malfunctions
- mnesia:[dirty]all_keys didn't work with ordered_set tables.
Own Id: OTP-3467.
Aux Id: Seq 4338
1.5.3 Incompatibilities
1.5.4 Known bugs and problems
1.6 Mnesia 3.8.4
1.6.1 Improvements and new features
1.6.2 Fixed Bugs and malfunctions
- mnesia:[dirty][index]_match_object didn't work in the latest release with indexes and wildcards.
Own Id: OTP-3450.
Aux Id: Seq 4312
1.6.3 Incompatibilities
1.6.4 Known bugs and problems
1.7 Mnesia 3.8.3
1.7.1 Improvements and new features
- Subscriptions have been extended with a new, more detailed event. The detailed variant is activated with mnesia:subscribe({table, Tab, detailed}), and the events look like {Operation, Table, Value, OldValues, Tid}.
Own Id: OTP-3356
Aux Id: Seq 4066
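A sketch of a subscriber loop for these detailed events, assuming they are delivered wrapped in mnesia_table_event messages like ordinary table events:

%% Subscribe to detailed events for Tab and print each one.
watch(Tab) ->
    {ok, _Node} = mnesia:subscribe({table, Tab, detailed}),
    watch_loop().

watch_loop() ->
    receive
        {mnesia_table_event, {Op, Tab, Value, OldValues, _Tid}} ->
            io:format("~p on ~p: ~p (old: ~p)~n",
                      [Op, Tab, Value, OldValues]),
            watch_loop()
    end.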
1.7.2 Fixed Bugs and malfunctions
- Deleting objects in snmp tables didn't work in the mnesia-3.8 releases.
Own Id: OTP-3436
Aux Id: Seq 4289
- mnesia:[dirty]index_match_object didn't work if the key was bound or the table resided on another node.
Own Id: OTP-3399
Aux Id: Seq 4229
- A couple of locking bugs have been fixed. A deadlock could occur if more than two processes were involved: one process wanted a table lock, another wanted read and write locks, and a third process held the lock.
- mnesia:read(Tab, Key, write) could return a list of nodes if the table was deleted during the operation.
- An internal deadlock in Mnesia was removed. It happened during startup when Mnesia tried to connect to another node. This has only occurred when several Mnesia nodes were started on the same FreeBSD machine.
- mnesia:dump_to_textfile and mnesia:load_textfile didn't work with record_names.
1.7.3 Incompatibilities
1.7.4 Known bugs and problems
- There is an issue with dangling module pointers that makes it impossible to upgrade the mnesia_monitor module on the fly in all earlier Mnesia releases. This has now been fixed, so it should not be an issue in coming releases.
1.8 Mnesia 3.8.2
1.8.1 Improvements and new features
- Introduced mnesia:transform_table/4 in order to make it possible to handle new record types in code upgrades.
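A sketch of such a code upgrade, where a hypothetical 2-field person record gains a third field; the transformer fun, the new attribute list and the new record name are passed to transform_table/4:

%% Add an 'email' field with a default value to every record.
upgrade_person_table() ->
    Transformer = fun({person, Name, Age}) ->
                          {person, Name, Age, undefined}
                  end,
    {atomic, ok} =
        mnesia:transform_table(person, Transformer,
                               [name, age, email], person).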
1.8.2 Fixed Bugs and malfunctions
- Mnesia on one node could fail to load a table if master_nodes was set to another node, and that node loaded the table from a third node. A force load on the first node could hang forever.
Own Id: OTP-3358.
1.8.3 Incompatibilities
1.8.4 Known bugs and problems
1.9 Mnesia 3.8.1
1.9.1 Improvements and new features
1.9.2 Fixed Bugs and malfunctions
- When using mnesia:match_object with a partially unbound key (e.g. Key = {bar, {'_', foo}}), Mnesia failed to detect that the key was unbound, which could result in hanging transactions.
Own Id: OTP-3342.
Aux Id: Seq 4064.
1.9.3 Incompatibilities
1.9.4 Known bugs and problems
1.10 Mnesia 3.8
1.10.1 Improvements and new features
- Some new backup features have been introduced. The Args argument to the functions mnesia:backup(Opaque, Args) and mnesia:backup_checkpoint(Name, Opaque, Args) has been extended to also allow a list of options as a complement to the old Mod atom.
- Incremental backup enables backup of recent updates of the database, avoiding a full backup. The {incremental, PrevName} option specifies the name of a previously activated checkpoint, which hopefully already has been backed up. All updates that have been performed between the activation of the PrevName checkpoint and the Name checkpoint will be included in the backup.
- Decentralized backup enables backup of local tables, avoiding transfer of records to a central backup coordinator node. The {scope, local} option will cause the mnesia:backup_checkpoint(Name, Opaque, Args) function to simply ignore tables on remote nodes. The scope argument defaults to global.
- Selective backup enables backup of a subset of the tables included in the checkpoint, avoiding creation of huge backup files. The {tables, TabList} option will cause the backup functions to simply ignore all tables other than the ones included in TabList.
- In order to select a customized mnesia_backup callback module, the option {module, Mod} is used. A sketch combining these options is shown after this list.
- The ram_overrides_dump option to mnesia:activate_checkpoint has been extended to also allow an explicit list of tables which should have the ram_overrides_dump semantics.
- It is now possible to remove a Mnesia node. mnesia:del_table_copy(schema, Node) now completely removes the reference to the node Node when the node is down. All tables which only reside on that node will be removed.
See also incompatibilities.
- Connections to other Mnesia nodes can be established with mnesia:change_config(extra_db_nodes, NodeList) after Mnesia is started. It can be called from a node which uses a disc schema to connect to new and empty ram nodes, or from the new ram node to connect to an already existing Mnesia cluster.
- The type ordered_set is now supported for ram_copies and disc_copies tables if it is supported by the Erlang runtime system, i.e. OTP R5 or later.
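A sketch of the extended backup API using the options above; the checkpoint names (ckp1, ckp2), table names, and file name are hypothetical:

%% Activate a checkpoint and take an incremental, local,
%% selective backup of it with the default backup module.
incremental_local_backup() ->
    {ok, ckp2, _Nodes} =
        mnesia:activate_checkpoint([{name, ckp2},
                                    {min, [account, subscriber]}]),
    Args = [{incremental, ckp1},     %% only updates since ckp1
            {scope, local},          %% ignore tables on remote nodes
            {tables, [account]},     %% ignore all other tables
            {module, mnesia_backup}],
    ok = mnesia:backup_checkpoint(ckp2, "account.INCR", Args).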
1.10.2 Fixed Bugs and malfunctions
- mnesia:create_schema/1 created a temporary file in the current directory. This turned out to be not so splendid, especially if no write access was granted to the current directory. Now the temporary file is created in the Mnesia directory.
- mnesia:restore/2 did not work if someone had subscribed to a table involved in the restore.
Own Id: OTP-3183.
Aux Id: Seq 3633.
- mnesia:transform_table/3 has been made more efficient. It still uses a lot of memory, due to the fact that all records have to be logged to disk as a single transaction.
Own Id: OTP-3246.
Aux Id: Seq 3840
- A flaw in the locking mechanism caused a big overhead for lock acquisition when a lot of locks were held by the same transaction. This has now been fixed.
- The snmp hooks have been changed to use snmp_index. This should make snmp tables faster and give them a smaller memory footprint in OTP R5 and later.
1.10.3 Incompatibilities
- Mnesia cannot be upgraded on the fly; it must be restarted with the new code, due to protocol changes. It should be sufficient to restart one mnesia node at a time. New functionality will not work until all nodes have been upgraded.
- In mnesia-3.7.2 the following recovery mechanism was introduced:
Mnesia generated a core dump and intentionally stopped if the log files were corrupted and contained the wrong records. Now Mnesia will generate an error event and continue if possible. This behaviour may lead to inconsistent data but Mnesia will be alive.
The statement above is only valid if the configuration parameter auto_repair is set to true (the default). If auto_repair is set to false, Mnesia will generate a fatal event and terminate.
- mnesia:del_table_copy(schema, Node) works if and only if Mnesia on the node Node is NOT running. Before, mnesia:del_table_copy(schema, Node) would only work if the node was running. That behavior made it impossible to remove a Mnesia node when the hardware was malfunctioning.
1.10.4 Known bugs and problems
1.11 Mnesia 3.7.2
1.11.1 Improvements and new features
- Mnesia generated a core dump and intentionally stopped if the log files were corrupted and contained the wrong records. Now Mnesia will generate an error event and continue if possible. This behaviour may lead to inconsistent data but Mnesia will be alive.
Own Id: OTP-3269.
Aux Id: seq 3938.
1.11.2 Fixed Bugs and malfunctions
1.11.3 Incompatibilities
1.11.4 Known bugs and problems
1.12 Mnesia 3.7.1
1.12.1 Improvements and new features
1.12.2 Fixed Bugs and malfunctions
- Fixed a problem with table loading. Mnesia terminated if the receiving node died in a certain state.
- mnesia:add_table_copy/3 did not work for local_content tables.
1.12.3 Incompatibilities
1.12.4 Known bugs and problems
1.13 Mnesia 3.7
1.13.1 Improvements and new features
- mnesia:dirty_all_keys/1 has been optimized.
Own Id: OTP-2914.
Aux Id: seq 1389.
- Various optimizations have been performed, such as of schema transactions.
Own Id: OTP-2804
- A concept of table fragmentation has been introduced in order to cope with very large tables. The idea is to split a table into several more manageable fragments. Each fragment is implemented as a first class Mnesia table and may be replicated, have indices etc. like any other table.
The functions mnesia:create_table/2, mnesia:delete_table/1 and mnesia:table_info/2 have been extended to cope with fragmented tables.
The fragmentation properties of a table can be changed with the new function mnesia:change_table_frag/2; see the sketch after this list.
See the Users Guide for more information.
Own Id: OTP-1748, OTP-2789.
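A sketch of the fragmentation API, assuming the fragmentation properties are given as a frag_properties option to mnesia:create_table/2; the table name and fragment counts are hypothetical:

%% Create a table split into four fragments spread over the
%% connected nodes, then add a fifth fragment later.
create_fragmented() ->
    FragProps = [{n_fragments, 4},
                 {node_pool, [node() | nodes()]}],
    {atomic, ok} =
        mnesia:create_table(subscriber,
                            [{frag_properties, FragProps}]),
    {atomic, ok} =
        mnesia:change_table_frag(subscriber, {add_frag, [node()]}).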
1.13.2 Fixed Bugs and malfunctions
- Mnesia's internal process architecture has partly been redesigned in order to avoid some rare deadlock situations during startup.
Own Id: OTP-2495, OTP-2767, OTP-2421.
Aux Id: seq 4, seq 1138.
- The algorithm for copying of tables between nodes has been improved, in order to better cope with dirty updates during the copy operation.
1.13.3 Incompatibilities
Mnesia must be restarted on all nodes in order to cope with the following changes:
- The internal process architecture has been changed.
- The message protocol between nodes has been changed.
- The schema representation has been changed.
1.13.4 Known bugs and problems
The new concept of fragmented tables must be described in the Users Guide and Reference Manual.
1.14 Mnesia 3.6
1.14.1 Improvements and new features
- The internal
OLD_DIR
directory is no more, since it was almost like a persistent memory leak.
- The fallback concept has been improved. Installation and uninstallation of fallbacks are now more predictable.
The installation is performed only on the nodes with a disc resident schema. The operation will fail if the local node is not one of the disc resident nodes. Which nodes are disc resident is determined from the schema info in the backup.
The uninstallation is performed only on the nodes with a disc resident schema. Which nodes are disc resident is determined from the schema info in the local fallback. The operation will fail if there is no local fallback installed.
It is now possible to install and uninstall local fallbacks. By default, the installation (and uninstallation) scope of a fallback is global, but if {scope, local} is given as argument, the operation will only affect the local node. The effect of invoking the install (or uninstall) operation manually on each node with a disc resident schema is the same as one single invocation with global scope.
Normally, local installation and uninstallation is targeted towards the local Mnesia directory (see the -mnesia dir configuration parameter). But if {mnesia_dir, AlternateDir} is given as argument, the operation will be performed on the alternate directory regardless of the Mnesia directory configuration parameter setting. After installation of a fallback on an alternate Mnesia directory, that directory is fully prepared for usage as an active Mnesia directory. This is a somewhat dangerous feature which must be used with care: by unintentionally mixing directories you may easily end up with an inconsistent database if the same backup is installed on more than one directory.
Please read more about mnesia:install_fallback/1,2 and mnesia:uninstall_fallback/0,1 in the reference manual.
1.14.2 Fixed Bugs and malfunctions
1.14.3 Incompatibilities
1.14.4 Known bugs and problems
1.15 Mnesia 3.5.4
1.15.1 Improvements and new features
1.15.2 Fixed Bugs and malfunctions
- This bug is an old one, but in this release the file descriptors are handled better. Notes copied from 3.1.1:
At startup, when Mnesia recreated the database from a fallback, lots of files were simultaneously opened. For large (or rather medium) size databases this caused the Erlang runtime system to encounter its hard-coded limit of 256 open ports. Mnesia no longer consumes as many open files during startup, in order to avoid the system limit. The error message looked like:
** FATAL ** mnesia_tm crashed: {"Cannot start from fallback", {'EXIT', {badarg, {os,ask_driver, [open_port, [{spawn,os__drv__},[]]]}}}}
1.15.3 Incompatibilities
1.15.4 Known bugs and problems
1.16 Mnesia 3.5.3
1.16.1 Improvements and new features
1.16.2 Fixed Bugs and malfunctions
- Meta table information became inconsistent when a table was copied from a node which crashed during the table transfer. This bug could lead to the table never being loaded.
Own Id: OTP-2852.
1.16.3 Incompatibilities
1.16.4 Known bugs and problems
1.17 Mnesia 3.5.2
1.17.1 Improvements and new features
1.17.2 Fixed Bugs and malfunctions
- Dumping an ets table into a dets table caused unexpected memory consumption when the dets table resided in the ram_file driver.
1.17.3 Incompatibilities
1.17.4 Known bugs and problems
1.18 Mnesia 3.5.1
1.18.1 Improvements and new features
1.18.2 Fixed Bugs and malfunctions
- Records were sometimes skipped when iterating over a backup with the default backup module mnesia_backup. The error affected mnesia:restore/2, mnesia:install_fallback/1 and mnesia:traverse_backup/4,6.
Own Id: OTP-2819.
Aux Id: seq 1261.
1.18.3 Incompatibilities
1.18.4 Known bugs and problems
1.19 Mnesia 3.5
1.19.1 Improvements and new features
1.19.2 Fixed Bugs and malfunctions
- Copying of tables between nodes has been improved. Dirty updates performed while a table was being copied could sometimes be lost.
Own Id: OTP-2817.
Aux Id: seq 1258.
- The outcome (abort/commit) of heavyweight transactions was sometimes lost after node crashes. A new log format has been introduced and the processing of decisions in the log has been improved.
- A minor bug has been fixed regarding iteration of checkpoints with disc_only_copies tables. It could cause inconsistent backups and inconsistent replicas if the internal hash tables grew or shrank during the iteration.
- Transient schema information about indices was not cleaned up after mnesia:del_table_copy/3. This caused miscellaneous index related errors in subsequent operations.
- Recovery of mnesia:create_table/2 during startup crashed on nodes without any replica.
1.19.3 Incompatibilities
1.19.4 Known bugs and problems
1.20 Mnesia 3.4.1
1.20.1 Improvements and new features
- The documentation has been enhanced.
1.20.2 Fixed Bugs and malfunctions
- A bug that caused mnesia:wread/1 to hang indefinitely has been fixed. It was a unique situation where the transaction which invoked mnesia:wread/1 was waiting for a write lock and a remote transaction was holding a sticky_write lock on the same record. If the remote node crashed in that situation and the node holding the sticky_write lock was one of the first nodes in the where_to_write list of nodes, it could cause mnesia:wread/1 to never return, instead of simply re-running the transaction.
- Subscribing application processes were not unlinked from Mnesia, neither after invocation of mnesia:unsubscribe/1 nor at shutdown. This caused spurious exit signals to be sent to the subscribing processes when Mnesia was terminated. Subscribers of system events never received a {mnesia_down, node()} event when the local node was stopped normally. These bugs have now been fixed.
- Some minor bugs regarding the record_name feature (introduced in 3.4) have been fixed.
1.20.3 Incompatibilities
1.20.4 Known bugs and problems
1.21 Mnesia 3.4
1.21.1 Improvements and new features
1.21.1.1 Record name may differ from table name
From this release onwards, the record name of records stored in Mnesia may differ from the name of the table that they are stored in. In order to use this new feature the table property {record_name, Name} has been introduced. If this property is omitted when the table is created, the table name will be used as record name. For example, if two tables are created like this:

TabDef = [{record_name, subscriber}],
mnesia:create_table(my_subscriber, TabDef),
mnesia:create_table(your_subscriber, TabDef)

it would be possible to store subscriber records in both of them:

mnesia:write(my_subscriber, #subscriber{}, sticky_write),
mnesia:write(your_subscriber, #subscriber{}, write)

In order to enable usage of this new support for record_name, new functions have been added to the Mnesia API:

mnesia:dirty_write(Tab, Record)
mnesia:dirty_delete(Tab, Key)
mnesia:dirty_delete_object(Tab, Record)
mnesia:dirty_update_counter(Tab, Key, Incr)
mnesia:dirty_read(Tab, Key)
mnesia:dirty_match_object(Tab, Pattern)
mnesia:dirty_index_match_object(Tab, Pattern, Attr)
mnesia:write(Tab, Record, LockKind)
mnesia:delete(Tab, Key, LockKind)
mnesia:delete_object(Tab, Record, LockKind)
mnesia:read(Tab, Key, LockKind)
mnesia:match_object(Tab, Pattern, LockKind)
mnesia:all_keys(Tab)
mnesia:index_match_object(Tab, Pattern, Attr, LockKind)
mnesia:index_read(Tab, SecondaryKey, Attr)

LockKind ::= read | write | sticky_write | ...

The old corresponding functions still exist, but are merely syntactic sugar for the new ones:

mnesia:dirty_write(Record) ->
    Tab = element(1, Record),
    mnesia:dirty_write(Tab, Record).

mnesia:dirty_delete({Tab, Key}) ->
    mnesia:dirty_delete(Tab, Key).

mnesia:dirty_delete_object(Record) ->
    Tab = element(1, Record),
    mnesia:dirty_delete_object(Tab, Record).

mnesia:dirty_update_counter({Tab, Key}, Incr) ->
    mnesia:dirty_update_counter(Tab, Key, Incr).

mnesia:dirty_read({Tab, Key}) ->
    mnesia:dirty_read(Tab, Key).

mnesia:dirty_match_object(Pattern) ->
    Tab = element(1, Pattern),
    mnesia:dirty_match_object(Tab, Pattern).

mnesia:dirty_index_match_object(Pattern, Attr) ->
    Tab = element(1, Pattern),
    mnesia:dirty_index_match_object(Tab, Pattern, Attr).

mnesia:write(Record) ->
    Tab = element(1, Record),
    mnesia:write(Tab, Record, write).

mnesia:s_write(Record) ->
    Tab = element(1, Record),
    mnesia:write(Tab, Record, sticky_write).

mnesia:delete({Tab, Key}) ->
    mnesia:delete(Tab, Key, write).

mnesia:s_delete({Tab, Key}) ->
    mnesia:delete(Tab, Key, sticky_write).

mnesia:delete_object(Record) ->
    Tab = element(1, Record),
    mnesia:delete_object(Tab, Record, write).

mnesia:s_delete_object(Record) ->
    Tab = element(1, Record),
    mnesia:delete_object(Tab, Record, sticky_write).

mnesia:read({Tab, Key}) ->
    mnesia:read(Tab, Key, read).

mnesia:wread({Tab, Key}) ->
    mnesia:read(Tab, Key, write).

mnesia:match_object(Pattern) ->
    Tab = element(1, Pattern),
    mnesia:match_object(Tab, Pattern, read).

mnesia:index_match_object(Pattern, Attr) ->
    Tab = element(1, Pattern),
    mnesia:index_match_object(Tab, Pattern, Attr, read).

The earlier function semantics remain unchanged.
Use the function mnesia:table_info(Tab, record_name) to determine the record name of a table.
If the name of each table equals the name of the records it hosts, everything is backward compatible. But if the new record_name feature is used, this may affect old existing applications:
- Functions that provide read access to Mnesia tables may now return records with names that differ from the table name. Make sure your code is able to handle this.
- The backup format is slightly different and this may affect users of the mnesia:traverse_backup/4,6 functions. When iterating over a backup, the record name is always equal to the table name regardless of the record_name setting. Make sure your code is able to handle this.
- The format of table subscription events is slightly different and this may affect users of the mnesia:subscribe/1 function and user-installed event_modules. When a process receives table events, the record name is always equal to the table name regardless of the record_name setting.
1.21.1.2 New function mnesia:lock/2
A new locking function has been introduced:
mnesia:lock(LockItem, LockKind)

LockItem ::= {table, Tab} | {global, Item, Nodes} | ...
LockKind ::= read | write | ...

The old table locking functions still exist, but are now merely syntactic sugar for the new function:

mnesia:read_lock_table(Tab) ->
    mnesia:lock({table, Tab}, read).

mnesia:write_lock_table(Tab) ->
    mnesia:lock({table, Tab}, write).

1.21.1.3 New function mnesia:activity/2,3,4
In the Mnesia API there are some functions whose semantics depend on the execution context:

mnesia:lock(LockItem, LockKind)
mnesia:write(Tab, Rec, LockKind)
mnesia:delete(Tab, Key, LockKind)
mnesia:delete_object(Tab, Rec, LockKind)
mnesia:read(Tab, Key, LockKind)
mnesia:match_object(Tab, Pat, LockKind)
mnesia:all_keys(Tab)
mnesia:index_match_object(Tab, Pat, Attr, LockKind)
mnesia:index_read(Tab, SecondaryKey, Attr)
mnesia:table_info(Tab, InfoItem)

If these functions are executed within a mnesia:transaction/1,2,3, locks are acquired, atomic commit is ensured etc. If the same functions are executed within the context of mnesia:async_dirty/1,2, mnesia:sync_dirty/1,2 or mnesia:ets/1,2, their semantics are different. Although this is not entirely new, new functions have been introduced:

mnesia:activity(ActivityKind, Fun)
mnesia:activity(ActivityKind, Fun, Args)
mnesia:activity(ActivityKind, Fun, Module)
mnesia:activity(ActivityKind, Fun, Args, Module)

ActivityKind ::= transaction | {transaction, Retries} | async_dirty | sync_dirty | ets

Depending on the ActivityKind argument, the evaluation context will be the same as with the functions mnesia:transaction, mnesia:async_dirty, mnesia:sync_dirty and mnesia:ets respectively. The Module argument provides the name of a callback module that implements the mnesia_access behavior. It must export the functions:

lock(ActivityId, Opaque, LockItem, LockKind)
write(ActivityId, Opaque, Tab, Rec, LockKind)
delete(ActivityId, Opaque, Tab, Key, LockKind)
delete_object(ActivityId, Opaque, Tab, Rec, LockKind)
read(ActivityId, Opaque, Tab, Key, LockKind)
match_object(ActivityId, Opaque, Tab, Pat, LockKind)
all_keys(ActivityId, Opaque, Tab, LockKind)
index_match_object(ActivityId, Opaque, Tab, Pat, Attr, LockKind)
index_read(ActivityId, Opaque, Tab, SecondaryKey, Attr, LockKind)
table_info(ActivityId, Opaque, Tab, InfoItem)

ActivityId ::= A record which represents the identity of the enclosing Mnesia activity. The first field (obtained with element(1, ActivityId)) contains an atom which may be interpreted as the type of the activity: 'ets', 'async_dirty', 'sync_dirty' or 'tid'. 'tid' means that the activity is a transaction.
Opaque ::= An opaque data structure which is internal to Mnesia.
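For illustration, a sketch running the same fun in two different activity contexts (the account table is hypothetical):

%% The semantics of mnesia:read/3 depend on the enclosing context.
read_account(Key) ->
    Fun = fun() -> mnesia:read(account, Key, read) end,
    InTrans = mnesia:activity(transaction, Fun), %% locks, atomic commit
    Dirty = mnesia:activity(async_dirty, Fun),   %% dirty semantics
    {InTrans, Dirty}.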
mnesia and mnesia_frag are examples of callback modules. By default the mnesia module is used as callback module for accesses within "Mnesia activities". For example, invoke the function mnesia:read(Tab, Key, LockKind), and the corresponding Module:read(ActivityId, Opaque, Tab, Key, LockKind) will be invoked to perform the job (or it will pass it on to mnesia:read(ActivityId, Opaque, Tab, Key, LockKind)).
A customized callback module may be used for several purposes, such as providing triggers, integrity constraints, run time statistics, or virtual tables. The callback module does not have to access real Mnesia tables; it is a free agent provided the callback interface is fulfilled.
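A minimal sketch of such a callback module (the module name logged_access is hypothetical); only two of the required callbacks are shown, both delegating the real work to mnesia itself. It would be selected with mnesia:activity(transaction, Fun, [], logged_access):

-module(logged_access).
-export([read/5, write/5]).

%% Log every read, then let mnesia do the real work.
read(ActivityId, Opaque, Tab, Key, LockKind) ->
    io:format("read ~p:~p~n", [Tab, Key]),
    mnesia:read(ActivityId, Opaque, Tab, Key, LockKind).

%% Pass writes straight through.
write(ActivityId, Opaque, Tab, Rec, LockKind) ->
    mnesia:write(ActivityId, Opaque, Tab, Rec, LockKind).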
The context sensitive function mnesia:table_info/2 may be used to provide virtual information about a table. This function enables the user to perform Mnemosyne queries within an activity context with a customized callback module. By providing table index information and other Mnemosyne requirements, Mnemosyne can be used as an efficient generic query language for access of virtual tables.
Please read the "mnesia_access callback behavior" in Appendix C for a code example from the mnesia_frag module.
1.21.1.4 New configuration parameter access_module
The new configuration parameter access_module has been added. It defaults to the atom mnesia, but may be set to any module that fulfills the callback interface of the mnesia_access behavior.
The mnesia:activity functions will use the access_module as callback module if it is not explicitly overridden by the Module argument.
Use mnesia:system_info(access_module) to determine the actual access_module setting.
1.21.2 Fixed Bugs and malfunctions
- A bug regarding master nodes has been fixed. If all nodes have only remote master nodes set for a table, the intended behavior is that no node should take the initiative to load the table. This now works as intended.
1.21.3 Incompatibilities
None, as long as all tables only host records with the same name as the table name. Please read the chapter Improvements and new features about the potential inconsistencies.
1.21.4 Known bugs and problems
1.22 Mnesia 3.3
1.22.1 Improvements and new features
1.22.2 Fixed Bugs and malfunctions
- mnesia:change_table_copy_type/3 on the schema, from disc_copies to ram_copies, did not work on non-Solaris platforms.
Own Id: OTP-2364.
- Indices of disc_only_tables did not work.
Own Id: OTP-2363.
- Shutdown of Mnesia took 30 seconds due to an internal deadlock in the application_controller. This problem is now circumvented by Mnesia.
Own Id: OTP-2664.
- mnesia:set_master_nodes/1,2 may now be invoked off-line, i.e. even if Mnesia happens to be stopped.
Own Id: OTP-2425.
1.22.3 Incompatibilities
1.22.4 Known bugs and problems
No new problems or bugs. See previous release notes.
1.23 Mnesia 3.2
1.23.1 Improvements and new features
- The function mnesia:restore/2 has finally been implemented. The documentation of the function has been adapted to the semantics of the actual implementation. Please read the documentation for details.
Own Id: OTP-1560
Own Id: OTP-1736
Own Id: OTP-1824
1.23.2 Fixed Bugs and malfunctions
- Mnesia crashed if a replica was removed with mnesia:delete_table_copy/2 before the load of the table was completed on the node in question.
Own Id: OTP-2519.
Aux Id: seq 893.
- Long table names were truncated in the printout from mnesia:info/0.
Own Id: OTP-2529.
Aux Id: seq 895.
1.23.3 Incompatibilities
1.23.4 Known bugs and problems
No new problems or bugs. See previous release notes.
1.24 Mnesia 3.1.1
This release is a minor release and the release notes describe the difference between version 3.1.1 and version 3.1 of Mnesia.
1.24.1 Improvements and new features
1.24.2 Fixed Bugs and malfunctions
- At startup, when Mnesia recreated the database from a fallback, lots of files were simultaneously opened. For large (or rather medium) size databases this caused the Erlang runtime system to encounter its hard-coded limit of 256 open ports. Mnesia no longer consumes as many open files during startup, in order to avoid the system limit. The error message looked like:
** FATAL ** mnesia_tm crashed: {"Cannot start from fallback", {'EXIT', {badarg, {os,ask_driver, [open_port, [{spawn,os__drv__},[]]]}}}}
Own Id: OTP-2534.
Aux Id: seq 907.
1.24.3 Incompatibilities
1.24.4 Known bugs and problems
No new problems or bugs. See previous release notes.
1.25 Mnesia 3.1
1.25.1 Improvements and new features
- A new configuration parameter ignore_fallback_at_startup has been added. It defaults to false, but if it is set to true it causes Mnesia to ignore any fallback that may have been installed with mnesia:install_fallback/2. The new configuration parameter enables Mnesia to start with the old database even if a fallback is present.
Own Id: OTP-2530.
Aux Id: seq 903.
Aux Id: HA86394.
- Previously, Mnesia terminated itself on all nodes if a fallback was installed and Mnesia on another node was terminated. It is now possible to bypass this behavior.
When Mnesia's event handler receives a {mnesia_down, Node} event it will check if there is any fallback active and then possibly terminate the local Mnesia system. If the user changes the configuration parameter event_module to a module of their own, the new module may simply omit to terminate Mnesia.
Own Id: OTP-2530.
Aux Id: seq 903.
Aux Id: HA86394.
1.25.2 Fixed Bugs and malfunctions
- When Mnesia encountered a fatal error, Mnesia would terminate itself on all nodes. Now Mnesia will only terminate itself on the local node as intended.
1.25.3 Incompatibilities
1.25.4 Known bugs and problems
No new problems or bugs. See previous release notes.
1.26 Mnesia 3.0
This release is a major release and the release notes describe the difference between version 3.0 and version 2.3. 3.0 is classified as a major release due to the issues described in the chapter about incompatibilities below.
1.26.1 Improvements and new features
- Mnesia's internal storage format on disc has been made more future safe.
- A new attribute in the schema table has been added. Called user_properties, it enables applications to store their own meta data about a table together with the meta data that is built into Mnesia. The user_properties can either be set when the table is created, by stating a {user_properties, PropList} tuple in the CreateList given to mnesia:create_table(Tab, CreateList), or later with mnesia:write_table_property(Tab, Prop).
The user property Prop must be a record (a tagged tuple). Subsequent calls to write_table_property/2 with the same tag will overwrite the old setting. The property record may have any number of fields.
The user property can be read with mnesia:read_table_property(Tab, PropName). It returns the user property record or exits with {no_exists, {Tab, PropName}}.
The user property can be deleted with mnesia:delete_table_property(Tab, PropertyName).
All user defined properties for a table can be obtained with mnesia:table_info(Tab, user_properties). It returns a list of the stored property records. A sketch of this API is shown after this list.
- All table properties that are normally obtained one by one with mnesia:table_info(Tab, Prop) can be read with one single call to mnesia:table_info(Tab, all). It returns a list of {PropName, PropVal} tuples.
- All system properties that are normally obtained one by one with mnesia:system_info(Prop) can be read with one single call to mnesia:system_info(all). It returns a list of {PropName, PropVal} tuples. If Mnesia is not running, only a subset of the properties will be returned.
- The module mnesia_registry has been added. It contains functions which support the customized creation of backing storage tables for registries in erl_interface.
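A sketch of the user property API described above; the gui_info property record and the account table are hypothetical:

-module(prop_demo).
-export([demo/0]).
-record(gui_info, {header, visible = true}).

demo() ->
    Prop = #gui_info{header = "Accounts"},
    {atomic, ok} =
        mnesia:create_table(account, [{user_properties, [Prop]}]),
    %% Writing a property with the same tag overwrites the old one.
    {atomic, ok} =
        mnesia:write_table_property(account,
                                    Prop#gui_info{visible = false}),
    %% Properties are read back by their tag.
    #gui_info{visible = false} =
        mnesia:read_table_property(account, gui_info),
    mnesia:table_info(account, user_properties).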
1.26.2 Fixed Bugs and malfunctions
No serious bugs or malfunctions.
1.26.3 Incompatibilities
Mnesia 3.0 is primarily developed for OTP R4, but is still backward compatible with the OTP R3 platform.
The internal database format on disc has been made more future safe. It has also been altered in order to cope with the newly introduced features. The special upgrade procedure is as follows:
- First of all, a full backup must be performed with the old Mnesia system.
- Then the backup must be installed as fallback, preferably by the new version of Mnesia.
- At last the new version of Mnesia may be started.
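In terms of API calls, the procedure sketched above might look like this (the backup file name upgrade.BUP is hypothetical):

%% With the old Mnesia system still running:
mnesia:backup("upgrade.BUP").
%% Stop the node, install the new Mnesia code, then:
mnesia:install_fallback("upgrade.BUP").
%% Finally, start the new version:
mnesia:start().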
Mnemosyne has been made into a separate application of its own. The new application is called mnemosyne. Please read its release notes. This application split implies a few incompatibilities:
- The compiler directive:
-include_lib("mnesia/include/mnemosyne.hrl").
which was mandatory in all Erlang modules that contain embedded Mnemosyne queries, has been replaced with:
-include_lib("mnemosyne/include/mnemosyne.hrl").
During an interim period, both compiler directives will be supported. But in a future release the backward compatibility directive will be removed.
- At the startup of Mnesia, the Mnemosyne statistics process mnemosyne_catalog was automatically started by Mnesia. This is not the case anymore. The mnemosyne application must be started separately after the start of the mnesia application.
However, a temporary configuration parameter embedded_mnemosyne has been added to allow the automatic start of Mnemosyne. By default embedded_mnemosyne is set to false, but if it is set to true Mnesia will start Mnemosyne as a supervised part of the Mnesia application as it did in previous releases.
1.26.4 Known bugs and problems
None of these are newly introduced.
- Indices of disc_only_tables do not work.
Own Id: OTP-2363.
- Changing the storage type of the schema from disc_copies to ram_copies does not work on Windows systems.
Own Id: OTP-2364.
- Mnesia will not detect if two nodes are sharing the same Mnesia directory. When two nodes are using the same Mnesia directory the result is totally unpredictable.
- mnesia:restore/1 is not implemented.
Own Id: OTP-1736.
1.27 Mnesia 2.3
1.27.1 Improvements and new features
1.27.2 Fixed Bugs and malfunctions
- Mnesia would occasionally run into an internal deadlock at startup if several nodes were brutally terminated simultaneously (e.g. after a power failure).
Own Id: OTP-2501.
Aux Id: seq 879, HA85350
1.27.3 Incompatibilities
1.27.4 Known bugs and problems
No new ones. See previous release notes.
1.28 Mnesia 2.2
1.28.1 Improvements and new features
1.28.2 Fixed Bugs and malfunctions
- Mnemosyne's internal query evaluator processes continued to perform work, even after the initiating application process (the process that performed the call to mnesia:transaction) died. This would lead to dangling locks which were never released.
Own Id: OTP-2390, OTP-2340.
Aux Id: seq 797
- Mnemosyne's internal query evaluator processes continued to perform work even after the enclosing transaction had started its commit/abort work. This would lead to dangling locks which were never released.
- Mnemosyne's internal query evaluator processes hung when Mnesia detected a potential deadlock and tried to restart the transaction. This led to dangling locks which were never released.
- Mnemosyne's internal query evaluator processes hung when a remote node was terminated while they were waiting for locks. This led to dangling locks which were never released.
1.28.3 Incompatibilities
1.28.4 Known bugs and problems
- The evaluation of Mnemosyne queries exhibits undefined behavior if used in conjunction with nested transactions.
- The evaluation of Mnemosyne queries exhibits undefined behavior if the involved tables are updated after the cursor has been initiated.
- Mnemosyne queries should not be used on disc-only tables since the optimizer can't handle such tables.
- Mnesia does not detect if two nodes are sharing the same Mnesia directory. When two nodes are using the same Mnesia directory anything can happen. The result is totally unpredictable.
- mnesia:restore/1 is not implemented.
Own Id: OTP-1736.
1.29 Mnesia 2.1.2
1.29.1 Improvements and new features
1.29.2 Fixed Bugs and malfunctions
- The mnesia_locker process EXIT'ed when trying to move a sticky lock from one node to another. As a result, Mnesia would terminate.
1.29.3 Incompatibilities
1.29.4 Known bugs and problems
1.30 Mnesia 2.1.1
1.30.1 Improvements and new features
1.30.2 Fixed Bugs and malfunctions
- A bug related to remote load of tables has been fixed. If Mnesia was in the process of loading tables from another node and that node terminated before the remote loading of tables was finished, Mnesia did not automatically load the remaining ram_copies tables.
Own Id: OTP-2339.
Aux Id: seq 795
1.30.3 Incompatibilities
1.30.4 Known bugs and problems
1.31 Mnesia 2.1
1.31.1 Improvements and new features
- A new feature for load regulation of the mnesia_dumper processes has been introduced. It is activated by setting the configuration parameter dump_log_load_regulation to true.
This is a temporary solution while waiting for a fair scheduler algorithm in the Erlang emulator. The mnesia_dumper process performs many costly BIF invocations and ought to pay for this. But since the emulator does not handle this properly, we must compensate with some form of load regulation ourselves, in order not to steal all computation power in the Erlang emulator and make other processes starve or trigger heart to restart the entire node.
- A new example has been introduced.
mnesia_meter
can be used to obtain performance figures for some API functions.
1.31.2 Fixed Bugs and malfunctions
- A bug related to master_nodes has been fixed. The table was not loaded locally when the local node was set to be master node for a replicated table which was active on another node during startup of Mnesia on the local node.
- Another bug related to master_nodes has also been fixed. The table was not loaded from the local node when the master node was set to the local node for a replicated table with ram_copies as the local storage_type and disc_copies or disc_only_copies on the other nodes.
- A bug related to remote load of tables has been fixed. Mnesia could refuse to copy the table from another node if an active checkpoint was deactivated in a critical period at the end of the copy operation.
- In earlier releases mnesia:async_dirty, mnesia:sync_dirty and mnesia:ets caught EXITs and returned them as {'EXIT', Reason} when something went wrong. This behavior was neither intended nor documented. Now the (mis-)behavior has been corrected to do exit(Reason) as expected.
- Removed a deadlock situation which occurred when fallbacks were installed/uninstalled while Mnesia was stopped.
- Removed a potential deadlock situation which could occur if Mnesia was terminated on a remote node during commit of "heavyweight" transactions.
Under some circumstances the bug could cause a crash in the application process with an error message like this:
{{case_clause,{acc_pre_commit,{tid,523,<0.216.0>},<87.355.0>}}, {mnesia_tm,t_begin,[#Fun,[],2,infinity]}}
Own Id: OTP-2106.
Aux Id: seq 607
- Improved recovery of nested transactions when the coordinator process died during commit.
- Better error handling of mnesia:add_table_copy/3 and mnesia:move_table_copy/3 when the application tried to add disc resident replicas on disc-less nodes. The effect of the bug was that Mnesia could crash with an error message like this:
mnesia_init crashed: {has_no_disc,...
- Removed a deadlock situation that occurred during startup when tables were loaded too early.
- mnesia_init could crash if it failed to copy a table from one node and then tried to copy the table from another node.
- The semantics of master_nodes have been made clearer. Earlier, unusual things happened in odd situations, especially in conjunction with mnesia:force_load_table/1.
- The handling of transaction decisions has been optimized. Fewer messages are sent. The decision log is now only dumped during startup.
- mnesia_recover could crash when the decision table was saved to disc. Due to its unsafe behavior, ets:tab2file/2 is no longer used for this.
- mnesia_recover could crash during shutdown of Mnesia. Mnesia's internal overload detection mechanism was not synchronized with the shutdown protocol.
- Mnesia could crash while it performed its protocol negotiation with other nodes. The problem occurred when Mnesia tried to detect if the network had been partitioned.
Own Id: OTP-2234.
Aux Id: seq 702.
- The algorithm which triggers dumps of the transaction log has been improved. Previously the log would not be dumped even if it was necessary. The problem occurred when the dump thresholds were exceeded while the previous dump was still in progress.
- disc_only_copies can now be managed via SNMP.
- Locks could be left dangling if the user process crashed in a critical situation.
- The compiled code for the example programs does not reside in mnesia/ebin anymore. It can be found in the mnesia/examples directory instead.
1.31.3 Incompatibilities
- Mnesia 2.1 is not bug compatible with earlier releases; see the chapter Fixed Bugs and malfunctions about changed behaviors.
1.31.4 Known bugs and problems
- Mnesia does not detect if two nodes are sharing the same Mnesia directory. When two nodes are using the same Mnesia directory the result is totally unpredictable.
- mnesia:restore/1 is not implemented.
Own Id: OTP-1736.
- Mnemosyne queries should not be used on disc-only tables since the optimizer can't handle such tables.
1.32 Mnesia 2.0.2
1.32.1 Improvements and new features
The performance of the Mnemosyne catalog is improved. There are now some parameters per table available for tuning. Two functions are introduced for this:
mnemosyne_catalog:set_parameter(Table, Name, Value)
mnemosyne_catalog:get_parameter(Table, Name)

Both return the present value. They may change in future releases! The possible Names are:

Name              Values               Default  Description
do_local_upd      yes | no             yes      Collect statistics for this table on this node
min_upd_interval  integer (seconds)    10       Minimum allowed interval between two updates
upd_limit         percent              10       New statistics are collected when more than upd_limit % of the table is updated
max_wait          integer (millisec)   1000     Maximum time to wait for the initial call from the optimizer to the catalog server

Mnemosyne configuration parameters

1.32.2 Fixed Bugs and malfunctions
- Checkpoints could under some circumstances become inconsistent, if the involved tables were updated during the activation of a checkpoint. This could lead to inconsistent backups and databases.
Own Id: OTP-1733.
- Secondary indices were not handled correctly; mnesia:index_read/3 could return an incorrect result.
Own Id: OTP-2083.
- The mnesia_locker and mnesia_tm processes did under some circumstances crash if tables were updated while their definitions were changed in schema transactions.
Own Id: OTP-1734.
- Problems that occurred when applications tried to run schema transactions directly after Mnesia was started (but before the schema was merged), have been fixed. Now Mnesia first merges its schema with the schema on other nodes before it starts completely.
- The Mnemosyne catalog function (which gathers statistics for the query optimizer) could stop working in some situations on disk-less nodes. This is corrected.
Own Id: OTP-1982.
Aux Id: seq 466.
1.32.3 Incompatibilities
- The default behavior of Mnesia's event handler has been slightly changed. An {inconsistent_database, Context, Node} event will not automatically cause mnesia:set_master_node to be invoked anymore. However, an error is still reported to the error_logger. For advanced users the opportunity to recover from an inconsistent database remains, by explicit use of the function mnesia:set_master_node.
1.32.4 Known bugs and problems
- Mnesia does not detect if two nodes are sharing the same Mnesia directory. When two nodes are using the same Mnesia directory anything may happen. The result is totally unpredictable.
- mnesia:restore/1 is not implemented.
Own Id: OTP-1736.
- Mnemosyne queries should not be used on disc-only tables since the optimizer can't handle such tables.
1.33 Mnesia 2.0.1
1.33.1 Improvements and new features
1.33.2 Fixed Bugs and malfunctions
- Customized backup modules suffered from a bug which caused an update to the Opaque data structure to be lost. The Opaque data is a part of the Mnesia backup callback interface. However, Mnesia's default backup module did work despite the bug.
- Due to a timing bug, Mnesia could crash at startup if tables were accessed before the schema had been merged. The error message looked like:
** FATAL ** mnesia_init crashed: {no_exists, {chanRegPpchTable, storage_type}}
The earlier bugfix made in Mnesia 1.3 did fix one timing bug, but it turned out that there was yet another timing bug :-(.
Aux Id: seq 384.
- A reference to the non-existent Mnesia module mnesia_reconfig has been removed from the configuration file mnesia.app.
Own Id: OTP-1888.
- Enhanced synchronization of asynchronous dirty writes that are performed on non-local tables. The symptom of the bug was that a dirty read could read the table before a previous dirty update had taken effect.
Own Id: OTP-1731.
1.33.3 Incompatibilities
1.33.4 Known bugs and problems
- Checkpoints may, under some circumstances, be inconsistent if the involved tables are updated during the activation of checkpoint.
Own Id: OTP-1733.
- mnesia_locker may, under some circumstances, crash if tables are updated while their definitions are being changed in schema transactions.
Own Id: OTP-1734.
- Mnesia does not detect if two nodes are sharing the same Mnesia directory. When two nodes are using the same Mnesia directory anything may happen. The result is totally unpredictable.
- mnesia:restore/1 is not implemented.
Own Id: OTP-1736.
1.34 Mnesia 2.0
The release notes describe the difference between version 2.0 and version 1.3.2 of Mnesia. 2.0 is classified as a major release of Mnesia due to changes in the internal database format on disc; see the chapter about incompatibilities for further details.
1.34.1 Improvements and new features
- It is now possible to have read only tables. By default a table is open for both read and write access, but if the property {access_mode, read_only} is given to mnesia:create_table/2 the table will be read only. The access mode may be changed later with the new function mnesia:change_table_access/2. Normally at startup, Mnesia will primarily load its tables from other nodes where a replica is already active, but for those, possibly large, tables that are set to be read only, the local replica will always be loaded from disc without the risk of introducing inconsistency into the database. The currently allowed access mode of a table may be read with the function mnesia:table_info/2 (see the sketch after this list).
- The load order of tables may now be controlled. By default all tables are given the load order priority 0, but if the property {load_order, Integer} is given to mnesia:create_table/2 it will be set to the stated priority. The tables with the highest load order priority will be loaded first. The load order may later be changed with the new function mnesia:change_table_load_order/2. In systems with many and/or large tables, applications now have the opportunity to gain early availability of important tables by ensuring that they are loaded early. The load order between tables with the same load order priority is still undefined. The current load order priority of a table may be read with the function mnesia:table_info/2.
- A new configuration parameter has been introduced. It is called max_wait_for_decision. By default it is set to the atom infinity, which implies that if Mnesia upon startup encounters a "heavyweight transaction" whose outcome is unclear, the local Mnesia will wait until Mnesia is started on some, or possibly all, of the other nodes that were involved in the interrupted transaction. This is a very rare situation, but when/if it happens, Mnesia will not guess whether the transaction on the other nodes was committed or aborted; it will wait until it knows the outcome and then act accordingly.
However, it is now possible to force Mnesia to finish the transaction even if its outcome is unclear for the moment. After max_wait_for_decision milliseconds, Mnesia will commit/abort the transaction and continue with the startup. This may lead to the transaction committing on some nodes and aborting on others. If the schema was updated in the transaction, the inconsistency may be fatal.
- Mnesia tries to guarantee that the database is always consistent, but there are a few situations when inconsistency can be introduced:
- When two Erlang nodes lose contact due to a communication failure. On both nodes Mnesia will think that the other node is stopped, and its applications may possibly continue to update the tables on both nodes in parallel while the nodes are unconnected. The database has then possibly become inconsistent, and Mnesia will detect that either when the Erlang nodes regain contact with each other or at startup (on one of the two nodes).
- When the application has fooled Mnesia into making a wrong transaction decision by setting max_wait_for_decision to something other than infinity. The database will become inconsistent. Mnesia will detect this later, when the remaining nodes (where conflicting transaction decisions have been made) start.
- When the application has deliberately forced Mnesia to load a table with mnesia:force_load_table/1. The database could possibly become inconsistent, but the effect on Mnesia will be minimal.
- When Mnesia detects that the database is possibly inconsistent, a newly introduced event will be generated: {inconsistent_database, Context, Node}. The default handling of that event will pick a MasterNode from mnesia:system_info(db_nodes) and invoke mnesia:set_master_node([MasterNode]).
- The normal table load algorithm loads replicas depending on whether Mnesia has had the opportunity to perform updates on tables on other nodes while Mnesia was stopped on the local node. This behavior may now be overridden by the new function mnesia:set_master_node(Table, MasterNodes). At startup Mnesia will primarily load replicas from the MasterNodes that are defined for a table. However, if there are no MasterNodes defined for the table, the normal table load algorithm will apply. It is also possible to set the same master node(s) for all tables by invoking the function mnesia:set_master_node(MasterNodes). The master nodes of a table can be read with the function mnesia:table_info(Tab, master_nodes). A list of all local tables with at least one defined master node can be read with the function mnesia:system_info(master_node_tables). The master node feature is reset by invoking mnesia:set_master_node/1,2 with the empty list [].
- Mnesia now has the ability to set a master node.
Own Id: OTP-1456.
Aux Id: HA52431.
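A sketch combining some of the features above; the rates table is hypothetical:

%% Create a read-only table that is loaded before the default
%% priority-0 tables, and pin its load source to a master node.
{atomic, ok} =
    mnesia:create_table(rates,
                        [{disc_copies, [node()]},
                         {access_mode, read_only},
                         {load_order, 10}]),
ok = mnesia:set_master_node(rates, [node()]).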
1.34.2 Fixed Bugs and malfunctions
- mnesia:system_info(db_nodes) returns the nodes which make up the persistent database. Disc-less nodes are only included in the list of nodes if they explicitly have been added to the schema, e.g. with mnesia:add_table_copy/3. The function can be invoked even if Mnesia is not yet running. Three bugs related to db_nodes have been fixed:
- The semantics was poorly described.
- It returned the wrong result on disc-less nodes.
- Under some circumstances the storage_type for the schema was unknown, which made it impossible to create tables from such a node.
Own Id: OTP-1790.
Aux Id: seq 348.
- Mnesia crashed if it was stopped during the load of an SNMP-table.
Own Id: OTP-1732, OTP-1650.
- Transactions acquiring sticky locks could hang indefinitely if another node already had the lock and crashed during the hand over of the lock.
Own Id: OTP-1730.
- Spurious {mnesia_down, Node} messages were sent to application processes after they had committed a transaction.
Own Id: OTP-1735.
- The configuration parameter dump_time_threshold was ignored.
Own Id: OTP-1789.
Aux Id: seq 353.
- ram_copies replicas were not automatically loaded at startup.
Own Id: OTP-1856.
Aux Id: seq 411.
- mnesia:force_load_table/1 would load the local replica if it was invoked before the schema merge had been completed.
- The database could become inconsistent if the application process crashed while committing its transaction. The transaction recovery has been enhanced.
- Mnesia now copes with partitioned networks.
1.34.3 Incompatibilities
Mnesia 2.0 is primarily developed for OTP R3, but is still backward compatible with the OTP R1D and OTP R2D platforms.
The internal database format on disc has been changed in order to cope with the new features that have been introduced. This implies a special upgrade procedure:
- First of all, a full backup must be performed with the old Mnesia system.
- Then the backup must be installed as fallback, preferably by the new version of Mnesia.
- At last the new version of Mnesia may be started.
1.34.4 Known bugs and problems
- Mnesia does not detect if two nodes are sharing the same Mnesia directory. When two nodes are using the same Mnesia directory anything may happen. The result is totally unpredictable.
- mnesia:restore/1 is not implemented.
1.35 Mnesia 1.3.2
1.35.1 Improvements and new features
1.35.2 Fixed Bugs and malfunctions
- When Mnesia went down while a transaction was committing on a node involved in a distributed transaction, the locks could remain indefinitely. This serious bug was introduced in Mnesia 1.3.1.
- Local load of ram_copies tables which do not have any disc_copies or disc_only_copies replicas is now immediate. They are either loaded from another active replica, or locally if no replica is active.
- Backups of ram_copies tables were inconsistent when ram_overrides_dump was set to false.
- Backups of tables which were updated during the backup were inconsistent.
1.36 Mnesia 1.3.1
1.36.1 Improvements and new features
- Changes of replica storage type (with mnesia:change_table_copy_type/3) no longer require that all nodes with schema on disc are up and running. The storage type may be changed from ram_copies to disc_copies or from disc_copies to disc_only_copies, but all other changes have the same restrictions as before.
1.36.2 Fixed Bugs and malfunctions
- A few bugs related to release of locks when Mnesia (on other nodes) went down, have been fixed. The symptom of the bugs was an infinite hang.
- A bug related to the hand over of sticky locks has been fixed. The symptom of the bug was indefinite hanging.
- mnesia:dirty_update_counter/2 did not work for snmp tables. The symptom of the error was:
{{case_clause,update_counter}, {mnesia_snmp_hook,update,[update_counter,<0.58.0>,1,[1]]}}
Own Id: OTP-1725.
1.36.3 Incompatibilities
1.36.4 Known bugs and problems
1.37 Mnesia 1.3
This release is a minor bugfix release and the release notes describe the difference between version 1.3 and version 1.2.3 of Mnesia.
1.37.1 Improvements and new features
1.37.2 Fixed Bugs and malfunctions
- Due to a timing bug, Mnesia could crash at startup under some circumstances when it exchanged definitions of tables that were created on remote nodes while the local node was down. The error message looked like:
** FATAL ** mnesia_init crashed: {no_exists, {eve_node_info, storage_type}}
Aux Id: HA55473.
1.37.3 Incompatibilities
1.37.4 Known bugs and problems
1.38 Mnesia 1.2.3
1.38.1 Improvements and new features
1.38.2 Fixed Bugs and malfunctions
- Due to a timing bug when creating a table (mnesia:create_table/2), Mnesia could crash under some circumstances. The error message looked like:
** FATAL ** mnesia_init crashed: {no_exists, {ftmFaultState, storage_type}}
1.38.3 Incompatibilities
1.38.4 Known bugs and problems
See notes about release 1.2.2.
1.39 Mnesia 1.2.2
1.39.1 Improvements and new features
1.39.2 Fixed Bugs and malfunctions
- It was not possible to change the storage type of the schema table (from ram_copies to disc_copies) if it was performed from a remote node using mnesia:change_table_copy_type/3. Now the function may be invoked from any node running Mnesia.
- While the storage type of the schema table was being changed from ram_copies to disc_copies (with mnesia:change_table_copy_type/3), Mnesia entered a short period of internal inconsistency. If other ongoing transactions ended during that period, their outcome was logged to disc before the decision log was initiated. When this situation occurred it caused Mnesia to crash.
1.39.3 Incompatibilities
1.39.4 Known bugs and problems
See notes about release 1.2.1.
1.40 Mnesia 1.2.1
1.40.1 Improvements and new features
1.40.2 Fixed Bugs and malfunctions
- Mnesia should be able to repair its log files after file system crashes. This is normally done by the disk_log module, but if the log file is so badly damaged that disk_log does not regard it as a log file, Mnesia will now delete the file and create a fresh one. This only happens if the configuration parameter auto_repair is set to true; otherwise no attempt to repair the log file is made.
Aux Id: HA49139.
- Ambiguous storage types for tables are not allowed. In earlier releases Mnesia did not check this when tables were created with mnesia:create_table/2; now Mnesia rejects the creation of such tables. The missing check may have led to confusing behavior in many situations for tables with ambiguous storage types: the tables may have become inconsistent, not loaded at all, loaded on some nodes but not on all nodes, etc.
Aux Id: HA52368.
- Tables with the local_content property set to true were not handled properly at startup. The most obvious effect was that these tables were not loaded on some nodes.
Aux Id: HA52368.
1.40.3 Incompatibilities
1.40.4 Known bugs and problems
1.41 Mnesia 1.2
1.41.1 Improvements and new features
1.41.2 Fixed Bugs and malfunctions
- Fatal error when updating schema at Mnesia startup.
Own Id: OTP-1441.
- The definition of a newly created table was not propagated to all nodes if some of them were down when the table was created. Nodes that were not intended to hold a replica of their own never received the table definition once they were started. From the application's point of view the table seemed not to exist on the nodes that lacked the table definition.
- Incompatibility bug fixed. The bug made it impossible to install older backups as fallback if they contained SNMP tables.
1.41.3 Incompatibilities
1.41.4 Known bugs and problems
See notes about release 1.1.1.
1.42 Mnesia 1.1.1
This section describes the changes made to Mnesia in the 1.1.1 version of Mnesia. This release is a minor upgrade from 1.1.
1.42.1 Improvements and new features
1.42.2 Fixed Bugs and malfunctions
- Mis-spelling of backup format version corrected. The bug made it impossible to install older backups as fallback.
1.42.3 Incompatibilities
1.42.4 Known bugs and problems
1.44 Mnesia 1.1
This release is a normal release for general use and it comes with full documentation. The release notes describe the difference between version 1.1 and version 1.0 of Mnesia. 1.1 is a minor release, but the storage format on disc has been changed. In order to use databases created with older versions of Mnesia, a full backup file must be created with the old version of Mnesia. The backup must be installed as fallback by the new version of Mnesia. Then the new version of Mnesia may be started.
1.44.1 Improvements and new features
- Logging of "heavy weight transactions" has been optimized.
1.44.2 Fixed Bugs and malfunctions
- Cleanup after deletion of SNMP tables and deactivation of checkpoints was not performed thoroughly.
Own Id: OTP-1367.
1.44.3 Incompatibilities
As mentioned above, the storage format on disc has been changed.
1.44.4 Known bugs and problems
1.45 Mnesia 1.0
This special version of Mnesia is intended to be used by the GPRS project only. It is released without any new documentation besides these release notes. The release notes describe the difference between version 1.0 and version 0.80 of Mnesia. 1.0 is a major release, and in order to use databases created with older versions of Mnesia, a full backup file must be created with the old version of Mnesia. The backup must be installed as fallback by the new version of Mnesia. Then the new version of Mnesia may be started.
1.45.1 Improvements and new features
1.45.1.1 Enhanced concept of schema and db_nodes
The notion of db_nodes has been extended. Now Mnesia is able to run on disc-less nodes as well as regular nodes that utilize the disc.
The schema table may, as other tables, reside on one or more nodes. The storage type of the schema table may either be disc_copies or ram_copies (not disc_only_copies). At startup Mnesia uses its schema to determine which other nodes it should try to establish contact with. If any of the other nodes are already started, the starting node merges its table definitions with the table definitions brought from the other nodes. This also applies to the definition of the schema table itself. The application parameter extra_db_nodes contains a list of nodes which Mnesia should also establish contact with (besides the ones found in the schema). The default value is the empty list [].
The application parameter schema_location controls where Mnesia looks for its schema. The parameter may be one of the following atoms:
disc
- Mandatory disc. The schema is assumed to be located in the Mnesia directory. If the schema cannot be found, Mnesia refuses to start. This was the old behavior.
ram
- Mandatory ram. The schema resides in ram only. At startup a tiny new schema is generated. This default schema contains just the definition of the schema table and only resides on the local node. Since no other nodes are found in the default schema, the configuration parameter extra_db_nodes must be used in order to let the node share its table definitions with other nodes. (The extra_db_nodes parameter may also be used on disc-full nodes.)
opt_disc
- Optional disc. The schema may reside on either disc or ram. If the schema is found on disc, Mnesia starts as a disc-full node (the storage type of the schema table is disc_copies). If no schema is found on disc, Mnesia starts as a disc-less node (the storage type of the schema table is ram_copies). When schema_location is set to opt_disc, the function mnesia:change_table_copy_type/3 may be used to change the storage type of the schema. The default is opt_disc.
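For illustration, a sketch of starting a disc-less node and later making it disc-full; the node names are invented and the flags follow the configuration parameter syntax described further below:

    erl -sname b -mnesia schema_location ram -mnesia extra_db_nodes '[kjetil@gprs]'

    %% Later, in the Erlang shell, the node may be made disc-full by
    %% changing the storage type of the schema table:
    mnesia:change_table_copy_type(schema, node(), disc_copies).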
The functions mnesia:add_table_copy/3 and mnesia:del_table_copy/2 can be used to add and delete replicas of the schema table. Adding a node to the list of nodes where the schema is replicated affects two things. First, it allows other tables to be replicated to this node. Secondly, it causes Mnesia to try to contact the node at startup of disc-full nodes.
If the storage type of the schema is ram_copies, Mnesia will not use the disc on that particular node. Disc usage is enabled by changing the storage type of the table schema to disc_copies.
The schema table is not created with mnesia:create_table/2 as normal tables are. New schemas are created explicitly with mnesia:create_schema/1 or implicitly by starting Mnesia without a disc resident schema. Whenever a table (including the schema table) is created, it is assigned its own unique cookie. At startup, when the Mnesia systems on different nodes connect to each other, they exchange table definitions and the table definitions are merged.
During the merge procedure Mnesia performs a sanity test to ensure that the table definitions are compatible with each other. If a table exists on several nodes, the cookie must be the same, otherwise Mnesia will shut down one of the nodes. This unfortunate situation occurs if a table has been created on two nodes independently of each other while they were disconnected. To solve the problem, one of the tables must be deleted (as the cookies differ, they are regarded as two different tables even if they happen to have the same name).
Merging different versions of the schema table does not always require the cookies to be the same. If the storage type of the schema table is disc_copies, the cookie is immutable, and all other db_nodes must have the same cookie. But if the storage type of the schema is ram_copies, its cookie can be replaced with a cookie from another node (ram_copies or disc_copies). Cookie replacement during a merge of the schema table definition is performed each time a RAM node connects to another node.
The functions mnesia:add_db_node/1 and mnesia:del_db_node/3 have been removed from the API. Adding and deleting db_nodes is performed as described above.
mnesia:system_info(schema_location) and mnesia:system_info(extra_db_nodes) may be used to determine the actual values of schema_location and extra_db_nodes respectively. mnesia:system_info(use_dir) may be used to determine whether Mnesia is actually using the Mnesia directory or not. use_dir may be determined even before Mnesia is started. The function mnesia:info/0 may now be used to print out some system information even before Mnesia is started. When Mnesia is started the function prints out more information.
Transactions which update the definition of a table require that Mnesia is started on all nodes where the storage type of the schema is disc_copies. All replicas of the table on these nodes must also be loaded.
There are a few exceptions to these availability rules. Tables may be created and new replicas may be added without all disc-full nodes being started. New replicas may be added without all other replicas of the table being loaded; one other loaded replica is sufficient.
The internal representation of the schema cookie, schema version and db_nodes has been changed, and so has their representation in backup files. This affects the function mnesia:traverse_backup/4,6 slightly. Now the definition of the schema table is represented in the same manner as the definitions of other tables. In the backup this means a tuple {schema, schema, TableDef} instead of {schema, cookie, Cookie}, {schema, version, Version} and {schema, db_nodes, DbNodes}. Now all tables (including the schema table) have their own cookie and version. The db_nodes are found in the lists of ram_copies and disc_copies nodes in the tuple containing the definition of the schema table.
1.45.1.2 New concept of handling Mnesia events
As Mnesia has evolved to conform to the application concept, the mnesia_user process has been replaced with a gen_event server.
In various situations Mnesia generates events. There are several categories of events. First, there are system events, which are important events that serious Mnesia applications should take an interest in. The system events are currently:
{mnesia_up, Node}
- This means that Mnesia has been started on a node. Node is the name of the node. By default this event is ignored.
{mnesia_down, Node}
- Mnesia has been stopped on a node. Node is the name of the node. By default this event is ignored.
{mnesia_checkpoint_activated, Checkpoint}
- A checkpoint with the name Checkpoint has been activated and the current node is involved in the checkpoint. Checkpoints may be activated explicitly with mnesia:activate_checkpoint/1 or implicitly at backup, when adding table replicas, at internal transfer of data between nodes, etc. By default this event is ignored.
{mnesia_checkpoint_deactivated, Checkpoint}
- A checkpoint with the name Checkpoint has been deactivated and the current node was involved in the checkpoint. Checkpoints may be deactivated explicitly with mnesia:deactivate/1 or implicitly when the last replica of a table (involved in the checkpoint) becomes unavailable, e.g. at node down. By default this event is ignored.
{mnesia_overload, Details}
- Mnesia on the current node is overloaded and the application ought to do something about it.
One example of a typical overload situation is when the application is performing more updates on disc resident tables than Mnesia is able to handle. Ignoring this kind of overload may lead to a situation where the disc space is exhausted (regardless of the size of the tables stored on disc). Each update is appended to the transaction log, which is occasionally dumped to the table files, depending on how it is configured. The table file storage is more compact than the transaction log storage, especially if the same record is updated many times. If the thresholds for dumping the transaction log have been reached before the previous dump has finished, an overload event is triggered.
Another typical overload situation is when the transaction manager cannot commit transactions at the same pace as the applications are performing updates of disc resident tables. When this happens the message queue of the transaction manager will continue to grow until the memory is exhausted or the load decreases. The same problem may occur for dirty updates.
The overload is detected locally on the current node, but the cause may be on another node. Application processes may cause heavy loads on other nodes if any of the tables are residing on other nodes (replicated or not). By default this event is reported to the error_logger.
{mnesia_fatal, Format, Args, BinaryCore}
- Mnesia has encountered a fatal error and will terminate imminently. The reason for the fatal error is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default it is sent to the error_logger. BinaryCore is a binary containing a summary of Mnesia's internal state when the fatal error was encountered. By default the binary is written to a file with a unique name in the current directory. On RAM nodes the core is ignored.
{mnesia_info, Format, Args}
- This means that Mnesia has detected something that may be interesting when debugging the system. What is interesting is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default this event is printed with io:format/2.
{mnesia_error, Format, Args}
- This means that Mnesia has encountered an error. The reason for the error is explained in Format and Args, which may be given as input to io:format/2 or sent to the error_logger. By default this event is reported to the error_logger.
{mnesia_user, Event}
- This means that some application has invoked the function mnesia:report_event(Event). Event may be any Erlang data structure. When tracing a system of Mnesia applications it is useful to be able to interleave Mnesia's own events with application related events that give information about the application context. Whenever the application starts some new demanding Mnesia activity, or enters a new and interesting phase in its execution, it may be a good idea to use mnesia:report_event/1.
Another category of events are table events, which are events related to table updates. The table events are tuples typically resembling {Oper, Record, TransId}, where Oper is the operation performed, Record is the record involved in the operation and TransId is the identity of the transaction performing the operation. The various table related events that may occur are:
{write, NewRecord, TransId}
- A new record has been written. NewRecord contains the new value of the record.
{delete_object, OldRecord, TransId}
- A record has possibly been deleted with mnesia:delete_object/1. OldRecord contains the value of the old record, as given as an argument by the application. Note that other records with the same key may remain in the table if it is of type bag.
{delete, {Tab, Key}, TransId}
- One or more records have possibly been deleted. All records with the key Key in the table Tab have been deleted.
The function mnesia:subscribe_config_change/0 has been replaced with the functions mnesia:subscribe(EventCategory) and mnesia:unsubscribe(EventCategory). EventCategory may either be the atom system or the tuple {table, Tab}. The subscribe functions activate a subscription of events. The events are delivered as messages to the process evaluating the mnesia:subscribe/1 function. The syntax is {mnesia_system_event, Event} for system events and {mnesia_table_event, Event} for table events. What the system events and table events mean is described above.
All system events are always subscribed to by Mnesia's gen_event handler. The default gen_event handler is mnesia_event, but it may be changed with the application parameter event_module. The value of this parameter must be the name of a module implementing a complete handler, as specified by the gen_event module in stdlib. mnesia:system_info(subscribers) and mnesia:table_info(Tab, subscribers) may be used to determine which processes subscribe to various events.
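As an illustration, a minimal sketch of a subscriber process; the module name, the table name person and the reactions to the events are all invented for the example:

    -module(mnesia_watcher).
    -export([start/0]).

    %% Sketch: subscribe to system events and to table events for a
    %% table named person, then print whatever arrives.
    start() ->
        mnesia:subscribe(system),
        mnesia:subscribe({table, person}),
        loop().

    loop() ->
        receive
            {mnesia_system_event, {mnesia_overload, Details}} ->
                %% A real application ought to regulate its update
                %% intensity when this event arrives.
                io:format("overloaded: ~p~n", [Details]),
                loop();
            {mnesia_table_event, {write, NewRecord, _TransId}} ->
                io:format("written: ~p~n", [NewRecord]),
                loop();
            Other ->
                io:format("event: ~p~n", [Other]),
                loop()
        end.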
1.45.1.3 Enhanced debugging support
The new subscription mechanism enables the building of powerful debugging and configuration tools (like the soon-to-be-released Xmnesia).
mnesia:debug/0 and mnesia:verbose/0 have been replaced with mnesia:set_debug_level(Level). Level is an atom which regulates the debugging level of Mnesia. The following debug levels are supported:
none
- No trace output at all. This is the default.
verbose
- Activates tracing of important debug events. These debug events generate {mnesia_info, Format, Args} system events. Processes may subscribe to these events with mnesia:subscribe/1. The events are always sent to Mnesia's event handler.
debug
- Activates all events at the verbose level plus a full trace of all debug events. These debug events generate {mnesia_info, Format, Args} system events. Processes may subscribe to these events with mnesia:subscribe/1. The events are always sent to Mnesia's event handler. On this debug level Mnesia's event handler starts subscribing to updates of the schema table.
trace
- Activates all events at the debug level. On this debug level Mnesia's event handler starts subscribing to updates on all Mnesia tables. This level is intended only for debugging small toy systems, since many large events may be generated.
false
- is an alias for none.
true
- is an alias for debug.
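For example (the chosen level is arbitrary):

    %% Activate tracing of important debug events; they are delivered
    %% as {mnesia_info, Format, Args} system events.
    mnesia:set_debug_level(verbose).
    %% The same effect at startup: erl -mnesia debug verbose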
1.45.1.4 Enhanced error codes
Mnesia functions return {error, Reason} or {aborted, Reason} when they fail. This is still true, but the Reason is now in many cases a tuple instead of a cryptic atom. The first field in the tuple tells what kind of error it is and the rest of the tuple contains details about the context where the error occurred. For example, if a table does not exist, {no_exists, TableName} is returned instead of just the atom no_exists. The function mnesia:error_description/1 accepts the old atom style and the new tuple style. {error, Reason} and {aborted, Reason} tuples are also accepted.
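A small sketch of how the new tuple style reasons might be pattern matched; the table name and key are invented:

    case mnesia:transaction(fun() -> mnesia:read({person, kalle}) end) of
        {atomic, Recs} ->
            Recs;
        {aborted, {no_exists, person}} ->
            %% The tuple style reason tells us which table is missing.
            io:format("~p~n",
                      [mnesia:error_description({aborted, {no_exists, person}})])
    end.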
1.45.1.5 Conformance to the application concept
As Mnesia has evolved to conform with the OTP application concept, the process architecture of Mnesia has been restructured. For processes which are not performance critical, gen_server, gen_event and gen_fsm are now used. Supervisors are used to supervise Mnesia's internal long-lived processes. The startup procedure now conforms with the supervisor concept. A side effect is poorer error codes at startup.
mnesia:start/0 will now return the cryptic tuple {error,{shutdown, {mnesia_sup,start,[normal,[]]}}} when Mnesia startup fails. Use -boot start_sasl as an argument to the erl script in order to get a little bit more information from start failures.
Mnesia now negotiates at startup with Mnesia on other nodes about which message protocol to use. This means that connecting a node running a future release of Mnesia to a node running this release will work fine and will not cause inconsistency because of protocol mismatches. Mnesia is also prepared for code change without stopping Mnesia. (The file representation on disc was already prepared for future format changes.)
1.45.1.6 The transaction concept has been extended
A functional object (Fun) performing operations like:
- mnesia:read/1
- mnesia:wread/1
- mnesia:write/1
- mnesia:s_write/1
- mnesia:delete/1
- mnesia:s_delete/1
- mnesia:delete_object/1
- mnesia:s_delete_object/1
- mnesia:all_keys/1
- mnesia:match_object/1
- mnesia:index_match_object/2
- mnesia:index_read/3
- mnesia:read_lock_table/1
- mnesia:write_lock_table/1
may be sent to the function mnesia:transaction/1,2,3 and will be performed in a transaction context involving mechanisms like locking, logging, replication, checkpoints, subscriptions, commit protocols, etc. This is still true, but the same function may also be evaluated in other contexts.
By sending the same "fun" to the function mnesia:async_dirty(Fun [, Args]) it will be performed in dirty context. The function calls will be mapped to the corresponding dirty functions. This still involves logging, replication and subscriptions, but there is no locking, local transaction storage or commit protocol involved. Checkpoint retainers will be updated, but they will be updated dirty. As for normal mnesia:dirty_* operations, the operations are performed semi-asynchronously: the functions wait for the operation to be performed on one node, but not on the others. If the table resides locally, no waiting for other nodes is involved.
By sending the same "fun" to the function mnesia:sync_dirty(Fun [, Args]) it will be performed in almost the same context as mnesia:async_dirty/1,2. The difference is that the operations are performed synchronously: the caller waits for the updates to be performed on all active replicas. Using sync_dirty is useful for applications that execute on several nodes and want to be sure that an update is performed on the remote nodes before a remote process is spawned or a message is sent to a remote process. It is also useful if the application performs such frequent or voluminous updates that Mnesia becomes overloaded on other nodes.
By sending the same "fun" to the function mnesia:ets(Fun [, Args]) it will be performed in a very raw context. The operations are performed directly on the local ets tables, assuming that the local storage type is ram_copies and that the table is not replicated to other nodes. Subscriptions will not be triggered and checkpoints will not be updated, but it is blindingly fast.
All these activities (transaction, async_dirty, sync_dirty and ets) may be nested. Yes, we do support nested transactions! A nested activity is always automatically upgraded to be of the same kind as the outer one. For example, a "fun" executed with async_dirty inside a transaction will be executed in transaction context.
Locks acquired by nested transactions will not be released until the outermost transaction has ended. The updates performed in a nested transaction will not be committed until the outermost transaction commits.
Mnemosyne queries may be performed in all these activity contexts (transaction, async_dirty, sync_dirty and ets). The ets activity will only work if the table has no indexes.
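To illustrate, a sketch of the same fun evaluated in two different activity contexts; the table person with attributes name and age (records on the form {person, Name, Age}) is invented for the example:

    F = fun() ->
            [P] = mnesia:read({person, kalle}),
            mnesia:write(setelement(3, P, 37))
        end,

    %% Transaction context: locking, logging, replication, commit protocol.
    {atomic, ok} = mnesia:transaction(F),

    %% Dirty context: the same calls are mapped to the corresponding
    %% dirty functions; no locks and no commit protocol.
    ok = mnesia:async_dirty(F).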
1.45.1.7 An alternate commit protocol has been added
Both the new and the old protocols are used. Mnesia selects the most appropriate commit protocol depending on which tables have been involved in the transaction.
The old commit protocol is a very fast protocol, with a simple algorithm for recovery. The drawback is that it only guarantees consistency after recovery for symmetrically replicated tables. If all tables involved in the transaction have the same replica pattern they are regarded as symmetrically replicated. For example, if all tables involved in the transaction have a ram_copies replica on node A and a disc_copies replica on node B, they are symmetrically replicated.
If the tables are asymmetrically replicated, or if the schema table (containing the table definitions) is involved, the new heavy weight protocol is used to commit the transaction. The new protocol ensures consistency for all kinds of table updates. The protocol is able to recover the tables to a consistent state regardless of provocative applications and regardless of when a node crash occurs.
During commit, the new protocol will cause more network messages and disc accesses than the old protocol, but it is safer. At startup after a crash, there may exist transactions in the log whose outcome is unknown. This should be very rare, since the protocol has been deliberately designed to make the period of uncertainty as short as possible. When this rare situation occurs, Mnesia on the recovering node will not be able to start without asking Mnesia on other nodes for the outcome of the transaction, in order to decide whether to commit or abort.
With this new approach, several of the problems described in earlier release notes have disappeared. See below:
- Now Mnesia is able to recover crashed schema transactions.
- Mnesia also guarantees consistency of asymmetrically replicated tables.
- The Mnesia database will remain consistent even if Mnesia is terminated under unfortunate circumstances.
- Mnesia now guarantees consistency on SNMP tables even if the application tries to store records that violate the type definition of the SNMP key.
1.45.1.8 New startup procedure
When Mnesia starts on one node it may come to the conclusion that some of the tables can be loaded from local disc, since no other node can hold a newer replica than the one on the starting node. The start function now returns without loading any tables. The tables are loaded later in the background, allowing early availability of the transaction manager for those tables that have been loaded. The application does not have to wait for all tables to be loaded before it can start.
mnesia:start/0 now returns the atom ok or {error, Reason}. In embedded systems this function is not used; in such systems application:start(mnesia) is used.
A new function mnesia:start(Config) has been introduced. The Config argument is a list of {Name, Val} tuples, where Name is the name of an application parameter and Val is its value. The effect of the new values remains until the Mnesia application is reloaded. After the transient change of application parameters, Mnesia is started with mnesia:start/0 and its return value is returned. For example, mnesia:start([{extra_db_nodes, [svarte_bagge@switchboard]}]) would override any old setting of extra_db_nodes until the Mnesia application was reloaded.
When an application is started it must synchronize with Mnesia to ensure that all the tables it needs to access really have been loaded before it attempts to access them. The function mnesia:wait_for_tables(Tables, Timeout) should be used for this purpose. Note: it is even more important to do this now, since the start function returns earlier, without loading any tables, compared to previous releases. Do not forget this, otherwise the application will be less robust.
Each Mnesia table should be owned by only one module. This module is responsible for the life cycle of the table. When the application is installed for the first time in a network of nodes, this module must create the necessary tables. In each subsequent release of the application, the module owning the table is responsible for performing changes of the table definition if required (e.g. invoking mnesia:transform/3 to transform the table).
When the application ceases to exist, it must be uninstalled from the network and the module that owns the tables must delete them. It might also be a good idea to let this module export functions that provide customized interfaces to some of the Mnesia functions (e.g. a special wait_for_tables that only waits for certain hard coded tables). A sketch of such a table-owner module is given below.
Other applications that need direct access to tables owned by a module in another application must declare a dependency on that application in their .app files, in order to allow the code change, start and stop algorithms in the application_controller and supervisor modules to work.
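A minimal sketch of a table-owner module along these lines; the module name, record, replica nodes and timeout are invented for the example:

    -module(person_db).
    -export([create_tables/1, wait_for_tables/0]).

    -record(person, {name, age, address}).

    %% Create the tables this module owns; called once, when the
    %% application is installed for the first time in the network.
    create_tables(Nodes) ->
        {atomic, ok} =
            mnesia:create_table(person,
                                [{attributes, record_info(fields, person)},
                                 {disc_copies, Nodes}]),
        ok.

    %% Customized wait_for_tables that only waits for the tables
    %% owned by this module.
    wait_for_tables() ->
        mnesia:wait_for_tables([person], 30000).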
1.45.1.9 mnesia:dump_tables/1
mnesia:dump_tables/1 is now performed as a transaction and returns {atomic, ok} or {aborted, Reason}, as the other functions performed in transaction context do.
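For example (the table name is invented):

    %% Dump the ram based table person to disc.
    {atomic, ok} = mnesia:dump_tables([person]).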
1.45.1.10 mnesia:dirty_update_counter/2
mnesia:dirty_update_counter(Counter, Incr) now returns the new counter value instead of the atom ok.
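For example (a sketch; the counter name is invented, and the Counter argument is assumed to be a {Tab, Key} pair):

    %% Increment the counter and get the new value back.
    NewValue = mnesia:dirty_update_counter({page_hits, index_page}, 1).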
1.45.1.11 New dirty functions have been introduced
- mnesia:dirty_all_keys/1
- mnesia:dirty_index_read/3
- mnesia:dirty_index_match/2
They perform the same work as the corresponding functions without the 'dirty_' prefix, but in dirty context.
1.45.1.12 Easier use of indexes
Attribute names are now also allowed for specifying index positions. Index positions may still be given as field positions in the tuple corresponding to the record definition.
The functions mnesia:match_object/1 and mnesia:dirty_match_object/1 automatically make use of indexes if any exist. No heuristics are performed in order to select the best index. Use Mnemosyne if this is an issue.
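A sketch of both styles, assuming a table of person records with the attributes name, age and address (the table definition is invented):

    %% The index position may now be given as an attribute name:
    mnesia:create_table(person,
                        [{attributes, [name, age, address]},
                         {index, [age]}]),

    %% Match operations make use of the index automatically:
    mnesia:dirty_match_object({person, '_', 36, '_'}).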
1.45.1.13 Enhanced control of thresholds for dump of transaction log
The operations found in the transaction log will occasionally be performed on the actual dets tables. The frequency of the dumps is configurable with two application parameters. One is dump_log_time_threshold, an integer that specifies the dump log interval in milliseconds (it defaults to 3 minutes). If a dump has not been performed within dump_log_time_threshold milliseconds, a new dump is performed regardless of how many writes have been performed.
The other is dump_log_write_threshold, an integer specifying how many writes to the transaction log are allowed before a new dump of the log is performed. It defaults to 100 log writes.
Both thresholds may be configured independently of each other. When one of them is exceeded a dump will be performed.
As explained elsewhere, situations may occur in which the application performs updates faster than Mnesia is able to propagate them from the log to the table files. When this occurs, overload events are generated. If availability is important, subscribe to these events and regulate the update intensity of your application! If you ignore this, you may exhaust the disc space.
The old application parameter -mnesia_dump_log_interval has been replaced by the two parameters mentioned above.
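For example, both thresholds might be set at startup like this (the values are invented):

    erl -mnesia dump_log_write_threshold 1000 -mnesia dump_log_time_threshold 60000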
The function mnesia:change_dump_log_config/1 has been removed from the API.
1.45.1.14 Tables must have at least arity 3
Mnesia no longer allows tables with arity 2. All tables must have at least one extra attribute besides the key. An extra check is now performed to disallow the creation of tables with an arity of less than 3. In earlier releases the table creation succeeded, but since Mnesia was not designed for such peculiar tables, strange things happened and records were lost.
Check your schema for such tables if you load an old backup file.
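To illustrate (the table names are invented and the exact abort reason is left unspecified):

    %% Rejected: the records would have arity 2 (the key only).
    {aborted, _Reason} = mnesia:create_table(bad, [{attributes, [key]}]),

    %% Accepted: a key plus one extra attribute gives arity 3.
    {atomic, ok} = mnesia:create_table(good, [{attributes, [key, val]}]).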
1.45.1.15 Non-blocking emulator
As mentioned in earlier release notes, match operations in ets tables will block the emulator for a long time if the tables are large. A new BIF which performs partial matching of tables has now been introduced. Mnesia uses the new BIF in match operations in order to avoid blocking the emulator with time-consuming matches in large ets tables.
1.45.1.16 Configuration parameters
As Mnesia has evolved and conformed to the application concept, the style of the configuration parameters has been changed. Mnesia is now configured by arguments to the erl script, using the syntax stated by the application module in stdlib. Below is an example of a parameter in the old style:
- -mnesia_dir /my/favourite/dir
Parameters should now resemble:
- -mnesia dir "/my/favourite/dir"
- -mnesia extra_db_nodes '[kjetil@gprs, uffe@roof]'
The following is a brief summary of the new configuration parameters:
- debug ::= none | verbose | debug | trace
- schema_location ::= ram | disc | opt_disc
- extra_db_nodes ::= list of nodes
- event_module ::= name of gen_event handler module
- dump_log_write_threshold ::= number of log writes
- dump_log_time_threshold ::= number of milliseconds
1.45.1.17 Fixed Bugs and malfunctions
The following Own Ids and Aux Ids have been solved:
- Own Id: OTP-1132.
Aux Id: HA37457.
- Own Id: OTP-1164.
- Own Id: OTP-1182.
Aux Id: HA40250.
- Own Id: OTP-1183 (OTP-1152).
- Own Id: OTP-1303.
Aux Id: HA44214.
1.45.1.18 Incompatibilities
See the chapter regarding improvements.
1.45.1.19 Known bugs and problems
- The documentation describes the 0.80 version of Mnesia. Keep this in mind whenever you read something about startup algorithms, db_nodes, transactions etc.
- Mnesia does not cope with network partitioning. If two running Mnesia nodes lose contact due to malfunctioning communication between the hosts anything may happen. The result is totally unpredictable.
- Mnesia does not detect if two nodes are sharing the same Mnesia directory. When two nodes are using the same Mnesia directory anything may happen. The result is totally unpredictable.
- mnesia:restore/1 is not implemented.
1.46 Mnesia 0.X
See the historical archives.