what is a good way to do 'maintaining data distribution'

Sat Apr 12 19:43:31 CEST 2003

>hp> One of my responsibilities is to make sure that all databases
>hp> generated on a few master machines get distributed to other machines
>hp> in a timely and reliable fashion.
>hp> Scale: around 240 machines.
>hp> Dependencies: intertwined.
>There isn't enough information to answer your question precisely, but I'll try.
>1) If you could switch to Mnesia, it would do all the work for you. Some
>sophisticated rules could be handled using Ulf Wiger's `rdbms' package
>(available on the contributions page).
>2) If you must use an SQL database, the simplest solution would be
>database triggers on insert/update/delete that store key information for
>changed records in a log table with timestamps, plus periodic updates of
>remote nodes using the Erlang `odbc' and `rpc' modules.
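The quoted suggestion (2) -- triggers logging changed keys, with remote nodes periodically polling the log -- could be sketched roughly like this. This is only an illustration using SQLite in Python (the table, column, and trigger names are made up, and a real setup would go through `odbc'/`rpc' as suggested):

```python
import sqlite3

# In-memory database standing in for the master SQL database.
master = sqlite3.connect(":memory:")
master.executescript("""
CREATE TABLE records (key TEXT PRIMARY KEY, value TEXT);

-- Log table holding the keys of changed records plus a timestamp.
CREATE TABLE change_log (
    key TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Triggers record every insert/update/delete in the log table.
CREATE TRIGGER log_ins AFTER INSERT ON records
    BEGIN INSERT INTO change_log (key) VALUES (NEW.key); END;
CREATE TRIGGER log_upd AFTER UPDATE ON records
    BEGIN INSERT INTO change_log (key) VALUES (NEW.key); END;
CREATE TRIGGER log_del AFTER DELETE ON records
    BEGIN INSERT INTO change_log (key) VALUES (OLD.key); END;
""")

master.execute("INSERT INTO records VALUES ('a', '1')")
master.execute("UPDATE records SET value = '2' WHERE key = 'a'")

# A remote node would periodically ask for keys changed since its
# last sync and refetch only those records.
changed = [row[0] for row in
           master.execute("SELECT DISTINCT key FROM change_log")]
print(changed)  # ['a']
```

The point of the log table is that replicas never scan the whole database; they pull only the keys touched since their last poll.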
  Thank you for your input.

  Unfortunately, an SQL database is too slow for our company,
  and the otherwise attractive Mnesia cannot be used for our purpose.
  Our company uses proprietary databases developed internally
  in C++, for speed of access and for performing
  complicated compression calculations.
  (BTW, it is very hard to change something that already exists :-)
   Our company recently rejected a proposal to move
   to ROOT -- a better C++ data framework
   developed at CERN for dealing with massive amounts of data.)
  Fortunately, each database is divided into many individual files.
  As a result, maintaining database synchronization reduces to
  syncing the individual files.
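File-level syncing can be done by comparing checksums and copying only the files that differ. A minimal local sketch in Python (the directory names and helper functions are hypothetical; a real deployment across 240 machines would push the changed files over the network, e.g. with rsync or an Erlang distribution layer):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def file_digest(path: Path) -> str:
    """MD5 of a file's contents, used as a cheap change detector."""
    return hashlib.md5(path.read_bytes()).hexdigest()

def sync_dir(master: Path, replica: Path) -> list:
    """Copy files from master that are missing or changed on replica.

    Returns the names of the files that were copied.
    """
    replica.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in sorted(master.iterdir()):
        if not src.is_file():
            continue
        dst = replica / src.name
        if not dst.exists() or file_digest(src) != file_digest(dst):
            shutil.copy2(src, dst)   # copy2 preserves timestamps
            copied.append(src.name)
    return copied

# Tiny demo with temporary directories.
with tempfile.TemporaryDirectory() as tmp:
    m, r = Path(tmp, "master"), Path(tmp, "replica")
    m.mkdir()
    (m / "db1.dat").write_text("compressed data v1")
    (m / "db2.dat").write_text("more data")
    first = sync_dir(m, r)    # first pass copies everything
    second = sync_dir(m, r)   # second pass finds nothing changed
    print(first, second)      # ['db1.dat', 'db2.dat'] []
```

Because only changed files move, a periodic sweep like this scales with the amount of change rather than the total database size.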

More information about the erlang-questions mailing list