Dynamic Node Additions

martin j logan martin@REDACTED
Wed Dec 11 19:25:38 CET 2002


Hello, 
    I am not sure how the real-time constraints on your system would
mesh with this solution. Where possible, we have chosen to design
applications so that they are dynamically scalable: knowledge of other
producers/consumers is discovered at run time, without configuration.
The way the system works is that a new node must first find a single
node that is already a member of the cluster it is to join.
This is accomplished with DNS SRV records. Once it is a member of the
appropriate cluster it relies on the "resource_discovery" application,
included in the .rel files of all Vail dynamic discovery apps. The way
this dynamic discovery works is that each node has, at startup, a list
of the generic names of the resources it provides to the network,
coupled with the tokens, usually pid() or {atom(), node()}, used to
access those resources. It also has the generic names of the remote
resources that the node itself needs in order to function. The cluster
(the cluster being nodes()) is then given this information via an
async message.
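The join-and-announce step above could be sketched in Erlang roughly as
follows. The SRV record name, the seed node naming convention, the
registered process name resource_discovery, and the message shape are
all assumptions for illustration, not the actual Vail internals:

```erlang
-module(discovery_sketch).
-export([join_and_announce/2]).

%% LocalHave :: [{ResourceName, Token}]  e.g. [{client, self()}]
%% LocalWant :: [ResourceName]           e.g. [server]
join_and_announce(LocalHave, LocalWant) ->
    %% 1. Find one node already in the cluster via a DNS SRV record
    %%    (the record name here is a made-up placeholder).
    [{_Prio, _Weight, _Port, Target} | _] =
        inet_res:lookup("_erlang._tcp.cluster.example.com", in, srv),
    Seed = list_to_atom("node@" ++ Target),
    pong = net_adm:ping(Seed),   %% after this, nodes() is the cluster
    %% 2. Tell every node what we have and what we want (async).
    Msg = {resources, node(), LocalHave, LocalWant},
    [{resource_discovery, N} ! Msg || N <- nodes()],
    ok.
```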
The nodes that require any of the resources the new node possesses cache
the appropriate tokens the new node provided. The nodes that contain
resources the new node needs in order to function respond to the new
node with the tokens for the resources that were requested. The basic
case for mnesia is similar: a node comes up with a blank schema and
finds the nodes that are already running with the correct schema via
resource discovery. The new node then does a change_config with one of
the discovered nodes, and then adds table copies. In this manner it
becomes one of the replicating members of that resource pool.
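The mnesia part of this can be done with standard mnesia calls; a
minimal sketch, where the table list and the ram_copies storage type
are illustrative choices:

```erlang
%% Nodes is the list of schema-carrying nodes found via resource
%% discovery; Tables is the list of tables to replicate locally.
join_mnesia(Nodes, Tables) ->
    ok = mnesia:start(),   %% starts with a blank schema
    %% Connect this node to the nodes that already hold the schema.
    {ok, _} = mnesia:change_config(extra_db_nodes, Nodes),
    %% Pull a replica of each table onto this node.
    [{atomic, ok} = mnesia:add_table_copy(T, node(), ram_copies)
     || T <- Tables],
    ok.
```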

Example: say there are two clients and one server. The clients would
include in their resource discovery lists something like this.

% the list of resources the client has
[{client, self()}]

%The list of resources that the client wants to be aware of
[server]

A server would have something like this.
% the list of resources the server has, in this case its token is a 
% {registered name, node()}
[{server, {app_server, some_app@REDACTED}}]

%The list of resources that the server wants to be aware of
[]

So when you bring up a new client, the server would see that the new
node wants to be aware of servers and hand off its token to that client.

If a new server were brought up, the client would see that it has been
messaged by a server and cache that server's token.
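One way each node could handle this exchange is a registered process
with a receive loop like the one below; again, the process name and
message shape are assumptions made up for the sketch:

```erlang
%% State holds this node's have/want lists plus a cache of discovered
%% tokens. Runs registered as resource_discovery on every node.
loop(#{have := Have, want := Want, cache := Cache} = State) ->
    receive
        {resources, FromNode, TheirHave, TheirWant} ->
            %% Cache tokens for resources we want...
            NewCache = [T || {Name, _} = T <- TheirHave,
                             lists:member(Name, Want)] ++ Cache,
            %% ...and reply with tokens for resources they want.
            case [T || {Name, _} = T <- Have,
                       lists:member(Name, TheirWant)] of
                []    -> ok;
                Reply -> {resource_discovery, FromNode} !
                             {resources, node(), Reply, []}
            end,
            loop(State#{cache := NewCache})
    end.
```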

So at this point the client would know about all servers in the cluster
and be able to use them as the application programmer sees fit. When a
server is discovered to be no longer there, the app programmer deletes
it from the cache of servers. Perhaps this is too simple a solution for
your application. It seems to work in many of the cases that we have
here and allows for minimal configuration, minimal deployment headache,
and maximal scalability.

Cheers,
Martin 


On Wed, 2002-12-11 at 01:09, DANIESC SCHUTTE wrote:
> Good morning to all the fountains of information, from which I greedily gulp.
> 
> Our current systems looks as follows:
> 
> ( 2 x Solaris 8 x86 Application Servers, 1 x Sparc Solaris 8 DB)  
> 
> As this is a realtime system, we were wondering how additional application server nodes can be added to share the processing load once certain critical levels are reached, and how the upgrading procedure would be done.
> 
> Is there a suggested way of doing this elegantly?  
> 
> (We were looking at downing 1 node, loading the relevant boot scripts etc and then bringing it up again, then downing node 2 doing the same, the caveat however is that every application must be run on at least two nodes, and both those nodes must not go down simultaneously).
> 
> Thank you
> Daniel Schutte
> 
> Danie Schutte
> Phone: +27 - 11 - 203 - 1613
> Mobile: 084-468-3138
> e-Mail: Daniesc@REDACTED
> 




