<div dir="ltr">Hi Garrett,<br><br><div>Thanks very much for the information; that does make a lot of sense. </div><div><br></div><div>I am trying to wrap my head around how large-scale Erlang systems are structured in practice, but I see that it will depend on the application and is only something worth thinking about once/if it becomes a real problem. </div><div><br></div><div>What initially had me thinking about it was availability rather than scale. A lot of what I have read and seen in videos suggests avoiding hot code loading where possible and instead running multiple independent app instances that can be swapped out, one at a time, for new instances (provided the app allows it). </div><div><br></div><div>Thanks for your assistance.</div><div><br></div><div>Regards, </div><br><div class="gmail_quote">On Thu, May 21, 2015 at 1:15 AM Garrett Smith <<a href="mailto:g@rre.tt">g@rre.tt</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Chris,<br>
<br>
On Wed, May 20, 2015 at 2:02 PM, Chris Clark <<a href="mailto:boozelclark@gmail.com" target="_blank">boozelclark@gmail.com</a>> wrote:<br>
> Hi<br>
><br>
> I have an Erlang system that is made up of 3 applications and their<br>
> dependencies. I am trying to decide how to distribute these so that I can<br>
> scale them by running additional instances of whichever app starts to reach<br>
> its performance limits.<br>
><br>
> I've managed to find a lot of info on how to connect Erlang nodes into a<br>
> cluster, but nothing on how to structure a distributed application in<br>
> practice. Does anyone know where I can find some info on this?<br>
><br>
> If, for example, I have three hosts in my cluster, should I:<br>
> A:<br>
> Compile these all into one release and then not start the apps automatically,<br>
> but just start the number of required instances within the single Erlang<br>
> node running on each host?<br>
<br>
A release corresponds to a node and what's in that release should be<br>
driven by the behavior of the node - there may be some linkage with<br>
so-called scalability problems in specific cases, but certainly not in<br>
the general case.<br>
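<br>
For concreteness, a relx-style release section (e.g. in rebar.config)<br>
describing one node that runs all three apps might look roughly like<br>
this - the app names here are made up:<br>
<pre>
%% rebar.config (relx section) - one release describing one kind of node.
%% app_a/app_b/app_c stand in for your three applications.
{relx, [
    {release, {my_system, "0.1.0"},
     [app_a, app_b, app_c, sasl]},
    {dev_mode, false},
    {include_erts, true}
]}.
</pre>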
<br>
As for not starting "all" - this sounds like a misunderstanding - on<br>
your part or my part. Maybe I'm misreading you but it sounds like<br>
you're viewing your "apps" as units of power - like octane - that you<br>
might want to hold in reserve for the time you need to go super fast.<br>
<br>
An app is just a single supervisory tree that provides some set of<br>
functionality within your system (i.e. a running release).<br>
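<br>
As a rough sketch of what that means in code, an app is usually an<br>
application callback module that starts a root supervisor (module and<br>
worker names below are hypothetical):<br>
<pre>
%% app_a_app.erl
-module(app_a_app).
-behaviour(application).
-export([start/2, stop/1]).

%% Called when the node boots the app; returns the root supervisor.
start(_Type, _Args) -> app_a_sup:start_link().
stop(_State) -> ok.

%% app_a_sup.erl
-module(app_a_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() -> supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% One hypothetical worker under a one_for_one supervisor.
init([]) ->
    {ok, {{one_for_one, 5, 10},
          [{app_a_worker, {app_a_worker, start_link, []},
            permanent, 5000, worker, [app_a_worker]}]}}.
</pre>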
<br>
If an "app" in this case can "scale" - then you ought to know how. Is<br>
the constraint CPU, disk, IO - or a host of complex interactions that<br>
you'd never guess in a million years? If and when you understand how<br>
it can scale, look to deploy multiple instances of it per unit of<br>
actual power (CPU, spindle, network interface, etc.) - if that would<br>
even work. Chances are you'll have to rethink something fundamental in<br>
the way you're doing things. That's okay though - it's the sign of a<br>
system evolving under pressure.<br>
<br>
Erlang is outstanding for this.<br>
<br>
> B:<br>
> Create a separate release for each app and then just start a node for each<br>
> instance of each app I want, resulting in multiple Erlang nodes per host. If,<br>
> for example, I just wanted one host, then I would run three nodes on that<br>
> host, one for each app.<br>
<br>
Releases are just ways to package apps and run them in concert in a<br>
VM. If you want one app running on a VM, then do this. Otherwise don't<br>
do this.<br>
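<br>
If you do want one app per node, relx lets you define more than one<br>
release in the same config, so each node runs a single app plus its<br>
dependencies - again a sketch with made-up names:<br>
<pre>
%% One release per app; build and boot whichever nodes you need.
{relx, [
    {release, {app_a_node, "0.1.0"}, [app_a, sasl]},
    {release, {app_b_node, "0.1.0"}, [app_b, sasl]},
    {release, {app_c_node, "0.1.0"}, [app_c, sasl]}
]}.
</pre>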
<br>
> Any guidance or info on the pros and cons of each approach would be<br>
> appreciated.<br>
<br>
Knowing nothing about your application, I'd just start by creating one<br>
release with a complete working system (user facing functionality).<br>
Then see how it behaves at various levels of load (request frequency,<br>
concurrency, volumes, etc.)<br>
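<br>
A few basic things you can check from the shell of a running node<br>
while it's under load (standard BIFs; observer:start() gives a<br>
graphical view of much the same):<br>
<pre>
%% Run in the node's shell (e.g. attached via remsh) under load.
erlang:statistics(run_queue).       %% processes waiting for a scheduler
erlang:memory(total).               %% total bytes allocated by the VM
erlang:system_info(process_count).  %% number of live processes
</pre>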
<br>
Personally, I wouldn't think about scalability at all - and not just<br>
to make a social statement. I'd put the system into production and see<br>
what happens. You'll need to monitor things, but start by monitoring<br>
things from the end user perspective - this will force you to define<br>
meaningful performance targets. When you start to see something bad<br>
happening, study it carefully and take steps to fix the problem.<br>
<br>
When you're at the point you have actual problems (rather than<br>
architectural, which is not an actual thing, much less a problem), you<br>
can post some detailed information here and get some actual help.<br>
<br>
Hope that helps ;)<br>
<br>
Garrett<br>
</blockquote></div></div>