<div dir="ltr">In my experience with many simultaneous SSL connections in Erlang, what surprised me was that, by default, Erlang is configured to allow reusable/continuable SSL sessions. This means that an ETS table containing the session info can grow fairly large. That feature can be disabled, however, which helps a great deal with memory usage.<div><br></div><div>- Erik</div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-07-03 20:10 GMT+02:00 Dmitry Kolesnikov <span dir="ltr"><<a href="mailto:dmkolesnikov@gmail.com" target="_blank">dmkolesnikov@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
<br>
There have been posts on the Internet about 2M connections per node.<br>
<a href="http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections" rel="noreferrer" target="_blank">http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections</a><br>
<br>
I scaled it up to 1M connections on AWS a long time ago. These numbers are “easy” to achieve out of the box for a system with low traffic. I had the following traffic pattern: 20 connections per second (Poisson arrival rate); connections stayed idle, with clients showing a 3-second think time (Poisson rate).<br>
<br>
Memory was the major concern all along. An out-of-the-box deployment on an AWS large instance could handle 300K connections before OOM. I used the hibernate feature aggressively to reduce memory demand. In the end, I used a 2xlarge node to run the system.<br>
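For anyone unfamiliar with the hibernate feature Dmitry mentions, a minimal sketch of the two usual forms follows; `route/2` and `wake/1` are hypothetical names standing in for whatever the connection process actually does:<br>
<br>
```erlang
%% Sketch only: hibernating idle connection processes to shrink
%% their heaps between messages.

%% 1. A gen_server callback can return the hibernate hint, so the
%%    process garbage-collects and compacts while waiting:
handle_info(Msg, State) ->
    NewState = route(Msg, State),        %% route/2 is hypothetical
    {noreply, NewState, hibernate}.

%% 2. A plain process can call proc_lib:hibernate/3, which discards
%%    the stack and resumes in the given function on the next message:
idle(State) ->
    proc_lib:hibernate(?MODULE, wake, [State]).  %% wake/1 is hypothetical
```
<br>
The trade-off is extra garbage-collection work on every wake-up, which is why it suits mostly-idle connections like the ones described above.<br>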
<br>
Latency was not an issue; message scheduling remained good while CPU utilization stayed below 70%. However, I reduced logging to error level and made the nodes responsible for message routing only. I learned that you need to keep CPU and memory in balance to get acceptable results, but this depends on your traffic.<br>
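(On modern OTP releases, 21 and later, capping the log level as described would look like the one-liner below; OTP 18, mentioned later in this mail, predates the `logger` module, so there the equivalent was done through `error_logger` or the logging library in use:)<br>
<br>
```erlang
%% Drop everything below error level at the primary filter,
%% so info/debug reports are discarded cheaply (OTP 21+).
ok = logger:set_primary_config(level, error).
```
<br>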
<br>
Unfortunately, that set-up excluded SSL. I was not able to scale SSL up to 1M due to its heavier memory demand (I used OTP 18 for that problem). Unfortunately, I do not recall the exact number of concurrent SSL connections (I cannot find the data sheets). It was >>20K but <<100M for sure.<br>
<br>
Frankly speaking, I would design the system around ELB to terminate SSL traffic and use the weighted load-balancing feature to scale ELB connectivity, rather than chase high concurrency on a single node, for the sake of operational simplicity.<br>
<br>
I hope I shed some light on your problem.<br>
<br>
Best Regards,<br>
Dmitry<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
> On Jul 2, 2017, at 12:56 PM, Paul Peregud <<a href="mailto:paulperegud@gmail.com">paulperegud@gmail.com</a>> wrote:<br>
><br>
> I'm looking at SSL + websocket + cowboy scaling with regards to number<br>
> of simultaneous connections. I'm interested in relationship between<br>
> number of connections and latency of message delivery. Is there any<br>
> data on performance of SSL implementation in BEAM?<br>
><br>
> Best regards,<br>
> Paul Peregud<br>
> _______________________________________________<br>
> erlang-questions mailing list<br>
> <a href="mailto:erlang-questions@erlang.org">erlang-questions@erlang.org</a><br>
> <a href="http://erlang.org/mailman/listinfo/erlang-questions" rel="noreferrer" target="_blank">http://erlang.org/mailman/listinfo/erlang-questions</a><br>
<br>
</div></div></blockquote></div><br></div>
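<div>A minimal sketch of disabling the session reuse Erik describes at the top of the thread; the listener options (port, certificate paths) are placeholders, and the app-environment knobs should be checked against the ssl documentation for your OTP version:</div>
<br>

```erlang
%% Sketch: a TLS listener that declines session reuse, so the
%% server-side session ETS table stays small. Paths/port are
%% illustrative placeholders.
{ok, ListenSocket} = ssl:listen(8443,
    [{certfile, "cert.pem"},
     {keyfile,  "key.pem"},
     {reuse_sessions, false}]),   %% do not agree to reuse sessions

%% The session table can also be bounded via the ssl application
%% environment, e.g. in sys.config:
%%   {ssl, [{session_lifetime, 60},            %% seconds
%%          {session_cache_server_max, 1000}]} %% max cached sessions
```
<br>
<div>Either approach trades repeated full handshakes for a bounded session cache, which matches the memory-versus-CPU balance discussed above.</div>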