[Original] Re: Solving the right problem
Wed Nov 6 09:18:04 CET 2019
Nice questions. Thank you.
HTTP(S) was assumed for historical reasons which may no longer be valid. Frankly, it was probably less about the specific properties of the protocol than about the ports, which were the first to be opened for outgoing traffic by panic-struck workplaces, and the level of packet inspection that accompanied that. Neither the problem nor the solutions I foresee are actually stateless by nature, that much is true. Tracking cursor state on the server allows for much more intelligent choices and proactive action, ensuring that each next bit of data is delivered to the client via the most opportune path from wherever it is mastered, cached or calculated.
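To make that concrete, here is a minimal sketch of what server-side cursor tracking could look like. All names (CursorTracker, register_node, the latency figures) are hypothetical illustrations, not part of the actual design:

```python
# Hypothetical sketch: server-side cursor tracking, assuming a registry of
# candidate nodes with per-client latency estimates. All names are invented.

class CursorTracker:
    """Tracks each client's cursor so the server can route the next
    chunk from the most opportune node (master, cache, or compute)."""

    def __init__(self):
        self.cursors = {}   # client_id -> current cursor position
        self.nodes = {}     # node_id -> {"latency_ms": float, "has": set of positions}

    def register_node(self, node_id, latency_ms, positions):
        self.nodes[node_id] = {"latency_ms": latency_ms, "has": set(positions)}

    def advance(self, client_id, position):
        """Record the client's new cursor position and proactively pick a
        source for the *next* chunk."""
        self.cursors[client_id] = position
        return self.best_source(position + 1)

    def best_source(self, position):
        """Lowest-latency node that already masters or caches the chunk."""
        candidates = [
            (meta["latency_ms"], node_id)
            for node_id, meta in self.nodes.items()
            if position in meta["has"]
        ]
        return min(candidates)[1] if candidates else None

tracker = CursorTracker()
tracker.register_node("edge-cache-1", 12.0, [1, 2, 3])
tracker.register_node("master", 80.0, range(1, 100))
print(tracker.advance("client-a", 2))   # chunk 3 is cached nearby: "edge-cache-1"
```

The point of keeping the cursor on the server is exactly this last call: the server knows what the client will want next and can choose the source before the client asks.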
Having said that, I'd think maintaining sessions too low down the network stack might at some point start working against the type of cursor mobility I wish to have. It remains to be seen. Ideally, the time it takes to set up and tear down network-level connections between individual servers and individual clients can also be factored into how the work is distributed across nodes with available computing and network capacity.
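The trade-off I have in mind could be modelled roughly as follows. This is only a sketch under assumed numbers, not a real scheduler: a node with a warm connection may beat a faster node that still has to pay handshake cost.

```python
# Hypothetical cost model: prefer a node with an open connection unless a
# cold node's faster link amortizes its setup cost. Numbers are invented.

def delivery_cost(setup_ms, transfer_ms, connection_open):
    """Estimated time to deliver a chunk from one node."""
    return transfer_ms + (0.0 if connection_open else setup_ms)

def pick_node(nodes):
    """nodes: list of (name, setup_ms, transfer_ms, connection_open)."""
    return min(nodes, key=lambda n: delivery_cost(n[1], n[2], n[3]))[0]

nodes = [
    ("warm-far", 0.0, 40.0, True),     # connection already open, slower link
    ("cold-near", 35.0, 10.0, False),  # faster link, but must handshake first
]
print(pick_node(nodes))  # "warm-far": 40 ms beats 35 + 10 = 45 ms
```

With a cheaper handshake (say gRPC reusing an HTTP/2 connection) the same model would flip toward the nearer node, which is why connection lifecycle cost belongs in the placement decision.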
Network latency is critically important, but only from the perspective of how it impacts user experience. The project's design approach aims to address this by progressively rolling out servers that are physically close (in network terms) to the clients they serve, and then using smart pre-fetch logic to get the data there before the user needs it. It's impossible to make up for network latency with faster processing, but people take several orders of magnitude longer to consume a chunk of data than computers do, and that creates the opportunity to catch up on latency.
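That "catch-up" idea can be sketched in a few lines: while the user consumes chunk N, the server-side logic fetches chunk N+1 in the background, so only the first fetch is felt as latency. The timings and function names below are illustrative, not from the actual system:

```python
# Sketch of latency hiding via prefetch, assuming the user takes far longer
# to consume a chunk than the network takes to fetch the next one.

import threading, time

def fetch(chunk_id):
    time.sleep(0.05)                # simulated per-chunk network latency
    return f"chunk-{chunk_id}"

def read_with_prefetch(chunk_ids, consume_seconds=0.2):
    """Fetch chunk N+1 while the user 'reads' chunk N, so only the very
    first fetch is experienced as a wait."""
    results = []
    nxt = {"data": None}

    def prefetch(cid):
        nxt["data"] = fetch(cid)

    current = fetch(chunk_ids[0])   # the only unavoidable wait
    for i in range(len(chunk_ids)):
        t = None
        if i + 1 < len(chunk_ids):
            t = threading.Thread(target=prefetch, args=(chunk_ids[i + 1],))
            t.start()
        time.sleep(consume_seconds)  # user consumes the current chunk
        results.append(current)
        if t:
            t.join()                 # prefetch finished during reading
            current = nxt["data"]
    return results

print(read_with_prefetch([1, 2, 3]))
```

Because consumption time (0.2 s here) dwarfs fetch time (0.05 s), every fetch after the first completes entirely in the user's "reading shadow".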
I'd love to discuss alternatives if you'd like to offer some. Making the most of ready-made components is crucial, but not if they embody conflicting design decisions that end up keeping the project from attaining its overall goals.
On 2019/11/06, 01:16, "james" <james@REDACTED> wrote:
My question would be - why HTTP/HTTPS?
I know 'everybody loves stateless' but actually that's not always a good idea.
If you were to design based on gRPC and drop the connection when the
user is not actively working, how many concurrent connections will you
have, and how much latency can you wear?
More information about the erlang-questions mailing list