[erlang-questions] Erlang HTTP client libraries- pros/cons
Sat Sep 2 08:06:43 CEST 2017
On Sat, Sep 2, 2017 at 5:22 PM Max Lapshin <> wrote:
> HTTP client over external port is the most expensive way in all terms:
> 1) programming. It is REALLY hard to debug. Was it launched under
> valgrind? If not, then there are at least 5 horrible memory leaks and
> memory corruptions per screen of code that haven't been discovered yet.
And yet you wrote https://github.com/maxlapshin/csv_reader (which I use
btw). Running port executables protects the VM from any bugs in the C code,
at least. It's almost like Erlang in that regard! And Valgrind was used
during development.
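That isolation argument can be sketched in a few lines. This is a minimal, hypothetical example (the module name and helper are mine, not from any of the libraries discussed): the external binary runs as a port, and if it crashes, the owning process just receives an exit_status message while the VM keeps running.

```erlang
-module(port_sketch).
-export([run/1]).

%% Spawn an external binary as a port, collect its stdout, and return
%% it once the program exits. A crash in the C program only closes the
%% port; the Erlang VM itself is unaffected.
run(Cmd) ->
    Port = open_port({spawn_executable, Cmd},
                     [binary, exit_status, use_stdio]),
    collect(Port, <<>>).

collect(Port, Acc) ->
    receive
        {Port, {data, Bin}} ->
            collect(Port, <<Acc/binary, Bin/binary>>);
        {Port, {exit_status, 0}} ->
            {ok, Acc};
        {Port, {exit_status, Code}} ->
            %% The external process died; we observe it as data, not a VM crash.
            {error, {exit_status, Code}}
    end.
```

`port_sketch:run("/bin/false")` returns `{error, {exit_status, 1}}` rather than bringing anything down, which is the whole point.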
> 2) deploying. Deploying a NIF is a pain because you need a build farm
> for each architecture you target. For example, we deploy flussonic
> under suse 10, debian 7, debian8/ubuntu16, arm7, arm8, windows, elbrus 2k.
> All these platforms are different and you cannot rely on a cross compiler.
> Good luck building repeatable infrastructure for compiling under all of
> them.
> If you have Erlang code, you can compile under mac and launch under
> windows, because the OTP team has already done all the dirty work. Just
> read their manual and follow it.
This I will concede can be an issue with NIFs. It's not an issue for my use
case, however, and can be overcome by using things like Docker.
> 3) speed. It will be slow in all terms. High latency due to multiple OS
> process scheduling: read in one process, then write to a pipe and send
> further. Do you think that a linux pipe is a "good and optimized" thing? It
> is not.
> What if we speak about high traffic? 1 gigabit of input will become
> several gigabits of _useless_ data copying.
> So I do not understand what libcurl can give that cannot be achieved in
> plain Erlang. It is definitely not about high traffic speed, because plain
> vanilla lhttpc can download 10 Gbit/s over fiber without any extra effort.
I think perhaps you have tunnel vision based on your specific HTTP use
case. I'm not sure what you mean by "in all terms". In
https://github.com/lpgauth/httpc_bench (d)lhttpc peaks at 93,178 reqs/s
whereas Katipo peaks at 107,900. You may not require the HTTP compatibility
that *22K* commits to libcurl provide. Your custom version of lhttpc may
well be able to sustain 10 Gbit/s over fiber; that is not my use case, so
I've no idea whether Katipo could do the same.
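For what it's worth, a reqs/s figure like those above is just calls timed over a wall clock. Here is a hypothetical micro-harness of my own (not part of httpc_bench, which also varies concurrency and client pools) showing where such a number comes from:

```erlang
-module(bench_sketch).
-export([reqs_per_sec/2]).

%% Time N sequential calls of Fun and report calls per second.
%% Real benchmarks run many concurrent workers; this only illustrates
%% the arithmetic behind a reqs/s figure.
reqs_per_sec(Fun, N) when N > 0 ->
    {Micros, ok} = timer:tc(fun() -> run(Fun, N) end),
    N * 1000000 / max(Micros, 1).

run(_Fun, 0) -> ok;
run(Fun, N) -> Fun(), run(Fun, N - 1).
```

Point it at any zero-arity request fun, e.g. `bench_sketch:reqs_per_sec(fun() -> do_one_request() end, 10000)`.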
> On Thu, Aug 31, 2017 at 10:22 AM, Paul Oliver <> wrote:
>> On Thu, Aug 31, 2017 at 6:20 PM Taras Halturin <>
>>> I think Max means that you chose the most expensive way to deal with
>>> it; it's not about the efficiency of HTTP handling but about the
>>> efficiency of achieving your aim :)
>> The most expensive way in terms of what? If not speed, do you mean
>> development effort? Given that the aim is to use libcurl, the choice is
>> a port executable or some sort of NIF. When using a port executable I don't
>> have to worry about it crashing my VM, and all I pay is the price of port
>> communication. If I use a NIF, I have to make sure that my NIF code and
>> the code in libcurl don't crash my VM. That's a lot more development time
>> and risk.
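The crash-isolation trade-off here can be shown with a small sketch (module name hypothetical): run the risky work in a separate, monitored process, so a crash comes back as a message instead of taking the caller down. This is essentially the property a port-owning process gives you for free, and exactly what a misbehaving NIF forfeits, since a NIF crash kills the whole VM.

```erlang
-module(isolate_sketch).
-export([call/1]).

%% Run Fun in its own monitored process. A normal result is wrapped in
%% {result, Value}; any crash arrives as a 'DOWN' message with the
%% failure reason, and the calling process (and VM) carry on.
call(Fun) ->
    {Pid, Ref} = spawn_monitor(fun() -> exit({result, Fun()}) end),
    receive
        {'DOWN', Ref, process, Pid, {result, Value}} ->
            {ok, Value};
        {'DOWN', Ref, process, Pid, Reason} ->
            {error, Reason}
    end.
```

With a NIF there is no equivalent boundary: the C code runs on the scheduler thread inside the VM, so the monitoring trick above cannot save you.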