From icfp.publicity@REDACTED Tue Nov 1 02:59:20 2016 From: icfp.publicity@REDACTED (Lindsey Kuper) Date: Mon, 31 Oct 2016 18:59:20 -0700 Subject: [erlang-questions] Call for Workshop Proposals: ICFP 2017 Message-ID: <5817f6f8473a9_54a3fd6a9465be46723d@landin.local.mail> CALL FOR WORKSHOP AND CO-LOCATED EVENT PROPOSALS ICFP 2017 22nd ACM SIGPLAN International Conference on Functional Programming September 3-9, 2017 Oxford, United Kingdom http://conf.researchr.org/home/icfp-2017 The 22nd ACM SIGPLAN International Conference on Functional Programming will be held in Oxford, United Kingdom on September 3-9, 2017. ICFP provides a forum for researchers and developers to hear about the latest work on the design, implementations, principles, and uses of functional programming. Proposals are invited for workshops (and other co-located events, such as tutorials) to be affiliated with ICFP 2017 and sponsored by SIGPLAN. These events should be less formal and more focused than ICFP itself, include sessions that enable interaction among the attendees, and foster the exchange of new ideas. The preference is for one-day events, but other schedules can also be considered. The workshops are scheduled to occur on September 3 (the day before ICFP) and September 7-9 (the three days after ICFP). ---------------------------------------------------------------------- Submission details Deadline for submission: November 19, 2016 Notification of acceptance: December 18, 2016 Prospective organizers of workshops or other co-located events are invited to submit a completed workshop proposal form in plain text format to the ICFP 2017 workshop co-chairs (David Christiansen and Andres Loeh), via email to icfp2017-workshops@REDACTED by November 19, 2016. (For proposals of co-located events other than workshops, please fill in the workshop proposal form and just leave blank any sections that do not apply.) Please note that this is a firm deadline. 
Organizers will be notified whether their event proposal is accepted by December 18, 2016, and, depending on the event, successful organizers will be asked to produce a final report after the event has taken place that is suitable for publication in SIGPLAN Notices. The proposal form is available at:

http://www.icfpconference.org/icfp2017-files/icfp17-workshops-form.txt

Further information about SIGPLAN sponsorship is available at:

http://www.sigplan.org/Resources/Proposals/Sponsored/

----------------------------------------------------------------------

Selection committee

The proposals will be evaluated by a committee comprising the following members of the ICFP 2017 organizing committee, together with the members of the SIGPLAN executive committee.

Workshop Co-Chair: David Christiansen (Indiana University)
Workshop Co-Chair: Andres Loeh (Well-Typed LLP)
General Chair: Jeremy Gibbons (University of Oxford)
Program Chair: Mark Jones (Portland State University)

----------------------------------------------------------------------

Further information

Any queries should be addressed to the workshop co-chairs (David Christiansen and Andres Loeh), via email to icfp2017-workshops@REDACTED

From hm@REDACTED Tue Nov 1 16:25:05 2016
From: hm@REDACTED (Håkan Mattsson)
Date: Tue, 1 Nov 2016 16:25:05 +0100
Subject: [erlang-questions] Truncated stacktrace
Message-ID: 

Hi,

Anybody who knows why the stacktrace returned by process_info(Pid, current_stacktrace) is truncated to 8 items? Is it due to some performance consideration?

In order to get more usable stacktraces I was thinking of doing this little fix in our system. Do you foresee any drawbacks with it?

Btw, 8 happens to be the default of the system flag backtrace_depth.
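For reference, that system flag can be read and changed at runtime, while the cap on current_stacktrace is hard-coded, which is the point of this thread; a minimal sketch in shell-expression form (these are standard BIFs, but the exact frame count observed will depend on the emulator version):

```erlang
%% backtrace_depth controls how deep exception stacktraces go and
%% can be changed at runtime; system_flag/2 returns the old value.
OldDepth = erlang:system_flag(backtrace_depth, 30),
30 = erlang:system_info(backtrace_depth),
%% On an unpatched emulator, current_stacktrace ignores the flag
%% and still returns at most 8 frames:
{current_stacktrace, Frames} =
    erlang:process_info(self(), current_stacktrace),
erlang:system_flag(backtrace_depth, OldDepth).  % restore the default
```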
diff --git a/otp/erts/emulator/beam/erl_bif_info.c b/otp/erts/emulator/beam/erl_bif_info.c
index d7f1e2d..2dc310f 100755
--- a/otp/erts/emulator/beam/erl_bif_info.c
+++ b/otp/erts/emulator/beam/erl_bif_info.c
@@ -1607,7 +1607,7 @@ current_stacktrace(Process* p, Process* rp, Eterm** hpp)
     Eterm mfa;
     Eterm res = NIL;
 
-    depth = 8;
+    depth = erts_backtrace_depth;
     sz = offsetof(struct StackTrace, trace) + sizeof(BeamInstr *)*depth;
     s = (struct StackTrace *) erts_alloc(ERTS_ALC_T_TMP, sz);
     s->depth = 0;

/Håkan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vans_163@REDACTED Tue Nov 1 17:09:19 2016
From: vans_163@REDACTED (Vans S)
Date: Tue, 1 Nov 2016 16:09:19 +0000 (UTC)
Subject: [erlang-questions] Truncated stacktrace
In-Reply-To: 
References: 
Message-ID: <1046527484.2190330.1478016559168@mail.yahoo.com>

Increase the stack trace depth. 8 is default. No serious drawbacks I can think of. Indeed 8 is often too little.

On Tuesday, November 1, 2016 11:25 AM, Håkan Mattsson wrote:

Hi,

Anybody who knows why the stacktrace returned by process_info(Pid, current_stacktrace) is truncated to 8 items? Is it due to some performance consideration?

In order to get more usable stacktraces I was thinking of doing this little fix in our system. Do you foresee any drawbacks with it?

Btw, 8 happens to be the default of the system flag backtrace_depth.
diff --git a/otp/erts/emulator/beam/erl_bif_info.c b/otp/erts/emulator/beam/erl_bif_info.c
index d7f1e2d..2dc310f 100755
--- a/otp/erts/emulator/beam/erl_bif_info.c
+++ b/otp/erts/emulator/beam/erl_bif_info.c
@@ -1607,7 +1607,7 @@ current_stacktrace(Process* p, Process* rp, Eterm** hpp)
     Eterm mfa;
     Eterm res = NIL;
 
-    depth = 8;
+    depth = erts_backtrace_depth;
     sz = offsetof(struct StackTrace, trace) + sizeof(BeamInstr *)*depth;
     s = (struct StackTrace *) erts_alloc(ERTS_ALC_T_TMP, sz);
     s->depth = 0;

/Håkan

_______________________________________________
erlang-questions mailing list
erlang-questions@REDACTED
http://erlang.org/mailman/listinfo/erlang-questions

From rtrlists@REDACTED Tue Nov 1 19:15:17 2016
From: rtrlists@REDACTED (Robert Raschke)
Date: Tue, 1 Nov 2016 19:15:17 +0100
Subject: [erlang-questions] Truncated stacktrace
In-Reply-To: <1046527484.2190330.1478016559168@mail.yahoo.com>
References: <1046527484.2190330.1478016559168@mail.yahoo.com>
Message-ID: 

Personally, I interpret deep stacktraces to be a result of frameworkitis or poorly factored code, as something to avoid or fix. But that's probably an unpopular opinion to have these days.

Cheers,
Robby

On 1 Nov 2016 17:09, "Vans S" wrote:

> Increase the stack trace depth. 8 is default. No serious drawbacks I can
> think of. Indeed 8 is often too little.
>
> On Tuesday, November 1, 2016 11:25 AM, Håkan Mattsson
> wrote:
>
> Hi,
>
> Anybody who knows why the stacktrace returned by process_info(Pid,
> current_stacktrace) is truncated to 8 items? Is it due to some performance
> consideration?
>
> In order to get more usable stacktraces I was thinking of doing this
> little fix in our system. Do you foresee any drawbacks with it?
>
> Btw, 8 happens to be the default of the system flag backtrace_depth.
>
> diff --git a/otp/erts/emulator/beam/erl_bif_info.c
> b/otp/erts/emulator/beam/erl_bif_info.c
> index d7f1e2d..2dc310f 100755
> --- a/otp/erts/emulator/beam/erl_bif_info.c
> +++ b/otp/erts/emulator/beam/erl_bif_info.c
> @@ -1607,7 +1607,7 @@ current_stacktrace(Process* p, Process* rp, Eterm**
> hpp)
>     Eterm mfa;
>     Eterm res = NIL;
>
> -    depth = 8;
> +    depth = erts_backtrace_depth;
>     sz = offsetof(struct StackTrace, trace) + sizeof(BeamInstr *)*depth;
>     s = (struct StackTrace *) erts_alloc(ERTS_ALC_T_TMP, sz);
>     s->depth = 0;
>
>
> /Håkan
>
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tomas.abrahamsson@REDACTED Tue Nov 1 23:22:35 2016
From: tomas.abrahamsson@REDACTED (Tomas Abrahamsson)
Date: Tue, 1 Nov 2016 23:22:35 +0100
Subject: [erlang-questions] Truncated stacktrace
In-Reply-To: 
References: <1046527484.2190330.1478016559168@mail.yahoo.com>
Message-ID: 

On Tue, Nov 1, 2016 at 7:15 PM, Robert Raschke wrote:

> Personally, I interpret deep stacktraces to be a result of frameworkitis
> or poorly factored code, as something to avoid or fix. But that's probably
> an unpopular opinion to have these days.
>

I've been bitten by truncated stacktraces a few times in code recursing over data structures, for instance traversing some data type descriptions or in parse transforms. Granted, these may not be everyday tasks, but debugging becomes so much more difficult with truncation. So if there is no drawback, I'd welcome upping the default. So far, I've never found that I wanted to see fewer levels of stack when debugging.
BRs
Tomas

> On 1 Nov 2016 17:09, "Vans S" wrote:
>
>> Increase the stack trace depth. 8 is default. No serious drawbacks I can
>> think of. Indeed 8 is often too little.
>>
>> On Tuesday, November 1, 2016 11:25 AM, Håkan Mattsson
>> wrote:
>>
>> Hi,
>>
>> Anybody who knows why the stacktrace returned by process_info(Pid,
>> current_stacktrace) is truncated to 8 items? Is it due to some performance
>> consideration?
>>
>> In order to get more usable stacktraces I was thinking of doing this
>> little fix in our system. Do you foresee any drawbacks with it?
>>
>> Btw, 8 happens to be the default of the system flag backtrace_depth.
>>
>> diff --git a/otp/erts/emulator/beam/erl_bif_info.c
>> b/otp/erts/emulator/beam/erl_bif_info.c
>> index d7f1e2d..2dc310f 100755
>> --- a/otp/erts/emulator/beam/erl_bif_info.c
>> +++ b/otp/erts/emulator/beam/erl_bif_info.c
>> @@ -1607,7 +1607,7 @@ current_stacktrace(Process* p, Process* rp, Eterm**
>> hpp)
>>     Eterm mfa;
>>     Eterm res = NIL;
>>
>> -    depth = 8;
>> +    depth = erts_backtrace_depth;
>>     sz = offsetof(struct StackTrace, trace) + sizeof(BeamInstr *)*depth;
>>     s = (struct StackTrace *) erts_alloc(ERTS_ALC_T_TMP, sz);
>>     s->depth = 0;
>>
>> /Håkan
>>
>> _______________________________________________
>> erlang-questions mailing list
>> erlang-questions@REDACTED
>> http://erlang.org/mailman/listinfo/erlang-questions
>> _______________________________________________
>> erlang-questions mailing list
>> erlang-questions@REDACTED
>> http://erlang.org/mailman/listinfo/erlang-questions
>>
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions
>
-------------- next part --------------
An HTML attachment was scrubbed... 
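The truncation discussed in this thread is easy to reproduce with a deliberately deep, non-tail-recursive call chain; a hypothetical sketch (the module and function names are made up for illustration):

```erlang
-module(deep_trace).
-export([demo/0]).

%% Each call keeps a frame on the stack because of the trailing ok,
%% which defeats tail-call optimisation; deep(0) blocks in a receive.
deep(0) -> receive stop -> ok end;
deep(N) -> deep(N - 1), ok.

demo() ->
    Pid = spawn(fun() -> deep(100) end),
    timer:sleep(50),   % let the process reach the receive
    {current_stacktrace, Frames} =
        erlang:process_info(Pid, current_stacktrace),
    Pid ! stop,
    %% On an unpatched emulator length(Frames) is at most 8, even
    %% though roughly 100 deep/1 frames are live on the stack.
    length(Frames).
```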
URL: 

From dn.nhattan@REDACTED Wed Nov 2 05:13:49 2016
From: dn.nhattan@REDACTED (Tan Duong)
Date: Wed, 2 Nov 2016 05:13:49 +0100
Subject: [erlang-questions] Running erlang on cluster
Message-ID: 

Hi,

What is the best practice for running an Erlang application on a cluster, via job management (qsub), so as to achieve good performance, say, similar to a shared-memory, multicore machine? Has anybody tried this before? I think that clusters tend to be transparent to the user, so possibly one doesn't know how many Erlang nodes should be started, isn't it? (There are experiences out there about deploying Erlang on Amazon EC2. But normally users already have control of the cluster in advance.) Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hans.bolinder@REDACTED Wed Nov 2 09:13:28 2016
From: hans.bolinder@REDACTED (Hans Bolinder)
Date: Wed, 2 Nov 2016 08:13:28 +0000
Subject: [erlang-questions] dialyzer error on fun2ms output
In-Reply-To: 
References: 
Message-ID: 

Hi,

> The reason is that dbg:fun2ms generates the match-spec: [{'$1',[],[{message,'$1'}]}]
> But the type as seen in erlang.erl or erts_internal.erl only allows a list or the wildcard atom '_' as the match-spec head.
> -type trace_match_spec() :: [{[term()] | '_' ,[term()],[term()]}].

Thanks. I'll fix the bug by adding atom() to the match head type. One could argue that the matchspec types should be improved, but that's beyond the scope of this fix.

Best regards,
Hans Bolinder, Erlang/OTP team, Ericsson
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aschultz@REDACTED Wed Nov 2 11:23:14 2016
From: aschultz@REDACTED (Andreas Schultz)
Date: Wed, 2 Nov 2016 11:23:14 +0100
Subject: [erlang-questions] HiPE miscompilation of binary matches
Message-ID: <31a25e1f-a387-9280-23b5-fe981d0a4e2e@tpip.net>

Hi,

Either HiPE is messing up binary matches in some cases or I'm not seeing the problem. A reduced version of the code is:

-module(binopt).
-export([test1/1, test2/1, data/0]).
-compile([bin_opt_info]).

data() -> <<50,16,0>>.

test1(<<1:3, 1:1, _:1, 0:1, 0:1, 0:1, _/binary>>) -> 'Case #1';
test1(<<1:3, 1:1, _:1, _:1, _:1, _:1, _/binary>>) -> 'Case #2';
test1(_) -> other.

test2(<<1:3, 1:1, _:1, A:1, B:1, C:1, _/binary>>)
  when A =:= 1; B =:= 1; C =:= 1 -> 'Case #2';
test2(<<1:3, 1:1, _:1, 0:1, 0:1, 0:1, _/binary>>) -> 'Case #1';
test2(_) -> other.

With Erlang 19.1.3 the HiPE-compiled version of this behaves differently than the non-HiPE version:

Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]

Eshell V8.1 (abort with ^G)
1> compile:file("src/binopt", [native, {hipe, o3}]).
{ok,binopt}
2> binopt:test1(binopt:data()).
other

WTF???? Case #2 in test1 should match and I can't see why it shouldn't.

3> binopt:test2(binopt:data()).
'Case #2'

This is as expected.

Non-HiPE:

4> compile:file("src/binopt", []).
{ok,binopt}
6> l(binopt).
{module,binopt}
7> binopt:test1(binopt:data()).
'Case #2'
8> binopt:test2(binopt:data()).
'Case #2'

As expected, both versions return the same result.

So, do I do something wrong here or is this a legitimate HiPE bug?

Andreas

From kostis@REDACTED Wed Nov 2 12:06:50 2016
From: kostis@REDACTED (Kostis Sagonas)
Date: Wed, 2 Nov 2016 12:06:50 +0100
Subject: [erlang-questions] HiPE miscompilation of binary matches
In-Reply-To: <31a25e1f-a387-9280-23b5-fe981d0a4e2e@tpip.net>
References: <31a25e1f-a387-9280-23b5-fe981d0a4e2e@tpip.net>
Message-ID: 

On 11/02/2016 11:23 AM, Andreas Schultz wrote:
> So, do I do something wrong here or is this a legitimate HiPE bug?

Yes, this looks like a legitimate HiPE bug. Will investigate.

Kostis

From spylik@REDACTED Wed Nov 2 12:10:38 2016
From: spylik@REDACTED (Oleksii Semilietov)
Date: Wed, 2 Nov 2016 13:10:38 +0200
Subject: [erlang-questions] mnesia:ets/2 vs direct ets calls
Message-ID: 

Question about mnesia:ets/2 and direct ets calls for mnesia tables.
In the case of lookups (sequential calls from the same node as the ram replica node), what is the benefit of using mnesia:ets instead of a direct ets:lookup call? I have a single writer to the DB (one node of ram type; the other replicas are disc copies, just to persist the data somewhere) and would like to do something like this on the ram node (sequential pseudo-code):

......
X = receive
        #somerecord{id = TransID} = NewData ->
            case ets:lookup(Table, TransID) of
                [#somerecord{date = undefined} = OldData] ->
                    NewDate = erlang:system_time(milli_seconds),
                    mnesia:dirty_write(OldData#somerecord{date = NewDate}),
                    {TransID, NewDate};
                [#somerecord{date = Date}] ->
                    {TransID, Date};
                [] ->
                    NewDate = erlang:system_time(milli_seconds),
                    mnesia:dirty_write(NewData),
                    {TransID, NewDate}
            end;
        OtherReceive ->
            ...blabala....
    end,
some_operation_with_x(X).

Or should I wrap this stuff in something like this?

.....
X = receive
        #somerecord{id = TransID} = NewData ->
            mnesia:ets(fun may_update/3, [NewData, Table, TransID]);
        OtherReceive ->
            ...blabala....
    end,
some_operation_with_x(X).

may_update(NewData, Table, TransID) ->
    case mnesia:read(Table, TransID) of
        [#somerecord{date = undefined} = OldData] ->
            NewDate = erlang:system_time(milli_seconds),
            mnesia:write(OldData#somerecord{date = NewDate}),
            {TransID, NewDate};
        [#somerecord{date = Date}] ->
            {TransID, Date};
        [] ->
            NewDate = erlang:system_time(milli_seconds),
            mnesia:write(NewData),
            {TransID, NewDate}
    end.

What is the benefit of wrapping it in mnesia:ets?

Best regards.

-- 
Oleksii D. Semilietov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jargon@REDACTED Wed Nov 2 23:30:48 2016
From: jargon@REDACTED (Johannes Weißl)
Date: Wed, 2 Nov 2016 23:30:48 +0100
Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2
Message-ID: <20161102223048.ksfkdiueylaonvzy@molb.org>

Hi,

Last week GCC got updated from 6.1.1 to 6.2.0 on my Debian laptop. Since then OTP releases built with this compiler have broken HiPE [1].
The error is reproducible on different machines, even the official OTP 19.1.5 Debian binary package is broken because it was built with the new compiler. Has anybody else experienced the same with GCC 6.2? I have not done much debugging, the error could be in GCC or OTP (maybe usage of undefined behavior). Regards, Johannes [1] Stacktrace after `c(my_module, [native]).` for any module: {'EXIT',{badarg,[{hipe_bifs,patch_call, [1103888528,94502719669968,[]], []}, {hipe_unified_loader,patch_call_insn,3, [{file,"hipe_unified_loader.erl"},{line,508}]}, {hipe_unified_loader,patch_bif_call_list,4, [{file,"hipe_unified_loader.erl"},{line,494}]}, {hipe_unified_loader,patch_call,5, [{file,"hipe_unified_loader.erl"},{line,485}]}, {hipe_unified_loader,patch,5, [{file,"hipe_unified_loader.erl"},{line,462}]}, {hipe_unified_loader,load_common,4, [{file,"hipe_unified_loader.erl"},{line,215}]}, {hipe_unified_loader,load_native_code,3, [{file,"hipe_unified_loader.erl"},{line,111}]}, {code_server,try_load_module_2,6, [{file,"code_server.erl"},{line,1131}]}]}} Dialyzer fails with: Compiling some key modules to native code...{"init terminating in do_boot",{{badmatch,ok},[{dialyzer_cl,hc_cache,1,[{file,"dialyzer_cl.erl"},{line,572}]},{lists,foreach,2,[{file,"lists.erl"},{line,1338}]},{dialyzer_cl,hipe_compile,2,[{file,"dialyzer_cl.erl"},{line,516}]},{dialyzer_cl,do_analysis,4,[{file,"dialyzer_cl.erl"},{line,382}]},{dialyzer,'-cl/1-fun-0-',1,[{file,"dialyzer.erl"},{line,153}]},{dialyzer,doit,1,[{file,"dialyzer.erl"},{line,243}]},{dialyzer,plain_cl,0,[{file,"dialyzer.erl"},{line,84}]},{init,start_em,1,[]}]}} After the fix in https://github.com/erlang/otp/commit/cb987678ff56142029758e0e84fa97fa90003b4a: Compiling some key modules to native code...{"init terminating in 
do_boot",{{badmatch,{error,{'EXIT',{badarg,[{hipe_bifs,patch_call,[1075697819,94447304565728,[]],[]},{hipe_unified_loader,patch_call_insn,3,[{file,"hipe_unified_loader.erl"},{line,508}]},{hipe_unified_loader,patch_bif_call_list,4,[{file,"hipe_unified_loader.erl"},{line,494}]},{hipe_unified_loader,patch_call,5,[{file,"hipe_unified_loader.erl"},{line,485}]},{hipe_unified_loader,patch,5,[{file,"hipe_unified_loader.erl"},{line,460}]},{hipe_unified_loader,load_common,4,[{file,"hipe_unified_loader.erl"},{line,215}]},{hipe_unified_loader,load_native_code,3,[{file,"hipe_unified_loader.erl"},{line,111}]},{code_server,try_load_module_2,6,[{file,"code_server.erl"},{line,1131}]}]}}}},[{dialyzer_cl,hc_cache,1,[{file,"dialyzer_cl.erl"},{line,572}]},{lists,foreach,2,[{file,"lists.erl"},{line,1338}]},{dialyzer_cl,hipe_compile,2,[{file,"dialyzer_cl.erl"},{line,516}]},{dialyzer_cl,do_analysis,4,[{file,"dialyzer_cl.erl"},{line,382}]},{dialyzer,'-cl/1-fun-0-',1,[{file,"dialyzer.erl"},{line,153}]},{dialyzer,doit,1,[{file,"dialyzer.erl"},{line,243}]},{dialyzer,plain_cl,0,[{file,"dialyzer.erl"},{line,84}]},{init,start_em,1,[]}]}} From max.lapshin@REDACTED Thu Nov 3 13:47:40 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 3 Nov 2016 15:47:40 +0300 Subject: [erlang-questions] Elbrus 2K support Message-ID: Hi. We have successfully compiled erlang 19.0 under Elbrus 2K. It is russian computer with VLIW architecture and transparent support for amd64 instructions execution. $ uname -a Linux EL2S4-53-31 3.14.46-elbrus.314.1.14 #1 SMP Mon Sep 21 22:13:08 GMT 2015 e2k E2S EL2S4 GNU/Linux Right now we have tested how it works in amd64 emulation and had to make some trivial hacks for it to compile (patch in the end of email), because compiler is lcc: $ lcc -v lcc:1.20.09:Aug-27-2015:e2k-4c-linux Thread model: posix gcc version 4.4.0 compatible. 
Stackoverflow couldn't help us with compiling under Elbrus, so I decided to make two dirty patches =)

Erlang and thus our Flussonic can run on this architecture. Thank you for portable code!

diff --git a/erts/emulator/beam/erl_bif_re.c b/erts/emulator/beam/erl_bif_re.c
index ff7746c..e83c762 100644
--- a/erts/emulator/beam/erl_bif_re.c
+++ b/erts/emulator/beam/erl_bif_re.c
@@ -31,7 +31,7 @@
 #include "big.h"
 #define ERLANG_INTEGRATION 1
 #define PCRE_STATIC
-#include "pcre.h"
+#include "../pcre/pcre.h"
 
 #define PCRE_DEFAULT_COMPILE_OPTS 0
 #define PCRE_DEFAULT_EXEC_OPTS 0
diff --git a/erts/emulator/beam/sys.h b/erts/emulator/beam/sys.h
index dfe82ca..872a7df 100644
--- a/erts/emulator/beam/sys.h
+++ b/erts/emulator/beam/sys.h
@@ -235,7 +235,7 @@ __decl_noreturn void __noreturn erl_assert_error(const char* expr, const char *f
  * Compile time assert
  * (the actual compiler error msg can be a bit confusing)
  */
-#if ERTS_AT_LEAST_GCC_VSN__(3,1,1)
+#if false && ERTS_AT_LEAST_GCC_VSN__(3,1,1)
 # define ERTS_CT_ASSERT(e) \
     do { \
         enum { compile_time_assert__ = __builtin_choose_expr((e),0,(void)0) }; \
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sverker.eriksson@REDACTED Thu Nov 3 15:47:58 2016
From: sverker.eriksson@REDACTED (Sverker Eriksson)
Date: Thu, 3 Nov 2016 15:47:58 +0100
Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2
In-Reply-To: <20161102223048.ksfkdiueylaonvzy@molb.org>
References: <20161102223048.ksfkdiueylaonvzy@molb.org>
Message-ID: <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com>

If this is x86_64 (amd64) then it looks like the beam was built without gcc's default small code model, where "the program and its symbols must be linked in the lower 2 GB of the address space."

The second argument to hipe_bifs:patch_call/3 should in this case be the address of a BIF, but 94502719669968 is way past 2 GB.
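The 2 GB bound mentioned above is easy to check in the shell; a sketch using the address from the badarg term:

```erlang
%% Print the failing BIF address in hex; anything above 16#7fffffff
%% lies outside the lower 2 GB that the small code model assumes.
Addr = 94502719669968,
io:format("16#~.16b~n", [Addr]),
true = Addr > 16#7fffffff.
```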
/Sverker, Erlang/OTP On 11/02/2016 11:30 PM, Johannes Wei?l wrote: > Hi, > > Last week GCC got updated from 6.1.1 to 6.2.0 on my Debian laptop. Since then > OTP releases built with this compiler have broken HiPE [1]. The error is > reproducible on different machines, even the official OTP 19.1.5 Debian binary > package is broken because it was built with the new compiler. > > Has anybody else experienced the same with GCC 6.2? I have not done much > debugging, the error could be in GCC or OTP (maybe usage of undefined behavior). > > Regards, > Johannes > > [1] Stacktrace after `c(my_module, [native]).` for any module: > > {'EXIT',{badarg,[{hipe_bifs,patch_call, > [1103888528,94502719669968,[]], > []}, > {hipe_unified_loader,patch_call_insn,3, > [{file,"hipe_unified_loader.erl"},{line,508}]}, > {hipe_unified_loader,patch_bif_call_list,4, > [{file,"hipe_unified_loader.erl"},{line,494}]}, > {hipe_unified_loader,patch_call,5, > [{file,"hipe_unified_loader.erl"},{line,485}]}, > {hipe_unified_loader,patch,5, > [{file,"hipe_unified_loader.erl"},{line,462}]}, > {hipe_unified_loader,load_common,4, > [{file,"hipe_unified_loader.erl"},{line,215}]}, > {hipe_unified_loader,load_native_code,3, > [{file,"hipe_unified_loader.erl"},{line,111}]}, > {code_server,try_load_module_2,6, > [{file,"code_server.erl"},{line,1131}]}]}} > > Dialyzer fails with: > Compiling some key modules to native code...{"init terminating in do_boot",{{badmatch,ok},[{dialyzer_cl,hc_cache,1,[{file,"dialyzer_cl.erl"},{line,572}]},{lists,foreach,2,[{file,"lists.erl"},{line,1338}]},{dialyzer_cl,hipe_compile,2,[{file,"dialyzer_cl.erl"},{line,516}]},{dialyzer_cl,do_analysis,4,[{file,"dialyzer_cl.erl"},{line,382}]},{dialyzer,'-cl/1-fun-0-',1,[{file,"dialyzer.erl"},{line,153}]},{dialyzer,doit,1,[{file,"dialyzer.erl"},{line,243}]},{dialyzer,plain_cl,0,[{file,"dialyzer.erl"},{line,84}]},{init,start_em,1,[]}]}} > > After the fix in https://github.com/erlang/otp/commit/cb987678ff56142029758e0e84fa97fa90003b4a: > > 
Compiling some key modules to native code...{"init terminating in do_boot",{{badmatch,{error,{'EXIT',{badarg,[{hipe_bifs,patch_call,[1075697819,94447304565728,[]],[]},{hipe_unified_loader,patch_call_insn,3,[{file,"hipe_unified_loader.erl"},{line,508}]},{hipe_unified_loader,patch_bif_call_list,4,[{file,"hipe_unified_loader.erl"},{line,494}]},{hipe_unified_loader,patch_call,5,[{file,"hipe_unified_loader.erl"},{line,485}]},{hipe_unified_loader,patch,5,[{file,"hipe_unified_loader.erl"},{line,460}]},{hipe_unified_loader,load_common,4,[{file,"hipe_unified_loader.erl"},{line,215}]},{hipe_unified_loader,load_native_code,3,[{file,"hipe_unified_loader.erl"},{line,111}]},{code_server,try_load_module_2,6,[{file,"code_server.erl"},{line,1131}]}]}}}},[{dialyzer_cl,hc_cache,1,[{file,"dialyzer_cl.erl"},{line,572}]},{lists,foreach,2,[{file,"lists.erl"},{line,1338}]},{dialyzer_cl,hipe_compile,2,[{file,"dialyzer_cl.erl"},{line,516}]},{dialyzer_cl,do_analysis,4,[{file,"dialyzer_cl.erl"},{line,38 > 2}]},{dialyzer,'-cl/1-fun-0-',1,[{file,"dialyzer.erl"},{line,153}]},{dialyzer,doit,1,[{file,"dialyzer.erl"},{line,243}]},{dialyzer,plain_cl,0,[{file,"dialyzer.erl"},{line,84}]},{init,start_em,1,[]}]}} > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From kostis@REDACTED Thu Nov 3 17:42:31 2016 From: kostis@REDACTED (Kostis Sagonas) Date: Thu, 3 Nov 2016 17:42:31 +0100 Subject: [erlang-questions] HiPE miscompilation of binary matches In-Reply-To: References: <31a25e1f-a387-9280-23b5-fe981d0a4e2e@tpip.net> Message-ID: <71d37cbc-bb4e-801e-52c6-f628e07ed90a@cs.ntua.gr> On 11/02/2016 12:06 PM, Kostis Sagonas wrote: > On 11/02/2016 11:23 AM, Andreas Schultz wrote: >> So, do I do something wrong here or is this a legitimate HiPE bug? > > Yes, this looks looks like a legitimate HiPE bug. Will investigate. 
Apologies for replying to my own mail, but we indeed investigated this and a fix appears here: https://github.com/erlang/otp/pull/1234 I guess it will be merged to maint at some point and the fix will appear in a released OTP, hopefully soon. Thanks for reporting this bug. Kostis From mikpelinux@REDACTED Thu Nov 3 20:12:38 2016 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Thu, 3 Nov 2016 20:12:38 +0100 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> Message-ID: <22555.35878.143034.246244@gargle.gargle.HOWL> Sverker Eriksson writes: > If this is x86_64 (amd64) then it looks like > the beam was built without gcc's default small code model > where "the program and its symbols must be linked > in the lower 2 GB of the address space." > > The second argument to hipe_bifs:patch_call/2 > should in this case be the address of a BIF, > but 94502719669968 is way past 2GB. > > /Sverker, Erlang/OTP > > > On 11/02/2016 11:30 PM, Johannes Wei?l wrote: > > Hi, > > > > Last week GCC got updated from 6.1.1 to 6.2.0 on my Debian laptop. Since then > > OTP releases built with this compiler have broken HiPE [1]. The error is > > reproducible on different machines, even the official OTP 19.1.5 Debian binary > > package is broken because it was built with the new compiler. > > > > Has anybody else experienced the same with GCC 6.2? I have not done much > > debugging, the error could be in GCC or OTP (maybe usage of undefined behavior). 
> > > > Regards, > > Johannes > > > > [1] Stacktrace after `c(my_module, [native]).` for any module: > > > > {'EXIT',{badarg,[{hipe_bifs,patch_call, > > [1103888528,94502719669968,[]], > > []}, > > {hipe_unified_loader,patch_call_insn,3, > > [{file,"hipe_unified_loader.erl"},{line,508}]}, > > {hipe_unified_loader,patch_bif_call_list,4, > > [{file,"hipe_unified_loader.erl"},{line,494}]}, > > {hipe_unified_loader,patch_call,5, > > [{file,"hipe_unified_loader.erl"},{line,485}]}, > > {hipe_unified_loader,patch,5, > > [{file,"hipe_unified_loader.erl"},{line,462}]}, > > {hipe_unified_loader,load_common,4, > > [{file,"hipe_unified_loader.erl"},{line,215}]}, > > {hipe_unified_loader,load_native_code,3, > > [{file,"hipe_unified_loader.erl"},{line,111}]}, > > {code_server,try_load_module_2,6, > > [{file,"code_server.erl"},{line,1131}]}]}} I cannot reproduce this with the tip of the otp master branch, and gcc's built from either the gcc-6.2.0 release tar ball or from a recent head of the gcc-6 branch. Most likely the Erlang VM was compiled with non-standard options, for instance as a PIE (position-independent executable) which would break all address space layout assumptions. This could be the result of otp build options or non-standard behaviour in that Debian gcc. 
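Whether a given emulator binary was built as a PIE can be checked from a running node; a sketch under the assumption of a Unix system with `file` in PATH (the beam.smp name and the BINDIR environment variable are the conventional ones, but verify locally):

```erlang
%% os:getenv("BINDIR") is set by the erl start script and points at
%% the directory containing the emulator binary (false if unset).
Beam = filename:join(os:getenv("BINDIR"), "beam.smp"),
%% "pie executable" or "shared object" in the output indicates a
%% position-independent build, which breaks HiPE's assumption that
%% BIFs live in the lower 2 GB of the address space.
io:format("~s", [os:cmd("file " ++ Beam)]).
```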
If you want further help debugging this, show us (1) the output of gcc -v (2) any special options (whether via ./configure or environment variables) used when compiling otp From jargon@REDACTED Thu Nov 3 20:20:35 2016 From: jargon@REDACTED (Johannes =?utf-8?B?V2Vpw59s?=) Date: Thu, 3 Nov 2016 20:20:35 +0100 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> Message-ID: <20161103192035.dutmfz2yymkcjglt@molb.org> Hi Sverker, On Thu, Nov 03, 2016 at 03:47PM +0100, Sverker Eriksson wrote: > If this is x86_64 (amd64) then it looks like > the beam was built without gcc's default small code model > where "the program and its symbols must be linked > in the lower 2 GB of the address space." Thanks for the reply! Yes, all tested platforms were x86_64 (amd64). I tried again with an OTP build compiled explicitly with CFLAGS="-mcmodel=small" [1] (if this is what you meant), but the error stays the same. I haven't mentioned it in my report, I tested with OTP 19.1.5 and the current master branch (2ccd860 yesterday and 214aba4 today). Regards, Johannes [1] https://gcc.gnu.org/onlinedocs/gcc-6.2.0/gcc/x86-Options.html#x86-Options From alex.arnon@REDACTED Fri Nov 4 10:52:47 2016 From: alex.arnon@REDACTED (Alex Arnon) Date: Fri, 4 Nov 2016 11:52:47 +0200 Subject: [erlang-questions] Elbrus 2K support In-Reply-To: References: Message-ID: Very very cool!!! Do they still make SPARC compatible machines? Where are Elbrus machines used? Have they achieved a wide distribution? On Thu, Nov 3, 2016 at 2:47 PM, Max Lapshin wrote: > > Hi. > > We have successfully compiled erlang 19.0 under Elbrus 2K. It is russian > computer with VLIW architecture and transparent support for amd64 > instructions execution. 
> > $ uname -a > > Linux EL2S4-53-31 3.14.46-elbrus.314.1.14 #1 SMP Mon Sep 21 22:13:08 GMT > 2015 e2k E2S EL2S4 GNU/Linux > > > Right now we have tested how it works in amd64 emulation and had to make > some trivial hacks for it to compile (patch in the end of email), because > compiler is lcc: > > $ lcc -v > > lcc:1.20.09:Aug-27-2015:e2k-4c-linux > > Thread model: posix > > gcc version 4.4.0 compatible. > > Stackoverflow couldn't help us with compiling under elbrus, so I decided > to make two dirty patches =) > > > Erlang and thus our Flussonic can run on this architecture. Thank you for > portable code! > > > > > > diff --git a/erts/emulator/beam/erl_bif_re.c b/erts/emulator/beam/erl_bif_ > re.c > index ff7746c..e83c762 100644 > --- a/erts/emulator/beam/erl_bif_re.c > +++ b/erts/emulator/beam/erl_bif_re.c > @@ -31,7 +31,7 @@ > #include "big.h" > #define ERLANG_INTEGRATION 1 > #define PCRE_STATIC > -#include "pcre.h" > +#include "../pcre/pcre.h" > > #define PCRE_DEFAULT_COMPILE_OPTS 0 > #define PCRE_DEFAULT_EXEC_OPTS 0 > diff --git a/erts/emulator/beam/sys.h b/erts/emulator/beam/sys.h > index dfe82ca..872a7df 100644 > --- a/erts/emulator/beam/sys.h > +++ b/erts/emulator/beam/sys.h > @@ -235,7 +235,7 @@ __decl_noreturn void __noreturn erl_assert_error(const > char* expr, const char *f > * Compile time assert > * (the actual compiler error msg can be a bit confusing) > */ > -#if ERTS_AT_LEAST_GCC_VSN__(3,1,1) > +#if false && ERTS_AT_LEAST_GCC_VSN__(3,1,1) > # define ERTS_CT_ASSERT(e) \ > do { \ > enum { compile_time_assert__ = __builtin_choose_expr((e),0,(void)0) }; \ > > > > > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From duncan@REDACTED Fri Nov 4 15:20:36 2016 From: duncan@REDACTED (duncan@REDACTED) Date: Fri, 04 Nov 2016 07:20:36 -0700 Subject: [erlang-questions] =?utf-8?q?Erlide_with_cloud=3F?= Message-ID: <20161104072036.7e43b23f706d1a78218bd3e1c66e57ee.7b2eabfa2e.wbe@email23.godaddy.com> An HTML attachment was scrubbed... URL: From vladdu55@REDACTED Fri Nov 4 15:30:47 2016 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Fri, 4 Nov 2016 15:30:47 +0100 Subject: [erlang-questions] Erlide with cloud? In-Reply-To: <20161104072036.7e43b23f706d1a78218bd3e1c66e57ee.7b2eabfa2e.wbe@email23.godaddy.com> References: <20161104072036.7e43b23f706d1a78218bd3e1c66e57ee.7b2eabfa2e.wbe@email23.godaddy.com> Message-ID: Hi! The erlide implementation is very Eclipse-biased. So if other platforms can talk to Eclipse plugins, then it works; if not, it doesn't. In short, I don't think there is any other platform that can run erlide as it is. On the other hand, my current work is focused on implementing a language server for Erlang, as specified by Microsoft's Language Server Protocol ( https://github.com/Microsoft/language-server-protocol). This will make it relatively easy to implement clients that use it (in the cloud or on the desktop). Something usable is still a bit in the future, though. best regards, Vlad On Fri, Nov 4, 2016 at 3:20 PM, wrote: > Anybody know if erlide (http://erlide.org/) works on any of the cloud > eclipse platforms (orion, eclipse che, dirigible, other?)? i've been > developing using command line; I'm giving in to trying an IDE and the cloud > based ones looked interesting. > > Duncan Sparrell > s-Fractal Consulting LLC > iPhone, iTypo, iApologize > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From max.lapshin@REDACTED Fri Nov 4 16:54:03 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Fri, 4 Nov 2016 18:54:03 +0300 Subject: [erlang-questions] Elbrus 2K support In-Reply-To: References: Message-ID: As far as I understand, they are not related to Sparc anymore. It was some time ago, but they have made progress and now it is different. About usage: it is not very clear, because they are very young as a commercial product and haven't got wide distribution. But if you don't want NSA backdoor, Elbrus is definitely worthy to look at =) On Fri, Nov 4, 2016 at 12:52 PM, Alex Arnon wrote: > Very very cool!!! > > Do they still make SPARC compatible machines? > Where are Elbrus machines used? Have they achieved a wide distribution? > > On Thu, Nov 3, 2016 at 2:47 PM, Max Lapshin wrote: > >> >> Hi. >> >> We have successfully compiled erlang 19.0 under Elbrus 2K. It is russian >> computer with VLIW architecture and transparent support for amd64 >> instructions execution. >> >> $ uname -a >> >> Linux EL2S4-53-31 3.14.46-elbrus.314.1.14 #1 SMP Mon Sep 21 22:13:08 GMT >> 2015 e2k E2S EL2S4 GNU/Linux >> >> >> Right now we have tested how it works in amd64 emulation and had to make >> some trivial hacks for it to compile (patch in the end of email), because >> compiler is lcc: >> >> $ lcc -v >> >> lcc:1.20.09:Aug-27-2015:e2k-4c-linux >> >> Thread model: posix >> >> gcc version 4.4.0 compatible. >> >> Stackoverflow couldn't help us with compiling under elbrus, so I decided >> to make two dirty patches =) >> >> >> Erlang and thus our Flussonic can run on this architecture. Thank you for >> portable code! 
>> >> >> >> >> >> diff --git a/erts/emulator/beam/erl_bif_re.c >> b/erts/emulator/beam/erl_bif_re.c >> index ff7746c..e83c762 100644 >> --- a/erts/emulator/beam/erl_bif_re.c >> +++ b/erts/emulator/beam/erl_bif_re.c >> @@ -31,7 +31,7 @@ >> #include "big.h" >> #define ERLANG_INTEGRATION 1 >> #define PCRE_STATIC >> -#include "pcre.h" >> +#include "../pcre/pcre.h" >> >> #define PCRE_DEFAULT_COMPILE_OPTS 0 >> #define PCRE_DEFAULT_EXEC_OPTS 0 >> diff --git a/erts/emulator/beam/sys.h b/erts/emulator/beam/sys.h >> index dfe82ca..872a7df 100644 >> --- a/erts/emulator/beam/sys.h >> +++ b/erts/emulator/beam/sys.h >> @@ -235,7 +235,7 @@ __decl_noreturn void __noreturn >> erl_assert_error(const char* expr, const char *f >> * Compile time assert >> * (the actual compiler error msg can be a bit confusing) >> */ >> -#if ERTS_AT_LEAST_GCC_VSN__(3,1,1) >> +#if false && ERTS_AT_LEAST_GCC_VSN__(3,1,1) >> # define ERTS_CT_ASSERT(e) \ >> do { \ >> enum { compile_time_assert__ = __builtin_choose_expr((e),0,(void)0) }; >> \ >> >> >> >> >> >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kostis@REDACTED Fri Nov 4 17:18:56 2016 From: kostis@REDACTED (Kostis Sagonas) Date: Fri, 4 Nov 2016 17:18:56 +0100 Subject: [erlang-questions] Elbrus 2K support In-Reply-To: References: Message-ID: <94729e4a-255b-34e7-fdbc-1e38248253fa@cs.ntua.gr> On 11/04/2016 04:54 PM, Max Lapshin wrote: > > About usage: it is not very clear, because they are very young as a > commercial product and haven't got wide distribution. But if you don't > want NSA backdoor, Elbrus is definitely worthy to look at =) And which backdoor do you get instead? 
:) Kostis From vans_163@REDACTED Fri Nov 4 17:42:55 2016 From: vans_163@REDACTED (Vans S) Date: Fri, 4 Nov 2016 16:42:55 +0000 (UTC) Subject: [erlang-questions] Elbrus 2K support In-Reply-To: <94729e4a-255b-34e7-fdbc-1e38248253fa@cs.ntua.gr> References: <94729e4a-255b-34e7-fdbc-1e38248253fa@cs.ntua.gr> Message-ID: <397473111.335052.1478277775227@mail.yahoo.com> You get putin in your backdoor. Think that might be worded wrong.. On Friday, November 4, 2016 12:19 PM, Kostis Sagonas wrote: On 11/04/2016 04:54 PM, Max Lapshin wrote: > > About usage: it is not very clear, because they are very young as a > commercial product and haven't got wide distribution. But if you don't > want NSA backdoor, Elbrus is definitely worthy to look at =) And which backdoor do you get instead? :) Kostis _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions From jargon@REDACTED Sat Nov 5 00:49:07 2016 From: jargon@REDACTED (Johannes =?utf-8?B?V2Vpw59s?=) Date: Sat, 5 Nov 2016 00:49:07 +0100 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: <22555.35878.143034.246244@gargle.gargle.HOWL> References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> <22555.35878.143034.246244@gargle.gargle.HOWL> Message-ID: <20161104234907.rg6xxt2vycctm6vz@molb.org> Hi Mikael, On Thu, Nov 03, 2016 at 08:12PM +0100, Mikael Pettersson wrote: > I cannot reproduce this with the tip of the otp master branch, and gcc's built > from either the gcc-6.2.0 release tar ball or from a recent head of the gcc-6 branch. Can you try to configure gcc with "--enable-default-pie"? With this flag (which is used for the Debian gcc package) I could reproduce the bug on Debian and Fedora with the gcc-6.2.0 release tar ball and the current gcc svn trunk (rev 241852). 
PIE also seems to have been used for the official Fedora erlang19.1.4-1.fc25 package, as there HiPE is also broken. > Most likely the Erlang VM was compiled with non-standard options, for instance > as a PIE (position-independent executable) which would break all address space > layout assumptions. This seems to be the case, thanks for the pointer! What would be your preferred solution to solve this problem? One possibility would be to add an option to the OTP build system to compile without PIE, so that it does not break for compilers that have PIE enabled by default. Similar has been suggested here for the Linux kernel: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=841438 With CFLAGS and LDFLAGS set to "-no-pie -fno-pie" I can compile a working version with the current Debian Testing gcc. > If you want further help debugging this, show us > (1) the output of gcc -v $ gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/6/lto-wrapper Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Debian 6.2.0-10' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-amd64 --with-arch-directory=amd64 
--with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 6.2.0 20161027 (Debian 6.2.0-10) > (2) any special options (whether via ./configure or environment variables) used when > compiling otp No special options. Regards, Johannes From liuzhongzheng2012@REDACTED Sat Nov 5 06:00:56 2016 From: liuzhongzheng2012@REDACTED (Zhongzheng Liu) Date: Sat, 5 Nov 2016 13:00:56 +0800 Subject: [erlang-questions] How to get scheduler_id in nif? Message-ID: Hi mail list: We can use erlang:system_info(scheduler_id) in Erlang code to know which scheduler the code is running on. How to get this value inside a NIF? Thanks Zhongzheng Liu From sergej.jurecko@REDACTED Sat Nov 5 09:09:42 2016 From: sergej.jurecko@REDACTED (=?UTF-8?Q?Sergej_Jure=C4=8Dko?=) Date: Sat, 5 Nov 2016 09:09:42 +0100 Subject: [erlang-questions] How to get scheduler_id in nif? In-Reply-To: References: Message-ID: My suggestion is to call it from erlang when starting up your nif and save it to a thread local storage variable. You can specify where a process is run with spawn_opt {scheduler,X} where X > 0 Regards, Sergej On Nov 5, 2016 1:35 PM, "Zhongzheng Liu" wrote: > Hi mail list: > > We can use erlang:system_info(scheduler_id) in Erlang code to know > which scheduler the code is running on. > > How to get this value inside a NIF? > > > Thanks > > Zhongzheng Liu > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kostis@REDACTED Sat Nov 5 11:22:23 2016 From: kostis@REDACTED (Kostis Sagonas) Date: Sat, 5 Nov 2016 11:22:23 +0100 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: <20161104234907.rg6xxt2vycctm6vz@molb.org> References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> <22555.35878.143034.246244@gargle.gargle.HOWL> <20161104234907.rg6xxt2vycctm6vz@molb.org> Message-ID: On 11/05/2016 12:49 AM, Johannes Wei?l wrote: > Hi Mikael, > > On Thu, Nov 03, 2016 at 08:12PM +0100, Mikael Pettersson wrote: >> > I cannot reproduce this with the tip of the otp master branch, and gcc's built >> > from either the gcc-6.2.0 release tar ball or from a recent head of the gcc-6 branch. > Can you try to configure gcc with "--enable-default-pie"? With this flag > (which is used for the Debian gcc package) I could reproduce the bug on > Debian and Fedora with the gcc-6.2.0 release tar ball and the current > gcc svn trunk (rev 241852). PIE also seems to have been used for the > official Fedora erlang19.1.4-1.fc25 package, as there HiPE is also > broken. > >> > Most likely the Erlang VM was compiled with non-standard options, for instance >> > as a PIE (position-independent executable) which would break all address space >> > layout assumptions. > This seems to be the case, thanks for the pointer! What would be your > preferred solution to solve this problem? One possibility would be to > add an option to the OTP build system to compile without PIE, so > that it does not break for compilers that have PIE enabled by default. > Similar has been suggested here for the Linux kernel: > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=841438 Well, it's good to know that HiPE, when suffering from such problems, is in good company (that of the Linux kernel). > With CFLAGS and LDFLAGS set to "-no-pie -fno-pie" I can compile a > working version with the current Debian Testing gcc. 
From a quick read of the thread you suggested for the kernel and of the corresponding Ubuntu thread, it seems that forcing no-pie in the flags is the way on this one. But of course this does not solve the problem for OTP releases that are already out there. (It should be pretty obvious that this affects all Erlang/OTP releases that include HiPE: none can be compiled with PIE.) Kostis From jargon@REDACTED Sat Nov 5 14:08:19 2016 From: jargon@REDACTED (Johannes =?utf-8?B?V2Vpw59s?=) Date: Sat, 5 Nov 2016 14:08:19 +0100 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> <22555.35878.143034.246244@gargle.gargle.HOWL> <20161104234907.rg6xxt2vycctm6vz@molb.org> Message-ID: <20161105130819.74shzsfm7aoewd3f@molb.org> On Sat, Nov 05, 2016 at 11:22AM +0100, Kostis Sagonas wrote: > On 11/05/2016 12:49 AM, Johannes Wei?l wrote: > > With CFLAGS and LDFLAGS set to "-no-pie -fno-pie" I can compile a > > working version with the current Debian Testing gcc. > > From a quick read of the thread you suggested for the kernel and of the > corresponding Ubuntu thread, it seems that forcing no-pie in the flags is > the way on this one. OK, thanks! I opened https://bugs.erlang.org/browse/ERL-294 for it. Johannes From alex.arnon@REDACTED Sat Nov 5 23:00:28 2016 From: alex.arnon@REDACTED (Alex Arnon) Date: Sun, 6 Nov 2016 00:00:28 +0200 Subject: [erlang-questions] Elbrus 2K support In-Reply-To: <94729e4a-255b-34e7-fdbc-1e38248253fa@cs.ntua.gr> References: <94729e4a-255b-34e7-fdbc-1e38248253fa@cs.ntua.gr> Message-ID: I wonder! :) On Fri, Nov 4, 2016 at 6:18 PM, Kostis Sagonas wrote: > On 11/04/2016 04:54 PM, Max Lapshin wrote: > >> >> About usage: it is not very clear, because they are very young as a >> commercial product and haven't got wide distribution. 
But if you don't >> want NSA backdoor, Elbrus is definitely worthy to look at =) >> > > And which backdoor do you get instead? :) > > Kostis > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikpelinux@REDACTED Sun Nov 6 12:32:52 2016 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Sun, 6 Nov 2016 12:32:52 +0100 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: <20161104234907.rg6xxt2vycctm6vz@molb.org> References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> <22555.35878.143034.246244@gargle.gargle.HOWL> <20161104234907.rg6xxt2vycctm6vz@molb.org> Message-ID: <22559.5348.761936.796310@gargle.gargle.HOWL> Johannes Wei?l writes: > Hi Mikael, > > On Thu, Nov 03, 2016 at 08:12PM +0100, Mikael Pettersson wrote: > > I cannot reproduce this with the tip of the otp master branch, and gcc's built > > from either the gcc-6.2.0 release tar ball or from a recent head of the gcc-6 branch. > > Can you try to configure gcc with "--enable-default-pie"? With this flag > (which is used for the Debian gcc package) I could reproduce the bug on > Debian and Fedora with the gcc-6.2.0 release tar ball and the current > gcc svn trunk (rev 241852). PIE also seems to have been used for the > official Fedora erlang19.1.4-1.fc25 package, as there HiPE is also > broken. Using a gcc-6.2.0 configured with --enable-default-pie reproduces the bug for me on FC23. > > Most likely the Erlang VM was compiled with non-standard options, for instance > > as a PIE (position-independent executable) which would break all address space > > layout assumptions. > > This seems to be the case, thanks for the pointer! What would be your > preferred solution to solve this problem? 
One possibility would be to > add an option to the OTP build system to compile without PIE, so > that it does not break for compilers that have PIE enabled by default. > Similar has been suggested here for the Linux kernel: > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=841438 > > With CFLAGS and LDFLAGS set to "-no-pie -fno-pie" I can compile a > working version with the current Debian Testing gcc. If this problem had been limited to Debian, I would have said that the Erlang/OTP package maintainer for Debian should add the necessary options to disable PIE. However, reading some related Linux Kernel ML messages today it seems that Gentoo and Fedora (so eventually also RHEL and CentOS) also are affected, so we have no option but to work around it in OTP. A change in erts/configure.in to add options to disable PIE if HiPE is not disabled and the target arch is x86_64 should take care of the issue. People building older versions will have to backport the patch or override the compiler options, but that's nothing new. From kennethlakin@REDACTED Sun Nov 6 22:30:56 2016 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Sun, 6 Nov 2016 13:30:56 -0800 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: <22559.5348.761936.796310@gargle.gargle.HOWL> References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> <22555.35878.143034.246244@gargle.gargle.HOWL> <20161104234907.rg6xxt2vycctm6vz@molb.org> <22559.5348.761936.796310@gargle.gargle.HOWL> Message-ID: <306275a9-9da7-c814-49a6-be35e7786c13@gmail.com> On 11/06/2016 03:32 AM, Mikael Pettersson wrote: > A change in erts/configure.in to add options to disable PIE if HiPE is > not disabled and the target arch is x86_64 should take care of the issue. Is i386 likely unaffected by this, or is x86_64 shorthand for "x86 32- and 64-bit systems"? (Yes, I'm one of _those people_ with an ancient laptop. 
;) ) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From donpedrothird@REDACTED Sun Nov 6 23:32:56 2016 From: donpedrothird@REDACTED (John Doe) Date: Mon, 7 Nov 2016 01:32:56 +0300 Subject: [erlang-questions] Need help with the error dump - erl_child_setup closed Message-ID: Hi, got an erl_crash.dump from one of my customers today, with the "Slogan: erl_child_setup closed" The full dump is there (it's small): http://pastebin.com/vGbcX9v7 What are possible reasons for this error? Erlang 19.0.7 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gomoripeti@REDACTED Mon Nov 7 00:23:45 2016 From: gomoripeti@REDACTED (=?UTF-8?B?UGV0aSBHw7Ztw7ZyaQ==?=) Date: Mon, 7 Nov 2016 00:23:45 +0100 Subject: [erlang-questions] Need help with the error dump - erl_child_setup closed In-Reply-To: References: Message-ID: Hi, according to 19.0 release notes (http://erlang.org/download/ otp_src_19.0.readme) "OTP-13088 Application(s): erts The functionality behind erlang:open_port/2 when called with spawn or spawn_executable has been redone so that the forking of the new program is done in a separate process called erl_child_setup. ..." and the ERTS release notes mentions a bug fix in 19.1 ( http://erlang.org/doc/apps/erts/notes.html#id108734) "Fix a rare race condition in erlang:open_port({spawn, ""}, ...) that would result in the erl_child_setup program aborting and cause the emulator to exit. Own Id: OTP-13868" (https://github.com/erlang/otp/commit/7c5f497ab6f4b145554ee884e9fa0e c86246e9ee) so maybe you've hit this race bug. 
This may or may not be of any use for you :) (Upgrading to 19.1 might help) br Peter On Sun, Nov 6, 2016 at 11:32 PM, John Doe wrote: > Hi, > got an erl_crash.dump from one of my customers today, with the "Slogan: > erl_child_setup closed" > > The full dump is there (it's small): http://pastebin.com/vGbcX9v7 > > What are possible reasons for this error? > > Erlang 19.0.7 > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jakesgordon@REDACTED Sun Nov 6 22:53:51 2016 From: jakesgordon@REDACTED (Jake Gordon) Date: Sun, 6 Nov 2016 13:53:51 -0800 Subject: [erlang-questions] Unexpected tls_alert "handshake failure" connecting to api.bitbucket.org (and others) with Erlang 18.3.4 (and later) Message-ID: Hi All. I'm hoping to get some insight into a problem with ssl:connect (and ultimately httpc:request) getting tls handshake errors connecting to some (but not all) webservers even while other clients on the same machine (cURL, Ruby Net::HTTP, etc) can connect just fine. I'm using Erlang 19.1.3, but this issue appears to have started with 18.3.4 (earlier versions appear to work correctly) I'm trying to connect to a (correctly configured) public endpoint at api.bitbucket.org > ssl:connect('api.bitbucket.org', 443, []). {error,{tls_alert,"handshake failure"}} If I attempt to connect to a different endpoint, lets say api.github.com it works just fine. > ssl:connect('api.github.com', 443, []) {ok,{sslsocket, ... }} Since it's only *some* SSL endpoints, clearly there is some server side certificate configuration causing the erlang client to behave differently during the handshake, but I'm not sure how to diagnose this when cURL and other language clients work correctly. 
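One way to narrow a failure like this down — a diagnostic sketch, not a known fix; pinning the version and calling ssl:connection_information/1 (OTP 18+) are my additions — is to retry the failing endpoint with the TLS version fixed and see what gets negotiated:

```erlang
%% Diagnostic sketch: pin the TLS version on the failing endpoint and
%% print the negotiated parameters on success, or the alert on failure.
ok = ssl:start(),
case ssl:connect("api.bitbucket.org", 443, [{versions, ['tlsv1.2']}], 5000) of
    {ok, Sock} ->
        %% connection_information/1 reports protocol version and cipher suite
        io:format("connected: ~p~n", [ssl:connection_information(Sock)]),
        ssl:close(Sock);
    {error, Reason} ->
        io:format("still failing: ~p~n", [Reason])
end.
```

If pinning 'tlsv1.2' makes the handshake succeed, the server is likely rejecting the default client hello (record version, missing signature algorithms) rather than anything about the certificate chain.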
I'm using a clean install of the esl-erlang packages provided by Erlang Solutions on Ubuntu 16.04 and debugging with older versions it looks like it broke somewhere around 18.3.4. Any insights would be greatly appreciated! Thanks - Jake -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuzhongzheng2012@REDACTED Mon Nov 7 04:56:25 2016 From: liuzhongzheng2012@REDACTED (Zhongzheng Liu) Date: Mon, 7 Nov 2016 11:56:25 +0800 Subject: [erlang-questions] How to get scheduler_id in nif? In-Reply-To: References: Message-ID: Scheduler_id seems to be stored in env -> proc -> scheduler_data -> no But it cannot be accessed in nif interface. Can I make use of it? Thanks 2016-11-05 13:00 GMT+08:00 Zhongzheng Liu : > Hi mail list: > > We can use erlang:system_info(scheduler_id) in Erlang code to know > which scheduler the code is running on. > > How to get this value inside a NIF? > > > Thanks > > Zhongzheng Liu From zxq9@REDACTED Mon Nov 7 07:27:22 2016 From: zxq9@REDACTED (zxq9) Date: Mon, 07 Nov 2016 15:27:22 +0900 Subject: [erlang-questions] Unexpected tls_alert "handshake failure" connecting to api.bitbucket.org (and others) with Erlang 18.3.4 (and later) In-Reply-To: References: Message-ID: <9264428.smQtnVG0Ba@burrito> On 2016年11月6日 日曜日 13:53:51 Jake Gordon wrote: > Hi All. > > I'm hoping to get some insight into a problem with ssl:connect (and > ultimately httpc:request) getting tls handshake errors connecting to some > (but not all) webservers even while other clients on the same machine > (cURL, Ruby Net::HTTP, etc) can connect just fine. > > I'm using Erlang 19.1.3, but this issue appears to have started with 18.3.4 > (earlier versions appear to work correctly) I have a similar problem with unreliable handshakes to a few sites in R16. I know that a few of them are unsupported cipher suite issues, but that is the minority. 
Something else is causing one of my utilities to fail on TLS handshake with a few sites, and I haven't had a chance to thoroughly check the problem. I'm sure that it's not just the two of us... -Craig From mikpelinux@REDACTED Mon Nov 7 10:16:22 2016 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Mon, 7 Nov 2016 10:16:22 +0100 Subject: [erlang-questions] OTP / HiPE broken with GCC 6.2 In-Reply-To: <306275a9-9da7-c814-49a6-be35e7786c13@gmail.com> References: <20161102223048.ksfkdiueylaonvzy@molb.org> <642dfc08-b072-3c9f-58ad-00a8e334996b@ericsson.com> <22555.35878.143034.246244@gargle.gargle.HOWL> <20161104234907.rg6xxt2vycctm6vz@molb.org> <22559.5348.761936.796310@gargle.gargle.HOWL> <306275a9-9da7-c814-49a6-be35e7786c13@gmail.com> Message-ID: <22560.18022.756287.135504@gargle.gargle.HOWL> Kenneth Lakin writes: > On 11/06/2016 03:32 AM, Mikael Pettersson wrote: > > A change in erts/configure.in to add options to disable PIE if HiPE is > > not disabled and the target arch is x86_64 should take care of the issue. > > Is i386 likely unaffected by this, or is x86_64 shorthand for "x86 32- > and 64-bit systems"? (Yes, I'm one of _those people_ with an ancient > laptop. ;) ) HiPE on 32-bit x86 is unaffected by the PIE issue. From jesper.louis.andersen@REDACTED Mon Nov 7 11:23:12 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 07 Nov 2016 10:23:12 +0000 Subject: [erlang-questions] How to get scheduler_id in nif? In-Reply-To: References: Message-ID: On Sat, Nov 5, 2016 at 9:09 AM Sergej Jurečko wrote: > My suggestion is to call it from erlang when starting up your nif and save > it to a thread local storage variable. > > > That sounds like a data race waiting to happen. If your process is moved from one scheduler to another or if the scheduler you are running on is taken offline by the operating system, you are in trouble. I'm more inclined to ask why it is useful to know which scheduler_id a given process is being run on. 
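To make the scheduler-binding idea concrete from the Erlang side — a minimal sketch (the module and function names are mine, not from the thread): it pins a process to a scheduler with the {scheduler, N} spawn_opt option and reads erlang:system_info(scheduler_id) inside it, so the id could be passed to a NIF as an ordinary argument rather than cached in thread-local storage:

```erlang
-module(sched_demo).
-export([on_scheduler/1]).

%% Spawn a fun bound to scheduler N (N > 0) and return the scheduler id
%% the process actually observes. A bound process cannot migrate, so the
%% observed id stays stable for its whole lifetime.
on_scheduler(N) when is_integer(N), N > 0 ->
    Parent = self(),
    Pid = spawn_opt(fun() ->
                            Parent ! {self(), erlang:system_info(scheduler_id)}
                    end,
                    [{scheduler, N}]),
    receive
        {Pid, Id} -> Id
    after 1000 -> timeout
    end.
```

On a VM with at least N schedulers online, sched_demo:on_scheduler(N) should return N; an unbound process, by contrast, may be migrated between any two reductions, which is exactly the race described above.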
-------------- next part -------------- An HTML attachment was scrubbed... URL: From benmmurphy@REDACTED Mon Nov 7 11:31:37 2016 From: benmmurphy@REDACTED (Ben Murphy) Date: Mon, 7 Nov 2016 10:31:37 +0000 Subject: [erlang-questions] Unexpected tls_alert "handshake failure" connecting to api.bitbucket.org (and others) with Erlang 18.3.4 (and later) In-Reply-To: References: Message-ID: Hi Jake, If you force TLSv1.2 it will connect correctly. We have had trouble with IIS servers returning connection_closed when they are using SHA256 certificate and we don't force TLSv1.2. More details here: http://erlang.org/pipermail/erlang-bugs/2016-September/005195.html . However, this server looks to be running nginx and a different error is returned so I'm not sure if is the same issue. The handshake falls over after the client hello for me. It seems the only big difference between the hellos is the TLS version (maybe some nginx/openssl servers are dropping TLS1.0 traffic?) and the lack of signature algorithms. On Sun, Nov 6, 2016 at 9:53 PM, Jake Gordon wrote: > Hi All. > > I'm hoping to get some insight into a problem with ssl:connect (and > ultimately httpc:request) getting tls handshake errors connecting to some > (but not all) webservers even while other clients on the same machine (cURL, > Ruby Net::HTTP, etc) can connect just fine. > > I'm using Erlang 19.1.3, but this issue appears to have started with 18.3.4 > (earlier versions appear to work correctly) > > I'm trying to connect to a (correctly configured) public endpoint at > api.bitbucket.org > > > ssl:connect('api.bitbucket.org', 443, []). > {error,{tls_alert,"handshake failure"}} > > If I attempt to connect to a different endpoint, lets say api.github.com it > works just fine. > > > ssl:connect('api.github.com', 443, []) > {ok,{sslsocket, ... 
}} > > Since it's only *some* SSL endpoints, clearly there is some server side > certificate configuration causing the erlang client to behave differently > during the handshake, but I'm not sure how to diagnose this when cURL and > other language clients work correctly. > > I'm using a clean install of the esl-erlang packages provided by Erlang > Solutions on Ubuntu 16.04 and debugging with older versions it looks like it > broke somewhere around 18.3.4 > > Any insights would be greatly appreciated! > > Thanks > - Jake > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- No. Time Source Destination Protocol Length Info 87 2016-11-07 10:26:22.416034000 192.168.0.121 104.192.143.5 TLSv1.2 310 Client Hello Frame 87: 310 bytes on wire (2480 bits), 310 bytes captured (2480 bits) on interface 0 Interface id: 0 (en0) Encapsulation type: Ethernet (1) Arrival Time: Nov 7, 2016 10:26:22.416034000 GMT [Time shift for this packet: 0.000000000 seconds] Epoch Time: 1478514382.416034000 seconds [Time delta from previous captured frame: 0.025847000 seconds] [Time delta from previous displayed frame: 0.025847000 seconds] [Time since reference or first frame: 3.031643000 seconds] Frame Number: 87 Frame Length: 310 bytes (2480 bits) Capture Length: 310 bytes (2480 bits) [Frame is marked: False] [Frame is ignored: False] [Protocols in frame: eth:ethertype:ip:tcp:ssl] [Coloring Rule Name: TCP] [Coloring Rule String: tcp] Ethernet II, Src: Apple_8e:af:4e (5c:f9:38:8e:af:4e), Dst: Routerbo_36:4d:e6 (e4:8d:8c:36:4d:e6) Destination: Routerbo_36:4d:e6 (e4:8d:8c:36:4d:e6) Address: Routerbo_36:4d:e6 (e4:8d:8c:36:4d:e6) .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default) .... ...0 .... .... .... .... 
Ethernet II
    Source: Apple_8e:af:4e (5c:f9:38:8e:af:4e)
    Type: IP (0x0800)
Internet Protocol Version 4, Src: 192.168.0.121, Dst: 104.192.143.5
    Total Length: 296, Identification: 0x6afb (27387), Flags: 0x02 (Don't Fragment)
    Time to live: 64, Protocol: TCP (6), Header checksum: 0x15ee [validation disabled]
Transmission Control Protocol, Src Port: 61363, Dst Port: 443, Seq: 1, Ack: 1, Len: 244
    [Stream index: 8], Header Length: 32 bytes, Flags: 0x018 (PSH, ACK)
    Window size value: 4138 [Calculated window size: 132416] [Window size scaling factor: 32]
    Checksum: 0x7cc3 [validation disabled], Urgent pointer: 0
    Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps: TSval 629680589, TSecr 441089700
    [iRTT: 0.075788000 seconds] [Bytes in flight: 244]
Secure Sockets Layer
    TLSv1.2 Record Layer: Handshake Protocol: Client Hello
        Content Type: Handshake (22)
        Version: TLS 1.0 (0x0301)
        Length: 239
        Handshake Protocol: Client Hello
            Handshake Type: Client Hello (1), Length: 235
            Version: TLS 1.2 (0x0303)
            Random: GMT Unix Time: Nov 7, 2016 10:26:22 GMT
            Random Bytes: 9116f50cc50acecb9cdd427c0eeddd63ad87ea8c5e90cf17...
            Session ID Length: 0
            Cipher Suites Length: 100, Cipher Suites (50 suites):
                TLS_EMPTY_RENEGOTIATION_INFO_SCSV (0x00ff)
                TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c), TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
                TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 (0xc024), TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028)
                TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02e), TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384 (0xc032)
                TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384 (0xc026), TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384 (0xc02a)
                TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (0x009f), TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 (0x00a3)
                TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (0x006b), TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 (0x006a)
                TLS_RSA_WITH_AES_256_GCM_SHA384 (0x009d), TLS_RSA_WITH_AES_256_CBC_SHA256 (0x003d)
                TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 (0xc02b), TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
                TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 (0xc023), TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027)
                TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256 (0xc02d), TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 (0xc031)
                TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256 (0xc025), TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256 (0xc029)
                TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (0x009e), TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 (0x00a2)
                TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (0x0067), TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 (0x0040)
                TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c), TLS_RSA_WITH_AES_128_CBC_SHA256 (0x003c)
                TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a), TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
                TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039), TLS_DHE_DSS_WITH_AES_256_CBC_SHA (0x0038)
                TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA (0xc005), TLS_ECDH_RSA_WITH_AES_256_CBC_SHA (0xc00f)
                TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
                TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA (0xc008), TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (0xc012)
                TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (0x0016), TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA (0x0013)
                TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA (0xc003), TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA (0xc00d)
                TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
                TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (0xc009), TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
                TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033), TLS_DHE_DSS_WITH_AES_128_CBC_SHA (0x0032)
                TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA (0xc004), TLS_ECDH_RSA_WITH_AES_128_CBC_SHA (0xc00e)
                TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
            Compression Methods (1 method): null (0)
            Extensions Length: 94
                Extension: server_name (0x0000), Length: 22
                    Server Name Indication: host_name, api.bitbucket.org
                Extension: elliptic_curves (0x000a), Length: 58, Elliptic curves (28 curves):
                    sect571r1 (0x000e), sect571k1 (0x000d), secp521r1 (0x0019), brainpoolP512r1 (0x001c)
                    sect409k1 (0x000b), sect409r1 (0x000c), brainpoolP384r1 (0x001b), secp384r1 (0x0018)
                    sect283k1 (0x0009), sect283r1 (0x000a), brainpoolP256r1 (0x001a), secp256k1 (0x0016)
                    secp256r1 (0x0017), sect239k1 (0x0008), sect233k1 (0x0006), sect233r1 (0x0007)
                    secp224k1 (0x0014), secp224r1 (0x0015), sect193r1 (0x0004), sect193r2 (0x0005)
                    secp192k1 (0x0012), secp192r1 (0x0013), sect163k1 (0x0001), sect163r1 (0x0002)
                    sect163r2 (0x0003), secp160k1 (0x000f), secp160r1 (0x0010), secp160r2 (0x0011)
                Extension: ec_point_formats (0x000b), Length: 2: uncompressed (0)
-------------- next part --------------
No.   Time                           Source         Destination    Protocol  Length  Info
3969  2016-11-07 10:28:24.886201000  192.168.0.121  104.192.143.5  TLSv1.2   338     Client Hello

Frame 3969: 338 bytes on wire (2704 bits), 338 bytes captured (2704 bits) on interface 0 (en0)
    Arrival Time: Nov 7, 2016 10:28:24.886201000 GMT
    [Protocols in frame: eth:ethertype:ip:tcp:ssl]
Ethernet II, Src: Apple_8e:af:4e (5c:f9:38:8e:af:4e), Dst: Routerbo_36:4d:e6 (e4:8d:8c:36:4d:e6)
Internet Protocol Version 4, Src: 192.168.0.121, Dst: 104.192.143.5
    Total Length: 324, Identification: 0x88c9 (35017), Flags: 0x02 (Don't Fragment)
    Time to live: 64, Protocol: TCP (6), Header checksum: 0xf803 [validation disabled]
Transmission Control Protocol, Src Port: 61384, Dst Port: 443, Seq: 1, Ack: 1, Len: 272
    [Stream index: 42], Header Length: 32 bytes, Flags: 0x018 (PSH, ACK)
    Window size value: 4138 [Calculated window size: 132416] [Window size scaling factor: 32]
    Checksum: 0xb7d5 [validation disabled], Urgent pointer: 0
    Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps: TSval 629802148, TSecr 441120318
    [iRTT: 0.075722000 seconds] [Bytes in flight: 272]
Secure Sockets Layer
    TLSv1.2 Record Layer: Handshake Protocol: Client Hello
        Content Type: Handshake (22)
        Version: TLS 1.2 (0x0303)
        Length: 267
        Handshake Protocol: Client Hello
            Handshake Type: Client Hello (1), Length: 263
            Version: TLS 1.2 (0x0303)
            Random: GMT Unix Time: Nov 7, 2016 10:28:24 GMT
            Random Bytes: 7e016520574973b193b7e93d592e9415bc17d101b3cca6ec...
            Session ID Length: 0
            Cipher Suites Length: 100, Cipher Suites (50 suites): identical to the first Client Hello
            Compression Methods (1 method): null (0)
            Extensions Length: 122
                Extension: server_name (0x0000), Length: 22
                    Server Name Indication: host_name, api.bitbucket.org
                Extension: elliptic_curves (0x000a), Length: 58: identical 28 curves to the first Client Hello
                Extension: ec_point_formats (0x000b), Length: 2: uncompressed (0)
                Extension: signature_algorithms (0x000d), Length: 24, Signature Hash Algorithms (11 algorithms):
                    SHA512+ECDSA (0x0603), SHA512+RSA (0x0601), SHA384+ECDSA (0x0503), SHA384+RSA (0x0501)
                    SHA256+ECDSA (0x0403), SHA256+RSA (0x0401), SHA224+ECDSA (0x0303), SHA224+RSA (0x0301)
                    SHA1+ECDSA (0x0203), SHA1+RSA (0x0201)
                    SHA1+DSA (0x0202)

From essen@REDACTED Mon Nov 7 11:34:10 2016
From: essen@REDACTED (Loïc Hoguin)
Date: Mon, 7 Nov 2016 12:34:10 +0200
Subject: [erlang-questions] How to get scheduler_id in nif?
In-Reply-To: 
References: 
Message-ID: <58a27cfc-0ef0-3221-f6d4-9e5084e44d8c@ninenines.eu>

On 11/07/2016 12:23 PM, Jesper Louis Andersen wrote:
> On Sat, Nov 5, 2016 at 9:09 AM Sergej Jurečko wrote:
>
>     My suggestion is to call it from erlang when starting up your nif
>     and save it to a thread local storage variable.
>
> That sounds like a data race waiting to happen. If your process is moved
> from one scheduler to another or if the scheduler you are running on is
> taken offline by the operating system, you are in trouble.
>
> I'm more inclined to ask why it is useful to know which scheduler_id a
> given process is being run on.

It is, if you only send it on init. It's fine if you pass it in every call.

For example, you have N threads in the NIF, and use the scheduler id to
pick the thread (works best when N = the number of schedulers, which is
often also the number of cores). I do this in a customer project, and it
works very well. The trick is that it ultimately doesn't matter what
number is passed, so it can change just fine.

If you really need to know the scheduler of the calling process, I'm
afraid the only solution is to tie the process to a specific scheduler.
But I wouldn't recommend it.

-- 
Loïc Hoguin
https://ninenines.eu
Author of The Erlanger Playbook,
A book about software development using Erlang

From sergej.jurecko@REDACTED Mon Nov 7 11:40:20 2016
From: sergej.jurecko@REDACTED (Sergej Jurečko)
Date: Mon, 7 Nov 2016 11:40:20 +0100
Subject: [erlang-questions] How to get scheduler_id in nif?
In-Reply-To: 
References: 
Message-ID: 

On Nov 7, 2016 3:53 PM, "Jesper Louis Andersen" <jesper.louis.andersen@REDACTED> wrote:
>
> On Sat, Nov 5, 2016 at 9:09 AM Sergej Jurečko wrote:
>>
>> My suggestion is to call it from erlang when starting up your nif and
>> save it to a thread local storage variable.
>>
> That sounds like a data race waiting to happen. If your process is moved
> from one scheduler to another or if the scheduler you are running on is
> taken offline by the operating system, you are in trouble.
>

Assuming he needs this data for communication with erlang code, not for
something that is inside a nif only.

Sergej
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alex0player@REDACTED Mon Nov 7 09:05:00 2016
From: alex0player@REDACTED (Alex S.)
Date: Mon, 7 Nov 2016 11:05:00 +0300
Subject: [erlang-questions] Unexpected tls_alert "handshake failure" connecting
 to api.bitbucket.org (and others) with Erlang 18.3.4 (and later)
In-Reply-To: 
References: 
Message-ID: <36668655-AC7E-498C-BF62-C13DFEF29116@gmail.com>

> On 7 Nov 2016, at 0:53, Jake Gordon wrote:
>
> Hi All.
>
> I'm hoping to get some insight into a problem with ssl:connect (and
> ultimately httpc:request) getting tls handshake errors connecting to some
> (but not all) webservers even while other clients on the same machine
> (cURL, Ruby Net::HTTP, etc) can connect just fine.
>
> I'm using Erlang 19.1.3, but this issue appears to have started with
> 18.3.4 (earlier versions appear to work correctly)
>
> I'm trying to connect to a (correctly configured) public endpoint at
> api.bitbucket.org
>
> > ssl:connect('api.bitbucket.org', 443, []).
> {error,{tls_alert,"handshake failure"}}

Well, the least I can do is confirm that it indeed works that way on 19.1
and works correctly on 18.3, though I forgot the minor version. So, not
just the two of you. (I'm using kerl builds, so no Erlang Solutions mods
for sure.)
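A related sanity check when comparing OTP releases side by side: the TLS/SSL versions a given build supports, and which of them it enables by default, can be listed with ssl:versions/0 from the stock ssl application (a sketch; the exact output depends on the release):

    1> ssl:start().
    ok
    2> ssl:versions().
    %% Returns the ssl application version together with the 'supported'
    %% (default) and 'available' protocol versions for this OTP build.

Comparing that output between an 18.3 and a 19.1 build can show which protocol versions each release will offer in its ClientHello unless overridden with the {versions, ...} connect option.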
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jose.valim@REDACTED Mon Nov 7 13:11:24 2016
From: jose.valim@REDACTED (José Valim)
Date: Mon, 7 Nov 2016 13:11:24 +0100
Subject: [erlang-questions] net_kernel:monitor_nodes and DOWN message guarantees
Message-ID: 

Given a process in node 'A' that calls net_kernel:monitor_nodes() and a
node 'B':

* If node 'B' reconnects intermittently, is it guaranteed {nodedown, 'B'}
  is always delivered **before** an eventual {nodeup, 'B'}?

* If the same process in node 'A' that calls net_kernel:monitor_nodes()
  also monitors a pid in node 'B', is it guaranteed the 'DOWN' messages
  for such pid are delivered **before** {nodedown, 'B'}?

I am in particular looking for the transitive property that if A monitors
a process in B and B reconnects, the 'DOWN' messages are delivered before
nodeup.

Thank you,

*José Valim*
www.plataformatec.com.br
Skype: jv.ptec
Founder and Director of R&D

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jakesgordon@REDACTED Mon Nov 7 16:07:31 2016
From: jakesgordon@REDACTED (Jake Gordon)
Date: Mon, 7 Nov 2016 07:07:31 -0800
Subject: [erlang-questions] Unexpected tls_alert "handshake failure" connecting
 to api.bitbucket.org (and others) with Erlang 18.3.4 (and later)
In-Reply-To: 
References: 
Message-ID: 

Thanks Ben!

Yes, I was seeing the same thing with wireshark: the server was responding
with a failed handshake immediately after the ClientHello, and yes,
forcing the connection to use tlsv1.2 for that endpoint does resolve the
issue for me.

For the record, I forced the tls version as follows...

> ssl:connect('api.bitbucket.org', 443, [{ versions, [ 'tlsv1.2' ] }]).

And, just to confirm, I can also do that from the higher level Elixir
HTTPoison library...

iex> HTTPoison.request("GET", "https://api.bitbucket.org", "", [], ssl: [ versions: [ :'tlsv1.2' ] ])

Thank you!
- Jake

On Mon, Nov 7, 2016 at 2:31 AM, Ben Murphy wrote:
> Hi Jake,
>
> If you force TLSv1.2 it will connect correctly. We have had trouble
> with IIS servers returning connection_closed when they are using a
> SHA256 certificate and we don't force TLSv1.2. More details here:
> http://erlang.org/pipermail/erlang-bugs/2016-September/005195.html .
> However, this server looks to be running nginx and a different error
> is returned, so I'm not sure if it is the same issue. The handshake
> falls over after the client hello for me.
>
> It seems the only big difference between the hellos is the TLS version
> (maybe some nginx/openssl servers are dropping TLS1.0 traffic?) and
> the lack of signature algorithms.
>
> On Sun, Nov 6, 2016 at 9:53 PM, Jake Gordon wrote:
> > Hi All.
> >
> > I'm hoping to get some insight into a problem with ssl:connect (and
> > ultimately httpc:request) getting tls handshake errors connecting to some
> > (but not all) webservers even while other clients on the same machine
> > (cURL, Ruby Net::HTTP, etc) can connect just fine.
> >
> > I'm using Erlang 19.1.3, but this issue appears to have started with
> > 18.3.4 (earlier versions appear to work correctly)
> >
> > I'm trying to connect to a (correctly configured) public endpoint at
> > api.bitbucket.org
> >
> > > ssl:connect('api.bitbucket.org', 443, []).
> > {error,{tls_alert,"handshake failure"}}
> >
> > If I attempt to connect to a different endpoint, let's say api.github.com,
> > it works just fine.
> >
> > > ssl:connect('api.github.com', 443, [])
> > {ok,{sslsocket, ... }}
> >
> > Since it's only *some* SSL endpoints, clearly there is some server side
> > certificate configuration causing the erlang client to behave differently
> > during the handshake, but I'm not sure how to diagnose this when cURL and
> > other language clients work correctly.
> > I'm using a clean install of the esl-erlang packages provided by Erlang
> > Solutions on Ubuntu 16.04, and debugging with older versions it looks
> > like it broke somewhere around 18.3.4
> >
> > Any insights would be greatly appreciated!
> >
> > Thanks
> > - Jake
> >
> > _______________________________________________
> > erlang-questions mailing list
> > erlang-questions@REDACTED
> > http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From liuzhongzheng2012@REDACTED Tue Nov 8 03:11:10 2016
From: liuzhongzheng2012@REDACTED (Zhongzheng Liu)
Date: Tue, 8 Nov 2016 10:11:10 +0800
Subject: [erlang-questions] How to get scheduler_id in nif?
In-Reply-To: <58a27cfc-0ef0-3221-f6d4-9e5084e44d8c@ninenines.eu>
References: <58a27cfc-0ef0-3221-f6d4-9e5084e44d8c@ninenines.eu>
Message-ID: 

If I get the scheduler_id in Erlang and pass it to the nif, like this:

    SID = erlang:system_info(scheduler_id),
    nif_do_something(SID)

I worry that after getting the scheduler_id, the reductions may run out
and the process may migrate to another scheduler. That may create a data
race.

There is enif_make_unique_integer in OTP 19, and the scheduler_id can be
extracted from the unique integer. Unfortunately enif_make_unique_integer
is not available in OTP 18, which I am working on. Maybe it is possible
to unbox a reference to extract the scheduler_id?

> I'm more inclined to ask why it is useful to know which scheduler_id a
> given process is being run on.

I am trying to make my nif lock free, like erlang:make_ref/0 and
erlang:unique_integer/0 in OTP 18+.

2016-11-07 18:34 GMT+08:00 Loïc Hoguin:
> On 11/07/2016 12:23 PM, Jesper Louis Andersen wrote:
>>
>> On Sat, Nov 5, 2016 at 9:09 AM Sergej Jurečko wrote:
>>
>>     My suggestion is to call it from erlang when starting up your nif
>>     and save it to a thread local storage variable.
>>
>> That sounds like a data race waiting to happen.
>> If your process is moved
>> from one scheduler to another or if the scheduler you are running on is
>> taken offline by the operating system, you are in trouble.
>>
>> I'm more inclined to ask why it is useful to know which scheduler_id a
>> given process is being run on.
>
> It is, if you only send it on init. It's fine if you pass it in every call.
>
> For example, you have N threads in the NIF, and use the scheduler id to
> pick the thread (works best when N = the number of schedulers, which is
> often also the number of cores). I do this in a customer project, and it
> works very well. The trick is that it ultimately doesn't matter what
> number is passed, so it can change just fine.
>
> If you really need to know the scheduler of the calling process, I'm
> afraid the only solution is to tie the process to a specific scheduler.
> But I wouldn't recommend it.
>
> --
> Loïc Hoguin
> https://ninenines.eu
> Author of The Erlanger Playbook,
> A book about software development using Erlang

From arunp@REDACTED Tue Nov 8 06:23:57 2016
From: arunp@REDACTED (ARUN P)
Date: Tue, 08 Nov 2016 10:53:57 +0530
Subject: [erlang-questions] Sync transaction vs dump log in mnesia
Message-ID: <5821616D.6040700@utl.in>

Hi,

Can someone kindly describe the difference between a sync transaction and
a dump log in mnesia? In my application I am using the mnesia database,
and I observed that if the system restarts soon after data is written
into a table, the data is not persistent. I am aware of the mnesia
dump_log_time_threshold and dump_log_write_threshold settings, and I
don't want to change these thresholds for the time being. To solve this
problem I used mnesia:sync_transaction/1, but that also fails (the
description of mnesia:sync_transaction in the Mnesia Reference Manual
says that the data will be logged to disk). But if I call
mnesia:dump_log/0 soon after the write operation and restart the system,
the data is persistent.
Thanks in advance,
Arun

From dgud@REDACTED Tue Nov 8 08:04:46 2016
From: dgud@REDACTED (Dan Gudmundsson)
Date: Tue, 08 Nov 2016 07:04:46 +0000
Subject: [erlang-questions] Sync transaction vs dump log in mnesia
In-Reply-To: <5821616D.6040700@utl.in>
References: <5821616D.6040700@utl.in>
Message-ID: 

Mnesia does not sync the log to disc during transactions at all; that is
too slow. If you really need to sync your data/writes to disc, use
mnesia:sync_log(), which gives some more safety, though notice that the
data may still only be in hard-drive caches.

mnesia:dump_log() can be used, but that also goes through the log file
and dumps the writes out to the table files before returning, so it is
really slow.

sync_transaction and sync_dirty sync the writes between the other nodes
before returning to the user, so when combined with a dirty_read on a
remote node the data is available. If you also use transaction protection
on your 'read', it is not necessary to use the sync functions.

Clearer?

On Tue, Nov 8, 2016 at 6:24 AM ARUN P wrote:
> Hi,
>
> Can someone kindly describe the difference between a sync transaction
> and a dump log in mnesia? In my application I am using the mnesia
> database, and I observed that if the system restarts soon after data is
> written into a table, the data is not persistent. I am aware of the
> mnesia dump_log_time_threshold and dump_log_write_threshold settings,
> and I don't want to change these thresholds for the time being. To
> solve this problem I used mnesia:sync_transaction/1, but that also
> fails (the description of mnesia:sync_transaction in the Mnesia
> Reference Manual says that the data will be logged to disk). But if I
> call mnesia:dump_log/0 soon after the write operation and restart the
> system, the data is persistent.
> Thanks in advance,
> Arun
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From v@REDACTED Tue Nov 8 13:30:40 2016
From: v@REDACTED (Valentin Micic)
Date: Tue, 8 Nov 2016 14:30:40 +0200
Subject: [erlang-questions] Cost of inter-nodal link/1
Message-ID: 

Hi,

Did anyone ever measure how expensive it is to execute link/1 for a
process running on a different node?

Kind regards

V/

From sweden.feng@REDACTED Tue Nov 8 14:12:35 2016
From: sweden.feng@REDACTED (Alex Feng)
Date: Tue, 8 Nov 2016 14:12:35 +0100
Subject: [erlang-questions] The way of accessing record's attributes.
Message-ID: 

Hi,

Does anyone know why erlang has to attach the record's name to be able to
access an attribute? I don't understand: for example, the record "#robot"
has been assigned to the variable "Crusher", so why do we have to use
"Crusher#robot.hobbies" instead of "Crusher.hobbies"?

5> Crusher = #robot{name="Crusher", hobbies=["Crushing people","petting cats"]}.
#robot{name = "Crusher",type = industrial,
       hobbies = ["Crushing people","petting cats"],
       details = []}
6> Crusher#robot.hobbies.
["Crushing people","petting cats"]

Br,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From erlang.org@REDACTED Tue Nov 8 15:30:08 2016
From: erlang.org@REDACTED (Stanislaw Klekot)
Date: Tue, 8 Nov 2016 15:30:08 +0100
Subject: [erlang-questions] The way of accessing record's attributes.
In-Reply-To: 
References: 
Message-ID: <20161108143008.GA13485@jarowit.net>

On Tue, Nov 08, 2016 at 02:12:35PM +0100, Alex Feng wrote:
> Does anyone know why erlang has to attach the record's name to be able to
> access an attribute ?
> I don't understand, for example, the record "#robot" has been assigned to
> variable "Crusher", why do we have to use "Crusher#robot.hobbies" instead
> of "Crusher.hobbies" ?

Records are syntax sugar for tuples. The compiler doesn't know what kind
of value is in `Crusher' (maybe some different record) and, more
importantly, what the field names are, so you need to specify the
appropriate record name yourself.

-- 
Stanislaw Klekot

From hugo@REDACTED Tue Nov 8 15:39:17 2016
From: hugo@REDACTED (Hugo Mills)
Date: Tue, 8 Nov 2016 14:39:17 +0000
Subject: [erlang-questions] The way of accessing record's attributes.
In-Reply-To: 
References: 
Message-ID: <20161108143917.GP16645@carfax.org.uk>

On Tue, Nov 08, 2016 at 02:12:35PM +0100, Alex Feng wrote:
> Hi,
>
> Does anyone know why erlang has to attach the record's name to be able to
> access an attribute ?
> I don't understand, for example, the record "#robot" has been assigned to
> variable "Crusher", why do we have to use "Crusher#robot.hobbies" instead
> of "Crusher.hobbies" ?

First, note that records are a *compile time* construct. Once the code is
compiled, it turns a record into a tagged tuple (e.g. {robot, "Crusher",
industrial, [], []}), and turns all of the access of the record into
pattern matches like {robot, _, industrial, _, _}, or simple element
extraction with element/2.

In order to do this, the compiler needs to know what type of record it
is, *at compile time*. In your example, it's pretty obvious what kind of
record the variable is, but if you are (say) passing that variable to a
function, how would the compiler know which record definition it should
be using? It can't find all of the call sites where the function is
called from, because some of them might be outside the module being
compiled...

Hugo.

> 5> Crusher = #robot{name="Crusher", hobbies=["Crushing people","petting
> cats"]}.
> #robot{name = "Crusher",type = industrial,
>        hobbies = ["Crushing people","petting cats"],
>        details = []}
> 6> Crusher#robot.hobbies.
> ["Crushing people","petting cats"]
>
> Br,
> Alex

-- 
Hugo Mills              | I thought I'd discovered a new colour, but it was
hugo@REDACTED           | just a pigment of my imagination.
carfax.org.uk           |
http://carfax.org.uk/   | PGP: E2AB1DE4
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 836 bytes
Desc: Digital signature
URL: 

From raimo+erlang-questions@REDACTED Tue Nov 8 15:41:29 2016
From: raimo+erlang-questions@REDACTED (Raimo Niskanen)
Date: Tue, 8 Nov 2016 15:41:29 +0100
Subject: [erlang-questions] The way of accessing record's attributes.
In-Reply-To: 
References: 
Message-ID: <20161108144129.GA11963@erix.ericsson.se>

On Tue, Nov 08, 2016 at 02:12:35PM +0100, Alex Feng wrote:
> Hi,
>
> Does anyone know why erlang has to attach the record's name to be able to
> access an attribute ?
> I don't understand, for example, the record "#robot" has been assigned to
> variable "Crusher", why do we have to use "Crusher#robot.hobbies" instead
> of "Crusher.hobbies" ?

Records are compile-time syntactic sugar on tagged tuples, so
"Crusher#robot.hobbies" is translated to something like

    begin
        robot = element(1, Crusher),
        element(3, Crusher)
    end

if 'hobbies' is the second field of #robot. The compiler cannot know that
it is the hobbies field from the #robot record you want to extract unless
told so; you may mean the hobbies field from the #whatever record. It
could see that it is a #robot record from an earlier assignment in the
same code and warn if it's not, but in the general case it cannot know
what is stored in the variable, and hence leaves it to be resolved at
runtime.
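The desugaring above can be checked concretely in the shell. A small sketch, continuing the Crusher example from this thread (it assumes the definition -record(robot, {name, type, hobbies, details}), so the underlying tuple is {robot, Name, Type, Hobbies, Details} and 'hobbies' sits at element 4):

    7> element(1, Crusher).
    robot
    8> Crusher#robot.hobbies =:= element(4, Crusher).
    true

The record syntax only buys compile-time resolution of those element indices; at runtime nothing more than the plain tagged tuple exists.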
> > > 5> Crusher = #robot{name="Crusher", hobbies=["Crushing people","petting > cats"]}. > #robot{name = "Crusher",type = industrial, > hobbies = ["Crushing people","petting cats"], > details = []} > 6> Crusher#robot.hobbies. > ["Crushing people","petting cats"] > > > Br, > Alex > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From alex0player@REDACTED Tue Nov 8 15:28:11 2016 From: alex0player@REDACTED (Alex S.) Date: Tue, 8 Nov 2016 17:28:11 +0300 Subject: [erlang-questions] The way of accessing record's attributes. In-Reply-To: References: Message-ID: Potentially, nothing stops you from having multiple records with same-named fields, and as Erlang isn't statically typed, the compiler wouldn't have a way to tell those apart. That said, you could potentially add support for such a syntax so long as it's unambiguous. From lukas@REDACTED Tue Nov 8 15:43:04 2016 From: lukas@REDACTED (Lukas Larsson) Date: Tue, 8 Nov 2016 15:43:04 +0100 Subject: [erlang-questions] net_kernel:monitor_nodes and DOWN message guarantees In-Reply-To: References: Message-ID: Hello, On Mon, Nov 7, 2016 at 1:11 PM, José Valim wrote: > Given a process in node 'A' that calls net_kernel:monitor_nodes() and a > node 'B'. > > * If node 'B' reconnects intermittently, is it guaranteed {nodedown, > 'B'} is always delivered **before** an eventual {nodeup, 'B'}? > No. The two connections are considered two different concurrent entities, so the same message ordering guarantees as normal apply, i.e. none in this case. > > * If the same process in node 'A' that calls net_kernel:monitor_nodes() > also monitors a pid in node 'B', is it guaranteed the 'DOWN' messages for > such pid are delivered **before** {nodedown, 'B'}? > Yes.
The monitor_nodes() up/down messages enclose the connection, so all traffic (be it 'DOWN' messages, broken links, ordinary messages etc.) is guaranteed to be in between the up and down of that connection. > > I am in particular looking for the transitive property that if A monitors > a process in B and B reconnects, the 'DOWN' messages are delivered before > nodeup. > So no, that is not guaranteed. Lukas -------------- next part -------------- An HTML attachment was scrubbed... URL: From sverker.eriksson@REDACTED Tue Nov 8 15:53:26 2016 From: sverker.eriksson@REDACTED (Sverker Eriksson) Date: Tue, 8 Nov 2016 15:53:26 +0100 Subject: [erlang-questions] How to get scheduler_id in nif? In-Reply-To: References: <58a27cfc-0ef0-3221-f6d4-9e5084e44d8c@ninenines.eu> Message-ID: On 11/08/2016 03:11 AM, Zhongzheng Liu wrote: > I am trying to make my nif lock free like erlang:make_ref/0 and > erlang:unique_integer/0 in OTP 18+ > > > Why not use the enif_tsd_* interface (thread specific data) and create your own unique id for each calling thread. /Sverker, Erlang/OTP From sweden.feng@REDACTED Tue Nov 8 16:21:59 2016 From: sweden.feng@REDACTED (Alex Feng) Date: Tue, 8 Nov 2016 16:21:59 +0100 Subject: [erlang-questions] The way of accessing record's attributes. In-Reply-To: <20161108144129.GA11963@erix.ericsson.se> References: <20161108144129.GA11963@erix.ericsson.se> Message-ID: Thank you all for the detailed explanation, I guess I was trying to use the "C/C++" way of thinking to understand it. Br, Alex 2016-11-08 15:41 GMT+01:00 Raimo Niskanen < raimo+erlang-questions@REDACTED>: > On Tue, Nov 08, 2016 at 02:12:35PM +0100, Alex Feng wrote: > > Hi, > > > > Does anyone know why erlang has to attach the record's name to be able to > > access an attribute ? > > I don't understand, for example, the record "#robot" has been assigned to > > variable "Crusher", why do we have to use "Crusher#robot.hobbies" > instead > > of "Crusher.hobbies" ? > > Records is a compile time syntactical sugar on tagged tuples.
> > So "Crusher#robot.hobbies" is translated to something like > begin robot = element(1, Crusher), element(3, Crusher) end > if 'hobbies' is the second element of #robot. > > The compiler can not know that it is the hobbies field from the #robot > record you want to extract unless told so. You may mean the hobbies field > from the #whatever record. It could see that it is a #robot record from an > earlier assignment in the same code and warn about it if it's not, > but in the general case it can not know what is stored in the variable > hence leaves it to be solved in runtime. > > > > > > > > 5> Crusher = #robot{name="Crusher", hobbies=["Crushing people","petting > > cats"]}. > > #robot{name = "Crusher",type = industrial, > > hobbies = ["Crushing people","petting cats"], > > details = []} > > 6> Crusher#robot.hobbies. > > ["Crushing people","petting cats"] > > > > > > Br, > > Alex > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > -- > > / Raimo Niskanen, Erlang/OTP, Ericsson AB > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From raimo+erlang-questions@REDACTED Tue Nov 8 16:35:43 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Tue, 8 Nov 2016 16:35:43 +0100 Subject: [erlang-questions] The way of accessing record's attributes. In-Reply-To: References: <20161108144129.GA11963@erix.ericsson.se> Message-ID: <20161108153543.GA17320@erix.ericsson.se> On Tue, Nov 08, 2016 at 04:21:59PM +0100, Alex Feng wrote: > Thank you all for the detailed explanation, I guess I was trying to use > "C/C++" way of thinking to understand . 
This corresponds to the case in C where you have a void *, then you have to cast it to the appropriate (struct robot *) before accessing a field ->hobbies in it. In Erlang you can not declare the type of a variable, which seems to be the C/C++ thinking you fell into. Type is a run-time property in Erlang. (except when using the static type checker Dialyzer) > > Br, > Alex > > 2016-11-08 15:41 GMT+01:00 Raimo Niskanen < > raimo+erlang-questions@REDACTED>: > > > On Tue, Nov 08, 2016 at 02:12:35PM +0100, Alex Feng wrote: > > > Hi, > > > > > > Does anyone know why erlang has to attach the record's name to be able to > > > access an attribute ? > > > I don't understand, for example, the record "#robot" has been assigned to > > > variable "Crusher", why do we have to use "Crusher#robot.hobbies" > > instead > > > of "Crusher.hobbies" ? > > > > Records is a compile time syntactical sugar on tagged tuples. > > > > So "Crusher#robot.hobbies" is translated to something like > > begin robot = element(1, Crusher), element(3, Crusher) end > > if 'hobbies' is the second element of #robot. > > > > The compiler can not know that it is the hobbies field from the #robot > > record you want to extract unless told so. You may mean the hobbies field > > from the #whatever record. It could see that it is a #robot record from an > > earlier assignment in the same code and warn about it if it's not, > > but in the general case it can not know what is stored in the variable > > hence leaves it to be solved in runtime. > > > > > > > > > > > > > 5> Crusher = #robot{name="Crusher", hobbies=["Crushing people","petting > > > cats"]}. > > > #robot{name = "Crusher",type = industrial, > > > hobbies = ["Crushing people","petting cats"], > > > details = []} > > > 6> Crusher#robot.hobbies. 
> > > ["Crushing people","petting cats"] > > > > > > > > > Br, > > > Alex > > > > > _______________________________________________ > > > erlang-questions mailing list > > > erlang-questions@REDACTED > > > http://erlang.org/mailman/listinfo/erlang-questions > > > > > > -- > > > > / Raimo Niskanen, Erlang/OTP, Ericsson AB > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From ok@REDACTED Wed Nov 9 02:13:59 2016 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 9 Nov 2016 14:13:59 +1300 Subject: [erlang-questions] The way of accessing record's attributes. In-Reply-To: <20161108153543.GA17320@erix.ericsson.se> References: <20161108144129.GA11963@erix.ericsson.se> <20161108153543.GA17320@erix.ericsson.se> Message-ID: <06a389de-4733-629e-da0d-d317e72f482f@cs.otago.ac.nz> > On Tue, Nov 08, 2016 at 04:21:59PM +0100, Alex Feng wrote: >> Thank you all for the detailed explanation, I guess I was trying to use >> "C/C++" way of thinking to understand . For what it's worth, you may have noticed that a lot of structs in the UNIX api have prefixes on their fields, e.g., tm_sec, tm_min, tm_hour, or st_dev, st_ino, st_mode, ... The reason is simple: there was a time before C got typed pointers (indeed, when it just barely had types at all), so the prefixes were necessary for disambiguation in the very same way. The "C way of thinking" hasn't always been the same. 
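The disambiguation point made throughout this thread can be seen in a small self-contained sketch (the module, record and function names here are hypothetical, not taken from the thread): two records may declare a field with the same name, so the compiler can only pick the right tuple index for a field access when the record name is written out.

```erlang
-module(robots).
-export([name/1]).

%% Two record definitions that both have a `name' field.
-record(robot, {name, type = industrial, hobbies = [], details = []}).
-record(pet,   {name, species}).

%% R#robot.name compiles down to (roughly) a tag check plus element(2, R).
%% Without the #robot/#pet annotation in the pattern or access expression,
%% the compiler could not decide which tuple layout -- and therefore which
%% element index -- to use.
name(#robot{name = Name}) -> Name;
name(#pet{name = Name})   -> Name.
```

In the shell, robots:name(#robot{name = "Crusher"}) and robots:name(#pet{name = "Rex", species = dog}) both return the name; it is the record tag in each clause head that does the disambiguation.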
From bjorn-egil.xb.dahlberg@REDACTED Wed Nov 9 12:31:36 2016 From: bjorn-egil.xb.dahlberg@REDACTED (=?UTF-8?Q?Bj=c3=b6rn-Egil_Dahlberg_XB?=) Date: Wed, 9 Nov 2016 12:31:36 +0100 Subject: [erlang-questions] Patch Package OTP 19.1.6 Released Message-ID: <9e02eb5f-ecbd-778a-e04f-8d8b962b2e69@ericsson.com> Patch Package: OTP 19.1.6 Git Tag: OTP-19.1.6 Date: 2016-11-09 Trouble Report Id: OTP-13956, OTP-13997, OTP-14009 Seq num: System: OTP Release: 19 Application: erts-8.1.1 Predecessor: OTP 19.1.5 Check out the git tag OTP-19.1.6, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- erts-8.1.1 ------------------------------------------------------ --------------------------------------------------------------------- Note! The erts-8.1.1 application can *not* be applied independently of other applications on an arbitrary OTP 19 installation. On a full OTP 19 installation, also the following runtime dependency has to be satisfied: -- sasl-3.0.1 (first satisfied in OTP 19.1) --- Fixed Bugs and Malfunctions --- OTP-13956 Application(s): erts Related Id(s): ERL-133, ERL-262 The emulator got a dynamic library dependency towards libsctp, which on Linux was not intended since the emulator there loads and resolves the needed sctp functions in runtime. This has been fixed and a configure switch --enable-sctp=lib has been added for those who want such a library dependency. OTP-13997 Application(s): erts Fix SIGUSR1 crashdump generation Do not generate a core when a crashdump is asked for. OTP-14009 Application(s): erts The new functions in code that allows loading of many modules at once had a performance problem. 
While executing a helper function in the erl_prim_loader process, garbage messages were produced. The garbage messages were ignored and ultimately discarded, but there would be a negative impact on performance and memory usage. The number of garbage messages depended on both the number of modules to be loaded and the length of the code path. The functions affected by this problem were: atomic_load/1, ensure_modules_loaded/1, and prepare_loading/1. Full runtime dependencies of erts-8.1.1: kernel-5.0, sasl-3.0.1, stdlib-3.0 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- This patch package will only be available via GitHub: https://github.com/erlang/otp/tree/OTP-19.1.6 From thomas.elsgaard@REDACTED Wed Nov 9 20:48:25 2016 From: thomas.elsgaard@REDACTED (Thomas Elsgaard) Date: Wed, 09 Nov 2016 19:48:25 +0000 Subject: [erlang-questions] List troubles Message-ID: Hi I am having some difficulties flattening a list which looks like this: [<<"A">>,"4",<<"B">>,"c",<<"d">>] After flattening, it should look like this: A4Bcd Any good ways to do this ? Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernando.benavides@REDACTED Wed Nov 9 20:50:21 2016 From: fernando.benavides@REDACTED (Brujo Benavides) Date: Wed, 9 Nov 2016 16:50:21 -0300 Subject: [erlang-questions] List troubles In-Reply-To: References: Message-ID: <73BC5BFA-D871-48C1-BAAB-9B5C52A287E6@inakanetworks.com> Have you tried iolist_to_binary ? 1> iolist_to_binary([<<"A">>,"4",<<"B">>,"c",<<"d">>]). <<"A4Bcd">> 2> binary_to_list(v(-1)). "A4Bcd" 3> > On Nov 9, 2016, at 16:48, Thomas Elsgaard wrote: > > Hi > > I am having some difficulties by flattening a list which looks like this: > > [<<"A">>,"4",<<"B">>,"c",<<"d">>] > > After flattening, it should look like this: A4Bcd > > Any good ways to do this ?
> > Thomas > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.elsgaard@REDACTED Wed Nov 9 21:26:30 2016 From: thomas.elsgaard@REDACTED (Thomas Elsgaard) Date: Wed, 09 Nov 2016 20:26:30 +0000 Subject: [erlang-questions] List troubles In-Reply-To: <73BC5BFA-D871-48C1-BAAB-9B5C52A287E6@inakanetworks.com> References: <73BC5BFA-D871-48C1-BAAB-9B5C52A287E6@inakanetworks.com> Message-ID: Thanks! I tried many other things, but you solved it, thanks! Now I also learned something new today ;-) Thomas On Wed, 9 Nov 2016 at 20:50 Brujo Benavides < fernando.benavides@REDACTED> wrote: > Have you tried iolist_to_binary ? > > 1> iolist_to_binary([<<"A">>,"4",<<"B">>,"c",<<"d">>]). > <<"A4Bcd">> > 2> binary_to_list(v(-1)). > "A4Bcd" > 3> > > On Nov 9, 2016, at 16:48, Thomas Elsgaard > wrote: > > Hi > > I am having some difficulties by flattening a list which looks like this: > > [<<"A">>,"4",<<"B">>,"c",<<"d">>] > > After flattening, it should look like this: A4Bcd > > Any good ways to do this ? > > Thomas > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From j.14159@REDACTED Thu Nov 10 04:39:11 2016 From: j.14159@REDACTED (Jeremy Pierre) Date: Thu, 10 Nov 2016 03:39:11 +0000 Subject: [erlang-questions] MLFE v0.2.4 released Message-ID: Hi all, Version 0.2.4 of "ML-flavoured Erlang" (MLFE) is available now (name change still pending) at https://github.com/j14159/mlfe/tree/v0.2.4 Lots of bug fixes and improvements over the last few minor versions, changelog here: https://github.com/j14159/mlfe/blob/master/ChangeLog.org The big changes are: - fixes for ADTs using built-in polymorphic types (lists, maps) - fixes for ADT unification and type aliases - basic support for records with row polymorphism (not compatible with Erlang records) The tour has been updated here: https://github.com/j14159/mlfe/blob/master/Tour.md And a blog post summarizing some of the fixes and record functionality here: http://noisycode.com/blog/2016/11/09/mlfe-v0-dot-2-4-released/ As always, feedback and contributions are most welcome. You can find a few interested people in #mlfe on freenode as well. Thanks, Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From raimo+erlang-questions@REDACTED Thu Nov 10 08:22:45 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Thu, 10 Nov 2016 08:22:45 +0100 Subject: [erlang-questions] List troubles In-Reply-To: References: <73BC5BFA-D871-48C1-BAAB-9B5C52A287E6@inakanetworks.com> Message-ID: <20161110072245.GA80057@erix.ericsson.se> On Wed, Nov 09, 2016 at 08:26:30PM +0000, Thomas Elsgaard wrote: > Thanks! I tried many other things, but you solved it, thanks! Now I also > learned something new today ;-) It might actually be this one you need: 2> unicode:characters_to_list([<<"A">>,"4",<<"B">>,"c",<<"d">>]). "A4Bcd" or 3> unicode:characters_to_binary([<<"A">>,"4",<<"B">>,"c",<<"d">>]). <<"A4Bcd">> if you should want an UTF-8 binary as the result. iolist_to_binary assumes ISO Latin-1 i.e ISO 8859-1 character encoding. 
The unicode module presumes UTF-8 and enables the choice of most Unicode encodings as well as Latin-1, plus the choice of outputting a binary or a list. Side note: the term "flattening a list" suggests a list of terms of any type. For this task there are functions in the lists module, but they flatten only lists and do not convert a contained binary. > > Thomas > > On Wed, 9 Nov 2016 at 20:50 Brujo Benavides < > > fernando.benavides@REDACTED> wrote: > > > Have you tried iolist_to_binary ? > > > > > > 1> iolist_to_binary([<<"A">>,"4",<<"B">>,"c",<<"d">>]). > > > <<"A4Bcd">> > > > 2> binary_to_list(v(-1)). > > > "A4Bcd" > > > 3> > > > > > > On Nov 9, 2016, at 16:48, Thomas Elsgaard > > > wrote: > > > > > > Hi > > > > > > I am having some difficulties by flattening a list which looks like this: > > > > > > [<<"A">>,"4",<<"B">>,"c",<<"d">>] > > > > > > After flattening, it should look like this: A4Bcd > > > > > > Any good ways to do this ? > > > > > > Thomas > > > > > > > > > > > > _______________________________________________ > > > erlang-questions mailing list > > > erlang-questions@REDACTED > > > http://erlang.org/mailman/listinfo/erlang-questions > > > > > > > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From ulf@REDACTED Thu Nov 10 11:13:06 2016 From: ulf@REDACTED (Ulf Wiger) Date: Thu, 10 Nov 2016 11:13:06 +0100 Subject: [erlang-questions] gproc_dist:multicall/3 Message-ID: I recently increased the share of my own dogfood in my daily programming diet, which among other things might mean that I'll become a bit more responsive to support requests - let's hope.
I thought I'd mention my latest PR to gproc: https://github.com/uwiger/gproc/pull/126 From the edoc: @spec multicall(Module::atom(), Func::atom(), Args::list()) -> {[Result], [{node(), Error}]} @doc Perform a multicall RPC on all live gproc nodes This function works like {@link rpc:multicall/3}, except the calls are routed via the gproc leader and its connected nodes - the same route as for the data replication. This means that a multicall following a global registration is guaranteed to follow the update on each gproc node. The return value will be of the form {GoodResults, BadNodes}, where BadNodes is a list of {Node, Error} for each node where the call fails. @end This is not something I personally need right now, but meditating over the test suite, it occurred to me that it might be useful. Given that gproc replicates asynchronously, there is a distinct risk of race conditions if you register a global entry and then want to perform an operation on remote nodes, based on the newly registered information (given that updates go through the leader and lookups are served locally). The gproc_dist:multicall(M, F, A) function *should* (as far as I can tell) ensure that the multicall will always be executed after the successful update of a preceding entry. Note that this assumes that the registration and multicall originate from the same process. Given that gproc_dist:multicall/3 is routed via the gproc leader, it will practically always be slower than an rpc:multicall/4, but for the intended use case, this is of course intentional (since being too fast means that the rpc:multicall/4 might race past the preceding registration.) Feedback is welcome, esp. while the PR is waiting to be merged. BR, Ulf W -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex0player@REDACTED Thu Nov 10 14:45:28 2016 From: alex0player@REDACTED (Alex S.)
Date: Thu, 10 Nov 2016 16:45:28 +0300 Subject: [erlang-questions] Patch Package OTP 19.1.6 Released In-Reply-To: <9e02eb5f-ecbd-778a-e04f-8d8b962b2e69@ericsson.com> References: <9e02eb5f-ecbd-778a-e04f-8d8b962b2e69@ericsson.com> Message-ID: > On 9 Nov 2016, at 14:31, Björn-Egil Dahlberg XB wrote: > > Patch Package: OTP 19.1.6 > Git Tag: OTP-19.1.6 > Date: 2016-11-09 > Trouble Report Id: OTP-13956, OTP-13997, OTP-14009 > Seq num: > System: OTP > Release: 19 > Application: erts-8.1.1 > Predecessor: OTP 19.1.5 Is this patch package included in official Docker images by any chance? All of my docker releases FROM erlang:19.1 (thankfully not in production) fail with > {"init terminating in do_boot",{load_failed,[tls_handshake,tls_v1,tls_record,tls,tls_connection_sup,tls_connection,ssl_v2,ssl_v3,ssl_tls_dist_proxy,ssl_sup,ssl_socket,ssl_session,ssl_pkix_db,ssl_listen_tracker_sup,ssl_dist_sup,ssl_crl_cache,ssl_crl,ssl_config,ssl_cipher,ssl_app,ssl_alert,inet_tls_dist,inet6_tls_dist,dtls_record,dtls_connection_sup,dtls_handshake,dtls_connection,dtls,ssl_srp_primes,ssl_session_cache_api,ssl_session_cache,ssl_record,ssl_manager,ssl_handshake,ssl_crl_hash_dir,ssl_crl_cache_api,ssl_connection,ssl_certificate,ssl,dtls_v1]}} -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Thu Nov 10 18:04:34 2016 From: vans_163@REDACTED (Vans S) Date: Thu, 10 Nov 2016 17:04:34 +0000 (UTC) Subject: [erlang-questions] extending mnesia:subscribe/1 functionality References: <1523679983.448288.1478797474543.ref@mail.yahoo.com> Message-ID: <1523679983.448288.1478797474543@mail.yahoo.com> A function I often find myself needing around subscribe is one that subscribes only for certain keys. For example, if we have 10,000 processes subscribing via: mnesia:subscribe({table, Table, detailed}) We will get 10,000 of the same message, every time.
What if we could subscribe like: mnesia:subscribe({table, Table, {detailed, uuid_1234}}) OR mnesia:subscribe({table, Table, {detailed, {'_', uuid_1234}}}) What this means. This means the subscribing process to that table will only receive subscription messages if the key of the mnesia record matches what was given. In the second example, if the key is composite, we can filter on it too. What do you think? From thomas.elsgaard@REDACTED Thu Nov 10 18:45:03 2016 From: thomas.elsgaard@REDACTED (Thomas Elsgaard) Date: Thu, 10 Nov 2016 17:45:03 +0000 Subject: [erlang-questions] List troubles In-Reply-To: <20161110072245.GA80057@erix.ericsson.se> References: <73BC5BFA-D871-48C1-BAAB-9B5C52A287E6@inakanetworks.com> <20161110072245.GA80057@erix.ericsson.se> Message-ID: Hi Raimo, that was even better, thanks! Thomas On Thu, 10 Nov 2016 at 08:22 Raimo Niskanen < raimo+erlang-questions@REDACTED> wrote: > On Wed, Nov 09, 2016 at 08:26:30PM +0000, Thomas Elsgaard wrote: > > Thanks! I tried many other things, but you solved it, thanks! Now I also > > learned something new today ;-) > > It might actually be this one you need: > > 2> unicode:characters_to_list([<<"A">>,"4",<<"B">>,"c",<<"d">>]). > "A4Bcd" > > or > > 3> unicode:characters_to_binary([<<"A">>,"4",<<"B">>,"c",<<"d">>]). > <<"A4Bcd">> > > if you should want an UTF-8 binary as the result. > > iolist_to_binary assumes ISO Latin-1 i.e ISO 8859-1 character encoding. > The unicode module presumes UTF-8 and enables the choice of most Unicode > encodings as well as Latin-1 as well as the choice of outputting a binary > or a list. > > Side not: the term "flattening a list" suggests a list of any type terms. > For this task there are functions in the lists module but they flatten > just lists and does not convert a contained binary. > > > > > > > Thomas > > > > On Wed, 9 Nov 2016 at 20:50 Brujo Benavides < > > fernando.benavides@REDACTED> wrote: > > > > > Have you tried iolist_to_binary ? 
> > > > > > 1> iolist_to_binary([<<"A">>,"4",<<"B">>,"c",<<"d">>]). > > > <<"A4Bcd">> > > > 2> binary_to_list(v(-1)). > > > "A4Bcd" > > > 3> > > > > > > On Nov 9, 2016, at 16:48, Thomas Elsgaard > > > wrote: > > > > > > Hi > > > > > > I am having some difficulties by flattening a list which looks like > this: > > > > > > [<<"A">>,"4",<<"B">>,"c",<<"d">>] > > > > > > After flattening, it should look like this: A4Bcd > > > > > > Any good ways to do this ? > > > > > > Thomas > > > > > > > > > > > > _______________________________________________ > > > erlang-questions mailing list > > > erlang-questions@REDACTED > > > http://erlang.org/mailman/listinfo/erlang-questions > > > > > > > > > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > -- > > / Raimo Niskanen, Erlang/OTP, Ericsson AB > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto@REDACTED Thu Nov 10 19:51:09 2016 From: roberto@REDACTED (Roberto Ostinelli) Date: Thu, 10 Nov 2016 19:51:09 +0100 Subject: [erlang-questions] [ANN] Syn 1.6.0 - now with `get_local_members` and `publish_to_local` support Message-ID: All, Syn 1.6.0 has just been released. For those of you who don't know it, Syn is a global Process Registry and Process Group manager for Erlang, which supports PubSub. Besides some optimizations, the main addition is that you can retrieve local pid members of a group, and publish only to those (instead of a global publish). Thank you to everyone who contributed. https://github.com/ostinelli/syn Best, r. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fw339tgy@REDACTED Fri Nov 11 08:26:30 2016 From: fw339tgy@REDACTED (=?GBK?B?zLi549TG?=) Date: Fri, 11 Nov 2016 15:26:30 +0800 (CST) Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ Message-ID: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> I compared Erlang's computing performance with C++'s. Erlang runs test_sum_0 100000000 times: test_sum_0(N) -> bp_eva_delta([1,2,3,4],[3,4,5,6],[]), test_sum_0(N-1). bp_eva_delta([],_,L) -> lists:reverse(L); bp_eva_delta([O|Output],[S|Sigma],L) -> bp_eva_delta(Output,Sigma,[S * O * (1-O) |L]). C++ runs the similar function the same number of times (100000000): for(int i = 0 ;i< 100000000;++i) { double b[5] = {1,2,3,4,5}; double s[5] = {6,7,8,9,10}; double o[5]; for(int i = 0; i < 5;++i) { o[i] = s[i] * b[i] * (1 - b[i]); } } Erlang spends 29 s, and C++ spends 2.78 s. Why is Erlang so much slower than C++? Or did I not configure the right parameters? -------------- next part -------------- An HTML attachment was scrubbed... URL: From raimo+erlang-questions@REDACTED Fri Nov 11 11:10:07 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Fri, 11 Nov 2016 11:10:07 +0100 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> Message-ID: <20161111101007.GA43001@erix.ericsson.se> On Fri, Nov 11, 2016 at 03:26:30PM +0800, ??? wrote: > i campare the erlang's computing with c++ > > > erlang run 100000000 time the test_sum_0 > > > test_sum_0(N) -> > bp_eva_delta([1,2,3,4],[3,4,5,6],[]), > test_sum_0(N-1). > > > > > bp_eva_delta([],_,L) -> > lists:reverse(L); > bp_eva_delta([O|Output],[S|Sigma],L) -> > bp_eva_delta(Output,Sigma,[S * O * (1-O) |L]).
> > > > > > > > c++ run the same time (100000000 ) the similar fun , > > > for(int i = 0 ;i< 100000000;++i) > { > double b[5] = {1,2,3,4,5}; > double s[5] = {6,7,8,9,10}; > double o[5]; > for(int i = 0; i < 5;++i) > { > o[i] = s[i] * b[i] * (1 - b[i]); > } > > > }. > > > the erlang spend 29's , and c++ spend 2.78's. > > > why the erlang is so slower than c++? > You are comparing apples with pears. For starters, your Erlang code probably spends most of its time allocating new memory, garbage collecting and freeing memory, while your C++ code just reads and writes from the same stack memory locations. Both examples produce nothing, but in different ways. This is a very synthetic and unfair comparison. > > Or I do not configure the right parameter? What are you trying to measure? -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From carlsson.richard@REDACTED Fri Nov 11 11:13:09 2016 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Fri, 11 Nov 2016 11:13:09 +0100 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> Message-ID: You are comparing a native-compiled C++ program that works on small arrays of raw numbers with an interpreted Erlang program that traverses linked lists of tagged numbers. The only surprise is that the difference is _only_ a factor 10. (And if the C code was using integers instead of double precision floats, it would be even faster.) /Richard 2016-11-11 8:26 GMT+01:00 ??? : > i campare the erlang's computing with c++ > > erlang run 100000000 time the test_sum_0 > > test_sum_0(N) -> > bp_eva_delta([1,2,3,4],[3,4,5,6],[]), > test_sum_0(N-1). > > > bp_eva_delta([],_,L) -> > lists:reverse(L); > bp_eva_delta([O|Output],[S|Sigma],L) -> > bp_eva_delta(Output,Sigma,[S * O * (1-O) |L]).
> > > > c++ run the same time (100000000 ) the similar fun , > > for(int i = 0 ;i< 100000000;++i) > { > double b[5] = {1,2,3,4,5}; > double s[5] = {6,7,8,9,10}; > double o[5]; > for(int i = 0; i < 5;++i) > { > o[i] = s[i] * b[i] * (1 - b[i]); > } > > }. > > the erlang spend 29's , and c++ spend 2.78's. > > why the erlang is so slower than c++? > > Or I do not configure the right parameter? > > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony@REDACTED Fri Nov 11 14:10:10 2016 From: tony@REDACTED (Tony Rogvall) Date: Fri, 11 Nov 2016 14:10:10 +0100 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> Message-ID: <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> The original program is looping over [1,2,3,4] when it should loop over [1,2,3,4,5] as in the C program. After fixing that, and also making sure the program is using floating point numbers and adding a variable that calculates a result, the difference is ( on my mac ) Erlang: 57s C: 3.7s That is 15 times slower, which is not that bad considering :-) But when adding a -O3 flag to the C code compilation, that ratio increases to 110 times slower. Just tossing in a -native flag did not lead to any significant change, but that changed when using the still forgotten loop unrolling directives, inline sizes and friends. I used this ( WARNING! not to be used in production code yet, I guess... ) -compile(native). -compile(inline). -compile({inline_size,1000}). -compile({inline_effort,2000}). -compile({inline_unroll,6}). Erlang: 4.3s Which is nearly the same as unoptimized C code and just 8 times slower than -O3 optimized C code, and that is just amazing!
/Tony > On 11 nov 2016, at 11:13, Richard Carlsson wrote: > > You are comparing a native-compiled C++ program that works on small arrays of raw numbers with an interpreted Erlang program that traverses linked lists of tagged numbers. The only surprise is that the difference is _only_ a factor 10. (And if the C code was using integers instead of double precision floats, it would be even faster.) > > > /Richard > > 2016-11-11 8:26 GMT+01:00 ??? : > i campare the erlang's computing with c++ > > erlang run 100000000 time the test_sum_0 > > test_sum_0(N) -> > bp_eva_delta([1,2,3,4],[3,4,5,6],[]), > test_sum_0(N-1). > > > bp_eva_delta([],_,L) -> > lists:reverse(L); > bp_eva_delta([O|Output],[S|Sigma],L) -> > bp_eva_delta(Output,Sigma,[S * O * (1-O) |L]). > > > > > c++ run the same time (100000000 ) the similar fun , > > for(int i = 0 ;i< 100000000;++i) > { > double b[5] = {1,2,3,4,5}; > double s[5] = {6,7,8,9,10}; > double o[5]; > for(int i = 0; i < 5;++i) > { > o[i] = s[i] * b[i] * (1 - b[i]); > } > > }. > > the erlang spend 29's , and c++ spend 2.78's. > > why the erlang is so slower than c++? > > Or I do not configure the right parameter? > > > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vans_163@REDACTED Fri Nov 11 16:16:27 2016 From: vans_163@REDACTED (Vans S) Date: Fri, 11 Nov 2016 15:16:27 +0000 (UTC) Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> Message-ID: <1910330560.2158279.1478877387346@mail.yahoo.com> Please write a C NIF containing the function: for(int i = 0 ;i< 100000000;++i) { double b[5] = {1,2,3,4,5}; double s[5] = {6,7,8,9,10}; double o[5]; for(int i = 0; i < 5;++i) { o[i] = s[i] * b[i] * (1 - b[i]); } } Change the erlang code to: test_sum_0(N) -> call_c_nif(N). Try the test again. On Friday, November 11, 2016 5:01 AM, ??? wrote: i campare the erlang's computing with c++ erlang run 100000000 time the test_sum_0 test_sum_0(N) -> bp_eva_delta([1,2,3,4],[3,4,5,6],[]), test_sum_0(N-1). bp_eva_delta([],_,L) -> lists:reverse(L); bp_eva_delta([O|Output],[S|Sigma],L) -> bp_eva_delta(Output,Sigma,[S * O * (1-O) |L]). c++ run the same time (100000000 ) the similar fun , for(int i = 0 ;i< 100000000;++i) { double b[5] = {1,2,3,4,5}; double s[5] = {6,7,8,9,10}; double o[5]; for(int i = 0; i < 5;++i) { o[i] = s[i] * b[i] * (1 - b[i]); } }. the erlang spend 29's , and c++ spend 2.78's. why the erlang is so slower than c++? Or I do not configure the right parameter? _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fw339tgy@REDACTED Fri Nov 11 17:06:05 2016 From: fw339tgy@REDACTED (=?GBK?B?zLi549TG?=) Date: Sat, 12 Nov 2016 00:06:05 +0800 (CST) Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> Message-ID: <71f4e8b5.bfa9.15854238bee.Coremail.fw339tgy@126.com> Can you tell me why the native flag can cause such an improvement? Adding the native flag is not the best way, since it is not in safe mode. Is there any other available way? At 2016-11-11 21:10:10, "Tony Rogvall" wrote: >The original program is looping over [1,2,3,4] when it should loop over [1,2,3,4,5] as >in the C program. After fixing that and also making sure the program is using floating point >numbers and adding a variable that calculate a result, then the difference is ( on my mac ) > >Erlang: 57s >C: 3.7s > >That is 15 times slower which is not that bad considering :-) >But when adding a -O3 flag to the C code compilation that ratio will increases to 110 times slower. >Just tossing in a -native flag did not lead to a any significant change but? >The when using the still forgotten loop unrolling directives, inline sizes and friends. >I used this ( WARNING! not to be used in production code yet, I guess? ) > >-compile(native). >-compile(inline). >-compile({inline_size,1000}). >-compile({inline_effort,2000}). >-compile({inline_unroll,6}). > >Erlang: 4.3s > >Which is nearly the same as unoptimized C code and >just 8 times slower than -O3 optimized C code. >and that is just amazing! > >/Tony > >> On 11 nov 2016, at 11:13, Richard Carlsson wrote: >> >> You are comparing a native-compiled C++ program that works on small arrays of raw numbers with an interpreted Erlang program that traverses linked lists of tagged numbers. The only surprise is that the difference is _only_ a factor 10.
(And if the C code was using integers instead of double precision floats, it would be even faster.) >> >> >> /Richard >> >> 2016-11-11 8:26 GMT+01:00 ??? : >> i campare the erlang's computing with c++ >> >> erlang run 100000000 time the test_sum_0 >> >> test_sum_0(N) -> >> bp_eva_delta([1,2,3,4],[3,4,5,6],[]), >> test_sum_0(N-1). >> >> >> bp_eva_delta([],_,L) -> >> lists:reverse(L); >> bp_eva_delta([O|Output],[S|Sigma],L) -> >> bp_eva_delta(Output,Sigma,[S * O * (1-O) |L]). >> >> >> >> >> c++ run the same time (100000000 ) the similar fun , >> >> for(int i = 0 ;i< 100000000;++i) >> { >> double b[5] = {1,2,3,4,5}; >> double s[5] = {6,7,8,9,10}; >> double o[5]; >> for(int i = 0; i < 5;++i) >> { >> o[i] = s[i] * b[i] * (1 - b[i]); >> } >> >> }. >> >> the erlang spend 29's , and c++ spend 2.78's. >> >> why the erlang is so slower than c++? >> >> Or I do not configure the right parameter? >> >> >> >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Fri Nov 11 22:54:37 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Sat, 12 Nov 2016 00:54:37 +0300 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: <71f4e8b5.bfa9.15854238bee.Coremail.fw339tgy@126.com> References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> <71f4e8b5.bfa9.15854238bee.Coremail.fw339tgy@126.com> Message-ID: Tony, you've just exploded my brain. I will spend next 2 weeks in a fuzzy applying hipe flags to our Flussonic =) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mjtruog@REDACTED Sat Nov 12 01:43:38 2016 From: mjtruog@REDACTED (Michael Truog) Date: Fri, 11 Nov 2016 16:43:38 -0800 Subject: [erlang-questions] [ANN] 0.3.0 PEST (Primitive Erlang Security Tool) Released Message-ID: <582665BA.80604@gmail.com> PEST (Primitive Erlang Security Tool) is a basic security tool to examine Erlang source code and find any function calls that may lead to security problems. PEST can also examine your Erlang crypto version information (i.e., '-V crypto' usage) to determine how many OpenSSL security problems it may have (based on the version information for OpenSSL that is provided). PEST appears to have all the features it needs, but don't hesitate to voice an opinion if you think something is missing. The repository is at https://github.com/okeuday/pest/#readme . While typical use relies on the pest.erl escript, the escript file also can be used as an Erlang module and hex package. Best Regards, Michael From pierrefenoll@REDACTED Sat Nov 12 06:50:56 2016 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Fri, 11 Nov 2016 21:50:56 -0800 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> <71f4e8b5.bfa9.15854238bee.Coremail.fw339tgy@126.com> Message-ID: Tony, could you write a blog post describing how & when to use "the still forgotten loop unrolling directives, inline sizes and friends"? Pretty please Cheers, -- Pierre Fenoll On 11 November 2016 at 13:54, Max Lapshin wrote: > Tony, you've just exploded my brain. > > I will spend next 2 weeks in a fuzzy applying hipe flags to our Flussonic > =) > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sergej.jurecko@REDACTED Sat Nov 12 06:56:14 2016 From: sergej.jurecko@REDACTED (=?UTF-8?Q?Sergej_Jure=C4=8Dko?=) Date: Sat, 12 Nov 2016 06:56:14 +0100 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> Message-ID: On Nov 11, 2016 6:40 PM, "Tony Rogvall" wrote: > I used this ( WARNING! not to be used in production code yet, I guess? ) Are these flags new? Sergej -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlsson.richard@REDACTED Sat Nov 12 10:53:09 2016 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Sat, 12 Nov 2016 10:53:09 +0100 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> Message-ID: New as of 2001 or so. :-) Inlining is documented towards the bottom of this page: http://erlang.org/doc/man/compile.html However, it only describes the inline_size option. There is also the inline_effort limit, which can be increased from the default 150 at the expense of compile time (its purpose is to ensure that the automatic inliner does not get bogged down in any particular part of the code). And then there's the slightly experimental inline_unroll, which is actually more of a side effect of the normal inlining behaviour if you just allow it to repeat itself on loops. The interaction between the unroll limit and the size/effort limit is not obvious (and could maybe be improved - I haven't looked at that code for 15 years). 
In particular, the size limit seems to need bumping from the default 24 to about 200 or more for unrolling to happen, depending on the size of the loop body, and the effort limit also needs raising to at least 500 or 1000. I suggest you use the 'to_core' option and inspect the result until you find settings that work for your program. If you want to use unrolling you should probably put that code in a separate module and use custom compiler options for that module, not apply the same limits to your whole code base. See comments in https://github.com/erlang/otp/blob/maint/lib/compiler/src/cerl_inline.erl for details. (It's a wonderful algorithm, if you're into that sort of thing, but can take a while to get your head around. It's basically just constant propagation and folding, treating functions like any other constants, and handling local functions and funs in the same way. I'd revisit it if I had the time.) Note that if you use the option {inline,[{Name,Arity},...]} instead of just 'inline', then an older, simpler inliner is used, which _only_ inlines those functions you listed, ignoring any size limits. /Richard 2016-11-12 6:56 GMT+01:00 Sergej Jurečko : > On Nov 11, 2016 6:40 PM, "Tony Rogvall" wrote: > > > I used this ( WARNING! not to be used in production code yet, I guess? ) > > Are these flags new? > > Sergej > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony@REDACTED Sat Nov 12 12:15:28 2016 From: tony@REDACTED (Tony Rogvall) Date: Sat, 12 Nov 2016 12:15:28 +0100 Subject: [erlang-questions] Why erlang's computing performance is enormously less than c++ In-Reply-To: References: <2002d85e.7db3.1585247dab4.Coremail.fw339tgy@126.com> <8AB1F659-4C2E-4AAF-A4F8-ADF2BD263707@rogvall.se> Message-ID: <7792DE6E-DD7B-4E8A-807D-F4D460DB4881@rogvall.se> Thank you for the pointers and some insights. I noticed that the unroll directive was needed for this case. Actually size, effort and unroll were all needed to get the desired effect.
I have been nagging about this before. Some effort should be made to do these optimizations automatically; they can be really hard to do manually. Also consider how "funny" the code would look if you unrolled it yourself ;-) /Tony "typed while walking!" > On 12 Nov 2016, at 10:53, Richard Carlsson wrote: > > New as of 2001 or so. :-) Inlining is documented towards the bottom of this page: http://erlang.org/doc/man/compile.html > > However, it only describes the inline_size option. There is also the inline_effort limit, which can be increased from the default 150 at the expense of compile time (its purpose is to ensure that the automatic inliner does not get bogged down in any particular part of the code). And then there's the slightly experimental inline_unroll, which is actually more of a side effect of the normal inlining behaviour if you just allow it to repeat itself on loops. > > The interaction between the unroll limit and the size/effort limit is not obvious (and could maybe be improved - I haven't looked at that code for 15 years). In particular, the size limit seems to need bumping from the default 24 to about 200 or more for unrolling to happen, depending on the size of the loop body, and the effort limit also needs raising to at least 500 or 1000. I suggest you use the 'to_core' option and inspect the result until you find settings that work for your program. If you want to use unrolling you should probably put that code in a separate module and use custom compiler option for that module, not apply the same limits to your whole code base. > > See comments in https://github.com/erlang/otp/blob/maint/lib/compiler/src/cerl_inline.erl for details. (It's a wonderful algorithm, if you're into that sort of thing, but can take a while to get your head around. It's basically just constant propagation and folding, treating functions like any other constants, and handling local functions and funs in the same way. I'd revisit it if I had the time.)
> Note that if you use the option {inline,[{Name,Arity},...]} instead of just 'inline', then an older, simpler inliner is used, which _only_ inlines those functions you listed, ignoring any size limits. > > /Richard > > 2016-11-12 6:56 GMT+01:00 Sergej Jurečko : >> On Nov 11, 2016 6:40 PM, "Tony Rogvall" wrote: >> >> > I used this ( WARNING! not to be used in production code yet, I guess? ) >> >> Are these flags new? >> >> Sergej >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Sat Nov 12 21:51:49 2016 From: vans_163@REDACTED (Vans S) Date: Sat, 12 Nov 2016 20:51:49 +0000 (UTC) Subject: [erlang-questions] adding maps:merge_nested/2 References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> Message-ID: <1109706702.2843701.1478983909057@mail.yahoo.com> What do you think of a function merge_nested added to the maps module? The function would merge two maps and all nested maps Example: #{k1=> v1, k2=> #{k3=> 5, k4=> #{k4=> v2}}} = maps:merge_nested( #{k1=> v1, k2=> #{k3=> 5}}, #{k1=> v1, k2=> #{k4=> #{k4=> v2}}}) Also perhaps extending maps:merge to maps:merge/1 to allow passing a list: maps:merge([#{k1=> v1}, #{k2=> v2}, #{k3=> v3}]) From g@REDACTED Sat Nov 12 22:58:26 2016 From: g@REDACTED (Guilherme Andrade) Date: Sat, 12 Nov 2016 21:58:26 +0000 Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: <1109706702.2843701.1478983909057@mail.yahoo.com> References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> Message-ID: Throwing my two cents out there: I would find 'merge_nested' useful (I'm pretty sure I've had the need of something similar in the past) but it looks a bit too specific for the maps module; particularly, one can think of it as a "map of (maps of ...)" method, not simply a generic, flattened maps one.
Then again, the lists module includes (very dearly needed) functions for dealing with 'lists of tuples', so what do I know :) Perhaps an alternative would be having a merge/N function that receives a custom value-merger function (besides the unmerged maps), and use it recursively to solve your particular problem. On 12 November 2016 at 20:51, Vans S wrote: > What do you think of a function merge_nested added to maps module? > > The function would merge two maps and all nested maps > > Example: > > > #{k1=> v1, k2=> #{k3=> 5, k4=> #{k4=> v2}}} > = maps:merge_nested( > #{k1=> v1, k2=> #{k3=> 5}, > #{k1=> v1, k2=> #{k4=> #{k4=> v2}}}) > > Also perhaps extending maps:merge to maps:merge/1 to allow passing a list: > > > maps:merge([#{k1=> v1}, #{k2=> v2}, #{k3=> v3}]) > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- Guilherme -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Sat Nov 12 23:05:48 2016 From: vans_163@REDACTED (Vans S) Date: Sat, 12 Nov 2016 22:05:48 +0000 (UTC) Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> Message-ID: <1244660615.2893135.1478988348712@mail.yahoo.com> These days I am working more and more with javascript on the front-end, erlang on the back. The map type fits really well and is simple. I usually try to keep my data flat but often this is not the reality. For example, parsing some JSON from the frontend into a map is dead simple. I know I'd rather just send the erlang binary term format, but not everyone has a fully working erlang term serializer for their language. Also the recent addition of with and without to maps shows that indeed there is a need for more functionality out of maps.
As neoncontrails recently told me, "a list is just a map" - let that sink in! Indeed a list is simply a map, with the index being the key. It's maps, maps all the way down, then more maps. Another useful function would be one to diff two maps, but that, I now think, is indeed beyond the scope of the maps module. Also, Haskell-style lenses or something similar to make accessing/updating nested properties easier would be useful, but again beyond the scope of this merge_nested. PS. And you know what just happened :) On Saturday, November 12, 2016 4:58 PM, Guilherme Andrade wrote: Throwing my two cents out there: I would find 'merge_nested' useful (I'm pretty sure I've had the need of something similar in the past) but it looks a bit too specific for the maps module; particularly, one can think of it as a "map of (maps of ...)" method, not simply a generic, flattened maps one. Then again, the lists module includes (very dearly needed) functions for dealing with 'lists of tuples', so what do I know :) Perhaps an alternative would be having a merge/N function that receives a custom value-merger function (besides the unmerged maps), and use it recursively to solve your particular problem. On 12 November 2016 at 20:51, Vans S wrote: What do you think of a function merge_nested added to maps module? The function would merge two maps and all nested maps Example: #{k1=> v1, k2=> #{k3=> 5, k4=> #{k4=> v2}}} = maps:merge_nested( #{k1=> v1, k2=> #{k3=> 5}, #{k1=> v1, k2=> #{k4=> #{k4=> v2}}}) Also perhaps extending maps:merge to maps:merge/1 to allow passing a list: maps:merge([#{k1=> v1}, #{k2=> v2}, #{k3=> v3}]) _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -- Guilherme -------------- next part -------------- An HTML attachment was scrubbed...
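One possible shape for the proposed merge_nested/2, recursing only when both sides hold a map under the same key, is the following (an editorial sketch of the idea under discussion, not tested library code; maps:fold/3, maps:find/2 and maps:put/3 are the standard maps API):

```erlang
%% Sketch of merge_nested/2: values from M2 win, except that two map
%% values under the same key are merged recursively instead of replaced.
-module(maps_util).
-export([merge_nested/2]).

merge_nested(M1, M2) when is_map(M1), is_map(M2) ->
    maps:fold(
      fun(K, V2, Acc) ->
              case maps:find(K, Acc) of
                  {ok, V1} when is_map(V1), is_map(V2) ->
                      %% both values are maps: descend and merge
                      maps:put(K, merge_nested(V1, V2), Acc);
                  _ ->
                      %% otherwise the right-hand value replaces
                      maps:put(K, V2, Acc)
              end
      end,
      M1, M2).
```

With the example from the proposal, merge_nested(#{k1 => v1, k2 => #{k3 => 5}}, #{k1 => v1, k2 => #{k4 => #{k4 => v2}}}) yields #{k1 => v1, k2 => #{k3 => 5, k4 => #{k4 => v2}}}. Guilherme's merge/N idea would replace the is_map test with a caller-supplied value-merger fun.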
URL: From bchesneau@REDACTED Sun Nov 13 10:43:54 2016 From: bchesneau@REDACTED (Benoit Chesneau) Date: Sun, 13 Nov 2016 10:43:54 +0100 Subject: [erlang-questions] spawn_link/{2,4} implementation? Message-ID: Is there any documentation that describes how spawning a process on a remote node is implemented? Which part of the code is responsible for it? I am wondering how the link between 2 processes on 2 different nodes is implemented, if there is some PING mechanism or such, ... Benoît -------------- next part -------------- An HTML attachment was scrubbed... URL: From westonc@REDACTED Sun Nov 13 19:15:04 2016 From: westonc@REDACTED (Weston C) Date: Sun, 13 Nov 2016 10:15:04 -0800 Subject: [erlang-questions] "OpenSSL might not be installed on this system." (OS X 10.9, openssl seems to be there). Message-ID: When trying to use crypto/ssl-related stuff in my local build of Erlang (Mac OS X 10.9), I'm getting errors that indicate it doesn't believe I have OpenSSL installed on my system, despite some notable evidence that I apparently do. Here's the error I'm seeing: westonMBP:weston$ erl Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] Eshell V7.3 (abort with ^G) 1> crypto:start(). ** exception error: undefined function crypto:start/0 2> =ERROR REPORT==== 7-Nov-2016::16:45:02 === Unable to load crypto library. Failed with error: "load_failed, Failed to load NIF library: 'dlopen(/usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so, 2): Symbol not found: _EVP_aes_128_cbc Referenced from: /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so Expected in: flat namespace in /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so'" OpenSSL might not be installed on this system.
=WARNING REPORT==== 7-Nov-2016::16:45:02 === The on_load function for module crypto returned {error, {load_failed, "Failed to load NIF library: 'dlopen(/usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so, 2): Symbol not found: _EVP_aes_128_cbc\n Referenced from: /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so\n Expected in: flat namespace\n in /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so'"}} Now... I'm pretty sure I have openssl installed: westonMBP:otp_src_18.3 weston$ openssl version OpenSSL 1.0.2j 26 Sep 2016 westonMBP:otp_src_18.3 weston$ which openssl /usr/local/bin/openssl And libcrypto.a and libcrypto.dylib show under /usr/local/lib, there's a full openssl header directory under /usr/local/include. So, I figured I'd go back and take a look at the different options presented to me by `configure` when I built from source. Initially, I ran configure with `--with-ssl=/usr/local`, but I notice one can omit the path... why not try just `--with-ssl`, and see if it makes the connection better on its own? Doesn't seem to: checking for static ZLib to be used by SSL in standard locations... no checking for OpenSSL >= 0.9.7 in standard locations... rm: conftest.dSYM: is a directory rm: conftest.dSYM: is a directory found; but not usable configure: WARNING: No (usable) OpenSSL found, skipping ssl, ssh and crypto applications checking for kstat_open in -lkstat... (cached) no ... ********************************************************************* ********************** APPLICATIONS DISABLED ********************** ********************************************************************* crypto : No usable OpenSSL found ssh : No usable OpenSSL found ssl : No usable OpenSSL found Going back to `--with-ssl=/usr/local` gives me: checking for static ZLib to be used by SSL in standard locations... no checking for OpenSSL kerberos 5 support... yes rm: conftest.dSYM: is a directory checking for krb5.h in standard locations...
found in /usr/include checking for kstat_open in -lkstat... (cached) no So the build process seems to think I've got it. Nevertheless, invoking `crypto:start()` brings us back to the "OpenSSL might not be installed on this system." error. Having wrestled with some weird OS X library/header path issues recently, I thought about the possibility that one or the other is there, but OS X can't see it, figured I'd look up an OpenSSL "Hello World" program: /* https://www.mitchr.me/SS/exampleCode/openssl.html https://www.mitchr.me/SS/exampleCode/openssl/bio_hello0.c.html */ #include <stdio.h> #include <openssl/bio.h> #include <openssl/err.h> int main(int argc, char *argv[]); int main(int argc, char *argv[]) { BIO *bio_stdout; bio_stdout = BIO_new_fp(stdout, BIO_NOCLOSE); BIO_printf(bio_stdout, "hello, World!\n"); BIO_free_all(bio_stdout); return 0; } And then try building/running it: westonMBP:tmp weston$ gcc -o hellossl hellossl.c -lcrypto westonMBP:tmp weston$ ./hellossl hello, World! So, it's finding openssl on this toy/test program. I did consider that this might be related to an issue potentially fixed in a later release, and tried building 19.1. Same result (although the missing symbol given in the error message seems to be _CRYPTO_num_locks rather than _EVP_aes_128_cbc). I also tried building/installing a few different versions of openssl -- 1.0.2e, 1.0.2j, and 1.1.0b. There's no difference between the results with the 1.0.2's; 1.1.0b seems to actually break the make process entirely. Any hints on what to try next?
wrote: > i campare the erlang's computing with c++ > > erlang run 100000000 time the test_sum_0 > > test_sum_0(N) -> > bp_eva_delta([1,2,3,4],[3,4,5,6],[]), > test_sum_0(N-1). > > > bp_eva_delta([],_,L) -> > lists:reverse(L); > bp_eva_delta([O|Output],[S|Sigma],L) -> > bp_eva_delta(Output,Sigma,[S * O * (1-O) |L]). Here are some times I got. Erlang (native compilation) : 10.1 seconds. Erlang (unrolled loop) : 2.8 seconds. Standard ML : 2.7 seconds. Clean (default lazy lists) : 8.3 seconds. Clean (unrolled strict data) : 3.0 seconds. A fair comparison in C : 118.4 seconds. The thing is that the C and Erlang code may be computing the same function (technically they aren't), but they are not doing it the same WAY, so the comparison is not a comparison of LANGUAGES but a comparison of *list processing* in one language with *array processing* in another language. When you compare Erlang with statically typed languages doing the same thing (well, not quite) the same *way* you find the numbers pleasantly close. A list is made up of pairs. 
A fairer analogue of this in C would be #include <stdio.h> #include <stdlib.h> #include <time.h> struct Node { struct Node *next; int item; }; struct Node dummy = {0,0}; struct Node *revloop( struct Node *L, struct Node *R ) { while (L != &dummy) { struct Node *N = malloc(sizeof *N); N->next = R, N->item = L->item; R = N, L = L->next; } return R; } struct Node *reverse( struct Node *L ) { return revloop(L, &dummy); } struct Node *bp_eva_delta( struct Node *Output, struct Node *Sigma ) { struct Node *L = &dummy; while (Output != &dummy && Sigma != &dummy) { int O = Output->item, S = Sigma->item; struct Node *N = malloc(sizeof *N); N->next = L, N->item = S * O * (1 - O); L = N; Output = Output->next, Sigma = Sigma->next; } return reverse(L); } struct Node *cons( int item, struct Node *next ) { struct Node *N = malloc(sizeof *N); N->next = next, N->item = item; return N; } void test_sum_0( void ) { struct Node *Output = cons(1, cons(2, cons(3, cons(4, &dummy)))); struct Node *Sigma = cons(3, cons(4, cons(5, cons(6, &dummy)))); struct Node *R; int N; for (N = 100*1000*1000; N > 0; N--) { R = bp_eva_delta(Output, Sigma); } } int main(void) { clock_t t0, t1; t0 = clock(); test_sum_0(); t1 = clock(); printf("%g\n", (t1-t0)/(double)CLOCKS_PER_SEC); return 0; } > the erlang spend 29's , and c++ spend 2.78's. > > why the erlang is so slower than c++? On the contrary, why is C so staggeringly slow compared with Erlang, Clean, and SML? (On my desktop machine, that is. On my laptop, it ran for a LONG time and then other things started dying. Hint: no GC.) There are at least five differences between your C++ and Erlang examples: (1) List processing vs array processing. (2) Memory allocation costs (malloc() can be S L O W). (3) Static type system. (4) Truncating arithmetic. (5) Loop unrolling. and there may be an issue of (6) native code compilation vs emulated code. The SML, Clean, C, and C++ programs use *truncating* integer arithmetic. The Erlang program uses unbounded integer arithmetic, with no prospect of overflow. It takes extra time to be ready for that.
The fast Erlang code doesn't use a list, it uses an *unrolled* list: -type urlist(T) :: {T,T,T,T,urlist(T)} | {T,T,T} | {T,T} | {T} | {}. For example, {1,2,3,4,{5,6,7,8,{}}}. The Erlang (unrolled data) code does this, with manual loop unrolling. I have library code for unrolled strict lists in Haskell, Clean, and SML, but not (yet) for Erlang. Thinking about unrolling is fair because this is something that C and C++ compilers routinely do these days. > > Or I do not configure the right parameter? Assuming we are using similar machines, it is possible that your Erlang code was running emulated, not native. From bchesneau@REDACTED Mon Nov 14 05:04:04 2016 From: bchesneau@REDACTED (Benoit Chesneau) Date: Mon, 14 Nov 2016 04:04:04 +0000 Subject: [erlang-questions] spawn_link/{2,4} implementation? In-Reply-To: References: Message-ID: Got a little further this morning. I see in the erlang bif that it does a `gen_server:call` to `{net_kernel, Node}`, and then net_kernel spawns a process and links it to the original pid. But I'm trying to understand how the 'EXIT' is relayed; what is the logic behind it? My original intention was to add it to teleport once the handshake is improved: https://gitlab.com/barrel-db/teleport so I can have an option to spawn a process on a remote node via tcp connection and link it. Any idea is welcome. Benoit On Sun, Nov 13, 2016 at 10:43 AM Benoit Chesneau wrote: > Is there any documentation that describes how spawning a process on a > remote node is implemented? Which part of the code is responsible for it? > > > I am wondering how the link between 2 processes on 2 different nodes is > implemented, if there is some PING mechanism or such, ... > > Benoît > -------------- next part -------------- An HTML attachment was scrubbed...
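The behaviour Benoit is after can be approximated at the user level with a relay process: spawn on the remote node without a link, monitor the remote pid, and convert the 'DOWN' message into an exit signal. The following is an editorial sketch of that idea only, not how the BIF itself works; as far as I know the real distribution layer carries link and exit signals as control messages on the inter-node TCP connection, with node death detected via the net_ticktime heartbeat. All module and function names here are invented:

```erlang
%% Sketch: emulate a linked remote spawn with a local relay process.
-module(remote_link).
-export([spawn_link_relay/4]).

spawn_link_relay(Node, M, F, A) ->
    Caller = self(),
    Relay = spawn(fun() ->
        Pid = spawn(Node, M, F, A),         % plain remote spawn, no link
        Ref = erlang:monitor(process, Pid), % monitors work across nodes
        Caller ! {spawned, self(), Pid},
        receive
            %% Reason is 'noconnection' if the remote node goes down,
            %% so a lost node is relayed as an exit too.
            {'DOWN', Ref, process, Pid, Reason} ->
                exit(Reason)                % propagated to the linked caller
        end
    end),
    link(Relay),
    receive {spawned, Relay, Pid} -> Pid end.
```

The relay dies with the remote process's exit reason, and the link delivers that reason to the caller, which is roughly the observable behaviour of spawn_link/4 to a remote node.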
URL: From kuna.prime@REDACTED Mon Nov 14 06:57:54 2016 From: kuna.prime@REDACTED (Karlo Kuna) Date: Mon, 14 Nov 2016 06:57:54 +0100 Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> Message-ID: I'm actually working on something similar. And the problem is that one cannot assume that the merge logic is the same for all nested levels. In other words, for M1{....Kn => M2 } (I am using incorrect syntax for clarity) we cannot assume that the merging function is the same for M1 and M2. So then one can try to pass a list of merger functions that each correspond to a "merging" level, as: M1{... Kn => M2 {...Knm => M3}}, [F1,F2,F3] where F1(M1) would "recursively" call F2(M2) and so on. But this is also an incorrect assumption, and one can imagine that a map can have multiple nested maps on the same level with different semantics for merging, as: M1{...Kn=> M2 ... Km => M3} where we want function F1 to merge M2 and F3 to merge M3. There is also a problem in type conversion: M1{K => atom} , M2{K => M3} should key K now be a list: [atom, M3]?? It cannot be a map as there is no valid key, except in odd cases where you want something like Mr{K => M{atom => M3}}. It is "natural" that it should be a list, but again it could be a tuple! For now I think this problem gets easier if we generalize it. Then the question is how to merge ANY two objects in erlang? My answer is to pass a pair to a merge function that dispatches on type. F(X, Y) when is_list(X) andalso is_list(Y) -> %get two elements Xn, Yn F(Xn, Yn), ....; F(X, Y) when is_atom(X) andalso is_atom(Y) -> % branch termination X; % this is just for illustration; the logic can be to throw or whatever ..... IMO this strategy gives the most flexibility, and one can make a wrapper merge(X, Y, F) and then provide implementations of different merging logics to be used as needed: my_merge_logic(X, Y) .....
then call merge: merge(X, Y, fun my_merge_logic/2). -------------- next part -------------- An HTML attachment was scrubbed... URL: From drew.varner@REDACTED Sun Nov 13 22:58:28 2016 From: drew.varner@REDACTED (Andrew Varner) Date: Sun, 13 Nov 2016 16:58:28 -0500 Subject: [erlang-questions] "OpenSSL might not be installed on this system." (OS X 10.9, openssl seems to be there). In-Reply-To: References: Message-ID: <5421DF49-08CB-4BED-9546-6B977899141A@me.com> I used Homebrew to install OpenSSL. I build Erlang with kerl. The following in my .kerlrc file works for me in OS X 10.11.6: KERL_CONFIGURE_OPTIONS="--with-ssl=/usr/local/opt/openssl" > On Nov 13, 2016, at 1:15 PM, Weston C wrote: > > When trying to use crypto/ssl-related stuff in my local build of > Erlang (Mac OS X 10.9), I'm getting errors that indicate it doesn't > believe I have OpenSSL installed on my system, despite some notable > evidence that I apparently do. > > Here's the error I'm seeing: > > westonMBP:weston$ erl > Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] > [async-threads:10] [hipe] [kernel-poll:false] > > Eshell V7.3 (abort with ^G) > 1> crypto:start(). > ** exception error: undefined function crypto:start/0 > 2> > =ERROR REPORT==== 7-Nov-2016::16:45:02 === > Unable to load crypto library. Failed with error: > "load_failed, Failed to load NIF library: > 'dlopen(/usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so, 2): > Symbol not found: _EVP_aes_128_cbc > Referenced from: /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so > Expected in: flat namespace > in /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so'" > OpenSSL might not be installed on this system.
> > =WARNING REPORT==== 7-Nov-2016::16:45:02 === > The on_load function for module crypto returned {error, > {load_failed, > "Failed to load NIF library: > 'dlopen(/usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so, 2): > Symbol not found: _EVP_aes_128_cbc\n Referenced from: > /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so\n Expected > in: flat namespace\n in > /usr/local/lib/erlang/lib/crypto-3.7/priv/lib/crypto.so'"}} > > Now... I'm pretty sure I have openssl installed: > > > westonMBP:otp_src_18.3 weston$ openssl version > OpenSSL 1.0.2j 26 Sep 2016 > westonMBP:otp_src_18.3 weston$ which openssl > /usr/local/bin/openssl > > And libcrypto.a and libcrypto.dylib show under /usr/local/lib, there's > a full openssl header directory under /usr/local/include. > > So, I figured I'd go back and take a look at the different options > presented to me by `configure` when I built from source. > > Initially, I ran configure with `--with-ssl=/usr/local`, but I notice > one can omit the path... why not try just `--with-ssl`, and see if it > makes the connection better on its own? > > Doesn't seem to: > > > checking for static ZLib to be used by SSL in standard locations... no > checking for OpenSSL >= 0.9.7 in standard locations... rm: > conftest.dSYM: is a directory > rm: conftest.dSYM: is a directory > found; but not usable > configure: WARNING: No (usable) OpenSSL found, skipping ssl, ssh > and crypto applications > checking for kstat_open in -lkstat... (cached) no > > ... > > ********************************************************************* > ********************** APPLICATIONS DISABLED ********************** > ********************************************************************* > > crypto : No usable OpenSSL found > ssh : No usable OpenSSL found > ssl : No usable OpenSSL found > > > Going back to `--with-ssl=/usr/local` gives me: > > > checking for static ZLib to be used by SSL in standard locations... no > checking for OpenSSL kerberos 5 support...
yes > rm: conftest.dSYM: is a directory > checking for krb5.h in standard locations... found in /usr/include > checking for kstat_open in -lkstat... (cached) no > > > So the build process seems to think I've got it. Nevertheless, > invoking `crypto:start()` brings us back to the "OpenSSL might not be > installed on this system." error. > > Having wrestled with some weird OS X library/header path issues > recently, I thought about the possibility that one or the other is > there, but OS X can't see it, figured I'd look up an OpenSSL "Hello > World" program: > > > /* > https://www.mitchr.me/SS/exampleCode/openssl.html > https://www.mitchr.me/SS/exampleCode/openssl/bio_hello0.c.html > */ > #include > #include > #include > > int main(int argc, char *argv[]); > > int main(int argc, char *argv[]) { > > BIO *bio_stdout; > > bio_stdout = BIO_new_fp(stdout, BIO_NOCLOSE); > > BIO_printf(bio_stdout, "hello, World!\n"); > > BIO_free_all(bio_stdout); > > return 0; > } > > > And then try building/running it: > > > westonMBP:tmp weston$ gcc -o hellossl hellossl.c -lcrypto > westonMBP:tmp weston$ ./hellossl > hello, World! > > > So, it's finding openssl on this toy/test program. > > I did consider that this might be related to an issue potentially > fixed in a later release, and tried building 19.1. Same result > (although the missing symbol given in the error message seems to be > _CRYPTO_num_locks rather than _EVP_aes_128_cbc). > > I also tried building/installing a few different versions of openssl > -- 1.0.2e, 1.0.2j, and 1.1.0b. There's no difference between the > results of the 1.0.2s; 1.1.0b seems to actually break the make process > entirely. > > Any hints on what to try next?
> _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From lukas@REDACTED Mon Nov 14 09:57:51 2016 From: lukas@REDACTED (Lukas Larsson) Date: Mon, 14 Nov 2016 09:57:51 +0100 Subject: [erlang-questions] spawn_link/{2,4} implementation? In-Reply-To: References: Message-ID: Hello, On Mon, Nov 14, 2016 at 5:04 AM, Benoit Chesneau wrote: > > Got a little further this morning, I see in the Erlang BIF that it does a > `gen_server:call` to `{net_kernel, Node}` and then in net_kernel, spawns a > process, and links to the original pid. But I'm trying to understand how the > 'EXIT' is relayed, what is the logic behind it? > Most of the distribution mechanisms are handled seamlessly by the Erlang VM. It uses the protocol described here: http://erlang.org/doc/apps/erts/erl_dist_protocol.html to communicate between nodes. The remote erlang:link/1 call and the resulting EXIT signal are sent and received through that protocol. Lukas > > My original intention was to add it to teleport once the handshake is > improved: > https://gitlab.com/barrel-db/teleport > > so I can have an option to spawn a process on a remote node via tcp > connection and link it. Any idea is welcome. > > Benoit > > > On Sun, Nov 13, 2016 at 10:43 AM Benoit Chesneau > wrote: > >> Is there any documentation that describes how spawning a process on a >> remote node is implemented? Which part of the code is responsible for it? >> >> >> I am wondering how the link between 2 processes on 2 different nodes is >> implemented, if there is some PING mechanism or such, ... >> >> Benoît >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed...
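[Editor's note: to illustrate the point about transparent EXIT propagation, a linked process spawned on a remote node delivers its exit signal back through the distribution protocol with no extra plumbing. A minimal hedged sketch; the node passed to run/1 is assumed to be already connected, and the module name is made up for illustration:]

```erlang
%% Sketch: remote spawn_link and EXIT propagation over the dist protocol.
%% Assumes Node is a connected remote node (e.g. started with -sname and
%% reachable via net_adm:ping/1); 'remote_link_demo' is a placeholder name.
-module(remote_link_demo).
-export([run/1]).

run(Node) ->
    process_flag(trap_exit, true),
    %% spawn_link/4 starts the process on Node; the link is established
    %% remotely, and the EXIT signal travels back via the dist protocol.
    Pid = spawn_link(Node, erlang, exit, [boom]),
    receive
        {'EXIT', Pid, Reason} ->
            {exited, Reason}    % expected: {exited, boom}
    after 5000 ->
        timeout
    end.
```

If the remote node goes down instead of the process exiting normally, the same receive clause fires with Reason = noconnection, which is exactly the seamless behaviour described above.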
URL: From steven.charles.davis@REDACTED Mon Nov 14 15:12:01 2016 From: steven.charles.davis@REDACTED (Steve Davis) Date: Mon, 14 Nov 2016 08:12:01 -0600 Subject: [erlang-questions] "OpenSSL might not be installed on this system." (OS X 10.9, openssl seems to be there). Message-ID: <959BC8FB-9981-46A5-8F2A-6F05EC6BEB89@gmail.com> I remember facing a similar issue. IIRC the fix was to create/update an LD_LIBRARY_PATH env variable to add the path to the openssl directory. From vans_163@REDACTED Mon Nov 14 15:31:23 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 14 Nov 2016 14:31:23 +0000 (UTC) Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> Message-ID: <2107015579.3689199.1479133883107@mail.yahoo.com> A general merge-two-terms function would definitely be both useful and complex, because questions arise about where you draw the line: do you just merge primitive types, or do you want to be able to merge proplists/orddicts/etc. as well? The only primitive complex types, off the top of my head, are tuples, lists and maps. I can see this merge, for example, doing merge({#{}, #{}}, {#{},#{5=>6}}) for a tuple and giving {#{}, #{5=>6}}. A question arises whether tuples, lists and maps should then get their own merge functions. Lists have a merge function that you can implement yourself; maps do not. Tuples do not. If a general function is used, where would it reside? Merging the same key with one value being an atom and one being M3 must give you only M3. But custom logic such as creating a list on merging the same key with a different atom value definitely has its uses; Clojure has a library for stuff like this but I can't remember the name. On Monday, November 14, 2016 12:57 AM, Karlo Kuna wrote: I'm actually working on something similar. And problem is that one cannot assume that merge logic is the same for all nested levels.
In other words for M1{....Kn => M2 } (i am using incorrect syntax for clarity ) we cannot assume that merging function is the same for M1 and M2. So then one can try to pass list of merger functions that each correspond to "merging" level as: M1{... Kn => M2 {...Knm => M3}}, [F1,F2,F3] where F1(M1) would "recursively" call F2(M2) and so on but this is also incorrect assumption and one can imagine that map can have multiple nested maps on the same level with different semantics for merging as: M1{...Kn=> M2 ... Km => M3} where we want function F1 to merge M2 and F3 to merge M3. there is also problem in type conversion: M1{K => atom} , M2{K => M3} should key K now be list: [atom, M3]?? It cannot be map as there is no valid key, except in odd cases where you want something like Mr{K => M{atom => M3}}. It is "natural" that it should be list but again it could be tuple! For now i think this problem gets easier if we generalize it. Then question is how to merge ANY two objects in erlang ? My answer is to pass a pair to the merge function that is merge on type dispatch. F(X, Y) when is_list(X) andalso is_list(Y) -> %get two elements Xn, Yn F(Xn, Yn), ....; F(X, Y) when is_atom(X) andalso is_atom(Y) -> // branch termination X; % this is just for illustration logic can be to throw or what ever ..... IMO this strategy gives most flexibility and one can make wrapper merge(X, Y , F); and then provide implementation of different merging logics to be used as needed my_merge_logic(X, Y) ..... call merge merge(X, Y, fun my_merge_logic/2), -------------- next part -------------- An HTML attachment was scrubbed... URL: From emil@REDACTED Mon Nov 14 20:05:17 2016 From: emil@REDACTED (Emil Holmstrom) Date: Mon, 14 Nov 2016 19:05:17 +0000 Subject: [erlang-questions] "OpenSSL might not be installed on this system." (OS X 10.9, openssl seems to be there).
In-Reply-To: <959BC8FB-9981-46A5-8F2A-6F05EC6BEB89@gmail.com> References: <959BC8FB-9981-46A5-8F2A-6F05EC6BEB89@gmail.com> Message-ID: The tests performed by the configure script should give some useful output in config.log. /emil On Mon, 14 Nov 2016 at 15:12, Steve Davis wrote: > I remember facing a similar issue. IIRC the fix was to create/update an > LD_LIBRARY_PATH env variable to add the path to the openssl directory. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jared.biel@REDACTED Mon Nov 14 19:46:36 2016 From: jared.biel@REDACTED (Jared Biel) Date: Mon, 14 Nov 2016 18:46:36 +0000 Subject: [erlang-questions] Patch Package OTP 19.1.6 Released In-Reply-To: References: <9e02eb5f-ecbd-778a-e04f-8d8b962b2e69@ericsson.com> Message-ID: Looking at the official Erlang Docker images page, the 19.1 tag is at 19.1.5 right now. It doesn't seem that there is a pull request to update to 19.1.6 yet. On 10 November 2016 at 13:45, Alex S. wrote: > > On 9 Nov 2016, at 14:31, Björn-Egil Dahlberg XB ericsson.com> wrote: > > Patch Package: OTP 19.1.6 > Git Tag: OTP-19.1.6 > Date: 2016-11-09 > Trouble Report Id: OTP-13956, OTP-13997, OTP-14009 > Seq num: > System: OTP > Release: 19 > Application: erts-8.1.1 > Predecessor: OTP 19.1.5 > > > Is this patch package included in official Docker images by any chance?
> All of my docker releases FROM erlang:19.1 (thankfully not in production) > fail with > > {"init terminating in do_boot",{load_failed,[tls_handshake,tls_v1,tls_record,tls,tls_connection_sup,tls_connection,ssl_v2,ssl_v3,ssl_tls_dist_proxy,ssl_sup,ssl_socket,ssl_session,ssl_pkix_db,ssl_listen_tracker_sup,ssl_dist_sup,ssl_crl_cache,ssl_crl,ssl_config,ssl_cipher,ssl_app,ssl_alert,inet_tls_dist,inet6_tls_dist,dtls_record,dtls_connection_sup,dtls_handshake,dtls_connection,dtls,ssl_srp_primes,ssl_session_cache_api,ssl_session_cache,ssl_record,ssl_manager,ssl_handshake,ssl_crl_hash_dir,ssl_crl_cache_api,ssl_connection,ssl_certificate,ssl,dtls_v1]}} > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silver.surfertab@REDACTED Mon Nov 14 20:29:15 2016 From: silver.surfertab@REDACTED (Silver Surfer) Date: Tue, 15 Nov 2016 00:59:15 +0530 Subject: [erlang-questions] C-nodes crashed randomly with double free, memory corruption malloc message Message-ID: Hi, I have an application running on otp-19.0, where there is a single Erlang node and two c-nodes; several processes from the Erlang node continuously send messages to the c-node servers. The c-node accepts a single connection and processes all the received messages (ERL_MSG -> ERL_REG_SEND, ERL_SEND) in separate threads except ERL_TICK, ERL_ERROR etc.
they are dealt with appropriately by freeing the memory; else, if the message is ERL_MSG, a thread is created and the message is passed to the thread, where it is processed and a reply is sent using erl_send. This approach of handling messages through threads is taken because some of the operations performed by the c-node take a considerable amount of time, more than the tick time (is there a better way to do this?). Now, of the two c-nodes one is crashing randomly (10 times in 24Hrs, more or less); both c-nodes follow the same architecture, only the operations they perform are different. Most of the time the c-node just goes down without giving any error reason, and in 2 or 3 cases it crashed because of a double free or memory corruption printed by malloc; the traceback points to erl_receive_msg. Another point observed is that, in the thread after erl_free_compound, when we look at the allocated blocks using erl_eterm_statistics(allocated, freed), it is 0 most of the time but sometimes it is a non-zero value, i.e. 9, 18, etc. Any help is appreciated. Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjtruog@REDACTED Mon Nov 14 21:20:19 2016 From: mjtruog@REDACTED (Michael Truog) Date: Mon, 14 Nov 2016 12:20:19 -0800 Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> Message-ID: <582A1C83.6000706@gmail.com> On 11/12/2016 01:58 PM, Guilherme Andrade wrote: > Throwing my two cents out there: I would find 'merge_nested' useful (I'm pretty sure I've had the need of something similar in the past) but it looks a bit too specific for the maps module; particularly, one can think of it as a "map of (maps of ...)" method, not simply a generic, flattened maps one.
> > Then again, the lists module includes (very dearly needed) functions for dealing with 'lists of tuples', so what do I know :) > > Perhaps an alternative would be having a merge/N function that receives a custom value-merger function (besides the unmerged maps), and use it recursively to solve your particular problem. The main merge function you are missing in the maps module, is the merge/3 (instead of merge/2) that is common in other modules (like the dict/orddict modules). I added an implementation of merge/3 at https://github.com/okeuday/mapsd due to needing the dict API elsewhere. With the merge/3 function it should be easier to merge nested maps, in whatever way is required, since one size shouldn't fit all. From jesper.louis.andersen@REDACTED Tue Nov 15 13:53:54 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Tue, 15 Nov 2016 12:53:54 +0000 Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: <582A1C83.6000706@gmail.com> References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> <582A1C83.6000706@gmail.com> Message-ID: On Mon, Nov 14, 2016 at 9:20 PM Michael Truog wrote: > > The main merge function you are missing in the maps module, is the merge/3 > (instead of merge/2) that is common in other modules (like the dict/orddict > modules). I added an implementation of merge/3 at > https://github.com/okeuday/mapsd due to needing the dict API elsewhere. > With the merge/3 function it should be easier to merge nested maps, in > whatever way is required, since one size shouldn't fit all. 
> In OCaml, you often have a merge-function like this one: val merge : ('k, 'v1, 'cmp) t -> ('k, 'v2, 'cmp) t -> f:(key:'k -> [ | `Left of 'v1 | `Right of 'v2 | `Both of 'v1 * 'v2 ] -> 'v3 option) -> ('k, 'v3, 'cmp) t which means that you have to supply a function of the form fun (K, {left, VL}) -> Res; (K, {right, VR}) -> Res; (K, {both, VL, VR}) -> Res end where Res is either undefined | {ok, Result} for some result value. The semantics are that left,right, and both encodes on which side the value was in the input maps. And the Res encodes if a new value should be produced in the new map. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abimbola.adegun@REDACTED Tue Nov 15 15:35:32 2016 From: abimbola.adegun@REDACTED (Abimbola Adegun) Date: Tue, 15 Nov 2016 15:35:32 +0100 Subject: [erlang-questions] Decode ASN.1 values to Map Message-ID: Hello, Does anyone know a way to decode ASN.1 encoded values to a Map structure using Erlang? Best Regards, Abimbola -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Tue Nov 15 16:22:18 2016 From: vans_163@REDACTED (Vans S) Date: Tue, 15 Nov 2016 15:22:18 +0000 (UTC) Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> <582A1C83.6000706@gmail.com> Message-ID: <384360571.260596.1479223338052@mail.yahoo.com> > fun > (K, {left, VL}) -> Res; > (K, {right, VR}) -> Res; > (K, {both, VL, VR}) -> Res > end Some questions on what VL/VR are once inside a nest, or does that happen all behind the scenes (going in nests). So you only need to compare the current values and return what will replace them? 
It seems like this is a more common use case than I initially thought. Having a general way to do this in erlang/OTP would be useful. On Tuesday, November 15, 2016 7:54 AM, Jesper Louis Andersen wrote: On Mon, Nov 14, 2016 at 9:20 PM Michael Truog wrote: >The main merge function you are missing in the maps module, is the merge/3 (instead of merge/2) that is common in other modules (like the dict/orddict modules). I added an implementation of merge/3 at https://github.com/okeuday/mapsd due to needing the dict API elsewhere. With the merge/3 function it should be easier to merge nested maps, in whatever way is required, since one size shouldn't fit all. > In OCaml, you often have a merge-function like this one: val merge : ('k, 'v1, 'cmp) t -> ('k, 'v2, 'cmp) t -> f:(key:'k -> [ | `Left of 'v1 | `Right of 'v2 | `Both of 'v1 * 'v2 ] -> 'v3 option) -> ('k, 'v3, 'cmp) t which means that you have to supply a function of the form fun (K, {left, VL}) -> Res; (K, {right, VR}) -> Res; (K, {both, VL, VR}) -> Res end where Res is either undefined | {ok, Result} for some result value. The semantics are that left, right, and both encode on which side the value was in the input maps. And Res encodes whether a new value should be produced in the new map.
_______________________________________________ >erlang-questions mailing list >erlang-questions@REDACTED >http://erlang.org/mailman/listinfo/erlang-questions > > From jesper.louis.andersen@REDACTED Tue Nov 15 20:34:41 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Tue, 15 Nov 2016 19:34:41 +0000 Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: <384360571.260596.1479223338052@mail.yahoo.com> References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> <582A1C83.6000706@gmail.com> <384360571.260596.1479223338052@mail.yahoo.com> Message-ID: Hi Vans, Suppose we have Map1 = #{ a => 1, b => 2 }, Map2 = #{ b => 3, c => 3 }, and we call maps:merge(F, Map1, Map2). F will be called 3 times with the following inputs: F(a, {left, 1}) %% because a is only in the left side of the map. F(b, {both, 2, 3}) %% since b is in both maps and the value 2 is in the left and 3 in the right F(c, {right, 3}) %% since c is only in the right map In each case, we can return a new value {ok, V} for some new value V, perhaps computed from the input. Or we can return undefined in the case we wish to have the new map ignore that key completely and omit it from the produced/merged map. On Tue, Nov 15, 2016 at 4:23 PM Vans S wrote: > > fun > > (K, {left, VL}) -> Res; > > (K, {right, VR}) -> Res; > > (K, {both, VL, VR}) -> Res > > end > > Some questions on what VL/VR are once inside a nest, or does that happen > all behind the scenes (going in nests). So you only need to compare the > current values and return what will replace them? > > It seems like this is a more common use case then I initially thought. > Having a general way to do this in erlang/OTP would be useful. 
> > > > > On Tuesday, November 15, 2016 7:54 AM, Jesper Louis Andersen < > jesper.louis.andersen@REDACTED> wrote: > > > > > > > On Mon, Nov 14, 2016 at 9:20 PM Michael Truog wrote: > > > >The main merge function you are missing in the maps module, is the > merge/3 (instead of merge/2) that is common in other modules (like the > dict/orddict modules). I added an implementation of merge/3 at > https://github.com/okeuday/mapsd due to needing the dict API elsewhere. > With the merge/3 function it should be easier to merge nested maps, in > whatever way is required, since one size shouldn't fit all. > > > > In OCaml, you often have a merge-function like this one: > > > val merge : ('k, 'v1, 'cmp) t -> ('k, 'v2, 'cmp) t -> f:(key:'k -> [ > | `Left of 'v1 > | `Right of 'v2 > | `Both of 'v1 * 'v2 > ] -> 'v3 option) -> ('k, 'v3, 'cmp) t > > > which means that you have to supply a function of the form > > > fun > > (K, {left, VL}) -> Res; > > (K, {right, VR}) -> Res; > > (K, {both, VL, VR}) -> Res > > end > > > where Res is either undefined | {ok, Result} for some result value. The > semantics are that left,right, and both encodes on which side the value was > in the input maps. And the Res encodes if a new value should be produced in > the new map. > > _______________________________________________ > >erlang-questions mailing list > >erlang-questions@REDACTED > >http://erlang.org/mailman/listinfo/erlang-questions > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vans_163@REDACTED Tue Nov 15 20:55:17 2016 From: vans_163@REDACTED (Vans S) Date: Tue, 15 Nov 2016 19:55:17 +0000 (UTC) Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> <582A1C83.6000706@gmail.com> <384360571.260596.1479223338052@mail.yahoo.com> Message-ID: <1458451696.494951.1479239717789@mail.yahoo.com> What would happen if we had: Map1 = #{ a => #{ b => 2, f=> #{}} }, Map2 = #{ a=> #{b => 3, c => 4, d=> #{e=> 5}} } Would we need to do our own calls to recurse into each nest? On Tuesday, November 15, 2016 2:34 PM, Jesper Louis Andersen wrote: Hi Vans, Suppose we have Map1 = #{ a => 1, b => 2 }, Map2 = #{ b => 3, c => 3 }, and we call maps:merge(F, Map1, Map2). F will be called 3 times with the following inputs: F(a, {left, 1}) %% because a is only in the left side of the map. F(b, {both, 2, 3}) %% since b is in both maps and the value 2 is in the left and 3 in the right F(c, {right, 3}) %% since c is only in the right map In each case, we can return a new value {ok, V} for some new value V, perhaps computed from the input. Or we can return undefined in the case we wish to have the new map ignore that key completely and omit it from the produced/merged map. On Tue, Nov 15, 2016 at 4:23 PM Vans S wrote: > fun >> (K, {left, VL}) -> Res; >> (K, {right, VR}) -> Res; >> (K, {both, VL, VR}) -> Res >> end > >Some questions on what VL/VR are once inside a nest, or does that happen all behind the scenes (going in nests). So you only need to compare the current values and return what will replace them? > >It seems like this is a more common use case then I initially thought. Having a general way to do this in erlang/OTP would be useful. 
> > > > >On Tuesday, November 15, 2016 7:54 AM, Jesper Louis Andersen wrote: > > > > > > >On Mon, Nov 14, 2016 at 9:20 PM Michael Truog wrote: > > >>The main merge function you are missing in the maps module, is the merge/3 (instead of merge/2) that is common in other modules (like the dict/orddict modules). I added an implementation of merge/3 at https://github.com/okeuday/mapsd due to needing the dict API elsewhere. With the merge/3 function it should be easier to merge nested maps, in whatever way is required, since one size shouldn't fit all. >> > >In OCaml, you often have a merge-function like this one: > > >val merge : ('k, 'v1, 'cmp) t -> ('k, 'v2, 'cmp) t -> f:(key:'k -> [ >| `Left of 'v1 >| `Right of 'v2 >| `Both of 'v1 * 'v2 >] -> 'v3 option) -> ('k, 'v3, 'cmp) t > > >which means that you have to supply a function of the form > > >fun > > (K, {left, VL}) -> Res; > > (K, {right, VR}) -> Res; > > (K, {both, VL, VR}) -> Res > >end > > >where Res is either undefined | {ok, Result} for some result value. The semantics are that left,right, and both encodes on which side the value was in the input maps. And the Res encodes if a new value should be produced in the new map. > >_______________________________________________ >>erlang-questions mailing list >>erlang-questions@REDACTED >>http://erlang.org/mailman/listinfo/erlang-questions >> >> > From silver.surfertab@REDACTED Tue Nov 15 20:32:52 2016 From: silver.surfertab@REDACTED (Silver Surfer) Date: Wed, 16 Nov 2016 01:02:52 +0530 Subject: [erlang-questions] Erlang C-nodes crashed randomly with double free, memory corruption malloc messege Message-ID: Hi, I have an application running on otp-19.0, where there is a single erlang node and two c-nodes; several processes from erlang node continuously send messages to the c-node servers. The c-node accepts a single connection and process all the received messages (ERL_MSG-> ERL_REG_SEND, ERL_SEND) in separate threads except ERL_TICK, ERL_ERROR etc. 
So in the c-node main() a connection is accepted and all the messages are received (erl_receive_msg); if the message is ERL_TICK, ERL_ERROR etc. they are dealt with appropriately by freeing the memory; else, if the message is ERL_MSG, a thread is created and the message is passed to the thread, where it is processed and a reply is sent using erl_send. This approach of handling messages through threads is taken because some of the operations performed by the c-node take a considerable amount of time, more than the tick time (is there a better way to do this?). Now, of the two c-nodes one is crashing randomly (10 times in 24Hrs, more or less); both c-nodes follow the same architecture, only the operations they perform are different. Most of the time the c-node just goes down without giving any error reason, and in 2 or 3 cases it crashed because of a double free or memory corruption printed by malloc; the traceback points to erl_receive_msg. Another point observed is that, in the thread after erl_free_compound, when we look at the allocated blocks using erl_eterm_statistics(allocated, freed), it is 0 most of the time but sometimes it is a non-zero value, i.e. 9, 18, etc. Any help is appreciated. Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Tue Nov 15 21:39:50 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Tue, 15 Nov 2016 20:39:50 +0000 Subject: [erlang-questions] adding maps:merge_nested/2 In-Reply-To: <1458451696.494951.1479239717789@mail.yahoo.com> References: <1109706702.2843701.1478983909057.ref@mail.yahoo.com> <1109706702.2843701.1478983909057@mail.yahoo.com> <582A1C83.6000706@gmail.com> <384360571.260596.1479223338052@mail.yahoo.com> <1458451696.494951.1479239717789@mail.yahoo.com> Message-ID: https://gist.github.com/jlouis/525249cd5d7860d691cdd97b3a4845af One way of implementing this is simply to convert the two maps to lists and sort them: -module(z).
-export([t/0, merge/3]). merge(F, L, R) -> L1 = lists:sort(maps:to_list(L)), L2 = lists:sort(maps:to_list(R)), merge(F, L1, L2, []). This sets up the body of the merger function. We know the lists are ordered, so we can exploit this fact to scrutinize each possible variant case and handle them. The helper function f/3 handles the computation on the merger function F. We just have to make sure each recursive (tail) call consumes the elements correctly. merge(_F, [], [], Acc) -> maps:from_list(Acc); merge(F, [{KX, VX}|Xs], [], Acc) -> merge(F, Xs, [], f(KX, F(KX, {left, VX}), Acc)); merge(F, [], [{KY, VY} | Ys], Acc) -> merge(F, [], Ys, f(KY, F(KY, {right, VY}), Acc)); merge(F, [{KX, VX}|Xs] = Left, [{KY,VY}|Ys] = Right, Acc) -> if KX < KY -> merge(F, Xs, Right, f(KX, F(KX, {left, VX}), Acc)); KX > KY -> merge(F, Left, Ys, f(KY, F(KY, {right, VY}), Acc)); KX =:= KY -> merge(F, Xs, Ys, f(KX, F(KX, {both, VX, VY}), Acc)) end. This makes f/3 easily definable by matching on the output structure of F: f(_K, undefined, Acc) -> Acc; f(K, {ok, R}, Acc) -> [{K, R} | Acc]. And some rudimentary test: t() -> Map1 = #{ a => 1, b => 2 }, Map2 = #{ b => 3, c => 3 }, Fun1 = fun (_K, {left, V}) -> {ok, V}; (_K, {right, V}) -> {ok, V}; (_K, {both, V1, V2}) -> {ok, V1 + V2} end, #{ a := 1, b := 5, c := 3 } = merge(Fun1, Map1, Map2), Map3 = #{ a => #{ b => 2, f=> #{}} }, Map4 = #{ a=> #{b => 3, c => 4, d=> #{e=> 5}} }, Fun2 = fun F(_K, {left, V}) -> {ok, V}; F(_K, {right, V}) -> {ok, V}; F(_K, {both, L, R}) when is_map(L), is_map(R) -> {ok, merge(F, L, R)}; F(_K, {both, L, R}) -> {ok, [L, R]} %% arbitrary choice end, #{a => #{b => [2,3],c => 4,d => #{e => 5},f => #{}}} = merge(Fun2, Map3, Map4). Your answer is in Map3, Map4 and Fun2 in the above. Note that you have to say what to do when you have the case {both, L, R} where either L or R is not a map. I just smash them into a list in the above, but there are probably better solutions depending on your needs. 
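[Editor's note: for comparison, the same merge/3 contract can be implemented without sorting by folding over the two maps directly. This is a hedged sketch; the name merge2/3 is made up here, and it follows the same callback shape F(K, {left,V} | {right,V} | {both,VL,VR}) -> {ok,V3} | undefined used above:]

```erlang
%% Alternative merge/3 sketch using maps:fold instead of sorted lists.
%% merge2/3 is a hypothetical name; same callback contract as merge/3 above.
merge2(F, L, R) ->
    %% Walk the left map, pairing up keys that also occur in the right map.
    Acc0 = maps:fold(
             fun(K, VL, Acc) ->
                     Res = case maps:find(K, R) of
                               {ok, VR} -> F(K, {both, VL, VR});
                               error    -> F(K, {left, VL})
                           end,
                     case Res of
                         {ok, V}   -> Acc#{K => V};
                         undefined -> Acc
                     end
             end, #{}, L),
    %% Then pick up the keys present only in the right map.
    maps:fold(
      fun(K, VR, Acc) ->
              case maps:is_key(K, L) of
                  true  -> Acc;   % already handled as {both, ...}
                  false ->
                      case F(K, {right, VR}) of
                          {ok, V}   -> Acc#{K => V};
                          undefined -> Acc
                      end
              end
      end, Acc0, R).
```

On the Map1/Map2 example above, merge2(Fun1, #{a => 1, b => 2}, #{b => 3, c => 3}) yields #{a => 1, b => 5, c => 3}, the same result as the sorted-list version; the trade-off is two folds and per-key lookups instead of two sorts.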
On Tue, Nov 15, 2016 at 9:01 PM Vans S wrote: > What would happen if we had: > > Map1 = #{ a => #{ b => 2, f=> #{}} }, > Map2 = #{ a=> #{b => 3, c => 4, d=> #{e=> 5}} } > > Would we need to do our own calls to recurse into each nest? > > On Tuesday, November 15, 2016 2:34 PM, Jesper Louis Andersen < > jesper.louis.andersen@REDACTED> wrote: > > > > Hi Vans, > > Suppose we have > > Map1 = #{ a => 1, b => 2 }, > Map2 = #{ b => 3, c => 3 }, > > and we call maps:merge(F, Map1, Map2). F will be called 3 times with the > following inputs: > > F(a, {left, 1}) %% because a is only in the left side of the map. > F(b, {both, 2, 3}) %% since b is in both maps and the value 2 is in the > left and 3 in the right > F(c, {right, 3}) %% since c is only in the right map > > In each case, we can return a new value {ok, V} for some new value V, > perhaps computed from the input. Or we can return undefined in the case we > wish to have the new map ignore that key completely and omit it from the > produced/merged map. > > > On Tue, Nov 15, 2016 at 4:23 PM Vans S wrote: > > > fun > >> (K, {left, VL}) -> Res; > >> (K, {right, VR}) -> Res; > >> (K, {both, VL, VR}) -> Res > >> end > > > >Some questions on what VL/VR are once inside a nest, or does that happen > all behind the scenes (going in nests). So you only need to compare the > current values and return what will replace them? > > > >It seems like this is a more common use case then I initially thought. > Having a general way to do this in erlang/OTP would be useful. > > > > > > > > > >On Tuesday, November 15, 2016 7:54 AM, Jesper Louis Andersen < > jesper.louis.andersen@REDACTED> wrote: > > > > > > > > > > > > > >On Mon, Nov 14, 2016 at 9:20 PM Michael Truog wrote: > > > > > >>The main merge function you are missing in the maps module, is the > merge/3 (instead of merge/2) that is common in other modules (like the > dict/orddict modules). 
I added an implementation of merge/3 at > https://github.com/okeuday/mapsd due to needing the dict API elsewhere. > With the merge/3 function it should be easier to merge nested maps, in > whatever way is required, since one size shouldn't fit all. > >> > > > >In OCaml, you often have a merge-function like this one: > > > >val merge : ('k, 'v1, 'cmp) t -> ('k, 'v2, 'cmp) t -> f:(key:'k -> [ > >| `Left of 'v1 > >| `Right of 'v2 > >| `Both of 'v1 * 'v2 > >] -> 'v3 option) -> ('k, 'v3, 'cmp) t > > > >which means that you have to supply a function of the form > > > >fun > > > > (K, {left, VL}) -> Res; > > > > (K, {right, VR}) -> Res; > > > > (K, {both, VL, VR}) -> Res > > > >end > > > > > >where Res is either undefined | {ok, Result} for some result value. The > semantics are that left, right, and both encode on which side the value was > in the input maps, and Res encodes whether a new value should be produced in > the new map. > > > >_______________________________________________ > >>erlang-questions mailing list > >>erlang-questions@REDACTED > >>http://erlang.org/mailman/listinfo/erlang-questions > >> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Oliver.Korpilla@REDACTED Tue Nov 15 23:24:59 2016 From: Oliver.Korpilla@REDACTED (Oliver Korpilla) Date: Tue, 15 Nov 2016 23:24:59 +0100 Subject: [erlang-questions] Decode ASN.1 values to Map In-Reply-To: References: Message-ID:

Hello, Abimbola.

'fraid not. I wrote something to do that for the S1AP protocol and I wasn't very convinced by the result. I would have welcomed something premade as well. The mix of tuples and records asn1ct produces lacks a bit in usability (in my opinion and for my specific needs, of course) but it seems to be the best we have. It gets the job done too.

Cheers,
Oliver

Sent: Tuesday, 15 November 2016 at 15:35
From: "Abimbola Adegun"
To: erlang-questions@REDACTED
Subject: [erlang-questions] Decode ASN.1 values to Map

Hello,
Does anyone know a way to decode ASN.1 encoded values to a Map structure using Erlang?

Best Regards, Abimbola
_______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions

From drohrer@REDACTED Wed Nov 16 00:39:14 2016 From: drohrer@REDACTED (Douglas Rohrer) Date: Tue, 15 Nov 2016 23:39:14 +0000 Subject: [erlang-questions] [ANN] Basho move Lager to erlang-lager organization on Github Message-ID:

Recognizing that Lager has long since become an important open-source tool for Erlang developers, the team at Basho is happy to announce we have created the Erlang-Lager organization on Github to open up Lager and encourage broader community involvement: https://github.com/erlang-lager/lager

The primary maintainers will continue to be Mark Allen and John Daily. Please reach out to them to coordinate your involvement going forward.

You'll notice the original lager repo is owned by the organization and we've forked a copy back to Basho. All your existing forks will work just fine, though you may need to update your remote URLs (if you were previously pushing directly to the Basho repo):

➜ lager git:(develop) git remote -v
upstream https://github.com/basho/lager.git (fetch)
upstream https://github.com/basho/lager.git (push)
➜ lager git:(develop) git remote set-url upstream https://github.com/erlang-lager/lager.git
➜ lager git:(develop) git remote -v
upstream https://github.com/erlang-lager/lager.git (fetch)
upstream https://github.com/erlang-lager/lager.git (push)

"upstream" may be different depending on your personal setup.

You will also want to update any existing build tools (rebar, mix, erlang.mk, etc.) in your projects that point to the Basho clone of the repo to instead use the erlang-lager organization's repo.

I want to thank Mark and John for being willing to continue their excellent work on Lager for the wider community, along with all of our other outside contributors.
We all believe a larger maintainer base for Lager will continue to improve Lager and our community as a whole. Please share with any interested parties that may not see this announcement.

Best, Doug Rohrer Principal Engineer Basho

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zxq9@REDACTED Wed Nov 16 01:23:32 2016 From: zxq9@REDACTED (zxq9) Date: Wed, 16 Nov 2016 09:23:32 +0900 Subject: [erlang-questions] Decode ASN.1 values to Map In-Reply-To: References: Message-ID: <1906724.6B2LUoEQ6b@burrito>

On 2016年11月15日 火曜日 15:35:32 Abimbola Adegun wrote:
> Hello,
>
> Does anyone know a way to decode ASN.1 encoded values to a Map structure
> using Erlang?

Sure, assuming you've got the ASN.1 schema. Converting "from ASN.1 to a map" is never a 1:1 conversion, so there will be an intermediate step involved, but it is not difficult, though writing the conversion function(s) may be tedious the one time you have to do it. The information flow goes:

1. ASN.1 -> mapped Erlang values
2. mapped Erlang values -> whatever else you want (a map)

Just like interpreting any other external data. (Even BERT values are not ready-made for internal use, not to mention the circus surrounding JSON.)

Providing concrete advice would be a lot easier if you have the schema and sample data, or a cut-down example to illustrate the point.

-Craig

PS: Hi, list. I'm back!

From sverker.eriksson@REDACTED Wed Nov 16 14:19:33 2016 From: sverker.eriksson@REDACTED (Sverker Eriksson) Date: Wed, 16 Nov 2016 14:19:33 +0100 Subject: [erlang-questions] Erlang C-nodes crashed randomly with double free, memory corruption malloc message In-Reply-To: References: Message-ID: <4082a7b9-c0ec-535c-4f17-860b46f48c88@ericsson.com>

Have you tried to run your c-node under valgrind? http://valgrind.org/
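For example, a memcheck run over a c-node could look like the following; the binary name and node arguments are made up for illustration:

```shell
# Run the c-node binary under valgrind's memcheck tool.
# --track-origins=yes helps explain "double free or corruption" aborts.
valgrind --tool=memcheck --leak-check=full --track-origins=yes \
    ./my_cnode mynode@localhost secretcookie 2> valgrind.log
```

Valgrind slows the program down considerably, so the Erlang side may need a longer tick time (net_ticktime) while debugging.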
/Sverker

On 11/15/2016 08:32 PM, Silver Surfer wrote:
> Hi,
> I have an application running on otp-19.0, where there is a single erlang
> node and two c-nodes; several processes from the erlang node continuously send
> messages to the c-node servers. The c-node accepts a single connection and
> processes all the received messages (ERL_MSG -> ERL_REG_SEND, ERL_SEND) in
> separate threads, except ERL_TICK, ERL_ERROR etc.
>
> So in the c-node main() a connection is accepted and all the messages are
> received (erl_receive_msg); if the message is ERL_TICK, ERL_ERROR etc. they
> are dealt with appropriately by freeing the memory, else if the message is
> ERL_MSG then a thread is created and the message is passed to the thread,
> where it is processed and a reply is sent using erl_send. This approach of
> handling messages through threads is taken because some of the operations performed
> by the c-node take a considerable amount of time, which is more than the tick
> time (is there a better way to do this?).
>
> Now, out of the two c-nodes one is crashing randomly (10 times in 24hrs,
> more or less); both c-nodes follow the same architecture, only the
> operations they perform are different. Most of the time the c-node just
> goes down without giving any error reason, and in 2 or 3 cases it crashed
> because of a double free or memory corruption printed by malloc; the trace
> back points to erl_receive_msg.
>
> Another point observed is that, in the thread after erl_free_compound, when
> we look at the allocated blocks using erl_eterm_statistics(allocated,
> freed), it is 0 most of the times but sometimes it is a non-zero value, e.g.
> 9, 18, etc.
>
> Any help is appreciated.
> > Greg
> >

From sweden.feng@REDACTED Wed Nov 16 20:06:03 2016 From: sweden.feng@REDACTED (Alex Feng) Date: Wed, 16 Nov 2016 20:06:03 +0100 Subject: [erlang-questions] "Port reading stdout from external program" Message-ID:

Hi,

I am using a port to communicate with an external program, and I would like to fetch each piece of output from the external program. But I have a problem fetching the output when the external program is waiting for input, with "scanf" for example.

Here is an example of communicating with a simple C program. I need to fetch the prompt line ("Please input you name") from the Port in order to interact with the C code, but with "scanf" added to the C code, I couldn't fetch it from the Port. Does anyone know why that is? Any advice and suggestions will be greatly appreciated.

Erlang code:

read() ->
    Port = open_port({spawn, "/home/erlang/test/a.out"}, [binary, {line, 255}]), %binary,{line, 255}
    io:format("Port: ~p~n",[Port]),
    do_read(Port).

do_read(Port) ->
    receive
        {Port,{data,Data}} ->
            io:format("Data: ~p~n",[Data]);
        Any ->
            io:format("No match fifo_client:do_read/1, ~p~n",[Any])
    end,
    do_read(Port).

C code:

int main()
{
    char str[10];

    printf("Please input you name:"); //Prompt line

    scanf("%9s", str); // with this line, the port can not read the output from this program.

    return 1;
}

Br,
Alex

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From erlang.org@REDACTED Wed Nov 16 20:43:31 2016 From: erlang.org@REDACTED (Stanislaw Klekot) Date: Wed, 16 Nov 2016 20:43:31 +0100 Subject: [erlang-questions] "Port reading stdout from external program" In-Reply-To: References: Message-ID: <20161116194331.GA26704@jarowit.net>

On Wed, Nov 16, 2016 at 08:06:03PM +0100, Alex Feng wrote:
> Hi,
>
> I am using a port to communicate with an external program, and I would like to
> fetch each piece of output from the external program.
> But I have a problem fetching the output when the external program is
> waiting for input, with "scanf" for example.
> > Here is an example of communicating with a simple C program.
> >
> > I need to fetch the prompt line ("Please input you name") from the Port in order
> > to interact with the C code, but with "scanf" added to the C code, I couldn't
> > fetch it from the Port.
> > Does anyone know why that is? Any advice and suggestions will be greatly
> > appreciated.
[...]
> C code:
>
> int main()
> {
>     char str[10];
>     printf("Please input you name:"); //Prompt line
>     scanf("%9s", str); // with this line, the port can not read the output from this program.
>     return 1;
> }

It's because STDOUT buffering differs between running things in a terminal directly and running them with STDOUT redirected somewhere. You can see it yourself by first running this "a.out" in a shell, and then running it redirected to cat: "a.out | cat". In the latter case you won't get the prompt.

You need to call fflush() to flush anything buffered by printf() (see its man page for details).

-- Stanislaw Klekot

From frank.muller.erl@REDACTED Thu Nov 17 05:22:50 2016 From: frank.muller.erl@REDACTED (Frank Muller) Date: Thu, 17 Nov 2016 04:22:50 +0000 Subject: [erlang-questions] "Port reading stdout from external program" In-Reply-To: <20161116194331.GA26704@jarowit.net> References: <20161116194331.GA26704@jarowit.net> Message-ID:

Use erlexec in "async" mode instead: https://github.com/saleyn/erlexec

On Wed, 16 Nov 2016 at 20:43, Stanislaw Klekot wrote:
> On Wed, Nov 16, 2016 at 08:06:03PM +0100, Alex Feng wrote:
> > Hi,
> >
> > I am using a port to communicate with an external program, and I would like
> > to fetch each piece of output from the external program.
> > But I have a problem fetching the output when the external program is
> > waiting for input, with "scanf" for example.
> >
> > Here is an example of communicating with a simple C program.
> >
> > I need to fetch the prompt line ("Please input you name") from the Port in
> > order to interact with the C code, but with "scanf" added to the C code, I couldn't
> > fetch it from Port.
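[The buffering fix described above, applied to the C program from this thread, would look roughly like this -- a sketch, not tested against the poster's exact setup:]

```c
#include <stdio.h>

int main(void)
{
    char str[10];

    /* Alternatively, disable stdout buffering once at startup:
       setvbuf(stdout, NULL, _IONBF, 0); */

    printf("Please input you name:");  /* prompt line */
    fflush(stdout);                    /* push the prompt out even when
                                          stdout is a pipe to the port */

    if (scanf("%9s", str) != 1) {
        /* no input available (e.g. empty stdin); nothing to do */
    }
    return 0;
}
```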
> > Does anyone know why is that ? Any advice and suggestions will be > greatly > > appreciated. > [...] > > C code: > > > > int main() > > { > > > > char str[10]; > > > > printf("Please input you name:"); //Prompt line > > > > scanf ("%9s",str); // with this line, port can not read the output > from this program. > > > > return 1; > > } > > It's because STDOUT buffering differs between running things in terminal > directly and running them with STDOUT redirected somewhere. You can see > it yourself by first running this "a.out" in shell, and then running it > redirected to cat: "a.out | cat". In the latter case you won't get > prompt. > > You need to call fflush() to flush anything buffered by printf() (see > its man page for details). > > -- > Stanislaw Klekot > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christopher.meiklejohn@REDACTED Thu Nov 17 10:39:50 2016 From: christopher.meiklejohn@REDACTED (Christopher Meiklejohn) Date: Thu, 17 Nov 2016 10:39:50 +0100 Subject: [erlang-questions] is there anything wrong with global? In-Reply-To: <54A0708F.9020501@gmail.com> References: <54A0708F.9020501@gmail.com> Message-ID: On Sun, Dec 28, 2014 at 10:05 PM, Michael Truog wrote: > On 12/28/2014 11:39 AM, Sean Cribbs wrote: > > Global registration is inherently a consensus problem, and thus will have > problems with liveness, as you've discovered. I can't speak to the > implementation of the 'global' module, but I would suspect it is not > partition-tolerant. Is there a way you can reconfigure your application to > have locally-registered processes (one-per-node), or not require a single > distribution point? 
> > There is also some work done by my colleague Chris Meiklejohn for
> eventually-consistent (and partition-tolerant) process groups, if you don't
> require a single authoritative process:
> https://github.com/cmeiklejohn/riak_pg
>
> If you need something that has already been in production, you need not look
> further than Erlang/OTP with http://www.erlang.org/doc/man/pg2.html . There
> is also https://github.com/okeuday/cpg/ which has more features, without ets
> usage, to avoid contention (and it supports the via syntax, if you need that
> (i.e., {via,Module,ViaName})).
>
>
> On Sat, Dec 27, 2014 at 5:36 AM, 289602744 <289602744@REDACTED> wrote:
>>
>> These days, I found global did not work well.
>> I have hundreds of nodes, and these nodes are connected with each other.
>> But on some nodes, I can't get a process's global registered name with
>> global:whereis_name, although I can get the node info by
>> net_kernel:node_info/1.
>> I did not find a way to resolve this problem, unless I unregister that
>> process's name and then register the name again; then I can get the
>> global name by global:whereis_name/1.
>> Is there anything wrong with global? With so many nodes, I can't
>> unregister and then register a process name every time I find global
>> returns me an undefined.

Resurrecting this from the dead: we've just released a highly scalable, eventually consistent process registry that's basically the spiritual successor of Riak PG, called Lasp PG. [1] It's just a beta for now, but we're trying to demonstrate how easy it is to build applications on top of the Lasp support libraries. It uses the underlying Lasp network distribution and key-value store for highly scalable (500+ node) clusters, with delta-optimized CRDTs to reduce redundant network transmission and reduce garbage creation.
We're hoping that we can show that the libraries we've built to support Lasp are generally reusable in Erlang, and that the Lasp KV store is a nice reusable peer-to-peer data store for embedding into your applications.

Thanks, Christopher

[1] https://github.com/lasp-lang/lasp_pg

From touch.sereysethy@REDACTED Thu Nov 17 10:47:53 2016 From: touch.sereysethy@REDACTED (Sereysethy TOUCH) Date: Thu, 17 Nov 2016 16:47:53 +0700 Subject: [erlang-questions] Erlang/OTP 19, SSL error: handshake failure Message-ID:

Hello,

I just updated to Erlang OTP 19. I encountered this problem:

SSL: certify: ssl_connection.erl:826:Fatal error: handshake failure

It is a self-signed certificate and the client is asked to present a certificate, if any.

Does anyone know where I should look or which parameters I should change? Is it a known bug?

Thanks, Sethy

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ingela.andin@REDACTED Thu Nov 17 12:13:48 2016 From: ingela.andin@REDACTED (Ingela Andin) Date: Thu, 17 Nov 2016 12:13:48 +0100 Subject: [erlang-questions] Erlang/OTP 19, SSL error: handshake failure In-Reply-To: References: Message-ID:

Hi!

2016-11-17 10:47 GMT+01:00 Sereysethy TOUCH :
> Hello,
>
> I just updated to Erlang OTP 19.
>
> I encountered this problem
>
> SSL: certify: ssl_connection.erl:826:Fatal error: handshake failure
>
> It is a self-signed certificate and the client is asked to present a certificate,
> if any.
>
> Does anyone know where I should look or which parameters I should
> change?
>
> Is it a known bug?

It might be; I recommend ssl-8.0.3, released in OTP-19.1.1, available at https://github.com/erlang/otp

Regards Ingela Erlang/OTP team - Ericsson AB

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cedric.bhihe@REDACTED Thu Nov 17 21:19:14 2016 From: cedric.bhihe@REDACTED (Cedric Bhihe) Date: Thu, 17 Nov 2016 21:19:14 +0100 Subject: [erlang-questions] Erlang list: Toolbar not available in Erlang v19 ?
Message-ID:

Hello, I just installed Erlang v19 from the tarball on an Ubuntu 14.04.5 box.

sudo apt-get install libwxgtk2.8-dev libgl1-mesa-dev libglu1-mesa-dev libpng3
wget http://erlang.org/download/otp_src_19.1.tar.gz
tar -zxf otp_src_19.1.tar.gz
cd otp_src_19.1
export ERL_TOP=`pwd`
./configure
./make
sudo make install

I have seen the use of a toolbar in section 1.2.1 of "getting started" on erlang.org. The description here and elsewhere mentions debugging, trace visuals, process management (Pman)... But as I start the shell with `> erl -s toolbar`, Erlang crashes and gives me:

Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false]
{"init terminating in do_boot",{undef,[{toolbar,start,[],[]},{init,start_em,1,[]},{init,do_boot,3,[]}]}}
init terminating in do_boot ()
Crash dump is being written to: erl_crash.dump...done

Can someone help me with that? Is the toolbar available for v19? Did I forget compilation options? Thanks.

-- cedric dot bhihe at gmail dot com - /GMT+1/ _________________________________________________________________

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jose.valim@REDACTED Thu Nov 17 23:04:16 2016 From: jose.valim@REDACTED (=?UTF-8?Q?Jos=C3=A9_Valim?=) Date: Thu, 17 Nov 2016 23:04:16 +0100 Subject: [erlang-questions] Erlang list: Toolbar not available in Erlang v19 ? In-Reply-To: References: Message-ID:

Toolbar was removed in Erlang R16 according to its old docs: http://erlang.org/documentation/doc-5.10/lib/toolbar-1.4.2.3/doc/html/toolbar_chapter.html

*José Valim* www.plataformatec.com.br Skype: jv.ptec Founder and Director of R&D

On Thu, Nov 17, 2016 at 9:19 PM, Cedric Bhihe wrote:
> Hello, I just installed Erlang v19 from the tarball on an Ubuntu 14.04.5
> box.
> > sudo apt-get install libwxgtk2.8-dev libgl1-mesa-dev libglu1-mesa-dev > libpng3 > wget http://erlang.org/download/otp_src_19.1.tar.gz > tar -zxf otp_src_19.1.tar.gz > cd otp_src_19.1 > export ERL_TOP=`pwd` > ./configure > ./make > sudo make install > I have seen the use of a toolbar in section 1.2.1 of "getting started" on > erlang.org. > The description here and elsewhere mentions debugging, trace visual, > process management (Pman)... > But as I start the shell with `> erl -s toolbar` Erlang crashes and gives > me: > > Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:2:2] [async-threads:10] > [hipe] [kernel-poll:false] > {"init terminating in do_boot",{undef,[{toolbar, > start,[],[]},{init,start_em,1,[]},{init,do_boot,3,[]}]}} > init terminating in do_boot () > Crash dump is being written to: erl_crash.dump...done > > Can someone help me with that ? Is the toolbar available for v19 ? Did I > forget compilation options ? Thanks. > > -- > cedric dot bhihe at gmail dot com > - *GMT+1* > > _________________________________________________________________ > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangud@REDACTED Fri Nov 18 06:48:20 2016 From: dangud@REDACTED (Dan Gudmundsson) Date: Fri, 18 Nov 2016 05:48:20 +0000 Subject: [erlang-questions] Erlang list: Toolbar not available in Erlang v19 ? In-Reply-To: References: Message-ID: You can use observer:start(), which is the replacement for the old gui tools, or debugger:start() if it is the debugger you are looking for. On Thu, Nov 17, 2016 at 11:04 PM José Valim wrote: > Toolbar was removed in Erlang R16 according to its old docs: > http://erlang.org/documentation/doc-5.10/lib/toolbar-1.4.2.3/doc/html/toolbar_chapter.html > > > > *José
Valim* > www.plataformatec.com.br > Skype: jv.ptec > Founder and Director of R&D > > On Thu, Nov 17, 2016 at 9:19 PM, Cedric Bhihe > wrote: > > Hello, I just installed Erlang v19 from the tar ball on a Ubuntu 14.04.5 > box. > > sudo apt-get install libwxgtk2.8-dev libgl1-mesa-dev libglu1-mesa-dev > libpng3 > wget http://erlang.org/download/otp_src_19.1.tar.gz > tar -zxf otp_src_19.1.tar.gz > cd otp_src_19.1 > export ERL_TOP=`pwd` > ./configure > ./make > sudo make install > I have seen the use of a toolbar in section 1.2.1 of "getting started" on > erlang.org. > The description here and elsewhere mentions debugging, trace visual, > process management (Pman)... > But as I start the shell with `> erl -s toolbar` Erlang crashes and gives > me: > > Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:2:2] [async-threads:10] > [hipe] [kernel-poll:false] > {"init terminating in > do_boot",{undef,[{toolbar,start,[],[]},{init,start_em,1,[]},{init,do_boot,3,[]}]}} > init terminating in do_boot () > Crash dump is being written to: erl_crash.dump...done > > Can someone help me with that ? Is the toolbar available for v19 ? Did I > forget compilation options ? Thanks. > > -- > cedric dot bhihe at gmail dot com > - *GMT+1* > > _________________________________________________________________ > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From silviu.cpp@REDACTED Fri Nov 18 09:35:27 2016 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Fri, 18 Nov 2016 10:35:27 +0200 Subject: [erlang-questions] ssl session tickets with erlang Message-ID:

Hello,

Is there any way, using the SSL module from Erlang, to use SSL session tickets and share them between nodes, as described in https://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html#sharing-tickets ?

Kind regards,
Silviu

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Oliver.Korpilla@REDACTED Fri Nov 18 11:33:19 2016 From: Oliver.Korpilla@REDACTED (Oliver Korpilla) Date: Fri, 18 Nov 2016 11:33:19 +0100 Subject: [erlang-questions] Two-way ports? Message-ID:

Hello.

I'm trying to solve the following problem: I have to use an external mechanism available in C to send and receive messages.

The API provides:
* Some basic startup functions registering this Unix process as an entity in that message framework.
* Registering for a specific ID to receive messages under.
* A non-blocking send.
* A blocking receive.

Is this something I could in theory manage through a port? I want to:

1) Set up the initial startup (always done).
2) Register a specific ID.
3a) Receive messages asynchronously from the process.
3b) Send messages into the port for immediate delivery.

Can this be done in one port? Does it need two ports? Can it be done with ports at all, or elegantly?

Thanks and best regards,
Oliver

From serge@REDACTED Fri Nov 18 16:28:34 2016 From: serge@REDACTED (Serge Aleynikov) Date: Fri, 18 Nov 2016 10:28:34 -0500 Subject: [erlang-questions] ANN: erlexec v1.6.4 Message-ID:

I'd like to announce a bug fix release, v1.6.4, of erlexec, which addresses a critical issue of occasionally missing OS process termination signals when two or more spawned OS processes get terminated close to each other. This issue was observed on busy systems. It also fixes open issues reported on Mac OS X.
https://github.com/saleyn/erlexec Serge -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjtruog@REDACTED Fri Nov 18 20:23:56 2016 From: mjtruog@REDACTED (Michael Truog) Date: Fri, 18 Nov 2016 11:23:56 -0800 Subject: [erlang-questions] Two-way ports? In-Reply-To: References: Message-ID: <582F554C.4040207@gmail.com> On 11/18/2016 02:33 AM, Oliver Korpilla wrote: > Hello. > > I'm trying to solve the following problem: > > I have to use an external mechanism available in C to send and receive messages. > > The API provides: > * Some basic startup functions registering this Unix process as an entity in that message framework. > * Registering for a specific ID to receive messages under. > * A non-blocking send. > * A blocking receive. > > Is this something I could in theory manage through a port? > > I want to: > 1) Setup the initial startup (done always). > 2) Register a specific ID. > 3a) Receive messages asynchronously from the process. > 3b) Send messages into the port for immediate delivery. > > Can this be done in one port? Does it need two ports? Can it be done with ports at all or elegantly? This can be done in one port. However, it is much easier to use the CloudI C API for this functionality. There is a small example at http://cloudi.org/#C . Best Regards, Michael From vladdu55@REDACTED Fri Nov 18 20:44:33 2016 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Fri, 18 Nov 2016 20:44:33 +0100 Subject: [erlang-questions] cancellable worker process Message-ID: Hi all, In a project I'm working on, I need to be able to start a computation and be able to retrieve partial answers as well as cancel it. I couldn't find anything similar on the net, so I implemented something myself, hoping it may be of more general interest. In that case I will release it properly, with tests and docs. 
https://gist.github.com/vladdu/911a3ccccc6fa8b0aed08a93ec8fa37e

I would appreciate comments about any (more or less glaring) bugs and (hints of) overengineering. ;-) Some of the details could be ported to rpc:async_call, like the ability to call yield from any process, if it is deemed to be useful.

best regards,
Vlad

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pablo.platt@REDACTED Fri Nov 18 22:18:29 2016 From: pablo.platt@REDACTED (pablo platt) Date: Fri, 18 Nov 2016 23:18:29 +0200 Subject: [erlang-questions] UDP buffers and packet loss Message-ID:

Hi,

I have an Erlang UDP media server running on Ubuntu 16.04 on a Virtual Private Server (VPS) with 100 connected clients. 10 clients send 1Mbps. Each packet is 1KB. All clients receive 1Mbps. Every client has a pid that holds a gen_udp socket.

I see packet loss on the clients. I'm trying to find what causes the packet loss and how to fix it. mtr -u shows no loss. netstat -su shows that RcvbufErrors increases over time. Does this mean that packets are dropped between the Linux kernel and Erlang?

The socket is created with gen_udp:open(0, [binary, {active, once}]). The default gen_udp socket settings on my machine are:

buffer = 8192
recbuf = 16384
sndbuf = 212992

The inet docs say that buffer should be larger than recbuf and sndbuf and will be set automatically, but in my case it is smaller. Is this a problem?

Does sndbuf have an effect on UDP sockets or only TCP sockets?

What are the recommended settings for buffer, recbuf and sndbuf for a 1Mbps UDP socket? Do I need to change the kernel settings as well?

What else can cause packet loss?

Thanks

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From roe.adrian@REDACTED Fri Nov 18 22:59:42 2016 From: roe.adrian@REDACTED (Adrian Roe) Date: Fri, 18 Nov 2016 21:59:42 +0000 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: Message-ID: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com>

We've had good success tweaking the kernel buffer sizes on servers with a lot of UDP traffic in the past. I found the following in an old shell script that might do the trick:

sudo sysctl -w net.core.rmem_max=8000000
sudo sysctl -w net.core.wmem_max=8000000
sudo sysctl -w net.ipv4.route.flush=1

- those change the read and write buffer sizes; there are a whole ton of other options, but we've found those two to be sufficient on servers with lots of UDP traffic in the past. Google will no doubt tell you all the others if you're bored ;)

Sent from my iPhone

> On 18 Nov 2016, at 21:18, pablo platt wrote:
>
> Hi,
>
> I have a UDP erlang media server running on Ubuntu 16.04 in a Virtual Private Server (VPS) with 100 connected clients.
> 10 clients send 1Mbps. Each packet is 1KB.
> All clients receive 1Mbps.
> Every client has a pid that holds a gen_udp socket.
>
> I see packet loss on the clients. I'm trying to find what cause the packet loss and how to fix it.
> mtr -u show no loss.
> netstat -su show that RcvbufErrors increases over time
> Does this means that packets are dropped between the linux kernel and Erlang?
>
> The socket is created with gen_udp:open(0, [binary, {active, once}])
> The default gen_udp socket settings my machine is:
> buffer = 8192
> recbuf = 16384
> sndbuf = 212992
>
> inet docs says that buffer should be larger than recbuf and sndbuf and will be set automatically but in my case it is smaller. Is this a problem?
>
> Does sndbuf has an effect on UDP socket or only TCP socket?
>
> What are the recommended settings for buffer, recbuf and sndbuf for 1Mbps UDP socket?
> Do I need to change the kernel settings as well?
>
> What else can cause packet loss?
> > Thanks
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From max.lapshin@REDACTED Sat Nov 19 11:05:22 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Sat, 19 Nov 2016 13:05:22 +0300 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID:

I don't understand: why do you want to increase the UDP buffer?

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pablo.platt@REDACTED Sat Nov 19 11:16:31 2016 From: pablo.platt@REDACTED (pablo platt) Date: Sat, 19 Nov 2016 12:16:31 +0200 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID:

@max
I have packet loss on my UDP sockets.
mtr -u to the server shows no loss.
netstat -su shows that RcvbufErrors increases over time.
I think it means that my Erlang server is dropping packets.
If it's not because of small receive buffers, what can be the reason?

On Sat, Nov 19, 2016 at 12:05 PM, Max Lapshin wrote:
> I don't understand: why do you want to increase the UDP buffer?

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From max.lapshin@REDACTED Sat Nov 19 20:51:00 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Sat, 19 Nov 2016 22:51:00 +0300 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID:

Ah, it is receive? We had to write a special port driver to receive 500 mbit/s of UDP video multicast.

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From jesper.louis.andersen@REDACTED Sun Nov 20 01:26:40 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sun, 20 Nov 2016 00:26:40 +0000 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID:

On Sat, Nov 19, 2016 at 11:16 AM pablo platt wrote:
> I think it means that my Erlang server is dropping packets.
> If it's not because of small receive buffers, what can be the reason?

Linux? cat /proc/net/udp and look for drops. If you burst UDP packets, there is a chance the UDP buffer in the kernel fills up and then it will start to drop packets. The way to handle this is to make sure your processing speed is above the packet arrival rate, and then to increase the buffer size so you can handle the case where packets arrive but the Erlang system isn't scheduled in due time.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ingela.andin@REDACTED Sun Nov 20 15:23:14 2016 From: ingela.andin@REDACTED (Ingela Andin) Date: Sun, 20 Nov 2016 15:23:14 +0100 Subject: [erlang-questions] ssl session tickets with erlang In-Reply-To: References: Message-ID:

Hi!

Support for RFC 5077 is not implemented at the moment. It can, of course, be done. It is a question of priorities, or of someone making a PR.

Regards Ingela Erlang/OTP team - Ericsson AB

2016-11-18 9:35 GMT+01:00 Caragea Silviu :
> Hello,
>
> Is there any way, using the SSL module from Erlang, to use SSL session tickets
> and share them between nodes, as described in
> https://vincent.bernat.im/en/blog/2011-ssl-session-reuse-
> rfc5077.html#sharing-tickets
>
> Kind regards,
> Silviu
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From nbartley@REDACTED Sun Nov 20 22:13:10 2016 From: nbartley@REDACTED (nato) Date: Sun, 20 Nov 2016 13:13:10 -0800 Subject: [erlang-questions] send after mechanics Message-ID: On a whim, I started an erlang shell, counted the processes (some 25), then called `erlang:send_after/3` -- I was surprised to see that the count of processes never went up. Do these timer routines not spawn a new process!? Seems like a perfect candidate for such. 'Would love some hand-holding on how this all works before I stare at the source. From raimo+erlang-questions@REDACTED Mon Nov 21 09:01:19 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Mon, 21 Nov 2016 09:01:19 +0100 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID: <20161121080119.GA39708@erix.ericsson.se> There is also the option 'read_packets' to http://erlang.org/doc/man/inet.html#setopts-2 that in effect controls the prioritizing between feeding in UDP packets versus executing erlang code. Try increasing it. On Sat, Nov 19, 2016 at 12:16:31PM +0200, pablo platt wrote: > @max > I have packet loss on my UDP sockets. > mtr -u to the server show no loss. > netstat -su show that RcvbufErrors increases over time > I think it means that my Erlang server is dropping packets. > If it's not because of small receive buffers, what can be the reason? > > On Sat, Nov 19, 2016 at 12:05 PM, Max Lapshin wrote: > > > I don't understand what for you increase udp buffer? 
-- / Raimo Niskanen, Erlang/OTP, Ericsson AB From aschultz@REDACTED Mon Nov 21 09:29:31 2016 From: aschultz@REDACTED (Andreas Schultz) Date: Mon, 21 Nov 2016 09:29:31 +0100 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID: On 11/19/2016 08:51 PM, Max Lapshin wrote: > ah, it is receive? > > We had to write special port driver to receive 500 mbit/s of UDP video multicast With [1] I'm able to forward (receive and send) 500mbit/s on a single core. The interface itself is a port driver combined with a nif and works only on Linux (and maybe OSX), but the actual forwarding code is pure Erlang [2]. Andreas [1]: https://github.com/travelping/gen_socket [2]: https://github.com/travelping/gtp_u_edp/blob/master/src/gtp_u_edp_forwarder.erl From pablo.platt@REDACTED Mon Nov 21 09:49:12 2016 From: pablo.platt@REDACTED (pablo platt) Date: Mon, 21 Nov 2016 10:49:12 +0200 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID: @Raimo, I'll try increasing read_packets. How does increasing read_packets work compared to increasing recbuf? Both have the same effect, only read_packets keeps unhandled packets as Erlang messages while recbuf keeps them as raw data? Do you have a suggestion how to check the effect of increasing it? How do I know if UDP packets were dropped on the network or on the buffer because my Erlang process didn't process them fast enough? @Andreas How did you compare gen_socket with gen_udp? Before going the nif way, I want to be sure that it improves my throughput. Why can a nif handle more UDP packets than gen_udp? I thought that the Erlang code should be the bottleneck in both cases. 
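[Editor's note] Both knobs under discussion here are inet options given when the socket is opened. A minimal sketch; the port number and sizes are illustrative, not tuning recommendations:

```erlang
%% Ask the kernel for a bigger receive buffer and let the driver pull
%% more datagrams per poll. Values here are purely illustrative.
{ok, Socket} = gen_udp:open(5000, [binary,
                                   {active, true},
                                   {recbuf, 1024 * 1024},   %% request ~1 MB
                                   {read_packets, 64}]),
%% recbuf is only a request; the kernel may cap it (e.g. via
%% net.core.rmem_max on Linux), so read back what was actually granted:
{ok, [{recbuf, Granted}]} = inet:getopts(Socket, [recbuf]).
```

If the granted value is smaller than requested, the cap has to be raised at the OS level before the option can take full effect.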
On Mon, Nov 21, 2016 at 10:29 AM, Andreas Schultz wrote: > On 11/19/2016 08:51 PM, Max Lapshin wrote: > >> ah, it is receive? >> >> We had to write special port driver to receive 500 mbit/s of UDP video >> multicast >> > > With [1] I'm able to forward (receive and send) 500mbit/s on a single > core. The > interface itself is a port driver combined with a nif and works only on > Linux > (and maybe OSX), but the actual forwarding code is pure Erlang [2]. > > Andreas > > [1]: https://github.com/travelping/gen_socket > [2]: https://github.com/travelping/gtp_u_edp/blob/master/src/gtp_ > u_edp_forwarder.erl > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz@REDACTED Mon Nov 21 10:10:16 2016 From: aschultz@REDACTED (Andreas Schultz) Date: Mon, 21 Nov 2016 10:10:16 +0100 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID: <4aed2f1b-5452-3aa1-4414-ea2b1bb0828b@tpip.net> On 11/21/2016 09:49 AM, pablo platt wrote: > @Raimo, I'll try increasing read_packets. > How does increasing read_packets work compared to increasing recbuf? > Both have the same effect only read_packets keep unhanded packets as Erlang messages while recbuf keep them as raw data? > > Do you have a suggestion how to check the effect of increasing it? > How do I know if UDP packets where dropped on the network or on the buffer because my Erlang process didn't process them fast enough? > > @Andreas > How did you compare gen_socket with gen_udp? > Before going the nif way, I want to be sure that it improves my throughput. > Why a nif can handle more UDP packets than gen_udp? I thought that the Erlang code should be the bottleneck in both cases. For one, I can force (as root) the recvbuf size to a very large value [1]. Also, the architecture of gen_socket for receiving packets is very different from gen_udp. 
With the latter, the inet kernel is reading the data off the socket and sending you a message with the data. With gen_socket you get a message telling you that data can be read from the socket and then you have to call recv yourself. That gives you the opportunity to read all pending input without having to go through the message send/receive. Andreas [1]: https://github.com/travelping/ergw/blob/master/src/gtp_socket.erl#L237 > > > On Mon, Nov 21, 2016 at 10:29 AM, Andreas Schultz > wrote: > > On 11/19/2016 08:51 PM, Max Lapshin wrote: > > ah, it is receive? > > We had to write special port driver to receive 500 mbit/s of UDP video multicast > > > With [1] I'm able to forward (receive and send) 500mbit/s on a single core. The > interface itself is a port driver combined with a nif and works only on Linux > (and maybe OSX), but the actual forwarding code is pure Erlang [2]. > > Andreas > > [1]: https://github.com/travelping/gen_socket > [2]: https://github.com/travelping/gtp_u_edp/blob/master/src/gtp_u_edp_forwarder.erl > > > From raimo+erlang-questions@REDACTED Mon Nov 21 10:30:07 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Mon, 21 Nov 2016 10:30:07 +0100 Subject: [erlang-questions] UDP buffers and packet loss In-Reply-To: References: <27B756C3-AB75-4F1F-A805-BEF0723896BA@gmail.com> Message-ID: <20161121093007.GA45647@erix.ericsson.se> On Mon, Nov 21, 2016 at 10:49:12AM +0200, pablo platt wrote: > @Raimo, I'll try increasing read_packets. > How does increasing read_packets work compared to increasing recbuf? > Both have the same effect only read_packets keep unhandled packets as Erlang > messages while recbuf keep them as raw data? I would not say they have the same effect. The kernel puts packets in your receive buffer and when it overflows it drops packets. The reason that it overflows is either that the packets are too large or that your application does not read them fast enough, or both. 
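[Editor's note] The read-everything-pending pattern described above can be approximated with plain gen_udp in passive mode. An illustrative sketch only, since gen_socket's readiness notifications work differently in detail:

```erlang
%% Drain every datagram already queued on a passive ({active, false})
%% socket, then return. A timeout of 0 makes recv a non-blocking poll.
drain(Socket, Handle) ->
    case gen_udp:recv(Socket, 0, 0) of
        {ok, {_Addr, _Port, Packet}} ->
            Handle(Packet),
            drain(Socket, Handle);
        {error, timeout} ->
            ok
    end.
```

The point of the design is that one readiness notification can be amortized over many datagrams, instead of paying one Erlang message per packet.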
Your application has to manage both a continuous flow of packets and a certain amount of load spikes, where load spikes can be either due to a temporarily high packet rate or due to large packets. A large receive buffer size helps you to handle large packets or temporarily high packet rates at the expense of memory for the socket. There are also kernel parameters for system UDP buffers that can be increased so the kernel does not drop packets even before sorting them to the right socket. A large read_packets helps you handle a high packet rate at the expense of executing erlang code slower. So if you set read_packets too high you will not drop packets, but the erlang node will choke under the packet rate, maybe by running out of memory for the unprocessed packets. > > Do you have a suggestion how to check the effect of increasing it? > How do I know if UDP packets were dropped on the network or on the buffer > because my Erlang process didn't process them fast enough? I am sorry I do not know exactly how to read proper statistics for this... If you increase read_packets and see smaller packet loss, that is fine. But if you increase it more and do not see smaller packet loss then it is high enough for this load case. If you can choke the node with UDP packets then you have increased it too much. :-) > > @Andreas > How did you compare gen_socket with gen_udp? > Before going the nif way, I want to be sure that it improves my throughput. > Why a nif can handle more UDP packets than gen_udp? I thought that the > Erlang code should be the bottleneck in both cases. > > > On Mon, Nov 21, 2016 at 10:29 AM, Andreas Schultz wrote: > > > On 11/19/2016 08:51 PM, Max Lapshin wrote: > > > >> ah, it is receive? > >> > >> We had to write special port driver to receive 500 mbit/s of UDP video > >> multicast > >> > > > > With [1] I'm able to forward (receive and send) 500mbit/s on a single > > core. 
The > > interface itself is a port driver combined with a nif and works only on > > Linux > > (and maybe OSX), but the actual forwarding code is pure Erlang [2]. > > > > Andreas > > > > [1]: https://github.com/travelping/gen_socket > > [2]: https://github.com/travelping/gtp_u_edp/blob/master/src/gtp_ > > u_edp_forwarder.erl -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From jesper.louis.andersen@REDACTED Mon Nov 21 16:12:23 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 21 Nov 2016 15:12:23 +0000 Subject: [erlang-questions] send after mechanics In-Reply-To: References: Message-ID: On Sun, Nov 20, 2016 at 10:13 PM nato wrote: > On a whim, I started an erlang shell, counted the processes (some 25), > then called `erlang:send_after/3` -- I was surprised to see that the > count of processes never went up. Do these timer routines not spawn a > new process!? Seems like a perfect candidate for such. 'Would love > some hand-holding on how this all works before I stare at the source. > In a non-optimizing implementation of Erlang, you might indeed implement send-after as a process spawn. Something akin to: send_after(Wait, Target, Msg) -> spawn(fun() -> receive after Wait -> Target ! Msg end end). In practice you would need to extend the receive clause so it can handle a cancel-message in the mailbox and return the right value, but this is somewhat trivial (though it is gritty). In an optimizing Erlang implementation, you want timers to be internal in the system. You want precise wakeup of timers when they trigger and you want them to have little memory overhead. This guarantees the soft real-time properties and makes timers fast. So timers are implemented directly in the runtime, separate from processes. 
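[Editor's note] The cancel extension mentioned in passing can be sketched as follows. Illustrative only: the real erlang:send_after/3 lives inside the runtime, returns a timer reference, and is canceled with erlang:cancel_timer/1.

```erlang
%% Process-based timer with cancellation; a hypothetical helper, not
%% the real erlang:send_after/3.
send_after(Wait, Target, Msg) ->
    spawn(fun() ->
                  receive
                      cancel -> ok          %% canceled before firing
                  after Wait ->
                      Target ! Msg          %% timer fired
                  end
          end).

cancel(TimerPid) ->
    TimerPid ! cancel,
    ok.
```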
The "trick" of having a nice "consistent" and "simple" theory which is then torn down in an industrial practical setting is something you see pretty often. On one hand, you can understand the system based on its simple model. And you can verify the correctness of that model against the practical and fast variant of the system. In fact, there is a direct path from this observation to monads and their use in proofs. Another for-efficiency part of the Erlang system is ETS tables, which are implemented not as processes but as highly optimized concurrently accessible memory tables. The reason is that Key-Value lookups in mnesia had the requirement of microsecond latency, and this is not achievable in a process if the system is under heavy load and that process is at the end of the run-queue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lheadley@REDACTED Mon Nov 21 19:29:48 2016 From: lheadley@REDACTED (Lyn Headley) Date: Mon, 21 Nov 2016 10:29:48 -0800 Subject: [erlang-questions] Preventing memory crashes Message-ID: Erlangers, I am looking to write a process that makes sure my node does not run out of memory and crash. What I would like is a function F that returns a number that represents the amount of memory my node is using. If that number gets above a threshold T, I will kill a process (carefully and safely, possibly after dumping its ets table to disk). After the process exits, I would immediately call the same function F and see whether the number it returns is now less than my threshold T. If not, kill another process, etc. Can I easily write this function F? Does the 'total' number from erlang:memory serve my needs here? In particular, will it go down immediately after I kill a process? 
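[Editor's note] The loop described in this question (F = erlang:memory(total), threshold T, kill and re-check) can be sketched as below. This is a toy illustration, not a recommendation: it may pick OTP system processes as victims, and the replies in this thread point to safer tools such as memsup, erlang:system_monitor/2 and max_heap_size.

```erlang
-module(mem_guard).
-export([start/1]).

%% Spawn a watchdog that checks total node memory once per second and
%% kills the largest process while above ThresholdBytes.
start(ThresholdBytes) ->
    spawn(fun() -> loop(ThresholdBytes) end).

loop(Threshold) ->
    case erlang:memory(total) > Threshold of
        true  -> kill_largest();
        false -> ok
    end,
    timer:sleep(1000),
    loop(Threshold).

kill_largest() ->
    %% process_info/2 returns undefined for dead pids; the generator
    %% pattern below silently skips those.
    Sizes = [{Mem, P} || P <- erlang:processes(),
                         P =/= self(),
                         {memory, Mem} <- [erlang:process_info(P, memory)]],
    case lists:reverse(lists:sort(Sizes)) of
        [{_Mem, Victim} | _] -> exit(Victim, kill);
        []                   -> ok
    end.
```

Note that erlang:memory(total) does not necessarily drop immediately after the kill, since the victim's heap is only reclaimed once the exit signal has been processed, so re-checking right away can over-kill.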
Given a carefully chosen value for threshold T, and assuming no atoms or binaries are being allocated, (heap data and ets tables are the main drivers of memory use), will this strategy indeed prevent my node from crashing as long as I kill processes faster than they can grow? Other thoughts? -Lyn From roger@REDACTED Mon Nov 21 19:36:46 2016 From: roger@REDACTED (Roger Lipscombe) Date: Mon, 21 Nov 2016 18:36:46 +0000 Subject: [erlang-questions] Preventing memory crashes In-Reply-To: References: Message-ID: It sounds like the existing high memory alarm would be enough...? See http://erlang.org/doc/man/memsup.html On 21 November 2016 at 18:29, Lyn Headley wrote: > Erlangers, > > I am looking to write a process that makes sure my node does not run > out of memory and crash. What I would like is a function F that > returns a number that represents the amount of memory my node is > using. If that number gets above a threshold T, I will kill a process > (carefully and safely, possibly after dumping its ets table to disk). > After the process exits, I would immediately call the same function F > and see whether the number it returns is now less than my threshold T. > If not, kill another process, etc. > > Can I easily write this function F? Does the 'total' number from > erlang:memory serve my needs here? In particular, will it go down > immediately after I kill a process? > > Given a carefully chosen value for threshold T, and assuming no atoms > or binaries are being allocated, (heap data and ets tables are the > main drivers of memory use), will this strategy indeed prevent my node > from crashing as long as I kill processes faster than they can grow? > > Other thoughts? 
> > -Lyn > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From bchesneau@REDACTED Mon Nov 21 19:51:19 2016 From: bchesneau@REDACTED (Benoit Chesneau) Date: Mon, 21 Nov 2016 18:51:19 +0000 Subject: [erlang-questions] Preventing memory crashes In-Reply-To: References: Message-ID: or the memory montitor in rabbitmq On Mon, 21 Nov 2016 at 19:36, Roger Lipscombe wrote: > It sounds like the existing high memory alarm would be enough...? > > See http://erlang.org/doc/man/memsup.html > > On 21 November 2016 at 18:29, Lyn Headley wrote: > > Erlangers, > > > > I am looking to write a process that makes sure my node does not run > > out of memory and crash. What I would like is a function F that > > returns a number that represents the amount of memory my node is > > using. If that number gets above a threshold T, I will kill a process > > (carefully and safely, possibly after dumping its ets table to disk). > > After the process exits, I would immediately call the same function F > > and see whether the number it returns is now less than my threshold T. > > If not, kill another process, etc. > > > > Can I easily write this function F? Does the 'total' number from > > erlang:memory serve my needs here? In particular, will it go down > > immediately after I kill a process? > > > > Given a carefully chosen value for threshold T, and assuming no atoms > > or binaries are being allocated, (heap data and ets tables are the > > main drivers of memory use), will this strategy indeed prevent my node > > from crashing as long as I kill processes faster than they can grow? > > > > Other thoughts? 
> > > > -Lyn > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From taavi@REDACTED Mon Nov 21 19:56:05 2016 From: taavi@REDACTED (Taavi Talvik) Date: Mon, 21 Nov 2016 20:56:05 +0200 Subject: [erlang-questions] Preventing memory crashes In-Reply-To: References: Message-ID: You can monitor processes memory usage, and get signal, when it is over certain threshold with erlang:system_monitor/2 or just set limit and kill process with erlang:process_flag/2 Even set max heap size globally with erlang:system_flag/2 for all newly spawned processes with max_heap_size best regards, taavi > On 21 Nov 2016, at 20:29, Lyn Headley wrote: > > Erlangers, > > I am looking to write a process that makes sure my node does not run > out of memory and crash. What I would like is a function F that > returns a number that represents the amount of memory my node is > using. If that number gets above a threshold T, I will kill a process > (carefully and safely, possibly after dumping its ets table to disk). > After the process exits, I would immediately call the same function F > and see whether the number it returns is now less than my threshold T. > If not, kill another process, etc. > > Can I easily write this function F? Does the 'total' number from > erlang:memory serve my needs here? In particular, will it go down > immediately after I kill a process? 
> > Given a carefully chosen value for threshold T, and assuming no atoms > or binaries are being allocated, (heap data and ets tables are the > main drivers of memory use), will this strategy indeed prevent my node > from crashing as long as I kill processes faster than they can grow? > > Other thoughts? > > -Lyn > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions ? The single biggest problem in communication is the illusion that it has taken place. (George Bernard Shaw) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkbucc@REDACTED Mon Nov 21 20:03:54 2016 From: mkbucc@REDACTED (Mark Bucciarelli) Date: Mon, 21 Nov 2016 14:03:54 -0500 Subject: [erlang-questions] Preventing memory crashes In-Reply-To: References: Message-ID: <58334556.46bd370a.11fb6.8bc9@mx.google.com> The prometheus.erl project on github exposes that metric; you could look through the sources and see how they do it. -----Original Message----- From: "Lyn Headley" Sent: ?11/?21/?2016 13:29 To: "erlang-questions" Subject: [erlang-questions] Preventing memory crashes Erlangers, I am looking to write a process that makes sure my node does not run out of memory and crash. What I would like is a function F that returns a number that represents the amount of memory my node is using. If that number gets above a threshold T, I will kill a process (carefully and safely, possibly after dumping its ets table to disk). After the process exits, I would immediately call the same function F and see whether the number it returns is now less than my threshold T. If not, kill another process, etc. Can I easily write this function F? Does the 'total' number from erlang:memory serve my needs here? In particular, will it go down immediately after I kill a process? 
Given a carefully chosen value for threshold T, and assuming no atoms or binaries are being allocated, (heap data and ets tables are the main drivers of memory use), will this strategy indeed prevent my node from crashing as long as I kill processes faster than they can grow? Other thoughts? -Lyn _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Mon Nov 21 22:35:20 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 21 Nov 2016 21:35:20 +0000 Subject: [erlang-questions] Preventing memory crashes In-Reply-To: References: Message-ID: On Mon, Nov 21, 2016 at 7:29 PM Lyn Headley wrote: > > > Other thoughts? > > Many of the suggestions in this thread are good. Let me put in a counterpoint: Detect the OOM situation, but build your system to eventually cope with it through a node crash. First, define the capacity of your system. Once you hit the capacity limit, don't add more work to the system. Gracefully reject work, and handle the situation by adding more nodes if you need more capacity. You need to know the engineering capacity (nominal operation) and peak capacity (when things start going seriously wrong). Second, the system_monitor, suggested by Taavi Talvik, is usually a good idea to enable, since you can log whenever a single process uses more than, say, 5% of the system's memory. Also look into the alarm_handler and piggyback on set and cleared alarms to warn about when things start going wrong. Here is why: fatal errors are like many boss enemies in computer games - they telegraph their attacks long before they happen. A fatal error usually makes itself known at a smaller scale long before the fatal error takes down the system. Third, if things start going wrong, chances are you can't gracefully recover from them. 
Better wipe the whole node and let some other node take over the work. If possible, build your system such that it can start off a safe invariant state it periodically stores back to disk. Any system reaching memory limits is susceptible to a rather fast death through a SIGKILL anyway. The alternative solution is to buy a Turing Machine with infinite tape... -------------- next part -------------- An HTML attachment was scrubbed... URL: From Oliver.Korpilla@REDACTED Wed Nov 23 08:19:32 2016 From: Oliver.Korpilla@REDACTED (Oliver Korpilla) Date: Wed, 23 Nov 2016 08:19:32 +0100 Subject: [erlang-questions] Other supervisor implementations? Message-ID: Hello. I asked some questions about the various supervisors available in OTP a while ago. The impression I was left with is that once you have to go dynamic/simple_one_for_one you essentially lose most features you'd want out of supervision. Are there any other, alternate supervisor implementations out there to extend the range of options? Thank you, Oliver From shobhitpratap1@REDACTED Tue Nov 22 22:04:59 2016 From: shobhitpratap1@REDACTED (Shobhit Singh) Date: Tue, 22 Nov 2016 22:04:59 +0100 Subject: [erlang-questions] CQErl - How to update map data type Message-ID: Hello All, I am having a hard time coming up with the syntax for updating a map using cqerl. I have tried the following till now and it doesn't work statement = "UPDATE keyspace SET data[?] = :data_value WHERE scope = ?;", values = [{data,'Key Value'},{data_value, 'Data Value',{scope, 'Scope Value'}] What am I doing wrong here? Also setting ttl does not work statement = "INSERT INTO data(scope) VALUES(?) USING ttl ?", values = [{scope, 'Scope Value'},{[ttl], '3650'}] Anyone, any idea? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex0player@REDACTED Wed Nov 23 10:38:54 2016 From: alex0player@REDACTED (Alex S.) Date: Wed, 23 Nov 2016 12:38:54 +0300 Subject: [erlang-questions] Other supervisor implementations? 
In-Reply-To: References: Message-ID: <01FA5FEC-2CA9-4A7A-9BFF-A07A1E6D82CC@gmail.com> On 23 Nov 2016, at 10:19, Oliver Korpilla wrote: > > Hello. > > I asked some questions about the various supervisors available in OTP a while ago. The impression I was left with is that once you have to go dynamic/simple_one_for_one you essentially lose most features you'd want out of supervision. With simple_one_for_one you lose only in-order termination (and indeed, there's no semantic order you can impose on the children, as the order of launch is accidental and subject to races), and the ability to launch an arbitrary child spec. That's about it. (The upgrades might also suck if you decide to completely change the childspec: your children will need to handle the upgrade themselves somehow.) > > Are there any other, alternate supervisor implementations out there to extend the range of options? > > Thank you, > Oliver From oliver.korpilla@REDACTED Wed Nov 23 11:50:58 2016 From: oliver.korpilla@REDACTED (Oliver Korpilla) Date: Wed, 23 Nov 2016 11:50:58 +0100 Subject: [erlang-questions] Other supervisor implementations? In-Reply-To: <01FA5FEC-2CA9-4A7A-9BFF-A07A1E6D82CC@gmail.com> References: <01FA5FEC-2CA9-4A7A-9BFF-A07A1E6D82CC@gmail.com> Message-ID: <008A4C27-56B2-4572-A81C-11CA4A267BD6@gmx.de> Hello. I was left with the impression that restart on error - like with "transient" - was also not available. Only the equivalent of "temporary". Regards, Oliver On November 23, 2016 10:38:54 AM CET, "Alex S." wrote: >On 23 Nov 2016, at 10:19, Oliver Korpilla >wrote: >> >> Hello. >> >> I asked some questions about the various supervisors available in OTP >a while ago. The impression I was left with is that once you have to go >dynamic/simple_one_for_one you essentially lose most features you'd >want out of supervision. 
>With simple_one_for_one you lose only in-order termination (and indeed, >there's no semantic order you can impose on the children, as the order >of launch is accidental and subject to races), >and the ability to launch an arbitrary child spec. That's about it. >(The upgrades might also suck if you decide to completely change the >childspec: your children will need to handle the upgrade themselves >somehow.) >> >> Are there any other, alternate supervisor implementations out there >to extend the range of options? >> >> Thank you, >> Oliver -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex0player@REDACTED Wed Nov 23 13:13:29 2016 From: alex0player@REDACTED (Alex S.) Date: Wed, 23 Nov 2016 15:13:29 +0300 Subject: [erlang-questions] Other supervisor implementations? In-Reply-To: <008A4C27-56B2-4572-A81C-11CA4A267BD6@gmx.de> References: <01FA5FEC-2CA9-4A7A-9BFF-A07A1E6D82CC@gmail.com> <008A4C27-56B2-4572-A81C-11CA4A267BD6@gmx.de> Message-ID: <72733EB3-4A52-45B5-AE33-E0611C42F591@gmail.com> > On 23 Nov 2016, at 13:50, Oliver Korpilla wrote: > > Hello. > > I was left with the impression that restart on error - like with "transient" - was also not available. Only the equivalent of "temporary". > > Regards, > Oliver That is not true, permanent and transient restarts are still possible. MFArgs are kept in a dictionary keyed by PID, so when the exit signal is received, the process is restarted. I've used it myself, and it works like a charm. > > On November 23, 2016 10:38:54 AM CET, "Alex S." wrote: > On 23 Nov 2016, at 10:19, Oliver Korpilla wrote: > > Hello. > > I asked some questions about the various supervisors available in OTP a while ago. The impression I was left with is that once you have to go dynamic/simple_one_for_one you essentially lose most features you'd want out of supervision. 
> With simple_one_for_one you lose only in-order termination (and indeed, there's no semantic order you can impose on the children, as the order of launch is accidental and subject to races), > and the ability to launch an arbitrary child spec. That's about it. > (The upgrades might also suck if you decide to completely change the childspec: your children will need to handle the upgrade themselves somehow.) > > Are there any other, alternate supervisor implementations out there to extend the range of options? > > Thank you, > Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Wed Nov 23 13:47:56 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Wed, 23 Nov 2016 15:47:56 +0300 Subject: [erlang-questions] Help with allocator tuning Message-ID: Hi. I'm trying to get some idea about the proper way of configuring allocators. We are running our erlang software flussonic; it captures around 1.5 gbit/s of input via TCP, allocates a lot of binaries from 500 bytes to 1500 bytes and then prepares large binaries around 1 megabyte (video blobs). Around 250 such blobs are produced per second. When I launch erlang with: +stbt db +sbwt short +swt very_low +sfwi 20 +zebwt short +sub true +MBas aoffcaobf +MBacul 0 I get around 2000 mmaps and munmaps per second and recon_alloc tells that I have 98% of usage. It looks rather strange, so I tried to play with tunings and switched to: +stbt db +sbwt short +swt very_low +sfwi 20 +zebwt short +sub true +MBas aoffcaobf +MBsbct 4096 +MBacul de +Mulmbcs 131071 +Mumbcgs 1 +Musmbcs 4095 I'm not quite sure that my settings are sane, but I tried to make very large multiblock areas and try to store my binaries inside large areas (not single block carrier, but multiblock carrier). With these settings I get about 50 mmap/munmap per second. Seems that hugepages are not used (frankly speaking I thought to autoenable them). 
But with these settings I get about 50% of usage and all servers are quickly getting killed by the OOM killer. With these flags I tried to hint the allocator to create 128MB large areas and to put objects smaller than 4 megabytes into those areas. So my questions are: 1) should I worry about 2000 mmap/munmap syscalls per second? 2) should I try to reduce usage of sbct and increase usage of mbct? 3) are my flags to erlang VM compatible with each other? 4) maybe some other hints? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukas@REDACTED Wed Nov 23 15:22:05 2016 From: lukas@REDACTED (Lukas Larsson) Date: Wed, 23 Nov 2016 15:22:05 +0100 Subject: [erlang-questions] Help with allocator tuning In-Reply-To: References: Message-ID: Hello, On Wed, Nov 23, 2016 at 1:47 PM, Max Lapshin wrote: > We are running our erlang software flussonic, it captures around 1,5 > gbit/s of input via TCP, allocates lot of binaries from 500 bytes to 1500 > bytes and then prepares large binaries around 1 megabyte (video blobs). > There are produced around 250 such blobs per second. > Have you verified that these assumptions are correct via http://ferd.github.io/recon/recon_alloc.html#average_block_sizes-1? Make sure to take multiple snapshots of current, as max is not really all that useful for this measurement. +stbt db +sbwt short +swt very_low +sfwi 20 +zebwt short +sub true +MBas > aoffcaobf +MBacul 0 > The carrier oriented allocator strategies (the ones with the longest names, i.e. CARRIERSTRATcBLOCKSTRAT) were specifically introduced to enable carrier migration. So using one of those together with disabling acul makes little sense. You most likely want to run +MBas aobf if you disable carrier migration. > I get around 2000 mmaps and munmaps per second and recon_alloc tells that > I have 98% of usage. It looks rather strange, so I tried to play with > tunings and switched to: > There is a mseg cache that can be used to cache mmap:ed segments. 
By default it is set to something like 10 segments, which seems to be too low for your usecase. You can increase the number of segments cached through the +MMmcs switch. The max value is 30, but I know that some other users have tried to use much higher numbers by changing the code in erts and that has been better for them. You may want to take a look at the cache hit rates that you get from http://ferd.github.io/recon/recon_alloc.html#cache_hit_rates-0, to see if your changes have any effect. +stbt db +sbwt short +swt very_low +sfwi 20 +zebwt short +sub true +MBas > aoffcaobf +MBsbct 4096 +MBacul de +Mulmbcs 131071 +Mumbcgs 1 +Musmbcs 4095 > If it is specifically binaries that you are looking at, I would just change the config for +MB and not +Mu. Also having a smaller smbcs than sbct seems a bit odd, why not just up the smbcs to the same value as lmbcs? > I'm not quite sure that my settings are sane, but I tried to make very > large multiblock areas and try to store my binaries inside large areas (not > single block carrier, but multiblock carrier). > > With these settings I get about 50 mmap/munmap per second. Seems that > hugepages are not used (frankly speaking I thought to autoenable them). > If you align the mbcs with the size of transparent huge pages that could be beneficial. On my system they are set to 2 MB, is the 128 MB that you are trying to hit what they are set to on your system? > But with these settings I get about 50% of usage and all servers are > quickly getting killer by OOM killer. > This is quite odd, it almost feels like the carrier pool is misbehaving. Have you checked if a large amount of the carriers are in the carrier pool when this happens? Maybe try to lower the usage needed to put them in the pool, i.e. something like "+MBacul 10". I assume that you are running a reasonably late version of Erlang/OTP? I remember that we did some bug fixes a while back in regards to the pool. 
With these flags I tried to hint the allocator to create 128MB areas and > to place objects smaller than 4 megabytes into these areas. > So my questions are: > > 1) should I worry about 2000 mmap/munmap syscalls per second? > Depends on how many schedulers you have running. I don't have any figures on how many mmaps per scheduler per second is good, but I would say that the fewer syscalls you do, the better. > 2) should I try to reduce usage of sbct and increase usage of mbct? > It's a bit of a tradeoff. Having too-large items in the mbc allocations makes it harder for them to find spots to place blocks, while on the other hand the mbc allocators are better at scalability than the sbc allocators. So by placing too-large blocks in the mbcs, you get fragmentation issues. But if you place too many blocks in the sbcs, you get scalability issues instead :) In general you want the majority of your allocations to go to mbcs; what the ratio should be is hard to tell. > 3) are my flags to the Erlang VM compatible with each other? > Seem to be. > 4) maybe some other hints? > Measure and try to really understand what erlang:system_info({allocator,binary_alloc}) is giving you. recon_alloc is a great tool, but it is built with an interface to find the specific problems that we have encountered, and it hides information from you. Most of the time I end up writing small scripts that analyze the data in a new way, looking for exactly what I want to see over time. Also, reading the erts_alloc documentation very carefully is well worth doing. There is also the possibility to completely disable erts_alloc and fall back to malloc; you do that via "+Mea min". Doing that, you lose a bunch of nice statistics and scalability features. However, more man-hours have been spent optimizing the malloc implementations, so they are a little bit faster per allocation. Lukas -------------- next part -------------- An HTML attachment was scrubbed...
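[Editor's note] Lukas's advice about scripting over the raw allocator data could start from something like the sketch below. This is not one of Lukas's actual scripts, and the property names nested inside the per-instance info lists vary between OTP releases, so dump the raw term first and adapt:

```erlang
%% Sketch: collect the mbcs and sbcs sections for every binary_alloc
%% instance, as a starting point for computing your own ratios over time.
%% The nested keys (blocks_size, carriers_size, ...) differ between
%% OTP releases, so inspect the raw term before relying on them.
binary_alloc_info() ->
    [{Instance,
      proplists:get_value(mbcs, Info),
      proplists:get_value(sbcs, Info)}
     || {instance, Instance, Info}
            <- erlang:system_info({allocator, binary_alloc})].
```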
URL: From max.lapshin@REDACTED Wed Nov 23 22:16:07 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 24 Nov 2016 00:16:07 +0300 Subject: [erlang-questions] Help with allocator tuning In-Reply-To: References: Message-ID: Lukas, thank you a lot! +MBas aoffcaobf +MMmcs 30 +MBsbct 2048 +MBacul 10 +MBlmbcs 2048 +MBsmbcs 2048 These settings look very good. Scheduler utilization is very good now (screenshot attached). Hugepages are not used; recon_alloc:memory(usage) shows 78% (rather good); recon_alloc:cache_hit_rates() shows 99%; recon_alloc:sbcs_to_mbcs(current) shows less than 0,2%. Will check these settings on different installations. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zkessin@REDACTED Thu Nov 24 08:10:21 2016 From: zkessin@REDACTED (Zachary Kessin) Date: Thu, 24 Nov 2016 09:10:21 +0200 Subject: [erlang-questions] Including an Elixir Hex Dep in Erlang Message-ID: I want to write a project in Erlang, but there are a few Elixir packages that would make life simpler. Is there an easy way to have rebar3 compile and build them? -- Zach Kessin http://elm-test.com Twitter: @zkessin Skype: zachkessin -------------- next part -------------- An HTML attachment was scrubbed... URL: From bchesneau@REDACTED Thu Nov 24 08:44:18 2016 From: bchesneau@REDACTED (Benoit Chesneau) Date: Thu, 24 Nov 2016 07:44:18 +0000 Subject: [erlang-questions] Including an Elixir Hex Dep in Erlang In-Reply-To: References: Message-ID: On Thu, Nov 24, 2016 at 8:10 AM Zachary Kessin wrote: > I want to write a project in Erlang, but there are a few Elixir packages > that would make life simpler. Is there an easy way to have rebar3 compile > and build them? > > > rebar3_elixir_compile does exactly that: https://github.com/barrel-db/rebar3_elixir_compile - benoît -------------- next part -------------- An HTML attachment was scrubbed...
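[Editor's note] Wiring that plugin into a project looks roughly like the sketch below. The package name, version, and the exact dep-tuple syntax are my assumptions; check the rebar3_elixir_compile README for the form it actually expects:

```erlang
%% rebar.config (sketch; "some_elixir_pkg" and its version are
%% hypothetical placeholders, and the {elixir, ...} dep format should
%% be verified against the plugin's README)
{plugins, [rebar3_elixir_compile]}.

{deps, [
    {some_elixir_pkg, {elixir, "some_elixir_pkg", "1.0.0"}}
]}.
```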
URL: From Oliver.Korpilla@REDACTED Thu Nov 24 08:56:05 2016 From: Oliver.Korpilla@REDACTED (Oliver Korpilla) Date: Thu, 24 Nov 2016 08:56:05 +0100 Subject: [erlang-questions] Including an Elixir Hex Dep in Erlang In-Reply-To: References: Message-ID: Hello, Zach. I hope you will not consider this off-topic, but we've also found that working with Elixir's mix integrates perfectly for us. Erlang and Elixir code live side by side according to their respective preferred directory names (include/src for Erlang and lib for Elixir) and integrate just fine. We usually deploy with "mix escript.build", which generates an escript that already contains the byte code of our application, including the Elixir distribution. In fact we deploy several such "standalone" applications built on the same OTP release installed on our target. The integration is really seamless. Currently we're also preparing to roll out a port app written in C, and this integrates nicely. If you already intend to use some Elixir, then rolling out with mix can do the job just nicely. Several of our developers have also written mix tasks to facilitate this, with good results. - Oliver Sent: Thursday, 24 November 2016 at 08:10 From: "Zachary Kessin" To: "Erlang Questions" Subject: [erlang-questions] Including an Elixir Hex Dep in Erlang I want to write a project in Erlang, but there are a few Elixir packages that would make life simpler. Is there an easy way to have rebar3 compile and build them?
-- Zach Kessin http://elm-test.com Twitter: @zkessin[https://twitter.com/zkessin] Skype: zachkessin_______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions[http://erlang.org/mailman/listinfo/erlang-questions] From vans_163@REDACTED Fri Nov 25 04:53:32 2016 From: vans_163@REDACTED (Vans S) Date: Fri, 25 Nov 2016 03:53:32 +0000 (UTC) Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> Message-ID: <1851734291.107205.1480046012268@mail.yahoo.com> So far, if I use open_port({spawn, Params}, [stderr_to_stdout, exit_status]) to create an OS process and the Erlang process that created it dies, the OS process stays alive. I want the OS process to die with the Erlang process; this is often the case if the process reads stdin. Currently this process does not read stdin, and it does not die when the Erlang process dies. Any ideas how to do this? From g@REDACTED Fri Nov 25 10:40:53 2016 From: g@REDACTED (Guilherme Andrade) Date: Fri, 25 Nov 2016 09:40:53 +0000 Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? In-Reply-To: References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> Message-ID: On 25 Nov 2016 3:53 a.m., "Vans S" wrote: > > So far, if I use open_port({spawn, Params}, [stderr_to_stdout, exit_status]) to create an OS process and the Erlang process that created it dies, the OS process stays alive. I want the OS process to die with the Erlang process; this is often the case if the process reads stdin. > > Currently this process does not read stdin, and it does not die when the Erlang process dies. > > Any ideas how to do this? erlexec[1] might be what you're looking for.
[1]: https://github.com/saleyn/erlexec > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From raimo+erlang-questions@REDACTED Fri Nov 25 11:05:11 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Fri, 25 Nov 2016 11:05:11 +0100 Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? In-Reply-To: <1851734291.107205.1480046012268@mail.yahoo.com> References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> Message-ID: <20161125100511.GA42316@erix.ericsson.se> On Fri, Nov 25, 2016 at 03:53:32AM +0000, Vans S wrote: > So far if I use open_port({spawn, Params}, [stderr_to_stdout, exit_status]) to create an OS process, if the erlang process that created it dies, the os process stays alive. I want the os process to die with the erlang process, often this is the case if the process reads stdin. > > Currently this process does not read stdin and it does not die when the erlang process dies. > > Any ideas how to do this? I have used a wrapper script that prints the process number on stdout and then execs the target program. Then I can kill the spawned process using os:cmd or such. -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From jose.valim@REDACTED Fri Nov 25 11:27:09 2016 From: jose.valim@REDACTED (=?UTF-8?Q?Jos=C3=A9_Valim?=) Date: Fri, 25 Nov 2016 11:27:09 +0100 Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? 
In-Reply-To: <20161125100511.GA42316@erix.ericsson.se> References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> <20161125100511.GA42316@erix.ericsson.se> Message-ID: If I remember correctly, the stdin of the program started with open_port will be closed when the Erlang VM terminates. So if the OS process is listening to stdin and exiting when stdin closes, you don't need to worry about terminating it. When it doesn't do so, you can also write a script that spawns the OS process, reads stdin until it closes, and then terminates the OS process. Something like: #!/bin/bash name=$1 shift $name $* pid=$! while read line ; do : done < /dev/stdin kill -KILL $pid If you save it as "wrap" then you can start your OS process as "wrap NAME ARGS". This can be handy if you cannot rely on C/C++ extensions. If anyone knows any drawback to such an approach, I would love to hear it. *José Valim* www.plataformatec.com.br Skype: jv.ptec Founder and Director of R&D -------------- next part -------------- An HTML attachment was scrubbed... URL: From askjuise@REDACTED Fri Nov 25 11:34:41 2016 From: askjuise@REDACTED (Alexander Petrovsky) Date: Fri, 25 Nov 2016 13:34:41 +0300 Subject: [erlang-questions] Logs and stdout [OFFTOPIC] Message-ID: Hello! At my work I have to write in Golang, and with my Erlang background I am used to writing logs to a file on disk via lager; for me it's very convenient, and I see some pros: - We can route logs by application and control them more flexibly; - We can control when, what, and how data is flushed to disk, so we have some guarantees. For me, the con of stdout/stderr is that stdX is always a pipe, so when the log reader gets stuck, my whole app gets stuck too. My (to me) very strange Golang programmer colleagues say: no, it's wrong and bad, we must all write logs to stdout; here are their cons for writing to log files: - The disk can get corrupted!
Write to stdout, have journald pick it up, and route it somewhere. Then there is no disk point of failure any more; your app now depends only on the network; - Stdout is more flexible, it's more cross-platform; - Hey, did you know about the 12 factor app (*oh my god*)? Sorry for the pain-filled, holy-war topic, but I'm very interested in what the Erlang community thinks about it. -- Александр Петровский / Alexander Petrovsky, Skype: askjuise Phone: +7 914 8 820 815 -------------- next part -------------- An HTML attachment was scrubbed... URL: From essen@REDACTED Fri Nov 25 12:00:33 2016 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Fri, 25 Nov 2016 12:00:33 +0100 Subject: [erlang-questions] Logs and stdout [OFFTOPIC] In-Reply-To: References: Message-ID: There are no better solutions, only pros and cons. The good thing about lager is that all solutions are available and can be stacked, so you can customize where your logs go as you will. You can for example send them both to disk and to journald directly with https://github.com/travelping/lager_journald_backend - this one allows setting extra fields; I am not sure you can do that when redirecting stdout. Then you get the best of both worlds. When writing logs to files, there shouldn't be an assumption that disks are involved: you could very well write to a pseudo partition mounted as a file system, to a remote partition mounted locally, to a memory file system... Stdout is not better supported across platforms. The Windows task manager and services will not log stdout by default, for example. On Unix, it largely depends on how the node is started. If your node is an always-on server managed by systemd/init, then sure, but if you make an interactive application this output will be dropped. Stdout also doesn't sound more flexible; see the comment about extra fields in journald, for example. Different log storages provide different extensions that you may wish to take advantage of.
The 12 factor app document reads very much like a "works for us". There are many more scenarios than the ones it covers. Saving to log files on disk works for my customers as far as I know. In their case it's fine to lose some logs because of disk corruption, as the logs are used for debugging purposes only. Your mileage may vary. On 11/25/2016 11:34 AM, Alexander Petrovsky wrote: > Hello! > > At my work I have to write in Golang, and with my Erlang background I > am used to writing logs to a file on disk via lager; for me it's very > convenient, and I see some pros: > > - We can route logs by application and control them more flexibly; > - We can control when, what, and how data is flushed to disk, so we have > some guarantees. > > For me, the con of stdout/stderr is that stdX is always a pipe, so when the log reader > gets stuck, my whole app gets stuck too. > > My (to me) very strange Golang programmer colleagues say: no, > it's wrong and bad, we must all write logs to stdout; here are their cons > for writing to log files: > - The disk can get corrupted! Write to stdout, have journald pick it up, and route it > somewhere. Then there is no disk point of failure any more; your app now > depends only on the network; > - Stdout is more flexible, it's more cross-platform; > - Hey, did you know about the 12 factor app (*oh my god*)? > > Sorry for the pain-filled, holy-war topic, but I'm very > interested in what the Erlang community thinks about it. > > > -- > Александр Петровский / Alexander Petrovsky, > > Skype: askjuise > Phone: +7 914 8 820 815 > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- Loïc Hoguin https://ninenines.eu From per@REDACTED Fri Nov 25 12:33:09 2016 From: per@REDACTED (Per Hedeland) Date: Fri, 25 Nov 2016 12:33:09 +0100 Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill?
In-Reply-To: References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> <20161125100511.GA42316@erix.ericsson.se> Message-ID: <58382175.8050103@hedeland.org> On 2016-11-25 11:27, José Valim wrote: > If I remember correctly, the stdin of the program started with open_port will be closed when the Erlang VM terminates. Right - this is guaranteed by the OS kernel, which closes all open file descriptors for a process when it terminates. Should be true even for Windows with whatever is used to communicate with port programs there. > So if the OS process is listening to stdin and exiting when the stdin closes, you > don't need to worry about terminating it. Yes. And this is the *only* way to ensure that the port program *always* terminates when it should - it covers all of - port is explicitly closed - port-owning Erlang process terminates - Erlang VM terminates With Raimo's suggestion, the port program will continue to run if the Erlang VM terminates "abruptly" (e.g. killed by the OOM-killer, or failing to allocate memory). The design that stipulates that port programs should (attempt to) read stdin and terminate when they see EOF wasn't done on a whim... > When it doesn't do so, you can also write a script that spawns the OS process, reads stdin until it closes, and then terminates the OS process. Something like: > > #!/bin/bash > name=$1 > shift > $name $* > pid=$! > while read line ; do > : > done < /dev/stdin > kill -KILL $pid > > > If you save it as "wrap" then you can start your OS process as "wrap NAME ARGS". This can be handy if you cannot rely on C/C++ extensions. If anyone knows any drawback to such an approach, I would love > to hear it. Well, your script is missing a '&' on the "$name $*" line to make the "target" program run in the (script's) background, but otherwise it should be fine.
I'd like to suggest a bit of simplification and increased portability though: #!/bin/sh "$@" & pid=$! while read line ; do : done kill -KILL $pid (And you could of course use something "nicer" than -KILL, e.g. -TERM, if you are sure that it will be sufficient to terminate the program.) --Per Hedeland From Dinislam.Salikhov@REDACTED Fri Nov 25 13:20:01 2016 From: Dinislam.Salikhov@REDACTED (Salikhov Dinislam) Date: Fri, 25 Nov 2016 15:20:01 +0300 Subject: [erlang-questions] load_file() and purge() Message-ID: <6ac0a92a-2b06-4452-17ae-cd4b4e7e1046@kaspersky.com> Hi! Let's assume there is a module forty_two.beam loaded by the VM. Then the module is updated and hot loaded: > code:load_file(forty_two). {module, forty_two} Then let's assume that there is no process using an old version of the module (for example, the module contains only pure functions). Despite that, the following attempt to update the module fails: > code:load_file(forty_two). Loading of forty_two.beam failed: not_purged {error,not_purged} Is there any rationale why the old code is not automatically purged? IMO, it would be convenient if an _unused_ old version of the code were implicitly removed in such a case. Salikhov Dinislam From alex0player@REDACTED Fri Nov 25 13:27:01 2016 From: alex0player@REDACTED (Alex S.) Date: Fri, 25 Nov 2016 15:27:01 +0300 Subject: [erlang-questions] load_file() and purge() In-Reply-To: <6ac0a92a-2b06-4452-17ae-cd4b4e7e1046@kaspersky.com> References: <6ac0a92a-2b06-4452-17ae-cd4b4e7e1046@kaspersky.com> Message-ID: <4881896B-7589-4428-A264-427AC7A41BD3@gmail.com> > 25 нояб. 2016 г., в 15:20, Salikhov Dinislam написал(а): > > Hi! > > Let's assume there is a module forty_two.beam loaded by the VM. > Then the module is updated and hot loaded: > > code:load_file(forty_two). > {module, forty_two} > > Then let's assume that there is no process using an old version of the module (for example, the module contains only pure functions).
Even pure functions are executed by a process, and that process can run out of reductions in the middle of a call and stall, potentially indefinitely. Detecting old code would require checking not only execution states but also funs (which no longer kill the process, but simply stop working, and that may be undesired). From per@REDACTED Fri Nov 25 13:38:46 2016 From: per@REDACTED (Per Hedeland) Date: Fri, 25 Nov 2016 13:38:46 +0100 Subject: [erlang-questions] load_file() and purge() In-Reply-To: <6ac0a92a-2b06-4452-17ae-cd4b4e7e1046@kaspersky.com> References: <6ac0a92a-2b06-4452-17ae-cd4b4e7e1046@kaspersky.com> Message-ID: <583830D6.7070301@hedeland.org> On 2016-11-25 13:20, Salikhov Dinislam wrote: > > Let's assume there is a module forty_two.beam loaded by the VM. > Then the module is updated and hot loaded: > > code:load_file(forty_two). > {module, forty_two} > > Then let's assume that there is no process using an old version of the module (for example, the module contains only pure functions). > Despite that, the following attempt to update the module fails: > > code:load_file(forty_two). > Loading of forty_two.beam failed: not_purged > {error,not_purged} > > Is there any rationale why the old code is not automatically purged? code:load_file/1 *never* purges old code. Try the handy shell function l/1, defined in c.erl: l(Mod) -> code:purge(Mod), code:load_file(Mod). > IMO, it would be convenient if an _unused_ old version of the code were implicitly removed in such a case. You could define your own load function with appropriate use of code:soft_purge/1 for that.
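[Editor's note] Such a load function might look like the sketch below. The name soft_load/1 is made up, but code:soft_purge/1 and code:load_file/1 are the standard APIs: soft_purge/1 returns false when some process still runs the old code, so the load only proceeds when the old version is genuinely unused:

```erlang
%% Sketch of a "load, purging old code only if unused" helper.
%% soft_load/1 is an illustrative name, not a standard function.
soft_load(Mod) ->
    case code:soft_purge(Mod) of
        true  -> code:load_file(Mod);
        false -> {error, old_code_in_use}
    end.
```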
--Per Hedeland From Dinislam.Salikhov@REDACTED Fri Nov 25 13:45:55 2016 From: Dinislam.Salikhov@REDACTED (Salikhov Dinislam) Date: Fri, 25 Nov 2016 15:45:55 +0300 Subject: [erlang-questions] load_file() and purge() In-Reply-To: <583830D6.7070301@hedeland.org> References: <6ac0a92a-2b06-4452-17ae-cd4b4e7e1046@kaspersky.com> <583830D6.7070301@hedeland.org> Message-ID: <6d9f05b9-d734-02e2-5bf1-b1e8f5135c1d@kaspersky.com> On 11/25/2016 03:38 PM, Per Hedeland wrote: > On 2016-11-25 13:20, Salikhov Dinislam wrote: >> Let's assume there is a module forty_two.beam loaded by VM. >> Then the module is updated and hot loaded: >> > code:load_file(forty_two). >> {module, forty_two} >> >> Then let's assume that there is no process using an old version of the module (for example, the module contains only pure functions). >> Despite of that following attempt to update the module fails: >> > code:load_file(forty_two). >> Loading of forty_two.beam failed: not_purged >> {error,not_purged} >> >> Is there any rationale why the old code is not automatically purged? > code:load_file/1 *never* purges old code. Try the handy shell function > l/1, defined in c.erl: > > l(Mod) -> > code:purge(Mod), > code:load_file(Mod). Yes, I know that. I'd like to know why load_file() is implemented this way. IMHO, in the case I've provided soft_purge() could be nicely called by load_file() to ease programmer's life :) >> IMO, it would be convenient if _unused_ old version of the code would be implicitly removed in such case. > You could define your own load function with appropriate use of > code:soft_purge/1 for that. 
> > --Per Hedeland From per@REDACTED Fri Nov 25 16:07:15 2016 From: per@REDACTED (Per Hedeland) Date: Fri, 25 Nov 2016 16:07:15 +0100 Subject: [erlang-questions] load_file() and purge() In-Reply-To: <6d9f05b9-d734-02e2-5bf1-b1e8f5135c1d@kaspersky.com> References: <6ac0a92a-2b06-4452-17ae-cd4b4e7e1046@kaspersky.com> <583830D6.7070301@hedeland.org> <6d9f05b9-d734-02e2-5bf1-b1e8f5135c1d@kaspersky.com> Message-ID: <583853A3.3020906@hedeland.org> On 2016-11-25 13:45, Salikhov Dinislam wrote: > On 11/25/2016 03:38 PM, Per Hedeland wrote: >> On 2016-11-25 13:20, Salikhov Dinislam wrote: >>> Let's assume there is a module forty_two.beam loaded by VM. >>> Then the module is updated and hot loaded: >>> > code:load_file(forty_two). >>> {module, forty_two} >>> >>> Then let's assume that there is no process using an old version of the module (for example, the module contains only pure functions). >>> Despite of that following attempt to update the module fails: >>> > code:load_file(forty_two). >>> Loading of forty_two.beam failed: not_purged >>> {error,not_purged} >>> >>> Is there any rationale why the old code is not automatically purged? >> code:load_file/1 *never* purges old code. Try the handy shell function >> l/1, defined in c.erl: >> >> l(Mod) -> >> code:purge(Mod), >> code:load_file(Mod). > Yes, I know that. I'd like to know why load_file() is implemented this way. Maybe because it should only do what its name implies... > IMHO, in the case I've provided soft_purge() could be nicely called by load_file() But you don't know that it *is* that case until you have called soft_purge()... - determining that the old code is not being used is not zero cost, and other users of the function may prefer that the decision to do that is left to them. 
> to ease programmer's life :) IME, the programmer's life is most eased by functions that do 0 or 1 thing, and make it trivial (by return value or failure) to determine which, rather than functions that do 0, 1, or 2 things, and require analysis (maybe even of a *combination* of return value and failure) to determine what they actually did. The *shell user's* life on the other hand might perhaps be eased by a programmer providing a function like the one below. --Per >>> IMO, it would be convenient if _unused_ old version of the code would be implicitly removed in such case. >> You could define your own load function with appropriate use of >> code:soft_purge/1 for that. >> >> --Per Hedeland From vans_163@REDACTED Fri Nov 25 18:17:33 2016 From: vans_163@REDACTED (Vans S) Date: Fri, 25 Nov 2016 17:17:33 +0000 (UTC) Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? In-Reply-To: <58382175.8050103@hedeland.org> References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> <20161125100511.GA42316@erix.ericsson.se> <58382175.8050103@hedeland.org> Message-ID: <1456918899.191389.1480094253759@mail.yahoo.com> Per Hedeland and José Valim: that method works great. One small peeve is that it makes an extra process, but the only way around this would be to patch the OS process's code to terminate when stdin closes. For now, using the shell script is more than enough. On Friday, November 25, 2016 6:33 AM, Per Hedeland wrote: On 2016-11-25 11:27, José Valim wrote: > If I remember correctly, the stdin of the program started with open_port will be closed when the Erlang VM terminates. Right - this is guaranteed by the OS kernel, which closes all open file descriptors for a process when it terminates. Should be true even for Windows with whatever is used to communicate with port programs there.
> So if the OS process is listening to stdin and exiting when the stdin closes, you > don't need to worry about terminating it. Yes. And this is the *only* way to ensure that the port program *always* terminates when it should - it covers all of - port is explicitly closed - port-owning Erlang process terminates - Erlang VM terminates With Raimo's suggestion, the port program will continue to run if the Erlang VM terminates "abruptly" (e.g. killed by e.g. the OOM-killer, or failing to allocate memory). The design that stipulates that port programs should (attempt to) read stdin and terminate when they see EOF wasn't done on a whim... > When they don't do so, you can also write a script that spawns the OS process and traverses stdin until it closes and then it terminates the OS process when stdin closes. Something like: > > #!/bin/bash > name=$1 > shift > $name $* > pid=$! > while read line ; do > : > done < /dev/stdin > kill -KILL $pid > > > If you save it as "wrap" then you can start your OS process as "wrap NAME ARGS". This can be handy if you cannot rely on C/C++ extensions. If anyone knows any drawback to such approach, I would love > to hear. Well, your script is missing a '&' on the "$name $*" line to make the "target" program run in the (script's) background, but otherwise it should be fine. I'd like to suggest a bit of simplification and increased portability though: #!/bin/sh "$@" & pid=$! while read line ; do : done kill -KILL $pid (And you could of course using something "nicer" than -KILL, e.g. -TERM, if you are sure that it will be sufficient to terminate the program.) 
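[Editor's note] As a quick sanity check of the simplified wrapper above (my own sketch, not part of the thread): save the script, start a long-lived child through it with stdin already at EOF, and confirm that the child is killed as soon as the read loop ends:

```shell
# Write out the simplified wrapper discussed above.
cat > wrap <<'EOF'
#!/bin/sh
"$@" &
pid=$!
while read line ; do
    :
done
kill -KILL $pid
EOF
chmod +x wrap

# Redirecting stdin from /dev/null makes the wrapper's read loop see
# EOF immediately (standing in for the Erlang VM dying), so the child
# should be killed right away.
secs=987654                       # arbitrary marker for pgrep
./wrap sleep "$secs" < /dev/null
sleep 1
pgrep -f "sleep $secs" || echo "child was killed"
```

The same EOF happens when the VM exits, since the kernel closes the pipe that the wrapper's stdin is connected to.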
--Per Hedeland _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions From per@REDACTED Fri Nov 25 20:38:00 2016 From: per@REDACTED (Per Hedeland) Date: Fri, 25 Nov 2016 20:38:00 +0100 Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? In-Reply-To: <1456918899.191389.1480094253759@mail.yahoo.com> References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> <20161125100511.GA42316@erix.ericsson.se> <58382175.8050103@hedeland.org> <1456918899.191389.1480094253759@mail.yahoo.com> Message-ID: <74f88446-78d7-6741-f246-28a440416493@hedeland.org> On 2016-11-25 18:17, Vans S wrote: > Per Hedeland and Jos? Valim that method works great. One small peeve is it makes an extra process Yes, but that shouldn't be any actual cost beyond a comparitively small amount of memory (that can be mostly paged out even if you don't have swap) - it doesn't actually *do* anything, just sits in a blocking read(2) of its stdin. > but the only way around this would be to patch the OS Processes code to terminate when stdin closes. Assuming that you mean modifying the "target" program, yes - this would be the "best" solution, but there are many cases where it is problematic - obviously if you don't actually have the source code, but even if you do, the modification can be non-trivial in a complex program, and it's a pain to keep modifying the code and running the modified version when new versions are released, etc. > For now using the shell script is more then enough. Great! --Per From vans_163@REDACTED Fri Nov 25 21:44:42 2016 From: vans_163@REDACTED (Vans S) Date: Fri, 25 Nov 2016 20:44:42 +0000 (UTC) Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? 
In-Reply-To: <74f88446-78d7-6741-f246-28a440416493@hedeland.org> References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> <20161125100511.GA42316@erix.ericsson.se> <58382175.8050103@hedeland.org> <1456918899.191389.1480094253759@mail.yahoo.com> <74f88446-78d7-6741-f246-28a440416493@hedeland.org> Message-ID: <331621424.562377.1480106682942@mail.yahoo.com> > Yes, but that shouldn't be any actual cost beyond a comparitively small > amount of memory (that can be mostly paged out even if you don't have > swap) - it doesn't actually *do* anything, just sits in a blocking > read(2) of its stdin. The cost is indeed there, but in human cognitive resources :) I need to manage these processes and know if something is running or not outside of erlang. A simple way I was doing this was by grepping /proc both for the process name and for a uuid stored in its commandline. Maybe there is a better way? With these 2 processes having duplicate command lines I need to write some extra logic. Another key reason to grep /proc cmdline like this, is that I can guarantee a process with a certain uuid is not started twice, say if an erlang process goes into an undefined state. On Friday, November 25, 2016 2:38 PM, Per Hedeland wrote: On 2016-11-25 18:17, Vans S wrote: > Per Hedeland and Jos? Valim that method works great. One small peeve is it makes an extra process Yes, but that shouldn't be any actual cost beyond a comparitively small amount of memory (that can be mostly paged out even if you don't have swap) - it doesn't actually *do* anything, just sits in a blocking read(2) of its stdin. > but the only way around this would be to patch the OS Processes code to terminate when stdin closes. 
Assuming that you mean modifying the "target" program, yes - this would be the "best" solution, but there are many cases where it is problematic - obviously if you don't actually have the source code, but even if you do, the modification can be non-trivial in a complex program, and it's a pain to keep modifying the code and running the modified version when new versions are released, etc. > For now using the shell script is more then enough. Great! --Per From mjtruog@REDACTED Fri Nov 25 21:48:20 2016 From: mjtruog@REDACTED (Michael Truog) Date: Fri, 25 Nov 2016 12:48:20 -0800 Subject: [erlang-questions] How do you kill an OS process that was opened with open_port on a brutal_kill? In-Reply-To: <1456918899.191389.1480094253759@mail.yahoo.com> References: <1851734291.107205.1480046012268.ref@mail.yahoo.com> <1851734291.107205.1480046012268@mail.yahoo.com> <20161125100511.GA42316@erix.ericsson.se> <58382175.8050103@hedeland.org> <1456918899.191389.1480094253759@mail.yahoo.com> Message-ID: <5838A394.5010708@gmail.com> On 11/25/2016 09:17 AM, Vans S wrote: > Per Hedeland and Jos? Valim that method works great. One small peeve is it makes an extra process but the only way around this would be to patch the OS Processes code to terminate when stdin closes. For now using the shell script is more then enough. A well-behaved port should be checking the file descriptor to see if it is closed. I prefer to use the poll function and check revents on the file descriptor for the POLLHUP, since I know that if that is present, Erlang has exited. However, it is possible that the port execution takes a long time before actually getting to the poll function, so, it is nice to have a termination timeout on the Erlang side that uses os:cmd("kill -9 " ++ PID) to ensure the OS process actually dies at some point in time. Otherwise, it is possible to suspend the port and have it hanging around forever. 
If you use external services in CloudI, this is handled for you, which helps to avoid spending time on these details. Best Regards, Michael > On Friday, November 25, 2016 6:33 AM, Per Hedeland wrote: > On 2016-11-25 11:27, Jos? Valim wrote: >> If I remember correctly, the stdin of the program started with open_port will be closed when the Erlang VM terminates. > Right - this is guaranteed by the OS kernel, which closes all open file > descriptors for a process when it terminates. Should be true even for > Windows with whatever is used to communicate with port programs there. > >> So if the OS process is listening to stdin and exiting when the stdin closes, you >> don't need to worry about terminating it. > Yes. And this is the *only* way to ensure that the port program *always* > terminates when it should - it covers all of > > - port is explicitly closed > - port-owning Erlang process terminates > - Erlang VM terminates > > With Raimo's suggestion, the port program will continue to run if the > Erlang VM terminates "abruptly" (e.g. killed by e.g. the OOM-killer, or > failing to allocate memory). The design that stipulates that port > programs should (attempt to) read stdin and terminate when they see EOF > wasn't done on a whim... > >> When they don't do so, you can also write a script that spawns the OS process and traverses stdin until it closes and then it terminates the OS process when stdin closes. Something like: >> >> #!/bin/bash >> name=$1 >> shift >> $name $* >> pid=$! >> while read line ; do >> : >> done < /dev/stdin >> kill -KILL $pid >> >> >> If you save it as "wrap" then you can start your OS process as "wrap NAME ARGS". This can be handy if you cannot rely on C/C++ extensions. If anyone knows any drawback to such approach, I would love >> to hear. > Well, your script is missing a '&' on the "$name $*" line to make the > "target" program run in the (script's) background, but otherwise it > should be fine. 
I'd like to suggest a bit of simplification and > increased portability though: > > #!/bin/sh > "$@" & > pid=$! > while read line ; do > : > done > kill -KILL $pid > > (And you could of course using something "nicer" than -KILL, e.g. -TERM, > if you are sure that it will be sufficient to terminate the program.) > > --Per Hedeland > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From essen@REDACTED Mon Nov 28 11:56:55 2016 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Mon, 28 Nov 2016 11:56:55 +0100 Subject: [erlang-questions] [ANN] Ranch 1.3 Message-ID: <0647e4d8-5ea5-d2fc-45a2-7e442c800214@ninenines.eu> Hello, A new version of Ranch has been released. This release should fix most issues people were having. Highlights: * Add ssl to the list of dependencies (no more getting stuck on shutdown) * Allow configuring a listener with only SNI, without a default certificate * Fix double removal of connections bug (the number of active connections should now be exact) The full release notes are available at: https://ninenines.eu/articles/ranch-1.3/ And the changelog: https://git.ninenines.eu/ranch.git/plain/CHANGELOG.asciidoc Enjoy! -- Lo?c Hoguin https://ninenines.eu From carlsson.richard@REDACTED Mon Nov 28 14:57:15 2016 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Mon, 28 Nov 2016 14:57:15 +0100 Subject: [erlang-questions] Feedback wanted: Remove compilation times from BEAM files in OTP 19? In-Reply-To: References: Message-ID: Well, that took a while, but this has now been merged: https://github.com/erlang/otp/pull/1257 /Richard 2016-04-08 22:42 GMT+02:00 Richard Carlsson : > Hi Serge (and everyone else)! 
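[Editor's note: Per's portable wrapper, written out as a standalone script with comments, plus a small demonstration. The /tmp/wrap path is only for the demo; save it wherever suits you, e.g. as the "wrap" name suggested earlier in the thread. As noted above, -TERM may be kinder than -KILL if the program handles it.]

```shell
#!/bin/sh
# Write out the wrapper discussed above, then demonstrate it.
cat > /tmp/wrap <<'EOF'
#!/bin/sh
# wrap -- run "$@" in the background, then block reading our own stdin.
# EOF on stdin (port closed, owner died, or VM gone) => kill the child.
"$@" &
pid=$!
while read line; do
    :                  # discard any input; we only care about EOF
done
kill -KILL "$pid" 2>/dev/null
EOF
chmod +x /tmp/wrap

# Demo: stdin is immediately at EOF, so the long-running sleep is killed
# right away instead of living for 100 seconds.
/tmp/wrap sleep 100 < /dev/null
echo "wrapper returned: $?"
```

From Erlang you would then open the port as e.g. open_port({spawn_executable, "/path/to/wrap"}, [{args, ["myprog" | Args]}, ...]).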
I've actually got an improved version of > that module change detection (i.e., based on md5) that I was going to clean > up and submit to OTP as a standard part of the shell. I think the original > snippets have been drifting around on the interwebs for aeons, and I'm not > sure who wrote them originally. We've been using them at Klarna forever. > But we needed something more reliable (that works even if you have made a > completely new build of every module, but not actually changing more than a > few), so I started by pushing some patches to OTP that made the md5 of > loaded modules easily available in module_info. Then based on that I wrote > an improved shell function like you just did - we've now tried it out for a > while and found it stable, so it's a good time to push this as well. > > I think my only hesitation is where to put the core "module changed" > functionality if I'm to submit it as part of OTP: the code module? > beam_lib? somewhere else? Any opinions out there? > > /Richard > > 2016-04-08 15:37 GMT+02:00 Serge Aleynikov : > >> ?Thank you! >> >> I updated the https://github.com/saleyn/util/blob/master/src/user_ >> default.erl per your suggestion. >> >> On Fri, Apr 8, 2016 at 8:54 AM, Bj?rn Gustavsson >> wrote: >> >>> On Fri, Apr 8, 2016 at 2:28 PM, Serge Aleynikov >>> wrote: >>> > I've been relying on module's compilation time for figuring out the >>> changed >>> > modules (see below). >>> > >>> > In the absence of {time, CompileTime}, would there be a way to >>> determine the >>> > modification time stamp of the file at the time it was loaded by code >>> > loader? >>> >>> Yes. Use Mod:module_info(md5) to calculate MD5 for >>> the loaded module and compare it to beam_lib:md5(Mod). >>> >>> Example: >>> >>> 13> c:module_info(md5). >>> <<79,26,188,243,168,60,58,45,34,69,19,222,138,190,214,118>> >>> 14> beam_lib:md5(code:which(c)). 
>>> {ok,{c,<<79,26,188,243,168,60,58,45,34,69,19,222,138, >>> 190,214,118>>}} >>> >>> /Bjorn >>> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From essen@REDACTED Mon Nov 28 15:11:54 2016 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Mon, 28 Nov 2016 15:11:54 +0100 Subject: [erlang-questions] [ANN] Ranch 1.3 In-Reply-To: References: <0647e4d8-5ea5-d2fc-45a2-7e442c800214@ninenines.eu> Message-ID: Cowboy 1.x is compatible with Ranch 1.3, but no further releases of Cowboy 1.x are planned at this time, so you'll need to enforce fetching Ranch 1.3 from your own project at this point. On the other hand Cowboy 2.0.0-pre.4 is coming out in a few days, will depend on Ranch 1.3 and will be the recommended Cowboy version from that point onward. Cheers, On 11/28/2016 03:09 PM, Garry Hodgson wrote: > nice. will cowboy be updated to use this version? > we're using 1.x, and it and master branches show 1.2.0 > > > On 11/28/16 5:56 AM, Lo?c Hoguin wrote: >> Hello, >> >> A new version of Ranch has been released. >> >> This release should fix most issues people were having. Highlights: >> >> * Add ssl to the list of dependencies (no more getting stuck on shutdown) >> * Allow configuring a listener with only SNI, without a default >> certificate >> * Fix double removal of connections bug (the number of active >> connections should now be exact) >> >> The full release notes are available at: >> https://ninenines.eu/articles/ranch-1.3/ >> >> And the changelog: >> https://git.ninenines.eu/ranch.git/plain/CHANGELOG.asciidoc >> >> Enjoy! 
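[Editor's note: the md5 comparison Björn shows can be wrapped into a small "has this module changed on disk?" helper. This is a hedged sketch; the module and function names are ours, not part of OTP, and it assumes code:which/1 returns a file path (it can also return atoms such as preloaded, which beam_lib:md5/1 will reject - the error clause treats that as "changed").]

```erlang
-module(mod_changed).
-export([module_changed/1]).

%% Compare the md5 of the loaded code (Mod:module_info(md5), available
%% since the module_info patches mentioned above) with the md5 of the
%% .beam file currently on disk. Returns true if they differ.
module_changed(Mod) ->
    LoadedMD5 = Mod:module_info(md5),
    case beam_lib:md5(code:which(Mod)) of
        {ok, {Mod, DiskMD5}} -> LoadedMD5 =/= DiskMD5;
        {error, beam_lib, _} -> true  %% no readable .beam: assume changed
    end.
```

Unlike the compile-time approach, this reports "unchanged" even after a full rebuild, as long as the code itself is identical.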
>> > -- Lo?c Hoguin https://ninenines.eu From garry@REDACTED Mon Nov 28 15:09:06 2016 From: garry@REDACTED (Garry Hodgson) Date: Mon, 28 Nov 2016 09:09:06 -0500 Subject: [erlang-questions] [ANN] Ranch 1.3 In-Reply-To: <0647e4d8-5ea5-d2fc-45a2-7e442c800214@ninenines.eu> References: <0647e4d8-5ea5-d2fc-45a2-7e442c800214@ninenines.eu> Message-ID: nice. will cowboy be updated to use this version? we're using 1.x, and it and master branches show 1.2.0 On 11/28/16 5:56 AM, Lo?c Hoguin wrote: > Hello, > > A new version of Ranch has been released. > > This release should fix most issues people were having. Highlights: > > * Add ssl to the list of dependencies (no more getting stuck on shutdown) > * Allow configuring a listener with only SNI, without a default > certificate > * Fix double removal of connections bug (the number of active > connections should now be exact) > > The full release notes are available at: > https://ninenines.eu/articles/ranch-1.3/ > > And the changelog: > https://git.ninenines.eu/ranch.git/plain/CHANGELOG.asciidoc > > Enjoy! > From ok@REDACTED Tue Nov 29 03:05:46 2016 From: ok@REDACTED (Richard A. O'Keefe) Date: Tue, 29 Nov 2016 15:05:46 +1300 Subject: [erlang-questions] Logs and stdout [OFFTOPIC] In-Reply-To: References: Message-ID: On 25/11/16 11:34 PM, Alexander Petrovsky wrote: > - Hey, boy, did you know about 12 app factor (*oh my god*)? "12-factor app" was not handed down on Mt Sinai; Jibril did not dictate it to the Prophet; Joseph Smith Jr never found it on the golden plates. Heck, even Ra?l never heard about it from the elohim. "12-factor app" is OPINION. Using lager, the vast bulk of your code doesn't know and doesn't care where the logs are going. It *can* go to stdout if that's what you want. On the other hand, if you need log rotation and such, and you only write to stdout, then you have to depend on something outside your application to do it. stdout is where information is sent to die. 
(:-) From garryh@REDACTED Mon Nov 28 15:57:52 2016 From: garryh@REDACTED (Garry Hodgson) Date: Mon, 28 Nov 2016 09:57:52 -0500 Subject: [erlang-questions] [ANN] Ranch 1.3 In-Reply-To: References: <0647e4d8-5ea5-d2fc-45a2-7e442c800214@ninenines.eu> Message-ID: <67eef033-7c48-8662-09fc-16ad3237eab4@att.com> cool. thanks. On 11/28/2016 09:11 AM, Lo?c Hoguin wrote: > Cowboy 1.x is compatible with Ranch 1.3, but no further releases of > Cowboy 1.x are planned at this time, so you'll need to enforce > fetching Ranch 1.3 from your own project at this point. > > On the other hand Cowboy 2.0.0-pre.4 is coming out in a few days, will > depend on Ranch 1.3 and will be the recommended Cowboy version from > that point onward. > > Cheers, > > On 11/28/2016 03:09 PM, Garry Hodgson wrote: >> nice. will cowboy be updated to use this version? >> we're using 1.x, and it and master branches show 1.2.0 >> >> >> On 11/28/16 5:56 AM, Lo?c Hoguin wrote: >>> Hello, >>> >>> A new version of Ranch has been released. >>> >>> This release should fix most issues people were having. Highlights: >>> >>> * Add ssl to the list of dependencies (no more getting stuck on >>> shutdown) >>> * Allow configuring a listener with only SNI, without a default >>> certificate >>> * Fix double removal of connections bug (the number of active >>> connections should now be exact) >>> >>> The full release notes are available at: >>> https://ninenines.eu/articles/ranch-1.3/ >>> >>> And the changelog: >>> https://git.ninenines.eu/ranch.git/plain/CHANGELOG.asciidoc >>> >>> Enjoy! >>> >> > From frank.muller.erl@REDACTED Tue Nov 29 06:28:16 2016 From: frank.muller.erl@REDACTED (Frank Muller) Date: Tue, 29 Nov 2016 05:28:16 +0000 Subject: [erlang-questions] Help with allocator tuning In-Reply-To: References: Message-ID: Hi, Max: Can you please explain how do you measure that schedulers utilisation, and ensure it's good (or not)? 
Lukas: from where Max was able to derive that 2048 is the best choice for: +MBsbc, +MBlmbcs and +MBsmbcs? Erlang memory management is kind of mystery, and I would appreciate if you can teach us how to figure out the right numbers. /Frank Le mer. 23 nov. 2016 ? 22:16, Max Lapshin a ?crit : > Lukas, thank you a lot! > > > +MBas aoffcaobf +MMmcs 30 +MBsbct 2048 +MBacul 10 +MBlmbcs 2048 +MBsmbcs > 2048 > > > these settings looks very good. Scheduler utilization is very good now > (screenshot attached). > > > hugepages are not used, recon_alloc:memory(usage) shows 78% (rather > good), > > recon_alloc:cache_hit_rates() shows 99% > > recon_alloc:sbcs_to_mbcs(current) shows less than 0,2% > > > > Will check these settings on different installations. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nem@REDACTED Tue Nov 29 07:12:11 2016 From: nem@REDACTED (Geoff Cant) Date: Mon, 28 Nov 2016 22:12:11 -0800 Subject: [erlang-questions] Logs and stdout [OFFTOPIC] In-Reply-To: References: Message-ID: <6CA450A8-0572-4164-B322-6DC2A5D16DFE@erlang.geek.nz> "12 Factor? (aka https://12factor.net) is a set of conventions to make linux applications easier to run in containers and to allow those containers to be managed and orchestrated uniformly. These conventions were established by one of Heroku?s co-founders, Adam Wiggins (hi Adam!), and are mostly still good ways to adapt linux applications to running on a Platform as a Service (Heroku/CloudFoundry/Deis/Flynn/Mesosphere/Kubernetes/?). 12 Factor is a little dated now - it?s last update was in 2012, but the ideas are still mostly relevant. 
In the case of logging, the reasoning behind the stdout convention is that it was a relatively easy-to-implement (for the app developer), language/framework-independent approach to getting a real-time-ish stream of logs out of any application. The platform the application runs on can then handle log forwarding and distribution uniformly. This would have been hard to impossible to do if every application was allowed to choose its own method for emitting log events (syslog vs stdout vs files vs snmp traps vs probably lots of other things). Logs as files, in particular, would have to be a complicated convention (where do you write them, how many files do you write, how are they rotated, do they get truncated to avoid running out of space, what constitutes a complete message in a file if you're trying to stream them, ...), and this complexity would have been development costs paid in every single application for the PaaS. The corollary of this is that if you're not running your application on a PaaS, then you're not going to get much benefit from writing logs to stdout. Aside: I worked on Heroku's log pipeline systems for a while, and would like to see this factor revisited. '\n'-framed opaque-byte messages on stdout is a protocol due for improvement: you can't write neatly formatted multi-line messages and have them pass through the log pipeline very easily (change the framing), it's an async one-way protocol (there's no facility for reliable transmission of messages, so you can't use it for audit logs), and it is a sub-optimal transmission format for sending metrics (metrics can be aggregated during transport, and logs can't). So let's hope there are some updates to 12 factor in 2017. I have quibbles about configuration passed as environment variables too, but unix doesn't offer a lot of good choices here. -Geoff ps: yeah - use lager. It'll work both on a PaaS and your own box. > On 28 Nov, 2016, at 18:05, Richard A.
O'Keefe wrote: > > > > On 25/11/16 11:34 PM, Alexander Petrovsky wrote: >> - Hey, boy, did you know about 12 app factor (*oh my god*)? > > "12-factor app" was not handed down on Mt Sinai; > Jibril did not dictate it to the Prophet; > Joseph Smith Jr never found it on the golden plates. > Heck, even Ra?l never heard about it from the elohim. > > "12-factor app" is OPINION. > > Using lager, the vast bulk of your code doesn't know and > doesn't care where the logs are going. It *can* go to > stdout if that's what you want. > > On the other hand, if you need log rotation and such, > and you only write to stdout, then you have to depend > on something outside your application to do it. > > stdout is where information is sent to die. (:-) > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From askjuise@REDACTED Tue Nov 29 09:48:40 2016 From: askjuise@REDACTED (Alexander Petrovsky) Date: Tue, 29 Nov 2016 11:48:40 +0300 Subject: [erlang-questions] Logs and stdout [OFFTOPIC] In-Reply-To: <6CA450A8-0572-4164-B322-6DC2A5D16DFE@erlang.geek.nz> References: <6CA450A8-0572-4164-B322-6DC2A5D16DFE@erlang.geek.nz> Message-ID: Thanks a lot for feedback, it's very interesting to look at things through different eyes. >From myself I would add about pipes: - http://www.pixelbeat.org/programming/stdio_buffering/ - https://brandonwamboldt.ca/how-linux-pipes-work-under-the-hood-1518/ 2016-11-29 9:12 GMT+03:00 Geoff Cant : > > "12 Factor? (aka https://12factor.net) is a set of conventions to make > linux applications easier to run in containers and to allow those > containers to be managed and orchestrated uniformly. These conventions were > established by one of Heroku?s co-founders, Adam Wiggins (hi Adam!), and > are mostly still good ways to adapt linux applications to running on a > Platform as a Service (Heroku/CloudFoundry/Deis/ > Flynn/Mesosphere/Kubernetes/?). 
12 Factor is a little dated now - it?s > last update was in 2012, but the ideas are still mostly relevant. > > > In the case of logging, the reasoning behind the stdout convention is that > is was a relatively easy to implement (for the app developer), > language/framework independent approach to getting a real-time ish stream > of logs out of any application. The platform the application runs on can > then handle log forwarding and distribution uniformly. This would have been > hard to impossible to do if every application was allowed to choose its own > method for emitting log events (syslog vs stdout vs files vs snmp traps vs > probably lots of other things). Logs as files, in particular, would have to > be a complicated convention (where do you write them, how many files do you > write, how are they rotated, do they get truncated to avoid running out of > space, what constitutes a complete message in a file if you?re trying to > stream them, ...), and this complexity would have been development costs > paid in every single application for the PaaS. > > The corollary of this is that if you?re not running your application on a > PaaS, then you?re not going to get much benefit from writing logs to stdout. > > > Aside: I worked on Heroku?s log pipeline systems for a while, and like to > see this factor revisited. ?\n' framed opaque-byte messages on stdout is a > protocol due for improvement: you can?t write neatly formatted multi-line > messages and have them pass through the log pipeline very easily (change > the framing), it?s an async one-way protocol (there?s no facility for > reliable transmission of messages, so you can?t use it for audit logs), it > is sub-optimal transmission format for sending metrics (metrics can be > aggregated during transport, and logs can?t). So let?s hope there are some > updates to 12 factor in 2017. I have quibbles about configuration passed as > environment variables too, but unix doesn?t offer a lot of good choices > here. 
> > > -Geoff > ps: yeah - use lager. It?ll work both on a PaaS and your own box. > > > On 28 Nov, 2016, at 18:05, Richard A. O'Keefe wrote: > > > > > > > > On 25/11/16 11:34 PM, Alexander Petrovsky wrote: > >> - Hey, boy, did you know about 12 app factor (*oh my god*)? > > > > "12-factor app" was not handed down on Mt Sinai; > > Jibril did not dictate it to the Prophet; > > Joseph Smith Jr never found it on the golden plates. > > Heck, even Ra?l never heard about it from the elohim. > > > > "12-factor app" is OPINION. > > > > Using lager, the vast bulk of your code doesn't know and > > doesn't care where the logs are going. It *can* go to > > stdout if that's what you want. > > > > On the other hand, if you need log rotation and such, > > and you only write to stdout, then you have to depend > > on something outside your application to do it. > > > > stdout is where information is sent to die. (:-) > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- ?????????? ????????? / Alexander Petrovsky, Skype: askjuise Phone: +7 914 8 820 815 -------------- next part -------------- An HTML attachment was scrubbed... URL: From grahamrhay@REDACTED Tue Nov 29 10:26:50 2016 From: grahamrhay@REDACTED (Graham Hay) Date: Tue, 29 Nov 2016 09:26:50 +0000 Subject: [erlang-questions] Logs and stdout [OFFTOPIC] In-Reply-To: References: <6CA450A8-0572-4164-B322-6DC2A5D16DFE@erlang.geek.nz> Message-ID: > if you?re not running your application on a PaaS, then you?re not going to get much benefit from writing logs to stdout If you're using systemd (it's hard not to, now), then it will redirect stdout to syslog for you; which means you have just one input for any log processing (e.g. ELK). 
Life is much easier if every app in your system behaves the same, no matter what language it's built with. From max.lapshin@REDACTED Tue Nov 29 17:49:07 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Tue, 29 Nov 2016 19:49:07 +0300 Subject: [erlang-questions] Help with allocator tuning In-Reply-To: References: Message-ID: We look at two important things: erlang:statistics(total_active_tasks) when it is higher than 1500, it is not good. then we take periodically: erlang:statistics(scheduler_wall_time) it returns a list of {_, Active, Total} sum all Active, sum all Total and you get scheduler utilisation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Tue Nov 29 17:49:54 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Tue, 29 Nov 2016 19:49:54 +0300 Subject: [erlang-questions] [ANN] Ranch 1.3 In-Reply-To: <67eef033-7c48-8662-09fc-16ad3237eab4@att.com> References: <0647e4d8-5ea5-d2fc-45a2-7e442c800214@ninenines.eu> <67eef033-7c48-8662-09fc-16ad3237eab4@att.com> Message-ID: So it is a good idea first to upgrade ranch and then go with beta cowboy to production in a week later? -------------- next part -------------- An HTML attachment was scrubbed... URL: From frank.muller.erl@REDACTED Tue Nov 29 18:30:31 2016 From: frank.muller.erl@REDACTED (Frank Muller) Date: Tue, 29 Nov 2016 17:30:31 +0000 Subject: [erlang-questions] Help with allocator tuning In-Reply-To: References: Message-ID: Very informative, thank you Max. Hope "Lukas" will have time to explain the rest (i.e. the VM's mem_alloc settings). /Frank Le mar. 29 nov. 2016 à 17:49, Max Lapshin a écrit : > We look at two important things: > > erlang:statistics(total_active_tasks) > > when it is higher than 1500, it is not good. > > > then we take periodically : > > erlang:statistics(scheduler_wall_time) > > > it returns list of {_, Active, Total} > > sum all Active, sum all Total and you get scheduler utilisation.
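[Editor's note: Max's recipe, written out as code. Meaningful utilisation comes from the difference between two scheduler_wall_time samples, since the counters are cumulative from when accounting was enabled. This is a sketch under stated assumptions: the module and function names are ours, and scheduler_wall_time accounting must be switched on first (done below via system_flag).]

```erlang
-module(sched_util).
-export([utilisation/1]).

%% Sample scheduler_wall_time twice, IntervalMs apart, and return the
%% fraction of scheduler time spent active over that window (0.0..1.0).
utilisation(IntervalMs) ->
    erlang:system_flag(scheduler_wall_time, true),
    S0 = lists:sort(erlang:statistics(scheduler_wall_time)),
    timer:sleep(IntervalMs),
    S1 = lists:sort(erlang:statistics(scheduler_wall_time)),
    {Active, Total} =
        lists:foldl(
          %% matching the scheduler id I twice pairs up the same scheduler
          fun({{I, A0, T0}, {I, A1, T1}}, {A, T}) ->
                  {A + (A1 - A0), T + (T1 - T0)}
          end, {0, 0}, lists:zip(S0, S1)),
    Active / Total.
```

recon's scheduler_usage/1 implements essentially this calculation if you would rather not maintain it yourself.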
> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From essen@REDACTED Tue Nov 29 18:37:51 2016 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Tue, 29 Nov 2016 18:37:51 +0100 Subject: [erlang-questions] [ANN] Ranch 1.3 In-Reply-To: References: <0647e4d8-5ea5-d2fc-45a2-7e442c800214@ninenines.eu> <67eef033-7c48-8662-09fc-16ad3237eab4@att.com> Message-ID: Up to you. :-) Ranch 1.3 isn't very different so it should be pretty painless to update. On 11/29/2016 05:49 PM, Max Lapshin wrote: > So it is a good idea first to upgrade ranch and then go with beta cowboy > to production in a week later? -- Loïc Hoguin https://ninenines.eu From duncan@REDACTED Tue Nov 29 21:34:07 2016 From: duncan@REDACTED (duncan@REDACTED) Date: Tue, 29 Nov 2016 13:34:07 -0700 Subject: [erlang-questions] [ANN] Ranch 1.3 Message-ID: <20161129133407.7e43b23f706d1a78218bd3e1c66e57ee.034d6056c2.wbe@email23.godaddy.com> An HTML attachment was scrubbed... URL: From t@REDACTED Tue Nov 29 21:43:32 2016 From: t@REDACTED (Tristan Sloughter) Date: Tue, 29 Nov 2016 12:43:32 -0800 Subject: [erlang-questions] [ANN] Ranch 1.3 In-Reply-To: <20161129133407.7e43b23f706d1a78218bd3e1c66e57ee.034d6056c2.wbe@email23.godaddy.com> References: <20161129133407.7e43b23f706d1a78218bd3e1c66e57ee.034d6056c2.wbe@email23.godaddy.com> Message-ID: <1480452212.4026569.802917673.09E80D8B@webmail.messagingengine.com> Duncan, no, if you want to use a different version of a transitive dependency you must make it non-transitive or use an override to set cowboy's deps for it (though using an override for this is overkill; you should just include ranch at the top level). Upgrading ranch without overriding or making it a top-level dep would mean the lock file would not match what is specified in the configs. Changing the version of a package a dependency depends on is taking responsibility for that dependency.
-- Tristan Sloughter t@REDACTED On Tue, Nov 29, 2016, at 12:34 PM, duncan@REDACTED wrote: > > I use cowboy and cowboy has a dependency on ranch. > > If I "rebar3 upgrade ranch", it tells me "Dependency ranch is > transient and cannot be safely upgraded. Promote it to your top-level > rebar.config file to upgrade it." > > If I rebar3 upgrade cowboy", it tells me "No upgrade needed for > cowboy, ..., No upgrade needed for ranch". > > Is there a way to upgrade ranch without promoting it in rebar.config? > Ie - it may not be 'needed' but I would like to do it anyway. Wrt > rebar.config, I agree it's unlikely cowboy would evolve to not being > dependent on ranch, but it's more the principle that I'd prefer to > keep my dependencies to only what I am actually using. On the other > hand, I would like to upgrade the pieceparts as they come out instead > of doing a big bang later. I'm guessing some of my other dependencies > are in a similar state (eg cowlib complaining about random - it has > been fixed but not in version I have). > Does rebar3 have an 'force upgrading dependencies' option? Or is there > some other trick I could do? > > Duncan Sparrell > s-Fractal Consulting LLC > iPhone, iTypo, iApologize > >> -------- Original Message -------- Subject: Re: [erlang-questions] >> [ANN] Ranch 1.3 From: Lo?c_Hoguin Date: Tue, >> November 29, 2016 12:37 pm To: Max Lapshin , >> Garry Hodgson Cc: Garry Hodgson >> , Erlang Questions > questions@REDACTED> >> >> Up to you. :-) >> >> Ranch 1.3 isn't very different so it should be pretty painless to >> update. >> >> On 11/29/2016 05:49 PM, Max Lapshin wrote: >> > So it is a good idea first to upgrade ranch and then go with beta >> > cowboy to production in a week later? 
>> >> -- >> Lo?c Hoguin https://ninenines.eu >> _______________________________________________ >> erlang-questions mailing list erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > _________________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From jstimpson@REDACTED Wed Nov 30 03:30:28 2016 From: jstimpson@REDACTED (Jesse Stimpson) Date: Tue, 29 Nov 2016 21:30:28 -0500 Subject: [erlang-questions] Performance of base64 decode vs mime_decode Message-ID: Hello, While profiling my app I came across a significant difference in the performance between base64:decode/1 and base64:mime_decode/1. The implementations are very similar, but based on my experimentation, I think the impl of mime_decode is preventing the Erlang compiler from enabling some tail recursion optimizations. Unfortunately, I don't have deep enough knowledge of the compiler to confirm this directly, but I did make some modifications and was able to achieve some better results. Here are my findings: 27> base64_timing:run(). Function | Time | Rel --------------------------+-----------+----- base64:decode/1 287838 us 1.0 base64:mime_decode/1 493222 us 1.71 base64_edits:mime_decode/1 270877 us 0.94 TESTS ok ok Note: The timing test provided in base64_timing.erl creates a 7 MB base64-encoded binary and decodes it using the various functions. It also includes a copy of the unit tests for base64:mime_decode/1 that appear in the Erlang 19.1 release. The basic idea of the modifications in base64_edits is a focus on maintaining clean tail recursion throughout the implementation. I found that any expression that would "collapse" the tail (T) by reading it in full (specifically: pattern matching against <<>>) would nearly double the execution time. 
The decode map is simply copied from base64.erl for convenience. With that, my questions -- 1. Is there a flaw in my measurements and/or mime_decode modifications? 2. Is there interest in me submitting a patch with these findings? Thanks! Jesse Stimpson -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: base64_timing.erl Type: application/octet-stream Size: 2634 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: base64_edits.erl Type: application/octet-stream Size: 3052 bytes Desc: not available URL: From ivan@REDACTED Wed Nov 30 11:35:47 2016 From: ivan@REDACTED (Ivan Uemlianin) Date: Wed, 30 Nov 2016 10:35:47 +0000 Subject: [erlang-questions] specifying OS-level environment variables Message-ID: <492aa5d5-c38a-4d4d-a88a-1f5cc4f923c3@llaisdy.com> Dear All I am writing an erlang application, and one of its dependencies requires an OS environment variable to be set. For sake of argument: XYZ_HOME = /path/to/lib/ What is the best way to express this requirement to the user? I can think of two ways. Are there other, better, ways? 1. Just put it in the documentation, along with other system requirements: "Needs XYZ_HOME to be set otherwise won't work." 2. Put a "sensible default" in the .app.src and/or sys.config, document these application configs, and during application startup use os:putenv, e.g.: {ok, XYZ_HOME} = application:get_env(myapp, xyz_home), os:putenv("XYZ_HOME", XYZ_HOME), The first doesn't seem very friendly and I shouldn't think will be very effective. Is the second a correct use case for os:putenv? Is there a third way that is even better? With thanks and best wishes Ivan -- ============================================================ Ivan A. 
Uemlianin PhD Llaisdy Speech Technology Research and Development ivan@REDACTED @llaisdy llaisdy.wordpress.com github.com/llaisdy www.linkedin.com/in/ivanuemlianin festina lente ============================================================ From sergej.jurecko@REDACTED Wed Nov 30 13:12:24 2016 From: sergej.jurecko@REDACTED (=?utf-8?Q?Sergej_Jure=C4=8Dko?=) Date: Wed, 30 Nov 2016 13:12:24 +0100 Subject: [erlang-questions] specifying OS-level environment variables In-Reply-To: <492aa5d5-c38a-4d4d-a88a-1f5cc4f923c3@llaisdy.com> References: <492aa5d5-c38a-4d4d-a88a-1f5cc4f923c3@llaisdy.com> Message-ID: I would put a default value into vm.args -env Variable Value and start your app with: -args_file path/to/vm.args regards, Sergej > On 30 Nov 2016, at 11:35, Ivan Uemlianin wrote: > > Dear All > > I am writing an erlang application, and one of its dependencies requires an OS environment variable to be set. For sake of argument: > > XYZ_HOME = /path/to/lib/ > > What is the best way to express this requirement to the user? I can think of two ways. Are there other, better, ways? > > 1. Just put it in the documentation, along with other system requirements: "Needs XYZ_HOME to be set otherwise won't work." > > 2. Put a "sensible default" in the .app.src and/or sys.config, document these application configs, and during application startup use os:putenv, e.g.: > > {ok, XYZ_HOME} = application:get_env(myapp, xyz_home), > os:putenv("XYZ_HOME", XYZ_HOME), > > The first doesn't seem very friendly and I shouldn't think will be very effective. Is the second a correct use case for os:putenv? > > Is there a third way that is even better? > > With thanks and best wishes > > Ivan > > > -- > ============================================================ > Ivan A. 
Uemlianin PhD
> Llaisdy
> Speech Technology Research and Development
>
> ivan@REDACTED
> @llaisdy
> llaisdy.wordpress.com
> github.com/llaisdy
> www.linkedin.com/in/ivanuemlianin
>
> festina lente
> ============================================================
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions

From silviu.cpp@REDACTED  Wed Nov 30 13:35:36 2016
From: silviu.cpp@REDACTED (Caragea Silviu)
Date: Wed, 30 Nov 2016 14:35:36 +0200
Subject: [erlang-questions] Erlang SSL benchmark (ssl\p1_tls\fast_tls\etls)
Message-ID: 

Hello,

After I saw the etls benchmarks (https://github.com/kzemek/etls), I started working on a small app to test those libraries while keeping all tunings uniform. I also found this bug, which makes the etls benchmark not relevant for the moment: https://github.com/kzemek/etls/issues/8

My project can be found here: https://github.com/silviucpp/tls_bench

I also included results for Erlang compiled with BoringSSL on OS X. In case you can spot bugs, please let me know.

The problem I don't understand is why p1_tls and fast_tls from ProcessOne have such bad performance on Linux. On OS X they work nicely, but on Ubuntu I have no idea why they underperform. I saw that Erlang is using the EVP_* API, which is hardware accelerated, but the API used by p1_tls and fast_tls should be as well.

My conclusion so far is that Erlang SSL seems to perform pretty well.

If you have any idea about my problem, let me know!

Silviu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ivan@REDACTED  Wed Nov 30 13:36:06 2016
From: ivan@REDACTED (Ivan Uemlianin)
Date: Wed, 30 Nov 2016 12:36:06 +0000
Subject: [erlang-questions] specifying OS-level environment variables
In-Reply-To: 
References: <492aa5d5-c38a-4d4d-a88a-1f5cc4f923c3@llaisdy.com>
Message-ID: <708acaa7-8139-1d51-1042-596e19854fe4@llaisdy.com>

Dear Sergej

Excellent, thanks.
That looks like the right way to do it.

Best wishes

Ivan

On 30/11/2016 12:12, Sergej Jurečko wrote:
> I would put a default value into vm.args
>
> -env Variable Value
>
> and start your app with: -args_file path/to/vm.args
>
>
> regards,
> Sergej
>
>> On 30 Nov 2016, at 11:35, Ivan Uemlianin wrote:
>>
>> Dear All
>>
>> I am writing an erlang application, and one of its dependencies requires an OS environment variable to be set. For sake of argument:
>>
>> XYZ_HOME = /path/to/lib/
>>
>> What is the best way to express this requirement to the user? I can think of two ways. Are there other, better, ways?
>>
>> 1. Just put it in the documentation, along with other system requirements: "Needs XYZ_HOME to be set otherwise won't work."
>>
>> 2. Put a "sensible default" in the .app.src and/or sys.config, document these application configs, and during application startup use os:putenv, e.g.:
>>
>> {ok, XYZ_HOME} = application:get_env(myapp, xyz_home),
>> os:putenv("XYZ_HOME", XYZ_HOME),
>>
>> The first doesn't seem very friendly and I shouldn't think will be very effective. Is the second a correct use case for os:putenv?
>>
>> Is there a third way that is even better?
>>
>> With thanks and best wishes
>>
>> Ivan
>>
>>
>> --
>> ============================================================
>> Ivan A. Uemlianin PhD
>> Llaisdy
>> Speech Technology Research and Development
>>
>> ivan@REDACTED
>> @llaisdy
>> llaisdy.wordpress.com
>> github.com/llaisdy
>> www.linkedin.com/in/ivanuemlianin
>>
>> festina lente
>> ============================================================
>>
>> _______________________________________________
>> erlang-questions mailing list
>> erlang-questions@REDACTED
>> http://erlang.org/mailman/listinfo/erlang-questions

--
============================================================
Ivan A.
Uemlianin PhD
Llaisdy
Speech Technology Research and Development

ivan@REDACTED
@llaisdy
llaisdy.wordpress.com
github.com/llaisdy
www.linkedin.com/in/ivanuemlianin

festina lente
============================================================

From aschultz@REDACTED  Wed Nov 30 14:41:27 2016
From: aschultz@REDACTED (Andreas Schultz)
Date: Wed, 30 Nov 2016 14:41:27 +0100
Subject: [erlang-questions] Erlang SSL benchmark (ssl\p1_tls\fast_tls\etls)
In-Reply-To: 
References: 
Message-ID: 

Hi,

On 11/30/2016 01:35 PM, Caragea Silviu wrote:
[...]
> My conclusion so far is that Erlang SSL seems to perform pretty nice.

Nice. To get some idea of what the actual Erlang overhead is, could you include the output of `openssl speed -evp aes-128-gcm` in your comparison tables? Not everyone has an OS X box around to test this themselves.

Andreas

>
> If you have any idea about my problem let me know !
>
> Silviu
>
>
>
> _______________________________________________
> erlang-questions mailing list
> erlang-questions@REDACTED
> http://erlang.org/mailman/listinfo/erlang-questions
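[Editorial sketch: the baseline measurement Andreas asks for above can be taken with a one-line script. This is a minimal sketch, assuming a stock OpenSSL install; `-evp` and `-seconds` are standard `openssl speed` options, and the 1-second run time is an illustrative choice, not part of the original discussion.]

```shell
#!/bin/sh
# Raw AES-128-GCM throughput through OpenSSL's EVP interface -- the same
# (possibly hardware-accelerated) code path the thread says Erlang's
# crypto uses. -seconds 1 keeps the run short; drop it for the default
# 3-seconds-per-block-size measurement.
openssl speed -evp aes-128-gcm -seconds 1
```

The summary table this prints (bytes/second per block size) gives the ceiling against which the Erlang-level benchmark numbers can be compared.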