From vasdeveloper@REDACTED Thu Oct 1 00:13:03 2015 From: vasdeveloper@REDACTED (Theepan) Date: Thu, 1 Oct 2015 03:43:03 +0530 Subject: [erlang-questions] QR Code Generator Message-ID: Team, I am in need of a QR code generator library for Erlang, to generate QR code PNG images of 256 alpha numeric characters. When I searched the Internet, I found one on GitHub. Have anyone of you used it? Do you have any recommendations? The performance I need is 5 QR code images generation per second. Thanks, Theepan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony@REDACTED Thu Oct 1 00:28:53 2015 From: tony@REDACTED (Tony Rogvall) Date: Thu, 1 Oct 2015 00:28:53 +0200 Subject: [erlang-questions] QR Code Generator In-Reply-To: References: Message-ID: Data = list_to_binary(lists:duplicate(256,$x)). timer:tc(fun() -> lists:foreach(fun(_) -> QR = qrcode:encode(Data), qrcode_demo:simple_png_encode(QR) end, lists:seq(1,5)) end). {1401102,ok} The performance is nearly there ( running on my mac using only one core ) /Tony > On 1 okt 2015, at 00:13, Theepan wrote: > > Team, > > I am in need of a QR code generator library for Erlang, to generate QR code PNG images of 256 alpha numeric characters. > > When I searched the Internet, I found one on GitHub. Have anyone of you used it? Do you have any recommendations? > > The performance I need is 5 QR code images generation per second. > > Thanks, > Theepan > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ok@REDACTED Thu Oct 1 05:42:58 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 1 Oct 2015 16:42:58 +1300 Subject: [erlang-questions] ** exception error: no function clause matching test_calculate:validate(["1", "+", "1"], []) (test_calculate.erl, line 11) In-Reply-To: <560C02DC.4090800@home.nl> References: <560C02DC.4090800@home.nl> Message-ID: <18B0CC5F-AC4C-4B09-BC15-31EDCF283569@cs.otago.ac.nz> On 1/10/2015, at 4:42 am, Roelof Wobben wrote: > > -module(test_calculate). > > -export([validate/1]). > > scan(String) -> > validate(string:tokens(String, " ")). I'm going to use an entirely informal type notation here. I am doing this to emphasise that you not only do not need to understand a language's type checker, there doesn't even need to BE a type checker for the language for YOU to check that the types make sense. string:tokens(Source :: , Separator :: ) :: list. You can verify this by calling string:tokens/2 in the Erlang shell. 1> string:tokens("1 + 1", " "). ["1","+","1"] You pass the result of string:tokens/2 to validate/1, so we must have scan(Source :: ) :: SOMETHING. validate(Tokens :: list) :: SOMETHING. That is, scan/1 returns whatever validate/1 returns, and validate/1 is given a LIST OF STRINGS as its argument. > > validate(String) -> > validate(String, []). Right here alarm bells go off. We know from the call that the argument is not a string. It is a LIST of strings. But the argument *NAME* says 'String'. validate([Head | Tail], Validated_list) when Head >= $0 , Head =< $9 -> > validate(Tail, [Head | Validated_list]); Here you are comparing an element of validate's argument as if you are expecting a character. 
But Head is a STRING. It is also a concern to me that you do not have any comment saying what it *means* for a list of strings to be valid. 2> c(test_calculate). test_calculate.erl:4: Warning: function scan/1 is unused test_calculate.erl:19: Warning: function test/0 is unused {ok,test_calculate} It looks as though you forgot to -export([test/0]). > > validate([Head | Tail], Validated_list) when Head =:= $+ -> > validate(Tail, [Head | Validated_list]); > > validate([], Validated_list) -> > Validated_list. > > test() -> > ["1","+","1"] = scan("1 + 1"), > ["10", "+", "1"] = scan("10 + 1"), > ["1", "+", "10"] = scan("1 + 10"), > io:format("All tests are green ~n"). > > but as soon as I try : > > test_calculate:validate(["1","+", "1"]) > > I see the above error > > Someone a tip where things are going the wrong way ? The error message is pretty explicit. You have a call validate(["1","+","1"], []). No clause matches that call. You expected that some clause *would* match it. So what you do is compare each clause in turn with the call to see why *that* clause did not match. The first clause looks for a CHARACTER between $0 and $9 but it finds the STRING "1". 4> {"1" >= $0, "1" =< $9}. {true,false} So that clause does not match. The second clause looks for the CHARACTER $+ but it finds the STRING "1". So that clause doesn't match. The last clause looks for the empty list, but it finds a list of three strings. So that clause doesn't match. And we've found out what looking at scan/1 and validate/1 told us straight away, that you are muddling up strings and characters. So let's fix that. -module(test_calculate). -export([test/0, validate/1]). scan(String) -> Tokens = string:tokens(String, " "), case validate(Tokens) of ok -> {ok,Tokens} ; Err -> Err end. %% A Token_List is valid if and only if it contains %% number tokens and operator tokens, where a number %% token is a non-empty string of digits and currently %% an operator token is just "+". We either report %% 'ok' for a valid list or {error,Reason,Culprit} for invalid. validate([]) -> ok; validate(["+"|Tokens]) -> validate(Tokens); validate([Num|Tokens]) -> case is_number_token(Num, 0) of true -> validate(Tokens) ; false -> {error,"not a number or operator",Num} end; validate(Other) -> {error,"not a list",Other}. %% is_number_token(String, N) is true if and only if %% String is a list of ASCII decimal digit characters %% and N + length(String) > 0. is_number_token([], N) -> N > 0; is_number_token([C|Cs], N) when C =< $9, C >= $0 -> is_number_token(Cs, N+1); is_number_token(_, _) -> false. test() -> {ok,["1","+","1"] } = scan("1 + 1"), {ok,["10", "+", "1"]} = scan("10 + 1"), {ok,["1", "+", "10"]} = scan("1 + 10"), io:format("All tests are green ~n"). The single most important step in getting that right was WRITING THE COMMENTS. From mrtndimitrov@REDACTED Thu Oct 1 07:05:59 2015 From: mrtndimitrov@REDACTED (Martin Koroudjiev) Date: Thu, 1 Oct 2015 08:05:59 +0300 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: <17396374.bC6hWqK8dy@changa> References: <560A4698.4070609@gmail.com> <4558184.1nhmif8cUn@changa> <17396374.bC6hWqK8dy@changa> Message-ID: <560CBF37.2020704@gmail.com> Thanks all! I all makes sense. I know the example is quite naive but was trying to show code that produces the warning. Regards, Martin On 9/29/2015 2:58 PM, zxq9 wrote: > On 2015?9?29? ??? 20:42:19 zxq9 wrote: >> On 2015?9?29? ??? 
11:06:48 Martin Koroudjiev wrote: >>> Hello, >>> >>> I am confused why this code generates warning with the warn_export_vars >>> option: >> This has been discussed several times (here, for example: http://erlang.org/pipermail/erlang-questions/2014-March/078017.html), and because of the weirdness of case scope being one way (its a scope semantically and conceptually, but its not really) and list comprehensions being another (it really is a separate scope, but that's not how some other languages work) this has been made into a warning. > Martin, > > By the way -- something I should have mentioned is that this is almost never a problem in practice because the normal way to deal with lots of declarations within a `case` (usually because you have a large chain of case statements within a single function, so lots of bindings lay around in scope or collide) is to break most of this stuff out into separate functions. > > For example, your original code is something I don't think anyone would write: > >> test(Mode) -> >> case Mode of >> r -> {ok, FP} = file:open("warn.erl", [read]); >> w -> {ok, FP} = file:open("warn.erl", [write]) >> end, >> file:close(FP). > Would be more like: > > test(Mode) -> > {ok, FD} = file:open("warn.erl", [mode(Mode)]), > ok = do_stuff(FD), > file:close(FD). > > mode(r) -> read; > mode(w) -> write. > > This bothered me initially because it is hard to imagine this being more natural any other way -- and its pretty weird to need 'r' or 'w' instead of 'read' or 'write' to begin with. But contrived examples are difficult to compare to real situations sometimes. > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From ok@REDACTED Thu Oct 1 08:08:08 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 1 Oct 2015 19:08:08 +1300 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: <560A4698.4070609@gmail.com> References: <560A4698.4070609@gmail.com> Message-ID: <3A357448-C57B-47C2-9841-065B5025652B@cs.otago.ac.nz> On 29/09/2015, at 9:06 pm, Martin Koroudjiev wrote: > test(Mode) -> > case Mode of > r -> {ok, FP} = file:open("warn.erl", [read]); > w -> {ok, FP} = file:open("warn.erl", [write]) > end, > file:close(FP). [Why is this considered worse than this rewrite?] > > test(Mode) -> > {ok, FP} = > case Mode of > r -> file:open("warn.erl", [read]); > w -> file:open("warn.erl", [write]) > end, > file:close(FP). You will get a diversity of opinions on this. In general, I think that *avoiding* variable exports from case, if, receive, &c can lead to contorted code. In this specific instance, however, the original code violates the "once and only once" principle. I'd even prefer to see test(Mode) -> {ok, FP} = file:open("warn.erl", full_mode(Mode)), file:close(FP). full_mode(r) -> [read]; full_mode(w) -> [write]. The kind of thing that I'd call contorted is avoiding case foo(...) of {bar,X,Y} -> ... ; {ugh,Y,X} -> ... end, use(X, Y) by writing {X,Y} = case foo(...) of {bar,X1,Y1} -> ..., {X1,Y1} ; {ugh,Y2,X2} -> ..., {X2,Y2} end, use(X, Y) Stuffing things into a data structure just so that you can *immediately* pull them out again is obfuscation in my book, and renaming what are going to be the same variables is also obfuscation. It's better to look for a somewhat larger refactoring, if you want to avoid this annoyance. 
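One shape that larger refactoring often takes is to give each result shape its own function clause, so the branching and the use of X and Y sit in the same clause and nothing has to escape from a case at all. This is only a sketch: foo/1, use/2 and the bar/ugh tags are the placeholder names from the example above, and do_bar_things/0 / do_ugh_things/0 stand in for whatever work each branch really did.

    handle(Input) ->
        dispatch(foo(Input)).

    %% One clause per shape of foo/1's result.
    dispatch({bar, X, Y}) ->
        do_bar_things(),
        use(X, Y);
    dispatch({ugh, Y, X}) ->
        do_ugh_things(),
        use(X, Y).
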
From r.wobben@REDACTED Thu Oct 1 08:15:21 2015 From: r.wobben@REDACTED (Roelof Wobben) Date: Thu, 1 Oct 2015 08:15:21 +0200 Subject: [erlang-questions] ** exception error: no function clause matching test_calculate:validate(["1", "+", "1"], []) (test_calculate.erl, line 11) In-Reply-To: <18B0CC5F-AC4C-4B09-BC15-31EDCF283569@cs.otago.ac.nz> References: <560C02DC.4090800@home.nl> <18B0CC5F-AC4C-4B09-BC15-31EDCF283569@cs.otago.ac.nz> Message-ID: <560CCF79.1070702@home.nl> Op 1-10-2015 om 05:42 schreef Richard A. O'Keefe: > On 1/10/2015, at 4:42 am, Roelof Wobben wrote: >> -module(test_calculate). >> >> -export([validate/1]). >> >> scan(String) -> >> validate(string:tokens(String, " ")). > I'm going to use an entirely informal type notation here. > I am doing this to emphasise that you not only do not need > to understand a language's type checker, there doesn't even > need to BE a type checker for the language for YOU to check > that the types make sense. > > string:tokens(Source :: , Separator :: ) :: list. > > You can verify this by calling string:tokens/2 in the Erlang shell. > > 1> string:tokens("1 + 1", " "). > ["1","+","1"] > > You pass the result of string:tokens/2 to validate/1, so we must have > > scan(Source :: ) :: SOMETHING. > validate(Tokens :: list) :: SOMETHING. > > That is, scan/1 returns whatever validate/1 returns, > and validate/1 is given a LIST OF STRINGS as its argument. > >> validate(String) -> >> validate(String, []). > Right here alarm bells go off. We know from the call > that the argument is not a string. It is a LIST of strings. > But the argument *NAME* says 'String'. > > validate([Head | Tail], Validated_list) when Head >= $0 , Head =< $9 -> >> validate(Tail, [Head | Validated_list]); > Here you are comparing an element of validate's argument > as if you are expecting a character. But Head is a STRING. > > It is also a concern to me that you do not have any comment > saying what it *means* for a list of strings to be valid. > > 2> c(test_calculate). > test_calculate.erl:4: Warning: function scan/1 is unused > test_calculate.erl:19: Warning: function test/0 is unused > {ok,test_calculate} > > It looks as though you forgot to -export([test/0]). >> validate([Head | Tail], Validated_list) when Head =:= $+ -> >> validate(Tail, [Head | Validated_list]); >> >> validate([], Validated_list) -> >> Validated_list. >> >> test() -> >> ["1","+","1"] = scan("1 + 1"), >> ["10", "+", "1"] = scan("10 + 1"), >> ["1", "+", "10"] = scan("1 + 10"), >> io:format("All tests are green ~n"). >> >> but as soon as I try : >> >> test_calculate:validate(["1","+", "1"]) >> >> I see the above error >> >> Someone a tip where things are going the wrong way ? > The error message is pretty explicit. > You have a call > validate(["1","+","1"], []). > No clause matches that call. > > You expected that some clause *would* match it. > So what you do is compare each clause in turn with > the call to see why *that* clause did not match. > > The first clause looks for a CHARACTER between > $0 and $9 but it finds the STRING "1". > 4> {"1" >= $0, "1" =< $9}. > {true,false} > > So that clause does not match. > > The second clause looks for the CHARACTER $+ > but it finds the STRING "1". > > So that clause doesn't match. > > The last clause looks for the empty list, > but it finds a list of three strings. > > So that clause doesn't match. > > And we've found out what looking at scan/1 and > validate/1 told us straight away, that you are > muddling up strings and characters. > > So let's fix that. 
> > -module(test_calculate). > -export([test/0, validate/1]). > > scan(String) -> > Tokens = string:tokens(String, " "), > case validate(Tokens) > of ok -> {ok,Tokens} > ; Err -> Err > end. > > %% A Token_List is valid if and only if it contains > %% number tokens and operator tokens, where a number > %% token is a non-empty string of digits and currently > %% an operator token is just "+". We either report > %% 'ok' for a valid list or {error,Reason,Culprit} for invalid. > > validate([]) -> > ok; > validate(["+"|Tokens]) -> > validate(Tokens); > validate([Num|Tokens]) -> > case is_number_token(Num, 0) > of true -> validate(Tokens) > ; false -> {error,"not a number or operator",Num} > end; > validate(Other) -> > {error,"not a list",Other}. > > %% is_number_token(String, N) is true if and only if > %% String is a list of ASCII decimal digit characters > %% and N + length(String) > 0. > > is_number_token([], N) -> > N > 0; > is_number_token([C|Cs], N) > when C =< $9, C >= $0 -> > is_number_token(Cs, N+1); > is_number_token(_, _) -> > false. > > test() -> > {ok,["1","+","1"] } = scan("1 + 1"), > {ok,["10", "+", "1"]} = scan("10 + 1"), > {ok,["1", "+", "10"]} = scan("1 + 10"), > io:format("All tests are green ~n"). > > The single most important step in getting that right was > WRITING THE COMMENTS. > > > > > ----- > Geen virus gevonden in dit bericht. > Gecontroleerd door AVG - www.avg.com > Versie: 2015.0.6140 / Virusdatabase: 4419/10729 - datum van uitgifte: 09/30/15 > > Thanks for the lesson. One thing is not clear to me. What is the meaning of N. I see that N is equal to zero or to one. Roelof From ok@REDACTED Thu Oct 1 08:28:23 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 1 Oct 2015 19:28:23 +1300 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: <4558184.1nhmif8cUn@changa> References: <560A4698.4070609@gmail.com> <4558184.1nhmif8cUn@changa> Message-ID: On 30/09/2015, at 12:42 am, zxq9 wrote: > > The reason this has been made into a warning (and many tools compile with warnings-as-errors set) is that it is possible to not assign a variable you access outside the case in every branch of it and still get a clean compile: > > foo() -> > case bar() of > {bing, Spam} -> eat(Spam); > {bong, Eggs} -> eat(Eggs) > end, > puke(Spam). No, that one's an outright error because Spam is not defined in *all* branches. It's really misleading to even talk about 'exporting' variables. Before there were funs or list comprehensions, the model was incredibly simple: (a) the scope of a variable is the ENTIRE clause it appears in (b) at any use of a variable, every path from the entry to that use must bind the variable once and only once. (Yuck. I just realised that (a) is an uncomfortably close parallel to JavaScript.) The code above would be just as wrong in Java and for the same reason. funs and list comprehensions break this simple model (in a slightly broken way, what's more; the resulting model is so far from simple that I shan't try to summarise it). But 'if', 'case', 'receive', 'try' have NEVER introduced new scopes in the past and don't know. I'm reminded of programming languages like IMP and Ada where you are not allowed to write p() and q() or r() because as a programmer you are presumed to be too dumb to work effectively with the concept of operator precedence as applied to Boolean operators. 
> > This has been discussed several times (here, for example: http://erlang.org/pipermail/erlang-questions/2014-March/078017.html), and because of the weirdness of case scope being one way (its a scope semantically and conceptually, but its not really) Case is *NOT* a scope semantically, conceptually, or any other way, and never has been. The *only* nested scopes in Erlang are the late-comers, funs and list comprehensions, and in both cases they are separate scopes for a very simple semantic reason: variables introduced in a list comprehension may be bound zero or more times while the comprehension is being executed. Note the zero: If I do [{X = foo(),X} || Y <- List] the List might be empty and Y and X might *never* be bound to anything. We *could* have a dialect of Erlang in which list comprehensions were NOT an exception to the single-scope model, but since such variables might never be initialised, they would not be available outside. And you would not be allowed to redefine them because they *might* have been bound. variables introduced in a fun may be bound zero or more times, depending on whether and how often the fun is called. Note the zero: If I do F = fun (X} -> Y = X+2, ok end the function F might *never* be called and so X and Y would never get values. We *could* have a dialect of Erlang in which funs were NOT an exception to the single-scope model, but since variables introduced in funs might never be initialised,, they would not be available outside. And you would not be allowed to redefine them because they *might* have been bound. > and list comprehensions being another (it really is a separate scope, but that's not how some other languages work) this has been made into a warning. I'm tolerably familiar with Clean, Haskell, F#, and Erlang. They all make list comprehensions a scope. The only declarative-family language I can think of where there is an analogue of list comprehension that isn't a scope is Prolog (setof/3 and bagof/3), and that works because Prolog is perfectly comfortable with variables that might or might not be bound. So which "other languages" make list comprehension not a scope? > > I remember the warning/error discussion happening, but can't find the notes for it (I had thought the "this is always a warning" thing came up with R16 or 17...?). :-( Anyway, what I remember of it was the danger of not assiging a variable in every branch, That's a quite different issue. The issue we're talking about here is where a variable is unambiguously defined in EVERY branch of a branching construct yet the compiler whines when you try to use it. From ok@REDACTED Thu Oct 1 08:41:35 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 1 Oct 2015 19:41:35 +1300 Subject: [erlang-questions] ** exception error: no function clause matching test_calculate:validate(["1", "+", "1"], []) (test_calculate.erl, line 11) In-Reply-To: <560CCF79.1070702@home.nl> References: <560C02DC.4090800@home.nl> <18B0CC5F-AC4C-4B09-BC15-31EDCF283569@cs.otago.ac.nz> <560CCF79.1070702@home.nl> Message-ID: <78083463-E1C8-4316-AA13-5F45814525E8@cs.otago.ac.nz> On 1/10/2015, at 7:15 pm, Roelof Wobben wrote: > One thing is not clear to me. What is the meaning of N. I see that N is equal to zero or to one. > %% is_number_token(String, N) is true if and only if > %% String is a list of ASCII decimal digit characters > %% and N + length(String) > 0. The comment was supposed to make it tolerably clear. I hope you *don't* see that, because it isn't true. 
Think of the original token as Seen_Part ++ Unseen_Part In is_number_token(String, N), String = Unseen_Part N = length(Seen_Part) (Recall that the update was not 1 but N+1. So N had to be counting *something*.) Here's what I might have done in Haskell: scan :: String -> Maybe [String] scan cs = if valid_tokens ts then Just ts else Nothing where valid_tokens ts = all valid_token ts valid_token t = t == "+" || not (null t) && all isDigit t It's easy to express universal quantification by a loop that checks each element of a list. We _almost_ always want a universal quantification over an empty set to be true; this is one of the rare cases where we want it to be false and have to do something special to guard against it. The counting pattern is useful for identifiers: is_identifier_token(String) -> is_identifier_token(String, 0). is_identifier_token([], N) -> N > 0; is_identifier_token([C|Cs], N) when C =< $9, C >= $0, N > 0 -> is_identifier_token(Cs, N+1); is_identifier_token([C|Cs], N) when (C bor 32) >= $a, (C bor 32) =< $z -> is_identifier_token(Cs, N+1); is_identifier_token([$_|Cs], N) when N > 0 -> is_identifier_token(Cs, N+1); is_identifier_token(_, _) -> false. This allows underscores and digits, but not at the beginning. And it doesn't allow empty identifiers. From ulf@REDACTED Thu Oct 1 08:48:17 2015 From: ulf@REDACTED (Ulf Wiger) Date: Thu, 1 Oct 2015 08:48:17 +0200 Subject: [erlang-questions] QR Code Generator In-Reply-To: References: Message-ID: On my Mac (2.5 GHZ Intel Core i7), using all cores, I get the same: Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] Eshell V5.10.4 (abort with ^G) 1> Data = list_to_binary(lists:duplicate(256,$x)). <<"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"...>> 2> timer:tc(fun() -> lists:foreach(fun(_) -> QR = qrcode:encode(Data), qrcode_demo:simple_png_encode(QR) end, lists:seq(1,5)) end). {1491915,ok} If, OTOH, I do this: 3> PEval = fun(F,N) -> Ps = [spawn_monitor(fun() -> exit({ok,F()}) end) || _ <- lists:seq(1,N)], [receive {'DOWN',Ref,_,_,{ok,R}} -> R after 10000 -> error(timeout) end || {_,Ref} <- Ps] end. #Fun 4> timer:tc(fun() -> PEval(fun() -> QR = qrcode:encode(Data), qrcode_demo:simple_png_encode(QR) end, 5) end). {357630, ?} Then, the performance seems to be really there. Is there something I?m missing, that they have to be generated sequentially? BR, Ulf > On 01 Oct 2015, at 00:28, Tony Rogvall wrote: > > Data = list_to_binary(lists:duplicate(256,$x)). > timer:tc(fun() -> lists:foreach(fun(_) -> QR = qrcode:encode(Data), qrcode_demo:simple_png_encode(QR) end, lists:seq(1,5)) end). > {1401102,ok} > > The performance is nearly there ( running on my mac using only one core ) > > /Tony > >> On 1 okt 2015, at 00:13, Theepan > wrote: >> >> Team, >> >> I am in need of a QR code generator library for Erlang, to generate QR code PNG images of 256 alpha numeric characters. >> >> When I searched the Internet, I found one on GitHub. Have anyone of you used it? Do you have any recommendations? >> >> The performance I need is 5 QR code images generation per second. 
>> >> Thanks, >> Theepan >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.wobben@REDACTED Thu Oct 1 08:50:40 2015 From: r.wobben@REDACTED (Roelof Wobben) Date: Thu, 1 Oct 2015 08:50:40 +0200 Subject: [erlang-questions] ** exception error: no function clause matching test_calculate:validate(["1", "+", "1"], []) (test_calculate.erl, line 11) In-Reply-To: <78083463-E1C8-4316-AA13-5F45814525E8@cs.otago.ac.nz> References: <560C02DC.4090800@home.nl> <18B0CC5F-AC4C-4B09-BC15-31EDCF283569@cs.otago.ac.nz> <560CCF79.1070702@home.nl> <78083463-E1C8-4316-AA13-5F45814525E8@cs.otago.ac.nz> Message-ID: <560CD7C0.4010402@home.nl> Op 1-10-2015 om 08:41 schreef Richard A. O'Keefe: > On 1/10/2015, at 7:15 pm, Roelof Wobben wrote: >> One thing is not clear to me. What is the meaning of N. I see that N is equal to zero or to one. >> %% is_number_token(String, N) is true if and only if >> %% String is a list of ASCII decimal digit characters >> %% and N + length(String) > 0. > The comment was supposed to make it tolerably clear. > I hope you *don't* see that, because it isn't true. > > Think of the original token as > Seen_Part ++ Unseen_Part > In is_number_token(String, N), > String = Unseen_Part > N = length(Seen_Part) > > (Recall that the update was not 1 but N+1. So N had to be > counting *something*.) > > Here's what I might have done in Haskell: > > scan :: String -> Maybe [String] > scan cs = if valid_tokens ts then Just ts else Nothing > where valid_tokens ts = all valid_token ts > valid_token t = t == "+" || not (null t) && all isDigit t > > It's easy to express universal quantification by a loop that > checks each element of a list. We _almost_ always want a > universal quantification over an empty set to be true; this is > one of the rare cases where we want it to be false and have to > do something special to guard against it. > > The counting pattern is useful for identifiers: > > is_identifier_token(String) -> > is_identifier_token(String, 0). > > is_identifier_token([], N) -> > N > 0; > is_identifier_token([C|Cs], N) > when C =< $9, C >= $0, N > 0 -> > is_identifier_token(Cs, N+1); > is_identifier_token([C|Cs], N) > when (C bor 32) >= $a, (C bor 32) =< $z -> > is_identifier_token(Cs, N+1); > is_identifier_token([$_|Cs], N) > when N > 0 -> > is_identifier_token(Cs, N+1); > is_identifier_token(_, _) -> > false. > > This allows underscores and digits, but not at the beginning. > And it doesn't allow empty identifiers. > > > > > ----- > Geen virus gevonden in dit bericht. > Gecontroleerd door AVG - www.avg.com > Versie: 2015.0.6140 / Virusdatabase: 4419/10735 - datum van uitgifte: 10/01/15 > > > Oke, next task for me, is finding out how I can convert the strings that contain number to real numbers, So "10" will become 10. I think again iterate through the list , looking for strings that contain numbers and then do the math. 
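(For what it is worth, the conversion itself is one call per token to the standard BIF list_to_integer/1. A minimal sketch, where to_numbers/1 and to_number/1 are made-up names and "+" is kept as an atom rather than converted:

    to_numbers(Tokens) ->
        [to_number(T) || T <- Tokens].

    to_number("+") -> '+';                  %% keep the operator token
    to_number(T)   -> list_to_integer(T).   %% "10" becomes 10

so to_numbers(["1", "+", "10"]) gives [1, '+', 10].)
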
Roelof From zxq9@REDACTED Thu Oct 1 10:00:38 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 01 Oct 2015 17:00:38 +0900 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: References: <560A4698.4070609@gmail.com> <4558184.1nhmif8cUn@changa> Message-ID: <39021272.nUAaYpW9fM@burrito> On Thursday 01 October 2015 19:28:23 you wrote: > > On 30/09/2015, at 12:42 am, zxq9 wrote: > > > > The reason this has been made into a warning (and many tools compile with warnings-as-errors set) is that it is possible to not assign a variable you access outside the case in every branch of it and still get a clean compile: > > > > foo() -> > > case bar() of > > {bing, Spam} -> eat(Spam); > > {bong, Eggs} -> eat(Eggs) > > end, > > puke(Spam). > > No, that one's an outright error because Spam is not defined > in *all* branches. I didn't realize this actually produces a compile error, but indeed it does. casetest.erl:9: variable 'Spam' unsafe in 'case' (line 5) > It's really misleading to even talk about 'exporting' variables. > Before there were funs or list comprehensions, the model was > incredibly simple: > > (a) the scope of a variable is the ENTIRE clause it appears in > (b) at any use of a variable, every path from the entry to that > use must bind the variable once and only once. > > (Yuck. I just realised that (a) is an uncomfortably close > parallel to JavaScript.) This is the general sentiment that seemed to lead to this becoming a warning. The example you presented earlier, though, illustrated the anti-case: {X,Y} = case foo(...) of {bar,X1,Y1} -> ..., {X1,Y1} ; {ugh,Y2,X2} -> ..., {X2,Y2} end, use(X, Y) Clearly this is awful compared to just referencing the variables directly. But... I almost never see the above in actual code. Usually something like: foo({bar,X,Y}) -> bar_related_thing({X,Y}); foo({ugh,X,Y}) -> ugh_related_thing({X,Y}). Almost invariably with some other state variable coming along for the ride. Considering this more carefully now I don't think either the aesthetics of case statements or whether "exporting" is confusing are important issues. This rarely seems to come up in actual code. I can't think of a place similar to the example above where I don't want a separate function instead of a case. That's probably why I (most folks?) have happily plugged away all this time without ever really noticing this. > I'm reminded of programming languages like IMP and Ada where > you are not allowed to write > p() and q() or r() > because as a programmer you are presumed to be too dumb to > work effectively with the concept of operator precedence as > applied to Boolean operators. Hey! I actually kind of like Ada. :-) There is a balance between providing flexibility and providing constructs that almost encourage programmers to silently drop little landmines in their code. My initial (wrong) assumption that case is supposed to be treated as its own semantic scope (which is why I had always thought the warnings were there, and previous discussions here tended to make me think I wasn't alone in expecting them to work that way) made me feel that referencing variables bound in a case is hackish and dirty -- something that might even break someday if the rule changed. But it is not a particularly confusing idea: Just one scope. I didn't recall the line in the docs that says, explicitly: "The scope for a variable is its function clause. Variables bound in a branch of an if, case, or receive expression must be bound in all branches to have a value outside the expression. 
Otherwise they are regarded as 'unsafe' outside the expression." It makes me feel sad, all the same. I like limited scope and knowing for sure that it is limited. OTOH, I don't like nested code. Forcing me to accept the return value of a case statement to get anything "out" of it reduces the impulse to pack functions full of cases instead of write separate functions. I suppose that's a language design decision anyway, and it is a decision that has already been made -- just not in the way I would have expected. > > and list comprehensions being another (it really is a separate scope, but that's not how some other languages work) this has been made into a warning. > > I'm tolerably familiar with Clean, Haskell, F#, and Erlang. > They all make list comprehensions a scope. > The only declarative-family language I can think of where there > is an analogue of list comprehension that isn't a scope is Prolog > (setof/3 and bagof/3), and that works because Prolog is perfectly > comfortable with variables that might or might not be bound. > So which "other languages" make list comprehension not a scope? Not declarative ones, imperative languages that include this or that generator/comprehension feature and are familiar with the cool kids. Python, for example: >>> [x for x in [1,2,3]] [1, 2, 3] >>> x 3 I don't think this is addressed in any PEPs, recommended style notes or anything like that. I *do* recall having seen it used more than once to recall the last value of a dynamically generated list after some other processing occurred. (I think in the belly of an XML template language... Cheetah? Django templates? Genji? Something like this) There is (was?) a way to make something similar happen in Ruby as well, but that language is a big pile of curious decisions anyway. I can't seem to find the syntax for it now. In the Javascript recommendation for list comprehensions the situation is more weird: y = [for (z of [1,2,3]) x = z]; x; /* y = [1,2,3] x = 3 */ Both x and y are accessible now, but z is not. Not that Javascript or Ruby (or some aspects of Python) are great examples of clean language design, but when I referenced "how some other languages work" implying that people might expect scopes to work this way with list comprehensions but maybe work the opposite way in case statements (in light of the compiler warning) this is what I meant. Comprehensions in these sort of languages are more like wacky syntax over for loops with a few opportunities (apparently) for optimization. (Some operations in list comprehensions in Python are much faster than in an equivalent for loop.) I imagine people expect them to be similar in Erlang, especially considering that using unassigned list comprehensions as a shorthand for lists:foreach/2 specifically to get side-effects is now actually supported as a an optimization. ...not that expectations borne of Javascript experience are things worth living up to. > That's a quite different issue. The issue we're talking about > here is where a variable is unambiguously defined in EVERY branch > of a branching construct yet the compiler whines when you try to use it. After this discussion I feel like the warning should be removed entirely. 
"Just a compiler warning" has always struck me as an uncomfortably vague category of "technically right, but we really don't like things that way" that makes a programmer feel guilty about valid code (not to mention sparking discussions like this one several times a year -- though I do enjoy being proven wrong/learning details here I would probably never have stumbled on writing code by myself). -Craig From ahe.sanath@REDACTED Thu Oct 1 11:01:25 2015 From: ahe.sanath@REDACTED (Sanath Prasanna) Date: Thu, 1 Oct 2015 14:31:25 +0530 Subject: [erlang-questions] Fwd: Unexpected behavior of HTTPC module - tls_connection process take more memory In-Reply-To: References: Message-ID: Hi all, Any update on this? Br, Robert On Mon, Sep 28, 2015 at 5:12 PM, Sanath Prasanna wrote: > Hi all, > Any update on this? > Br, > Robert > > On Fri, Sep 18, 2015 at 2:11 PM, Eranga Udesh > wrote: > >> Hi, >> >> I am experiencing the same issue in, >> >> - Erlang version : V6.4 , (Erlang/OTP 17 [erts-6.4] [source] [64-bit]) >> - OS version/architecture (32/64 bit) : Red Hat Enterprise Linux >> Server release 6.6 (64 bit) >> >> The issue doesn't come always but after running the system for 3-10 days >> can experience the tls_connection instances making high reductions and >> consuming high memory as sent by Sanath. Since it's a production system, I >> wasn't able to do much experiments, but applied the patches you sent, which >> didn't solve this issue. >> >> There are 2 servers running Erlang VMs making SSL requests to the same >> HTTPS host. When this issue happens, it happens in both the VMs. So it >> could be triggered by some conditions in SSL or network connection. Even >> though I left the VMs to run for a while to see if they can recover, but no >> success. However if I restart the VMs, it starts to run as normal. >> >> So the summary is, >> >> - It triggers by some condition in remote SSL host or network >> connection >> - Erlang VMs don't recover itself >> - Once restarted, it start working, which implies the issue is local. >> i.e. in Erlang VM (tls_connection) >> >> We will try to recreate it and send you more details. I wonder if anybody >> else is experiencing such issue. >> >> Tks, >> - Eranga >> >> >> >> >> >> >> On Fri, Sep 18, 2015 at 1:36 PM, Ingela Andin >> wrote: >> >>> Hi! >>> >>> As we are very busy with the release 18.1 I have not had time to try and >>> recreate your problem. What version of OTP and the ssl application are you >>> using? >>> Can you reproduce the problem with the latest on github? >>> >>> Regards Ingela Erlang/OTP Team - Ericsson AB >>> >>> >>> On Thu, Sep 17, 2015 at 8:05 PM, Sanath Prasanna >>> wrote: >>> >>>> Hi Ingela, >>>> Any update on this?? >>>> "Even apply your patch, *still problem is persist.* Any >>>> more suggestions to solve this unexpected behavior ? " >>>> Br, >>>> Robert >>>> >>>> On Thu, Sep 17, 2015 at 12:39 PM, Sanath Prasanna >>> > wrote: >>>> >>>>> Hi Ingela, >>>>> Even apply your patch, *still problem is persist.* Any >>>>> more suggestions to solve this unexpected behavior ? >>>>> Br, >>>>> Robert >>>>> >>>>> On Tue, Sep 8, 2015 at 1:52 PM, Sanath Prasanna >>>>> wrote: >>>>> >>>>>> Hi Ingela. >>>>>> Tx a lot for your help & patch related to that.I'll inform you the >>>>>> result after applying & testing patch. >>>>>> Br, >>>>>> Robert >>>>>> >>>>>> On Tue, Sep 8, 2015 at 1:31 PM, Ingela Andin >>>>>> wrote: >>>>>> >>>>>>> Hi! >>>>>>> >>>>>>> It could be an bug that in the ssl application that I just fixed. 
>>>>>>> The default session cache >>>>>>> was violating the API, and this in turn made the mechanism for not >>>>>>> registering a lot of equivalent >>>>>>> sessions in the client fail. >>>>>>> >>>>>>> Here is the patch: >>>>>>> >>>>>>> >>>>>>> diff --git a/lib/ssl/src/ssl_session.erl >>>>>>> b/lib/ssl/src/ssl_session.erl >>>>>>> index 1770faf..0d6cc93 100644 >>>>>>> --- a/lib/ssl/src/ssl_session.erl >>>>>>> +++ b/lib/ssl/src/ssl_session.erl >>>>>>> @@ -100,14 +100,14 @@ select_session([], _, _) -> >>>>>>> no_session; >>>>>>> select_session(Sessions, #ssl_options{ciphers = Ciphers}, OwnCert) >>>>>>> -> >>>>>>> IsNotResumable = >>>>>>> - fun([_Id, Session]) -> >>>>>>> + fun(Session) -> >>>>>>> not (resumable(Session#session.is_resumable) andalso >>>>>>> lists:member(Session#session.cipher_suite, Ciphers) >>>>>>> andalso (OwnCert == Session#session.own_certificate)) >>>>>>> end, >>>>>>> case lists:dropwhile(IsNotResumable, Sessions) of >>>>>>> [] -> no_session; >>>>>>> - [[Id, _]|_] -> Id >>>>>>> + [Session | _] -> Session#session.session_id >>>>>>> end. >>>>>>> >>>>>>> is_resumable(_, _, #ssl_options{reuse_sessions = false}, _, _, _, >>>>>>> _) -> >>>>>>> diff --git a/lib/ssl/src/ssl_session_cache.erl >>>>>>> b/lib/ssl/src/ssl_session_cach >>>>>>> e.erl >>>>>>> index 11ed310..cfc48cd 100644 >>>>>>> --- a/lib/ssl/src/ssl_session_cache.erl >>>>>>> +++ b/lib/ssl/src/ssl_session_cache.erl >>>>>>> @@ -83,7 +83,7 @@ foldl(Fun, Acc0, Cache) -> >>>>>>> >>>>>>> %%-------------------------------------------------------------------- >>>>>>> select_session(Cache, PartialKey) -> >>>>>>> ets:select(Cache, >>>>>>> - [{{{PartialKey,'$1'}, '$2'},[],['$$']}]). >>>>>>> + [{{{PartialKey,'_'}, '$1'},[],['$1']}]). >>>>>>> >>>>>>> >>>>>>> %%-------------------------------------------------------------------- >>>>>>> %%% Internal functions >>>>>>> >>>>>>> >>>>>>> Regards Ingela Erlang/OTP team - Ericsson AB >>>>>>> >>>>>>> >>>>>>> 2015-09-07 5:52 GMT+02:00 Sanath Prasanna : >>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> I am running HTTP client using httpc Module to send both http and >>>>>>>> https requests. normally sending arround 300 request per second without any >>>>>>>> issue. however sometimes erlang node become very slow responsive. at that >>>>>>>> time server load average is very high and using etop can identify >>>>>>>> "tls_connection" process take more memory. when restart the erlang node its >>>>>>>> become normal. as per my investigation normal time memory, processors, >>>>>>>> loadAverage is not increasing. 
following is the HTTP request config >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> httpc:request(Method, Request, [{timeout, TimeoutTime}], [{sync, >>>>>>>> false}]) >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> below is the etop output at that time >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> procs 1134 processes 1504844 >>>>>>>> code 9309 >>>>>>>> >>>>>>>> runq 0 atom 420 >>>>>>>> ets 29692 >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Pid Name or Initial Func Time Reds Memory MsgQ >>>>>>>> Current Function >>>>>>>> >>>>>>>> >>>>>>>> ---------------------------------------------------------------------------------------- >>>>>>>> >>>>>>>> <5490.26428.14>tls_connection:init/ '-' 733224580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26429.14>tls_connection:init/ '-' 1328524580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26430.14>tls_connection:init/ '-' 528924580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26431.14>tls_connection:init/ '-' 1432224580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26432.14>tls_connection:init/ '-' 024580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26433.14>tls_connection:init/ '-' 024580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26434.14>tls_connection:init/ '-' 024580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26435.14>tls_connection:init/ '-' 024580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> <5490.26436.14>tls_connection:init/ '-' 024580768 0 >>>>>>>> gen_fsm:loop/7 >>>>>>>> >>>>>>>> >>>>>>>> can some one help me to solve this issue? >>>>>>>> >>>>>>>> Br, >>>>>>>> >>>>>>>> A.H.E. Robert >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> erlang-questions mailing list >>>>>>>> erlang-questions@REDACTED >>>>>>>> http://erlang.org/mailman/listinfo/erlang-questions >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vasdeveloper@REDACTED Thu Oct 1 11:09:38 2015 From: vasdeveloper@REDACTED (Theepan) Date: Thu, 1 Oct 2015 14:39:38 +0530 Subject: [erlang-questions] QR Code Generator In-Reply-To: References: Message-ID: That seems awesome.. Thank you guys! Theepan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Thu Oct 1 11:35:09 2015 From: ok@REDACTED (ok@REDACTED) Date: Thu, 1 Oct 2015 22:35:09 +1300 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: <39021272.nUAaYpW9fM@burrito> References: <560A4698.4070609@gmail.com> <4558184.1nhmif8cUn@changa> <39021272.nUAaYpW9fM@burrito> Message-ID: <230c74b6bfacc47940a7badde2650a3c.squirrel@chasm.otago.ac.nz> > On Thursday 01 October 2015 19:28:23 zxq9 wrote: >> I'm reminded of programming languages like IMP and Ada where >> you are not allowed to write >> p() and q() or r() >> because as a programmer you are presumed to be too dumb to >> work effectively with the concept of operator precedence as >> applied to Boolean operators. > > Hey! I actually kind of like Ada. :-) Me too. But I do not like being patronised. Personally, I blame Wirth whose broken Pascal precedence rules corrupted generations of programmers, including ones who've never heard of Pascal. 
> There is a balance between providing flexibility and providing constructs > that almost encourage programmers to silently drop little landmines in > their code. My initial (wrong) assumption that case is supposed to be > treated as its own semantic scope (which is why I had always thought the > warnings were there, and previous discussions here tended to make me think > I wasn't alone in expecting them to work that way) Why do people *expect* something that's expicitly contradicted in the manual and textbooks? > It makes me feel sad, all the same. I like limited scope and knowing for > sure that it is limited. I like that too, which is why I often wish Erlang syntax were more Haskell-like. The answer, of course, is to keep Erlang clauses short. Erlang syntax was strongly influenced by Strand-88 syntax, and that was based on Prolog syntax, and that's where the single-scope model comes from. > Not declarative ones, imperative languages that include this or that > generator/comprehension feature and are familiar with the cool kids. > > Python, for example: > >>>> [x for x in [1,2,3]] > [1, 2, 3] >>>> x > 3 > There is much to like about Python, but it has its share of stupidities. This is one of them. >>> [x for x in []] [] >>> x Traceback (most recent call last): File "", line 1, in NameError: name 'x' is not defined Talk about landmines! > There is (was?) a way to make something similar happen in Ruby as well, > but that language is a big pile of curious decisions anyway. I can't seem > to find the syntax for it now. "A big pile of curious decisions" sounds as though we've seen the same things in Ruby. I once described it as making it easy for people to swim in the sacred crocodile pond. > > In the Javascript recommendation for list comprehensions the situation is > more weird: JavaScript. Weird. No surprise there. But what you are talking about is a situation where something ISN'T really s scope; which would mean that people used to these things should find it surprising the Erlang list iterations ARE scopes, not that cases AREN'T. > Comprehensions in these sort of languages are more like wacky syntax over > for loops with a few opportunities (apparently) for optimization. The big difference is that Python and Ruby and JavaScript all have Fortran-style mutable variables, and Erlang does not. The only way a list comprehension in Erlang can possibly associate different values with "a variable" is if that variable is (conceptually) many *different* variables, one for each iteration, which means that you can't ask for its value after the loop, because there is no "it" to ask about. (And the last time I looked at how Erlang compiled list comprehensions, this was really true. No variables were harmed in the making of this comprehension.) > especially considering that using unassigned list comprehensions > as a shorthand for lists:foreach/2 specifically to get side-effects is now > actually supported as a an optimization. But those effects do NOT include mutating variables. Each iteration gets its own set of variables because that's the only way that the variables *can* have different values in different iterations. There are plenty of things that are technically legal in Erlang but probably not a good idea. For example, f(X) -> X,-1. is legal. But is it sensible? When I made my Smalltalk compiler report about 'statements with no effect' (which is perfectly legal), it promptly told me about actual errors in my code I hadn't previously noticed. 
PS: the Erlang system I tried the example above in did not compile at all. I haven't installed R18 yet. Does R18 catch this? From bchesneau@REDACTED Thu Oct 1 14:23:49 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Thu, 01 Oct 2015 12:23:49 +0000 Subject: [erlang-questions] [ann] slack channel for french erlangers Message-ID: Hi, Just a quick mail to announce the launch of a slack channel for the french erlangers where we can exchange in french. Our website is here: http://frencherlang.com You can find us on twitter also @frencherlang . Hopefully it will allows french speaking people to discuss more easily around Erlang. Enjoy! - beno?t -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Thu Oct 1 14:38:22 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 01 Oct 2015 21:38:22 +0900 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: <230c74b6bfacc47940a7badde2650a3c.squirrel@chasm.otago.ac.nz> References: <560A4698.4070609@gmail.com> <39021272.nUAaYpW9fM@burrito> <230c74b6bfacc47940a7badde2650a3c.squirrel@chasm.otago.ac.nz> Message-ID: <6257474.9yJsM6Q8yr@changa> On 2015?10?1? ??? 22:35:09 you wrote: > > On Thursday 01 October 2015 19:28:23 zxq9 wrote: > > Personally, I blame Wirth whose broken Pascal precedence > rules corrupted generations of programmers, including ones > who've never heard of Pascal. I've got ambivalent feelings about this statement. Partly because Turbo Pascal was the only real thing available to me when I was a kid, but mostly because the categorical breakdown of precedence operators makes its own kind of sense: 1. Negation 2. Multiplicative operations 3. Additive operations 4. Relational operations Instead of just trying to make it easier to write a compiler I think he tried to logically group precedence by category (or rather, include logical operations within their proper categories). But that does leave the relative precedence of `and` and `or` surprising and perhaps inconvenient unless you think about things this way. I don't know many people who think about operator precedence this way though, so in reality its a moot point and practically speaking Pascal's precedence rules are "just weird" because they don't mimic everything else. But there is an underlying logic all the same, and I don't grudge Pascal that (again, childhood bias may just be speaking here -- I thought C was super-awesome-wizardry once I finally got a compiler for it). > Why do people *expect* something that's expicitly contradicted > in the manual and textbooks? Speaking only for myself here... Probably because I read through that part of the manual once quite a while back but have seen this warning several times in the intervening months and years. So the warning seems to speak louder to me, saying, "this is discouraged and breaks some hitherto unspoken conceptual model to which it is important enough to adhere that the compiler will now scold you". But I never see this in my own code. I would never have written the examples shown so far. I tend to use `case` and `if` the same way I use funs -- when I need to acccess some local context *and* the result is more readable that way than passing a context variable along. Other than this I usually just write separate functions, sometimes with lots of clauses. I don't think this is particular unusual as far as Erlang style goes. (?) So... why? 
Laziness, lack of time, ignorance, unwillingness to adopt a new conceptual model, mental model built on the back of a string of incidentally correct (but unchecked) assumptions, etc. Some subset of these seems likely. It really was explicit in the manual. I think the percentage of cool kids who both: 1- want to dabble with Erlang for long enough to subscribe to the ML and toss out a few FAQs after seeing some buzz about it on Y-combinator 2- actually read the manuals instead of fumbling through a few tutorial blogs is very low. ("Cuz everything is, like, sorta Java or Ruby Javascript or something when you get down to it, right?") > > It makes me feel sad, all the same. I like limited scope and knowing for > > sure that it is limited. > > I like that too, which is why I often wish Erlang syntax > were more Haskell-like. The answer, of course, is to keep > Erlang clauses short. Erlang syntax was strongly influenced by > Strand-88 syntax, and that was based on Prolog syntax, and > that's where the single-scope model comes from. Only being semi-serious here... what would be wrong with just ditching all those other constructs and using only funs and functions for conditionals? To re-use the contrived example: blah(File, Mode) -> case Mode of r -> {ok, FP} = file:open(File, [read]); w -> {ok, FP} = file:open(File, [write]) end, file:close(FP). becomes blah(File, Mode) -> FileOp = fun (r) -> file:open(File, [read]); (w) -> file:open(File, [write]) end, {ok, FP} = FileOp(Mode), file:close(FP). Of course, once we're that far there is almost nothing stopping us from just writing a separate function which is usually what I wind up doing anyway (though this isn't a very powerful example of why). > > Not declarative ones, imperative languages that include this or that > > generator/comprehension feature and are familiar with the cool kids. > > > > Python, for example: > > > >>>> [x for x in [1,2,3]] > > [1, 2, 3] > >>>> x > > 3 > > > There is much to like about Python, but it has its share of > stupidities. This is one of them. > > >>> [x for x in []] > [] > >>> x > Traceback (most recent call last): > File "", line 1, in > NameError: name 'x' is not defined > > Talk about landmines! That made me actually laugh out loud. This particular edge case is just hilarious. Totally understandable from a concept where comprehensions are basically just loops, but still pretty funny, especially since it would be easy to avoid this problem and just make a rule that its safe to always reference it later. > > especially considering that using unassigned list comprehensions > > as a shorthand for lists:foreach/2 specifically to get side-effects is now > > actually supported as a an optimization. > > But those effects do NOT include mutating variables. Each iteration > gets its own set of variables because that's the only way that the > variables *can* have different values in different iterations. True. Though we do have plenty of opportunities to effectively violate this whenever we want -- fortunately full respect for single-assignment is a part of the culture around here (I don't even see much abuse of the process dictionary, though sometimes I see thoughtless assumptions about what the "next" query to a database or any other similar shared resource might return...) > There are plenty of things that are technically legal in Erlang but > probably not a good idea. For example, > > f(X) -> X,-1. > > is legal. But is it sensible? 
When I made my Smalltalk compiler > report about 'statements with no effect' (which is perfectly legal), > it promptly told me about actual errors in my code I hadn't previously > noticed. > PS: the Erlang system I tried the example above in did not compile > at all. I haven't installed R18 yet. Does R18 catch this? Nope: 4> F = fun(X) -> X, 1 end. #Fun Compiles with no complaints as well in 18.1. I did think it was good that the shell provides the same scrutiny as the compiler in this case: 1> F = 1> fun(X) -> 1> case X of 1> {foo, A} -> A; 1> {bar, B} -> B 1> end, 1> A 1> end. * 5: variable 'A' unsafe in 'case' (line 1) Not all languages REPLs are so thorough about their rules. From henrik.x.nord@REDACTED Thu Oct 1 15:18:47 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Thu, 1 Oct 2015 15:18:47 +0200 Subject: [erlang-questions] Patch package OTP 17.5.6.4 released Message-ID: <560D32B7.9070805@ericsson.com> Patch Package: OTP 17.5.6.4 Git Tag: OTP-17.5.6.4 Date: 2015-10-01 Trouble Report Id: OTP-12911, OTP-12968 Seq num: seq12906 System: OTP Release: 17 Application: debugger-4.0.3.1, erts-6.4.1.3 Predecessor: OTP 17.5.6.3 Check out the git tag OTP-17.5.6.4, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- debugger-4.0.3.1 ------------------------------------------------ --------------------------------------------------------------------- The debugger-4.0.3.1 application can be applied independently of other applications on a full OTP 17 installation. --- Fixed Bugs and Malfunctions --- OTP-12911 Application(s): debugger Related Id(s): seq12906 Fix crash when starting a quick debugging session. Thanks Alan Duffield. Full runtime dependencies of debugger-4.0.3.1: compiler-5.0, erts-6.0, kernel-3.0, stdlib-2.0, wx-1.2 --------------------------------------------------------------------- --- erts-6.4.1.3 ---------------------------------------------------- --------------------------------------------------------------------- The erts-6.4.1.3 application can be applied independently of other applications on a full OTP 17 installation. --- Fixed Bugs and Malfunctions --- OTP-12968 Application(s): erts When tracing with process_dump option, the VM could abort if there was an ongoing binary match somewhere in the call stack of the traced process. Full runtime dependencies of erts-6.4.1.3: kernel-3.0, sasl-2.4, stdlib-2.0 From marlus.saraiva@REDACTED Thu Oct 1 15:45:34 2015 From: marlus.saraiva@REDACTED (Marlus Saraiva) Date: Thu, 1 Oct 2015 10:45:34 -0300 Subject: [erlang-questions] trying to make official docker images for erlang In-Reply-To: References: Message-ID: Hi derek, I'm the maintainer of both, the erlang packages at http://dl-4.alpinelinux.org/alpine/edge/main/x86_64/ and the docker images you've mentioned at https://hub.docker.com/u/msaraiva/ Having minimal containers is something that I take seriously and I think it makes the whole docker experience much more pleasant. The only reason I wouldn't recommend using Alpine Linux for the official image is that there's no "official" support from the OTP team. As far as I know, they don't build/test Erlang/OTP on any Linux distribution based on musl libc and I have no idea if they have any interest in doing that. 
So, since the official image is intended to reach a broader audience, I guess you'd better stick with the larger images. In case you need more info about Erlang for Alpine Linux, take a look at: https://github.com/msaraiva/alpine-erlang It also contains information about building Erlang/OTP against musl libc. So, if you want to give it a try, you can easily create an image containing a full Erlang/OTP of any version and with as many erlang libs as you want. Cheers, -- Marlus Saraiva https://github.com/msaraiva https://twitter.com/MarlusSaraiva 2015-09-30 0:42 GMT-03:00 derek : > Hi to Erlang users, > > this is effort trying to make official docker images for erlang otp > community, please comment if you like to run it with docker: > > https://github.com/docker-library/official-images/pull/1075 > > Nowadays, docker is the popular way to run many applications, have a > look on docker hub, many popular programming languages and > applications have an official image there, beginners can easily pull a > docker image to start playing without hassling their host Linux, > > https://hub.docker.com/_/python/ > https://hub.docker.com/_/golang/ > https://hub.docker.com/_/ruby/ > ... > > But there is no official one for erlang yet, let's try to make it happen, > https://hub.docker.com/_/erlang/ > > I have researched some existing effort like these, looks like many > ones are already on this way just haven't communicated thru > @erlang.org yet > > - https://hub.docker.com/r/correl/erlang/ > this might be the earliest currently have the most stars by `docker > search erlang`, provided erlang-otp-17.5 and rebar and relx in a > single image, and compiling each one from source code, I haven't tried > it, but presumably would be close to 1GB image; > - https://hub.docker.com/r/unbalancedparentheses/erlang > this one support all versions of erlang R6B-0, R7, R8, ... R16, up to > 17.4 > from > https://github.com/unbalancedparentheses/docker-erlang/blob/master/17.4/install.sh > it also provided erlang & rebar & relx all compiled from source code > - https://hub.docker.com/r/msaraiva/erlang/ > this one is providing erlang-17.5 and 18, on top of Alpine Linux > docker image, it's very slim, as small as 16.78 MB, while erlang lib > is broken into very small packages, like most OS distro does, broken > into erlang-compiler, erlang-dialyzer, erlang-otp, erlang-snmp, and > etc. its base erlang:18 image has 5 packages under > /usr/lib/erlang/lib/... (compared a full erlang-otp has 52 lib > packages) > http://dl-4.alpinelinux.org/alpine/edge/main/x86_64/ (search erlang) > this one also provided elixir images in different Dockerfile, > presumably also slim > - https://hub.docker.com/r/voidlock/erlang > is very similar to my way, support erlang R16 thru 18.1; I would > not start my project if I know this earlier, but it installed > update-locale for UTF-8 I'm not sure for what, is that required by > erlang runtime ? > - https://github.com/synctree/docker-erlang/blob/master/R17/Dockerfile > this one installs erlang solutions precompiled deb on top of > debian:8 (or debian:jessie) image. > > $ docker search erlang > NAME DESCRIPTION > STARS OFFICIAL AUTOMATED > correl/erlang Erlang/OTP for Docker > 14 [OK] > ... 
> > While I just started from scratch, before above PR to official images > got merged, you can try it with cloning this repo, and build images > locally, > > https://github.com/c0b/docker-erlang-otp > > So there are two ways to make images: > 1) build from source code, from standard debian:8 image, start with > apt-get install build-essential and gcc and some lib...-dev and > download erlang-otp source code and build, this usually ends up with a > fat image, close to 1GB; > 2) install from some binary erlang packages, like the one from > erlang-solutions, could end up with smaller image; packages from most > OS distributions also provided erlang but relatively not up to date > like debian https://packages.debian.org/source/sid/erlang fedora > centos similar > > Here from my repo I mainted one each for latest 4 erlang releases, > (R15, R16, 17, 18), each with different variant, following the best > practices from docker official images guideline, should end up with > full feature while relatively slim images: > https://github.com/docker-library/official-images > > 1. the standard variant erlang:18, erlang:17, erlang:R16, erlang:R15 > builds from source code, on top of > https://hub.docker.com/_/buildpack-deps/ :jessie, it covered gcc > compiler and some popular -dev packages, for port driver written in C; > while it doesn't have java compiler so out of the standard erlang-otp > provided 52 packages under /usr/lib/erlang/lib/... from this one odbc > / jinterface / wxwidgets won't work, I assume to run GUI programs in > docker is not popular, so here we can save space; jinterface is > similar, the java dependencies are too fat, I assume demand is low; > 2. the -onbuild variant for each erlang version, to utilize ONBUILD > instruction from Dockerfile, those are for starters > 3. -esl variant is to pull erlang-solutions deb package to install on > top of debian:jessie, results in relatively slim image, but I am > trying to avoid wxwidgets / jinterface dependencies, reasons same as > above. > > All these images are almost full featured Erlang-OTP images (except > wxwidgets & jinterface), you can run it like this once build locally > (or pull over docker hub if above PR can be merged), > > $ docker run -it --rm erlang:18.1 > Erlang/OTP 18 [erts-7.1] [source] [64-bit] [smp:8:8] > [async-threads:10] [hipe] [kernel-poll:false] > > Eshell V7.1 (abort with ^G) > 1> uptime(). # the new > uptime() shell command since OTP 18 > 3 seconds > ok > 2> application:which_applications(). 
> [{stdlib,"ERTS CXC 138 10","2.6"}, > {kernel,"ERTS CXC 138 10","4.1"}] > 3> > User switch command > --> q > root@REDACTED:/# ls /usr/local/lib/erlang/lib/ > asn1-4.0 cosProperty-1.2 edoc-0.7.17 gs-1.6 observer-2.1 > public_key-1.0.1 stdlib-2.6 xmerl-1.3.8 common_test-1.11 > cosTime-1.2 eldap-1.2 hipe-3.13 orber-3.8 reltool-0.7 > syntax_tools-1.7 compiler-6.0.1 cosTransactions-1.3 > erl_docgen-0.4 ic-4.4 os_mon-2.4 runtime_tools-1.9.1 > test_server-3.9 cosEvent-2.2 crypto-3.6.1 erl_interface-3.8 > inets-6.0.1 ose-1.1 sasl-2.6 tools-2.8.1 cosEventDomain-1.2 > debugger-4.1.1 erts-7.1 kernel-4.1 otp_mibs-1.1 snmp-5.2 > typer-0.9.9 cosFileTransfer-1.2 dialyzer-2.8.1 et-1.5.1 > megaco-3.18 parsetools-2.1 ssh-4.1 webtool-0.9 cosNotification-1.2 > diameter-1.11 eunit-2.2.11 mnesia-4.13.1 percept-0.8.11 > ssl-7.1 wx-1.5 > root@REDACTED:/# ls /usr/local/lib/erlang/lib/ | wc -l > 50 > > Size: > > $ docker images |grep ^erlang > erlang 18.1-esl 138c797adec7 5 days ago > 286.9 MB > erlang 18.1 27ad0fc44644 5 days ago > 741.5 MB > erlang R16B03-1 e0deec5e1e72 6 days ago > 740.2 MB > erlang 18.0.3 52d4a7a4a281 6 days ago > 743.7 MB > > Comments are welcome. > > > Thanks, > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From garry@REDACTED Thu Oct 1 18:05:49 2015 From: garry@REDACTED (Garry Hodgson) Date: Thu, 01 Oct 2015 12:05:49 -0400 Subject: [erlang-questions] net_kernel fails to start (nodistribution) Message-ID: <560D59DD.2040506@research.att.com> I'm seeing something odd on one of the openstack VM's we're using. We've been using this VM for some time with no problems, but all of the sudden I can't run erlang with distribution. Running just "erl" is fine, but if I specify an sname it hangs for minutes, then fails with crash dump (and output at end of message). I've restarted epmd, and run it in debug mode, but don't see anything amiss. We did make some changes yesterday to the OpenStack security groups for the VM, which may be related (ports 80, 443, 2121 and 62201). Hostname, ifconfig, and iptables rules all seem ok. Looking through the code, it appears the failure occurs in net_kernel:init_node(Name, LongOrShortNames), or maybe further down in start_protos( Name, Node ). I've never seen anything like this, and google only shows me a few rabbit mq questions that didn't help much. Any insight would be appreciated. 
Thanks example output: # erl -sname foo {error_logger,{{2015,10,1},{14,58,47}},"Protocol: ~tp: register/listen error: ~tp~n",["inet_tcp",etimedout]} {error_logger,{{2015,10,1},{14,58,47}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.21.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,322}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}},{ancestors,[net_sup,kernel_sup,<0.10.0>]},{messages,[]},{links,[#Port<0.54>,<0.18.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,739}],[]]} {error_logger,{{2015,10,1},{14,58,47}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[foo,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} {error_logger,{{2015,10,1},{14,58,47}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} {error_logger,{{2015,10,1},{14,58,47}},crash_report,[[{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.9.0>},{registered_name,[]},{error_info,{exit,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,133}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}},{ancestors,[<0.8.0>]},{messages,[{'EXIT',<0.10.0>,normal}]},{links,[<0.8.0>,<0.7.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,117}],[]]} {error_logger,{{2015,10,1},{14,58,47}},std_info,[{application,kernel},{exited,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}},{type,permanent}]} {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}}"} Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{k -- Garry Hodgson Lead Member of Technical Staff AT&T Chief Security Office (CSO) "This e-mail and any files transmitted with it are AT&T property, are confidential, and are intended solely for the use of the individual or entity to whom this e-mail is addressed. If you are not one of the named recipient(s) or otherwise have reason to believe that you have received this message in error, please notify the sender and delete this message immediately from your computer. Any other use, retention, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited." From sid5@REDACTED Thu Oct 1 18:08:06 2015 From: sid5@REDACTED (Sid Muller) Date: Thu, 1 Oct 2015 18:08:06 +0200 Subject: [erlang-questions] runtime code upgrade, process ceases to exist. 
Message-ID: Hi, I'm struggling to understand this runtime code upgrade oddity and was hoping someone could shed some light on this issue. The issue that I'm having is that the process running the latest code dies after the code is loaded with l(module) for the second time. I understand that only 2 versions of the software can be running at any given time but what I'm not understanding is why does the process with the latest code die or go away. I have process (a) that runs and calls b:do_stuff() which does stuff and responds back to (a). Module b has an upgrade function that will call into the latest module: upgrade()-> Ref = make_ref(), ?SERVER ! {self(), Ref, upgrade}, receive {Ref, ok} -> ok end. do_stuff()-> Ref = make_ref(), ?SERVER ! {self(), Ref, do_stuff}, %% <-after second l(b) function fails here because ?SERVER is no longer registered, process is gone receive {Ref, ok} -> ok end. server()-> receive {Client, Ref, upgrade} -> update_internal_structures(), Client ! {Ref, ok}, ?MODULE:server(); {Client, Ref, do_stuff} -> do_stuff(), Client ! {Ref, ok} end server(). so after I load the new code with l(b). I call b:upgrade() from the shell and I can tell from the output in do_stuff() that new code is running when process a sends us do_stuff message. This is all fine until I make another change to module (b) but when I call l(b) process (b) is wiped out, gone, no more... It's not a crash in process (b), it just ceases to exist. And it's very frustrating because I have exactly the same code in another process which seems to survive multiple l(c) without any issues. The only difference between the two is that process(b) has a link to a dets process because it opens dets files and the link is created by the dets backend I believe. I must not be understanding this hot code upgrade.... Can anyone shed any light on this? From sverker.eriksson@REDACTED Thu Oct 1 18:29:01 2015 From: sverker.eriksson@REDACTED (Sverker Eriksson) Date: Thu, 1 Oct 2015 18:29:01 +0200 Subject: [erlang-questions] runtime code upgrade, process ceases to exist. In-Reply-To: References: Message-ID: <560D5F4D.7050608@ericsson.com> Your server function is not tail recursive. You must do your call to ?MODULE:server() tail recursive. Otherwise you leave a return address referring to the old code on the call stack. And that is why your process gets killed when the old code is purged. /Sverker On 10/01/2015 06:08 PM, Sid Muller wrote: > Hi, > > I'm struggling to understand this runtime code upgrade oddity and was hoping someone could shed some light on this issue. > > The issue that I'm having is that the process running the latest code dies after the code is loaded with l(module) for the second time. > > I understand that only 2 versions of the software can be running at any given time but what I'm not understanding is why does the process with the latest code die or go away. > > I have process (a) that runs and calls b:do_stuff() which does stuff and responds back to (a). > > Module b has an upgrade function that will call into the latest module: > > upgrade()-> > Ref = make_ref(), > ?SERVER ! {self(), Ref, upgrade}, > receive > {Ref, ok} -> > ok > end. > > do_stuff()-> > Ref = make_ref(), > ?SERVER ! {self(), Ref, do_stuff}, %% <-after second l(b) function fails here because ?SERVER is no longer registered, process is gone > receive > {Ref, ok} -> > ok > end. > > server()-> > receive > {Client, Ref, upgrade} -> > update_internal_structures(), > Client ! 
{Ref, ok}, > ?MODULE:server(); > {Client, Ref, do_stuff} -> > do_stuff(), > Client ! {Ref, ok} > end > server(). > > > so after I load the new code with l(b). I call b:upgrade() from the shell and I can tell from the output in do_stuff() that new code is running when process a sends us do_stuff message. This is all fine until I make another change to module (b) but when I call l(b) process (b) is wiped out, gone, no more... It's not a crash in process (b), it just ceases to exist. And it's very frustrating because I have exactly the same code in another process which seems to survive multiple l(c) without any issues. The only difference between the two is that process(b) has a link to a dets process because it opens dets files and the link is created by the dets backend I believe. > > I must not be understanding this hot code upgrade.... > > Can anyone shed any light on this? > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From garry@REDACTED Thu Oct 1 21:06:56 2015 From: garry@REDACTED (Garry Hodgson) Date: Thu, 01 Oct 2015 15:06:56 -0400 Subject: [erlang-questions] net_kernel fails to start (nodistribution) In-Reply-To: <560D59DD.2040506@research.att.com> References: <560D59DD.2040506@research.att.com> Message-ID: <560D8450.7070501@research.att.com> problem solved. it turns out that it was due to a faulty iptables rule set. whew! On 10/01/2015 12:05 PM, Garry Hodgson wrote: > I'm seeing something odd on one of the openstack VM's > we're using. We've been using this VM for some time with no > problems, but all of the sudden I can't run erlang with > distribution. Running just "erl" is fine, but if I specify an > sname it hangs for minutes, then fails with crash dump > (and output at end of message). > > I've restarted epmd, and run it in debug mode, but don't > see anything amiss. We did make some changes yesterday > to the OpenStack security groups for the VM, which may be > related (ports 80, 443, 2121 and 62201). Hostname, ifconfig, > and iptables rules all seem ok. > > Looking through the code, it appears the failure occurs in > net_kernel:init_node(Name, LongOrShortNames), or maybe > further down in start_protos( Name, Node ). > > I've never seen anything like this, and google only shows me > a few rabbit mq questions that didn't help much. > Any insight would be appreciated. 
> > Thanks > > example output: > > # erl -sname foo > {error_logger,{{2015,10,1},{14,58,47}},"Protocol: ~tp: register/listen > error: ~tp~n",["inet_tcp",etimedout]} > {error_logger,{{2015,10,1},{14,58,47}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.21.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,322}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}},{ancestors,[net_sup,kernel_sup,<0.10.0>]},{messages,[]},{links,[#Port<0.54>,<0.18.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,739}],[]]} > > {error_logger,{{2015,10,1},{14,58,47}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[foo,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} > > {error_logger,{{2015,10,1},{14,58,47}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} > > {error_logger,{{2015,10,1},{14,58,47}},crash_report,[[{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.9.0>},{registered_name,[]},{error_info,{exit,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,133}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}},{ancestors,[<0.8.0>]},{messages,[{'EXIT',<0.10.0>,normal}]},{links,[<0.8.0>,<0.7.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,117}],[]]} > > {error_logger,{{2015,10,1},{14,58,47}},std_info,[{application,kernel},{exited,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}},{type,permanent}]} > > {"Kernel pid > terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}}"} > > Crash dump was written to: erl_crash.dump > Kernel pid terminated (application_controller) > ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{k > -- Garry Hodgson Lead Member of Technical Staff AT&T Chief Security Office (CSO) "This e-mail and any files transmitted with it are AT&T property, are confidential, and are intended solely for the use of the individual or entity to whom this e-mail is addressed. If you are not one of the named recipient(s) or otherwise have reason to believe that you have received this message in error, please notify the sender and delete this message immediately from your computer. Any other use, retention, dissemination, forwarding, printing, or copying of this e-mail is strictly prohibited." 
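For anyone who lands on this thread later with the same nodistribution symptom behind a firewall: besides epmd's port (4369 by default), each node's distribution listener picks a dynamic port unless it is pinned down, which makes firewall rules awkward to write. A rough sketch of pinning the range -- the port numbers and node names below are made up:

    # start the node with a fixed listen range for distribution,
    # then ask epmd which port the node actually registered
    erl -sname foo -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9105
    epmd -names

Once 4369 plus 9100-9105 are allowed between the hosts involved, a net_adm:ping('foo@somehost'). from the other node should answer pong.
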
From denc716@REDACTED Thu Oct 1 23:54:23 2015 From: denc716@REDACTED (derek) Date: Thu, 1 Oct 2015 14:54:23 -0700 Subject: [erlang-questions] trying to make official docker images for erlang In-Reply-To: References: Message-ID: On Thu, Oct 1, 2015 at 6:45 AM, Marlus Saraiva wrote: > Hi derek, > > I'm the maintainer of both, the erlang packages at > http://dl-4.alpinelinux.org/alpine/edge/main/x86_64/ yesterday I've seen that there are 53 erlang-*.apk packages, similar like debian maintaining packages into smaller pieces, but more up to date; while how are those pieces got compiled, does Alpine also have similar structure like debian packaging? have files like debian/rules in deb packaging? for a deb pkg, just look at the debian https://packages.debian.org/source/sid/erlang (find erlang_18.0-dfsg-2.debian.tar.xz has all instructions how to build and split into smaller pieces) https://pkgs.alpinelinux.org/package/main/x86_64/erlang erlang-18.1-r0.apk2015-Oct-01 14:43:111.9Mapplication/octet-stream erlang-asn1-18.1-r0.apk2015-Oct-01 14:43:12773.4Kapplication/octet-stream erlang-common-test-18.1-r0.apk2015-Oct-01 14:43:12816.5Kapplication/octet-stream erlang-compiler-18.1-r0.apk2015-Oct-01 14:43:121.1Mapplication/octet-stream [...] and it looks like your alpine base image is smaller than the official one, do you think your alpine image is better than the official alpine image? if so, could you try to push it to become new official ? https://hub.docker.com/r/msaraiva/alpine/ https://hub.docker.com/_/alpine/ > and the docker images you've mentioned at https://hub.docker.com/u/msaraiva/ > > Having minimal containers is something that I take seriously and I think it > makes the whole docker experience much more pleasant. The only reason I > wouldn't recommend using Alpine Linux for the official image is that there's > no "official" support from the OTP team. As far as I know, they don't > build/test Erlang/OTP on any Linux distribution based on musl libc and I > have no idea if they have any interest in doing that. So, since the official > image is intended to reach a broader audience, I guess you'd better stick > with the larger images. I guess so, to make an official image is mainly targeting for new audiences, for newbies to easily start programming in erlang, and provides a standard Linux with glibc, and bash (and all tools like sed, awk for bash shell scripting), might be easier to troubleshoot than the alpine busybox musl if problems rise up; Comparing to python official image, it's also building from source code, with a similar standard ~700MB size should be ok, https://hub.docker.com/_/python/ > > In case you need more info about Erlang for Alpine Linux, take a look at: > https://github.com/msaraiva/alpine-erlang > It also contains information about building Erlang/OTP against musl libc. > So, if you want to give it a try, > you can easily create an image containing a full Erlang/OTP of any version > and with as many erlang libs as you want. 
I see there are 3 patches required to be built with musl, and it seems targeting to be a replacement of glibc, I am not familiar about that and not sure what mature level is it; so for general audience, might be easier to start an official image FROM debian:jessie and default Linux glibc bash shell utils http://www.musl-libc.org/faq.html As advocated in official-images page: https://github.com/docker-library/official-images#library-definition-files there is request for language upstream's opinion, so please also comment there if you could, would be much appreciated https://github.com/docker-library/official-images/pull/1075#issuecomment-143883926 I don't see what open source license do you put your code under? would you mind I copy some code (from your msaraiva/alpine-erlang) and provide another alpine-slim varaint? https://github.com/c0b/docker-erlang-otp/tree/master/18 Thanks a lot! > > > Cheers, > > -- > Marlus Saraiva > https://github.com/msaraiva > https://twitter.com/MarlusSaraiva From ok@REDACTED Fri Oct 2 04:24:04 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 2 Oct 2015 15:24:04 +1300 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: <6257474.9yJsM6Q8yr@changa> References: <560A4698.4070609@gmail.com> <39021272.nUAaYpW9fM@burrito> <230c74b6bfacc47940a7badde2650a3c.squirrel@chasm.otago.ac.nz> <6257474.9yJsM6Q8yr@changa> Message-ID: On 2/10/2015, at 1:38 am, zxq9 wrote: > On 2015?10?1? ??? 22:35:09 you wrote: >>> On Thursday 01 October 2015 19:28:23 zxq9 wrote: >> >> Personally, I blame Wirth whose broken Pascal precedence >> rules corrupted generations of programmers, including ones >> who've never heard of Pascal. > > I've got ambivalent feelings about this statement. Partly because Turbo Pascal was the only real thing available to me when I was a kid, but mostly because the categorical breakdown of precedence operators makes its own kind of sense: > > 1. Negation > 2. Multiplicative operations > 3. Additive operations > 4. Relational operations Before Pascal there was pretty nearly a consensus that you had LOGICAL operators or and COMPARISON operators (which bridge the gap between Boolean and numeric) all at the same level NUMERIC operators sum, difference product, ratio, quotient, remainder power Pascal (a) Used the wrong operators for sets. Union and intersection behave like OR and AND. The Booleans *do* form a ring (isomorphic to Z/2) but the operation that is analogous to addition is EXCLUSIVE OR, not inclusive or. (b) Used the wrong semantics for the logical operators. By the time Pascal was developed, it was clear that 'strict' Boolean operators were of almost no use. More precisely, ISO 7185 section 6.7.2.3 says nothing about the operational semantics or AND and OR, but section 6.7.2.1 makes it clear than in p(x) $ q(x), q(x) may be evaluated before p(x) whatever operator $ is, so short-circuit AND and OR we do not get. (c) Treats 'or' as if it were an 'adding operator' like + and -. But it is not. The Boolean operator which is analogous to + and - (to both simultaneously, in fact) is EXCLUSIVE OR. (d) Provides an integer quotient operator and an integer remainder operator THAT DO NOT FIT TOGETHER. With all the questions about what exactly div and mod should do, we surely expect that (x div y)*y + (x mod y) = x whenever the left hand side is defined. But in Pascal, (-2) div 3 = 0 (-2) mod 3 = 1 ((-2) div 3)*3 + (-2) mod 3 = 1 /= -1. 
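For contrast, Erlang's own pair does fit together: div truncates towards zero and rem keeps the sign of the dividend, so the identity holds for the same case. A quick shell check:

    1> {-2 div 3, -2 rem 3}.
    {0,-2}
    2> (-2 div 3) * 3 + (-2 rem 3).
    -2
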
Nicklaus Wirth was a brilliant language designer but whatever process led to *this* fiasco, I would hesitate to call it design. > > Instead of just trying to make it easier to write a compiler I think he tried to logically group precedence by category (or rather, include logical operations within their proper categories). In previous languages the operators *WERE* logically grouped by semantic category (logical operators , mixed operators , numeric ones). He jumbled them up in a very unhelpful way. There are just three differences between Pascal and Algol 60 which cannot be explained in terms of making a single-pass load-and-go compiler: (1) Pascal has records and pointers (2) Pascal has files and I/O operations (3) Pascal has user-defined types As for (1), that had been pioneered by Algol W and included in Algol 68, which Wirth was consciously reacting against; Algol 68 also had standard I/O and user-defined types. Algol 68 had unions which were *safe* at run time, and garbage collection. Pascal sacrificed safety for compiler+run time implementation ease; garbage collection ditto. Of course Algol 68 didn't have sets, but it did have 'bits' and 'long bits', and realistically, that's all Pascal really had. And Algol 68 *did* have strings that made sense. And while Algol 68 was strongly criticised for lacking a module system, it did pick one up before Pascal did. (The module system in Turbo Pascal is not the module system in the standard.) And Algol 68 didn't have enumerations, but it *did* have closures (albeit ones that couldn't outlive their creating context), and I know which I'd rather have. Nope; Pascal was deliberately crippled to make it small and fast at the expense of safety. To this day you find programmers who think you *have* to put parentheses around (x > y), all because of Pascal. > But that does leave the relative precedence of `and` and `or` surprising and perhaps inconvenient unless you think about things this way. I don't know many people who think about operator precedence this way though, so in reality its a moot point and practically speaking Pascal's precedence rules are "just weird" because they don't mimic everything else. They are weird because they trample on an earlier convention well established in programming and mathematics that segregated the operations a different way. > > > Only being semi-serious here... what would be wrong with just ditching all those other constructs and using only funs and functions for conditionals? That would take care of 'if' and 'case' but not 'try' or 'receive'. >>> as a shorthand for lists:foreach/2 specifically to get side-effects is now >>> actually supported as a an optimization. >> >> But those effects do NOT include mutating variables. Each iteration >> gets its own set of variables because that's the only way that the >> variables *can* have different values in different iterations. > > True. Though we do have plenty of opportunities to effectively violate this whenever we want -- I'm not sure I understand you. Enlighten me. - The file system is a global shared mutable variable. - The pid registry is a global shared mutable variable. - DETS and ETS are mutable data structures. - Other data bases ditto. - "The" process dictionary is a process-scope mutable variable which could in principle be eliminated by threading a state through all calls. 
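To make that last point concrete, here is a deliberately trivial sketch -- the module and function names are made up and have nothing to do with any code in this thread -- of the same counter kept in the process dictionary versus threaded through the calls:

    -module(thread_state).
    -export([demo/0]).

    %% Hidden, process-scope mutable state: the count lives in the process
    %% dictionary and is invisible in the function's arguments and result.
    pd_next() ->
        N = case get(counter) of undefined -> 0; C -> C end,
        put(counter, N + 1),
        N + 1.

    %% The same counter with the state threaded explicitly: the caller holds
    %% the state and passes the new one along, so nothing is mutated anywhere.
    next(N) ->
        {N + 1, N + 1}.        %% {Value, NewState}

    demo() ->
        1 = pd_next(),
        2 = pd_next(),
        {1, S1} = next(0),
        {2, _S2} = next(S1),
        ok.
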
From gollamudiramana3@REDACTED Fri Oct 2 02:15:56 2015 From: gollamudiramana3@REDACTED (Ramana Gollamudi) Date: Thu, 1 Oct 2015 20:15:56 -0400 Subject: [erlang-questions] CMAC code Message-ID: I am coding a 3GPP S1ap/NAS protocol test driver simulating a bunch of eNBs and UEs generating messages to the MME. The NAS protocol messages have a security header with a Message Authentication Field (MAC) in the header. Does anyone know of any Erlang code that can generate the MAC for the NAS header? When I searched the Internet, I found AES code on GITHUB implemented in C. I could use this, but I would have to use C port or NIF to use the use that code. Any pointers or suggestions would be greatly appreciated Thanks, Ramana -------------- next part -------------- An HTML attachment was scrubbed... URL: From sid5@REDACTED Fri Oct 2 05:50:44 2015 From: sid5@REDACTED (Sid Muller) Date: Fri, 2 Oct 2015 05:50:44 +0200 Subject: [erlang-questions] runtime code upgrade, process ceases to exist. In-Reply-To: <560D5F4D.7050608@ericsson.com> References: , <560D5F4D.7050608@ericsson.com> Message-ID: Thank you! You are a lifesaver sir! > Sent: Thursday, October 01, 2015 at 9:29 AM > From: "Sverker Eriksson" > To: "Sid Muller" , erlang-questions > Subject: Re: [erlang-questions] runtime code upgrade, process ceases to exist. > > Your server function is not tail recursive. > > You must do your call to ?MODULE:server() > tail recursive. Otherwise you leave a return address > referring to the old code on the call stack. And that is > why your process gets killed when the old code is purged. > > /Sverker > > On 10/01/2015 06:08 PM, Sid Muller wrote: > > Hi, > > > > I'm struggling to understand this runtime code upgrade oddity and was hoping someone could shed some light on this issue. > > > > The issue that I'm having is that the process running the latest code dies after the code is loaded with l(module) for the second time. > > > > I understand that only 2 versions of the software can be running at any given time but what I'm not understanding is why does the process with the latest code die or go away. > > > > I have process (a) that runs and calls b:do_stuff() which does stuff and responds back to (a). > > > > Module b has an upgrade function that will call into the latest module: > > > > upgrade()-> > > Ref = make_ref(), > > ?SERVER ! {self(), Ref, upgrade}, > > receive > > {Ref, ok} -> > > ok > > end. > > > > do_stuff()-> > > Ref = make_ref(), > > ?SERVER ! {self(), Ref, do_stuff}, %% <-after second l(b) function fails here because ?SERVER is no longer registered, process is gone > > receive > > {Ref, ok} -> > > ok > > end. > > > > server()-> > > receive > > {Client, Ref, upgrade} -> > > update_internal_structures(), > > Client ! {Ref, ok}, > > ?MODULE:server(); > > {Client, Ref, do_stuff} -> > > do_stuff(), > > Client ! {Ref, ok} > > end > > server(). > > > > > > so after I load the new code with l(b). I call b:upgrade() from the shell and I can tell from the output in do_stuff() that new code is running when process a sends us do_stuff message. This is all fine until I make another change to module (b) but when I call l(b) process (b) is wiped out, gone, no more... It's not a crash in process (b), it just ceases to exist. And it's very frustrating because I have exactly the same code in another process which seems to survive multiple l(c) without any issues. 
The only difference between the two is that process(b) has a link to a dets process because it opens dets files and the link is created by the dets backend I believe. > > > > I must not be understanding this hot code upgrade.... > > > > Can anyone shed any light on this? > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > > From zxq9@REDACTED Fri Oct 2 06:07:04 2015 From: zxq9@REDACTED (zxq9) Date: Fri, 02 Oct 2015 13:07:04 +0900 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: References: <560A4698.4070609@gmail.com> <6257474.9yJsM6Q8yr@changa> Message-ID: <3375988.2SqYq3xi9U@burrito> On Friday 02 October 2015 15:24:04 you wrote: > > On 2/10/2015, at 1:38 am, zxq9 wrote: > > > On 2015?10?1? ??? 22:35:09 you wrote: > Pascal > > (a) Used the wrong operators for sets. > Union and intersection behave like OR and AND. > The Booleans *do* form a ring (isomorphic to Z/2) but the > operation that is analogous to addition is EXCLUSIVE OR, > not inclusive or. I did not realize this. ...snip... (I *thoroughly* enjoyed the response there. Thank you! I imagine if I keep going with that line I'll annoy people who don't really care much about language design and its bizarre history, but that was really interesting! The Algol/Pascal comparison leads me, once again, to ponder why Algol never took off yet Go is buzzed as though it is some revolutionary thing. Meh. I think I would have enjoyed a chance to actually go to school.) > >>> as a shorthand for lists:foreach/2 specifically to get side-effects is now > >>> actually supported as a an optimization. > >> > >> But those effects do NOT include mutating variables. Each iteration > >> gets its own set of variables because that's the only way that the > >> variables *can* have different values in different iterations. > > > > True. Though we do have plenty of opportunities to effectively violate this whenever we want -- > > I'm not sure I understand you. Enlighten me. > > - The file system is a global shared mutable variable. > - The pid registry is a global shared mutable variable. > - DETS and ETS are mutable data structures. > - Other data bases ditto. > - "The" process dictionary is a process-scope mutable variable > which could in principle be eliminated by threading a state > through all calls. I'm sorry, I wasn't clear with how/where I wrote that. I meant "True: that funs and list comprehensions do not include mutating variables" but that we could write a fun or list comprehension that accesses or mutates some global state in its body, leaving us with mysterious side-effects. We could enclose a file descriptor or a port, or a db connection, for example, and get up to all sorts of madness while other processes are accessing the same things concurrently. It is very nice that I never encounter cases of this in the wild. That's just another example of where you can't protect people from themselves, though. Your example of provably meaningless function definitions like `f(X) -> X, -1.` comes to mind. As you noted earlier, patronizing the programmer is not the solution. Come to think of it "global" means different things in Erlang depending on the nature of the thing being discussed. "Global" within: the cluster, the node, the network, the filesystem, the process. I suppose that's just part of distributed computing, though, whenever you can create labels relative to different levels of the cluster. 
-Craig From ok@REDACTED Fri Oct 2 06:10:32 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 2 Oct 2015 17:10:32 +1300 Subject: [erlang-questions] variable exported from case in OTP 18 In-Reply-To: References: <560A4698.4070609@gmail.com> <39021272.nUAaYpW9fM@burrito> <230c74b6bfacc47940a7badde2650a3c.squirrel@chasm.otago.ac.nz> <6257474.9yJsM6Q8yr@changa> Message-ID: <01ED072C-E351-4F30-8B70-9033D2C5809E@cs.otago.ac.nz> On 2/10/2015, at 3:24 pm, Richard A. O'Keefe wrote: > whenever the left hand side is defined. But in Pascal, > (-2) div 3 = 0 > (-2) mod 3 = 1 > ((-2) div 3)*3 + (-2) mod 3 = 1 /= -1. That should of course be ((-2) div 3)*3 + (-2) mod 3 = 1 /= -2. Sigh. From chuanshuodelist@REDACTED Sat Oct 3 12:45:32 2015 From: chuanshuodelist@REDACTED (lili) Date: Sat, 3 Oct 2015 18:45:32 +0800 (CST) Subject: [erlang-questions] use inet_res:getbyname get timeout when a lot of processes called Message-ID: <60ba353a.3d36.1502d4e8689.Coremail.chuanshuodelist@126.com> Hi friends: I write a little program to look up a number of host name's Address. I create 1W process to lookup a lot of host's address. case inet_res:getbyname(www.xxx.com, a) of it return {error, timeout}. but when i one by one call getbyname is ok. Who know this reason. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donpedrothird@REDACTED Sat Oct 3 23:13:16 2015 From: donpedrothird@REDACTED (John Doe) Date: Sun, 4 Oct 2015 00:13:16 +0300 Subject: [erlang-questions] use inet_res:getbyname get timeout when a lot of processes called In-Reply-To: <60ba353a.3d36.1502d4e8689.Coremail.chuanshuodelist@126.com> References: <60ba353a.3d36.1502d4e8689.Coremail.chuanshuodelist@126.com> Message-ID: DNS resolving is slow, DNS servers could throttle requests going from the same IP. You would get this problem in any language. You would need to rotate multiple DNS servers to make it work. Look at {nameservers, [ nameserver() ]} option. 2015-10-03 13:45 GMT+03:00 lili : > Hi friends: > I write a little program to look up a number of host name's Address. > I create 1W process to lookup a lot of host's address. > case inet_res:getbyname(www.xxx.com, a) of > it return {error, timeout}. but when i one by one call getbyname is ok. > Who know this reason. > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdl.web@REDACTED Sun Oct 4 03:45:50 2015 From: sdl.web@REDACTED (Leo Liu) Date: Sun, 04 Oct 2015 09:45:50 +0800 Subject: [erlang-questions] dbg:stop/0 confusing Message-ID: Hi there, I check the source for dbg:stop/0 and it flushes trace messages and then stops the `dbg' process. However the doc says: -------------------------------- stop() -> ok Stops the dbg server and clears all trace flags for all processes and all trace patterns for all functions. Also shuts down all trace clients and closes all trace ports. Note that no trace patterns are affected by this function. -------------------------------- where is this ``clears all trace flags for all processes and all trace patterns for all functions'' implemented? The documentation is confusing on the difference between dbg:stop/0 and dbg:stop_clear/0. Help? 
Leo From ggaliens@REDACTED Sun Oct 4 16:14:39 2015 From: ggaliens@REDACTED (ggaliens@REDACTED) Date: Sun, 4 Oct 2015 14:14:39 +0000 Subject: [erlang-questions] Igor (module) merge and override a function. Message-ID: <20151004141439.U8VAX.185699.root@cdptpa-web10> Igor (module) merge and override a function. If I do a merge and the target file has a function exported from the source file ... I get a "backup" copy of that function with a long module decoration prefix, as well as the desired function which has replaced the renamed one. Is there a simple way to turn of any and all backups so that the function in target module just gets completely overwritten without a backup ? Please advise .... g g a l i e n s ( at ) n y c a p . r r . com From lukas@REDACTED Mon Oct 5 09:10:27 2015 From: lukas@REDACTED (Lukas Larsson) Date: Mon, 5 Oct 2015 09:10:27 +0200 Subject: [erlang-questions] dbg:stop/0 confusing In-Reply-To: References: Message-ID: Hello! On Sun, Oct 4, 2015 at 3:45 AM, Leo Liu wrote: > Hi there, > > I check the source for dbg:stop/0 and it flushes trace messages and then > stops the `dbg' process. However the doc says: > > -------------------------------- > stop() -> ok > > Stops the dbg server and clears all trace flags for all processes and > all trace patterns for all functions. Also shuts down all trace clients > and closes all trace ports. > > Note that no trace patterns are affected by this function. > -------------------------------- > > where is this ``clears all trace flags for all processes and all trace > patterns for all functions'' implemented? The clearing of trace flags for processes happens automatically by the vm when the tracer process is terminated. The clearing of trace patterns is done using dbg:ctp(). > The documentation is confusing > on the difference between dbg:stop/0 and dbg:stop_clear/0. Help? > The documentation is indeed a confusing and I believe wrong. dbg:stop() does not clean the function trace patterns, while dbg:stop_clear() does. So the "and all trace patterns for all functions" part of the dbg:stop() documentation should be removed. Thanks for pointing this out, I'll put a patch in for the next release. Unless someone can see something that I cannot? Lukas -------------- next part -------------- An HTML attachment was scrubbed... URL: From bchesneau@REDACTED Mon Oct 5 09:57:56 2015 From: bchesneau@REDACTED (Benoit Chesneau) Date: Mon, 05 Oct 2015 07:57:56 +0000 Subject: [erlang-questions] [ann] new Erlang.paris meetup on 2015/10/14 Message-ID: We are organising a new session of the french meet-up Erlang.paris on 10-14 in Paris. You can register and discover the program on our website: http://erlang.paris It will be hosted by leboncoin.fr Last session slides have been published on Speakerdeck: https://speakerdeck.com/erlangparis Looking forward to see you there :) - beno?t -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdl.web@REDACTED Mon Oct 5 15:01:09 2015 From: sdl.web@REDACTED (Leo Liu) Date: Mon, 05 Oct 2015 21:01:09 +0800 Subject: [erlang-questions] dbg:stop/0 confusing References: Message-ID: On 2015-10-05 15:10 +0800, Lukas Larsson wrote: > The documentation is indeed a confusing and I believe wrong. dbg:stop() > does not clean the function trace patterns, while dbg:stop_clear() does. So > the "and all trace patterns for all functions" part of the dbg:stop() > documentation should be removed. Thanks for the clarification. 
Leo From josh.rubyist@REDACTED Mon Oct 5 21:20:01 2015 From: josh.rubyist@REDACTED (Josh Adams) Date: Mon, 5 Oct 2015 14:20:01 -0500 Subject: [erlang-questions] distributed gproc and gen_leader Message-ID: I'm interested in using gproc in distributed mode, and it's not immediately clear to me what the 'right' gen_leader implementation to use is. Right now I'm using Torben Hoffman's, but I was hopeful there would be a nice document someone has written up regarding pros/cons of various implementations, ideally with a suggestion for which to use in the general case. Is anyone aware of such a document? Failing that, can anyone share anecdotes regarding how I might choose a gen_leader implementation, or whether or not distributed gproc is something I should base a distributed system around? Really just using it for global process registry, and also taking advantage of the properties to provide a simple ad-hoc 'database query' style for finding processes that are fit for various purposes within the system. Thanks in advance, -- Josh Adams From henrik.x.nord@REDACTED Tue Oct 6 10:59:08 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Tue, 6 Oct 2015 10:59:08 +0200 Subject: [erlang-questions] Patch Package OTP 18.1.1 Released Message-ID: <56138D5C.80700@ericsson.com> Patch Package: OTP 18.1.1 Git Tag: OTP-18.1.1 Date: 2015-10-06 Trouble Report Id: OTP-13013, OTP-13022, OTP-13025 Seq num: seq12957 System: OTP Release: 18 Application: inets-6.0.2, mnesia-4.13.2 Predecessor: OTP 18.1 Check out the git tag OTP-18.1.1, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- inets-6.0.2 ----------------------------------------------------- --------------------------------------------------------------------- The inets-6.0.2 application can be applied independently of other applications on a full OTP 18 installation. --- Fixed Bugs and Malfunctions --- OTP-13022 Application(s): inets Avoid crash in mod_auth_server and mod_security_server due to using an atom instead of a string when creating a name. --- Improvements and New Features --- OTP-13013 Application(s): inets Add function response_default_headers/0 to httpd customize API, to allow user to specify default values for HTTP response headers. Full runtime dependencies of inets-6.0.2: erts-6.0, kernel-3.0, mnesia-4.12, runtime_tools-1.8.14, ssl-5.3.4, stdlib-2.0 --------------------------------------------------------------------- --- mnesia-4.13.2 --------------------------------------------------- --------------------------------------------------------------------- The mnesia-4.13.2 application can be applied independently of other applications on a full OTP 18 installation. --- Fixed Bugs and Malfunctions --- OTP-13025 Application(s): mnesia Related Id(s): seq12957 Fixed a process and file descriptor leak in mnesia:restore/2. 
Full runtime dependencies of mnesia-4.13.2: erts-7.0, kernel-3.0, stdlib-2.0 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From wde@REDACTED Tue Oct 6 12:11:44 2015 From: wde@REDACTED (wde@REDACTED) Date: Tue, 6 Oct 2015 12:11:44 +0200 Subject: [erlang-questions] distributed gproc and gen_leader In-Reply-To: References: Message-ID: <56139E60.5020904@free.fr> Le 10/05/2015 09:20 PM, Josh Adams a ?crit : > I'm interested in using gproc in distributed mode, and it's not > immediately clear to me what the 'right' gen_leader implementation to > use is. Right now I'm using Torben Hoffman's, but I was hopeful there > would be a nice document someone has written up regarding pros/cons of > various implementations, ideally with a suggestion for which to use in > the general case. > > Is anyone aware of such a document? Failing that, can anyone share > anecdotes regarding how I might choose a gen_leader implementation, or > whether or not distributed gproc is something I should base a > distributed system around? Really just using it for global process > registry, and also taking advantage of the properties to provide a > simple ad-hoc 'database query' style for finding processes that are > fit for various purposes within the system. > > Thanks in advance, > I have used the following module as leader election system : https://github.com/ngmoco/gl_async_bully regards, From ulf@REDACTED Tue Oct 6 12:32:52 2015 From: ulf@REDACTED (Ulf Wiger) Date: Tue, 6 Oct 2015 12:32:52 +0200 Subject: [erlang-questions] distributed gproc and gen_leader In-Reply-To: References: Message-ID: <8CD2C278-67D9-4197-8C43-1A3A6E9EB645@feuerlabs.com> Having written gproc, here are my 2c: - Gproc can support distributed systems in two ways: 1. Global gproc, which relies on full replication 2. Local gproc perhaps also making use of the remote lookup functions [1]. Regardless of any specific issues of ?global?, ?gproc_dist? et al, global name registration is problematic: it scales poorly, and you have to contend with consistency issues. - Gproc supports two different leader election approaches: 1. gen_leader (whichever one you decide to trust) 2. locks_leader (which is gen_leader-inspired, but uses a totally different algorithm) One might paraphrase Sir Tony Hoare and suggest that there are two ways to pick a global synchronization library: one is to pick one so ubiquitous that the deficiencies are well-known, another is to pick one so obscure that there are no well-known deficiencies. Locks_leader has no well-known deficiencies. ;-) I will not swear that gproc/gen_leader is more performant or reliable than ?global? (which is both supported and much more battle-tested). A reason for picking global gproc might be that the gproc semantics of registration and lookup are desirable, but I will say that I?ve received very little user feedback on global gproc. If a few users, or for that matter one dedicated user, would form a club and start beating it up, I will do my best to respond. :) I personally believe more in gproc/locks_leader, at least long-term, but that branch is currently not up to date. I have not yet decided whether to commit to that version as the default. 
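To make the two modes concrete at the call-site level, a minimal sketch -- API names from memory, the registered names and the timeout are made up, and the configuration needed for the global/leader mode is not shown, so check the docs linked below before relying on it:

    register_and_find() ->
        true = gproc:reg({n, l, my_service}),    %% n = unique name, l = local to this node
        Pid  = gproc:where({n, l, my_service}),  %% pid() | undefined
        %% the global flavour is the same call shape with a g context;
        %% this is the part that pulls in gproc_dist and leader election:
        true = gproc:reg({n, g, my_service}),
        %% block until some other process registers a name (local case shown):
        {Other, _Value} = gproc:await({n, l, other_service}, 5000),
        {Pid, Other}.
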
[1] https://github.com/uwiger/gproc/blob/master/doc/gproc.md#await-3 https://github.com/uwiger/gproc/blob/master/doc/gproc.md#bcast-2 https://github.com/uwiger/gproc/blob/master/doc/gproc.md#wide_await-3 BR, Ulf W > On 05 Oct 2015, at 21:20, Josh Adams wrote: > > I'm interested in using gproc in distributed mode, and it's not > immediately clear to me what the 'right' gen_leader implementation to > use is. Right now I'm using Torben Hoffman's, but I was hopeful there > would be a nice document someone has written up regarding pros/cons of > various implementations, ideally with a suggestion for which to use in > the general case. > > Is anyone aware of such a document? Failing that, can anyone share > anecdotes regarding how I might choose a gen_leader implementation, or > whether or not distributed gproc is something I should base a > distributed system around? Really just using it for global process > registry, and also taking advantage of the properties to provide a > simple ad-hoc 'database query' style for finding processes that are > fit for various purposes within the system. > > Thanks in advance, > > -- > Josh Adams > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com From siraaj@REDACTED Tue Oct 6 15:04:36 2015 From: siraaj@REDACTED (Siraaj Khandkar) Date: Tue, 6 Oct 2015 09:04:36 -0400 Subject: [erlang-questions] Erlang OTP 18 memory leak / SSL In-Reply-To: References: Message-ID: On Sat, Sep 19, 2015 at 2:34 PM, Siraaj Khandkar wrote: > On Sat, Sep 19, 2015 at 12:34 PM, Sereysethy TOUCH < > touch.sereysethy@REDACTED> wrote: > >> Hello, >> >> I just recently updated Erlang to latest version OTP 18 on Ubuntu server. >> It uses cowboy (websocket), ranch, ssl, erlydtl & rabbitmq. It used to work >> fine in OTP 17. The program is correctly compiled but during the execution >> the memory kept increasing. I need to restart the process every one or two >> hours to free some memory. >> >> I have read a post here [ >> http://erlang.2086793.n4.nabble.com/R18-Unbounded-SSL-Session-ETS-Table-Growth-td4713697.html] >> which discussed about the ssl_session_cache ETS table which can become very >> large. >> >> The process beam.smp can go up to more than 5G during a few hours of >> executions. >> >> I am not yet sure what is the root cause of this issue. >> >> Does anyone know how to fix this? Or where should I look at? >> > > > We just experienced the same after upgrading from 17.0 (leaking over > days/weeks) to 17.5.3 (leaking over hours). I should be able to share some > data in a few of days. > As promised, attached is a screenshot of memory usage (in bytes) before and after upgrade to 18.1 (from 17.5, on Oct 2nd at 15:00ish). This is a machine with constant load, that does not much but poll over https every 15 seconds. Top plot is total memory used by VM. Middle is memory usage aggregated by process origin (name, if registered, otherwise its spawn history (init call, otp init call, otp ancestry)). Bottom is reductions per process origin (same schema as above). Those, pre-18.1-upgrade, sharp drops are VM deaths (restarted by upstart). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: screenshot--2015-10-05--18.41.30.jpg Type: image/jpeg Size: 183452 bytes Desc: not available URL: From rvirding@REDACTED Wed Oct 7 03:00:10 2015 From: rvirding@REDACTED (Robert Virding) Date: Tue, 6 Oct 2015 18:00:10 -0700 Subject: [erlang-questions] Strange behaviour of exit(kill) Message-ID: I am giving an Erlang course and we are looking at the error handling. When showing examples I found a very strange behaviour (to me) of doing exit(kill). The linked process gets the 'kill' but it is trappable. However, if I use exit(P, kill) to send the kill signal it is, as it should be, not trappable. Erlang/OTP 18 [erts-7.0] [source-4d83b58] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] Eshell V7.0 (abort with ^G) 1> process_flag(trap_exit, true). false 2> spawn_link(fun () -> exit(normal) end). <0.36.0> 3> flush(). Shell got {'EXIT',<0.36.0>,normal} ok 4> spawn_link(fun () -> exit(die) end). <0.39.0> 5> flush(). Shell got {'EXIT',<0.39.0>,die} ok 6> S = self(). <0.33.0> 7> spawn_link(fun () -> exit(S, die) end). <0.43.0> 8> flush(). Shell got {'EXIT',<0.43.0>,die} Shell got {'EXIT',<0.43.0>,normal} ok 9> spawn_link(fun () -> exit(kill) end). <0.46.0> 10> flush(). Shell got {'EXIT',<0.46.0>,kill} ok 11> spawn_link(fun () -> exit(S, kill) end). ** exception exit: killed The shell evaluator process traps exits and then spawn_links a number of processes which exit/1 and exit/2 with different reasons. Everything behaves normally until 9> where I spawn_link a process which does an exit(kill). I get the 'kill' signal, but it is a trappable 'kill' signal! If I send the 'kill' signal with exit/2 it is not trappable, as it shouldn't be. What gives? So just receiving a 'kill' signal is not what kills me but it has to be sent in a certain way. So the process receiving a signal knows how the signal was sent. This is really inconsistent! It should be the signal itself which determines what the receiving process does. I would definitely class this as a bug. Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Wed Oct 7 03:25:38 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 07 Oct 2015 10:25:38 +0900 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: References: Message-ID: <4419117.afWZKFkaIl@burrito> On Tuesday 06 October 2015 18:00:10 Robert Virding wrote: > 7> spawn_link(fun () -> exit(S, die) end). > <0.43.0> > 8> flush(). > Shell got {'EXIT',<0.43.0>,die} > Shell got {'EXIT',<0.43.0>,normal} > ok > 9> spawn_link(fun () -> exit(kill) end). > <0.46.0> > 10> flush(). > Shell got {'EXIT',<0.46.0>,kill} > ok > 11> spawn_link(fun () -> exit(S, kill) end). > ** exception exit: killed > What gives? So just receiving a 'kill' signal is not what kills me but it > has to be sent in a certain way. So the process receiving a signal knows > how the signal was sent. This is really inconsistent! It should be the > signal itself which determines what the receiving process does. I would > definitely class this as a bug. Hi, Robert. +1 This has bothered me too whenever I stop to think about it (usually I don't). It is definitely hard to understand when you're trying to build a mental model of how things work. Is it really a bug? It is documented to work this way, and seems to have been made this way on purpose. So maybe not that kind of a bug, but a design bug. 
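One wrinkle worth adding (a quick shell session, the pid is whatever you happen to get): when a process is taken down by exit(P, kill), its linked processes already see the reason as killed rather than kill, so the untrappable path does propagate under a different name -- it is only the exit(kill)-from-inside case at 9> and 10> above that forwards a bare, trappable kill:

    1> process_flag(trap_exit, true).
    false
    2> P = spawn_link(fun () -> receive after infinity -> ok end end).
    <0.64.0>
    3> exit(P, kill).
    true
    4> flush().
    Shell got {'EXIT',<0.64.0>,killed}
    ok
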
It is semantically overloaded from the perspective of the user: Case 1: I receive a message {'EXIT', Pid, Reason = kill} where Pid /= self() which is just informing me of the demise of another process due to Reason. Case 2: I receive a message {'EXIT', Pid, Reason = kill} where Pid /= self() which is telling me I should exit with Reason. What is the difference? It is invisible at the level that we normally deal with the environment. I am really uncomfortable with the result of the flush() on line 8. 'kill' is supposed to be special, but it is only special to the first process receiving the message. An identically formed message *appears* to propagate, but clearly something more interesting is happening underneath. And no clues are provided to the user about why there is a difference. I feel like exit(Pid, kill) should send something special and clearly untrappable to Pid (like {'EXIT', Pid, kill} is an untrappable form, always -- or maybe it is that {'EXIT', Pid = self(), kill} *is* specifically untrappable by way of matching on self()?), and then Pid should propagate something different like {'EXIT', Pid, killed} to make the difference clear. -Craig From zxq9@REDACTED Wed Oct 7 03:33:24 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 07 Oct 2015 10:33:24 +0900 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: <4419117.afWZKFkaIl@burrito> References: <4419117.afWZKFkaIl@burrito> Message-ID: <1713776.Oj5rNu4sYV@burrito> On Wednesday 07 October 2015 10:25:38 zxq9 wrote: > or maybe it is that {'EXIT', Pid = self(), kill} *is* specifically untrappable by way of matching on self()? That was too much to hope for: 1> P = spawn(fun Loop() -> receive M -> io:format("Got ~p~n", [M]), Loop() end end). <0.1889.0> 2> P ! {'EXIT', P, kill}. Got {'EXIT',<0.1889.0>,kill} {'EXIT',<0.1889.0>,kill} 3> P ! {'EXIT', P, blam}. Got {'EXIT',<0.1889.0>,blam} {'EXIT',<0.1889.0>,blam} 4> exit(P, kill). true 5> P ! {'EXIT', P, blam}. {'EXIT',<0.1889.0>,blam} If it *did* turn out that matching {'EXIT', self(), kill} was untrappable I would just say "ah, that makes sense -- now I can understand the mechanism behind this without thinking about VM details". Instead it appears to be a case of mysterious activity underlying a message form that is semantically overloaded. And that stinks. -Craig From rvirding@REDACTED Wed Oct 7 03:46:55 2015 From: rvirding@REDACTED (Robert Virding) Date: Tue, 6 Oct 2015 18:46:55 -0700 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: <1713776.Oj5rNu4sYV@burrito> References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> Message-ID: It's all about signals and not messages. Sending a message to a process should *NEVER* by default kill it even if it has the same format as an 'EXIT' message. NEVER!. A signal is converted to a message when it arrives at a process which is trapping exits unless it is the 'kill' which is untrappable and the process always dies. Explicitly sending the SIGNAL with exit(Pid, kill) should unconditionally kill the process as should dying with the reason 'kill' in exit(kill) which also sends the SIGNAL 'kill'. In both cases the process receives the SIGNAL 'kill', as shown in my example, but in one case it is trappable and in the other it is untrappable. My point is that the *same* signal results in different behaviour depending on how it was sent. That's incocnsistent. 
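(A minimal, compiled version of the experiment above, for anyone who wants to reproduce it outside the shell; the module and function names are illustrative, and the behaviour is as observed on OTP 18.)

-module(kill_demo).
-export([run/0]).

%% Case 1: a linked child terminates itself with exit(kill).
%% A parent that traps exits survives and sees {'EXIT', Child, kill}.
%% Case 2: a child calls exit(Parent, kill). That signal is untrappable,
%% so the parent dies with reason 'killed' and never reaches the second
%% receive; run from the shell this shows up as "** exception exit: killed",
%% mirroring the end of the transcript above.
run() ->
    process_flag(trap_exit, true),
    Child1 = spawn_link(fun () -> exit(kill) end),
    receive
        {'EXIT', Child1, Reason1} ->
            io:format("trapped exit(kill) from child as ~p~n", [Reason1])
    end,
    Parent = self(),
    spawn_link(fun () -> exit(Parent, kill) end),
    receive
        {'EXIT', _Child2, Reason2} ->
            io:format("unexpectedly trapped ~p~n", [Reason2])
    end.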
Robert On 6 October 2015 at 18:33, zxq9 wrote: > On Wednesday 07 October 2015 10:25:38 zxq9 wrote: > > > or maybe it is that {'EXIT', Pid = self(), kill} *is* specifically > untrappable by way of matching on self()? > > That was too much to hope for: > > 1> P = spawn(fun Loop() -> receive M -> io:format("Got ~p~n", [M]), Loop() > end end). > <0.1889.0> > 2> P ! {'EXIT', P, kill}. > Got {'EXIT',<0.1889.0>,kill} > {'EXIT',<0.1889.0>,kill} > 3> P ! {'EXIT', P, blam}. > Got {'EXIT',<0.1889.0>,blam} > {'EXIT',<0.1889.0>,blam} > 4> exit(P, kill). > true > 5> P ! {'EXIT', P, blam}. > {'EXIT',<0.1889.0>,blam} > > If it *did* turn out that matching {'EXIT', self(), kill} was untrappable > I would just say "ah, that makes sense -- now I can understand the > mechanism behind this without thinking about VM details". Instead it > appears to be a case of mysterious activity underlying a message form that > is semantically overloaded. And that stinks. > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Wed Oct 7 03:51:14 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 07 Oct 2015 10:51:14 +0900 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: References: <1713776.Oj5rNu4sYV@burrito> Message-ID: <3668099.9GAfdER8LF@burrito> On Tuesday 06 October 2015 18:46:55 you wrote: > It's all about signals and not messages. Sending a message to a process > should *NEVER* by default kill it even if it has the same format as an > 'EXIT' message. NEVER!. A signal is converted to a message when it arrives > at a process which is trapping exits unless it is the 'kill' which is > untrappable and the process always dies. ok > Explicitly sending the SIGNAL with exit(Pid, kill) should unconditionally > kill the process as should dying with the reason 'kill' in exit(kill) which > also sends the SIGNAL 'kill'. In both cases the process receives the SIGNAL > 'kill', as shown in my example, but in one case it is trappable and in the > other it is untrappable. I didn't realize it was propagating the signal in addition to the message. That's even more weird then. -Craig From zachary.hueras@REDACTED Wed Oct 7 03:20:13 2015 From: zachary.hueras@REDACTED (Soup) Date: Tue, 6 Oct 2015 21:20:13 -0400 Subject: [erlang-questions] [erlang-bugs] Strange behaviour of exit(kill) In-Reply-To: References: Message-ID: At 9, the spawned process is killing itself with reason kill, and the shell received a trappable exit message. At 11, the spawned process is killing the *shell* with reason kill. That's not a trappable signal coming from another process: it's an emulator command to kill the specified process, and the reason kill makes it untrappable. It's a little nuanced, but pretty straightforward otherwise. When a process dies, it informs linked processes by way of message, which forces exit unless the receiving process is trapping exits. *exit/2 does not produce a message*, it tries to terminate the process. It's only converted into a message if the process is trapping exits, and reason kill bypasses the trapping logic. On Oct 6, 2015 9:00 PM, "Robert Virding" wrote: > I am giving an Erlang course and we are looking at the error handling. > When showing examples I found a very strange behaviour (to me) of doing > exit(kill). The linked process gets the 'kill' but it is trappable. 
> However, if I use exit(P, kill) to send the kill signal it is, as it should > be, not trappable. > > Erlang/OTP 18 [erts-7.0] [source-4d83b58] [64-bit] [smp:8:8] > [async-threads:10] [hipe] [kernel-poll:false] > > Eshell V7.0 (abort with ^G) > 1> process_flag(trap_exit, true). > false > 2> spawn_link(fun () -> exit(normal) end). > <0.36.0> > 3> flush(). > Shell got {'EXIT',<0.36.0>,normal} > ok > 4> spawn_link(fun () -> exit(die) end). > <0.39.0> > 5> flush(). > Shell got {'EXIT',<0.39.0>,die} > ok > 6> S = self(). > <0.33.0> > 7> spawn_link(fun () -> exit(S, die) end). > <0.43.0> > 8> flush(). > Shell got {'EXIT',<0.43.0>,die} > Shell got {'EXIT',<0.43.0>,normal} > ok > 9> spawn_link(fun () -> exit(kill) end). > <0.46.0> > 10> flush(). > Shell got {'EXIT',<0.46.0>,kill} > ok > 11> spawn_link(fun () -> exit(S, kill) end). > ** exception exit: killed > > The shell evaluator process traps exits and then spawn_links a number of > processes which exit/1 and exit/2 with different reasons. Everything > behaves normally until 9> where I spawn_link a process which does an > exit(kill). I get the 'kill' signal, but it is a trappable 'kill' signal! > If I send the 'kill' signal with exit/2 it is not trappable, as it > shouldn't be. > > What gives? So just receiving a 'kill' signal is not what kills me but it > has to be sent in a certain way. So the process receiving a signal knows > how the signal was sent. This is really inconsistent! It should be the > signal itself which determines what the receiving process does. I would > definitely class this as a bug. > > Robert > > > _______________________________________________ > erlang-bugs mailing list > erlang-bugs@REDACTED > http://erlang.org/mailman/listinfo/erlang-bugs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.hueras@REDACTED Wed Oct 7 04:43:14 2015 From: zachary.hueras@REDACTED (Soup) Date: Tue, 6 Oct 2015 22:43:14 -0400 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: <3668099.9GAfdER8LF@burrito> References: <1713776.Oj5rNu4sYV@burrito> <3668099.9GAfdER8LF@burrito> Message-ID: Think about this in the context of a supervisor and its children. If the child is killed via exit/2, should the supervisor also be killed? And its supervisor? While I agree the behavior is inconsistent, it still seems appropriate to me. A kill signal should only kill the process it was meant to kill and any non-trapping processes that are linked to it. Without this behavior, it would be possible to kill the entire application supervision tree by calling exit(Pid, kill) on the lowliest child of an application because no process would be able to trap the signal. ?That being said, you could also go the other way. If I start a process that traps exits as part of a supervision tree, then exit(Pid, kill) its supervisor?, that process could remain alive even though its supervisor no longer exists. A rogue process, if you will. Personally, I would prefer to deal with the implications of the latter than the former, though. On Tue, Oct 6, 2015 at 9:51 PM, zxq9 wrote: > On Tuesday 06 October 2015 18:46:55 you wrote: > > It's all about signals and not messages. Sending a message to a process > > should *NEVER* by default kill it even if it has the same format as an > > 'EXIT' message. NEVER!. A signal is converted to a message when it > arrives > > at a process which is trapping exits unless it is the 'kill' which is > > untrappable and the process always dies. 
> > ok > > > Explicitly sending the SIGNAL with exit(Pid, kill) should unconditionally > > kill the process as should dying with the reason 'kill' in exit(kill) > which > > also sends the SIGNAL 'kill'. In both cases the process receives the > SIGNAL > > 'kill', as shown in my example, but in one case it is trappable and in > the > > other it is untrappable. > > I didn't realize it was propagating the signal in addition to the message. > That's even more weird then. > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachary.hueras@REDACTED Wed Oct 7 04:53:10 2015 From: zachary.hueras@REDACTED (Soup) Date: Tue, 6 Oct 2015 22:53:10 -0400 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: References: <1713776.Oj5rNu4sYV@burrito> <3668099.9GAfdER8LF@burrito> Message-ID: You could even say that OTP relies on this behavior to implement the brutal_kill termination strategy. If I have a temporary child that is itself a supervisor that implements brutal_kill, it should absolutely *not* kill its supervisor, because it is only a temporary child. Not being able to trap kill signals meant for a process to which a given process is linked would make this setup impossible. Hell, what would happen if I called supervisor:terminate_child(Pid, Child) if the supervisor happened to use the brutal_kill termination strategy? Bye-bye VM. On Tue, Oct 6, 2015 at 10:43 PM, Soup wrote: > Think about this in the context of a supervisor and its children. If the > child is killed via exit/2, should the supervisor also be killed? And its > supervisor? > > While I agree the behavior is inconsistent, it still seems appropriate to > me. A kill signal should only kill the process it was meant to kill and any > non-trapping processes that are linked to it. Without this behavior, it > would be possible to kill the entire application supervision tree by > calling exit(Pid, kill) on the lowliest child of an application because no > process would be able to trap the signal. > > ?That being said, you could also go the other way. If I start a process > that traps exits as part of a supervision tree, then exit(Pid, kill) its > supervisor?, that process could remain alive even though its supervisor no > longer exists. A rogue process, if you will. > > Personally, I would prefer to deal with the implications of the latter > than the former, though. > > > On Tue, Oct 6, 2015 at 9:51 PM, zxq9 wrote: > >> On Tuesday 06 October 2015 18:46:55 you wrote: >> > It's all about signals and not messages. Sending a message to a process >> > should *NEVER* by default kill it even if it has the same format as an >> > 'EXIT' message. NEVER!. A signal is converted to a message when it >> arrives >> > at a process which is trapping exits unless it is the 'kill' which is >> > untrappable and the process always dies. >> >> ok >> >> > Explicitly sending the SIGNAL with exit(Pid, kill) should >> unconditionally >> > kill the process as should dying with the reason 'kill' in exit(kill) >> which >> > also sends the SIGNAL 'kill'. In both cases the process receives the >> SIGNAL >> > 'kill', as shown in my example, but in one case it is trappable and in >> the >> > other it is untrappable. >> >> I didn't realize it was propagating the signal in addition to the >> message. That's even more weird then. 
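(To make the brutal_kill point above concrete: a supervisor traps exits and terminates a brutal_kill child with exit(Pid, kill), so it must itself be able to survive the exit that comes back over the link. Roughly the shape of such a child spec, with a hypothetical worker module:)

init([]) ->
    ChildSpec = {my_worker_id,
                 {my_worker, start_link, []},  %% hypothetical module
                 temporary,
                 brutal_kill,                  %% shut down via exit(Pid, kill)
                 worker,
                 [my_worker]},
    {ok, {{one_for_one, 5, 10}, [ChildSpec]}}.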
>> >> -Craig >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From huss01@REDACTED Wed Oct 7 09:51:22 2015 From: huss01@REDACTED (=?UTF-8?Q?H=C3=A5kan_Huss?=) Date: Wed, 7 Oct 2015 09:51:22 +0200 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> Message-ID: 2015-10-07 3:46 GMT+02:00 Robert Virding : > It's all about signals and not messages. Sending a message to a process > should *NEVER* by default kill it even if it has the same format as an > 'EXIT' message. NEVER!. A signal is converted to a message when it arrives > at a process which is trapping exits unless it is the 'kill' which is > untrappable and the process always dies. > > Yes, but the 'kill' signal is not an exit signal with reason kill. The 'kill' signal can only be sent by calling exit/2 with Reason = kill, which is documented to have the effect that "an untrappable exit signal is sent to Pid which will unconditionally exit with exit reason killed." There is no mention of how the exit reason in that exit signal, and since it is not trappable there is no way to observe it. > Explicitly sending the SIGNAL with exit(Pid, kill) should unconditionally > kill the process > Yes. as should dying with the reason 'kill' in exit(kill) which also sends the > SIGNAL 'kill'. > No, this sends an exit signal with reason kill, but that is not the same ass the signal sent using exit(Pid, kill). > In both cases the process receives the SIGNAL 'kill', as shown in my > example, but in one case it is trappable and in the other it is untrappable. > No, in one case it receives an exit signal with reason kill, in the other case it receives the special untrappable exit signal which causes unconditional termination. > My point is that the *same* signal results in different behaviour > depending on how it was sent. That's incocnsistent. > I agree that it is inconsistent. I would have preferred that the exit(Pid, kill) was a separate function, e.g., kill(Pid) and that exit(Pid, kill) would be handled as any other exit/2 call. But I won't hold my breath in anticipation of that being changed... /H?kan > Robert > > > On 6 October 2015 at 18:33, zxq9 wrote: > >> On Wednesday 07 October 2015 10:25:38 zxq9 wrote: >> >> > or maybe it is that {'EXIT', Pid = self(), kill} *is* specifically >> untrappable by way of matching on self()? >> >> That was too much to hope for: >> >> 1> P = spawn(fun Loop() -> receive M -> io:format("Got ~p~n", [M]), >> Loop() end end). >> <0.1889.0> >> 2> P ! {'EXIT', P, kill}. >> Got {'EXIT',<0.1889.0>,kill} >> {'EXIT',<0.1889.0>,kill} >> 3> P ! {'EXIT', P, blam}. >> Got {'EXIT',<0.1889.0>,blam} >> {'EXIT',<0.1889.0>,blam} >> 4> exit(P, kill). >> true >> 5> P ! {'EXIT', P, blam}. >> {'EXIT',<0.1889.0>,blam} >> >> If it *did* turn out that matching {'EXIT', self(), kill} was untrappable >> I would just say "ah, that makes sense -- now I can understand the >> mechanism behind this without thinking about VM details". Instead it >> appears to be a case of mysterious activity underlying a message form that >> is semantically overloaded. And that stinks. 
>> >> -Craig >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvirding@REDACTED Wed Oct 7 12:27:24 2015 From: rvirding@REDACTED (Robert Virding) Date: Wed, 7 Oct 2015 03:27:24 -0700 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> Message-ID: I still find that extremely inconsistent, there are actually 2 'kill' signals: one that is sent with exit(Pid, kill) and the other which sent when you do exit(kill). So I can trap 'kill' and I can't trap 'kill', great. I would personally go the other way and say that kill is kill however it is sent. But I agree with you, I'm not holding my breath waiting for it to be fixed. Robert P.S. I am not even going to mention the terribly inconsistent handling of errors in link/1. On 7 October 2015 at 00:51, H?kan Huss wrote: > 2015-10-07 3:46 GMT+02:00 Robert Virding : > >> It's all about signals and not messages. Sending a message to a process >> should *NEVER* by default kill it even if it has the same format as an >> 'EXIT' message. NEVER!. A signal is converted to a message when it arrives >> at a process which is trapping exits unless it is the 'kill' which is >> untrappable and the process always dies. >> >> Yes, but the 'kill' signal is not an exit signal with reason kill. The > 'kill' signal can only be sent by calling exit/2 with Reason = kill, which > is documented to have the effect that "an untrappable exit signal is sent > to Pid which will unconditionally exit with exit reason killed." There is > no mention of how the exit reason in that exit signal, and since it is not > trappable there is no way to observe it. > > >> Explicitly sending the SIGNAL with exit(Pid, kill) should unconditionally >> kill the process >> > Yes. > > as should dying with the reason 'kill' in exit(kill) which also sends the >> SIGNAL 'kill'. >> > No, this sends an exit signal with reason kill, but that is not the same > ass the signal sent using exit(Pid, kill). > > >> In both cases the process receives the SIGNAL 'kill', as shown in my >> example, but in one case it is trappable and in the other it is untrappable. >> > No, in one case it receives an exit signal with reason kill, in the other > case it receives the special untrappable exit signal which causes > unconditional termination. > > >> My point is that the *same* signal results in different behaviour >> depending on how it was sent. That's incocnsistent. >> > I agree that it is inconsistent. I would have preferred that the exit(Pid, > kill) was a separate function, e.g., kill(Pid) and that exit(Pid, kill) > would be handled as any other exit/2 call. But I won't hold my breath in > anticipation of that being changed... > > /H?kan > > >> Robert >> >> >> On 6 October 2015 at 18:33, zxq9 wrote: >> >>> On Wednesday 07 October 2015 10:25:38 zxq9 wrote: >>> >>> > or maybe it is that {'EXIT', Pid = self(), kill} *is* specifically >>> untrappable by way of matching on self()? >>> >>> That was too much to hope for: >>> >>> 1> P = spawn(fun Loop() -> receive M -> io:format("Got ~p~n", [M]), >>> Loop() end end). >>> <0.1889.0> >>> 2> P ! {'EXIT', P, kill}. 
>>> Got {'EXIT',<0.1889.0>,kill} >>> {'EXIT',<0.1889.0>,kill} >>> 3> P ! {'EXIT', P, blam}. >>> Got {'EXIT',<0.1889.0>,blam} >>> {'EXIT',<0.1889.0>,blam} >>> 4> exit(P, kill). >>> true >>> 5> P ! {'EXIT', P, blam}. >>> {'EXIT',<0.1889.0>,blam} >>> >>> If it *did* turn out that matching {'EXIT', self(), kill} was >>> untrappable I would just say "ah, that makes sense -- now I can understand >>> the mechanism behind this without thinking about VM details". Instead it >>> appears to be a case of mysterious activity underlying a message form that >>> is semantically overloaded. And that stinks. >>> >>> -Craig >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdl.web@REDACTED Wed Oct 7 14:28:12 2015 From: sdl.web@REDACTED (Leo Liu) Date: Wed, 07 Oct 2015 20:28:12 +0800 Subject: [erlang-questions] What is ``internal ets tables''? Message-ID: Hi there, I am looking at observer/src/cdv_int_tab_cb.erl and find this ``internal ets tables''. Any idea what it is? Thanks Leo From roger@REDACTED Wed Oct 7 14:57:12 2015 From: roger@REDACTED (Roger Lipscombe) Date: Wed, 7 Oct 2015 13:57:12 +0100 Subject: [erlang-questions] rebar eunit dies with {"Kernel pid terminated", error_logger, killed} Message-ID: I'm using rebar (rebar 2.6.1 R15B03 20150928_141254 git 2.6.1) to run eunit, and it's failing on OS X with {"Kernel pid terminated",error_logger,killed}. The same project run on Linux, with the same version of rebar, runs successfully. How do I go about figuring out what's wrong with it? From huss01@REDACTED Wed Oct 7 15:15:11 2015 From: huss01@REDACTED (=?UTF-8?Q?H=C3=A5kan_Huss?=) Date: Wed, 7 Oct 2015 15:15:11 +0200 Subject: [erlang-questions] Strange behaviour of exit(kill) In-Reply-To: References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> Message-ID: 2015-10-07 12:27 GMT+02:00 Robert Virding : > I still find that extremely inconsistent, there are actually 2 'kill' > signals: one that is sent with exit(Pid, kill) and the other which sent > when you do exit(kill). So I can trap 'kill' and I can't trap 'kill', great. > > I agree that it is confusing. But note that exit(kill) will not unconditionally terminate the process, so it is apparent that exit/1 and exit/2 are doing different things: Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false] Eshell V7.0 (abort with ^G) 1> self(). <0.34.0> 2> catch exit(kill). {'EXIT',kill} 3> self(). <0.34.0> 4> catch exit(self(), kill). ** exception exit: killed 5> self(). <0.39.0> The most confusing thing is that exit(P, Reason) will, in case P is terminated, propagate exit signals with reason Reason in all cases except when Reason is kill. In this case the propagated reason is killed. But as your experiment shows, this is in fact totally unnecessary since an exit signal with reason kill can be caught anyway... I would personally go the other way and say that kill is kill however it is > sent. But I agree with you, I'm not holding my breath waiting for it to be > fixed. > Robert > > P.S. I am not even going to mention the terribly inconsistent handling of > errors in link/1. 
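(A small sketch of the propagation described above, with names that are mine: a trapping watcher linked to a victim sees reason 'killed', not 'kill', after exit(Victim, kill).)

propagation_demo() ->
    spawn(fun () ->
              process_flag(trap_exit, true),
              Victim = spawn_link(fun () -> receive after infinity -> ok end end),
              exit(Victim, kill),
              receive
                  {'EXIT', Victim, Reason} ->
                      %% Reason is 'killed': the untrappable signal hit Victim,
                      %% and the link then delivered an ordinary, trappable
                      %% exit signal with the rewritten reason.
                      io:format("watcher trapped ~p~n", [Reason])
              end
          end).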
> > > On 7 October 2015 at 00:51, H?kan Huss wrote: > >> 2015-10-07 3:46 GMT+02:00 Robert Virding : >> >>> It's all about signals and not messages. Sending a message to a process >>> should *NEVER* by default kill it even if it has the same format as an >>> 'EXIT' message. NEVER!. A signal is converted to a message when it arrives >>> at a process which is trapping exits unless it is the 'kill' which is >>> untrappable and the process always dies. >>> >>> Yes, but the 'kill' signal is not an exit signal with reason kill. The >> 'kill' signal can only be sent by calling exit/2 with Reason = kill, which >> is documented to have the effect that "an untrappable exit signal is sent >> to Pid which will unconditionally exit with exit reason killed." There >> is no mention of how the exit reason in that exit signal, and since it is >> not trappable there is no way to observe it. >> >> >>> Explicitly sending the SIGNAL with exit(Pid, kill) should >>> unconditionally kill the process >>> >> Yes. >> >> as should dying with the reason 'kill' in exit(kill) which also sends the >>> SIGNAL 'kill'. >>> >> No, this sends an exit signal with reason kill, but that is not the same >> ass the signal sent using exit(Pid, kill). >> >> >>> In both cases the process receives the SIGNAL 'kill', as shown in my >>> example, but in one case it is trappable and in the other it is untrappable. >>> >> No, in one case it receives an exit signal with reason kill, in the other >> case it receives the special untrappable exit signal which causes >> unconditional termination. >> >> >>> My point is that the *same* signal results in different behaviour >>> depending on how it was sent. That's incocnsistent. >>> >> I agree that it is inconsistent. I would have preferred that the >> exit(Pid, kill) was a separate function, e.g., kill(Pid) and that exit(Pid, >> kill) would be handled as any other exit/2 call. But I won't hold my breath >> in anticipation of that being changed... >> >> /H?kan >> >> >>> Robert >>> >>> >>> On 6 October 2015 at 18:33, zxq9 wrote: >>> >>>> On Wednesday 07 October 2015 10:25:38 zxq9 wrote: >>>> >>>> > or maybe it is that {'EXIT', Pid = self(), kill} *is* specifically >>>> untrappable by way of matching on self()? >>>> >>>> That was too much to hope for: >>>> >>>> 1> P = spawn(fun Loop() -> receive M -> io:format("Got ~p~n", [M]), >>>> Loop() end end). >>>> <0.1889.0> >>>> 2> P ! {'EXIT', P, kill}. >>>> Got {'EXIT',<0.1889.0>,kill} >>>> {'EXIT',<0.1889.0>,kill} >>>> 3> P ! {'EXIT', P, blam}. >>>> Got {'EXIT',<0.1889.0>,blam} >>>> {'EXIT',<0.1889.0>,blam} >>>> 4> exit(P, kill). >>>> true >>>> 5> P ! {'EXIT', P, blam}. >>>> {'EXIT',<0.1889.0>,blam} >>>> >>>> If it *did* turn out that matching {'EXIT', self(), kill} was >>>> untrappable I would just say "ah, that makes sense -- now I can understand >>>> the mechanism behind this without thinking about VM details". Instead it >>>> appears to be a case of mysterious activity underlying a message form that >>>> is semantically overloaded. And that stinks. >>>> >>>> -Craig >>>> _______________________________________________ >>>> erlang-questions mailing list >>>> erlang-questions@REDACTED >>>> http://erlang.org/mailman/listinfo/erlang-questions >>>> >>> >>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From seancribbs@REDACTED Wed Oct 7 15:55:19 2015 From: seancribbs@REDACTED (Sean Cribbs) Date: Wed, 7 Oct 2015 08:55:19 -0500 Subject: [erlang-questions] rebar eunit dies with {"Kernel pid terminated", error_logger, killed} In-Reply-To: References: Message-ID: I have also previously seen eunit under rebar cause this crash. I think there's a race-condition at completion of the suite but I've never been able to find it. Can you run the suite successfully outside rebar? On Wed, Oct 7, 2015 at 7:57 AM, Roger Lipscombe wrote: > I'm using rebar (rebar 2.6.1 R15B03 20150928_141254 git 2.6.1) to run > eunit, and it's failing on OS X with {"Kernel pid > terminated",error_logger,killed}. The same project run on Linux, with > the same version of rebar, runs successfully. > > How do I go about figuring out what's wrong with it? > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sverker.eriksson@REDACTED Wed Oct 7 16:33:43 2015 From: sverker.eriksson@REDACTED (Sverker Eriksson) Date: Wed, 7 Oct 2015 16:33:43 +0200 Subject: [erlang-questions] What is ``internal ets tables''? In-Reply-To: References: Message-ID: <56152D47.1010207@ericsson.com> It's ets tables used by ets itself to map from pid to owned tables and from pid to fixated tables. However, looking at the code, they are only dumped if the VM was built in debug mode. /Sverker, Erlang/OTP On 10/07/2015 02:28 PM, Leo Liu wrote: > Hi there, > > I am looking at observer/src/cdv_int_tab_cb.erl and find this ``internal > ets tables''. Any idea what it is? > > Thanks > Leo > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From tuncer.ayaz@REDACTED Wed Oct 7 20:42:21 2015 From: tuncer.ayaz@REDACTED (Tuncer Ayaz) Date: Wed, 7 Oct 2015 20:42:21 +0200 Subject: [erlang-questions] rebar eunit dies with {"Kernel pid terminated", error_logger, killed} In-Reply-To: References: Message-ID: On Wed, Oct 7, 2015 at 2:57 PM, Roger Lipscombe wrote: > I'm using rebar (rebar 2.6.1 R15B03 20150928_141254 git 2.6.1) to run > eunit, and it's failing on OS X with {"Kernel pid > terminated",error_logger,killed}. The same project run on Linux, with > the same version of rebar, runs successfully. > > How do I go about figuring out what's wrong with it? To rule it out, can you try and see if {eunit_opts, [{reset_after_eunit, false}]}. makes a difference? From co7eb@REDACTED Wed Oct 7 20:39:57 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 7 Oct 2015 14:39:57 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) Message-ID: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> I fellows, I was down for a while, connection payment problems, lol. So I am back, and I have a few ideas I would like to share. Also my cowboy extension framework is marching very nice, I had dedicate some time to it in this offline days. My thoughts, when I was doing some algorithms for the framework, I found some questions I would like to share because for example, when working with lists module, and doing some recursive functions, it is usually a problem that we need to do lists:reverse at the end of the algorism to get the data in the right order. 
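(For reference, the idiom in question is the usual accumulate-then-reverse pattern: prepend while recursing, which is cheap, then one lists:reverse/1 at the end to restore the order. The function name below is only for illustration.)

double_evens(List) ->
    double_evens(List, []).

double_evens([X | Rest], Acc) when X rem 2 =:= 0 ->
    double_evens(Rest, [X * 2 | Acc]);  %% prepend is O(1)
double_evens([_ | Rest], Acc) ->
    double_evens(Rest, Acc);
double_evens([], Acc) ->
    lists:reverse(Acc).                 %% one O(N) pass at the end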
So I can imagine Erlang implement lists using double linked lists for obvious purposes, I haven't see the source code so I am just guessing here, so for example when doing length you must spend one O(N) to transverse the list, one O(1) for each element to count them and one final O(1) to return the list. So when using lists:reverse/1 it takes four times what lengths does, so I can play guess, and just that, that you are using indeed a double linked list so you spend one O(n) to transverse the list, three O(1), which is an O(1) in total of course I am just counting the operations as well, to swap the preview and next pointers for each node so the list will be in the reverse order and other O(1) to return the new list which is a pointer of course to the old list just reversed. So guessing that it is what you do, how bad could be instead of doing that just make list have one bit more of size in memory by saying the order it have. So when you use lists:reverse/1 is just changing that bit to true and when iterating the list instead of using the next function use preview function and viseversa. The problem will be that you must spend one more O(1) for each element when iterating to choose what function is executed depending on the reverse bit. I just did a little simple erlang example to prove my point, of course it is just for demonstration purposes not for real implementation. -module(mylists). -compile(export_all). -type mylists_ref() :: erlang:ref(). -spec new(list()) -> mylists_ref(). new(List)-> Ref = erlang:make_ref(), erlang:put({Ref, first}, erlang:hd(List)), store_list(List, Ref), Ref. store_list([A], Ref) -> erlang:put({Ref, final}, A); store_list([A,B | _] = [_ | Rest], Ref) -> erlang:put({Ref, A, next}, B), erlang:put({Ref, B, preview}, A), store_list(Rest, Ref). -spec next(mylists_ref()) -> Element :: any(). next(MyListRef) -> case erlang:get({MyListRef, cursor}) of undefined -> Element = case erlang:get({MyListRef, reversed}) of true -> erlang:get({MyListRef, final}); undefined -> erlang:get({MyListRef, first}) end, erlang:put({MyListRef, cursor}, Element), Element; Element -> NewElement = case erlang:get({MyListRef, reversed}) of true -> erlang:get({MyListRef, Element, preview}); undefined -> erlang:get({MyListRef, Element, next}) end, case NewElement of undefined -> undefined; Next -> erlang:put({MyListRef, cursor}, Next), Next end end. reset(MyListRef) -> erlang:erase({MyListRef, cursor}). -spec reverse(mylists_ref()) -> MyReverseListRef :: mylists_ref(). reverse(MyListRef) -> reset(MyListRef), case erlang:get({MyListRef, reversed}) of undefined -> erlang:put({MyListRef, reversed}, true); true -> erlang:erase({MyListRef, reversed}) end, ok. test() -> MyListRef = mylists:new([1,2,3,4]), FirstElem = mylists:next(MyListRef), mylists:reverse(MyListRef), LastElem = mylists:next(MyListRef), {FirstElem, LastElem}. Cheers, Ivan (son of Gilberio). -------------- next part -------------- An HTML attachment was scrubbed... URL: From co7eb@REDACTED Wed Oct 7 20:54:55 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 7 Oct 2015 14:54:55 -0400 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> Message-ID: <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> Hi everyone, Dot notation with maps would be very nice. 
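(There is no dot syntax, but maps:get/2 composes for nested maps; get_in/2 below is a hypothetical helper, not a standard library function.)

get_in(Value, []) ->
    Value;
get_in(Map, [Key | Rest]) ->
    get_in(maps:get(Key, Map), Rest).

%% Usage:
%% User = #{id => 1, profile => #{name => <<"ivan">>}},
%% 1 = maps:get(id, User),
%% <<"ivan">> = get_in(User, [profile, name]).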
I am currently using maps in the database layer in my little framework and still I have to do not handy but very fast I must say, pattern machine operations to gets the desired values, instead I could use something like this Map.id, . would be nice!! instead of {id:= Id} = Map. Cheers, Ivan (son of Gilberio). From: erlang-questions-bounces@REDACTED [mailto:erlang-questions-bounces@REDACTED] On Behalf Of Milton Inostroza Aguilera Sent: Sunday, September 6, 2015 3:02 PM To: Theepan Cc: Erlang Questions Mailing List Subject: Re: [erlang-questions] Accessing a single value from MAPS Hi Theepan, On Sep 6, 2015, at 1:45 PM, Theepan > wrote: Team, What is the syntactic sugar to access single value from MAPS? I know there is this "get" method, but will be good to have something like DOT notation, like we do with records. Could not find it in the documents. A = #{my_key => 1}. #{my_key := Val} = A. Val. For information about maps you can read this [1] Hope this helps, [1] http://learnyousomeerlang.com/maps#about-this-chapter It will be handy to access nested MAPS. Thanks, Theepan _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From cian@REDACTED Wed Oct 7 21:47:57 2015 From: cian@REDACTED (Cian Synnott) Date: Wed, 7 Oct 2015 20:47:57 +0100 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> Message-ID: Hi Ivan, On Wed, Oct 7, 2015 at 7:39 PM, Ivan Carmenates Garcia wrote: > lists module, and doing some recursive functions, it is usually a problem > that we need to do lists:reverse at the end of the algorism to get the data > in the right order. > Are you sure that it's a problem, e.g. do you have very long lists, or measurements that demonstrate an issue? See http://emauton.org/2015/01/25/lists:reverse-1-performance-in-erlang/ for a brief note on why you probably don't need to worry about this. :o) > So I can imagine Erlang implement lists using double linked lists for > obvious purposes, > They're singly-linked - this is why we tend to build them in "reverse order" in the first place. Cian From co7eb@REDACTED Thu Oct 8 00:49:56 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 7 Oct 2015 18:49:56 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> Message-ID: <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> Nice. So in order of balance between memory and speed is the best ways as it is?. Okay, sounds good to me, also Joe was saying something that I like some kind about algorithms optimizations, because I am a little bit hard about it, but for example in this little framework I am doing for myself and the community if they like of course there are parts in which maybe I think some algorithms could be better but if it works and it does quite well, i.e. 
if in my computer with one millions of iterations in one process this is important because each user will have one process for its own, it take 21 seconds to perform and in a core i3 with 2 GB of single channel memory it take 9 seconds, then in a real server with 64 cores and super memory it will take none milliseconds to perform so, in the future machines will be better each time and maybe we don?t have to be so extreme about performance. That gives me a little more of hope lol. Best regards, Ivan (son of Gilberio) From: Erik S?e S?rensen [mailto:eriksoe@REDACTED] Sent: Wednesday, October 7, 2015 3:21 PM To: Ivan Carmenates Garcia Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) Hi Ivan - Erlang is using singly linked lists for obvious(?) reasons: If you have a list L, then you can both prepend an element X and prepend an element Y in constant time: [X|L], [Y|L] getting two new lists. This approach is compact, a standard approach for functional languages, and performs quite well. Let's do the math - the cost of building a list of length N using both approaches: - Using doubly linked lists: Allocate and populate 3*N memory words. The end result uses 3*N words. - Using singly linked lists: Allocate and populate 2*N memory words, then doing it again - a total cost of 4*N. The end result uses 2*N words. (I here assume that the list objects have no header; afaik this is true in the Erlang VM.) So using doubly linked lists - the semantic difference aside - costs only about 25% less, even using your approach, while using 50% more memory for the end result. And that 25% figure is the best case - what also needs to be taken into consideration is that performance would suffer in other places, in those operations where that extra bit of yours would have to be tested. Believe it or not, these simple cons lists are hard to beat; that's why they are still around after more than 50 years. :-) /Erik 2015-10-07 20:39 GMT+02:00 Ivan Carmenates Garcia >: I fellows, I was down for a while, connection payment problems, lol. So I am back, and I have a few ideas I would like to share. Also my cowboy extension framework is marching very nice, I had dedicate some time to it in this offline days. My thoughts, when I was doing some algorithms for the framework, I found some questions I would like to share because for example, when working with lists module, and doing some recursive functions, it is usually a problem that we need to do lists:reverse at the end of the algorism to get the data in the right order. So I can imagine Erlang implement lists using double linked lists for obvious purposes, I haven?t see the source code so I am just guessing here, so for example when doing length you must spend one O(N) to transverse the list, one O(1) for each element to count them and one final O(1) to return the list. So when using lists:reverse/1 it takes four times what lengths does, so I can play guess, and just that, that you are using indeed a double linked list so you spend one O(n) to transverse the list, three O(1), which is an O(1) in total of course I am just counting the operations as well, to swap the preview and next pointers for each node so the list will be in the reverse order and other O(1) to return the new list which is a pointer of course to the old list just reversed. So guessing that it is what you do, how bad could be instead of doing that just make list have one bit more of size in memory by saying the order it have. 
So when you use lists:reverse/1 is just changing that bit to true and when iterating the list instead of using the next function use preview function and viseversa. The problem will be that you must spend one more O(1) for each element when iterating to choose what function is executed depending on the reverse bit. I just did a little simple erlang example to prove my point, of course it is just for demonstration purposes not for real implementation. -module(mylists). -compile(export_all). -type mylists_ref() :: erlang:ref(). -spec new(list()) -> mylists_ref(). new(List)-> Ref = erlang:make_ref(), erlang:put({Ref, first}, erlang:hd(List)), store_list(List, Ref), Ref. store_list([A], Ref) -> erlang:put({Ref, final}, A); store_list([A,B | _] = [_ | Rest], Ref) -> erlang:put({Ref, A, next}, B), erlang:put({Ref, B, preview}, A), store_list(Rest, Ref). -spec next(mylists_ref()) -> Element :: any(). next(MyListRef) -> case erlang:get({MyListRef, cursor}) of undefined -> Element = case erlang:get({MyListRef, reversed}) of true -> erlang:get({MyListRef, final}); undefined -> erlang:get({MyListRef, first}) end, erlang:put({MyListRef, cursor}, Element), Element; Element -> NewElement = case erlang:get({MyListRef, reversed}) of true -> erlang:get({MyListRef, Element, preview}); undefined -> erlang:get({MyListRef, Element, next}) end, case NewElement of undefined -> undefined; Next -> erlang:put({MyListRef, cursor}, Next), Next end end. reset(MyListRef) -> erlang:erase({MyListRef, cursor}). -spec reverse(mylists_ref()) -> MyReverseListRef :: mylists_ref(). reverse(MyListRef) -> reset(MyListRef), case erlang:get({MyListRef, reversed}) of undefined -> erlang:put({MyListRef, reversed}, true); true -> erlang:erase({MyListRef, reversed}) end, ok. test() -> MyListRef = mylists:new([1,2,3,4]), FirstElem = mylists:next(MyListRef), mylists:reverse(MyListRef), LastElem = mylists:next(MyListRef), {FirstElem, LastElem}. Cheers, Ivan (son of Gilberio). _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Thu Oct 8 01:39:19 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 8 Oct 2015 12:39:19 +1300 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> Message-ID: <7C2E3E6B-6426-48D6-AE56-32933E57D5F1@cs.otago.ac.nz> On 8/10/2015, at 7:39 am, Ivan Carmenates Garcia wrote: > So I can imagine Erlang implement lists using double linked lists for obvious purposes, (a) Why would you imagine that? That's a horrible thing to imagine. (b) WHAT obvious purposes? (c) Doubly linked lists cannot be used the way Erlang uses lists without massive copying overheads. > I haven?t see the source code so I am just guessing here, so for example when doing length you must spend one O(N) to transverse the list, one O(1) for each element to count them and one final O(1) to return the list. Yes, it takes O(N) time to traverse a list. That is also true of doubly linked lists. > So when using lists:reverse/1 it takes four times what lengths does, Four times *what*? lists:reverse/1 may well be implemented in C by now, but in Erlang we can write length(L) -> length_plus(L, 0). length_plus([_|Xs], N) -> length_plus(Xs, 1+N); length_plus([], N) -> N. reverse(L) -> reverse_append(L, []). 
reverse_append([X|Xs], Ys) -> reverse_append(Xs, [X|Ys]); reverse_append([], Ys) -> Ys. This is a *single* traversal. The two algorithms are essentially the same. Such extra cost as there is in reverse/1 compared with length/1 has to do with allocating and initialising memory; using doubly linked lists would only increase that cost. > so I can play guess, and just that, that you are using indeed a double linked list so you spend one O(n) to transverse the list, three O(1), which is an O(1) in total of course I am just counting the operations as well, to swap the preview and next pointers for each node so the list will be in the reverse order and other O(1) to return the new list which is a pointer of course to the old list just reversed. There is no changing of pointers in Erlang lists. There is no possibility of in-place updates of lists. If I do L = [1,2,3], R = lists:reverse(L) then I expect L to be completely unaltered. No Erlang operation is permitted to (visibly) alter any reachable value in any (detectable) way. From ok@REDACTED Thu Oct 8 01:51:07 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Thu, 8 Oct 2015 12:51:07 +1300 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> Message-ID: <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> On 8/10/2015, at 11:49 am, Ivan Carmenates Garcia wrote: > Nice. > > So in order of balance between memory and speed is the best ways as it is?. Ill-posed qwuestion. No answer is possible until you specify best in *what respect* for *what purpose*. Doubly linked lists are said to have their uses, but any time I've been tempted to use them, some other data structure has always turned out to be much better. There are actually several different ways to implement singly linked lists (anyone else out there remember CDR-coding?) more than one of which could be used for Erlang, so it's even harder to assign any meaning to the question than you might think. For example, I have a library that I've ported to several functional programming languages providing unrolled lists. For example, data URSL t = Cons4 t t t t (URSL t) | End3 t t t | End2 t t | End1 t | End0 This takes 6 words for every 4 elements, an average of 1.5 words per element. (A doubly linked list would require at least twice as much.) On the other hand, it's very bad at prepending a single element. So unroll from the _other_ end; that way you need one End but more than one Cons. > Okay, sounds good to me, also Joe was saying something that I like some kind about algorithms optimizations, because I am a little bit hard about it, but for example in this little framework I am doing for myself and the community if they like of course there are parts in which maybe I think some algorithms could be better but if it works and it does quite well, i.e. if in my computer with one millions of iterations in one process this is important because each user will have one process for its own, it take 21 seconds to perform and in a core i3 with 2 GB of single channel memory it take 9 seconds, then in a real server with 64 cores and super memory it will take none milliseconds to perform so, in the future machines will be better each time and maybe we don?t have to be so extreme about performance. This is extremely vague. All I can say is Remember that the BIGGEST performance gains come from optimising at the HIGHEST level. 
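(For instance, in the shell:)

1> L = [1,2,3].
[1,2,3]
2> R = lists:reverse(L).
[3,2,1]
3> L.
[1,2,3]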
From co7eb@REDACTED Thu Oct 8 01:43:09 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 7 Oct 2015 19:43:09 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> Message-ID: <001501d10159$edb4c7e0$c91e57a0$@frcuba.co.cu> Hi Cian, well I have no long list, short list actually, but what I want is millions of repentances of the same algorithm. i.e. for concurrency of millions of users. Lol well maybe I was a little bit extreme here. I treasure Erlang as the best language ever and the future language, multicore, concurrency, etc. That's why maybe sometime I could get so extreme!. For example I wrote this about Erlang a few days ago and sent it to a person to defend it against the world and get a job, please correct me if I am wrong in some aspects. The future is multicore, and each day more computers with more amount of cpu cores and the world is more concurrent. We choose Erlang because it is the language of the future so it is already ready for multicore programming and there is no other language like it in this aspect. The majority of programmers rather easy and comfortable languages like Php or Pyhton, because of its beautiful syntax, but this languages are not ready for the future, they are sequentially and has no clue about what multicore programming is or it is very hard to implement. Python and Php for example are interpreted languages and because of it they are very slow so we have to spent more resources in servers and hardware to get more reach. That is why facebook had to made its own version of php compiling it to c++ looking for performance, they had to do it because a huge of amount of programmers that php has because the necessity of gain money quickly. However facebook choose to implement its chat system using Erlang, they spend two years of research to converge in that choice. Python for example is a better language than Php because it is well structured and more powerful yet is a script language similar to JavaScript and its better end is for make plugins and utilities libraries. Erlang is growing up because of its power to handle millions of concurrent process without any difficulty and very easy to implement and that so to build entire distributed systems with huge reliance. It is not only the way of programming easy, it is to do it so a little more complex however not with more effort and adopting the new tendency that is distributed functional, concurrency and fail tolerant oriented programming. A simple example is: The Oxford University of England does not had any postgraduate courses last year in Informatics Sciences and they open one this year and guess what the topic of the course is: "Modern programming languages, including declarative, logical and parallel paradigms, functional programming, concurrency, distributed systems.". ... Well sorry about the speech lol and the translation I just made, I wrote it originally in Spanish. Best regards, Ivan (son of Gilberio). -----Original Message----- From: Cian Synnott [mailto:cian@REDACTED] Sent: Wednesday, October 7, 2015 3:48 PM To: Ivan Carmenates Garcia Cc: erlang-questions@REDACTED Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) Hi Ivan, On Wed, Oct 7, 2015 at 7:39 PM, Ivan Carmenates Garcia wrote: > lists module, and doing some recursive functions, it is usually a > problem that we need to do lists:reverse at the end of the algorism to > get the data in the right order. 
> Are you sure that it's a problem, e.g. do you have very long lists, or measurements that demonstrate an issue? See http://emauton.org/2015/01/25/lists:reverse-1-performance-in-erlang/ for a brief note on why you probably don't need to worry about this. :o) > So I can imagine Erlang implement lists using double linked lists for > obvious purposes, > They're singly-linked - this is why we tend to build them in "reverse order" in the first place. Cian From co7eb@REDACTED Thu Oct 8 02:10:33 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 7 Oct 2015 20:10:33 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <7C2E3E6B-6426-48D6-AE56-32933E57D5F1@cs.otago.ac.nz> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <7C2E3E6B-6426-48D6-AE56-32933E57D5F1@cs.otago.ac.nz> Message-ID: <001b01d1015d$c1910cb0$44b32610$@frcuba.co.cu> Yes well, nothing to say about that. I was in a high moment of "foundment". If that word even exists lol. -----Original Message----- From: Richard A. O'Keefe [mailto:ok@REDACTED] Sent: Wednesday, October 7, 2015 7:39 PM To: Ivan Carmenates Garcia Cc: erlang-questions@REDACTED Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) On 8/10/2015, at 7:39 am, Ivan Carmenates Garcia wrote: > So I can imagine Erlang implement lists using double linked lists for > obvious purposes, (a) Why would you imagine that? That's a horrible thing to imagine. (b) WHAT obvious purposes? (c) Doubly linked lists cannot be used the way Erlang uses lists without massive copying overheads. > I haven't see the source code so I am just guessing here, so for example when doing length you must spend one O(N) to transverse the list, one O(1) for each element to count them and one final O(1) to return the list. Yes, it takes O(N) time to traverse a list. That is also true of doubly linked lists. > So when using lists:reverse/1 it takes four times what lengths does, Four times *what*? lists:reverse/1 may well be implemented in C by now, but in Erlang we can write length(L) -> length_plus(L, 0). length_plus([_|Xs], N) -> length_plus(Xs, 1+N); length_plus([], N) -> N. reverse(L) -> reverse_append(L, []). reverse_append([X|Xs], Ys) -> reverse_append(Xs, [X|Ys]); reverse_append([], Ys) -> Ys. This is a *single* traversal. The two algorithms are essentially the same. Such extra cost as there is in reverse/1 compared with length/1 has to do with allocating and initialising memory; using doubly linked lists would only increase that cost. > so I can play guess, and just that, that you are using indeed a double linked list so you spend one O(n) to transverse the list, three O(1), which is an O(1) in total of course I am just counting the operations as well, to swap the preview and next pointers for each node so the list will be in the reverse order and other O(1) to return the new list which is a pointer of course to the old list just reversed. There is no changing of pointers in Erlang lists. There is no possibility of in-place updates of lists. If I do L = [1,2,3], R = lists:reverse(L) then I expect L to be completely unaltered. 
No Erlang operation is permitted to (visibly) alter any reachable value in any (detectable) way.= From co7eb@REDACTED Thu Oct 8 02:10:33 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 7 Oct 2015 20:10:33 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> Message-ID: <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> Yes that is so, but in my case, I cannot find a better way to improve it, I did it as better could so, that's when Joes words give me pleasure. ;) Regards, Ivan (son of Gilberio). -----Original Message----- From: Richard A. O'Keefe [mailto:ok@REDACTED] Sent: Wednesday, October 7, 2015 7:51 PM To: Ivan Carmenates Garcia Cc: Erik S?e S?rensen; Erlang Questions Mailing List Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) On 8/10/2015, at 11:49 am, Ivan Carmenates Garcia wrote: > Nice. > > So in order of balance between memory and speed is the best ways as it is?. Ill-posed qwuestion. No answer is possible until you specify best in *what respect* for *what purpose*. Doubly linked lists are said to have their uses, but any time I've been tempted to use them, some other data structure has always turned out to be much better. There are actually several different ways to implement singly linked lists (anyone else out there remember CDR-coding?) more than one of which could be used for Erlang, so it's even harder to assign any meaning to the question than you might think. For example, I have a library that I've ported to several functional programming languages providing unrolled lists. For example, data URSL t = Cons4 t t t t (URSL t) | End3 t t t | End2 t t | End1 t | End0 This takes 6 words for every 4 elements, an average of 1.5 words per element. (A doubly linked list would require at least twice as much.) On the other hand, it's very bad at prepending a single element. So unroll from the _other_ end; that way you need one End but more than one Cons. > Okay, sounds good to me, also Joe was saying something that I like some kind about algorithms optimizations, because I am a little bit hard about it, but for example in this little framework I am doing for myself and the community if they like of course there are parts in which maybe I think some algorithms could be better but if it works and it does quite well, i.e. if in my computer with one millions of iterations in one process this is important because each user will have one process for its own, it take 21 seconds to perform and in a core i3 with 2 GB of single channel memory it take 9 seconds, then in a real server with 64 cores and super memory it will take none milliseconds to perform so, in the future machines will be better each time and maybe we don?t have to be so extreme about performance. This is extremely vague. All I can say is Remember that the BIGGEST performance gains come from optimising at the HIGHEST level. 
From co7eb@REDACTED Thu Oct 8 02:20:29 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 7 Oct 2015 20:20:29 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> Message-ID: <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> For example this is one of the algorithms, I optimize it as well as I could: Considering here that the order of the fields is very important!. %% ------------------------------------------------------------------- %% @private %% @doc %% Parses the specified list of full fields into a string containing %% all full fields separated my comma. %% example: %%

%%   parse_full_fields([{users, '*'}, name, {roles, [id, level]},
%%       {users, name, alias}], fun get_postgres_operator/2) ->
%%       {"users.'*',name,roles.id,roles.level,users.name AS alias",
%%           [users, roles]}.
%%
%% Returns `{[], []}' if called with `[]'. %% @throws {error, invalid_return_fields_spec, InvalidForm :: any()} %% @end %% ------------------------------------------------------------------- -spec parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> Str when FullFieldsSpecs :: proplists:proplist(), Separator :: string(), OperatorFinder :: fun(), Str :: string(). parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> {ParsedFields, TableNames} = parse_full_fields2(FullFieldsSpecs, [], [], OperatorFinder), {string:join(lists:reverse(ParsedFields), Separator), TableNames}. %% @private %% base case. parse_full_fields2([], ParsedKeys, TableNames, _) -> {ParsedKeys, TableNames}; %% {table_name(), [field_name(), ...]}. parse_full_fields2([{TableName, [_FieldName | _] = FieldNames} | Rest], ParsedKeys, TableNames, OperatorFinder) -> %% --> NewParsedKeys = parse_field_names(FieldNames, [], atom_to_list(TableName), OperatorFinder), %% lists:map(fun(FieldName) -> %% atom_to_list(TableName) ++ OperatorFinder('.', special_op) %% ++ parse_field_name(FieldName, OperatorFinder) %% end, FieldNames), parse_full_fields2(Rest, NewParsedKeys ++ ParsedKeys, [TableName | TableNames], OperatorFinder); %% {table_name(), field_name(), field_name_alias()}. parse_full_fields2([{TableName, FieldName, Alias} | Rest], ParsedKeys, TableNames, OperatorFinder) when FieldName =/= '*' -> %% --> parse_full_fields2(Rest, [ atom_to_list(TableName) ++ OperatorFinder('.', special_op) ++ parse_field_name(FieldName, OperatorFinder) ++ " " ++ OperatorFinder('ALIAS', special_op) ++ " " ++ atom_to_list(Alias) | ParsedKeys], [TableName | TableNames], OperatorFinder); %% {table_name(), '*', field_name_alias()}. Throws an ERROR!. parse_full_fields2([{_TableName, _FieldName, _Alias} = InvalidForm | _], _, _, _) -> erlang:throw({error, {invalid_return_fields_spec, InvalidForm}}); %% {table_name(), field_name()}. parse_full_fields2([{TableName, FieldName} | Rest], ParsedKeys, TableNames, OperatorFinder) -> %% --> parse_full_fields2(Rest, [atom_to_list(TableName) ++ OperatorFinder('.', special_op) ++ parse_field_name(FieldName, OperatorFinder) | ParsedKeys], [TableName | TableNames], OperatorFinder); %% [field_name(), ...]. parse_full_fields2([FieldName | Rest], ParsedKeys, TableNames, OperatorFinder) when is_atom(FieldName) -> %% --> parse_full_fields2(Rest, [parse_field_name(FieldName, OperatorFinder) | ParsedKeys], TableNames, OperatorFinder); %% Throws an ERROR! if no valid form is provided. parse_full_fields2(InvalidForm, _, _, _) -> erlang:throw({error, {invalid_return_fields_spec, InvalidForm}}). %% @private parse_field_name('*', OperatorFinder) -> OperatorFinder('*', special_op); parse_field_name(FieldName, _OperatorFinder) -> atom_to_list(FieldName). %% @private parse_field_names([], ParsedFieldNames, _, _) -> ParsedFieldNames; parse_field_names([FieldName | Rest], ParsedFieldNames, BaseTableName, OperatorFinder) when is_atom(FieldName) -> %% -- > parse_field_names(Rest, [ BaseTableName ++ OperatorFinder('.', special_op) ++ parse_field_name(FieldName, OperatorFinder) | ParsedFieldNames], BaseTableName, OperatorFinder); %% throws an ERROR! if no valid form is provided. parse_field_names(InvalidForm, _, _, _) -> erlang:throw({error, {invalid_return_fields_spec, InvalidForm}}). Cheers, -----Original Message----- From: erlang-questions-bounces@REDACTED [mailto:erlang-questions-bounces@REDACTED] On Behalf Of Ivan Carmenates Garcia Sent: Wednesday, October 7, 2015 8:11 PM To: 'Richard A. 
O'Keefe' Cc: 'Erlang Questions Mailing List' Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) Yes that is so, but in my case, I cannot find a better way to improve it, I did it as better could so, that's when Joes words give me pleasure. ;) Regards, Ivan (son of Gilberio). -----Original Message----- From: Richard A. O'Keefe [ mailto:ok@REDACTED] Sent: Wednesday, October 7, 2015 7:51 PM To: Ivan Carmenates Garcia Cc: Erik S?e S?rensen; Erlang Questions Mailing List Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) On 8/10/2015, at 11:49 am, Ivan Carmenates Garcia < co7eb@REDACTED> wrote: > Nice. > > So in order of balance between memory and speed is the best ways as it is?. Ill-posed qwuestion. No answer is possible until you specify best in *what respect* for *what purpose*. Doubly linked lists are said to have their uses, but any time I've been tempted to use them, some other data structure has always turned out to be much better. There are actually several different ways to implement singly linked lists (anyone else out there remember CDR-coding?) more than one of which could be used for Erlang, so it's even harder to assign any meaning to the question than you might think. For example, I have a library that I've ported to several functional programming languages providing unrolled lists. For example, data URSL t = Cons4 t t t t (URSL t) | End3 t t t | End2 t t | End1 t | End0 This takes 6 words for every 4 elements, an average of 1.5 words per element. (A doubly linked list would require at least twice as much.) On the other hand, it's very bad at prepending a single element. So unroll from the _other_ end; that way you need one End but more than one Cons. > Okay, sounds good to me, also Joe was saying something that I like some kind about algorithms optimizations, because I am a little bit hard about it, but for example in this little framework I am doing for myself and the community if they like of course there are parts in which maybe I think some algorithms could be better but if it works and it does quite well, i.e. if in my computer with one millions of iterations in one process this is important because each user will have one process for its own, it take 21 seconds to perform and in a core i3 with 2 GB of single channel memory it take 9 seconds, then in a real server with 64 cores and super memory it will take none milliseconds to perform so, in the future machines will be better each time and maybe we don?t have to be so extreme about performance. This is extremely vague. All I can say is Remember that the BIGGEST performance gains come from optimising at the HIGHEST level. _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin@REDACTED Thu Oct 8 03:53:03 2015 From: martin@REDACTED (Martin Karlsson) Date: Thu, 8 Oct 2015 14:53:03 +1300 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems Message-ID: I struggle a lot with how to design erlang systems. Everything is easy and very powerful as long as you stay on one node. Supervision tree, and processes and all that. However, to be fault tolerant you need at least three servers and here is where my problem comes in. All of a sudden the nice design is not so nice any longer. gen_server is all about state. 
And if you want to be fault-tolerant this state must somehow be shared, or at least it is my assumption that it has to be shared. If not I'd be happy to hear about alternative approaches. If state needs to be shared I only see two alternatives: 1) Push state to the database. To me this is an anti-pattern. All of a sudden I don't need gen_server's or supervisor's or anything because the state is fetched from the database anyway. So basically by pushing everything to the database I don't need erlang either. Pushing to the database can therefore not be the solution. 2) Implement some distributed protocols to solve these problems. Distribution however is not trivial and something you want to rely on robust libraries to do. As we know Erlang/OTP doesn't provide any except application failover which people seem not to recommend. I've found * riak_core, which I find a little bit to coupled with riak to be optimal. I've played around with it and it sort of makes your entire system focus around riak_core. * gen_leader which people in general seem very suspicious of * locks_leader which may be an improvement on gen_leader but don't know how production ready it is * a couple of raft implementations. Very new and haven't tried them out and I guess one of the above libraries can be used to distribute state of every gen_server that needs it. I have a feeling I am sort of blinded by traditional design and can't best see how to leverage Erlang and OTP. Perhaps I am being to strict in my requirements and that the system doesn't actually have to be always consistent and always running etc and I've had a few ideas on to to implement an ad-hoc, bug-ridden version of distribution that may solve my problems but it doesn't feel right. Any insight (reading material, open source software) into how to design distributed, fault-tolerant systems with Erlang/OTP is welcome. Cheers, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From alovedalongthe@REDACTED Thu Oct 8 04:42:25 2015 From: alovedalongthe@REDACTED (David Whitlock) Date: Thu, 8 Oct 2015 09:42:25 +0700 Subject: [erlang-questions] crypto:rand_bytes using deprecated function Message-ID: Hi, The rand_bytes function in the crypto module is using the openssl RAND_pseudo_bytes function, which is deprecated. This raises three issues / questions: 1. Should he function rand_bytes be deprecated? 2. Should the documentation state that it should not be used for cryptographic purposes (this is the openssl recommendation)? 3. In otp/lib/ssl/src/ssl.erl (starting line 595) and in otp/lib/crypto/src/crypto.erl (starting line 643) there are functions which fall back to rand_bytes if strong_rand_bytes cannot be used. It is therefore possible that rand_bytes might be used to generate keys. Should these functions return an error instead? If you need any more info, please let me know, David Whitlock -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vances@REDACTED Thu Oct 8 06:18:27 2015 From: vances@REDACTED (Vance Shipley) Date: Thu, 8 Oct 2015 09:48:27 +0530 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems In-Reply-To: References: Message-ID: On Thu, Oct 8, 2015 at 7:23 AM, Martin Karlsson wrote: > Perhaps I am being to strict in my requirements and that the system doesn't actually have to be always consistent and always running etc One thing I see too often in my industry is great lengths being taken to make a single service interface instance highly available when the clients are perfectly prepared to handle failures with retries and failover to other service instances. A client which is multi-homed to two or more servers may fail over to another server if informed that a problem happened at it's first choice server (e.g. resource unavailable or process crash). That ends up being a more robust and cheaper end-to-end solution that having a single IP address for the service and moving it between active and standby servers while sharing all state. I see solutions built using load balancers for services using SCTP and have to ask the question, did anyone actually analyze the requirements? For example if your gen_fsm handles connection state for long lived sessions do you need to share all the states involved in setup and teardown or just the connected state? The latter is cheaper and easier and the client may well be robust enough so that everyone remains happy. -- -Vance From desired.mta@REDACTED Thu Oct 8 09:23:52 2015 From: desired.mta@REDACTED (=?UTF-8?Q?Motiejus_Jak=C5=A1tys?=) Date: Thu, 8 Oct 2015 09:23:52 +0200 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems In-Reply-To: References: Message-ID: On Thu, Oct 8, 2015 at 3:53 AM, Martin Karlsson wrote: > I struggle a lot with how to design erlang systems. Everything is easy and > very powerful as long as you stay on one node. Supervision tree, and > processes and all that. > > However, to be fault tolerant you need at least three servers and here is > where my problem comes in. All of a sudden the nice design is not so nice > any longer. > > gen_server is all about state. And if you want to be fault-tolerant this > state must somehow be shared, or at least it is my assumption that it has to > be shared. If not I'd be happy to hear about alternative approaches. A very significant number of reliable* distributed applications do not need to consistently share state. Only that 1%** does, and that's difficult, but usually an application is in the 99%-pool. Maybe your application is there too? Think about: * What is your state? Do you really/why do you need it always available/consistent? * How do you handle updates to the state? Often it's possible to push the state consistency problem away from your service -- e.g. the client (multi-homing or sending full batches) or somewhere downstream. If you told us a bit more about the application you're building, you would very likely receive more to-the-point and helpful responses. :-) Also, a book with The Right Questions (not necessarily for Erlang) would be interesting. Often it's about making small compromises in the system you're building (which, turns out, don't matter for the users) for simplicity of the design and implementation (e.g. making it non-shared-state). [*]: that can handle failure of any single server. [**]: number made up of course, but my feeling is that it's really short. 
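As a concrete illustration of the multi-homed client approach suggested above, here is a minimal sketch. The module name, registered service name, node names and timeout are all illustrative assumptions rather than anything taken from the thread: the point is only that the client walks a list of candidate servers and falls back to the next one when a call fails.

%% Try each server in turn; fall back to the next one when the call
%% fails (timeout, no such registered process, node down).
-module(client_failover).
-export([request/2]).

request(_Req, []) ->
    {error, no_servers_available};
request(Req, [Server | Rest]) ->
    try gen_server:call({my_service, Server}, Req, 5000) of
        Reply -> {ok, Reply}
    catch
        exit:_ -> request(Req, Rest)
    end.

%% Example use (hypothetical node names):
%%   client_failover:request(ping, ['svc@host1', 'svc@host2']).

With retries pushed into the client like this, the servers themselves do not need to share session state for the service to keep working when one of them goes away.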
From jesper.louis.andersen@REDACTED Thu Oct 8 13:26:11 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Thu, 8 Oct 2015 13:26:11 +0200 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> Message-ID: On Wed, Oct 7, 2015 at 8:54 PM, Ivan Carmenates Garcia wrote: > Map.id, ? would be nice!! instead of {id:= Id} = Map. The problem with using '.' here is that it is already taken for the 'end-of-expression' token. So the grammar becomes inconsistent if you use it for this purpose. You would have to pick some other signifier. You would have, e.g., Map#{id} be valid syntax meaning the value that is under 'id' (I think, the grammar might have trouble here as well). -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin@REDACTED Thu Oct 8 09:21:00 2015 From: martin@REDACTED (Martin Karlsson) Date: Thu, 8 Oct 2015 20:21:00 +1300 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems In-Reply-To: References: Message-ID: Hi Vance, Thanks for your reply. And I agree that dealing with these things in the client is more flexible. > ...the clients are perfectly prepared to handle failures with retries and > failover to other service instances Unfortunately the client software is already out there so we are also tied with a VirtualIP through Load-Balancer architecture. High Availability from client requests are dealt with OK though as client will retry on failure (albeit to the same VIP) and is routed to a server which can handle it. The problem is more internal gen_servers (lookup tables, global task servers, mini message queues). Small state which is periodically updated and needed to serve clients request (where data is largely same for every client). Some Examples: A background task which periodically fetches some information from an external system and keeps the data in its state, there can only be one task running globally (leader election?, failover?). A gen_server router where various data sources can add themselves so that requests get routed to the correct db-source. Either all nodes most connect to a global gen_server which holds the info or ideally the info is shared among multiple nodes. Again should I push state to database (then why use a gen_server at all?) or what sort of architecture is needed to cater for this? Thanks again, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin@REDACTED Thu Oct 8 12:46:23 2015 From: martin@REDACTED (Martin Karlsson) Date: Thu, 8 Oct 2015 23:46:23 +1300 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems In-Reply-To: References: Message-ID: Hi, Thanks for your email. Very helpful and I'm going to give you a lengthy reply in return. > * What is your state? Do you really/why do you need it always available/consistent? This is a good question and got me thinking on how best to reply. While thinking I looked at the problem in a different way. Bascially we have the following data: * Generated data and customer data.Stored in key-value database (mnesia or riak). Should ideally be consistent but we are fine with riak's eventual consistency here. If we have acknowledged that we have received data to a particular device it must be there when the device requests it (which usually happens at most seconds after). 
Some customer data can be lost without implications whereas loss of other data would lead to loss of service to a particular device. * Transaction Logs. Changes to data-base data is stored in transaction logs and replicated to external systems. We must capture all data but it doesn't have to be always available as long as no data is lost. This is not a real-time sync and larger delays are tolerated depending on use case. * Configuration data. A current configuration which applies at a specific time. This data changes periodically (by timed tasks) and by operational input. Changes perhaps 10 times daily. Must be available, can handle short inconsistencies (i.e a node uses a few minutes stale data is OK). > A very significant number of reliable* distributed applications do not > need to consistently share state. Only that 1%** does, and that's > difficult, but usually an application is in the 99%-pool. Maybe your > application is there too? Yes, for a number of our sub-component (perhaps all) I think this would be the case. > If you told us a bit more about the application you're building, you > would very likely receive more to-the-point and helpful responses. :-) Well, you asked for it ;) The system generates and imports data from various sources which it serves to millions of devices. The data is requested by the clients and served through HTTP. It has to be highly available with low latency and handle fair bit of concurrency (hence erlang). The generated data is also replicated to other systems (both for multi-data center replication) and as export to independent systems. It is all hosted and operated by our customers. I think this is important to mention because from an operations point of view the system needs to be very simple and all fault-tolerance should ideally be automatic, which also means it is hard to use separate products for each and every need. For example we cannot afford a tech-stack like: Postgres + Riak + RabbitMQ + Redis + HAProxy or similar as the operational overhead is too much. We do support either riak or mnesia as a database and that is about it. Then our product needs to take care of the rest. The system contains a number of sub-systems, and I guess I've struggled to some degree with multi-node on each and everyone of them. It is so easy to start with a gen_server for prototyping but moving to multi-node from there has been hard. *) Process register. Each client gets its own process which is held in a process register. Started as a gen_server, then ets for better performance and then onto mnesia to get distribution. I'd really like to implement this with consistent hashing and keeping the primary and secondary node list in some shared state. Sounds a lot like riak_core to me. It is fine for a process to crash and die but we need to avoid data inconsistencies that could happen if two processes were started on different nodes (during a net-split say) and then one is persisted meaning all the other data is lost. In this case it is better to lose both. *) Periodic tasks. We have a fair bit of background tasks that need to be run. Half of these need to be global. I.e they should run on an interval but ideally only one node. Currently each task is completely independent and runs in its own process. For global tasks we simply run them on one node and disable them on the others. Not ideal from a fault-tolerant point of view. If we would've hosted it our-selves I could be fine with such a solution but we need automated failover. 
I've played around with gen_leader and locks_leader to have a distributed task list but don't know if this is the right approach. *) Routing. I have a gen_server router which re-directs request to the specific data source. The gen_server process should run on every node and must share state. The state doesn't change very often though but is dynamic and is dictated by external system (which tells us to change this once in a while). *) Caching. Costly computations are cached in a process per "Id". It is cached on the node where it is requested, however it can be requested on multiple node and when a state change is triggered (which can happen on any node) all nodes must be notified. Currently we use pg2 process groups and send the new state to each process. Of course a bit ad-hoc and if a message doesn't get there they'll be out of sync. *) Message Queues or transaction logs. Data must be replicated to distributed (not via distributed erlang though) systems, which means we need to store some amount of transactions. Data is immutable and is always only appended to. Transactions can happen on any node. Also started as a gen_server but moved into mnesia for distribution. I've been looking for OTP application's which already are a persistent distributed message queue but haven't found any (and as mentioned an external MQ is probably not doable). Here I thought I needed strict ordering but I don't really as long as I can guarantee that all the data eventually arrives at its destination. Again, we do have sort of working solutions for the above but I feel they are inconsistent and not robust enough. It is hard to fit all pieces above into a coherent system. >Often it's about making small compromises in the >system you're building (which, turns out, don't matter for the users) Fully agree. Hopefully this gives you an idea of what the system need to do. Cheers, Martin On Thu, Oct 8, 2015 at 8:23 PM, Motiejus Jak?tys wrote: > On Thu, Oct 8, 2015 at 3:53 AM, Martin Karlsson > wrote: > > I struggle a lot with how to design erlang systems. Everything is easy > and > > very powerful as long as you stay on one node. Supervision tree, and > > processes and all that. > > > > However, to be fault tolerant you need at least three servers and here is > > where my problem comes in. All of a sudden the nice design is not so nice > > any longer. > > > > gen_server is all about state. And if you want to be fault-tolerant this > > state must somehow be shared, or at least it is my assumption that it > has to > > be shared. If not I'd be happy to hear about alternative approaches. > > A very significant number of reliable* distributed applications do not > need to consistently share state. Only that 1%** does, and that's > difficult, but usually an application is in the 99%-pool. Maybe your > application is there too? > > Think about: > * What is your state? Do you really/why do you need it always > available/consistent? > * How do you handle updates to the state? Often it's possible to push > the state consistency problem away from your service -- e.g. the > client (multi-homing or sending full batches) or somewhere downstream. > > If you told us a bit more about the application you're building, you > would very likely receive more to-the-point and helpful responses. :-) > > Also, a book with The Right Questions (not necessarily for Erlang) > would be interesting. 
Often it's about making small compromises in the > system you're building (which, turns out, don't matter for the users) > for simplicity of the design and implementation (e.g. making it > non-shared-state). > > [*]: that can handle failure of any single server. > [**]: number made up of course, but my feeling is that it's really short. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vances@REDACTED Thu Oct 8 14:28:53 2015 From: vances@REDACTED (Vance Shipley) Date: Thu, 8 Oct 2015 17:58:53 +0530 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems In-Reply-To: References: Message-ID: On Thu, Oct 8, 2015 at 7:23 AM, Martin Karlsson wrote: > Any insight (reading material, open source software) into how to design > distributed, fault-tolerant systems with Erlang/OTP is welcome. It should be pointed out, for future Googlers, that a simple archetype available in OTP for active/standby (or 1+N) is: - distributed application (http://www.erlang.org/doc/design_principles/distributed_applications.html) - distributed database (http://www.erlang.org/doc/apps/mnesia/Mnesia_chap5.html#id79121) 1) Set the application variable 'distributed' in the kernel application to indicate which nodes the application may run. (http://www.erlang.org/doc/man/kernel_app.html) 2) Create mnesia table(s) with copies at all nodes. (http://www.erlang.org/doc/man/mnesia.html#create_schema-1) 3) On application start take over the IP address (ifconfig, arping) Note that there is really no new code to write to make you application distributed! -- -Vance From carlsson.richard@REDACTED Thu Oct 8 14:37:04 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Thu, 8 Oct 2015 14:37:04 +0200 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> Message-ID: Not quite. The end-of-form (or 'dot') token requires that the period character is followed by whitespace, a comment, or end-of-file. Otherwise it's a '.' token, as is already in use in expressions like Record#recordname.fieldname. Once upon a time there was a special syntax called Mnemosyne for writing Mnesia queries. To support this, the general form Expr.field was included in the grammar (but only had a meaning within a Mnemosyne query). The "packages" dotted namespaces piggy-backed on this grammar rule, and when packages were removed, this rule was also removed (since Mnemosyne was no longer supported either). But it could be put back and reused for maps. In fact, I see that it temporarily _was_ used: see f00675d and 703a9aa. /Richard On Thu, Oct 8, 2015 at 1:26 PM, Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > > On Wed, Oct 7, 2015 at 8:54 PM, Ivan Carmenates Garcia > wrote: > >> Map.id, ? would be nice!! instead of {id:= Id} = Map. > > > The problem with using '.' here is that it is already taken for the > 'end-of-expression' token. So the grammar becomes inconsistent if you use > it for this purpose. You would have to pick some other signifier. You would > have, e.g., Map#{id} be valid syntax meaning the value that is under 'id' > (I think, the grammar might have trouble here as well). > > > -- > J. > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From essen@REDACTED Thu Oct 8 14:37:19 2015 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Thu, 8 Oct 2015 14:37:19 +0200 Subject: [erlang-questions] [erlang-bugs] Strange behaviour of exit(kill) In-Reply-To: <20151007122725.GB3459@wagner.intra.a-tono.com> References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> <20151007122725.GB3459@wagner.intra.a-tono.com> Message-ID: <5616637F.4010102@ninenines.eu> On 10/07/2015 02:27 PM, Francesco Lattanzio wrote: > I always thought that when a process dies because it was sent a 'kill' > message it would broadcast to the linked processes a 'killed' EXIT > message (see Concurrent Programming in Erlang - Part I, D.3 Exit > signals, p. 193 ). > However for some reason recent implementations of the VM broadcasts a > 'kill' EXIT message (I could only test it on Erlang VMs as old as > R13B04). 1> Pid = spawn_link(fun() -> receive after infinity -> ok end end). <0.36.0> 2> exit(Pid, kill). ** exception exit: killed 3> f(). ok 4> Pid = spawn_link(fun() -> receive after infinity -> ok end end). <0.41.0> 5> process_flag(trap_exit, true). false 6> exit(Pid, kill). true 7> flush(). Shell got {'EXIT',<0.41.0>,killed} ok Cheers, > I'm not asking to revert this behaviour (I bet such a change would > impact a lot of code), however it would be nice to know why it was > chosen a two-semantics kill message instead of more obvious two > one-semantic kill and killed message (if someone knows). > > On Wed, Oct 07, 2015 at 03:27:24AM -0700, Robert Virding wrote: >> I still find that extremely inconsistent, there are actually 2 'kill' signals: one that is sent with exit(Pid, kill) and the other >> which sent when you do exit(kill). So I can trap 'kill' and I can't trap 'kill', great. >> >> I would personally go the other way and say that kill is kill however it is sent. But I agree with you, I'm not holding my breath >> waiting for it to be fixed. >> >> Robert >> >> P.S. I am not even going to mention the terribly inconsistent handling of errors in link/1. >> >> >> On 7 October 2015 at 00:51, H?kan Huss wrote: >> >> 2015-10-07 3:46 GMT+02:00 Robert Virding : >> >> It's all about signals and not messages. Sending a message to a process should *NEVER* by default kill it even if it has >> the same format as an 'EXIT' message. NEVER!. A signal is converted to a message when it arrives at a process which is >> trapping exits unless it is the 'kill' which is untrappable and the process always dies. >> >> >> Yes, but the 'kill' signal is not an exit signal with reason kill. The 'kill' signal can only be sent by calling exit/2 with >> Reason = kill, which is documented to have the effect that "an untrappable exit signal is sent to Pid which will >> unconditionally exit with exit reason killed." There is no mention of how the exit reason in that exit signal, and since it is >> not trappable there is no way to observe it. >> >> >> Explicitly sending the SIGNAL with exit(Pid, kill) should unconditionally kill the process >> >> Yes. >> >> >> as should dying with the reason 'kill' in exit(kill) which also sends the SIGNAL 'kill'. >> >> No, this sends an exit signal with reason kill, but that is not the same ass the signal sent using exit(Pid, kill). >> >> >> In both cases the process receives the SIGNAL 'kill', as shown in my example, but in one case it is trappable and in the >> other it is untrappable. 
>> >> No, in one case it receives an exit signal with reason kill, in the other case it receives the special untrappable exit signal >> which causes unconditional termination. >> >> >> My point is that the *same* signal results in different behaviour depending on how it was sent. That's incocnsistent. >> >> I agree that it is inconsistent. I would have preferred that the exit(Pid, kill) was a separate function, e.g., kill(Pid) and >> that exit(Pid, kill) would be handled as any other exit/2 call. But I won't hold my breath in anticipation of that being >> changed... >> >> /H?kan >> >> >> Robert >> >> >> On 6 October 2015 at 18:33, zxq9 wrote: >> >> On Wednesday 07 October 2015 10:25:38 zxq9 wrote: >> >> > or maybe it is that {'EXIT', Pid = self(), kill} *is* specifically untrappable by way of matching on self()? >> >> That was too much to hope for: >> >> 1> P = spawn(fun Loop() -> receive M -> io:format("Got ~p~n", [M]), Loop() end end). >> <0.1889.0> >> 2> P ! {'EXIT', P, kill}. >> Got {'EXIT',<0.1889.0>,kill} >> {'EXIT',<0.1889.0>,kill} >> 3> P ! {'EXIT', P, blam}. >> Got {'EXIT',<0.1889.0>,blam} >> {'EXIT',<0.1889.0>,blam} >> 4> exit(P, kill). >> true >> 5> P ! {'EXIT', P, blam}. >> {'EXIT',<0.1889.0>,blam} >> >> If it *did* turn out that matching {'EXIT', self(), kill} was untrappable I would just say "ah, that makes sense -- now >> I can understand the mechanism behind this without thinking about VM details". Instead it appears to be a case of >> mysterious activity underlying a message form that is semantically overloaded. And that stinks. >> >> -Craig >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> >> >> >> > >> _______________________________________________ >> erlang-bugs mailing list >> erlang-bugs@REDACTED >> http://erlang.org/mailman/listinfo/erlang-bugs > > -- Lo?c Hoguin http://ninenines.eu Author of The Erlanger Playbook, A book about software development using Erlang From vances@REDACTED Thu Oct 8 14:48:36 2015 From: vances@REDACTED (Vance Shipley) Date: Thu, 8 Oct 2015 18:18:36 +0530 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems In-Reply-To: References: Message-ID: Martin, A challenge in sharing state between nodes in a cluster is that as the amount of data, and number of nodes, increases the CPU and IO usage rises, often to a point where it no longer makes sense. One thing we've done to mitigate this is to use multicasting where nodes update other nodes in one message instead of one message per node as it is with Erlang distribution. The multicast solution has the advantage that as the cluster size increases the traffic goes up linearly instead of exponentially. The downside is that it's not reliable however often best effort is enough and late arrival data is useless anyway in some cases. I've often day dreamed about prototyping mnesia replication using multicast ... Something I haven't tried, but I see that others have, is using remote direct memory access (RDMA) such as RoCE(*) which basically syncs a chunk of memory on a network interface cards which you can read directly (DMA). Fun stuff I'm sure. Probably getting out of hand for your problem. 
:) (*) https://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet -- -Vance From grahamrhay@REDACTED Thu Oct 8 15:10:15 2015 From: grahamrhay@REDACTED (Graham Hay) Date: Thu, 8 Oct 2015 14:10:15 +0100 Subject: [erlang-questions] Help with design of distributed fault-tolerant systems In-Reply-To: References: Message-ID: > Note that there is really no new code to write to make you application > distributed! You will need to handle netsplits in mnesia yourself though (e.g. https://github.com/uwiger/unsplit) -------------- next part -------------- An HTML attachment was scrubbed... URL: From co7eb@REDACTED Thu Oct 8 15:24:48 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Thu, 8 Oct 2015 09:24:48 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <7C2E3E6B-6426-48D6-AE56-32933E57D5F1@cs.otago.ac.nz> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <7C2E3E6B-6426-48D6-AE56-32933E57D5F1@cs.otago.ac.nz> Message-ID: <002201d101cc$b6229190$2267b4b0$@frcuba.co.cu> Yes I mean the C implementation, because if you do 1 millon of repetances of lists:length/1 in Erlang and the same for lists:reverse/1 for the same list, lists:length takes 16 milliseconds in my pc and lists:reverse takes 64 milliseconds. > So when using lists:reverse/1 it takes four times what lengths does, Four times *what*? lists:reverse/1 may well be implemented in C by now, but in Erlang we can write length(L) -> length_plus(L, 0). length_plus([_|Xs], N) -> length_plus(Xs, 1+N); length_plus([], N) -> N. reverse(L) -> reverse_append(L, []). reverse_append([X|Xs], Ys) -> reverse_append(Xs, [X|Ys]); reverse_append([], Ys) -> Ys. This is a *single* traversal. The two algorithms are essentially the same. Such extra cost as there is in reverse/1 compared with length/1 has to do with allocating and initialising memory; using doubly linked lists would only increase that cost. > so I can play guess, and just that, that you are using indeed a double linked list so you spend one O(n) to transverse the list, three O(1), which is an O(1) in total of course I am just counting the operations as well, to swap the preview and next pointers for each node so the list will be in the reverse order and other O(1) to return the new list which is a pointer of course to the old list just reversed. There is no changing of pointers in Erlang lists. There is no possibility of in-place updates of lists. If I do L = [1,2,3], R = lists:reverse(L) then I expect L to be completely unaltered. No Erlang operation is permitted to (visibly) alter any reachable value in any (detectable) way.= From mononcqc@REDACTED Thu Oct 8 16:41:37 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Thu, 8 Oct 2015 10:41:37 -0400 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> Message-ID: <20151008144136.GI1744@fhebert-ltm1> On 10/08, Richard Carlsson wrote: >Not quite. The end-of-form (or 'dot') token requires that the period >character is followed by whitespace, a comment, or end-of-file. Otherwise >it's a '.' token, as is already in use in expressions like >Record#recordname.fieldname. > >Once upon a time there was a special syntax called Mnemosyne for writing >Mnesia queries. To support this, the general form Expr.field was included >in the grammar (but only had a meaning within a Mnemosyne query). 
The >"packages" dotted namespaces piggy-backed on this grammar rule, and when >packages were removed, this rule was also removed (since Mnemosyne was no >longer supported either). But it could be put back and reused for maps. In >fact, I see that it temporarily _was_ used: see f00675d and 703a9aa. > Am I right in assuming mnesmosyne was like records currently are and packages were -- mostly using atoms as fields? There's an interesting distinction for maps in that any data structure? whatsoever might be a key, even tuples: 5> #{{a,b,c} := _} = #{{a,b,c} => d}. #{{a,b,c} => d} which could lead to a dotted notation of the form Map.{a,b,c} which? frankly looks funny. You can imagine trickier datastructures. Something? like #{f => #{'d.e' => #{#{a@REDACTED=>ok} => ok}}} which would require? Map.f.'d.e'.#{a@REDACTED=>ok} to go fetch the final 'ok'. Either that or you? support little chaining, but there's still plenty of ways to make this? terrible. At least, it looks like functions are not allowed in there: 7> #{fun() -> a end := _} = #{fun() -> a end => ok}. * 1: illegal map key in pattern 8> #{fun erlang:exit/2 := _} = #{fun erlang:exit/2 => ok}. * 1: illegal map key in pattern But I could still think of fun ways to use term_to_binary in interesting ways there. From jesper.louis.andersen@REDACTED Thu Oct 8 16:42:47 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Thu, 8 Oct 2015 16:42:47 +0200 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> Message-ID: On Thu, Oct 8, 2015 at 2:37 PM, Richard Carlsson wrote: > Not quite. The end-of-form (or 'dot') token requires that the period > character is followed by whitespace, a comment, or end-of-file. Otherwise > it's a '.' token, as is already in use in expressions like > Record#recordname.fieldname. oh, you are right. It might be a problem still, though. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mononcqc@REDACTED Thu Oct 8 16:47:44 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Thu, 8 Oct 2015 10:47:44 -0400 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <20151008144136.GI1744@fhebert-ltm1> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> <20151008144136.GI1744@fhebert-ltm1> Message-ID: <20151008144743.GK1744@fhebert-ltm1> On 10/08, Fred Hebert wrote: >Map.f.'d.e'.#{a@REDACTED=>ok} to go fetch the final 'ok'. Either that or you? >support little chaining, but there's still plenty of ways to make this? >terrible. > Oh also, if any form of chaining is required, it is now impossible to know if Map.3.5 is supposed to be Map.(3.5) or two maps, one with the key 3 and the key 5. I guess parentheses could make it work. But even with records this was kind of messy. From roberto@REDACTED Thu Oct 8 17:09:58 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Thu, 8 Oct 2015 17:09:58 +0200 Subject: [erlang-questions] Lager backend: error_logger? Message-ID: All, Is it possible to use error_logger inside a lager backend? I imagine this would create recursive calls, but I'm basically wondering how to output a log from a lager backend, if that makes any sense. Any clues welcome :) Thanks, r. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sdl.web@REDACTED Thu Oct 8 17:29:22 2015 From: sdl.web@REDACTED (Leo Liu) Date: Thu, 08 Oct 2015 23:29:22 +0800 Subject: [erlang-questions] What is ``internal ets tables''? References: <56152D47.1010207@ericsson.com> Message-ID: On 2015-10-07 22:33 +0800, Sverker Eriksson wrote: > It's ets tables used by ets itself to map from pid to owned tables > and from pid to fixated tables. > > However, looking at the code, they are only dumped if the VM > was built in debug mode. Thank you that is exactly what I needed. Leo From carlsson.richard@REDACTED Thu Oct 8 20:57:08 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Thu, 8 Oct 2015 20:57:08 +0200 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <20151008144136.GI1744@fhebert-ltm1> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> <20151008144136.GI1744@fhebert-ltm1> Message-ID: On Thu, Oct 8, 2015 at 4:41 PM, Fred Hebert wrote: > On 10/08, Richard Carlsson wrote: > >> Not quite. The end-of-form (or 'dot') token requires that the period >> character is followed by whitespace, a comment, or end-of-file. Otherwise >> it's a '.' token, as is already in use in expressions like >> Record#recordname.fieldname. >> > > Am I right in assuming mnesmosyne was like records currently are and > packages were -- mostly using atoms as fields? > > There's an interesting distinction for maps in that any data structure? > whatsoever might be a key, even tuples: > Yes, only atoms were allowed as field selectors back then. I think a general operator '.' would be possible, but it might lead to horrible code, as you point out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlsson.richard@REDACTED Thu Oct 8 20:59:50 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Thu, 8 Oct 2015 20:59:50 +0200 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <20151008144743.GK1744@fhebert-ltm1> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> <20151008144136.GI1744@fhebert-ltm1> <20151008144743.GK1744@fhebert-ltm1> Message-ID: On Thu, Oct 8, 2015 at 4:47 PM, Fred Hebert wrote: > On 10/08, Fred Hebert wrote: > >> Map.f.'d.e'.#{a@REDACTED=>ok} to go fetch the final 'ok'. Either that or you? >> support little chaining, but there's still plenty of ways to make this? >> terrible. >> > > Oh also, if any form of chaining is required, it is now impossible to know > if Map.3.5 is supposed to be Map.(3.5) or two maps, one with the key 3 and > the key 5. > Since tokenization (which is greedy) happens before parsing, Map.3.5 would always be parsed as Map.(3.5). You'd have to add parentheses to get Map.(3).(5). -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony@REDACTED Thu Oct 8 17:55:07 2015 From: tony@REDACTED (Tony Rogvall) Date: Thu, 8 Oct 2015 17:55:07 +0200 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <20151008144743.GK1744@fhebert-ltm1> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> <20151008144136.GI1744@fhebert-ltm1> <20151008144743.GK1744@fhebert-ltm1> Message-ID: <7667F6B9-C68C-4FE9-8DF6-89DD94062B78@rogvall.se> Why not only support atom keys for the dot notation and let more complex keys use maps:get??? That would cover most of my uses of maps ( but far from all ) /Tony > On 8 okt. 
2015, at 16:47, Fred Hebert wrote: > >> On 10/08, Fred Hebert wrote: >> Map.f.'d.e'.#{a@REDACTED=>ok} to go fetch the final 'ok'. Either that or you? >> support little chaining, but there's still plenty of ways to make this? >> terrible. > > Oh also, if any form of chaining is required, it is now impossible to know if Map.3.5 is supposed to be Map.(3.5) or two maps, one with the key 3 and the key 5. I guess parentheses could make it work. But even with records this was kind of messy. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From pierrefenoll@REDACTED Thu Oct 8 22:24:41 2015 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Thu, 8 Oct 2015 13:24:41 -0700 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <7667F6B9-C68C-4FE9-8DF6-89DD94062B78@rogvall.se> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> <20151008144136.GI1744@fhebert-ltm1> <20151008144743.GK1744@fhebert-ltm1> <7667F6B9-C68C-4FE9-8DF6-89DD94062B78@rogvall.se> Message-ID: <27C6222F-CBCE-4CAF-B8DF-A5210835EDFF@gmail.com> One could also enforce clean syntax by only allowing a variable | atomic | braced_expr as keys in the grammar. Your key is too ugly? Put it in a variable. > On 08 Oct 2015, at 08:55, Tony Rogvall wrote: > > Why not only support atom keys for the dot notation and let more complex keys use maps:get??? > That would cover most of my uses of maps ( but far from all ) > /Tony >>> On 8 okt. 2015, at 16:47, Fred Hebert wrote: >>> >>> On 10/08, Fred Hebert wrote: >>> Map.f.'d.e'.#{a@REDACTED=>ok} to go fetch the final 'ok'. Either that or you? >>> support little chaining, but there's still plenty of ways to make this? >>> terrible. >> >> Oh also, if any form of chaining is required, it is now impossible to know if Map.3.5 is supposed to be Map.(3.5) or two maps, one with the key 3 and the key 5. I guess parentheses could make it work. But even with records this was kind of messy. >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From lloyd@REDACTED Fri Oct 9 01:08:47 2015 From: lloyd@REDACTED (lloyd@REDACTED) Date: Thu, 8 Oct 2015 19:08:47 -0400 (EDT) Subject: [erlang-questions] Initializing mnesia Message-ID: <1444345727.2776431@apps.rackspace.com> Hello, At first blush, initializing mnesia looks like a piece of cake: init() -> mnesia:create_schema([node()]), mnesia:start(), mnesia:create_table(.... But suppose one wants to automate it--- "Look, Ma, no hands!" by, say, running it under a top-level supervisor. Seems we need to: - check to see if we have a schema -- if not, create schema - check to see if mnesia is running -- if not, start it - check to see if tables are defined -- if not, define them Surfing around I found this: http://erlang.2086793.n4.nabble.com/When-to-create-mnesia-schema-for-OTP-applications-td2115607.html Seems to work with Seth Falcon's fix; but the fix produces a warning when compiled. I could live with that, but it bugs me. Seems to me there must be a more elegant way to solve the problem. Spent half a day on it but the explosion of crash conditions and valid returns put my brain in tilt mode. 
It's not like I'm inventing the wheel here. Does some kind should have a simple mean-and-lean solution to the problem? Many thanks, LRP ********************************************* My books: THE GOSPEL OF ASHES http://thegospelofashes.com Strength is not enough. Do they have the courage and the cunning? Can they survive long enough to save the lives of millions? FREEIN' PANCHO http://freeinpancho.com A community of misfits help a troubled boy find his way AYA TAKEO http://ayatakeo.com Star-crossed love, war and power in an alternative universe Available through Amazon or by request from your favorite bookstore ********************************************** From zxq9@REDACTED Fri Oct 9 01:18:50 2015 From: zxq9@REDACTED (zxq9) Date: Fri, 09 Oct 2015 08:18:50 +0900 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <7667F6B9-C68C-4FE9-8DF6-89DD94062B78@rogvall.se> References: <20151008144743.GK1744@fhebert-ltm1> <7667F6B9-C68C-4FE9-8DF6-89DD94062B78@rogvall.se> Message-ID: <1647607.DFRhj7cxfs@changa> WHAT?!?!? All that inconsistency in the language SO PEOPLE CAN USE GODDAM DOTS?!?!? NO! OK. That's out of my system. Seriously, what is this obsession with dots? It amazes me that this sort of thing comes up so often, is demonstrated to be a bad idea, and then comes up again weeks later. Thoughtlessly adding syntactic sugar without a reason better than "braces and hashes and whatnot are considered ugly by (my) current fashion standards" is how you wind up with an unrecoverable stew of profoundly weird and unrecoverably ugly syntax and semantics. Consider Ruby. Or C++. Compare that to Python, where adding a syntactic convenience usually requires something close to a multi-year civil war. The difference in outcome is clear. I would much prefer that Erlang continued to err on the side of being too slow to depart from its Prologish syntax roots and remain consistent and unsurprising, regardless the prevailing syntax fads of this or that decade. In fact, I would prefer if most of the code I see continues uses maps:get/2,3 and wouldn't be bothered at all if there had never been any specific hash-and-braces-sometimes-with-arrows syntax for maps. The syntax of maps is utterly uninteresting -- the underlying data structure is the useful part. -Craig On 2015?10?8? ??? 17:55:07 Tony Rogvall wrote: > Why not only support atom keys for the dot notation and let more complex keys use maps:get??? > That would cover most of my uses of maps ( but far from all ) > /Tony > > On 8 okt. 2015, at 16:47, Fred Hebert wrote: > > > >> On 10/08, Fred Hebert wrote: > >> Map.f.'d.e'.#{a@REDACTED=>ok} to go fetch the final 'ok'. Either that or you? > >> support little chaining, but there's still plenty of ways to make this? > >> terrible. > > > > Oh also, if any form of chaining is required, it is now impossible to know if Map.3.5 is supposed to be Map.(3.5) or two maps, one with the key 3 and the key 5. I guess parentheses could make it work. But even with records this was kind of messy. 
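Returning to the mnesia initialization question raised earlier in this digest: a minimal sketch of a start-up sequence in which every step tolerates having been done before, so it can simply be run under a top-level supervisor. The table name, record definition and storage type are illustrative assumptions, and it is assumed mnesia has not yet been started when init/0 runs; any error other than the "already done" cases crashes init/0, which is intentional for a sketch.

-module(db_init).
-export([init/0]).

-record(session, {id, data}).

init() ->
    ok = ensure_schema(),
    ok = ensure_started(),
    ok = ensure_table(),
    ok = mnesia:wait_for_tables([session], 5000).

%% create_schema/1 reports already_exists on subsequent runs; treat that as ok.
ensure_schema() ->
    case mnesia:create_schema([node()]) of
        ok -> ok;
        {error, {_, {already_exists, _}}} -> ok
    end.

ensure_started() ->
    case mnesia:start() of
        ok -> ok;
        {error, {already_started, mnesia}} -> ok
    end.

%% create_table/2 aborts with already_exists on subsequent runs; treat that as ok.
ensure_table() ->
    case mnesia:create_table(session,
                             [{attributes, record_info(fields, session)},
                              {disc_copies, [node()]}]) of
        {atomic, ok} -> ok;
        {aborted, {already_exists, session}} -> ok
    end.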
From rvirding@REDACTED Fri Oct 9 02:28:07 2015 From: rvirding@REDACTED (Robert Virding) Date: Thu, 8 Oct 2015 17:28:07 -0700 Subject: [erlang-questions] [erlang-bugs] Strange behaviour of exit(kill) In-Reply-To: <20151008125730.GA1012@wagner.intra.a-tono.com> References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> <20151007122725.GB3459@wagner.intra.a-tono.com> <5616637F.4010102@ninenines.eu> <20151008125730.GA1012@wagner.intra.a-tono.com> Message-ID: I think to realise here is that exit(kill) sends a 'kill' SIGNAL not a message. It is the fact that the shell process is trapping exits which means that the signal is converted to a message when it arrives at the shell process. Sending a message, irrespective of it format, will never kill a process. My point was just that if the same 'kill' signal is sent by exit/1 or exit/2 it will result in different behaviour in the process which receives the signal. So it is not just the signal itself which causes the behaviour but how it was sent. I find this inconsistent. Should a word on the screen look different whether I write with the left hand or the right hand? Robert On 8 October 2015 at 05:57, Francesco Lattanzio < francesco.lattanzio@REDACTED> wrote: > But: > > 1> Pid = spawn_link(fun() -> exit(kill) end). > ** exception exit: killed > 2> f(). > ok > 3> process_flag(trap_exit, true). > false > 4> Pid = spawn_link(fun() -> exit(kill) end). > <0.40.0> > 5> flush(). > Shell got {'EXIT',<0.40.0>,kill} > ok > > Regards. > > On Thu, Oct 08, 2015 at 02:37:19PM +0200, Lo?c Hoguin wrote: > > On 10/07/2015 02:27 PM, Francesco Lattanzio wrote: > > >I always thought that when a process dies because it was sent a 'kill' > > >message it would broadcast to the linked processes a 'killed' EXIT > > >message (see Concurrent Programming in Erlang - Part I, D.3 Exit > > >signals, p. 193 ). > > >However for some reason recent implementations of the VM broadcasts a > > >'kill' EXIT message (I could only test it on Erlang VMs as old as > > >R13B04). > > > > 1> Pid = spawn_link(fun() -> receive after infinity -> ok end end). > > <0.36.0> > > 2> exit(Pid, kill). > > ** exception exit: killed > > 3> f(). > > ok > > 4> Pid = spawn_link(fun() -> receive after infinity -> ok end end). > > <0.41.0> > > 5> process_flag(trap_exit, true). > > false > > 6> exit(Pid, kill). > > true > > 7> flush(). > > Shell got {'EXIT',<0.41.0>,killed} > > ok > > > > Cheers, > > > > >I'm not asking to revert this behaviour (I bet such a change would > > >impact a lot of code), however it would be nice to know why it was > > >chosen a two-semantics kill message instead of more obvious two > > >one-semantic kill and killed message (if someone knows). > > > > > >On Wed, Oct 07, 2015 at 03:27:24AM -0700, Robert Virding wrote: > > >>I still find that extremely inconsistent, there are actually 2 'kill' > signals: one that is sent with exit(Pid, kill) and the other > > >>which sent when you do exit(kill). So I can trap 'kill' and I can't > trap 'kill', great. > > >> > > >>I would personally go the other way and say that kill is kill however > it is sent. But I agree with you, I'm not holding my breath > > >>waiting for it to be fixed. > > >> > > >>Robert > > >> > > >>P.S. I am not even going to mention the terribly inconsistent handling > of errors in link/1. > > >> > > >> > > >>On 7 October 2015 at 00:51, H?kan Huss wrote: > > >> > > >> 2015-10-07 3:46 GMT+02:00 Robert Virding : > > >> > > >> It's all about signals and not messages. 
Sending a message to > a process should *NEVER* by default kill it even if it has > > >> the same format as an 'EXIT' message. NEVER!. A signal is > converted to a message when it arrives at a process which is > > >> trapping exits unless it is the 'kill' which is untrappable > and the process always dies. > > >> > > >> > > >> Yes, but the 'kill' signal is not an exit signal with reason > kill. The 'kill' signal can only be sent by calling exit/2 with > > >> Reason = kill, which is documented to have the effect that "an > untrappable exit signal is sent to Pid which will > > >> unconditionally exit with exit reason killed." There is no > mention of how the exit reason in that exit signal, and since it is > > >> not trappable there is no way to observe it. > > >> > > >> > > >> Explicitly sending the SIGNAL with exit(Pid, kill) should > unconditionally kill the process > > >> > > >> Yes. > > >> > > >> > > >> as should dying with the reason 'kill' in exit(kill) which > also sends the SIGNAL 'kill'. > > >> > > >> No, this sends an exit signal with reason kill, but that is not > the same ass the signal sent using exit(Pid, kill). > > >> > > >> > > >> In both cases the process receives the SIGNAL 'kill', as > shown in my example, but in one case it is trappable and in the > > >> other it is untrappable. > > >> > > >> No, in one case it receives an exit signal with reason kill, in > the other case it receives the special untrappable exit signal > > >> which causes unconditional termination. > > >> > > >> > > >> My point is that the *same* signal results in different > behaviour depending on how it was sent. That's incocnsistent. > > >> > > >> I agree that it is inconsistent. I would have preferred that the > exit(Pid, kill) was a separate function, e.g., kill(Pid) and > > >> that exit(Pid, kill) would be handled as any other exit/2 call. > But I won't hold my breath in anticipation of that being > > >> changed... > > >> > > >> /H?kan > > >> > > >> > > >> Robert > > >> > > >> > > >> On 6 October 2015 at 18:33, zxq9 wrote: > > >> > > >> On Wednesday 07 October 2015 10:25:38 zxq9 wrote: > > >> > > >> > or maybe it is that {'EXIT', Pid = self(), kill} *is* > specifically untrappable by way of matching on self()? > > >> > > >> That was too much to hope for: > > >> > > >> 1> P = spawn(fun Loop() -> receive M -> io:format("Got > ~p~n", [M]), Loop() end end). > > >> <0.1889.0> > > >> 2> P ! {'EXIT', P, kill}. > > >> Got {'EXIT',<0.1889.0>,kill} > > >> {'EXIT',<0.1889.0>,kill} > > >> 3> P ! {'EXIT', P, blam}. > > >> Got {'EXIT',<0.1889.0>,blam} > > >> {'EXIT',<0.1889.0>,blam} > > >> 4> exit(P, kill). > > >> true > > >> 5> P ! {'EXIT', P, blam}. > > >> {'EXIT',<0.1889.0>,blam} > > >> > > >> If it *did* turn out that matching {'EXIT', self(), kill} > was untrappable I would just say "ah, that makes sense -- now > > >> I can understand the mechanism behind this without > thinking about VM details". Instead it appears to be a case of > > >> mysterious activity underlying a message form that is > semantically overloaded. And that stinks. 
> > >> > > >> -Craig > > >> _______________________________________________ > > >> erlang-questions mailing list > > >> erlang-questions@REDACTED > > >> http://erlang.org/mailman/listinfo/erlang-questions > > >> > > >> > > >> > > >> _______________________________________________ > > >> erlang-questions mailing list > > >> erlang-questions@REDACTED > > >> http://erlang.org/mailman/listinfo/erlang-questions > > >> > > >> > > >> > > >> > > >> > > > > > >>_______________________________________________ > > >>erlang-bugs mailing list > > >>erlang-bugs@REDACTED > > >>http://erlang.org/mailman/listinfo/erlang-bugs > > > > > > > > > > -- > > Lo?c Hoguin > > http://ninenines.eu > > Author of The Erlanger Playbook, > > A book about software development using Erlang > > _______________________________________________ > > erlang-bugs mailing list > > erlang-bugs@REDACTED > > http://erlang.org/mailman/listinfo/erlang-bugs > > -- > FRANCESCO LATTANZIO : SYSTEM & SOFTWARE > A-TONO TECHNOLOGY : VIA DEL CHIESINO, 10 - 56025 PONTEDERA (PI) : T +39 > 02 32069314 : SKYPE franz.lattanzio > a-tono.com : twitter.com/ATonoOfficial : facebook.com/ATonoOfficial > > Information in this email is confidential and may be privileged. It is > intended for the addresses only. > If you have received it in error, please notify the sender immediately and > delete it from your system. > You should not otherwise copy it, retransmit it or use or disclose its > content to anyone. Thank you for your co-operation. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvirding@REDACTED Fri Oct 9 02:33:01 2015 From: rvirding@REDACTED (Robert Virding) Date: Thu, 8 Oct 2015 17:33:01 -0700 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <1647607.DFRhj7cxfs@changa> References: <20151008144743.GK1744@fhebert-ltm1> <7667F6B9-C68C-4FE9-8DF6-89DD94062B78@rogvall.se> <1647607.DFRhj7cxfs@changa> Message-ID: In that case skip the dot and use Map#{key} which does look a bit strange but I don't think causes any problems. It at least looks "mappy". Robert On 8 October 2015 at 16:18, zxq9 wrote: > WHAT?!?!? > > All that inconsistency in the language SO PEOPLE CAN USE GODDAM DOTS?!?!? > > NO! > > OK. That's out of my system. > > Seriously, what is this obsession with dots? It amazes me that this sort > of thing comes up so often, is demonstrated to be a bad idea, and then > comes up again weeks later. Thoughtlessly adding syntactic sugar without a > reason better than "braces and hashes and whatnot are considered ugly by > (my) current fashion standards" is how you wind up with an unrecoverable > stew of profoundly weird and unrecoverably ugly syntax and semantics. > Consider Ruby. Or C++. Compare that to Python, where adding a syntactic > convenience usually requires something close to a multi-year civil war. The > difference in outcome is clear. > > I would much prefer that Erlang continued to err on the side of being too > slow to depart from its Prologish syntax roots and remain consistent and > unsurprising, regardless the prevailing syntax fads of this or that decade. > In fact, I would prefer if most of the code I see continues uses > maps:get/2,3 and wouldn't be bothered at all if there had never been any > specific hash-and-braces-sometimes-with-arrows syntax for maps. > > The syntax of maps is utterly uninteresting -- the underlying data > structure is the useful part. > > -Craig > > On 2015?10?8? ??? 
17:55:07 Tony Rogvall wrote: > > Why not only support atom keys for the dot notation and let more complex > keys use maps:get??? > > That would cover most of my uses of maps ( but far from all ) > > /Tony > > > On 8 okt. 2015, at 16:47, Fred Hebert wrote: > > > > > >> On 10/08, Fred Hebert wrote: > > >> Map.f.'d.e'.#{a@REDACTED=>ok} to go fetch the final 'ok'. Either that or > you? > > >> support little chaining, but there's still plenty of ways to make > this? > > >> terrible. > > > > > > Oh also, if any form of chaining is required, it is now impossible to > know if Map.3.5 is supposed to be Map.(3.5) or two maps, one with the key 3 > and the key 5. I guess parentheses could make it work. But even with > records this was kind of messy. > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Fri Oct 9 03:22:38 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 9 Oct 2015 14:22:38 +1300 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <002201d101cc$b6229190$2267b4b0$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <7C2E3E6B-6426-48D6-AE56-32933E57D5F1@cs.otago.ac.nz> <002201d101cc$b6229190$2267b4b0$@frcuba.co.cu> Message-ID: <6F991BE8-F855-4913-ABA8-6BFD6A56B565@cs.otago.ac.nz> On 9/10/2015, at 2:24 am, Ivan Carmenates Garcia wrote: > > Yes I mean the C implementation, because if you do 1 millon of repetances of > lists:length/1 in Erlang and the same for lists:reverse/1 for the same list, > lists:length takes 16 milliseconds in my pc and lists:reverse takes 64 > milliseconds. This is pretty meaningless. What is the length of the lists? Here are some results of mine, obtained just now: % |L| = 100 |L| = 1000 % length/1 in Erlang 0.26 usec 2.56 usec % reverse/1 in Erlang 0.44 usec 3.32 usec % length/1 built-in 0.13 usec 1.21 usec % lists:reverse/1 0.36 usec 3.44 usec The "in Erlang" entries were obtained with code manually unrolled by a factor of 4. The question was "four times *what*", and it appears to be that the answer was "CPU time". I actually get a little under a factor of 3 there, but this probably depends on the version and the machine and a lot of other things. From mononcqc@REDACTED Fri Oct 9 03:41:39 2015 From: mononcqc@REDACTED (Fred Hebert) Date: Thu, 8 Oct 2015 20:41:39 -0500 Subject: [erlang-questions] [erlang-bugs] Strange behaviour of exit(kill) In-Reply-To: <561717D6.2030509@proxel.se> References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> <20151007122725.GB3459@wagner.intra.a-tono.com> <5616637F.4010102@ninenines.eu> <20151008125730.GA1012@wagner.intra.a-tono.com> <561717D6.2030509@proxel.se> Message-ID: <20151009014138.GL1744@fhebert-ltm1> On 10/09, Roland Karlsson wrote: >So Robert, tell me who missed it. >What is the difference in behaviour when using exit/1 or exit/2? >And is the difference still there if the process kills itself with exit/2? > >/Roland > exit/1 raises a 'kill' exception (you can do it with erlang:raise(exit, kill, Stacktrace)) that will unwind the stack and can be caught by try ... catch. In the case where the exception is not caught, the process is terminated and the reason of its death forwarded as a signal. exit/2 sends an exit signal asynchronously. 
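A minimal sketch of the two cases (plain code, not a pasted shell session):

%% exit/1 raises an exception, so the *calling* process can catch it:
try exit(kill) catch exit:kill -> caught end.    %% evaluates to 'caught'

%% exit/2 with reason 'kill' sends the untrappable signal: the target dies
%% even though it traps exits, and it never sees an {'EXIT', ...} message.
P = spawn(fun() -> process_flag(trap_exit, true),
                   receive M -> io:format("trapped ~p~n", [M]) end
          end).
exit(P, kill).                %% P terminates unconditionally; nothing is printed
timer:sleep(50).
false = is_process_alive(P).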
The confusing aspect is that a 'kill' signal sent by exit/2 is untrappable, but a 'kill' signal that comes from a process terminating after an exception doesn't. This means the underlying secret is you have 2 levels of signals (at least): - termination signals (uncaught exception create one of these, dying from any other signal does the same) - kill signals (untrappable) The interesting bit then is why is there a need to translate 'kill' into 'killed'? From what I can tell, it's just so you can know the process was brutally killed -- if you only had 'kill' then you know it came from an exception. Buuuut here's the kicker: 1> spawn_link(fun() -> exit(kill) end), flush(). Shell got {'EXIT',<0.87.0>,kill} ok 2> spawn_link(fun() -> spawn_link(fun() -> exit(kill) end), timer:sleep(infinity) end), flush(). Shell got {'EXIT',<0.89.0>,killed} 3> spawn_link(fun() -> process_flag(trap_exit, true), spawn_link(fun() -> exit(kill) end), timer:sleep(infinity) end), flush(). ok woops. There's no real consistency there. In the end: - a special 'kill' signal that cannot be trapped will unconditionally kill a process and produce a signal 'killed' for peer processes - an exception 'kill' bubbling up is reported as a trappable exit signal with the reason 'kill' - that 'kill' exit signal is converted to 'killed' for other processes, regardless of the source, as long as it's not from a local stack. This just sounds like a leaky abstraction where the conversion of kill -> killed only takes place on incoming signals being converted, unconditionally. Still, you have two types of signals: untrappable (exit(Pid, kill)), and trappable (anything else, even with a reason kill, if produced from a stacktrace). It's confusing and always has been so. From vances@REDACTED Fri Oct 9 03:56:11 2015 From: vances@REDACTED (Vance Shipley) Date: Fri, 9 Oct 2015 07:26:11 +0530 Subject: [erlang-questions] Initializing mnesia In-Reply-To: <1444345727.2776431@apps.rackspace.com> References: <1444345727.2776431@apps.rackspace.com> Message-ID: On Fri, Oct 9, 2015 at 4:38 AM, wrote: > init() -> > mnesia:create_schema([node()]), > mnesia:start(), > mnesia:create_table(.... You need to wait until your tables are created before you can use them: mnesia:create_schema([node()]), wait_for_tables([schema], 10000) mnesia:start(), mnesia:create_table(.... wait_for_tables(... -- -Vance From ok@REDACTED Fri Oct 9 04:17:23 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 9 Oct 2015 15:17:23 +1300 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> Message-ID: <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> On 8/10/2015, at 1:20 pm, Ivan Carmenates Garcia wrote: > For example this is one of the algorithms, I optimize it as well as I could: Start by optimising the documentation. > > Considering here that the order of the fields is very important!. > > %% ------------------------------------------------------------------- > %% @private > %% @doc > %% Parses the specified list of full fields into a string containing > %% all full fields separated my comma. > %% example: > %%
> %%   parse_full_fields([{users, '*'}, name, {roles, [id, level]},
> %%       {users, name, alias}], fun get_postgres_operator/2) ->
> %%       {"users.'*',name,roles.id,roles.level,users.name AS alias",
> %%           [users, roles]}.
> %% 
> %% Returns `{[], []}' if called with `[]'. > %% @throws {error, invalid_return_fields_spec, InvalidForm :: any()} > %% @end > %% ------------------------------------------------------------------- Parsing turns a string into structure. Turning structure into a string is the OPPOSITE of parsing. You are not parsing but UNparsing. > -spec parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> Str when > FullFieldsSpecs :: proplists:proplist(), > Separator :: string(), > OperatorFinder :: fun(), > Str :: string(). This is seriously confusing. The thing about a proplist, as we've recently discussed in this list, is that it contains - atoms, where 'x' is equivalent to {'x',true} - pairs {Key, Value} - junk, which is completely ignored. While I don't fully understand your example, it's clear that name is NOT equivalent to {name,true} and that {users, name, alias} is NOT ignored as junk. So whatever you have, it certainly isn't a proplist. Finally, your -spec describes a function with three arguments, but the example in your comment has only two arguments. It looks like you are mapping 'atom' -> "atom" {'table', 'field'} -> "table.field" {'table', ['f1',...,'fn']} -> "table.f1" ... "table.fn" {'table', 'field', 'alias'} -> "table.field AS alias" It would seem more logical to have field() = atom() | {atom(),atom()}. % {name,alias} fields() = field() | list(field()). slot() = atom() | {atom(),fields()}. % {table,fields} so that it's obvious how to put an alias inside a list of fields. -module(goo). -export([ tables/1, unparse_return_full_fields/1, unparse_slots/1 ]). unparse_return_full_fields(Slots) -> {lists:flatten(unparse_slots(Slots)), tables(Slots)}. tables([{Table,_}|Slots]) -> Tables = tables(Slots), case lists:member(Table, Tables) of true -> Tables ; false -> [Table|Tables] end; tables([Name|Slots]) when is_atom(Name) -> tables(Slots); tables([]) -> []. unparse_slots([Slot|Slots]) -> [ unparse_slot(Slot) | unparse_remaining_slots(Slots) ]. unparse_remaining_slots([]) -> ""; unparse_remaining_slots(Slots) -> ["," | unparse_slots(Slots)]. unparse_slot({Table,Fields}) -> unparse_fields(Fields, atom_to_list(Table)); unparse_slot(Name) when is_atom(Name) -> atom_to_list(Name). unparse_fields([Field|Fields], Table) -> [ unparse_fields(Field, Table) | unparse_remaining_fields(Fields, Table) ]; unparse_fields({Name,Alias}, Table) -> [Table, ".", atom_to_list(Name), " AS ", atom_to_list(Alias)]; unparse_fields(Name, Table) when is_atom(Name) -> [Table, ".", atom_to_list(Name)]. unparse_remaining_fields([], _) -> ""; unparse_remaining_fields(Fields, Table) -> ["," | unparse_fields(Fields, Table)]. With that definition, we get 1> c(goo). 2> goo:unparse_slots([{users, '*'}, name, {roles, [id, level]}, {users, {name, alias}}]). %% Note difference here.[["users",".","*"], ",","name",",", [["roles",".","id"],",",["roles",".","level"]], ",", ["users",".","name"," AS ","alias"]] This is a tree of strings, not a string. But for many purposes, it's just as good. (It's called an iolist.) Flattening it gives "users.*,name,roles.id,roles.level,users.name AS alias" 3> goo:unparse_return_full_fields([{users, '*'}, name, {roles, [id, level]}, {users, {name, alias}}]). {"users.*,name,roles.id,roles.level,users.name AS alias", [roles,users]} You will note that the functions above don't have any use for reversing a list. And I must repeat that for many purposes, such as sending text to another OS process, there isn't any *need* to flatten the tree of strings. 
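For example (a sketch, assuming the goo module above has been compiled; the
file name is made up):

IoList = goo:unparse_slots([{users, '*'}, name, {roles, [id, level]}]).
ok = file:write_file("query.sql", IoList).   %% file:write_file/2 accepts iodata()
io:put_chars([IoList, $\n]).                 %% io:put_chars/1 accepts chardata()
Bin = iolist_to_binary(IoList).              %% one pass in C, if a flat result is really needed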
Even in your code, > parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> > {ParsedFields, TableNames} = > parse_full_fields2(FullFieldsSpecs, [], [], OperatorFinder), > {string:join(lists:reverse(ParsedFields), Separator), TableNames}. there isn't actually any need to do this: my_reverse_join([X|Xs], Sep) -> my_reverse_join(Xs, X, Sep); my_reverse_join([], _) -> "". my_reverse_join([], Acc, _) -> Acc; my_reverse_join([Y|Ys], Acc, Sep) -> my_reverse_join(Ys, Y++Sep++Acc, Sep). 9> goo:my_reverse_join(["harry","deacon","thom","avery"]). "avery,thom,deacon,harry" We see two ideas here which are likely to be applicable to your code in context. (1) It may be possible to *eliminate* a call to lists:reverse/1 by fusing it with the function the result is being passed to. (2) It may be possible to *eliminate* appends (++) by constructing an iolist (a tree of strings) instead of a string. For example, io:put_chars/2 wants a unicode:chardata(). http://www.erlang.org/doc/man/unicode.html#type-chardata http://www.erlang.org/doc/man/io.html#put_chars-2 From ok@REDACTED Fri Oct 9 04:20:02 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 9 Oct 2015 15:20:02 +1300 Subject: [erlang-questions] Accessing a single value from MAPS In-Reply-To: <20151008144136.GI1744@fhebert-ltm1> References: <99891FB1-21C7-48A9-8B67-B8285919B1A5@minostro.com> <05e001d10131$a9c91d10$fd5b5730$@frcuba.co.cu> <20151008144136.GI1744@fhebert-ltm1> Message-ID: <5BAC5CB9-1F30-44E0-ADB9-B4D74BAEA372@cs.otago.ac.nz> On 9/10/2015, at 3:41 am, Fred Hebert wrote: > > There's an interesting distinction for maps in that any data structure? > whatsoever might be a key, even tuples: > > 5> #{{a,b,c} := _} = #{{a,b,c} => d}. > #{{a,b,c} => d} Sigh. The key might even be a map, so that m1.m2.k _could_ be parsed as m1.(m2.k). Frames were so much simpler than maps... From co7eb@REDACTED Fri Oct 9 05:40:28 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Thu, 8 Oct 2015 23:40:28 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> Message-ID: <001801d10244$4095dd10$c1c19730$@frcuba.co.cu> Regards, Well seems to me that there is no more optimizations for the algorithm (except for yours 'my_reverse_join' see below), I tested yours and mine and they both take exactly the same for the final result the unparsed "string", well mine is 200 millisecond faster in 1 million of iterations, yours take 64XX main 62XX approx. The problem is that using lists:flatten is the hell slow, I also tested your algorithm just after the io lists return and it is very faster its took 14XX milliseconds, so when you use the final lists:flatten/1 everything goes to crap, sorry about the word. So it is amazing who faster it is and then lists:flatten by its own take the another 5 seconds, I did knew that because I read Erlang session about 7 myths off bla bla .. and optimizations stuffs some time ago and they say lists:flatten is slow also I tested it when I was constructing the algorithm first time I did avoid constructing the io lists because of that. 
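For reference, this is roughly how I take the timings above (a sketch;
Unparse stands for whichever generator is being measured, N is the number of
repetitions):

time_it(Unparse, Arg, N) ->
    {Micros, _} = timer:tc(fun() ->
        lists:foreach(fun(_) -> Unparse(Arg) end, lists:seq(1, N))
    end),
    Micros div 1000.    %% milliseconds for N runs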
I did clear some things like unparse well I need another name I don't like unparse either and parse is wrong, I will come up with something. Also I do have something to thank you, your reverse 'my_reverse_join' little algorithm is a crack, It sped up mine in almost one second. So no need to use lists:reverse/1 and string:join/2, thanks for that, now it takes 54XX instead of 62XX. I also have questions for you if you are so kind to answer. Regards proplists, well, it is necessary to make lists of options as proplists? yes, proplists:compact/1 and expand/1 for [a] to [{a, true}] and all those stuffs but what I was trying to do is to make it simple for the user, because more tuples are more complicate to write in the case of {users, name, alias} well that could be right adding more brackets like {users, {name, alias}} because it organizes the thing. But i.e.: I have another algorithm for match specs in which I can do {name | {table, name} | value, op, name | {table, name} | value } also [{..., op, ...}, ...], logic_op, ... so it will be easy for the user to build something like [ [{id, '==', 1}, {age, '==', 31}], 'or', {username, '==', "john"} ] And not something like [ { [{id, '==', 1}, {age, '==', 31}], 'or', {username, '==', "john"} } ] It was also more easy to build the algorithm for me without the tuples for the logic operator and I also validate it easy if other term is missing by checking if there is a logic_op with an empty Rest in the list to unparse. Kindly regards, Ivan (son of Gilberio). -----Original Message----- From: Richard A. O'Keefe [mailto:ok@REDACTED] Sent: Thursday, October 8, 2015 10:17 PM To: Ivan Carmenates Garcia Cc: Erlang Questions Mailing List Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) On 8/10/2015, at 1:20 pm, Ivan Carmenates Garcia wrote: > For example this is one of the algorithms, I optimize it as well as I could: Start by optimising the documentation. > > Considering here that the order of the fields is very important!. > > %% ------------------------------------------------------------------- > %% @private > %% @doc > %% Parses the specified list of full fields into a string containing > %% all full fields separated my comma. > %% example: > %%
> %%   parse_full_fields([{users, '*'}, name, {roles, [id, level]},
> %%       {users, name, alias}], fun get_postgres_operator/2) ->
> %%       {"users.'*',name,roles.id,roles.level,users.name AS alias",
> %%           [users, roles]}.
> %% 
> %% Returns `{[], []}' if called with `[]'. > %% @throws {error, invalid_return_fields_spec, InvalidForm :: any()} > %% @end %% > ------------------------------------------------------------------- Parsing turns a string into structure. Turning structure into a string is the OPPOSITE of parsing. You are not parsing but UNparsing. > -spec parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> Str when > FullFieldsSpecs :: proplists:proplist(), > Separator :: string(), > OperatorFinder :: fun(), > Str :: string(). This is seriously confusing. The thing about a proplist, as we've recently discussed in this list, is that it contains - atoms, where 'x' is equivalent to {'x',true} - pairs {Key, Value} - junk, which is completely ignored. While I don't fully understand your example, it's clear that name is NOT equivalent to {name,true} and that {users, name, alias} is NOT ignored as junk. So whatever you have, it certainly isn't a proplist. Finally, your -spec describes a function with three arguments, but the example in your comment has only two arguments. It looks like you are mapping 'atom' -> "atom" {'table', 'field'} -> "table.field" {'table', ['f1',...,'fn']} -> "table.f1" ... "table.fn" {'table', 'field', 'alias'} -> "table.field AS alias" It would seem more logical to have field() = atom() | {atom(),atom()}. % {name,alias} fields() = field() | list(field()). slot() = atom() | {atom(),fields()}. % {table,fields} so that it's obvious how to put an alias inside a list of fields. -module(goo). -export([ tables/1, unparse_return_full_fields/1, unparse_slots/1 ]). unparse_return_full_fields(Slots) -> {lists:flatten(unparse_slots(Slots)), tables(Slots)}. tables([{Table,_}|Slots]) -> Tables = tables(Slots), case lists:member(Table, Tables) of true -> Tables ; false -> [Table|Tables] end; tables([Name|Slots]) when is_atom(Name) -> tables(Slots); tables([]) -> []. unparse_slots([Slot|Slots]) -> [ unparse_slot(Slot) | unparse_remaining_slots(Slots) ]. unparse_remaining_slots([]) -> ""; unparse_remaining_slots(Slots) -> ["," | unparse_slots(Slots)]. unparse_slot({Table,Fields}) -> unparse_fields(Fields, atom_to_list(Table)); unparse_slot(Name) when is_atom(Name) -> atom_to_list(Name). unparse_fields([Field|Fields], Table) -> [ unparse_fields(Field, Table) | unparse_remaining_fields(Fields, Table) ]; unparse_fields({Name,Alias}, Table) -> [Table, ".", atom_to_list(Name), " AS ", atom_to_list(Alias)]; unparse_fields(Name, Table) when is_atom(Name) -> [Table, ".", atom_to_list(Name)]. unparse_remaining_fields([], _) -> ""; unparse_remaining_fields(Fields, Table) -> ["," | unparse_fields(Fields, Table)]. With that definition, we get 1> c(goo). 2> goo:unparse_slots([{users, '*'}, name, {roles, [id, level]}, {users, {name, alias}}]). %% Note difference here.[["users",".","*"], ",","name",",", [["roles",".","id"],",",["roles",".","level"]], ",", ["users",".","name"," AS ","alias"]] This is a tree of strings, not a string. But for many purposes, it's just as good. (It's called an iolist.) Flattening it gives "users.*,name,roles.id,roles.level,users.name AS alias" 3> goo:unparse_return_full_fields([{users, '*'}, name, {roles, [id, level]}, {users, {name, alias}}]). {"users.*,name,roles.id,roles.level,users.name AS alias", [roles,users]} You will note that the functions above don't have any use for reversing a list. And I must repeat that for many purposes, such as sending text to another OS process, there isn't any *need* to flatten the tree of strings. 
Even in your code, > parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> > {ParsedFields, TableNames} = > parse_full_fields2(FullFieldsSpecs, [], [], OperatorFinder), > {string:join(lists:reverse(ParsedFields), Separator), TableNames}. there isn't actually any need to do this: my_reverse_join([X|Xs], Sep) -> my_reverse_join(Xs, X, Sep); my_reverse_join([], _) -> "". my_reverse_join([], Acc, _) -> Acc; my_reverse_join([Y|Ys], Acc, Sep) -> my_reverse_join(Ys, Y++Sep++Acc, Sep). 9> goo:my_reverse_join(["harry","deacon","thom","avery"]). "avery,thom,deacon,harry" We see two ideas here which are likely to be applicable to your code in context. (1) It may be possible to *eliminate* a call to lists:reverse/1 by fusing it with the function the result is being passed to. (2) It may be possible to *eliminate* appends (++) by constructing an iolist (a tree of strings) instead of a string. For example, io:put_chars/2 wants a unicode:chardata(). http://www.erlang.org/doc/man/unicode.html#type-chardata http://www.erlang.org/doc/man/io.html#put_chars-2 From co7eb@REDACTED Fri Oct 9 05:51:36 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Thu, 8 Oct 2015 23:51:36 -0400 Subject: [erlang-questions] Initializing mnesia In-Reply-To: <1444345727.2776431@apps.rackspace.com> References: <1444345727.2776431@apps.rackspace.com> Message-ID: <001901d10245$cdc2a6e0$6947f4a0$@frcuba.co.cu> Hi Lloyd I think there is no need to check all those conditions you write about, you just have to put the init method in a supervisor three and don't worry about checking if schema exits because you can always create it and if it already exists nothing happened mnesia don't erase the previews schema neither the tables with information. i.e. I have this and works pretty well for me. init([]) -> case application:get_env(cowboy_enhancer, session_manager) of %% TODO: do something with config. {ok, Config} -> mnesia:create_schema([node()]), mnesia:start(), mnesia:create_table(session, [ {disc_copies, [node()]}, {type, set}, {attributes, record_info(fields, session)}]), case mnesia:wait_for_tables([session], 5000) of {timeout, _RemainingTabs} -> {stop, mnesia_timeout}; ok -> %% starts the session garbage collector. start_garbage_collector(), {ok, #state{}} end; undefined -> {stop, invalid_or_missing_configuration} end. NOTE: you do need to wait for tables to be ready in all involved nodes just as Vance said to you. -----Original Message----- From: erlang-questions-bounces@REDACTED [mailto:erlang-questions-bounces@REDACTED] On Behalf Of lloyd@REDACTED Sent: Thursday, October 8, 2015 7:09 PM To: Erlang (E-mail) Subject: [erlang-questions] Initializing mnesia Hello, At first blush, initializing mnesia looks like a piece of cake: init() -> mnesia:create_schema([node()]), mnesia:start(), mnesia:create_table(.... But suppose one wants to automate it--- "Look, Ma, no hands!" by, say, running it under a top-level supervisor. Seems we need to: - check to see if we have a schema -- if not, create schema - check to see if mnesia is running -- if not, start it - check to see if tables are defined -- if not, define them Surfing around I found this: http://erlang.2086793.n4.nabble.com/When-to-create-mnesia-schema-for-OTP-app lications-td2115607.html Seems to work with Seth Falcon's fix; but the fix produces a warning when compiled. I could live with that, but it bugs me. Seems to me there must be a more elegant way to solve the problem. 
Spent half a day on it but the explosion of crash conditions and valid returns put my brain in tilt mode. It's not like I'm inventing the wheel here. Does some kind should have a simple mean-and-lean solution to the problem? Many thanks, LRP ********************************************* My books: THE GOSPEL OF ASHES http://thegospelofashes.com Strength is not enough. Do they have the courage and the cunning? Can they survive long enough to save the lives of millions? FREEIN' PANCHO http://freeinpancho.com A community of misfits help a troubled boy find his way AYA TAKEO http://ayatakeo.com Star-crossed love, war and power in an alternative universe Available through Amazon or by request from your favorite bookstore ********************************************** _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions From brandjoe@REDACTED Fri Oct 9 00:07:55 2015 From: brandjoe@REDACTED (=?UTF-8?Q?J=c3=b6rgen_Brandt?=) Date: Fri, 9 Oct 2015 00:07:55 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang Message-ID: <5616E93B.4060608@hu-berlin.de> Hello, is there an Erlang library for transactional message passing, using patterns in communication and error handling to improve fault tolerance? This (or a similar) question may have been asked before and, of course, there is plenty of research on fault tolerance and failure transparency. Nevertheless, in my work with scientific workflows it seems that certain patterns in error handling exist. In this mail I'm trying to motivate a way to externalize common error handling in a standardized service (a transaction server) but I'm unsure whether such a thing already exists, whether I'm missing an important feature, and whether it's a good idea anyway. Large distributed systems are composed of many services. They process many tasks concurrently and need fault tolerance to yield correct results and maintain availability. Erlang seemed a good choice because it provides facilities to automatically improve availability, e.g., by using supervisers. In addition, it encourages a programming style that separates processing logic from error handling. In general, each service has its own requirements, implying that a general approach to error handling (beyond restarting) is infeasible. However, if an application exhibits recurring patterns in the way error handling corresponds to the messages passed between services, we can abstract these patterns to reuse them. Fault tolerance is important because it directly translates to scalability. Consider an application (with transient software faults), processing user queries. The application reports errors back to the user as they appear. If a user query is a long- running job (hours, days), the number of subtasks created from this job (thousands), the number of services to process one subtask, and the number of machines involved are large, then the occurrence of an error is near to certain. Quietly restarting the application and rerunning the query may reduce the failure probability but even if the application succeeds, the number of retries and, thus, the time elapsed to success may be prohibitive. What is needed is a system that does not restart the whole application but only the service that failed reissuing only the unfinished requests that this service received before failing. 
Consequently, the finer the granularity at which errors are handled, the less work has to be redone when errors occur, allowing a system to host longer-running jobs, containing more subtasks, involving more services for each subtask, and running on more machines in feasible time. Scientific workflows are a good example for a large distributed application exhibiting patterns in communication and error handling. A scientific workflow system consumes an input query in the form of an expression in the workflow language. On evaluation of this expression it identifies subtasks that can be executed in any order. E.g., a variant calling workflow from bioinformatics unfolds into several hundred to a thousand subtasks each of which is handed down in the form of requests through a number of services: Upon identification of the subtask in (i) the query interpreter, a request is sent to (ii) a cache service. This service keeps track of all previously run subtasks and returns the cached result if available. If not, a request is sent to (iii) a scheduling service. This service determines the machine, to run the subtask. The scheduler tries both, to adequately distribute the work load among workers (load balancing) and to minimize data transfers among nodes (data locality). Having decided where to run the subtask, a request is sent to (iv) the corresponding worker which executes the subtask and returns the result up the chain of services. Every subtask goes through this life cycle. Apart from the interplay of the aforementioned services we want the workflow system to behave in a particular way when one of these services dies: - Each workflow is evaluated inside its own interpreter process. A workflow potentially runs for a long time and at some point we might want to kill the interpreter process. When this happens, the system has to identify all open requests originating from that interpreter and cancel them. - When an important service (say the scheduler) dies, a supervisor will restart it, this way securing the availability of the application. Upon a fresh start, none of the messages this service has received will be there anymore. Instead of having to notify the client of this important service (in this case the cache) to give it the chance to repair the damage, we want all the messages, that have been sent to the important service (scheduler) and have not been quited, to be resent to the freshly started service (scheduler). - When a worker dies, from a hardware fault, we cannot expect a supervisor to restart it (on the same machine). In this case we want to notify the scheduler not to expect a reply to his request anymore. Also we want to reissue the original request to the scheduler to give it the chance to choose a different machine to run the subtask on. - When a request is canceled at a high level (say at the cache level because the interpreter died) All subsequent requests (to the scheduler and in the worker) corresponding to the original request should have been canceled before the high level service (cache) is notified, thereby relieving him of the duty to cancel them himself. Since there is no shared memory in Erlang, the state of a process is defined only by the messages received (and its init parameters which are assumed constant). 
To reestablish the state of a process after failure we propose three different ways to send messages to a process and their corresponding informal error handling semantics: tsend( Dest, Req, replay ) -> TransId when Dest :: atom(), Req :: term(), TransId :: reference(). Upon calling tsend/3, a transaction server creates a record of the request to be sent and relays it to the destination (must be a registered process). At the same time it creates a monitor on both the request's source and destination. When the source dies, it will send an abort message to the destination. When the destination dies, initially, nothing happens. When the supervisor restarts the destination, the transaction server replays all unfinished requests to the destination. tsend( Dest, Req, replay, Precond ) -> TransId when Dest :: atom(), Req :: term(), Precond :: reference(), TransId :: reference(). The error handling for tsend/4 with replay works just the same as tsend/3. Additionally, when the request with the id Precond is canceled, this request is also canceled. tsend( Dest, Req, reschedule, Precond ) -> TransId when Dest :: atom() | pid(), Req :: term(), Precond :: reference(), TransId :: reference(). Upon calling tsend/4, with reschedule, as before, a transaction server creates a record of the request and monitors both source and destination. When the destination dies, instead of waiting for a supervisor to restart it, the original request identified with Precond is first canceled at the source and then replayed to the source. Since we do not rely on the destination to be a permanent process, we can also identify it per Pid while we had to require a registered service under replay error handling. commit( TransId, Reply ) -> ok when TransId :: reference(), Reply :: term(). When a service is done working on a request, it sends a commit which relays the reply to the transaction source and removes the record for this request from the transaction server. A service participating in transaction handling has to provide the following two callbacks: handle_recv( TransId::reference(), Req::_, State::_ ) -> {noreply, NewState::_}. handle_abort( TransId::reference(), State::_ ) -> {noreply, NewState::_}. While the so-defined transaction protocol is capable of satisfying the requirements introduced for the workflow system example the question is, is it general enough to be applicable also in other applications? This conduct has its limitations. The introduced transaction protocol may be suited to deal with transient software faults (Heisenbugs) but its effectiveness to mitigate hardware faults or deterministic software faults (Bohrbugs) is limited. In addition, with the introduction of the transaction server we created a single point of failure. Concludingly, the restarting of a service by a supervisor is sufficient to secure the availability of a service in the presence of software faults but large scale distributed systems require a more fine-grained approach to error handling. To identify patterns in message passing and error handling gives us the opportunity to reduce error handling code and, thereby, avoid the introduction of bugs into error handling. The proposed transaction protocol may be suitable to achieve this goal. I had hoped to get some feedback on the concept, in order to have an idea whether I am on the right track. 
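To make the callback side concrete, here is a sketch of a participating
service (the cache from the example above). The transaction server module
name 'txn' and the ETS table are invented for the illustration; only
tsend/4, commit/2 and the two callbacks come from the description:

-module(cache_srv).
-export([handle_recv/3, handle_abort/2]).

%% A request arrives, relayed by the transaction server.
%% Pending is assumed to be a map() kept as the service state.
handle_recv(TransId, {lookup, Key}, Pending) ->
    case ets:lookup(cache_tab, Key) of
        [{Key, Result}] ->
            %% Done: reply to the source; the transaction record is removed.
            ok = txn:commit(TransId, {cached, Result}),
            {noreply, Pending};
        [] ->
            %% Miss: forward to the scheduler. Tying the new request to
            %% TransId means it is cancelled if our own request is cancelled.
            Down = txn:tsend(scheduler, {run_task, Key}, replay, TransId),
            {noreply, Pending#{TransId => Down}}
    end.

%% The transaction server cancels a request we were still working on.
handle_abort(TransId, Pending) ->
    {noreply, maps:remove(TransId, Pending)}.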
If a similar library is already around and I just couldn't find it, if I am missing an obvious feature, a pattern that is important but just doesn't appear in the context of scientific workflows, it would be helpful to know about it. Thanks in advance. Cheers J?rgen From mrallen1@REDACTED Fri Oct 9 07:49:41 2015 From: mrallen1@REDACTED (Mark Allen) Date: Fri, 9 Oct 2015 05:49:41 +0000 (UTC) Subject: [erlang-questions] Lager backend: error_logger? In-Reply-To: References: Message-ID: <1502764429.1208004.1444369782013.JavaMail.yahoo@mail.yahoo.com> On Thursday, October 8, 2015 10:10 AM, Roberto Ostinelli wrote: > Is it possible to use error_logger inside a lager backend?>?> I imagine this would create recursive calls, but I'm basically wondering how to output a?> log from a lager backend, if that makes any sense.? I'm sorry I don't understand what you mean - what is it you're trying to do?? Lager already intercepts calls to error_logger and redirects them into its own event handler(s) for log messages. ?So if you use error_logger:info_msg in your source code, then lager will already capture that and process it. If you want to take this conversation offline that would be fine too. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From grahamrhay@REDACTED Fri Oct 9 10:39:01 2015 From: grahamrhay@REDACTED (Graham Hay) Date: Fri, 9 Oct 2015 09:39:01 +0100 Subject: [erlang-questions] Initializing mnesia In-Reply-To: <1444345727.2776431@apps.rackspace.com> References: <1444345727.2776431@apps.rackspace.com> Message-ID: There was another discussion here (http://erlang.2086793.n4.nabble.com/Mnesia-create-tables-best-practices-td4710757.html), about including an initialised db in a release. From zxq9@REDACTED Fri Oct 9 10:43:33 2015 From: zxq9@REDACTED (zxq9) Date: Fri, 09 Oct 2015 17:43:33 +0900 Subject: [erlang-questions] Initializing mnesia In-Reply-To: References: <1444345727.2776431@apps.rackspace.com> Message-ID: <1649616.GLHeFUKtGy@burrito> On Friday 09 October 2015 09:39:01 Graham Hay wrote: > There was another discussion here > (http://erlang.2086793.n4.nabble.com/Mnesia-create-tables-best-practices-td4710757.html), > about including an initialised db in a release. The non-annoying link: http://erlang.org/pipermail/erlang-questions/2015-February/083160.html From tty.erlang@REDACTED Fri Oct 9 11:45:34 2015 From: tty.erlang@REDACTED (T Ty) Date: Fri, 9 Oct 2015 10:45:34 +0100 Subject: [erlang-questions] Initializing mnesia In-Reply-To: <1649616.GLHeFUKtGy@burrito> References: <1444345727.2776431@apps.rackspace.com> <1649616.GLHeFUKtGy@burrito> Message-ID: It boils down to what presuppositions you want your application to have. I prefer splitting initialization of the working environment from starting the application. Meaning when starting my application I assume mnesia tables have been created and I don't have to try to be clever in my application code. If the tables aren't there application doesn't start. Since your application is being started via some shell or init script place testing for preconditions and creation of mnesia tables there instead. In the application you might want do to mnesia:wait_for_tables/2 and ultimately: If the tables aren't there application doesn't start. 
ok = mnesia:wait_for_tables([session], 5000) On Fri, Oct 9, 2015 at 9:43 AM, zxq9 wrote: > On Friday 09 October 2015 09:39:01 Graham Hay wrote: > > There was another discussion here > > ( > http://erlang.2086793.n4.nabble.com/Mnesia-create-tables-best-practices-td4710757.html > ), > > about including an initialised db in a release. > > The non-annoying link: > http://erlang.org/pipermail/erlang-questions/2015-February/083160.html > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Fri Oct 9 12:36:53 2015 From: zxq9@REDACTED (zxq9) Date: Fri, 09 Oct 2015 19:36:53 +0900 Subject: [erlang-questions] Initializing mnesia In-Reply-To: References: <1444345727.2776431@apps.rackspace.com> <1649616.GLHeFUKtGy@burrito> Message-ID: <1713925.0ozpOjlMp6@changa> On 2015?10?9? ??? 10:45:34 T Ty wrote: > It boils down to what presuppositions you want your application to have. I > prefer splitting initialization of the working environment from starting > the application. > > Meaning when starting my application I assume mnesia tables have been > created and I don't have to try to be clever in my application code. If the > tables aren't there application doesn't start. > > Since your application is being started via some shell or init script place > testing for preconditions and creation of mnesia tables there instead. > > In the application you might want do to mnesia:wait_for_tables/2 and > ultimately: If the tables aren't there application doesn't start. This plays directly into my restart/zomg-crash/live-config strategy. The environment is prepared. Applications start. Running things receive configuration data. Running things (re)initialize based on the config data. There may be more steps than this, and that sort of breakdown may exist at different levels in a largish system or service constellation (especially when it involves a lot of non-Erlang services), but as long as I keep this in mind when building a system I tend to not have much trouble envisioning what different parts of my supervision tree should look like, or what "safe state" looks like when things need restarting. Incidentally it also makes testing different configurations easy, because configuration is conceptually REconfiguration, initialization is conceptually REinitializtion, and so on. When I've done things right I know that I can, for example, destructively update a database/external service/VM/whatever, and just let the orchestrating Erlang system's built-in crash/restart/reconfig process occur (at whatever level applies) -- to include whatever Mnesia or other data caches should do. This isn't an ideal way to manage production devops, perhaps, but in development its really nice to have all that stand itself up again without even executing a startup script again. It is also really nice to have confidence that your system would respond exactly that way in production were deep disaster to strike. I know this brings the discussion to a level way above Mnesia initialization, but my point is that the basic idea of separating these different operational preparation steps provides a natural answer to the question "Where should I spin up Mnesia for the first/Nth time?" 
-Craig From kvratkin@REDACTED Fri Oct 9 15:29:15 2015 From: kvratkin@REDACTED (Kirill Ratkin) Date: Fri, 9 Oct 2015 16:29:15 +0300 Subject: [erlang-questions] RADIUS decode/encode Message-ID: Hi guys, Who played with RADIUS? I'm trying to make test aplication which decode request and encode response (Accept). Here is code: handle_info({udp, Socket, IP, Port, Packet}, State) -> io:format("Packet is ~p~n", [hexlify(Packet)]), <> = Packet, io:format("Packet is ~p,~p,~p,~p,~p~n", [ Code, Identifier, Length, Authenticator, hexlify(Attributes) ]), <> = Attributes, io:format("AVP: ~p, ~p, ~p~n", [Len, Type, Body]), AVPCode = 18, AVPMessage = <<"You dick">>, AVPSize = byte_size(AVPMessage) + 2, AVPResponse = <>, RCode = 2, % calculated base on logic, accept is now for test RLength = byte_size(AVPResponse) + 20, Secret = <<"secret">>, RAuthenticator = erlang:md5(<>), Response = <>, gen_udp:send(Socket, IP, Port, Response), inet:setopts(Socket, [{active, once}]), {noreply, State}; It works but ... 'radclient' says Response Authenticator is not correctly calculated. This is its output: $ echo "User-Name = test" | radclient -x localhost:1812 auth secret Sending Access-Request Id 68 from 0.0.0.0:38654 to 127.0.0.1:1812 User-Name = 'test' Received Access-Accept Id 68 from 127.0.0.1:1812 to 127.0.0.1:38654 length 30 (0) Reply verification failed: Received Access-Accept packet from home server 127.0.0.1 port 1812 with invalid Response Authenticator! (Shared secret is incorrect.) RFC says: Response Authenticator The value of the Authenticator field in Access-Accept, Access- Reject, and Access-Challenge packets is called the Response Authenticator, and contains a one-way MD5 hash calculated over a stream of octets consisting of: the RADIUS packet, beginning with the Code field, including the Identifier, the Length, the Request Authenticator field from the Access-Request packet, and the response Attributes, followed by the shared secret. That is, ResponseAuth = MD5(Code+ID+Length+RequestAuth+Attributes+Secret) where + denotes concatenation. It seems I do how RFC recommends but ... I don't see mistake :(. Please help if you see my fault. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ates@REDACTED Fri Oct 9 16:57:42 2015 From: ates@REDACTED (Artem Teslenko) Date: Fri, 9 Oct 2015 17:57:42 +0300 Subject: [erlang-questions] RADIUS decode/encode In-Reply-To: References: Message-ID: <5617D5E6.8030608@ipv6.dp.ua> Hi, Look at https://github.com/ates/radius project Especially radius_codec module On 10/09/2015 04:29 PM, Kirill Ratkin wrote: > Hi guys, > > Who played with RADIUS? > > I'm trying to make test aplication which decode request and encode > response (Accept). 
> > Here is code: > > handle_info({udp, Socket, IP, Port, Packet}, State) -> > io:format("Packet is ~p~n", [hexlify(Packet)]), > > < Attributes/binary>> = Packet, > > io:format("Packet is ~p,~p,~p,~p,~p~n", [ > Code, > Identifier, > Length, > Authenticator, > hexlify(Attributes) > ]), > > <> = Attributes, > > io:format("AVP: ~p, ~p, ~p~n", [Len, Type, Body]), > > AVPCode = 18, > AVPMessage = <<"You dick">>, > AVPSize = byte_size(AVPMessage) + 2, > AVPResponse = <>, > RCode = 2, % calculated base on logic, accept is now > for test > RLength = byte_size(AVPResponse) + 20, > Secret = <<"secret">>, > RAuthenticator = erlang:md5(< Authenticator:128, AVPResponse/binary, Secret/binary>>), > Response = < RAuthenticator/binary, AVPResponse/binary>>, > > gen_udp:send(Socket, IP, Port, Response), > > inet:setopts(Socket, [{active, once}]), > {noreply, State}; > > It works but ... 'radclient' says Response Authenticator is not > correctly calculated. > > This is its output: > > $ echo "User-Name = test" | radclient -x localhost:1812 auth secret > Sending Access-Request Id 68 from 0.0.0.0:38654 > to 127.0.0.1:1812 > User-Name = 'test' > Received Access-Accept Id 68 from 127.0.0.1:1812 > to 127.0.0.1:38654 > length 30 > (0) Reply verification failed: Received Access-Accept packet from home > server 127.0.0.1 port 1812 with invalid Response Authenticator! > (Shared secret is incorrect.) > > RFC says: > > Response Authenticator > > The value of the Authenticator field in Access-Accept, Access- > Reject, and Access-Challenge packets is called the Response > Authenticator, and contains a one-way MD5 hash calculated over > a stream of octets consisting of: the RADIUS packet, beginning > with the Code field, including the Identifier, the Length, the > Request Authenticator field from the Access-Request packet, and > the response Attributes, followed by the shared secret. That > is, ResponseAuth = > MD5(Code+ID+Length+RequestAuth+Attributes+Secret) where + > denotes concatenation. > > It seems I do how RFC recommends but ... > I don't see mistake :(. > > Please help if you see my fault. > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From vances@REDACTED Fri Oct 9 17:09:05 2015 From: vances@REDACTED (Vance Shipley) Date: Fri, 9 Oct 2015 20:39:05 +0530 Subject: [erlang-questions] RADIUS decode/encode In-Reply-To: <5617D5E6.8030608@ipv6.dp.ua> References: <5617D5E6.8030608@ipv6.dp.ua> Message-ID: On Fri, Oct 9, 2015 at 8:27 PM, Artem Teslenko wrote: > Look at https://github.com/ates/radius project ... or: https://github.com/vances/radierl -- -Vance From essen@REDACTED Fri Oct 9 21:13:29 2015 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Fri, 9 Oct 2015 21:13:29 +0200 Subject: [erlang-questions] [erlang-bugs] Strange behaviour of exit(kill) In-Reply-To: <08cbe95218f76510e156d649cda2c893.squirrel@klotjohan.proxel.se> References: <4419117.afWZKFkaIl@burrito> <1713776.Oj5rNu4sYV@burrito> <20151007122725.GB3459@wagner.intra.a-tono.com> <5616637F.4010102@ninenines.eu> <20151008125730.GA1012@wagner.intra.a-tono.com> <561717D6.2030509@proxel.se> <20151009014138.GL1744@fhebert-ltm1> <08cbe95218f76510e156d649cda2c893.squirrel@klotjohan.proxel.se> Message-ID: <561811D9.9000806@ninenines.eu> On 10/09/2015 10:35 AM, Roland Karlsson wrote: > Woa! > > Thanx for explaining it so well! 
> > This seems so wrong that I am ashamed being an Erlang evangelist. > I just want to lay down, curl together and cry. > > I have always assumed exit/1 just was implemented as > exit(Reason) -> exit(self(),Reason). > Or, at least, that they had the same semantics. > > Looking in the manual, I see that exit/1 and exit/2 are > correctly described, and I do understand the need > for both (maybe) as exit/1 is synchronous. > > Personally I think exit/2 shall be > deprecated and renamed to send_exit_signal/2 or maybe kill/2. Considering it is called an "exit signal", the name makes sense, although it is confusing. Another clue is that exit/2 does not have error/2 or throw/2 equivalents. Perhaps kill/2 would be better (and would also match the kill command on unix) but I don't see it improve things much. You still have the kill special case with all attached problems. On the other hand, if you had exit/2 (or kill/2) just sending an exit signal (even if reason is kill), and kill/1 sending a kill signal, then things would become much clearer. A change like this would take many years though. > To be frank, I have never really liked the exit/2 name as > it seems counter intuitive to exit someone else. > > > /Roland > > > > >> On 10/09, Roland Karlsson wrote: >>> So Robert, tell me who missed it. >>> What is the difference in behaviour when using exit/1 or exit/2? >>> And is the difference still there if the process kills itself with >>> exit/2? >>> >>> /Roland >>> >> >> exit/1 raises a 'kill' exception (you can do it with erlang:raise(exit, >> kill, Stacktrace)) that will unwind the stack and can be caught by try >> ... catch. In the case where the exception is not caught, the process is >> terminated and the reason of its death forwarded as a signal. >> >> exit/2 sends an exit signal asynchronously. >> >> The confusing aspect is that a 'kill' signal sent by exit/2 is >> untrappable, but a 'kill' signal that comes from a process terminating >> after an exception doesn't. >> >> This means the underlying secret is you have 2 levels of signals (at >> least): >> >> - termination signals (uncaught exception create one of these, dying >> from any other signal does the same) >> - kill signals (untrappable) >> >> The interesting bit then is why is there a need to translate 'kill' into >> 'killed'? From what I can tell, it's just so you can know the process >> was brutally killed -- if you only had 'kill' then you know it came from >> an exception. Buuuut here's the kicker: >> >> 1> spawn_link(fun() -> exit(kill) end), flush(). >> Shell got {'EXIT',<0.87.0>,kill} >> ok >> 2> spawn_link(fun() -> spawn_link(fun() -> exit(kill) end), >> timer:sleep(infinity) end), flush(). >> Shell got {'EXIT',<0.89.0>,killed} >> 3> spawn_link(fun() -> process_flag(trap_exit, true), spawn_link(fun() >> -> exit(kill) end), timer:sleep(infinity) end), flush(). >> ok >> >> woops. There's no real consistency there. In the end: >> >> - a special 'kill' signal that cannot be trapped will unconditionally >> kill a process and produce a signal 'killed' for peer processes >> - an exception 'kill' bubbling up is reported as a trappable exit signal >> with the reason 'kill' >> - that 'kill' exit signal is converted to 'killed' for other processes, >> regardless of the source, as long as it's not from a local stack. >> >> This just sounds like a leaky abstraction where the conversion of kill >> -> killed only takes place on incoming signals being converted, >> unconditionally. 
Still, you have two types of signals: untrappable >> (exit(Pid, kill)), and trappable (anything else, even with a reason >> kill, if produced from a stacktrace). >> >> It's confusing and always has been so. >> > -- Lo?c Hoguin http://ninenines.eu Author of The Erlanger Playbook, A book about software development using Erlang From eshikafe@REDACTED Fri Oct 9 21:44:26 2015 From: eshikafe@REDACTED (austin aigbe) Date: Fri, 9 Oct 2015 20:44:26 +0100 Subject: [erlang-questions] Erlang shell command from Javascript (Electron) - Output Error Message-ID: Hello, I am trying to call an Erlang module function from javascript but the webpage does not display anything in the 'erl_ver' div element in the index.html file. -------------- index.html --------------- App

[The index.html markup was stripped by the list archiver. The page showed a
"Welcome to App" heading and the empty 'erl_ver' div, plus a script that runs
"erl -noshell -s shell_info get_ver -s erlang halt" through exec with a cwd
option.]
. ----------------- shell_info.erl ----------------- -module(shell_info).-compile(export_all). get_ver() -> io:format(erlang:system_info(system_version)). However, if I remove the cwd option from the javascript exec function, in the index.html file, I get the following error on the webpage: Erlang shell version: {"init terminating in do_boot",{undef,[{shell_info,get_ver,[],[]},{init,start_it,1,[{file,"init.erl"},{line,1054}]},{init,start_em,1,[{file,"init.erl"},{line,1034}]}]}}, Electron version: 0.33.6 How do I display the expected shell_info:get_ver() output on the webpage? Expected output: C:\Users\eausaig\Downloads\electron-v0.33.6-win32-x64\erlang_studio>erl -noshell -s shell_info get_ver -s erlang haltErlang/OTP 18 [erts-7.1] [64-bit] [smp:4:4] [async-threads:10] C:\Users\eausaig\Downloads\electron-v0.33.6-win32-x64\erlang_studio> Regards, Austin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lloyd@REDACTED Fri Oct 9 21:08:41 2015 From: lloyd@REDACTED (Lloyd R. Prentice) Date: Fri, 9 Oct 2015 14:08:41 -0500 Subject: [erlang-questions] Initializing mnesia In-Reply-To: <1649616.GLHeFUKtGy@burrito> References: <1444345727.2776431@apps.rackspace.com> <1649616.GLHeFUKtGy@burrito> Message-ID: You are troopers all. Much to chew on. Many thanks, Lloyd Sent from my iPad > On Oct 9, 2015, at 3:43 AM, zxq9 wrote: > >> On Friday 09 October 2015 09:39:01 Graham Hay wrote: >> There was another discussion here >> (http://erlang.2086793.n4.nabble.com/Mnesia-create-tables-best-practices-td4710757.html), >> about including an initialised db in a release. > > The non-annoying link: > http://erlang.org/pipermail/erlang-questions/2015-February/083160.html > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From co7eb@REDACTED Sat Oct 10 00:29:13 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Fri, 9 Oct 2015 18:29:13 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> Message-ID: <000001d102e1$ee788cf0$cb69a6d0$@frcuba.co.cu> Well, using this function I write instead of 'lists:flatten/1' your algorithm takes 50XX instead of 62XX milliseconds when using 'lists:flatten/1'. But if instead of adding the "," in your iolist generator algorithm we do it here somehow it takes about 42XX milliseconds, I did but I still missing one comma. I get something like this "users.*,name,roles.id,roles.levelusers.name AS alias, comma missing between role.level and user.name. I think we have no control of that in 'concat_io_list/2' function. concat_io_list([], Acc) -> Acc; concat_io_list([[L | _] = List | Rest], Acc) when is_list(L)-> concat_io_list(Rest, Acc ++ join_simple_list(List, [])); concat_io_list([List | Rest], Acc) -> concat_io_list(Rest, Acc ++ List). join_simple_list([], Acc) -> Acc; join_simple_list([[X | _] = Str | Xs], Acc) when is_number(X) -> join_simple_list(Xs, Acc ++ Str); join_simple_list([Str | Xs], Acc) -> join_simple_list(Xs, Acc ++ join_simple_list(Str, [])). Well thanks for all, Best regards, Ivan (son of Gilberio). 
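P.S. One more variant worth timing (a sketch, I have not measured it above):
skip the hand-written flattening and let the VM do it with the
iolist_to_binary/1 BIF, converting back only if a string() is really needed:

flatten_fast(IoList) ->
    binary_to_list(iolist_to_binary(IoList)).

Note that iolist_to_binary/1 does not insert separators either, so the commas
still have to be put into the iolist by the generator, the way your goo
module does it.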
-----Original Message----- From: Richard A. O'Keefe [mailto:ok@REDACTED] Sent: Thursday, October 8, 2015 10:17 PM To: Ivan Carmenates Garcia Cc: Erlang Questions Mailing List Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) On 8/10/2015, at 1:20 pm, Ivan Carmenates Garcia < co7eb@REDACTED> wrote: > For example this is one of the algorithms, I optimize it as well as I could: Start by optimising the documentation. > > Considering here that the order of the fields is very important!. > > %% ------------------------------------------------------------------- > %% @private > %% @doc > %% Parses the specified list of full fields into a string containing > %% all full fields separated my comma. > %% example: > %%
> %%   parse_full_fields([{users, '*'}, name, {roles, [id, level]},
> %%       {users, name, alias}], fun get_postgres_operator/2) ->
> %%       {"users.'*',name,roles.id,roles.level,users.name AS alias",
> %%           [users, roles]}.
> %%
> %% Returns `{[], []}' if called with `[]'. > %% @throws {error, invalid_return_fields_spec, InvalidForm :: any()} > %% @end %% > ------------------------------------------------------------------- Parsing turns a string into structure. Turning structure into a string is the OPPOSITE of parsing. You are not parsing but UNparsing. > -spec parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> Str when > FullFieldsSpecs :: proplists:proplist(), > Separator :: string(), > OperatorFinder :: fun(), > Str :: string(). This is seriously confusing. The thing about a proplist, as we've recently discussed in this list, is that it contains - atoms, where 'x' is equivalent to {'x',true} - pairs {Key, Value} - junk, which is completely ignored. While I don't fully understand your example, it's clear that name is NOT equivalent to {name,true} and that {users, name, alias} is NOT ignored as junk. So whatever you have, it certainly isn't a proplist. Finally, your -spec describes a function with three arguments, but the example in your comment has only two arguments. It looks like you are mapping 'atom' -> "atom" {'table', 'field'} -> "table.field" {'table', ['f1',...,'fn']} -> "table.f1" ... "table.fn" {'table', 'field', 'alias'} -> "table.field AS alias" It would seem more logical to have field() = atom() | {atom(),atom()}. % {name,alias} fields() = field() | list(field()). slot() = atom() | {atom(),fields()}. % {table,fields} so that it's obvious how to put an alias inside a list of fields. -module(goo). -export([ tables/1, unparse_return_full_fields/1, unparse_slots/1 ]). unparse_return_full_fields(Slots) -> {lists:flatten(unparse_slots(Slots)), tables(Slots)}. tables([{Table,_}|Slots]) -> Tables = tables(Slots), case lists:member(Table, Tables) of true -> Tables ; false -> [Table|Tables] end; tables([Name|Slots]) when is_atom(Name) -> tables(Slots); tables([]) -> []. unparse_slots([Slot|Slots]) -> [ unparse_slot(Slot) | unparse_remaining_slots(Slots) ]. unparse_remaining_slots([]) -> ""; unparse_remaining_slots(Slots) -> ["," | unparse_slots(Slots)]. unparse_slot({Table,Fields}) -> unparse_fields(Fields, atom_to_list(Table)); unparse_slot(Name) when is_atom(Name) -> atom_to_list(Name). unparse_fields([Field|Fields], Table) -> [ unparse_fields(Field, Table) | unparse_remaining_fields(Fields, Table) ]; unparse_fields({Name,Alias}, Table) -> [Table, ".", atom_to_list(Name), " AS ", atom_to_list(Alias)]; unparse_fields(Name, Table) when is_atom(Name) -> [Table, ".", atom_to_list(Name)]. unparse_remaining_fields([], _) -> ""; unparse_remaining_fields(Fields, Table) -> ["," | unparse_fields(Fields, Table)]. With that definition, we get 1> c(goo). 2> goo:unparse_slots([{users, '*'}, name, {roles, [id, level]}, {users, {name, alias}}]). %% Note difference here.[["users",".","*"], ",","name",",", [["roles",".","id"],",",["roles",".","level"]], ",", ["users",".","name"," AS ","alias"]] This is a tree of strings, not a string. But for many purposes, it's just as good. (It's called an iolist.) Flattening it gives "users.*,name,roles.id,roles.level,users.name AS alias" 3> goo:unparse_return_full_fields([{users, '*'}, name, {roles, [id, level]}, {users, {name, alias}}]). {"users.*,name,roles.id,roles.level,users.name AS alias", [roles,users]} You will note that the functions above don't have any use for reversing a list. And I must repeat that for many purposes, such as sending text to another OS process, there isn't any *need* to flatten the tree of strings. 
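A small illustration of that last point (a sketch, not part of the original message): most I/O functions accept iodata directly, so the nested result can be handed over without flattening. The file name below is only an example.

Fields = [["users", ".", "*"], ",", "name"],
ok = io:put_chars(Fields),                        %% prints users.*,name
ok = file:write_file("/tmp/fields.txt", Fields).  %% file I/O accepts iodata as-is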
Even in your code, > parse_return_full_fields(FullFieldsSpecs, Separator, OperatorFinder) -> > {ParsedFields, TableNames} = > parse_full_fields2(FullFieldsSpecs, [], [], OperatorFinder), > {string:join(lists:reverse(ParsedFields), Separator), TableNames}. there isn't actually any need to do this: my_reverse_join([X|Xs], Sep) -> my_reverse_join(Xs, X, Sep); my_reverse_join([], _) -> "". my_reverse_join([], Acc, _) -> Acc; my_reverse_join([Y|Ys], Acc, Sep) -> my_reverse_join(Ys, Y++Sep++Acc, Sep). 9> goo:my_reverse_join(["harry","deacon","thom","avery"]). "avery,thom,deacon,harry" We see two ideas here which are likely to be applicable to your code in context. (1) It may be possible to *eliminate* a call to lists:reverse/1 by fusing it with the function the result is being passed to. (2) It may be possible to *eliminate* appends (++) by constructing an iolist (a tree of strings) instead of a string. For example, io:put_chars/2 wants a unicode:chardata(). http://www.erlang.org/doc/man/unicode.html#type-chardata http://www.erlang.org/doc/man/io.html#put_chars-2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From guidao1013@REDACTED Sat Oct 10 11:59:23 2015 From: guidao1013@REDACTED (Dao Gui) Date: Sat, 10 Oct 2015 17:59:23 +0800 Subject: [erlang-questions] =?utf-8?q?why_isn=27t_scope_in_=22begin_end=22?= =?utf-8?b?IGJsb2Nr77yf?= Message-ID: Sorry to my pool English Example: -module(test). -export([test/0]). test() -> begin A = 1 end, A = 2. %% match error %io:format("~p~n", [A]). %normal output The var A is in begin end scope, but it can visited by other scope. Why isn't erlang support this? no this feature? I can't write the next code: -define(OUTPUT_3(A), begin [_,_,B|_]=A, io:format("~p~n", [B]) end). -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernando.benavides@REDACTED Sat Oct 10 14:45:42 2015 From: fernando.benavides@REDACTED (Brujo Benavides) Date: Sat, 10 Oct 2015 09:45:42 -0300 Subject: [erlang-questions] =?utf-8?q?why_isn=27t_scope_in_=22begin_end=22?= =?utf-8?b?IGJsb2Nr77yf?= In-Reply-To: References: Message-ID: You can?t write that but you can write this: output_3(A) -> [_,_,B|_]=A, io:format(?~p~n?, [B]). and then, in your code, instead of ?OUTPUT_3(?something?) you just use output_3(?something?) > On Oct 10, 2015, at 06:59, Dao Gui wrote: > > Sorry to my pool English > Example: > > -module(test). > -export([test/0]). > > test() -> > begin A = 1 end, > A = 2. %% match error > %io:format("~p~n", [A]). %normal output > > The var A is in begin end scope, but it can visited by other scope. > Why isn't erlang support this? > > no this feature? I can't write the next code: > -define(OUTPUT_3(A), begin [_,_,B|_]=A, io:format("~p~n", [B]) end). > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From montuori@REDACTED Sat Oct 10 15:22:33 2015 From: montuori@REDACTED (Kevin Montuori) Date: Sat, 10 Oct 2015 08:22:33 -0500 Subject: [erlang-questions] =?utf-8?q?why_isn=27t_scope_in_=22begin_end=22?= =?utf-8?b?IGJsb2Nr77yf?= In-Reply-To: (Dao Gui's message of "Sat, 10 Oct 2015 17:59:23 +0800") References: Message-ID: >>>>> "dg" == Dao Gui writes: dg> The var A is in begin end scope, but it can visited by other dg> scope. Why isn't erlang support this? Because the documentation says so: The scope for a variable is its function clause. 
Variables bound in a branch of an if, case, or receive expression must be bound in all branches to have a value outside the expression. Otherwise they are regarded as 'unsafe' outside the expression. http://www.erlang.org/doc/reference_manual/expressions.html#id79254 dg> -define(OUTPUT_3(A), begin [_,_,B|_]=A, io:format("~p~n", [B]) end). -define(OUTPUT_3(A), io:format("~p~n", lists:nth(3, A))). k. -- Kevin Montuori montuori@REDACTED From aschultz@REDACTED Sat Oct 10 15:43:24 2015 From: aschultz@REDACTED (Andreas Schultz) Date: Sat, 10 Oct 2015 15:43:24 +0200 (CEST) Subject: [erlang-questions] RADIUS decode/encode In-Reply-To: References: <5617D5E6.8030608@ipv6.dp.ua> Message-ID: <1604170422.432635.1444484604512.JavaMail.zimbra@tpip.net> ----- Original Message ----- > From: "Vance Shipley" > To: "Artem Teslenko" > Cc: "Questions erlang-questions" > Sent: Friday, October 9, 2015 5:09:05 PM > Subject: Re: [erlang-questions] RADIUS decode/encode > On Fri, Oct 9, 2015 at 8:27 PM, Artem Teslenko wrote: >> Look at https://github.com/ates/radius project > > ... or: https://github.com/vances/radierl ... or : https://github.com/travelping/eradius Andreas From aschultz@REDACTED Sat Oct 10 15:47:11 2015 From: aschultz@REDACTED (Andreas Schultz) Date: Sat, 10 Oct 2015 15:47:11 +0200 (CEST) Subject: [erlang-questions] RADIUS decode/encode In-Reply-To: References: Message-ID: <2127303626.432646.1444484831044.JavaMail.zimbra@tpip.net> ----- Original Message ----- > From: "Kirill Ratkin" > To: erlang-questions@REDACTED > Sent: Friday, October 9, 2015 3:29:15 PM > Subject: [erlang-questions] RADIUS decode/encode > Hi guys, > Who played with RADIUS? > I'm trying to make test aplication which decode request and encode response > (Accept). > Here is code: > handle_info({udp, Socket, IP, Port, Packet}, State) -> > io:format("Packet is ~p~n", [hexlify(Packet)]), > <> = > Packet, > io:format("Packet is ~p,~p,~p,~p,~p~n", [ > Code, > Identifier, > Length, > Authenticator, > hexlify(Attributes) > ]), > <> = Attributes, > io:format("AVP: ~p, ~p, ~p~n", [Len, Type, Body]), > AVPCode = 18, > AVPMessage = <<"You dick">>, > AVPSize = byte_size(AVPMessage) + 2, > AVPResponse = <>, > RCode = 2, % calculated base on logic, accept is now for test > RLength = byte_size(AVPResponse) + 20, > Secret = <<"secret">>, > RAuthenticator = erlang:md5(<>), That should be: RAuthenticator = erlang:md5(<>), The Response-Authenticator is calculate over the response packet, with the Authenticator field set to the Request-Authenticator. Andreas > Response = < AVPResponse/binary>>, > gen_udp:send(Socket, IP, Port, Response), > inet:setopts(Socket, [{active, once}]), > {noreply, State}; > It works but ... 'radclient' says Response Authenticator is not correctly > calculated. > This is its output: > $ echo "User-Name = test" | radclient -x localhost:1812 auth secret > Sending Access-Request Id 68 from 0.0.0.0:38654 to 127.0.0.1:1812 > User-Name = 'test' > Received Access-Accept Id 68 from 127.0.0.1:1812 to 127.0.0.1:38654 length 30 > (0) Reply verification failed: Received Access-Accept packet from home server > 127.0.0.1 port 1812 with invalid Response Authenticator! (Shared secret is > incorrect.) 
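A minimal sketch of the calculation described in the reply above, following the RFC formula ResponseAuth = MD5(Code + ID + Length + RequestAuth + Attributes + Secret). The variable names follow the quoted handler code; Authenticator is assumed to be the 16-byte authenticator parsed from the request packet:

RCode = 2,
RLength = 20 + byte_size(AVPResponse),
%% Hash the response with the *request* authenticator in place,
%% then put the digest into the packet that is actually sent.
RespAuth = erlang:md5(<<RCode, Identifier, RLength:16,
                        Authenticator/binary,
                        AVPResponse/binary,
                        Secret/binary>>),
Response = <<RCode, Identifier, RLength:16,
             RespAuth/binary, AVPResponse/binary>>,
gen_udp:send(Socket, IP, Port, Response).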
> RFC says: > Response Authenticator > The value of the Authenticator field in Access-Accept, Access- > Reject, and Access-Challenge packets is called the Response > Authenticator, and contains a one-way MD5 hash calculated over > a stream of octets consisting of: the RADIUS packet, beginning > with the Code field, including the Identifier, the Length, the > Request Authenticator field from the Access-Request packet, and > the response Attributes, followed by the shared secret. That > is, ResponseAuth = > MD5(Code+ID+Length+RequestAuth+Attributes+Secret) where + > denotes concatenation. > It seems I do how RFC recommends but ... > I don't see mistake :(. > Please help if you see my fault. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From thomas@REDACTED Sat Oct 10 17:46:20 2015 From: thomas@REDACTED (Thomas Gebert) Date: Sat, 10 Oct 2015 11:46:20 -0400 Subject: [erlang-questions] Why should you ever use atoms? Message-ID: I know this is probably kind of a newbie question, but I figured this would be the place to ask it: if atoms aren't garbage collected, why should I use them? For example, it's a common pattern to have something like: myFunction({user, "tombert","eats pizza"}) -> %% do something When I could easily do something like: myFunction({"user", "tombert", "eats pizza"}) -> %% do something ---- I could be way off here, but wouldn't the string be garbage collected? Is there a benefit to atoms that I'm missing? -Tombert. From pierrefenoll@REDACTED Sat Oct 10 17:59:54 2015 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Sat, 10 Oct 2015 08:59:54 -0700 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: <3245F9E3-07F8-4B20-9FD2-75D28F657284@gmail.com> Atom comparison is O(1) (and just one word). String comparison is O(n) > On 10 Oct 2015, at 08:46, Thomas Gebert wrote: > > I know this is probably kind of a newbie question, but I figured this would be the place to ask it: if atoms aren't garbage collected, why should I use them? For example, it's a common pattern to have something like: > > myFunction({user, "tombert","eats pizza"}) -> %% do something > > When I could easily do something like: > > myFunction({"user", "tombert", "eats pizza"}) -> %% do something > > ---- > > I could be way off here, but wouldn't the string be garbage collected? Is there a benefit to atoms that I'm missing? > > -Tombert. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From olopierpa@REDACTED Sat Oct 10 18:00:51 2015 From: olopierpa@REDACTED (Pierpaolo Bernardi) Date: Sat, 10 Oct 2015 18:00:51 +0200 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: On Sat, Oct 10, 2015 at 5:46 PM, Thomas Gebert wrote: > I know this is probably kind of a newbie question, but I figured this would > be the place to ask it: if atoms aren't garbage collected, why should I use > them? For example, it's a common pattern to have something like: > > myFunction({user, "tombert","eats pizza"}) -> %% do something > > When I could easily do something like: > > myFunction({"user", "tombert", "eats pizza"}) -> %% do something > > ---- > > I could be way off here, but wouldn't the string be garbage collected? Is > there a benefit to atoms that I'm missing? Yes. 
Atoms can be checked for identity with one machine instruction, while strings need to be scanned. From zxq9@REDACTED Sat Oct 10 19:17:11 2015 From: zxq9@REDACTED (zxq9) Date: Sun, 11 Oct 2015 02:17:11 +0900 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: <2211277.niYdm7qWJz@changa> On 2015?10?10? ??? 11:46:20 Thomas Gebert wrote: > I know this is probably kind of a newbie question, but I figured this > would be the place to ask it: if atoms aren't garbage collected, why > should I use them? ...snip... > I could be way off here, but wouldn't the string be garbage collected? > Is there a benefit to atoms that I'm missing? [Spoiler: This is a non-answer, in the sense that I urge you to discover the answer yourself through experience with the environment.] This reminds me of me about 7 years ago, before I had been able to give myself any time with languages that featured atoms. There are *many* good reasons for having atoms (it would be interesting if ROK shows up and enumerates a few of them -- pun noted, to include probable demolition of the pun). If you have ever written anything in C, used databases with serious features, or messed around with very basic hash-like data structures you will instantly appreciate the utility of enums and enum-like data. Even if you haven't, though, just write a few programs in Erlang and you will discover on your own why atoms are useful. At first you must simply have faith that they are useful enough that people we assume to have deeper knowledge than ourselves saw fit to include them. Until we find a pattern of cases that invalidate that assumption we should just go with it. Life is too short to start second guessing people who have produced demonstrably useful things we wish to use until we actually know better (as opposed to pretending that we do). Anyway, without getting even more preachy than this response already is, just go with the flow for now, use atoms where they seem appropriate, and your sense of "appropriate" and "common use case" will mature. At that point you will suddenly realize why this is useful. For me, at least, this sort of experience has more impact than an academic explanation does initially -- after a bit of time getting used to the mentality, though, the academic explanations become very obviously compelling (and suddenly you will appreciate two things: weird features *and* academic explanations). At the very least, who wants to write `<<"something">>` when they can write `something` instead? There is a very clear semantic case for having a data type that "is its own value". -Craig From co7eb@REDACTED Sat Oct 10 20:38:57 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Sat, 10 Oct 2015 14:38:57 -0400 Subject: [erlang-questions] What about this? Message-ID: <000001d1038a$edaf3db0$c90db910$@frcuba.co.cu> Hi fellows, Here I show some examples of how to work with the little framework for cowboy that I am making, I will be glad of hear some opinions from the community in order to help improve it. I must say that I will release it to the community as soon as it is ready for shown (the first version). (This will be about database stuffs) First you configure the backends in a config file app.config. (you can configure as many backends as you want, even in runtime you can update and install database backends). I'll show you how. %% Database backends configuration. 
{database_manager, [ {main_backend, [ {backend, postgres_backend}, {server, "localhost"}, {username, "postgres"}, {password, "server"}, {database, "my_main_db"}, %% max amount of database connections in the connection pool. {max_reusable_connections, 10}, % 10 connections. %% max time it will wait for an available connection. {wait_for_reusable_connection_timeout, 5000} % 5 seconds. ]}, {test_backend, [ {backend, postgres_backend}, {server, "localhost"}, {username, "postgres"}, {password, "server"}, {database, "test_db"}, %% max amount of database connections in the connection pool. {max_reusable_connections, 50}, % 10 connections. %% max time it will wait for an available connection. {wait_for_reusable_connection_timeout, 3000} % 3 seconds. ]} ]} To install a backend at runtime use database_manager:install_backend/2, passing the backend name and config, also has update_backend/2. Later you will use this in a simple way: database_manager:connection_session(fun(DBSession)-> . end, [{backend, test_backend}]). 'connection_session/1' give us an available connection from the connection pool. Each configured backend has its own connection pool, so we can use they as we like having the possibility of use multiple backend in runtime and at the same time. Let's see some examples of the current database functionalities the framework has: database_manager:connection_session(fun(DBSession) -> database_manager:insert(DBSession, {users, #{username => "John", age => 31}}) end). If no backend option is defined in 'connection_session', 'main_backend' will be used. database_manager:connection_session_transaction(fun(DBSession) -> {ok, #{id := UserId}} = database_manager:insert(DBSession, {users, #{username => "John", age => 31}}, [{return_fields, [id]}]), {ok, 1} = database_manager:insert(DBSession, {roles, #{user_id => UserId, role_level => 1}}) end). We also have 'database_manager:transaction/1' for use with 'connection_session' as we like. It has more functions like update, delete, find. i.e.: database_manager:connection_session(fun(DBSession) -> {ok, DataMap} = database_manager:find(DBSession, users, [{username, '==', "John"}]) end). We can use 'return_fields' option also to select just the fields we want. That was the low level API, it also have a high level API for model validation and of course you can mix between them as you wish. Example: To use the high level model API you need to create a module file that implement a behavior of 'model'. The nice thing here is that you can use validation functions by context, using tags. -module(users_model). -behavior(model). -export([ validation_tests/1, after_validate/1]). validation_tests(ModelDataMap) -> {username := Username, online := Online, password := Password} = ModelDataMap, [ %% tag for common validation tests. {common, [ %% username is required {fun() -> size(Username) =/= 0 end, "username cannot be empty"}]}, {new, [ %% password cannot be empty {fun() -> size(Password) =/= 0 end, "password cannot be empty"}]}, {update, [ %% online must be true or false {fun() -> (Online == true) or (Online == false) end, "online is not true or false"}]} ]. %% optional after_validate(ModelDataMap) -> ok. %% optional id_field_name() -> id. 
Then you can write {ok, ModelInfo} = model_manager:new_model(users, #{ username => "John", password => "server213*-+", password_old => password}, [common, new]), {ok, UserId} = model_manager:store(ModelInfo, [return_id]), {ok, ModelInfo2} = model_manager:update_model(roles, UserId, #{role_level => 1}, [common, update]), ok = model_manager:store_model(ModelInfo2). Note three things here: - first you can use tags to separate validation tests by context, for example, here I use 'common' to use that validation function for both new and update, and one for each of them. You can choose the name you want. But 'always' is a special name that is used when you don't specify any tag in 'new_model' or 'update_model' functions, them 'always' tag will be used. - second you can use 'return_id' option to return just the id of the inserted record but you can also use 'return_fields' to return any other field. In other to make the framework knows which is the id for 'return_id' option we must use 'id' for the name in the database or specify which is the name of the id in the model module using 'id_field_name/0' function. - third we can do stuffs like this #{username => "John", password => "server213*-+", password_old => password}, meaning that 'passworld_old' will take the value of 'password'. There are many other options and functions in the API, for example you can mix between low level API and High level API using model_manager:store_session(fun(DBSession) -> . end). You of course can use all functionalities form the low level API in the High one, like backend options and many other. Using 'model_manager:store_session/1/2' when using multiple 'store_model' in the same function is recommended because with it you hold the same connection for all 'store_model' and low level API operations. So this will be ok: model_manager:store_session_transaction(fun(_) -> {ok, ModelInfo} = model_manager:new_model(users, #{ username => "John", password => "server213*-+", password_old => password}, [common, new]), {ok, UserId} = model_manager:store(ModelInfo, [return_id]), {ok, ModelInfo2} = model_manager:update_model(roles, UserId, #{role_level => 1}, [common, update]), ok = model_manager:store_model(ModelInfo2) end, [{backend, test_backend}]). If any of the functions fail or the pattern matching fails the hold transaction will rollback. Cheers, Ivan (son of Gilberio). -------------- next part -------------- An HTML attachment was scrubbed... URL: From co7eb@REDACTED Sat Oct 10 20:49:25 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Sat, 10 Oct 2015 14:49:25 -0400 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: <000501d1038c$64295470$2c7bfd50$@frcuba.co.cu> Regards, Even if you use atoms in many functions for pattern matching you won't be using as many as for break down the system or consume the default maximum of atoms allowed by the system. So use them freely in that way, warning a thing you definably must never do is to use it as a conversion from a data that comes from outside of Erlang or other context, for example never do this: my_data_to_atom(StringData) when is_list(StringData) -> lists_to_atom(StringData). This is a simple and pointless example but what I am trying to explain is that StringData is an information that can vary in many ways and one you make a new atom it will be never garbage collected so... unless you have control of the StringData and know that it will be only a few constant values of course. Cheers, Ivan (son of Gilberio). 
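A common way to apply that advice (a sketch; the command names are made up for illustration): declare the atoms by hand in the module and convert outside input with list_to_existing_atom/1, which raises badarg instead of creating a new atom:

%% The only atoms involved are the ones written here by hand.
known_commands() -> [start, stop, status].

to_command(String) ->
    try list_to_existing_atom(String) of
        Atom ->
            case lists:member(Atom, known_commands()) of
                true  -> {ok, Atom};
                false -> {error, unknown_command}
            end
    catch
        error:badarg -> {error, unknown_command}
    end.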
-----Original Message----- From: erlang-questions-bounces@REDACTED [mailto:erlang-questions-bounces@REDACTED] On Behalf Of Thomas Gebert Sent: Saturday, October 10, 2015 11:46 AM To: erlang-questions@REDACTED Subject: [erlang-questions] Why should you ever use atoms? I know this is probably kind of a newbie question, but I figured this would be the place to ask it: if atoms aren't garbage collected, why should I use them? For example, it's a common pattern to have something like: myFunction({user, "tombert","eats pizza"}) -> %% do something When I could easily do something like: myFunction({"user", "tombert", "eats pizza"}) -> %% do something ---- I could be way off here, but wouldn't the string be garbage collected? Is there a benefit to atoms that I'm missing? -Tombert. _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions From jesper.louis.andersen@REDACTED Sat Oct 10 22:14:22 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sat, 10 Oct 2015 22:14:22 +0200 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: On Sat, Oct 10, 2015 at 5:46 PM, Thomas Gebert wrote: > I know this is probably kind of a newbie question, but I figured this > would be the place to ask it: if atoms aren't garbage collected, why should > I use them? 1) Pierre notes: runtime characteristics are different. This in itself is an important reason. Having a way to introduce static names in code with O(1) comparison helps in defining structure. This structure can then be used in pattern matches to quickly discriminate between cases. If you define {user, "tombert", "easts pizza"}, you are using a common pattern where the first tuple-argument encodes what the remainder of the tuple contains. It has some reminiscence of algebraic datatypes in languages which support those, and is somewhat Erlang's ways of getting access to these. >From a programmers perspective: atoms: used for statically declared words by the programmer and are avoided for dynamic generation and input controlled by an "enemy". binaries: used in places where space saving is paramount, and for data which contain truly binary information. strings: textual strings and the rest of the systems string-like objects are represented as lists of code-points. Apart from this, there are cases where it is nice to signify "namespace difference" by using atoms. e.g., in the term {user, "tombert", "eats pizza"}, A reader immediately understands that 'user' is something the programmer has supplied, whereas "tombert" and "eats pizza" are probably sattelite data entered by a user at some point. Had we writte, e.g., "user", then you somewhat tell the reader that "user" is a field which might get changed to "anything", whereas the atom 'user' signifies that there might be other kinds of terms used in this place, but there is only a finite amount of those. Of course, YMMV, and some people have other views as to how and where to use the different classes. There are also people who thinks we should GC atoms such that we can reuse them freely. The major reason this has not happened yet, is that one has to come up with a good scheme that doesn't slow the interpreter down too much, or breaks the soft-realtime properties of the system. -- J. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From frederic.bonfanti@REDACTED Sun Oct 11 07:46:01 2015 From: frederic.bonfanti@REDACTED (Frederic BONFANTI) Date: Sun, 11 Oct 2015 00:46:01 -0500 Subject: [erlang-questions] Erlang executable that reads command line arguments Message-ID: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> Hi guys, given a simple Erlang code that consists in one file, let?s say test123.erl , I?d like to 1. use a Makefile to compile test123.erl into test.beam and then generate a distributable version of test123 (executable) 2. figure-out how to parse the command line arguments once this test command is called from regular shell, for example: test123 -A -x 555 If there are straightforward examples available, that will do. Thanks in advance From sergej.jurecko@REDACTED Sun Oct 11 08:52:28 2015 From: sergej.jurecko@REDACTED (Sergej =?UTF-8?B?SnVyZcSNa28=?=) Date: Sun, 11 Oct 2015 08:52:28 +0200 Subject: [erlang-questions] Erlang executable that reads command line arguments In-Reply-To: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> References: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> Message-ID: <887870C5-9650-4D47-B1E7-8670B196746E@gmail.com> Use an escript? http://www.erlang.org/doc/man/escript.html Sergej On 11/10/15 07:46, "Frederic BONFANTI" wrote: >Hi guys, > >given a simple Erlang code that consists in one file, let?s say test123.erl , I?d like to > >1. use a Makefile to compile test123.erl into test.beam and then generate a distributable version of test123 (executable) > >2. figure-out how to parse the command line arguments once this test command is called from regular shell, for example: > > test123 -A -x 555 > >If there are straightforward examples available, that will do. > >Thanks in advance > >_______________________________________________ >erlang-questions mailing list >erlang-questions@REDACTED >http://erlang.org/mailman/listinfo/erlang-questions From eric.pailleau@REDACTED Sun Oct 11 10:07:52 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Sun, 11 Oct 2015 10:07:52 +0200 Subject: [erlang-questions] Erlang executable that reads command line arguments In-Reply-To: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> Message-ID: Hello, And Gnu getopt like module : https://github.com/jcomellas/getopt Regards Le?11 oct. 2015 07:46, Frederic BONFANTI a ?crit?: > > Hi guys, > > given a simple Erlang code that consists in one file, let?s say test123.erl , I?d like to > > 1. use a Makefile to compile test123.erl into test.beam and then generate a distributable version of test123 (executable) > > 2. figure-out how to parse the command line arguments once this test command is called from regular shell, for example: > > test123 -A -x 555 > > If there are straightforward examples available, that will do. > > Thanks in advance > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From frederic.bonfanti@REDACTED Sun Oct 11 09:42:45 2015 From: frederic.bonfanti@REDACTED (Frederic BONFANTI) Date: Sun, 11 Oct 2015 02:42:45 -0500 Subject: [erlang-questions] Erlang executable that reads command line arguments In-Reply-To: <887870C5-9650-4D47-B1E7-8670B196746E@gmail.com> References: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> <887870C5-9650-4D47-B1E7-8670B196746E@gmail.com> Message-ID: The first time I launched it, it requested me to accept socket monitoring (can?t remember exactly) then I couldn?t reproduce that issue. 
Besides this, it should do Thanks > On Oct 11, 2015, at 1:52 AM, Sergej Jure?ko wrote: > > Use an escript? > > http://www.erlang.org/doc/man/escript.html > > > Sergej > > > > > On 11/10/15 07:46, "Frederic BONFANTI" wrote: > >> Hi guys, >> >> given a simple Erlang code that consists in one file, let?s say test123.erl , I?d like to >> >> 1. use a Makefile to compile test123.erl into test.beam and then generate a distributable version of test123 (executable) >> >> 2. figure-out how to parse the command line arguments once this test command is called from regular shell, for example: >> >> test123 -A -x 555 >> >> If there are straightforward examples available, that will do. >> >> Thanks in advance >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > From dmkolesnikov@REDACTED Sun Oct 11 12:40:40 2015 From: dmkolesnikov@REDACTED (Dmitry Kolesnikov) Date: Sun, 11 Oct 2015 13:40:40 +0300 Subject: [erlang-questions] Erlang executable that reads command line arguments In-Reply-To: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> References: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> Message-ID: <9FAC6E44-8C3B-418E-94BF-82CB9FBEB9FA@gmail.com> And if you are using rebar and you have deps to other projects you might add following lines to rebar.config {escript_incl_apps, [ getopt ]}. {escript_emu_args, "%%! +K true +P 10000000\n"}. Sad story, I've not figure out how to add erlang runtime to same package. Anyone, who receives you script needs to have one. Best Regards, Dmitry >-|-|-(*> > On 11 Oct 2015, at 08:46, Frederic BONFANTI wrote: > > Hi guys, > > given a simple Erlang code that consists in one file, let?s say test123.erl , I?d like to > > 1. use a Makefile to compile test123.erl into test.beam and then generate a distributable version of test123 (executable) > > 2. figure-out how to parse the command line arguments once this test command is called from regular shell, for example: > > test123 -A -x 555 > > If there are straightforward examples available, that will do. > > Thanks in advance > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From vimal7370@REDACTED Sun Oct 11 16:18:50 2015 From: vimal7370@REDACTED (Vimal Kumar) Date: Sun, 11 Oct 2015 19:48:50 +0530 Subject: [erlang-questions] If you make WhatsApp today... Message-ID: Hi, Imagine that WhatsApp does not exist today. If you are asked to write WhatsApp in Erlang and scale it like how they successfully did, will you still be opting for Mnesia, or something else like Riak? Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From erlang@REDACTED Sun Oct 11 19:48:41 2015 From: erlang@REDACTED (Joe Armstrong) Date: Sun, 11 Oct 2015 19:48:41 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: Message-ID: On Sun, Oct 11, 2015 at 4:18 PM, Vimal Kumar wrote: > Hi, > > Imagine that WhatsApp does not exist today. If you are asked to write > WhatsApp in Erlang and scale it like how they successfully did, will you > still be opting for Mnesia, or something else like Riak? Neither - I'd define an API that I'd use and implement it with any appropriate database. 
Then *if* the app took off and I ran into performance problems I'd tweak whatever law behind the API. If the application never flew I'd be saved the trouble of making an efficient back-end. To implement the API I'd choose whatever I was most comfortable with. My goal would be to get something up and running as soon as possible and get users - and not worry about implementation details. As the wise man said "Premature optimisation is the root of all evil". I'd probably use the file system, then ets or dets and "roll my own" first. Data bases using *only* the file system and simple locking can get you a lot of milage before you need to optimize :-) /Joe > > Thank you! > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From erlang@REDACTED Sun Oct 11 20:01:08 2015 From: erlang@REDACTED (Joe Armstrong) Date: Sun, 11 Oct 2015 20:01:08 +0200 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: Using atoms is fine. The dangerous primitive is list_to_atom(X) if your program *never* calls list_to_atom then don't worry. If you call list_to_atom a lot then worry. Note "a lot" means tens of millions of times on a machine with GBytes if memory. There is your friend list_to_existing_atom(X) which exits if X represents a non existent atom. You can, or course, use strings or binaries instead of atoms but your programs will run slower and you'll put more pressure on the garbage collector. Rule of thumb: don't worry, use atoms until you run into problems. Write your code in such a way that swapping from atoms to binaries is no big deal. This is actually about structuring your code - if you use atoms and get problems and have to edit for a hour or so to fix the problem then this is no big deal - if this involves weeks of work then you have a bad design. Make it work - them make it fast/stable/whatever /Joe On Sat, Oct 10, 2015 at 5:46 PM, Thomas Gebert wrote: > I know this is probably kind of a newbie question, but I figured this would > be the place to ask it: if atoms aren't garbage collected, why should I use > them? For example, it's a common pattern to have something like: > > myFunction({user, "tombert","eats pizza"}) -> %% do something > > When I could easily do something like: > > myFunction({"user", "tombert", "eats pizza"}) -> %% do something > > ---- > > I could be way off here, but wouldn't the string be garbage collected? Is > there a benefit to atoms that I'm missing? > > -Tombert. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From gianfranco.alongi@REDACTED Sun Oct 11 20:50:16 2015 From: gianfranco.alongi@REDACTED (Gianfranco Alongi) Date: Sun, 11 Oct 2015 20:50:16 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: Message-ID: This is the correct answer. On Sunday, 11 October 2015, Joe Armstrong wrote: > On Sun, Oct 11, 2015 at 4:18 PM, Vimal Kumar > wrote: > > Hi, > > > > Imagine that WhatsApp does not exist today. If you are asked to write > > WhatsApp in Erlang and scale it like how they successfully did, will you > > still be opting for Mnesia, or something else like Riak? > > Neither - I'd define an API that I'd use and implement it with any > appropriate database. Then *if* the app took off and I ran into performance > problems I'd tweak whatever law behind the API. 
If the application never > flew I'd be saved the trouble of making an efficient back-end. > > To implement the API I'd choose whatever I was most comfortable with. > > My goal would be to get something up and running as soon as possible > and get users - and not worry about implementation details. > > As the wise man said "Premature optimisation is the root of all evil". > > I'd probably use the file system, then ets or dets and "roll my own" first. > > Data bases using *only* the file system and simple locking can get you > a lot of milage before you need to optimize :-) > > > /Joe > > > > > Thank you! > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- Sent from Gmail Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Sun Oct 11 23:27:54 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Mon, 12 Oct 2015 00:27:54 +0300 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: You can think about atoms as of a predefined words of your domain logic. Each domain description has a limited amount of words that describe it, so number of atoms is also limited by design. In a properly designed program you should never meet dynamic creation by list_to_atom, only if you know that it is a string representation of already described atom. In other words, atoms are always written by hands by some human as a part of described functionality. Maybe this will help you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Sun Oct 11 23:44:54 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sun, 11 Oct 2015 23:44:54 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: Message-ID: On Sun, Oct 11, 2015 at 7:48 PM, Joe Armstrong wrote: > I'd probably use the file system, then ets or dets and "roll my own" first. To much extent, this is what WhatsApp is doing according to what I've been able to dissect out of their talks. They use mnesia for storing meta-data, but they use an UFS2 file system on FreeBSD to store the majority of data and just pick it off of the disk whenever they need it. The other trick they use is to use the clients to store the majority of data and only use the servers as a temporary transient store until the client comes back up and requests its missing data. Were I to start, I'd definitely do the API modularity split as well, but I'd probably pick postgresql as a backing store over mnesia, perhaps with mnesia as an in-memory cache for active users. The assumption is user data is not changing that often, and postgresql have the added value of being a system I *know* how to operate and handle. There is much to be said for picking technology which is tried and proven and you know how to operate. A possible switch is Riak for the UFS2 store, but it does require the ability to "append" to an object in Riak. Otherwise it is kind of moot to load data, add a couple of bytes and store the object back again. Perhaps this can be done by "chaining" documents in Riak with content addressing. -- J. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ok@REDACTED Mon Oct 12 01:02:12 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 12 Oct 2015 12:02:12 +1300 Subject: [erlang-questions] =?utf-8?q?why_isn=27t_scope_in_=22begin_end=22?= =?utf-8?b?IGJsb2Nr77yf?= In-Reply-To: References: Message-ID: <0D41BBB1-2CF6-4048-906A-3EE1533D4DB2@cs.otago.ac.nz> On 10/10/2015, at 10:59 pm, Dao Gui wrote: > Sorry to my pool English > Example: > > -module(test). > -export([test/0]). > > test() -> > begin A = 1 end, > A = 2. %% match error > %io:format("~p~n", [A]). %normal output > > The var A is in begin end scope, but it can visited by other scope. > Why isn't erlang support this? Because it doesn't. It never has. It is so documented. There is no such thing as "begin end scope" in Erlang. You might as well ask why %start...%finish in Atlas Autocode and IMP aren't scopes; why do;...end; in PL/I and XPL and PL/M aren't scopes; why begin...end is not a scope in Pascal. | without this feature? I can't write the next code: > -define(OUTPUT_3(A), begin [_,_,B|_]=A, io:format("~p~n", [B]) end). You can get the *effect* you want: -define(OUTPUT_3(A), fun ([_,_,B|_]) -> io:format("~p~n", [B]) end(A)). In general, you can express let Pat1 = Exp1 Pat2 = Exp2 ... Patn = Expn in Body end as fun (Pat1, Pat2, ..., Patn) -> Body end(Exp1, Exp2, ..., Expn) From ok@REDACTED Mon Oct 12 01:22:16 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 12 Oct 2015 12:22:16 +1300 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: <000501d1038c$64295470$2c7bfd50$@frcuba.co.cu> References: <000501d1038c$64295470$2c7bfd50$@frcuba.co.cu> Message-ID: <037DD6F4-BCF7-4522-9F6D-CB1E88157357@cs.otago.ac.nz> For what it's worth, SWI Prolog now has a lock-free atom symbol table and has garbage collected the atom table for a very long time. It is long past time for Erlang's fixed size bottlenecking atom table to be replaced by something less troublesome. There is an EEP proposing an approach to lots-of-atoms based on the LOGIX implementation of Flat Concurrent Prolog. From what I've heard, the SWI Prolog lock-free atom table sounds superior. For what it's worth, benchmarking is always able to surprise us. I benchmarked comparing atoms, comparing strings, and comparing binaries. Atoms: 1.0 Strings: 1.3 Binaries: 1.4 With hindsight, it was obvious: most string comparisons fail at the first character. Of course, if you have lots of strings (maybe file names or URIs) with long common prefixes, comparing those is going to be costly, but you're not going to use atoms for those anyway, and you should probably use a data structure like A/B/C -> [C,B,A] so that the different bits get compared first. From kenji@REDACTED Mon Oct 12 03:04:02 2015 From: kenji@REDACTED (Kenji Rikitake) Date: Mon, 12 Oct 2015 10:04:02 +0900 Subject: [erlang-questions] Building OTP on OS X 10.11 El Capitan Message-ID: <20151012010402.GA9242@k2r.org> OS X 10.11 El Capitan seems to have ditched /usr/include altogether (presumably due to System Integrity Protection policies, even untouchable with XCode installation). This prevents Erlang/OTP 18.1.1 to be built with crypto-related modules including ssh and ssl modules. What I did was to install HomeBrew OpenSSL (after creating /usr/local) and use --with-ssl=/usr/local/opt/openssl when doing Configure. 
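A tiny sketch of that last idea (not from the original message): split the path-like key and keep the most specific segment first, so unequal keys usually differ at the head of the list:

path_key(Path) ->
    lists:reverse(string:tokens(Path, "/")).

%% path_key("usr/local/lib") -> ["lib","local","usr"]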
OTOH, I find XCode 7.1 places the directory under /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift-migrator/sdk/MacOSX.sdk/usr/include if you really want to use the stock OpenSSL 0.9.8 (though I haven't tried it). Any other possible practical solutions on this? Regards, Kenji Rikitake From vasdeveloper@REDACTED Mon Oct 12 03:59:38 2015 From: vasdeveloper@REDACTED (Theepan) Date: Mon, 12 Oct 2015 07:29:38 +0530 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: Message-ID: Vimal, As Joe mentioned, focus of a new product should be on key-differentiators and time-to-market. This way you can check the product-market fit, and, if needed, can fail-fast. If you are familiar with Mnesia, it does not take more than 10 minutes to set it up. Since it is a Erlang-integrated DBMS, it will help you code fast in case you need transactional guarantees, and it withstands decent load before you need to look at performance optimisation options. As Jesper mentioned, when you look at storing data, you have to select the right mix of mechanism. It is not always database alone. This worries you only later. Some of them of course you should have it at the initial stage itself -- e.g. storing media files in file system. Theepan On Mon, Oct 12, 2015 at 3:14 AM, Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > > On Sun, Oct 11, 2015 at 7:48 PM, Joe Armstrong wrote: > >> I'd probably use the file system, then ets or dets and "roll my own" >> first. > > > To much extent, this is what WhatsApp is doing according to what I've been > able to dissect out of their talks. They use mnesia for storing meta-data, > but they use an UFS2 file system on FreeBSD to store the majority of data > and just pick it off of the disk whenever they need it. The other trick > they use is to use the clients to store the majority of data and only use > the servers as a temporary transient store until the client comes back up > and requests its missing data. > > Were I to start, I'd definitely do the API modularity split as well, but > I'd probably pick postgresql as a backing store over mnesia, perhaps with > mnesia as an in-memory cache for active users. The assumption is user data > is not changing that often, and postgresql have the added value of being a > system I *know* how to operate and handle. There is much to be said for > picking technology which is tried and proven and you know how to operate. > > A possible switch is Riak for the UFS2 store, but it does require the > ability to "append" to an object in Riak. Otherwise it is kind of moot to > load data, add a couple of bytes and store the object back again. Perhaps > this can be done by "chaining" documents in Riak with content addressing. > > > -- > J. > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas@REDACTED Mon Oct 12 01:13:05 2015 From: thomas@REDACTED (Thomas Gebert) Date: Sun, 11 Oct 2015 19:13:05 -0400 Subject: [erlang-questions] Why should you ever use atoms? In-Reply-To: References: Message-ID: <561AED01.8060209@gebert.sexy> I understood for the most part how to use atoms and what they were used for, I was just curious what technical advantages they had over a string (or binary string) considering that they aren't garbage collected. 
The O(1) comparison is a valid enough reason that I hadn't thought about, and the idea that they are limited by design actually kind of makes valid sense. On 10/11/2015 05:27 PM, Max Lapshin wrote: > You can think about atoms as of a predefined words of your domain logic. > > Each domain description has a limited amount of words that describe > it, so number of atoms is also limited by design. > > In a properly designed program you should never meet dynamic creation > by list_to_atom, only if you know that it is a > string representation of already described atom. > > In other words, atoms are always written by hands by some human as a > part of described functionality. > > > Maybe this will help you. > > From ulf@REDACTED Mon Oct 12 08:03:08 2015 From: ulf@REDACTED (Ulf Wiger) Date: Mon, 12 Oct 2015 08:03:08 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: Message-ID: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> > On 11 Oct 2015, at 23:44, Jesper Louis Andersen wrote: > > Were I to start, I'd definitely do the API modularity split as well, but I'd probably pick postgresql as a backing store over mnesia, perhaps with mnesia as an in-memory cache for active users. The assumption is user data is not changing that often, and postgresql have the added value of being a system I *know* how to operate and handle. There is much to be said for picking technology which is tried and proven and you know how to operate. FWIW, while playing around* with the extended mnesia version at Klarna, we also experimented with a pgsql backend to mnesia. It worked pretty well, although there were some bootstrapping challenges esp. during testing**, but ultimately, the leveldb backend was a better fit for their needs. * They stopped playing around and are now using ?mnesia_ext? with a leveldb backend ** You could either create a pgsql instance inside the mnesia directory, or connect to a running pgsql, in which case consistency checks where needed As I understand it, Klarna will publish the leveldb backend (the pgsql backend was an experiment, so I don?t know about that). For those who want to experiment with *some* backend, there?s https://github.com/klarna/otp/blob/OTP-17.5.6-mnesia_ext/lib/mnesia/test/mnesia_ext_filesystem.erl which is used in the test suite. A very simple example of how to get started with it can be found here: https://github.com/klarna/otp/blob/OTP-17.5.6-mnesia_ext/lib/mnesia/test/mnesia_ext_test.erl BR, Ulf W Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto@REDACTED Mon Oct 12 11:08:03 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Mon, 12 Oct 2015 11:08:03 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? Message-ID: All, This is an operational question, i.e. how to pin point alerts and co. I monitor process queues and get the longest queue with Recon, using: recon:proc_count(message_queue_len, 1). I'm using exometer_core to send data to HostedGraphite. Tonight I received errors stating that the queue exceeded this number, and this is the queue info that Recon gives me: [exometer_report_graphite,{current_function,{prim_inet,send,3}},{initial_call,{proc_lib,init_p,3}}] I got reports every 5 seconds (the interval of time used by exometer to report) from around 00:30 until 08:20, with a message queue growing from a few hundreds to 6k or so. After 08:20, the queue went back to 0. 
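One quick way to keep an eye on that particular process (a sketch, assuming the registered name reported by recon above):

%% Returns the current queue length and what the process is doing.
erlang:process_info(whereis(exometer_report_graphite),
                    [message_queue_len, current_function]).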
I can see that my server was up (no other errors, all functionalities and pings were correct during the same period). Therefore, it looks to me that exometer was unable to send the data to the HostedGraphite servers during the whole period 00:30 - 08:20, and when it was able to it emptied the queue that went then back to normal. However: HostedGraphite only has a "hole" of data from around 07:40 until 08:20, i.e. not the whole period where the queue kept growing. Therefore, my questions is: Is exometer_core performing some local cache, i.e. if it's unable to send data it will retry? And if so, is this cache limited? Otherwise I don't see what could be causing the queue lenght of exometer_report_graphite to grow. But if this is so, then why am I seeing a hole in data? Than kyou, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlsson.richard@REDACTED Mon Oct 12 11:13:12 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Mon, 12 Oct 2015 11:13:12 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> Message-ID: On Mon, Oct 12, 2015 at 8:03 AM, Ulf Wiger wrote: > * They stopped playing around and are now using ?mnesia_ext? with a > leveldb backend > Correct, we have been using leveldb as a backend for mnesia for some time now, and it works very well. The space savings (both on disk and in memory) are enormous. As I understand it, Klarna will publish the leveldb backend (the pgsql > backend was an experiment, so I don?t know about that). > We intended to do this sooner, but as usual, we've been kept busy with more pressing things. There is one major performance-improving change that we want to do before we push this out, so that people don't start relying on the old behaviour. Hope to have this done soonish. /Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From knv.suresh2009@REDACTED Mon Oct 12 11:39:23 2015 From: knv.suresh2009@REDACTED (suresh knv) Date: Mon, 12 Oct 2015 15:09:23 +0530 Subject: [erlang-questions] Facing problem with the installation of rabbitmqserver on arm64 Message-ID: Hi, I am facing problems with the installation of rabbitmq-server can anyone please helpme out with this. I am trying to install openstack using devstack on arm64 machine,while running ./stack.sh it fails saying that the 2015-10-12 09:16:15.437 | wget is already the newest version. 2015-10-12 09:16:15.437 | 0 upgraded, 0 newly installed, 0 to remove and 30 not upgraded. 2015-10-12 09:16:15.437 | 1 not fully installed or removed. 2015-10-12 09:16:15.437 | After this operation, 0 B of additional disk space will be used. 2015-10-12 09:16:15.466 | Setting up rabbitmq-server (3.2.4-1) ... 2015-10-12 09:16:15.645 | * Starting message broker rabbitmq-server 2015-10-12 09:16:20.142 | * FAILED - check /var/log/rabbitmq/startup_\{log, _err\} 2015-10-12 09:16:20.145 | ...fail! 2015-10-12 09:16:20.149 | invoke-rc.d: initscript rabbitmq-server, action "start" failed. 
2015-10-12 09:16:20.155 | dpkg: error processing package rabbitmq-server (--configure): 2015-10-12 09:16:20.155 | subprocess installed post-installation script returned error exit status 1 2015-10-12 09:16:20.264 | Errors were encountered while processing: 2015-10-12 09:16:20.266 | rabbitmq-server 2015-10-12 09:16:20.815 | E: Sub-process /usr/bin/dpkg returned an error code (1) 2015-10-12 09:16:20.822 | + exit_trap 2015-10-12 09:16:20.822 | + local r=100 2015-10-12 09:16:20.825 | ++ jobs -p 2015-10-12 09:16:20.829 | + jobs= 2015-10-12 09:16:20.829 | + [[ -n '' ]] 2015-10-12 09:16:20.829 | + kill_spinner 2015-10-12 09:16:20.829 | + '[' '!' -z '' ']' 2015-10-12 09:16:20.829 | + [[ 100 -ne 0 ]] 2015-10-12 09:16:20.830 | + echo 'Error on exit' 2015-10-12 09:16:20.830 | Error on exit 2015-10-12 09:16:20.830 | + [[ -z /opt/stack/logs ]] 2015-10-12 09:16:20.830 | + /home/ubuntu/devstack/tools/worlddump.py -d /opt/stack/logs 2015-10-12 09:16:21.654 | + exit 100 current erlang version which I am having is R16B03 and /var/log/rabbitmq/start_err is like this Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{k ubuntu@REDACTED:/var/log/rabbitmq$ cat startup_log {error_logger,{{2015,10,12},{9,16,17}},"Protocol: ~tp: register/listen error: ~tp~n",["inet_tcp",econnrefused]} {error_logger,{{2015,10,12},{9,16,17}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.21.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}},{ancestors,[net_sup,kernel_sup,<0.10.0>]},{messages,[]},{links,[#Port<0.93>,<0.18.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,811}],[]]} {error_logger,{{2015,10,12},{9,16,17}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[rabbitmqprelaunch3622,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} {error_logger,{{2015,10,12},{9,16,17}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} {error_logger,{{2015,10,12},{9,16,17}},crash_report,[[{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.9.0>},{registered_name,[]},{error_info,{exit,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,133}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}},{ancestors,[<0.8.0>]},{messages,[{'EXIT',<0.10.0>,normal}]},{links,[<0.8.0>,<0.7.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,117}],[]]} 
{error_logger,{{2015,10,12},{9,16,17}},std_info,[{application,kernel},{exited,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}},{type,permanent}]} {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}}"} my hostname is arm64 in etc/hosts 127.0.0.1 localhost arm64 ::1 localhost arm64 ip6-localhost ip6-loopback ff02::1 ip6-allnodes ff02::2 ip6-allrouters Thanks Suresh KN V From anna.grzybowska@REDACTED Mon Oct 12 12:45:10 2015 From: anna.grzybowska@REDACTED (Anna Grzybowska) Date: Mon, 12 Oct 2015 11:45:10 +0100 Subject: [erlang-questions] Berlin Erlang Factory Lite 1 Dec - Early Bird rates open! Message-ID: Berlin Erlang Factory Lite 1 Dec - Early Bird rates open! The third edition of the Erlang Factory Lite in Berlin will take place on 1 Dec! Berlin Erlang Factory Lite is a whole day full of Erlang and Elixir and, on top of that, three days of hands-on training on OTP, Advanced Erlang Techniques and Phoenix on 2-4 Dec. So far our speakers are Heinz Gies and Sonny Scroggin and new speakers will be announced shortly! The Early Bird tickets are on sale now and the price for one ticket is 89 EUR + VAT. In addition to that we have a special discount for students - 50% off the regular price. To get it please email conferences@REDACTED from your university email account for details. -- Anna Grzybowska Conference Executive Erlang Solutions Ltd ?New Loom House?Back Church Lane?London? E1 1LU -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf@REDACTED Mon Oct 12 13:19:01 2015 From: ulf@REDACTED (Ulf Wiger) Date: Mon, 12 Oct 2015 13:19:01 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: References: Message-ID: > On 12 Oct 2015, at 11:08, Roberto Ostinelli wrote: > > Therefore, my questions is: Is exometer_core performing some local cache, i.e. if it's unable to send data it will retry? And if so, is this cache limited? Otherwise I don't see what could be causing the queue lenght of exometer_report_graphite to grow. But if this is so, then why am I seeing a hole in data? No, exometer_report does no such thing, and neither does exometer_report_graphite (is that the one you?re using?) If a cache would be added, it would need to be in the graphite reporter. That is, it could well be a core library function, but the caching would need to be done by the reporter instance. In the case you?re describing, 8 hours of data could end up being quite a lot, so depending on your workload, spooling to disk might be required. BR, Ulf Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulf@REDACTED Mon Oct 12 13:24:40 2015 From: ulf@REDACTED (Ulf Wiger) Date: Mon, 12 Oct 2015 13:24:40 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: References: Message-ID: > On 12 Oct 2015, at 13:19, Ulf Wiger wrote: > > In the case you?re describing, 8 hours of data could end up being quite a lot, so depending on your workload, spooling to disk might be required. ?or, crazy thought, using RabbitMQ as an intermediary. 
:) http://www.somic.org/2009/05/21/graphite-rabbitmq-integration/ https://github.com/Feuerlabs/exometer/blob/master/doc/exometer_report_amqp.md Some assembly might be required ? BR, Ulf W Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Mon Oct 12 13:58:11 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 12 Oct 2015 13:58:11 +0200 Subject: [erlang-questions] Facing problem with the installation of rabbitmqserver on arm64 In-Reply-To: References: Message-ID: On Mon, Oct 12, 2015 at 11:39 AM, suresh knv wrote: > > {application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}} > This is the underlying problem. Usually it means you have some odd configuration in and around the epmd application. Common errors are firewall rules, or that you have no/oddly-configured localhost interfaces. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto@REDACTED Mon Oct 12 15:09:31 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Mon, 12 Oct 2015 15:09:31 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: References: Message-ID: Thank you Ulf.? Yes, I'm using exometer_report_graphite [1]. Unfortunately this means that I have no idea what happened, i.e. what would make its queue grow like that when a connection is unavailable. [1] https://github.com/Feuerlabs/exometer/blob/master/src/exometer_report_graphite.erl -------------- next part -------------- An HTML attachment was scrubbed... URL: From t@REDACTED Mon Oct 12 15:14:36 2015 From: t@REDACTED (Tristan Sloughter) Date: Mon, 12 Oct 2015 08:14:36 -0500 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> Message-ID: <1444655676.545392.407872329.7AC25FB7@webmail.messagingengine.com> I'd love to see the postgres one even if it was just experimental. Someone else might take it up and continue with it. -- Tristan Sloughter t@REDACTED On Mon, Oct 12, 2015, at 04:13 AM, Richard Carlsson wrote: > On Mon, Oct 12, 2015 at 8:03 AM, Ulf Wiger wrote: >> * They stopped playing around and are now using ?mnesia_ext? with a leveldb backend >> > > Correct, we have been using leveldb as a backend for mnesia for some time now, and it works very well. The space savings (both on disk and in memory) are enormous. >> As I understand it, Klarna will publish the leveldb backend (the pgsql backend was an experiment, so I don?t know about that). > > We intended to do this sooner, but as usual, we've been kept busy with more pressing things. There is one major performance-improving change that we want to do before we push this out, so that people don't start relying on the old behaviour. Hope to have this done soonish. > ??? /Richard > > _________________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From s.j.thompson@REDACTED Mon Oct 12 15:55:32 2015 From: s.j.thompson@REDACTED (Simon Thompson) Date: Mon, 12 Oct 2015 14:55:32 +0100 Subject: [erlang-questions] CFP: Workshop on Real World Domain Specific Languages, March 2016 Message-ID: <57F1B9D7-FD15-4D3C-ACB8-FB0EA1AD1BA8@kent.ac.uk> CALL FOR PAPERS ================================= Workshop on Real World Domain Specific Languages https://sites.google.com/site/realworlddsl In conjunction with The International Symposium on Code Generation and Optimisation 2016. http://cgo.org/cgo2016 Barcelona, 12-13 March, 2016 ================================= As the use of computers proliferates, the complexity and variety of systems continues to grow. As a result, it is becoming increasingly inflexible to "hard wire? behaviours into software. Software developers can enable more control over their software configurations by exploiting Domain Specific Languages (DSLs). Such DSLs provide a systematic way to structure the underlying computational components: to coin a phrase, a DSL is a library with syntax. There is an enormous variety of DSLs for a very wide range of domains. Most DSLs are highly idiosyncratic, reflecting both the specific natures of their application domains and their designers? own preferences. This workshop will bring together constructors of DSLs for ?real world? domains; that is, DSLs intended primarily to aid in building software to solve real world problems rather than to explore the more theoretical aspects of language design and implementation. We are looking for submissions that present the motivation, design, implementation, use and evaluation of such DSLs. ================================= Key Dates: ---------------- Paper submission deadline: 10 January 2016. Author notification: 2 February 2016. Final manuscript due: 21 February 2016. Workshop: 12-13 March 2016. ---------------- Submission Instructions: The EasyChair submission page for this workshop is https://easychair.org/conferences/?conf=rwdsl16 Accepted submissions will be published in the ACM Digital Library within its International Conference Proceedings Series. Submissions should be 8-10 in ACM double-column format. Authors should follow the information for formatting ACM SIGPLAN conference papers, which can be found at http://www.sigplan.org/Resources/Author. We will provide more details of the submissions format and process at the beginning of November. ================================= Program Chairs Rob Stewart, Heriot-Watt University Greg Michaelson, Heriot-Watt University PC Members Jost Berthold, Commonwealth Bank of Australia Andy Gill, University of Kansas Kevin Hammond, University of St Andrews Rita Loogen, Philipps Universitat Marburg Patrick Maier, University of Glasgow Josef Svenningsson, Chalmers University Simon Thompson, University of Kent Phil Trinder, University of Glasgow Contact Please email inquiries concerning the workshop to: R.Stewart@REDACTED Simon Thompson | Professor of Logic and Computation School of Computing | University of Kent | Canterbury, CT2 7NF, UK s.j.thompson@REDACTED | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt From ulf@REDACTED Mon Oct 12 16:30:19 2015 From: ulf@REDACTED (Ulf Wiger) Date: Mon, 12 Oct 2015 16:30:19 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: References: Message-ID: > On 12 Oct 2015, at 15:09, Roberto Ostinelli wrote: > > Thank you Ulf.? Yes, I'm using exometer_report_graphite [1]. > > Unfortunately this means that I have no idea what happened, i.e. 
what would make its queue grow like that when a connection is unavailable. > > > > [1] https://github.com/Feuerlabs/exometer/blob/master/src/exometer_report_graphite.erl Well, the two potentially blocking operations it performs are gen_tcp:connect() (5-second timeout by default) and gen_tcp:send() If you want to create some form of buffering behavior, perhaps exometer_report_logger.erl might be useful. It?s a bit intricate, perhaps, but you can e.g. check the following code for ideas: https://github.com/Feuerlabs/exometer_core/blob/master/test/exometer_report_SUITE.erl https://github.com/Feuerlabs/exometer_core/blob/master/test/exometer_test_udp_reporter.erl But I think that in general, the idea with metrics is that you push them off-host if possible, otherwise let it be, until a connection is re-established. BR, Ulf W Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tobias.Schlager@REDACTED Mon Oct 12 17:14:13 2015 From: Tobias.Schlager@REDACTED (Tobias Schlager) Date: Mon, 12 Oct 2015 15:14:13 +0000 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: References: , Message-ID: <12F2115FD1CCEE4294943B2608A18FA301A2710ED2@MAIL01.win.lbaum.eu> Hi, Graphite does also support UDP. Maybe that could be an option for you. Fire and forget might in this case be better than loading your system. Another possibility would be to automatically close the TCP session when the other end gets unresponsive using the send_timeout option. Regards Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From shefys@REDACTED Mon Oct 12 18:16:26 2015 From: shefys@REDACTED (Yury Shefer) Date: Mon, 12 Oct 2015 09:16:26 -0700 Subject: [erlang-questions] httpc request - stream to file Message-ID: Hi, I'm studying Erlang and have a simple question. In my http request function i'm using stream to save the output to the file. The question is how to replace the file content instead of adding additional data to it? fetch_db_ver() -> httpc:request(get, {"http://www.domain.com/CURRENTDBVERSION",[]}, [], [{stream, "CURRENTDBVERSION"}] ). current_version() -> file:read_file("CURRENTDBVERSION"). So, instead of 32154810 I'm getting "3215334632153346321533463215334632154810" 19> db:current_version(). {ok,<<"3215334632153346321533463215334632154810">>} -- Thanks a lot, Yury. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.meadows.jonsson@REDACTED Mon Oct 12 20:11:51 2015 From: eric.meadows.jonsson@REDACTED (=?UTF-8?Q?Eric_Meadows=2DJ=C3=B6nsson?=) Date: Mon, 12 Oct 2015 13:11:51 -0500 Subject: [erlang-questions] httpc request - stream to file In-Reply-To: References: Message-ID: Delete the file before calling httpc:request. On Mon, Oct 12, 2015 at 11:16 AM, Yury Shefer wrote: > Hi, > > I'm studying Erlang and have a simple question. In my http request > function i'm using stream to save the output to the file. The question is > how to replace the file content instead of adding additional data to it? > > fetch_db_ver() -> > httpc:request(get, > {"http://www.domain.com/CURRENTDBVERSION",[]}, > [], > [{stream, "CURRENTDBVERSION"}] > ). > > current_version() -> > file:read_file("CURRENTDBVERSION"). > > > So, instead of > > 32154810 > > I'm getting "3215334632153346321533463215334632154810" > > 19> db:current_version(). > {ok,<<"3215334632153346321533463215334632154810">>} > > -- > Thanks a lot, > Yury. 
> > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -- Eric Meadows-J?nsson -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennethlakin@REDACTED Mon Oct 12 21:48:41 2015 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Mon, 12 Oct 2015 12:48:41 -0700 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> Message-ID: <561C0E99.4070803@gmail.com> On 10/12/2015 02:13 AM, Richard Carlsson wrote: > As I understand it, Klarna will publish the leveldb backend (the > pgsql backend was an experiment, so I don?t know about that). The Mnesia leveldb backend relies on mnesia_ext, which is a patch to Mnesia, right? If I'm not wrong about that, will one be required to make use of a patched Erlang/OTP, or are there plans to get the patch into Erlang, or -somehow- make it a library that can be included in one's software? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From steven.charles.davis@REDACTED Tue Oct 13 02:08:12 2015 From: steven.charles.davis@REDACTED (Steve Davis) Date: Mon, 12 Oct 2015 19:08:12 -0500 Subject: [erlang-questions] Mnesia Message-ID: Hi, I?m almost ashamed to say that it?s taken me over 5 years to come around to understanding the value of mnesia. I bought the ?well-known? negatives too fast. I have explored relational connectors, DHT solutions and a number of other approaches... It?s dawned on me *finally* that 90% of the time, a well implemented mnesia solution would have been better, faster, and cheaper. Did I mention "better"? Has anyone else had this experience? /s From steven.charles.davis@REDACTED Tue Oct 13 02:36:01 2015 From: steven.charles.davis@REDACTED (Steve Davis) Date: Mon, 12 Oct 2015 19:36:01 -0500 Subject: [erlang-questions] Mnesia In-Reply-To: References: Message-ID: Yes :) Let?s discuss what the tick boxes are. And be specific on where they break down (e.g. terabyte data storage). And given where they break down, what paths to take. I suspect there?s not too many use cases that aren?t addressed, and those unreasonably dominate discussion about mnesia. /s In general, > On Oct 12, 2015, at 7:29 PM, T Ty wrote: > > Always lol > > Use mnesia until it is no longer useful. As long as you have all the tick boxes marked mnesia is great. And once it breaks down then either consider moving to another system or radically changing your architecture the way WhatsApp did. > > On Tue, Oct 13, 2015 at 1:08 AM, Steve Davis > wrote: > Hi, > > I?m almost ashamed to say that it?s taken me over 5 years to come around to understanding the value of mnesia. > > I bought the ?well-known? negatives too fast. I have explored relational connectors, DHT solutions and a number of other approaches... > > It?s dawned on me *finally* that 90% of the time, a well implemented mnesia solution would have been better, faster, and cheaper. > > Did I mention "better"? > > Has anyone else had this experience? > > /s > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From martink@REDACTED Tue Oct 13 03:04:27 2015 From: martink@REDACTED (Martin Karlsson) Date: Tue, 13 Oct 2015 14:04:27 +1300 Subject: [erlang-questions] Mnesia In-Reply-To: References: Message-ID: Hi Steve, I'm with you:) The negatives are being mentioned a lot and unfortunately put people off instead of just being used as good things to know about when you are picking your technology. If you know about the quirks you can usually come up with a workaround that matches your use-case. It is about weighing pros and cons. Why not list a few of the negatives and what you can do to work around them. PROBLEM: Net splits must be manually handled. WORKAROUND: - https://github.com/uwiger/unsplit for automatic healing - Use the majority options (when you need to heal you choose the majority partition when you pick master nodes). This should reduce (eliminate?) risk of data loss. PROBLEM: disc_only_copies uses dets and is limited to 2GB WORKAROUND: - Partition the tables - Use the mnesia_ex patch with leveldb backend (to be released soon I think, it was mentioned in another email just recently) - Perhaps you can use disc_copies and hold your data-set in RAM PROBLEM: Prone to overloading WORKAROUND: Load regulate. Reduce concurrency. I think I read somewhere that mnesia and ets works best with a max concurrency of the number of scheduler threads and degrades from there on. PROBLEM: Slow startup WORKAROUND: Increase the number of table loaders. PROBLEM: Upgrading table definition. transform_table can have problems with big and distributed tables. WORKAROUND: Treat large tables like a key value store to reduce risk of having to modify the record I'm sure there are more problems and workarounds especially if you start pushing it (for example like whatsapp did). Also I've heard dirty_write is not safe with replicated tables. I don't know if this is true but is holding me back from using dirty transactions rather than transactions (which I don't need apart from safety on replicated tables). If anyone knows about this one please let me know. Feel free to add/remove/correct items:) Cheers, Martin On 13 October 2015 at 13:08, Steve Davis wrote: > Hi, > > I?m almost ashamed to say that it?s taken me over 5 years to come around to understanding the value of mnesia. > > I bought the ?well-known? negatives too fast. I have explored relational connectors, DHT solutions and a number of other approaches... > > It?s dawned on me *finally* that 90% of the time, a well implemented mnesia solution would have been better, faster, and cheaper. > > Did I mention "better"? > > Has anyone else had this experience? > > /s > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From g@REDACTED Tue Oct 13 03:48:11 2015 From: g@REDACTED (Garrett Smith) Date: Mon, 12 Oct 2015 20:48:11 -0500 Subject: [erlang-questions] Mnesia In-Reply-To: References: Message-ID: There's no substitute for production experience. If Mnesia is what you really really really want to use - it's super awesome - just dive in and use it. As you say, if you run into issues, you can just work around them. Personally, I'm interested in the physics of a system at scale, less so the brands involved. Get something in motion, measure it, fix it. 
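A minimal sketch of two of the workarounds listed above (the record, table and node names are made up for illustration): a replicated disc_copies table created with the majority option, and a fragmented disc_only table so that no single dets file has to grow towards the 2GB limit.

-record(user, {id, name}).

create_tables(Nodes) ->
    %% Replicated table; with {majority, true} a write only succeeds when a
    %% majority of the replicas are reachable, which limits the damage a
    %% netsplit can do.
    {atomic, ok} =
        mnesia:create_table(user,
                            [{disc_copies, Nodes},
                             {majority, true},
                             {attributes, record_info(fields, user)}]),
    %% Fragmented disc_only table: 16 fragments spread over the node pool,
    %% one disc_only copy of each fragment.
    {atomic, ok} =
        mnesia:create_table(user_event,
                            [{attributes, [id, event]},
                             {frag_properties, [{n_fragments, 16},
                                                {node_pool, Nodes},
                                                {n_disc_only_copies, 1}]}]),
    ok.

Access to the fragmented table then goes through mnesia:activity(transaction, Fun, [], mnesia_frag), and after a netsplit mnesia:set_master_nodes/2 is one way to decide which partition's data to keep.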
On Mon, Oct 12, 2015 at 8:04 PM, Martin Karlsson wrote: > Hi Steve, > > I'm with you:) > > The negatives are being mentioned a lot and unfortunately put people > off instead of just being used as good things to know about when you > are picking your technology. > > If you know about the quirks you can usually come up with a workaround > that matches your use-case. > > It is about weighing pros and cons. Why not list a few of the > negatives and what you can do to work around them. > > PROBLEM: Net splits must be manually handled. > WORKAROUND: > - https://github.com/uwiger/unsplit for automatic healing > - Use the majority options (when you need to heal you choose the > majority partition when you pick master nodes). This should reduce > (eliminate?) risk of data loss. > > PROBLEM: disc_only_copies uses dets and is limited to 2GB > WORKAROUND: > - Partition the tables > - Use the mnesia_ex patch with leveldb backend (to be released soon I > think, it was mentioned in another email just recently) > - Perhaps you can use disc_copies and hold your data-set in RAM > > PROBLEM: Prone to overloading > WORKAROUND: Load regulate. Reduce concurrency. I think I read > somewhere that mnesia and ets works best with a max concurrency of the > number of scheduler threads and degrades from there on. > > PROBLEM: Slow startup > WORKAROUND: Increase the number of table loaders. > > PROBLEM: Upgrading table definition. transform_table can have problems > with big and distributed tables. > WORKAROUND: Treat large tables like a key value store to reduce risk > of having to modify the record > > I'm sure there are more problems and workarounds especially if you > start pushing it (for example like whatsapp did). > > Also I've heard dirty_write is not safe with replicated tables. I > don't know if this is true but is holding me back from using dirty > transactions rather than transactions (which I don't need apart from > safety on replicated tables). If anyone knows about this one please > let me know. > > Feel free to add/remove/correct items:) > > Cheers, > Martin > > > On 13 October 2015 at 13:08, Steve Davis wrote: >> Hi, >> >> I?m almost ashamed to say that it?s taken me over 5 years to come around to understanding the value of mnesia. >> >> I bought the ?well-known? negatives too fast. I have explored relational connectors, DHT solutions and a number of other approaches... >> >> It?s dawned on me *finally* that 90% of the time, a well implemented mnesia solution would have been better, faster, and cheaper. >> >> Did I mention "better"? >> >> Has anyone else had this experience? >> >> /s >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From zhuoyikang@REDACTED Tue Oct 13 05:19:10 2015 From: zhuoyikang@REDACTED (yikang zhuo) Date: Tue, 13 Oct 2015 11:19:10 +0800 Subject: [erlang-questions] Memory Leak Message-ID: after i eval gc manually: X=[erlang:garbage_collect(P) || P <- erlang:processes(), {status, waiting} == erlang:process_info(P, status)]. top res -> 8.3g erlang:memory() -> 3.6G [{total,3694253080}, {processes,3301211768}, {processes_used,3300343808}, {system,393041312}, {atom,744345}, {atom_used,715567}, {binary,254047488}, {code,18840350}, {ets,61244696}] wow.. 
does memory Leak in ejabberd or erlang 17.5 erts-6.4 -------------- next part -------------- An HTML attachment was scrubbed... URL: From vances@REDACTED Tue Oct 13 05:43:29 2015 From: vances@REDACTED (Vance Shipley) Date: Tue, 13 Oct 2015 09:13:29 +0530 Subject: [erlang-questions] Memory Leak In-Reply-To: References: Message-ID: On Tue, Oct 13, 2015 at 8:49 AM, yikang zhuo wrote: > top res -> 8.3g > erlang:memory() -> 3.6G ... > wow.. does memory Leak in ejabberd or erlang 17.5 erts-6.4 Not in the buggy way you may suspect: https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole Jump down to the section titled "I Just Keep on Bleeding and I Won't Die". -- -Vance From cchalasani@REDACTED Tue Oct 13 03:02:51 2015 From: cchalasani@REDACTED (Chaitanya Chalasani) Date: Tue, 13 Oct 2015 06:32:51 +0530 Subject: [erlang-questions] Mnesia In-Reply-To: References: Message-ID: <85BE6422-4309-465C-B830-6066CA939282@me.com> I have almost always managed to delivery solutions using just mnesia. In one case when the data volumes were too high, I have partitioned the data model to use only mnesia for transactions and the rest to oracle. > On 13-Oct-2015, at 5:38 AM, Steve Davis wrote: > > Hi, > > I?m almost ashamed to say that it?s taken me over 5 years to come around to understanding the value of mnesia. > > I bought the ?well-known? negatives too fast. I have explored relational connectors, DHT solutions and a number of other approaches... > > It?s dawned on me *finally* that 90% of the time, a well implemented mnesia solution would have been better, faster, and cheaper. > > Did I mention "better"? > > Has anyone else had this experience? > > /s > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From mjtruog@REDACTED Tue Oct 13 07:04:35 2015 From: mjtruog@REDACTED (Michael Truog) Date: Mon, 12 Oct 2015 22:04:35 -0700 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: <561C0E99.4070803@gmail.com> References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> <561C0E99.4070803@gmail.com> Message-ID: <561C90E3.10900@gmail.com> On 10/12/2015 12:48 PM, Kenneth Lakin wrote: > On 10/12/2015 02:13 AM, Richard Carlsson wrote: >> As I understand it, Klarna will publish the leveldb backend (the >> pgsql backend was an experiment, so I don?t know about that). > The Mnesia leveldb backend relies on mnesia_ext, which is a patch to > Mnesia, right? > > If I'm not wrong about that, will one be required to make use of a > patched Erlang/OTP, or are there plans to get the patch into Erlang, or > -somehow- make it a library that can be included in one's software? > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions It looks like the current patch was being tracked at https://github.com/erlang/otp/pull/783 and that the changes needed more changes before merging on a future pull request (based on this email thread). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mjtruog@REDACTED Tue Oct 13 07:18:27 2015 From: mjtruog@REDACTED (Michael Truog) Date: Mon, 12 Oct 2015 22:18:27 -0700 Subject: [erlang-questions] Memory Leak In-Reply-To: References: Message-ID: <561C9423.5070907@gmail.com> On 10/12/2015 08:43 PM, Vance Shipley wrote: > On Tue, Oct 13, 2015 at 8:49 AM, yikang zhuo wrote: >> top res -> 8.3g >> erlang:memory() -> 3.6G > ... >> wow.. does memory Leak in ejabberd or erlang 17.5 erts-6.4 > Not in the buggy way you may suspect: > https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole > > Jump down to the section titled "I Just Keep on Bleeding and I Won't Die". > > There was a GC bug in 17.5 mentioned at http://erlang.org/pipermail/erlang-questions/2015-June/085032.html as OTP-12821 . You may not be running into that though. I have seen abnormal GC memory growth with the 17.5 release (from the download page) but not the 18.0 release, and am not sure about the exact changes it was related to (only that it caused the Erlang VM to be killed by the Linux OS, if you are interested in the testing, it is at https://groups.google.com/forum/#!topic/cloudi-questions/bw2D7YOFtKU). From ok@REDACTED Tue Oct 13 07:57:38 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Tue, 13 Oct 2015 18:57:38 +1300 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <001801d10244$4095dd10$c1c19730$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> <001801d10244$4095dd10$c1c19730$@frcuba.co.cu> Message-ID: On 9/10/2015, at 4:40 pm, Ivan Carmenates Garcia wrote: > Regards, > > Well seems to me that there is no more optimizations for the algorithm > (except for yours 'my_reverse_join' see below), I tested yours and mine and > they both take exactly the same for the final result the unparsed "string", > well mine is 200 millisecond faster in 1 million of iterations, yours take > 64XX main 62XX approx. The problem is that using lists:flatten is the hell > slow, The problem is that I suggested *NOT* using lists:flatten/1. By the way, you really do not *have* to use any particular built in function. A few minutes' coding produced this result for flattening a tree with a million elements: - Using lists:flatten/1 : 123,939 microseconds - Using my own code : 63,134 microseconds The effect of my code isn't precisely the same as that of lists:flatten/1, but for a tree of strings that doesn't matter. my_flatten([A,B,C|Xs], R) -> my_flatten(A, my_flatten(B, my_flatten(C, my_flatten(Xs, R)))); my_flatten([A|Xs], R) -> my_flatten(A, my_flatten(Xs, R)); my_flatten([], R) -> R; my_flatten(X, R) -> [X|R]. > I also tested your algorithm just after the io lists return and it is > very faster its took 14XX milliseconds, so when you use the final > lists:flatten/1 everything goes to crap, sorry about the word. So it is > amazing who faster it is and then lists:flatten by its own take the another > 5 seconds, I did knew that because I read Erlang session about 7 myths off > bla bla .. and optimizations stuffs some time ago and they say lists:flatten > is slow also I tested it when I was constructing the algorithm first time I > did avoid constructing the io lists because of that. No, the whole point of io lists is to *NOT* flatten them. 
The advice, in short, was to structure the rest of your program so that you don't NEED to do the flattening. > > I did clear some things like unparse well I need another name I don't like > unparse either and parse is wrong, I will come up with something. Maybe you don't *like* "unparse", but it *is* the standard technical term. Type "unparsing" into the search box of your browser and read! > I also have questions for you if you are so kind to answer. > > Regards proplists, well, it is necessary to make lists of options as > proplists? No, of course not. Even when you encode a list of options as something *like* a proplist, the structure you want will usually be stricter. In any case, what you have here is simply not a list of options. The elements are not options, they are descriptions of columns you want to get back from some data base query. > yes, proplists:compact/1 and expand/1 for [a] to [{a, true}] and > all those stuffs but what I was trying to do is to make it simple for the > user, because more tuples are more complicate to write in the case of > {users, name, alias} (a) Remember, one of the essential things about *being* a proplist is that the consumer is going to IGNORE anything that is not an atom or a pair. If you are not going to do that, you are not treating the data *as* a proplist, and it is "lying to the user" to call it one. (b) Far from making things simpler for the user, you made it much more confusing. There are places where an alias *can* be supplied and places where it *can't*, and there doesn't seem to be any reason for that. Well, it confused the heck out of *me*. > well that could be right adding more brackets like > {users, {name, alias}} because it organizes the thing. But i.e.: I have > another algorithm for match specs in which I can do {name | {table, name} | > value, op, name | {table, name} | value } also [{..., op, ...}, ...], > logic_op, ... so it will be easy for the user to build something like > [ > [{id, '==', 1}, {age, '==', 31}], > 'or', > {username, '==', "john"} > ] There are programming languages in which building queries as data structures is easy. In Lisp, (make-query '(or (and (= id 1) (= age 31)) (= username "john"))) is easy because that's *exactly* what an expression would look like in Lisp. In R, make_query(id = 1 && age == 31 || username == "john") is easy because R functions evaluate their arguments lazily and can get at the abstract syntax trees of those arguments, so again the syntax is exactly what an expression normally looks like. Erlang really isn't one of those languages, but there's a feature of Erlang that means you can get a lot closer. You could use a parse transform to take something like make_query(Id == 1 andalso Age == 31 orelse UserName == "John") which the parser turns into an abstract syntax tree, and then your parse transform could take make_query(...) and turn that into whatever data structure you like. It's hardER in Erlang than it is in Lisp, but it's still much easier than it is with C or Java. (And of course LFE would do it exactly the way Lisp does it.) If I were a user of your system, I would be somewhere between baffled and outraged at the claim that it would be *easy* for me to construct a query in either of the forms you mention. This is perhaps the ideal time to mention the idea of STAGED computation, also known as partial evaluation or partial execution. The function we started discussing looks like a textbook case of something you probably shouldn't be doing at run time anyway. 
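To make the parse transform suggestion concrete, a minimal sketch (not part of OTP, and not the code under discussion): make_query/1 is just a made-up marker, and the transform replaces every local call to it with the abstract syntax tree of its argument, so the "query" reaches run time as a data structure instead of being evaluated.

-module(make_query_transform).
-export([parse_transform/2]).

%% Walk every form and rewrite calls to the (hypothetical) make_query/1.
parse_transform(Forms, _Options) ->
    [walk(Form) || Form <- Forms].

%% A local call make_query(Expr) becomes code that rebuilds the abstract
%% syntax tree of Expr at run time.
walk({call, _Line, {atom, _, make_query}, [Arg]}) ->
    erl_parse:abstract(walk(Arg));
walk(Tuple) when is_tuple(Tuple) ->
    list_to_tuple(walk(tuple_to_list(Tuple)));
walk(List) when is_list(List) ->
    [walk(E) || E <- List];
walk(Other) ->
    Other.

Whatever consumes the resulting tree can then translate it into SQL, a match spec, or whatever the framework needs.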
From carlsson.richard@REDACTED Tue Oct 13 09:01:47 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Tue, 13 Oct 2015 09:01:47 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: <561C90E3.10900@gmail.com> References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> <561C0E99.4070803@gmail.com> <561C90E3.10900@gmail.com> Message-ID: Yeah, I'm working on a new pull request based on 18.1.1. /Richard On Tue, Oct 13, 2015 at 7:04 AM, Michael Truog wrote: > On 10/12/2015 12:48 PM, Kenneth Lakin wrote: > > On 10/12/2015 02:13 AM, Richard Carlsson wrote: > > As I understand it, Klarna will publish the leveldb backend (the > pgsql backend was an experiment, so I don?t know about that). > > The Mnesia leveldb backend relies on mnesia_ext, which is a patch to > Mnesia, right? > > If I'm not wrong about that, will one be required to make use of a > patched Erlang/OTP, or are there plans to get the patch into Erlang, or > -somehow- make it a library that can be included in one's software? > > > > > > _______________________________________________ > erlang-questions mailing listerlang-questions@REDACTED://erlang.org/mailman/listinfo/erlang-questions > > It looks like the current patch was being tracked at > https://github.com/erlang/otp/pull/783 and that the changes needed more > changes before merging on a future pull request (based on this email > thread). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhuoyikang@REDACTED Tue Oct 13 10:21:46 2015 From: zhuoyikang@REDACTED (yikang zhuo) Date: Tue, 13 Oct 2015 16:21:46 +0800 Subject: [erlang-questions] Memory Leak In-Reply-To: References: Message-ID: "I Just Keep on Bleeding and I Won't Die " very humorous... i follow your post the see what happend in my erlang node , and the binary allocted is also my biggest problem, it almost have 5G?and the usage rate is very low rp(recon_alloc:fragmentation(current)). {mbcs_carriers_size,1438941360 {mbcs_carriers_size,908361904 {mbcs_carriers_size,928284848 {mbcs_carriers_size,817135792 {mbcs_carriers_size,556302512 {mbcs_carriers_size,479494320 {mbcs_carriers_size,399278256 {mbcs_carriers_size,271876272 {mbcs_carriers_size,32944 sum -> 5 799 708 208 {mbcs_usage,0.09427699263575272}, {mbcs_usage,0.010110173004349157}, {mbcs_usage,0.05721490350125805}, {mbcs_usage,0.03553723173589733}, {mbcs_usage,0.0037855806051078915}, {mbcs_usage,0.002756687503618395}, {mbcs_usage,0.0015088825673492223}, {mbcs_usage,0.0015768937717374615}, {mbcs_usage,0.003399708596406022}, but it look like a bug more than alloctor fragment , the default alloctor strage is {as,aoffcbf}]}, does aoffcbf can as bad as such low usage... my erlang is erts6.4 r17.5 , this post describle a memory leak about r17.3 http://erlang.2086793.n4.nabble.com/Possibly-memory-leak-in-R17-td4690007.html https://github.com/erlang/otp/blob/maint/erts/emulator/internal_doc/CarrierMigration.md#searching-the-pool but i use r17.5. rp(erlang:system_info({allocator,binary_alloc})). 
[{instance,0, [{versions,"0.9","3.0"}, {options,[{e,true}, {t,true}, {ramv,false}, {sbct,524288}, {asbcst,4145152}, {rsbcst,20}, {rsbcmt,80}, {rmbcmt,50}, {mmbcs,32768}, {mmmbc,18446744073709551615}, {mmsbc,256}, {lmbcs,5242880}, {smbcs,262144}, {mbcgs,10}, {acul,0}, {as,aoffcbf}]}, does -->> all [{{binary_alloc,1}, [{sbcs_usage,1.0}, {mbcs_usage,0.09427699263575272}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,135659064}, {mbcs_carriers_size,1438941360}]}, {{binary_alloc,4}, [{sbcs_usage,1.0}, {mbcs_usage,0.010110173004349157}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,9183696}, {mbcs_carriers_size,908361904}]}, {{binary_alloc,2}, [{sbcs_usage,1.0}, {mbcs_usage,0.05721490350125805}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,53111728}, {mbcs_carriers_size,928284848}]}, {{binary_alloc,3}, [{sbcs_usage,1.0}, {mbcs_usage,0.03553723173589733}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,29038744}, {mbcs_carriers_size,817135792}]}, {{binary_alloc,5}, [{sbcs_usage,1.0}, {mbcs_usage,0.0037855806051078915}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,2105928}, {mbcs_carriers_size,556302512}]}, {{binary_alloc,6}, [{sbcs_usage,1.0}, {mbcs_usage,0.002756687503618395}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,1321816}, {mbcs_carriers_size,479494320}]}, {{binary_alloc,7}, [{sbcs_usage,1.0}, {mbcs_usage,0.0015088825673492223}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,602464}, {mbcs_carriers_size,399278256}]}, {{binary_alloc,8}, [{sbcs_usage,1.0}, {mbcs_usage,0.0015768937717374615}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,428720}, {mbcs_carriers_size,271876272}]}, ? 2015-10-13 11:43 GMT+08:00 Vance Shipley : > On Tue, Oct 13, 2015 at 8:49 AM, yikang zhuo wrote: > > top res -> 8.3g > > erlang:memory() -> 3.6G > ... > > wow.. does memory Leak in ejabberd or erlang 17.5 erts-6.4 > > Not in the buggy way you may suspect: > https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole > > Jump down to the section titled "I Just Keep on Bleeding and I Won't Die". > > > -- > -Vance > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikpelinux@REDACTED Tue Oct 13 09:41:54 2015 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Tue, 13 Oct 2015 09:41:54 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: <561C0E99.4070803@gmail.com> References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> <561C0E99.4070803@gmail.com> Message-ID: <22044.46530.616247.300935@gargle.gargle.HOWL> Kenneth Lakin writes: > On 10/12/2015 02:13 AM, Richard Carlsson wrote: > > As I understand it, Klarna will publish the leveldb backend (the > > pgsql backend was an experiment, so I don?t know about that). > > The Mnesia leveldb backend relies on mnesia_ext, which is a patch to > Mnesia, right? > > If I'm not wrong about that, will one be required to make use of a > patched Erlang/OTP, or are there plans to get the patch into Erlang, or > -somehow- make it a library that can be included in one's software? There is an open P.R. to include mnesia_ext in OTP, and we have published versions for R15, R16, and 17 on Klarna's github for those that prefer stable releases over bleeding edge. The R15 and R16 versions have had substantial testing in production. You can bundle a patched OTP, or just a patched mnesia, with your application (mnesia itself is just a library, and mnesia_ext doesn't patch the VM). 
Our Erlang systems use locally patched stable OTP releases anyway, so adding one more patch for mnesia_ext is no big deal. From mikpelinux@REDACTED Tue Oct 13 09:43:33 2015 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Tue, 13 Oct 2015 09:43:33 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: <1444655676.545392.407872329.7AC25FB7@webmail.messagingengine.com> References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> <1444655676.545392.407872329.7AC25FB7@webmail.messagingengine.com> Message-ID: <22044.46629.802283.964778@gargle.gargle.HOWL> Tristan Sloughter writes: > I'd love to see the postgres one even if it was just experimental. > Someone else might take it up and continue with it. I'm working on getting that one open-sourced, hopefully this week. From max.lapshin@REDACTED Tue Oct 13 13:54:15 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Tue, 13 Oct 2015 14:54:15 +0300 Subject: [erlang-questions] driver_alloc high usage Message-ID: Hi. Customer came with our flussonic eating too much memory. (flussonic@REDACTED)20> recon_alloc:memory(allocated_types). [{binary_alloc,36737328}, {driver_alloc,13561794864}, {eheap_alloc,101822768}, {ets_alloc,83661104}, {fix_alloc,7377200}, {ll_alloc,82317416}, {sl_alloc,823600}, {std_alloc,2396464}, {temp_alloc,3279800}] Recon alloc told me that 13 GB is used by driver_alloc. There is my driver mpegts_udp that reads UDP with mpegts (unfortunately native inet driver cannot do it, because it consumes too much CPU for each message, so we glue several packets into one long packet, reducing amount of packets by 10-20 and load by 2-3 times). Perhaps my driver can leak. Is it possible to get any other information about this driver_alloc, how is this memory used, etc? -------------- next part -------------- An HTML attachment was scrubbed... URL: From garazdawi@REDACTED Tue Oct 13 14:39:01 2015 From: garazdawi@REDACTED (Lukas Larsson) Date: Tue, 13 Oct 2015 14:39:01 +0200 Subject: [erlang-questions] driver_alloc high usage In-Reply-To: References: Message-ID: You can use instrument and some combination of "+Mis true" and "+Mim true" to get some more details about where the memory is used. See http://www.erlang.org/doc/man/instrument.html for details. Unfortunately it will group all allocations done by driver_alloc into one group. On Tue, Oct 13, 2015 at 1:54 PM, Max Lapshin wrote: > Hi. > > Customer came with our flussonic eating too much memory. > > (flussonic@REDACTED)20> recon_alloc:memory(allocated_types). > > [{binary_alloc,36737328}, > > {driver_alloc,13561794864}, > > {eheap_alloc,101822768}, > > {ets_alloc,83661104}, > > {fix_alloc,7377200}, > > {ll_alloc,82317416}, > > {sl_alloc,823600}, > > {std_alloc,2396464}, > > {temp_alloc,3279800}] > > > > Recon alloc told me that 13 GB is used by driver_alloc. > > > There is my driver mpegts_udp that reads UDP with mpegts (unfortunately > native inet driver cannot do it, because it consumes too much CPU for each > message, so we glue several packets into one long packet, reducing amount > of packets by 10-20 and load by 2-3 times). Perhaps my driver can leak. > > > Is it possible to get any other information about this driver_alloc, how > is this memory used, etc? > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... 
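Following up on the instrumentation flags mentioned above, a rough sketch of how the old instrument module (tools application) is used; it only works if the emulator was started with the flags, e.g. erl +Mim true +Mis true, and it attributes every live block to an allocator type, so a steadily growing driver_alloc total would point back at the driver:

MD = instrument:memory_data(),
io:format("bytes held in instrumented blocks: ~p~n",
          [instrument:sum_blocks(MD)]).

Treat the exact function names as approximate; the instrument documentation in the tools application has the authoritative API.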
URL: From max.lapshin@REDACTED Tue Oct 13 14:48:09 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Tue, 13 Oct 2015 15:48:09 +0300 Subject: [erlang-questions] driver_alloc high usage In-Reply-To: References: Message-ID: Ok, will try to. Also I've looked at difference between driver_alloc and driver_free count and possibly I've found problem. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergej.jurecko@REDACTED Tue Oct 13 14:59:20 2015 From: sergej.jurecko@REDACTED (=?UTF-8?Q?Sergej_Jure=C4=8Dko?=) Date: Tue, 13 Oct 2015 14:59:20 +0200 Subject: [erlang-questions] unpacking big/little endian speed difference Message-ID: How come unpacking integers as big endian is faster than little endian, when I'm running on a little endian platform (intel osx)? I know big endian is erlang default, but if you're turning binaries into integers, you must turn them into little endian anyway, so unpacking should basically be memmcpy. The test: % Takes: ~1.3s -define(ENDIAN,unsigned-big). % Takes: ~1.8s % -define(ENDIAN,unsigned-little). loop() -> L = [<<(random:uniform(1000000000)):32/?ENDIAN,1:32>> || _ <- lists:seq(1,1000)], S = os:timestamp(), loop1(1000,L), timer:now_diff(os:timestamp(),S). loop1(0,_) -> ok; loop1(N,L) -> lists:sort(fun(<>,<>) -> A =< B end, L), loop1(N-1,L). -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony@REDACTED Tue Oct 13 16:21:01 2015 From: tony@REDACTED (Tony Rogvall) Date: Tue, 13 Oct 2015 16:21:01 +0200 Subject: [erlang-questions] unpacking big/little endian speed difference In-Reply-To: References: Message-ID: Checking the code with a simple disassembler, that can inspect the loaded code, using erts_debug:disassemble: big-endian: > dis:func(decode_bench, '-loop/2-fun-0-', 2). 000000001813E9A8: [i_func_info_IaaI,0,decode_bench,'-loop/2-fun-0-',2] 000000001813E9D0: [i_bs_start_match2_rfIId,{x,0},{f,403958184},2,0,{x,0}] 000000001813E9F8: [i_bs_get_integer_32_rfId,{x,0},{f,403958480},2,{x,2}] 000000001813EA18: [i_bs_skip_bits_all2_frI,{f,403958480},{x,0},8] 000000001813EA30: [i_bs_start_match2_xfIId,{x,1},{f,403958480},3,0,{x,3}] 000000001813EA60: [i_bs_get_integer_32_xfId,{x,3},{f,403958480},4,{x,4}] 000000001813EA88: [bs_test_unit8_fx,{f,403958480},{x,3}] 000000001813EAA0: [i_fetch_xx,{x,2},{x,4}] 000000001813EAB0: [i_bif2_body_bd,'=<','/',2,{x,0}] 000000001813EAC8: [return] 000000001813EAD0: [bs_context_to_binary_r,{x,0}] 000000001813EAD8: [i_is_lt_spec_frr,{f,403958184},{x,0},{x,0}] The same for little-endian: > dis:func(decode_bench, '-loop/2-fun-0-', 2). 0000000018121D30: [i_func_info_IaaI,0,decode_bench,'-loop/2-fun-0-',2] 0000000018121D58: [i_bs_start_match2_rfIId,{x,0},{f,403840304},2,0,{x,0}] 0000000018121D80: [i_bs_get_integer_small_imm_rIfId, {x,0}, 32, {f,403840616}, 2, {x,2}] 0000000018121DA8: [i_bs_skip_bits_all2_frI,{f,403840616},{x,0},8] 0000000018121DC0: [i_bs_start_match2_xfIId,{x,1},{f,403840616},3,0,{x,3}] 0000000018121DF0: [i_bs_get_integer_small_imm_xIfId, {x,3}, 32, {f,403840616}, 2, {x,4}] 0000000018121E20: [bs_test_unit8_fx,{f,403840616},{x,3}] 0000000018121E38: [i_fetch_xx,{x,2},{x,4}] 0000000018121E48: [i_bif2_body_bd,'=<','/',2,{x,0}] 0000000018121E60: [return] 0000000018121E68: [bs_context_to_binary_r,{x,0}] 0000000018121E70: [i_is_lt_spec_frr,{f,403840304},{x,0},{x,0}] The difference between i_bs_get_integer_32_rfId and i_bs_get_integer_small_imm_xIfId this is most probably the cause of the performance difference. 
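A related convenience, separate from the dis wrapper used above: the undocumented helper erts_debug:df/1 dumps the loaded (specialised) instructions for a module to a <module>.dis text file in the current directory, which is enough to check whether the i_bs_get_integer_32 fast path was selected. A sketch, assuming the benchmark module is called endian:

c(endian).
erts_debug:df(endian).
%% then open endian.dis and compare the bs_get_integer instructions

Since df/1 is not a documented API, treat it as a debugging convenience that may change between releases.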
/Tony > On 13 okt 2015, at 14:59, Sergej Jure?ko wrote: > > How come unpacking integers as big endian is faster than little endian, when I'm running on a little endian platform (intel osx)? > I know big endian is erlang default, but if you're turning binaries into integers, you must turn them into little endian anyway, so unpacking should basically be memmcpy. > > The test: > > % Takes: ~1.3s > -define(ENDIAN,unsigned-big). > > % Takes: ~1.8s > % -define(ENDIAN,unsigned-little). > > loop() -> > L = [<<(random:uniform(1000000000)):32/?ENDIAN,1:32>> || _ <- lists:seq(1,1000)], > S = os:timestamp(), > loop1(1000,L), > timer:now_diff(os:timestamp(),S). > > loop1(0,_) -> > ok; > loop1(N,L) -> > lists:sort(fun(<>,<>) -> A =< B end, L), > loop1(N-1,L). > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From kostis@REDACTED Tue Oct 13 16:31:05 2015 From: kostis@REDACTED (Kostis Sagonas) Date: Tue, 13 Oct 2015 16:31:05 +0200 Subject: [erlang-questions] unpacking big/little endian speed difference In-Reply-To: References: Message-ID: <561D15A9.70104@cs.ntua.gr> On 10/13/2015 02:59 PM, Sergej Jure?ko wrote: > How come unpacking integers as big endian is faster than little endian, > when I'm running on a little endian platform (intel osx)? > I know big endian is erlang default, but if you're turning binaries into > integers, you must turn them into little endian anyway, so unpacking > should basically be memmcpy. > > The test: > > % Takes: ~1.3s > -define(ENDIAN,unsigned-big). > > % Takes: ~1.8s > % -define(ENDIAN,unsigned-little). Well, if you are interested in performance, simply compile to native code your module and the time will drop to less than half... ================================================================ Eshell V7.1 (abort with ^G) 1> c(endian). {ok,endian} 2> endian:loop(). 1097971 3> endian:loop(). 1327621 4> endian:loop(). 1162506 5> endian:loop(). 1103813 6> c(endian, [native]). {ok,endian} 7> endian:loop(). 414000 8> endian:loop(). 441327 ================================================================ Kostis From co7eb@REDACTED Tue Oct 13 16:40:47 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Tue, 13 Oct 2015 10:40:47 -0400 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> <001801d10244$4095dd10$c1c19730$@frcuba.co.cu> Message-ID: <003301d105c5$2767b820$76372860$@frcuba.co.cu> Hi Richard, Yes I will definably like to build queries like list comprehensions or like your nice example, but I don't understand so well parse transform because I didn't find so much documentation about it in Erlang doc, I was trying with qlc but yet nothing I could archive. If you could point me in the right direction to get a good reference and documentation for doing parse transforms, that would be nice because that will be definitely some well form improvement for my little framework. Regards, Ivan (son of Gilberio). 
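For reference, the {parse_transform, Module} compiler option is described in the compile module documentation, and erl_id_trans.erl (shipped with the OTP sources under lib/stdlib/examples/) is the customary starting point, since it walks the whole abstract format while changing nothing. The calling side is just a compile attribute; a minimal sketch, assuming a hypothetical transform module make_query_transform that turns the argument of make_query/1 into its abstract syntax tree (the transform module must be compiled and on the code path before this file is compiled):

-module(query_demo).
-compile({parse_transform, make_query_transform}).
-export([example/0]).

%% After the transform has run there is no call to make_query/1 left and
%% Id, Age and UserName never exist as run-time variables; example/0
%% simply returns a data structure describing the expression.
example() ->
    make_query(Id == 1 andalso Age == 31 orelse UserName == "John").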
-----Original Message----- From: Richard A. O'Keefe [mailto:ok@REDACTED] Sent: Tuesday, October 13, 2015 1:58 AM To: Ivan Carmenates Garcia Cc: erlang-questions@REDACTED Subject: Re: [erlang-questions] Coming Back (maybe improving lists:reverse/1) On 9/10/2015, at 4:40 pm, Ivan Carmenates Garcia wrote: > Regards, > > Well seems to me that there is no more optimizations for the algorithm > (except for yours 'my_reverse_join' see below), I tested yours and > mine and they both take exactly the same for the final result the > unparsed "string", well mine is 200 millisecond faster in 1 million of > iterations, yours take 64XX main 62XX approx. The problem is that > using lists:flatten is the hell slow, The problem is that I suggested *NOT* using lists:flatten/1. By the way, you really do not *have* to use any particular built in function. A few minutes' coding produced this result for flattening a tree with a million elements: - Using lists:flatten/1 : 123,939 microseconds - Using my own code : 63,134 microseconds The effect of my code isn't precisely the same as that of lists:flatten/1, but for a tree of strings that doesn't matter. my_flatten([A,B,C|Xs], R) -> my_flatten(A, my_flatten(B, my_flatten(C, my_flatten(Xs, R)))); my_flatten([A|Xs], R) -> my_flatten(A, my_flatten(Xs, R)); my_flatten([], R) -> R; my_flatten(X, R) -> [X|R]. > I also tested your algorithm just after the io lists return and it is > very faster its took 14XX milliseconds, so when you use the final > lists:flatten/1 everything goes to crap, sorry about the word. So it > is amazing who faster it is and then lists:flatten by its own take the > another > 5 seconds, I did knew that because I read Erlang session about 7 myths > off bla bla .. and optimizations stuffs some time ago and they say > lists:flatten is slow also I tested it when I was constructing the > algorithm first time I did avoid constructing the io lists because of that. No, the whole point of io lists is to *NOT* flatten them. The advice, in short, was to structure the rest of your program so that you don't NEED to do the flattening. > > I did clear some things like unparse well I need another name I don't > like unparse either and parse is wrong, I will come up with something. Maybe you don't *like* "unparse", but it *is* the standard technical term. Type "unparsing" into the search box of your browser and read! > I also have questions for you if you are so kind to answer. > > Regards proplists, well, it is necessary to make lists of options as > proplists? No, of course not. Even when you encode a list of options as something *like* a proplist, the structure you want will usually be stricter. In any case, what you have here is simply not a list of options. The elements are not options, they are descriptions of columns you want to get back from some data base query. > yes, proplists:compact/1 and expand/1 for [a] to [{a, true}] and all > those stuffs but what I was trying to do is to make it simple for the > user, because more tuples are more complicate to write in the case of > {users, name, alias} (a) Remember, one of the essential things about *being* a proplist is that the consumer is going to IGNORE anything that is not an atom or a pair. If you are not going to do that, you are not treating the data *as* a proplist, and it is "lying to the user" to call it one. (b) Far from making things simpler for the user, you made it much more confusing. 
There are places where an alias *can* be supplied and places where it *can't*, and there doesn't seem to be any reason for that. Well, it confused the heck out of *me*. > well that could be right adding more brackets like {users, {name, > alias}} because it organizes the thing. But i.e.: I have another > algorithm for match specs in which I can do {name | {table, name} | > value, op, name | {table, name} | value } also [{..., op, ...}, ...], > logic_op, ... so it will be easy for the user to build something like > [ > [{id, '==', 1}, {age, '==', 31}], > 'or', > {username, '==', "john"} > ] There are programming languages in which building queries as data structures is easy. In Lisp, (make-query '(or (and (= id 1) (= age 31)) (= username "john"))) is easy because that's *exactly* what an expression would look like in Lisp. In R, make_query(id = 1 && age == 31 || username == "john") is easy because R functions evaluate their arguments lazily and can get at the abstract syntax trees of those arguments, so again the syntax is exactly what an expression normally looks like. Erlang really isn't one of those languages, but there's a feature of Erlang that means you can get a lot closer. You could use a parse transform to take something like make_query(Id == 1 andalso Age == 31 orelse UserName == "John") which the parser turns into an abstract syntax tree, and then your parse transform could take make_query(...) and turn that into whatever data structure you like. It's hardER in Erlang than it is in Lisp, but it's still much easier than it is with C or Java. (And of course LFE would do it exactly the way Lisp does it.) If I were a user of your system, I would be somewhere between baffled and outraged at the claim that it would be *easy* for me to construct a query in either of the forms you mention. This is perhaps the ideal time to mention the idea of STAGED computation, also known as partial evaluation or partial execution. The function we started discussing looks like a textbook case of something you probably shouldn't be doing at run time anyway. From gpadovani@REDACTED Tue Oct 13 18:03:53 2015 From: gpadovani@REDACTED (Gianluca Padovani) Date: Tue, 13 Oct 2015 18:03:53 +0200 Subject: [erlang-questions] Delete key doesn't work in erl/iex shell Message-ID: Hi, I'm using Ubuntu 14.04 and in Elixir and Erlang shell the delete key doesn't work. I found this issue[1] on elixir repo but they suggest to fix this problem on erlang shell. Do you consider this issue a bug? Is it possible to fix it or find a workaround? Thank you very much Gianluca [1] https://github.com/elixir-lang/elixir/issues/3471 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierrefenoll@REDACTED Tue Oct 13 19:43:44 2015 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Tue, 13 Oct 2015 10:43:44 -0700 Subject: [erlang-questions] Delete key doesn't work in erl/iex shell In-Reply-To: References: Message-ID: Just wait a little :) https://github.com/erlang/otp/commit/2c7e387961251af59f14bca39cbf8fbbe880383e Cheers, -- Pierre Fenoll On 13 October 2015 at 09:03, Gianluca Padovani wrote: > Hi, > I'm using Ubuntu 14.04 and in Elixir and Erlang shell the delete key > doesn't work. I found this issue[1] on elixir repo but they suggest to fix > this problem on erlang shell. Do you consider this issue a bug? Is it > possible to fix it or find a workaround? 
> > Thank you very much > Gianluca > > [1] https://github.com/elixir-lang/elixir/issues/3471 > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lloyd@REDACTED Tue Oct 13 19:46:14 2015 From: lloyd@REDACTED (lloyd@REDACTED) Date: Tue, 13 Oct 2015 13:46:14 -0400 (EDT) Subject: [erlang-questions] Mnesia In-Reply-To: References: Message-ID: <1444758374.470121944@apps.rackspace.com> Hello, During a break at the Chicago Erlang Workshop Fred Hebert mentioned that Mnesia is a fine database for the 1990s. Mnesia was a major draw when I was first attracted to Erlang in part because I hate transiting syntactic boundaries when I'm programming. I asked Fred what it would it take to upgrade Mnesia for the 21st century (or, at least, for the next decade). He didn't know. Martin's list of negatives may be a good place to start. So, just how much effort and knowledge would it take to overcome or ameliorate these and other unspecified negatives of Mnesia to produce Mnesia2? Best, LRP -----Original Message----- From: "Martin Karlsson" Sent: Monday, October 12, 2015 9:04pm To: "Steve Davis" Cc: "Erlang Questions" Subject: Re: [erlang-questions] Mnesia Hi Steve, I'm with you:) The negatives are being mentioned a lot and unfortunately put people off instead of just being used as good things to know about when you are picking your technology. If you know about the quirks you can usually come up with a workaround that matches your use-case. It is about weighing pros and cons. Why not list a few of the negatives and what you can do to work around them. PROBLEM: Net splits must be manually handled. WORKAROUND: - https://github.com/uwiger/unsplit for automatic healing - Use the majority options (when you need to heal you choose the majority partition when you pick master nodes). This should reduce (eliminate?) risk of data loss. PROBLEM: disc_only_copies uses dets and is limited to 2GB WORKAROUND: - Partition the tables - Use the mnesia_ex patch with leveldb backend (to be released soon I think, it was mentioned in another email just recently) - Perhaps you can use disc_copies and hold your data-set in RAM PROBLEM: Prone to overloading WORKAROUND: Load regulate. Reduce concurrency. I think I read somewhere that mnesia and ets works best with a max concurrency of the number of scheduler threads and degrades from there on. PROBLEM: Slow startup WORKAROUND: Increase the number of table loaders. PROBLEM: Upgrading table definition. transform_table can have problems with big and distributed tables. WORKAROUND: Treat large tables like a key value store to reduce risk of having to modify the record I'm sure there are more problems and workarounds especially if you start pushing it (for example like whatsapp did). Also I've heard dirty_write is not safe with replicated tables. I don't know if this is true but is holding me back from using dirty transactions rather than transactions (which I don't need apart from safety on replicated tables). If anyone knows about this one please let me know. Feel free to add/remove/correct items:) Cheers, Martin On 13 October 2015 at 13:08, Steve Davis wrote: > Hi, > > I?m almost ashamed to say that it?s taken me over 5 years to come around to understanding the value of mnesia. > > I bought the ?well-known? negatives too fast. 
I have explored relational connectors, DHT solutions and a number of other approaches... > > It?s dawned on me *finally* that 90% of the time, a well implemented mnesia solution would have been better, faster, and cheaper. > > Did I mention "better"? > > Has anyone else had this experience? > > /s > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions From zxq9@REDACTED Wed Oct 14 02:03:20 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 14 Oct 2015 09:03:20 +0900 Subject: [erlang-questions] Mnesia In-Reply-To: <1444758374.470121944@apps.rackspace.com> References: <1444758374.470121944@apps.rackspace.com> Message-ID: <2643813.LQAQ6q5oQX@changa> On 2015?10?13? ??? 13:46:14 lloyd@REDACTED wrote: > Hello, > > During a break at the Chicago Erlang Workshop Fred Hebert mentioned that Mnesia is a fine database for the 1990s. > > Mnesia was a major draw when I was first attracted to Erlang in part because I hate transiting syntactic boundaries when I'm programming. > > I asked Fred what it would it take to upgrade Mnesia for the 21st century (or, at least, for the next decade). He didn't know. > > Martin's list of negatives may be a good place to start. > > So, just how much effort and knowledge would it take to overcome or ameliorate these and other unspecified negatives of Mnesia to produce Mnesia2? As with any view, perspective is what drives the perception of utility. If we assume that the only thing that matters is web applications or server-oriented architectures where every potential user in the world is going to be pounding the crap out of a central data center or, even worse, a floating set of vaguely provisioned services within something like AWS, then sure. Maybe Mnesia isn't the thing for that, and certainly a LOT of development is focused on that case. Most of that development is iterative rehashes of what everyone else has already done, though. This is a red flag to me. Social networks, messaging systems, click analytics, GIS frameworks for click analytics, and lots of other things called "analytics" but really boil down to adtech, adtech, adtech, adtech, and some more adtech, and marketing campaigns about campaigns about adtech. The elephant in the room being that the ad market is falling apart because it isn't worth anything near like what we imagined. If views were what mattered it would be impossible for Twitter to be in such trouble, Arstechnica be struggling to maintain its editorial standards, or sites like Huffington Post and Forbes to be forced to display clickbait panels and load 40 ~ 70 third party plants per page load. For this case -- the forefront of tech buzz and where a huge, and very publicly visible percentage of investment and development effort go -- maybe Mnesia is not the best tool. But for other use cases it is immensely useful, *especially* as a live cache with db features. Small and medium business represents about ~ 90% of the activity in many economies. Here in Japan the figure floats around 92%; in the US it is something comparable. 
When small businesses latch on to a solution that *actually* helps them become more efficient (instead of shelling out for yet another upgrade to a spreadsheet program that will be used to do exactly what they would have done with a spreadsheet in the 80's -- or even funnier, for a version of that spreadsheet that still does the same things, but slower, in a browser), and that solution becomes widespread enough to have an actual impact *the entire economy* improves without anyone really noticing why. That is AMAZING. This is much more interesting than chasing click statistics in the interest of brokering ad sales at the speed of light so that users can continue to ignore them. I mention the SMB use case because it is one with which I am familiar. Fred holds his views because of the sector he deals with. Mnesia is by no means the *only* database useful in developing for the SMB market, but it is a profoundly useful tool among several necessary to make cross-platform development for SMBs a non-suicidal task. A hilariously low percentage of tech investment targets the SMB market (aiming instead at huge business, the web, and the consumer electronics tie-in (the only interesting part of the list)), and there are accordingly very few tools available to make development for that market reasonable. [Insert rant about the deleterious effects of a legally enforced monopoly on small business software.] So is mnesia a db for the 1990s? Maybe. I don't know. That is a very broad (and totally unqualified) statement to make, so it is hard to argue about. I suppose Postgres, Oracle, DB2 and anything else with a schema fall in the same category, but last I heard those were pretty darn useful for a wide variety of very real problems people experienced in the 1990s and continue to experience today. What I can say for certain is that I continue to find Mnesia to be of profound utility today, in 2015, whether or not it is "a good database for the 1990s". The *variety* of data I have dealt with in business is so broad that no single database paradigm, and certainly no particular database system, can manage it. Not reasonably, anyway. I'm not aware of any other systems that compare very well with Mnesia in functionality, especially interoperating with Erlang (or almost any language/runtime) the same way, so its hard to compare. That leaves Mnesia in the "unique tool" category, as opposed to the "easy to compare within a commodity market of similar alternatives" category. -Craig From lloyd@REDACTED Wed Oct 14 02:28:38 2015 From: lloyd@REDACTED (Lloyd R. Prentice) Date: Tue, 13 Oct 2015 20:28:38 -0400 Subject: [erlang-questions] Mnesia In-Reply-To: <2643813.LQAQ6q5oQX@changa> References: <1444758374.470121944@apps.rackspace.com> <2643813.LQAQ6q5oQX@changa> Message-ID: <8CB0B8A6-842C-4B01-ADBF-8F7F3DABA3B9@writersglen.com> Hi Craig, Among the most wise things I've heard on the topic so far. How can we get a crisp summary of your points and implications of Martin's negatives high up in the official mnesia docs. (I really don't know the process.) Would have saved me many hours of uncertainty. All the best, Lloyd Sent from my iPad > On Oct 13, 2015, at 8:03 PM, zxq9 wrote: > >> On 2015?10?13? ??? 13:46:14 lloyd@REDACTED wrote: >> Hello, >> >> During a break at the Chicago Erlang Workshop Fred Hebert mentioned that Mnesia is a fine database for the 1990s. >> >> Mnesia was a major draw when I was first attracted to Erlang in part because I hate transiting syntactic boundaries when I'm programming. 
>> >> I asked Fred what it would it take to upgrade Mnesia for the 21st century (or, at least, for the next decade). He didn't know. >> >> Martin's list of negatives may be a good place to start. >> >> So, just how much effort and knowledge would it take to overcome or ameliorate these and other unspecified negatives of Mnesia to produce Mnesia2? > > As with any view, perspective is what drives the perception of utility. > > If we assume that the only thing that matters is web applications or server-oriented architectures where every potential user in the world is going to be pounding the crap out of a central data center or, even worse, a floating set of vaguely provisioned services within something like AWS, then sure. Maybe Mnesia isn't the thing for that, and certainly a LOT of development is focused on that case. > > Most of that development is iterative rehashes of what everyone else has already done, though. This is a red flag to me. Social networks, messaging systems, click analytics, GIS frameworks for click analytics, and lots of other things called "analytics" but really boil down to adtech, adtech, adtech, adtech, and some more adtech, and marketing campaigns about campaigns about adtech. > > The elephant in the room being that the ad market is falling apart because it isn't worth anything near like what we imagined. If views were what mattered it would be impossible for Twitter to be in such trouble, Arstechnica be struggling to maintain its editorial standards, or sites like Huffington Post and Forbes to be forced to display clickbait panels and load 40 ~ 70 third party plants per page load. > > For this case -- the forefront of tech buzz and where a huge, and very publicly visible percentage of investment and development effort go -- maybe Mnesia is not the best tool. > > But for other use cases it is immensely useful, *especially* as a live cache with db features. Small and medium business represents about ~ 90% of the activity in many economies. Here in Japan the figure floats around 92%; in the US it is something comparable. When small businesses latch on to a solution that *actually* helps them become more efficient (instead of shelling out for yet another upgrade to a spreadsheet program that will be used to do exactly what they would have done with a spreadsheet in the 80's -- or even funnier, for a version of that spreadsheet that still does the same things, but slower, in a browser), and that solution becomes widespread enough to have an actual impact *the entire economy* improves without anyone really noticing why. > > That is AMAZING. This is much more interesting than chasing click statistics in the interest of brokering ad sales at the speed of light so that users can continue to ignore them. > > I mention the SMB use case because it is one with which I am familiar. Fred holds his views because of the sector he deals with. Mnesia is by no means the *only* database useful in developing for the SMB market, but it is a profoundly useful tool among several necessary to make cross-platform development for SMBs a non-suicidal task. A hilariously low percentage of tech investment targets the SMB market (aiming instead at huge business, the web, and the consumer electronics tie-in (the only interesting part of the list)), and there are accordingly very few tools available to make development for that market reasonable. > > [Insert rant about the deleterious effects of a legally enforced monopoly on small business software.] > > So is mnesia a db for the 1990s? Maybe. 
I don't know. That is a very broad (and totally unqualified) statement to make, so it is hard to argue about. I suppose Postgres, Oracle, DB2 and anything else with a schema fall in the same category, but last I heard those were pretty darn useful for a wide variety of very real problems people experienced in the 1990s and continue to experience today. What I can say for certain is that I continue to find Mnesia to be of profound utility today, in 2015, whether or not it is "a good database for the 1990s". The *variety* of data I have dealt with in business is so broad that no single database paradigm, and certainly no particular database system, can manage it. Not reasonably, anyway. I'm not aware of any other systems that compare very well with Mnesia in functionality, especially interoperating with Erlang (or almost any language/runtime) the same way, so its hard to compare. That leaves Mnesia in the "unique tool" category, as opposed to the "easy to compare within a commodity market of similar alternatives" category. > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From ok@REDACTED Wed Oct 14 02:58:16 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 14 Oct 2015 13:58:16 +1300 Subject: [erlang-questions] Coming Back (maybe improving lists:reverse/1) In-Reply-To: <003301d105c5$2767b820$76372860$@frcuba.co.cu> References: <041e01d1012f$9220aa40$b661fec0$@frcuba.co.cu> <001001d10152$7e80a080$7b81e180$@frcuba.co.cu> <3DE96CA6-56A4-4996-AEC8-9377749DA20C@cs.otago.ac.nz> <001c01d1015d$c36cf8a0$4a46e9e0$@frcuba.co.cu> <001d01d1015f$24b92790$6e2b76b0$@frcuba.co.cu> <656993BD-5652-4E90-B872-2D2D11876952@cs.otago.ac.nz> <001801d10244$4095dd10$c1c19730$@frcuba.co.cu> <003301d105c5$2767b820$76372860$@frcuba.co.cu> Message-ID: <79D9E2E6-AAF4-4B68-AC7D-6F3BDE080112@cs.otago.ac.nz> On 14/10/2015, at 3:40 am, Ivan Carmenates Garcia wrote: > Hi Richard, > > Yes I will definably like to build queries like list comprehensions or like > your nice example, but I don't understand so well parse transform because I > didn't find so much documentation about it in Erlang doc, I was trying with > qlc but yet nothing I could archive. The basic idea is that the standard tools parse a module to get an abstract syntax tree for the whole module, hand it to your code to transform, and continue with whatever you give back. The source code does NOT have to pass semantic checks; the result of transforming does. http://www.erlang-factory.com/upload/presentations/521/yrashk_parse_transformations_sf12.pdf erl_id_trans is an example module that does nothing: http://www.erlang.org/doc/man/erl_id_trans.html It's worth noting that you can use an identity parse transform as a way of plugging in a static check of some kind that is not already done by the compiler. The warnings in that are a little strong. Parse transformations are not something to do *lightly*, but for implementing a query language as an Embedded Domain Specific Language they are a good choice. The Stack Overflow query "Is there a good, complete tutorial on Erlang parse transforms available?" http://stackoverflow.com/questions/2416192/is-there-a-good-complete-tutorial-on-erlang-parse-transforms-available has some useful answers. http://seancribbs.com/tech/2009/06/21/building-a-parser-generator-in-erlang-part-4/ talks about parse transforms in the context of building a PEG parser generator for Erlang. 
I liked it, except that it is a *perfect* example of why syntax colouring is **evil**. The official definition of abstract syntax trees is the documentation for the erl_syntax module: http://erlang.org/doc/man/erl_syntax.html Justin Calleja has a nice introduction to that: http://www.ricston.com/blog/erlang-erlsyntax/ Parse transforms are an extremely powerful tool that you want to use *sparingly*; only when it will improve the *total* maintainability of your program. I haven't used them in anger yet in Erlang myself. I've used the equivalent in other languages to do things like embedding XML syntax in a host language and generate specialised code from a template. (The effort of trying to take three versions of the "same" thing and boil them down to a single template from which the originals can be recovered is, sadly, a good way to discover *unintentional* differences between the versions, which is why I think maintaining a master copy and generating specialised versions from it is a good thing for overall maintainability.) There are at least four ways in which a parse transform can help with an embedded query language: - queries can be much easier for people to read and write, so the queries as such can be more maintainable - what the translation *is* can be varied without the source code of any query having to be changed - the translation is done at compile time, not run time which is good for efficiency *and* timeliness of error reporting - the temptation to leave a sanity check out on the grounds that it will cost too much at run time goes away because you know the check will be done at compile time. *Can* help is not *will* help. But it's worth exploring. From ok@REDACTED Wed Oct 14 03:16:38 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 14 Oct 2015 14:16:38 +1300 Subject: [erlang-questions] Mnesia In-Reply-To: <1444758374.470121944@apps.rackspace.com> References: <1444758374.470121944@apps.rackspace.com> Message-ID: On 14/10/2015, at 6:46 am, wrote: > > I asked Fred what it would it take to upgrade Mnesia for the 21st century (or, at least, for the next decade). He didn't know. There's one thing that strikes me. I could go to a shop today and buy a 1 TB external drive for NZD 75, including Goods and Services Tax of 15%. (At least that's what the ad I saw a couple of days ago said.) That's almost exactly USD 50. This is a drive that fits in a shirt pocket, with room left over for all sorts of junk. To make Mnesia a data base for the 2010s, it has to be able to handle at least 1TB of data. Heck, I've got enough goodies-for- research money left that I could get the department to buy me ten of these gadgets, so let's say Mnesia - should be able to handle a single table in the low TB - should be able to handle a collection of tables in the tens of TB - where "handle" includes creating, populating, checking, recovering, and accessing in "a reasonable time". That's a "single machine data base for the 2010s". Of course there are multicore, cluster, and network issues as well. From lloyd@REDACTED Wed Oct 14 05:19:56 2015 From: lloyd@REDACTED (Lloyd R. Prentice) Date: Tue, 13 Oct 2015 23:19:56 -0400 Subject: [erlang-questions] Mnesia In-Reply-To: References: <1444758374.470121944@apps.rackspace.com> Message-ID: Hi Richard, So if we dig into the code, what exactly needs to change to make that happen? 
This is a great teachable moment where the wizards of Erlang can help us with less understanding significantly advance our Erlang skills--- if nothing else, guide us through the design decisions that shaped mnesia, the architecture, and the significant code passages that impose current limitations. Why? To broaden the base of folks capable of extending and advancing the Erlang legacy. All the best, Lloyd Sent from my iPad > On Oct 13, 2015, at 9:16 PM, "Richard A. O'Keefe" wrote: > > >> On 14/10/2015, at 6:46 am, wrote: >> >> I asked Fred what it would it take to upgrade Mnesia for the 21st century (or, at least, for the next decade). He didn't know. > > There's one thing that strikes me. > > I could go to a shop today and buy a 1 TB external drive for > NZD 75, including Goods and Services Tax of 15%. (At least > that's what the ad I saw a couple of days ago said.) > That's almost exactly USD 50. This is a drive that fits in > a shirt pocket, with room left over for all sorts of junk. > > To make Mnesia a data base for the 2010s, it has to be able to > handle at least 1TB of data. Heck, I've got enough goodies-for- > research money left that I could get the department to buy me > ten of these gadgets, so let's say Mnesia > - should be able to handle a single table in the low TB > - should be able to handle a collection of tables in the > tens of TB > - where "handle" includes creating, populating, checking, > recovering, and accessing in "a reasonable time". > > That's a "single machine data base for the 2010s". > Of course there are multicore, cluster, and network issues as > well. > > > From ok@REDACTED Wed Oct 14 07:24:12 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Wed, 14 Oct 2015 18:24:12 +1300 Subject: [erlang-questions] Mnesia In-Reply-To: References: <1444758374.470121944@apps.rackspace.com> Message-ID: <76B2A53E-914A-4C15-A116-536CC57E7AAB@cs.otago.ac.nz> On 14/10/2015, at 4:19 pm, Lloyd R. Prentice wrote: > Hi Richard, > > So if we dig into the code, what exactly needs to change to make that happen? How should I know? I couldn't implement a data base to save my life. It will certainly be more than just finding the place where it says "2GB" and changing a number. Things *change* at scale. I've got a data set that's 18GB as raw text, and a student wants to do some data mining on a recent data set that's too big to fit on the 8GB memory stick he keeps bringing me excerpts on; it would be a good fit for Mnesia... mnesia_ext + LevelDB looks really good; it would be nice to know, on downloading a new release of Erlang/OTP, that it would be _there_ and *the documentation integrated*. By the way, a key fact about why Mnesia is the way it is can be found in the documentation: Mnesia is primarily intended to be a memory-resident database. Some of its design tradeoffs reflect this. But on a 16GiB 64-bit machine (yeah, I know it's small, but it's a couple of years old) a 32-bit limit doesn't make as much sense for a memory-resident data base as it used to either. From kennethlakin@REDACTED Wed Oct 14 08:55:23 2015 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Tue, 13 Oct 2015 23:55:23 -0700 Subject: [erlang-questions] Mnesia In-Reply-To: <76B2A53E-914A-4C15-A116-536CC57E7AAB@cs.otago.ac.nz> References: <1444758374.470121944@apps.rackspace.com> <76B2A53E-914A-4C15-A116-536CC57E7AAB@cs.otago.ac.nz> Message-ID: <561DFC5B.9030705@gmail.com> On 10/13/2015 10:24 PM, Richard A. O'Keefe wrote: > But on a 16GiB 64-bit machine ... 
a 32-bit limit doesn't > make as much sense for a memory-resident data base as it > used to either. I'm slightly confused. As I understood it, the size of a disc_copies Mnesia table was only limited by the system's available RAM, and one didn't need to worry about the size of the on-disk representation of the data. Did I misunderstand this, or were you referring to DETS-backed disc_only_copies tables? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From jan.chochol@REDACTED Wed Oct 14 09:05:42 2015 From: jan.chochol@REDACTED (Jan Chochol) Date: Wed, 14 Oct 2015 09:05:42 +0200 Subject: [erlang-questions] unpacking big/little endian speed difference In-Reply-To: <561D15A9.70104@cs.ntua.gr> References: <561D15A9.70104@cs.ntua.gr> Message-ID: The difference is really in "i_bs_get_integer_32_rfId" and "i_bs_get_integer_small_imm_xIfId". When looking in code, the "i_bs_get_integer_32_rfId" can extract 32 bit big endian integer from bitstring - it is quite simple function, which can not do much more, and even has fast path for binary (bitstring with byte boundary). On the other side "i_bs_get_integer_small_imm_xIfId" is quite complex function, which can extract integer with any bit size, signed, or unsigned, little, or big endian. It is quite complex, and even use some temporary allocated buffers. So the reason is, that there is special "fast" opcode for extracting 32 bit big endian integer (probably because authors of BEAM VM thought, that it will be more useful). On Tue, Oct 13, 2015 at 4:31 PM, Kostis Sagonas wrote: > On 10/13/2015 02:59 PM, Sergej Jure?ko wrote: > >> How come unpacking integers as big endian is faster than little endian, >> when I'm running on a little endian platform (intel osx)? >> I know big endian is erlang default, but if you're turning binaries into >> integers, you must turn them into little endian anyway, so unpacking >> should basically be memmcpy. >> >> The test: >> >> % Takes: ~1.3s >> -define(ENDIAN,unsigned-big). >> >> % Takes: ~1.8s >> % -define(ENDIAN,unsigned-little). >> > > Well, if you are interested in performance, simply compile to native code > your module and the time will drop to less than half... > > ================================================================ > Eshell V7.1 (abort with ^G) > 1> c(endian). > {ok,endian} > 2> endian:loop(). > 1097971 > 3> endian:loop(). > 1327621 > 4> endian:loop(). > 1162506 > 5> endian:loop(). > 1103813 > 6> c(endian, [native]). > {ok,endian} > 7> endian:loop(). > 414000 > 8> endian:loop(). > 441327 > ================================================================ > > Kostis > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gguthrie@REDACTED Wed Oct 14 09:25:33 2015 From: gguthrie@REDACTED (Gordon Guthrie) Date: Wed, 14 Oct 2015 08:25:33 +0100 Subject: [erlang-questions] Mnesia In-Reply-To: References: <1444758374.470121944@apps.rackspace.com> Message-ID: <3880B774-DB69-487A-8BD7-6DA3E8DE6766@basho.com> Mnesia is also a pain when it partitions - you have to write your own reconciliation programmes. 
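(What such a reconciliation programme boils down to, in skeleton form, is a process that listens for mnesia's inconsistent_database system event and then applies some merge policy. Below is a minimal sketch only: the module name is invented, the policy is deliberately naive, and this is essentially the hole that the unsplit library mentioned earlier in the thread fills in a much more complete way.)

-module(netsplit_watch).   %% invented name, illustration only
-export([start/0]).

start() ->
    spawn_link(fun init/0).

init() ->
    %% Ask mnesia to send us system events, among them
    %% {inconsistent_database, Context, Node} after a netsplit heals.
    {ok, _} = mnesia:subscribe(system),
    loop().

loop() ->
    receive
        {mnesia_system_event, {inconsistent_database, _Context, Node}} ->
            error_logger:warning_msg("mnesia inconsistency with ~p~n", [Node]),
            %% Deliberately naive policy: declare the local node the winner.
            %% Real code would pick the majority island, or run per-table
            %% merge functions, and then force-load or restart from it.
            mnesia:set_master_nodes([node()]),
            loop();
        _Other ->
            loop()
    end.
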
There has been a lot of work done on eventual consistency in *the other* Erlang database - Riak (disclaimer I am working at Basho now) Riak implements eventual consistency - and post-partition self-healing using Consistent Replicated Data Types (or CRDTs) and the canonical set of standalone CRDT libraries is written in Erlang: https://github.com/basho/riak_dt There is a comprehensive reading list here: https://christophermeiklejohn.com/crdt/2014/07/22/readings-in-crdts.html The combination of using Klarna?s (forthcoming) leveldb backend and a CRDT eventual consistency layer on top would be an interesting start offering a distributed transactional database with eventual consistency Gordon > On 14 Oct 2015, at 02:16, Richard A. O'Keefe wrote: > > > On 14/10/2015, at 6:46 am, wrote: >> >> I asked Fred what it would it take to upgrade Mnesia for the 21st century (or, at least, for the next decade). He didn't know. > > There's one thing that strikes me. > > I could go to a shop today and buy a 1 TB external drive for > NZD 75, including Goods and Services Tax of 15%. (At least > that's what the ad I saw a couple of days ago said.) > That's almost exactly USD 50. This is a drive that fits in > a shirt pocket, with room left over for all sorts of junk. > > To make Mnesia a data base for the 2010s, it has to be able to > handle at least 1TB of data. Heck, I've got enough goodies-for- > research money left that I could get the department to buy me > ten of these gadgets, so let's say Mnesia > - should be able to handle a single table in the low TB > - should be able to handle a collection of tables in the > tens of TB > - where "handle" includes creating, populating, checking, > recovering, and accessing in "a reasonable time". > > That's a "single machine data base for the 2010s". > Of course there are multicore, cluster, and network issues as > well. > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From rizkhan@REDACTED Wed Oct 14 09:56:07 2015 From: rizkhan@REDACTED (Rizwan Khan) Date: Wed, 14 Oct 2015 12:56:07 +0500 Subject: [erlang-questions] Erlang in the Insurance Sector Message-ID: Hi, Wanted to know if Erlang is being used in the Insurance sector anywhere? I have seem multiple platforms which are based on Java. Some of them still use COBOL. Most of them follow SOA practices and also include the business and calculation rules engines. I see Erlang/Elixir as a good fit for Microservices based architecture. How useful Erlang/Elixir stack is going to be for the underwriting purpose which is the main component for Insurance companies? Rizwan Khan -------------- next part -------------- An HTML attachment was scrubbed... URL: From lcastro@REDACTED Wed Oct 14 10:27:06 2015 From: lcastro@REDACTED (Laura M. Castro) Date: Wed, 14 Oct 2015 10:27:06 +0200 Subject: [erlang-questions] Erlang in the Insurance Sector In-Reply-To: References: Message-ID: A few years ago, we did develop a system for risk management (including not only incident report, but best-fitting insurance policy determination, and also payment lifecycle processing), with an Erlang back-end and a Java front-end. 
As one can imagine, pattern-matching proved very useful in the kind of logic implementation that was needed: the use case was that multiple insurance policies could have been contracted that could possible be used for covering a given incident, and a number of user-defined criteria could be used to determine with one to use. It started as a proof-of-concept but ended up growing into a fully functional enterprise system that was used by a international corporation for a number of years; I don't know which is the situation of the system nowadays. We published a number of papers around it, you can check them out: http://dl.acm.org/citation.cfm?id=940884 http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1562879 http://www.tandfonline.com/doi/abs/10.3166/jds.17.501-521 (if you don't have access to any of those, just let me know) From carlsson.richard@REDACTED Wed Oct 14 10:46:33 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Wed, 14 Oct 2015 10:46:33 +0200 Subject: [erlang-questions] Mnesia In-Reply-To: <3880B774-DB69-487A-8BD7-6DA3E8DE6766@basho.com> References: <1444758374.470121944@apps.rackspace.com> <3880B774-DB69-487A-8BD7-6DA3E8DE6766@basho.com> Message-ID: The simple view of Mnesia is that it's a transaction layer on top of ETS tables, with some varying forms of backing storage. Transactions are pessimistic, based on locking. Some small optimizations have been done to the locking mechanism in recent years, but it's maybe more could be done in that area. It might also be possible to add built-in support for optimistic transactions based on timestamps or compare-and-swap, for certain usage patterns. The semantics of dirty reads/writes need to be better documented, and if possible cleaned up a bit, because the behaviour can depend on table type and whether or not the tables are local or remote. The table size problems can probably be considered to be solved by mnesia_ext with leveldb or other backends. None of this will make it less of a 90's database though. Adding eventual consistency (e.g. based on vector clocks) as alternative to transactions would make it more modern. The big thing that would help, as Gordon mentioned, is a new distribution/replication layer. The existing one basically assumes that tables are not huge and the network between nodes is fast and reliable with netsplits being rare, like in a small cluster in a telecom base station. We use Mnesia, but not in distributed mode - we have a custom distribution layer (stable, but very limited) on top of local Mnesia instances that are not directly aware of each other. /Richard /Richard On Wed, Oct 14, 2015 at 9:25 AM, Gordon Guthrie wrote: > Mnesia is also a pain when it partitions - you have to write your own > reconciliation programmes. > > There has been a lot of work done on eventual consistency in *the other* > Erlang database - Riak (disclaimer I am working at Basho now) > > Riak implements eventual consistency - and post-partition self-healing > using Consistent Replicated Data Types (or CRDTs) and the canonical set of > standalone CRDT libraries is written in Erlang: > https://github.com/basho/riak_dt > > There is a comprehensive reading list here: > https://christophermeiklejohn.com/crdt/2014/07/22/readings-in-crdts.html > > The combination of using Klarna?s (forthcoming) leveldb backend and a CRDT > eventual consistency layer on top would be an interesting start offering a > distributed transactional database with eventual consistency > > Gordon > > On 14 Oct 2015, at 02:16, Richard A. 
O'Keefe wrote: > > > On 14/10/2015, at 6:46 am, > wrote: > > > I asked Fred what it would it take to upgrade Mnesia for the 21st century > (or, at least, for the next decade). He didn't know. > > > There's one thing that strikes me. > > I could go to a shop today and buy a 1 TB external drive for > NZD 75, including Goods and Services Tax of 15%. (At least > that's what the ad I saw a couple of days ago said.) > That's almost exactly USD 50. This is a drive that fits in > a shirt pocket, with room left over for all sorts of junk. > > To make Mnesia a data base for the 2010s, it has to be able to > handle at least 1TB of data. Heck, I've got enough goodies-for- > research money left that I could get the department to buy me > ten of these gadgets, so let's say Mnesia > - should be able to handle a single table in the low TB > - should be able to handle a collection of tables in the > tens of TB > - where "handle" includes creating, populating, checking, > recovering, and accessing in "a reasonable time". > > That's a "single machine data base for the 2010s". > Of course there are multicore, cluster, and network issues as > well. > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Wed Oct 14 11:02:05 2015 From: zxq9@REDACTED (zxq9) Date: Wed, 14 Oct 2015 18:02:05 +0900 Subject: [erlang-questions] Erlang in the Insurance Sector In-Reply-To: References: Message-ID: <13137327.OLXvpmNeDx@burrito> On Wednesday 14 October 2015 10:27:06 Laura M. Castro wrote: > http://dl.acm.org/citation.cfm?id=940884 > http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1562879 > http://www.tandfonline.com/doi/abs/10.3166/jds.17.501-521 > > (if you don't have access to any of those, just let me know) I am very interested, but these are all behind rather steep paywalls. -Craig From vances@REDACTED Wed Oct 14 11:29:53 2015 From: vances@REDACTED (Vance Shipley) Date: Wed, 14 Oct 2015 14:59:53 +0530 Subject: [erlang-questions] Mnesia In-Reply-To: References: <1444758374.470121944@apps.rackspace.com> <3880B774-DB69-487A-8BD7-6DA3E8DE6766@basho.com> Message-ID: On Wed, Oct 14, 2015 at 2:16 PM, Richard Carlsson wrote: > The simple view of Mnesia is that it's a transaction layer on top of ETS tables, with some varying forms of backing storage. I am careful not to use the term "database" when referring to mnesia as it leads to unfair comparisons and unreasonable expectations. I use terms such as "distributed tables" or "data store". It's a more low level tool than what most people think of as a "database". In training I describe a progression in scale out like this: single process - store data in StateData (i.e. gb_trees) multiple processes, single node - store data in ets tables multiple processes, multiple nodes - store data in mnesia ... and that's before we talk about persistence or transactions. 
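(To make that progression concrete, the same lookup at each level looks roughly like this -- just a sketch, with the table name my_table invented and the surrounding setup omitted:)

%% 1. single process: the data lives in that process's state, e.g. a gb_trees
lookup_state(Key, Tree) ->
    gb_trees:lookup(Key, Tree).        %% none | {value, Val}

%% 2. many processes, one node: a named public ets table
lookup_ets(Key) ->
    ets:lookup(my_table, Key).         %% [] | [{Key, Val}]

%% 3. many processes, many nodes: a replicated mnesia table
lookup_mnesia(Key) ->
    mnesia:dirty_read(my_table, Key).  %% [] | [Record]
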
-- -Vance From henrik.x.nord@REDACTED Wed Oct 14 12:45:58 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Wed, 14 Oct 2015 12:45:58 +0200 Subject: [erlang-questions] Patch Package OTP 18.1.2 Released Message-ID: <561E3266.9000601@ericsson.com> Patch Package: OTP 18.1.2 Git Tag: OTP-18.1.2 Date: 2015-10-14 Trouble Report Id: OTP-13036 Seq num: System: OTP Release: 18 Application: ssh-4.1.1 Predecessor: OTP 18.1.1 Check out the git tag OTP-18.1.2, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- ssh-4.1.1 ------------------------------------------------------- --------------------------------------------------------------------- The ssh-4.1.1 application can be applied independently of other applications on a full OTP 18 installation. --- Improvements and New Features --- OTP-13036 Application(s): ssh A new option max_channels limits the number of channels with active server-side subsystems that are accepted. Full runtime dependencies of ssh-4.1.1: crypto-3.3, erts-6.0, kernel-3.0, public_key-0.22, stdlib-2.3 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From gguthrie@REDACTED Wed Oct 14 12:48:27 2015 From: gguthrie@REDACTED (Gordon Guthrie) Date: Wed, 14 Oct 2015 11:48:27 +0100 Subject: [erlang-questions] Mnesia In-Reply-To: References: <1444758374.470121944@apps.rackspace.com> <3880B774-DB69-487A-8BD7-6DA3E8DE6766@basho.com> Message-ID: <15F12F95-C171-4BCD-A3FB-3B365DD9A05E@basho.com> I always planned reliability for Mnesia as hot/cold A shared resilient network storage fabric underneath both a running Mnesia instance and a non-running one. Failover would then be losing a server and bringing up the other one. Obviously this can be expensive with large tables that need to be loaded fully into memory, etc, etc? Gordon > On 14 Oct 2015, at 09:46, Richard Carlsson wrote: > > The simple view of Mnesia is that it's a transaction layer on top of ETS tables, with some varying forms of backing storage. Transactions are pessimistic, based on locking. Some small optimizations have been done to the locking mechanism in recent years, but it's maybe more could be done in that area. It might also be possible to add built-in support for optimistic transactions based on timestamps or compare-and-swap, for certain usage patterns. The semantics of dirty reads/writes need to be better documented, and if possible cleaned up a bit, because the behaviour can depend on table type and whether or not the tables are local or remote. The table size problems can probably be considered to be solved by mnesia_ext with leveldb or other backends. None of this will make it less of a 90's database though. Adding eventual consistency (e.g. based on vector clocks) as alternative to transactions would make it more modern. > > The big thing that would help, as Gordon mentioned, is a new distribution/replication layer. The existing one basically assumes that tables are not huge and the network between nodes is fast and reliable with netsplits being rare, like in a small cluster in a telecom base station. 
We use Mnesia, but not in distributed mode - we have a custom distribution layer (stable, but very limited) on top of local Mnesia instances that are not directly aware of each other. > > /Richard > > > > /Richard > > On Wed, Oct 14, 2015 at 9:25 AM, Gordon Guthrie > wrote: > Mnesia is also a pain when it partitions - you have to write your own reconciliation programmes. > > There has been a lot of work done on eventual consistency in *the other* Erlang database - Riak (disclaimer I am working at Basho now) > > Riak implements eventual consistency - and post-partition self-healing using Consistent Replicated Data Types (or CRDTs) and the canonical set of standalone CRDT libraries is written in Erlang: > https://github.com/basho/riak_dt > > There is a comprehensive reading list here: > https://christophermeiklejohn.com/crdt/2014/07/22/readings-in-crdts.html > > The combination of using Klarna?s (forthcoming) leveldb backend and a CRDT eventual consistency layer on top would be an interesting start offering a distributed transactional database with eventual consistency > > Gordon > >> On 14 Oct 2015, at 02:16, Richard A. O'Keefe > wrote: >> >> >> On 14/10/2015, at 6:46 am, > > wrote: >>> >>> I asked Fred what it would it take to upgrade Mnesia for the 21st century (or, at least, for the next decade). He didn't know. >> >> There's one thing that strikes me. >> >> I could go to a shop today and buy a 1 TB external drive for >> NZD 75, including Goods and Services Tax of 15%. (At least >> that's what the ad I saw a couple of days ago said.) >> That's almost exactly USD 50. This is a drive that fits in >> a shirt pocket, with room left over for all sorts of junk. >> >> To make Mnesia a data base for the 2010s, it has to be able to >> handle at least 1TB of data. Heck, I've got enough goodies-for- >> research money left that I could get the department to buy me >> ten of these gadgets, so let's say Mnesia >> - should be able to handle a single table in the low TB >> - should be able to handle a collection of tables in the >> tens of TB >> - where "handle" includes creating, populating, checking, >> recovering, and accessing in "a reasonable time". >> >> That's a "single machine data base for the 2010s". >> Of course there are multicore, cluster, and network issues as >> well. >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gpadovani@REDACTED Wed Oct 14 13:06:04 2015 From: gpadovani@REDACTED (Gianluca Padovani) Date: Wed, 14 Oct 2015 13:06:04 +0200 Subject: [erlang-questions] Delete key doesn't work in erl/iex shell In-Reply-To: References: Message-ID: Thank you :-) 2015-10-13 19:43 GMT+02:00 Pierre Fenoll : > Just wait a little :) > https://github.com/erlang/otp/commit/2c7e387961251af59f14bca39cbf8fbbe880383e > > > Cheers, > -- > Pierre Fenoll > > > On 13 October 2015 at 09:03, Gianluca Padovani > wrote: > >> Hi, >> I'm using Ubuntu 14.04 and in Elixir and Erlang shell the delete key >> doesn't work. I found this issue[1] on elixir repo but they suggest to fix >> this problem on erlang shell. Do you consider this issue a bug? Is it >> possible to fix it or find a workaround? 
>> >> Thank you very much >> Gianluca >> >> [1] https://github.com/elixir-lang/elixir/issues/3471 >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlsson.richard@REDACTED Wed Oct 14 13:31:44 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Wed, 14 Oct 2015 13:31:44 +0200 Subject: [erlang-questions] New pull request for mnesia_ext Message-ID: I have created a new pull request for mnesia_ext, based on the fresh OTP 18.1.2 tag: https://github.com/erlang/otp/pull/858 /Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus@REDACTED Wed Oct 14 14:02:42 2015 From: magnus@REDACTED (Magnus Henoch) Date: Wed, 14 Oct 2015 13:02:42 +0100 Subject: [erlang-questions] Mnesia In-Reply-To: Lloyd R. Prentice's message of "Tue\, 13 Oct 2015 20\:28\:38 -0400 \(11 hours\, 25 minutes\, 10 seconds ago\)" Message-ID: "Lloyd R. Prentice" writes: > Hi Craig, > > Among the most wise things I've heard on the topic so far. > > How can we get a crisp summary of your points and implications > of Martin's negatives high up in the official mnesia docs. (I > really don't know the process.) Would have saved me many hours > of uncertainty. The process for adding something to the documentation is submitting a pull request to the erlang/otp repository on Github. The Mnesia documentation is in the lib/mnesia/doc/src directory. You could either clone the repository, edit the documentation in your favourite text editor, commit and push your changes, and open a pull request from your branch, or you could edit the documentation through the Github web interface and submit the pull request from there. Perhaps the Mnesia overview page would be a good place for this: https://github.com/erlang/otp/blob/maint/lib/mnesia/doc/src/Mnesia_overview.xml Click the pen icon near the top to start editing. Regards, Magnus From carlsson.richard@REDACTED Wed Oct 14 15:56:43 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Wed, 14 Oct 2015 15:56:43 +0200 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: <22044.46629.802283.964778@gargle.gargle.HOWL> References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> <1444655676.545392.407872329.7AC25FB7@webmail.messagingengine.com> <22044.46629.802283.964778@gargle.gargle.HOWL> Message-ID: The Postgres backend to mnesia_ext is now available here: https://github.com/klarna/mnesia_pg /Richard On Tue, Oct 13, 2015 at 9:43 AM, Mikael Pettersson wrote: > Tristan Sloughter writes: > > I'd love to see the postgres one even if it was just experimental. > > Someone else might take it up and continue with it. > > I'm working on getting that one open-sourced, hopefully this week. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From t@REDACTED Wed Oct 14 15:57:50 2015 From: t@REDACTED (Tristan Sloughter) Date: Wed, 14 Oct 2015 08:57:50 -0500 Subject: [erlang-questions] If you make WhatsApp today... 
In-Reply-To: References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> <1444655676.545392.407872329.7AC25FB7@webmail.messagingengine.com> <22044.46629.802283.964778@gargle.gargle.HOWL> Message-ID: <1444831070.2107141.410030649.16A62F53@webmail.messagingengine.com> Awesome, thanks! -- Tristan Sloughter t@REDACTED On Wed, Oct 14, 2015, at 08:56 AM, Richard Carlsson wrote: > The Postgres backend to mnesia_ext is now available here: https://github.com/klarna/mnesia_pg > > > ? ? ? ? /Richard > > On Tue, Oct 13, 2015 at 9:43 AM, Mikael Pettersson wrote: >> Tristan Sloughter writes: >> ?> I'd love to see the postgres one even if it was just experimental. >> ?> Someone else might take it up and continue with it. >> >> I'm working on getting that one open-sourced, hopefully this week. >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From t@REDACTED Wed Oct 14 16:13:04 2015 From: t@REDACTED (Tristan Sloughter) Date: Wed, 14 Oct 2015 09:13:04 -0500 Subject: [erlang-questions] If you make WhatsApp today... In-Reply-To: References: <1F5C5B03-1CAB-47DB-AFEB-306584201D10@feuerlabs.com> <1444655676.545392.407872329.7AC25FB7@webmail.messagingengine.com> <22044.46629.802283.964778@gargle.gargle.HOWL> Message-ID: <1444831984.2110527.410045465.400F3647@webmail.messagingengine.com> Is there a reason besides convenience on your end for bundling in the postgres source? Is it patched? Just wondering if there is anything besides a patched OTP for mnesia_ext is needed for playing with this. -- Tristan Sloughter t@REDACTED On Wed, Oct 14, 2015, at 08:56 AM, Richard Carlsson wrote: > The Postgres backend to mnesia_ext is now available here: https://github.com/klarna/mnesia_pg > > > ? ? ? ? /Richard > > On Tue, Oct 13, 2015 at 9:43 AM, Mikael Pettersson wrote: >> Tristan Sloughter writes: >> ?> I'd love to see the postgres one even if it was just experimental. >> ?> Someone else might take it up and continue with it. >> >> I'm working on getting that one open-sourced, hopefully this week. >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto@REDACTED Wed Oct 14 16:14:26 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Wed, 14 Oct 2015 16:14:26 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: References: Message-ID: > > Well, the two potentially blocking operations it performs are > gen_tcp:connect() (5-second timeout by default) and gen_tcp:send() > > If you want to create some form of buffering behavior, perhaps > exometer_report_logger.erl might be useful. > Hi Ulf, No I really don't want to do any caching. Just asking if somewhere it was happening along the way, because then I would have understood what happened. :) Thanks, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto@REDACTED Wed Oct 14 16:14:58 2015 From: roberto@REDACTED (Roberto Ostinelli) Date: Wed, 14 Oct 2015 16:14:58 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? 
In-Reply-To: <12F2115FD1CCEE4294943B2608A18FA301A2710ED2@MAIL01.win.lbaum.eu> References: <12F2115FD1CCEE4294943B2608A18FA301A2710ED2@MAIL01.win.lbaum.eu> Message-ID: > > Another possibility would be to automatically close the TCP session when > the other end gets unresponsive using the send_timeout option. > Hi Tobias, This is a good suggestion. Thanks, r. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergej.jurecko@REDACTED Wed Oct 14 16:50:50 2015 From: sergej.jurecko@REDACTED (Sergej =?UTF-8?B?SnVyZcSNa28=?=) Date: Wed, 14 Oct 2015 16:50:50 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: <12F2115FD1CCEE4294943B2608A18FA301A2710ED2@MAIL01.win.lbaum.eu> References: <12F2115FD1CCEE4294943B2608A18FA301A2710ED2@MAIL01.win.lbaum.eu> Message-ID: Actually one can also use: inet_tcp:send(Sock,Data,[nosuspend]) which will return {error,busy} if it is unable to send immediately. Sergej From: on behalf of Tobias Schlager Date: Monday 12 October 2015 at 17:14 To: Ulf Wiger, Roberto Ostinelli Cc: erlang questions Subject: Re: [erlang-questions] Exometer, Recon and prim_inet:send/2? Hi, Graphite does also support UDP. Maybe that could be an option for you. Fire and forget might in this case be better than loading your system. Another possibility would be to automatically close the TCP session when the other end gets unresponsive using the send_timeout option. Regards Tobias _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Rehberger@REDACTED Wed Oct 14 19:48:39 2015 From: Frank.Rehberger@REDACTED (Frank Rehberger) Date: Wed, 14 Oct 2015 19:48:39 +0200 Subject: [erlang-questions] Release build with Erlide project? Message-ID: <561E9577.2090707@web.de> Hi, not sure if this is the correct forum for Erlide questions, please let me know. Before I turned to Erlide, I used rebar tool to create a project, compile and to build a release. Now I turned to Erlide IDE, and I am wondering if there is a functionality to build a release? Sadly this chapter in the Erlide docu is not done yet. :/ Please can you give me some advice, how to create a release? -- Best regards - Mit freundlichen Gr??en - ???? Frank Rehberger, Berlin, Germany Frank.Rehberger@REDACTED From vladdu55@REDACTED Wed Oct 14 20:33:01 2015 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Wed, 14 Oct 2015 20:33:01 +0200 Subject: [erlang-questions] Release build with Erlide project? In-Reply-To: <561E9577.2090707@web.de> References: <561E9577.2090707@web.de> Message-ID: Hi! No, erlide doesn't help with building releases. I have ongoing work to integrate with rebar, but it's not finished yet. I have added an issue for this https://github.com/erlide/erlide/issues/240. regards, Vlad On Wed, Oct 14, 2015 at 7:48 PM, Frank Rehberger wrote: > Hi, > > not sure if this is the correct forum for Erlide questions, please let > me know. > > Before I turned to Erlide, I used rebar tool to create a project, > compile and to build a release. > > Now I turned to Erlide IDE, and I am wondering if there is a > functionality to build a release? Sadly this chapter in the Erlide docu > is not done yet. :/ > > Please can you give me some advice, how to create a release? > > -- > Best regards - Mit freundlichen Gr??en - ???? 
> Frank Rehberger, Berlin, Germany > Frank.Rehberger@REDACTED > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidnwelton@REDACTED Wed Oct 14 20:33:47 2015 From: davidnwelton@REDACTED (David Welton) Date: Wed, 14 Oct 2015 11:33:47 -0700 Subject: [erlang-questions] epgsql help Message-ID: Hi, A while back, I started a project on github to try and maintain one 'good' version of the epgsql Postgres driver. I recently started a new job where I'm not using Erlang any more (well, not yet...), so I don't use epgsql day to day. I try and look at it from time to time, but I could use a little extra help if anyone's interested. There's not a ton of work, mostly just correcting the occasional bug, or adding support for various Postgres types. https://github.com/epgsql/epgsql There's also a smaller, newer, more experimental codebase with the aim of actually providing something that your average end user can just drop in and use, without worrying so much about having to roll their own code for when the DB goes down or when they need a pool of DB clients. https://github.com/epgsql/pgapp The best way to get involved would be to sign up for the very low traffic Google group: https://groups.google.com/forum/#!forum/epgsql Thank you! -- David N. Welton http://www.welton.it/davidw/ http://www.dedasys.com/ From davidnwelton@REDACTED Thu Oct 15 00:35:34 2015 From: davidnwelton@REDACTED (David Welton) Date: Wed, 14 Oct 2015 15:35:34 -0700 Subject: [erlang-questions] epgsql help In-Reply-To: <7c4c3272-c59c-4aa2-a550-ee0a87e00316@googlegroups.com> References: <7c4c3272-c59c-4aa2-a550-ee0a87e00316@googlegroups.com> Message-ID: Hi, >> but I could use a little extra help if anyone's interested. There's >> not a ton of work, mostly just correcting the occasional bug, or >> adding support for various Postgres types. > Howdy! Over at Chef we make heavy use of epgsql* in our erlang-based > projects. We've found epgsql to be a solid foundation for a while now. Cool! > I'm happy to help out, just point me in the right direction. I can't speak > for my coworkers, but I would be surprised if there weren't others willing > as well. > > * and we have our own fork of this project - currently not very far behind > it - that we're gradually getting off of. Basically just looking over pull requests to make sure they're not doing something bad, and that they're in keeping with keeping the project as simple as possible. The basics don't change, mostly, it's the extra types and especially 'dynamic' types that seem to cause the headaches. Thank you -- David N. Welton http://www.welton.it/davidw/ http://www.dedasys.com/ From lloyd@REDACTED Thu Oct 15 00:38:35 2015 From: lloyd@REDACTED (lloyd@REDACTED) Date: Wed, 14 Oct 2015 18:38:35 -0400 (EDT) Subject: [erlang-questions] Erlang type spec questions Message-ID: <1444862315.617928135@apps.rackspace.com> Hello, 1. -spec open_read(Table :: list(), Type :: atom()) -> {ok, Name} | {error, Reason}. Gives me Error: ...Reason is unbound Question: How should I represent? 2. How does one represent a File? E.g. -type get_tags(File :: ??? 3. Is a dets table a file or a list? E.g. How would one represent it? Apologies if I missed these in the type docs. 
Thanks, LRP From co7eb@REDACTED Thu Oct 15 00:37:41 2015 From: co7eb@REDACTED (Ivan Carmenates Garcia) Date: Wed, 14 Oct 2015 18:37:41 -0400 Subject: [erlang-questions] looking for a quick job Message-ID: <006001d106d0$f1301740$d39045c0$@frcuba.co.cu> Hi all, I have nothing to do for these days and I have a need of money, so I'm looking for a quick part time job, Does any have some? I also could stand for a full unlimited time job, but what I really need now is to have something to do. Kindest regards, Ivan (son of Gilberio) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierrefenoll@REDACTED Thu Oct 15 00:46:54 2015 From: pierrefenoll@REDACTED (Pierre Fenoll) Date: Wed, 14 Oct 2015 15:46:54 -0700 Subject: [erlang-questions] Erlang type spec questions In-Reply-To: <1444862315.617928135@apps.rackspace.com> References: <1444862315.617928135@apps.rackspace.com> Message-ID: On 14 October 2015 at 15:38, wrote: > 3. Is a dets table a file or a list? > > E.g. How would one represent it? > Use ets:tab(). http://erldocs.com/current/stdlib/ets.html?i=0&search=^ets:#type-tab You did miss it! Same for File. Reason needs to be Reason :: any() or term() Cheers, -- Pierre Fenoll -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Thu Oct 15 01:23:39 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 15 Oct 2015 08:23:39 +0900 Subject: [erlang-questions] Erlang type spec questions In-Reply-To: <1444862315.617928135@apps.rackspace.com> References: <1444862315.617928135@apps.rackspace.com> Message-ID: <2841166.PACDucdXiJ@changa> On 2015?10?14? ??? 18:38:35 lloyd@REDACTED wrote: > Hello, > > 1. -spec open_read(Table :: list(), Type :: atom()) -> {ok, Name} | {error, Reason}. > > Gives me Error: ...Reason is unbound > > Question: How should I represent? When the spec gets even slightly too long for a single line I tend to do this: -spec open_read(Table, Type) -> {ok, Name} | {error, Reason}. when Table :: ets:tab(), Type :: atom(), Name :: unicode:chardata(), Reason :: term(). or whatever applies to Name and Reason. > 2. How does one represent a File? > > E.g. -type get_tags(File :: ??? file:io_device() This is a general type; a union of `pid() | file:fd()`. Check out the "Data types" section: http://www.erlang.org/doc/man/file.html > 3. Is a dets table a file or a list? > > E.g. How would one represent it? Similar to the above: ets:tab() Same place in the docs format, under "Data types": http://www.erlang.org/doc/man/ets.html I don't recall that there is any place in the manual that actually says "the page for each module has a section of exported types called 'data types', and each type can be used in your programs by using the module in question as a module type prefix." There maybe should be a note like that somewhere, and may be, but I don't recall ever seeing it. Nearly every unique structure in the whole system has an exported type to represent it. Sometimes they are as simple as `foo:name() :: atom()`, but using the exported name *instead* of atom makes your code understandable at a glance and future-proofs it against the underlying value ever changing in a later release. -Craig From zxq9@REDACTED Thu Oct 15 01:34:35 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 15 Oct 2015 08:34:35 +0900 Subject: [erlang-questions] Erlang type spec questions In-Reply-To: <2841166.PACDucdXiJ@changa> References: <1444862315.617928135@apps.rackspace.com> <2841166.PACDucdXiJ@changa> Message-ID: <1908687.r66BIGDsMr@changa> On 2015?10?15? ??? 
08:23:39 zxq9 wrote: > When the spec gets even slightly too long for a single line I tend to do this: > > -spec open_read(Table, Type) -> {ok, Name} | {error, Reason}. > when Table :: ets:tab(), > Type :: atom(), > Name :: unicode:chardata(), > Reason :: term(). GAH! I am a horrible person. I forgot to remove the period at the end of the first line. -Craig From ok@REDACTED Thu Oct 15 07:16:18 2015 From: ok@REDACTED (ok@REDACTED) Date: Thu, 15 Oct 2015 18:16:18 +1300 Subject: [erlang-questions] Mnesia In-Reply-To: <561DFC5B.9030705@gmail.com> References: <1444758374.470121944@apps.rackspace.com> <76B2A53E-914A-4C15-A116-536CC57E7AAB@cs.otago.ac.nz> <561DFC5B.9030705@gmail.com> Message-ID: <50a7378448ba7c52f20c0ce3fb994680.squirrel@chasm.otago.ac.nz> According to http://www.erlang.org/faq/mnesia.html "Dets uses 32 bit integers for file offsets, so the largest possible mnesia table (for now) is 4Gb". There is nothing there about the limits being different for in memory vs on disc. Poke around a bit more and you will find explicit claims that this limit applies both to disc only and to disc copies, e.g., "disc_copies tables are limited by their dets backend". You will also find the figures of 2GB and 3GB floating around. One thing that would be really nice would have to have accurate current limits prominently signposted (not necessarily *displayed*, just pointed to will do) near the beginning of the Mnesia manual. I've had serious problems with the Mnesia documentation in the past, so I'm quite prepared to believe that I have my facts totally wrong. From dangud@REDACTED Thu Oct 15 08:00:51 2015 From: dangud@REDACTED (Dan Gudmundsson) Date: Thu, 15 Oct 2015 06:00:51 +0000 Subject: [erlang-questions] Mnesia In-Reply-To: <50a7378448ba7c52f20c0ce3fb994680.squirrel@chasm.otago.ac.nz> References: <1444758374.470121944@apps.rackspace.com> <76B2A53E-914A-4C15-A116-536CC57E7AAB@cs.otago.ac.nz> <561DFC5B.9030705@gmail.com> <50a7378448ba7c52f20c0ce3fb994680.squirrel@chasm.otago.ac.nz> Message-ID: You are right in that you are wrong, and so is the documentation then, disc_copies tables is no longer using dets files as their backend storage. That restriction have been removed (a long time ago). Tough, disc_only_copies are still using dets files and have the 32bits limit. A pointer (or a patch) to the documentation would be nice. On Thu, Oct 15, 2015 at 7:25 AM wrote: > According to http://www.erlang.org/faq/mnesia.html > "Dets uses 32 bit integers for file offsets, > so the largest possible mnesia table (for now) is 4Gb". > There is nothing there about the limits being different > for in memory vs on disc. > Poke around a bit more and you will find explicit claims > that this limit applies both to disc only and to disc copies, > e.g., "disc_copies tables are limited by their dets backend". > You will also find the figures of 2GB and 3GB floating around. > > One thing that would be really nice would have to have > accurate current limits prominently signposted (not necessarily > *displayed*, just pointed to will do) near the beginning of the > Mnesia manual. > > I've had serious problems with the Mnesia documentation in the > past, so I'm quite prepared to believe that I have my facts > totally wrong. > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From v@REDACTED Thu Oct 15 08:45:03 2015 From: v@REDACTED (Valentin Micic) Date: Thu, 15 Oct 2015 08:45:03 +0200 Subject: [erlang-questions] Mnesia In-Reply-To: References: <1444758374.470121944@apps.rackspace.com> <76B2A53E-914A-4C15-A116-536CC57E7AAB@cs.otago.ac.nz> <561DFC5B.9030705@gmail.com> <50a7378448ba7c52f20c0ce3fb994680.squirrel@chasm.otago.ac.nz> Message-ID: <935173A4-AF43-4ED7-A12E-362C0657E9DD@pharos-avantgard.com> Well, one can always use table fragmentation -- individual dets files are still going to be limited to 32-bits but table can go beyond that. We have used this approach in the past (fragmented a very big table across 512 dets files), and it worked reasonably well? There were a few performance problems mnesia had on a specific OS platforms (SOLARIS in particular) with a very poor file I/O caching. But the one thing that was missing from mnesia, that actually made us abandon it for projects that required huge data volumes was a lack of flexibility regarding storage mechanism used. And before you start thinking that I used more pot than usual, let me clarify: Yes, once may chose between RAM, DISK or combination of the two. However, if you want to control the amount of RAM used by mensia, you are left with only once choice: disc only copy, which puts some performance constraints. Thankfully, Erlang lends itself very nicely to "roll-your-own" approach and we managed to do just that with a relative ease. Thus, if I were to suggest an improvement for mnesia, I'd say that having a disc_only_copy with a flexible caching may do the trick (well, this what we had to do to make our lives good again -- we did not use mnesia but a combination of ets and dets files with custom dets fragmentation). Kind reagards V/ On 15 Oct 2015, at 8:00 AM, Dan Gudmundsson wrote: > You are right in that you are wrong, and so is the documentation then, > disc_copies tables is no longer using dets files as their backend storage. > That restriction have been removed (a long time ago). > > Tough, disc_only_copies are still using dets files and have the 32bits limit. > > A pointer (or a patch) to the documentation would be nice. > > > On Thu, Oct 15, 2015 at 7:25 AM wrote: > According to http://www.erlang.org/faq/mnesia.html > "Dets uses 32 bit integers for file offsets, > so the largest possible mnesia table (for now) is 4Gb". > There is nothing there about the limits being different > for in memory vs on disc. > Poke around a bit more and you will find explicit claims > that this limit applies both to disc only and to disc copies, > e.g., "disc_copies tables are limited by their dets backend". > You will also find the figures of 2GB and 3GB floating around. > > One thing that would be really nice would have to have > accurate current limits prominently signposted (not necessarily > *displayed*, just pointed to will do) near the beginning of the > Mnesia manual. > > I've had serious problems with the Mnesia documentation in the > past, so I'm quite prepared to believe that I have my facts > totally wrong. > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ulf@REDACTED Thu Oct 15 08:55:05 2015 From: ulf@REDACTED (Ulf Wiger) Date: Thu, 15 Oct 2015 08:55:05 +0200 Subject: [erlang-questions] Mnesia In-Reply-To: References: <1444758374.470121944@apps.rackspace.com> Message-ID: <550C959E-102E-47A8-B05C-748D948D63BD@feuerlabs.com> > On 14 Oct 2015, at 05:19, Lloyd R. Prentice wrote: > > So if we dig into the code, what exactly needs to change to make that happen? (ah, I never sent this ? I?ll finish the thought, even though others have answered) Well, to some extent, I think Mnesia is its own worst enemy here. The wonderful thing about Mnesia is that it acts practically as an extension of the Erlang environment: it runs in the same memory space and can allow access to data in tens of usecs. It also supports both dynamic and transparent handling of data - no schema info to speak of, and ad-hoc replication. While it is possible to use Mnesia in a way that scales pretty well, Mnesia practically begs you not to. Considering the concrete challenges in Mnesia, I can think of a few off the top of my head: - Table replication. Mnesia sucks the entire table over every time, which is fine for small tables, but less so for large ones. While networks have gotten much faster, table size seems to increase at an even higher rate. Mnesia_ext offers a bit of flexibility, by letting each backend plugin determine what should be sent [1]. While a backend could get creative within the bounds of the table synch protocol, one should remember that mnesia doesn?t require backens to be the same on different nodes. - Table load. This is mainly an issue with dets repair, and the leveldb backend is *much* faster in this regard. - Split brain. Mnesia offers some low-level support, but additional, non-trivial logic is required (unsplit seems to help). Note that few ACID dbms:es offer much better support, but it arguably hits mnesia harder since it entices people to jump into data replication schemes they would hardly attempt (or even *could* attempt) in e.g. Postgres. Split brain is a hairy issue, and will remain so. - Sharding. Mnesia?s sharing support (mnesia_frag), while cool, is an afterthought - and it shows. It is a bit too non-transparent, and rehashing and (lack of) ordered_set semantics* are issues. There was some attempt at implementing consistent hashing support on top of mnesia_frag - as I recall, it was semi-successful. In general, for both sharding and netsplits, perhaps mnesia could use a meta layer, but this is hard to add without incuring penalties on use cases where it doesn?t add much utility. - Indexing. In stock mnesia, indexes are rebuilt on table load. In mnesia_ext, there is support for keeping persistent indexes, and noting when they are consistent with the data so they don?t have to be rebuilt. While not perfect, it is at least an improvement. Indexes have also been improved in a few other ways in mnesia_ext. - Backup. I?m not sure that mnesia?s backup scheme scales to gigantic data. Klarna has its own system for backup, based on their custom replication protocol. - Locking. Richard mentioned this. When executing large transactions or running workloads with several competing actors, transaction restarts can become a real issue. I?ve toyed with the idea of trying to insert the ?locks? locker in mnesia, but the main problem would likely be that it would incur a noticeable penalty on simple work, which makes it practically a non-starter in a legacy product. 
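(As an aside to the sharding point above, here is a minimal sketch of what mnesia_frag usage looks like in practice. The table name, record layout and fragment count are invented for illustration; see the frag_properties documentation for the full set of options.)

%% Sketch only: create a fragmented table and access it through the
%% mnesia_frag activity module. Names and sizes are made up.
setup_fragmented_table() ->
    {atomic, ok} =
        mnesia:create_table(orders,
            [{attributes, [id, payload]},
             {frag_properties, [{node_pool, [node()]},
                                {n_fragments, 8}]}]),
    %% Reads and writes must go through mnesia_frag, otherwise only
    %% the first fragment is ever touched.
    ok = mnesia:activity(transaction,
                         fun() -> mnesia:write({orders, 42, <<"payload">>}) end,
                         [], mnesia_frag).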
But basically, one should probably think twice before taking an ACID largely-in-memory database and trying to extend it to tackle Big Data. I think mnesia has a sweet spot as a deeply embedded data store - essentially going where no other DBMS can go**. If some of the more debilitating flaws are addressed, I think mnesia can be the best choice for quite a few applications, possibly combined with a more traditional DBMS. [1] https://github.com/klarna/otp/blob/OTP-18.0-mnesia_ext/lib/mnesia/test/mnesia_ext_filesystem.erl#L152 * Not that sharing ordered_set tables efficiently is particularly easy to begin with. ** One can look at it this way: mnesia is bad at some of the things e.g. riak is good at, but OTOH, riak is terrible for some of the use cases where mnesia shines. Being great at something also means deciding what you?re willing to be bad at, and for mnesia, that decision was made long ago. BR, Ulf W Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com From lucafinzicontini@REDACTED Thu Oct 15 09:53:59 2015 From: lucafinzicontini@REDACTED (Luca Finzi Contini) Date: Thu, 15 Oct 2015 00:53:59 -0700 (PDT) Subject: [erlang-questions] epgsql help In-Reply-To: References: Message-ID: <92197ec9-6494-447d-a351-6a10fcde3c66@googlegroups.com> Hi David, [...] I try and look at it from time to time, > but I could use a little extra help if anyone's interested. > > I am interested in helping the project, too! Just need some references as to how I could be of help. Grazie :) Luca. -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.ostinelli@REDACTED Thu Oct 15 14:16:13 2015 From: roberto.ostinelli@REDACTED (Roberto Ostinelli) Date: Thu, 15 Oct 2015 14:16:13 +0200 Subject: [erlang-questions] Exometer, Recon and prim_inet:send/2? In-Reply-To: References: <12F2115FD1CCEE4294943B2608A18FA301A2710ED2@MAIL01.win.lbaum.eu> Message-ID: <838ABFAB-3226-4808-ABFE-7E637EDAA76C@widetag.com> Thank you Sergej. I'm currently using the send_timeout gen_tcp option. Best, r. > On 14 ott 2015, at 16:50, Sergej Jure?ko wrote: > > Actually one can also use: inet_tcp:send(Sock,Data,[nosuspend]) which will return {error,busy} if it is unable to send immediately. > > > Sergej > > From: on behalf of Tobias Schlager > Date: Monday 12 October 2015 at 17:14 > To: Ulf Wiger, Roberto Ostinelli > Cc: erlang questions > Subject: Re: [erlang-questions] Exometer, Recon and prim_inet:send/2? > > Hi, > > Graphite does also support UDP. Maybe that could be an option for you. Fire and forget might in this case be better than loading your system. Another possibility would be to automatically close the TCP session when the other end gets unresponsive using the send_timeout option. > > Regards > Tobias > > _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From lloyd@REDACTED Thu Oct 15 17:21:31 2015 From: lloyd@REDACTED (Lloyd R. Prentice) Date: Thu, 15 Oct 2015 11:21:31 -0400 Subject: [erlang-questions] Mnesia In-Reply-To: <550C959E-102E-47A8-B05C-748D948D63BD@feuerlabs.com> References: <1444758374.470121944@apps.rackspace.com> <550C959E-102E-47A8-B05C-748D948D63BD@feuerlabs.com> Message-ID: <8CDB6EB9-DFA0-4B7C-A8CF-7F60081A3C9B@writersglen.com> Loud and sustained applause, Ulf! This is exactly what I was hoping for when I asked the question. 
So, if and as we consider the next step in the evolution of Mnesia, it sounds like adoption/integration of Mnesia_ext is low-hanging fruit. What's needed to bring this off successfully--- 1. Nothing. Just do it. 2. Documentation and tutorials? 3. More work toward polishing Mnesia_ext? 4. Modifications to base mnesia? 5. Integration into a near-term release of Erlang? 6. Other things need to be done first 7. Other? Thanks from a much appreciative na?f, Lloyd Sent from my iPad > On Oct 15, 2015, at 2:55 AM, Ulf Wiger wrote: > > >> On 14 Oct 2015, at 05:19, Lloyd R. Prentice wrote: >> >> So if we dig into the code, what exactly needs to change to make that happen? > > (ah, I never sent this ? I?ll finish the thought, even though others have answered) > > Well, to some extent, I think Mnesia is its own worst enemy here. > > The wonderful thing about Mnesia is that it acts practically as an extension of the Erlang environment: it runs in the same memory space and can allow access to data in tens of usecs. It also supports both dynamic and transparent handling of data - no schema info to speak of, and ad-hoc replication. While it is possible to use Mnesia in a way that scales pretty well, Mnesia practically begs you not to. > > Considering the concrete challenges in Mnesia, I can think of a few off the top of my head: > > - Table replication. Mnesia sucks the entire table over every time, which is fine for small tables, but less so for large ones. While networks have gotten much faster, table size seems to increase at an even higher rate. > > Mnesia_ext offers a bit of flexibility, by letting each backend plugin determine what should be sent [1]. While a backend could get creative within the bounds of the table synch protocol, one should remember that mnesia doesn?t require backens to be the same on different nodes. > > - Table load. This is mainly an issue with dets repair, and the leveldb backend is *much* faster in this regard. > > - Split brain. Mnesia offers some low-level support, but additional, non-trivial logic is required (unsplit seems to help). Note that few ACID dbms:es offer much better support, but it arguably hits mnesia harder since it entices people to jump into data replication schemes they would hardly attempt (or even *could* attempt) in e.g. Postgres. Split brain is a hairy issue, and will remain so. > > - Sharding. Mnesia?s sharing support (mnesia_frag), while cool, is an afterthought - and it shows. It is a bit too non-transparent, and rehashing and (lack of) ordered_set semantics* are issues. There was some attempt at implementing consistent hashing support on top of mnesia_frag - as I recall, it was semi-successful. In general, for both sharding and netsplits, perhaps mnesia could use a meta layer, but this is hard to add without incuring penalties on use cases where it doesn?t add much utility. > > - Indexing. In stock mnesia, indexes are rebuilt on table load. In mnesia_ext, there is support for keeping persistent indexes, and noting when they are consistent with the data so they don?t have to be rebuilt. While not perfect, it is at least an improvement. Indexes have also been improved in a few other ways in mnesia_ext. > > - Backup. I?m not sure that mnesia?s backup scheme scales to gigantic data. Klarna has its own system for backup, based on their custom replication protocol. > > - Locking. Richard mentioned this. When executing large transactions or running workloads with several competing actors, transaction restarts can become a real issue. 
I?ve toyed with the idea of trying to insert the ?locks? locker in mnesia, but the main problem would likely be that it would incur a noticeable penalty on simple work, which makes it practically a non-starter in a legacy product. > > But basically, one should probably think twice before taking an ACID largely-in-memory database and trying to extend it to tackle Big Data. I think mnesia has a sweet spot as a deeply embedded data store - essentially going where no other DBMS can go**. If some of the more debilitating flaws are addressed, I think mnesia can be the best choice for quite a few applications, possibly combined with a more traditional DBMS. > > [1] https://github.com/klarna/otp/blob/OTP-18.0-mnesia_ext/lib/mnesia/test/mnesia_ext_filesystem.erl#L152 > * Not that sharing ordered_set tables efficiently is particularly easy to begin with. > ** One can look at it this way: mnesia is bad at some of the things e.g. riak is good at, but OTOH, riak is terrible for some of the use cases where mnesia shines. Being great at something also means deciding what you?re willing to be bad at, and for mnesia, that decision was made long ago. > > BR, > Ulf W > > Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. > http://feuerlabs.com > > > From ulf@REDACTED Thu Oct 15 17:49:56 2015 From: ulf@REDACTED (Ulf Wiger) Date: Thu, 15 Oct 2015 17:49:56 +0200 Subject: [erlang-questions] Mnesia In-Reply-To: <8CDB6EB9-DFA0-4B7C-A8CF-7F60081A3C9B@writersglen.com> References: <1444758374.470121944@apps.rackspace.com> <550C959E-102E-47A8-B05C-748D948D63BD@feuerlabs.com> <8CDB6EB9-DFA0-4B7C-A8CF-7F60081A3C9B@writersglen.com> Message-ID: <4EE04E46-D38C-46C5-9890-E6C1026C696E@feuerlabs.com> > On 15 Oct 2015, at 17:21, Lloyd R. Prentice wrote: > > What's needed to bring this off successfully? For now, Richard and Michael have worked admirably to prepare pull requests for the OTP team to ponder. We did run the design choices by the Mnesia team (i.e. Dan Gudmundsson) long ago, so some of the groundwork has been done already. Ultimately, of course, it?s the OTP team that needs to decide whether they can support it. BR, Ulf W Ulf Wiger, Co-founder & Developer Advocate, Feuerlabs Inc. http://feuerlabs.com From davidnwelton@REDACTED Thu Oct 15 23:34:46 2015 From: davidnwelton@REDACTED (David Welton) Date: Thu, 15 Oct 2015 14:34:46 -0700 Subject: [erlang-questions] epgsql help In-Reply-To: <92197ec9-6494-447d-a351-6a10fcde3c66@googlegroups.com> References: <92197ec9-6494-447d-a351-6a10fcde3c66@googlegroups.com> Message-ID: > I am interested in helping the project, too! > > Just need some references as to how I could be of help. Looking over the pull requests and running the tests are the basics to keep things turning over. "I pulled this branch, ran the tests and it looks good", or "I pulled this branch, there are no tests, and the code is really difficult to understand" - that kind of thing. > Grazie :) Grazie a te! -- David N. 
Welton http://www.welton.it/davidw/ http://www.dedasys.com/ From frederic.bonfanti@REDACTED Fri Oct 16 01:41:25 2015 From: frederic.bonfanti@REDACTED (Frederic BONFANTI) Date: Thu, 15 Oct 2015 18:41:25 -0500 Subject: [erlang-questions] Erlang executable that reads command line arguments In-Reply-To: <9FAC6E44-8C3B-418E-94BF-82CB9FBEB9FA@gmail.com> References: <0AF84669-C6E1-43B9-B7FF-78A2A969FB0B@gmail.com> <9FAC6E44-8C3B-418E-94BF-82CB9FBEB9FA@gmail.com> Message-ID: <5BB032C4-CEF9-4549-A6A6-ADF20D52E9C5@gmail.com> Hi, I'm getting weird error message when calling my escript from within a Python script escript: exception error: undefined function getopt:parse/2 From the shell a standalone script works just fine... any idea ? This is how I generate the escripted version of myapp : #!/usr/bin/env escript %% -*- erlang -*- %%! -smp enable -sname make_myapp main(_) -> {ok, SourceCode} = file:read_file("myapp.erl"), {ok, _, BeamCode} = compile:file("myapp.erl", [binary, debug_info]), {ok, _, GetOpt} = compile:file("getopt.erl", [binary, debug_info]), escript:create("myapp", [shebang, {archive, [ {"myapp.erl", SourceCode}, {"myapp.beam", BeamCode}, {"getopt.beam",GetOpt}], []} ]). Thanks in advance > On Oct 11, 2015, at 5:40 AM, Dmitry Kolesnikov wrote: > > And if you are using rebar and you have deps to other projects you might add following lines to rebar.config > {escript_incl_apps, [ getopt ]}. > {escript_emu_args, "%%! +K true +P 10000000\n"}. > Sad story, I've not figure out how to add erlang runtime to same package. Anyone, who receives you script needs to have one. > > Best Regards, > Dmitry > >-|-|-(*> > > On 11 Oct 2015, at 08:46, Frederic BONFANTI > wrote: > >> Hi guys, >> >> given a simple Erlang code that consists in one file, let?s say test123.erl , I?d like to >> >> 1. use a Makefile to compile test123.erl into test.beam and then generate a distributable version of test123 (executable) >> >> 2. figure-out how to parse the command line arguments once this test command is called from regular shell, for example: >> >> test123 -A -x 555 >> >> If there are straightforward examples available, that will do. >> >> Thanks in advance >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordeev.vladimir.v@REDACTED Thu Oct 15 19:57:54 2015 From: gordeev.vladimir.v@REDACTED (Vladimir Gordeev) Date: Thu, 15 Oct 2015 19:57:54 +0200 Subject: [erlang-questions] try catch params in Core Erlang Message-ID: If you compile some try-catch statements into Core Erlang, you may notice, that it receives three params in exception pattern: http://tryerl.seriyps.ru/#id=3bf3 this: try foo:bar() catch some:thing -> ok end. into this: try call 'foo':'bar' () of <_cor0> -> _cor0 catch <_cor3,_cor2,_cor1> -> case <_cor3,_cor2,_cor1> of %% Line 7 <'some','thing',_cor4> when 'true' -> 'ok' ( <_cor3,_cor2,_cor1> when 'true' -> primop 'raise' (_cor1, _cor2) -| ['compiler_generated'] ) end In "An introduction to Core Erlang" catch described as taking two params: http://www.erlang.org/workshop/carlsson.ps Question is: what is this third param (_cor4) for? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From henrik.x.nord@REDACTED Fri Oct 16 09:29:24 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Fri, 16 Oct 2015 09:29:24 +0200 Subject: [erlang-questions] Patch Package OTP 18.1.3Released Message-ID: <5620A754.6010304@ericsson.com> Patch Package: OTP 18.1.3 Git Tag: OTP-18.1.3 Date: 2015-10-16 Trouble Report Id: OTP-13046 Seq num: System: OTP Release: 18 Application: ssh-4.1.2 Predecessor: OTP 18.1.2 Check out the git tag OTP-18.1.3, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- ssh-4.1.2 ------------------------------------------------------- --------------------------------------------------------------------- The ssh-4.1.2 application can be applied independently of other applications on a full OTP 18 installation. --- Fixed Bugs and Malfunctions --- OTP-13046 Application(s): ssh Add a 1024 group to the list of key group-exchange groups Full runtime dependencies of ssh-4.1.2: crypto-3.3, erts-6.0, kernel-3.0, public_key-0.22, stdlib-2.3 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From khaelin@REDACTED Fri Oct 16 11:30:06 2015 From: khaelin@REDACTED (Nicolas Martyanoff) Date: Fri, 16 Oct 2015 11:30:06 +0200 Subject: [erlang-questions] Sending signals to non-erlang processes Message-ID: <20151016093005.GA3701@valhala.home> Hi, I am writing an OTP application which spawn instances of a non-erlang daemon to run tests. I use a port to execute the daemon and read its output. However if for some reason my erlang application exits, the spawned daemon will not be killed. The documentation indicates: If the port owner terminates, so does the port (and the external program, if it is written correctly). The daemon itself behaves like most UNIX daemons and terminates on SIGTERM or SIGINT. But exiting the erlang VM (using ^C + abort) does not seem to do anything. Is that on purpose ? I also cannot find a way to actually stop the spawned application, port_close() does do it. The "UNIX way" is to send SIGTERM, wait for a bit, then send SIGKILL if the application did not stop. But I cannot find an erlang function to send a signal to an external process. I made a temporary fix using os:cmd("kill ..."), but it feels like a hack. Is there a reason not to have something such as os:kill(Signo, Pid) ? Would erlang developers accept a patch to add it ? It could be a first step toward support for proper termination via port_control(), and an open_port() option to automatically close the child process when the port is closed. Regards, -- Nicolas Martyanoff khaelin@REDACTED From rvirding@REDACTED Fri Oct 16 12:18:17 2015 From: rvirding@REDACTED (Robert Virding) Date: Fri, 16 Oct 2015 12:18:17 +0200 Subject: [erlang-questions] The -deprecated() attribute Message-ID: What gives with the -deprecated() attribute? In the compiler documentation it says: "Notice that the compiler does not know about attribute -deprecated(), but uses an assembled list of deprecated functions in Erlang/OTP." However if you look in the erl_lint code it explicitly handles the -deprecated() attribute and even uses it itself. So the compiler DOES know about it. 
So what does it do and for what can I use it? Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn@REDACTED Fri Oct 16 12:22:24 2015 From: bjorn@REDACTED (=?UTF-8?Q?Bj=C3=B6rn_Gustavsson?=) Date: Fri, 16 Oct 2015 12:22:24 +0200 Subject: [erlang-questions] EEP 44: Additional preprocessor directives Message-ID: Here is an EEP that proposes extensions for conditional compilation in the Erlang preprocessor: http://www.erlang.org/eeps/eep-0044.html https://github.com/erlang/eep/blob/master/eeps/eep-0044.md /Bj?rn -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From jesper.louis.andersen@REDACTED Fri Oct 16 13:18:40 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Fri, 16 Oct 2015 13:18:40 +0200 Subject: [erlang-questions] Sending signals to non-erlang processes In-Reply-To: <20151016093005.GA3701@valhala.home> References: <20151016093005.GA3701@valhala.home> Message-ID: On Fri, Oct 16, 2015 at 11:30 AM, Nicolas Martyanoff wrote: > I also cannot find a way to actually stop the spawned application, > port_close() does do it. The "UNIX way" is to send SIGTERM, wait for a bit, > then send SIGKILL if the application did not stop. But I cannot find an > erlang > function to send a signal to an external process. I made a temporary fix > using > os:cmd("kill ..."), but it feels like a hack. > Two options: misbehaving programs can be handled through Aleynikov's https://github.com/saleyn/erlexec which wraps[0] them in a C++ helper process which understands how to gracefully communicate to the Erlang world. Port programs normally communicate through a set of file descriptors, so the program you spawn should detect and terminate if there are errors when reading on the fd. I've been down this rabbit hole before, but I'm afraid I forgot, again, how it all works. Perhaps this is good to document in a "How to write behaving port programs" document and make it part of the OTP documentation. [0] First time around I wrote "warps" here. Wrong, but strangely appropriate :) -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From federico.carrone@REDACTED Fri Oct 16 15:38:31 2015 From: federico.carrone@REDACTED (Federico Carrone) Date: Fri, 16 Oct 2015 13:38:31 +0000 Subject: [erlang-questions] epgsql help In-Reply-To: References: <92197ec9-6494-447d-a351-6a10fcde3c66@googlegroups.com> Message-ID: I would love to help too. I am using quite a lot epgsql :). On Thu, Oct 15, 2015 at 6:34 PM David Welton wrote: > > I am interested in helping the project, too! > > > > Just need some references as to how I could be of help. > > Looking over the pull requests and running the tests are the basics to > keep things turning over. "I pulled this branch, ran the tests and it > looks good", or "I pulled this branch, there are no tests, and the > code is really difficult to understand" - that kind of thing. > > > Grazie :) > > Grazie a te! > -- > David N. Welton > > http://www.welton.it/davidw/ > > http://www.dedasys.com/ > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fw339tgy@REDACTED Fri Oct 16 16:25:31 2015 From: fw339tgy@REDACTED (=?GBK?B?zLi549TG?=) Date: Fri, 16 Oct 2015 22:25:31 +0800 (CST) Subject: [erlang-questions] "bad receive timeout value " error Message-ID: <68888281.ec65.150710a9a35.Coremail.fw339tgy@126.com> hi , everybody, I wrote a program, using the reader module to receive data, and then on the wrong. please help me . thank you very much . -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: src.rar Type: application/octet-stream Size: 5227 bytes Desc: not available URL: From denc716@REDACTED Fri Oct 16 16:55:23 2015 From: denc716@REDACTED (derek) Date: Fri, 16 Oct 2015 07:55:23 -0700 Subject: [erlang-questions] "bad receive timeout value " error In-Reply-To: <68888281.ec65.150710a9a35.Coremail.fw339tgy@126.com> References: <68888281.ec65.150710a9a35.Coremail.fw339tgy@126.com> Message-ID: On Fri, Oct 16, 2015 at 7:25 AM, ??? wrote: > > hi , > everybody, > > I wrote a program, using the reader module to receive data, and then on the wrong. Why not include your program in the email body? I saw you have a .rar compressed attachment, but in the mailing list, no one want to look at your attachment, especially when it's in proprietary format From eric.pailleau@REDACTED Fri Oct 16 19:18:36 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Fri, 16 Oct 2015 19:18:36 +0200 Subject: [erlang-questions] Sending signals to non-erlang processes In-Reply-To: <20151016093005.GA3701@valhala.home> Message-ID: <6b9f28d0-7da0-40c4-8374-327c96e22fb6@email.android.com> Hi, Your port just have to check getppid and exit if PPID is 1. The case if a parent exit, the child inherits of parent 1 (init). This can be as well with time interrupt if this cannot be done in a loop. Regards Le?16 oct. 2015 11:30, Nicolas Martyanoff a ?crit?: > > Hi, > > I am writing an OTP application which spawn instances of a non-erlang daemon > to run tests. I use a port to execute the daemon and read its output. > > However if for some reason my erlang application exits, the spawned daemon > will not be killed. The documentation indicates: > > ??? If the port owner terminates, so does the port (and the external program, if > ??? it is written correctly). > > The daemon itself behaves like most UNIX daemons and terminates on SIGTERM or > SIGINT. But exiting the erlang VM (using ^C + abort) does not seem to do > anything. > > Is that on purpose ? > > I also cannot find a way to actually stop the spawned application, > port_close() does do it. The "UNIX way" is to send SIGTERM, wait for a bit, > then send SIGKILL if the application did not stop. But I cannot find an erlang > function to send a signal to an external process. I made a temporary fix using > os:cmd("kill ..."), but it feels like a hack. > > Is there a reason not to have something such as os:kill(Signo, Pid) ? > Would erlang developers accept a patch to add it ? > It could be a first step toward support for proper termination via > port_control(), and an open_port() option to automatically close the child > process when the port is closed. 
> > Regards, > > -- > Nicolas Martyanoff > khaelin@REDACTED > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From eric.pailleau@REDACTED Fri Oct 16 19:25:47 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Fri, 16 Oct 2015 19:25:47 +0200 Subject: [erlang-questions] Erlang executable that reads command line arguments In-Reply-To: <5BB032C4-CEF9-4549-A6A6-ADF20D52E9C5@gmail.com> Message-ID: Hi, You have to add path to modules used in Emu Args. -pz getopt/ebin etc... Le?16 oct. 2015 01:41, Frederic BONFANTI a ?crit?: > > Hi, > > I'm getting weird error message when calling my escript from within a Python script > >> escript: exception error: undefined function getopt:parse/2 >> > From the shell a standalone script works just fine... any idea ? > > This is how I generate the escripted version of myapp : > >> #!/usr/bin/env escript >> %% -*- erlang -*- >> %%! -smp enable -sname make_myapp >> main(_) -> {ok, SourceCode} = file:read_file("myapp.erl"), >> {ok, _, BeamCode} = compile:file("myapp.erl", [binary, debug_info]), >> {ok, _, GetOpt} = compile:file("getopt.erl", [binary, debug_info]), >> escript:create("myapp", [shebang, {archive, [ >> {"myapp.erl", SourceCode}, >> {"myapp.beam", BeamCode}, >> {"getopt.beam",GetOpt}], >> []} >> ]). >> > Thanks in advance > >> On Oct 11, 2015, at 5:40 AM, Dmitry Kolesnikov wrote: >> > And if you are using rebar and you have deps to other projects you might add following lines to rebar.config > > {escript_incl_apps, [ getopt > > ]}. > > ?{escript_emu_args, "%%! +K true +P 10000000\n"}. > > Sad story, I've not figure out how to add erlang runtime to same package. Anyone, who receives you script needs to have one. > > Best Regards, > Dmitry > >-|-|-(*> > > On 11 Oct 2015, at 08:46, Frederic BONFANTI wrote: > >> Hi guys, >> >> given a simple Erlang code that consists in one file, let?s say test123.erl , I?d like to >> >> 1. use a Makefile to compile test123.erl into test.beam and then generate a distributable version of test123 (executable) >> >> 2. figure-out how to parse the command line arguments once this test command is called from regular shell, for example: >> ? ? ? ? >> ? ? ? ?test123 -A -x 555 >> >> If there are straightforward examples available, that will do. >> >> Thanks in advance >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > From eric.pailleau@REDACTED Fri Oct 16 19:40:47 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Fri, 16 Oct 2015 19:40:47 +0200 Subject: [erlang-questions] Sending signals to non-erlang processes In-Reply-To: <6b9f28d0-7da0-40c4-8374-327c96e22fb6@email.android.com> Message-ID: Sorry, I meant your daemon have to check PPID, not port. This imply you can change its code, not always possible... Le?16 oct. 2015 19:18, ?ric Pailleau a ?crit?: > > Hi, > Your port just have to check getppid and exit if PPID is 1. The case if a parent exit, the child inherits of parent 1 (init). > This can be as well with time interrupt if this cannot be done in a loop. > Regards > > Le?16 oct. 2015 11:30, Nicolas Martyanoff a ?crit?: > > > > Hi, > > > > I am writing an OTP application which spawn instances of a non-erlang daemon > > to run tests. I use a port to execute the daemon and read its output. 
> > > > However if for some reason my erlang application exits, the spawned daemon > > will not be killed. The documentation indicates: > > > > ??? If the port owner terminates, so does the port (and the external program, if > > ??? it is written correctly). > > > > The daemon itself behaves like most UNIX daemons and terminates on SIGTERM or > > SIGINT. But exiting the erlang VM (using ^C + abort) does not seem to do > > anything. > > > > Is that on purpose ? > > > > I also cannot find a way to actually stop the spawned application, > > port_close() does do it. The "UNIX way" is to send SIGTERM, wait for a bit, > > then send SIGKILL if the application did not stop. But I cannot find an erlang > > function to send a signal to an external process. I made a temporary fix using > > os:cmd("kill ..."), but it feels like a hack. > > > > Is there a reason not to have something such as os:kill(Signo, Pid) ? > > Would erlang developers accept a patch to add it ? > > It could be a first step toward support for proper termination via > > port_control(), and an open_port() option to automatically close the child > > process when the port is closed. > > > > Regards, > > > > -- > > Nicolas Martyanoff > > khaelin@REDACTED > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions From vladdu55@REDACTED Fri Oct 16 19:46:25 2015 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Fri, 16 Oct 2015 19:46:25 +0200 Subject: [erlang-questions] The preprocessor Message-ID: Hi! This is related to the new EEP 44, but different. That discusson made clear something that has been bothering me for a while, but I couldn't put my finger on it. The preprocessor runs in two different stages: one that handles -include and macros, and one that handles conditional compilation. Because of the restrictions that macros can't be multi-form and that conditional markers have to be well-formed inside one file, the latter stage is basically a parser, albeit a simpler one than erl_parse. Instead of building a complete tree, it throws away the parts that aren't used. In the current implementation, these stages are intertwined, as the lexing and parsing are done in lock-step. I wonder: would it be worth to separate these stages completely? The first stage is the one that few people like and many complain about. The latter is useful. In conclusion, what if the preprocessor is split in two and the second pass can be configured to return the whole tree, with the chunks guarded by conditional expressions? I think all tools that handle raw source code would benefit from having access to the actual processor instead of writing their own. The two smaller tools will probably be easier to maintain, too. best regards, Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From frederic.bonfanti@REDACTED Fri Oct 16 20:27:45 2015 From: frederic.bonfanti@REDACTED (Frederic Bonfanti) Date: Fri, 16 Oct 2015 13:27:45 -0500 Subject: [erlang-questions] Erlang executable that reads command line arguments In-Reply-To: References: Message-ID: <853C04E7-5577-4D71-924D-6D73A4B08362@gmail.com> Thanks, I got it right with rebar compile, rebar escriptize > On Oct 16, 2015, at 12:25 PM, ?ric Pailleau wrote: > > Hi, > You have to add path to modules used in Emu Args. -pz getopt/ebin etc... 
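For reference, a sketch of what the -pz advice looks like when written directly into the escript header; the deps/getopt/ebin path and the option spec below are assumptions, so adjust them to the real project layout:

#!/usr/bin/env escript
%% -*- erlang -*-
%%! -smp enable -pz deps/getopt/ebin
%% Sketch only: -pz must point at the directory that actually holds the
%% compiled getopt .beam files on the machine running the script.
main(Args) ->
    OptSpec = [{count, $c, "count", {integer, 1}, "How many times to run"}],
    case getopt:parse(OptSpec, Args) of
        {ok, {Opts, Rest}} ->
            io:format("options: ~p~nother args: ~p~n", [Opts, Rest]);
        {error, Reason} ->
            io:format("bad arguments: ~p~n", [Reason]),
            getopt:usage(OptSpec, "myapp")
    end.

With the path in place, getopt:parse/2 resolves whether the script is started from a shell or from another program such as the Python wrapper mentioned above.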
> From khaelin@REDACTED Fri Oct 16 20:37:07 2015 From: khaelin@REDACTED (Nicolas Martyanoff) Date: Fri, 16 Oct 2015 20:37:07 +0200 Subject: [erlang-questions] Sending signals to non-erlang processes In-Reply-To: References: <6b9f28d0-7da0-40c4-8374-327c96e22fb6@email.android.com> Message-ID: <20151016183706.GE3701@valhala.home> On 2015-10-16 19:40, ?ric Pailleau wrote: > I meant your daemon have to check PPID, not port. This imply you can change > its code, not always possible... This is indeed not necessarily possible. Thank you for the idea though. -- Nicolas Martyanoff khaelin@REDACTED From silviu.cpp@REDACTED Fri Oct 16 21:34:09 2015 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Fri, 16 Oct 2015 22:34:09 +0300 Subject: [erlang-questions] investigating memory issues Message-ID: Hello, I have an erlang application which is reading jobs from a gearman queue and call some HTTP url's (send's notifications over http). Beside this also saves some states in a redis/mysql database. Everything performs very nice excepting the fact that memory is increasing like crazy and get never released. The project is using the following deps: 1. Lager 2. hackeny - for the HTTP requests 3. econfig 4. folsom 5. emysql 6. mero 7. jsonx 8. tempo 9. erlang_gearman When I start the app it's starting with several hundred of MB but in 20 days is going to over 1.5 GB and still increasing if I don't restart the process. recon_alloc:memory: allocated -> shows at this moment 1454477752 used -> 1351474280 >From what I see 95 % of memory is on the binary_alloc. Fragmentation for this allocator is looking like: recon_alloc:fragmentation(current). [{{binary_alloc,1}, [{sbcs_usage,1.0}, {mbcs_usage,0.9404540505595212}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,691837288}, {mbcs_carriers_size,735641776}]}, {{binary_alloc,2}, [{sbcs_usage,1.0}, {mbcs_usage,0.9441229925677996}, {sbcs_block_size,0}, {sbcs_carriers_size,0}, {mbcs_block_size,567818272}, {mbcs_carriers_size,601424048}]}, I also tried to run the GC over all processes but nothing changing :( erlang:memory(total). => 1350813872 [erlang:garbage_collect(Pid) || Pid <- processes()]. erlang:memory(total). => 1347613584 I have no idea on how to identify where all this memory goes.. The sum of all processes as displayed by Observer app in the Process tab is insignificant. Any advice ? Silviu -------------- next part -------------- An HTML attachment was scrubbed... URL: From drohrer@REDACTED Fri Oct 16 22:24:46 2015 From: drohrer@REDACTED (Doug Rohrer) Date: Fri, 16 Oct 2015 16:24:46 -0400 Subject: [erlang-questions] investigating memory issues In-Reply-To: References: Message-ID: What version of Erlang are you using? One thing that comes to mind is a console handling binary memory leak in 18 before 18.03, which can cause problems if you're using lager's console handler. Updating to 18.03 or greater (18.1) should resolve the issue. Doug Rohrer > On Oct 16, 2015, at 3:34 PM, Caragea Silviu wrote: > > Hello, > > I have an erlang application which is reading jobs from a gearman queue and call some HTTP url's (send's notifications over http). > Beside this also saves some states in a redis/mysql database. > Everything performs very nice excepting the fact that memory is increasing like crazy and get never released. > > The project is using the following deps: > > 1. Lager > 2. hackeny - for the HTTP requests > 3. econfig > 4. folsom > 5. emysql > 6. mero > 7. jsonx > 8. tempo > 9. 
erlang_gearman > > When I start the app it's starting with several hundred of MB but in 20 days is going to over 1.5 GB and still increasing if I don't restart the process. > > recon_alloc:memory: > allocated -> shows at this moment 1454477752 > used -> 1351474280 > > From what I see 95 % of memory is on the binary_alloc. Fragmentation for this allocator is looking like: > > recon_alloc:fragmentation(current). > > [{{binary_alloc,1}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.9404540505595212}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,691837288}, > {mbcs_carriers_size,735641776}]}, > {{binary_alloc,2}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.9441229925677996}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,567818272}, > {mbcs_carriers_size,601424048}]}, > > I also tried to run the GC over all processes but nothing changing :( > > erlang:memory(total). => 1350813872 > > [erlang:garbage_collect(Pid) || Pid <- processes()]. > > erlang:memory(total). => 1347613584 > > I have no idea on how to identify where all this memory goes.. The sum of all processes as displayed by Observer app in the Process tab is insignificant. > > Any advice ? > > Silviu > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From eric.pailleau@REDACTED Fri Oct 16 22:31:11 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Fri, 16 Oct 2015 22:31:11 +0200 Subject: [erlang-questions] Sending signals to non-erlang processes In-Reply-To: <20151016183706.GE3701@valhala.home> Message-ID: <70987b20-3946-45f8-a626-c8d2b378a7c3@email.android.com> I suppose that Erlang is not posix compliant because it need to run on many platforms. But this creates some weird behaviour. Le?16 oct. 2015 20:37, Nicolas Martyanoff a ?crit?: > > On 2015-10-16 19:40, ?ric Pailleau wrote: > > I meant your daemon have to check PPID, not port. This imply you can change > > its code, not always possible... > > This is indeed not necessarily possible. Thank you for the idea though. > > -- > Nicolas Martyanoff > khaelin@REDACTED From eric.pailleau@REDACTED Sat Oct 17 00:55:50 2015 From: eric.pailleau@REDACTED (PAILLEAU Eric) Date: Sat, 17 Oct 2015 00:55:50 +0200 Subject: [erlang-questions] stdlib shell_catch_exception ? Message-ID: <56218076.2040900@wanadoo.fr> Hi, by reading http://www.erlang.org/doc/man/STDLIB_app.html shell_catch_exception = boolean() This parameter can be used to set the exception handling of the Erlang shell's evaluator process. but : ----------------------------------------------------------------------- Erlang/OTP 18 [erts-7.1] [source] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false] Eshell V7.1 (abort with ^G) 1> application:set_env(stdlib, shell_catch_exception, false). ok 2> 1 / 0 . ** exception error: an error occurred when evaluating an arithmetic expression in operator '/'/2 called as 1 / 0 3> application:set_env(stdlib, shell_catch_exception, true). ok 4> 1 / 0 . * exception error: an error occurred when evaluating an arithmetic expression in operator '/'/2 called as 1 / 0 5> ----------------------------------------------------------------------- I do not see any difference by setting true or false, even by starting a new local shell with ^G what this parameter aim to do ? 
Regards From montuori@REDACTED Sat Oct 17 01:09:44 2015 From: montuori@REDACTED (Kevin Montuori) Date: Fri, 16 Oct 2015 18:09:44 -0500 Subject: [erlang-questions] stdlib shell_catch_exception ? In-Reply-To: <56218076.2040900@wanadoo.fr> (PAILLEAU Eric's message of "Sat, 17 Oct 2015 00:55:50 +0200") References: <56218076.2040900@wanadoo.fr> Message-ID: >>>>> "ep" == PAILLEAU Eric writes: ep> I do not see any difference by setting true or false, ep> even by starting a new local shell with ^G ep> what this parameter aim to do ? Catch exceptions! Watch the PIDs: 9> application:set_env(stdlib, shell_catch_exception, false). ok 10> self(). <0.37.0> 11> 1/0. ** exception error: ... 12> self(). <0.46.0> 13> application:set_env(stdlib, shell_catch_exception, true). ok 14> 1/0. * exception error: ... 15> self(). <0.46.0> (Really useful if you're doing ETS work in the shell.) Cheers! k. -- Kevin Montuori montuori@REDACTED From eric.pailleau@REDACTED Sat Oct 17 01:24:05 2015 From: eric.pailleau@REDACTED (PAILLEAU Eric) Date: Sat, 17 Oct 2015 01:24:05 +0200 Subject: [erlang-questions] stdlib shell_catch_exception ? In-Reply-To: References: <56218076.2040900@wanadoo.fr> Message-ID: <56218715.2090606@wanadoo.fr> > (Really useful if you're doing ETS work in the shell.) > Hi, thanks ! make sens now. What puzzled me is that exception message was raised both case, but shell is killed and restarted by supervisor if 'false' and stay alive if 'true'. From vinoski@REDACTED Sat Oct 17 01:27:48 2015 From: vinoski@REDACTED (Steve Vinoski) Date: Fri, 16 Oct 2015 19:27:48 -0400 Subject: [erlang-questions] stdlib shell_catch_exception ? In-Reply-To: <56218715.2090606@wanadoo.fr> References: <56218076.2040900@wanadoo.fr> <56218715.2090606@wanadoo.fr> Message-ID: On Fri, Oct 16, 2015 at 7:24 PM, PAILLEAU Eric wrote: > > (Really useful if you're doing ETS work in the shell.) >> >> > Hi, > thanks ! make sens now. What puzzled me is that exception message was > raised both case, but shell is killed and restarted by supervisor if > 'false' and stay alive if 'true'. You can just say catch_exception, passing it a boolean argument: 1> self(). <0.33.0> 2> catch_exception(true). false 3> 1/0. * exception error: an error occurred when evaluating an arithmetic expression in operator '/'/2 called as 1 / 0 4> self(). <0.33.0> Also useful for working with sockets, since opening a socket in the shell and then having the shell die means the socket gets closed. --steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.santos@REDACTED Sat Oct 17 02:17:42 2015 From: michael.santos@REDACTED (Michael Santos) Date: Fri, 16 Oct 2015 20:17:42 -0400 Subject: [erlang-questions] Sending signals to non-erlang processes In-Reply-To: References: <20151016093005.GA3701@valhala.home> Message-ID: <20151017001742.GA3944@brk> On Fri, Oct 16, 2015 at 01:18:40PM +0200, Jesper Louis Andersen wrote: > On Fri, Oct 16, 2015 at 11:30 AM, Nicolas Martyanoff > wrote: > > > I also cannot find a way to actually stop the spawned application, > > port_close() does do it. The "UNIX way" is to send SIGTERM, wait for a bit, > > then send SIGKILL if the application did not stop. But I cannot find an > > erlang > > function to send a signal to an external process. I made a temporary fix > > using > > os:cmd("kill ..."), but it feels like a hack. 
> > > > Two options: misbehaving programs can be handled through Aleynikov's > https://github.com/saleyn/erlexec which wraps[0] them in a C++ helper > process which understands how to gracefully communicate to the Erlang world. > > Port programs normally communicate through a set of file descriptors, so > the program you spawn should detect and terminate if there are errors when > reading on the fd. I've been down this rabbit hole before, but I'm afraid I > forgot, again, how it all works. Perhaps this is good to document in a "How > to write behaving port programs" document and make it part of the OTP > documentation. A well behaved port program is like a process started from inetd. When the port is closed: * the read from stdin will return 0 bytes (EOF) or an error * writing to stdout will result in a SIGPIPE being sent to the port process or return an error (EPIPE) In both cases, it is up to the port process to exit if there is an error condition. A simple way of testing the behaviour of closing stdin: % Start a port running the cat command 1> Port = open_port({spawn, "/bin/cat"}, []). #Port<0.21242> % Send some data and get a response 2> port_command(Port, "test\n", []). true 3> flush(). Shell got {#Port<0.21242>,{data,"test\n"}} ok % Get the PID of the command 4> erlang:port_info(Port). [{name,"/bin/cat"}, {links,[<0.60.0>]}, {id,12197}, {connected,<0.60.0>}, {input,5}, {output,5}, {os_pid,7584}] % Close the port 5> port_close(Port). true % Confirm the port has exited 6> os:cmd("kill -0 7584"). "/bin/sh: 1: kill: No such process\n\n" Similarly, we can test the behaviour when writing to stdout: % Write to stdout 1> Port = open_port({spawn, "/usr/bin/yes"}, []), Info = erlang:port_info(Port), port_close(Port), Info. [{name,"/usr/bin/yes"}, {links,[<0.60.0>]}, {id,12197}, {connected,<0.60.0>}, {input,0}, {output,0}, {os_pid,7845}] 2> os:cmd("kill -0 7845"). "/bin/sh: 1: kill: No such process\n\n" Here is an example of a badly behaving port: #!/bin/bash while :; do echo test sleep 1 done Observing the behaviour under strace: % erlang shell 1> Port = open_port({spawn, "bad.sh"}, []). #Port<0.21242> 2> flush(). Shell got {#Port<0.21242>,{data,"test\n"}} Shell got {#Port<0.21242>,{data,"test\n"}} Shell got {#Port<0.21242>,{data,"test\n"}} Shell got {#Port<0.21242>,{data,"test\n"}} ok 3> port_close(Port). true # strace $ strace -p 8323 write(1, "test\n", 5) = 5 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb6f133c8) = 8420 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 8420 --- SIGCHLD (Child exited) @ 0 (0) --- sigreturn() = ? (mask now [QUIT ILL TRAP BUS SEGV USR2 CHLD STOP TSTP]) write(1, "test\n", 5) = -1 EPIPE (Broken pipe) --- SIGPIPE (Broken pipe) @ 0 (0) --- clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb6f133c8) = 8421 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 8421 --- SIGCHLD (Child exited) @ 0 (0) --- sigreturn() = ? (mask now [QUIT ILL TRAP BUS SEGV USR2 CHLD STOP TSTP]) write(1, "test\n", 5) = -1 EPIPE (Broken pipe) --- SIGPIPE (Broken pipe) @ 0 (0) --- So the shell script is running the echo in a forked process. When the child writes to stdout, it is being killed by SIGPIPE but the shell is ignoring the child's non-zero exit status. 
And the same shell script modified to exit when stdin has been closed: #!/bin/bash while read l; do echo "test" sleep 1 done So: * a port process must be written to exit if stdin or stdout are closed * badly behaving port processes may not exit. They may be unkillable for many reasons: masking signals, running commands in a subprocess or setuid and running as different user. If we're running port processes, why not run another port process to clean up? For example: #!/bin/sh # Port = open_port({spawn, "kill.sh"}, []), port_command(Port, "1234 9", []). while read l; do set -- $l kill -s $2 $1 done Which is equivalent to running: os:cmd("kill -9 1234"). From dmkolesnikov@REDACTED Sat Oct 17 12:27:22 2015 From: dmkolesnikov@REDACTED (dmkolesnikov@REDACTED) Date: Sat, 17 Oct 2015 13:27:22 +0300 Subject: [erlang-questions] investigating memory issues In-Reply-To: References: Message-ID: Hello, Some of library or your code leaks binary. This usually happened went large binary blob is not de-referenced during processing. GC would not help here at all. You need to find and fix code that does it. Dmitry Sent from my iPhone > On 16 Oct 2015, at 22:34, Caragea Silviu wrote: > > Hello, > > I have an erlang application which is reading jobs from a gearman queue and call some HTTP url's (send's notifications over http). > Beside this also saves some states in a redis/mysql database. > Everything performs very nice excepting the fact that memory is increasing like crazy and get never released. > > The project is using the following deps: > > 1. Lager > 2. hackeny - for the HTTP requests > 3. econfig > 4. folsom > 5. emysql > 6. mero > 7. jsonx > 8. tempo > 9. erlang_gearman > > When I start the app it's starting with several hundred of MB but in 20 days is going to over 1.5 GB and still increasing if I don't restart the process. > > recon_alloc:memory: > allocated -> shows at this moment 1454477752 > used -> 1351474280 > > From what I see 95 % of memory is on the binary_alloc. Fragmentation for this allocator is looking like: > > recon_alloc:fragmentation(current). > > [{{binary_alloc,1}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.9404540505595212}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,691837288}, > {mbcs_carriers_size,735641776}]}, > {{binary_alloc,2}, > [{sbcs_usage,1.0}, > {mbcs_usage,0.9441229925677996}, > {sbcs_block_size,0}, > {sbcs_carriers_size,0}, > {mbcs_block_size,567818272}, > {mbcs_carriers_size,601424048}]}, > > I also tried to run the GC over all processes but nothing changing :( > > erlang:memory(total). => 1350813872 > > [erlang:garbage_collect(Pid) || Pid <- processes()]. > > erlang:memory(total). => 1347613584 > > I have no idea on how to identify where all this memory goes.. The sum of all processes as displayed by Observer app in the Process tab is insignificant. > > Any advice ? > > Silviu > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From jesper.louis.andersen@REDACTED Sat Oct 17 12:50:51 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sat, 17 Oct 2015 12:50:51 +0200 Subject: [erlang-questions] investigating memory issues In-Reply-To: References: Message-ID: On Fri, Oct 16, 2015 at 9:34 PM, Caragea Silviu wrote: > 7. jsonx This is a shot but I think this library is your problem. When it parses a JSON document, it uses enif_make_sub_binary on the binary() containing the JSON string. 
This means if you do not use binary:copy() on data you hoist out of the JSON document, then you are keeping the whole document around for as long as you refer to that subbinary. Try wrapping strings you hoist out in binary:copy/1. This will slow you down, but it should remove the reference to the original binary which would make your system be able to reclaim that memory. Alternatively, if your system is not highly loaded, you could simply replace jsonx for jsx which is written in Erlang and avoids such problems (at the expense of slower parsing, but JSON will *never* win a parsing race anyway and it is better to swtch the format then). I've seen systems stuffing such subbinaries into ETS, and then your ETS table is keeping data around. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulperegud@REDACTED Sat Oct 17 14:36:58 2015 From: paulperegud@REDACTED (Paul Peregud) Date: Sat, 17 Oct 2015 14:36:58 +0200 Subject: [erlang-questions] "bad receive timeout value " error In-Reply-To: References: <68888281.ec65.150710a9a35.Coremail.fw339tgy@126.com> Message-ID: handle_info(Msg, Bot) -> process_inf(Msg, Bot), {noreply, normal, Bot}. %% <--- this is the line that crashes your process. When you are returning 3 element tuple from handle_info, gen_server.erl assumes that third element of the tuple is a timeout value. And you are returning Bot, which is not integer() nor 'infinity'. For details - check docs of gen_server. More specifically definition of callback function Module:handle_info/3. On Fri, Oct 16, 2015 at 4:55 PM, derek wrote: > On Fri, Oct 16, 2015 at 7:25 AM, ??? wrote: >> >> hi , >> everybody, >> >> I wrote a program, using the reader module to receive data, and then on the wrong. > > Why not include your program in the email body? > > I saw you have a .rar compressed attachment, but in the mailing list, > no one want to look at your attachment, especially when it's in > proprietary format > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -- Best regards, Paul Peregud +48602112091 From r.wobben@REDACTED Sun Oct 18 21:27:13 2015 From: r.wobben@REDACTED (Roelof Wobben) Date: Sun, 18 Oct 2015 21:27:13 +0200 Subject: [erlang-questions] Can I do the same with a fold easily Message-ID: <5623F291.4050809@home.nl> Hello, I have this two functions : filter(List, Filter) -> lists:filter(Filter, List). split2(List) -> { filter(List, fun(X) -> math_functions:odd(X) end), filter(List, fun(X) -> math_functions:even(X) end) }. which produces both {[1,3],[2,4]} Could I this also do with a fold or is this the best solution. Roelof [ 1,3],[2,4]} From freza@REDACTED Sun Oct 18 21:55:45 2015 From: freza@REDACTED (Jachym Holecek) Date: Sun, 18 Oct 2015 15:55:45 -0400 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <5623F291.4050809@home.nl> References: <5623F291.4050809@home.nl> Message-ID: <20151018195544.GA13395@circlewave.net> # Roelof Wobben 2015-10-18: > I have this two functions : > > filter(List, Filter) -> > lists:filter(Filter, List). What is the point of this? > split2(List) -> > { filter(List, fun(X) -> math_functions:odd(X) end), > filter(List, fun(X) -> math_functions:even(X) end) }. > > which produces both {[1,3],[2,4]} > > Could I this also do with a fold or is this the best solution. Entirely untested, but the best is to 1) keep it real 2) keep it simple. :-) foo(L) -> foo(L, [], []). 
foo([X | L], Es, Os) -> case (X rem 2) of 0 -> foo(L, [X | Es], Os); 1 -> foo(L, Es, [X | Os]) end; foo([], Es, Os) -> {Es, Os}. Add reverse/1 calls to the terminal branch if you particularly care about the order of results. BR, -- Jachym From r.wobben@REDACTED Sun Oct 18 22:20:58 2015 From: r.wobben@REDACTED (Roelof Wobben) Date: Sun, 18 Oct 2015 22:20:58 +0200 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <20151018195544.GA13395@circlewave.net> References: <5623F291.4050809@home.nl> <20151018195544.GA13395@circlewave.net> Message-ID: <5623FF2A.3010600@home.nl> An HTML attachment was scrubbed... URL: From montuori@REDACTED Sun Oct 18 22:49:49 2015 From: montuori@REDACTED (Kevin Montuori) Date: Sun, 18 Oct 2015 15:49:49 -0500 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <5623F291.4050809@home.nl> (Roelof Wobben's message of "Sun, 18 Oct 2015 21:27:13 +0200") References: <5623F291.4050809@home.nl> Message-ID: >>>>> "rw" == Roelof Wobben writes: rw> Could I this also do with a fold or is this the best solution. I think lists:partition/2 would be the best choice. I know you're running through some exercies and all the fabricated drudgery that entails but taking a look at the source for partition/2 might help you along. k. -- Kevin Montuori montuori@REDACTED From eric.pailleau@REDACTED Sun Oct 18 23:10:23 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Sun, 18 Oct 2015 23:10:23 +0200 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <5623F291.4050809@home.nl> Message-ID: <20489d22-b749-4edf-bc7a-58d94c5185af@email.android.com> Hi, A number cannot be both even and odd, so simply take odd numbers in a variable O from list L , even numbers will be L -- O. Le?18 oct. 2015 21:27, Roelof Wobben a ?crit?: > > Hello, > > I have this two functions : > > filter(List, Filter) -> > ???? lists:filter(Filter, List). > > split2(List) -> > ???? { filter(List, fun(X) -> math_functions:odd(X) end), filter(List, > fun(X) -> math_functions:even(X) end) }. > > which produces both {[1,3],[2,4]} > > Could I this also do with a fold or is this the best solution. > > Roelof > > > [ > > 1,3],[2,4]} > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From ok@REDACTED Mon Oct 19 04:34:01 2015 From: ok@REDACTED (ok@REDACTED) Date: Mon, 19 Oct 2015 15:34:01 +1300 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <5623F291.4050809@home.nl> References: <5623F291.4050809@home.nl> Message-ID: <1a604470871ced48093b207ec7cd28c5.squirrel@chasm.otago.ac.nz> "Roelof Wobben" wrote: > I have this two functions : > > filter(List, Filter) -> > lists:filter(Filter, List). I see what you did there, you swapped the argument order. But doing that and keeping the same name is confusing; it is so easy for people to see what they KNOW must be there for 'filter' and so hard to see what IS there. > split2(List) -> > { filter(List, fun(X) -> math_functions:odd(X) end), > filter(List, fun(X) -> math_functions:even(X) end) }. > > which produces both {[1,3],[2,4]} Since I'm not familiar with the math_functions: module, I would prefer to see split2(Integers) -> { [X || X <- Integers, X band 1 =:= 1] , [X || X <- Integers, X band 1 =:= 0] }. But your code is fine. > Could I this also do with a fold or is this the best solution. It all depends on what 'best' means. 
List comprehensions could be implemented more efficient;y than they currently are, which could make the direct list comprehension version the fastest and most space efficient. A function that traverses a list in a single pass is a nice thing to have because it can be fused with a function that generates a list (either by the compiler, as in Haskell, or by you, as in Erlang) to eliminate the intermediate list. If you are interested, look up 'deforestation' in the context of functional programming languages. Let's consider two folds. (1) Work from left to right building up answers as we go. E := O := [] for X in Integers do if odd(X) then O := [X|O] else E := [X|E] oops, the answers are back to front. {reverse(O), reverse(E)}. Since an Erlang function has only one result, we have to pack E and O together into a single term. OE := {[],[]} for X in Integers do OE := if odd(X) then {[X|O0],E0} else {O0,[X|E0]} where {O0,E0} = OE {reverse(O0),reverse(E0)} where {O0,E0} = OE Now _that_ is a (left) fold with a cleanup step at the end: split3(Integers) -> {O,E} = lists:foldl(fun (X, {O0,E0}) -> case X band 1 of 1 -> {[X|O0], E0} ; 0 -> {O0, [X|E0]} end end, {[], []}, Integers), {reverse(O), reverse(E)}. (2) We can avoid the reversing by working from right to left. split4(Integers) -> lists:foldr(fun (X, {O0,E0)) -> case X band 1 of 1 -> {[X|O0], E0} ; 0 -> {O0, [X|E0]} end end, {[], []}, Integers), (3) You could expand the foldr/3 version out by hand. If an Erlang function could return two values, like a Lisp or Mesa function or a Prolog predicate, doing so could eliminate the tuples. Since an Erlang function _can't_ do that, we're stuck with turning over N tuples given N integers, in addition to the one tuple we really want. (A really good compiler could in fact eliminate the extra N tuples; whether the Erlang compiler is that good yet I don't know.) odds_and_evens(Integers) -> odds_and_evens(Integers, [], []). odds_and_evens([X|Xs], O0, E0) -> {O1,E1} = odds_and_evens(Xs, O0, E0), case X band 1 of 1 -> {[X|O1], E1} ; 0 -> {O1, [X|E1]} end; odds_and_evens([], O, E) -> {O, E}. (4) Alternatively, you could read the documentation for the lists: module where you will find lists:partition/2. split5(Integers) -> lists:partition(fun (X) -> X band 1 =:= 1 end, Integers). I think (4) has the edge in readability. From ok@REDACTED Mon Oct 19 04:44:10 2015 From: ok@REDACTED (ok@REDACTED) Date: Mon, 19 Oct 2015 15:44:10 +1300 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <5623FF2A.3010600@home.nl> References: <5623F291.4050809@home.nl> <20151018195544.GA13395@circlewave.net> <5623FF2A.3010600@home.nl> Message-ID: <2ca4f3f5b4a305039ce53d1291d77179.squirrel@chasm.otago.ac.nz> Roelof Wobben had > filter(List, Filter) -> > lists:filter(Filter, List). asked: > What is the point of this? Roelof Wobben replied > These were my solutions to these exercises > Add a higher-order function to math_functions.erl > called filter(F, L) which returns all the elements X in L > for which F(X) is true. But you did not do that. And you did two things wrong. First, you swapped the argument order to L, F instead of the F, L order that you were asked for. Second, you re-used someone *else's* filter/2 function instead of writing your own, as the exercise asked. In software engineering, such reuse is a good thing; the whole Erlang/OTP system *exists* to be reused like that. But for an exercise, you were supposed to write filter(F, [X|Xs]) -> ????, case F(X) of true -> ???? ; false -> ???? 
end, ????; filter(_, []) -> ????. or something like that, filling in ???? appropriately. From ok@REDACTED Mon Oct 19 04:54:41 2015 From: ok@REDACTED (ok@REDACTED) Date: Mon, 19 Oct 2015 15:54:41 +1300 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <20489d22-b749-4edf-bc7a-58d94c5185af@email.android.com> References: <20489d22-b749-4edf-bc7a-58d94c5185af@email.android.com> Message-ID: "?ric Pailleau" > A number cannot be both even and odd, so simply take odd numbers in a > variable O from list L , even numbers will be L -- O. Here we run into the question "what does BEST mean?" Now Xs -- Ys cannot assume anything about the contents of Xs or Ys, other than them both being well formed lists. So while it *could* be implemented in O(|Xs|+|Ys|+|Xs|lg|Ys|) time by building some kind of balanced search tree from Ys, or even better by building some kind of hashed set, the cost of building the supporting data structure would likely be worse than the cost of traversing L twice or using lists:partition/2. From jonas.falkevik@REDACTED Mon Oct 19 10:14:43 2015 From: jonas.falkevik@REDACTED (Jonas Falkevik) Date: Mon, 19 Oct 2015 10:14:43 +0200 Subject: [erlang-questions] try catch params in Core Erlang In-Reply-To: References: Message-ID: It seems to be the stack trace at least in R16B03-1. erl source: -module(test). -compile(export_all). test() -> try foo:bar() catch some:thing -> ok; error:E -> io:format("third: ~p~n", [E]) end. core erlang modification: @@ -18,10 +18,10 @@ <'some','thing',_cor4> when 'true' -> 'ok' %% Line 9 - <'error',E,_cor5> when 'true' -> + <'error',_E,F> when 'true' -> %% Line 10 call 'io':'format' - ([116|[104|[105|[114|[100|[58|[32|[126|[112|[126|[110]]]]]]]]]]], [E|[]]) + ([116|[104|[105|[114|[100|[58|[32|[126|[112|[126|[110]]]]]]]]]]], [F|[]]) ( <_cor3,_cor2,_cor1> when 'true' -> primop 'raise' (_cor1, _cor2) Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false] Eshell V5.10.4 (abort with ^G) 1> c(test, [to_core]). ** Warning: No object file created - nothing loaded ** ok 2> c(test, [from_core]). {ok,test} 3> test:test(). third: [[{foo,bar,[],[]}, {test,test,0,[]}, {erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]}, {shell,exprs,7,[{file,"shell.erl"},{line,674}]}, {shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]}, {shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]| -000000000000000016] ok /Jonas On Oct 15, 2015, at 19:57 , Vladimir Gordeev wrote: > If you compile some try-catch statements into Core Erlang, you may notice, > that it receives three params in exception pattern: http://tryerl.seriyps.ru/#id=3bf3 > > this: > > try foo:bar() > catch > some:thing -> ok > end. > > into this: > > try > call 'foo':'bar' > () > of <_cor0> -> > _cor0 > catch <_cor3,_cor2,_cor1> -> > case <_cor3,_cor2,_cor1> of > %% Line 7 > <'some','thing',_cor4> when 'true' -> > 'ok' > ( <_cor3,_cor2,_cor1> when 'true' -> > primop 'raise' > (_cor1, _cor2) > -| ['compiler_generated'] ) > end > > In "An introduction to Core Erlang" catch described as taking two params: http://www.erlang.org/workshop/carlsson.ps > > Question is: what is this third param (_cor4) for? > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From silviu.cpp@REDACTED Mon Oct 19 10:05:35 2015 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Mon, 19 Oct 2015 11:05:35 +0300 Subject: [erlang-questions] reloading a NIF library without restarting the server Message-ID: Hello, It is possible to reload a nif library ? For example I have an app using NIF and after I'm doing some changes over the native code I want to reload the new library in production without restarting the server.. I succeeded to reload the erlang code but seems it's still using the oldest NIF. Silviu -------------- next part -------------- An HTML attachment was scrubbed... URL: From thoffmann@REDACTED Mon Oct 19 10:51:42 2015 From: thoffmann@REDACTED (Torben Hoffmann) Date: Mon, 19 Oct 2015 10:51:42 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: <5616E93B.4060608@hu-berlin.de> References: <5616E93B.4060608@hu-berlin.de> Message-ID: Hi J?rgen, With the risk of showing my inability to understand your problem I would challenge the need for the transaction server altogether. As you say, the messages that the processing server has yet to process will be lost if the server dies, so re-sending is required. I would simply deal with this in the client. When you send a request you monitor the server, if it dies, you re-send when the service is up again. The server monitors clients, if the client dies, the server stops the pending and ongoing jobs for that client. I have used this approach is my Game of Life implementation - it seems to work. There might be room for a little library for some of the book keeping involved in this, but given that there can be so many variations on this very simple pattern I fear that it will be hard to create a generic library for this. Cheers, Torben J?rgen Brandt writes: > Hello, > > is there an Erlang library for transactional message > passing, using patterns in communication and error handling > to improve fault tolerance? > > This (or a similar) question may have been asked before and, > of course, there is plenty of research on fault tolerance > and failure transparency. Nevertheless, in my work with > scientific workflows it seems that certain patterns in error > handling exist. In this mail I'm trying to motivate a way to > externalize common error handling in a standardized service > (a transaction server) but I'm unsure whether such a thing > already exists, whether I'm missing an important feature, > and whether it's a good idea anyway. > > Large distributed systems are composed of many services. > They process many tasks concurrently and need fault > tolerance to yield correct results and maintain > availability. Erlang seemed a good choice because it > provides facilities to automatically improve availability, > e.g., by using supervisers. In addition, it encourages a > programming style that separates processing logic from > error handling. In general, each service has its own > requirements, implying that a general approach to error > handling (beyond restarting) is infeasible. However, if an > application exhibits recurring patterns in the way error > handling corresponds to the messages passed between > services, we can abstract these patterns to reuse them. > > > Fault tolerance is important because it directly translates > to scalability. > > Consider an application (with transient software faults), > processing user queries. The application reports errors back > to the user as they appear. 
If a user query is a long- > running job (hours, days), the number of subtasks created > from this job (thousands), the number of services to process > one subtask, and the number of machines involved are large, > then the occurrence of an error is near to certain. Quietly > restarting the application and rerunning the query may > reduce the failure probability but even if the application > succeeds, the number of retries and, thus, the time elapsed > to success may be prohibitive. What is needed is a system > that does not restart the whole application but only the > service that failed reissuing only the unfinished requests > that this service received before failing. Consequently, the > finer the granularity at which errors are handled, the less > work has to be redone when errors occur, allowing a system > to host longer-running jobs, containing more subtasks, > involving more services for each subtask, and running on > more machines in feasible time. > > > Scientific workflows are a good example for a large > distributed application exhibiting patterns in communication > and error handling. > > A scientific workflow system consumes an input query in the > form of an expression in the workflow language. On > evaluation of this expression it identifies subtasks that > can be executed in any order. E.g., a variant calling > workflow from bioinformatics unfolds into several hundred > to a thousand subtasks each of which is handed down in the > form of requests through a number of services: Upon > identification of the subtask in (i) the query interpreter, > a request is sent to (ii) a cache service. This service > keeps track of all previously run subtasks and returns the > cached result if available. If not, a request is sent to > (iii) a scheduling service. This service determines the > machine, to run the subtask. The scheduler tries both, to > adequately distribute the work load among workers (load > balancing) and to minimize data transfers among nodes (data > locality). Having decided where to run the subtask, a > request is sent to (iv) the corresponding worker which > executes the subtask and returns the result up the chain of > services. Every subtask goes through this life cycle. > > Apart from the interplay of the aforementioned services we > want the workflow system to behave in a particular way when > one of these services dies: > > - Each workflow is evaluated inside its own interpreter > process. A workflow potentially runs for a long time and > at some point we might want to kill the interpreter > process. When this happens, the system has to identify all > open requests originating from that interpreter and cancel > them. > > - When an important service (say the scheduler) dies, a > supervisor will restart it, this way securing the > availability of the application. Upon a fresh start, none > of the messages this service has received will be there > anymore. Instead of having to notify the client of this > important service (in this case the cache) to give it the > chance to repair the damage, we want all the messages, > that have been sent to the important service (scheduler) > and have not been quited, to be resent to the freshly > started service (scheduler). > > - When a worker dies, from a hardware fault, we cannot > expect a supervisor to restart it (on the same machine). > In this case we want to notify the scheduler not to expect > a reply to his request anymore. 
Also we want to reissue > the original request to the scheduler to give it the > chance to choose a different machine to run the subtask > on. > > - When a request is canceled at a high level (say at the > cache level because the interpreter died) All subsequent > requests (to the scheduler and in the worker) > corresponding to the original request should have been > canceled before the high level service (cache) is > notified, thereby relieving him of the duty to cancel them > himself. > > > Since there is no shared memory in Erlang, the state of a > process is defined only by the messages received (and its > init parameters which are assumed constant). To reestablish > the state of a process after failure we propose three > different ways to send messages to a process and their > corresponding informal error handling semantics: > > tsend( Dest, Req, replay ) -> TransId > when Dest :: atom(), > Req :: term(), > TransId :: reference(). > > Upon calling tsend/3, a transaction server creates a record > of the request to be sent and relays it to the destination > (must be a registered process). At the same time it creates > a monitor on both the request's source and destination. When > the source dies, it will send an abort message to the > destination. When the destination dies, initially, nothing > happens. When the supervisor restarts the destination, the > transaction server replays all unfinished requests to the > destination. > > tsend( Dest, Req, replay, Precond ) -> TransId > when Dest :: atom(), > Req :: term(), > Precond :: reference(), > TransId :: reference(). > > The error handling for tsend/4 with replay works just the > same as tsend/3. Additionally, when the request with the id > Precond is canceled, this request is also canceled. > > tsend( Dest, Req, reschedule, Precond ) -> TransId > when Dest :: atom() | pid(), > Req :: term(), > Precond :: reference(), > TransId :: reference(). > > Upon calling tsend/4, with reschedule, as before, a > transaction server creates a record of the request and > monitors both source and destination. When the destination > dies, instead of waiting for a supervisor to restart it, the > original request identified with Precond is first canceled > at the source and then replayed to the source. Since we do > not rely on the destination to be a permanent process, we > can also identify it per Pid while we had to require a > registered service under replay error handling. > > commit( TransId, Reply ) -> ok > when TransId :: reference(), > Reply :: term(). > > When a service is done working on a request, it sends a > commit which relays the reply to the transaction source and > removes the record for this request from the transaction > server. > > A service participating in transaction handling has to > provide the following two callbacks: > > handle_recv( TransId::reference(), Req::_, State::_ ) -> > {noreply, NewState::_}. > > handle_abort( TransId::reference(), State::_ ) -> > {noreply, NewState::_}. > > While the so-defined transaction protocol is capable of > satisfying the requirements introduced for the workflow > system example the question is, is it general enough to be > applicable also in other applications? > > > This conduct has its limitations. > > The introduced transaction protocol may be suited to deal > with transient software faults (Heisenbugs) but its > effectiveness to mitigate hardware faults or deterministic > software faults (Bohrbugs) is limited. 
In addition, with the > introduction of the transaction server we created a single > point of failure. > > > Concludingly, the restarting of a service by a supervisor is > sufficient to secure the availability of a service in the > presence of software faults but large scale distributed > systems require a more fine-grained approach to error > handling. To identify patterns in message passing and error > handling gives us the opportunity to reduce error handling > code and, thereby, avoid the introduction of bugs into error > handling. The proposed transaction protocol may be suitable > to achieve this goal. > > > I had hoped to get some feedback on the concept, in order to > have an idea whether I am on the right track. If a similar > library is already around and I just couldn't find it, if I > am missing an obvious feature, a pattern that is important > but just doesn't appear in the context of scientific > workflows, it would be helpful to know about it. Thanks in > advance. > > Cheers > J?rgen > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -- Torben Hoffmann Architect, basho.com M: +45 25 14 05 38 From ferenc.holzhauser@REDACTED Mon Oct 19 11:32:54 2015 From: ferenc.holzhauser@REDACTED (Ferenc Holzhauser) Date: Mon, 19 Oct 2015 11:32:54 +0200 Subject: [erlang-questions] reloading a NIF library without restarting the server In-Reply-To: References: Message-ID: Hi, you could try this: http://www.erlang.org/doc/reference_manual/code_loading.html#id88462 Ferenc On 19 October 2015 at 10:05, Caragea Silviu wrote: > Hello, > > It is possible to reload a nif library ? > > For example I have an app using NIF and after I'm doing some changes over > the native code I want to reload the new library in production without > restarting the server.. > > I succeeded to reload the erlang code but seems it's still using the > oldest NIF. > > Silviu > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mremond@REDACTED Mon Oct 19 12:16:33 2015 From: mremond@REDACTED (=?utf-8?Q?Micka=C3=ABl_R=C3=A9mond?=) Date: Mon, 19 Oct 2015 12:16:33 +0200 Subject: [erlang-questions] Basho joins Advanced Erlang Initiative - Workshop schedule updated Message-ID: <9FDD4B33-66EE-4728-ADB0-0DC97CB44F7A@process-one.net> Hello, We are happy to announce that Basho Technologies Inc. is joining Advanced Erlang Initiative. Basho is renowned for its highly resilient, available and massively scalable NoSQL distributed database product line. Developed in Erlang, these tools have been widely used and are helping Erlang based products earn recognition well outside of the Erlang community. When Advanced Erlang Initiative was launched last June, it was clear that the ecosystem of Erlang products was vibrant. We had already received good feedback about the initiative. The first Workshop, organised by Quviq early this month was a success (videos of the talks are coming soon). With Basho joining the effort, we are proud to grow the interest around such an initiative. We have now gathered a group of four Erlang companies building Erlang products and working together to promote their work. We are also adding a new Advanced Erlang workshop on our schedule. 
Basho?s Bryan Hunt, Solutions Architect, will be leading the Basho Advanced Erlang workshop schedule which will take place on December 9th in London. Let's meet during one of our workshops on one of the following topic: Quickcheck, ejabberd, Cowboy or Riak. The schedule is available on Advanced Erlang website: http://advanced-erlang.com/workshops/ Cheers, -- Advanced Erlang Initiative Members http://advanced-erlang.com From askjuise@REDACTED Mon Oct 19 13:33:22 2015 From: askjuise@REDACTED (Alexander Petrovsky) Date: Mon, 19 Oct 2015 14:33:22 +0300 Subject: [erlang-questions] investigating memory issues In-Reply-To: References: Message-ID: If I understand correctly, you suggest call X = binary:copy(SomeBinary) and then call jsonx parser with X as parameter? Or just going through parsed result and then call binary:copy over each binary element? 2015-10-17 13:50 GMT+03:00 Jesper Louis Andersen < jesper.louis.andersen@REDACTED>: > > On Fri, Oct 16, 2015 at 9:34 PM, Caragea Silviu > wrote: > >> 7. jsonx > > > This is a shot but I think this library is your problem. When it parses a > JSON document, it uses enif_make_sub_binary on the binary() containing the > JSON string. This means if you do not use binary:copy() on data you hoist > out of the JSON document, then you are keeping the whole document around > for as long as you refer to that subbinary. > > Try wrapping strings you hoist out in binary:copy/1. This will slow you > down, but it should remove the reference to the original binary which would > make your system be able to reclaim that memory. Alternatively, if your > system is not highly loaded, you could simply replace jsonx for jsx which > is written in Erlang and avoids such problems (at the expense of slower > parsing, but JSON will *never* win a parsing race anyway and it is better > to swtch the format then). > > I've seen systems stuffing such subbinaries into ETS, and then your ETS > table is keeping data around. > > > -- > J. > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -- ?????????? ????????? / Alexander Petrovsky, Skype: askjuise Phone: +7 914 8 820 815 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Mon Oct 19 13:58:24 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 19 Oct 2015 13:58:24 +0200 Subject: [erlang-questions] investigating memory issues In-Reply-To: References: Message-ID: The latter, ... If you have some JSON and you decode that into, say, #{ <<"key">> := SomeValue, ... } Then SomeValue is a subbinary which keeps JSON alive. Now, what to do depends on the situation: 1. If SomeValue is not long-lived and you are throwing JSON away roughly at the same time as SomeValue, then you have no problems right away. 2. If SomeValue is long-lived and JSON is not, then you could potentially keep JSON around for a long time. This is where you can do X = binary:copy(SomeValue) to force X to be a fresh copy. This then puts you back into situation 1, and you will not have a problem. It is not a priori obvious what is the right thing to do. There are trade-offs with processing speed, ease of use, gc behaviour and so on. If, for instance, jsonx made a new binary itself, then it would lose every performance benchmark to an implementation which uses the subbinary construction. 
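To make the two situations above concrete, here is a minimal sketch, not from
the original thread: plain binary matching stands in for jsonx, because
matching a segment out of a large reference-counted binary produces the same
kind of sub binary; the segment sizes and the table name are illustrative
assumptions.

%% Value below is a sub binary of Document: as long as Value is referenced,
%% the whole Document cannot be garbage collected.
-module(subbin_example).
-export([store_key/2]).

store_key(Tab, Document) when is_binary(Document) ->
    <<_Skip:16/binary, Value:8/binary, _Rest/binary>> = Document,
    %% Situation 2 from above: Value is long-lived (stored in ETS),
    %% so copy it and let the large Document be reclaimed.
    ets:insert(Tab, {key, binary:copy(Value)}).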
And people tend to value their decoding speed over the ability to take out a VM for some odd reason :) On Mon, Oct 19, 2015 at 1:33 PM, Alexander Petrovsky wrote: > If I understand correctly, you suggest call X = binary:copy(SomeBinary) > and then call jsonx parser with X as parameter? Or just going through > parsed result and then call binary:copy over each binary element? > > 2015-10-17 13:50 GMT+03:00 Jesper Louis Andersen < > jesper.louis.andersen@REDACTED>: > >> >> On Fri, Oct 16, 2015 at 9:34 PM, Caragea Silviu >> wrote: >> >>> 7. jsonx >> >> >> This is a shot but I think this library is your problem. When it parses a >> JSON document, it uses enif_make_sub_binary on the binary() containing the >> JSON string. This means if you do not use binary:copy() on data you hoist >> out of the JSON document, then you are keeping the whole document around >> for as long as you refer to that subbinary. >> >> Try wrapping strings you hoist out in binary:copy/1. This will slow you >> down, but it should remove the reference to the original binary which would >> make your system be able to reclaim that memory. Alternatively, if your >> system is not highly loaded, you could simply replace jsonx for jsx which >> is written in Erlang and avoids such problems (at the expense of slower >> parsing, but JSON will *never* win a parsing race anyway and it is better >> to swtch the format then). >> >> I've seen systems stuffing such subbinaries into ETS, and then your ETS >> table is keeping data around. >> >> >> -- >> J. >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > > > -- > ?????????? ????????? / Alexander Petrovsky, > > Skype: askjuise > Phone: +7 914 8 820 815 > > -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sverker.eriksson@REDACTED Mon Oct 19 14:02:00 2015 From: sverker.eriksson@REDACTED (Sverker Eriksson) Date: Mon, 19 Oct 2015 14:02:00 +0200 Subject: [erlang-questions] reloading a NIF library without restarting the server In-Reply-To: References: Message-ID: <5624DBB8.6090009@ericsson.com> You can upgrade a NIF library in the same way you upgrade an Erlang module. The upgraded Erlang code just have to call erlang:load_nif/2 for the NIF library that it wants to use. However, you may need to place the new NIF library in a different path or give it a different file name, otherwise the OS (dlopen) may think you are loading the same library again. /Sverker, Erlang/OTP Ericsson On 10/19/2015 10:05 AM, Caragea Silviu wrote: > Hello, > > It is possible to reload a nif library ? > > For example I have an app using NIF and after I'm doing some changes > over the native code I want to reload the new library in production > without restarting the server.. > > I succeeded to reload the erlang code but seems it's still using the > oldest NIF. > > Silviu > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlosj.gf@REDACTED Mon Oct 19 14:46:09 2015 From: carlosj.gf@REDACTED (=?UTF-8?Q?Carlos_Gonz=C3=A1lez_Florido?=) Date: Mon, 19 Oct 2015 14:46:09 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: <5616E93B.4060608@hu-berlin.de> References: <5616E93B.4060608@hu-berlin.de> Message-ID: Hi J?rgen. 
You are describing a complex behavior, and I'm probably not fully understanding it, but some of your ideas seem very similar to what we are building for NetComposer. We don't yet have a full release, but we have already released many of its pieces: - NkCLUSTER (https://github.com/Nekso/nkcluster) is a framework for creating clusters of Erlang nodes of any size, and distributing and managing jobs into them, with a pattern that seems similar to your proposal, different to other frameworks in many aspects, and specially sending "intelligence" to the workers. - NkSERVICE (https://github.com/Nekso/nkservice) is framework for managing distributed services based on plugins. - NkDOMAIN (https://github.com/Nekso/nkdomain) is an Erlang framework to load an manage complex distributed, multi-tenant configurations in a cluster. - NkSIP (https://github.com/kalta/nksip) is a SIP application server that is currenly under heavy refactorization (it will be integrated as a plugin of NetComposer), but offers a way to process complex works similar to your idea. I hope it helps. Regards, Carlos On Fri, Oct 9, 2015 at 12:07 AM, J?rgen Brandt wrote: > > Hello, > > is there an Erlang library for transactional message > passing, using patterns in communication and error handling > to improve fault tolerance? > > This (or a similar) question may have been asked before and, > of course, there is plenty of research on fault tolerance > and failure transparency. Nevertheless, in my work with > scientific workflows it seems that certain patterns in error > handling exist. In this mail I'm trying to motivate a way to > externalize common error handling in a standardized service > (a transaction server) but I'm unsure whether such a thing > already exists, whether I'm missing an important feature, > and whether it's a good idea anyway. > > Large distributed systems are composed of many services. > They process many tasks concurrently and need fault > tolerance to yield correct results and maintain > availability. Erlang seemed a good choice because it > provides facilities to automatically improve availability, > e.g., by using supervisers. In addition, it encourages a > programming style that separates processing logic from > error handling. In general, each service has its own > requirements, implying that a general approach to error > handling (beyond restarting) is infeasible. However, if an > application exhibits recurring patterns in the way error > handling corresponds to the messages passed between > services, we can abstract these patterns to reuse them. > > > Fault tolerance is important because it directly translates > to scalability. > > Consider an application (with transient software faults), > processing user queries. The application reports errors back > to the user as they appear. If a user query is a long- > running job (hours, days), the number of subtasks created > from this job (thousands), the number of services to process > one subtask, and the number of machines involved are large, > then the occurrence of an error is near to certain. Quietly > restarting the application and rerunning the query may > reduce the failure probability but even if the application > succeeds, the number of retries and, thus, the time elapsed > to success may be prohibitive. What is needed is a system > that does not restart the whole application but only the > service that failed reissuing only the unfinished requests > that this service received before failing. 
Consequently, the > finer the granularity at which errors are handled, the less > work has to be redone when errors occur, allowing a system > to host longer-running jobs, containing more subtasks, > involving more services for each subtask, and running on > more machines in feasible time. > > > Scientific workflows are a good example for a large > distributed application exhibiting patterns in communication > and error handling. > > A scientific workflow system consumes an input query in the > form of an expression in the workflow language. On > evaluation of this expression it identifies subtasks that > can be executed in any order. E.g., a variant calling > workflow from bioinformatics unfolds into several hundred > to a thousand subtasks each of which is handed down in the > form of requests through a number of services: Upon > identification of the subtask in (i) the query interpreter, > a request is sent to (ii) a cache service. This service > keeps track of all previously run subtasks and returns the > cached result if available. If not, a request is sent to > (iii) a scheduling service. This service determines the > machine, to run the subtask. The scheduler tries both, to > adequately distribute the work load among workers (load > balancing) and to minimize data transfers among nodes (data > locality). Having decided where to run the subtask, a > request is sent to (iv) the corresponding worker which > executes the subtask and returns the result up the chain of > services. Every subtask goes through this life cycle. > > Apart from the interplay of the aforementioned services we > want the workflow system to behave in a particular way when > one of these services dies: > > - Each workflow is evaluated inside its own interpreter > process. A workflow potentially runs for a long time and > at some point we might want to kill the interpreter > process. When this happens, the system has to identify all > open requests originating from that interpreter and cancel > them. > > - When an important service (say the scheduler) dies, a > supervisor will restart it, this way securing the > availability of the application. Upon a fresh start, none > of the messages this service has received will be there > anymore. Instead of having to notify the client of this > important service (in this case the cache) to give it the > chance to repair the damage, we want all the messages, > that have been sent to the important service (scheduler) > and have not been quited, to be resent to the freshly > started service (scheduler). > > - When a worker dies, from a hardware fault, we cannot > expect a supervisor to restart it (on the same machine). > In this case we want to notify the scheduler not to expect > a reply to his request anymore. Also we want to reissue > the original request to the scheduler to give it the > chance to choose a different machine to run the subtask > on. > > - When a request is canceled at a high level (say at the > cache level because the interpreter died) All subsequent > requests (to the scheduler and in the worker) > corresponding to the original request should have been > canceled before the high level service (cache) is > notified, thereby relieving him of the duty to cancel them > himself. > > > Since there is no shared memory in Erlang, the state of a > process is defined only by the messages received (and its > init parameters which are assumed constant). 
To reestablish > the state of a process after failure we propose three > different ways to send messages to a process and their > corresponding informal error handling semantics: > > tsend( Dest, Req, replay ) -> TransId > when Dest :: atom(), > Req :: term(), > TransId :: reference(). > > Upon calling tsend/3, a transaction server creates a record > of the request to be sent and relays it to the destination > (must be a registered process). At the same time it creates > a monitor on both the request's source and destination. When > the source dies, it will send an abort message to the > destination. When the destination dies, initially, nothing > happens. When the supervisor restarts the destination, the > transaction server replays all unfinished requests to the > destination. > > tsend( Dest, Req, replay, Precond ) -> TransId > when Dest :: atom(), > Req :: term(), > Precond :: reference(), > TransId :: reference(). > > The error handling for tsend/4 with replay works just the > same as tsend/3. Additionally, when the request with the id > Precond is canceled, this request is also canceled. > > tsend( Dest, Req, reschedule, Precond ) -> TransId > when Dest :: atom() | pid(), > Req :: term(), > Precond :: reference(), > TransId :: reference(). > > Upon calling tsend/4, with reschedule, as before, a > transaction server creates a record of the request and > monitors both source and destination. When the destination > dies, instead of waiting for a supervisor to restart it, the > original request identified with Precond is first canceled > at the source and then replayed to the source. Since we do > not rely on the destination to be a permanent process, we > can also identify it per Pid while we had to require a > registered service under replay error handling. > > commit( TransId, Reply ) -> ok > when TransId :: reference(), > Reply :: term(). > > When a service is done working on a request, it sends a > commit which relays the reply to the transaction source and > removes the record for this request from the transaction > server. > > A service participating in transaction handling has to > provide the following two callbacks: > > handle_recv( TransId::reference(), Req::_, State::_ ) -> > {noreply, NewState::_}. > > handle_abort( TransId::reference(), State::_ ) -> > {noreply, NewState::_}. > > While the so-defined transaction protocol is capable of > satisfying the requirements introduced for the workflow > system example the question is, is it general enough to be > applicable also in other applications? > > > This conduct has its limitations. > > The introduced transaction protocol may be suited to deal > with transient software faults (Heisenbugs) but its > effectiveness to mitigate hardware faults or deterministic > software faults (Bohrbugs) is limited. In addition, with the > introduction of the transaction server we created a single > point of failure. > > > Concludingly, the restarting of a service by a supervisor is > sufficient to secure the availability of a service in the > presence of software faults but large scale distributed > systems require a more fine-grained approach to error > handling. To identify patterns in message passing and error > handling gives us the opportunity to reduce error handling > code and, thereby, avoid the introduction of bugs into error > handling. The proposed transaction protocol may be suitable > to achieve this goal. > > > I had hoped to get some feedback on the concept, in order to > have an idea whether I am on the right track. 
If a similar > library is already around and I just couldn't find it, if I > am missing an obvious feature, a pattern that is important > but just doesn't appear in the context of scientific > workflows, it would be helpful to know about it. Thanks in > advance. > > Cheers > J?rgen > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.pailleau@REDACTED Mon Oct 19 22:53:52 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Mon, 19 Oct 2015 22:53:52 +0200 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: Message-ID: Hi, I agree but like Joe says: let it work, then optimize if needed. My main remark was: Splitting odd and even does not need to check odd then even, but check it is odd Else it is even (or contrary). Regards Le?19 oct. 2015 4:54 AM, ok@REDACTED a ?crit?: > > "?ric Pailleau" > > > A number cannot be both even and odd, so simply take odd numbers in a > > variable O from list L , even numbers will be L -- O. > > Here we run into the question "what does BEST mean?" > > Now Xs -- Ys cannot assume anything about the contents of Xs or Ys, > other than them both being well formed lists.? So while it *could* > be implemented in O(|Xs|+|Ys|+|Xs|lg|Ys|) time by building some kind > of balanced search tree from Ys, or even better by building some > kind of hashed set, the cost of building the supporting data structure > would likely be worse than the cost of traversing L twice or using > lists:partition/2. > > > From Andrew.Kutta@REDACTED Mon Oct 19 22:33:51 2015 From: Andrew.Kutta@REDACTED (Andrew.Kutta@REDACTED) Date: Mon, 19 Oct 2015 20:33:51 +0000 Subject: [erlang-questions] AMQP 1.0 Client Message-ID: <60CDA295EBD171419CCC6ED6DB2CAA7E65726DFA@USSTLZ-PMSG007.emrsn.org> I am looking for an AMQP 1.0 Client Application. All of the RabbitMQ clients appear to have remained at 0.9.1 as Pivotal decided not to pursue the 1.0 standard. Not sure if it will help, but the applications I have written/supporting are currently running on R16B02. I have tinkered with the idea of taking the QPid-Proton Library and forming a NIF, but I have no experience doing that, so if this is the only direction you know of, I would ask for some guidance on how to begin. The purpose behind this is to hook up into the Azure EventHub which only supports AMQP 1.0. Thanks for your help! -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.pailleau@REDACTED Mon Oct 19 23:41:21 2015 From: eric.pailleau@REDACTED (PAILLEAU Eric) Date: Mon, 19 Oct 2015 23:41:21 +0200 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: References: Message-ID: <56256381.9060001@wanadoo.fr> Hi, by doing some testing, strangely using (X rem 2 ) =:= 1 is almost always better than X band 1 =:= 1. On my machine at least. ______________________________________________________________________ -module(odd_even). -export([go/0]). go() -> List = lists:seq(1, 1000) , T1 = erlang:timestamp(), lists:partition(fun (X) -> (X rem 2 ) =:= 1 end, List), io:format("~p microseconds~n",[timer:now_diff(erlang:timestamp(), T1)]), T3 = erlang:timestamp(), lists:partition(fun (X) -> X band 1 =:= 1 end, List), io:format("~p microseconds~n",[timer:now_diff(erlang:timestamp(), T3)]). 
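%% Side note, not in the original mail: timer:tc/1 returns
%% {Microseconds, Result} directly and avoids the manual timestamp
%% arithmetic above. A possible alternative measurement (add go2/0 to
%% the export list to call it from the shell):
go2() ->
    List = lists:seq(1, 1000),
    {Trem, _} = timer:tc(fun() ->
        lists:partition(fun(X) -> (X rem 2) =:= 1 end, List) end),
    {Tband, _} = timer:tc(fun() ->
        lists:partition(fun(X) -> X band 1 =:= 1 end, List) end),
    io:format("rem: ~p microseconds, band: ~p microseconds~n", [Trem, Tband]).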
______________________________________________________________________ Le 19/10/2015 22:53, ?ric Pailleau a ?crit : > Hi, > I agree but like Joe says: let it work, then optimize if needed. > My main remark was: Splitting odd and even does not need to check odd then even, but check it is odd Else it is even (or contrary). > Regards > > Le 19 oct. 2015 4:54 AM, ok@REDACTED a ?crit : >> >> "?ric Pailleau" >> >>> A number cannot be both even and odd, so simply take odd numbers in a >>> variable O from list L , even numbers will be L -- O. >> >> Here we run into the question "what does BEST mean?" >> >> Now Xs -- Ys cannot assume anything about the contents of Xs or Ys, >> other than them both being well formed lists. So while it *could* >> be implemented in O(|Xs|+|Ys|+|Xs|lg|Ys|) time by building some kind >> of balanced search tree from Ys, or even better by building some >> kind of hashed set, the cost of building the supporting data structure >> would likely be worse than the cost of traversing L twice or using >> lists:partition/2. >> >> >> > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From eric.pailleau@REDACTED Mon Oct 19 23:51:05 2015 From: eric.pailleau@REDACTED (PAILLEAU Eric) Date: Mon, 19 Oct 2015 23:51:05 +0200 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: <56256381.9060001@wanadoo.fr> References: <56256381.9060001@wanadoo.fr> Message-ID: <562565C9.3070505@wanadoo.fr> Hi, forget. By switching order of tests, result is contrary. time is equivalent. Le 19/10/2015 23:41, PAILLEAU Eric a ?crit : > Hi, > by doing some testing, strangely using (X rem 2 ) =:= 1 is almost > always better than X band 1 =:= 1. On my machine at least. > > ______________________________________________________________________ > -module(odd_even). > > -export([go/0]). > > > go() -> List = lists:seq(1, 1000) , > T1 = erlang:timestamp(), > lists:partition(fun (X) -> (X rem 2 ) =:= 1 end, List), > io:format("~p > microseconds~n",[timer:now_diff(erlang:timestamp(), T1)]), > > T3 = erlang:timestamp(), > lists:partition(fun (X) -> X band 1 =:= 1 end, List), > io:format("~p > microseconds~n",[timer:now_diff(erlang:timestamp(), T3)]). > ______________________________________________________________________ > > > Le 19/10/2015 22:53, ?ric Pailleau a ?crit : >> Hi, >> I agree but like Joe says: let it work, then optimize if needed. >> My main remark was: Splitting odd and even does not need to check odd >> then even, but check it is odd Else it is even (or contrary). >> Regards >> >> Le 19 oct. 2015 4:54 AM, ok@REDACTED a ?crit : >>> >>> "?ric Pailleau" >>> >>>> A number cannot be both even and odd, so simply take odd numbers in a >>>> variable O from list L , even numbers will be L -- O. >>> >>> Here we run into the question "what does BEST mean?" >>> >>> Now Xs -- Ys cannot assume anything about the contents of Xs or Ys, >>> other than them both being well formed lists. So while it *could* >>> be implemented in O(|Xs|+|Ys|+|Xs|lg|Ys|) time by building some kind >>> of balanced search tree from Ys, or even better by building some >>> kind of hashed set, the cost of building the supporting data structure >>> would likely be worse than the cost of traversing L twice or using >>> lists:partition/2. 
>>> >>> >>> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From mjtruog@REDACTED Tue Oct 20 00:09:55 2015 From: mjtruog@REDACTED (Michael Truog) Date: Mon, 19 Oct 2015 15:09:55 -0700 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: <5616E93B.4060608@hu-berlin.de> References: <5616E93B.4060608@hu-berlin.de> Message-ID: <56256A33.2090306@gmail.com> On 10/08/2015 03:07 PM, J?rgen Brandt wrote: > Hello, > > is there an Erlang library for transactional message > passing, using patterns in communication and error handling > to improve fault tolerance? Yes, there is http://cloudi.org . If you need only Erlang support, there is a repository at https://github.com/CloudI/cloudI_core/, but the main repository (https://github.com/CloudI/CloudI) provides support for all the supported programming languages: C++/C, Erlang, Java, JavaScript, Perl, PHP, Python, and Ruby. Comments below: > > This (or a similar) question may have been asked before and, > of course, there is plenty of research on fault tolerance > and failure transparency. Nevertheless, in my work with > scientific workflows it seems that certain patterns in error > handling exist. In this mail I'm trying to motivate a way to > externalize common error handling in a standardized service > (a transaction server) but I'm unsure whether such a thing > already exists, whether I'm missing an important feature, > and whether it's a good idea anyway. > > Large distributed systems are composed of many services. > They process many tasks concurrently and need fault > tolerance to yield correct results and maintain > availability. Erlang seemed a good choice because it > provides facilities to automatically improve availability, > e.g., by using supervisers. In addition, it encourages a > programming style that separates processing logic from > error handling. In general, each service has its own > requirements, implying that a general approach to error > handling (beyond restarting) is infeasible. However, if an > application exhibits recurring patterns in the way error > handling corresponds to the messages passed between > services, we can abstract these patterns to reuse them. > > > Fault tolerance is important because it directly translates > to scalability. Not really. The approach you described in your email below uses registered processes, which is an easy way to limit scalability while still having fault-tolerance. > > Consider an application (with transient software faults), > processing user queries. The application reports errors back > to the user as they appear. If a user query is a long- > running job (hours, days), the number of subtasks created > from this job (thousands), the number of services to process > one subtask, and the number of machines involved are large, > then the occurrence of an error is near to certain. Quietly > restarting the application and rerunning the query may > reduce the failure probability but even if the application > succeeds, the number of retries and, thus, the time elapsed > to success may be prohibitive. 
What is needed is a system > that does not restart the whole application but only the > service that failed reissuing only the unfinished requests > that this service received before failing. Consequently, the > finer the granularity at which errors are handled, the less > work has to be redone when errors occur, allowing a system > to host longer-running jobs, containing more subtasks, > involving more services for each subtask, and running on > more machines in feasible time. > > > Scientific workflows are a good example for a large > distributed application exhibiting patterns in communication > and error handling. > > A scientific workflow system consumes an input query in the > form of an expression in the workflow language. On > evaluation of this expression it identifies subtasks that > can be executed in any order. E.g., a variant calling > workflow from bioinformatics unfolds into several hundred > to a thousand subtasks each of which is handed down in the > form of requests through a number of services: Upon > identification of the subtask in (i) the query interpreter, > a request is sent to (ii) a cache service. This service > keeps track of all previously run subtasks and returns the > cached result if available. If not, a request is sent to > (iii) a scheduling service. This service determines the > machine, to run the subtask. The scheduler tries both, to > adequately distribute the work load among workers (load > balancing) and to minimize data transfers among nodes (data > locality). Having decided where to run the subtask, a > request is sent to (iv) the corresponding worker which > executes the subtask and returns the result up the chain of > services. Every subtask goes through this life cycle. > > Apart from the interplay of the aforementioned services we > want the workflow system to behave in a particular way when > one of these services dies: > > - Each workflow is evaluated inside its own interpreter > process. A workflow potentially runs for a long time and > at some point we might want to kill the interpreter > process. When this happens, the system has to identify all > open requests originating from that interpreter and cancel > them. > > - When an important service (say the scheduler) dies, a > supervisor will restart it, this way securing the > availability of the application. Upon a fresh start, none > of the messages this service has received will be there > anymore. Instead of having to notify the client of this > important service (in this case the cache) to give it the > chance to repair the damage, we want all the messages, > that have been sent to the important service (scheduler) > and have not been quited, to be resent to the freshly > started service (scheduler). > > - When a worker dies, from a hardware fault, we cannot > expect a supervisor to restart it (on the same machine). > In this case we want to notify the scheduler not to expect > a reply to his request anymore. Also we want to reissue > the original request to the scheduler to give it the > chance to choose a different machine to run the subtask > on. > > - When a request is canceled at a high level (say at the > cache level because the interpreter died) All subsequent > requests (to the scheduler and in the worker) > corresponding to the original request should have been > canceled before the high level service (cache) is > notified, thereby relieving him of the duty to cancel them > himself. 
In CloudI you can create an Erlang service by implementing the cloudi_service behaviour which sends and receives service requests. For integration with other Erlang source code, you can receive normal Erlang message sends and you can send to CloudI services from normal Erlang processes with the cloudi module's context data. A CloudI service request is a task or subtask as you have described and there is no need to distinguish between workers and services (you can just configure a service to use any number of processes). A CloudI uses service names to provide a name to send service requests to and any number of service processes (or threads) may subscribe to the same name. This approach is necessary for high-availability while providing fault-tolerance. So that means all the service source code is executed concurrently without attempting to maintain consistent state outside a single thread of execution (it is an AP-type system when considering the CAP theorem). The service requests are randomly assigned to a thread of execution that has subscribed to the matching service name for service fault-tolerance guarantees. > > > Since there is no shared memory in Erlang, the state of a > process is defined only by the messages received (and its > init parameters which are assumed constant). To reestablish > the state of a process after failure we propose three > different ways to send messages to a process and their > corresponding informal error handling semantics: > > tsend( Dest, Req, replay ) -> TransId > when Dest :: atom(), > Req :: term(), > TransId :: reference(). > > Upon calling tsend/3, a transaction server creates a record > of the request to be sent and relays it to the destination > (must be a registered process). At the same time it creates > a monitor on both the request's source and destination. When > the source dies, it will send an abort message to the > destination. When the destination dies, initially, nothing > happens. When the supervisor restarts the destination, the > transaction server replays all unfinished requests to the > destination. > > tsend( Dest, Req, replay, Precond ) -> TransId > when Dest :: atom(), > Req :: term(), > Precond :: reference(), > TransId :: reference(). > > The error handling for tsend/4 with replay works just the > same as tsend/3. Additionally, when the request with the id > Precond is canceled, this request is also canceled. > > tsend( Dest, Req, reschedule, Precond ) -> TransId > when Dest :: atom() | pid(), > Req :: term(), > Precond :: reference(), > TransId :: reference(). > > Upon calling tsend/4, with reschedule, as before, a > transaction server creates a record of the request and > monitors both source and destination. When the destination > dies, instead of waiting for a supervisor to restart it, the > original request identified with Precond is first canceled > at the source and then replayed to the source. Since we do > not rely on the destination to be a permanent process, we > can also identify it per Pid while we had to require a > registered service under replay error handling. > > commit( TransId, Reply ) -> ok > when TransId :: reference(), > Reply :: term(). > > When a service is done working on a request, it sends a > commit which relays the reply to the transaction source and > removes the record for this request from the transaction > server. In CloudI the situation is simpler. The service request is sent through as many services as is necessary and data is committed upon receiving the reply to the service request. 
The service request handling callback function provides the TransId which provides uniqueness for the transaction and a loose time-based ordering. If a transaction (CloudI service request) receives a response, then it was handled successfully by all services involved within the timeout value provided for the service request (the timeout is important and missing from your description). It is possible to retry a transaction if nothing occurred due to a failure at some service in the service request path during the timeout time period (this would be considered replaying a transaction, cloudi_service_queue is a service which handles persisting transactions to disk and provides retries as a service but there is also a cloudi_queue data structure which can provide retries in Erlang services). > > A service participating in transaction handling has to > provide the following two callbacks: > > handle_recv( TransId::reference(), Req::_, State::_ ) -> > {noreply, NewState::_}. > > handle_abort( TransId::reference(), State::_ ) -> > {noreply, NewState::_}. In CloudI, an "abort" described above is receiving a null response (a binary of <<>> for ResponseInfo and Response, where ResponseInfo is meta-data related the Response data (e.g., HTTP headers)). The response to a CloudI service request is a result of a function call (either send_sync or recv_async from the CloudI API). Erlang services can utilize send_async_active to receive responses as Erlang messages which can provide more efficiency for services that handle high-throughput. > > While the so-defined transaction protocol is capable of > satisfying the requirements introduced for the workflow > system example the question is, is it general enough to be > applicable also in other applications? > > > This conduct has its limitations. > > The introduced transaction protocol may be suited to deal > with transient software faults (Heisenbugs) but its > effectiveness to mitigate hardware faults or deterministic > software faults (Bohrbugs) is limited. In addition, with the > introduction of the transaction server we created a single > point of failure. CloudI shares the service name information between connected Erlang nodes (using distributed Erlang), so service requests can use remote nodes when a local service isn't found. This is based on the service's configured destination refresh method (it determines how a service selects destinations for service requests). > > > Concludingly, the restarting of a service by a supervisor is > sufficient to secure the availability of a service in the > presence of software faults but large scale distributed > systems require a more fine-grained approach to error > handling. To identify patterns in message passing and error > handling gives us the opportunity to reduce error handling > code and, thereby, avoid the introduction of bugs into error > handling. The proposed transaction protocol may be suitable > to achieve this goal. > > > I had hoped to get some feedback on the concept, in order to > have an idea whether I am on the right track. If a similar > library is already around and I just couldn't find it, if I > am missing an obvious feature, a pattern that is important > but just doesn't appear in the context of scientific > workflows, it would be helpful to know about it. Thanks in > advance. Feel free to ask questions if something wasn't answered above or you needed more detail on something. 
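To make the proposed protocol concrete, here is a minimal sketch of the service side. The two callbacks follow the specs quoted above; tx_server is a hypothetical module assumed to export the proposed tsend/3,4 and commit/2, and the helper functions are placeholders only.

______________________________________________________________________
-module(example_scheduler).

-export([handle_recv/3, handle_abort/2]).

%% Called when a request reaches this service: do the work and
%% commit the reply back through the (hypothetical) transaction server.
handle_recv(TransId, {schedule, Task}, State) ->
    Reply = pick_machine(Task, State),
    ok = tx_server:commit(TransId, Reply),
    {noreply, State}.

%% Called when the request identified by TransId was cancelled
%% upstream: drop any bookkeeping kept for it.
handle_abort(TransId, State) ->
    {noreply, forget(TransId, State)}.

%% Placeholders, illustrative only.
pick_machine(_Task, _State) -> {machine, node()}.
forget(_TransId, State) -> State.
______________________________________________________________________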
Best Regards, Michael > > Cheers > J?rgen > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From g@REDACTED Tue Oct 20 00:14:13 2015 From: g@REDACTED (Garrett Smith) Date: Mon, 19 Oct 2015 17:14:13 -0500 Subject: [erlang-questions] AMQP 1.0 Client In-Reply-To: <60CDA295EBD171419CCC6ED6DB2CAA7E65726DFA@USSTLZ-PMSG007.emrsn.org> References: <60CDA295EBD171419CCC6ED6DB2CAA7E65726DFA@USSTLZ-PMSG007.emrsn.org> Message-ID: Based on my experience with AMQP and Qpid, your requirements sound very scary. If you can get access to literally any other interface, including SOAP [sic], I'd look at that seriously before using AMQP. As good as Erlang is at supporting wire protocols, if that protocol is as complicated as AMQP (0.9 and beyond) it's really hard to build decent software. After trying to escape from this requirement, if you're seriously trapped, I'd look for the most solid client and build an external port that your Erlang app can use. Here's a primer: http://erlang.org/doc/tutorial/c_port.html No one likes this model because who wants to serialize their messaging over stdio - it seems absurd. But a) it works very well, even if it's not as potentially fast, b) it's very stable - this junk cannot bring your Erlang VM down and c) you can do work in that port to minimize the payload going back and forth to the Erlang VM. Linked in drivers like NIFs will expose your Erlang VM to the horrors of whatever AMQP client library you're using - and that can easily invalidate the advantages of Erlang. So if performance is really and truly that big a deal - and you know that based on some actual data - consider doing this work in C++ with the Qpid client code directly. On Mon, Oct 19, 2015 at 3:33 PM, wrote: > I am looking for an AMQP 1.0 Client Application. All of the RabbitMQ > clients appear to have remained at 0.9.1 as Pivotal decided not to pursue > the 1.0 standard. > > Not sure if it will help, but the applications I have written/supporting are > currently running on R16B02. > > > > I have tinkered with the idea of taking the QPid-Proton Library and forming > a NIF, but I have no experience doing that, so if this is the only direction > you know of, I would ask for some guidance on how to begin. > > > > The purpose behind this is to hook up into the Azure EventHub which only > supports AMQP 1.0. > > > > Thanks for your help! > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From vishaljoshi2002@REDACTED Tue Oct 20 02:34:38 2015 From: vishaljoshi2002@REDACTED (vishal joshi) Date: Tue, 20 Oct 2015 00:34:38 +0000 (UTC) Subject: [erlang-questions] What would it take for erlang to power something like lichess dot org References: <988551548.24456.1445301278713.JavaMail.yahoo@mail.yahoo.com> Message-ID: <988551548.24456.1445301278713.JavaMail.yahoo@mail.yahoo.com> What storage engine we would use if I was powering it via erlang? ?Would the dev costs be lower? Would the OPEX be lower?? github dot com / ornicar / lila -------------------------It's a free online chess game focused on?realtime?and ease of useIt has a?search engine,?computer analysis,?tournaments,?simuls,?forums,?teams,?tactic trainer,?opening trainer, a?mobile app, a?monitoring console, and a?network world map. 
The UI is available in?80 languages?thanks to the community.Lichess is written in?Scala 2.11, and relies on?Play 2.3?for the routing, templating, and JSON. Pure chess logic is contained in?scalachess?submodule. The codebase is fully asynchronous, making heavy use of Scala Futures and?Akka 2 actors. Lichess talks to?Stockfish?deployed in an AI cluster of donated servers. It uses?MongoDB 2.6?to store more than 68 million games, which are indexed byelasticsearch. HTTP requests and websocket connections are proxied by?nginx 1.6. Client-side is written in?mithril.js. The?blog?uses a free open content plan from?prismic.io.Join us on #lichess IRC channel on freenode for more info. Use?github issues?for bug reports and feature requests.---------------------- ornicar/lila? | ? | | ? | | ? | ? | ? | ? | ? | | ornicar/lilalila - The forever free, adless and open source chess server. | | | | View on github.com | Preview by Yahoo | | | | ? | -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andrew.Kutta@REDACTED Tue Oct 20 03:51:28 2015 From: Andrew.Kutta@REDACTED (Andrew.Kutta@REDACTED) Date: Tue, 20 Oct 2015 01:51:28 +0000 Subject: [erlang-questions] AMQP 1.0 Client In-Reply-To: References: <60CDA295EBD171419CCC6ED6DB2CAA7E65726DFA@USSTLZ-PMSG007.emrsn.org> Message-ID: <60CDA295EBD171419CCC6ED6DB2CAA7E657284C6@USSTLZ-PMSG007.emrsn.org> I was really hoping you wouldn't say that. Though I somewhat knew the answer before asking. This was planned as a PoC, but it seems that there is a significant time investment in order to make any progress which is somewhat out of the question. Thanks for your help, Andrew -----Original Message----- From: Garrett Smith [mailto:g@REDACTED] Sent: Monday, October 19, 2015 5:14 PM To: Kutta, Andrew [CLIMATE/WR/STL] Cc: Erlang Questions Subject: Re: [erlang-questions] AMQP 1.0 Client Based on my experience with AMQP and Qpid, your requirements sound very scary. If you can get access to literally any other interface, including SOAP [sic], I'd look at that seriously before using AMQP. As good as Erlang is at supporting wire protocols, if that protocol is as complicated as AMQP (0.9 and beyond) it's really hard to build decent software. After trying to escape from this requirement, if you're seriously trapped, I'd look for the most solid client and build an external port that your Erlang app can use. Here's a primer: http://erlang.org/doc/tutorial/c_port.html No one likes this model because who wants to serialize their messaging over stdio - it seems absurd. But a) it works very well, even if it's not as potentially fast, b) it's very stable - this junk cannot bring your Erlang VM down and c) you can do work in that port to minimize the payload going back and forth to the Erlang VM. Linked in drivers like NIFs will expose your Erlang VM to the horrors of whatever AMQP client library you're using - and that can easily invalidate the advantages of Erlang. So if performance is really and truly that big a deal - and you know that based on some actual data - consider doing this work in C++ with the Qpid client code directly. On Mon, Oct 19, 2015 at 3:33 PM, wrote: > I am looking for an AMQP 1.0 Client Application. All of the RabbitMQ > clients appear to have remained at 0.9.1 as Pivotal decided not to > pursue the 1.0 standard. > > Not sure if it will help, but the applications I have > written/supporting are currently running on R16B02. 
> > > > I have tinkered with the idea of taking the QPid-Proton Library and > forming a NIF, but I have no experience doing that, so if this is the > only direction you know of, I would ask for some guidance on how to begin. > > > > The purpose behind this is to hook up into the Azure EventHub which > only supports AMQP 1.0. > > > > Thanks for your help! > > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From vances@REDACTED Tue Oct 20 06:00:24 2015 From: vances@REDACTED (Vance Shipley) Date: Tue, 20 Oct 2015 09:30:24 +0530 Subject: [erlang-questions] AMQP 1.0 Client In-Reply-To: References: <60CDA295EBD171419CCC6ED6DB2CAA7E65726DFA@USSTLZ-PMSG007.emrsn.org> Message-ID: On Tue, Oct 20, 2015 at 3:44 AM, Garrett Smith wrote: > Linked in drivers like NIFs will expose your Erlang VM to the horrors > of whatever AMQP client library you're using - and that can easily > invalidate the advantages of Erlang. My answer to this concern is always to dedicate a node to the driver. Now you have the best of both words. -- -Vance From ok@REDACTED Tue Oct 20 06:04:36 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Tue, 20 Oct 2015 17:04:36 +1300 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: References: Message-ID: On 20/10/2015, at 9:53 am, ?ric Pailleau wrote: > Hi, > I agree but like Joe says: let it work, then optimize if needed. > My main remark was: Splitting odd and even does not need to check odd then even, but check it is odd Else it is even (or contrary). We agree on the Joe advice. But to me, using L -- O is obfuscation. It would be *harder* for me to understand that code than say a version using lists:partition/2. No, let's be honest here. It *WAS* quite a bit harder for me to understand the L -- O version because I had to stop, check the documentation, try a couple of tests, all to verify that the results came out in the right order. I quite agree that sometimes a bit of brute force *can* help you get to working code quicker. But it has to be *obviously correct* brute force. From g@REDACTED Tue Oct 20 07:39:25 2015 From: g@REDACTED (Garrett Smith) Date: Tue, 20 Oct 2015 00:39:25 -0500 Subject: [erlang-questions] AMQP 1.0 Client In-Reply-To: References: <60CDA295EBD171419CCC6ED6DB2CAA7E65726DFA@USSTLZ-PMSG007.emrsn.org> Message-ID: I don't think so, not obviously anyway. Setting aside the complexity of writing NIFs, you're just trading one latency (IO over stdio) for another (IO over sockets). Not to mention the overhead of another VM. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjtruog@REDACTED Tue Oct 20 08:08:40 2015 From: mjtruog@REDACTED (Michael Truog) Date: Mon, 19 Oct 2015 23:08:40 -0700 Subject: [erlang-questions] AMQP 1.0 Client In-Reply-To: References: <60CDA295EBD171419CCC6ED6DB2CAA7E65726DFA@USSTLZ-PMSG007.emrsn.org> Message-ID: <5625DA68.3000902@gmail.com> On 10/19/2015 10:39 PM, Garrett Smith wrote: > > I don't think so, not obviously anyway. Setting aside the complexity of writing NIFs, you're just trading one latency (IO over stdio) for another (IO over sockets). Not to mention the overhead of another VM. > The other problem is that the Erlang node communication will be a single connection bottleneck, so you can't exploit any concurrency with the NIF (the problem gets worse as more data is exchanged utilizing more throughput). 
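For completeness, the external-port approach suggested earlier in this thread looks roughly like the sketch below. "./amqp_bridge" is a hypothetical external program that speaks {packet,2}-framed binary messages on stdin/stdout; everything else is standard OTP.

______________________________________________________________________
-module(amqp_port).

-export([start/0, request/2, stop/1]).

%% Start the (hypothetical) external bridge as a port.
start() ->
    open_port({spawn_executable, "./amqp_bridge"},
              [{packet, 2}, binary, exit_status]).

%% Send one framed binary request and wait for one framed reply.
request(Port, Payload) when is_binary(Payload) ->
    true = port_command(Port, Payload),
    receive
        {Port, {data, Reply}}       -> {ok, Reply};
        {Port, {exit_status, Code}} -> {error, {exit, Code}}
    after 5000 ->
        {error, timeout}
    end.

stop(Port) ->
    port_close(Port).
______________________________________________________________________

The serialisation format between the Erlang side and the bridge is up to the bridge; the point is only that a crash of the external AMQP code cannot take the VM down with it.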
> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From schneider@REDACTED Tue Oct 20 11:23:11 2015 From: schneider@REDACTED (Frans Schneider) Date: Tue, 20 Oct 2015 11:23:11 +0200 Subject: [erlang-questions] edoc: error in doclet 'edoc_doclet' when using macro in -type Message-ID: <562607FF.9030503@xs4all.nl> Hi list, When I use -type id() :: ?MIN_KEM_ID..?MAX_KEM_ID. instead of -type id() :: 0..16#ffff. I get the error "edoc: error in doclet 'edoc_doclet'". Is there a work around for this? Thanks, Frans From eric.pailleau@REDACTED Tue Oct 20 11:34:02 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Tue, 20 Oct 2015 11:34:02 +0200 Subject: [erlang-questions] Can I do the same with a fold easily In-Reply-To: Message-ID: Hi, Yes Erlang L -- O is slow compared to list partition, but can be used on short lists. It was more a mathematical notation to say: what is not odd is even. List partition is quite better to achieve this. Le?20 oct. 2015 6:04 AM, "Richard A. O'Keefe" a ?crit?: > > > On 20/10/2015, at 9:53 am, ?ric Pailleau wrote: > > > Hi, > > I agree but like Joe says: let it work,? then optimize if needed. > > My main remark was: Splitting odd and even does not need to check odd then even, but check it is odd Else it is even (or contrary). > > We agree on the Joe advice. > But to me, using L -- O is obfuscation.? It would be *harder* for > me to understand that code than say a version using > lists:partition/2. > > No, let's be honest here. > It *WAS* quite a bit harder for me to understand the L -- O > version because I had to stop, check the documentation, try > a couple of tests, all to verify that the results came out in > the right order. > > I quite agree that sometimes a bit of brute force *can* help you > get to working code quicker.? But it has to be *obviously correct* > brute force. > From mjw@REDACTED Tue Oct 20 11:49:22 2015 From: mjw@REDACTED (Michael Wright) Date: Tue, 20 Oct 2015 10:49:22 +0100 Subject: [erlang-questions] Supervisor post start update of restart intensity and period Message-ID: Does anyone have any interest, approval or disapproval in respect of the idea of adding capability to update the restart intensity of a supervisor after start? Currently the only way to change it after start is by way of a release change. My reason for the proposal is to optimise the case of a simple_one_to_one supervisor where: 1. The likely number of children could vary a lot (perhaps by orders of magnitude). 2. The children are homogeneous and the criticality of the service they collectively provide is shared across all of them. 3. The probability of abnormal termination of any one child is relatively constant (not lessened or known or expected to be lessened by more children being spawned). So for the case of a simple_one_for_one supervisor with 10 children, a restart intensity 10 might be appropriate, but for the same supervisor with 10,000 children it might need to be 1,000, or 10,000. In some cases the likely maximum number of children might be known at supervisor start time, but not always, and even then if it varies a lot it probably doesn't help. I can't be certain how in demand this feature would be, but I've realised I've needed it before, and compromised by setting the restart intensity high to avoid unnecessary tear down of software infrastructure. 
It's obviously not ideal though as it could lead to outage or service degradation while a relatively small number of children churn their way to an inappropriately large restart intensity. One could have a dynamic intensity value, {ch_multiple, N} say, making it N times the number of children, but I slightly worry someone will later want {sqrt_ch_mul_ln_moonphase, U, G, H} and then one may as well allow {M, F, A} or add a new callback. However, really I think an API call is probably the most sensible way forward: supervisor:update_supflags/3 (SupRef, intensity | period, NewValue) I prefer this to passing a map since the above is more explicit that not all the supflags are alterable. An API call is simple and low impact, and the only disadvantage is it offers to do nothing clever, making the callback module perform all the management, even if it means calling it every time a new child is spawned. Michael. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thoffmann@REDACTED Tue Oct 20 12:04:40 2015 From: thoffmann@REDACTED (Torben Hoffmann) Date: Tue, 20 Oct 2015 12:04:40 +0200 Subject: [erlang-questions] Supervisor post start update of restart intensity and period In-Reply-To: References: Message-ID: Hi Michael, Before diving into changes to the supervisor module there might be a quicker fix that can give you what you want. Say that you have a case where 10 children with a restart intensity of 10 is fine. So your sup_10 supervisor fits 10 mod_a with that configuration. Now you create a sup_sup supervisor that supervises your sup_10 supervisors. Before you start a new mod_a worker you determine if you need to start another sup_10 supervisor. Then you start the mod_a as a child of the appropriate sup_10 supervisor. It requires a bit of interrogation of the supervision tree under sup_sup (using which_children/1) before starting. But I would say that it beats forking supervisor. I haven't done the math to see if this two level solution would give you adequate control over the restart intensity... something for the interested reader ;-) Cheers, Torben Michael Wright writes: > Does anyone have any interest, approval or disapproval in respect of the > idea of adding capability to update the restart intensity of a supervisor > after start? > > Currently the only way to change it after start is by way of a release > change. > > My reason for the proposal is to optimise the case of a simple_one_to_one > supervisor where: > > 1. The likely number of children could vary a lot (perhaps by orders of > magnitude). > 2. The children are homogeneous and the criticality of the service they > collectively provide is shared across all of them. > 3. The probability of abnormal termination of any one child is > relatively constant (not lessened or known or expected to be lessened by > more children being spawned). > > So for the case of a simple_one_for_one supervisor with 10 children, a > restart intensity 10 might be appropriate, but for the same supervisor with > 10,000 children it might need to be 1,000, or 10,000. > > In some cases the likely maximum number of children might be known at > supervisor start time, but not always, and even then if it varies a lot it > probably doesn't help. > > I can't be certain how in demand this feature would be, but I've realised > I've needed it before, and compromised by setting the restart intensity > high to avoid unnecessary tear down of software infrastructure. 
It's > obviously not ideal though as it could lead to outage or service > degradation while a relatively small number of children churn their way to > an inappropriately large restart intensity. > > One could have a dynamic intensity value, {ch_multiple, N} say, making it N > times the number of children, but I slightly worry someone will later want > {sqrt_ch_mul_ln_moonphase, U, G, H} and then one may as well allow {M, F, > A} or add a new callback. However, really I think an API call is probably > the most sensible way forward: > > supervisor:update_supflags/3 (SupRef, intensity | period, NewValue) > > I prefer this to passing a map since the above is more explicit that not > all the supflags are alterable. > > An API call is simple and low impact, and the only disadvantage is it > offers to do nothing clever, making the callback module perform all the > management, even if it means calling it every time a new child is spawned. > > Michael. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -- Torben Hoffmann Architect, basho.com M: +45 25 14 05 38 From jesper.louis.andersen@REDACTED Tue Oct 20 12:53:58 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Tue, 20 Oct 2015 12:53:58 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: <5616E93B.4060608@hu-berlin.de> References: <5616E93B.4060608@hu-berlin.de> Message-ID: On Fri, Oct 9, 2015 at 12:07 AM, J?rgen Brandt wrote: > Consider an application (with transient software faults), > processing user queries. > I think this is the central point of the mail. One of the usual Erlang approaches is to rely on stable storage and checkpointing for these kinds of work units. Stable storage means that you can write to disk, sync data, and then you have a linearizable point in time after which data are safe on the disk and can be re-read. Checkpointing means to track on stable storage whenever the system reaches some safe invariant state. In order to make sure progress happens, even under the possibility of transient errors, one must figure out how to checkpoint invariant state such that it can be recovered. In some cases, you don't need truly persistent storage, but can simply ship the state to another process, or even system. The other ingredient you need is that of idempotent operations. That is, you can rerun an operation/subtask and it will produce the same result as before, or it will return early with the answer if the answer is already evaluated and cached. Given these two ingredients, you don't need to use transactions. You just reinject unfulfilled obligations into the system and you track which obligations that deadline. A project like Apache Storm implements this workflow, and I'm 99% sure a lot of Erlang projects exist for this, though the names eludes me right now. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjw@REDACTED Tue Oct 20 13:33:34 2015 From: mjw@REDACTED (Michael Wright) Date: Tue, 20 Oct 2015 12:33:34 +0100 Subject: [erlang-questions] Supervisor post start update of restart intensity and period In-Reply-To: References: Message-ID: Hi Torben, I did wonder about this as a solution, but I'm not terribly keen. Take the case of 10 sup_10 supervisors with a restart intensity of 10, each with 10 children. 
If there are 11 child deaths for children concentrated on one of those supervisors, it will trigger a sup_10 restart, but if the 11 children that die are distributed across 2 or more sup_10 supervisors, it won't... The sup_10 restart probably isn't a problem of course, but the number of total deaths in a period of time that will cause a sup_sup to restart is now variable, depending on exactly which of the children across the sup_10 supervisors die. In fact, in this situation, 11 child deaths could cause a sup_10 death, or 100 child deaths could just about cause no sup_10 to die. Now, I accept that what you suggest is a pragmatic solution that could work very well, because PROBABLY statistically the probabilities of getting much variance in the number of child deaths causing a sup_sup death (for sensible choices of sup to sup and sup to child ratios) may be low, but the non deterministic / inconsistent / unpredictable nature just makes me wrinkle my nose a bit. I say PROBABLY above because it assumes that the distribution of failures is random, but of course it's as least possible that longer running children, or children started at a similar time, are grouped on one supervisor more often than not, so the difficulties I suggest above might be more realistic than it seems... Depends on the reason for the crash, which of course we never know. Other concerns I have are that if the number of children varies by orders of magnitude, our sup_N might have to have an N that's not too large, but that means there might be 1000 of them, and which_children/1 becomes quite a trawl, and if you start 101 children with sup_10 supervisors there will be one lonely child I could write a relatively small supervisor that fits my use case and requirements exactly (probably easier than trying to make supervisor work for me as it is too), however because I realised I alone was now coming across this issue for the second time I thought it was worth checking if anyone else was interested, or if I'm just weird... Michael. On Tue, Oct 20, 2015 at 11:04 AM, Torben Hoffmann wrote: > Hi Michael, > > Before diving into changes to the supervisor module there might be a > quicker fix that > can give you what you want. > > Say that you have a case where 10 children with a restart intensity of 10 > is fine. > So your sup_10 supervisor fits 10 mod_a with that configuration. > > Now you create a sup_sup supervisor that supervises your sup_10 > supervisors. > > Before you start a new mod_a worker you determine if you need to start > another sup_10 > supervisor. Then you start the mod_a as a child of the appropriate sup_10 > supervisor. > > It requires a bit of interrogation of the supervision tree under sup_sup > (using > which_children/1) before starting. But I would say that it beats forking > supervisor. > > I haven't done the math to see if this two level solution would give you > adequate > control over the restart intensity... something for the interested reader > ;-) > > Cheers, > Torben > > Michael Wright writes: > > > Does anyone have any interest, approval or disapproval in respect of the > > idea of adding capability to update the restart intensity of a supervisor > > after start? > > > > Currently the only way to change it after start is by way of a release > > change. > > > > My reason for the proposal is to optimise the case of a simple_one_to_one > > supervisor where: > > > > 1. The likely number of children could vary a lot (perhaps by orders > of > > magnitude). > > 2. 
The children are homogeneous and the criticality of the service > they > > collectively provide is shared across all of them. > > 3. The probability of abnormal termination of any one child is > > relatively constant (not lessened or known or expected to be lessened by > > more children being spawned). > > > > So for the case of a simple_one_for_one supervisor with 10 children, a > > restart intensity 10 might be appropriate, but for the same supervisor > with > > 10,000 children it might need to be 1,000, or 10,000. > > > > In some cases the likely maximum number of children might be known at > > supervisor start time, but not always, and even then if it varies a lot > it > > probably doesn't help. > > > > I can't be certain how in demand this feature would be, but I've realised > > I've needed it before, and compromised by setting the restart intensity > > high to avoid unnecessary tear down of software infrastructure. It's > > obviously not ideal though as it could lead to outage or service > > degradation while a relatively small number of children churn their way > to > > an inappropriately large restart intensity. > > > > One could have a dynamic intensity value, {ch_multiple, N} say, making > it N > > times the number of children, but I slightly worry someone will later > want > > {sqrt_ch_mul_ln_moonphase, U, G, H} and then one may as well allow {M, F, > > A} or add a new callback. However, really I think an API call is probably > > the most sensible way forward: > > > > supervisor:update_supflags/3 (SupRef, intensity | period, > NewValue) > > > > I prefer this to passing a map since the above is more explicit that not > > all the supflags are alterable. > > > > An API call is simple and low impact, and the only disadvantage is it > > offers to do nothing clever, making the callback module perform all the > > management, even if it means calling it every time a new child is > spawned. > > > > Michael. > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > -- > Torben Hoffmann > Architect, basho.com > M: +45 25 14 05 38 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Tue Oct 20 15:01:36 2015 From: zxq9@REDACTED (zxq9) Date: Tue, 20 Oct 2015 22:01:36 +0900 Subject: [erlang-questions] Supervisor post start update of restart intensity and period In-Reply-To: References: Message-ID: <3103998.5DEcGYzNlC@changa> On 2015?10?20? ??? 12:33:34 Michael Wright wrote: > Hi Torben, > > I did wonder about this as a solution, but I'm not terribly keen. > > Take the case of 10 sup_10 supervisors with a restart intensity of 10, each > with 10 children. If there are 11 child deaths for children concentrated on > one of those supervisors, it will trigger a sup_10 restart, but if the 11 > children that die are distributed across 2 or more sup_10 supervisors, it > won't... The sup_10 restart probably isn't a problem of course, but the > number of total deaths in a period of time that will cause a sup_sup to > restart is now variable, depending on exactly which of the children across > the sup_10 supervisors die. > > In fact, in this situation, 11 child deaths could cause a sup_10 death, or > 100 child deaths could just about cause no sup_10 to die. 
With your initial post I thought "hrm, that is sort of odd that it isn't dynamically configurable" but the only scenarios I could think of off-hand for actual systems I would maybe actually use this were ones where I want precisely the sort of isolation you view as problematic. As it stands, Torben's suggestion where a sup_sup can spawn dynamically configurable supervisors seems ideal -- especially considering that I could retire an existing sup (with the "wrong" configuration) and direct all new child creation to the new one (with the "right" configuration) -- and, hot updates aside, probably smoothly transition a running process' state to a new process under the new supervisor. There could easily be edge cases where that wouldn't work, but the general case seems straightforward. It would be nice to abstract this all away for the general case, of course, and that doesn't seem to require making any adjustments to OTP. But I lack imagination. In what case would this not work? -Craig From mjw@REDACTED Tue Oct 20 17:15:14 2015 From: mjw@REDACTED (Michael Wright) Date: Tue, 20 Oct 2015 16:15:14 +0100 Subject: [erlang-questions] Supervisor post start update of restart intensity and period In-Reply-To: <3103998.5DEcGYzNlC@changa> References: <3103998.5DEcGYzNlC@changa> Message-ID: I think most of the time the isolation is, as you say, exactly what one wants. The supervisor is great at allowing you to structure a supervision tree (supervisors supervising other supervisors), and great at letting you define appropriate behaviour for a set of related / interacting / interoperating / dependent processes (by way of the different restart strategies), but in both these cases the number of children is fixed. Supervisor is also great for many simple_one_for_one cases where the number of childen is dynamic, but the capability for being able to set ideal (at least ideologically ideal) restart intensity is weakened when one doesn't know how many children there will be, and when the other conditions from my original email are met I'm stuck with a real compromise (fine if not too many children crash) or supervising the children another way. Where the children are not homogeneous they should probably be split into 2 simple_one_to_one supervisors supervised by another supervisor with a strategy appropriate to the relationship of the 2 dynamic sets of children, so then the supervisor as it is MAY be optimal. Where the criticality is not spread (i.e. 10 children has similar over all value in terms of service provision as 100 children) then another solution may be appropriate (like less variation in numbers of children probably). It wouldn't be terribly difficult to write a module to supervise precisely as I want, but since supervisor would do what I wanted with the proposed modification I considered it worth gauging interest in the addition. No one as yet seems greatly troubled by the absence of the feature though I must say. Michael. On Tue, Oct 20, 2015 at 2:01 PM, zxq9 wrote: > On 2015?10?20? ??? 12:33:34 Michael Wright wrote: > > Hi Torben, > > > > I did wonder about this as a solution, but I'm not terribly keen. > > > > Take the case of 10 sup_10 supervisors with a restart intensity of 10, > each > > with 10 children. If there are 11 child deaths for children concentrated > on > > one of those supervisors, it will trigger a sup_10 restart, but if the 11 > > children that die are distributed across 2 or more sup_10 supervisors, it > > won't... 
The sup_10 restart probably isn't a problem of course, but the > > number of total deaths in a period of time that will cause a sup_sup to > > restart is now variable, depending on exactly which of the children > across > > the sup_10 supervisors die. > > > > In fact, in this situation, 11 child deaths could cause a sup_10 death, > or > > 100 child deaths could just about cause no sup_10 to die. > > With your initial post I thought "hrm, that is sort of odd that it isn't > dynamically configurable" but the only scenarios I could think of off-hand > for actual systems I would maybe actually use this were ones where I want > precisely the sort of isolation you view as problematic. > > As it stands, Torben's suggestion where a sup_sup can spawn dynamically > configurable supervisors seems ideal -- especially considering that I could > retire an existing sup (with the "wrong" configuration) and direct all new > child creation to the new one (with the "right" configuration) -- and, hot > updates aside, probably smoothly transition a running process' state to a > new process under the new supervisor. There could easily be edge cases > where that wouldn't work, but the general case seems straightforward. > > It would be nice to abstract this all away for the general case, of > course, and that doesn't seem to require making any adjustments to OTP. > > But I lack imagination. In what case would this not work? > > -Craig > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From waldemar.rachwal@REDACTED Wed Oct 21 13:01:53 2015 From: waldemar.rachwal@REDACTED (=?UTF-8?Q?Waldemar_Rachwa=C5=82?=) Date: Wed, 21 Oct 2015 13:01:53 +0200 Subject: [erlang-questions] Erl 64 and 32 bits don't interoperate. A bug? Message-ID: 64 bit erlang is not able to connect to 32 bit one. I don't believe it's a "feature". Report as BUG? Thanks, Waldemar. === 32 bits machine $ erl -name bits32@REDACTED -cookie BULBA Erlang/OTP 18 [erts-7.1] [source] [smp:2:2] [async-threads:10] [kernel-poll:false] Eshell V7.1 (abort with ^G) (bits32@REDACTED)1> =ERROR REPORT==== 21-Oct-2015::10:48:52 === ** Connection attempt from disallowed node 'bits64@REDACTED' ** === 64 bits machine $ erl -name bits64@REDACTED -remsh bits32@REDACTED -cookie BULBA Erlang/OTP 18 [erts-7.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false] *** ERROR: Shell process terminated! (^G to start new job) *** -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergej.jurecko@REDACTED Wed Oct 21 13:08:35 2015 From: sergej.jurecko@REDACTED (=?utf-8?Q?Sergej_Jure=C4=8Dko?=) Date: Wed, 21 Oct 2015 13:08:35 +0200 Subject: [erlang-questions] Erl 64 and 32 bits don't interoperate. A bug? In-Reply-To: References: Message-ID: <5481891C-66F2-4B8E-BA38-0E6DAF1DD123@gmail.com> Use -setcookie Sergej > On 21 Oct 2015, at 13:01, Waldemar Rachwa? wrote: > > 64 bit erlang is not able to connect to 32 bit one. > I don't believe it's a "feature". Report as BUG? > Thanks, > Waldemar. 
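(For clarity: the flag that sets the distribution cookie is -setcookie; a plain -cookie flag is not it, so each node falls back to ~/.erlang.cookie and the other side is rejected as a "disallowed node". With placeholder host names:

$ erl -name bits32@192.0.2.1 -setcookie BULBA
$ erl -name bits64@192.0.2.2 -setcookie BULBA -remsh bits32@192.0.2.1

Alternatively, erlang:set_cookie(node(), 'BULBA') can be called after start.)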
> > === 32 bits machine > > $ erl -name bits32@REDACTED -cookie BULBA > Erlang/OTP 18 [erts-7.1] [source] [smp:2:2] [async-threads:10] [kernel-poll:false] > > Eshell V7.1 (abort with ^G) > (bits32@REDACTED )1> > =ERROR REPORT==== 21-Oct-2015::10:48:52 === > ** Connection attempt from disallowed node 'bits64@REDACTED ' ** > > === 64 bits machine > > $ erl -name bits64@REDACTED -remsh bits32@REDACTED -cookie BULBA > Erlang/OTP 18 [erts-7.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false] > > *** ERROR: Shell process terminated! (^G to start new job) *** > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From community-manager@REDACTED Wed Oct 21 17:13:12 2015 From: community-manager@REDACTED (Bruce Yinhe) Date: Wed, 21 Oct 2015 17:13:12 +0200 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org Message-ID: Hello everyone We are happy to announce the issue tracker for Erlang/OTP ( http://bugs.erlang.org). Our intention is that the issue tracker replaces the erlang-bugs mailing list, in order to make it easier for the community to report bugs, suggest improvements and new features. You can start using the issue tracker today. The issue tracker for Erlang/OTP is a step towards improving and formalising the process of community contributions, a goal which is actively worked on by the Industrial Erlang User Group. The IEUG is working with Ericsson to improve libraries, tool chains, middle-ware and contributions while spreading awareness and increasing user adoption. *FAQ* *1. Where is the issue tracker?* bugs.erlang.org *2. Will I still be able to use the erlang-bugs mailing list?* We recommend you to report new bugs at bugs.erlang.org instead. You are still able to use erlang-bugs and see its archives, but we won't be looking at it as often. We will gradually phase out the erlang-bugs mailing list. *3. How do I create an issue or feature request?* Create an account or Log in with an existing account. Select Create Issue after you log in. You can also log in with your Erlangcentral.org account if you have one. *4. What types of issue can I report?* Bug, Improvement, New Feature Best regards Bruce Yinhe Community Manager Industrial Erlang User Group +46 72 311 43 89 community-manager@REDACTED -- Visit our Erlang community site ErlangCentral.org | @ErlangCentral | Industrial Erlang User Group -------------- next part -------------- An HTML attachment was scrubbed... URL: From kostis@REDACTED Wed Oct 21 21:00:30 2015 From: kostis@REDACTED (Kostis Sagonas) Date: Wed, 21 Oct 2015 21:00:30 +0200 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: References: Message-ID: <5627E0CE.7020006@cs.ntua.gr> On 10/21/2015 05:13 PM, Bruce Yinhe wrote: > We are happy to announce the issue tracker for Erlang/OTP > (http://bugs.erlang.org ). Our intention is > that the issue tracker replaces the erlang-bugs mailing list, in order > to make it easier for the community to report bugs, suggest improvements > and new features. You can start using the issue tracker today. > > The issue tracker for Erlang/OTP is a step towards improving and > formalising the process of community contributions, a goal which is > actively worked on by the Industrial Erlang User Group. 
The IEUG is > working with Ericsson to improve libraries, tool chains, middle-ware and > contributions while spreading awareness and increasing user adoption. > > *FAQ* > * > * > *1. Where is the issue tracker?* > > bugs.erlang.org > > *2. Will I still be able to use the erlang-bugs mailing list?* > > We recommend you to report new bugs at bugs.erlang.org > instead. You are still able to use erlang-bugs > and see its archives, but we won't be looking at it as often. We will > gradually phase out the erlang-bugs mailing list. > > *3. How do I create an issue or feature request?* > > Create an account or Log in with an existing account. Select Create > Issue after you log in. You can also log in with your Erlangcentral.org > account if you have one. I very much welcome a public issue tracker -- thanks for this initiative! -- but I am wondering what exactly is the rationale for choosing this particular process. The Erlang/OTP system is hosted on GitHub, and GitHub already has (very nice, IMO) infrastructure for reporting and tracking issues (across all github code bases, actually). Moreover, there are also mechanisms of pointing to lines of code in (some branch) in the repo and to discussions of pull requests. Why re-invent the wheel (and complicate our lives)? Don't we all manage enough accounts already? Kostis From jesper.louis.andersen@REDACTED Wed Oct 21 21:29:55 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Wed, 21 Oct 2015 21:29:55 +0200 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <5627E0CE.7020006@cs.ntua.gr> References: <5627E0CE.7020006@cs.ntua.gr> Message-ID: On Wed, Oct 21, 2015 at 9:00 PM, Kostis Sagonas wrote: > Why re-invent the wheel (and complicate our lives)? On the Jira/Bugzilla scale, Github Issues tracks in at about 1 nano-Jira. That is, Jira can do some things in the enterprise setting which you can't easily get with Github. It is nice for small simple projects, but there are many large projects which run their "own" Jira-instances for the reason of interoperating. If I remember correctly, Jira can link to other internal Jira's at Ericsson, which is necessary to be able to track issues cross-projects, some of which are obviously internal and closed source. Given such constraints, running your own instance on the "outside of the firewall/walled garden" is the right choice. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From essen@REDACTED Wed Oct 21 21:38:51 2015 From: essen@REDACTED (=?UTF-8?Q?Lo=c3=afc_Hoguin?=) Date: Wed, 21 Oct 2015 21:38:51 +0200 Subject: [erlang-questions] [erlang-bugs] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: References: <5627E0CE.7020006@cs.ntua.gr> Message-ID: <5627E9CB.1080107@ninenines.eu> On 10/21/2015 09:29 PM, Jesper Louis Andersen wrote: > > On Wed, Oct 21, 2015 at 9:00 PM, Kostis Sagonas > wrote: > > Why re-invent the wheel (and complicate our lives)? > > > On the Jira/Bugzilla scale, Github Issues tracks in at about 1 nano-Jira. > > That is, Jira can do some things in the enterprise setting which you > can't easily get with Github. It is nice for small simple projects, but > there are many large projects which run their "own" Jira-instances for > the reason of interoperating. If I remember correctly, Jira can link to > other internal Jira's at Ericsson, which is necessary to be able to > track issues cross-projects, some of which are obviously internal and > closed source. 
Given such constraints, running your own instance on the > "outside of the firewall/walled garden" is the right choice. Would be good to at least be able to log in with your Github account, if nothing else. Apparently it can use OpenID for log in, but for some reason it's restricted to Erlang Central accounts. Adding Github would make it easier to contributors since they already have an account on Github. -- Lo?c Hoguin http://ninenines.eu Author of The Erlanger Playbook, A book about software development using Erlang From dmitriid@REDACTED Wed Oct 21 22:27:36 2015 From: dmitriid@REDACTED (Dmitrii Dimandt) Date: Wed, 21 Oct 2015 20:27:36 +0000 Subject: [erlang-questions] [erlang-bugs] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <5627E9CB.1080107@ninenines.eu> References: <5627E0CE.7020006@cs.ntua.gr> <5627E9CB.1080107@ninenines.eu> Message-ID: +1 to being able to login with a different account On Wed, Oct 21, 2015 at 9:38 PM Lo?c Hoguin wrote: > On 10/21/2015 09:29 PM, Jesper Louis Andersen wrote: > > > > On Wed, Oct 21, 2015 at 9:00 PM, Kostis Sagonas > > wrote: > > > > Why re-invent the wheel (and complicate our lives)? > > > > > > On the Jira/Bugzilla scale, Github Issues tracks in at about 1 nano-Jira. > > > > That is, Jira can do some things in the enterprise setting which you > > can't easily get with Github. It is nice for small simple projects, but > > there are many large projects which run their "own" Jira-instances for > > the reason of interoperating. If I remember correctly, Jira can link to > > other internal Jira's at Ericsson, which is necessary to be able to > > track issues cross-projects, some of which are obviously internal and > > closed source. Given such constraints, running your own instance on the > > "outside of the firewall/walled garden" is the right choice. > > Would be good to at least be able to log in with your Github account, if > nothing else. Apparently it can use OpenID for log in, but for some > reason it's restricted to Erlang Central accounts. Adding Github would > make it easier to contributors since they already have an account on > Github. > > -- > Lo?c Hoguin > http://ninenines.eu > Author of The Erlanger Playbook, > A book about software development using Erlang > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Thu Oct 22 00:51:00 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 22 Oct 2015 07:51 +0900 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: References: <5627E0CE.7020006@cs.ntua.gr> Message-ID: <2766197.OAMIYguNob@changa> On 2015?10?21? ??? 21:29:55 Jesper Louis Andersen wrote: > On Wed, Oct 21, 2015 at 9:00 PM, Kostis Sagonas wrote: > > > Why re-invent the wheel (and complicate our lives)? > > > On the Jira/Bugzilla scale, Github Issues tracks in at about 1 nano-Jira. > > That is, Jira can do some things in the enterprise setting which you can't > easily get with Github. It is nice for small simple projects, but there are > many large projects which run their "own" Jira-instances for the reason of > interoperating. If I remember correctly, Jira can link to other internal > Jira's at Ericsson, which is necessary to be able to track issues > cross-projects, some of which are obviously internal and closed source. 
> Given such constraints, running your own instance on the "outside of the > firewall/walled garden" is the right choice. Light a candle for the poor sysops who have to maintain Atlassian installations. I've had to. :-/ From dlipubkey@REDACTED Thu Oct 22 00:25:25 2015 From: dlipubkey@REDACTED (David Li) Date: Wed, 21 Oct 2015 15:25:25 -0700 Subject: [erlang-questions] A newbie question regarding distributed programming Message-ID: Hi, I am reading up the ping pong example in the tutorial. I want to set up two Erlang nodes on a single VM to test the code. However the pong program never receives the message from ping. My module name is "hello_msg_twonodes". Here are what I did: To start a pong node: ================== $ erl -sname pong Erlang/OTP 18 [erts-7.1] [source] [64-bit] [async-threads:10] [hipe] [kernel-poll:false] Eshell V7.1 (abort with ^G) (pong@REDACTED)1> (pong@REDACTED)1> hello_msg_twonodes:start_pong(). true To start a ping node ================== $ erl -sname ping Erlang/OTP 18 [erts-7.1] [source] [64-bit] [async-threads:10] [hipe] [kernel-poll:false] Eshell V7.1 (abort with ^G) (ping@REDACTED)1> hello_msg_twonodes:start_ping(pong@REDACTED). In PING func, sending to pong In PING func, sent to pong, waiting for feedback <0.42.0> After this, no more messages are shown on either one of the nodes. Can anyone help me to see what's wrong? Thanks. From dmkolesnikov@REDACTED Thu Oct 22 07:19:31 2015 From: dmkolesnikov@REDACTED (Dmitry Kolesnikov) Date: Thu, 22 Oct 2015 08:19:31 +0300 Subject: [erlang-questions] A newbie question regarding distributed programming In-Reply-To: References: Message-ID: Hello, You nodes are bound to xyz fqdn but you initiate pong to localhost. Best Regards, Dmitry >-|-|-(*> > On 22 Oct 2015, at 01:25, David Li wrote: > > Hi, > > I am reading up the ping pong example in the tutorial. I want to set > up two Erlang nodes on a single VM to test the code. However the pong > program never receives the message from ping. > > My module name is "hello_msg_twonodes". Here are what I did: > > To start a pong node: > ================== > $ erl -sname pong > Erlang/OTP 18 [erts-7.1] [source] [64-bit] [async-threads:10] [hipe] > [kernel-poll:false] > > Eshell V7.1 (abort with ^G) > (pong@REDACTED)1> > (pong@REDACTED)1> hello_msg_twonodes:start_pong(). > true > > To start a ping node > ================== > > $ erl -sname ping > Erlang/OTP 18 [erts-7.1] [source] [64-bit] [async-threads:10] [hipe] > [kernel-poll:false] > > Eshell V7.1 (abort with ^G) > (ping@REDACTED)1> hello_msg_twonodes:start_ping(pong@REDACTED). > In PING func, sending to pong > In PING func, sent to pong, waiting for feedback > <0.42.0> > > > > After this, no more messages are shown on either one of the nodes. > Can anyone help me to see what's wrong? > > Thanks. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From bjorn@REDACTED Thu Oct 22 08:32:44 2015 From: bjorn@REDACTED (=?UTF-8?Q?Bj=C3=B6rn_Gustavsson?=) Date: Thu, 22 Oct 2015 08:32:44 +0200 Subject: [erlang-questions] Third draft of EEP 44 - Additional preprocessor directives Message-ID: Here is the third (and probably final) draft of EEP 44: http://www.erlang.org/eeps/eep-0044.html https://github.com/erlang/eep/blob/master/eeps/eep-0044.md What has changed is that parentheses now are mandatory for the new directives. Robert Virding pointed out that not requiring parentheses was inconsistent with the -ifdef and -undef directives. 
Richard O'Keefe pointed out that requiring parentheses could make it easier for makers of tools because the new directives would look similar to attributes and might not need special code. /Björn -- Björn Gustavsson, Erlang/OTP, Ericsson AB From tuncer.ayaz@REDACTED Thu Oct 22 11:12:57 2015 From: tuncer.ayaz@REDACTED (Tuncer Ayaz) Date: Thu, 22 Oct 2015 11:12:57 +0200 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <2766197.OAMIYguNob@changa> References: <5627E0CE.7020006@cs.ntua.gr> <2766197.OAMIYguNob@changa> Message-ID: On Thu, Oct 22, 2015 at 12:51 AM, zxq9 wrote: > Light a candle for the poor sysops who have to maintain Atlassian > installations. > > I've had to. :-/ Given that Atlassian was kind enough to grant a license, surely there's the option of using a hosted installation. From silviu.cpp@REDACTED Thu Oct 22 10:29:17 2015 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Thu, 22 Oct 2015 11:29:17 +0300 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes Message-ID: Hello. In one of my projects I need to use a radix tree. I found a very nice library: https://github.com/okeuday/trie Lookup performance is great. But I have one problem. Basically my tree has around 100 000 elements, so building it is an extremely expensive operation. For this reason I'm building it once, and all processes that need to do lookups need to share the btrie object (created using btrie:new/1). Here I see several options: 1. Use a gen_server and store the btrie object in the state or process dictionary. - I haven't tried this 2. Use an ETS table and store the trie object in a public table where all processes can read and write. Doing some benchmarks I see that looking up the longest prefix (btrie:find_prefix_longest) in around 100 K elements by prefix takes around 2-5 ms, and 95% of the time is spent in ets:lookup. I think the time spent there is so big because the term stored there is also very big. Any other suggestions? Silviu -------------- next part -------------- An HTML attachment was scrubbed... URL: From sperber@REDACTED Thu Oct 22 11:23:45 2015 From: sperber@REDACTED (Michael Sperber) Date: Thu, 22 Oct 2015 11:23:45 +0200 Subject: [erlang-questions] Second Call for Contributions: BOB 2016 - Berlin, Feb 19, 2016 (Deadline Oct 30) Message-ID: Erlang contributions very welcome! BOB Conference 2016 "What happens when we use what's best for a change?" http://bobkonf.de/2016/en/cfp.html Berlin, February 19 Call for Contributions Deadline: October 30, 2015 You drive advanced software engineering methods, implement ambitious architectures and are open to cutting-edge innovation? Attend this conference, meet people that share your goals, and get to know the best software tools and technologies available today. We strive to offer a day full of new experiences and impressions that you can use to immediately improve your daily life as a software developer. If you share our vision and want to contribute, submit a proposal for a talk or tutorial! Topics ------ We are looking for talks about best-of-breed software technology, e.g.: - functional programming - reactive programming - persistent data structures and databases - types - formal methods for correctness and robustness - ... everything really that isn't mainstream, but you think should be. Presenters should provide the audience with information that is practically useful for software developers.
This could take the form of e.g.: - experience reports - introductory talks on technical background - demos and how-tos Requirements ------------ We accept proposals for presentations of 45 minutes (40 minutes talk + 5 minutes questions), as well as 90 minute tutorials for beginners. The language of presentation should be either English or German. Your proposal should include (in your presentation language of choice): - an abstract of max. 1500 characters. - a short bio/cv - contact information (including at least email address) - a list of 3-5 concrete ideas of how your work can be applied in a developer's daily life - additional material (websites, blogs, slides, videos of past presentations, ...) Submit here: https://docs.google.com/forms/d/1IrCa3ilxMrO2h1G1WC4ywoxdz8wohxaPW3dfiB0cq-8/viewform?usp=send_form Organisation ------------ - submit your proposal here https://docs.google.com/forms/d/1IrCa3ilxMrO2h1G1WC4ywoxdz8wohxaPW3dfiB0cq-8/viewform?usp=send_form - direct questions to `bobkonf at active minus group dot de` - proposal deadline: **October 30, 2015** - notification: November 15, 2015 - program: December 1, 2015 NOTE: The conference fee will be waived for presenters, but travel expenses will not be covered. Speaker Grants -------------- BOB has Speaker Grants available to support speakers from groups under-represented in technology. We specifically seek women speakers and speakers who not be able to attend the conference for financial reasons. Details are here: http://bobkonf.de/2016/en/speaker-grants.html Shepherding ----------- The program committee offers shepherding to all speakers. Shepherding provides speakers assistance with preparing their sessions, as well as a review of the talk slides. Program Committee ----------------- (more information here: http://bobkonf.de/2016/programmkomitee.html) - Matthias Fischmann, zerobuzz UG - Matthias Neubauer, SICK AG - Nicole Rauch, Softwareentwicklung und Entwicklungscoaching - Michael Sperber, Active Group - Stefan Wehr, factis research Scientific Advisory Board ------------------------- - Annette Bieniusa, TU Kaiserslautern - Peter Thiemann, Uni Freiburg From zxq9@REDACTED Thu Oct 22 12:37:01 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 22 Oct 2015 19:37:01 +0900 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: Message-ID: <16810655.EN6dCcUWbg@changa> On 2015?10?22? ??? 11:29:17 Caragea Silviu wrote: > 1. Use a gen server and store the btrie object on the state or process > dictionary. - I didn't tried this Try this. > Doing some benchmarks I see that lookup-ing for the longest prefix (btrie: > find_prefix_longest) in around 100 K elements by prefix it's around 2- 5 ms > and 95% of the time is spent in the ets:lookup. Is this "slow" or "fast" compared to what you wanted/expected? -Craig From zxq9@REDACTED Thu Oct 22 12:38:36 2015 From: zxq9@REDACTED (zxq9) Date: Thu, 22 Oct 2015 19:38:36 +0900 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: References: <2766197.OAMIYguNob@changa> Message-ID: <5277982.P3SBmGgZjW@changa> On 2015?10?22? ??? 11:12:57 you wrote: > On Thu, Oct 22, 2015 at 12:51 AM, zxq9 wrote: > > Light a candle for the poor sysops who have to maintain Atlassian > > installations. > > > > I've had to. :-/ > > Given that Atlassian was kind enough to grant a license, surely there's > the option of using a hosted installation. 
Given that crack dealers are kind enough to hand out free samples, surely there's a free medical emergency number on the back of the bag. From akrupicka@REDACTED Thu Oct 22 13:19:34 2015 From: akrupicka@REDACTED (Adam =?UTF-8?Q?Krupi=C4=8Dka?=) Date: Thu, 22 Oct 2015 13:19:34 +0200 Subject: [erlang-questions] =?utf-8?q?where_it=27s_the_best_way_to_store_a?= =?utf-8?q?_very_big_term_object_shared_between_processes?= In-Reply-To: References: Message-ID: <1445512774-410366-75.488472982304-29604@mail.muni.cz> > Doing some benchmarks I see that lookup-ing for the longest prefix (btrie: > find_prefix_longest) in around 100 K elements by prefix it's around 2- 5 ms > and 95% of the time is spent in the ets:lookup. This is because the trie is copied over from the ETS into the process each time you access it. [1] The gen_server approach would be more efficient; most efficient would be storing a copy of the trie in each process (but also most memory wasteful). [1] http://www.erlang.org/doc/man/ets.html From jesper.louis.andersen@REDACTED Thu Oct 22 13:56:39 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Thu, 22 Oct 2015 13:56:39 +0200 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: <1445512774-410366-75.488472982304-29604@mail.muni.cz> References: <1445512774-410366-75.488472982304-29604@mail.muni.cz> Message-ID: On Thu, Oct 22, 2015 at 1:19 PM, Adam Krupi?ka wrote: > The gen_server approach would be more efficient; most efficient would be > storing a copy of the trie in each process (but also most memory > wasteful). > Alternatively, level compress[0] the first few levels into N processes. [0] Technically you then get a concurrent level-compressed trie. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Thu Oct 22 13:58:13 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Thu, 22 Oct 2015 13:58:13 +0200 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: <1445512774-410366-75.488472982304-29604@mail.muni.cz> Message-ID: A shot to try is to use a map instead. A trie should compress better on paper, but I'm not sure the overhead in Erlang makes it competitive with just shoving everything into a map (on 18.x), which uses a HAMT. On Thu, Oct 22, 2015 at 1:56 PM, Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > > On Thu, Oct 22, 2015 at 1:19 PM, Adam Krupi?ka > wrote: > >> The gen_server approach would be more efficient; most efficient would be >> storing a copy of the trie in each process (but also most memory >> wasteful). >> > > Alternatively, level compress[0] the first few levels into N processes. > > [0] Technically you then get a concurrent level-compressed trie. > > > -- > J. > -- J. -------------- next part -------------- An HTML attachment was scrubbed... 
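To make option 1 from the original post concrete, here is a minimal sketch of a gen_server that builds the btrie once and keeps it in its state, so a lookup avoids the per-call ETS copy described above. It assumes the btrie calls referenced in the thread (btrie:new/1 and btrie:find_prefix_longest/2, written here as key-then-trie; check the argument order against the actual library), and the module and function names are made up for the example.

-module(prefix_server).
-behaviour(gen_server).

-export([start_link/1, longest/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

%% Build the trie once at startup and keep it in the server state, so a
%% lookup never copies the whole structure the way ets:lookup does.
start_link(KeyValues) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, KeyValues, []).

longest(Key) ->
    gen_server:call(?MODULE, {longest, Key}).

init(KeyValues) ->
    {ok, btrie:new(KeyValues)}.

handle_call({longest, Key}, _From, Trie) ->
    {reply, btrie:find_prefix_longest(Key, Trie), Trie}.

handle_cast(_Msg, Trie) -> {noreply, Trie}.
handle_info(_Info, Trie) -> {noreply, Trie}.
terminate(_Reason, _Trie) -> ok.
code_change(_OldVsn, Trie, _Extra) -> {ok, Trie}.

Only the small reply term is copied back to the caller, not the trie itself. If one server becomes a bottleneck, several instances of the same module can be registered under different names and callers can pick one, for example with erlang:phash2(self(), N).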
URL: From carlsson.richard@REDACTED Thu Oct 22 15:18:26 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Thu, 22 Oct 2015 15:18:26 +0200 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: <1445512774-410366-75.488472982304-29604@mail.muni.cz> Message-ID: If the data is constant once the table is built, and the entries are big so you don't want to copy them out from an ets table or even avoid passing them between processes (if you want to distribute the table to a large number of processes), you could generate a module, e.g. using Merl, with a lookup function that does what the trie lookup would do. It might take some effort but the speedup could be huge. /Richard On Thu, Oct 22, 2015 at 1:56 PM, Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > > On Thu, Oct 22, 2015 at 1:19 PM, Adam Krupi?ka > wrote: > >> The gen_server approach would be more efficient; most efficient would be >> storing a copy of the trie in each process (but also most memory >> wasteful). >> > > Alternatively, level compress[0] the first few levels into N processes. > > [0] Technically you then get a concurrent level-compressed trie. > > > -- > J. > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From be.dmitry@REDACTED Thu Oct 22 15:27:42 2015 From: be.dmitry@REDACTED (Dmitry Belyaev) Date: Fri, 23 Oct 2015 00:27:42 +1100 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <5277982.P3SBmGgZjW@changa> References: <2766197.OAMIYguNob@changa> <5277982.P3SBmGgZjW@changa> Message-ID: <3E8F70DC-2DEB-4475-893C-82A46258B4A0@gmail.com> Selling jira is not yet against a law. I'm happy that we now have public bug tracker. Cheers to Erlang/OTP team. -- Best wishes, Dmitry Belyaev On 22 October 2015 9:38:36 PM AEDT, zxq9 wrote: >On 2015?10?22? ??? 11:12:57 you wrote: >> On Thu, Oct 22, 2015 at 12:51 AM, zxq9 wrote: >> > Light a candle for the poor sysops who have to maintain Atlassian >> > installations. >> > >> > I've had to. :-/ >> >> Given that Atlassian was kind enough to grant a license, surely >there's >> the option of using a hosted installation. > >Given that crack dealers are kind enough to hand out free samples, >surely >there's a free medical emergency number on the back of the bag. >_______________________________________________ >erlang-questions mailing list >erlang-questions@REDACTED >http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From seancribbs@REDACTED Thu Oct 22 15:56:02 2015 From: seancribbs@REDACTED (Sean Cribbs) Date: Thu, 22 Oct 2015 08:56:02 -0500 Subject: [erlang-questions] edoc: error in doclet 'edoc_doclet' when using macro in -type In-Reply-To: <562607FF.9030503@xs4all.nl> References: <562607FF.9030503@xs4all.nl> Message-ID: Yes, use the 'preprocess' option when running edoc. It's exposed in rebar like so: {edoc_opts, [{preprocess, true}]}. On Tue, Oct 20, 2015 at 4:23 AM, Frans Schneider wrote: > Hi list, > > When I use > > -type id() :: ?MIN_KEM_ID..?MAX_KEM_ID. > > instead of > > -type id() :: 0..16#ffff. > > I get the error "edoc: error in doclet 'edoc_doclet'". > > Is there a work around for this? 
> > Thanks, > > Frans > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From seancribbs@REDACTED Thu Oct 22 16:10:42 2015 From: seancribbs@REDACTED (Sean Cribbs) Date: Thu, 22 Oct 2015 09:10:42 -0500 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: <1445512774-410366-75.488472982304-29604@mail.muni.cz> Message-ID: I'd echo Richard's message. This is exactly what the unfortunately-named 'mochiglobal' module does. If you have a large, low-write, high-read data-structure, it's perfect. Keep in mind that compiling the module may take a lot longer than copying a large datastructure between processes, and so be REALLY REALLY sure this is what you want first. In the early days of Riak we used mochiglobal to store the ring, because then many client processes could access it at once without going through a gen_server. However, over time we realized the churn was too high for the code server and switched to something else; I think it ended up being a gen_server handling updates but putting it in a public-read ets table. On Thu, Oct 22, 2015 at 8:18 AM, Richard Carlsson < carlsson.richard@REDACTED> wrote: > If the data is constant once the table is built, and the entries are big > so you don't want to copy them out from an ets table or even avoid passing > them between processes (if you want to distribute the table to a large > number of processes), you could generate a module, e.g. using Merl, with a > lookup function that does what the trie lookup would do. It might take some > effort but the speedup could be huge. > > > /Richard > > On Thu, Oct 22, 2015 at 1:56 PM, Jesper Louis Andersen < > jesper.louis.andersen@REDACTED> wrote: > >> >> On Thu, Oct 22, 2015 at 1:19 PM, Adam Krupi?ka >> wrote: >> >>> The gen_server approach would be more efficient; most efficient would be >>> storing a copy of the trie in each process (but also most memory >>> wasteful). >>> >> >> Alternatively, level compress[0] the first few levels into N processes. >> >> [0] Technically you then get a concurrent level-compressed trie. >> >> >> -- >> J. >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From schneider@REDACTED Thu Oct 22 17:14:53 2015 From: schneider@REDACTED (Frans Schneider) Date: Thu, 22 Oct 2015 17:14:53 +0200 Subject: [erlang-questions] edoc: error in doclet 'edoc_doclet' when using macro in -type In-Reply-To: References: <562607FF.9030503@xs4all.nl> Message-ID: <5628FD6D.1030501@xs4all.nl> Great, got it working more or less, however now I hit another error. ./src/test.erl: at line 23: can't find include file "gpb.hrl" edoc: skipping source file './src/test.erl': {'EXIT',error}. edoc: error in doclet 'edoc_doclet': {'EXIT',error}. "gdb" is in the deps directory. Get the error both with and without the skip_deps=true option. TIA Frans On 10/22/2015 03:56 PM, Sean Cribbs wrote: > Yes, use the 'preprocess' option when running edoc. 
It's exposed in > rebar like so: > > {edoc_opts, [{preprocess, true}]}. > > On Tue, Oct 20, 2015 at 4:23 AM, Frans Schneider > wrote: > > Hi list, > > When I use > > -type id() :: ?MIN_KEM_ID..?MAX_KEM_ID. > > instead of > > -type id() :: 0..16#ffff. > > I get the error "edoc: error in doclet 'edoc_doclet'". > > Is there a work around for this? > > Thanks, > > Frans > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trapexit@REDACTED Thu Oct 22 17:21:56 2015 From: trapexit@REDACTED (Antonio SJ Musumeci) Date: Thu, 22 Oct 2015 11:21:56 -0400 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: <1445512774-410366-75.488472982304-29604@mail.muni.cz> Message-ID: If it almost never or never changes after the first computation then the mochiglobal way will probably work fine. I had thrown together a simple library to expand on the idea[0]. I've used it to serve assets in a cowboy server that was escriptized. Fast and easy to distribute/deploy. [0] http://github.com/trapexit/wandb On Thu, Oct 22, 2015 at 10:10 AM, Sean Cribbs wrote: > I'd echo Richard's message. This is exactly what the unfortunately-named > 'mochiglobal' module does. If you have a large, low-write, high-read > data-structure, it's perfect. Keep in mind that compiling the module may > take a lot longer than copying a large datastructure between processes, and > so be REALLY REALLY sure this is what you want first. > > In the early days of Riak we used mochiglobal to store the ring, because > then many client processes could access it at once without going through a > gen_server. However, over time we realized the churn was too high for the > code server and switched to something else; I think it ended up being a > gen_server handling updates but putting it in a public-read ets table. > > On Thu, Oct 22, 2015 at 8:18 AM, Richard Carlsson < > carlsson.richard@REDACTED> wrote: > >> If the data is constant once the table is built, and the entries are big >> so you don't want to copy them out from an ets table or even avoid passing >> them between processes (if you want to distribute the table to a large >> number of processes), you could generate a module, e.g. using Merl, with a >> lookup function that does what the trie lookup would do. It might take some >> effort but the speedup could be huge. >> >> >> /Richard >> >> On Thu, Oct 22, 2015 at 1:56 PM, Jesper Louis Andersen < >> jesper.louis.andersen@REDACTED> wrote: >> >>> >>> On Thu, Oct 22, 2015 at 1:19 PM, Adam Krupi?ka >>> wrote: >>> >>>> The gen_server approach would be more efficient; most efficient would be >>>> storing a copy of the trie in each process (but also most memory >>>> wasteful). >>>> >>> >>> Alternatively, level compress[0] the first few levels into N processes. >>> >>> [0] Technically you then get a concurrent level-compressed trie. >>> >>> >>> -- >>> J. 
>>> >>> _______________________________________________ >>> erlang-questions mailing list >>> erlang-questions@REDACTED >>> http://erlang.org/mailman/listinfo/erlang-questions >>> >>> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dlipubkey@REDACTED Thu Oct 22 19:24:02 2015 From: dlipubkey@REDACTED (David Li) Date: Thu, 22 Oct 2015 10:24:02 -0700 Subject: [erlang-questions] A newbie question regarding distributed programming In-Reply-To: References: Message-ID: Dmitry, Thanks for pointing this out. It all works now. On Wed, Oct 21, 2015 at 10:19 PM, Dmitry Kolesnikov wrote: > Hello, > > You nodes are bound to xyz fqdn but you initiate pong to localhost. > > Best Regards, > Dmitry >>-|-|-(*> > >> On 22 Oct 2015, at 01:25, David Li wrote: >> >> Hi, >> >> I am reading up the ping pong example in the tutorial. I want to set >> up two Erlang nodes on a single VM to test the code. However the pong >> program never receives the message from ping. >> >> My module name is "hello_msg_twonodes". Here are what I did: >> >> To start a pong node: >> ================== >> $ erl -sname pong >> Erlang/OTP 18 [erts-7.1] [source] [64-bit] [async-threads:10] [hipe] >> [kernel-poll:false] >> >> Eshell V7.1 (abort with ^G) >> (pong@REDACTED)1> >> (pong@REDACTED)1> hello_msg_twonodes:start_pong(). >> true >> >> To start a ping node >> ================== >> >> $ erl -sname ping >> Erlang/OTP 18 [erts-7.1] [source] [64-bit] [async-threads:10] [hipe] >> [kernel-poll:false] >> >> Eshell V7.1 (abort with ^G) >> (ping@REDACTED)1> hello_msg_twonodes:start_ping(pong@REDACTED). >> In PING func, sending to pong >> In PING func, sent to pong, waiting for feedback >> <0.42.0> >> >> >> >> After this, no more messages are shown on either one of the nodes. >> Can anyone help me to see what's wrong? >> >> Thanks. >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions From tomas.abrahamsson@REDACTED Thu Oct 22 22:22:49 2015 From: tomas.abrahamsson@REDACTED (Tomas Abrahamsson) Date: Thu, 22 Oct 2015 22:22:49 +0200 Subject: [erlang-questions] edoc: error in doclet 'edoc_doclet' when using macro in -type In-Reply-To: <5628FD6D.1030501@xs4all.nl> References: <562607FF.9030503@xs4all.nl> <5628FD6D.1030501@xs4all.nl> Message-ID: > ./src/test.erl: at line 23: can't find include file "gpb.hrl" > edoc: skipping source file './src/test.erl': {'EXIT',error}. > edoc: error in doclet 'edoc_doclet': {'EXIT',error}. You might need to tell edoc, too, about the include path: {edoc_opts, [{includes,["deps/gpb/include"]}, {preprocess, true}]}. BRs Tomas From mjtruog@REDACTED Thu Oct 22 22:38:25 2015 From: mjtruog@REDACTED (Michael Truog) Date: Thu, 22 Oct 2015 13:38:25 -0700 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: Message-ID: <56294941.3080003@gmail.com> On 10/22/2015 01:29 AM, Caragea Silviu wrote: > Hello. > > In one of my projects I need to use a radix tree. I found out a very nice library : > https://github.com/okeuday/trie > > Lookup performances are great. 
But I have one problem. > > Basically my tree has around 100 000 elements so building it it's an extremely operation. For this reason I'm building it once and all processes that needs to do lookups need to share the btrie object (created using btrie:new/1). > > Here I see several options: > > 1. Use a gen server and store the btrie object on the state or process dictionary. - I didn't tried this > 2. Use a ets table and store the tire object on a public table where all processes can read and write. It is easier to scale and is more natural in Erlang if you pursue #1 (using the state, not the process dictionary). The #2 path (including mochiglobal) is typical in imperative programming (mutating global state). With #1 you can manage the reliability of individual processes for fault-tolerance concerns and you would probably start with a single locally registered process name. Then if there is too much contention for the single process that has the btrie, you would switch to using a process group, to share the load with replicated data. The btrie usage is probably slower than using the newer maps data structure. The trie repo was mainly created for string keys, not binary keys, due to the memory access details in Erlang (i.e., it is easier to have more efficient lookups with string keys, when using process heap data, which includes being more efficient than maps in some cases). You could also store the key/value lookup as a single large binary that you reference (in multiple processes, since large binaries are reference counted) with something like https://github.com/knutin/bisect which may work too. Best Regards, Michael > > Doing some benchmarks I see that lookup-ing for the longest prefix (btrie:find_prefix_longest) in around 100 K elements by prefix it's around 2- 5 ms and 95% of the time is spent in the ets:lookup. > > I think the time spent there is so big because also my term stored there is very big. > > Any other suggestions ? > > Silviu > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From schneider@REDACTED Thu Oct 22 22:43:06 2015 From: schneider@REDACTED (Schneider) Date: Thu, 22 Oct 2015 22:43:06 +0200 Subject: [erlang-questions] edoc: error in doclet 'edoc_doclet' when using macro in -type In-Reply-To: References: <562607FF.9030503@xs4all.nl> <5628FD6D.1030501@xs4all.nl> Message-ID: <663AD957-3BB0-448F-AADC-72A6A5FEC6AA@xs4all.nl> Thanks! Found the documentation now also at http://www.erlang.org/doc/man/edoc.html#read_source-2. Frans Op 22 okt. 2015 om 22:22 heeft Tomas Abrahamsson het volgende geschreven: >> ./src/test.erl: at line 23: can't find include file "gpb.hrl" >> edoc: skipping source file './src/test.erl': {'EXIT',error}. >> edoc: error in doclet 'edoc_doclet': {'EXIT',error}. > > You might need to tell edoc, too, about the include path: > > {edoc_opts, [{includes,["deps/gpb/include"]}, {preprocess, true}]}. > > BRs > Tomas From silviu.cpp@REDACTED Thu Oct 22 22:54:18 2015 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Thu, 22 Oct 2015 23:54:18 +0300 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: <56294941.3080003@gmail.com> References: <56294941.3080003@gmail.com> Message-ID: Hello, @Michael I'm using btree only because of btrie:find_prefix_longest . 
Basically this is the main functionality I need. As I already posted if you have a btrie with the following elements ["aa", "a", "b", "bb", "aaa"] and you call: btrie:find_prefix_longest("aaawhatever") will return the associated value to the key "aaa". I need this for a long table with calling breakouts (prefixes and rate per prefix) - around 50 k breakouts and basically I call btrie:find_prefix_longest(<<"phonenumber">>) and it returns me the prefix and the rate I need to bill for that destination. Lookup operation seems ok from 1-2.5 ms 95% of time is spent in ets:lookup. As somebody already pointed out is because ets is doing a copy. I will change with gen_server state and benchmark again. Thanks everyone for suggestions ! On Thu, Oct 22, 2015 at 11:38 PM, Michael Truog wrote: > On 10/22/2015 01:29 AM, Caragea Silviu wrote: > > Hello. > > In one of my projects I need to use a radix tree. I found out a very nice > library : > https://github.com/okeuday/trie > > Lookup performances are great. But I have one problem. > > Basically my tree has around 100 000 elements so building it it's an > extremely operation. For this reason I'm building it once and all processes > that needs to do lookups need to share the btrie object (created using > btrie:new/1). > > Here I see several options: > > 1. Use a gen server and store the btrie object on the state or process > dictionary. - I didn't tried this > 2. Use a ets table and store the tire object on a public table where all > processes can read and write. > > It is easier to scale and is more natural in Erlang if you pursue #1 > (using the state, not the process dictionary). The #2 path (including > mochiglobal) is typical in imperative programming (mutating global state). > With #1 you can manage the reliability of individual processes for > fault-tolerance concerns and you would probably start with a single locally > registered process name. Then if there is too much contention for the > single process that has the btrie, you would switch to using a process > group, to share the load with replicated data. > > The btrie usage is probably slower than using the newer maps data > structure. The trie repo was mainly created for string keys, not binary > keys, due to the memory access details in Erlang (i.e., it is easier to > have more efficient lookups with string keys, when using process heap data, > which includes being more efficient than maps in some cases). > > You could also store the key/value lookup as a single large binary that > you reference (in multiple processes, since large binaries are reference > counted) with something like https://github.com/knutin/bisect which may > work too. > > Best Regards, > Michael > > > Doing some benchmarks I see that lookup-ing for the longest prefix (btrie: > find_prefix_longest) in around 100 K elements by prefix it's around 2- 5 > ms and 95% of the time is spent in the ets:lookup. > > I think the time spent there is so big because also my term stored there > is very big. > > Any other suggestions ? > > Silviu > > > > _______________________________________________ > erlang-questions mailing listerlang-questions@REDACTED://erlang.org/mailman/listinfo/erlang-questions > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Fri Oct 23 01:56:22 2015 From: ok@REDACTED (Richard A. 
O'Keefe) Date: Fri, 23 Oct 2015 12:56:22 +1300 Subject: [erlang-questions] Third draft of EEP 44 - Additional preprocessor directives In-Reply-To: References: Message-ID: <0BA5B751-6ADC-4AA0-8C8D-8B0BC1A91221@cs.otago.ac.nz> I do hope this isn't the final draft. (1) "OTP_RELEASE can infer information" What a clever little macro that must be. Statements IMPLY, people INFER. OTP_RELEASE can *IMPLY* something but never INFER anything. (2) "As an hypothetical" "an" here must be "a". If the h in "hypothetical" were silent, "an" would be appropriate. But it isn't, and it isn't. (3) It's the semantics. I hope you understand that I'm not quarrelling with the set of new built-in functions or their intended use or anything like that. The set can always extended later. My problem is that I still do not know WHAT THESE MEAN. There is a general problem that if I run the preprocessor at time T in context C, save the AST, and then finish the compilation at time T' in context C', the result that I get will not, in general, be the result I *would* have got at T in C. is_deprecated/3 : How do we know whether the compiler would have generated a warning or not? If the compiler has been asked to suppress such warnings (nowarn_deprecated_function), is is_deprecated/3 still true? Suppose we have a file snagglepuss.erl containing -if(...). -deprecated(...). -endif. and a snagglepuss.beam generated from it, and suppose the .erl file is modified. Should the preprocessor cause snagglepuss.erl to be recompiled, or what? (Oh, Notice that the compiler does not know about attribute -deprecated(), but uses an assembled list of deprecated functions in Erlang/OTP. http://www.erlang.org/doc/man/compile.html) I think this needs to be repeated in the EEP. Something like is_deprecated(Module, Function, Arity) There are two ways that a function can be deprecated. One is by using the -deprecated() attribute. This is what you use to deprecate your functions, and the Xref tool knows about it. The compiler does not, and this if-BIF doesn't either. The other way is by listing the function in the compiler's table of deprecated functions. This is what this if-BIF consults. is_deprecated(M,F,A) is true if and only if M:F/A is listed in that table; the nowarn_deprecated option has no effect on this decision. is_exported(M,F,A) : HOW DOES IT TELL whether F/A is exported from M? Again, if we have -if(...). -export([...]). -endif. and the .erl file is newer than the .beam file, what is supposed to happen? Now try this. -module(a). -if(not is_exported(b,x,0)). -export([x/0]). -endif. ... -module(b). -if(is_exported(a,x,0)). -export([x/0]). -endif. Start from cold. 'a' assumes b:x/0 isn't exported, so it exports x/0. 'b' now exports b:x/0. OOPS. 'a' relied on an assumption that's false. My point here is that without an explicit semantics, I honestly do not know how to implement this EEP NOR DO I KNOW HOW TO USE IT SAFELY. is_header(...) : this seems clear enough. is_module(...) : ouch. I really did not expect that a test for whether a module EXISTS would turn into an attempt to LOAD it. Try this one: -module(a). -if(module_exists(b)). ... -endif. ... -module(b). -if(module_exists(a)). ... -endif. ... Do we have an infinite loop here, or do we have a situation where one of the modules is going to be compiled under assumptions that turn out to be false, or is this an error that must be reported? version(App) : " If a component consists of of numbers only, it will be converted to an integer". How? 
Turning 6.0.1 into 601 is tempting, but consider 5.10.2. You don't want 5.10.2 > 6.0.1. It's clear enough that the version is determined by trying to locate the app file, and thankfully app files don't use the preprocessor... I did raise the issue of semantics before. Maybe this time the importance of the issue is clearer. On the general subject of preprocessors, I found for my Smalltalk compiler that I needed one. What I initially needed it for was producing a list of test cases, because some operating system/library features were not present everywhere. For example, fmtmsg(), gettext(), load average, POSIX message queues, POSIX semaphores, a working version of the UUID library. There is still no conditional processing in any Smalltalk code, but lists of files and test cases, yes. Now that tells me that there is one more if-BIF needed, and that is something that tells you what kind of platform you have. Yes, Erlang code shouldn't normally depend on that, but the Erlang code might run external programs. Maybe this is best done by having some sort of installation script (hello, configure!) that writes a .hrl file. It is certainly the kind of thing that can be added as an if-BIF later; it's not something I'd want the EEP delayed over. From ok@REDACTED Fri Oct 23 02:07:47 2015 From: ok@REDACTED (Richard A. O'Keefe) Date: Fri, 23 Oct 2015 13:07:47 +1300 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: References: Message-ID: On 22/10/2015, at 4:13 am, Bruce Yinhe wrote: > We are happy to announce the issue tracker for Erlang/OTP (http://bugs.erlang.org). That's great. If I am ever unlucky enough to notice (or more likely, to think I've noticed) a bug in Erlang, that will be so far in the future I'll have forgotten this announcement. I see the bug tracker is announced under NEWS in the www.erlang.org web page, and that's great too. But in that distant future, it won't be news any more, and I will _still_ have forgotten. So can "Issue Tracker" be a permanent part of the www.erlang.org page? Perhaps in the nav bar at the top? Or possibly one step away, among the links? From jesper.louis.andersen@REDACTED Fri Oct 23 13:46:05 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Fri, 23 Oct 2015 13:46:05 +0200 Subject: [erlang-questions] [ANN] Turtle - Yet another wrapper for the RabbitMQ erlang client Message-ID: Hi, At Shopgun, we've just Open Sourced our "turtle" application: https://github.com/shopgun/turtle The turtle application is built to be a wrapper around the RabbitMQ standard Erlang driver. The purpose is to enable faster implementation and use of RabbitMQ by factoring out common tasks into a specific application targeted toward ease of use. The secondary purpose is to make the Erlang client better behaving toward an OTP setup. The official client makes lots of assumptions which are not always true in an OTP setting. The features of turtle are: Maintain RabbitMQ connections and automate re-connections if the network is temporarily severed. Provides the invariant that there is always a process knowing the connection state toward RabbitMQ. If the connection is down, you will receive errors back. Provide support for having multiple connection points to the RabbitMQ cluster. On failure the client will try the next client in the group. On too many failures on a group, connect to a backup group instead. This allows a client the ability to use a backup cluster in another data center, should the primary cluster break down. 
Support easy subscription where each received message is handled by a stateful callback function supplied by the application. Support easy message sending anywhere in the application by introducing a connection proxy. Support RPC style calls in RabbitMQ over the connection proxy. This allows a caller to block on an AMQP style RPC message since the connection proxy handles the asynchronous non-blocking behavior. Tracks timing of most calls. This allows you to gather metrics on the behavior of underlying systems and in turn handle erroneous cases proactively. ------------------- We are using the application in several setups internally, and it has already helped us more than once. It bears some resemblance with other applications which has been written in the same area, such as MochiMedias gen_bunny, or as I've seen them as closed source implementations. This is a rewrite, focusing on the support of RPC style messaging with timing deadlines. I'm interested in speaking to potential users of turtle, so we can help each other. Also, please get in touch if you need a dependency broken or altered for you to be able to use it. The documentation is written with a users guide in the README.md, which I hope is good enough for people to be able to get started. If not, please open an Issue :) On behalf of the Shopgun team, -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anders.nygren@REDACTED Fri Oct 23 14:20:50 2015 From: anders.nygren@REDACTED (Anders Nygren) Date: Fri, 23 Oct 2015 07:20:50 -0500 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: <56294941.3080003@gmail.com> Message-ID: Since it is number analysis You want I think I should mention https://github.com/nygge/number_analysis Originally written by Klacke. It builds a trie in an ETS table. /Anders On Thu, Oct 22, 2015 at 3:54 PM, Caragea Silviu wrote: > Hello, > > @Michael I'm using btree only because of btrie:find_prefix_longest . > > Basically this is the main functionality I need. As I already posted if > you have a btrie with the following elements ["aa", "a", "b", "bb", "aaa"] > and you call: btrie:find_prefix_longest("aaawhatever") will return the > associated value to the key "aaa". > > I need this for a long table with calling breakouts (prefixes and rate per > prefix) - around 50 k breakouts and basically I call > btrie:find_prefix_longest(<<"phonenumber">>) and it returns me the prefix > and the rate I need to bill for that destination. Lookup operation seems ok > from 1-2.5 ms 95% of time is spent in ets:lookup. As somebody already > pointed out is because ets is doing a copy. I will change with gen_server > state and benchmark again. > > Thanks everyone for suggestions ! > > On Thu, Oct 22, 2015 at 11:38 PM, Michael Truog wrote: > >> On 10/22/2015 01:29 AM, Caragea Silviu wrote: >> >> Hello. >> >> In one of my projects I need to use a radix tree. I found out a very nice >> library : >> https://github.com/okeuday/trie >> >> Lookup performances are great. But I have one problem. >> >> Basically my tree has around 100 000 elements so building it it's an >> extremely operation. For this reason I'm building it once and all processes >> that needs to do lookups need to share the btrie object (created using >> btrie:new/1). >> >> Here I see several options: >> >> 1. Use a gen server and store the btrie object on the state or process >> dictionary. - I didn't tried this >> 2. 
Use a ets table and store the tire object on a public table where all >> processes can read and write. >> >> It is easier to scale and is more natural in Erlang if you pursue #1 >> (using the state, not the process dictionary). The #2 path (including >> mochiglobal) is typical in imperative programming (mutating global state). >> With #1 you can manage the reliability of individual processes for >> fault-tolerance concerns and you would probably start with a single locally >> registered process name. Then if there is too much contention for the >> single process that has the btrie, you would switch to using a process >> group, to share the load with replicated data. >> >> The btrie usage is probably slower than using the newer maps data >> structure. The trie repo was mainly created for string keys, not binary >> keys, due to the memory access details in Erlang (i.e., it is easier to >> have more efficient lookups with string keys, when using process heap data, >> which includes being more efficient than maps in some cases). >> >> You could also store the key/value lookup as a single large binary that >> you reference (in multiple processes, since large binaries are reference >> counted) with something like https://github.com/knutin/bisect which may >> work too. >> >> Best Regards, >> Michael >> >> >> Doing some benchmarks I see that lookup-ing for the longest prefix (btrie: >> find_prefix_longest) in around 100 K elements by prefix it's around 2- 5 >> ms and 95% of the time is spent in the ets:lookup. >> >> I think the time spent there is so big because also my term stored there >> is very big. >> >> Any other suggestions ? >> >> Silviu >> >> >> >> _______________________________________________ >> erlang-questions mailing listerlang-questions@REDACTED://erlang.org/mailman/listinfo/erlang-questions >> >> >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silviu.cpp@REDACTED Fri Oct 23 14:39:44 2015 From: silviu.cpp@REDACTED (Caragea Silviu) Date: Fri, 23 Oct 2015 15:39:44 +0300 Subject: [erlang-questions] where it's the best way to store a very big term object shared between processes In-Reply-To: References: <56294941.3080003@gmail.com> Message-ID: Hello, I tested the gen_server implementation and the results of a lookup are in average 0.1 ms so are pretty nice . In case the gen_server becomes a problem I night use multiple gen_server instances. https://github.com/inaka/worker_pool Silviu On Fri, Oct 23, 2015 at 3:20 PM, Anders Nygren wrote: > Since it is number analysis You want I think I should mention > https://github.com/nygge/number_analysis > Originally written by Klacke. It builds a trie in an ETS table. > > /Anders > > On Thu, Oct 22, 2015 at 3:54 PM, Caragea Silviu > wrote: > >> Hello, >> >> @Michael I'm using btree only because of btrie:find_prefix_longest . >> >> Basically this is the main functionality I need. As I already posted if >> you have a btrie with the following elements ["aa", "a", "b", "bb", "aaa"] >> and you call: btrie:find_prefix_longest("aaawhatever") will return the >> associated value to the key "aaa". 
>> >> I need this for a long table with calling breakouts (prefixes and rate >> per prefix) - around 50 k breakouts and basically I call >> btrie:find_prefix_longest(<<"phonenumber">>) and it returns me the prefix >> and the rate I need to bill for that destination. Lookup operation seems ok >> from 1-2.5 ms 95% of time is spent in ets:lookup. As somebody already >> pointed out is because ets is doing a copy. I will change with gen_server >> state and benchmark again. >> >> Thanks everyone for suggestions ! >> >> On Thu, Oct 22, 2015 at 11:38 PM, Michael Truog >> wrote: >> >>> On 10/22/2015 01:29 AM, Caragea Silviu wrote: >>> >>> Hello. >>> >>> In one of my projects I need to use a radix tree. I found out a very >>> nice library : >>> https://github.com/okeuday/trie >>> >>> Lookup performances are great. But I have one problem. >>> >>> Basically my tree has around 100 000 elements so building it it's an >>> extremely operation. For this reason I'm building it once and all processes >>> that needs to do lookups need to share the btrie object (created using >>> btrie:new/1). >>> >>> Here I see several options: >>> >>> 1. Use a gen server and store the btrie object on the state or process >>> dictionary. - I didn't tried this >>> 2. Use a ets table and store the tire object on a public table where all >>> processes can read and write. >>> >>> It is easier to scale and is more natural in Erlang if you pursue #1 >>> (using the state, not the process dictionary). The #2 path (including >>> mochiglobal) is typical in imperative programming (mutating global state). >>> With #1 you can manage the reliability of individual processes for >>> fault-tolerance concerns and you would probably start with a single locally >>> registered process name. Then if there is too much contention for the >>> single process that has the btrie, you would switch to using a process >>> group, to share the load with replicated data. >>> >>> The btrie usage is probably slower than using the newer maps data >>> structure. The trie repo was mainly created for string keys, not binary >>> keys, due to the memory access details in Erlang (i.e., it is easier to >>> have more efficient lookups with string keys, when using process heap data, >>> which includes being more efficient than maps in some cases). >>> >>> You could also store the key/value lookup as a single large binary that >>> you reference (in multiple processes, since large binaries are reference >>> counted) with something like https://github.com/knutin/bisect which may >>> work too. >>> >>> Best Regards, >>> Michael >>> >>> >>> Doing some benchmarks I see that lookup-ing for the longest prefix >>> (btrie:find_prefix_longest) in around 100 K elements by prefix it's >>> around 2- 5 ms and 95% of the time is spent in the ets:lookup. >>> >>> I think the time spent there is so big because also my term stored there >>> is very big. >>> >>> Any other suggestions ? >>> >>> Silviu >>> >>> >>> >>> _______________________________________________ >>> erlang-questions mailing listerlang-questions@REDACTED://erlang.org/mailman/listinfo/erlang-questions >>> >>> >>> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
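For the read-mostly case, the "compile the data into a module" idea that Richard Carlsson and Sean Cribbs describe earlier in the thread can be sketched without merl. This is the mochiglobal-style variant that stores the whole term as a literal rather than generating the lookup function itself; const_mod, load/2 and value/0 are invented names, it only works for terms with a literal representation (no pids, refs or funs), and as noted above the compile step can easily cost more than the copying it avoids, so it only pays off when the data changes rarely.

-module(const_mod).
-export([load/2]).

%% Compile Term into Module:value/0 and load the result. Reading the
%% literal back does not copy it onto the caller's heap, which is the
%% point of the exercise; the compile itself can be slow for a
%% 100 K element term.
load(Module, Term) ->
    Forms = [form("-module(~p).", [Module]),
             form("-export([value/0]).", []),
             form("value() -> ~p.", [Term])],
    {ok, Module, Beam} = compile:forms(Forms, []),
    {module, Module} = code:load_binary(Module, atom_to_list(Module), Beam),
    ok.

%% Render one form as text, then scan and parse it back into abstract code.
form(Fmt, Args) ->
    {ok, Tokens, _} = erl_scan:string(lists:flatten(io_lib:format(Fmt, Args))),
    {ok, Form} = erl_parse:parse_form(Tokens),
    Form.

After const_mod:load(rates, Trie), any process can call rates:value() and run the usual btrie lookups against the shared constant without going through a server or an ETS copy.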
URL: From carlsson.richard@REDACTED Fri Oct 23 14:51:28 2015 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Fri, 23 Oct 2015 14:51:28 +0200 Subject: [erlang-questions] try catch params in Core Erlang In-Reply-To: References: Message-ID: The third parameter is described in the Core Erlang specification ( https://www.it.uu.se/research/group/hipe/cerl/doc/core_erlang-1.0.3.pdf), albeit in a very opaque way. This is because the Core Erlang language doesn't go into any details about things that can be left to the actual Erlang implementation to decide how it should work. The third parameter is the exception object itself. This needs to be an Erlang term so that it can be handled and passed around, but for efficiency, the BEAM will pack the necessary info into a bignum, which is a cheap and compact representation. It will stay like this until someone really needs to do anything except pass it on upwards, in which case it can be unpacked to a more readable tuple with a stack trace. This detail is completely implementation dependent and should not be relied on by programmers on the Erlang level. /Richard On Mon, Oct 19, 2015 at 10:14 AM, Jonas Falkevik < jonas.falkevik@REDACTED> wrote: > It seems to be the stack trace at least in R16B03-1. > > erl source: > -module(test). > -compile(export_all). > > test() -> > try > foo:bar() > catch > some:thing -> ok; > error:E -> > io:format("third: ~p~n", [E]) > end. > > core erlang modification: > > @@ -18,10 +18,10 @@ > <'some','thing',_cor4> when 'true' -> > 'ok' > %% Line 9 > - <'error',E,_cor5> when 'true' -> > + <'error',_E,F> when 'true' -> > %% Line 10 > call 'io':'format' > - ([116|[104|[105|[114|[100|[58|[32|[126|[112|[126|[110]]]]]]]]]]], > [E|[]]) > + ([116|[104|[105|[114|[100|[58|[32|[126|[112|[126|[110]]]]]]]]]]], > [F|[]]) > ( <_cor3,_cor2,_cor1> when 'true' -> > primop 'raise' > (_cor1, _cor2) > > > > Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:4:4] > [async-threads:10] [hipe] [kernel-poll:false] > > Eshell V5.10.4 (abort with ^G) > 1> c(test, [to_core]). > ** Warning: No object file created - nothing loaded ** > ok > 2> c(test, [from_core]). > {ok,test} > 3> test:test(). > third: [[{foo,bar,[],[]}, > {test,test,0,[]}, > {erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]}, > {shell,exprs,7,[{file,"shell.erl"},{line,674}]}, > {shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]}, > {shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]| > -000000000000000016] > ok > > /Jonas > > On Oct 15, 2015, at 19:57 , Vladimir Gordeev wrote: > > If you compile some try-catch statements into Core Erlang, you may notice, > that it receives three params in exception pattern: > http://tryerl.seriyps.ru/#id=3bf3 > > this: > > try foo:bar() > catch > some:thing -> ok > end. > > into this: > > try > call 'foo':'bar' > () > of <_cor0> -> > _cor0 > catch <_cor3,_cor2,_cor1> -> > case <_cor3,_cor2,_cor1> of > %% Line 7 > <'some','thing',_cor4> when 'true' -> > 'ok' > ( <_cor3,_cor2,_cor1> when 'true' -> > primop 'raise' > (_cor1, _cor2) > -| ['compiler_generated'] ) > end > > In "An introduction to Core Erlang" catch described as taking two params: > http://www.erlang.org/workshop/carlsson.ps > > Question is: what is this third param (_cor4) for? 
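At the Erlang source level, in the OTP releases discussed in this thread, only the first two of those three parts are ever matched in a catch clause; the stack trace is fetched separately and the packed exception object never surfaces directly:

try foo:bar() of
    Val -> {ok, Val}
catch
    Class:Reason ->
        {error, Class, Reason, erlang:get_stacktrace()}
end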
> _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brandjoe@REDACTED Fri Oct 23 15:02:10 2015 From: brandjoe@REDACTED (=?UTF-8?Q?J=c3=b6rgen_Brandt?=) Date: Fri, 23 Oct 2015 15:02:10 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: References: <5616E93B.4060608@hu-berlin.de> Message-ID: <562A2FD2.2060406@hu-berlin.de> Hey Torben, thanks for your reply. On 19.10.2015 10:51, Torben Hoffmann wrote: > Hi J?rgen, > > With the risk of showing my inability to understand your problem I would challenge > the need for the transaction server altogether. > > As you say, the messages that the processing server has yet to process will be lost > if the server dies, so re-sending is required. > > I would simply deal with this in the client. > When you send a request you monitor the server, if it dies, you re-send when the > service is up again. > The server monitors clients, if the client dies, the server stops the pending and > ongoing jobs for that client. You are perfectly right. It would be advantageous to have a decentralized model for this. The reason I proposed a centralized architecture (with a dedicated transaction server) is the following: You send a request to a server. The server dies. Because of a monitor you find out you have to resend the message when the server is up again. So far so good. How do you find out, the server is up again? There will be a millisecond or something the supervisor needs to restart the server and you need to make sure, not to repeat the message into the void. How did you address this issue in your GOL implementation? Cheers J?rgen > > I have used this approach is my Game of Life implementation - it seems to work. > > There might be room for a little library for some of the book keeping involved in > this, but given that there can be so many variations on this very simple pattern I > fear that it will be hard to create a generic library for this. > > Cheers, > Torben > > J?rgen Brandt writes: > >> Hello, >> >> is there an Erlang library for transactional message >> passing, using patterns in communication and error handling >> to improve fault tolerance? >> >> This (or a similar) question may have been asked before and, >> of course, there is plenty of research on fault tolerance >> and failure transparency. Nevertheless, in my work with >> scientific workflows it seems that certain patterns in error >> handling exist. In this mail I'm trying to motivate a way to >> externalize common error handling in a standardized service >> (a transaction server) but I'm unsure whether such a thing >> already exists, whether I'm missing an important feature, >> and whether it's a good idea anyway. >> >> Large distributed systems are composed of many services. >> They process many tasks concurrently and need fault >> tolerance to yield correct results and maintain >> availability. Erlang seemed a good choice because it >> provides facilities to automatically improve availability, >> e.g., by using supervisers. In addition, it encourages a >> programming style that separates processing logic from >> error handling. 
In general, each service has its own >> requirements, implying that a general approach to error >> handling (beyond restarting) is infeasible. However, if an >> application exhibits recurring patterns in the way error >> handling corresponds to the messages passed between >> services, we can abstract these patterns to reuse them. >> >> >> Fault tolerance is important because it directly translates >> to scalability. >> >> Consider an application (with transient software faults), >> processing user queries. The application reports errors back >> to the user as they appear. If a user query is a long- >> running job (hours, days), the number of subtasks created >> from this job (thousands), the number of services to process >> one subtask, and the number of machines involved are large, >> then the occurrence of an error is near to certain. Quietly >> restarting the application and rerunning the query may >> reduce the failure probability but even if the application >> succeeds, the number of retries and, thus, the time elapsed >> to success may be prohibitive. What is needed is a system >> that does not restart the whole application but only the >> service that failed reissuing only the unfinished requests >> that this service received before failing. Consequently, the >> finer the granularity at which errors are handled, the less >> work has to be redone when errors occur, allowing a system >> to host longer-running jobs, containing more subtasks, >> involving more services for each subtask, and running on >> more machines in feasible time. >> >> >> Scientific workflows are a good example for a large >> distributed application exhibiting patterns in communication >> and error handling. >> >> A scientific workflow system consumes an input query in the >> form of an expression in the workflow language. On >> evaluation of this expression it identifies subtasks that >> can be executed in any order. E.g., a variant calling >> workflow from bioinformatics unfolds into several hundred >> to a thousand subtasks each of which is handed down in the >> form of requests through a number of services: Upon >> identification of the subtask in (i) the query interpreter, >> a request is sent to (ii) a cache service. This service >> keeps track of all previously run subtasks and returns the >> cached result if available. If not, a request is sent to >> (iii) a scheduling service. This service determines the >> machine, to run the subtask. The scheduler tries both, to >> adequately distribute the work load among workers (load >> balancing) and to minimize data transfers among nodes (data >> locality). Having decided where to run the subtask, a >> request is sent to (iv) the corresponding worker which >> executes the subtask and returns the result up the chain of >> services. Every subtask goes through this life cycle. >> >> Apart from the interplay of the aforementioned services we >> want the workflow system to behave in a particular way when >> one of these services dies: >> >> - Each workflow is evaluated inside its own interpreter >> process. A workflow potentially runs for a long time and >> at some point we might want to kill the interpreter >> process. When this happens, the system has to identify all >> open requests originating from that interpreter and cancel >> them. >> >> - When an important service (say the scheduler) dies, a >> supervisor will restart it, this way securing the >> availability of the application. 
Upon a fresh start, none >> of the messages this service has received will be there >> anymore. Instead of having to notify the client of this >> important service (in this case the cache) to give it the >> chance to repair the damage, we want all the messages, >> that have been sent to the important service (scheduler) >> and have not been quited, to be resent to the freshly >> started service (scheduler). >> >> - When a worker dies, from a hardware fault, we cannot >> expect a supervisor to restart it (on the same machine). >> In this case we want to notify the scheduler not to expect >> a reply to his request anymore. Also we want to reissue >> the original request to the scheduler to give it the >> chance to choose a different machine to run the subtask >> on. >> >> - When a request is canceled at a high level (say at the >> cache level because the interpreter died) All subsequent >> requests (to the scheduler and in the worker) >> corresponding to the original request should have been >> canceled before the high level service (cache) is >> notified, thereby relieving him of the duty to cancel them >> himself. >> >> >> Since there is no shared memory in Erlang, the state of a >> process is defined only by the messages received (and its >> init parameters which are assumed constant). To reestablish >> the state of a process after failure we propose three >> different ways to send messages to a process and their >> corresponding informal error handling semantics: >> >> tsend( Dest, Req, replay ) -> TransId >> when Dest :: atom(), >> Req :: term(), >> TransId :: reference(). >> >> Upon calling tsend/3, a transaction server creates a record >> of the request to be sent and relays it to the destination >> (must be a registered process). At the same time it creates >> a monitor on both the request's source and destination. When >> the source dies, it will send an abort message to the >> destination. When the destination dies, initially, nothing >> happens. When the supervisor restarts the destination, the >> transaction server replays all unfinished requests to the >> destination. >> >> tsend( Dest, Req, replay, Precond ) -> TransId >> when Dest :: atom(), >> Req :: term(), >> Precond :: reference(), >> TransId :: reference(). >> >> The error handling for tsend/4 with replay works just the >> same as tsend/3. Additionally, when the request with the id >> Precond is canceled, this request is also canceled. >> >> tsend( Dest, Req, reschedule, Precond ) -> TransId >> when Dest :: atom() | pid(), >> Req :: term(), >> Precond :: reference(), >> TransId :: reference(). >> >> Upon calling tsend/4, with reschedule, as before, a >> transaction server creates a record of the request and >> monitors both source and destination. When the destination >> dies, instead of waiting for a supervisor to restart it, the >> original request identified with Precond is first canceled >> at the source and then replayed to the source. Since we do >> not rely on the destination to be a permanent process, we >> can also identify it per Pid while we had to require a >> registered service under replay error handling. >> >> commit( TransId, Reply ) -> ok >> when TransId :: reference(), >> Reply :: term(). >> >> When a service is done working on a request, it sends a >> commit which relays the reply to the transaction source and >> removes the record for this request from the transaction >> server. 
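To make the proposed calls concrete: a cache service written against this API might look roughly like the sketch below. This only illustrates the tsend/commit calls and callbacks defined in the proposal (they do not exist as a library); the message formats and the pending-map bookkeeping are invented here, and the calls are shown unqualified for brevity.

handle_recv(TransId, {lookup, Key}, #{tab := Tab, pending := Pending} = State) ->
    case ets:lookup(Tab, Key) of
        [{Key, Result}] ->
            %% cache hit: answer and close the transaction
            commit(TransId, {ok, Result}),
            {noreply, State};
        [] ->
            %% cache miss: forward to the scheduler; passing TransId as the
            %% precondition means the forwarded request is cancelled whenever
            %% our own request is cancelled
            SchedId = tsend(scheduler, {schedule, Key}, replay, TransId),
            {noreply, State#{pending := Pending#{SchedId => TransId}}}
    end.

handle_abort(TransId, #{pending := Pending} = State) ->
    %% our request went away; forget any bookkeeping tied to it
    {noreply, State#{pending := maps:filter(fun(_, T) -> T =/= TransId end, Pending)}}.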
>> >> A service participating in transaction handling has to >> provide the following two callbacks: >> >> handle_recv( TransId::reference(), Req::_, State::_ ) -> >> {noreply, NewState::_}. >> >> handle_abort( TransId::reference(), State::_ ) -> >> {noreply, NewState::_}. >> >> While the so-defined transaction protocol is capable of >> satisfying the requirements introduced for the workflow >> system example the question is, is it general enough to be >> applicable also in other applications? >> >> >> This conduct has its limitations. >> >> The introduced transaction protocol may be suited to deal >> with transient software faults (Heisenbugs) but its >> effectiveness to mitigate hardware faults or deterministic >> software faults (Bohrbugs) is limited. In addition, with the >> introduction of the transaction server we created a single >> point of failure. >> >> >> Concludingly, the restarting of a service by a supervisor is >> sufficient to secure the availability of a service in the >> presence of software faults but large scale distributed >> systems require a more fine-grained approach to error >> handling. To identify patterns in message passing and error >> handling gives us the opportunity to reduce error handling >> code and, thereby, avoid the introduction of bugs into error >> handling. The proposed transaction protocol may be suitable >> to achieve this goal. >> >> >> I had hoped to get some feedback on the concept, in order to >> have an idea whether I am on the right track. If a similar >> library is already around and I just couldn't find it, if I >> am missing an obvious feature, a pattern that is important >> but just doesn't appear in the context of scientific >> workflows, it would be helpful to know about it. Thanks in >> advance. >> >> Cheers >> J?rgen >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > -- > Torben Hoffmann > Architect, basho.com > M: +45 25 14 05 38 > From brandjoe@REDACTED Fri Oct 23 15:51:59 2015 From: brandjoe@REDACTED (=?UTF-8?Q?J=c3=b6rgen_Brandt?=) Date: Fri, 23 Oct 2015 15:51:59 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: References: <5616E93B.4060608@hu-berlin.de> Message-ID: <562A3B7F.4060404@hu-berlin.de> Hello, On 20.10.2015 12:53, Jesper Louis Andersen wrote: > > On Fri, Oct 9, 2015 at 12:07 AM, J?rgen Brandt > > wrote: > > Consider an application (with transient software faults), > processing user queries. > > > I think this is the central point of the mail. One of the usual > Erlang approaches is to rely on stable storage and checkpointing > for these kinds of work units. > > Stable storage means that you can write to disk, sync data, and > then you have a linearizable point in time after which data are > safe on the disk and can be re-read. Sounds good. > > Checkpointing means to track on stable storage whenever the system > reaches some safe invariant state. Given, (after failure) a service is restarted with constant parameters, its state is constituted only by the requests it received. So the reestablishment of a state and the resending of all requests should be exchangeable (like in a redo log). Now, if you can guarantee/assume that messages are uncorrelated throughout the application, you only need to redo the requests which have been unreplied so far. 
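In code, that amounts to bookkeeping like the following sketch on the sending side (message format and function names are invented for illustration): keep unreplied requests keyed by reference, drop them when a reply arrives, and resend whatever is left once the server is back.

%% Whatever is still in the map when the server comes back up is exactly
%% the set of requests that has to be redone.
send_request(Server, Req, Pending) ->
    Ref = make_ref(),
    Server ! {request, self(), Ref, Req},
    Pending#{Ref => Req}.

reply_received(Ref, Pending) ->
    maps:remove(Ref, Pending).

redo_unreplied(Server, Pending) ->
    [Server ! {request, self(), Ref, Req} || {Ref, Req} <- maps:to_list(Pending)],
    Pending.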
I find the view that state is just the sum of all messages quite appealing because appending a message to a message history is a local action whereas snapshotting and storing the state of a whole application is a global action (and, thus, harder to accomplish). > > In order to make sure progress happens, even under the possibility > of transient errors, one must figure out how to checkpoint > invariant state such that it can be recovered. In some cases, you > don't need truly persistent storage, but can simply ship the state > to another process, or even system. Agreed, not for all applications one needs persistent storage. > > The other ingredient you need is that of idempotent operations. > That is, you can rerun an operation/subtask and it will produce the > same result as before, or it will return early with the answer if > the answer is already evaluated and cached. You mean that operations are deterministic and side-effect free. Something I assume (but do not enforce) in my application. > > Given these two ingredients, you don't need to use transactions. > You just reinject unfulfilled obligations into the system and you > track which obligations that deadline. > Checkpointing is a substitute for transactions. But is it a preferable one? > A project like Apache Storm implements this workflow, and I'm 99% > sure a lot of Erlang projects exist for this, though the names > eludes me right now. > Perhaps I have been a bit unspecific in what I wanted to know. I am thankful, though, for your reply. Cheers J?rgen > > -- J. From chassoul@REDACTED Fri Oct 23 16:35:46 2015 From: chassoul@REDACTED (Jean Chassoul) Date: Fri, 23 Oct 2015 08:35:46 -0600 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: References: Message-ID: On Wed, Oct 21, 2015 at 9:13 AM, Bruce Yinhe wrote: > Hello everyone > > We are happy to announce the issue tracker for Erlang/OTP ( > http://bugs.erlang.org). > Cheers to Erlang/OTP team! -------------- next part -------------- An HTML attachment was scrubbed... URL: From thoffmann@REDACTED Fri Oct 23 17:06:49 2015 From: thoffmann@REDACTED (Torben Hoffmann) Date: Fri, 23 Oct 2015 17:06:49 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: <562A2FD2.2060406@hu-berlin.de> References: <5616E93B.4060608@hu-berlin.de> <562A2FD2.2060406@hu-berlin.de> Message-ID: J?rgen Brandt writes: > Hey Torben, > > thanks for your reply. > > On 19.10.2015 10:51, Torben Hoffmann wrote: >> Hi J?rgen, >> >> With the risk of showing my inability to understand your problem I would challenge >> the need for the transaction server altogether. >> >> As you say, the messages that the processing server has yet to process will be lost >> if the server dies, so re-sending is required. >> >> I would simply deal with this in the client. >> When you send a request you monitor the server, if it dies, you re-send when the >> service is up again. >> The server monitors clients, if the client dies, the server stops the pending and >> ongoing jobs for that client. > > You are perfectly right. It would be advantageous to have a > decentralized model for this. The reason I proposed a centralized > architecture (with a dedicated transaction server) is the following: > > You send a request to a server. The server dies. Because of a monitor > you find out you have to resend the message when the server is up again. > So far so good. > > How do you find out, the server is up again? 
There will be a millisecond > or something the supervisor needs to restart the server and you need to > make sure, not to repeat the message into the void. > > How did you address this issue in your GOL implementation? > I'm dirty. I poll the egol_cell_mgr. The proper way of doing it would be to add a egol_cell_mgr:await_new_neighbour(N, OldPid) function and then get a message back from egol_cell_mgr once a new process has been registered. Or make the the call synchronous - I'm spawning a function anyway to do the polling, so it might as well just hang there until the process is there. (https://github.com/lehoff/egol/blob/testable/src/egol_cell.erl#L370) What is best depends on the problem. For egol, I think the asynchronous approach is probably better as it follows the rest of the design. Cheers, Torben -- Torben Hoffmann Architect, basho.com M: +45 25 14 05 38 From brandjoe@REDACTED Fri Oct 23 22:40:19 2015 From: brandjoe@REDACTED (=?UTF-8?Q?J=c3=b6rgen_Brandt?=) Date: Fri, 23 Oct 2015 22:40:19 +0200 Subject: [erlang-questions] Considering a Generic Transaction System in Erlang In-Reply-To: References: <5616E93B.4060608@hu-berlin.de> <562A2FD2.2060406@hu-berlin.de> Message-ID: <562A9B33.7040702@hu-berlin.de> Hey Torben, On 23.10.2015 17:06, Torben Hoffmann wrote: > > J?rgen Brandt writes: > >> Hey Torben, >> >> thanks for your reply. >> >> On 19.10.2015 10:51, Torben Hoffmann wrote: >>> Hi J?rgen, >>> >>> With the risk of showing my inability to understand your problem I would challenge >>> the need for the transaction server altogether. >>> >>> As you say, the messages that the processing server has yet to process will be lost >>> if the server dies, so re-sending is required. >>> >>> I would simply deal with this in the client. >>> When you send a request you monitor the server, if it dies, you re-send when the >>> service is up again. >>> The server monitors clients, if the client dies, the server stops the pending and >>> ongoing jobs for that client. >> >> You are perfectly right. It would be advantageous to have a >> decentralized model for this. The reason I proposed a centralized >> architecture (with a dedicated transaction server) is the following: >> >> You send a request to a server. The server dies. Because of a monitor >> you find out you have to resend the message when the server is up again. >> So far so good. >> >> How do you find out, the server is up again? There will be a millisecond >> or something the supervisor needs to restart the server and you need to >> make sure, not to repeat the message into the void. >> >> How did you address this issue in your GOL implementation? >> > I'm dirty. I poll the egol_cell_mgr. > > The proper way of doing it would be to add a > egol_cell_mgr:await_new_neighbour(N, OldPid) > function and then get a message back from egol_cell_mgr once a new process has been > registered. > Or make the the call synchronous - I'm spawning a function anyway to do the polling, > so it might as well just hang there until the process is there. > (https://github.com/lehoff/egol/blob/testable/src/egol_cell.erl#L370) I like your design. It's not that you poll every nanosecond and since you spawn the function the sending process can just continue work. I guess I'm still too much stuck in the OO way of thinking. The fact that you can just spawn any function in a fire-and-forget manner didn't come to my mind (although I remember having seen it in the textbooks). Your approach is simple and doesn't need an extra service. Neat. 
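Spelled out, the approach being described is roughly the following sketch (registered name, message shape and poll interval are placeholders):

%% Fire-and-forget: spawn a poller that waits until the registered name is
%% back after a restart and then resends the message. It can still race with
%% a second crash, which is why a notification from the manager (the
%% await_new_neighbour idea above) is the cleaner long-term fix.
resend_when_up(Name, Msg) ->
    spawn(fun() -> wait_and_send(Name, Msg) end).

wait_and_send(Name, Msg) ->
    case whereis(Name) of
        undefined ->
            timer:sleep(50),
            wait_and_send(Name, Msg);
        Pid when is_pid(Pid) ->
            Pid ! Msg
    end.

The sending process just calls resend_when_up/2 and carries on; the spawned poller is the "extra service" in miniature, created on demand and gone as soon as the message is delivered.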
Cheers J?rgen > > What is best depends on the problem. > For egol, I think the asynchronous approach is probably better as it follows the rest of the > design. > > Cheers, > Torben > -- > Torben Hoffmann > Architect, basho.com > M: +45 25 14 05 38 > From icfp.publicity@REDACTED Sat Oct 24 01:31:44 2015 From: icfp.publicity@REDACTED (Lindsey Kuper) Date: Fri, 23 Oct 2015 23:31:44 +0000 Subject: [erlang-questions] ICFP 2016 Call for Workshop and Co-located Event Proposals Message-ID: <047d7b160197b3a7f20522ce03f1@google.com>          CALL FOR WORKSHOP AND CO-LOCATED EVENT PROPOSALS                             ICFP 2016  21st ACM SIGPLAN International Conference on Functional Programming                      September 18-24, 2016                           Nara, Japan                http://icfpconference.org/icfp2016/ The 21st ACM SIGPLAN International Conference on Functional Programming will be held in Nara, Japan on September 18-24, 2016. ICFP provides a forum for researchers and developers to hear about the latest work on the design, implementations, principles, and uses of functional programming. Proposals are invited for workshops (and other co-located events, such as tutorials) to be affiliated with ICFP 2016 and sponsored by SIGPLAN. These events should be less formal and more focused than ICFP itself, include sessions that enable interaction among the attendees, and foster the exchange of new ideas. The preference is for one-day events, but other schedules can also be considered. The workshops are scheduled to occur on September 18 (the day before ICFP) and September 22-24 (the three days after ICFP). ---------------------------------------------------------------------- Submission details  Deadline for submission:     November 21, 2015  Notification of acceptance:  December 20, 2015 Prospective organizers of workshops or other co-located events are invited to submit a completed workshop proposal form in plain text format to the ICFP 2016 workshop co-chairs (Andres Loeh and Nicolas Wu), via email to     icfp2016-workshops@REDACTED by November 21, 2015. (For proposals of co-located events other than workshops, please fill in the workshop proposal form and just leave blank any sections that do not apply.) Please note that this is a firm deadline. Organizers will be notified if their event proposal is accepted by December 20, 2015, and if successful, depending on the event, they will be asked to produce a final report after the event has taken place that is suitable for publication in SIGPLAN Notices. The proposal form is available at: http://www.icfpconference.org/icfp2016-files/icfp16-workshops-form.txt Further information about SIGPLAN sponsorship is available at: http://www.sigplan.org/Resources/Proposals/Sponsored/ ---------------------------------------------------------------------- Selection committee The proposals will be evaluated by a committee comprising the following members of the ICFP 2016 organizing committee, together with the members of the SIGPLAN executive committee.  
Workshop Co-Chair: Andres Loeh (Well-Typed LLP)  Workshop Co-Chair: Nicolas Wu (University of Bristol)  General Co-Chair: Jacques Garrigue (Nagoya University)  General Co-Chair: Gabriele Keller (University of New South Wales)  Program Chair: Eijiro Sumii (Tohoku University) ---------------------------------------------------------------------- Further information Any queries should be addressed to the workshop co-chairs (Andres Loeh and Nicolas Wu), via email to icfp2016-workshops@REDACTED -------------- next part -------------- An HTML attachment was scrubbed... URL: From shawn@REDACTED Sun Oct 25 03:32:44 2015 From: shawn@REDACTED (Shawn Debnath) Date: Sat, 24 Oct 2015 19:32:44 -0700 (PDT) Subject: [erlang-questions] epgsql help In-Reply-To: References: <92197ec9-6494-447d-a351-6a10fcde3c66@googlegroups.com> Message-ID: <83c3fce6-fadd-4923-9447-42b444793f4b@googlegroups.com> Another volunteer to help with epgsql. We are using this for our startup and would love to ensure this project stays alive and up to date. Cheers! On Saturday, October 17, 2015 at 12:58:43 AM UTC+9, Federico Carrone wrote: > > I would love to help too. I am using epgsql quite a lot :). > > On Thu, Oct 15, 2015 at 6:34 PM David Welton > wrote: > >> > I am interested in helping the project, too! >> > >> > Just need some references as to how I could be of help. >> >> Looking over the pull requests and running the tests are the basics to >> keep things turning over. "I pulled this branch, ran the tests and it >> looks good", or "I pulled this branch, there are no tests, and the >> code is really difficult to understand" - that kind of thing. >> >> > Grazie :) >> >> Grazie a te! >> -- >> David N. Welton >> >> http://www.welton.it/davidw/ >> >> http://www.dedasys.com/ >> _______________________________________________ >> erlang-questions mailing list >> erlang-q...@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anthonym@REDACTED Mon Oct 26 04:06:33 2015 From: anthonym@REDACTED (ANTHONY MOLINARO) Date: Sun, 25 Oct 2015 20:06:33 -0700 Subject: [erlang-questions] Processes stuck in ets:select_trap Message-ID: Hi, On one of my systems (running R16B-03-1) some processes sometimes get backed up. When they are backed up I've noticed they seem to be running, but the current function is ets:select_trap and they appear to be stuck in that function. I assume this function is somehow related to calling an ets:select function of some sort, but I am not sure what it means. Does anyone have any pointers, or has anyone seen this sort of behavior before? Thanks, -Anthony From max.lapshin@REDACTED Mon Oct 26 07:23:10 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Mon, 26 Oct 2015 09:23:10 +0300 Subject: [erlang-questions] Processes stuck in ets:select_trap In-Reply-To: References: Message-ID: You are running ets:select. If it is a frequent problem, consider changing ets:select to ets:lookup. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eric.pailleau@REDACTED Mon Oct 26 10:29:22 2015 From: eric.pailleau@REDACTED (PAILLEAU Eric) Date: Mon, 26 Oct 2015 10:29:22 +0100 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: References: Message-ID: <562DF272.20801@wanadoo.fr> Hi, This page should then be updated : http://www.erlang.org/community.html in order to encourage use of it, with a clear link to it. BTW, the "Contributing to Erlang/OTP" paragraph on https://github.com/erlang/otp/blob/maint/README.md is totally outdated. Regards Le 21/10/2015 17:13, Bruce Yinhe a ?crit : > Hello everyone > > We are happy to announce the issue tracker for Erlang/OTP > (http://bugs.erlang.org ). Our intention is > that the issue tracker replaces the erlang-bugs mailing list, in order > to make it easier for the community to report bugs, suggest improvements > and new features. You can start using the issue tracker today. From henrik.x.nord@REDACTED Mon Oct 26 11:43:39 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Mon, 26 Oct 2015 11:43:39 +0100 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <562DF272.20801@wanadoo.fr> References: <562DF272.20801@wanadoo.fr> Message-ID: <562E03DB.60101@ericsson.com> On 10/26/2015 10:29 AM, PAILLEAU Eric wrote: > Hi, > > This page should then be updated : http://www.erlang.org/community.html > in order to encourage use of it, with a clear link to it. > > BTW, the "Contributing to Erlang/OTP" paragraph on > https://github.com/erlang/otp/blob/maint/README.md is totally outdated. Thanks!, Will add to the todo list. > > Regards > > Le 21/10/2015 17:13, Bruce Yinhe a ?crit : >> Hello everyone >> >> We are happy to announce the issue tracker for Erlang/OTP >> (http://bugs.erlang.org ). Our intention is >> that the issue tracker replaces the erlang-bugs mailing list, in order >> to make it easier for the community to report bugs, suggest improvements >> and new features. You can start using the issue tracker today. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From jesper.louis.andersen@REDACTED Mon Oct 26 12:50:16 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 26 Oct 2015 12:50:16 +0100 Subject: [erlang-questions] Processes stuck in ets:select_trap In-Reply-To: References: Message-ID: On Mon, Oct 26, 2015 at 4:06 AM, ANTHONY MOLINARO < anthonym@REDACTED> wrote: > When they are backed up I?ve noticed they seem to be running but the > current function is gets:select_trap and they appear to be stuck in that > function. The select_trap function is a (fake) function used to "break up" a long running select so you don't end up monopolizing a scheduler. So this just means you have a select which takes a long time to run, presumably because it is running a full-table-scan. -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Mon Oct 26 13:40:14 2015 From: max.lapshin@REDACTED (Max Lapshin) Date: Mon, 26 Oct 2015 15:40:14 +0300 Subject: [erlang-questions] Processes stuck in ets:select_trap In-Reply-To: References: Message-ID: Correct me if I'm wrong, but ets:select is always a full table scan. -------------- next part -------------- An HTML attachment was scrubbed... 
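Two quick checks for anyone hitting this, sketched below (table, key and function names are made up): look at what the busy process is actually doing, and, when the query really is "fetch by key", use ets:lookup so no scan is involved.

%% Inspect a process that looks stuck (Pid obtained from etop, observer,
%% a registered name, etc.):
inspect(Pid) ->
    erlang:process_info(Pid, [current_function, current_stacktrace, message_queue_len]).

%% A read on the key itself does not scan the table, unlike a select whose
%% match spec only filters on non-key fields:
cached_value(Tab, Key) ->
    ets:lookup(Tab, Key).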
URL: From jesper.louis.andersen@REDACTED Mon Oct 26 13:50:27 2015 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 26 Oct 2015 13:50:27 +0100 Subject: [erlang-questions] Processes stuck in ets:select_trap In-Reply-To: References: Message-ID: It depends on how your filter function works: Create a table: 2> ets:new(foo, [named_table, {keypos, 1}]). foo Fill the table so we can measure lookup differences in time: 3> ets:insert(foo, [{K, K+1} || K <- lists:seq(1, 1000000)]). true Create a point-query match spec: 4> MS = ets:fun2ms(fun({3, V}) -> V end). [{{3,'$1'},[],['$1']}] Run a query with this matchspec: 5> timer:tc(fun() -> ets:select(foo, MS) end). {25,[4]} Create another match-spec where you don't match directly on the key: 6> MS2 = ets:fun2ms(fun({K, V}) when K == 3 -> V end). [{{'$1','$2'},[{'==','$1',3}],['$2']}] Now time this one: 7> timer:tc(fun() -> ets:select(foo, MS2) end). {177663,[4]} As you can see, the MS2 variant requests elements of the form {$1, $2} and then proceeds to filter them via $1 == 3, which is much more expensive than the first one which requests data of the form {3, $2} and then needs no filtering. On Mon, Oct 26, 2015 at 1:40 PM, Max Lapshin wrote: > Correct me if I'm wrong, but ets:select is always a full table scan. > > -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From heinz@REDACTED Mon Oct 26 14:33:58 2015 From: heinz@REDACTED (Heinz Nikolaus Gies) Date: Mon, 26 Oct 2015 14:33:58 +0100 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <5277982.P3SBmGgZjW@changa> References: <2766197.OAMIYguNob@changa> <5277982.P3SBmGgZjW@changa> Message-ID: <76DE954A-6B6F-4C82-95F8-E1D06D7895BC@licenser.net> Atlasssian offers free hosted versions of Jira to Open Source projects, so yes they do offer a emergency number on the back of the bag ;) - only downside is no custom domains (either for OSS or non OSS hosted Jira). Then again it works really well :) have been using it for FiFo for quite a while. > On Oct 22, 2015, at 12:38, zxq9 wrote: > > On 2015?10?22? ??? 11:12:57 you wrote: >> On Thu, Oct 22, 2015 at 12:51 AM, zxq9 wrote: >>> Light a candle for the poor sysops who have to maintain Atlassian >>> installations. >>> >>> I've had to. :-/ >> >> Given that Atlassian was kind enough to grant a license, surely there's >> the option of using a hosted installation. > > Given that crack dealers are kind enough to hand out free samples, surely > there's a free medical emergency number on the back of the bag. > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From henrik.x.nord@REDACTED Mon Oct 26 14:45:43 2015 From: henrik.x.nord@REDACTED (Henrik Nord X) Date: Mon, 26 Oct 2015 14:45:43 +0100 Subject: [erlang-questions] [erlang-bugs] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <5627E9CB.1080107@ninenines.eu> References: <5627E0CE.7020006@cs.ntua.gr> <5627E9CB.1080107@ninenines.eu> Message-ID: <562E2E87.40705@ericsson.com> On 10/21/2015 09:38 PM, Lo?c Hoguin wrote: > Would be good to at least be able to log in with your Github account, > if nothing else. 
Apparently it can use OpenID for log in, but for some > reason it's restricted to Erlang Central accounts. Adding Github would > make it easier to contributors since they already have an account on > Github. > This should be fixed. From eric.pailleau@REDACTED Mon Oct 26 18:55:49 2015 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Mon, 26 Oct 2015 18:55:49 +0100 Subject: [erlang-questions] [ANN] Announcing Erlang Issue Tracker bugs.erlang.org In-Reply-To: <562E03DB.60101@ericsson.com> Message-ID: Hi, I forgot to say that "what's cooking in Otp" was a great idea, as well the macro planning of upcoming releases. Unfortunately it seems that it is given up, or at least not regularly done. Mho this could be done by the community manager ? Le?26 oct. 2015 11:43 AM, Henrik Nord X a ?crit?: > > > > On 10/26/2015 10:29 AM, PAILLEAU Eric wrote: > > Hi, > > > > This page should then be updated : http://www.erlang.org/community.html > > in order to encourage use of it, with a clear link to it. > > > > > BTW, the "Contributing to Erlang/OTP" paragraph on > > https://github.com/erlang/otp/blob/maint/README.md? is totally outdated. > Thanks!, Will add to the todo list. > > > > Regards > > > > Le 21/10/2015 17:13, Bruce Yinhe a ?crit : > >> Hello everyone > >> > >> We are happy to announce the issue tracker for Erlang/OTP > >> (http://bugs.erlang.org ). Our intention is > >> that the issue tracker replaces the erlang-bugs mailing list, in order > >> to make it easier for the community to report bugs, suggest improvements > >> and new features. You can start using the issue tracker today. > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > From marc@REDACTED Tue Oct 27 15:43:16 2015 From: marc@REDACTED (Marc Worrell) Date: Tue, 27 Oct 2015 15:43:16 +0100 Subject: [erlang-questions] [ANN] Zotonic 0.13.5 released Message-ID: Zotonic is the Erlang Content Management System and Framework. We have released version 0.13.5. This is a maintenance release of Zotonic 0.13 Main changes are: ? Updates to the bundled tinymce and code view plugins ? Fulltext search fixes, search terms could be stemmed multiple times ? Many fixes to the admin css and html ? Fixes an ACL user groups problem where the sudo permissions didn't reflect the admin user ? Fixes a problem with the template compiler where a template could be compiled multiple times ? Translation option to force the default language of a site ? Fixes for the comet and postback controllers where push data was not sent with postback requests ? New admin filter 'temporary_rsc' ? Fix for a problem in mod_video where a video could be rendered with a color space that is incompatible with QuickTime See the full release notes at http://zotonic.com/docs/latest/dev/releasenotes/rel_0.13.5.html Download here https://github.com/zotonic/zotonic/releases/tag/release-0.13.5 Thanks to all who have been contributing to Zotonic! The Zotonic Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From ciprian.craciun@REDACTED Tue Oct 27 19:19:16 2015 From: ciprian.craciun@REDACTED (Ciprian Dorin Craciun) Date: Tue, 27 Oct 2015 20:19:16 +0200 Subject: [erlang-questions] Why Erlang spawns sub-processes in their own process session / group? Message-ID: Today I observed that the processes spawned by RabbitMQ (like for example EPMD in non-detached mode, system supervisor processes like disk, etc.) 
are put in different process sessions / groups than the "parent" Erlang VM process. Thus I tracked down the `setgrp` / `setsid` calls to one of the two places: * `erts/emulator/sys/unix/erl_child_setup.c` (`main`) -- which is my understanding is a process launcher that bootstraps the process to be `execve`-ed; * `erts/emulator/sys/unix/sys.c` (`spawn_start`) -- which I gather is used to spawn new port drivers; * (a few other places, but there the calls were made when the process has daemonized in the background;) I wonder why such a decision to put newly spawned processes in new sessions / groups than the parent Erlang VM? What is the advantage? Because I definitely can find disadvantages, like for example being quite hard to kill an entire Erlang-based application, together with all the sub-processes by using the `kill` syscall with a group PID instead of a process PID. Thanks, Ciprian. From yashgt@REDACTED Wed Oct 28 12:48:58 2015 From: yashgt@REDACTED (Yash Ganthe) Date: Wed, 28 Oct 2015 04:48:58 -0700 (PDT) Subject: [erlang-questions] ODBC interface for mnesia Message-ID: <7acd4ba1-497e-4094-af1b-7eb2484b4597@googlegroups.com> Hi, I am aware mnesia is best accessed using QLC. However, I would like to access it using ODBC because the programming library I am using understands only ODBC. I intend to perform very basic operations on the DB. Is there any project that offers an ODBC interface to Mnesia? Thanks, Yash -------------- next part -------------- An HTML attachment was scrubbed... URL: From hm@REDACTED Wed Oct 28 16:16:20 2015 From: hm@REDACTED (=?UTF-8?Q?H=C3=A5kan_Mattsson?=) Date: Wed, 28 Oct 2015 16:16:20 +0100 Subject: [erlang-questions] ODBC interface for mnesia Message-ID: http://www.erlang.se/publications/xjobb/sql_compiler_report.pdf /H?kan On Wed, Oct 28, 2015 at 12:48 PM, Yash Ganthe wrote: > Hi, > > I am aware mnesia is best accessed using QLC. However, I would like to > access it using ODBC because the programming library I am using understands > only ODBC. I intend to perform very basic operations on the DB. > > Is there any project that offers an ODBC interface to Mnesia? > > Thanks, > Yash > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus@REDACTED Wed Oct 28 17:31:38 2015 From: magnus@REDACTED (Magnus Henoch) Date: Wed, 28 Oct 2015 16:31:38 +0000 Subject: [erlang-questions] TLS distribution: why proxy? Message-ID: Hi all, I'm looking into the code for running the Erlang distribution protocol over TLS, as described in http://www.erlang.org/doc/apps/ssl/ssl_distribution.html . I've noticed that the code uses a proxy: for each node, there is one TLS-encrypted connection to the remote node, and one non-encrypted connection over localhost, all managed by a proxy process that just receives data on the non-encrypted connection and sends it to the TLS connection and vice versa. To me it would seem more rational to use a TLS connection directly, so surely there must be a good reason for things being done this way, but I haven't found any, neither in comments nor in the version history. Does anyone know why the TLS distribution is set up in this way? Regards, Magnus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bjorn@REDACTED Thu Oct 29 14:08:35 2015 From: bjorn@REDACTED (=?UTF-8?Q?Bj=C3=B6rn_Gustavsson?=) Date: Thu, 29 Oct 2015 14:08:35 +0100 Subject: [erlang-questions] Fourth draft of EEP 44 - Additional preprocessor directives Message-ID: Here is the fourth draft with more clarifications. There is also a minor correction in the reference implementation. http://www.erlang.org/eeps/eep-0044.html https://github.com/erlang/eep/blob/master/eeps/eep-0044.md /Bj?rn -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From bjorn@REDACTED Thu Oct 29 14:11:16 2015 From: bjorn@REDACTED (=?UTF-8?Q?Bj=C3=B6rn_Gustavsson?=) Date: Thu, 29 Oct 2015 14:11:16 +0100 Subject: [erlang-questions] New EEP 45 - FUNCTION macro Message-ID: There is a new EEP 45 that proposes a new FUNCTION macro in the preprocessor. http://www.erlang.org/eeps/eep-0045.html https://github.com/erlang/eep/blob/master/eeps/eep-0045.md /Bj?rn -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From matwey.kornilov@REDACTED Thu Oct 29 15:07:34 2015 From: matwey.kornilov@REDACTED (Matwey V. Kornilov) Date: Thu, 29 Oct 2015 17:07:34 +0300 Subject: [erlang-questions] otp 18.1.3: compilation failed: error: 'slocked' undeclared (first use in this function) Message-ID: Hello, I am trying to compile otp 18.1.3 from sources and get the following: [ 269s] CC obj/x86_64-suse-linux-gnu/opt/smp/erl_process_dict.o [ 270s] beam/erl_process.c: In function 'fetch_sys_task': [ 270s] beam/erl_process.c:10015:13: error: 'slocked' undeclared (first use in this function) [ 270s] if (slocked) [ 270s] ^ [ 270s] beam/erl_process.c:10015:13: note: each undeclared identifier is reported only once for each function it appears in [ 270s] beam/erl_process.c:10016:34: error: 'p' undeclared (first use in this function) [ 270s] erts_smp_proc_unlock(p, ERTS_PROC_LOCK_STATUS); [ 270s] ^ [ 270s] CC obj/x86_64-suse-linux-gnu/opt/smp/erl_process_lock.o [ 270s] x86_64-suse-linux-gnu/Makefile:676: recipe for target 'obj/x86_64-suse-linux-gnu/opt/smp/erl_process.o' failed [ 270s] make[3]: *** [obj/x86_64-suse-linux-gnu/opt/smp/erl_process.o] Error 1 My gcc version is 4.8. Configure options was the following: ./configure --host=x86_64-suse-linux-gnu --build=x86_64-suse-linux-gnu --program-prefix= --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/lib --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --disable-dependency-tracking --enable-systemd --with-ssl=/usr --enable-threads --enable-smp-support --enable-kernel-poll --enable-hipe --enable-shared-zlib From matwey.kornilov@REDACTED Thu Oct 29 15:16:17 2015 From: matwey.kornilov@REDACTED (Matwey V. Kornilov) Date: Thu, 29 Oct 2015 17:16:17 +0300 Subject: [erlang-questions] otp 18.1.3: compilation failed: error: 'slocked' undeclared (first use in this function) In-Reply-To: References: Message-ID: I am sorry, forget it. 29.10.2015 17:07, Matwey V. 
Kornilov ?????: > Hello, > > I am trying to compile otp 18.1.3 from sources and get the following: > > [ 269s] CC obj/x86_64-suse-linux-gnu/opt/smp/erl_process_dict.o > [ 270s] beam/erl_process.c: In function 'fetch_sys_task': > [ 270s] beam/erl_process.c:10015:13: error: 'slocked' undeclared (first > use in this function) > [ 270s] if (slocked) > [ 270s] ^ > [ 270s] beam/erl_process.c:10015:13: note: each undeclared identifier > is reported only once for each function it appears in > [ 270s] beam/erl_process.c:10016:34: error: 'p' undeclared (first use > in this function) > [ 270s] erts_smp_proc_unlock(p, ERTS_PROC_LOCK_STATUS); > [ 270s] ^ > [ 270s] CC obj/x86_64-suse-linux-gnu/opt/smp/erl_process_lock.o > [ 270s] x86_64-suse-linux-gnu/Makefile:676: recipe for target > 'obj/x86_64-suse-linux-gnu/opt/smp/erl_process.o' failed > [ 270s] make[3]: *** [obj/x86_64-suse-linux-gnu/opt/smp/erl_process.o] > Error 1 > > > My gcc version is 4.8. Configure options was the following: > > ./configure --host=x86_64-suse-linux-gnu --build=x86_64-suse-linux-gnu > --program-prefix= --disable-dependency-tracking --prefix=/usr > --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin > --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include > --libdir=/usr/lib64 --libexecdir=/usr/lib --localstatedir=/var > --sharedstatedir=/usr/com --mandir=/usr/share/man > --infodir=/usr/share/info --disable-dependency-tracking --enable-systemd > --with-ssl=/usr --enable-threads --enable-smp-support > --enable-kernel-poll --enable-hipe --enable-shared-zlib > From ingela.andin@REDACTED Thu Oct 29 15:26:56 2015 From: ingela.andin@REDACTED (Ingela Andin) Date: Thu, 29 Oct 2015 15:26:56 +0100 Subject: [erlang-questions] TLS distribution: why proxy? In-Reply-To: References: Message-ID: Hi! The reason it is done the way it is, is because that is the easiest way to do it with the existing kernel API. . We also had to make some special hooks to the kernel supervisor so that Erlang ssl can be run before Erlang applications can be started. Regards Ingela Erlang/OTP Team - Ericsson AB 2015-10-28 17:31 GMT+01:00 Magnus Henoch : > Hi all, > > I'm looking into the code for running the Erlang distribution protocol > over TLS, as described in > http://www.erlang.org/doc/apps/ssl/ssl_distribution.html . I've noticed > that the code uses a proxy: for each node, there is one TLS-encrypted > connection to the remote node, and one non-encrypted connection over > localhost, all managed by a proxy process that just receives data on the > non-encrypted connection and sends it to the TLS connection and vice versa. > > To me it would seem more rational to use a TLS connection directly, so > surely there must be a good reason for things being done this way, but I > haven't found any, neither in comments nor in the version history. Does > anyone know why the TLS distribution is set up in this way? > > Regards, > Magnus > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrtndimitrov@REDACTED Thu Oct 29 16:33:06 2015 From: mrtndimitrov@REDACTED (Martin Koroudjiev) Date: Thu, 29 Oct 2015 17:33:06 +0200 Subject: [erlang-questions] pattern matching with maps in function heads Message-ID: <56323C32.6090809@gmail.com> Hello, How can I do pattern matching with maps in function heads? I tried -module(m1). -export([main/0]). 
main() -> M = #{a => 1, b => 2, c => 3}, io:format("b: ~p~n", get_attr(b, M)), io:format("d: ~p~n", get_attr(d, M)). get_attr(Attr, #{Attr := Val} = Map) -> Val; get_attr(_Attr, _M) -> not_found. but I get compile error: m1.erl:10: variable 'Attr' is unbound Thanks in advance, Martin From a.shneyderman@REDACTED Thu Oct 29 17:21:34 2015 From: a.shneyderman@REDACTED (Alex Shneyderman) Date: Thu, 29 Oct 2015 12:21:34 -0400 Subject: [erlang-questions] pattern matching with maps in function heads In-Reply-To: <56323C32.6090809@gmail.com> References: <56323C32.6090809@gmail.com> Message-ID: Attr might have multiple values. io:format("b: ~p~n", maps:get(b, M)), io:format("d: ~p~n", maps:get(d, M)). is probably what you want. It is discussed briefly in LYSE book: "You also cannot do matching of a key by value (#{X := val} = Map) because there could be multiple keys with the same value." Cheers, Alex. On Thu, Oct 29, 2015 at 11:33 AM, Martin Koroudjiev wrote: > Hello, > > How can I do pattern matching with maps in function heads? I tried > > -module(m1). > > -export([main/0]). > > main() -> > M = #{a => 1, b => 2, c => 3}, > io:format("b: ~p~n", get_attr(b, M)), > io:format("d: ~p~n", get_attr(d, M)). > > get_attr(Attr, #{Attr := Val} = Map) -> > Val; > get_attr(_Attr, _M) -> > not_found. > > but I get compile error: m1.erl:10: variable 'Attr' is unbound > > Thanks in advance, > Martin > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From mrtndimitrov@REDACTED Thu Oct 29 17:57:19 2015 From: mrtndimitrov@REDACTED (Martin Koroudjiev) Date: Thu, 29 Oct 2015 18:57:19 +0200 Subject: [erlang-questions] pattern matching with maps in function heads In-Reply-To: References: <56323C32.6090809@gmail.com> Message-ID: <56324FEF.3000003@gmail.com> Hello and thank, Yes, I know about the maps module but was trying to explore the map syntax and use pattern matching. I, actually, am not trying to match on the value but on the key name. I thought Attrib will take the value of the first argument and this value will be used as key name in the map. Regards, Martin On 10/29/2015 6:21 PM, Alex Shneyderman wrote: > Attr might have multiple values. > > io:format("b: ~p~n", maps:get(b, M)), > io:format("d: ~p~n", maps:get(d, M)). > > is probably what you want. > > It is discussed briefly in LYSE book: > > "You also cannot do matching of a key by value (#{X := val} = Map) > because there could be multiple keys with the same value." > > Cheers, > Alex. > > On Thu, Oct 29, 2015 at 11:33 AM, Martin Koroudjiev > wrote: >> Hello, >> >> How can I do pattern matching with maps in function heads? I tried >> >> -module(m1). >> >> -export([main/0]). >> >> main() -> >> M = #{a => 1, b => 2, c => 3}, >> io:format("b: ~p~n", get_attr(b, M)), >> io:format("d: ~p~n", get_attr(d, M)). >> >> get_attr(Attr, #{Attr := Val} = Map) -> >> Val; >> get_attr(_Attr, _M) -> >> not_found. 
>> >> but I get compile error: m1.erl:10: variable 'Attr' is unbound >> >> Thanks in advance, >> Martin >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions From gomoripeti@REDACTED Thu Oct 29 19:38:32 2015 From: gomoripeti@REDACTED (=?UTF-8?B?UGV0aSBHw7Ztw7ZyaQ==?=) Date: Thu, 29 Oct 2015 19:38:32 +0100 Subject: [erlang-questions] pattern matching with maps in function heads In-Reply-To: <56324FEF.3000003@gmail.com> References: <56323C32.6090809@gmail.com> <56324FEF.3000003@gmail.com> Message-ID: Hello Martin, Alex This is interesting as this works get_attr(Attr, Map) -> #{Attr := Val} = Map, Val. It is kind of documented that only matching literal keys in function heads are supported: http://www.erlang.org/doc/reference_manual/expressions.html#id81375 under Matching Syntax. I wonder how complicated it would be to support the syntax in question. On Thu, Oct 29, 2015 at 5:57 PM, Martin Koroudjiev wrote: > Hello and thank, > > Yes, I know about the maps module but was trying to explore the map > syntax and use pattern matching. I, actually, am not trying to match on > the value but on the key name. I thought Attrib will take the value of > the first argument and this value will be used as key name in the map. > > Regards, > Martin > > On 10/29/2015 6:21 PM, Alex Shneyderman wrote: > > Attr might have multiple values. > > > > io:format("b: ~p~n", maps:get(b, M)), > > io:format("d: ~p~n", maps:get(d, M)). > > > > is probably what you want. > > > > It is discussed briefly in LYSE book: > > > > "You also cannot do matching of a key by value (#{X := val} = Map) > > because there could be multiple keys with the same value." > > > > Cheers, > > Alex. > > > > On Thu, Oct 29, 2015 at 11:33 AM, Martin Koroudjiev > > wrote: > >> Hello, > >> > >> How can I do pattern matching with maps in function heads? I tried > >> > >> -module(m1). > >> > >> -export([main/0]). > >> > >> main() -> > >> M = #{a => 1, b => 2, c => 3}, > >> io:format("b: ~p~n", get_attr(b, M)), > >> io:format("d: ~p~n", get_attr(d, M)). > >> > >> get_attr(Attr, #{Attr := Val} = Map) -> > >> Val; > >> get_attr(_Attr, _M) -> > >> not_found. > >> > >> but I get compile error: m1.erl:10: variable 'Attr' is unbound > >> > >> Thanks in advance, > >> Martin > >> > >> _______________________________________________ > >> erlang-questions mailing list > >> erlang-questions@REDACTED > >> http://erlang.org/mailman/listinfo/erlang-questions > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxq9@REDACTED Fri Oct 30 00:09:14 2015 From: zxq9@REDACTED (zxq9) Date: Fri, 30 Oct 2015 08:09:14 +0900 Subject: [erlang-questions] pattern matching with maps in function heads In-Reply-To: <56323C32.6090809@gmail.com> References: <56323C32.6090809@gmail.com> Message-ID: <6457984.Ldj4ujshpG@changa> On 2015?10?29? ??? 17:33:06 Martin Koroudjiev wrote: > Hello, > > How can I do pattern matching with maps in function heads? I tried > > -module(m1). > > -export([main/0]). > > main() -> > M = #{a => 1, b => 2, c => 3}, > io:format("b: ~p~n", get_attr(b, M)), > io:format("d: ~p~n", get_attr(d, M)). > > get_attr(Attr, #{Attr := Val} = Map) -> > Val; > get_attr(_Attr, _M) -> > not_found. 
> > but I get compile error: m1.erl:10: variable 'Attr' is unbound The docs say "Matching of literals as keys are allowed in function heads", but nothing else about this, so it is safe to assume this is not supported -- and the error message confirms it. Technically I imagine this should be possible to implement. That said, there isn't any reason to support this I can think of other than to implement a function like the one above, which would simply be a reimplementation of a feature already supported two different ways (map syntax and the maps module). In practice, the calling function will have already pulled the value from the map, the programmer will know the desired key as a literal value, and/or the called function will pull the value from the map. In all cases extraction of the value will happen much more naturally on either side of the function call, not within the function head. It is a bit convoluted to expect it to work otherwise -- it fattens your function head and adds an argument to it for no reason. Think of it this way: double-matches within a function head are used most often to *assert* that something is true, that a particular match should indicate that a particular clause should be the one executed. In the case above the only thing that can be asserted is whether or not a particular key is in the given map -- and that is something there are already functions for. I think the current idiom is clearer for the case where a clause is selected based on whether a key exists in the course of processing: foo(Bar, SomeMap) -> % ... stuff some_fun(maps:get(Bar, SomeMap, undefined)), % ... more stuff some_fun("some literal value") -> next_thing(); some_fun("a different literal") -> alt_next_thing(); some_fun(undefined) -> default_oops_thing(). The above is such a simplified example, that I can't actually imagine writing that exact sort of code. But I *can* imagine writing something a bit more interesting, like: foo(Bar, SomeMap) -> % ... stuff some_fun(Bar, maps:get(Bar, SomeMap, undefined)), % ... more stuff some_fun(Bar, {notice, Message}) -> send_user(Bar, Message); some_fun(Bar, {system, Message}) -> handle_sys(Bar, Message); some_fun(Bar, undefined) -> log_missing(Bar). I like this more than: foo(Bar, SomeMap) -> % stuff some_fun(Bar, SomeMap), % stuff some_fun(Bar, #{Bar := {notice, Message}) -> ...; some_fun(Bar, #{Bar := {system, Message}) -> ...; some_fun(Bar, _) -> ... This just feels too noisy. As we've seen many times on the ML before, though, it can be difficult to extract concrete examples from isolated code examples like this. Maybe someone else can think of a case where it would be really useful to match variable keys in function heads, I just can't think of any case where I would find it clearer than the current way of doing things (also I don't really *want* there to be 10 different ways to achieve the same effect -- part of why I like Erlang is that it isn't full of C++-style syntactic hairballs; but that's me). From zxq9@REDACTED Fri Oct 30 00:26:03 2015 From: zxq9@REDACTED (zxq9) Date: Fri, 30 Oct 2015 08:26:03 +0900 Subject: [erlang-questions] pattern matching with maps in function heads In-Reply-To: <6457984.Ldj4ujshpG@changa> References: <56323C32.6090809@gmail.com> <6457984.Ldj4ujshpG@changa> Message-ID: <3068759.Cf5S8XNbpP@changa> On 2015?10?30? ??? 08:09:14 zxq9 wrote: > On 2015?10?29? ??? 17:33:06 Martin Koroudjiev wrote: > > get_attr(Attr, #{Attr := Val} = Map) -> > > Val; > > get_attr(_Attr, _M) -> > > not_found. 
> > > > but I get compile error: m1.erl:10: variable 'Attr' is unbound ... > some_fun(Bar, {notice, Message}) -> send_user(Bar, Message); > some_fun(Bar, {system, Message}) -> handle_sys(Bar, Message); > some_fun(Bar, undefined) -> log_missing(Bar). > VS > some_fun(Bar, #{Bar := {notice, Message}) -> ...; > some_fun(Bar, #{Bar := {system, Message}) -> ...; > some_fun(Bar, _) -> ... > > This just feels too noisy. I completely forgot to mention the other meaning of "noise" aside from syntax. In the first case some_fun/2 has a slimly constrained typespec, even without knowing much about it: -spec some_fun(Bar, Message) -> ok when Bar :: term(), Message :: {system, term()} | {notice, term()} | undefined. In the second version, though, nothing can be known about it other than it accepts a map -- and passing entire maps pretty much destroys the utility of dialyzer in many cases. I know that doesn't matter to a lot of people, but I've found it useful. Early on I might pass a whole map in to lots of functions, but once I've got a good understanding of what a project is really doing internally I like to roll that back and be more deliberate when all I actually want is a single value from the map used within a function -- then dialyzer can sometimes becomes much more useful. -Craig From comptekki@REDACTED Fri Oct 30 20:00:03 2015 From: comptekki@REDACTED (Wes James) Date: Fri, 30 Oct 2015 13:00:03 -0600 Subject: [erlang-questions] framework for building interface Message-ID: With my esysman app ( https://github.com/comptekki/esysman ) I use manual piecing of erlang binary/text chunks in the cowboy application that uses a .hrl for config info to building the interface. Anyone know of an erlang framework that would help in building web pages where rooms (one tab on the page then several other tabs for the other rooms) could be accessed with a click on a tabbed menu at the top and each room has computers laid out on the screen. Right now, like I said, I'm doing that in loops inside the cowboy app. I'd like cowboy to serve the interface, but some other framework to help build the interface, since I'm not that great at the html5 stuff. Would nitrogen work? html5 framework for erlang - I guess you'd call it?? Thanks, -wes -------------- next part -------------- An HTML attachment was scrubbed... URL: From gumm@REDACTED Fri Oct 30 20:55:31 2015 From: gumm@REDACTED (Jesse Gumm) Date: Fri, 30 Oct 2015 14:55:31 -0500 Subject: [erlang-questions] framework for building interface In-Reply-To: References: Message-ID: Oops, forgot to reply all. ---------- Forwarded message ---------- From: "Jesse Gumm" Date: Oct 30, 2015 2:54 PM Subject: Re: [erlang-questions] framework for building interface To: "Wes James" Cc: Nitrogen, being a largely front-end framework, certainly sounds like it'd be a fit. Showing and hiding elements on the page is a matter of calling wf:wire(#show{}) or wf:wire(#hide{}) in a postback event. You can see a simple version of this here: http://nitrogenproject.com/demos/effects Feel free to plunk around with it, and if you have any questions, let me know. -Jesse P.S. Sorry for the terseness, I'm typing on my phone. -- Jesse Gumm Owner, Sigma Star Systems 414.940.4866 || sigma-star.com || @jessegumm On Oct 30, 2015 2:00 PM, "Wes James" wrote: > With my esysman app ( https://github.com/comptekki/esysman ) I use manual > piecing of erlang binary/text chunks in the cowboy application that uses a > .hrl for config info to building the interface. 
> > Anyone know of an erlang framework that would help in building web pages > where rooms (one tab on the page then several other tabs for the other > rooms) could be accessed with a click on a tabbed menu at the top and each > room has computers laid out on the screen. Right now, like I said, I'm > doing that in loops inside the cowboy app. I'd like cowboy to serve the > interface, but some other framework to help build the interface, since I'm > not that great at the html5 stuff. Would nitrogen work? html5 framework > for erlang - I guess you'd call it?? > > Thanks, > > -wes > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From comptekki@REDACTED Fri Oct 30 21:10:30 2015 From: comptekki@REDACTED (Wes James) Date: Fri, 30 Oct 2015 14:10:30 -0600 Subject: [erlang-questions] framework for building interface In-Reply-To: References: Message-ID: Thanks. I'll take a look. -wes On Fri, Oct 30, 2015 at 1:54 PM, Jesse Gumm wrote: > Nitrogen, being a largely front-end framework, certainly sounds like it'd > be a fit. > > Showing and hiding elements on the page is a matter of calling > wf:wire(#show{}) or wf:wire(#hide{}) in a postback event. You can see a > simple version of this here: http://nitrogenproject.com/demos/effects > > Feel free to plunk around with it, and if you have any questions, let me > know. > > -Jesse > > P.S. Sorry for the terseness, I'm typing on my phone. > > -- > Jesse Gumm > Owner, Sigma Star Systems > 414.940.4866 || sigma-star.com || @jessegumm > On Oct 30, 2015 2:00 PM, "Wes James" wrote: > >> With my esysman app ( https://github.com/comptekki/esysman ) I use >> manual piecing of erlang binary/text chunks in the cowboy application that >> uses a .hrl for config info to building the interface. >> >> Anyone know of an erlang framework that would help in building web pages >> where rooms (one tab on the page then several other tabs for the other >> rooms) could be accessed with a click on a tabbed menu at the top and each >> room has computers laid out on the screen. Right now, like I said, I'm >> doing that in loops inside the cowboy app. I'd like cowboy to serve the >> interface, but some other framework to help build the interface, since I'm >> not that great at the html5 stuff. Would nitrogen work? html5 framework >> for erlang - I guess you'd call it?? >> >> Thanks, >> >> -wes >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL:
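As a rough illustration of the suggestion in this thread, a tabbed "rooms" page could be sketched in Nitrogen as below. This assumes a stock Nitrogen setup; the include path, template file, element names and machines_for/1 helper are placeholders for illustration and are not taken from esysman.

-module(rooms).
-include_lib("nitrogen_core/include/wf.hrl").
-export([main/0, body/0, event/1]).

main() -> #template{file="./site/templates/bare.html"}.

body() ->
    Rooms = [room1, room2, room3],
    %% one button per room acts as the "tab"; one panel per room holds its machines
    [[#button{text=atom_to_list(R), postback={show, R}} || R <- Rooms],
     [#panel{id=R, style="display:none;", body=machines_for(R)} || R <- Rooms]].

event({show, Room}) ->
    %% hide every room panel, then reveal the one whose tab was clicked
    [wf:wire(R, #hide{}) || R <- [room1, room2, room3]],
    wf:wire(Room, #show{}).

machines_for(Room) ->
    %% in esysman this is where the per-room machine layout would be built
    io_lib:format("machines in ~p go here", [Room]).

The postback/event round trip replaces the hand-built HTML loops: cowboy (or the bundled server) still serves the page, while the tab switching is just wiring #show{}/#hide{} actions as in the demo linked above.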