From ruel@REDACTED Sat Oct 1 12:37:12 2016 From: ruel@REDACTED (Pagayon, Ruel) Date: Sat, 1 Oct 2016 18:37:12 +0800 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? Message-ID: Hi everyone, I'm just wondering (assuming the keys of my maps in my application are not dynamically generated): 1. What is the most optimal key type for maps? 2. If there is little to no effect in performance (or resources in general), as a convention, which is the best to use? Thank you in advance for your responses. Cheers, Ruel -------------- next part -------------- An HTML attachment was scrubbed... URL: From uniaika@REDACTED Sat Oct 1 12:39:49 2016 From: uniaika@REDACTED (Uniaika) Date: Sat, 1 Oct 2016 12:39:49 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: Maybe this blog post (originally for Elixir though) will help you: https://engineering.appcues.com/2016/02/02/too-many-dicts.html TL;DR > Use string-keyed maps for data which has just arrived from an > external source. > Convert external data to structs as soon as possible. > Use structs almost everywhere else in your program. > Use other atom-keyed maps sparingly. On 10/01/2016 12:37 PM, Pagayon, Ruel wrote: > Hi everyone, > > I'm just wondering (assuming the keys of my maps in my application are > not dynamically generated): > > 1. What is the most optimal key type for maps? > 2. If there is little to no effect in performance (or resources in > general), as a convention, which is the best to use? > > Thank you in advance for your responses. > > Cheers, > Ruel > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From vans_163@REDACTED Sat Oct 1 18:39:49 2016 From: vans_163@REDACTED (Vans S) Date: Sat, 1 Oct 2016 16:39:49 +0000 (UTC) Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: <2053734734.7935255.1475339989311@mail.yahoo.com> That is bad design and advice IMO. Structs in Elixir define object orientation; functional programming is about data, everything is data. The tuple is a core building block. When you define your application using structures, you have just added an extra layer of inference. Now in most cases that extra layer will just get in your way, providing no reasonable benefit. One of the biggest enlightenments and problems Erlang developers learn early on is the record. First, the record is pretty much a Struct in Elixir, with some insignificant differences. Now most developers would do this: define rigid records like 'User', 'Location', 'Job' with default values and start writing expressions to work with them. Before the developer knows it, the record is being used throughout multiple modules across the entire code base. Everything is unit tested and static analysis reports the code is 100% excellent, not a single flaw. Now the developer's code is running in production and it's working great! But now a problem comes along: the job the code does has changed, storing the email_verification_token inside the 'User' record is no longer valid, and a new record is created called 'Validation' that houses email_verification_token with SMS tokens and other validations. Sure, no problem, well-defined structure is awesome!
The entire code base was written rigidly following the spec defined by the records (Struct); as soon as we want to make a change, multiple modules need to be rewritten. Now this is not a problem: a day can be spent to rewrite/replace 20 modules that use that record. Done, rewritten! Now the developer's code is running in production and it's failing :( Turns out that by simply replacing text in 20 modules, something was missed that is producing undefined behavior now. The project is rolled back in production to the previous version and the entire development process starts all over, trying to figure out where the bug is that was introduced from the simple record change. TL;DR: Never rely on records (Elixir Structs) across multiple modules; always write expressions that manipulate DATA, never write expressions that manipulate OBJECTS (OOP definition). On the question of map keys: anything used as a key is fine, there are no limitations. Maps offer great flexibility and there is no one right way to use them. An example of a useful map key: DataChannelLookup = #{} maps:put({peer1, peer2}, Channel, DataChannelLookup) maps:put({peer2, peer1}, Channel, DataChannelLookup) maps:put({peer2, peer3}, Channel2, DataChannelLookup) maps:put({peer3, peer2}, Channel2, DataChannelLookup) Channel2 = maps:get({peer2, peer3}, DataChannelLookup) On Saturday, October 1, 2016 7:52 AM, Uniaika wrote: Maybe this blog post (originally for Elixir though) will help you: https://engineering.appcues.com/2016/02/02/too-many-dicts.html TL;DR > Use string-keyed maps for data which has just arrived from an > external source. > Convert external data to structs as soon as possible. > Use structs almost everywhere else in your program. > Use other atom-keyed maps sparingly. On 10/01/2016 12:37 PM, Pagayon, Ruel wrote: > Hi everyone, > > I'm just wondering (assuming the keys of my maps in my application are > not dynamically generated): > > 1. What is the most optimal key type for maps? > 2. If there is little to no effect in performance (or resources in > general), as a convention, which is the best to use? > > Thank you in advance for your responses. > > Cheers, > Ruel > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkbucc@REDACTED Sat Oct 1 18:42:51 2016 From: mkbucc@REDACTED (Mark Bucciarelli) Date: Sat, 1 Oct 2016 12:42:51 -0400 Subject: [erlang-questions] concurrency question In-Reply-To: References: Message-ID: On Fri, Sep 30, 2016 at 12:18 PM, Grzegorz Junka wrote: > > On 30/09/2016 01:25, Mark Bucciarelli wrote: > > I have a program that does the following: > > - start gen_event, naming it "dispatcher" > - spawn a process > - send 60 messages to the process, then a stop message > - exit > > ... > But when I run it from inside the interpreter, > > > simulation:start(). > > I see: > > ** exception error: no such process or port > > (which is gen_event saying hey, I can't notify the Pid with that name > > because it's gone.) > > > Why don't I get that error output when I run from the console? > > > If you spawn the dispatcher from the shell then it will be linked to the > shell process. So it won't actually be gone.
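That linking is easy to observe directly from the shell (a minimal sketch; the pid value is illustrative):

    1> Pid = spawn_link(fun() -> receive stop -> ok end end).
    <0.85.0>
    2> process_info(self(), links).    %% the spawned process is linked to the shell
    {links,[<0.85.0>]}
    3> exit(Pid, kill).                %% killing it takes the (non-trapping) shell down as well
    ** exception exit: killed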
> > My question was not precise, but your answer was still correct I believe. I was stopping the gen_event when the main loop ended. But the child process keeps running, and its call to gen_event:notify/2 fails. So, if I understand correctly, inside the interpreter the child is linked to the interpreter shell when the main loop stops. Then, when the child crashes it sends an exit signal to its one link, the shell. Thanks, I have gained some understanding of how fast async messages and mailbox buffers require you to think in a different way. Here's the code of the main loop for reference. -module(simulation). -export([start/0]). % The clock. tictoc(60, Pid) -> Pid ! stop, ok; tictoc(TimeInTics, Pid) -> Pid ! tic, tictoc(TimeInTics + 1, Pid). % Spawn child process and start clock ticking. run_clock() -> Pid = spawn_link(stoplight, start, [0, {30, 30, 6}]), tictoc(0, Pid). % Simulation entry point. start() -> {ok, _Pid} = gen_event:start({local, dispatcher}), ok = gen_event:add_handler(dispatcher, logger, []), run_clock(), gen_event:stop(dispatcher), io:put_chars("main routine is done.\n"). Mark -- Blogging at markbucciarelli.com Tweeting @mbucc -------------- next part -------------- An HTML attachment was scrubbed... URL: From uniaika@REDACTED Sat Oct 1 19:06:38 2016 From: uniaika@REDACTED (Uniaika) Date: Sat, 1 Oct 2016 19:06:38 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: <2053734734.7935255.1475339989311@mail.yahoo.com> References: <2053734734.7935255.1475339989311@mail.yahoo.com> Message-ID: <1e481f0f-2802-b631-b502-9df4a879e01d@crypto-keupone.eu> There are two things I don't understand in your message: The first is that Elixir would be backed, or at least similar to Erlang's records, which isn't quite true, since Elixir's Structs are built with maps. Maybe you saw something deep in the source code that made you understand the true nature of structs, but for the moment I'm keeping the definition provided by the official documentation. The second thing is the so-called OO nature of Structs. You see them as objects. If I refer to a definition given by Joe Armstrong (who will maybe forgive me to bring his article in this context, or maybe not), an OO object is a cluster of data structures and functions. So now I'm asking you: How do you manage to access functions from a data structure that doesn't contain any? I'd be more than happy to be revealed the great truth beyond the truth concerning structs. On 10/01/2016 06:39 PM, Vans S wrote: > That is bad design and advice IMO. Structs in elixir define object > orientation, functional programming is about data, everything is data. > The tuple being a core building block. > > When you define your application using structures, you have just added > an extra layer of inference. Now in most cases that extra layer will > just get in your way, providing no reasonable benefit. > > One of the biggest enlightenments and problems Erlang developers learn > early on is the record. First the record is pretty much a Struct in > Elixir, with some insignificant differences. Now most developers would > do this; define rigid records like 'User', 'Location', 'Job' with > default values and start writing expressions to work with them. Before > the developer knows it, the record is being used throughout multiple > modules across the entire code base. Everything is unit tested and > static analysis reports the code is 100% excellent, not a single flaw. 
> > Now the developers code is running in production and its working great! > > But now a problem comes along, the job the code does has changed, > storing the email_verification_token inside the 'User' record is no > longer valid, and a new record is created called 'Validation' that > houses email_verification_token with sms tokens and other validations. > Sure no problem, well defined structure is awesome! > > The entire code base was written rigidly following the spec defined by > the records (Struct), as soon as we want to make a change now multiple > modules need to be rewritten. Now this is not a problem, a day can be > expent to rewrite/replace 20 modules that use that record. Done, > rewritten! > > Now the developers code is running in production and its failing :( > > Turns out by simply replacing text in 20 modules something was missed > that is producing undefined behavior now. The project is rolled back in > production to the previous version and the entire development process > starts all over, trying to figure out where the bug is that was > introduced from the simple record change. > > > Tl; Dr: Never rely on records (Elixir Structs) across multiple modules, > always write expressions that manipulate DATA, never write expressions > that manipulate OBJECTS (oop definition). > > > On the count of maps anything used as a key is optimal, there is no > limitations. Maps are a great flexibility and there is no one right way > to use them. An example of a useful map key: > > DataChannelLookup#{} > maps:put({peer1, peer2}, Channel, DataChannelLookup) > maps:put({peer2, peer1}, Channel, DataChannelLookup) > maps:put({peer2, peer3}, Channel2, DataChannelLookup) > maps:put({peer3, peer2}, Channel2, DataChannelLookup) > > Channel2 = maps:get({peer2, peer3}, DataChannelLookup) > > From jesper.louis.andersen@REDACTED Sat Oct 1 21:24:10 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sat, 1 Oct 2016 21:24:10 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: Hi, You have to define optimal. Do you want efficient lookup or do you want to save space? For static keys, using atoms has the advantage of being fast to compare and rather small (1 machine word). For small maps, lookup speeds are so quick I don't think it really matters too much if a record is marginally faster. I'd go for readability over efficiency in almost all situations. As for the record vs maps discussion: I tend to use an algebraic datatype instead, so I can avoid representing state which is not valid. In some circumstances, a map is fit for the representation. One weakness of a record is that if we define -record(state, { name, socket }). then we need to have #state { socket = undefined } at some point if we don't have a valid socket. With a map, we can simply initialize to the state #{ name => Name } and then when the socket is opened later, we can do State#{ socket => Sock } in the code and extend the state with a socket key. In a language such as OCaml, we could represent it as the type -type state() :: {unconnected, name()} | {connected, name(), socket()}. And I've done this in Erlang as well. It depends on the complexity of the representation. As for the goal: the goal is to build a state which is very hard to accidentally pattern match wrongly on in the code base. If your state has no socket, a match such as barney(#{ socket := Sock }) -> ... cannot match, even by accident. 
In turn, it forces the code to fail on the match itself, not later on when you try to do something with an undefined socket. On Sat, Oct 1, 2016 at 12:37 PM, Pagayon, Ruel wrote: > Hi everyone, > > I'm just wondering (assuming the keys of my maps in my application is not > dynamically generated): > > 1. What is the most optimal key type for maps? > 2. If there is little to no effect in performance (or resources in > general), as a convention, which is the best to use? > > Thank you in advance for your responses. > > Cheers, > Ruel > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -- J. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lloyd@REDACTED Sat Oct 1 22:36:15 2016 From: lloyd@REDACTED (Lloyd R. Prentice) Date: Sat, 1 Oct 2016 16:36:15 -0400 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: Hi Jesper, > I tend to use an algebraic datatype instead, Can you please explain this. Many thanks,, LRP Sent from my iPad > On Oct 1, 2016, at 3:24 PM, Jesper Louis Andersen wrote: > > Hi, > > You have to define optimal. Do you want efficient lookup or do you want to save space? For static keys, using atoms has the advantage of being fast to compare and rather small (1 machine word). For small maps, lookup speeds are so quick I don't think it really matters too much if a record is marginally faster. I'd go for readability over efficiency in almost all situations. > > As for the record vs maps discussion: I tend to use an algebraic datatype instead, so I can avoid representing state which is not valid. In some circumstances, a map is fit for the representation. One weakness of a record is that if we define > > -record(state, { name, socket }). > > then we need to have #state { socket = undefined } at some point if we don't have a valid socket. With a map, we can simply initialize to the state #{ name => Name } and then when the socket is opened later, we can do State#{ socket => Sock } in the code and extend the state with a socket key. In a language such as OCaml, we could represent it as the type > > -type state() :: {unconnected, name()} | {connected, name(), socket()}. > > And I've done this in Erlang as well. It depends on the complexity of the representation. As for the goal: the goal is to build a state which is very hard to accidentally pattern match wrongly on in the code base. If your state has no socket, a match such as > > barney(#{ socket := Sock }) -> ... > > cannot match, even by accident. In turn, it forces the code to fail on the match itself, not later on when you try to do something with an undefined socket. > > >> On Sat, Oct 1, 2016 at 12:37 PM, Pagayon, Ruel wrote: >> Hi everyone, >> >> I'm just wondering (assuming the keys of my maps in my application is not dynamically generated): >> >> 1. What is the most optimal key type for maps? >> 2. If there is little to no effect in performance (or resources in general), as a convention, which is the best to use? >> >> Thank you in advance for your responses. >> >> Cheers, >> Ruel >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > > > -- > J. 
> _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Sat Oct 1 22:57:55 2016 From: vans_163@REDACTED (Vans S) Date: Sat, 1 Oct 2016 20:57:55 +0000 (UTC) Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: <1e481f0f-2802-b631-b502-9df4a879e01d@crypto-keupone.eu> References: <2053734734.7935255.1475339989311@mail.yahoo.com> <1e481f0f-2802-b631-b502-9df4a879e01d@crypto-keupone.eu> Message-ID: <212787807.7999568.1475355475352@mail.yahoo.com> I am not comparing implementation details but abstract concepts, what is the purpose of such a type. ?The underlying purpose of records and Elixir Structs are fairly similar, the main difference is that Elixir early on had huge problems with the way records are implemented in Erlang (and still problems exist) so it was hard to port over Erlang records into Elixir (same reason why Erlang macros cannot be ported cleanly). Thus they created Structs which fit into Elixirs way of doing things much better and solve the same problem records solve. ?You should not be using records in Elixir unless you want headache. As I defined in brackets, I mean the general OOP definition. ?The general definition of OO is objects DEFINE how you manipulate data. ?For example you have a bike object with the turn_pedals function, which mutates data belonging to the bike object such as its position in the world after the pedals turned. ? IMO the correct way to manipulate data is to EXPRESS your intention. A poor example of this is you have functions that take data and return data.? turn_pedals(BikeSpeed, RiderStrength) -> ? ? {ok, 2}. You pass the speed and strength to the function. ?Now perform the calculations and return the distance moved. ?Now you are responsible for doing something knowing that if the the rider with this strength were to turn the bikes pedals, it would move 2 units. This way your structure does not pollute the expression. ?Should the structure change, your expression will still be perfectly valid and working. The hard and painful way IMO is like this: turn_pedals(BikeStruct, RiderStruct) -> ? ?{ok, NewBikeStruct, NewRiderStruct}. My point is that the second example is the OOP approach, where you change data defined as objects. ?The first approach is more functional where there is no relationship to what happens inside the expression to the object defining the data. The example is bad because it does not cover the case of when the Bike or Rider objectified data changes. ?But this is a ongoing exercise with many correct approaches. ?I find it works to keep functions that pull out data and write data to the objectified data all in 1 module. ?So if you change the structure of the objectification, you wont have to go and hunt down references in 20+ source files.? I think this answers the last question as well. On Saturday, October 1, 2016 2:33 PM, Uniaika wrote: There are two things I don't understand in your message: The first is that Elixir would be backed, or at least similar to Erlang's records, which isn't quite true, since Elixir's Structs are built with maps. Maybe you saw something deep in the source code that made you understand the true nature of structs, but for the moment I'm keeping the definition provided by the official documentation. The second thing is the so-called OO nature of Structs. 
You see them as objects. If I refer to a definition given by Joe Armstrong (who will maybe forgive me to bring his article in this context, or maybe not), an OO object is a cluster of data structures and functions. So now I'm asking you: How do you manage to access functions from a data structure that doesn't contain any? I'd be more than happy to be revealed the great truth beyond the truth concerning structs. On 10/01/2016 06:39 PM, Vans S wrote: > That is bad design and advice IMO.? Structs in elixir define object > orientation, functional programming is about data, everything is data. >? The tuple being a core building block. > > When you define your application using structures, you have just added > an extra layer of inference.? Now in most cases that extra layer will > just get in your way, providing no reasonable benefit. > > One of the biggest enlightenments and problems Erlang developers learn > early on is the record.? First the record is pretty much a Struct in > Elixir, with some insignificant differences.? Now most developers would > do this;? define rigid records like 'User', 'Location', 'Job' with > default values and? start writing expressions to work with them.? Before > the developer knows it, the record is being used throughout multiple > modules across the entire code base. Everything is unit tested and > static analysis reports the code is 100% excellent, not a single flaw. > > Now the developers code is running in production and its working great! > > But now a problem comes along, the job the code does has changed, > storing the email_verification_token inside the 'User' record is no > longer valid, and a new record is created called 'Validation' that > houses email_verification_token with sms tokens and other validations. >? Sure no problem, well defined structure is awesome! > > The entire code base was written rigidly following the spec defined by > the records (Struct), as soon as we want to make a change now multiple > modules need to be rewritten.? Now this is not a problem, a day can be > expent to rewrite/replace 20 modules that use that record. Done, > rewritten!? > > Now the developers code is running in production and its failing :( > > Turns out by simply replacing text in 20 modules something was missed > that is producing undefined behavior now. The project is rolled back in > production to the previous version and the entire development process > starts all over, trying to figure out where the bug is that was > introduced from the simple record change. > > > Tl; Dr:? Never rely on records (Elixir Structs) across multiple modules, > always write expressions that manipulate DATA, never write expressions > that manipulate OBJECTS (oop definition). > > > On the count of maps anything used as a key is optimal, there is no > limitations. Maps are a great flexibility and there is no one right way > to use them. An example of a useful map key: > > DataChannelLookup#{} > maps:put({peer1, peer2}, Channel, DataChannelLookup) > maps:put({peer2, peer1}, Channel, DataChannelLookup) > maps:put({peer2, peer3}, Channel2, DataChannelLookup) > maps:put({peer3, peer2}, Channel2, DataChannelLookup) > > Channel2 = maps:get({peer2, peer3}, DataChannelLookup) > > _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
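One way to put that advice into plain Erlang is to keep every function that knows the stored shape inside a single module, so the rest of the code base only ever handles plain values (a minimal sketch; the module and field names are invented for illustration):

    -module(rider).
    -export([new/1, strength/1, set_strength/2]).

    %% The shape of the data is known only inside this module.
    new(Strength) -> #{strength => Strength}.

    strength(#{strength := S}) -> S.

    set_strength(Rider, S) -> Rider#{strength := S}.

If the internal shape later changes from a map to a record or to a different key layout, only this module is touched; callers keep writing expressions over plain values such as rider:strength(R).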
URL: From kennethlakin@REDACTED Sun Oct 2 00:22:34 2016 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Sat, 1 Oct 2016 15:22:34 -0700 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: <1e481f0f-2802-b631-b502-9df4a879e01d@crypto-keupone.eu> References: <2053734734.7935255.1475339989311@mail.yahoo.com> <1e481f0f-2802-b631-b502-9df4a879e01d@crypto-keupone.eu> Message-ID: On 10/01/2016 06:39 PM, Vans S wrote: > Turns out by simply replacing text in 20 modules something was missed > that is producing undefined behavior now. One of the things that records get you -that maps do not- is a compiler error when you try to use a field that is not defined in the record. I'm not sure what sort of error you had in mind when you were creating your scenario, but you're free to remove and add fields from a record. Were I working on the project in your story, I would remove the email_verification_token field from the 'User' record before going ahead and creating the 'Validation' record. The compiler errors would make it difficult to miss code that was using the old field. Maps are nice, but records have their place, too. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From kennethlakin@REDACTED Sun Oct 2 00:26:50 2016 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Sat, 1 Oct 2016 15:26:50 -0700 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: <1e481f0f-2802-b631-b502-9df4a879e01d@crypto-keupone.eu> References: <2053734734.7935255.1475339989311@mail.yahoo.com> <1e481f0f-2802-b631-b502-9df4a879e01d@crypto-keupone.eu> Message-ID: On 10/01/2016 06:39 PM, Vans S wrote: > Turns out by simply replacing text in 20 modules something was missed > that is producing undefined behavior now. One of the things that records get you -that maps do not- is a compile-time error when you try to use a field that is not defined in the record. I'm not sure what sort of error you had in mind when you were creating your scenario, but you're free to remove and add fields from a record. Were I working on the project in your story, I would remove the email_verification_token field from the 'User' record before going ahead and creating the 'Validation' record. The compiler errors would make it difficult to miss code that was using the old field. Maps are nice, but records have their place, too. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From rvirding@REDACTED Sun Oct 2 00:56:21 2016 From: rvirding@REDACTED (Robert Virding) Date: Sun, 2 Oct 2016 00:56:21 +0200 Subject: [erlang-questions] Type def and spec syntax Message-ID: Is there any better documentation of type defs and specs than that which occurs in the Erlang Reference manual? If so where is it? I am trying to define an equivalent type/spec syntax for LFE so I am going through the valid syntax then trying to work out what they mean. And there is a lot of strangeness from simple things like the predefined type names are 'reserved' so this is valid: -type atom(X) :: list(X). (whatever it means) to much more strange things like: -spec foo(Y) -> integer() when atom(Y). -spec foo(Y) -> integer() when atom(Y :: integer()). (whatever they mean) neither of which are mentioned in the docs. 
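For comparison, the shapes the reference manual does describe are the unsurprising ones (a small sketch, nothing exotic):

    -type orddict(Key, Val) :: [{Key, Val}].     %% a parameterized user-defined type

    -spec id(X) -> X when X :: tuple().          %% constraints written as Var :: Type
    -spec my_len(List) -> Len when List :: list(),
                                   Len  :: non_neg_integer().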
So is there a more complete and understandable documentation some where? Robert P.S. I am a bad boy with types and really only use specs for saying a function never returns to keep dialyzer quiet. :-) -spec generate_a_foo_error(any()) -> no_return(). -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz@REDACTED Sun Oct 2 01:11:17 2016 From: tuncer.ayaz@REDACTED (Tuncer Ayaz) Date: Sun, 2 Oct 2016 01:11:17 +0200 Subject: [erlang-questions] Type def and spec syntax In-Reply-To: References: Message-ID: On 2 October 2016 at 00:56, Robert Virding wrote: > So is there a more complete and understandable documentation some > where? It's not documentation, but have you looked at lib/hipe/cerl/erl_types.erl and lib/hipe/cerl/erl_bif_types.erl? From vladdu55@REDACTED Sun Oct 2 11:31:00 2016 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Sun, 2 Oct 2016 11:31:00 +0200 Subject: [erlang-questions] Erlang documentation -- a modest proposal In-Reply-To: References: <1474312965.50459872@apps.rackspace.com> <4f69071d-bcec-eda2-6322-428b0ed8ee85@ninenines.eu> <52803f0c-e906-51b3-fe13-63291ef68902@gmail.com> <66361144-3156-2cd8-693f-984676f2bebd@cs.otago.ac.nz> <3a97bd90-b149-b517-8f7f-a73d58174ed0@ninenines.eu> Message-ID: Hi, A short status update. I've spent a couple of evenings looking at the details for how to make the HTML documentation a little better structured, so that it can be styled. The good news is that it's relatively easy to tweak the XSL and get the HTML that we want. On the other hand, it is a single XSL stylesheet that handles not only the reference documentation (which I was targeting foremost), but also everything else (user guides, etc). This requires a bit more thought and discovery by trial and error, as it is easy to get an unexpected result in a part that is only present in few places (and browsing all documentation after every change is not feasible). Debugging XSLT is difficult, I discover. Styling the docs just for improving the artistic experience is probably not a priority, but what would be useful is to make the pages mobile-friendly and make it possible to filter out parts that are not relevant at the moment. Most of this can be done in smaller steps, with the downside that the XSLT will probably become spaghetti; or with a larger scope, convert the template to HTML5 and make it more modular (easy to maintain), with the downside that there is a bit longer start-up time. The latter option brings also up a topic discussed here before: is XSLT a technology for the future? IMHO the only argument in its favour is that it works now and there is a *lot* of XML sources that depend on it. What I have difficulty estimating is if the amount of work needed to modify and maintain the existing templates might be in the same region as the one needed to convert the sources to a format for which there are already templates (asciidoc and restructuredText were mentioned). I really don't know, but what I know is that I wouldn't want to do the first and then scrap it and do the second. best regards, Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Sun Oct 2 12:13:05 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Sun, 2 Oct 2016 12:13:05 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: On Sat, Oct 1, 2016 at 10:36 PM, Lloyd R. Prentice wrote: > Can you please explain this. 
In languages such as OCaml, you can define algebraic datatypes which encode certain invariants of your program. By doing so, you can sometimes build your construction such that illegal states cannot be represented. This "make the illegal states impossible" approach has been handled by Yaron Minsky, among others. Concretely, a good way of defining the state of a file is: type file_state = Closed | Open of (char Stream.t) In Erlang, we would write: -type file_state() = closed | {open, port()} but the gist is the same thing. When we match on the file_state, the interesting thing happens: let barney fs = match fs with | Closed -> ... | Open stream -> ... Note how we only have access to the 'stream' component when the file is open, but not when it is closed. If we manage this invariant in the code, there is no way to accidentally sit with a closed file descriptor[0]. In Erlang, we can get much the same flow: barney(.., closed) -> ...; barney(.., {open, Port}) -> ... Note again how the only way to get to the port field is when we have an open port. The--admittedly contrived--naive approach is to write: -record(state, { status = closed :: closed | open, port = undefined :: undefined | port() }). But note that we can thus write: State1 = #state{ status = closed, port = undefined }, State2 = #state{ status = open, port = Port }, State3 = #state{ status = closed, port = Port }, State4 = #state{ status = open, port = undefined } Here states 1 and 2 are valid states, but 3 and 4 are not. Yet our record allows for their representation! It is a common mistake in programming to write down such illegal states[1], and by alluding to the algebraic datatype, you can avoid them. The key idea is to encode your state as a term which has no extra information at any point, but is precise as to what data/information you have at a given point in time. For a real-world example, see https://github.com/shopgun/turtle , in which this technique is used in a couple of places. Turtle is a wrapper for RabbitMQ making the official driver a bit more OTP-like. In RabbitMQ (AMQP), you first open a connection and draw channels inside the connection. Communication happens on a certain channel, not on the connection. In order to handle connections as Fred Hebert writes in his "Its about the guarantees" post[2], we want to start up processes in a known state, and then switch their internals once we have a valid connection. In particular, if you want to publish to RabbitMQ, you want to add a publisher process to your own supervision tree. This process will have to wait until a connection is established to RabbitMQ and then it will need to draw the channel and connect. The publisher is a gen_server and its Module:init/1 callback is: https://github.com/shopgun/turtle/blob/401aea5dc13256f1ed5fbf70830e86 153a4db740/src/turtle_publisher.erl#L151-L155 init([{takeover, Name}, ConnName, Options]) -> process_flag(trap_exit, true), Ref = gproc:nb_wait({n,l,{turtle,connection, ConnName}}), ok = exometer:ensure([ConnName, Name, casts], spiral, []), {ok, {initializing_takeover, Name, Ref, ConnName, Options}}; The process_flag/2 is for handling the fact the official driver cannot close down appropriately. By writing the publisher with trap_exit, we can protect the rest of the Erlang system against its misbehavior. We set up gproc to tell us when there is a connection ready. Then we tell exometer to create a spiral so we can track the behavior of the publisher in our metrics solution. Finally, we get into the "initializing" state. 
Note how we don't use the "real" state here. Then later on in the file, we handle the message from grpoc: https://github.com/shopgun/turtle/blob/401aea5dc13256f1ed5fbf70830e86153a4db740/src/turtle_publisher.erl#L210-L230 handle_info({gproc, Ref, registered, {_, Pid, _}}, {initializing, N, Ref, CName, Options}) -> {ok, Channel} = turtle:open_channel(CName), #{ declarations := Decls, passive := Passive, confirms := Confirms} = Options, ok = turtle:declare(Channel, Decls, #{ passive => Passive }), ok = turtle:qos(Channel, Options), ok = handle_confirms(Channel, Options), {ok, ReplyQueue, Tag} = handle_rpc(Channel, Options), ConnMRef = monitor(process, Pid), ChanMRef = monitor(process, Channel), reg(N), {noreply, #state { channel = Channel, channel_ref = ChanMRef, conn_ref = ConnMRef, conn_name = CName, confirms = Confirms, corr_id = 0, reply_queue = ReplyQueue, consumer_tag = Tag, name = N}}; When we have a valid connection, gproc tells us. And we are in the initializing state. So we then set up the fabric in RabbitMQ, set appropriate monitors, register ourselves and then build up the "real" state record for the process. The idea here is that we have a special state for when we are initializing which is different from the normal operating state in the system. This avoids having to populate our #state{} record with lots of "undefined" values. In turn, we can define the type scheme of the #state{} record more precisely because when we initialize it, every value is a valid value. Now the dialyzer becomes far more powerful because it has a simpler record to work with type-wise. The original question was about maps. Since maps are dynamic in nature, you can sometimes use them by avoiding to populate fields before they are available and have valid data in them. This gives you the same structure as above, albeit simpler. You could encode the closed state as State1 = #{} and the open state as State2 = #{ port => Port } Now, any match which needs the port must match on it: case State of #{ port := Port } -> ... end, so you cannot by accident have an uninitialized port. In Erlang/OTP Release 19.x we even have the dialyzer able to work with maps, so we can type this as well and get the dialyzer to figure out where there are problems in the code base. The method is not nearly as powerful as it is in OCaml. Static type systems can tell you, at compile time, where your constructions are wrong. With a bit more type-level work, you can encode even more invariants. For instance, you can discriminate a public key from a private key in a public-key cryptosystem, without ever having a runtime overhead of doing so. [0] Of course, this is slightly false. Network sockets may close for other reasons for instance. [1] The Go programming language is notorious for doing this. Either by returning a pair (result, error) where the error if nil if the result is valid and vice versa. Or by using a struct in which certain fields encode if other fields are valid. [2] http://ferd.ca/it-s-about-the-guarantees.html -- J. -------------- next part -------------- An HTML attachment was scrubbed... 
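Spelled out as a tiny self-contained module, the map-based open/closed state looks roughly like this (a sketch; the use of gen_tcp and the hard-coded port are only for illustration):

    -module(conn).
    -export([new/1, connect/2, send/2]).

    %% Closed state: there is no socket key at all, so nothing can match it by accident.
    new(Name) -> #{name => Name}.

    connect(#{name := _} = State, Host) ->
        {ok, Sock} = gen_tcp:connect(Host, 80, [binary, {active, false}]),
        State#{socket => Sock}.

    %% This clause can only match a state that really holds a socket.
    send(#{socket := Sock}, Data) ->
        gen_tcp:send(Sock, Data).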
URL: From dmytro.lytovchenko@REDACTED Sun Oct 2 12:23:52 2016 From: dmytro.lytovchenko@REDACTED (Dmytro Lytovchenko) Date: Sun, 02 Oct 2016 10:23:52 +0000 Subject: [erlang-questions] Erlang documentation -- a modest proposal In-Reply-To: References: <1474312965.50459872@apps.rackspace.com> <4f69071d-bcec-eda2-6322-428b0ed8ee85@ninenines.eu> <52803f0c-e906-51b3-fe13-63291ef68902@gmail.com> <66361144-3156-2cd8-693f-984676f2bebd@cs.otago.ac.nz> <3a97bd90-b149-b517-8f7f-a73d58174ed0@ninenines.eu> Message-ID: Here tonight I have experimented with XSLT which generates man pages and made it produce RestructuredText instead: It is not perfect, but i have cut my time at 6 hours and here's the result. http://imgur.com/3kbecSI (using 'classic' template, output has Man page sections because it was based on man page XSLT) http://imgur.com/7IFr30X It is using python function markers, so it ignores arity and args just takes the function name, i am able to link to specific functions and types (this works across whole documentation project, if i'd have more than 1 file). It produces still erlang.3 man file which is RST inside (did not bother to modify the Makefile yet) My offer was to take this in stages: 1. Add a RST target XSLT (taking my POC as a base or not), generate temporary RST files into a temporary project and build HTML, PDF, man (also Sphinx offers epub format) without affecting original docs, and keep working to make it better. 2. Work on XSLT until it produces desired results. Add new entities to Sphinx with Erlang funcs/types support (includes some Python scripting). Sphinx also can do autodoc (i.e. refer into existing modules for some doc pieces), possibly we can shift complex postprocessing ERL/BEAM files to a shorter Python script reading from them. 3. When the output is good enough, scrap the XML and XSLT. XSLT is bad, XSLT 1.0 is even worse. It lacks many features of modern XSLT and both of them are pain to edit. From then on RST and module edoc will contain the primary source of documentation. From here we can edit RST manually instead of generating it -- this is better control over the output. 4. Restyle. Add javascript to show/hide examples, etc. This is done by Sphinx templating and is much easier than XSLT (1.0 bah!). 5. Enjoy the pleasure of editing RestructuredText. The screenshot how it may look is here: http://imgur.com/3kbecSI http://imgur.com/7IFr30X Using template 'classic' The branch is here https://github.com/kvakvs/otp/tree/xslt-sphinx-rst The diff with master is here https://github.com/erlang/otp/compare/master...kvakvs:xslt-sphinx-rst?expand=1 Sphinx project (a simple Makefile and conf.py are not included). P.S .I am not going into dragon realm of "not invented here" and "it introduces a dependency". Well, yes, yes it does, it passes the task to a better tool, which is designed to build documentation. s?n 2 okt. 2016 kl 11:31 skrev Vlad Dumitrescu : > Hi, > > A short status update. I've spent a couple of evenings looking at the > details for how to make the HTML documentation a little better structured, > so that it can be styled. > > The good news is that it's relatively easy to tweak the XSL and get the > HTML that we want. On the other hand, it is a single XSL stylesheet that > handles not only the reference documentation (which I was targeting > foremost), but also everything else (user guides, etc). 
This requires a bit > more thought and discovery by trial and error, as it is easy to get an > unexpected result in a part that is only present in few places (and > browsing all documentation after every change is not feasible). Debugging > XSLT is difficult, I discover. > > Styling the docs just for improving the artistic experience is probably > not a priority, but what would be useful is to make the pages > mobile-friendly and make it possible to filter out parts that are not > relevant at the moment. Most of this can be done in smaller steps, with the > downside that the XSLT will probably become spaghetti; or with a larger > scope, convert the template to HTML5 and make it more modular (easy to > maintain), with the downside that there is a bit longer start-up time. > > The latter option brings also up a topic discussed here before: is XSLT a > technology for the future? IMHO the only argument in its favour is that it > works now and there is a *lot* of XML sources that depend on it. What I > have difficulty estimating is if the amount of work needed to modify and > maintain the existing templates might be in the same region as the one > needed to convert the sources to a format for which there are already > templates (asciidoc and restructuredText were mentioned). I really don't > know, but what I know is that I wouldn't want to do the first and then > scrap it and do the second. > > best regards, > Vlad > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From list1@REDACTED Sun Oct 2 13:07:40 2016 From: list1@REDACTED (Grzegorz Junka) Date: Sun, 2 Oct 2016 11:07:40 +0000 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: I don't think Ruel was asking about differences between maps and records, only what datatype is optimal for map's keys? I was hoping someone from the OTP team will take on this question, but from what I understand the key is always hashed using some C functions. So, the shorter is the key the faster will be the hashing, but the difference will be so small, that only noticeable on really big data structures (like long lists/strings, binaries or deep data structures, like dicts, where it has to be traversed, and by long I mean a few dozens of bytes). Grzegorz On 01/10/2016 19:24, Jesper Louis Andersen wrote: > Hi, > > You have to define optimal. Do you want efficient lookup or do you > want to save space? For static keys, using atoms has the advantage of > being fast to compare and rather small (1 machine word). For small > maps, lookup speeds are so quick I don't think it really matters too > much if a record is marginally faster. I'd go for readability over > efficiency in almost all situations. > > As for the record vs maps discussion: I tend to use an algebraic > datatype instead, so I can avoid representing state which is not > valid. In some circumstances, a map is fit for the representation. One > weakness of a record is that if we define > > -record(state, { name, socket }). > > then we need to have #state { socket = undefined } at some point if we > don't have a valid socket. With a map, we can simply initialize to the > state #{ name => Name } and then when the socket is opened later, we > can do State#{ socket => Sock } in the code and extend the state with > a socket key. 
In a language such as OCaml, we could represent it as > the type > > -type state() :: {unconnected, name()} | {connected, name(), socket()}. > > And I've done this in Erlang as well. It depends on the complexity of > the representation. As for the goal: the goal is to build a state > which is very hard to accidentally pattern match wrongly on in the > code base. If your state has no socket, a match such as > > barney(#{ socket := Sock }) -> ... > > cannot match, even by accident. In turn, it forces the code to fail on > the match itself, not later on when you try to do something with an > undefined socket. > > > On Sat, Oct 1, 2016 at 12:37 PM, Pagayon, Ruel > wrote: > > Hi everyone, > > I'm just wondering (assuming the keys of my maps in my application > is not dynamically generated): > > 1. What is the most optimal key type for maps? > 2. If there is little to no effect in performance (or resources in > general), as a convention, which is the best to use? > > Thank you in advance for your responses. > > Cheers, > Ruel > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > > > > -- > J. > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From lloyd@REDACTED Sun Oct 2 15:50:21 2016 From: lloyd@REDACTED (Lloyd R. Prentice) Date: Sun, 2 Oct 2016 09:50:21 -0400 Subject: [erlang-questions] Erlang documentation -- a modest proposal In-Reply-To: References: <1474312965.50459872@apps.rackspace.com> <4f69071d-bcec-eda2-6322-428b0ed8ee85@ninenines.eu> <52803f0c-e906-51b3-fe13-63291ef68902@gmail.com> <66361144-3156-2cd8-693f-984676f2bebd@cs.otago.ac.nz> <3a97bd90-b149-b517-8f7f-a73d58174ed0@ninenines.eu> Message-ID: Hats off to Vlad and Dmytro! Looks like promising work. Thanks, guys! Best wishes, Lloyd Sent from my iPad > On Oct 2, 2016, at 6:23 AM, Dmytro Lytovchenko wrote: > > Here tonight I have experimented with XSLT which generates man pages and made it produce RestructuredText instead: > It is not perfect, but i have cut my time at 6 hours and here's the result. > http://imgur.com/3kbecSI (using 'classic' template, output has Man page sections because it was based on man page XSLT) > http://imgur.com/7IFr30X > > It is using python function markers, so it ignores arity and args just takes the function name, i am able to link to specific functions and types (this works across whole documentation project, if i'd have more than 1 file). > > It produces still erlang.3 man file which is RST inside (did not bother to modify the Makefile yet) > > My offer was to take this in stages: > > 1. Add a RST target XSLT (taking my POC as a base or not), generate temporary RST files into a temporary project and build HTML, PDF, man (also Sphinx offers epub format) without affecting original docs, and keep working to make it better. > 2. Work on XSLT until it produces desired results. Add new entities to Sphinx with Erlang funcs/types support (includes some Python scripting). Sphinx also can do autodoc (i.e. refer into existing modules for some doc pieces), possibly we can shift complex postprocessing ERL/BEAM files to a shorter Python script reading from them. > 3. When the output is good enough, scrap the XML and XSLT. XSLT is bad, XSLT 1.0 is even worse. 
It lacks many features of modern XSLT and both of them are pain to edit. From then on RST and module edoc will contain the primary source of documentation. From here we can edit RST manually instead of generating it -- this is better control over the output. > 4. Restyle. Add javascript to show/hide examples, etc. This is done by Sphinx templating and is much easier than XSLT (1.0 bah!). > 5. Enjoy the pleasure of editing RestructuredText. > > The screenshot how it may look is here: http://imgur.com/3kbecSI http://imgur.com/7IFr30X Using template 'classic' > The branch is here https://github.com/kvakvs/otp/tree/xslt-sphinx-rst > The diff with master is here https://github.com/erlang/otp/compare/master...kvakvs:xslt-sphinx-rst?expand=1 > Sphinx project (a simple Makefile and conf.py are not included). > > P.S .I am not going into dragon realm of "not invented here" and "it introduces a dependency". Well, yes, yes it does, it passes the task to a better tool, which is designed to build documentation. > > s?n 2 okt. 2016 kl 11:31 skrev Vlad Dumitrescu : >> Hi, >> >> A short status update. I've spent a couple of evenings looking at the details for how to make the HTML documentation a little better structured, so that it can be styled. >> >> The good news is that it's relatively easy to tweak the XSL and get the HTML that we want. On the other hand, it is a single XSL stylesheet that handles not only the reference documentation (which I was targeting foremost), but also everything else (user guides, etc). This requires a bit more thought and discovery by trial and error, as it is easy to get an unexpected result in a part that is only present in few places (and browsing all documentation after every change is not feasible). Debugging XSLT is difficult, I discover. >> >> Styling the docs just for improving the artistic experience is probably not a priority, but what would be useful is to make the pages mobile-friendly and make it possible to filter out parts that are not relevant at the moment. Most of this can be done in smaller steps, with the downside that the XSLT will probably become spaghetti; or with a larger scope, convert the template to HTML5 and make it more modular (easy to maintain), with the downside that there is a bit longer start-up time. >> >> The latter option brings also up a topic discussed here before: is XSLT a technology for the future? IMHO the only argument in its favour is that it works now and there is a *lot* of XML sources that depend on it. What I have difficulty estimating is if the amount of work needed to modify and maintain the existing templates might be in the same region as the one needed to convert the sources to a format for which there are already templates (asciidoc and restructuredText were mentioned). I really don't know, but what I know is that I wouldn't want to do the first and then scrap it and do the second. >> >> best regards, >> Vlad >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From max.lapshin@REDACTED Sun Oct 2 18:31:38 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Sun, 2 Oct 2016 19:31:38 +0300 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: I can tell about our experience. We in Flussonic have disk format of config. It is similar to nginx-like: stream ort { url udp://239.0.0.1:1234; dvr /storage 10d; } This format is parsed to a map of maps of maps of maps. I tried to avoid arrays (lists). These maps mostly have atom as a key except stream names, that must be binaries. It is convenient to use from erlang, but we have to transform it to JSON. When we read it back from JSON we need to translate binary keys to atoms and so we have a special function that translates some keys from binary to atom, and some keys are left as binary. It is a bit tricky, but it is very convenient to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladdu55@REDACTED Sun Oct 2 19:19:25 2016 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Sun, 2 Oct 2016 19:19:25 +0200 Subject: [erlang-questions] Erlang documentation -- a modest proposal In-Reply-To: References: <1474312965.50459872@apps.rackspace.com> <4f69071d-bcec-eda2-6322-428b0ed8ee85@ninenines.eu> <52803f0c-e906-51b3-fe13-63291ef68902@gmail.com> <66361144-3156-2cd8-693f-984676f2bebd@cs.otago.ac.nz> <3a97bd90-b149-b517-8f7f-a73d58174ed0@ninenines.eu> Message-ID: On Sun, Oct 2, 2016 at 12:23 PM, Dmytro Lytovchenko < dmytro.lytovchenko@REDACTED> wrote: > Here tonight I have experimented with XSLT which generates man pages and > made it produce RestructuredText instead: > It is not perfect, but i have cut my time at 6 hours and here's the result. > http://imgur.com/3kbecSI (using 'classic' template, output has Man page > sections because it was based on man page XSLT) > http://imgur.com/7IFr30X > > It is using python function markers, so it ignores arity and args just > takes the function name, i am able to link to specific functions and types > (this works across whole documentation project, if i'd have more than 1 > file). > > It produces still erlang.3 man file which is RST inside (did not bother to > modify the Makefile yet) > > My offer was to take this in stages: > > 1. Add a RST target XSLT (taking my POC as a base or not), generate > temporary RST files into a temporary project and build HTML, PDF, man (also > Sphinx offers epub format) without affecting original docs, and keep > working to make it better. > 2. Work on XSLT until it produces desired results. Add new entities to > Sphinx with Erlang funcs/types support (includes some Python scripting). > Sphinx also can do autodoc (i.e. refer into existing modules for some doc > pieces), possibly we can shift complex postprocessing ERL/BEAM files to a > shorter Python script reading from them. > 3. When the output is good enough, scrap the XML and XSLT. XSLT is bad, > XSLT 1.0 is even worse. It lacks many features of modern XSLT and both of > them are pain to edit. From then on RST and module edoc will contain the > primary source of documentation. From here we can edit RST manually instead > of generating it -- this is better control over the output. > 4. Restyle. Add javascript to show/hide examples, etc. This is done by > Sphinx templating and is much easier than XSLT (1.0 bah!). > 5. Enjoy the pleasure of editing RestructuredText. > Great job, Dmytro! * #2: it looks like there is an Erlang domain for Sphinx, don't know how complete it is. 
https://pypi.python.org/pypi/sphinxcontrib-erlangdomain * Regarding the autodoc extension, that would generate RST from edoc comments, right? * I'm not sure how to make sure the results are good enough. In the first stage, there are two levels of unknowns: xml->rst and rst->html/man/pdf. I would like to be able to test each separately, that is specifying how the intermediary rst format will look like, taking into account the special cases found in the xml sources (some of the xslt seems to be there to handle such cases, but I may be wrong). > P.S .I am not going into dragon realm of "not invented here" and "it > introduces a dependency". Well, yes, yes it does, it passes the task to a > better tool, which is designed to build documentation. > One added dependency and one that is removed (xsltproc chain) give a net result of zero. :-) The big hurdle is that the OTP devs need to learn about the new format and toolchain, while the old ones "just work"(tm). On the other hand, I'm not sure how easy would be to maintain the xml+xslt sources in face of the large-ish changes we would like to make: the git history shows only few and minor changes done since 2009. In the end, it's their decision if a change is worth it, and an automatic conversion xml->rst helps a lot make the decision easier. best regards, Vlad > s?n 2 okt. 2016 kl 11:31 skrev Vlad Dumitrescu : > >> Hi, >> >> A short status update. I've spent a couple of evenings looking at the >> details for how to make the HTML documentation a little better structured, >> so that it can be styled. >> >> The good news is that it's relatively easy to tweak the XSL and get the >> HTML that we want. On the other hand, it is a single XSL stylesheet that >> handles not only the reference documentation (which I was targeting >> foremost), but also everything else (user guides, etc). This requires a bit >> more thought and discovery by trial and error, as it is easy to get an >> unexpected result in a part that is only present in few places (and >> browsing all documentation after every change is not feasible). Debugging >> XSLT is difficult, I discover. >> >> Styling the docs just for improving the artistic experience is probably >> not a priority, but what would be useful is to make the pages >> mobile-friendly and make it possible to filter out parts that are not >> relevant at the moment. Most of this can be done in smaller steps, with the >> downside that the XSLT will probably become spaghetti; or with a larger >> scope, convert the template to HTML5 and make it more modular (easy to >> maintain), with the downside that there is a bit longer start-up time. >> >> The latter option brings also up a topic discussed here before: is XSLT a >> technology for the future? IMHO the only argument in its favour is that it >> works now and there is a *lot* of XML sources that depend on it. What I >> have difficulty estimating is if the amount of work needed to modify and >> maintain the existing templates might be in the same region as the one >> needed to convert the sources to a format for which there are already >> templates (asciidoc and restructuredText were mentioned). I really don't >> know, but what I know is that I wouldn't want to do the first and then >> scrap it and do the second. 
>> >> best regards, >> Vlad >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kostis@REDACTED Sun Oct 2 20:42:08 2016 From: kostis@REDACTED (Kostis Sagonas) Date: Sun, 2 Oct 2016 20:42:08 +0200 Subject: [erlang-questions] Type def and spec syntax In-Reply-To: References: Message-ID: <895950d3-3739-7b1f-7c5f-bad87436b84c@cs.ntua.gr> On 10/02/2016 12:56 AM, Robert Virding wrote: > Is there any better documentation of type defs and specs than that which > occurs in the Erlang Reference manual? If so where is it? In light of recent discussions on this mailing list, I wish that those who complain about documentation took a bit of time to tell us which aspects of the current documentation they find unsatisfactory and why. It's very difficult to know how to make something "better" if one does not get told why it's not good enough as it is. > I am trying to define an equivalent type/spec syntax for LFE so I am > going through the valid syntax then trying to work out what they mean. OK, here is a thing I try hard to teach students who take my compiler courses: Do NOT mix up in your mind(s) syntax and semantics of a language. They are certainly not the same thing; they may not even be related in any way. So it's not particularly surprising that you cannot "work out what they mean" by looking at the valid syntax of types and specs. > And there is a lot of strangeness from simple things like the predefined > type names are 'reserved' so this is valid: > > -type atom(X) :: list(X). > > (whatever it means) to much more strange things like: > > -spec foo(Y) -> integer() when atom(Y). > -spec foo(Y) -> integer() when atom(Y :: integer()). > > (whatever they mean) neither of which are mentioned in the docs. First of all, why should the above be mentioned in the reference manual, especially if they are "strange"? Second, why do you find the first of these examples strange? Erlang has been designed so that e.g. functions do not have just some alphanumeric sequence of characters as a name but their name includes an arity. This means that length/1 is reserved (you cannot redefine it), while length/3 is not. In this prism, why do you find it strange that the atom/1 type is not? (Only atom/0 is.) As to what the above examples mean, well it's very simple: - The type declaration defines a polymorphic type called atom that is an alias for the built-in polymorphic type list. Granted that it's a very stupid name for this type, but there is nothing that forces good naming convensions in Erlang. I can certainly define a length/3 function that takes integer() arguments and returns a binary() of some sort. - The first foo/1 spec has no meaning because you cannot use the atom/1 type as a subtype constraint. In fact, if you put this spec in a file and try to compile it, you will get a warning, which is consistent with the (very bad IMO) philosophy of the Erlang compiler to not refuse to compule code which is damn stupid. If you use dialyzer on this spec, you will discover that this tool is more sane as far as tolerating constructs which have no meaning. You will get a dialyzer error in this case. - The second foo/1 spec is rejected even by the compiler. Have you actually tried this and the compiler accepted it? > So is there a more complete and understandable documentation some where? 
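To make that distinction concrete, here is a small sketch (module, type and function names are invented for illustration): it defines a parametrized alias with a descriptive name instead of atom/1, and a spec whose when-part uses the Var :: Type constraint form that both the compiler and dialyzer accept, rather than the call-like atom(Y) constraints discussed above.

    -module(demo).
    -export([first_key/1]).

    %% A parametrized alias: pairs(K, V) is simply a list of {K, V} tuples.
    %% Nothing stops us from naming it atom/1 instead, since only atom/0 is
    %% predefined, but a descriptive name avoids the confusion.
    -type pairs(K, V) :: [{K, V}].

    %% Constraints after 'when' are written as Var :: Type; this is the
    %% supported form.
    -spec first_key(L) -> K when
          L :: pairs(K, term()),
          K :: atom().
    first_key([{K, _} | _]) -> K.
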
Suggestions on how to make the current documentation more complete and understandable are welcome! Kostis From prof3ta@REDACTED Sun Oct 2 23:57:02 2016 From: prof3ta@REDACTED (Roberto Aloi) Date: Sun, 02 Oct 2016 21:57:02 +0000 Subject: [erlang-questions] [ANN] Erlang ansible-nodetool module 2.0.1 Message-ID: Hi, I would like to announce a new version of the ansible-nodetool module, an Ansible module to interact with Erlang nodes via Erlang RPC. If your architecture includes one or more Erlang nodes and you use Ansible to orchestrate them, you may find this Ansible module helpful. The ansible-nodetool module is available at: https://github.com/robertoaloi/ansible-nodetool The README contains detailed information on how to install and use the Ansible module. Sample playbooks are available in the `examples` directory. For the impatient, here is an example of what you can do with it: --- - hosts: localhost tasks: - name: "Return a list of running applications" nodetool: action: eval command: application:which_applications() cookie: secret node: alice@{{ inventory_hostname_short }} register: applications - debug: msg: "{{ applications.stdout_lines }} The module can also "intercept" I/O from the remote nodes via the Erlang `group leader`. Hope you find it useful. Cheers, Roberto http://roberto-aloi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Mon Oct 3 01:26:46 2016 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 3 Oct 2016 12:26:46 +1300 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> Message-ID: <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> On 1/10/16 2:25 AM, Donald Steven wrote: > Hi all, > > I can get '-include(' to work before the main() function, but I can't > get it to work *in* a function, either using a comma or a period at end. The syntax of Erlang does not allow this. An Erlang source file is a sequence of chunks each terminated by a full stop. A chunk is a function definition if it does not begin with "-"; if it does begin with "-" it's a declaration like -module or -export or a preprocessor directive like -include. These things *cannot* be mixed. Can you tell us what you are trying to achieve by doing this? From t6sn7gt@REDACTED Mon Oct 3 01:37:50 2016 From: t6sn7gt@REDACTED (Donald Steven) Date: Sun, 2 Oct 2016 19:37:50 -0400 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> Message-ID: <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> Thanks Richard. I had (alas) come to that conclusion. In c (or m4 or the like), I can include blocks of text or code quite freely. In this case, primarily as a matter of aesthetics, I wanted to off load some repetitive initializations to a file which could be included at the appropriate point, with a neat % comment on the side to keep my head straight. 
It's a big, ugly block in an otherwise elegant set of funs. I guess I'm stuck with it, as I'm reluctant to make things too complicated by running the whole thing through m4 first before the erlang pre-processor. I do wish it could be done though. (Dare I suggest a modest proposal?) On 10/2/2016 7:26 PM, Richard A. O'Keefe wrote: > > > On 1/10/16 2:25 AM, Donald Steven wrote: >> Hi all, >> >> I can get '-include(' to work before the main() function, but I can't >> get it to work *in* a function, either using a comma or a period at end. > > The syntax of Erlang does not allow this. > An Erlang source file is a sequence of chunks each terminated by > a full stop. > A chunk is a function definition if it does not begin with "-"; > if it does begin with "-" it's a declaration like -module or > -export or a preprocessor directive like -include. > > These things *cannot* be mixed. > > Can you tell us what you are trying to achieve by doing this? > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From ok@REDACTED Mon Oct 3 02:03:03 2016 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 3 Oct 2016 13:03:03 +1300 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> Message-ID: <7e9404af-6584-eab7-74a8-2b8f2b31dc76@cs.otago.ac.nz> On 3/10/16 12:37 PM, Donald Steven wrote: > Thanks Richard. I had (alas) come to that conclusion. > > In c (or m4 or the like), I can include blocks of text or code quite > freely. I used to maintain pdm4. > In this case, primarily as a matter of aesthetics, I wanted to > off load some repetitive initializations to a file which could be > included at the appropriate point, with a neat % comment on the side to > keep my head straight. Colour me stupid, but I don't see why you can't have % biginit.hrl biginit(Arguments...) -> big, ugly, block. % main.erl ... -include('biginit.erl'). main(...) -> biginit(...), rest of main. OK, so it's *two* lines instead of one line, but is that a problem? Failing that, what stops biginit being a single big ugly macro? Again, at the point of use there would be two lines, not one. I'm trying to think of anything I might want to include in the body of a function that couldn't be in another function, and failing. From rvirding@REDACTED Mon Oct 3 02:11:01 2016 From: rvirding@REDACTED (Robert Virding) Date: Mon, 3 Oct 2016 02:11:01 +0200 Subject: [erlang-questions] Type def and spec syntax In-Reply-To: <895950d3-3739-7b1f-7c5f-bad87436b84c@cs.ntua.gr> References: <895950d3-3739-7b1f-7c5f-bad87436b84c@cs.ntua.gr> Message-ID: I think you have reiterated what I said: what does this mean? I worked out myself that the first form: -type atom(X) :: list(X). works as the pre-defined types are not "reserved words" in any sense. BUT THIS IS NOT STATED IN THE DOCUMENTATION. I discovered it by looking at the what the parser output. 
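For anyone who wants to repeat that experiment, the shell shows the parsed form with nothing but stdlib calls (the attribute text is just the example from this thread):

    {ok, Tokens, _} = erl_scan:string("-type atom(X) :: list(X)."),
    {ok, Form} = erl_parse:parse_form(Tokens).
    %% Form is {attribute, _, type, {atom, TypeAst, [{var, _, 'X'}]}} or similar:
    %% an ordinary user-defined type named atom/1, nothing reserved about it.
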
So I continued to look in the parser to see if there were more things which are allowed but not documented and I found the second two (plus more). So my question was what do they mean? While my examples may be erroneous does this syntax have any possible meaning at all? You seem to imply that it doesn't. If not why is is there at all? Was someone just having a wild time chucking in whatever they could dream up? And it if can be used in a meaningful way why isn't it documented? I think that having syntax which can never legal is a great way to complicate things. Which we don't need. I quite agree that semantics is the important thing but we need syntax to express ourselves. And if the documentation doesn't cover all the legal syntax then all legal syntax might have a meaning so it is reasonable to ask what it means. So my suggestion is that the documentation should cover all legal type syntax. And that we should remove all syntax which can never be legal. Robert On 2 October 2016 at 20:42, Kostis Sagonas wrote: > On 10/02/2016 12:56 AM, Robert Virding wrote: > >> Is there any better documentation of type defs and specs than that which >> occurs in the Erlang Reference manual? If so where is it? >> > > In light of recent discussions on this mailing list, I wish that those who > complain about documentation took a bit of time to tell us which aspects of > the current documentation they find unsatisfactory and why. > > It's very difficult to know how to make something "better" if one does not > get told why it's not good enough as it is. > > > I am trying to define an equivalent type/spec syntax for LFE so I am >> going through the valid syntax then trying to work out what they mean. >> > > OK, here is a thing I try hard to teach students who take my compiler > courses: Do NOT mix up in your mind(s) syntax and semantics of a language. > They are certainly not the same thing; they may not even be related in any > way. > > So it's not particularly surprising that you cannot "work out what they > mean" by looking at the valid syntax of types and specs. > > > And there is a lot of strangeness from simple things like the predefined >> type names are 'reserved' so this is valid: >> >> -type atom(X) :: list(X). >> >> (whatever it means) to much more strange things like: >> >> -spec foo(Y) -> integer() when atom(Y). >> -spec foo(Y) -> integer() when atom(Y :: integer()). >> >> (whatever they mean) neither of which are mentioned in the docs. >> > > First of all, why should the above be mentioned in the reference manual, > especially if they are "strange"? > > Second, why do you find the first of these examples strange? Erlang has > been designed so that e.g. functions do not have just some alphanumeric > sequence of characters as a name but their name includes an arity. This > means that length/1 is reserved (you cannot redefine it), while length/3 is > not. > > In this prism, why do you find it strange that the atom/1 type is not? > (Only atom/0 is.) > > > As to what the above examples mean, well it's very simple: > > - The type declaration defines a polymorphic type called atom that is an > alias for the built-in polymorphic type list. Granted that it's a very > stupid name for this type, but there is nothing that forces good naming > convensions in Erlang. I can certainly define a length/3 function that > takes integer() arguments and returns a binary() of some sort. > > - The first foo/1 spec has no meaning because you cannot use the atom/1 > type as a subtype constraint. 
In fact, if you put this spec in a file and > try to compile it, you will get a warning, which is consistent with the > (very bad IMO) philosophy of the Erlang compiler to not refuse to compule > code which is damn stupid. If you use dialyzer on this spec, you will > discover that this tool is more sane as far as tolerating constructs which > have no meaning. You will get a dialyzer error in this case. > > - The second foo/1 spec is rejected even by the compiler. Have you > actually tried this and the compiler accepted it? > > > So is there a more complete and understandable documentation some where? >> > > Suggestions on how to make the current documentation more complete and > understandable are welcome! > > Kostis > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Mon Oct 3 02:24:47 2016 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 3 Oct 2016 13:24:47 +1300 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: <6aea8ee7-e398-4665-711d-6a43c8554150@cs.otago.ac.nz> On 1/10/16 11:37 PM, Pagayon, Ruel wrote: > 1. What is the most optimal key type for maps? I'm not sure that question has an answer. So much depends on your usage patterns. Even a little benchmark comparing lookup/update speeds for maps could be misleading because it wouldn't take into account the work done by your program in *getting* the keys to use. > 2. If there is little to no effect in performance (or resources in > general), as a convention, which is the best to use? Whatever is most appropriate for your immediate needs. If the frames proposal had been adopted, the answer would have been "only atoms work, so what's the question". But the frames proposal *wasn't* adopted, and one *point* of using maps is to be able to use as keys whatever you need to use as keys, without worrying too much about converting them to something else. This really is the most helpful response I can give. Today's machines are literally thousands of times faster than the machines I used as an undergraduate. The era of micro-optimisation is long gone. For a long time now the advice has been "make your program RIGHT before making it FAST." Optimise the human time it takes to get to a working program, then benchmark that, and when you know (a) that you HAVE a performance problem and (b) WHERE the performance is suffering, that's the time to try alternatives. OK, there's no point in premature pessimisation either. From t6sn7gt@REDACTED Mon Oct 3 02:31:57 2016 From: t6sn7gt@REDACTED (Donald Steven) Date: Sun, 2 Oct 2016 20:31:57 -0400 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <7e9404af-6584-eab7-74a8-2b8f2b31dc76@cs.otago.ac.nz> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> <7e9404af-6584-eab7-74a8-2b8f2b31dc76@cs.otago.ac.nz> Message-ID: <38a465df-e9c2-f3d3-5049-1ff50d6a074a@aim.com> Thanks. I'll have another look. On 10/2/2016 8:03 PM, Richard A. O'Keefe wrote: > > > On 3/10/16 12:37 PM, Donald Steven wrote: >> Thanks Richard. I had (alas) come to that conclusion. >> >> In c (or m4 or the like), I can include blocks of text or code quite >> freely. > > I used to maintain pdm4. 
> >> In this case, primarily as a matter of aesthetics, I wanted to >> off load some repetitive initializations to a file which could be >> included at the appropriate point, with a neat % comment on the side to >> keep my head straight. > > Colour me stupid, but I don't see why you can't have > > % biginit.hrl > > biginit(Arguments...) -> > big, > ugly, > block. > > > % main.erl > ... > -include('biginit.erl'). > > main(...) -> > biginit(...), > rest of main. > > OK, so it's *two* lines instead of one line, but is that a problem? > > Failing that, what stops biginit being a single big ugly macro? > Again, at the point of use there would be two lines, not one. > > I'm trying to think of anything I might want to include in the > body of a function that couldn't be in another function, and failing. > > --- This email has been checked for viruses by Avast antivirus software. https://www.avast.com/antivirus From ok@REDACTED Mon Oct 3 02:49:18 2016 From: ok@REDACTED (Richard A. O'Keefe) Date: Mon, 3 Oct 2016 13:49:18 +1300 Subject: [erlang-questions] Run-time error messages Message-ID: A student just asked me for help finding a bug in his Erlang code. This was the line responsible: ChosenTopic = lists:nth( rand:uniform(length(AvailableTopics-1)), AvailableTopics), ^^^^^^^^^^^^^^^^^ Of course the Dialyzer would spot this in a flash (AvailableTopics is a list), but the dialyzer hadn't been installed on the lab machines. The error message was {badarith,[{client,choose_topics,3,[{file,"client.erl"},{line,145}]}, {client,start,0,[{file,"client.erl"},{line,8}]}]} where {line,145} points to the beginning of the function where the error is, not even to the beginning of the clause. If the error message had shown even just which operation was involved ('-'), the student would have found the problem by himself. As it was, he was looking all over (especially at line 145). Since we're nearly at the end of the semester, my action items are - get the latest Erlang release installed by the start of next semester - ensure that the dialyzer is installed and the PLT built as part of this - teach Erlang type specifications and how to use the dialyzer next time I teach anything about Erlang. Even so, it would be nice if 'badarith' were less vague. From t6sn7gt@REDACTED Mon Oct 3 02:51:30 2016 From: t6sn7gt@REDACTED (Donald Steven) Date: Sun, 2 Oct 2016 20:51:30 -0400 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <7e9404af-6584-eab7-74a8-2b8f2b31dc76@cs.otago.ac.nz> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> <7e9404af-6584-eab7-74a8-2b8f2b31dc76@cs.otago.ac.nz> Message-ID: <706af4df-0687-abe2-1c20-9bac646e83bb@aim.com> Richard, The program is a musical composition. There are three modules: base, orbiter and composer. The 16 orbiter processes fly elliptical orbits and, when they're close, they 'resonate' and output musical material. The orbits have to be initialized, so you end up with this kind of sprawl: main(Args) -> io:format("~nbase.erl, version 1.13, 2 October 2016~n,... working ... "), Orbiter1 = spawn(orbiter, start, []), Orbiter1 ! 
{self(), #orbit{xaxis = 780.0, yaxis = 187.0, xorigin = 0.0, yorigin = 0.2, eccentricity = 0.7, rotation = 0.0, direction = ?ROTX, translation = 0.2, speed = 265}}, Orbiter2 = spawn(orbiter, start, []), Orbiter2 ! {self(), #orbit{xaxis = 171.0, yaxis = 112.0, xorigin = 0.1, yorigin = 0.3, eccentricity = 0.6, rotation = 0.1, direction = ?ROTY, translation = 0.3, speed = 111}}, Orbiter3 = spawn(orbiter, start, []), Orbiter3 ! {self(), #orbit{xaxis = 40.0, yaxis = 140.0, xorigin = 0.0, yorigin = 0.5, eccentricity = 0.5, rotation = 0.2, direction = ?ROTZ, translation = 0.4, speed = 63}}, ... and so on (up to 16), before getting to more attractive code such as: ==================================================================================================================================== baseloop( _, _, _ , _, _, ?DONE) -> ok; baseloop(SystemTime, OrbiterList, ComposerList, L, ?MAXORBITER, _ ) -> proximitytest(SystemTime, OrbiterList, ComposerList, lists:reverse(L), 1, ?NOTDONE), baseloop(SystemTime, OrbiterList, ComposerList, [], 1, ?NOTDONE); baseloop(SystemTime, OrbiterList, ComposerList, L, I, ?NOTDONE) -> Orbiter = lists:nth(I, OrbiterList), receive {Orbiter, {{X, Y, Z}}} -> ok end, L1 = [{X, Y, Z} | L], CurrentTime = os:system_time(?HUNDREDTHS) - SystemTime, if CurrentTime >= ?LENGTHOFPIECE -> stopcomposers(ComposerList, ?NUMTRACKS), baseloop(SystemTime, OrbiterList, ComposerList, L1, I + 1, ?DONE); true -> baseloop(SystemTime, OrbiterList, ComposerList, L1, I + 1, ?NOTDONE) end. ==================================================================================================================================== My thinking was that, if I move this all outside main(), I'll have to go through contortions to retrieve the pids (I trust that self() is available everywhere in the module and doesn't have to be passed explicitly from one fun to another). I apologize if all this seems naive. I'm was 'brought up' on c and am transitioning to erlang, and just an old man having some fun. Don On 10/02/2016 08:03 PM, Richard A. O'Keefe wrote: > > > On 3/10/16 12:37 PM, Donald Steven wrote: >> Thanks Richard. I had (alas) come to that conclusion. >> >> In c (or m4 or the like), I can include blocks of text or code quite >> freely. > > I used to maintain pdm4. > >> In this case, primarily as a matter of aesthetics, I wanted to >> off load some repetitive initializations to a file which could be >> included at the appropriate point, with a neat % comment on the side to >> keep my head straight. > > Colour me stupid, but I don't see why you can't have > > % biginit.hrl > > biginit(Arguments...) -> > big, > ugly, > block. > > > % main.erl > ... > -include('biginit.erl'). > > main(...) -> > biginit(...), > rest of main. > > OK, so it's *two* lines instead of one line, but is that a problem? > > Failing that, what stops biginit being a single big ugly macro? > Again, at the point of use there would be two lines, not one. > > I'm trying to think of anything I might want to include in the > body of a function that couldn't be in another function, and failing. > > From sdl.web@REDACTED Mon Oct 3 05:06:43 2016 From: sdl.web@REDACTED (Leo Liu) Date: Mon, 03 Oct 2016 11:06:43 +0800 Subject: [erlang-questions] Run-time error messages References: Message-ID: On 2016-10-03 13:49 +1300, Richard A. O'Keefe wrote: > Even so, it would be nice if 'badarith' were less vague. ?? From ok@REDACTED Mon Oct 3 05:14:11 2016 From: ok@REDACTED (Richard A. 
O'Keefe) Date: Mon, 3 Oct 2016 16:14:11 +1300 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <706af4df-0687-abe2-1c20-9bac646e83bb@aim.com> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> <7e9404af-6584-eab7-74a8-2b8f2b31dc76@cs.otago.ac.nz> <706af4df-0687-abe2-1c20-9bac646e83bb@aim.com> Message-ID: (1) Your initialisation code can return a tuple or list of process IDs. (2) Your initialisation code can store the process IDs in the process dictionary. Each process has its own dictionary. Self = self(), Orbiter1 = spawn(fun () -> orbiter:start(Self, data(1) end), ... Orbiter16 = spawn(fun () -> orbiter:start(Self, data(16) end), put(orbiter1, Orbiter1), ... put(orbiter16, Orbiter16), ... or even Self = self(), put(orbiters, [spawn(fun () -> orbiter:start(Self, Data) end) || Data <- orbiter_data()]) (3) You could use the (per-node) global process registry in the initialisation code register(orbiter1, spawn(fun () -> ... end)), ... and then use orbiter1!Message later. This seriously looks like code that could be in another module without any preprocessor use at all. From sdl.web@REDACTED Mon Oct 3 05:14:18 2016 From: sdl.web@REDACTED (Leo Liu) Date: Mon, 03 Oct 2016 11:14:18 +0800 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: (Max Lapshin's message of "Sun, 2 Oct 2016 19:31:38 +0300") References: Message-ID: On 2016-10-02 19:31 +0300, Max Lapshin wrote: > It is convenient to use from erlang, but we have to transform it to > JSON. I have a similar situation and mixing atoms and binaries can be confusing so I stick to binaries. Not much inconvenience besides having to type the 6 chars <<"">> (which is solvable at the IDE level). Leo From max.lapshin@REDACTED Mon Oct 3 08:12:55 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Mon, 3 Oct 2016 09:12:55 +0300 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: We have made transformation from json to erlang stable, so there are mixed atoms and binaries and it is not confusing in our case -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn@REDACTED Mon Oct 3 08:38:09 2016 From: bjorn@REDACTED (=?UTF-8?Q?Bj=C3=B6rn_Gustavsson?=) Date: Mon, 3 Oct 2016 08:38:09 +0200 Subject: [erlang-questions] Run-time error messages In-Reply-To: References: Message-ID: On Mon, Oct 3, 2016 at 2:49 AM, Richard A. O'Keefe wrote: > A student just asked me for help finding a bug in his > Erlang code. This was the line responsible: > > ChosenTopic = lists:nth( > rand:uniform(length(AvailableTopics-1)), AvailableTopics), > ^^^^^^^^^^^^^^^^^ Can you post a complete code example that I compile and see if there is a bug in line number handling that should be fixed? Normally the line number should point to the line where the exception happens. /Bjorn -- Bj?rn Gustavsson, Erlang/OTP, Ericsson AB From akat.metin@REDACTED Mon Oct 3 10:33:54 2016 From: akat.metin@REDACTED (Metin Akat) Date: Mon, 3 Oct 2016 11:33:54 +0300 Subject: [erlang-questions] Leex does not support ^ and $ in regexps, is there a workaround? 
Message-ID: Hi List, I am trying to implement a parser for Ledger (http://www.ledger-cli.org/) journal files. Here is an example of such a journal: 2015/10/12 Exxon Expenses:Auto:Gas 10.00 EUR Liabilities:MasterCard -10.00 EUR P 2015/11/21 02:18:02 USD 1.1 EUR Here is my the current implementation: https://github.com/loxs/ledgerparse My plan is to do lexing and parsing line by line and write my own combinator to generate larger structures (as is in the example, one transaction is more than one line) So I am now starting to implement the various possible lines and I am stuck at trying to implement the price definition (the line which is "P 2015/11/21 02:18:02 USD 1.1 EUR") Leex does not support ^ for beginning of line and I somehow need to instruct the lexer to parse the leading "P" as a "pricetag token" instead of just a "word" which can occur in various other places of the journal. So my question is: How do I tackle this? Do I just accept "P" as a WORD token and somehow instruct yecc to parse based on the WORD's value? Is it even possible to do? Any ideas are very welcome, thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn-egil.xb.dahlberg@REDACTED Mon Oct 3 10:54:30 2016 From: bjorn-egil.xb.dahlberg@REDACTED (=?UTF-8?Q?Bj=c3=b6rn-Egil_Dahlberg_XB?=) Date: Mon, 3 Oct 2016 10:54:30 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: Message-ID: <188bbe12-e759-a1ba-e883-b0a54f99519b@ericsson.com> On 10/02/2016 01:07 PM, Grzegorz Junka wrote: > > I don't think Ruel was asking about differences between maps and > records, only what datatype is optimal for map's keys? > > I was hoping someone from the OTP team will take on this question, but > from what I understand the key is always hashed using some C > functions. So, the shorter is the key the faster will be the hashing, > but the difference will be so small, that only noticeable on really > big data structures (like long lists/strings, binaries or deep data > structures, like dicts, where it has to be traversed, and by long I > mean a few dozens of bytes). > Maps were designed to handle all types as keys but it's best suited for literal keys, i.e. 'this_is_a_literal_key', "this_is_a_literal_key", <<"this_is_also_a_literal_key">>. It is true that comparisons are more expensive for larger keys, immediates (atoms and fixnums) are word comparisons, but for big maps this is not so problematic. The HAMT implementation only does one comparison on the key itself and that it is when it reaches the leaf, otherwise it uses the hash to traverses the hash value. Also, for literal keys the hashing is done at *load time* so it will only hash again when it that hash exhausted, typically this will happen if Maps are larger then 50000 pairs. For small maps there are more comparisons but the Map is small so who cares =) As for optimal keys .. it depends on your need. I would say atoms are optimal if you don't need to transcode the keys to atoms and back when you use them. Otherwise use the source type. // Bj?rn-Egil > Grzegorz > > > On 01/10/2016 19:24, Jesper Louis Andersen wrote: >> Hi, >> >> You have to define optimal. Do you want efficient lookup or do you >> want to save space? For static keys, using atoms has the advantage of >> being fast to compare and rather small (1 machine word). For small >> maps, lookup speeds are so quick I don't think it really matters too >> much if a record is marginally faster. 
I'd go for readability over >> efficiency in almost all situations. >> >> As for the record vs maps discussion: I tend to use an algebraic >> datatype instead, so I can avoid representing state which is not >> valid. In some circumstances, a map is fit for the representation. >> One weakness of a record is that if we define >> >> -record(state, { name, socket }). >> >> then we need to have #state { socket = undefined } at some point if >> we don't have a valid socket. With a map, we can simply initialize to >> the state #{ name => Name } and then when the socket is opened later, >> we can do State#{ socket => Sock } in the code and extend the state >> with a socket key. In a language such as OCaml, we could represent it >> as the type >> >> -type state() :: {unconnected, name()} | {connected, name(), socket()}. >> >> And I've done this in Erlang as well. It depends on the complexity of >> the representation. As for the goal: the goal is to build a state >> which is very hard to accidentally pattern match wrongly on in the >> code base. If your state has no socket, a match such as >> >> barney(#{ socket := Sock }) -> ... >> >> cannot match, even by accident. In turn, it forces the code to fail >> on the match itself, not later on when you try to do something with >> an undefined socket. >> >> >> On Sat, Oct 1, 2016 at 12:37 PM, Pagayon, Ruel > > wrote: >> >> Hi everyone, >> >> I'm just wondering (assuming the keys of my maps in my >> application is not dynamically generated): >> >> 1. What is the most optimal key type for maps? >> 2. If there is little to no effect in performance (or resources >> in general), as a convention, which is the best to use? >> >> Thank you in advance for your responses. >> >> Cheers, >> Ruel >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> >> >> >> >> -- >> J. >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.valim@REDACTED Mon Oct 3 11:06:18 2016 From: jose.valim@REDACTED (=?UTF-8?Q?Jos=C3=A9_Valim?=) Date: Mon, 3 Oct 2016 11:06:18 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: <188bbe12-e759-a1ba-e883-b0a54f99519b@ericsson.com> References: <188bbe12-e759-a1ba-e883-b0a54f99519b@ericsson.com> Message-ID: > > Also, for literal keys the hashing is done at *load time* so it will only > hash again when it that hash exhausted, typically this will happen if Maps > are larger then 50000 pairs. > Neat. Does this happen only on maps creation or also when updating maps with := or =>? What about matching on maps? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn-egil.xb.dahlberg@REDACTED Mon Oct 3 11:12:24 2016 From: bjorn-egil.xb.dahlberg@REDACTED (=?UTF-8?Q?Bj=c3=b6rn-Egil_Dahlberg_XB?=) Date: Mon, 3 Oct 2016 11:12:24 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: <188bbe12-e759-a1ba-e883-b0a54f99519b@ericsson.com> Message-ID: On 10/03/2016 11:06 AM, Jos? 
Valim wrote: > > Also, for literal keys the hashing is done at *load time* so it > will only hash again when it that hash exhausted, typically this > will happen if Maps are larger then 50000 pairs. > > > Neat. Does this happen only on maps creation or also when updating > maps with := or =>? What about matching on maps? Any get_map_element(s) so matching and lookups with literals for sure. We haven't done it for updating or creation but lookups are far more common. Also, not for BIFs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjorn-egil.xb.dahlberg@REDACTED Mon Oct 3 11:19:55 2016 From: bjorn-egil.xb.dahlberg@REDACTED (=?UTF-8?Q?Bj=c3=b6rn-Egil_Dahlberg_XB?=) Date: Mon, 3 Oct 2016 11:19:55 +0200 Subject: [erlang-questions] Best Practice in Map keys: Atoms or Binaries (or Strings)? In-Reply-To: References: <188bbe12-e759-a1ba-e883-b0a54f99519b@ericsson.com> Message-ID: <32946cc9-cc0c-9fa4-a313-4a477dcdef7b@ericsson.com> On 10/03/2016 11:12 AM, Bj?rn-Egil Dahlberg XB wrote: > On 10/03/2016 11:06 AM, Jos? Valim wrote: >> >> Also, for literal keys the hashing is done at *load time* so it >> will only hash again when it that hash exhausted, typically this >> will happen if Maps are larger then 50000 pairs. >> >> >> Neat. Does this happen only on maps creation or also when updating >> maps with := or =>? What about matching on maps? > > Any get_map_element(s) so matching and lookups with literals for sure. > We haven't done it for updating or creation but lookups are far more > common. Also, not for BIFs. And ofc, if everything are literals at compile time the map is constructed at compile time, not load time or run time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From t6sn7gt@REDACTED Mon Oct 3 11:28:28 2016 From: t6sn7gt@REDACTED (Donald Steven) Date: Mon, 3 Oct 2016 05:28:28 -0400 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> <7e9404af-6584-eab7-74a8-2b8f2b31dc76@cs.otago.ac.nz> <706af4df-0687-abe2-1c20-9bac646e83bb@aim.com> Message-ID: <368ec613-145f-8416-fba5-689cff681b7c@aim.com> Thanks Richard and Jeff, much appreciated. On 10/02/2016 11:14 PM, Richard A. O'Keefe wrote: > (1) Your initialisation code can return a tuple or list of process IDs. > > (2) Your initialisation code can store the process IDs in the > process dictionary. Each process has its own dictionary. > > Self = self(), > Orbiter1 = spawn(fun () -> orbiter:start(Self, data(1) end), > ... > Orbiter16 = spawn(fun () -> orbiter:start(Self, data(16) end), > put(orbiter1, Orbiter1), > ... > put(orbiter16, Orbiter16), > ... > or even > > Self = self(), > put(orbiters, [spawn(fun () -> orbiter:start(Self, Data) end) > || Data <- orbiter_data()]) > > (3) You could use the (per-node) global process registry in the > initialisation code > > register(orbiter1, spawn(fun () -> ... end)), > ... > > and then use orbiter1!Message later. > > This seriously looks like code that could be in another module > without any preprocessor use at all. 
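A sketch of the data-driven shape Richard is suggesting, reusing the #orbit{} record, the ?ROT* macros and the first two orbits from the earlier post (orbiter_data/0 and start_orbiters/0 are invented names, not code from the thread):

    %% All sixteen orbit definitions live in one plain data function ...
    orbiter_data() ->
        [#orbit{xaxis = 780.0, yaxis = 187.0, xorigin = 0.0, yorigin = 0.2,
                eccentricity = 0.7, rotation = 0.0, direction = ?ROTX,
                translation = 0.2, speed = 265},
         #orbit{xaxis = 171.0, yaxis = 112.0, xorigin = 0.1, yorigin = 0.3,
                eccentricity = 0.6, rotation = 0.1, direction = ?ROTY,
                translation = 0.3, speed = 111}
         %% ... one record per orbiter, up to 16
        ].

    %% ... and main/1 shrinks to a comprehension that spawns them all and
    %% returns the pids, so nothing is repeated sixteen times.
    start_orbiters() ->
        Self = self(),
        [begin
             Pid = spawn(orbiter, start, []),
             Pid ! {Self, Orbit},
             Pid
         end || Orbit <- orbiter_data()].
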
> From raimo+erlang-questions@REDACTED Mon Oct 3 11:45:15 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Mon, 3 Oct 2016 11:45:15 +0200 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <287809764.7376616.1475245513197@mail.yahoo.com> References: <917841123.4519860.1474824739386.ref@mail.yahoo.com> <917841123.4519860.1474824739386@mail.yahoo.com> <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> Message-ID: <20161003094515.GA65154@erix.ericsson.se> On Fri, Sep 30, 2016 at 02:25:13PM +0000, Vans S wrote: > The reasoning behind this is because I am using a?handle_event_function callback mode. I find this is more flexible. ?A standard state machine can be in 1 state at 1 time. > > But things are more complex now and the desire for a state machine to be in multiple states at 1 time is crucial. ? > > For example you can be running and screaming. Not only running then stopping to transition to screaming, then stop screaming and start running again. > Maybe this is beyond the scope of a state machine, excuse if my terminology is off. I'd say that your state machine can only be in one state at any given time but you have a complex state as in multi dimensional and each combination of the different dimensions counts as a different state. So you have 4 discrete states: {not screaming, not running}, {not screaming, running}, {screaming, not running} and {screaming, running} but wants to reason about the state as one complex term: {IsScreaming, IsRunning}. > > If the implementation is quite simple using timers, I would not mind putting in the pull request. ?I just came here to discuss about it and see if it is something that fits. > > The rational is using erlang:send_after/3 is not enough in more complex cases, we need to also be stopping and starting new timers. ? Does not erlang:start_timer/3,4 work in this case? You can have any number of such timers simultaneously running just as for erlang:send_after/3,4. > > An example of this is say if we are in a running state and every 100 ms we timeout, when we timeout we take an extra step and update our position in the world, also we determine how fast we are running and maybe the next step will be in 80ms now. > > Now while we are running, we also got a moral boost by seeing a well lit street, so in 5000ms our moral boost will expire, and will timeout. > > Also we start screaming due to panic, every 500ms we send a scream to all nearby recipients. > > Maybe this is all beyond the scope of a state machine. ?But for me it seems the only change required to support this is allowing multiple timers, and Erlang always had the design philosophy of "do what works" vs "do what is academically sound". The question is if it is possible to create a timer concept in gen_statem that is easier to use and still as flexible as erlang:start_timer/3,4 + erlang:cancel_timer/1,2. With those primitives you have to handle the TimerRef yourself. Here is an attempt: You return [{named_timeout,Time,Name}] and will get an event named_timeout,Name. If you return [{named_timeout,NewTime,Name}] with the Name of a running timer it will restart with NewTime. Therefore you can cancel it with [{named_timeout,infinity,Name}]. Any number of these can run simultaneously and you distinguish them by Name. Or is the benefit of this i.e just to hide the TimerRef behind the timer Msg/Name too small to be worthy of implementing? 
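For comparison, this is roughly what the manual TimerRef bookkeeping looks like in a handle_event_function callback today; it is only a sketch, reusing the 15 s ping and 120 s idle values from the websocket example earlier in the thread, and the data layout and helpers such as send_ping/0 and handle_client/3 are invented for illustration:

    %% The ping timer fires every 15 s; matching the stored Ref guards
    %% against a stale message from a timer that was already replaced.
    handle_event(info, {timeout, Ref, ping}, _State, #{ping_timer := Ref} = Data) ->
        send_ping(),
        NewRef = erlang:start_timer(15000, self(), ping),
        {keep_state, Data#{ping_timer := NewRef}};
    %% Any client message resets the 120 s idle timer: cancel the old one,
    %% start a new one, and remember the new Ref in the data.
    handle_event(info, {client_msg, Msg}, State, #{idle_timer := OldRef} = Data) ->
        erlang:cancel_timer(OldRef),
        NewRef = erlang:start_timer(120000, self(), idle_timeout),
        handle_client(Msg, State, Data#{idle_timer := NewRef}).

A named_timeout (or state_timeout) action would let the engine carry those refs instead of the callback.
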
/ Raimo > > > > > On Thursday, September 29, 2016 9:30 AM, Raimo Niskanen wrote: > > > After giving this a second thought i wonder if a single state timer would > be a desired feature and enough in your case. > > Today we have an event timeout, which is seldom useful since often you are > in one state and wait for something specific while stray events that either > are ignored or immediately replied to passes by.? The event timeout is > reset for every stray event. > > What I think would cover many use cases is a single state timeout.? It > would be cancelled if you change states.? If you set it again the running > timer is cancelled and a new is started.? There would only need to be one > such timer so it is roughly as easy to implement as the event timeout. > > There would be no way to cancel it other than changing states. > It would be started with an action {state_timeout,T,Msg}. > > We should keep the old {timeout,T,Msg} since it is inherited from gen_fsm > and has some use cases. > > What do you think? > > / Raimo > > > > On Mon, Sep 26, 2016 at 04:56:25PM +0200, Raimo Niskanen wrote: > > On Sun, Sep 25, 2016 at 05:32:19PM +0000, Vans S wrote: > > > Learning the new gen_statem made me desire for one extra feature. > > > > > > Say you have a common use case of a webserver /w websockets, you have a general connection timeout of 120s, if no messages are received in 120s it means the socket is in an unknown state, and should be closed. > > > > > > So you write your returns like this anytime the client sends you a message: > > > > > > {next_state, NextState, NewData, {timeout, 120*1000, idle_timeout}} > > > > > > Now if the client does not send any messages in 120 seconds, we will get a?idle_timeout?message sent to the gen_statem process. > > > > > > Awesome. > > > > > > But enter a little complexity, enter websockets. > > > > > > Now we need to send a ping from the gen_statem every 15s to the client, but we also need to consider if we did not get any messages from the client in 120s, we are in unknown state and should terminate the connection. > > > > > > So now we are just itching to do this on init: > > > > > > {ok, initial_state, Data, [? ? ? ? {timeout, 120*1000,?idle_timeout},? ? ? ? {timeout, 15*1000, websocket_ping} > > > ????]} > > > > > > This way we do not need to manage our own timers using erlang:send_after. ?timer module is not even a consideration due to how inefficient it is at scaling. > > > > > > But of course we cannot do this, the latest timeout will always override any previous. > > > > > > What do you think? > > > > Your use case is in the middle ground between the existing event timeout > > and using erlang:start_timer/4,3, and is a special case of using > > erlang:start_timer/4,3. > > > > The existing {timeout,T,Msg} is an *event* timeout, so you get either an > > event or the timeout.? The timer is cancelled by the first event. > > This semantics is easy to reason about and has got a fairly simple > > implementation in the state machine engine partly since it only needs > > to store one timer ref. > > > > It seems you could use a state timeout, i.e the timeout is cancelled when > > the state changes.? This would require the state machine engine to hold any > > number of timer refs and cancel all during a state change. > > > > This semantics is subtly similar to the current event timeout.? It would > > need a new option, e.g {state_timeout,T,Msg}. 
> > > > The {state_timeout,_,_} semantics would be just a special case of using > > erlang:start_timer/4,3, keep your timer ref in the server state and cancel > > it when needed, since in the general case you might want to cancel the > > timer at some other state change or maybe not a state change but an event. > > > > So the question is if a {state_timeout,_,_} feature that auto cancels the > > timer at the first state change is so useful that it is worthy of being > > implemented?? It is not _that_ much code that is needed to store > > a timer ref and cancel the timer started with erlang:start_timer/4,3, > > and it is more flexible. > > > > I implemented the {timeout,_,_} feature just so make it easy to port from > > gen_fsm.? Otherwise I thought that using explicit timers was easy enough. > > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From mononcqc@REDACTED Mon Oct 3 12:43:18 2016 From: mononcqc@REDACTED (Fred Hebert) Date: Mon, 3 Oct 2016 06:43:18 -0400 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> Message-ID: <20161003104317.GB5083@fhebert-ltm2.internal.salesforce.com> On 10/02, Donald Steven wrote: >Thanks Richard. I had (alas) come to that conclusion. > >In c (or m4 or the like), I can include blocks of text or code quite >freely. In this case, primarily as a matter of aesthetics, I wanted >to off load some repetitive initializations to a file which could be >included at the appropriate point, with a neat % comment on the side >to keep my head straight. It's a big, ugly block in an otherwise >elegant set of funs. I guess I'm stuck with it, as I'm reluctant to >make things too complicated by running the whole thing through m4 >first before the erlang pre-processor. I do wish it could be done >though. (Dare I suggest a modest proposal?) > Another approach, although a bit more painful to make work and correlate with all scope would be to use macros: -define(CODE_SNIPPET, begin end). YOu can then -include("myheader.hrl"). and just call ?CODE_SNIPPET inline. Richard O'Keefe's solutions would be cleaner in the long run (and I definitely recommend running with that option), but this could be an immediate workaround for what you had in mind. From t6sn7gt@REDACTED Mon Oct 3 13:01:48 2016 From: t6sn7gt@REDACTED (Donald Steven) Date: Mon, 3 Oct 2016 07:01:48 -0400 Subject: [erlang-questions] Newbie question: function include/1 undefined In-Reply-To: <20161003104317.GB5083@fhebert-ltm2.internal.salesforce.com> References: <344887da-4b80-a5c2-de48-d76a7285649f@aim.com> <3e605a3a-4929-56a7-16cf-52279a3c7da1@aim.com> <4a83baff-278c-ed0e-a7dd-a67412b66b93@aim.com> <13da8bf0-c794-d5a0-9a30-7b9e5e9f72c6@aim.com> <529a87be-d137-14be-68cb-bdcd231ad64d@aim.com> <7cc9c353-c8c3-f3ff-de34-cc1db3cb72d3@cs.otago.ac.nz> <85fb85df-6dfa-dc5f-1678-c00414d28dd5@aim.com> <20161003104317.GB5083@fhebert-ltm2.internal.salesforce.com> Message-ID: <18b18cd9-54d7-c8a1-6e84-9bbf79e6f0d1@aim.com> Thanks Fred, I'll see if this works for me here. On 10/03/2016 06:43 AM, Fred Hebert wrote: > On 10/02, Donald Steven wrote: >> Thanks Richard. I had (alas) come to that conclusion. 
>> >> In c (or m4 or the like), I can include blocks of text or code quite >> freely. In this case, primarily as a matter of aesthetics, I wanted >> to off load some repetitive initializations to a file which could be >> included at the appropriate point, with a neat % comment on the side >> to keep my head straight. It's a big, ugly block in an otherwise >> elegant set of funs. I guess I'm stuck with it, as I'm reluctant to >> make things too complicated by running the whole thing through m4 >> first before the erlang pre-processor. I do wish it could be done >> though. (Dare I suggest a modest proposal?) >> > > Another approach, although a bit more painful to make work and > correlate with all scope would be to use macros: > > -define(CODE_SNIPPET, begin end). > > YOu can then -include("myheader.hrl"). and just call ?CODE_SNIPPET > inline. > > Richard O'Keefe's solutions would be cleaner in the long run (and I > definitely recommend running with that option), but this could be an > immediate workaround for what you had in mind. From mononcqc@REDACTED Mon Oct 3 13:16:29 2016 From: mononcqc@REDACTED (Fred Hebert) Date: Mon, 3 Oct 2016 07:16:29 -0400 Subject: [erlang-questions] Type def and spec syntax In-Reply-To: References: <895950d3-3739-7b1f-7c5f-bad87436b84c@cs.ntua.gr> Message-ID: <20161003111628.GC5083@fhebert-ltm2.internal.salesforce.com> On 10/03, Robert Virding wrote: >And it if can be used in a >meaningful way why isn't it documented? I think that having syntax which >can never legal is a great way to complicate things. Which we don't need. The three samples for syntax were: -type atom(X) :: list(X). -spec foo(Y) -> integer() when atom(Y). -spec foo(Y) -> integer() when atom(Y :: integer()). So here's a valid way to use them that is useful, at the very least in some cases: -module(mod). -export([main/0]). -type collection(K, V) :: #{K => V} | dict:dict(K, V) | gb_trees:tree(K, V). -spec lookup(K,F,C) -> V when C :: collection(K, V), F :: fun((K, C) -> V). lookup(K, F, C) -> F(K, C). main() -> C = maps:from_list([{a,1},{b,2},{c,3}]), lookup(b, fun maps:get/2, C), lookup("bad ignored type", fun maps:get/2, C), lookup(b, C, fun maps:get/2). This module defines an accessor function using the parametrized types and 'when' parts of typespecs syntax. Sadly the analysis is currently not good enough to figure out that "bad ignored type" is not an acceptable value of `K' for the lookup (it appears dialyzer does not do the parametrized inference this deep), but in the third call, it can definitely infer that I swapped the collection and the accessor function: mod.erl:13: Function main/0 has no local return mod.erl:17: The call mod:lookup('b',C::#{'a'=>1, 'b'=>2, 'c'=>3},fun((_,_) -> any())) does not have a term of type dict:dict(_,_) | gb_trees:tree(_,_) | map() (with opaque subterms) as 3rd argument Had I otherwise defined my spec as: -spec lookup(K,F,C) -> V when K :: atom(), C :: collection(K, V), F :: fun((K, C) -> V). Which adds a constraint that keys must be atoms, Then dialyzer would have caught my error: mod.erl:17: The call mod:lookup("bad ignored type",fun((_,_) -> any()),C::#{'a'=>1, 'b'=>2, 'c'=>3}) breaks the contract (K,F,C) -> V when K :: atom(), C :: collection(K,V), F :: fun((K,collection(K,V)) -> V) If you're using rebar3, you should also be getting colored rebar3 output, which does make the error a lot easier to see by putting the bad value in red and the piece of contract it breaks in green*: http://i.imgur.com/AjNgVCB.png Regards, Fred. 
* we should find a way to somehow parametrize the colors to help * colorblind users I guess. From jesper.louis.andersen@REDACTED Mon Oct 3 16:24:12 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 03 Oct 2016 14:24:12 +0000 Subject: [erlang-questions] Leex does not support ^ and $ in regexps, is there a workaround? In-Reply-To: References: Message-ID: On Mon, Oct 3, 2016 at 10:34 AM Metin Akat wrote: > > > P 2015/11/21 02:18:02 USD 1.1 EUR > > > So my question is: How do I tackle this? Do I just accept "P" as a WORD > token and somehow instruct yecc to parse based on the WORD's value? Is it > even possible to do? > > (This is loosely from memory) The reason ^ and $ are not implemented is because they are never needed in an LALR(1) parser/scanner construction. We want the above line to be scanned into [{cmd, "P"}, {int, 2015}, '/', {int, 11}, '/', {int, 21}, {int, 2}, ':', ... {id, "USD"}, {float, 1.1}, {id, "EUR}] Then we can define a yecc-grammar which can turn these into meaningful constructions: Command -> Cmd Date Time Currency Amount Currency : {command, $1, $2, $3, {$4, $5, $5}}. Date -> Year '/' Month '/' Date : {$1, $3, $5}. Year -> int : $1. Month -> int : $1. ... Sometimes, the indentation in the file does matter. But then it can be smarter to code the lexer by hand or pre-pass over the input file and insert markers for newlines etc. In other words, give structure to the input before actually parsing it. This is used in many languages which uses indentation-based-scope: a pre-pass inserts the scope markers based on newlines and indentation. Then the scanner takes over and handles the stream which has structure. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 3 16:30:32 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 3 Oct 2016 14:30:32 +0000 (UTC) Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <20161003094515.GA65154@erix.ericsson.se> References: <917841123.4519860.1474824739386.ref@mail.yahoo.com> <917841123.4519860.1474824739386@mail.yahoo.com> <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> Message-ID: <2095514048.8665120.1475505032853@mail.yahoo.com> > >? > > If the implementation is quite simple using timers, I would not mind putting in the pull request. ?I just came here to discuss about it and see if it is something that fits. > >? > > The rational is using erlang:send_after/3 is not enough in more complex cases, we need to also be stopping and starting new timers. ? > > Does not erlang:start_timer/3,4 work in this case?? You can have any number > of such timers simultaneously running just as for erlang:send_after/3,4. The example I gave was poor I forgot one key point and that was a case where cancel_timer was required. Say every scream resets your running. ?So every 500ms a scream event would return [{timeout, 500,scream }, {timeout, 100, running_step}]. Using erlang:send_after we would have to cancel_timer on the running_step, also I am not sure how timers work, but if a function took long would a timer msg arrive in the mailbox? Thus cancel_timer would not work ideally? Say if the erlang:send_after is in 100ms, and we spend 200ms in a function then at the end of that function we call cancel_timer. > Here is an attempt: >? > You return [{named_timeout,Time,Name}] and will get an event >?named_timeout,Name.? 
If you return [{named_timeout,NewTime,Name}] with the >?Name of a running timer it will restart with NewTime.? Therefore you can >?cancel it with [{named_timeout,infinity,Name}].? Any number of these can >?run simultaneously and you distinguish them by Name. >? >?Or is the benefit of this i.e just to hide the TimerRef behind the timer >?Msg/Name too small to be worthy of implementing? I think the benefit is two fold, first is not managing cancel timer, and second is the code would be cleaner to read and reason about. 'infinity' as the timeout perhaps is misleading, could a new atom be introduced for this, perhaps cancel? ? {timeout, cancel, scream}. Crash if the scream timer is not initiated. If we cancel a timer that has to exist, we are in an undefined state. 'infinity' is fine for practicality purposes or the maximum value of a 56 bit integer (if that is the correct value before venturing into big Erlang integers). What is the benefit of?{named_timeout,Time,Name} vs the current?{timeout,Time,EventName}? On Monday, October 3, 2016 5:45 AM, Raimo Niskanen wrote: On Fri, Sep 30, 2016 at 02:25:13PM +0000, Vans S wrote: > The reasoning behind this is because I am using a?handle_event_function callback mode. I find this is more flexible. ?A standard state machine can be in 1 state at 1 time. > > But things are more complex now and the desire for a state machine to be in multiple states at 1 time is crucial. ? > > For example you can be running and screaming. Not only running then stopping to transition to screaming, then stop screaming and start running again. > Maybe this is beyond the scope of a state machine, excuse if my terminology is off. I'd say that your state machine can only be in one state at any given time but you have a complex state as in multi dimensional and each combination of the different dimensions counts as a different state.? So you have 4 discrete states: {not screaming, not running}, {not screaming, running}, {screaming, not running} and {screaming, running} but wants to reason about the state as one complex term: {IsScreaming, IsRunning}. > > If the implementation is quite simple using timers, I would not mind putting in the pull request. ?I just came here to discuss about it and see if it is something that fits. > > The rational is using erlang:send_after/3 is not enough in more complex cases, we need to also be stopping and starting new timers. ? Does not erlang:start_timer/3,4 work in this case?? You can have any number of such timers simultaneously running just as for erlang:send_after/3,4. > > An example of this is say if we are in a running state and every 100 ms we timeout, when we timeout we take an extra step and update our position in the world, also we determine how fast we are running and maybe the next step will be in 80ms now. > > Now while we are running, we also got a moral boost by seeing a well lit street, so in 5000ms our moral boost will expire, and will timeout. > > Also we start screaming due to panic, every 500ms we send a scream to all nearby recipients. > > Maybe this is all beyond the scope of a state machine. ?But for me it seems the only change required to support this is allowing multiple timers, and Erlang always had the design philosophy of "do what works" vs "do what is academically sound". The question is if it is possible to create a timer concept in gen_statem that is easier to use and still as flexible as erlang:start_timer/3,4 + erlang:cancel_timer/1,2. With those primitives you have to handle the TimerRef yourself. 
Here is an attempt: You return [{named_timeout,Time,Name}] and will get an event named_timeout,Name.? If you return [{named_timeout,NewTime,Name}] with the Name of a running timer it will restart with NewTime.? Therefore you can cancel it with [{named_timeout,infinity,Name}].? Any number of these can run simultaneously and you distinguish them by Name. Or is the benefit of this i.e just to hide the TimerRef behind the timer Msg/Name too small to be worthy of implementing? / Raimo > > >? > >? ? On Thursday, September 29, 2016 9:30 AM, Raimo Niskanen wrote: >? > >? After giving this a second thought i wonder if a single state timer would > be a desired feature and enough in your case. > > Today we have an event timeout, which is seldom useful since often you are > in one state and wait for something specific while stray events that either > are ignored or immediately replied to passes by.? The event timeout is > reset for every stray event. > > What I think would cover many use cases is a single state timeout.? It > would be cancelled if you change states.? If you set it again the running > timer is cancelled and a new is started.? There would only need to be one > such timer so it is roughly as easy to implement as the event timeout. > > There would be no way to cancel it other than changing states. > It would be started with an action {state_timeout,T,Msg}. > > We should keep the old {timeout,T,Msg} since it is inherited from gen_fsm > and has some use cases. > > What do you think? > > / Raimo > > > > On Mon, Sep 26, 2016 at 04:56:25PM +0200, Raimo Niskanen wrote: > > On Sun, Sep 25, 2016 at 05:32:19PM +0000, Vans S wrote: > > > Learning the new gen_statem made me desire for one extra feature. > > > > > > Say you have a common use case of a webserver /w websockets, you have a general connection timeout of 120s, if no messages are received in 120s it means the socket is in an unknown state, and should be closed. > > > > > > So you write your returns like this anytime the client sends you a message: > > > > > > {next_state, NextState, NewData, {timeout, 120*1000, idle_timeout}} > > > > > > Now if the client does not send any messages in 120 seconds, we will get a?idle_timeout?message sent to the gen_statem process. > > > > > > Awesome. > > > > > > But enter a little complexity, enter websockets. > > > > > > Now we need to send a ping from the gen_statem every 15s to the client, but we also need to consider if we did not get any messages from the client in 120s, we are in unknown state and should terminate the connection. > > > > > > So now we are just itching to do this on init: > > > > > > {ok, initial_state, Data, [? ? ? ? {timeout, 120*1000,?idle_timeout},? ? ? ? {timeout, 15*1000, websocket_ping} > > > ????]} > > > > > > This way we do not need to manage our own timers using erlang:send_after. ?timer module is not even a consideration due to how inefficient it is at scaling. > > > > > > But of course we cannot do this, the latest timeout will always override any previous. > > > > > > What do you think? > > > > Your use case is in the middle ground between the existing event timeout > > and using erlang:start_timer/4,3, and is a special case of using > > erlang:start_timer/4,3. > > > > The existing {timeout,T,Msg} is an *event* timeout, so you get either an > > event or the timeout.? The timer is cancelled by the first event. 
> > This semantics is easy to reason about and has got a fairly simple > > implementation in the state machine engine partly since it only needs > > to store one timer ref. > > > > It seems you could use a state timeout, i.e the timeout is cancelled when > > the state changes.? This would require the state machine engine to hold any > > number of timer refs and cancel all during a state change. > > > > This semantics is subtly similar to the current event timeout.? It would > > need a new option, e.g {state_timeout,T,Msg}. > > > > The {state_timeout,_,_} semantics would be just a special case of using > > erlang:start_timer/4,3, keep your timer ref in the server state and cancel > > it when needed, since in the general case you might want to cancel the > > timer at some other state change or maybe not a state change but an event. > > > > So the question is if a {state_timeout,_,_} feature that auto cancels the > > timer at the first state change is so useful that it is worthy of being > > implemented?? It is not _that_ much code that is needed to store > > a timer ref and cancel the timer started with erlang:start_timer/4,3, > > and it is more flexible. > > > > I implemented the {timeout,_,_} feature just so make it easy to port from > > gen_fsm.? Otherwise I thought that using explicit timers was easy enough. > > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From akat.metin@REDACTED Mon Oct 3 16:37:24 2016 From: akat.metin@REDACTED (Metin Akat) Date: Mon, 3 Oct 2016 17:37:24 +0300 Subject: [erlang-questions] Leex does not support ^ and $ in regexps, is there a workaround? In-Reply-To: References: Message-ID: [{cmd, "P"}, {int, 2015}, '/', {int, 11}, '/', {int, 21}, {int, 2}, ':', ... {id, "USD"}, {float, 1.1}, {id, "EUR}] In this case, how does your lexer know to parse the "P" to {cmd, "P} and the "EUR" to {id, "EUR"}? The only way I can think of is to check if the P is in the beginning of the line (which would totally suffice) Otherwise, yeah... if I am to write my own lexer... then I guess my whole question is pointless. On Mon, Oct 3, 2016 at 5:24 PM, Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > > > On Mon, Oct 3, 2016 at 10:34 AM Metin Akat wrote: > >> >> >> P 2015/11/21 02:18:02 USD 1.1 EUR >> >> >> So my question is: How do I tackle this? Do I just accept "P" as a WORD >> token and somehow instruct yecc to parse based on the WORD's value? Is it >> even possible to do? >> >> > (This is loosely from memory) > > The reason ^ and $ are not implemented is because they are never needed in > an LALR(1) parser/scanner construction. We want the above line to be > scanned into > > [{cmd, "P"}, {int, 2015}, '/', {int, 11}, '/', {int, 21}, {int, 2}, ':', > ... > {id, "USD"}, {float, 1.1}, {id, "EUR}] > > Then we can define a yecc-grammar which can turn these into meaningful > constructions: > > Command -> Cmd Date Time Currency Amount Currency > : {command, $1, $2, $3, {$4, $5, $5}}. > > Date -> Year '/' Month '/' Date : {$1, $3, $5}. > Year -> int : $1. > Month -> int : $1. > ... > > Sometimes, the indentation in the file does matter. But then it can be > smarter to code the lexer by hand or pre-pass over the input file and > insert markers for newlines etc. In other words, give structure to the > input before actually parsing it. 
This is used in many languages which uses > indentation-based-scope: a pre-pass inserts the scope markers based on > newlines and indentation. Then the scanner takes over and handles the > stream which has structure. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Mon Oct 3 16:46:06 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 03 Oct 2016 14:46:06 +0000 Subject: [erlang-questions] Leex does not support ^ and $ in regexps, is there a workaround? In-Reply-To: References: Message-ID: Just parse it as {id, "P"} and then use the parser to figure out if this is valid in that position. In the format you may want to keep the newlines explicit in the tokenized output since it seems to be significant. In some programs you have a set of valid keywords, in which case you can write a function: keyword("P") -> {cmd, "P"}; keyword("U") -> {cmd, "U"}; ... keyword(ID) -> {id, ID}. but note that this means that P and U are really only occurring in the input as special markers and have no other way to occur. You often use this in the situation where you have a construction such as 'if...then...else' in a typical programming language: you want those parsed specially, not as general identifiers (i.e., variables and other stuff). On Mon, Oct 3, 2016 at 4:37 PM Metin Akat wrote: > [{cmd, "P"}, {int, 2015}, '/', {int, 11}, '/', {int, 21}, {int, 2}, ':', > ... > {id, "USD"}, {float, 1.1}, {id, "EUR}] > > In this case, how does your lexer know to parse the "P" to {cmd, "P} and > the "EUR" to {id, "EUR"}? The only way I can think of is to check if the P > is in the beginning of the line (which would totally suffice) > > Otherwise, yeah... if I am to write my own lexer... then I guess my whole > question is pointless. > > On Mon, Oct 3, 2016 at 5:24 PM, Jesper Louis Andersen < > jesper.louis.andersen@REDACTED> wrote: > > > > On Mon, Oct 3, 2016 at 10:34 AM Metin Akat wrote: > > > > P 2015/11/21 02:18:02 USD 1.1 EUR > > > So my question is: How do I tackle this? Do I just accept "P" as a WORD > token and somehow instruct yecc to parse based on the WORD's value? Is it > even possible to do? > > > (This is loosely from memory) > > The reason ^ and $ are not implemented is because they are never needed in > an LALR(1) parser/scanner construction. We want the above line to be > scanned into > > [{cmd, "P"}, {int, 2015}, '/', {int, 11}, '/', {int, 21}, {int, 2}, ':', > ... > {id, "USD"}, {float, 1.1}, {id, "EUR}] > > Then we can define a yecc-grammar which can turn these into meaningful > constructions: > > Command -> Cmd Date Time Currency Amount Currency > : {command, $1, $2, $3, {$4, $5, $5}}. > > Date -> Year '/' Month '/' Date : {$1, $3, $5}. > Year -> int : $1. > Month -> int : $1. > ... > > Sometimes, the indentation in the file does matter. But then it can be > smarter to code the lexer by hand or pre-pass over the input file and > insert markers for newlines etc. In other words, give structure to the > input before actually parsing it. This is used in many languages which uses > indentation-based-scope: a pre-pass inserts the scope markers based on > newlines and indentation. Then the scanner takes over and handles the > stream which has structure. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
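For illustration only, a minimal leex sketch of that keyword/1 idea might look like the following; the .xrl layout, the token shapes and the extra "U" command are assumptions made up for this sketch, not something given in the thread:

Definitions.
WS = [\s\t]

Rules.
[A-Za-z]+       : {token, keyword(TokenChars, TokenLine)}.
[0-9]+\.[0-9]+  : {token, {float, TokenLine, list_to_float(TokenChars)}}.
[0-9]+          : {token, {int, TokenLine, list_to_integer(TokenChars)}}.
/               : {token, {'/', TokenLine}}.
:               : {token, {':', TokenLine}}.
\n              : {token, {newline, TokenLine}}.
{WS}+           : skip_token.

Erlang code.

%% Bare words become either a command or a generic identifier; the
%% yecc grammar then decides where a cmd is actually legal.
keyword("P", Line) -> {cmd, Line, "P"};
keyword("U", Line) -> {cmd, Line, "U"};
keyword(Chars, Line) -> {id, Line, Chars}.

Keeping the newline as an explicit token, as suggested above, is what later lets the grammar (or a pre-pass) reason about what sits at the start of a line.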
URL: From akat.metin@REDACTED Mon Oct 3 16:55:58 2016 From: akat.metin@REDACTED (Metin Akat) Date: Mon, 3 Oct 2016 17:55:58 +0300 Subject: [erlang-questions] Leex does not support ^ and $ in regexps, is there a workaround? In-Reply-To: References: Message-ID: Ah, right, now I get what you mean. Do some preprocessing between lexing and parsing. Yes, that way I think it'll work. Thanks! On Mon, Oct 3, 2016 at 5:46 PM, Jesper Louis Andersen < jesper.louis.andersen@REDACTED> wrote: > Just parse it as {id, "P"} and then use the parser to figure out if this > is valid in that position. In the format you may want to keep the newlines > explicit in the tokenized output since it seems to be significant. In some > programs you have a set of valid keywords, in which case you can write a > function: > > keyword("P") -> {cmd, "P"}; > keyword("U") -> {cmd, "U"}; > ... > keyword(ID) -> {id, ID}. > > but note that this means that P and U are really only occurring in the > input as special markers and have no other way to occur. You often use this > in the situation where you have a construction such as 'if...then...else' > in a typical programming language: you want those parsed specially, not as > general identifiers (i.e., variables and other stuff). > > > > On Mon, Oct 3, 2016 at 4:37 PM Metin Akat wrote: > >> [{cmd, "P"}, {int, 2015}, '/', {int, 11}, '/', {int, 21}, {int, 2}, ':', >> ... >> {id, "USD"}, {float, 1.1}, {id, "EUR}] >> >> In this case, how does your lexer know to parse the "P" to {cmd, "P} and >> the "EUR" to {id, "EUR"}? The only way I can think of is to check if the P >> is in the beginning of the line (which would totally suffice) >> >> Otherwise, yeah... if I am to write my own lexer... then I guess my whole >> question is pointless. >> >> On Mon, Oct 3, 2016 at 5:24 PM, Jesper Louis Andersen < >> jesper.louis.andersen@REDACTED> wrote: >> >> >> >> On Mon, Oct 3, 2016 at 10:34 AM Metin Akat wrote: >> >> >> >> P 2015/11/21 02:18:02 USD 1.1 EUR >> >> >> So my question is: How do I tackle this? Do I just accept "P" as a WORD >> token and somehow instruct yecc to parse based on the WORD's value? Is it >> even possible to do? >> >> >> (This is loosely from memory) >> >> The reason ^ and $ are not implemented is because they are never needed >> in an LALR(1) parser/scanner construction. We want the above line to be >> scanned into >> >> [{cmd, "P"}, {int, 2015}, '/', {int, 11}, '/', {int, 21}, {int, 2}, ':', >> ... >> {id, "USD"}, {float, 1.1}, {id, "EUR}] >> >> Then we can define a yecc-grammar which can turn these into meaningful >> constructions: >> >> Command -> Cmd Date Time Currency Amount Currency >> : {command, $1, $2, $3, {$4, $5, $5}}. >> >> Date -> Year '/' Month '/' Date : {$1, $3, $5}. >> Year -> int : $1. >> Month -> int : $1. >> ... >> >> Sometimes, the indentation in the file does matter. But then it can be >> smarter to code the lexer by hand or pre-pass over the input file and >> insert markers for newlines etc. In other words, give structure to the >> input before actually parsing it. This is used in many languages which uses >> indentation-based-scope: a pre-pass inserts the scope markers based on >> newlines and indentation. Then the scanner takes over and handles the >> stream which has structure. >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
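A tiny sketch of what such a pass between leex and yecc could look like, assuming the scanner emits {id, Line, Chars} words and {newline, Line} tokens; the function name is made up for the example:

%% Retag the first bare word on each line as a command and leave every
%% other token untouched, so the yecc grammar can demand a cmd at line
%% start without the scanner having to know about ^.
mark_commands(Tokens) ->
    mark_commands(Tokens, line_start).

mark_commands([{id, Line, Chars} | Rest], line_start) ->
    [{cmd, Line, Chars} | mark_commands(Rest, mid_line)];
mark_commands([{newline, _} = Tok | Rest], _) ->
    [Tok | mark_commands(Rest, line_start)];
mark_commands([Tok | Rest], _) ->
    [Tok | mark_commands(Rest, mid_line)];
mark_commands([], _) ->
    [].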
URL: From jesper.louis.andersen@REDACTED Mon Oct 3 19:05:22 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Mon, 03 Oct 2016 17:05:22 +0000 Subject: [erlang-questions] Type def and spec syntax In-Reply-To: References: Message-ID: > > > > -spec foo(Y) -> integer() when atom(Y). > -spec foo(Y) -> integer() when atom(Y :: integer()). > > Hi Robert, Putting this in the limestone spotlight, we can muse over why this can happen: A language consists of 3 things: A grammar of accepted syntax, a static semantics ("statics"), run on the grammar before execution, and a dynamic semantics which tells how the program executes ("dynamics"). Almost *every* language has all three. The reason is that the grammar often accepts more input than what are valid programs. One example would be in common lisp, where (defun x) is a valid s-expression, but it isn't a valid program. In fact, SBCL rejects this program. One could argue it is within the valid syntax of a Lisp (S-exp) but isn't valid according to the statics. Another, better, example is the following Erlang module: -module(z). -export([t/1, u/1]). t(X) -> t(X, a). u(X) -> X + Y. which is valid from a parsing perspective, but isn't a valid erlang program due to an arity mismatch on t/2 which is undefined. And u/1 uses the unbound variable Y. What erlang uses is a linting step as part of its statics to rule out such a module as valid. It is very nice to allow for being lax in the notation of the grammar and then tighten up the valid programs later on. In fact, type checking/inference is often part of the statics in order to constrain the valid programs and simplify the later compiler parts. Most notably the dynamic semantics of the program. The key part is that Erlang *does* have a type checker, but it is rather weak since it is really the linting step + more correctness steps inside the compiler. It verifies structurality of your expressions, but it doesn't verify that the types you pass around are valid. You cannot avoid it if you have any kind of way to exit the compiler with some kind of structural error, like an undefined variable. The alternative is to postpone these kinds of errors until the dynamics at runtime, but only very insane programming languages do this. One such example is Guido in which you can write: def foo(): return y which is accepted as a program, but once you run it, it fails spectacularly[0]: Traceback (most recent call last): File "", line 1, in File "", line 2, in foo NameError: name 'y' is not defined The TL;DR is that you can't judge the valid programs by the grammar and syntax alone. You have to know the statics of a program as well to be able to define what constitutes a valid program. And what does this have to do with types? Well, types are just languages with grammar rules as well. So it is likely that you have grammar-valid syntax which isn't a valid type because there is a statics which encode what the valid types are. This is often called a kind-system, and it encodes certain rules for what to check to rule out the invalid programs at the type level. If you think about it, this is also why BNF is a lousy way to document what are the valid programs. What you really need are operational semantics so you can properly encode what the valid programs are. Preferably in machine verifiable form (Agda, Coq, Twelf, ...) [0] To be very polite: HOW THE HELL DO PEOPLE PROGRAM LARGE SOFTWARE SYSTEMS WITH THIS???? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vladdu55@REDACTED Mon Oct 3 22:31:33 2016 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Mon, 3 Oct 2016 22:31:33 +0200 Subject: [erlang-questions] Providing documentation via API Message-ID: Hi, If we are to be able to provide an API to locate and extract documentation for code constructs, what would be a good format to return it in? The source can be code comments or external files, which means that there will be multiple possible source formats (edoc, xml, asciidoc, rst, markdown, etc). I can see the following alternatives: * raw: textual data as in the source, the clients have to parse and interpret it * display: textual data that can be rendered (basically, HTML) * structured: some data structure that unifies the different sources; some kind of plug-ins providing parser support are needed. The 'raw' solution will be needed by the other two to retrieve the doc sources, but are the others useful to have in the library? I would like to be able to provide the 'structured' format. This is the kind of API I am looking at: https://gist.github.com/vladdu/aa571a548cecfeb85b84c65378429bf0 (thanks Joe for pointing me in this direction!). There should also be a way to configure the location of the external docs. best regards, Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlsson.richard@REDACTED Mon Oct 3 23:10:17 2016 From: carlsson.richard@REDACTED (Richard Carlsson) Date: Mon, 3 Oct 2016 23:10:17 +0200 Subject: [erlang-questions] Providing documentation via API In-Reply-To: References: Message-ID: There is of course already https://github.com/erlang/otp/blob/maint/lib/edoc/priv/edoc.dtd, but it might need updating. /Richard 2016-10-03 22:31 GMT+02:00 Vlad Dumitrescu : > Hi, > > If we are to be able to provide an API to locate and extract documentation > for code constructs, what would be a good format to return it in? The > source can be code comments or external files, which means that there will > be multiple possible source formats (edoc, xml, asciidoc, rst, markdown, > etc). > > I can see the following alternatives: > * raw: textual data as in the source, the clients have to parse and > interpret it > * display: textual data that can be rendered (basically, HTML) > * structured: some data structure that unifies the different sources; some > kind of plug-ins providing parser support are needed. > > The 'raw' solution will be needed by the other two to retrieve the doc > sources, but are the others useful to have in the library? I would like to > be able to provide the 'structured' format. > > This is the kind of API I am looking at: https://gist.github.com/vladdu/ > aa571a548cecfeb85b84c65378429bf0 (thanks Joe for pointing me in this > direction!). There should also be a way to configure the location of the > external docs. > > best regards, > Vlad > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok@REDACTED Mon Oct 3 23:30:09 2016 From: ok@REDACTED (Richard A. O'Keefe) Date: Tue, 4 Oct 2016 10:30:09 +1300 Subject: [erlang-questions] Leex does not support ^ and $ in regexps, is there a workaround? In-Reply-To: References: Message-ID: <9863d859-55f3-d435-9208-f5f1ae2ce511@cs.otago.ac.nz> On 3/10/16 9:33 PM, Metin Akat wrote: > Hi List, > > I am trying to implement a parser for Ledger > (http://www.ledger-cli.org/) journal files. 
Is there any precise specification of the format? I skimmed through the manual but could not find one. It seems to be a line-oriented format, so why not tokenise a line at a time? > Here is an example of such a journal: > > > 2015/10/12 Exxon This might yield [{date,...},{text,...}] > Expenses:Auto:Gas 10.00 EUR This might yield [{indent,4},{text,...},{number,...},{text,..} > Liabilities:MasterCard -10.00 EUR This might yield [{indent,4},{text,...},{number,...},{text,...} > > P 2015/11/21 02:18:02 USD 1.1 EUR This might yield [{text,...},{date,...},{time,...},{text,...},{number,...},{text,...}] Oh, you are doing line at a time. It's not clear why the *lexer* has to recognise P as a pricetag token. Why can't the *parser* recognise that the first thing on the line isn't a date or an indent? Another possibility, if you are lexing a line at a time, is to jam a fake character (like Ctrl-A) at the beginning of a line before handing it to leex. From vladdu55@REDACTED Tue Oct 4 14:33:57 2016 From: vladdu55@REDACTED (Vlad Dumitrescu) Date: Tue, 4 Oct 2016 14:33:57 +0200 Subject: [erlang-questions] Providing documentation via API In-Reply-To: References: Message-ID: On Mon, Oct 3, 2016 at 11:10 PM, Richard Carlsson < carlsson.richard@REDACTED> wrote: > There is of course already https://github.com/erlang/otp/ > blob/maint/lib/edoc/priv/edoc.dtd, but it might need updating. > Yes, and that's a very good base for the structured documentation data. I think the OTP docs structure is a subset of this format. I don't want to care where exactly and in which format the documentation is. best regards, Vlad > /Richard > > 2016-10-03 22:31 GMT+02:00 Vlad Dumitrescu : > >> Hi, >> >> If we are to be able to provide an API to locate and extract >> documentation for code constructs, what would be a good format to return it >> in? The source can be code comments or external files, which means that >> there will be multiple possible source formats (edoc, xml, asciidoc, rst, >> markdown, etc). >> >> I can see the following alternatives: >> * raw: textual data as in the source, the clients have to parse and >> interpret it >> * display: textual data that can be rendered (basically, HTML) >> * structured: some data structure that unifies the different sources; >> some kind of plug-ins providing parser support are needed. >> >> The 'raw' solution will be needed by the other two to retrieve the doc >> sources, but are the others useful to have in the library? I would like to >> be able to provide the 'structured' format. >> >> This is the kind of API I am looking at: https://gist.github.com/vl >> addu/aa571a548cecfeb85b84c65378429bf0 (thanks Joe for pointing me in >> this direction!). There should also be a way to configure the location of >> the external docs. >> >> best regards, >> Vlad >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
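As a straw man only, the plug-in side of the 'structured' option could be a behaviour roughly like this; the module name, callback names and return shapes below are invented for discussion and are not part of the linked gist:

-module(doc_source).

%% One implementation per source format (edoc comments, external XML,
%% markdown, asciidoc, ...); the library picks a provider, asks it for
%% the raw text and, if wanted, for the unified structured form.
-callback format() -> atom().
-callback find_doc(module(), module | {function, atom(), arity()}) ->
    {ok, binary()} | {error, not_found}.
-callback to_structured(binary()) -> term().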
URL: From raimo+erlang-questions@REDACTED Tue Oct 4 17:24:53 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Tue, 4 Oct 2016 17:24:53 +0200 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <2095514048.8665120.1475505032853@mail.yahoo.com> References: <917841123.4519860.1474824739386.ref@mail.yahoo.com> <917841123.4519860.1474824739386@mail.yahoo.com> <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> Message-ID: <20161004152453.GA17981@erix.ericsson.se> On Mon, Oct 03, 2016 at 02:30:32PM +0000, Vans S wrote: > > > > >? > > > If the implementation is quite simple using timers, I would not mind putting in the pull request. ?I just came here to discuss about it and see if it is something that fits. > > >? > > > The rational is using erlang:send_after/3 is not enough in more complex cases, we need to also be stopping and starting new timers. ? > > > > Does not erlang:start_timer/3,4 work in this case?? You can have any number > > of such timers simultaneously running just as for erlang:send_after/3,4. > > The example I gave was poor I forgot one key point and that was a case where cancel_timer was required. Say every scream resets your running. ?So every 500ms a scream event would return > [{timeout, 500,scream }, {timeout, 100, running_step}]. > > Using erlang:send_after we would have to cancel_timer on the running_step, also I am not sure how timers work, but if a function took long would a timer msg arrive in the mailbox? Thus cancel_timer would not work ideally? Say if the erlang:send_after is in 100ms, and we spend 200ms in a function then at the end of that function we call cancel_timer. Using erlang:cancel_timer correctly needs at least this trick: after having called erlang:cancel_timer you are guaranteed that no timeout message will be delivered to you after that, so it is self to do a receive on the TimerRef with 'after 0' to read out the timeout message. Then you can check the return value from erlang:cancel_timer/1,2 to decide when you for sure do not need to read out the timeout message... So it can function ideally if used wisely. > > > > Here is an attempt: > >? > > You return [{named_timeout,Time,Name}] and will get an event > >?named_timeout,Name.? If you return [{named_timeout,NewTime,Name}] with the > >?Name of a running timer it will restart with NewTime.? Therefore you can > >?cancel it with [{named_timeout,infinity,Name}].? Any number of these can > >?run simultaneously and you distinguish them by Name. > >? > >?Or is the benefit of this i.e just to hide the TimerRef behind the timer > >?Msg/Name too small to be worthy of implementing? > > I think the benefit is two fold, first is not managing cancel timer, and second is the code would be cleaner to read and reason about. > > 'infinity' as the timeout perhaps is misleading, could a new atom be introduced for this, perhaps cancel? ? > {timeout, cancel, scream}. That would be possible and have its merits. > > Crash if the scream timer is not initiated. If we cancel a timer that has to exist, we are in an undefined state. Then 'infinity' can be used as: cancel if started otherwise ignore... > > 'infinity' is fine for practicality purposes or the maximum value of a 56 bit integer (if that is the correct value before venturing into big Erlang integers). 
Since the bignum limit is platform dependent the atom 'infinity' is interpreted by all code at the appliction API level I know of handling timer values to mean "no timer". For example gen_server:call, et.al. It is even so that the pre-defined Dialyzer type timeout() is defined as 'infinity' | non_neg_integer() % integer() >= 0. > > What is the benefit of?{named_timeout,Time,Name} vs the current?{timeout,Time,EventName}? The current {timeout,Time,EventContent} will be cancelled by any other event you receive. It is intended to be used as some kind of inactivity timeout. So if you get for example a call event for status that is handled in your "handle in any state" code, it will cancel the timeout. The suggested (and in the pipeline) {state_timeout,Time,EventContent} will be cancelled by a state change and there can be only one such timer. So typically events handled in "handle in any state" code will not cancel this timer since unless changing states. {named_timeout,Time,Name} would give you full control of when to start and cancel any number of timers distinguished by Name, which is easier to read from the code than handling TimerRefs. It raises the abstraction level by hiding the TimerRef and how to do a correct canceling at the price of not being able to have multiple timers with the same Name, which you can do with erlang:start_timer. / Raimo > > > > > On Monday, October 3, 2016 5:45 AM, Raimo Niskanen wrote: > > > On Fri, Sep 30, 2016 at 02:25:13PM +0000, Vans S wrote: > > The reasoning behind this is because I am using a?handle_event_function callback mode. I find this is more flexible. ?A standard state machine can be in 1 state at 1 time. > > > > But things are more complex now and the desire for a state machine to be in multiple states at 1 time is crucial. ? > > > > For example you can be running and screaming. Not only running then stopping to transition to screaming, then stop screaming and start running again. > > Maybe this is beyond the scope of a state machine, excuse if my terminology is off. > > I'd say that your state machine can only be in one state at any given time > but you have a complex state as in multi dimensional and each combination > of the different dimensions counts as a different state.? So you have 4 > discrete states: {not screaming, not running}, {not screaming, running}, > {screaming, not running} and {screaming, running} but wants to reason about > the state as one complex term: {IsScreaming, IsRunning}. > > > > > If the implementation is quite simple using timers, I would not mind putting in the pull request. ?I just came here to discuss about it and see if it is something that fits. > > > > The rational is using erlang:send_after/3 is not enough in more complex cases, we need to also be stopping and starting new timers. ? > > Does not erlang:start_timer/3,4 work in this case?? You can have any number > of such timers simultaneously running just as for erlang:send_after/3,4. > > > > > An example of this is say if we are in a running state and every 100 ms we timeout, when we timeout we take an extra step and update our position in the world, also we determine how fast we are running and maybe the next step will be in 80ms now. > > > > Now while we are running, we also got a moral boost by seeing a well lit street, so in 5000ms our moral boost will expire, and will timeout. > > > > Also we start screaming due to panic, every 500ms we send a scream to all nearby recipients. > > > > Maybe this is all beyond the scope of a state machine. 
?But for me it seems the only change required to support this is allowing multiple timers, and Erlang always had the design philosophy of "do what works" vs "do what is academically sound". > > The question is if it is possible to create a timer concept in gen_statem > that is easier to use and still as flexible as erlang:start_timer/3,4 + > erlang:cancel_timer/1,2. > > With those primitives you have to handle the TimerRef yourself. > > Here is an attempt: > > You return [{named_timeout,Time,Name}] and will get an event > named_timeout,Name.? If you return [{named_timeout,NewTime,Name}] with the > Name of a running timer it will restart with NewTime.? Therefore you can > cancel it with [{named_timeout,infinity,Name}].? Any number of these can > run simultaneously and you distinguish them by Name. > > Or is the benefit of this i.e just to hide the TimerRef behind the timer > Msg/Name too small to be worthy of implementing? > > / Raimo > > > > > > > >? > > > >? ? On Thursday, September 29, 2016 9:30 AM, Raimo Niskanen wrote: > >? > > > >? After giving this a second thought i wonder if a single state timer would > > be a desired feature and enough in your case. > > > > Today we have an event timeout, which is seldom useful since often you are > > in one state and wait for something specific while stray events that either > > are ignored or immediately replied to passes by.? The event timeout is > > reset for every stray event. > > > > What I think would cover many use cases is a single state timeout.? It > > would be cancelled if you change states.? If you set it again the running > > timer is cancelled and a new is started.? There would only need to be one > > such timer so it is roughly as easy to implement as the event timeout. > > > > There would be no way to cancel it other than changing states. > > It would be started with an action {state_timeout,T,Msg}. > > > > We should keep the old {timeout,T,Msg} since it is inherited from gen_fsm > > and has some use cases. > > > > What do you think? > > > > / Raimo > > > > > > > > On Mon, Sep 26, 2016 at 04:56:25PM +0200, Raimo Niskanen wrote: > > > On Sun, Sep 25, 2016 at 05:32:19PM +0000, Vans S wrote: > > > > Learning the new gen_statem made me desire for one extra feature. > > > > > > > > Say you have a common use case of a webserver /w websockets, you have a general connection timeout of 120s, if no messages are received in 120s it means the socket is in an unknown state, and should be closed. > > > > > > > > So you write your returns like this anytime the client sends you a message: > > > > > > > > {next_state, NextState, NewData, {timeout, 120*1000, idle_timeout}} > > > > > > > > Now if the client does not send any messages in 120 seconds, we will get a?idle_timeout?message sent to the gen_statem process. > > > > > > > > Awesome. > > > > > > > > But enter a little complexity, enter websockets. > > > > > > > > Now we need to send a ping from the gen_statem every 15s to the client, but we also need to consider if we did not get any messages from the client in 120s, we are in unknown state and should terminate the connection. > > > > > > > > So now we are just itching to do this on init: > > > > > > > > {ok, initial_state, Data, [? ? ? ? {timeout, 120*1000,?idle_timeout},? ? ? ? {timeout, 15*1000, websocket_ping} > > > > ????]} > > > > > > > > This way we do not need to manage our own timers using erlang:send_after. ?timer module is not even a consideration due to how inefficient it is at scaling. 
> > > > > > > > But of course we cannot do this, the latest timeout will always override any previous. > > > > > > > > What do you think? > > > > > > Your use case is in the middle ground between the existing event timeout > > > and using erlang:start_timer/4,3, and is a special case of using > > > erlang:start_timer/4,3. > > > > > > The existing {timeout,T,Msg} is an *event* timeout, so you get either an > > > event or the timeout.? The timer is cancelled by the first event. > > > This semantics is easy to reason about and has got a fairly simple > > > implementation in the state machine engine partly since it only needs > > > to store one timer ref. > > > > > > It seems you could use a state timeout, i.e the timeout is cancelled when > > > the state changes.? This would require the state machine engine to hold any > > > number of timer refs and cancel all during a state change. > > > > > > This semantics is subtly similar to the current event timeout.? It would > > > need a new option, e.g {state_timeout,T,Msg}. > > > > > > The {state_timeout,_,_} semantics would be just a special case of using > > > erlang:start_timer/4,3, keep your timer ref in the server state and cancel > > > it when needed, since in the general case you might want to cancel the > > > timer at some other state change or maybe not a state change but an event. > > > > > > So the question is if a {state_timeout,_,_} feature that auto cancels the > > > timer at the first state change is so useful that it is worthy of being > > > implemented?? It is not _that_ much code that is needed to store > > > a timer ref and cancel the timer started with erlang:start_timer/4,3, > > > and it is more flexible. > > > > > > I implemented the {timeout,_,_} feature just so make it easy to port from > > > gen_fsm.? Otherwise I thought that using explicit timers was easy enough. > > > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From vans_163@REDACTED Tue Oct 4 18:08:28 2016 From: vans_163@REDACTED (Vans S) Date: Tue, 4 Oct 2016 16:08:28 +0000 (UTC) Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <20161004152453.GA17981@erix.ericsson.se> References: <917841123.4519860.1474824739386.ref@mail.yahoo.com> <917841123.4519860.1474824739386@mail.yahoo.com> <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> Message-ID: <84499889.9421135.1475597308543@mail.yahoo.com> > So it can function ideally if used wisely. Implementing it correctly like this would be required. ?(to clean timeout message out the mailbox) >?Then 'infinity' can be used as: cancel if started otherwise ignore... 'infinity' is perfect then. ?I gave some thought to how code would look and I think it should not crash. ? Picture a game where you autoattack every 1000ms, if you use Halt command your autoattack stops and movement stops. The return will look like [{named_timeout, infinity, auto_attack},?{named_timeout, infinity, move}]. ?In this case if we were not autoattacking butmoving and halted we would crash. Which is bad. ?It adds unneeded complexity. Also I think there may be a need to read out the registered timers in a function body inside the self() gen_statem. For example, if we have a moral boost buff that lasts 10,000ms, and we receive another 10,000ms moral boost buff that stacks, we want to increment? 
the current 10,000ms moral_buff timeout by 10,000ms more. ?Maybe on named_timeout callback pass the list of all Timers registered, or at least [{Name, TimeRemaining}] ? > {named_timeout,Time,Name} would give you full control of when to start and > cancel any number of timers distinguished by Name, which is easier to read from > the code than handling TimerRefs.? I understand now. On Tuesday, October 4, 2016 11:25 AM, Raimo Niskanen wrote: On Mon, Oct 03, 2016 at 02:30:32PM +0000, Vans S wrote: > > > > >? > > > If the implementation is quite simple using timers, I would not mind putting in the pull request. ?I just came here to discuss about it and see if it is something that fits. > > >? > > > The rational is using erlang:send_after/3 is not enough in more complex cases, we need to also be stopping and starting new timers. ? > > > > Does not erlang:start_timer/3,4 work in this case?? You can have any number > > of such timers simultaneously running just as for erlang:send_after/3,4. > > The example I gave was poor I forgot one key point and that was a case where cancel_timer was required. Say every scream resets your running. ?So every 500ms a scream event would return > [{timeout, 500,scream }, {timeout, 100, running_step}]. > > Using erlang:send_after we would have to cancel_timer on the running_step, also I am not sure how timers work, but if a function took long would a timer msg arrive in the mailbox? Thus cancel_timer would not work ideally? Say if the erlang:send_after is in 100ms, and we spend 200ms in a function then at the end of that function we call cancel_timer. Using erlang:cancel_timer correctly needs at least this trick: after having called erlang:cancel_timer you are guaranteed that no timeout message will be delivered to you after that, so it is self to do a receive on the TimerRef with 'after 0' to read out the timeout message.? Then you can check the return value from erlang:cancel_timer/1,2 to decide when you for sure do not need to read out the timeout message... So it can function ideally if used wisely. > > > > Here is an attempt: > >? > > You return [{named_timeout,Time,Name}] and will get an event > >?named_timeout,Name.? If you return [{named_timeout,NewTime,Name}] with the > >?Name of a running timer it will restart with NewTime.? Therefore you can > >?cancel it with [{named_timeout,infinity,Name}].? Any number of these can > >?run simultaneously and you distinguish them by Name. > >? > >?Or is the benefit of this i.e just to hide the TimerRef behind the timer > >?Msg/Name too small to be worthy of implementing? > > I think the benefit is two fold, first is not managing cancel timer, and second is the code would be cleaner to read and reason about. > > 'infinity' as the timeout perhaps is misleading, could a new atom be introduced for this, perhaps cancel? ? > {timeout, cancel, scream}. That would be possible and have its merits. > > Crash if the scream timer is not initiated. If we cancel a timer that has to exist, we are in an undefined state. Then 'infinity' can be used as: cancel if started otherwise ignore... > > 'infinity' is fine for practicality purposes or the maximum value of a 56 bit integer (if that is the correct value before venturing into big Erlang integers). Since the bignum limit is platform dependent the atom 'infinity' is interpreted by all code at the appliction API level I know of handling timer values to mean "no timer".? For example gen_server:call, et.al.? 
It is even so that the pre-defined Dialyzer type timeout() is defined as 'infinity' | non_neg_integer() % integer() >= 0. > > What is the benefit of?{named_timeout,Time,Name} vs the current?{timeout,Time,EventName}? The current {timeout,Time,EventContent} will be cancelled by any other event you receive.? It is intended to be used as some kind of inactivity timeout. So if you get for example a call event for status that is handled in your "handle in any state" code, it will cancel the timeout. The suggested (and in the pipeline) {state_timeout,Time,EventContent} will be cancelled by a state change and there can be only one such timer. So typically events handled in "handle in any state" code will not cancel this timer since unless changing states. {named_timeout,Time,Name} would give you full control of when to start and cancel any number of timers distinguished by Name, which is easier to read from the code than handling TimerRefs.? It raises the abstraction level by hiding the TimerRef and how to do a correct canceling at the price of not being able to have multiple timers with the same Name, which you can do with erlang:start_timer. / Raimo > > >? > >? ? On Monday, October 3, 2016 5:45 AM, Raimo Niskanen wrote: >? > >? On Fri, Sep 30, 2016 at 02:25:13PM +0000, Vans S wrote: > > The reasoning behind this is because I am using a?handle_event_function callback mode. I find this is more flexible. ?A standard state machine can be in 1 state at 1 time. > > > > But things are more complex now and the desire for a state machine to be in multiple states at 1 time is crucial. ? > > > > For example you can be running and screaming. Not only running then stopping to transition to screaming, then stop screaming and start running again. > > Maybe this is beyond the scope of a state machine, excuse if my terminology is off. > > I'd say that your state machine can only be in one state at any given time > but you have a complex state as in multi dimensional and each combination > of the different dimensions counts as a different state.? So you have 4 > discrete states: {not screaming, not running}, {not screaming, running}, > {screaming, not running} and {screaming, running} but wants to reason about > the state as one complex term: {IsScreaming, IsRunning}. > > > > > If the implementation is quite simple using timers, I would not mind putting in the pull request. ?I just came here to discuss about it and see if it is something that fits. > > > > The rational is using erlang:send_after/3 is not enough in more complex cases, we need to also be stopping and starting new timers. ? > > Does not erlang:start_timer/3,4 work in this case?? You can have any number > of such timers simultaneously running just as for erlang:send_after/3,4. > > > > > An example of this is say if we are in a running state and every 100 ms we timeout, when we timeout we take an extra step and update our position in the world, also we determine how fast we are running and maybe the next step will be in 80ms now. > > > > Now while we are running, we also got a moral boost by seeing a well lit street, so in 5000ms our moral boost will expire, and will timeout. > > > > Also we start screaming due to panic, every 500ms we send a scream to all nearby recipients. > > > > Maybe this is all beyond the scope of a state machine. ?But for me it seems the only change required to support this is allowing multiple timers, and Erlang always had the design philosophy of "do what works" vs "do what is academically sound". 
> > The question is if it is possible to create a timer concept in gen_statem > that is easier to use and still as flexible as erlang:start_timer/3,4 + > erlang:cancel_timer/1,2. > > With those primitives you have to handle the TimerRef yourself. > > Here is an attempt: > > You return [{named_timeout,Time,Name}] and will get an event > named_timeout,Name.? If you return [{named_timeout,NewTime,Name}] with the > Name of a running timer it will restart with NewTime.? Therefore you can > cancel it with [{named_timeout,infinity,Name}].? Any number of these can > run simultaneously and you distinguish them by Name. > > Or is the benefit of this i.e just to hide the TimerRef behind the timer > Msg/Name too small to be worthy of implementing? > > / Raimo > > > > > > > >? > > > >? ? On Thursday, September 29, 2016 9:30 AM, Raimo Niskanen wrote: > >? > > > >? After giving this a second thought i wonder if a single state timer would > > be a desired feature and enough in your case. > > > > Today we have an event timeout, which is seldom useful since often you are > > in one state and wait for something specific while stray events that either > > are ignored or immediately replied to passes by.? The event timeout is > > reset for every stray event. > > > > What I think would cover many use cases is a single state timeout.? It > > would be cancelled if you change states.? If you set it again the running > > timer is cancelled and a new is started.? There would only need to be one > > such timer so it is roughly as easy to implement as the event timeout. > > > > There would be no way to cancel it other than changing states. > > It would be started with an action {state_timeout,T,Msg}. > > > > We should keep the old {timeout,T,Msg} since it is inherited from gen_fsm > > and has some use cases. > > > > What do you think? > > > > / Raimo > > > > > > > > On Mon, Sep 26, 2016 at 04:56:25PM +0200, Raimo Niskanen wrote: > > > On Sun, Sep 25, 2016 at 05:32:19PM +0000, Vans S wrote: > > > > Learning the new gen_statem made me desire for one extra feature. > > > > > > > > Say you have a common use case of a webserver /w websockets, you have a general connection timeout of 120s, if no messages are received in 120s it means the socket is in an unknown state, and should be closed. > > > > > > > > So you write your returns like this anytime the client sends you a message: > > > > > > > > {next_state, NextState, NewData, {timeout, 120*1000, idle_timeout}} > > > > > > > > Now if the client does not send any messages in 120 seconds, we will get a?idle_timeout?message sent to the gen_statem process. > > > > > > > > Awesome. > > > > > > > > But enter a little complexity, enter websockets. > > > > > > > > Now we need to send a ping from the gen_statem every 15s to the client, but we also need to consider if we did not get any messages from the client in 120s, we are in unknown state and should terminate the connection. > > > > > > > > So now we are just itching to do this on init: > > > > > > > > {ok, initial_state, Data, [? ? ? ? {timeout, 120*1000,?idle_timeout},? ? ? ? {timeout, 15*1000, websocket_ping} > > > > ????]} > > > > > > > > This way we do not need to manage our own timers using erlang:send_after. ?timer module is not even a consideration due to how inefficient it is at scaling. > > > > > > > > But of course we cannot do this, the latest timeout will always override any previous. > > > > > > > > What do you think? 
> > > > > > Your use case is in the middle ground between the existing event timeout > > > and using erlang:start_timer/4,3, and is a special case of using > > > erlang:start_timer/4,3. > > > > > > The existing {timeout,T,Msg} is an *event* timeout, so you get either an > > > event or the timeout.? The timer is cancelled by the first event. > > > This semantics is easy to reason about and has got a fairly simple > > > implementation in the state machine engine partly since it only needs > > > to store one timer ref. > > > > > > It seems you could use a state timeout, i.e the timeout is cancelled when > > > the state changes.? This would require the state machine engine to hold any > > > number of timer refs and cancel all during a state change. > > > > > > This semantics is subtly similar to the current event timeout.? It would > > > need a new option, e.g {state_timeout,T,Msg}. > > > > > > The {state_timeout,_,_} semantics would be just a special case of using > > > erlang:start_timer/4,3, keep your timer ref in the server state and cancel > > > it when needed, since in the general case you might want to cancel the > > > timer at some other state change or maybe not a state change but an event. > > > > > > So the question is if a {state_timeout,_,_} feature that auto cancels the > > > timer at the first state change is so useful that it is worthy of being > > > implemented?? It is not _that_ much code that is needed to store > > > a timer ref and cancel the timer started with erlang:start_timer/4,3, > > > and it is more flexible. > > > > > > I implemented the {timeout,_,_} feature just so make it easy to port from > > > gen_fsm.? Otherwise I thought that using explicit timers was easy enough. > > > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From mononcqc@REDACTED Tue Oct 4 21:39:48 2016 From: mononcqc@REDACTED (Fred Hebert) Date: Tue, 4 Oct 2016 14:39:48 -0500 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <84499889.9421135.1475597308543@mail.yahoo.com> References: <917841123.4519860.1474824739386.ref@mail.yahoo.com> <917841123.4519860.1474824739386@mail.yahoo.com> <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> Message-ID: <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> On 10/04, Vans S wrote: >Picture a game where you autoattack every 1000ms, if you use Halt >command your autoattack stops and movement stops. >The return will look like [{named_timeout, infinity, >auto_attack},?{named_timeout, infinity, move}]. ?In this case if we >were not autoattacking butmoving and halted we would crash. Which is >bad. ?It adds unneeded complexity. > I'd rather see this implemented by seeing one track the timers they use (with their Refs), and their action from the message is dependent on their state (halted or auto_attacking). What you're advocating is doing that tracking by hand based on some other unrelated state, and you're actively fighting one of the states that could exist in your FSM! 
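A rough sketch of that approach, with a handle_event_function gen_statem keeping the erlang:start_timer/3 ref in its data; all names, states and intervals here are illustrative:

%% Inside a callback module with callback_mode() -> handle_event_function.
%% The data keeps the ref of the currently armed auto-attack timer.
-record(data, {attack_tref}).

handle_event(info, {timeout, Ref, auto_attack}, auto_attacking,
             #data{attack_tref = Ref} = Data) ->
    %% ... perform the attack here ...
    NewRef = erlang:start_timer(1000, self(), auto_attack),
    {keep_state, Data#data{attack_tref = NewRef}};
handle_event(info, {timeout, _OldRef, auto_attack}, halted, Data) ->
    %% halted: a timer message that was already in flight is simply dropped
    {keep_state, Data#data{attack_tref = undefined}};
handle_event(cast, halt, _State, #data{attack_tref = TRef} = Data) ->
    _ = is_reference(TRef) andalso erlang:cancel_timer(TRef),
    {next_state, halted, Data#data{attack_tref = undefined}}.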
At the very least, that sounds better than adding yet more features into the gen_statem behaviour, which is gathering requirements at a fairly rapid rate so far. > >For example, if we have a moral boost buff that lasts 10,000ms, and we receive another 10,000ms moral boost buff that stacks, we want to increment? >the current 10,000ms moral_buff timeout by 10,000ms more. ?Maybe on named_timeout callback pass the list of all Timers registered, or at least [{Name, TimeRemaining}] ? > This is a state machine behaviour! If you're buffed, this should be represented in your state explicitly rather than implicitly within the state machine mechanisms! Think for example that you upgrade a node's code. One thing to consider is that after the upgrade you may add or remove timers. However, the code change mechanism does not allow you to manage that kind of stuff: you can only modify your own state and data, but not play with the event mailbox. To be able to change your logic around timeouts during a pause would require you to move them to your own FSM's data and to ignore old references, no way around it. There's a benefit to handling that kind of stuff explicitly, and if timers are integral to your system progressing, you should probably take a more direct approach to it. Hell, you could even start a related 'timeout' FSM that sends message at specific intervals to your own FSM and manage that one, rather than doing it all through the more restrictive interface. From vans_163@REDACTED Tue Oct 4 22:51:19 2016 From: vans_163@REDACTED (Vans S) Date: Tue, 4 Oct 2016 20:51:19 +0000 (UTC) Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> References: <917841123.4519860.1474824739386.ref@mail.yahoo.com> <917841123.4519860.1474824739386@mail.yahoo.com> <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> Message-ID: <269322506.4361336.1475614279739@mail.yahoo.com> > I'd rather see this implemented by seeing one track the timers they use? >?(with their Refs), and their action from the message is dependent on? >?their state (halted or auto_attacking). What you're advocating is doing? >?that tracking by hand based on some other unrelated state, and you're? >?actively fighting one of the states that could exist in your FSM! >? >?At the very least, that sounds better than adding yet more features into? >?the gen_statem behaviour, which is gathering requirements at a fairly? >?rapid rate so far. A NFA?Nondeterministic finite automaton - Wikipedia, the free encyclopedia?by definition is a state machine that can be in multiple states simultaneously. Do we really need a gen_statem_nfa module now? >From a basic use case of a connection that needs to send keep alives at regular intervals. ?This "unnecessary feature" is screaming include me. To be clear the connection has 2 timers, timer 1 is if there has not been a message received in x amount of time. Timer 2 is the "when to send keep alive" timer. If a packet arrives on the connection, we bump the keep-alive timer since we know the peer is alive. ?This is a pretty basic feature. 
And being able to bump the keep-alive time this way will require keeping timer refs and using cancel_timer + purging mailbox. Its the natural way you want to write the gen_statem behavior. At least IMO. This single state way does not work, it works in simple use cases like a safe/lock or basic phone call. But it does not work in more complex applications. The individual states could be running, jumping and attacking. Do you really want to write a combo state for each occurrence? moving, attacking, jumping, stopped, moving_attacking, moving_attacking_jumping, attacking_jumping, moving_jumping, jumping_stopped, .. etc >From what your saying, the state machine needs to be in ONLY ONE of these states at a time. > This is a state machine behaviour! If you're buffed, this should be? > represented in your state explicitly rather than implicitly within the? > state machine mechanisms! This is true. ?But then there needs to be a way to get the timer ref. ?Sometimes Erlangs approach is, just make it work. ? I don't see any way to get the timer ref without including it in the callback OR creating your own timer and including it for tracking (vs just specifying the timeout). > Think for example that you upgrade a node's code. One thing to consider? > is that after the upgrade you may add or remove timers.? I did not consider this. This is truly problematic BUT would not the current way timeout works run into this same problem??So this should not affect allowing multiple timeouts vs a single timeout. ?> Hell, you could even start a related? > 'timeout' FSM that sends message at specific intervals to your own FSM? > and manage that one I tried this way before and it was not very performant. ?Sending a timeout every 100ms and managing tons of other timers in that timeout was very poor performance. ?The reason is I needed acceptable latency and did not want to deal with cancel_timer. Changing to gen_statem?timeout?dropped the 16 phys core (with 200k erlang processes) cpu usage from 50% to 0-1%. Learning from that, I rather this be inside gen_statem. If its not, I would have no problem to write my own little timer library for cancel_timer+purging mailbox.? Either way to me having 1 timeout that can proc a UNIQUE EventContent is silly. It should be fixed then. A timeout ONLY procs a timeout event, this way you wont accidentally override the timeout EventContent. Since you are only in ONE state at a time, you automatically know if the timeout happened, it MUST correspond to the current state we are in.? Another option is to remove the timeout then. It just seems out of place to me. ? What is the use case of a single timeout with a UNIQUE EventContent? Its not to allow the gen_statem to tick itself, (which gen_event lacked) if it was there would not be a way to pass a unique EventContent. Unique event content is misleading making you think if you pass a different?EventContent, it will NOT override a previous?EventContent. On Tuesday, October 4, 2016 3:39 PM, Fred Hebert wrote: On 10/04, Vans S wrote: >Picture a game where you autoattack every 1000ms, if you use Halt >command your autoattack stops and movement stops. >The return will look like [{named_timeout, infinity, >auto_attack},?{named_timeout, infinity, move}]. ?In this case if we >were not autoattacking butmoving and halted we would crash. Which is >bad. ?It adds unneeded complexity. 
> I'd rather see this implemented by seeing one track the timers they use (with their Refs), and their action from the message is dependent on their state (halted or auto_attacking). What you're advocating is doing that tracking by hand based on some other unrelated state, and you're actively fighting one of the states that could exist in your FSM! At the very least, that sounds better than adding yet more features into the gen_statem behaviour, which is gathering requirements at a fairly rapid rate so far. > >For example, if we have a moral boost buff that lasts 10,000ms, and we receive another 10,000ms moral boost buff that stacks, we want to increment? >the current 10,000ms moral_buff timeout by 10,000ms more. ?Maybe on named_timeout callback pass the list of all Timers registered, or at least [{Name, TimeRemaining}] ? > This is a state machine behaviour! If you're buffed, this should be represented in your state explicitly rather than implicitly within the state machine mechanisms! Think for example that you upgrade a node's code. One thing to consider is that after the upgrade you may add or remove timers. However, the code change mechanism does not allow you to manage that kind of stuff: you can only modify your own state and data, but not play with the event mailbox. To be able to change your logic around timeouts during a pause would require you to move them to your own FSM's data and to ignore old references, no way around it. There's a benefit to handling that kind of stuff explicitly, and if timers are integral to your system progressing, you should probably take a more direct approach to it. Hell, you could even start a related 'timeout' FSM that sends message at specific intervals to your own FSM and manage that one, rather than doing it all through the more restrictive interface. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mononcqc@REDACTED Wed Oct 5 05:52:38 2016 From: mononcqc@REDACTED (Fred Hebert) Date: Tue, 4 Oct 2016 22:52:38 -0500 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <269322506.4361336.1475614279739@mail.yahoo.com> References: <917841123.4519860.1474824739386@mail.yahoo.com> <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> Message-ID: <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> On 10/04, Vans S wrote: >moving, attacking, jumping, stopped, moving_attacking, >moving_attacking_jumping, attacking_jumping, moving_jumping, >jumping_stopped, .. etc > >From what your saying, the state machine needs to be in ONLY ONE of these states at a time. > Well not necessarily, gen_statem now lets you use complex terms as state. So the state could be [moving, attacking, jumping], though not quite sure how you'd deal with these overlapped states. But it's possible to use a more abstract data structure than an atom to represent states now. I.e. a timeout for running being triggered could transition you into a thing like: handle_event({timeout, Ref, running}, ComplexState, Data) -> ... NextState = ComplexState -- [running], ... or whatever format you'd like. 
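Filled out a bit (an untested sketch, not a recommendation: it assumes handle_event_function callback mode, a list of activity atoms as the state, and a timer started by hand with erlang:start_timer/3, so its expiry arrives as an info event):

handle_event(info, {timeout, _Ref, running}, ComplexState, Data)
  when is_list(ComplexState) ->
    %% the 'running' activity timed out: drop it from the state list
    {next_state, ComplexState -- [running], Data};
handle_event(cast, {start_activity, Activity}, ComplexState, Data)
  when is_list(ComplexState) ->
    %% overlap a new activity with whatever is already going on
    {next_state, [Activity | ComplexState], Data}.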
This is new stuff and capacities of gen_statem that was not possible in gen_fsm, so best practices haven't been fully codified for this yet, but this should certainly be possible. > >This is true. ?But then there needs to be a way to get the timer ref. >?Sometimes Erlangs approach is, just make it work. ? >I don't see any way to get the timer ref without including it in the callback OR creating your own timer and including it for tracking (vs just specifying the timeout). > Yeah, I'm advocating for creating your own timer, where you do get a ref for it. >I did not consider this. This is truly problematic BUT would not the >current way timeout works run into this same problem??So this should >not affect allowing multiple timeouts vs a single timeout. Right now what you can do if you manage your own timers is to cancel the old ones or ignore them (you've got their refs so it's easy) and you can set new ones right away. You can't do that however if you rely on the return tuple of a state transition to do it, since that transition cannot happen from a code_change callback. >Learning from that, I rather this be inside gen_statem. If its not, I >would have no problem to write my own little timer library for >cancel_timer+purging mailbox.? > Usually the pattern I use is to write a short 'reset__timer' function that returns its ref, and then I can track the ref in my state. I always found this fairly convenient to deal with things, and for example, I mostly don't clean up timers very hard. I.e. you can make use of the newer cancel_timer options for async returns with or without info, and by matching on specific refs, I can just ignore timers that are for events I know are no longer up for consideration and let them disappear, and automatically can ignore a bunch of potential race conditions (i.e. no mailbox cleanup, just ignore messages coming in) This is also neat to avoid whatever blocking or synchronization that could take place on timer management, but in some cases you may still want to do that. >Since you are only in ONE state at a time, you automatically know if >the timeout happened, it MUST correspond to the current state we are >in.? > Does it? Couldn't the timeout event have been postponed, and therefore come from a prior state change? I believe so. There's no relationship between an event being handled in a state and that event having been sent in that state. >Another option is to remove the timeout then. It just seems out of place to me. ? > At the two extremes are the positions that all timeout management is manual, and that gen_statem is to replace full-on control of timeouts people can do manually (with all the cancellation options and whatnot). Those are the two extremes, and the current thing is where on the gradient does the timeout implementation currently lie. So far it appears to be a fairly minimal convenience factor. I'm just wondering if it's worth pushing it further on the gradient of "emulating all the flexibility of manual control." >What is the use case of a single timeout with a UNIQUE EventContent? > The current semantics are specific about one thing: Generates an event of event_type() timeout after this time (in milliseconds) unless another event arrives in which case this time-out is cancelled. Notice that a retried or inserted event counts like a new in this respect. These timeouts are mostly used for one thing in my experience: tracking inactivity of the state machine. gen_servers and gen_fsms have the same one. 
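(As a rough, untested illustration of that inactivity use in gen_statem: handle_msg/2 is a made-up helper, the event timeout is re-armed after every handled event, and hibernation is what it triggers, which is also the use case described next.)

handle_event(cast, Msg, State, Data) ->
    NewData = handle_msg(Msg, Data),
    %% re-arm the event timeout; any event arriving within 30 s cancels it
    {next_state, State, NewData, [{timeout, 30000, idle}]};
handle_event(timeout, idle, _State, Data) ->
    %% nothing arrived for 30 s: hibernate, compacting the heap
    {keep_state, Data, [hibernate]}.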
The only pragmatic use case I've made of them were to give myself a delay of N milliseconds before which it would be reasonable to send the process in hibernation, GCing and compacting its memory at once, since it had not received a message in that time otherwise. All other timeouts I tend to manage by hand because I do not want them to be interruptible by the arrival of other messages in the mailbox of the process. This is a very, very important detail! Those semantics are also a lot trickier to handle by yourself (you'd need to put them in every clause) than most other timers you could imagine using. This makes them a fairly good idea to include in the OTP behaviours themselves, as opposed to most other timer usages, IMO. Regards, Fred. From max.lapshin@REDACTED Wed Oct 5 09:50:10 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Wed, 5 Oct 2016 10:50:10 +0300 Subject: [erlang-questions] socket getsockopt PACKET_STATISTICS Message-ID: Hi. I want to get statistics for dropped packets on udp socket. Right now I open on linux /proc/net/udp and read it once per second and parse. It is not very reliable, because if there are two multicast listeners on same group:port, it is impossible to distinguish them. Seems that it is possible to query PACKET_STATISTICS via getsockopt, but this is supported only in Linux. Will such platform specific patch to OTP be accepted, if I add to inet:getstat/1 something like drop_cnt response that will work only under linux? -------------- next part -------------- An HTML attachment was scrubbed... URL: From erlang@REDACTED Wed Oct 5 09:54:17 2016 From: erlang@REDACTED (Joe Armstrong) Date: Wed, 5 Oct 2016 09:54:17 +0200 Subject: [erlang-questions] code garbage collector Message-ID: Has anybody written a code garbage collector? The root set would be a given module. I want to statically extract all code that is reachable (recursively) from the module and examine it. I want the *opposite* of "including a dependency" but rather a tool that garbage collects code reducing it to a minimal program. /Joe From jesper.louis.andersen@REDACTED Wed Oct 5 10:03:40 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Wed, 05 Oct 2016 08:03:40 +0000 Subject: [erlang-questions] code garbage collector In-Reply-To: References: Message-ID: Replies inline On Wed, Oct 5, 2016 at 9:54 AM Joe Armstrong wrote: > > I want the *opposite* of "including a dependency" but rather a tool that > garbage collects code reducing it to a minimal program. > > Isn't this just dead-code-elimination? For each function, you can put it into SSA-form, reverse the SSA dominator graph and everything not reachable is dead code which can be eliminated. You can then improve on this approximation to shave off more code. For it to really work, you need to have global knowledge of all the modules in the system. And you can't in Erlang due to its lack of proper bundling support for modules. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmytro.lytovchenko@REDACTED Wed Oct 5 10:03:51 2016 From: dmytro.lytovchenko@REDACTED (Dmytro Lytovchenko) Date: Wed, 5 Oct 2016 10:03:51 +0200 Subject: [erlang-questions] code garbage collector In-Reply-To: References: Message-ID: Modern IDE are able to detect and highlight some (many of) unused symbols and functions. Also: What you want can be done. But wouldn't it be better to have function-level selective code loading? I can foresee some trouble re-linking labels from loaded code to memory addresses. Den 5 okt. 
2016 09:54 skrev "Joe Armstrong" : > Has anybody written a code garbage collector? > > The root set would be a given module. > > I want to statically extract all code that is reachable (recursively) > from the module and examine it. > > I want the *opposite* of "including a dependency" but rather a tool that > garbage collects code reducing it to a minimal program. > > /Joe > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kostis@REDACTED Wed Oct 5 10:09:53 2016 From: kostis@REDACTED (Kostis Sagonas) Date: Wed, 5 Oct 2016 10:09:53 +0200 Subject: [erlang-questions] code garbage collector In-Reply-To: References: Message-ID: <4f91c183-27ed-e13e-0df5-11a7939d4879@cs.ntua.gr> On 10/05/2016 09:54 AM, Joe Armstrong wrote: > I want the *opposite* of "including a dependency" but rather a tool that > garbage collects code reducing it to a minimal program. I doubt that something like that exists. Note that Erlang is not just a higher-order language where you need to do a complicated analysis, but it is also language that also allows something along the lines of: % foo/3 is an exported function (called with some constructed lists) foo(L1, L2, Args) -> M = list_to_atom(L1), F = list_to_atom(L2), apply(M, F, Args). Of course, writing an analysis that is effective if no such cases exist or a conservative analysis that protects itself from such cases is doable. But you need to base it on a "closed-world assumption" of some sort. Kostis From 2696834883@REDACTED Wed Oct 5 08:45:14 2016 From: 2696834883@REDACTED (=?gb18030?B?wda9qMjrLcjtvP7J6LzG?=) Date: Wed, 5 Oct 2016 14:45:14 +0800 Subject: [erlang-questions] Is there anyway to join the OTP team? Message-ID: Sorry for such a question. But I love erlang so much and I'm just wondering is there anyway to join the OTP team? I mean, how does the team choose the programmer, and how does it orgnize?Is there any other people who are not a member of OTP team maintainances Erlang/OTP project too? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmytro.lytovchenko@REDACTED Wed Oct 5 10:56:10 2016 From: dmytro.lytovchenko@REDACTED (Dmytro Lytovchenko) Date: Wed, 5 Oct 2016 10:56:10 +0200 Subject: [erlang-questions] Is there anyway to join the OTP team? In-Reply-To: References: Message-ID: You can 1. Cooperate with team members and help them doing tasks on bugs.erlang.org (ask them which task is easy to get started). There are lot of requirements to code quality and testing of your contribution, when you match these requirements, your code will possibly be accepted. 2. If you are looking for job, apply to Ericsson Sweden. If they even accept you, there is some chance that you'll impress them with your experience and skills and you'll get into OTP team. A working experience with OTP source will be a large plus. High chance though, that you get accepted but never get sent to the OTP team, they have a lot of other work to do in Ericsson. Too many if's in 2, consider 1. 2016-10-05 8:45 GMT+02:00 ???-???? <2696834883@REDACTED>: > Sorry for such a question. But I love erlang so much and I'm just > wondering is there anyway to join the OTP team? I mean, how does the team > choose the programmer, and how does it orgnize?Is there any other people > who are not a member of OTP team maintainances Erlang/OTP project too? > > Thanks. 
> > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erlang@REDACTED Wed Oct 5 11:41:12 2016 From: erlang@REDACTED (Joe Armstrong) Date: Wed, 5 Oct 2016 11:41:12 +0200 Subject: [erlang-questions] code garbage collector In-Reply-To: <4f91c183-27ed-e13e-0df5-11a7939d4879@cs.ntua.gr> References: <4f91c183-27ed-e13e-0df5-11a7939d4879@cs.ntua.gr> Message-ID: On Wed, Oct 5, 2016 at 10:09 AM, Kostis Sagonas wrote: > On 10/05/2016 09:54 AM, Joe Armstrong wrote: >> >> I want the *opposite* of "including a dependency" but rather a tool that >> garbage collects code reducing it to a minimal program. > > > I doubt that something like that exists. Note that Erlang is not just a > higher-order language where you need to do a complicated analysis, but it is > also language that also allows something along the lines of: > > % foo/3 is an exported function (called with some constructed lists) > > foo(L1, L2, Args) -> > M = list_to_atom(L1), F = list_to_atom(L2), apply(M, F, Args). > > Of course, writing an analysis that is effective if no such cases exist or a > conservative analysis that protects itself from such cases is doable. But > you need to base it on a "closed-world assumption" of some sort. Agreed - but code like the above should really not be hidden away in the middle of a module but be in a special module reserved for foul smelling code. All I want to do is extract all the callable code from a given start point and stop the extraction when I hit OTP library code, or nasty smelling code. /Joe > > Kostis > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From dm.klionsky@REDACTED Wed Oct 5 14:21:52 2016 From: dm.klionsky@REDACTED (Dmitry Klionsky) Date: Wed, 5 Oct 2016 15:21:52 +0300 Subject: [erlang-questions] erl_interface and long long Message-ID: <57F4F060.6020404@gmail.com> Hi, I found some macros and functions for the long long the in erl_interface.h aren't documented. Is it intentional or simply forgotten? The comment /* FIXME some macros left in erl_eterm.h should probably be documented */ just above some of them suggests the latter. ERL_IS_LONGLONG ERL_IS_UNSIGNED_LONGLONG ERL_LL_VALUE ERL_LL_UVALUE erl_mk_longlong erl_mk_ulonglong Thanks From raimo+erlang-questions@REDACTED Wed Oct 5 15:44:42 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Wed, 5 Oct 2016 15:44:42 +0200 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> References: <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> Message-ID: <20161005134442.GB37632@erix.ericsson.se> On Tue, Oct 04, 2016 at 10:52:38PM -0500, Fred Hebert wrote: > On 10/04, Vans S wrote: > : > >I did not consider this. 
This is truly problematic BUT would not the > >current way timeout works run into this same problem??So this should > >not affect allowing multiple timeouts vs a single timeout. > > Right now what you can do if you manage your own timers is to cancel the > old ones or ignore them (you've got their refs so it's easy) and you can > set new ones right away. You can't do that however if you rely on the > return tuple of a state transition to do it, since that transition > cannot happen from a code_change callback. To some extent this cat is already out of the box thanks to the gen_fsm timeout that got inherited to gen_statem. If such a timer is running and there is a code change on the server, it may expire "earlier" than you would like or rather suddenly you might have wanted this particular timeout to be longer. If this is a real problem you can set a flag in your Data to note that there has been a code change, and then act differently when the timeout comes, or change states to one that is aware of the possibility of an early timeout. I wonder how this affects my state_timeout proposal... It has got the same corner case, and the same workarounds apply. The same would apply to named_timeout, with the same workarounds, but the more timers you do not control, the messier to work around, I guess... > > >Learning from that, I rather this be inside gen_statem. If its not, I > >would have no problem to write my own little timer library for > >cancel_timer+purging mailbox.? > > > > Usually the pattern I use is to write a short 'reset__timer' > function that returns its ref, and then I can track the ref in my state. It may be a bit tricky to get it right, but it is educational. The problem in writing a library for this is that there are some options on both how to start a timer and how to cancel it, so a library would either not get it right for everybody or would have to take very many options to be right for everybody. So this may be a good candidate for a do-it-as-you-want-it library module. / Raimo > > I always found this fairly convenient to deal with things, and for > example, I mostly don't clean up timers very hard. I.e. you can make use > of the newer cancel_timer options for async returns with or without > info, and by matching on specific refs, I can just ignore timers that > are for events I know are no longer up for consideration and let them > disappear, and automatically can ignore a bunch of potential race > conditions (i.e. no mailbox cleanup, just ignore messages coming in) > > This is also neat to avoid whatever blocking or synchronization that > could take place on timer management, but in some cases you may still > want to do that. > > > >Since you are only in ONE state at a time, you automatically know if > >the timeout happened, it MUST correspond to the current state we are > >in.? > > > > Does it? Couldn't the timeout event have been postponed, and therefore > come from a prior state change? I believe so. There's no relationship > between an event being handled in a state and that event having been > sent in that state. > > >Another option is to remove the timeout then. It just seems out of place to me. ? > > > > At the two extremes are the positions that all timeout management is > manual, and that gen_statem is to replace full-on control of timeouts > people can do manually (with all the cancellation options and whatnot). > > Those are the two extremes, and the current thing is where on the > gradient does the timeout implementation currently lie. 
So far it > appears to be a fairly minimal convenience factor. I'm just wondering if > it's worth pushing it further on the gradient of "emulating all the > flexibility of manual control." > > >What is the use case of a single timeout with a UNIQUE EventContent? > > > > The current semantics are specific about one thing: > > Generates an event of event_type() timeout after this time (in > milliseconds) unless another event arrives in which case this > time-out is cancelled. Notice that a retried or inserted event > counts like a new in this respect. > > These timeouts are mostly used for one thing in my experience: tracking > inactivity of the state machine. gen_servers and gen_fsms have the same > one. The only pragmatic use case I've made of them were to give myself a > delay of N milliseconds before which it would be reasonable to send the > process in hibernation, GCing and compacting its memory at once, since > it had not received a message in that time otherwise. > > All other timeouts I tend to manage by hand because I do not want them > to be interruptible by the arrival of other messages in the mailbox of > the process. > > This is a very, very important detail! Those semantics are also a lot > trickier to handle by yourself (you'd need to put them in every clause) > than most other timers you could imagine using. This makes them a fairly > good idea to include in the OTP behaviours themselves, as opposed to > most other timer usages, IMO. > > Regards, > Fred. -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From kenneth@REDACTED Wed Oct 5 17:08:54 2016 From: kenneth@REDACTED (Kenneth Lundin) Date: Wed, 5 Oct 2016 17:08:54 +0200 Subject: [erlang-questions] Is there anyway to join the OTP team? Message-ID: It is not easy to join the Ericsson OTP team right now since Ericsson Sweden is not hiring at the moment. It is however possible to contribute to the OTP development by means of pull request via Github. You can read about how to contribute here https://github.com/erlang/otp/wiki#contributing-to-erlangotp You can for example find tasks to start working on on http://bugs.erlang.org or you can have your own suggestions. But if you have bigger suggestions please contact us first so that we can agree on a solution that we really want to take into the main track. /Regards Kenneth, Erlang/OTP, Ericsson On Wed, Oct 5, 2016 at 8:45 AM, ???-???? <2696834883@REDACTED> wrote: > Sorry for such a question. But I love erlang so much and I'm just > wondering is there anyway to join the OTP team? I mean, how does the team > choose the programmer, and how does it orgnize?Is there any other people > who are not a member of OTP team maintainances Erlang/OTP project too? > > Thanks. > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.pailleau@REDACTED Wed Oct 5 17:19:51 2016 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Wed, 05 Oct 2016 17:19:51 +0200 Subject: [erlang-questions] Is there anyway to join the OTP team? In-Reply-To: References: Message-ID: Dmytro forgot 3. 3. beer lover. ;-) "Envoy? 
depuis mon mobile " Eric ---- Dmytro Lytovchenko a ?crit ---- >_______________________________________________ >erlang-questions mailing list >erlang-questions@REDACTED >http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Wed Oct 5 18:41:11 2016 From: vans_163@REDACTED (Vans S) Date: Wed, 5 Oct 2016 16:41:11 +0000 (UTC) Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <20161005134442.GB37632@erix.ericsson.se> References: <20160926145625.GA42780@erix.ericsson.se> <20160929133038.GA34091@erix.ericsson.se> <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> <20161005134442.GB37632@erix.ericsson.se> Message-ID: <1733027913.10116944.1475685671184@mail.yahoo.com> The state_timeout that was added makes more sense to me IMO. ?And what Fred said about uses for the current timeout being more limited are valid. I think that if there is support for the way the current timeout is, that times out regardless of the state you are currently in, what is the problem with having one vs multiple of them? Is it really that much of an annoying feature to add. ?Where the cost of maintenance of the code will be greater then the benefit?? I even mentioned I would put in the pull request for it if we can agree it would serve some use. On Wednesday, October 5, 2016 9:45 AM, Raimo Niskanen wrote: On Tue, Oct 04, 2016 at 10:52:38PM -0500, Fred Hebert wrote: > On 10/04, Vans S wrote: > : > >I did not consider this. This is truly problematic BUT would not the > >current way timeout works run into this same problem??So this should > >not affect allowing multiple timeouts vs a single timeout. > > Right now what you can do if you manage your own timers is to cancel the > old ones or ignore them (you've got their refs so it's easy) and you can > set new ones right away. You can't do that however if you rely on the > return tuple of a state transition to do it, since that transition > cannot happen from a code_change callback. To some extent this cat is already out of the box thanks to the gen_fsm timeout that got inherited to gen_statem.? If such a timer is running and there is a code change on the server, it may expire "earlier" than you would like or rather suddenly you might have wanted this particular timeout to be longer. If this is a real problem you can set a flag in your Data to note that there has been a code change, and then act differently when the timeout comes, or change states to one that is aware of the possibility of an early timeout. I wonder how this affects my state_timeout proposal... It has got the same corner case, and the same workarounds apply. The same would apply to named_timeout, with the same workarounds, but the more timers you do not control, the messier to work around, I guess... > > >Learning from that, I rather this be inside gen_statem. If its not, I > >would have no problem to write my own little timer library for > >cancel_timer+purging mailbox.? > > > > Usually the pattern I use is to write a short 'reset__timer' > function that returns its ref, and then I can track the ref in my state. 
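(One possible shape for such a helper, untested and not the actual code referred to above; the timeout message tag is left to the caller:)

reset_timer(OldRef, Time, Msg) ->
    %% cancel the previous timer, if any, and flush a timeout message
    %% that may already have been delivered to the mailbox
    case OldRef of
        undefined ->
            ok;
        _ ->
            case erlang:cancel_timer(OldRef) of
                false -> receive {timeout, OldRef, _} -> ok after 0 -> ok end;
                _TimeLeft -> ok
            end
    end,
    %% start the new timer and hand back its ref to track in the data
    erlang:start_timer(Time, self(), Msg).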
It may be a bit tricky to get it right, but it is educational. The problem in writing a library for this is that there are some options on both how to start a timer and how to cancel it, so a library would either not get it right for everybody or would have to take very many options to be right for everybody. So this may be a good candidate for a do-it-as-you-want-it library module. / Raimo > > I always found this fairly convenient to deal with things, and for > example, I mostly don't clean up timers very hard. I.e. you can make use > of the newer cancel_timer options for async returns with or without > info, and by matching on specific refs, I can just ignore timers that > are for events I know are no longer up for consideration and let them > disappear, and automatically can ignore a bunch of potential race > conditions (i.e. no mailbox cleanup, just ignore messages coming in) > > This is also neat to avoid whatever blocking or synchronization that > could take place on timer management, but in some cases you may still > want to do that. > > > >Since you are only in ONE state at a time, you automatically know if > >the timeout happened, it MUST correspond to the current state we are > >in.? > > > > Does it? Couldn't the timeout event have been postponed, and therefore > come from a prior state change? I believe so. There's no relationship > between an event being handled in a state and that event having been > sent in that state. > > >Another option is to remove the timeout then. It just seems out of place to me. ? > > > > At the two extremes are the positions that all timeout management is > manual, and that gen_statem is to replace full-on control of timeouts > people can do manually (with all the cancellation options and whatnot). > > Those are the two extremes, and the current thing is where on the > gradient does the timeout implementation currently lie. So far it > appears to be a fairly minimal convenience factor. I'm just wondering if > it's worth pushing it further on the gradient of "emulating all the > flexibility of manual control." > > >What is the use case of a single timeout with a UNIQUE EventContent? > > > > The current semantics are specific about one thing: > >? ? Generates an event of event_type() timeout after this time (in >? ? milliseconds) unless another event arrives in which case this >? ? time-out is cancelled. Notice that a retried or inserted event >? ? counts like a new in this respect. > > These timeouts are mostly used for one thing in my experience: tracking > inactivity of the state machine. gen_servers and gen_fsms have the same > one. The only pragmatic use case I've made of them were to give myself a > delay of N milliseconds before which it would be reasonable to send the > process in hibernation, GCing and compacting its memory at once, since > it had not received a message in that time otherwise. > > All other timeouts I tend to manage by hand because I do not want them > to be interruptible by the arrival of other messages in the mailbox of > the process. > > This is a very, very important detail! Those semantics are also a lot > trickier to handle by yourself (you'd need to put them in every clause) > than most other timers you could imagine using. This makes them a fairly > good idea to include in the OTP behaviours themselves, as opposed to > most other timer usages, IMO. > > Regards, > Fred. 
-- / Raimo Niskanen, Erlang/OTP, Ericsson AB _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukas@REDACTED Thu Oct 6 08:06:51 2016 From: lukas@REDACTED (Lukas Larsson) Date: Thu, 6 Oct 2016 08:06:51 +0200 Subject: [erlang-questions] socket getsockopt PACKET_STATISTICS In-Reply-To: References: Message-ID: On Wed, Oct 5, 2016 at 9:50 AM, Max Lapshin wrote: > > Will such platform specific patch to OTP be accepted, if I add to > inet:getstat/1 something like drop_cnt response that will work only under > linux? > > As a rule we do not accept platform specific patches. There has to be a very strong argument to go against that rule. In cases like this (when you want to get something that is platform specific on a socket) we recommend using the raw option that can be given to inet:getopts/2. Or am I missing something and that is not applicable in this case? Lukas -------------- next part -------------- An HTML attachment was scrubbed... URL: From max.lapshin@REDACTED Thu Oct 6 09:28:05 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 6 Oct 2016 10:28:05 +0300 Subject: [erlang-questions] socket getsockopt PACKET_STATISTICS In-Reply-To: References: Message-ID: Seems that I hurried with my email and it was wrong. 1) it seems to be impossible to query packet drops statistics from UDP socket. I need to look in /proc for inode number and parse /proc/net/udp to find row with proper inode 2) yes, it is possible theoretically to query a {raw,X,Y,Z} sockopt and it could help if not (1) and it seems to be a more portable way according to your policy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raimo+erlang-questions@REDACTED Thu Oct 6 11:30:47 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Thu, 6 Oct 2016 11:30:47 +0200 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <1733027913.10116944.1475685671184@mail.yahoo.com> References: <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> <20161005134442.GB37632@erix.ericsson.se> <1733027913.10116944.1475685671184@mail.yahoo.com> Message-ID: <20161006093047.GA92680@erix.ericsson.se> On Wed, Oct 05, 2016 at 04:41:11PM +0000, Vans S wrote: > The state_timeout that was added makes more sense to me IMO. ?And what Fred said about uses for the current timeout being more limited are valid. It seems we are reaching a common conclusion on that one... > > > > I think that if there is support for the way the current timeout is, that times out regardless of the state you are currently in, what is the problem with having one vs multiple of them? I am confused. The current timeout {timeout,Time,Msg} can not be used between different states since it is cancelled by the first event that arrives, so it is cancelled with the first possibility to change states. So to have more than one of them would mean as they are defined now that when the first generates a timeout event the others are cancelled and therefore pointless. 
So I see no point in having multiple {timeout,Time,Msg}s. Kind of the same applies to {state_timeout,Time,Msg}. When you change states it (they) is (are) cancelled, so having multiple timers while staying in one state gets kind of toothless. You could do it but it feels like you will be tempted to force your machine to stay in one state when it really should have more states coupled to the different timeouts. The reason {state_timeout,Time,Msg} is interesting to implement is that it fits a very common niche of timeouts in state machines and is very simple to use. > > Is it really that much of an annoying feature to add. ?Where the cost of maintenance of the code will be greater then the benefit?? > I even mentioned I would put in the pull request for it if we can agree it would serve some use. The problem is not about having multiple timers, it is about having more general timers since I am convinced that if you have multiple timers you will want to have control over when to cancel them, and how, which forces a new more general timer concept. Hence my suggestion of {named_timeout,Time,Name}. For example I think you mentioned that it would be handy to be able to bump a timer i.e add more time to a running timer, and then you need to know how much time a timer has left when you cancel it, which is problematic with the API that we need to use as gen_statem now looks. You have to return an action from a state function to control the timer so you can not act on the return value from erlang:cancel_timer/1,2, not unless you introduce a fun() in the action, or define different actions for all possible operations. And just for at bump action the question is what to do if the timer has already expired? Time out or restart? Are there more desired actions than bump? So I think the API in gen_statem which we are limited by makes it hard to build general enough timers when it is comparatively easy to write a small timer library that does what you need and keep track of the TimerRefs in your state machine Data, which gives you full control over the timers. And in general, as Fred Hebert pointed out, the more state data you get in the state machine engine, the less control the state machine implementation gets which for instance decreases your freedom to do what you want during code upgrade where you only may act on State and Data and can know nothing about the engine state data. If we introduce {named_timeout,Time,Name} and it turns out to not become general enough, most will use erlang:start_timer/3,4 anyway and we will have seldom used code to maintain. Forcing users to use erlang:start_timer/3,4 for all cases where {timeout,Time,Msg} and {state_timeout,Time,Msg} does not suffice might be the better option since all will have to understand them anyway, which improves readability. But if you have a suggestion of a good timer interface for multiple timers in gen_statem at least I would be very interested to hear it. I do not think the suggestions that have surfaced so far have been worthy of implementation... / Raimo > > On Wednesday, October 5, 2016 9:45 AM, Raimo Niskanen wrote: > > > On Tue, Oct 04, 2016 at 10:52:38PM -0500, Fred Hebert wrote: > > On 10/04, Vans S wrote: > > : > > >I did not consider this. This is truly problematic BUT would not the > > >current way timeout works run into this same problem??So this should > > >not affect allowing multiple timeouts vs a single timeout. 
> > > > Right now what you can do if you manage your own timers is to cancel the > > old ones or ignore them (you've got their refs so it's easy) and you can > > set new ones right away. You can't do that however if you rely on the > > return tuple of a state transition to do it, since that transition > > cannot happen from a code_change callback. > > To some extent this cat is already out of the box thanks to the gen_fsm > timeout that got inherited to gen_statem.? If such a timer is running and > there is a code change on the server, it may expire "earlier" than you > would like or rather suddenly you might have wanted this particular timeout > to be longer. > > If this is a real problem you can set a flag in your Data to note that > there has been a code change, and then act differently when the timeout > comes, or change states to one that is aware of the possibility of an early > timeout. > > I wonder how this affects my state_timeout proposal... > It has got the same corner case, and the same workarounds apply. > > The same would apply to named_timeout, with the same workarounds, > but the more timers you do not control, the messier to work around, > I guess... > > > > > >Learning from that, I rather this be inside gen_statem. If its not, I > > >would have no problem to write my own little timer library for > > >cancel_timer+purging mailbox.? > > > > > > > Usually the pattern I use is to write a short 'reset__timer' > > function that returns its ref, and then I can track the ref in my state. > > It may be a bit tricky to get it right, but it is educational. > > The problem in writing a library for this is that there are some options > on both how to start a timer and how to cancel it, so a library would > either not get it right for everybody or would have to take very many > options to be right for everybody. > > So this may be a good candidate for a do-it-as-you-want-it library module. > > / Raimo > > > > > > > I always found this fairly convenient to deal with things, and for > > example, I mostly don't clean up timers very hard. I.e. you can make use > > of the newer cancel_timer options for async returns with or without > > info, and by matching on specific refs, I can just ignore timers that > > are for events I know are no longer up for consideration and let them > > disappear, and automatically can ignore a bunch of potential race > > conditions (i.e. no mailbox cleanup, just ignore messages coming in) > > > > This is also neat to avoid whatever blocking or synchronization that > > could take place on timer management, but in some cases you may still > > want to do that. > > > > > > >Since you are only in ONE state at a time, you automatically know if > > >the timeout happened, it MUST correspond to the current state we are > > >in.? > > > > > > > Does it? Couldn't the timeout event have been postponed, and therefore > > come from a prior state change? I believe so. There's no relationship > > between an event being handled in a state and that event having been > > sent in that state. > > > > >Another option is to remove the timeout then. It just seems out of place to me. ? > > > > > > > At the two extremes are the positions that all timeout management is > > manual, and that gen_statem is to replace full-on control of timeouts > > people can do manually (with all the cancellation options and whatnot). > > > > Those are the two extremes, and the current thing is where on the > > gradient does the timeout implementation currently lie. 
So far it > > appears to be a fairly minimal convenience factor. I'm just wondering if > > it's worth pushing it further on the gradient of "emulating all the > > flexibility of manual control." > > > > >What is the use case of a single timeout with a UNIQUE EventContent? > > > > > > > The current semantics are specific about one thing: > > > >? ? Generates an event of event_type() timeout after this time (in > >? ? milliseconds) unless another event arrives in which case this > >? ? time-out is cancelled. Notice that a retried or inserted event > >? ? counts like a new in this respect. > > > > These timeouts are mostly used for one thing in my experience: tracking > > inactivity of the state machine. gen_servers and gen_fsms have the same > > one. The only pragmatic use case I've made of them were to give myself a > > delay of N milliseconds before which it would be reasonable to send the > > process in hibernation, GCing and compacting its memory at once, since > > it had not received a message in that time otherwise. > > > > All other timeouts I tend to manage by hand because I do not want them > > to be interruptible by the arrival of other messages in the mailbox of > > the process. > > > > This is a very, very important detail! Those semantics are also a lot > > trickier to handle by yourself (you'd need to put them in every clause) > > than most other timers you could imagine using. This makes them a fairly > > good idea to include in the OTP behaviours themselves, as opposed to > > most other timer usages, IMO. > > > > Regards, > > Fred. > > -- > > / Raimo Niskanen, Erlang/OTP, Ericsson AB > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From Ingela.Anderton.Andin@REDACTED Thu Oct 6 12:29:51 2016 From: Ingela.Anderton.Andin@REDACTED (Ingela Anderton Andin) Date: Thu, 6 Oct 2016 12:29:51 +0200 Subject: [erlang-questions] Patch package OTP 19.1.2 released Message-ID: <490317fc-672b-dc32-15c6-2ddd4772c188@ericsson.com> Patch Package: OTP 19.1.2 Git Tag: OTP-19.1.2 Date: 2016-10-06 Trouble Report Id: OTP-13905, OTP-13932 Seq num: seq13189 System: OTP Release: 19 Application: ssh-4.3.3 Predecessor: OTP 19.1.1 --------------------------------------------------------------------- --- ssh-4.3.3 ------------------------------------------------------- --------------------------------------------------------------------- Note! The ssh-4.3.3 application can *not* be applied independently of other applications on an arbitrary OTP 19 installation. On a full OTP 19 installation, also the following runtime dependency has to be satisfied: -- stdlib-3.1 (first satisfied in OTP 19.1) --- Fixed Bugs and Malfunctions --- OTP-13932 Application(s): ssh Related Id(s): seq13189 Handle all possible exit values that should be interpreted as {error, closed}. Failing to do so could lead to unexpected crashes for users of the ssh application. Full runtime dependencies of ssh-4.3.3: crypto-3.3, erts-6.0, kernel-3.0, public_key-1.1, stdlib-3.1 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- From max.lapshin@REDACTED Thu Oct 6 16:16:00 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Thu, 6 Oct 2016 17:16:00 +0300 Subject: [erlang-questions] erlang and dtls (in webrtc) Message-ID: Hi. 
We have successfully implemented WebRTC in Flussonic and it is already working with Chrome and Firefox. It means that we have working DTLS. It is not 100% doing what DTLS must do, but it works now. I know that there is a plan to do it in the OTP team, but maybe we can help to add it faster? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bchamagne@REDACTED Thu Oct 6 15:17:46 2016 From: bchamagne@REDACTED (Bastien Chamagne) Date: Thu, 6 Oct 2016 15:17:46 +0200 Subject: [erlang-questions] debugger in release mode Message-ID: <0cd80c69-bef6-569b-24c2-1aec41177d0b@idmog.com> Hi guys, I know it might sound weird, but I'm working in release mode (with reltool) and I'd like to use the visual debugger. Is it possible to add it to the release? (I get an undef error when I try debugger:start()) ps: I use R15B03 if that matters. Bastien. From vans_163@REDACTED Thu Oct 6 17:46:55 2016 From: vans_163@REDACTED (Vans S) Date: Thu, 6 Oct 2016 15:46:55 +0000 (UTC) Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <20161006093047.GA92680@erix.ericsson.se> References: <287809764.7376616.1475245513197@mail.yahoo.com> <20161003094515.GA65154@erix.ericsson.se> <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> <20161005134442.GB37632@erix.ericsson.se> <1733027913.10116944.1475685671184@mail.yahoo.com> <20161006093047.GA92680@erix.ericsson.se> Message-ID: <1630608036.2221527.1475768815304@mail.yahoo.com> Okay, going to take another attempt at it, in evented mode. Some points before we get into it: gen_statem fits here because it ensures 2 key points. #1 If we get any message while in the waiting_socket state, we should not process it / crash / drop it. #2 We need to transition out of the waiting_socket state before processing future messages. This code might be off as I did not test it, so treat it as pseudo code (it uses the proposed named_timeout action).

init(_Args) ->
    {ok, waiting_socket, #{},
     [{state_timeout, 2000, no_socket_passed}]}.

%% Timeouts
%% Crash here or handle ourselves
%handle_event(state_timeout, no_socket_passed, waiting_socket, D) ->
%    {stop, {shutdown, tcp_closed}, D};

%% Crash here or handle; we should NEVER get any other message here
%handle_event(_, _, waiting_socket, D) ->
%    {stop, {shutdown, tcp_closed}, D};

handle_event(named_timeout, endpoint_timeout, _, D) ->
    {stop, {shutdown, tcp_closed}, D};

%% Basic socket ops
handle_event(info, {pass_socket, Socket}, waiting_socket, D) ->
    {next_state, established, D#{socket => Socket},
     [{named_timeout, 15000, keep_alive},
      {named_timeout, 120000, endpoint_timeout}]};

handle_event(named_timeout, keep_alive, S, D) ->
    send_keepalive(),
    {next_state, S, D, [{named_timeout, 15000, keep_alive}]};

%% Login chain
%% Moves our state to 'ingame' if success
%% Not implemented in example

%% Ingame
handle_event(info, {tcp, _Socket, Bin}, ingame, D) ->
    case proc_on_bin(Bin) of
        {build, Building, X, Y} ->
            {BuildUUID, BuildTime} = get_build_time(Building),
            register_build_uuid(BuildUUID, Building, X, Y),
            %% Make some helper functions to make this cleaner.
            %% A cleaner option is to send a message to the mailbox
            %% after parsing the packet.
            {next_state, ingame, D,
             [{named_timeout, BuildTime, {built, BuildUUID}},
              {named_timeout, 15000, keep_alive},
              {named_timeout, 120000, endpoint_timeout}]};
        {cancel_build, BuildUUID} ->
            unregister_build_uuid(BuildUUID),
            {next_state, ingame, D,
             [{named_timeout, infinity, BuildUUID},
              {named_timeout, 15000, keep_alive},
              {named_timeout, 120000, endpoint_timeout}]}
    end;

handle_event(named_timeout, {built, BuildUUID}, ingame, D) ->
    Socket = maps:get(socket, D),
    notify_client_building_upgraded(Socket, BuildUUID),
    {next_state, ingame, D};

To me this named_timeout really fits this model: it keeps the code clean and keeps it all in one place. state_timeout helps us correctly transition our states from outgame/ingame/lobby/etc. Possibly we could have 1 process for loggedout/outgame/lobby and 1 process for ingame, so we can chat while playing. The original 'timeout' seems out of place; the only use cases I see for it are hibernating the process when it is idle, killing the process after a certain time, or other similar things. So to summarize, gen_statem allows us to guarantee state transitions even if other messages arrive in the mailbox. Thus if we were to implement this code using plain processes or gen_server we would be recreating a lot of gen_statem. Using the handle_event_function callback mode allows us to have this kind of flexible gen_statem. This is my final argument in support of named_timeout. Thank you for your attention. On Thursday, October 6, 2016 5:31 AM, Raimo Niskanen wrote: On Wed, Oct 05, 2016 at 04:41:11PM +0000, Vans S wrote: > The state_timeout that was added makes more sense to me IMO. And what Fred said about uses for the current timeout being more limited are valid. It seems we are reaching a common conclusion on that one... > > > > I think that if there is support for the way the current timeout is, that times out regardless of the state you are currently in, what is the problem with having one vs multiple of them? I am confused. The current timeout {timeout,Time,Msg} can not be used between different states since it is cancelled by the first event that arrives, so it is cancelled with the first possibility to change states. So to have more than one of them would mean as they are defined now that when the first generates a timeout event the others are cancelled and therefore pointless. So I see no point in having multiple {timeout,Time,Msg}s. Kind of the same applies to {state_timeout,Time,Msg}. When you change states it (they) is (are) cancelled, so having multiple timers while staying in one state gets kind of toothless. You could do it but it feels like you will be tempted to force your machine to stay in one state when it really should have more states coupled to the different timeouts. The reason {state_timeout,Time,Msg} is interesting to implement is that it fits a very common niche of timeouts in state machines and is very simple to use. > > Is it really that much of an annoying feature to add. Where the cost of maintenance of the code will be greater then the benefit? > I even mentioned I would put in the pull request for it if we can agree it would serve some use. The problem is not about having multiple timers, it is about having more general timers since I am convinced that if you have multiple timers you will want to have control over when to cancel them, and how, which forces a new more general timer concept. Hence my suggestion of {named_timeout,Time,Name}.
For example I think you mentioned that it would be handy to be able to bump a timer i.e add more time to a running timer, and then you need to know how much time a timer has left when you cancel it, which is problematic with the API that we need to use as gen_statem now looks.? You have to return an action from a state function to control the timer so you can not act on the return value from erlang:cancel_timer/1,2, not unless you introduce a fun() in the action, or define different actions for all possible operations. And just for at bump action the question is what to do if the timer has already expired?? Time out or restart?? Are there more desired actions than bump? So I think the API in gen_statem which we are limited by makes it hard to build general enough timers when it is comparatively easy to write a small timer library that does what you need and keep track of the TimerRefs in your state machine Data, which gives you full control over the timers. And in general, as Fred Hebert pointed out, the more state data you get in the state machine engine, the less control the state machine implementation gets which for instance decreases your freedom to do what you want during code upgrade where you only may act on State and Data and can know nothing about the engine state data. If we introduce {named_timeout,Time,Name} and it turns out to not become general enough, most will use erlang:start_timer/3,4 anyway and we will have seldom used code to maintain.? Forcing users to use erlang:start_timer/3,4 for all cases where {timeout,Time,Msg} and {state_timeout,Time,Msg} does not suffice might be the better option since all will have to understand them anyway, which improves readability. But if you have a suggestion of a good timer interface for multiple timers in gen_statem at least I would be very interested to hear it.? I do not think the suggestions that have surfaced so far have been worthy of implementation... / Raimo > >? ? On Wednesday, October 5, 2016 9:45 AM, Raimo Niskanen wrote: >? > >? On Tue, Oct 04, 2016 at 10:52:38PM -0500, Fred Hebert wrote: > > On 10/04, Vans S wrote: > > : > > >I did not consider this. This is truly problematic BUT would not the > > >current way timeout works run into this same problem??So this should > > >not affect allowing multiple timeouts vs a single timeout. > > > > Right now what you can do if you manage your own timers is to cancel the > > old ones or ignore them (you've got their refs so it's easy) and you can > > set new ones right away. You can't do that however if you rely on the > > return tuple of a state transition to do it, since that transition > > cannot happen from a code_change callback. > > To some extent this cat is already out of the box thanks to the gen_fsm > timeout that got inherited to gen_statem.? If such a timer is running and > there is a code change on the server, it may expire "earlier" than you > would like or rather suddenly you might have wanted this particular timeout > to be longer. > > If this is a real problem you can set a flag in your Data to note that > there has been a code change, and then act differently when the timeout > comes, or change states to one that is aware of the possibility of an early > timeout. > > I wonder how this affects my state_timeout proposal... > It has got the same corner case, and the same workarounds apply. > > The same would apply to named_timeout, with the same workarounds, > but the more timers you do not control, the messier to work around, > I guess... 
> > > > > >Learning from that, I rather this be inside gen_statem. If its not, I > > >would have no problem to write my own little timer library for > > >cancel_timer+purging mailbox.? > > > > > > > Usually the pattern I use is to write a short 'reset__timer' > > function that returns its ref, and then I can track the ref in my state. > > It may be a bit tricky to get it right, but it is educational. > > The problem in writing a library for this is that there are some options > on both how to start a timer and how to cancel it, so a library would > either not get it right for everybody or would have to take very many > options to be right for everybody. > > So this may be a good candidate for a do-it-as-you-want-it library module. > > / Raimo > > > > > > > I always found this fairly convenient to deal with things, and for > > example, I mostly don't clean up timers very hard. I.e. you can make use > > of the newer cancel_timer options for async returns with or without > > info, and by matching on specific refs, I can just ignore timers that > > are for events I know are no longer up for consideration and let them > > disappear, and automatically can ignore a bunch of potential race > > conditions (i.e. no mailbox cleanup, just ignore messages coming in) > > > > This is also neat to avoid whatever blocking or synchronization that > > could take place on timer management, but in some cases you may still > > want to do that. > > > > > > >Since you are only in ONE state at a time, you automatically know if > > >the timeout happened, it MUST correspond to the current state we are > > >in.? > > > > > > > Does it? Couldn't the timeout event have been postponed, and therefore > > come from a prior state change? I believe so. There's no relationship > > between an event being handled in a state and that event having been > > sent in that state. > > > > >Another option is to remove the timeout then. It just seems out of place to me. ? > > > > > > > At the two extremes are the positions that all timeout management is > > manual, and that gen_statem is to replace full-on control of timeouts > > people can do manually (with all the cancellation options and whatnot). > > > > Those are the two extremes, and the current thing is where on the > > gradient does the timeout implementation currently lie. So far it > > appears to be a fairly minimal convenience factor. I'm just wondering if > > it's worth pushing it further on the gradient of "emulating all the > > flexibility of manual control." > > > > >What is the use case of a single timeout with a UNIQUE EventContent? > > > > > > > The current semantics are specific about one thing: > > > >? ? Generates an event of event_type() timeout after this time (in > >? ? milliseconds) unless another event arrives in which case this > >? ? time-out is cancelled. Notice that a retried or inserted event > >? ? counts like a new in this respect. > > > > These timeouts are mostly used for one thing in my experience: tracking > > inactivity of the state machine. gen_servers and gen_fsms have the same > > one. The only pragmatic use case I've made of them were to give myself a > > delay of N milliseconds before which it would be reasonable to send the > > process in hibernation, GCing and compacting its memory at once, since > > it had not received a message in that time otherwise. > > > > All other timeouts I tend to manage by hand because I do not want them > > to be interruptible by the arrival of other messages in the mailbox of > > the process. 
> > > > This is a very, very important detail! Those semantics are also a lot > > trickier to handle by yourself (you'd need to put them in every clause) > > than most other timers you could imagine using. This makes them a fairly > > good idea to include in the OTP behaviours themselves, as opposed to > > most other timer usages, IMO. > > > > Regards, > > Fred. > > -- > > / Raimo Niskanen, Erlang/OTP, Ericsson AB > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > >? ? -- / Raimo Niskanen, Erlang/OTP, Ericsson AB _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz@REDACTED Thu Oct 6 21:24:39 2016 From: tuncer.ayaz@REDACTED (Tuncer Ayaz) Date: Thu, 6 Oct 2016 21:24:39 +0200 Subject: [erlang-questions] debugger in release mode In-Reply-To: <0cd80c69-bef6-569b-24c2-1aec41177d0b@idmog.com> References: <0cd80c69-bef6-569b-24c2-1aec41177d0b@idmog.com> Message-ID: On 6 October 2016 at 15:17, Bastien Chamagne wrote: > Hi guys, I know it might sound weird, but I'm working in release mode, (with > reltool) and I'd like to use the visual debugger. > > Is it possible to add it to the release? (I get an undef error when I try > debugger:start()) I never tried, but it should work, since I added observer once and it also uses wxErlang for its interface. > ps: I use R15B03 if that matters. Have you tried adding the debugger application to your sys tuple? From tuncer.ayaz@REDACTED Thu Oct 6 22:46:56 2016 From: tuncer.ayaz@REDACTED (Tuncer Ayaz) Date: Thu, 6 Oct 2016 22:46:56 +0200 Subject: [erlang-questions] debugger in release mode In-Reply-To: References: <0cd80c69-bef6-569b-24c2-1aec41177d0b@idmog.com> Message-ID: On 6 October 2016 at 21:24, Tuncer Ayaz wrote: > On 6 October 2016 at 15:17, Bastien Chamagne wrote: > > Hi guys, I know it might sound weird, but I'm working in release mode, (with > > reltool) and I'd like to use the visual debugger. > > > > Is it possible to add it to the release? (I get an undef error when I try > > debugger:start()) > > I never tried, but it should work, since I added observer once and it > also uses wxErlang for its interface. > > > ps: I use R15B03 if that matters. > > Have you tried adding the debugger application to your sys tuple? Just tried and it works as expected. You can either add it to {sys, [{rel, "your_app", "1", [ ..., debugger, ... ]}]} or {sys, [{app, debugger, [{incl_cond, include}] }] } or your app's app file too, of course, and it will be detected. From eshikafe@REDACTED Fri Oct 7 09:44:25 2016 From: eshikafe@REDACTED (austin aigbe) Date: Fri, 7 Oct 2016 08:44:25 +0100 Subject: [erlang-questions] erlang and dtls (in webrtc) In-Reply-To: References: Message-ID: Good news and well done. A pull request to the existing dtls code base will be the fastest way to help. On Thu, Oct 6, 2016 at 3:16 PM, Max Lapshin wrote: > > Hi. > > We have successully implemented WebRTC in Flussonic and it is already > working with Chrome and Firefox > > It means that we have working DTLS. It is not 100% doing what DTLS must > do, but it works now. > > > I know that there is a plan to do it in OTP team, but maybe we can help to > add it faster? 
> > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ingela.andin@REDACTED Fri Oct 7 09:59:56 2016 From: ingela.andin@REDACTED (Ingela Andin) Date: Fri, 7 Oct 2016 09:59:56 +0200 Subject: [erlang-questions] erlang and dtls (in webrtc) In-Reply-To: References: Message-ID: Hi Max! I have some commits not yet visible in github also taking steps forward. If you have some git repo that I can look at I can see if we can get some use part of your solution too, maybe making it a PR. I am hoping to have some thing runnable soon and I have included some of Andreas Schultz ideas and some of my own. It has some priority now for Ericsson too :) Regards Ingela Erlang/OTP team - Ericsson AB 2016-10-07 9:44 GMT+02:00 austin aigbe : > Good news and well done. > A pull request to the existing dtls code base will be the fastest way to > help. > > On Thu, Oct 6, 2016 at 3:16 PM, Max Lapshin wrote: > >> >> Hi. >> >> We have successully implemented WebRTC in Flussonic and it is already >> working with Chrome and Firefox >> >> It means that we have working DTLS. It is not 100% doing what DTLS must >> do, but it works now. >> >> >> I know that there is a plan to do it in OTP team, but maybe we can help >> to add it faster? >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz@REDACTED Fri Oct 7 10:37:04 2016 From: aschultz@REDACTED (Andreas Schultz) Date: Fri, 7 Oct 2016 10:37:04 +0200 Subject: [erlang-questions] erlang and dtls (in webrtc) In-Reply-To: References: Message-ID: <68b52142-7789-6825-c6ba-45027c11ccdd@tpip.net> Hi Ingela, On 10/07/2016 09:59 AM, Ingela Andin wrote: > Hi Max! > > I have some commits not yet visible in github also taking steps forward. If you have some git repo that I can look at I can see if we can > get some use > part of your solution too, maybe making it a PR. I am hoping to have some thing runnable soon and I have included some of Andreas Schultz > ideas and some of my own. > It has some priority now for Ericsson too :) That is great news. Since there are now so many people that would like to play and hopefully help with DTLS, could you share the current state of your DTLS work? Andreas > > Regards Ingela Erlang/OTP team - Ericsson AB > > 2016-10-07 9:44 GMT+02:00 austin aigbe >: > > Good news and well done. > A pull request to the existing dtls code base will be the fastest way to help. > > On Thu, Oct 6, 2016 at 3:16 PM, Max Lapshin > wrote: > > > Hi. > > We have successully implemented WebRTC in Flussonic and it is already working with Chrome and Firefox > > It means that we have working DTLS. It is not 100% doing what DTLS must do, but it works now. > > > I know that there is a plan to do it in OTP team, but maybe we can help to add it faster? 
> > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From raimo+erlang-questions@REDACTED Fri Oct 7 10:42:16 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Fri, 7 Oct 2016 10:42:16 +0200 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <1630608036.2221527.1475768815304@mail.yahoo.com> References: <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> <20161005134442.GB37632@erix.ericsson.se> <1733027913.10116944.1475685671184@mail.yahoo.com> <20161006093047.GA92680@erix.ericsson.se> <1630608036.2221527.1475768815304@mail.yahoo.com> Message-ID: <20161007084216.GA26486@erix.ericsson.se> On Thu, Oct 06, 2016 at 03:46:55PM +0000, Vans S wrote: > Okay going to take another attempt at it. In evented mode. > > Some points before we get into it. gen_statem fits here because it ensures 2 key points. > > #1 If we get any message while in waiting_socket state, we should not process it / crash / drop it. > #2 We need to transition out of waiting_socket state, before processing future messages. > > This code might be off as I did not test it, so treat as pseudo code. >

Your mail client really messed the code up so I had to reformat it before reading it...

init() ->
    {ok, waiting_socket, #{},
     [{state_timeout, 2000, no_socket_passed}]}.

%% Timeouts
%% Crash here or handle ourselves
handle_event(state_timeout, no_socket_passed, waiting_socket, D) ->
    {stop, {shutdown, tcp_closed}, D};
%% Crash here or handle, we should NEVER get any message here
handle_event(_, _, waiting_socket, D) ->
    {stop, {shutdown, tcp_closed}, D};
handle_event(named_timeout, endpoint_timeout, _, D) ->
    {stop, {shutdown, tcp_closed}, D};
%% Basic socket ops
handle_event(info, {pass_socket, Socket}, waiting_socket, D) ->
    {next_state, established, D#{socket => Socket},
     [{named_timeout, 15000, keep_alive},
      {named_timeout, 120000, endpoint_timeout}]};
handle_event(named_timeout, keep_alive, S, D) ->
    send_keepalive(),
    {next_state, S, D, [{named_timeout, 15000, keep_alive}]};
%% Login chain
%% Moves our state to 'ingame' if success
%% Not implemented in example
%% Ingame
handle_event(info, {tcp, Socket, Bin}, ingame, D) ->
    case proc_on_bin(Bin) of
        {build, Building, X, Y} ->
            {BuildUUID, BuildTime} = get_build_time(Building),
            register_build_uuid(BuildUUID, Building, X, Y),
            %% Make some helper functions to make this cleaner
            %% A cleaner option is to send a message to mailbox
            %% after parsing packet
            {next_state, ingame, D,
             [{named_timeout, BuildTime, {built, BuildUUID}},
              {named_timeout, 15000, keep_alive},
              {named_timeout, 120000, endpoint_timeout}]};
        {cancel_build, BuildUUID} ->
            unregister_build_uuid(BuildUUID),
            {next_state, ingame, D,
             [{named_timeout, infinity, BuildUUID},
              {named_timeout, 15000, keep_alive},
              {named_timeout, 120000, endpoint_timeout}]}
    end;
handle_event(named_timeout, {built, BuildUUID}, ingame, D) ->
    Socket = maps:get(socket, D),
    notify_client_building_upgraded(Socket, BuildUUID),
    {next_state, ingame, D};

> > > To me this named_timeout really fits this model, keeps the code clean, keeps it all in one place. state_timeout helps us correctly transition our states from outgame/ingame/lobby/etc.

I also like the look of the named timeouts in that code.  It is quite readable.

How to bump a timer is not solved, but I think it needs to be solved for state_timeout anyway...

Solving it with erlang:start_timer/3,4 might look like this (selected parts; the timer messages arrive as 'info' events):

handle_event(info, {timeout,TimerRef,keep_alive}, S,
             #{named_timeout := #{keep_alive := TimerRef}} = D) ->
    send_keepalive(),
    {next_state, S, named_timeout(15000, keep_alive, D)};

handle_event(info, {tcp,Socket,Bin}, ingame, D) ->
    ...
    {next_state, ingame,
     named_timeout(
       BuildTime, {built,BuildUUID},
       named_timeout(
         15000, keep_alive,
         named_timeout(120000, endpoint_timeout, D)))};

handle_event(info, {timeout,TimerRef,{built,BuildUUID} = Built}, S, D) ->
    case D of
        #{named_timeout := #{Built := TimerRef}} ->
            Socket = maps:get(socket, D),
            notify_client_building_upgraded(Socket, BuildUUID),
            {next_state, ingame, named_timeout(TimerRef, Built, D)};
        _ ->
            %% What to do with unexpected timeout...
            keep_state_and_data
    end;

The named_timeout/3 helper function remains to be written (a possible shape is sketched below), matching the timer event is much messier, and you need to remember to remove the TimerRef from the state when you get the timeout event...

So, named_timeout would make timer handling easier.  The question is if it will be flexible enough?

> Possibly we could have 1 process for loggedout/outgame/lobby and 1 process for ingame, so we can chat while playing. > The original 'timeout' seems out of place, only use case I see for it is if the process is idle to hibernate it, or to kill the process after a certain time, or other similar. > > So to summarize, gen_statem allows us to guarantee state transitions even if other messages arrive to mailbox.  Thus if we were to implement this code using processes or gen_server we > would be recreating a lot of gen_statem.  Using the handle_event_function callback mode allows us to have this kind of flexible gen_statem. > > This is my final argument in support of named_timeout.  Thank you for your attention.

Your case is certainly worth considering.
We'll start with completing state_timeout and then we will see... 
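For completeness, one possible shape of that named_timeout/3 helper, keeping the references under a named_timeout map in Data.  This is just a sketch under the assumption that Data starts out with named_timeout => #{}; the companion forget_named_timeout/2 name is likewise made up:

named_timeout(Time, Name, #{named_timeout := Refs} = Data) ->
    %% Cancel any previous timer with the same name, then (re)start it
    %% and remember the new reference.
    case Refs of
        #{Name := OldRef} -> _ = erlang:cancel_timer(OldRef);
        _ -> ok
    end,
    Ref = erlang:start_timer(Time, self(), Name),
    Data#{named_timeout := Refs#{Name => Ref}}.

forget_named_timeout(Name, #{named_timeout := Refs} = Data) ->
    %% Drop the reference once its timeout event has been consumed, so
    %% stale refs do not accumulate in Data.
    Data#{named_timeout := maps:remove(Name, Refs)}.

With this shape, the call made after the {built,BuildUUID} timeout has fired would be forget_named_timeout(Built, D) rather than another start, which is exactly the kind of bookkeeping a built-in named_timeout action would hide.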
-- / Raimo Niskanen, Erlang/OTP, Ericsson AB From vans_163@REDACTED Fri Oct 7 16:38:58 2016 From: vans_163@REDACTED (Vans S) Date: Fri, 7 Oct 2016 14:38:58 +0000 (UTC) Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <20161007084216.GA26486@erix.ericsson.se> References: <2095514048.8665120.1475505032853@mail.yahoo.com> <20161004152453.GA17981@erix.ericsson.se> <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> <20161005134442.GB37632@erix.ericsson.se> <1733027913.10116944.1475685671184@mail.yahoo.com> <20161006093047.GA92680@erix.ericsson.se> <1630608036.2221527.1475768815304@mail.yahoo.com> <20161007084216.GA26486@erix.ericsson.se> Message-ID: <1920865962.97615.1475851138431@mail.yahoo.com> > Your mail client really messed the code up so I had to reformat it > before reading it... Il make sure to send email as plain text. > handle_event(internal, {timeout,TimerRef,keep_alive, S, #{named_timeout := #{keep_alive := TimerRef}} = D) -> I looked at timer functions and something interesting is I don't see any way to retrieve the remaining time of a TimerRef, nor any way to bump it. Maybe I missed something? Another possible option is {named_timeout, incr|decr|set, 15000, {built, BuildUUID}}, so the 3 size tuple implies set. This does not solve referencing the time thought. If named_timeout/n returns a TimerRef, we save that in our state, and there is a way to incr/decr/set/time_remaining it. I think then this is ideal. Maybe also expose named_timeout/2 which will be simpler and return {named_timeout, 15000, Name}. For those who do not want to track TimerRef. A possible extra consideration is if we want to pass extra non-key data for the timer callback. Right now we can only pass Name, and we consider Name the timer key. What if we want something like this {named_timeout, 15000, {built, BuildUUID, BuildingUniqueState}}. This way we MAY gain some extra flexibility. I can't think of a practical use case now, but there may be. Now we cannot cancel this timer. As BuildingUniqueState is not referenced anywhere except inside the callback. Making a 4 size tuple can remedy this. I am not sure where this fits thought. Anyways this is another consideration that might have a use. {named_timeout, 15000, {built, BuildUUID}, BuildingUniqueState} On Friday, October 7, 2016 4:42 AM, Raimo Niskanen wrote: On Thu, Oct 06, 2016 at 03:46:55PM +0000, Vans S wrote: > Okay going to take another attempt at it. In evented mode. > > Some points before we get into it. gen_statem fits here because it ensures 2 key points. > > #1 If we get any message while in waiting_socket state, we should not process it / crash / drop it. > #2 We need to transition out of waiting_socket state, before processing future messages. > > This code might be off as I did not test it, so treat as pseudo code. > Your mail client really messed the code up so I had to reformat it before reading it... init() -> {ok, waiting_socket, #{}, [{state_timeout, 2000, no_socket_passed}]}. 
%% Timeouts %% Crash here or handle ourselves handle_event(state_timeout, no_socket_passed, waiting_socket, D) -> {stop, {shutdown, tcp_closed}, D}; %% Crash here or handle, we should NEVER get any message here handle_event(_, _, waiting_socket, D) -> {stop, {shutdown, tcp_closed}, D}; handle_event(named_timeout, endpoint_timeout, _, D) -> {stop, {shutdown, tcp_closed}, D}; %% Basic socket ops handle_event(info, {pass_socket, Socket}, waiting_socket, D) -> {next_state, established, D#{socket=> Socket}, [{named_timeout, 15000, keep_alive}, {named_timeout, 120000, endpoint_timeout}]}; handle_event(named_timeout, keep_alive, S, D) -> send_keepalive(), {next_state, S, D, [{named_timeout, 15000, keep_alive}]}; %% Login chain %% Moves our state to 'ingame' if success %% Not implemented in example %% Ingame handle_event(info, {tcp, Socket, Bin}, ingame, D) -> case proc_on_bin(Bin) of {build, Building, X, Y} -> {BuildUUID, BuildTime} = get_build_time(Building), register_build_uuid(BuildUUID, Building, X, Y), %% Make some helper functions to make this cleaner % A cleaner option is to send a message to mailbox % after parsing packet {next_state, ingame, D, [{named_timeout, BuildTime, {built, BuildUUID}}, {named_timeout, 15000, keep_alive}, {named_timeout, 120000, endpoint_timeout}]}; {cancel_build, BuildUUID} -> unregister_build_uuid(BuildUUID), {next_state, ingame, D, [ {named_timeout, infinity, BuildUUID}, {named_timeout, 15000, keep_alive}, {named_timeout, 120000, endpoint_timeout} ]} end; handle_event(named_timeout, {built, BuildUUID}, ingame, D) -> Socket = maps:get(socket, D), notify_client_building_upgraded(Socket, BuildUUID), {next_state, ingame, D}; > > > To me this named_timeout really fits this model, keeps the code clean, keeps it all in one place. state_timeout helps us correctly transition our states from outgame/ingame/lobby/etc. I also like the look of the named timeouts in that code. It is quite readable. How to bump a timer is not solved, but I think needs to be solved for state_timeout anyway... Solving it with erlang:start_timer/3,4 might look like this (selected parts): handle_event( internal, {timeout,TimerRef,keep_alive, S, #{named_timeout := #{keep_alive := TimerRef}} = D) -> send_keepalive(), {next_state, S, named_timeout(1500, keep_alive, D)}; handle_event(info, {tcp,Socket,Bin}, ingame, D) -> ... {next_state, ingame, named_timeout( BuildTime, {built,BuildUUID}, named_timeout( 15000, keep_alive, named_timeout, 120000, endpoint_timeout, D)}; handle_event( internal, {timeout,TimerRef,{built,BuildUUID} = Built}, S, D) -> case D of #{named_timeout := #{Built := TimerRef}} -> Socket = maps:get(socket, D), notify_client_building_upgraded(Socked, BuildUUID), {next_state, ingame, named_timeout(TimerRef, Built, D)}; _ -> %% What to do with unexpected timeout... end; The named_timeout/3 helper function remains to be written, matching the timer event is much messier, and you need to remember to remove the TimerRef from the state when you get the timeout event... So, named_timeout would make timer handling easier. The question is if it will be flexible enough? > Possibly we could have 1 process for loggedout/outgame/lobby and 1 process for ingame, so we can chat while playing. > The original 'timeout' seems out of place, only use case I see for it is if the process is idle to hibernate it, or to kill the process after a certain time, or other similar. > > So to summarize, gen_statem allows us to guarantee state transitions even if other messages arrive to mailbox. 
Thus if we were to implement this code using processes or gen_server we > would be recreating a lot of gen_statem. Using the handle_event_function callback mode allows us to have this kind of flexible gen_statem. > > This is my final argument in support of named_timeout. Thank you for your attention. Your case is certainly worth considering. We'll start with completing state_timeout and then we will see... -- / Raimo Niskanen, Erlang/OTP, Ericsson AB _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions From ingela.andin@REDACTED Fri Oct 7 16:56:09 2016 From: ingela.andin@REDACTED (Ingela Andin) Date: Fri, 7 Oct 2016 16:56:09 +0200 Subject: [erlang-questions] erlang and dtls (in webrtc) In-Reply-To: <68b52142-7789-6825-c6ba-45027c11ccdd@tpip.net> References: <68b52142-7789-6825-c6ba-45027c11ccdd@tpip.net> Message-ID: Hi! A lot of the changes are already in maint. Forinstance I refactored connection states to be maps to allow diffrent "fields" for TLS and DTLS. I am in the middle of implementing retransmissions and I will share when the code is a bit less "in between". Regards Ingela 2016-10-07 10:37 GMT+02:00 Andreas Schultz : > Hi Ingela, > > On 10/07/2016 09:59 AM, Ingela Andin wrote: > >> Hi Max! >> >> I have some commits not yet visible in github also taking steps forward. >> If you have some git repo that I can look at I can see if we can >> get some use >> part of your solution too, maybe making it a PR. I am hoping to have some >> thing runnable soon and I have included some of Andreas Schultz >> ideas and some of my own. >> It has some priority now for Ericsson too :) >> > > That is great news. Since there are now so many people that would like to > play and hopefully help with DTLS, could you share the current state of > your DTLS work? > > Andreas > > >> Regards Ingela Erlang/OTP team - Ericsson AB >> >> 2016-10-07 9:44 GMT+02:00 austin aigbe > eshikafe@REDACTED>>: >> >> Good news and well done. >> A pull request to the existing dtls code base will be the fastest way >> to help. >> >> On Thu, Oct 6, 2016 at 3:16 PM, Max Lapshin > > wrote: >> >> >> Hi. >> >> We have successully implemented WebRTC in Flussonic and it is >> already working with Chrome and Firefox >> >> It means that we have working DTLS. It is not 100% doing what >> DTLS must do, but it works now. >> >> >> I know that there is a plan to do it in OTP team, but maybe we >> can help to add it faster? >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions < >> http://erlang.org/mailman/listinfo/erlang-questions> >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions < >> http://erlang.org/mailman/listinfo/erlang-questions> >> >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From raimo+erlang-questions@REDACTED Fri Oct 7 17:10:11 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Fri, 7 Oct 2016 17:10:11 +0200 Subject: [erlang-questions] gen_statem and multiple timeout vs 1 In-Reply-To: <1920865962.97615.1475851138431@mail.yahoo.com> References: <84499889.9421135.1475597308543@mail.yahoo.com> <20161004193947.GE5083@fhebert-ltm2.internal.salesforce.com> <269322506.4361336.1475614279739@mail.yahoo.com> <20161005035237.GF5083@fhebert-ltm2.internal.salesforce.com> <20161005134442.GB37632@erix.ericsson.se> <1733027913.10116944.1475685671184@mail.yahoo.com> <20161006093047.GA92680@erix.ericsson.se> <1630608036.2221527.1475768815304@mail.yahoo.com> <20161007084216.GA26486@erix.ericsson.se> <1920865962.97615.1475851138431@mail.yahoo.com> Message-ID: <20161007151011.GB61103@erix.ericsson.se> On Fri, Oct 07, 2016 at 02:38:58PM +0000, Vans S wrote: > > Your mail client really messed the code up so I had to reformat it > > before reading it... > > > Il make sure to send email as plain text. :-) > > > > handle_event(internal, {timeout,TimerRef,keep_alive, S, #{named_timeout := #{keep_alive := TimerRef}} = D) -> > > I looked at timer functions and something interesting is I don't see any way to retrieve the remaining time of a TimerRef, > nor any way to bump it. Maybe I missed something? That is right. With the gen_* model it is not possible to reach the engine state from the callback code other than by returning actions, and they can not return values back to the callback code. But you can bump a timer by restarting it, which is normally what you want when you talk about bumping a timer. That is: start the timer with the time I say from now and cancel the running timer. > > > Another possible option is {named_timeout, incr|decr|set, 15000, {built, BuildUUID}}, so the 3 size tuple implies set. Yes. But how useful is incrementing and decrementing a running timer? > > This does not solve referencing the time thought. Nope. You need to hold the TimerRef in the callback state for this. Use erlang:start_timer/3,4. > > If named_timeout/n returns a TimerRef, we save that in our state, and there is a way to incr/decr/set/time_remaining it. > I think then this is ideal. An action can not return a value, as I explained above. If you want a TimerRef to save in the callback state - use erlang:start_timer/3,4. Read the manual of erlang:cancel_timer/2,3: it returns the time left so you can then calculate a new timer time to set. This API also allows you to set timeouts on absolute times instead of relative... There is little reason to implement wrappers for this. > > Maybe also expose named_timeout/2 which will be simpler and return {named_timeout, 15000, Name}. > For those who do not want to track TimerRef. A function called from the callback module can not affect the engine state. It has to be done from an action that is processed by the engine after the callback module returns from the state function. You seem to desire a timer helper library... > > > A possible extra consideration is if we want to pass extra non-key data for the timer callback. Right now we can only pass > Name, and we consider Name the timer key. What if we want something like this {named_timeout, 15000, {built, BuildUUID, BuildingUniqueState}}. Using erlang:start_timer/3,4 you can accomplish this. > > This way we MAY gain some extra flexibility. I can't think of a practical use case now, but there may be. Now we cannot cancel this timer. 
As > BuildingUniqueState is not referenced anywhere except inside the callback. Making a 4 size tuple can remedy this. I am not sure > where this fits thought. Anyways this is another consideration that might have a use. > > > {named_timeout, 15000, {built, BuildUUID}, BuildingUniqueState} That is possible. {named_timeout, Time, Name, Msg} with the call signature StateName({named_timeout,Name}, Msg, Data) But I think often it is just annoying with the Msg term in {state_timeout,T,Msg} and {timeout,T,Msg}. So not having it in {named_timeout,T,Name} is either a blessing or an anomaly... Or it is with named_timer the Msg may actually be useful. :-) -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From mjtruog@REDACTED Sat Oct 8 03:45:01 2016 From: mjtruog@REDACTED (Michael Truog) Date: Fri, 07 Oct 2016 18:45:01 -0700 Subject: [erlang-questions] [ANN] CloudI 1.5.4 Released! Message-ID: <57F84F9D.9020309@gmail.com> Download 1.5.4 from http://sourceforge.net/projects/cloudi/files/latest/download (checksums at the bottom of this email) CloudI (http://cloudi.org/) is a "universal integrator" using an Erlang core to provide fault-tolerance with efficiency and scalability. The CloudI API provides a minimal interface to communicate among services so programming language agnostic and protocol agnostic integration can occur. CloudI currently integrates with the programming languages C/C++, Elixir, Erlang, Java, JavaScript/node.js, PHP, Perl, Python, and Ruby, while including many reusable services that rely on the CloudI service bus. The details for this release are below: * An important bugfix was added for internal services that use both CloudI service requests and info messages at high request rates with the duo_mode service configuration option set to true (service state returned from the info message handling wasn't always saved previously under these conditions) * Request rate testing was done both with and without the persistence of service request data (using cloudi_service_queue in destination mode) as described at http://cloudi.org/faq.html#5_LoadTesting * The 'nice' and 'cgroup' service configuration options were added for external services * The logging_set CloudI Service API function was added to allow multiple logging modifications to occur dynamically (similar to nodes_set for CloudI nodes configuration) * Bugs were fixed and other improvements were added (see the ChangeLog for more detail) Please mention any problems, issues, or ideas! Thanks, Michael SHA256 CHECKSUMS cloudi-1.5.4.tar.gz (14325429 bytes) c1307d3d7d6676c60d2ab75114c56c2790e0b8217b695f77c0da067d2d8eb556 cloudi-1.5.4.tar.bz2 (11732493 bytes) 71b77846544777eeab6a3ecf36b433ce3a4afeaf71904e9ef7a5dfdac913c8ec From mjtruog@REDACTED Sat Oct 8 06:33:54 2016 From: mjtruog@REDACTED (Michael Truog) Date: Fri, 07 Oct 2016 21:33:54 -0700 Subject: [erlang-questions] [ANN] CloudI 1.5.4 Released! In-Reply-To: <57F84F9D.9020309@gmail.com> References: <57F84F9D.9020309@gmail.com> Message-ID: <57F87732.5030103@gmail.com> There was a platform-related build-system issue that required an update in the 1.5.4 release, so the downloadable files have changed to have the different checksums below. I regret the extra email, it was a reminder to not do releases on Friday. 
There were various improvements to cloudi_service_queue which I neglected to mention: * persisted service requests will not be corrupted due to a partial write * retry_delay configuration argument for service name destinations that appear infrequently or periodically as consumers of service requests UPDATED SHA256 CHECKSUMS cloudi-1.5.4.tar.gz (14326898 bytes) b849d373578d488fd107fdc5ca29a97d48da979bb6066f1b78183151dddca21f cloudi-1.5.4.tar.bz2 (11732026 bytes) b214dd3639e113147c2be09333f7350cb3dd12e677cb118fcc0d3879df214ae1 On 10/07/2016 06:45 PM, Michael Truog wrote: > Download 1.5.4 from > http://sourceforge.net/projects/cloudi/files/latest/download > (checksums at the bottom of this email) > > CloudI (http://cloudi.org/) is a "universal integrator" using an > Erlang core to > provide fault-tolerance with efficiency and scalability. The CloudI API > provides a minimal interface to communicate among services so programming > language agnostic and protocol agnostic integration can occur. > CloudI currently integrates with the programming languages > C/C++, Elixir, Erlang, Java, JavaScript/node.js, PHP, Perl, Python, > and Ruby, > while including many reusable services that rely on the CloudI service > bus. > > The details for this release are below: > * An important bugfix was added for internal services that use > both CloudI service requests and info messages at high request > rates with the duo_mode service configuration option set to true > (service state returned from the info message handling wasn't > always saved previously under these conditions) > * Request rate testing was done both with and without the persistence > of service request data (using cloudi_service_queue in destination > mode) > as described at http://cloudi.org/faq.html#5_LoadTesting > * The 'nice' and 'cgroup' service configuration options were added for > external services > * The logging_set CloudI Service API function was added to allow > multiple logging modifications to occur dynamically > (similar to nodes_set for CloudI nodes configuration) > * Bugs were fixed and other improvements were added > (see the ChangeLog for more detail) > > Please mention any problems, issues, or ideas! > Thanks, > Michael > > SHA256 CHECKSUMS > cloudi-1.5.4.tar.gz (14325429 bytes) > c1307d3d7d6676c60d2ab75114c56c2790e0b8217b695f77c0da067d2d8eb556 > cloudi-1.5.4.tar.bz2 (11732493 bytes) > 71b77846544777eeab6a3ecf36b433ce3a4afeaf71904e9ef7a5dfdac913c8ec From hans.r.nilsson@REDACTED Tue Oct 11 13:32:09 2016 From: hans.r.nilsson@REDACTED (Hans Nilsson R) Date: Tue, 11 Oct 2016 13:32:09 +0200 Subject: [erlang-questions] Patch package OTP 19.1.3 released Message-ID: <7faec108-817d-b658-96ec-166fab276cc0@ericsson.com> Patch Package: OTP 19.1.3 Git Tag: OTP-19.1.3 Date: 2016-10-11 Trouble Report Id: OTP-13953 Seq num: seq13199 System: OTP Release: 19 Application: ssh-4.3.4 Predecessor: OTP 19.1.2 Check out the git tag OTP-19.1.3, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- ssh-4.3.4 ------------------------------------------------------- --------------------------------------------------------------------- Note! The ssh-4.3.4 application can *not* be applied independently of other applications on an arbitrary OTP 19 installation. 
On a full OTP 19 installation, also the following runtime dependency has to be satisfied: -- stdlib-3.1 (first satisfied in OTP 19.1) --- Fixed Bugs and Malfunctions --- OTP-13953 Application(s): ssh Related Id(s): seq13199 Intermittent ssh ERROR REPORT mentioning nonblocking_sender Full runtime dependencies of ssh-4.3.4: crypto-3.3, erts-6.0, kernel-3.0, public_key-1.1, stdlib-3.1 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4101 bytes Desc: S/MIME Cryptographic Signature URL: From lcastro@REDACTED Tue Oct 11 18:49:08 2016 From: lcastro@REDACTED (Laura M. Castro) Date: Tue, 11 Oct 2016 18:49:08 +0200 Subject: [erlang-questions] [ANN] [Eurocast 2017] Call for papers Message-ID: Hello all, This year's edition of the EUROCAST International Conference, held biennially in Las Palmas de Gran Canaria (Spain), features a new workshop on "Functional concurrency and distribution". If you or someone you know are working on a project that could show the EUROCAST attendance the power of functional programming, please consider submitting a 2-page abstract before OCTOBER 31st. If there's a local user group in your area that wouldn't mind receiving a copy of this call for papers, I would be grateful if you would forward this announcement. Details about the CFP and submission are to be found at: http://eurocast2017.fulp.ulpgc.es, and the description of the workshop is as follows: "The prominence of concurrent implementations and distributed deployments of software systems in all business areas is undeniable, and with the formulation of the 'Internet of Things' it is reaching massive dimensions. At the same time, the functional paradigm is gaining specific weight, and languages such as Erlang, Haskell, Elixir, Scala/Akka, Clojure... are increasing their popularity enormously. This situation provides new opportunities to evaluate the implementation of concurrent and distributed systems at a very large scale, and suggest new problems for the research community to solve. We encourage submissions related to actor-model concurrency, coming from any member of the functional programming community. We encourage submissions with academic significance and research relevance, but also with an industrial background, such as experience reports. We aim attendees to engage in presentations and discussions that will expose them to recent developments on research problems, new techniques and tools, novel applications, and lessons from users' experiences, as well as common areas relevant to the practice of functional concurrent and distributed programming." Thanks for your time and apologies if you receive this more than once! -- Laura M. Castro Universidade da Coru?a http://www.madsgroup.org/staff/laura From aschultz@REDACTED Tue Oct 11 19:25:15 2016 From: aschultz@REDACTED (Andreas Schultz) Date: Tue, 11 Oct 2016 19:25:15 +0200 (CEST) Subject: [erlang-questions] fprof problem in 19.1? Message-ID: <991446198.356467.1476206715518.JavaMail.zimbra@tpip.net> Hi, I'm not sure whether if it's my fault or if something wrong with fprof. 
Running a trace on rather busy system (trace file is 250MB), I get this error (full output): Erlang/OTP 19 [erts-8.1] [source] [64-bit] [async-threads:10] [kernel-poll:false] Eshell V8.1 (abort with ^G) (ergw-gtp-c-node@REDACTED)1> fprof:trace([start, {file, "/tmp/ergw.trace"}, verbose, {procs,all}]). ok (ergw-gtp-c-node@REDACTED)2> fprof:trace([stop]). ok (ergw-gtp-c-node@REDACTED)3> fprof:profile({file, "/tmp/ergw.trace"}). Reading trace data... .................................................. ................................................., .................................................. ................................................., .................................................. ................................................., .................................................. ................................................., .................................................. ...................................... End of erroneous trace! {error,{incorrect_trace_data,fprof,1714, [{trace_ts,#Port<0.692>,send_to_non_existing_process, {inet_reply,#Port<0.692>,ok}, [], {1476,206371,694992}}]}} Any idea what's going on here? Regards Andreas From max.lapshin@REDACTED Tue Oct 11 21:58:42 2016 From: max.lapshin@REDACTED (Max Lapshin) Date: Tue, 11 Oct 2016 22:58:42 +0300 Subject: [erlang-questions] erlang and dtls (in webrtc) In-Reply-To: References: <68b52142-7789-6825-c6ba-45027c11ccdd@tpip.net> Message-ID: https://github.com/flussonic/dtls sorry for dirty and incomplete code, but this works for us in our webrtc implementation. We need only key material from DTLS and this works with Firefox and Chrome (it means that it works with their ideas about selected curves). Hope that it will help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From snar@REDACTED Wed Oct 12 19:11:22 2016 From: snar@REDACTED (Alexandre Snarskii) Date: Wed, 12 Oct 2016 20:11:22 +0300 Subject: [erlang-questions] 19.x <-> 18.x wire protocol incompatibility ? Message-ID: <20161012171122.GA92846@staff.retn.net> Hi! Got into interesting 'does not work' situation while connecting newly deployed c-node (compiled with otp 19.0 libraries) with older 18.3 erlang node: nodes were unable to communicate with messages 2016-10-12 18:43:28.404 [warning] emulator '' got a corrupted external term from '' on distribution channel ... logged on erlang side. After some research, root cause was isolated: C-node was initialized using ei_connect_xinit with random() % 16 for its creation id. In 18.x, Pid of C-node was encoded as ERL_PID_EXT and used only two lower bits of creation id. However, in 19.x there is a new Pid wire presentation, ERL_NEW_PID_EXT, which encodes all 32 bits of creation id. Worse yet, this presentation is automatically selected for encoding any Pid with creation id > 3, and, as this presentation is not known by older 18.x nodes, this leads to connection drop. Solution: if you expect your C-nodes to communicate with pre-19.x erlang nodes, you may use only values 0..3 for creation id. In my case I just replaced random() % 16 with random() % 4. Question: may be it makes sense to document this wire protocol incompatibility somewhere ? From sverker.eriksson@REDACTED Wed Oct 12 19:54:53 2016 From: sverker.eriksson@REDACTED (Sverker Eriksson) Date: Wed, 12 Oct 2016 19:54:53 +0200 Subject: [erlang-questions] 19.x <-> 18.x wire protocol incompatibility ? 
In-Reply-To: <20161012171122.GA92846@staff.retn.net> References: <20161012171122.GA92846@staff.retn.net> Message-ID: <4225311b-3cf9-3b40-b1bb-d5a831775678@ericsson.com> This is a bug. ei_connect_xinit() should do that masking for you. The idea was to introduce larger creation values in a stepwise protocol compatible way. OTP 19 should understand and forward ERL_NEW_PID_EXT's but never create them. And then in some future major release start using large creation values encoded with ERL_NEW_PID_EXT. From the release notes of 19.0: OTP-13488 Application(s): erl_interface, erts, jinterface Handle terms (pids,ports and refs) from nodes with a 'creation' value larger than 3. This is a preparation of the distribution protocol to allow OTP 19 nodes to correctly communicate with future nodes (20 or higher). The 'creation' value differentiates different incarnations of the same node (name). /Sverker, Erlang/OTP On 10/12/2016 07:11 PM, Alexandre Snarskii wrote: > Hi! > > Got into interesting 'does not work' situation while connecting newly > deployed c-node (compiled with otp 19.0 libraries) with older 18.3 > erlang node: nodes were unable to communicate with messages > > 2016-10-12 18:43:28.404 [warning] emulator '' got a > corrupted external term from '' on distribution channel ... > > logged on erlang side. > > After some research, root cause was isolated: C-node was initialized using > ei_connect_xinit with random() % 16 for its creation id. In 18.x, Pid of > C-node was encoded as ERL_PID_EXT and used only two lower bits of creation id. > However, in 19.x there is a new Pid wire presentation, ERL_NEW_PID_EXT, which > encodes all 32 bits of creation id. Worse yet, this presentation is > automatically selected for encoding any Pid with creation id > 3, and, > as this presentation is not known by older 18.x nodes, this leads to > connection drop. > > Solution: if you expect your C-nodes to communicate with pre-19.x erlang > nodes, you may use only values 0..3 for creation id. In my case I just > replaced random() % 16 with random() % 4. > > Question: may be it makes sense to document this wire protocol incompatibility > somewhere ? > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From lukas@REDACTED Thu Oct 13 09:28:34 2016 From: lukas@REDACTED (Lukas Larsson) Date: Thu, 13 Oct 2016 09:28:34 +0200 Subject: [erlang-questions] fprof problem in 19.1? In-Reply-To: <991446198.356467.1476206715518.JavaMail.zimbra@tpip.net> References: <991446198.356467.1476206715518.JavaMail.zimbra@tpip.net> Message-ID: This seems to be a bug in fprof. There should be a clause handling the send_to_non_existing_process trace event. The same fault can be triggered with using this: 4> fprof:trace([start, {file, "/tmp/ergw.trace"}, verbose, {procs,all}]). ok 5> Pid = spawn(fun() -> ok end). <0.69.0> 6> Pid ! hej. hej 7> fprof:trace([stop]). ok 8> fprof:profile({file, "/tmp/ergw.trace"}). Reading trace data... .......... End of erroneous trace! {error, {incorrect_trace_data,fprof,1714, [{trace_ts,<0.60.0>,send_to_non_existing_process,hej, <0.69.0>, {1476,338902,646909}}]}} To fix it, this clause: https://github.com/erlang/otp/blob/maint/lib/tools/src/fprof.erl#L1700 should to be change to allow send_to_non_existing_process. Feel free to sumbit a PR with the fix, if not I'll do a fix soon (tm). 
Lukas On Tue, Oct 11, 2016 at 7:25 PM, Andreas Schultz wrote: > Hi, > > I'm not sure whether if it's my fault or if something wrong with fprof. > > Running a trace on rather busy system (trace file is 250MB), I get this > error (full output): > > Erlang/OTP 19 [erts-8.1] [source] [64-bit] [async-threads:10] > [kernel-poll:false] > > Eshell V8.1 (abort with ^G) > (ergw-gtp-c-node@REDACTED)1> fprof:trace([start, {file, > "/tmp/ergw.trace"}, verbose, {procs,all}]). > ok > (ergw-gtp-c-node@REDACTED)2> fprof:trace([stop]). > ok > (ergw-gtp-c-node@REDACTED)3> fprof:profile({file, "/tmp/ergw.trace"}). > Reading trace data... > .................................................. > ................................................., > .................................................. > ................................................., > .................................................. > ................................................., > .................................................. > ................................................., > .................................................. > ...................................... > End of erroneous trace! > {error,{incorrect_trace_data,fprof,1714, > [{trace_ts,#Port<0.692>,send_ > to_non_existing_process, > {inet_reply,#Port<0.692>,ok}, > [], > {1476,206371,694992}}]}} > > > Any idea what's going on here? > > Regards > Andreas > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick@REDACTED Thu Oct 13 10:11:06 2016 From: nick@REDACTED (Niclas Eklund) Date: Thu, 13 Oct 2016 10:11:06 +0200 Subject: [erlang-questions] Hangs in prim_inet:close_port/1 Message-ID: <28bd7388-8b30-d2d0-79bc-93da1a876ad9@tail-f.com> Hi! I think I've managed to hit the same issue as discussed in this thread - http://erlang.org/pipermail/erlang-questions/2013-December/076215.html I.e. it hangs in prim_inet:close_port/1 while it has a {tcp_closed,#Port<0.679615>} message in it's message queue. Quoting Bj?rn-Egil - "We are aware of this problem and I think there is a fix to R16B03. I don't see it on maint on GitHub yet." Does anyone which change set fixed this issue? Best regards, Nick From bjorn-egil.xb.dahlberg@REDACTED Thu Oct 13 10:45:11 2016 From: bjorn-egil.xb.dahlberg@REDACTED (=?UTF-8?Q?Bj=c3=b6rn-Egil_Dahlberg_XB?=) Date: Thu, 13 Oct 2016 10:45:11 +0200 Subject: [erlang-questions] Hangs in prim_inet:close_port/1 In-Reply-To: <28bd7388-8b30-d2d0-79bc-93da1a876ad9@tail-f.com> References: <28bd7388-8b30-d2d0-79bc-93da1a876ad9@tail-f.com> Message-ID: <1c4d7ad7-da1a-e64a-ed1c-10c5d7acda48@ericsson.com> On 10/13/2016 10:11 AM, Niclas Eklund wrote: > Hi! > > I think I've managed to hit the same issue as discussed in this thread > - > http://erlang.org/pipermail/erlang-questions/2013-December/076215.html > I.e. it hangs in prim_inet:close_port/1 while it has a > {tcp_closed,#Port<0.679615>} message in it's message queue. Quoting > Bj?rn-Egil - "We are aware of this problem and I think there is a fix > to R16B03. I don't see it on maint on GitHub yet." Does anyone which > change set fixed this issue? 
I believe it's this one: $ git log --oneline | grep prim_inet_close 9f99896 Merge branch 'rickard/prim_inet_close/OTP-11491' into maint // Bj?rn-Egil From nick@REDACTED Thu Oct 13 11:13:52 2016 From: nick@REDACTED (Niclas Eklund) Date: Thu, 13 Oct 2016 11:13:52 +0200 Subject: [erlang-questions] Hangs in prim_inet:close_port/1 In-Reply-To: <1c4d7ad7-da1a-e64a-ed1c-10c5d7acda48@ericsson.com> References: <28bd7388-8b30-d2d0-79bc-93da1a876ad9@tail-f.com> <1c4d7ad7-da1a-e64a-ed1c-10c5d7acda48@ericsson.com> Message-ID: <8b7e82b5-afd8-12a6-4e24-3a85436aeff9@tail-f.com> On 10/13/2016 10:45 AM, Bj?rn-Egil Dahlberg XB wrote: > On 10/13/2016 10:11 AM, Niclas Eklund wrote: >> Hi! >> >> I think I've managed to hit the same issue as discussed in this >> thread - >> http://erlang.org/pipermail/erlang-questions/2013-December/076215.html >> I.e. it hangs in prim_inet:close_port/1 while it has a >> {tcp_closed,#Port<0.679615>} message in it's message queue. Quoting >> Bj?rn-Egil - "We are aware of this problem and I think there is a fix >> to R16B03. I don't see it on maint on GitHub yet." Does anyone which >> change set fixed this issue? > I believe it's this one: > $ git log --oneline | grep prim_inet_close > 9f99896 Merge branch 'rickard/prim_inet_close/OTP-11491' into maint > > // Bj?rn-Egil Hi! Thanks for the input Bj?rn-Egil but that fix is in the version I use. Best regards, Nick > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From arunp@REDACTED Thu Oct 13 12:55:15 2016 From: arunp@REDACTED (ARUN P) Date: Thu, 13 Oct 2016 16:25:15 +0530 Subject: [erlang-questions] Can we use same mnesia schema on different nodes In-Reply-To: <579CBE22.7080206@utl.in> References: <579CBE22.7080206@utl.in> Message-ID: <57FF6813.8070503@utl.in> Hi, I have a hardware device, will call it SCC ( System Control Card ) on which an erlang application is running which uses mnesia as the database. The hardware device is a distributed and portable one and its IP address is dynamically generated based on some physical conditions. ie, My system set up contains 10 Racks and each Rack has SCC connected to it, and based on the Rack on which SCC is connected application will dynamically generate the IP address ie; the IP address of SCC connected on Rack -1 will be 10.1.1.1 and the erlang node name will be SCC@REDACTED and the IP address of SCC connected on Rack -2 will be 10.1.1.2 the erlang node name will be SCC@REDACTED . The problem domain is : In some situations i will be replacing SCC of Rack-1 to Rack-2 and as soon as i replace , IP address of SCC will change to 10.1.1.2 and so the erlang node. In this situation I am unable to access the data stored in mnesia when the SCC was present in Rack-1 and this is because the on Rack-2 application will create a new schema based on the new erlang node name. But the prime objective of my system is data persistence, even though I replace the device on any Rack, I should be able to access the previously stored data. Can anybody kindly suggest me how to solve this issue. - All the tables and schema created are disc copy . Thanks in advance, Arun P From francesco@REDACTED Thu Oct 13 12:59:18 2016 From: francesco@REDACTED (Francesco Cesarini) Date: Thu, 13 Oct 2016 11:59:18 +0100 Subject: [erlang-questions] OSCON 2017 (Austin Texas) Call for Talks Closes Soon. 
Message-ID: Hi All, every year since 2008, Erlang (and more recently Elixir) has been represented at OSCON through talks, workshops and tutorials. The next edition will be in Austin, Texas May 8th - 11th 2017. It would be great to keep this tradition going.The call for Speakers is now out and closes October 25th. http://conferences.oreilly.com/oscon/oscon-tx/public/cfp/502 Alongside language talks, it would be great to hear more about OSS projects, tools and libraries. There is a genuine buzz and interest around Erlang and Elixir, and it is up to you to keep it going. Areas you should think about are applicability, novelty and interest to the wider developer community. There are many submissions (well over 1000), so think it through and make sure you put in a high quality talk. If you have any questions or thoughts, feel free to reach out. I've been on the Program Committee for a few years now, and can help. Cheers, Francesco From garry@REDACTED Thu Oct 13 16:33:40 2016 From: garry@REDACTED (Garry Hodgson) Date: Thu, 13 Oct 2016 10:33:40 -0400 Subject: [erlang-questions] establishing preconditions at startup Message-ID: I'm working on an application whose startup phase relies on some potentially lengthy setup of tunnels and such before we can start up our ranch listeners and start regular communications. I've thought of a number of approaches (application dependencies, supervisors, deferred initialization using handle_info, etc) that could ensure this sequencing, but they feel awkward, like I'm working against the grain. So how do other people handle this? Is there canonical erlangy way to do it? I reflexively avoid doing long running things in otp callbacks, but I expect there are better and worse places/times to do this. Thanks From akat.metin@REDACTED Thu Oct 13 16:46:37 2016 From: akat.metin@REDACTED (Metin Akat) Date: Thu, 13 Oct 2016 17:46:37 +0300 Subject: [erlang-questions] establishing preconditions at startup In-Reply-To: References: Message-ID: I do it with application dependencies and calling the setup from an init function of a gen_server. I don't know if this is considered wrong or not, but I've never had any problems. On Thu, Oct 13, 2016 at 5:33 PM, Garry Hodgson wrote: > I'm working on an application whose startup phase relies on some > potentially lengthy setup of tunnels and such before we can start up our > ranch listeners and start regular communications. I've thought of a number > of approaches (application dependencies, supervisors, deferred > initialization using handle_info, etc) that could ensure this sequencing, > but they feel awkward, like I'm working against the grain. > > So how do other people handle this? Is there canonical erlangy way to do > it? I reflexively avoid doing long running things in otp callbacks, but I > expect there are better and worse places/times to do this. > > Thanks > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dszoboszlay@REDACTED Thu Oct 13 17:03:41 2016 From: dszoboszlay@REDACTED (=?UTF-8?Q?D=C3=A1niel_Szoboszlay?=) Date: Thu, 13 Oct 2016 15:03:41 +0000 Subject: [erlang-questions] Can we use same mnesia schema on different nodes In-Reply-To: <57FF6813.8070503@utl.in> References: <579CBE22.7080206@utl.in> <57FF6813.8070503@utl.in> Message-ID: Hi, When the name of the node changes, you need to repair the Mnesia schema, as it contains the names of the nodes where the tables belong. Before starting Mnesia open the schema.DCD or schema.DAT file (depending on the storage type of your schema). It will be either a disk_log or a dets file, containing {schema, Tab, Opts} entries. Among the options you will find the node(s) where the table shall exist. Simply change the old node name to the new node name there, and save the new schema under its original file name. Then you can start Mnesia and all your data will be there. Cheers, Daniel On Thu, 13 Oct 2016 at 12:55 ARUN P wrote: > Hi, > I have a hardware device, will call it SCC ( System Control Card ) > on which an erlang application is running which uses mnesia as the > database. The hardware device is a distributed and portable one and its > IP address is dynamically generated based on some physical conditions. > ie, My system set up contains 10 Racks and each Rack has SCC connected > to it, and based on the Rack on which SCC is connected application will > dynamically generate the IP address ie; the IP address of SCC connected > on Rack -1 will be 10.1.1.1 and the erlang node name will be > SCC@REDACTED and the IP address of SCC connected on Rack -2 will be > 10.1.1.2 the erlang node name will be SCC@REDACTED . > > The problem domain is : > > In some situations i will be replacing SCC of Rack-1 to Rack-2 and > as soon as i replace , IP address of SCC will change to 10.1.1.2 and so > the erlang node. In this situation I am unable to access the data stored > in mnesia when the SCC was present in Rack-1 and this is because the on > Rack-2 application will create a new schema based on the new erlang node > name. But the prime objective of my system is data persistence, even > though I replace the device on any Rack, I should be able to access the > previously stored data. Can anybody kindly suggest me how to solve this > issue. > > - All the tables and schema created are disc copy . > > Thanks in advance, > Arun P > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arunp@REDACTED Thu Oct 13 17:39:47 2016 From: arunp@REDACTED (ARUN P) Date: Thu, 13 Oct 2016 21:09:47 +0530 Subject: [erlang-questions] Can we use same mnesia schema on different nodes In-Reply-To: References: <579CBE22.7080206@utl.in> <57FF6813.8070503@utl.in> Message-ID: <57FFAAC3.1070101@utl.in> Hi Daniel Thank you for the valuable comment, but i have a doubt, since the dets operations are not transactional in the middle editing schema.DAT file, if my application gets restart, will it corrupt the schema.DAT file and if happens so how can i safe guard the data.? Thanks in advance Arun On Thursday 13 October 2016 08:33 PM, D?niel Szoboszlay wrote: > Hi, > > When the name of the node changes, you need to repair the Mnesia > schema, as it contains the names of the nodes where the tables belong. 
> > Before starting Mnesia open the schema.DCD or schema.DAT file > (depending on the storage type of your schema). It will be either a > disk_log or a dets file, containing {schema, Tab, Opts} entries. Among > the options you will find the node(s) where the table shall exist. > Simply change the old node name to the new node name there, and save > the new schema under its original file name. Then you can start Mnesia > and all your data will be there. > > Cheers, > Daniel > > On Thu, 13 Oct 2016 at 12:55 ARUN P > wrote: > > Hi, > I have a hardware device, will call it SCC ( System Control > Card ) > on which an erlang application is running which uses mnesia as the > database. The hardware device is a distributed and portable one > and its > IP address is dynamically generated based on some physical conditions. > ie, My system set up contains 10 Racks and each Rack has SCC connected > to it, and based on the Rack on which SCC is connected application > will > dynamically generate the IP address ie; the IP address of SCC > connected > on Rack -1 will be 10.1.1.1 and the erlang node name will be > SCC@REDACTED and the IP address of SCC > connected on Rack -2 will be > 10.1.1.2 the erlang node name will be SCC@REDACTED > . > > The problem domain is : > > In some situations i will be replacing SCC of Rack-1 to > Rack-2 and > as soon as i replace , IP address of SCC will change to 10.1.1.2 > and so > the erlang node. In this situation I am unable to access the data > stored > in mnesia when the SCC was present in Rack-1 and this is because > the on > Rack-2 application will create a new schema based on the new > erlang node > name. But the prime objective of my system is data persistence, even > though I replace the device on any Rack, I should be able to > access the > previously stored data. Can anybody kindly suggest me how to solve > this > issue. > > - All the tables and schema created are disc copy . > > Thanks in advance, > Arun P > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From snar@REDACTED Thu Oct 13 17:47:23 2016 From: snar@REDACTED (Alexandre Snarskii) Date: Thu, 13 Oct 2016 18:47:23 +0300 Subject: [erlang-questions] 19.x <-> 18.x wire protocol incompatibility ? In-Reply-To: <4225311b-3cf9-3b40-b1bb-d5a831775678@ericsson.com> References: <20161012171122.GA92846@staff.retn.net> <4225311b-3cf9-3b40-b1bb-d5a831775678@ericsson.com> Message-ID: <20161013154723.GA8919@staff.retn.net> On Wed, Oct 12, 2016 at 07:54:53PM +0200, Sverker Eriksson wrote: > This is a bug. ei_connect_xinit() should do that masking for you. Thanks for clarification. https://github.com/erlang/otp/pull/1202 > The idea was to introduce larger creation values in a stepwise > protocol compatible way. OTP 19 should understand and forward > ERL_NEW_PID_EXT's but never create them. And then in some future > major release start using large creation values encoded > with ERL_NEW_PID_EXT. > > From the release notes of 19.0: > > OTP-13488 Application(s): erl_interface, erts, jinterface > > Handle terms (pids,ports and refs) from nodes with a > 'creation' value larger than 3. This is a preparation > of the distribution protocol to allow OTP 19 nodes to > correctly communicate with future nodes (20 or higher). > The 'creation' value differentiates different > incarnations of the same node (name). 
> > > /Sverker, Erlang/OTP > > > On 10/12/2016 07:11 PM, Alexandre Snarskii wrote: > > Hi! > > > > Got into interesting 'does not work' situation while connecting newly > > deployed c-node (compiled with otp 19.0 libraries) with older 18.3 > > erlang node: nodes were unable to communicate with messages > > > > 2016-10-12 18:43:28.404 [warning] emulator '' got a > > corrupted external term from '' on distribution channel ... > > > > logged on erlang side. > > > > After some research, root cause was isolated: C-node was initialized using > > ei_connect_xinit with random() % 16 for its creation id. In 18.x, Pid of > > C-node was encoded as ERL_PID_EXT and used only two lower bits of creation id. > > However, in 19.x there is a new Pid wire presentation, ERL_NEW_PID_EXT, which > > encodes all 32 bits of creation id. Worse yet, this presentation is > > automatically selected for encoding any Pid with creation id > 3, and, > > as this presentation is not known by older 18.x nodes, this leads to > > connection drop. > > > > Solution: if you expect your C-nodes to communicate with pre-19.x erlang > > nodes, you may use only values 0..3 for creation id. In my case I just > > replaced random() % 16 with random() % 4. > > > > Question: may be it makes sense to document this wire protocol incompatibility > > somewhere ? > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions From achowdhury918@REDACTED Thu Oct 13 19:28:10 2016 From: achowdhury918@REDACTED (Akash Chowdhury) Date: Thu, 13 Oct 2016 13:28:10 -0400 Subject: [erlang-questions] Erlang node core dumps In-Reply-To: <7048f1e6e95f4875a6673e16cb355479@OMZP1LUMXCA16.uswin.ad.vzwcorp.com> References: <7048f1e6e95f4875a6673e16cb355479@OMZP1LUMXCA16.uswin.ad.vzwcorp.com> Message-ID: All, I am using Erlang 19.0 & erts-8.0 I am getting following core dump (log when core was loaded using gdb) Core was generated by `/apps/erlang/19.0/lib/erlang/erts-8.0/bin/beam.smp -K true -P 2000001 -A 15 -W'. Program terminated with signal SIGKILL, Killed. #0 0xffffffff7dddc1a0 in __pollsys () from /lib/64/libc.so.1 [Current thread is 148 (Thread 1 (LWP 1))] (gdb) where #0 0xffffffff7dddc1a0 in __pollsys () from /lib/64/libc.so.1 #1 0xffffffff7ddcaf54 in _pollsys () from /lib/64/libc.so.1 #2 0xffffffff7dd75f7c in pselect () from /lib/64/libc.so.1 #3 0xffffffff7dd76320 in select () from /lib/64/libc.so.1 #4 0x00000001001ba400 in erts_sys_main_thread () at sys/unix/sys.c:1409 #5 0x0000000100079f2c in erl_start (argc=52, argv=0xfffffffffffffffe) at beam/erl_init.c:2259 #6 0x000000010002bf4c in main (argc=54, argv=0xffffffff7ffff3f8) at sys/unix/erl_main.c:30 Erlang_crash_dump is not generated. Does anyone has experience about this core? What can be the reason of this core dump? Any suggestion from anyone will be highly appreciated. Thanks. - Akash -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesper.louis.andersen@REDACTED Thu Oct 13 22:21:30 2016 From: jesper.louis.andersen@REDACTED (Jesper Louis Andersen) Date: Thu, 13 Oct 2016 20:21:30 +0000 Subject: [erlang-questions] establishing preconditions at startup In-Reply-To: References: Message-ID: On Thu, Oct 13, 2016 at 4:34 PM Garry Hodgson wrote: > So how do other people handle this? 
Is there canonical erlangy way to do > it? I reflexively avoid doing long running things in otp callbacks, but > I expect there are better and worse places/times to do this. > > What happens if your app does its correct setup, however way you decide to do that; runs for 3 days and then, suddenly, your tunnels go down? Surely, your app better handle this situation in some way or the other, or the system will not give service in the the way it's supposed to do. This tend to lead to observation #1: system startup is but a special case of running with degraded service. Answer how you want degraded service to run and this will go a long way to explain how your system should behave in its startup phase. What you can tolerate in a degraded service mode depends on the application. In some circumstances, you can get away with closing down listening sockets for a while, in others you have ways to tell the other end that the connection it just got can't be served at the moment because there is a situation in your end. In some situations you can even continue giving service, but note to the other end certain replies are guesses because the real system is down at the moment. Observation #2: Supervisor trees maintain invariants of your system. What invariants you want to maintain is application dependent. But in my experience, it is often the case that some invariants are easier to maintain than others. A strong invariant such as "we have a working tunnel" is hard to maintain because it involves other distributed systems over which we have no control. A weaker invariant such as "either, there is no connection at the moment, we have tried to establish one for N milliseconds, or there is a connection" is easier to maintain in the supervision tree. One particular problem with a strong invariant is that it may give you a situation where your supervision tree is not fully constructed because it waits on a tunnel (which never happen). Methods I've used with success: * Have a gen_event manager to track the current state of the system. Use this to enable/disable service based on tunnel availability. * Use gproc, for the same thing. This is what e.g., https://github.com/shopgun/turtle does (written by yours truly together with the other nice people working at Shopgun) * Employ a circuit breaker and use its state to maintain the tunnel. * Use ETS * Use plain old messaging when state changes happen The key point is that you will have to handle partially degraded service when things start going wrong anyway. So you need to maintain that in a robust fashion. Once you understand how to maintain that robustly, it is often going to guide a natural path for the startup situation. The same kind of message "tunnel X we depend on just went away" or "tunnel X we depend on just came back" is the natural state changes in the application which should be used. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gomoripeti@REDACTED Fri Oct 14 00:13:15 2016 From: gomoripeti@REDACTED (=?UTF-8?B?UGV0aSBHw7Ztw7ZyaQ==?=) Date: Fri, 14 Oct 2016 00:13:15 +0200 Subject: [erlang-questions] dialyzer error on fun2ms output Message-ID: Hi list, I noticed that the below code snippet although compiles and works fine results in a dialyzer error (on for example OTP 19.1) ``` -module(ms). -export([t/0]). -include_lib("stdlib/include/ms_transform.hrl"). t() -> MS = dbg:fun2ms(fun(All) -> message(All) end), erlang:trace_pattern({m, f, '_'}, MS). 
``` and the error is ms.erl:9: The call erlang:trace_pattern({'m', 'f', '_'},MS::[{'$1',[],[{'message','$1'},...]},...]) breaks the contract (MFA,MatchSpec) -> non_neg_integer() when MFA :: trace_pattern_mfa() | 'send' | 'receive', MatchSpec :: MatchSpecList::trace_match_spec() | boolean() | 'restart' | 'pause' The reason is that dbg:fun2ms generates the match-spec: [{'$1',[],[{message,'$1'}]}] But the type as seen in erlang.erl or erts_internal.erl only allows a list or the wildcard atom '_' as the match-spec head. -type trace_match_spec() :: [{[term()] | '_' ,[term()],[term()]}]. The match-spec grammar definition in http://erlang.org/doc/apps/erts/match_spec.html clearly allows a match variable as head but I understand it is impossible to express '$' as a proper erlang type. I'm not sure if ms_transform should be modified to always generate match-specs according to the type spec or rather the type-spec should be adjusted to allow all legal match specifications (eg by adding atom() to the match head type) thanks Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From arunp@REDACTED Fri Oct 14 08:33:49 2016 From: arunp@REDACTED (ARUN P) Date: Fri, 14 Oct 2016 12:03:49 +0530 Subject: [erlang-questions] Can we use same mnesia schema on different nodes In-Reply-To: <57FFAAC3.1070101@utl.in> References: <579CBE22.7080206@utl.in> <57FF6813.8070503@utl.in> <57FFAAC3.1070101@utl.in> Message-ID: <58007C4D.6050707@utl.in> > Hi Daniel > > Thank you for the valuable comment, but i have a doubt, since the > dets operations are not transactional in the middle editing schema.DAT > file, if my application gets restart, will it corrupt the schema.DAT > file and if happens so how can i safe guard the data.? > > Thanks in advance > Arun > > On Thursday 13 October 2016 08:33 PM, D?niel Szoboszlay wrote: >> Hi, >> >> When the name of the node changes, you need to repair the Mnesia >> schema, as it contains the names of the nodes where the tables belong. >> >> Before starting Mnesia open the schema.DCD or schema.DAT file >> (depending on the storage type of your schema). It will be either a >> disk_log or a dets file, containing {schema, Tab, Opts} entries. >> Among the options you will find the node(s) where the table shall >> exist. Simply change the old node name to the new node name there, >> and save the new schema under its original file name. Then you can >> start Mnesia and all your data will be there. >> >> Cheers, >> Daniel >> >> On Thu, 13 Oct 2016 at 12:55 ARUN P > > wrote: >> >> Hi, >> I have a hardware device, will call it SCC ( System Control >> Card ) >> on which an erlang application is running which uses mnesia as the >> database. The hardware device is a distributed and portable one >> and its >> IP address is dynamically generated based on some physical >> conditions. >> ie, My system set up contains 10 Racks and each Rack has SCC >> connected >> to it, and based on the Rack on which SCC is connected >> application will >> dynamically generate the IP address ie; the IP address of SCC >> connected >> on Rack -1 will be 10.1.1.1 and the erlang node name will be >> SCC@REDACTED and the IP address of SCC >> connected on Rack -2 will be >> 10.1.1.2 the erlang node name will be SCC@REDACTED >> . >> >> The problem domain is : >> >> In some situations i will be replacing SCC of Rack-1 to >> Rack-2 and >> as soon as i replace , IP address of SCC will change to 10.1.1.2 >> and so >> the erlang node. 
In this situation I am unable to access the data >> stored >> in mnesia when the SCC was present in Rack-1 and this is because >> the on >> Rack-2 application will create a new schema based on the new >> erlang node >> name. But the prime objective of my system is data persistence, even >> though I replace the device on any Rack, I should be able to >> access the >> previously stored data. Can anybody kindly suggest me how to >> solve this >> issue. >> >> - All the tables and schema created are disc copy . >> >> Thanks in advance, >> Arun P >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hans.r.nilsson@REDACTED Fri Oct 14 10:13:03 2016 From: hans.r.nilsson@REDACTED (Hans Nilsson R) Date: Fri, 14 Oct 2016 10:13:03 +0200 Subject: [erlang-questions] Patch package OTP 19.1.4 released Message-ID: <9c71f0cf-18ff-2649-0ded-051d4468330a@ericsson.com> Patch Package: OTP 19.1.4 Git Tag: OTP-19.1.4 Date: 2016-10-14 Trouble Report Id: OTP-13966 Seq num: System: OTP Release: 19 Application: ssh-4.3.5 Predecessor: OTP 19.1.3 Check out the git tag OTP-19.1.4, and build a full OTP system including documentation. Apply one or more applications from this build as patches to your installation using the 'otp_patch_apply' tool. For information on install requirements, see descriptions for each application version below. --------------------------------------------------------------------- --- ssh-4.3.5 ------------------------------------------------------- --------------------------------------------------------------------- Note! The ssh-4.3.5 application can *not* be applied independently of other applications on an arbitrary OTP 19 installation. On a full OTP 19 installation, also the following runtime dependency has to be satisfied: -- stdlib-3.1 (first satisfied in OTP 19.1) --- Fixed Bugs and Malfunctions --- OTP-13966 Application(s): ssh If a client illegaly sends an info-line and then immediatly closes the TCP-connection, a badmatch exception was raised. Full runtime dependencies of ssh-4.3.5: crypto-3.3, erts-6.0, kernel-3.0, public_key-1.1, stdlib-3.1 --------------------------------------------------------------------- --------------------------------------------------------------------- --------------------------------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4101 bytes Desc: S/MIME Cryptographic Signature URL: From t6sn7gt@REDACTED Fri Oct 14 19:48:49 2016 From: t6sn7gt@REDACTED (Donald Steven) Date: Fri, 14 Oct 2016 13:48:49 -0400 Subject: [erlang-questions] Help please with list syntax Message-ID: <3155cbd1-f7e3-4ec2-c52b-92ca07d0e1d7@aim.com> Hi all, I'm sorry to ask such a simple question, but "I've tried everything" and I'm still getting nowhere. ---------------------------------------------------------------------------------- The intent is simply at create a list 'L' of (MIDI) velocities, of length 'Events'. 
The calling function is: VelocityL = makeVelocityL(Events, SoftestNote, LoudestNote, []), which should return something like (if 'Events' were to equal 3): [40, 42, 91] I'm testing this by following the above line of code with: io:format("VelocityL ~p~n", [VelocityL]), but this produces nonsense like: "(*[" (including the double quotes at either end ---------------------------------------------------------------------------------- The function that is called is: makeVelocityL( 0, _, _, L) -> lists:reverse(L); makeVelocityL(Events, SoftestNote, LoudestNote, L) -> io:format("(for testing purposes) Events: ~p, SoftestNote: ~p, LoudestNote: ~p, L: ~p~n", [Events, SoftestNote, LoudestNote, L]), Velocity = trunc(rand:uniform() * (LoudestNote - SoftestNote)) + SoftestNote, io:format("(For testing purposes) Velocity: ~p~n", [Velocity]), makeVelocityL(Events - 1, SoftestNote, LoudestNote, [Velocity | L]). and the output of the io:format statements is: (for testing purposes) Events: 3, SoftestNote: 36, LoudestNote: 96, L: [] (For testing purposes) Velocity: 40 (for testing purposes) Events: 2, SoftestNote: 36, LoudestNote: 96, L: "(" (For testing purposes) Velocity: 42 (for testing purposes) Events: 1, SoftestNote: 36, LoudestNote: 96, L: "*(" (For testing purposes) Velocity: 91 so clearly the values are OK but 'L' is nonsense. So, the problem must be in the recursion statement: makeVelocityL(Events - 1, SoftestNote, LoudestNote, [Velocity | L]). but I've tried every combination and order of | and ++ to no avail. ---------------------------------------------------------------------------------- Your help would be greatly appreciated. Thanks. Don From erlang.org@REDACTED Fri Oct 14 19:57:20 2016 From: erlang.org@REDACTED (Stanislaw Klekot) Date: Fri, 14 Oct 2016 19:57:20 +0200 Subject: [erlang-questions] Help please with list syntax In-Reply-To: <3155cbd1-f7e3-4ec2-c52b-92ca07d0e1d7@aim.com> References: <3155cbd1-f7e3-4ec2-c52b-92ca07d0e1d7@aim.com> Message-ID: <20161014175720.GA25867@jarowit.net> On Fri, Oct 14, 2016 at 01:48:49PM -0400, Donald Steven wrote: > The intent is simply at create a list 'L' of (MIDI) velocities, of > length 'Events'. The calling function is: > > VelocityL = makeVelocityL(Events, SoftestNote, LoudestNote, []), > > which should return something like (if 'Events' were to equal 3): > [40, 42, 91] > > I'm testing this by following the above line of code with: > > io:format("VelocityL ~p~n", [VelocityL]), > > but this produces nonsense like: "(*[" (including the double quotes > at either end ~p format tries to write strings as strings, and since string is just a list of small integers, it matches what you have there. In fact, 40 stands for '(', 42 is '*', and 91 is '['. You can check yourself: #v+ 1> [40, 42, 91] == "(*[". true #v- What you want probably is to use ~w format, since it prints lists of integers as just lists of integers. -- Stanislaw Klekot From per@REDACTED Fri Oct 14 23:37:22 2016 From: per@REDACTED (Per Hedeland) Date: Fri, 14 Oct 2016 23:37:22 +0200 Subject: [erlang-questions] Erlang node core dumps In-Reply-To: References: <7048f1e6e95f4875a6673e16cb355479@OMZP1LUMXCA16.uswin.ad.vzwcorp.com> Message-ID: <9e3e2cc2-fdf2-6d0e-5bb3-1f7ae692ccee@hedeland.org> On 2016-10-13 19:28, Akash Chowdhury wrote: > > I am getting following core dump (log when core was loaded using gdb) > > > > Core was generated by `/apps/erlang/19.0/lib/erlang/erts-8.0/bin/beam.smp > -K true -P 2000001 -A 15 -W'. > > Program terminated with signal SIGKILL, Killed. 
SIGKILL is, when not the result of a frustrated and/or incompetent user, the hallmark of the Linux OOM (Out Of Memory) killer, called upon when the Linux kernel's habit of giving out more memory than it actually has available comes back to bite it in the *ss, and obliging by randomly killing perfectly innocent processes. Sort of... But there is something very wrong here, since SIGKILL should *not* generate a core dump, and since it is uncatchable, there's no way an application such as the Erlang VM can *make* it generate a core dump. Thus I'm leaning towards a comment from a quick googling, which I from experience know to have some merit: "Don't trust gdb". I.e. the core dump may actually be somehow corrupt, and the SIGKILL info simply wrong. In any case *if* you are running Linux, and the OOM killer hit you, it should be logged in /var/log/messages or the like. > #0 0xffffffff7dddc1a0 in __pollsys () from /lib/64/libc.so.1 > > #1 0xffffffff7ddcaf54 in _pollsys () from /lib/64/libc.so.1 > > #2 0xffffffff7dd75f7c in pselect () from /lib/64/libc.so.1 > > #3 0xffffffff7dd76320 in select () from /lib/64/libc.so.1 Hm, select() - I guess you are *not* running Linux? Anyway, this does not seem to me to be an issue with the Erlang VM, but rather with your OS / system. --Per Hedeland From achowdhury918@REDACTED Sat Oct 15 04:41:15 2016 From: achowdhury918@REDACTED (Akash Chowdhury) Date: Fri, 14 Oct 2016 22:41:15 -0400 Subject: [erlang-questions] Erlang node core dumps In-Reply-To: <9e3e2cc2-fdf2-6d0e-5bb3-1f7ae692ccee@hedeland.org> References: <7048f1e6e95f4875a6673e16cb355479@OMZP1LUMXCA16.uswin.ad.vzwcorp.com> <9e3e2cc2-fdf2-6d0e-5bb3-1f7ae692ccee@hedeland.org> Message-ID: Hi, Thanks a lot for the reply. Our servers are of Solaris Regards. On Friday, October 14, 2016, Per Hedeland wrote: > On 2016-10-13 19:28, Akash hury wrote: > > > > I am getting following core dump (log when core was loaded using gdb) > > > > > > > > Core was generated by `/apps/erlang/19.0/lib/erlang/ > erts-8.0/bin/beam.smp > > -K true -P 2000001 -A 15 -W'. > > > > Program terminated with signal SIGKILL, Killed. > > SIGKILL is, when not the result of a frustrated and/or incompetent user, > the hallmark of the Linux OOM (Out Of Memory) killer, called upon when > the Linux kernel's habit of giving out more memory than it actually has > available comes back to bite it in the *ss, and obliging by randomly > killing perfectly innocent processes. Sort of... > > But there is something very wrong here, since SIGKILL should *not* > generate a core dump, and since it is uncatchable, there's no way an > application such as the Erlang VM can *make* it generate a core dump. > Thus I'm leaning towards a comment from a quick googling, which I from > experience know to have some merit: "Don't trust gdb". I.e. the core > dump may actually be somehow corrupt, and the SIGKILL info simply wrong. > > In any case *if* you are running Linux, and the OOM killer hit you, it > should be logged in /var/log/messages or the like. > > > #0 0xffffffff7dddc1a0 in __pollsys () from /lib/64/libc.so.1 > > > > #1 0xffffffff7ddcaf54 in _pollsys () from /lib/64/libc.so.1 > > > > #2 0xffffffff7dd75f7c in pselect () from /lib/64/libc.so.1 > > > > #3 0xffffffff7dd76320 in select () from /lib/64/libc.so.1 > > Hm, select() - I guess you are *not* running Linux? Anyway, this does > not seem to me to be an issue with the Erlang VM, but rather with your > OS / system. > > --Per Hedeland > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eric.pailleau@REDACTED Sat Oct 15 10:57:16 2016 From: eric.pailleau@REDACTED (PAILLEAU Eric) Date: Sat, 15 Oct 2016 10:57:16 +0200 Subject: [erlang-questions] [ANN] geas 2.0.9 Message-ID: Hi, Geas 2.0.9 has been released. This is only an update for Erlang 19.1 database. Geas is a tool that will detect the runnable official Erlang release window for your project, either from source code or beam files. Geas will tell you what are the offending functions in the beam/source files that reduce the available window. Geas will tell you if some beam files are compiled native. https://github.com/crownedgrouse/geas Cheers, Eric From arunp@REDACTED Sat Oct 15 13:09:04 2016 From: arunp@REDACTED (ARUN P) Date: Sat, 15 Oct 2016 16:39:04 +0530 Subject: [erlang-questions] Reusing mnesia schema on different nodes In-Reply-To: References: <579CBE22.7080206@utl.in> <57FF6813.8070503@utl.in> Message-ID: <58020E50.9030300@utl.in> Hi, As far as i know to reuse mnesia schema on different nodes, when the name of the node changes, we need to repair the Mnesia schema, as it contains the names of the nodes where the tables belong. So before starting Mnesia i will open the schema.DAT file, a dets file which contains {schema, Tab, Opts} entries. Among the options change the old node name to the new node name, and save the new schema under its original file name. Then if we start Mnesia and all our data will be there. but i have a doubt, since the dets operations are not transactional in the middle editing schema.DAT file, if my application gets restart, will it corrupt the schema.DAT file and if happens so how can i safe guard the data.? Can anybody suggest me how to tackle this problem. ? Thanks in advance Arun -------------- next part -------------- An HTML attachment was scrubbed... URL: From per@REDACTED Sat Oct 15 13:16:22 2016 From: per@REDACTED (Per Hedeland) Date: Sat, 15 Oct 2016 13:16:22 +0200 Subject: [erlang-questions] Erlang node core dumps In-Reply-To: References: <7048f1e6e95f4875a6673e16cb355479@OMZP1LUMXCA16.uswin.ad.vzwcorp.com> <9e3e2cc2-fdf2-6d0e-5bb3-1f7ae692ccee@hedeland.org> Message-ID: <8830951f-db32-e0b0-357c-a5c1638a9414@hedeland.org> On 2016-10-15 04:41, Akash Chowdhury wrote: > Hi, > Thanks a lot for the reply. Our servers are of Solaris Ha - it so happens that my quote about gdb in the previous message was from a thread about this exact thing happening (or appearing to happen) on precisely Solaris. If you google "solaris sigkill core dump" (I did it without "solaris") it should be the first hit. The person saying it is/was well-known and -respected in the Solaris user community, I think he eventually went to work for Sun (back when it was its own company). Anyway his comment was probably not in the vein of my interpretation (that the core dump was corrupt and gdb had problems decoding it), but rather that gdb might have problems in general on Solaris, being strictly a third-party add-on in that environment. AFAIK Solaris shipped with "native" debugger(s) even after Sun's C toolchain was made a separate product. You might want to try 'adb' or 'mdb'. 
--Per From kennethlakin@REDACTED Sat Oct 15 15:16:23 2016 From: kennethlakin@REDACTED (Kenneth Lakin) Date: Sat, 15 Oct 2016 06:16:23 -0700 Subject: [erlang-questions] Reusing mnesia schema on different nodes In-Reply-To: <58020E50.9030300@utl.in> References: <579CBE22.7080206@utl.in> <57FF6813.8070503@utl.in> <58020E50.9030300@utl.in> Message-ID: <103846ea-7e93-2795-1f40-cd9ff820b190@gmail.com> On 10/15/2016 04:09 AM, ARUN P wrote: > but i have a doubt, since the dets operations are not transactional in > the middle editing schema.DAT file, if my application gets restart... Based on Szoboszlay's earlier email to you it sounds like you'll be doing the search and replace on the Mnesia schema file while Mnesia is stopped. From his message: [0] "Before starting Mnesia open the schema.DCD or schema.DAT file (depending on the storage type of your schema). It will be either a disk_log or a dets file, containing {schema, Tab, Opts} entries. Among the options you will find the node(s) where the table shall exist. Simply change the old node name to the new node name there, and save the new schema under its original file name. Then you can start Mnesia and all your data will be there." So, as long as Mnesia is not started on the node, then -AFAIK- nothing will be writing to the Mnesia schema file. [0] http://erlang.org/pipermail/erlang-questions/2016-October/090601.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From michael.c.williams10@REDACTED Sat Oct 15 14:58:03 2016 From: michael.c.williams10@REDACTED (Mike Williams) Date: Sat, 15 Oct 2016 13:58:03 +0100 Subject: [erlang-questions] Wx Message-ID: I just installed Ubuntu 16.04 and decided to upgrade my rather ancient Erlang/OTP version to 19.1 compiling from source. However ./configure says: wx : wxWidgets not found, wx will NOT be usable I have searched the net and found several instructions of how to install wx, but none of them seem to work, Has anyone managed to solve this problem? /mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpmiszczyk@REDACTED Sat Oct 15 22:49:41 2016 From: mpmiszczyk@REDACTED (Marcin Miszczyk) Date: Sat, 15 Oct 2016 22:49:41 +0200 Subject: [erlang-questions] Defining type with chars. Message-ID: Hi. Does anyone know why -type dna_nucleotide() :: $G | $C | $T | $A. is not valid sytax, while -type dna_nucleotide() :: 71 | 67 | 84 | 65. of course is? mr -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.pailleau@REDACTED Sat Oct 15 23:28:23 2016 From: eric.pailleau@REDACTED (=?ISO-8859-1?Q?=C9ric_Pailleau?=) Date: Sat, 15 Oct 2016 23:28:23 +0200 Subject: [erlang-questions] Wx In-Reply-To: References: Message-ID: <1tojh85mt21tjchnm49mu2tl.1476566903688@email.android.com> Hi, You need to install wx 3.0 from ubuntu packages. Regards "Envoy? depuis mon mobile " Eric ---- Mike Williams a ?crit ---- >_______________________________________________ >erlang-questions mailing list >erlang-questions@REDACTED >http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eric.pailleau@REDACTED Sat Oct 15 23:57:28 2016 From: eric.pailleau@REDACTED (PAILLEAU Eric) Date: Sat, 15 Oct 2016 23:57:28 +0200 Subject: [erlang-questions] Wx In-Reply-To: <1tojh85mt21tjchnm49mu2tl.1476566903688@email.android.com> References: <1tojh85mt21tjchnm49mu2tl.1476566903688@email.android.com> Message-ID: Hi, here is below the .so that wxe_driver needs. $> ldd /usr/local/lib/erlang/lib/wx-1.7.1/priv/wxe_driver.so | grep wx | cut -d '=' -f 1 libwx_gtk2u_stc-3.0.so.0 libwx_gtk2u_xrc-3.0.so.0 libwx_gtk2u_html-3.0.so.0 libwx_gtk2u_adv-3.0.so.0 libwx_gtk2u_core-3.0.so.0 libwx_baseu-3.0.so.0 libwx_gtk2u_gl-3.0.so.0 libwx_gtk2u_aui-3.0.so.0 libwx_baseu_xml-3.0.so.0 So this imply those packages : libwxbase3.0-dev libwxgtk3.0-dev Regards Le 15/10/2016 ? 23:28, ?ric Pailleau a ?crit : > Hi, > You need to install wx 3.0 from ubuntu packages. > Regards > > "Envoy? depuis mon mobile " Eric > > > > ---- Mike Williams a ?crit ---- > > I just installed Ubuntu 16.04 and decided to upgrade my rather ancient > Erlang/OTP version to 19.1 compiling from source. > However ./configure says: > wx : wxWidgets not found, wx will NOT be usable > I have searched the net and found several instructions of how to install > wx, but none of them seem to work, Has anyone managed to solve this > problem? > /mike > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > From maxp72@REDACTED Sat Oct 15 23:58:28 2016 From: maxp72@REDACTED (Max Kuzmins) Date: Sat, 15 Oct 2016 22:58:28 +0100 Subject: [erlang-questions] Defining type with chars. In-Reply-To: References: Message-ID: Hi Marcin, There's a char() type available, that's defined as 0..16#10ffff. Perhaps you're not allowed to use $G syntax as it would be unclear what standard your character refers to. Regards, Max On 15 October 2016 at 21:49, Marcin Miszczyk wrote: > Hi. > > Does anyone know why > -type dna_nucleotide() :: $G | $C | $T | $A. > is not valid sytax, while > -type dna_nucleotide() :: 71 | 67 | 84 | 65. > of course is? > > mr > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From byaruhaf@REDACTED Sun Oct 16 00:04:03 2016 From: byaruhaf@REDACTED (Franklin Byaruhanga) Date: Sat, 15 Oct 2016 22:04:03 +0000 Subject: [erlang-questions] Wx In-Reply-To: References: <1tojh85mt21tjchnm49mu2tl.1476566903688@email.android.com> Message-ID: Try the instructions below:- wget http://archive.ubuntu.com/ubuntu/pool/universe/w/wxwidgets3.0/libwxbase3.0-0_3.0.2-1_amd64.deb yes Y | sudo dpkg -i libwxbase3.0-0_3.0.2-1*.deb yes Y | sudo apt-get -fy install wget http://archive.ubuntu.com/ubuntu/pool/universe/w/wxwidgets3.0/libwxgtk3.0-0_3.0.2-1_amd64.deb yes Y | sudo dpkg -i libwxgtk3.0-0_3.0.2-1*.deb yes Y | sudo apt-get -fy install On Sun, Oct 16, 2016 at 12:57 AM PAILLEAU Eric wrote: > Hi, > here is below the .so that wxe_driver needs. 
> > $> ldd /usr/local/lib/erlang/lib/wx-1.7.1/priv/wxe_driver.so | grep wx | > cut -d '=' -f 1 > libwx_gtk2u_stc-3.0.so.0 > libwx_gtk2u_xrc-3.0.so.0 > libwx_gtk2u_html-3.0.so.0 > libwx_gtk2u_adv-3.0.so.0 > libwx_gtk2u_core-3.0.so.0 > libwx_baseu-3.0.so.0 > libwx_gtk2u_gl-3.0.so.0 > libwx_gtk2u_aui-3.0.so.0 > libwx_baseu_xml-3.0.so.0 > > So this imply those packages : > > libwxbase3.0-dev > libwxgtk3.0-dev > > Regards > > > Le 15/10/2016 ? 23:28, ?ric Pailleau a ?crit : > > Hi, > > You need to install wx 3.0 from ubuntu packages. > > Regards > > > > "Envoy? depuis mon mobile " Eric > > > > > > > > ---- Mike Williams a ?crit ---- > > > > I just installed Ubuntu 16.04 and decided to upgrade my rather ancient > > Erlang/OTP version to 19.1 compiling from source. > > However ./configure says: > > wx : wxWidgets not found, wx will NOT be usable > > I have searched the net and found several instructions of how to install > > wx, but none of them seem to work, Has anyone managed to solve this > > problem? > > /mike > > > > > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- Regards, Franklin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpmiszczyk@REDACTED Sun Oct 16 02:26:13 2016 From: mpmiszczyk@REDACTED (Marcin Miszczyk) Date: Sun, 16 Oct 2016 02:26:13 +0200 Subject: [erlang-questions] Defining type with chars. In-Reply-To: References: Message-ID: By standard you mean encoding? Chars should handle both easily ( http://erlang.org/doc/reference_manual/data_types.html#id64690 in "Number" section) But even then $G is allowed both in code and shell, only type definition is deficient. I'm just wondering if it's bug, oversight, or are there some obvious reasons I'm just not seeing. mr On Sat, Oct 15, 2016 at 11:58 PM, Max Kuzmins wrote: > Hi Marcin, > > There's a char() type available, that's defined as 0..16#10ffff. Perhaps > you're not allowed to use $G syntax as it would be unclear what standard > your character refers to. > > Regards, > Max > > On 15 October 2016 at 21:49, Marcin Miszczyk wrote: > >> Hi. >> >> Does anyone know why >> -type dna_nucleotide() :: $G | $C | $T | $A. >> is not valid sytax, while >> -type dna_nucleotide() :: 71 | 67 | 84 | 65. >> of course is? >> >> mr >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangud@REDACTED Sun Oct 16 07:47:03 2016 From: dangud@REDACTED (Dan Gudmundsson) Date: Sun, 16 Oct 2016 05:47:03 +0000 Subject: [erlang-questions] Wx In-Reply-To: References: <1tojh85mt21tjchnm49mu2tl.1476566903688@email.android.com> Message-ID: I'm on a windows machine right now but the OpenGL libs that are needed libgl and libglu for the linking to succeed, I don't know if wxWidgets-lib brings them in as deps. 
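Once ./configure reports wx as usable and OTP has been rebuilt and installed, a quick smoke test from the shell confirms that the wxe driver actually loads; this assumes a graphical display is available, the exact terms printed may differ, and observer is just a convenient end-to-end exercise of the wx binding:

1> wx:new().
{wx_ref,0,wx,[]}
2> observer:start().
ok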
On Sun, Oct 16, 2016 at 12:10 AM Franklin Byaruhanga wrote: > > Try the instructions below:- > > wget > http://archive.ubuntu.com/ubuntu/pool/universe/w/wxwidgets3.0/libwxbase3.0-0_3.0.2-1_amd64.deb > > > yes Y | sudo dpkg -i libwxbase3.0-0_3.0.2-1*.deb > > yes Y | sudo apt-get -fy install > > > > wget > http://archive.ubuntu.com/ubuntu/pool/universe/w/wxwidgets3.0/libwxgtk3.0-0_3.0.2-1_amd64.deb > > yes Y | sudo dpkg -i libwxgtk3.0-0_3.0.2-1*.deb > > yes Y | sudo apt-get -fy install > > > On Sun, Oct 16, 2016 at 12:57 AM PAILLEAU Eric > wrote: > > Hi, > here is below the .so that wxe_driver needs. > > $> ldd /usr/local/lib/erlang/lib/wx-1.7.1/priv/wxe_driver.so | grep wx | > cut -d '=' -f 1 > libwx_gtk2u_stc-3.0.so.0 > libwx_gtk2u_xrc-3.0.so.0 > libwx_gtk2u_html-3.0.so.0 > libwx_gtk2u_adv-3.0.so.0 > libwx_gtk2u_core-3.0.so.0 > libwx_baseu-3.0.so.0 > libwx_gtk2u_gl-3.0.so.0 > libwx_gtk2u_aui-3.0.so.0 > libwx_baseu_xml-3.0.so.0 > > So this imply those packages : > > libwxbase3.0-dev > libwxgtk3.0-dev > > Regards > > > Le 15/10/2016 ? 23:28, ?ric Pailleau a ?crit : > > Hi, > > You need to install wx 3.0 from ubuntu packages. > > Regards > > > > "Envoy? depuis mon mobile " Eric > > > > > > > > ---- Mike Williams a ?crit ---- > > > > I just installed Ubuntu 16.04 and decided to upgrade my rather ancient > > Erlang/OTP version to 19.1 compiling from source. > > However ./configure says: > > wx : wxWidgets not found, wx will NOT be usable > > I have searched the net and found several instructions of how to install > > wx, but none of them seem to work, Has anyone managed to solve this > > problem? > > /mike > > > > > > > > _______________________________________________ > > erlang-questions mailing list > > erlang-questions@REDACTED > > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -- > Regards, > Franklin > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luca@REDACTED Sun Oct 16 09:44:13 2016 From: luca@REDACTED (Luca Spiller) Date: Sun, 16 Oct 2016 09:44:13 +0200 Subject: [erlang-questions] Help debugging binary memory usage Message-ID: Hi everyone, One of our nodes seems to have a memory leak. After a couple of days the memory usage gets so high that the OOM killer kills it, and it's restarted. It seems to have been going on for a few years, as it works fine the whole time so nobody noticed - it just uses up all the memory on the box. A bit of background: the node is making hundreds of HTTP requests per second. There are a thousand or so worker processes responsible for this, which make a request, inspect the response headers, and based on these start other processes. The process then sleeps for X time (seconds to minutes) and does the same again. The response body can be any size, but we don't care about that in the application (but I'd assume it gets converted to a binary by lhttpc). I should also note that some of the requests are made over TLS. https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-system.png This is the output from Observer, as you can see it shows that binaries are using 2569 MB of RAM. When the node has been restarted and running for a few minutes this is usually < 10 MB. 
Most of the worker processes (95%+) which make the requests are started shortly after the node starts and hang around forever. https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-processes.png This is the process list from Observer, sorted by memory, it doesn't appear to show anything interesting. The worker processes (XXX:init/1) use roughly the same amount of memory after they've been running for a few minutes. As I understand large binaries stick around until the system is under 'high memory pressure' before being GCed. In my case the node uses up half the swap, and all the RAM - is that not high enough? After that the OOM killer jumps in and deals with it forcibly. So... what can I do to debug this? Thanks, Luca Spiller -------------- next part -------------- An HTML attachment was scrubbed... URL: From puzza007@REDACTED Sun Oct 16 10:05:47 2016 From: puzza007@REDACTED (Paul Oliver) Date: Sun, 16 Oct 2016 08:05:47 +0000 Subject: [erlang-questions] Help debugging binary memory usage In-Reply-To: References: Message-ID: Hey Luca, Check out https://github.com/ferd/recon and http://dieswaytoofast.blogspot.com/2012/12/erlang-binaries-and-garbage-collection.html Cheers, Paul. On Sun, Oct 16, 2016 at 8:53 PM Luca Spiller wrote: > Hi everyone, > > One of our nodes seems to have a memory leak. After a couple of days the > memory usage gets so high that the OOM killer kills it, and it's restarted. > It seems to have been going on for a few years, as it works fine the whole > time so nobody noticed - it just uses up all the memory on the box. > > A bit of background: the node is making hundreds of HTTP requests per > second. There are a thousand or so worker processes responsible for this, > which make a request, inspect the response headers, and based on these > start other processes. The process then sleeps for X time (seconds to > minutes) and does the same again. The response body can be any size, but we > don't care about that in the application (but I'd assume it gets converted > to a binary by lhttpc). I should also note that some of the requests are > made over TLS. > > > https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-system.png > > This is the output from Observer, as you can see it shows that binaries > are using 2569 MB of RAM. When the node has been restarted and running for > a few minutes this is usually < 10 MB. Most of the worker processes (95%+) > which make the requests are started shortly after the node starts and hang > around forever. > > > https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-processes.png > > This is the process list from Observer, sorted by memory, it doesn't > appear to show anything interesting. The worker processes (XXX:init/1) use > roughly the same amount of memory after they've been running for a few > minutes. > > As I understand large binaries stick around until the system is under > 'high memory pressure' before being GCed. In my case the node uses up half > the swap, and all the RAM - is that not high enough? After that the OOM > killer jumps in and deals with it forcibly. > > So... what can I do to debug this? > > Thanks, > > Luca Spiller > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arunp@REDACTED Sun Oct 16 14:56:39 2016 From: arunp@REDACTED (ARUN P) Date: Sun, 16 Oct 2016 18:26:39 +0530 Subject: [erlang-questions] System limit for erlang process and Pid Message-ID: <58037907.4030803@utl.in> Hi, Am a newbie in erlang programming world and I seriously need some assistance. I am running an application in which am using *timer:apply_interval(1000, ?MODULE, monitor_loop , [])*, as per my analysis found that this function call creates new processes at every 1000 millisecond interval. Since my *monitor loop* function only takes around 800 ms to complete its duty and then it dies, the number of processes are not getting increased. But as per my observation every time am getting a new PID and the PID number is always increasing and I came to know that the pid is limited by, 1 word for a process identifier from the current local node + 5 words for a process identifier from another node. So what will happen if my application continuous to run for months without stopping, will it crash the entire erlang vm. How the garbage collection will be happening for the dead process since for every process some amount of heap memory is getting allocated. Can somebody kindly give me some guidance. Thanks in advance, Arun -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikpelinux@REDACTED Sun Oct 16 19:09:39 2016 From: mikpelinux@REDACTED (Mikael Pettersson) Date: Sun, 16 Oct 2016 19:09:39 +0200 Subject: [erlang-questions] System limit for erlang process and Pid In-Reply-To: <58037907.4030803@utl.in> References: <58037907.4030803@utl.in> Message-ID: <22531.46163.48119.761086@gargle.gargle.HOWL> ARUN P writes: > Hi, > Am a newbie in erlang programming world and I seriously need some > assistance. I am running an application in which am using > *timer:apply_interval(1000, ?MODULE, monitor_loop , [])*, as per my > analysis found that this function call creates new processes at every > 1000 millisecond interval. Since my *monitor loop* function only takes > around 800 ms to complete its duty and then it dies, the number of > processes are not getting increased. But as per my observation every > time am getting a new PID and the PID number is always increasing and I > came to know that the pid is limited by, 1 word for a process identifier > from the current local node + 5 words for a process identifier from > another node. > > So what will happen if my application continuous to run for months > without stopping, will it crash the entire erlang vm. No. The VM is quite capable of running for _years_. I assume the local pids will wrap around and unused ones be reused, but I haven't checked this. > How the garbage > collection will be happening for the dead process since for every > process some amount of heap memory is getting allocated. The memory allocated to an Erlang process, including its heap and stack, is deallocated (not GC:d, more like free()) when that process terminates. From shapovalovts@REDACTED Sun Oct 16 19:34:22 2016 From: shapovalovts@REDACTED (Taras Shapovalov) Date: Sun, 16 Oct 2016 20:34:22 +0300 Subject: [erlang-questions] Unix Domain Sockets in v19 Message-ID: Hey guys, I would like to try the experimental feature of v19 -- unix sockets, but cannot get how it should be used. 
For example, if I send some request to docker with gen_udp, then I will get {error,eprototype}: [taras@REDACTED ~]$ erl Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false] Eshell V8.1 (abort with ^G) 1> {ok, Sockout} = gen_udp:open(0, [{ifaddr, {local, "/tmp/testsockout"}}]). {ok,#Port<0.413>} 2> gen_udp:send(Sockout, {local, "/var/run/docker.sock"}, 0, "http:/containers/json"). {error,eprototype} 3> The socket is available and accessable by the user. Say, this works fine: curl --unix-socket /var/run/docker.sock http:/containers/json Any idea what is going wrong there? I will appreciate if someone points me to any documentation (I know the final description of the feature is not ready for now, but maybe there is some draft already exists?). Also do you know if httpc module supports the unix sockets since 19.0? If yes, how to do the same with httpc? Best regards, Taras -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmartin4242@REDACTED Sun Oct 16 22:12:50 2016 From: mmartin4242@REDACTED (Michael Martin) Date: Sun, 16 Oct 2016 15:12:50 -0500 Subject: [erlang-questions] Help debugging binary memory usage In-Reply-To: References: Message-ID: <7eb43b72-a0db-62f0-b47c-adb8efbbd8c4@gmail.com> Possible message leak? Check for unhandled messages, and log them. See the section on unhandled messages here . On 10/16/2016 03:05 AM, Paul Oliver wrote: > Hey Luca, > > Check out https://github.com/ferd/recon and > http://dieswaytoofast.blogspot.com/2012/12/erlang-binaries-and-garbage-collection.html > > > Cheers, > Paul. > > On Sun, Oct 16, 2016 at 8:53 PM Luca Spiller > wrote: > > Hi everyone, > > One of our nodes seems to have a memory leak. After a couple of > days the memory usage gets so high that the OOM killer kills it, > and it's restarted. It seems to have been going on for a few > years, as it works fine the whole time so nobody noticed - it just > uses up all the memory on the box. > > A bit of background: the node is making hundreds of HTTP requests > per second. There are a thousand or so worker processes > responsible for this, which make a request, inspect the response > headers, and based on these start other processes. The process > then sleeps for X time (seconds to minutes) and does the same > again. The response body can be any size, but we don't care about > that in the application (but I'd assume it gets converted to a > binary by lhttpc). I should also note that some of the requests > are made over TLS. > > https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-system.png > > This is the output from Observer, as you can see it shows that > binaries are using 2569 MB of RAM. When the node has been > restarted and running for a few minutes this is usually < 10 MB. > Most of the worker processes (95%+) which make the requests are > started shortly after the node starts and hang around forever. > > https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-processes.png > > This is the process list from Observer, sorted by memory, it > doesn't appear to show anything interesting. The worker processes > (XXX:init/1) use roughly the same amount of memory after they've > been running for a few minutes. > > As I understand large binaries stick around until the system is > under 'high memory pressure' before being GCed. In my case the > node uses up half the swap, and all the RAM - is that not high > enough? After that the OOM killer jumps in and deals with it forcibly. > > So... 
what can I do to debug this? > > Thanks, > > Luca Spiller > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 02:27:22 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 00:27:22 +0000 (UTC) Subject: [erlang-questions] Unix Domain Sockets in v19 In-Reply-To: References: Message-ID: <709691206.938384.1476664042862@mail.yahoo.com> {ok, Socket} = gen_tcp:connect({local, <<"/tmp/unix_socket">>}, 0, [{active, true}, binary], 10000), ok = gen_tcp:send(Socket, <<"Hello to socket">>)), use as regular socket now. Same thing for listening. Listening will fail if a file exists with same name as the unix_socket. On Sunday, October 16, 2016 1:35 PM, Taras Shapovalov wrote: Hey guys, I would like to try the experimental feature of v19 -- unix sockets, but cannot get how it should be used. For example, if I send some request to docker with gen_udp, then I will get? {error,eprototype}: [taras@REDACTED ~]$ erl Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false] Eshell V8.1? (abort with ^G) 1> {ok, Sockout} = gen_udp:open(0, [{ifaddr, {local, "/tmp/testsockout"}}]). {ok,#Port<0.413>} 2> gen_udp:send(Sockout, {local, "/var/run/docker.sock"}, 0, "http:/containers/json"). {error,eprototype} 3> The socket is available and accessable by the user. Say, this works fine: curl --unix-socket /var/run/docker.sock http:/containers/json Any idea what is going wrong there? I will appreciate if someone points me to any documentation (I know the final description of the feature is not ready for now, but maybe there is some draft already exists?). Also do you know if httpc module supports the unix sockets since 19.0? If yes, how to do the same with httpc? Best regards, Taras _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From schneider@REDACTED Mon Oct 17 11:41:00 2016 From: schneider@REDACTED (Schneider) Date: Mon, 17 Oct 2016 11:41:00 +0200 Subject: [erlang-questions] Statem troubles Message-ID: <8786ac04-d87f-af6c-9dd1-6699b809ace4@xs4all.nl> Dear list, I feel really stupid this morning, trying to get the gen_statem working. Even the code_lock example given in the gen_statem behaviour documentation doesn't work: Erlang/OTP 19 [erts-8.0.5] [source] [64-bit] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false] Eshell V8.0.5 (abort with ^G) 1> c(code_lock). {ok,code_lock} 2> code_lock:start_link("123"). Lock ** exception exit: {callback_mode,ok} TIA, Frans -module(code_lock). -behaviour(gen_statem). -define(NAME, code_lock). -export([start_link/1]). -export([button/1]). -export([init/1,callback_mode/0,terminate/3,code_change/4]). -export([locked/3,open/3]). start_link(Code) -> gen_statem:start_link({local,?NAME}, ?MODULE, Code, []). button(Digit) -> gen_statem:cast(?NAME, {button,Digit}). init(Code) -> do_lock(), Data = #{code => Code, remaining => Code}, {ok,locked,Data}. callback_mode() -> state_functions. 
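%% A likely explanation of the {callback_mode,ok} exit above: the shell
%% banner shows erts-8.0.5, i.e. OTP 19.0.x, and in OTP 19.0 gen_statem
%% had no callback_mode/0 callback -- init/1 was expected to return the
%% callback mode as the first element of its result tuple. The
%% {ok,locked,Data} return above is therefore read as callback mode
%% 'ok', which is invalid, hence the exit. The example as written
%% follows the OTP 19.1 documentation, so it needs OTP 19.1 or later
%% (or init/1 changed to return {state_functions,locked,Data} on 19.0).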
locked( cast, {button,Digit}, #{code := Code, remaining := Remaining} = Data) -> case Remaining of [Digit] -> do_unlock(), {next_state,open,Data#{remaining := Code},10000}; [Digit|Rest] -> % Incomplete {next_state,locked,Data#{remaining := Rest}}; _Wrong -> {next_state,locked,Data#{remaining := Code}} end. open(timeout, _, Data) -> do_lock(), {next_state,locked,Data}; open(cast, {button,_}, Data) -> do_lock(), {next_state,locked,Data}. do_lock() -> io:format("Lock~n", []). do_unlock() -> io:format("Unlock~n", []). terminate(_Reason, State, _Data) -> State =/= locked andalso do_lock(), ok. code_change(_Vsn, State, Data, _Extra) -> {ok,State,Data}. From dgud@REDACTED Mon Oct 17 11:50:38 2016 From: dgud@REDACTED (Dan Gudmundsson) Date: Mon, 17 Oct 2016 09:50:38 +0000 Subject: [erlang-questions] Reusing mnesia schema on different nodes In-Reply-To: <103846ea-7e93-2795-1f40-cd9ff820b190@gmail.com> References: <579CBE22.7080206@utl.in> <57FF6813.8070503@utl.in> <58020E50.9030300@utl.in> <103846ea-7e93-2795-1f40-cd9ff820b190@gmail.com> Message-ID: Note that this "hack" will only work if you are using a single node system. If you have several nodes you should take a backup, traverse and change the nodename in that backup, install it on all nodes and restart all nodes from backup. see mnesia install_fallback. Or you wipe the mnesia dir on the moved node, and use mnesia:change_config to connect to the other nodes, use mnesia:change_table_copy_type on schema to disc and use mnesia:add_table_copy(Tab, ...) on the other tabs you want a local copy. Then you use mnesia:del_table_copy(schema, OldNameName) .. /Dan On Sat, Oct 15, 2016 at 3:16 PM Kenneth Lakin wrote: > On 10/15/2016 04:09 AM, ARUN P wrote: > > but i have a doubt, since the dets operations are not transactional in > > the middle editing schema.DAT file, if my application gets restart... > > Based on Szoboszlay's earlier email to you it sounds like you'll be > doing the search and replace on the Mnesia schema file while Mnesia is > stopped. > > From his message: [0] > > "Before starting Mnesia open the schema.DCD or schema.DAT file > (depending on the storage type of your schema). It will be either a > disk_log or a dets file, containing {schema, Tab, Opts} entries. Among > the options you will find the node(s) where the table shall exist. > Simply change the old node name to the new node name there, and save the > new schema under its original file name. Then you can start Mnesia and > all your data will be there." > > So, as long as Mnesia is not started on the node, then -AFAIK- nothing > will be writing to the Mnesia schema file. > > [0] http://erlang.org/pipermail/erlang-questions/2016-October/090601.html > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Oliver.Korpilla@REDACTED Mon Oct 17 13:26:30 2016 From: Oliver.Korpilla@REDACTED (Oliver Korpilla) Date: Mon, 17 Oct 2016 13:26:30 +0200 Subject: [erlang-questions] Port handlers and binary leaks? Message-ID: Hello. Recent reading got me concerned about leaking memory by having big refc binaries. In our system we have permanent TCP and SCTP handlers that receive outside messages, then forward it into the system for processing. For example, these events have ASN.1 payloads. Decoding is done in throw-away processes... 
however, I'm concerned that the TCP and SCTP handling processes might cause memory leaks because every payload buffer is handled there first. I saw in "Erlang in Anger" that routers should only return where to route to, not handle the message. But when handling sockets I don't see that option? What is good practice here? Am "I" at risk? Thanks and best regards, Oliver From Dinislam.Salikhov@REDACTED Mon Oct 17 10:58:59 2016 From: Dinislam.Salikhov@REDACTED (Salikhov Dinislam) Date: Mon, 17 Oct 2016 11:58:59 +0300 Subject: [erlang-questions] Tail call optimization Message-ID: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> Hello. Erlang guarantees tail recursion optimization and states it in the documentation: http://erlang.org/doc/reference_manual/functions.html#id78464 Does erlang guarantee that tail call optimization is done in a generic case, without recursion? Say, we have a function calling a function from another module as its final statement: alpha() -> xxx:beta(). Is it guaranteed that xxx:beta() will use the stack of alpha() regardless whether recursion is involved. I mean whether the language guarantees it rather than virtual machine may provide such optimization. Thanks in advance, Salikhov Dinislam -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmytro.lytovchenko@REDACTED Mon Oct 17 14:00:43 2016 From: dmytro.lytovchenko@REDACTED (Dmytro Lytovchenko) Date: Mon, 17 Oct 2016 14:00:43 +0200 Subject: [erlang-questions] Tail call optimization In-Reply-To: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> Message-ID: In the doc page you linked: > If the last expression of a function body is a function call, a *tail recursive* call is done. Compiler will replace call opcode with a tail call (call_last, call_ext_last, apply_last). You can check it with "erl -S test.erl" to see assembly, and in erl console: "l(modulename)." to load the module then "erts_debug:df(modulename)." to create disassembly from BEAM VM memory (it will be a bit different from the erl -S output). See that your calls are replaced with one of: call_last, call_ext_last, apply_last. 2016-10-17 10:58 GMT+02:00 Salikhov Dinislam < Dinislam.Salikhov@REDACTED>: > Hello. > > Erlang guarantees tail recursion optimization and states it in the > documentation: > http://erlang.org/doc/reference_manual/functions.html#id78464 > > Does erlang guarantee that tail call optimization is done in a generic > case, without recursion? > Say, we have a function calling a function from another module as its > final statement: > alpha() -> > xxx:beta(). > Is it guaranteed that xxx:beta() will use the stack of alpha() regardless > whether recursion is involved. > I mean whether the language guarantees it rather than virtual machine may > provide such optimization. > > Thanks in advance, > Salikhov Dinislam > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemenkov@REDACTED Mon Oct 17 14:08:02 2016 From: lemenkov@REDACTED (Peter Lemenkov) Date: Mon, 17 Oct 2016 14:08:02 +0200 Subject: [erlang-questions] Unix Domain Sockets in v19 In-Reply-To: References: Message-ID: Hello All! I belive this wasn't standartized prior to documenting, so there might be some discrepancies. 
Here is how I send arbitrary data to DGRAM Unix-socket: https://gist.github.com/lemenkov/22c4d621b432c3a27574facddd9862f8 {ok, UnixSock} = gen_udp:open(0, [local]), gen_udp:send(UnixSock, {local, "/dev/log"}, 0, "HELLO"). it works fine. 2016-10-16 19:34 GMT+02:00 Taras Shapovalov : > Hey guys, > > I would like to try the experimental feature of v19 -- unix sockets, but > cannot get how it should be used. For example, if I send some request to > docker with gen_udp, then I will get {error,eprototype}: > > [taras@REDACTED ~]$ erl > Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:4:4] [async-threads:10] > [kernel-poll:false] > > Eshell V8.1 (abort with ^G) > 1> {ok, Sockout} = gen_udp:open(0, [{ifaddr, {local, "/tmp/testsockout"}}]). > {ok,#Port<0.413>} > 2> gen_udp:send(Sockout, {local, "/var/run/docker.sock"}, 0, > "http:/containers/json"). > {error,eprototype} > 3> > > The socket is available and accessable by the user. Say, this works fine: > > curl --unix-socket /var/run/docker.sock http:/containers/json > > Any idea what is going wrong there? > > I will appreciate if someone points me to any documentation (I know the > final description of the feature is not ready for now, but maybe there is > some draft already exists?). > > Also do you know if httpc module supports the unix sockets since 19.0? If > yes, how to do the same with httpc? > > Best regards, > > Taras > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -- With best regards, Peter Lemenkov. From dmytro.lytovchenko@REDACTED Mon Oct 17 14:36:08 2016 From: dmytro.lytovchenko@REDACTED (Dmytro Lytovchenko) Date: Mon, 17 Oct 2016 14:36:08 +0200 Subject: [erlang-questions] Tail call optimization In-Reply-To: <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> Message-ID: There is nothing about recursion in documentation. Your module: -module(tc). -export([a/0]). a() -> b(). b() -> c(). c() -> z(). z() -> self(). Compiles to (memory dump): 00007F07875A2DA8: i_func_info_IaaI 0 tc a 0. 00007F07875A2DD0: i_call_only_f tc:b/0. 00007F07875A2DE0: i_func_info_IaaI 0 tc b 0. 00007F07875A2E08: i_call_only_f tc:c/0. 00007F07875A2E18: i_func_info_IaaI 0 tc c 0. 00007F07875A2E40: i_call_only_f tc:z/0. 00007F07875A2E50: i_func_info_IaaI 0 tc z 0. 00007F07875A2E78: self_r x(0). 00007F07875A2E80: return. Note the call_only functions. These are the tail calls. 2016-10-17 14:28 GMT+02:00 Salikhov Dinislam < Dinislam.Salikhov@REDACTED>: > The confusing thing is that the doc says about tail *recursive* call. > For example, if I have a call chain: > a() -> > % some code > b(). > b() -> > % some code > c(). > % ... > y() -> > %some code > z(). > Recursion is *not* involved here. And I'd like to know if erlang requires > (and guarantees) that all tail callees in the chain above use the stack of > the caller. > AFAIU, compiler is free to not apply the optimization if it is not stated > in the specification (and it is pure luck that the compiler does). > > Salikhov Dinislam > > > On 10/17/2016 03:00 PM, Dmytro Lytovchenko wrote: > > In the doc page you linked: > > If the last expression of a function body is a function call, a *tail > recursive* call is done. > > Compiler will replace call opcode with a tail call (call_last, > call_ext_last, apply_last). 
You can check it with "erl -S test.erl" to see > assembly, and in erl console: "l(modulename)." to load the module then > "erts_debug:df(modulename)." to create disassembly from BEAM VM memory (it > will be a bit different from the erl -S output). > > See that your calls are replaced with one of: call_last, call_ext_last, > apply_last. > > 2016-10-17 10:58 GMT+02:00 Salikhov Dinislam com>: > >> Hello. >> >> Erlang guarantees tail recursion optimization and states it in the >> documentation: >> http://erlang.org/doc/reference_manual/functions.html#id78464 >> >> Does erlang guarantee that tail call optimization is done in a generic >> case, without recursion? >> Say, we have a function calling a function from another module as its >> final statement: >> alpha() -> >> xxx:beta(). >> Is it guaranteed that xxx:beta() will use the stack of alpha() regardless >> whether recursion is involved. >> I mean whether the language guarantees it rather than virtual machine may >> provide such optimization. >> >> Thanks in advance, >> Salikhov Dinislam >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 15:12:46 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 13:12:46 +0000 (UTC) Subject: [erlang-questions] Help debugging binary memory usage In-Reply-To: <7eb43b72-a0db-62f0-b47c-adb8efbbd8c4@gmail.com> References: <7eb43b72-a0db-62f0-b47c-adb8efbbd8c4@gmail.com> Message-ID: <1684801280.1320952.1476709966314@mail.yahoo.com> Seems you have been bitten by a topic I recently discussed. ?This is a common pitfall with Erlang's share nothing heap. ?Look at the recent command line args added http://erlang.org/doc/man/erl.html?specifically hmax. For more details check out the readme?https://github.com/vans163/stargate, specifically the Websocket Example section. Tl;Dr The most likely reason is that you have a long living process that is processing large binaries, large binaries fragment the shared process heap beyond GC cleanup. Only solution is to kill the long living process from time to time. On Sunday, October 16, 2016 4:13 PM, Michael Martin wrote: Possible message leak? Check for unhandled messages, and log them. See the section on unhandled messages here. On 10/16/2016 03:05 AM, Paul Oliver wrote: Hey Luca, Check out?https://github.com/ferd/recon?and?http://dieswaytoofast.blogspot.com/2012/12/erlang-binaries-and-garbage-collection.html Cheers, Paul. On Sun, Oct 16, 2016 at 8:53 PM Luca Spiller wrote: Hi everyone, One of our nodes seems to have a memory leak. After a couple of days the memory usage gets so high that the OOM killer kills it, and it's restarted. It seems to have been going on for a few years, as it works fine the whole time so nobody noticed - it just uses up all the memory on the box. A bit of background: the node is making hundreds of HTTP requests per second. There are a thousand or so worker processes responsible for this, which make a request, inspect the response headers, and based on these start other processes. The process then sleeps for X time (seconds to minutes) and does the same again. The response body can be any size, but we don't care about that in the application (but I'd assume it gets converted to a binary by lhttpc). I should also note that some of the requests are made over TLS. 
https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-system.png This is the output from Observer, as you can see it shows that binaries are using 2569 MB of RAM. When the node has been restarted and running for a few minutes this is usually < 10 MB. Most of the worker processes (95%+) which make the requests are started shortly after the node starts and hang around forever. https://dl.dropboxusercontent.com/u/21557257/20161016-erl/observer-processes.png This is the process list from Observer, sorted by memory, it doesn't appear to show anything interesting. The worker processes (XXX:init/1) use roughly the same amount of memory after they've been running for a few minutes. As I understand large binaries stick around until the system is under 'high memory pressure' before being GCed. In my case the node uses up half the swap, and all the RAM - is that not high enough? After that the OOM killer jumps in and deals with it forcibly. So... what can I do to debug this? Thanks, Luca Spiller _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 15:14:24 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 13:14:24 +0000 (UTC) Subject: [erlang-questions] Statem troubles In-Reply-To: <8786ac04-d87f-af6c-9dd1-6699b809ace4@xs4all.nl> References: <8786ac04-d87f-af6c-9dd1-6699b809ace4@xs4all.nl> Message-ID: <2043963941.1334655.1476710064294@mail.yahoo.com> Upgrade to R 19.1+. gen_statem should be marked unstable/beta in the docs. It changes too fast. On Monday, October 17, 2016 5:41 AM, Schneider wrote: Dear list, I feel really stupid this morning, trying to get the gen_statem working. Even the code_lock example given in the gen_statem behaviour documentation doesn't work: Erlang/OTP 19 [erts-8.0.5] [source] [64-bit] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false] Eshell V8.0.5? (abort with ^G) 1> c(code_lock). {ok,code_lock} 2> code_lock:start_link("123"). Lock ** exception exit: {callback_mode,ok} TIA, Frans -module(code_lock). -behaviour(gen_statem). -define(NAME, code_lock). -export([start_link/1]). -export([button/1]). -export([init/1,callback_mode/0,terminate/3,code_change/4]). -export([locked/3,open/3]). start_link(Code) -> ? ? gen_statem:start_link({local,?NAME}, ?MODULE, Code, []). button(Digit) -> ? ? gen_statem:cast(?NAME, {button,Digit}). init(Code) -> ? ? do_lock(), ? ? Data = #{code => Code, remaining => Code}, ? ? {ok,locked,Data}. callback_mode() -> ? ? state_functions. locked( ? cast, {button,Digit}, ? #{code := Code, remaining := Remaining} = Data) -> ? ? case Remaining of ? ? ? ? [Digit] -> ??? ? ? do_unlock(), ? ? ? ? ? ? {next_state,open,Data#{remaining := Code},10000}; ? ? ? ? [Digit|Rest] -> % Incomplete ? ? ? ? ? ? {next_state,locked,Data#{remaining := Rest}}; ? ? ? ? _Wrong -> ? ? ? ? ? ? {next_state,locked,Data#{remaining := Code}} ? ? end. open(timeout, _,? Data) -> ? ? do_lock(), ? ? {next_state,locked,Data}; open(cast, {button,_}, Data) -> ? ? do_lock(), ? ? {next_state,locked,Data}. do_lock() -> ? ? io:format("Lock~n", []). 
do_unlock() -> ? ? io:format("Unlock~n", []). terminate(_Reason, State, _Data) -> ? ? State =/= locked andalso do_lock(), ? ? ok. code_change(_Vsn, State, Data, _Extra) -> ? ? {ok,State,Data}. _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinoski@REDACTED Mon Oct 17 15:18:02 2016 From: vinoski@REDACTED (Steve Vinoski) Date: Mon, 17 Oct 2016 09:18:02 -0400 Subject: [erlang-questions] Statem troubles In-Reply-To: <8786ac04-d87f-af6c-9dd1-6699b809ace4@xs4all.nl> References: <8786ac04-d87f-af6c-9dd1-6699b809ace4@xs4all.nl> Message-ID: On Mon, Oct 17, 2016 at 5:41 AM, Schneider wrote: > Dear list, > > I feel really stupid this morning, trying to get the gen_statem working. > Even the code_lock example given in the gen_statem behaviour documentation > doesn't work: > If you run your code on 19.1, it works fine. On 19.0 your init/1 should return {callback_mode(),locked,Data} --steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 15:17:50 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 13:17:50 +0000 (UTC) Subject: [erlang-questions] Port handlers and binary leaks? In-Reply-To: References: Message-ID: <1811753914.1328580.1476710270677@mail.yahoo.com> Holding a ref to a binary greater then 52 bytes keeps it from getting cleaned, under 52 bytes it gets copied. To hold a ref you need to add that binary/iolist-member to a map/list or ets table. If all your tcp process is doing is forwarding iolist/binary to another process for processing. ?That should not cause any issues. On Monday, October 17, 2016 7:26 AM, Oliver Korpilla wrote: Hello. Recent reading got me concerned about leaking memory by having big refc binaries. In our system we have permanent TCP and SCTP handlers that receive outside messages, then forward it into the system for processing. For example, these events have ASN.1 payloads. Decoding is done in throw-away processes... however, I'm concerned that the TCP and SCTP handling processes might cause memory leaks because every payload buffer is handled there first. I saw in "Erlang in Anger" that routers should only return where to route to, not handle the message. But when handling sockets I don't see that option? What is good practice here? Am "I" at risk? Thanks and best regards, Oliver _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From schneider@REDACTED Mon Oct 17 15:28:55 2016 From: schneider@REDACTED (Schneider) Date: Mon, 17 Oct 2016 15:28:55 +0200 Subject: [erlang-questions] Statem troubles In-Reply-To: References: <8786ac04-d87f-af6c-9dd1-6699b809ace4@xs4all.nl> Message-ID: <6fd49ec7-7adc-c5f0-46e5-5886ae5a2461@xs4all.nl> Thanks! I am running 19.0 and using the 19.1 docs indeed. On 10/17/2016 03:18 PM, Steve Vinoski wrote: > > > On Mon, Oct 17, 2016 at 5:41 AM, Schneider > wrote: > > Dear list, > > I feel really stupid this morning, trying to get the gen_statem > working. Even the code_lock example given in the gen_statem > behaviour documentation doesn't work: > > > If you run your code on 19.1, it works fine. 
On 19.0 your init/1 should > return > > {callback_mode(),locked,Data} > > --steve From dn.nhattan@REDACTED Mon Oct 17 14:02:28 2016 From: dn.nhattan@REDACTED (Tan Duong) Date: Mon, 17 Oct 2016 14:02:28 +0200 Subject: [erlang-questions] How to setup Erlang to run on physical cores Message-ID: Hi everybody, I recently get to experiment an Erlang program. My machine is a multicore CPUs system, which contains some physical cores (say n), each cores features hyper threads (so the maximum CPU threads are 2*n) However, I just want to experiment the program on physical cores only (n cores), not with hyperthreading. is there any mechanism to do so? Best Regards, Tan -------------- next part -------------- An HTML attachment was scrubbed... URL: From sperber@REDACTED Mon Oct 17 14:24:41 2016 From: sperber@REDACTED (Michael Sperber) Date: Mon, 17 Oct 2016 14:24:41 +0200 Subject: [erlang-questions] 2nd Call for Contributions: BOB 2017 - Berlin, Feb 24, 2017 (Deadline Oct 30) Message-ID: (Erlang proposals are very welcome at BOB!) BOB Conference 2017 "What happens when we use what's best for a change?" http://bobkonf.de/2017/en/cfp.html Berlin, February 24 Call for Contributions Deadline: October 30, 2016 You are actively engaged in advanced software engineering methods, implement ambitious architectures and are open to cutting-edge innovation? Attend this conference, meet people that share your goals, and get to know the best software tools and technologies available today. We strive to offer a day full of new experiences and impressions that you can use to immediately improve your daily life as a software developer. If you share our vision and want to contribute, submit a proposal for a talk or tutorial! Topics ------ We are looking for talks about best-of-breed software technology, e.g.: - functional programming - persistent data structures and databases - types - formal methods for correctness and robustness - abstractions for concurrency and parallelism - metaprogramming - probabilistic programming - ... everything really that isn?t mainstream, but you think should be. Presenters should provide the audience with information that is practically useful for software developers. This time, we?re especially interested in experience reports. But this could also take other forms, e.g.: - introductory talks on technical background - overviews of a given field - demos and how-tos Requirements ------------ We accept proposals for presentations of 45 minutes (40 minutes talk + 5 minutes questions), as well as 90 minute tutorials for beginners. The language of presentation should be either English or German. Your proposal should include (in your presentation language of choice): - an abstract of max. 1500 characters. - a short bio/cv - contact information (including at least email address) - a list of 3-5 concrete ideas of how your work can be applied in a developer?s daily life -additional material (websites, blogs, slides, videos of past presentations, ?) Submit here: https://docs.google.com/forms/d/e/1FAIpQLSfFuyBhBTCOTS0zTXBzY1KVuKpumyIBTucLcJ1ArC1XpWsG-Q/viewform Organisation - direct questions to bobkonf at active minus group dot de - proposal deadline: October 30, 2016 - notification: November 15, 2016 - program: December 1, 2016 NOTE: The conference fee will be waived for presenters, but travel expenses will not be covered. Speaker Grants -------------- BOB has Speaker Grants available to support speakers from groups under-represented in technology. 
We specifically seek women speakers and speakers who are not be able to attend the conference for financial reasons. Details are here: http://bobkonf.de/2017/en/speaker-grants.html Shepherding ----------- The program committee offers shepherding to all speakers. Shepherding provides speakers assistance with preparing their sessions, as well as a review of the talk slides. Program Committee ----------------- (more information here: http://bobkonf.de/2017/programmkomitee.html) - Matthias Fischmann, zerobuzz UG - Matthias Neubauer, SICK AG - Nicole Rauch, Softwareentwicklung und Entwicklungscoaching - Michael Sperber, Active Group - Stefan Wehr, factis research Scientific Advisory Board - Annette Bieniusa, TU Kaiserslautern - Torsten Grust, Uni T?bingen - Peter Thiemann, Uni Freiburg From Dinislam.Salikhov@REDACTED Mon Oct 17 14:28:55 2016 From: Dinislam.Salikhov@REDACTED (Salikhov Dinislam) Date: Mon, 17 Oct 2016 15:28:55 +0300 Subject: [erlang-questions] Tail call optimization In-Reply-To: References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> Message-ID: <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> The confusing thing is that the doc says about tail *recursive* call. For example, if I have a call chain: a() -> % some code b(). b() -> % some code c(). % ... y() -> %some code z(). Recursion is *not* involved here. And I'd like to know if erlang requires (and guarantees) that all tail callees in the chain above use the stack of the caller. AFAIU, compiler is free to not apply the optimization if it is not stated in the specification (and it is pure luck that the compiler does). Salikhov Dinislam On 10/17/2016 03:00 PM, Dmytro Lytovchenko wrote: > In the doc page you linked: > > If the last expression of a function body is a function call, a > *tail recursive* call is done. > > Compiler will replace call opcode with a tail call (call_last, > call_ext_last, apply_last). You can check it with "erl -S test.erl" to > see assembly, and in erl console: "l(modulename)." to load the module > then "erts_debug:df(modulename)." to create disassembly from BEAM VM > memory (it will be a bit different from the erl -S output). > > See that your calls are replaced with one of: call_last, > call_ext_last, apply_last. > > 2016-10-17 10:58 GMT+02:00 Salikhov Dinislam > >: > > Hello. > > Erlang guarantees tail recursion optimization and states it in the > documentation: > http://erlang.org/doc/reference_manual/functions.html#id78464 > > > Does erlang guarantee that tail call optimization is done in a > generic case, without recursion? > Say, we have a function calling a function from another module as > its final statement: > alpha() -> > xxx:beta(). > Is it guaranteed that xxx:beta() will use the stack of alpha() > regardless whether recursion is involved. > I mean whether the language guarantees it rather than virtual > machine may provide such optimization. > > Thanks in advance, > Salikhov Dinislam > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Dinislam.Salikhov@REDACTED Mon Oct 17 14:47:43 2016 From: Dinislam.Salikhov@REDACTED (Salikhov Dinislam) Date: Mon, 17 Oct 2016 15:47:43 +0300 Subject: [erlang-questions] Tail call optimization In-Reply-To: References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> Message-ID: <77857496-d429-f410-0a9c-4e877039f930@kaspersky.com> > There is nothing about recursion in documentation. The only doc that I've managed to find about the subject is the link from my initial post (http://erlang.org/doc/reference_manual/functions.html#id78464). And it says about recursion: in sub-chapter's header, in sub-chapter itself and in the example (all in all, everywhere). Is there another documentation that you mean? On 10/17/2016 03:36 PM, Dmytro Lytovchenko wrote: > There is nothing about recursion in documentation. > > Your module: > -module(tc). > -export([a/0]). > a() -> b(). > b() -> c(). > c() -> z(). > z() -> self(). > > Compiles to (memory dump): > 00007F07875A2DA8: i_func_info_IaaI 0 tc a 0. > 00007F07875A2DD0: i_call_only_f tc:b/0. > > 00007F07875A2DE0: i_func_info_IaaI 0 tc b 0. > 00007F07875A2E08: i_call_only_f tc:c/0. > > 00007F07875A2E18: i_func_info_IaaI 0 tc c 0. > 00007F07875A2E40: i_call_only_f tc:z/0. > > 00007F07875A2E50: i_func_info_IaaI 0 tc z 0. > 00007F07875A2E78: self_r x(0). > 00007F07875A2E80: return. > > Note the call_only functions. These are the tail calls. > > > 2016-10-17 14:28 GMT+02:00 Salikhov Dinislam > >: > > The confusing thing is that the doc says about tail *recursive* call. > For example, if I have a call chain: > a() -> > % some code > b(). > b() -> > % some code > c(). > % ... > y() -> > %some code > z(). > Recursion is *not* involved here. And I'd like to know if erlang > requires (and guarantees) that all tail callees in the chain above > use the stack of the caller. > AFAIU, compiler is free to not apply the optimization if it is not > stated in the specification (and it is pure luck that the compiler > does). > > Salikhov Dinislam > > > On 10/17/2016 03:00 PM, Dmytro Lytovchenko wrote: >> In the doc page you linked: >> > If the last expression of a function body is a function call, a >> *tail recursive* call is done. >> >> Compiler will replace call opcode with a tail call (call_last, >> call_ext_last, apply_last). You can check it with "erl -S >> test.erl" to see assembly, and in erl console: "l(modulename)." >> to load the module then "erts_debug:df(modulename)." to create >> disassembly from BEAM VM memory (it will be a bit different from >> the erl -S output). >> >> See that your calls are replaced with one of: call_last, >> call_ext_last, apply_last. >> >> 2016-10-17 10:58 GMT+02:00 Salikhov Dinislam >> > >: >> >> Hello. >> >> Erlang guarantees tail recursion optimization and states it >> in the documentation: >> http://erlang.org/doc/reference_manual/functions.html#id78464 >> >> >> Does erlang guarantee that tail call optimization is done in >> a generic case, without recursion? >> Say, we have a function calling a function from another >> module as its final statement: >> alpha() -> >> xxx:beta(). >> Is it guaranteed that xxx:beta() will use the stack of >> alpha() regardless whether recursion is involved. >> I mean whether the language guarantees it rather than virtual >> machine may provide such optimization. 
>> >> Thanks in advance, >> Salikhov Dinislam >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 15:40:23 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 13:40:23 +0000 (UTC) Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: References: Message-ID: <1268859049.1367627.1476711623864@mail.yahoo.com> I am interested in this too. Only way I know of so far is to use taskset or equivalent. ?Ideally Erlang should bind each scheduler to each single cpu as speced by the topology. On Monday, October 17, 2016 9:35 AM, Tan Duong wrote: Hi everybody, I recently get to experiment an Erlang program.My machine is a multicore CPUs system, which contains some physical cores (say n), each cores features hyper threads (so the maximum CPU threads are 2*n)However, I just want to experiment the program on physical cores only (n cores), not with hyperthreading.is there any mechanism to do so? Best Regards,Tan _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From dszoboszlay@REDACTED Mon Oct 17 15:49:54 2016 From: dszoboszlay@REDACTED (=?UTF-8?Q?D=C3=A1niel_Szoboszlay?=) Date: Mon, 17 Oct 2016 13:49:54 +0000 Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: References: Message-ID: If you don't mind binding the schedulers to logical cores, I think the quickest solution is: erl +sbt ts +SP 50:50 This will tell Erlang to lay out schedulers by first binding to the first logical CPU in each core, then the second and so on. Then +SP tells to only use 50% of the available logical CPU-s (e.g. 1 hyper thread per core). You can avoid the scheduler binding and/or better deal with NUMA architectures by passing in a custom CpuTopology that reveals only 1 hyper thread/core for Erlang. This can be done with the +sct command line flag, but the syntax for CpuTopologies is a bit complex. On Mon, 17 Oct 2016 at 15:35 Tan Duong wrote: > Hi everybody, > > I recently get to experiment an Erlang program. > My machine is a multicore CPUs system, which contains some physical cores > (say n), each cores features hyper threads (so the maximum CPU threads are > 2*n) > However, I just want to experiment the program on physical cores only (n > cores), not with hyperthreading. > is there any mechanism to do so? > > Best Regards, > Tan > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mononcqc@REDACTED Mon Oct 17 15:51:34 2016 From: mononcqc@REDACTED (Fred Hebert) Date: Mon, 17 Oct 2016 09:51:34 -0400 Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: <1268859049.1367627.1476711623864@mail.yahoo.com> References: <1268859049.1367627.1476711623864@mail.yahoo.com> Message-ID: <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> On 10/17, Vans S wrote: >I am interested in this too. Only way I know of so far is to use taskset or equivalent. 
?Ideally Erlang should bind each scheduler to each single cpu as speced by the topology. > > On Monday, October 17, 2016 9:35 AM, Tan Duong wrote: > > > Hi everybody, >I recently get to experiment an Erlang program.My machine is a multicore CPUs system, which contains some physical cores (say n), each cores features hyper threads (so the maximum CPU threads are 2*n)However, I just want to experiment the program on physical cores only (n cores), not with hyperthreading.is there any mechanism to do so? >Best Regards,Tan You may both want to look at the +sct option for the erl executable: http://erlang.org/doc/man/erl.html#+sct Regards, Fred. From olopierpa@REDACTED Mon Oct 17 15:57:29 2016 From: olopierpa@REDACTED (Pierpaolo Bernardi) Date: Mon, 17 Oct 2016 15:57:29 +0200 Subject: [erlang-questions] Tail call optimization In-Reply-To: <77857496-d429-f410-0a9c-4e877039f930@kaspersky.com> References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> <77857496-d429-f410-0a9c-4e877039f930@kaspersky.com> Message-ID: On Mon, Oct 17, 2016 at 2:47 PM, Salikhov Dinislam wrote: >> There is nothing about recursion in documentation. > The only doc that I've managed to find about the subject is the link from my > initial post > (http://erlang.org/doc/reference_manual/functions.html#id78464). > And it says about recursion: in sub-chapter's header, in sub-chapter itself > and in the example (all in all, everywhere). Is there another documentation > that you mean? Yes, the chapter sub-header has 'recursion' in the name. But the text says: "If the last expression of a function body is a function call, a tail recursive call is done." This in no way can be read as meaning that when the last expression of a function body is a function call then a tail call is not mandated. Maybe changing "tail recursive call" to "tail call" would remove an element of distraction and be more to the point though. From seriy.pr@REDACTED Mon Oct 17 15:59:07 2016 From: seriy.pr@REDACTED (=?UTF-8?B?0KHQtdGA0LPQtdC5INCf0YDQvtGF0L7RgNC+0LI=?=) Date: Mon, 17 Oct 2016 16:59:07 +0300 Subject: [erlang-questions] Unix Domain Sockets in v19 Message-ID: One more example of UNIX sockets in erl19 there: http://tryerl.seriyps.ru/#id=6ef3 (see unix_sockets/0 function) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 16:04:58 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 14:04:58 +0000 (UTC) Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> References: <1268859049.1367627.1476711623864@mail.yahoo.com> <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> Message-ID: <2041382197.1377091.1476713098517@mail.yahoo.com> > You may both want to look at the +sct option for the erl executable:?> http://erlang.org/doc/man/erl.html#+sct? This is the right answer. > erl +sbt ts +SP 50:50 This wont bind to the specific system cores. ?It will use all cores the OS allows. On Monday, October 17, 2016 9:51 AM, Fred Hebert wrote: On 10/17, Vans S wrote: >I am interested in this too. Only way I know of so far is to use taskset or equivalent. ?Ideally Erlang should bind each scheduler to each single cpu as speced by the topology. > >? ? 
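For anyone who wants to try the +sct route, here is a minimal sketch. The topology string and scheduler count are assumptions for a 2-core/4-thread box where logical CPUs 0 and 1 sit on two different physical cores; check erlang:system_info(cpu_topology) first, since the numbering differs between machines:

erl +sct L0-1c0-1 +sbt db +S2

+sct overrides the detected topology so that only logical CPUs 0 and 1 are visible, as cores 0 and 1, +sbt db binds the schedulers to that topology, and +S2 starts one scheduler per remaining core. Afterwards erlang:system_info(scheduler_bindings) shows where each scheduler ended up.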
On Monday, October 17, 2016 9:35 AM, Tan Duong wrote: > > > Hi everybody, >I recently get to experiment an Erlang program.My machine is a multicore CPUs system, which contains some physical cores (say n), each cores features hyper threads (so the maximum CPU threads are 2*n)However, I just want to experiment the program on physical cores only (n cores), not with hyperthreading.is there any mechanism to do so? >Best Regards,Tan You may both want to look at the +sct option for the erl executable: http://erlang.org/doc/man/erl.html#+sct Regards, Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raimo+erlang-questions@REDACTED Mon Oct 17 16:08:35 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Mon, 17 Oct 2016 16:08:35 +0200 Subject: [erlang-questions] Unix Domain Sockets in v19 In-Reply-To: References: Message-ID: <20161017140835.GA58490@erix.ericsson.se> On Sun, Oct 16, 2016 at 08:34:22PM +0300, Taras Shapovalov wrote: > Hey guys, > > I would like to try the experimental feature of v19 -- unix sockets, but > cannot get how it should be used. For example, if I send some request to > docker with gen_udp, then I will get {error,eprototype}: > > [taras@REDACTED ~]$ erl > Erlang/OTP 19 [erts-8.1] [source] [64-bit] [smp:4:4] [async-threads:10] > [kernel-poll:false] > > Eshell V8.1 (abort with ^G) > 1> {ok, Sockout} = gen_udp:open(0, [{ifaddr, {local, "/tmp/testsockout"}}]). > {ok,#Port<0.413>} > 2> gen_udp:send(Sockout, {local, "/var/run/docker.sock"}, 0, > "http:/containers/json"). > {error,eprototype} > 3> > > The socket is available and accessable by the user. Say, this works fine: > > curl --unix-socket /var/run/docker.sock http:/containers/json > > Any idea what is going wrong there? You send an UDP datagram to a TCP socket. Other than that your code looks just fine... find /usr/include -type f | xargs fgrep EPROTOTYPE : /usr/include/asm-generic/errno.h:#define EPROTOTYPE 91 /* Protocol wrong type for socket */ > > I will appreciate if someone points me to any documentation (I know the > final description of the feature is not ready for now, but maybe there is > some draft already exists?). > > Also do you know if httpc module supports the unix sockets since 19.0? If > yes, how to do the same with httpc? I should pass all options through to gen_tcp, so it is not impossible that it can handle the 'local' address family. Give it a try! > > Best regards, > > Taras -- / Raimo Niskanen, Erlang/OTP, Ericsson AB From Dinislam.Salikhov@REDACTED Mon Oct 17 16:14:41 2016 From: Dinislam.Salikhov@REDACTED (Salikhov Dinislam) Date: Mon, 17 Oct 2016 17:14:41 +0300 Subject: [erlang-questions] Tail call optimization In-Reply-To: References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> <77857496-d429-f410-0a9c-4e877039f930@kaspersky.com> Message-ID: <0d983a6b-f204-5226-cbad-1ce1d076a22e@kaspersky.com> Tail call != tail recursive call. The former is more general and includes the latter as a particular case. I'd change the wording as follows: *6.3 Tail function call* If the last expression of a function body is a function call, a *tail call optimization* is done. IMO, it would eliminate any ambiguity here. Anyway, thank you for clarification. Salikhov Dinislam On 10/17/2016 04:57 PM, Pierpaolo Bernardi wrote: > On Mon, Oct 17, 2016 at 2:47 PM, Salikhov Dinislam > wrote: >>> There is nothing about recursion in documentation. 
>> The only doc that I've managed to find about the subject is the link from my >> initial post >> (http://erlang.org/doc/reference_manual/functions.html#id78464). >> And it says about recursion: in sub-chapter's header, in sub-chapter itself >> and in the example (all in all, everywhere). Is there another documentation >> that you mean? > Yes, the chapter sub-header has 'recursion' in the name. > > But the text says: "If the last expression of a function body is a > function call, a tail recursive call is done." > > This in no way can be read as meaning that when the last expression of > a function body is a function call then a tail call is not mandated. > > Maybe changing "tail recursive call" to "tail call" would remove an > element of distraction and be more to the point though. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raimo+erlang-questions@REDACTED Mon Oct 17 16:22:26 2016 From: raimo+erlang-questions@REDACTED (Raimo Niskanen) Date: Mon, 17 Oct 2016 16:22:26 +0200 Subject: [erlang-questions] Tail call optimization In-Reply-To: References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> <77857496-d429-f410-0a9c-4e877039f930@kaspersky.com> Message-ID: <20161017142226.GB58490@erix.ericsson.se> On Mon, Oct 17, 2016 at 03:57:29PM +0200, Pierpaolo Bernardi wrote: > On Mon, Oct 17, 2016 at 2:47 PM, Salikhov Dinislam > wrote: > >> There is nothing about recursion in documentation. > > The only doc that I've managed to find about the subject is the link from my > > initial post > > (http://erlang.org/doc/reference_manual/functions.html#id78464). > > And it says about recursion: in sub-chapter's header, in sub-chapter itself > > and in the example (all in all, everywhere). Is there another documentation > > that you mean? > > Yes, the chapter sub-header has 'recursion' in the name. > > But the text says: "If the last expression of a function body is a > function call, a tail recursive call is done." > > This in no way can be read as meaning that when the last expression of > a function body is a function call then a tail call is not mandated. > > Maybe changing "tail recursive call" to "tail call" would remove an > element of distraction and be more to the point though. Yes! We should change that detail in the documentation. Recursion is not a prerequisite for tail call optimization. This is an implication of the fact that neither the compiler nor the VM, due to hot code loading and due to that modules are compiled independently, can depend on what a Module:Function call does (this time) so the tail call optimization has to be done from the caller's side. And it is hard to figure out a real use case where code depending on tail call optimization does not eventually call back to itself hence use recursion. So the tail call optimization is something the language depends hard on, and the documentation is confusing when using the word "recursive" in this context. It probably comes from the course material where it talks about "tail recursive" vs. "body recursive" calls, so this documentation probably just wanted to use a familiar (but slightly incorrect) term... 
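To make that concrete with a small example (a made-up module for illustration, not from the reference manual): neither of these functions is recursive on its own, they only tail-call each other, yet with last call optimization the whole chain runs in constant stack space.

-module(mutual_tail).
-export([even/1, odd/1]).

%% Illustration only: a pair of mutually tail-calling functions.
%% The final expression of every clause is a plain function call,
%% so the caller's stack frame is reused on each step.
even(0) -> true;
even(N) when N > 0 -> odd(N - 1).

odd(0) -> false;
odd(N) when N > 0 -> even(N - 1).

Calling mutual_tail:even(10000000) returns true without growing the stack by ten million frames, which it would have to do if these were ordinary (body) calls.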
-- / Raimo Niskanen, Erlang/OTP, Ericsson AB From dszoboszlay@REDACTED Mon Oct 17 16:33:16 2016 From: dszoboszlay@REDACTED (=?UTF-8?Q?D=C3=A1niel_Szoboszlay?=) Date: Mon, 17 Oct 2016 14:33:16 +0000 Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: <2041382197.1377091.1476713098517@mail.yahoo.com> References: <1268859049.1367627.1476711623864@mail.yahoo.com> <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> <2041382197.1377091.1476713098517@mail.yahoo.com> Message-ID: Well, I tested, and it does work as long as you have 2 thread/core: erl +sbt ts +SP 50:50 Erlang/OTP 17 Klarna-g48fc1a0 [erts-6.4.1.5] [source-48fc1a0] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false] Eshell V6.4.1.5 (abort with ^G) 1> erlang:system_info(cpu_topology). [{processor,[{core,[{thread,{logical,0}}, {thread,{logical,2}}]}, {core,[{thread,{logical,1}},{thread,{logical,3}}]}]}] 2> erlang:system_info(scheduler_bindings). {0,1} I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. On Mon, 17 Oct 2016 at 16:05 Vans S wrote: > > You may both want to look at the +sct option for the erl executable: > > http://erlang.org/doc/man/erl.html#+sct > > This is the right answer. > > > > erl +sbt ts +SP 50:50 > > This wont bind to the specific system cores. It will use all cores the OS > allows. > > > On Monday, October 17, 2016 9:51 AM, Fred Hebert wrote: > > > On 10/17, Vans S wrote: > >I am interested in this too. Only way I know of so far is to use taskset > or equivalent. Ideally Erlang should bind each scheduler to each single > cpu as speced by the topology. > > > > On Monday, October 17, 2016 9:35 AM, Tan Duong > wrote: > > > > > > Hi everybody, > >I recently get to experiment an Erlang program.My machine is a multicore > CPUs system, which contains some physical cores (say n), each cores > features hyper threads (so the maximum CPU threads are 2*n)However, I just > want to experiment the program on physical cores only (n cores), not with > hyperthreading.is there any mechanism to do so? > >Best Regards,Tan > > You may both want to look at the +sct option for the erl executable: > http://erlang.org/doc/man/erl.html#+sct > > > Regards, > > Fred. > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 16:38:53 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 14:38:53 +0000 (UTC) Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: References: <1268859049.1367627.1476711623864@mail.yahoo.com> <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> <2041382197.1377091.1476713098517@mail.yahoo.com> Message-ID: <2132938032.1416133.1476715133178@mail.yahoo.com> > I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. Now bind them to cpus 1 and 3. You got bound to 0 and 1 by default. ?Also erlang is not using the hyperthreaded cpus you have available. ? On Monday, October 17, 2016 10:33 AM, D?niel Szoboszlay wrote: Well, I tested, and it does work as long as you have 2 thread/core: erl +sbt ts +SP 50:50Erlang/OTP 17 Klarna-g48fc1a0 [erts-6.4.1.5] [source-48fc1a0] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false] Eshell V6.4.1.5 ?(abort with ^G)1> erlang:system_info(cpu_topology).[{processor,[{core,[{thread,{logical,0}},? ? ? ? ? ? ? ? ? ? 
{thread,{logical,2}}]},? ? ? ? ? ? ?{core,[{thread,{logical,1}},{thread,{logical,3}}]}]}]2> erlang:system_info(scheduler_bindings).{0,1} I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. On Mon, 17 Oct 2016 at 16:05 Vans S wrote: > You may both want to look at the +sct option for the erl executable:?> http://erlang.org/doc/man/erl.html#+sct? This is the right answer. > erl +sbt ts +SP 50:50 This wont bind to the specific system cores.? It will use all cores the OS allows. On Monday, October 17, 2016 9:51 AM, Fred Hebert wrote: On 10/17, Vans S wrote: >I am interested in this too. Only way I know of so far is to use taskset or equivalent.? Ideally Erlang should bind each scheduler to each single cpu as speced by the topology. > >? ? On Monday, October 17, 2016 9:35 AM, Tan Duong wrote: > > > Hi everybody, >I recently get to experiment an Erlang program.My machine is a multicore CPUs system, which contains some physical cores (say n), each cores features hyper threads (so the maximum CPU threads are 2*n)However, I just want to experiment the program on physical cores only (n cores), not with hyperthreading.is there any mechanism to do so? >Best Regards,Tan You may both want to look at the +sct option for the erl executable: http://erlang.org/doc/man/erl.html#+sct Regards, Fred. _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From dszoboszlay@REDACTED Mon Oct 17 16:55:37 2016 From: dszoboszlay@REDACTED (=?UTF-8?Q?D=C3=A1niel_Szoboszlay?=) Date: Mon, 17 Oct 2016 14:55:37 +0000 Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: <2132938032.1416133.1476715133178@mail.yahoo.com> References: <1268859049.1367627.1476711623864@mail.yahoo.com> <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> <2041382197.1377091.1476713098517@mail.yahoo.com> <2132938032.1416133.1476715133178@mail.yahoo.com> Message-ID: Yes, I don't use the hyperthreaded CPUs. But that was the point of the original question: how to disable hyperthreading for Erlang? If you want better control over which cores to use, you need to use +sct, I agree. On Mon, 17 Oct 2016 at 16:38 Vans S wrote: > > I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. > > Now bind them to cpus 1 and 3. > > You got bound to 0 and 1 by default. Also erlang is not using the > hyperthreaded cpus you have available. > > > On Monday, October 17, 2016 10:33 AM, D?niel Szoboszlay < > dszoboszlay@REDACTED> wrote: > > > Well, I tested, and it does work as long as you have 2 thread/core: > > erl +sbt ts +SP 50:50 > Erlang/OTP 17 Klarna-g48fc1a0 [erts-6.4.1.5] [source-48fc1a0] [64-bit] > [smp:2:2] [async-threads:10] [kernel-poll:false] > > Eshell V6.4.1.5 (abort with ^G) > 1> erlang:system_info(cpu_topology). > [{processor,[{core,[{thread,{logical,0}}, > {thread,{logical,2}}]}, > {core,[{thread,{logical,1}},{thread,{logical,3}}]}]}] > 2> erlang:system_info(scheduler_bindings). > {0,1} > > I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. > > On Mon, 17 Oct 2016 at 16:05 Vans S wrote: > > > You may both want to look at the +sct option for the erl executable: > > http://erlang.org/doc/man/erl.html#+sct > > This is the right answer. > > > > erl +sbt ts +SP 50:50 > > This wont bind to the specific system cores. It will use all cores the OS > allows. 
> > > On Monday, October 17, 2016 9:51 AM, Fred Hebert wrote: > > > On 10/17, Vans S wrote: > >I am interested in this too. Only way I know of so far is to use taskset > or equivalent. Ideally Erlang should bind each scheduler to each single > cpu as speced by the topology. > > > > On Monday, October 17, 2016 9:35 AM, Tan Duong > wrote: > > > > > > Hi everybody, > >I recently get to experiment an Erlang program.My machine is a multicore > CPUs system, which contains some physical cores (say n), each cores > features hyper threads (so the maximum CPU threads are 2*n)However, I just > want to experiment the program on physical cores only (n cores), not with > hyperthreading.is there any mechanism to do so? > >Best Regards,Tan > > You may both want to look at the +sct option for the erl executable: > http://erlang.org/doc/man/erl.html#+sct > > > Regards, > > Fred. > > > > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vans_163@REDACTED Mon Oct 17 17:11:27 2016 From: vans_163@REDACTED (Vans S) Date: Mon, 17 Oct 2016 15:11:27 +0000 (UTC) Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: References: <1268859049.1367627.1476711623864@mail.yahoo.com> <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> <2041382197.1377091.1476713098517@mail.yahoo.com> <2132938032.1416133.1476715133178@mail.yahoo.com> Message-ID: <1776186908.1457183.1476717087701@mail.yahoo.com> > Yes, I don't use the hyperthreaded CPUs. But that was the point of the original question: how to disable hyperthreading for Erlang?> If you want better control over which cores to use, you need to use?+sct, I agree. Your right my mistake. I misread the initial question thinking it said "how to choose the physical cores erlang runs on and to not run it on hyperthreaded cores". On Monday, October 17, 2016 10:55 AM, D?niel Szoboszlay wrote: Yes, I don't use the hyperthreaded CPUs. But that was the point of the original question: how to disable hyperthreading for Erlang?If you want better control over which cores to use, you need to use?+sct, I agree. On Mon, 17 Oct 2016 at 16:38 Vans S wrote: > I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. Now bind them to cpus 1 and 3. You got bound to 0 and 1 by default.? Also erlang is not using the hyperthreaded cpus you have available. ? On Monday, October 17, 2016 10:33 AM, D?niel Szoboszlay wrote: Well, I tested, and it does work as long as you have 2 thread/core: erl +sbt ts +SP 50:50Erlang/OTP 17 Klarna-g48fc1a0 [erts-6.4.1.5] [source-48fc1a0] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false] Eshell V6.4.1.5 ?(abort with ^G)1> erlang:system_info(cpu_topology).[{processor,[{core,[{thread,{logical,0}},? ? ? ? ? ? ? ? ? ? {thread,{logical,2}}]},? ? ? ? ? ? ?{core,[{thread,{logical,1}},{thread,{logical,3}}]}]}]2> erlang:system_info(scheduler_bindings).{0,1} I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. On Mon, 17 Oct 2016 at 16:05 Vans S wrote: > You may both want to look at the +sct option for the erl executable:?> http://erlang.org/doc/man/erl.html#+sct? This is the right answer. > erl +sbt ts +SP 50:50 This wont bind to the specific system cores.? It will use all cores the OS allows. On Monday, October 17, 2016 9:51 AM, Fred Hebert wrote: On 10/17, Vans S wrote: >I am interested in this too. 
Only way I know of so far is to use taskset or equivalent.? Ideally Erlang should bind each scheduler to each single cpu as speced by the topology. > >? ? On Monday, October 17, 2016 9:35 AM, Tan Duong wrote: > > > Hi everybody, >I recently get to experiment an Erlang program.My machine is a multicore CPUs system, which contains some physical cores (say n), each cores features hyper threads (so the maximum CPU threads are 2*n)However, I just want to experiment the program on physical cores only (n cores), not with hyperthreading.is there any mechanism to do so? >Best Regards,Tan You may both want to look at the +sct option for the erl executable: http://erlang.org/doc/man/erl.html#+sct Regards, Fred. _______________________________________________ erlang-questions mailing list erlang-questions@REDACTED http://erlang.org/mailman/listinfo/erlang-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From dn.nhattan@REDACTED Mon Oct 17 17:06:20 2016 From: dn.nhattan@REDACTED (Tan Duong) Date: Mon, 17 Oct 2016 17:06:20 +0200 Subject: [erlang-questions] How to setup Erlang to run on physical cores In-Reply-To: References: <1268859049.1367627.1476711623864@mail.yahoo.com> <20161017135134.GA18602@fhebert-ltm2.internal.salesforce.com> <2041382197.1377091.1476713098517@mail.yahoo.com> <2132938032.1416133.1476715133178@mail.yahoo.com> Message-ID: Hi Daniel, Thank you for answering me. I have tried as well on a Ubuntu Machine. And the scheduler bindings are as you described. I haven't run the programs though. I am planing to watch the CPU untilization via htop (also hwloc-ls) and possibly will let you know. On the other hand, the same command are not available for Mac OSx. There I had to change + to - prefixing sbt and SP. when I tried, it just return no information erl -sbt ts -SP 50:50 Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false] Eshell V7.0 (abort with ^G) 1> erlang:system_info(cpu_topology). undefined 2> erlang:system_info(scheduler_bindings). {unbound,unbound,unbound,unbound} Anyway, this is just a minor. I have heard that Erlang scheduler can not be bound to CPU on Mac. Best, On Mon, Oct 17, 2016 at 4:55 PM, D?niel Szoboszlay wrote: > Yes, I don't use the hyperthreaded CPUs. But that was the point of the > original question: how to disable hyperthreading for Erlang? > If you want better control over which cores to use, you need to use +sct, > I agree. > > On Mon, 17 Oct 2016 at 16:38 Vans S wrote: > >> > I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. >> >> Now bind them to cpus 1 and 3. >> >> You got bound to 0 and 1 by default. Also erlang is not using the >> hyperthreaded cpus you have available. >> >> >> On Monday, October 17, 2016 10:33 AM, D?niel Szoboszlay < >> dszoboszlay@REDACTED> wrote: >> >> >> Well, I tested, and it does work as long as you have 2 thread/core: >> >> erl +sbt ts +SP 50:50 >> Erlang/OTP 17 Klarna-g48fc1a0 [erts-6.4.1.5] [source-48fc1a0] [64-bit] >> [smp:2:2] [async-threads:10] [kernel-poll:false] >> >> Eshell V6.4.1.5 (abort with ^G) >> 1> erlang:system_info(cpu_topology). >> [{processor,[{core,[{thread,{logical,0}}, >> {thread,{logical,2}}]}, >> {core,[{thread,{logical,1}},{thread,{logical,3}}]}]}] >> 2> erlang:system_info(scheduler_bindings). >> {0,1} >> >> I have 2 schedulers bound to logical cores 0 & 1, exactly as intended. 
>> >> On Mon, 17 Oct 2016 at 16:05 Vans S wrote: >> >> > You may both want to look at the +sct option for the erl executable: >> > http://erlang.org/doc/man/erl.html#+sct >> >> This is the right answer. >> >> >> > erl +sbt ts +SP 50:50 >> >> This wont bind to the specific system cores. It will use all cores the >> OS allows. >> >> >> On Monday, October 17, 2016 9:51 AM, Fred Hebert >> wrote: >> >> >> On 10/17, Vans S wrote: >> >I am interested in this too. Only way I know of so far is to use taskset >> or equivalent. Ideally Erlang should bind each scheduler to each single >> cpu as speced by the topology. >> > >> > On Monday, October 17, 2016 9:35 AM, Tan Duong >> wrote: >> > >> > >> > Hi everybody, >> >I recently get to experiment an Erlang program.My machine is a multicore >> CPUs system, which contains some physical cores (say n), each cores >> features hyper threads (so the maximum CPU threads are 2*n)However, I just >> want to experiment the program on physical cores only (n cores), not with >> hyperthreading.is there any mechanism to do so? >> >Best Regards,Tan >> >> You may both want to look at the +sct option for the erl executable: >> http://erlang.org/doc/man/erl.html#+sct >> >> >> Regards, >> >> Fred. >> >> >> >> _______________________________________________ >> erlang-questions mailing list >> erlang-questions@REDACTED >> http://erlang.org/mailman/listinfo/erlang-questions >> >> >> >> > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtrlists@REDACTED Mon Oct 17 18:12:03 2016 From: rtrlists@REDACTED (Robert Raschke) Date: Mon, 17 Oct 2016 18:12:03 +0200 Subject: [erlang-questions] Port handlers and binary leaks? In-Reply-To: References: Message-ID: I find the easiest way to think about (large) binaries is through remembering that they are shared "objects". Any process that has a "pointer" into a binary like that, means the binary as a whole needs to stick around. Pattern matching on binaries creates such pointers. A common problem arises when you have one process creating such a binary, then taking it apart using pattern matching, and passing those "parts" on for further processing. All those parts now refer back to the original large binary, which therefore cannot get collected. In order to work around this "optimisation", the common approach is for the process taking apart the binary to make copies of the parts to pass along for further processing. Thus references into the original binary are avoided and it can get collected. Cheers, Robby On 17 Oct 2016 13:26, "Oliver Korpilla" wrote: > Hello. > > Recent reading got me concerned about leaking memory by having big refc > binaries. > > In our system we have permanent TCP and SCTP handlers that receive outside > messages, then forward it into the system for processing. > > For example, these events have ASN.1 payloads. Decoding is done in > throw-away processes... however, I'm concerned that the TCP and SCTP > handling processes might cause memory leaks because every payload buffer is > handled there first. > > I saw in "Erlang in Anger" that routers should only return where to route > to, not handle the message. But when handling sockets I don't see that > option? > > What is good practice here? Am "I" at risk? 
> > Thanks and best regards, > Oliver > _______________________________________________ > erlang-questions mailing list > erlang-questions@REDACTED > http://erlang.org/mailman/listinfo/erlang-questions > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erlang@REDACTED Mon Oct 17 19:17:27 2016 From: erlang@REDACTED (Joe Armstrong) Date: Mon, 17 Oct 2016 19:17:27 +0200 Subject: [erlang-questions] Tail call optimization In-Reply-To: <20161017142226.GB58490@erix.ericsson.se> References: <094fff82-86d0-b1cc-ee78-ce0cb92702c4@kaspersky.com> <261b0e6d-96fb-8d74-3ebf-4d9762a4fbf3@kaspersky.com> <77857496-d429-f410-0a9c-4e877039f930@kaspersky.com> <20161017142226.GB58490@erix.ericsson.se> Message-ID: I think there is a confusion between what is commonly called "tail recursion" and "last call optimization" - so to clarify this I'll try to explain exactly what happens. The correct name for the optimization used in Erlang is "last call optimization". Saying that something is "tail recursive" is a short-hand way of saying "the function is recursive" and that we only find pure function calls in the tail-positions of all branches of a function. Last call optimization is best understood by looking at how we compile code for a simple stack-based language. Suppose we have some code like this:

X -> call a
     call b
     call c

This is a kind of pseudo code; it just means that the function X calls three functions a, b and c (could be any programming language) On a conventional stack machine, this is compiled into something like this:

X: pushAddr 1
   goto a
1: pushAddr 2
   goto b
2: pushAddr 3
   goto c
3: ret

pushAddr pushes a return address onto the stack. goto